April 4, 2019
By Dr Squirrel Main
In late 2018, Dr Squirrel Main, Evaluation & Research Manager at The Ian Potter Foundation, travelled to the US to attend the American Evaluation Association Annual Conference.
During this trip, Squirrel attended the Foundation Evaluation Directors’ Meeting, where she exchanged ideas with 20 peers in the US philanthropic sector. Read more about Squirrel’s learnings and reflections from this knowledge-sharing opportunity.
Late in 2018, I was fortunate enough to travel to the US and meet with a range of people working in the philanthropic sector. The main reason for my trip was to attend the American Evaluation Association Annual Conference in Cleveland, Ohio.
This trip was a truly informative and intellectually stimulating experience. I met several people who had been in roles similar to mine for 15–20 years. I also realised that The Ian Potter Foundation can hold its head high in the international ‘foundation evaluation’ arena. We are engaging in what many would consider ‘best practice’ and in fact our workshops and focus on Sustainable Development Goal outcomes were of interest to many of the foundations I visited.
The highlight of my trip was the Foundation Evaluation Directors’ meeting, held on the two days prior to the American Evaluation Association Annual Conference. There I exchanged ideas with 20 peers from the Ford Foundation, the Pew Charitable Trusts, the Moore Foundation and the like. They were a forward-thinking and encouraging group, comprising some of the sharpest minds in philanthropy, and I feel lucky that a group of us has agreed to continue the conversation through regular teleconferences.
I met with staff from a wide range of foundations and organisations and came away with some key learnings and trends, which I have summarised below.
In terms of reporting from grantees, the basic theme was ‘less is more’, with the proviso that a foundation should only collect data from grantees if it will actually use that data; many consider it unethical to collect information the foundation will not directly use. It is worth noting that over the past four years, The Ian Potter Foundation has shortened its final report from 17 items to seven.
The US foundations were also placing less emphasis on dashboards (commonly used for monitoring progress at the beginning of the 21st century), although their legacy was still present. Instead, foundation staff were consciously directing energy towards resourcing grantees and meeting their evolving needs. For example, to counter an over-reliance on survey data in reporting, vanguard foundations were investing more in grantees’ administrative and data-collection capabilities.
There was a large emphasis on learning, particularly via strategic questions created by staff and board for each program area. These learning questions are forward-facing, for example:
What will it take to broker the kinds of relationships with other foundations that this new strategy requires?
How can we continue to ensure that smaller organisations feel comfortable applying to The Ian Potter Foundation?
These are not retrospective questions detached from forward strategy. The purpose of considering such questions is to build evaluative thinking among board and staff, for example: What is the question? What will we do with the answer when we have it?
By the way, the people who taught me this—Julia Coffman and Tanya Beer from the Center for Evaluation Innovation (CEI)—are on tour here in Australia (in Brisbane, Sydney, Melbourne, and Adelaide) during April and May as part of Philanthropy Australia’s Thought Leadership Roadshow.
In the more established foundations (such as Hewlett and Annie E Casey), these learning questions included investigating implementation failures, which were more common than theory failures. Foundations also normalised failure: ‘If you’re not striking out sometimes, you’re not swinging for the fences.’ Indeed, in these respected, established foundations, failure was clearly accepted as part of grantmaking.
In larger foundations, scaling was a holy grail. Evidence-based programs scale successfully only around 12% of the time, so foundations were developing ways to identify a program’s ‘secret sauce’ (curriculum, people, dosage). These foundations were beginning to use implementation science to help scaffold grantees’ scaling efforts.
Another trend was a greater acceptance of diverse evaluation methodologies. Emergent evaluation of trickier approaches, such as capacity building and collaborative grantmaking, was still new within the field. As such, there was the beginning of a shift from summative to developmental evaluations, which assess a foundation’s approach to wicked, complex or emergent strategies (e.g. progress on preventing homelessness rather than a count of beds added to a shelter). To ensure that money had been put to good use, foundations were relying more on implementation markers: interim outcomes that look at grantee capacity, grant outputs and the overall policy environment.
There was also an interesting tension between new and old philanthropy: ‘tech-based’ foundations pushed older foundations to be more ambitious, but the newer foundations lacked the older ones’ depth of experience in working with government.
Overall, many foundations were focused on revising their big-picture strategy, and staff cautioned that board and staff buy-in can be difficult amid the ‘changing winds of strategy’. Despite this caution, it was exciting to explore a landscape of new possibilities. I’ve returned brimming with ideas and welcome conversations with our Australian colleagues in philanthropy as we all move forward in our strategic learning.