Impact Evaluation at Cartier Philanthropy

Assessing what works, where, and why across 22 development partners in 40 countries

March 2024 · Soumita Roy · 10 min read
Cartier Philanthropy, Geneva


Project Overview: As Programme Analyst at Cartier Philanthropy, I conducted impact evaluation across a portfolio of 22 non-profit partners operating in 40 countries. I authored 22 comprehensive due diligence reports, built analytical dashboards in R and Tableau, constructed geographic visualisations in ArcGIS, and tracked the financial health and programme performance of interventions spanning education, health, agriculture, and sustainable livelihoods.

My Contribution

  • Authored 22 Due Diligence Summary reports (25-30 pages each), covering organisational review, impact assessment, financial health, and risk analysis for each partner
  • Used R for data analysis and Tableau for interactive visualisations, refining indicator selection to strengthen programme evaluations across the portfolio
  • Built geographic maps of partner operations using ArcGIS, and tracked organisational evolution over time using the Wayback Machine for longitudinal web analysis
  • Tracked financial sustainability of partners' programmes across 40 countries, assessing donor diversification, unit economics, and budget-to-expenditure ratios
  • Performed mixed-methods research, triangulating case studies, RCTs, annual reports, and programme officer interviews to produce funding recommendations
22 due diligence reports authored · 40 countries covered · 5 thematic areas · 60+ KPIs tracked across portfolio

The context

Cartier Philanthropy is the philanthropic arm of the Richemont group, channelling funding to high-impact development programmes across Sub-Saharan Africa, South Asia, and the Caribbean. Unlike many foundations that write cheques and wait for annual reports, Cartier takes an unusually hands-on approach: every partner undergoes rigorous due diligence before funding, and every programme is tracked through structured evaluation throughout the grant cycle.

I joined as Programme Analyst at a moment when the portfolio was expanding into new geographies and thematic areas. My mandate was twofold. First, to conduct deep-dive due diligence on prospective and existing partners, assessing everything from their theory of change to their unit economics. Second, to build the analytical infrastructure (dashboards and indicator frameworks) that would allow the team to track programme performance in near real time.

Due diligence and partner evaluation

The core of my work was preparing comprehensive Due Diligence Summaries (DDS) for each partner. These were not perfunctory checklists. A single report could run 25-30 pages, covering the organisation's history and governance, its theory of change, programme model, beneficiary selection process, KPI tracking record, financial health, unit economics, external evaluations, and risk assessment. I authored 22 such reports over six months, covering partners across Haiti, India, Mali, Senegal, Tanzania, Zambia, Sierra Leone, Cambodia, and several other countries.

Each report required triangulating information from multiple sources: partner-submitted documents, annual reports, published RCTs, government data, and conversations with programme officers. The goal was to give the foundation's decision-makers a clear-eyed view of whether a programme was delivering on its promises, and at what cost per beneficiary.

Cost per beneficiary across the portfolio
Unit economics vary by orders of magnitude across programme types (USD, latest available year)
Source: Partner DDS reports authored during tenure, FY22-23 data. Unit costs are not directly comparable across programme types.
Reading: The roughly 1,800x range between DMI's mass-media campaigns ($0.11 per person reached) and myAgro's farming packages ($196 per farmer) illustrates why cross-programme comparison requires careful contextualisation. A dollar spent on radio spots in Tanzania reaches millions; a dollar on greenhouse farming in India transforms one household. Part of my job was helping programme officers navigate this complexity rather than defaulting to crude cost-per-head rankings.

Data analysis and visualisation

Beyond narrative assessment, much of the work was quantitative. I used R to clean and analyse programme data submitted by partners, and Tableau to build interactive dashboards that the team could use to monitor KPIs across the portfolio. This involved designing indicator frameworks that were comparable across very different programme types, from micro-lending in Haiti to school health in Zambia.

A recurring challenge was indicator selection: partners often tracked dozens of metrics, not all equally informative. Part of my contribution was helping programme officers identify the two or three indicators that most meaningfully captured whether a programme was on track, and building the visual infrastructure to make those indicators legible at a glance.

For several partners, I also constructed geographic visualisations using ArcGIS to map the spatial distribution of programme activities. For Imagine Worldwide, for example, I built maps showing the rollout of tablet-based learning across schools in Sierra Leone, overlaid with district-level education statistics. These maps gave programme officers a spatial dimension to complement the tabular KPI data, making it easier to spot geographic gaps in coverage or identify clusters where implementation was particularly strong.

Portfolio allocation by thematic area
Cumulative CP funding commitments across evaluated partners (USD)
Source: Partner DDS reports. Funding figures are cumulative across grant cycles for evaluated partners only.

Financial sustainability tracking

A less visible but equally important strand of the work was assessing the financial health of partner organisations. This meant reviewing audited financials, tracking donor diversification, and flagging cases where a partner's budget had outpaced its income, or where reliance on a single funder created vulnerability. In a portfolio spanning 40 countries, from post-conflict Haiti to fast-growing India, the range of financial risks was considerable.

Scale vs. cost-effectiveness
Beneficiaries reached vs. cost per beneficiary (USD, log scale). Bubble size proportional to CP funding.
Source: Partner DDS reports, FY22-23 data. Beneficiary counts reflect latest available annual figures.
Reading: DMI and Healthy Learners occupy the high-reach, low-cost quadrant, but their delivery models (mass media and school-based health) are inherently cheaper to scale than asset-transfer programmes like Fonkoze or myAgro. The chart is useful not for ranking partners but for understanding which delivery models naturally compress unit costs at scale, and which face structural cost floors.

Inside a due diligence report

Each Due Diligence Summary followed a standardised six-section structure, designed to give the foundation's programme officers and board a complete picture of a partner's strengths, weaknesses, and trajectory. The structure is worth describing because it shaped how I approached every analytical task: each section demanded a different mix of quantitative and qualitative methods, and the final recommendation had to synthesise across all six.

Section 1
The Basics
Mission, history, governance, theory of change, programme model, beneficiary selection, and network positioning. Validated against public filings and third-party sources.
Section 2
Achieving Impact
Primary KPIs with target vs. actual tracking, baseline-midline-endline results, external evaluations (RCTs, quasi-experimental designs), and cost-effectiveness benchmarking.
Section 3
Financial Situation
Budget vs. expenditure analysis, donor diversification review, unit cost trends, top funders, and long-term financial sustainability signals.
Section 4
Organisation Capacity
Leadership assessment, staffing (headcount, gender ratio, local vs. expat), turnover rates, technical capacities, growth strategy, and systems infrastructure.
Section 5
Partnership Analysis
Rationale for CP support, highlights from site visits and meetings, overall performance assessment on impact, scale-up potential, strategy, and cost-effectiveness.
Section 6
Risks & Mitigants
Operational, political, and contextual risks. Programme fidelity at scale. Reputational risks for Cartier. Follow-up items and conditions for continued funding.

The analytical toolkit varied by section. Section 2 (Achieving Impact) was the most data-intensive: I built R scripts to ingest partner KPI data, compute year-on-year trends, and flag deviations from targets. For partners with published RCTs, like Educate Girls (IDInsight, 2015-18) and DMI (Burkina Faso child survival trial), I reviewed the evaluation methodology and assessed whether the reported treatment effects were likely to hold at the partner's current scale and in its current operating context.
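The actual scripts for this step were written in R; the logic they implemented can be sketched in Python. The sketch below uses hypothetical partner data and made-up column names (`actual`, `target`, `fy`) and an illustrative 10% shortfall threshold, none of which come from the original reports; it shows the pattern of computing year-on-year trends within each partner-indicator series and flagging deviations from targets.

```python
import pandas as pd

# Hypothetical KPI submissions: one row per partner, indicator, and fiscal year.
kpi = pd.DataFrame({
    "partner":   ["EG", "EG", "EG", "DMI", "DMI", "DMI"],
    "indicator": ["girls_enrolled"] * 3 + ["people_reached_m"] * 3,
    "fy":        [2021, 2022, 2023, 2021, 2022, 2023],
    "actual":    [61000, 74000, 76000, 62.0, 78.0, 91.0],
    "target":    [60000, 70000, 88000, 60.0, 75.0, 80.0],
})

kpi = kpi.sort_values(["partner", "indicator", "fy"])
# Year-on-year growth of the actuals within each partner-indicator series.
kpi["yoy_growth"] = kpi.groupby(["partner", "indicator"])["actual"].pct_change()
# Performance ratio vs. target; flag any shortfall greater than 10% (illustrative cut-off).
kpi["vs_target"] = kpi["actual"] / kpi["target"]
kpi["flag"] = kpi["vs_target"] < 0.90

print(kpi.loc[kpi["flag"], ["partner", "indicator", "fy", "vs_target"]])
```

Flagged rows were the starting point for a conversation with the programme officer, not a verdict in themselves.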

Section 3 (Financial Situation) required a different kind of analysis. I tracked unit cost trajectories over multiple fiscal years, computed donor concentration ratios, and compared budget projections against actuals to identify organisations that were growing faster than their revenue base could support. For some partners, I used the Wayback Machine to reconstruct the historical evolution of their public-facing claims, a form of longitudinal web analysis that helped assess whether an organisation's narrative was consistent over time or had shifted to match funder expectations.
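The donor concentration check can be illustrated with a short sketch. The funder names and amounts below are entirely hypothetical, as is the use of the Herfindahl-Hirschman index as a second signal alongside the top-funder share; the point is only to show the kind of ratio the financial review computed.

```python
# Hypothetical donor income breakdown for one partner (USD, one fiscal year).
donors = {
    "Funder A": 2_400_000,
    "Funder B": 900_000,
    "Funder C": 450_000,
    "Other (small grants)": 250_000,
}

total = sum(donors.values())
shares = {name: amount / total for name, amount in donors.items()}

# Two common concentration signals: single-funder dependence and the
# Herfindahl-Hirschman index (sum of squared income shares).
top_share = max(shares.values())
hhi = sum(s ** 2 for s in shares.values())

print(f"Top funder share: {top_share:.0%}, HHI: {hhi:.2f}")
```

In practice, a high top-funder share prompted a closer look at what would happen to the programme if that single relationship ended.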

Analytical intensity by DDS section
Approximate share of analytical effort per report section, based on time and data volume
Source: Author's estimate based on typical report production workflow.
The analytical stack in practice: A typical DDS drew on R for KPI trend analysis and anomaly detection, Tableau for interactive dashboard prototyping, ArcGIS for geographic mapping of programme coverage, Excel for financial modelling and donor diversification analysis, and the Wayback Machine for longitudinal tracking of partner websites. The narrative synthesis, theory of change validation, and risk assessment sections required qualitative judgement informed by programme officer interviews and site visit reports.

Selected partners

Each partner operated in a different context, addressed a different problem, and measured success differently. What they shared was a commitment to evidence-based intervention and a willingness to be scrutinised. Below is a selection of the organisations for which I authored full due diligence reports, chosen to illustrate the range of programme types, geographies, and evaluation challenges in the portfolio.

🇭🇹

Fonkoze

Haiti · Sustainable Livelihoods

Ultra-poverty graduation programme for Haitian women using BRAC's model. 18-month accompaniment with asset transfers, health services, and case management. 95% graduation rate in 2022.

🇮🇳

Educate Girls

India · Education

Community volunteer model enrolling out-of-school girls in rural Rajasthan, Madhya Pradesh, and Uttar Pradesh. Ran the world's first education Development Impact Bond. IDInsight RCT showed 28% higher learning gains.

🇹🇿

Development Media International

Tanzania · Health & Behaviour Change

Evidence-based mass-media campaigns reaching 91 million people. Radio and TV spots on child survival, family planning, and nutrition. RCT-validated in Burkina Faso.

🇲🇱

myAgro

Mali, Senegal · Agriculture

Mobile layaway savings platform for smallholder farmers. Prepaid seed and fertiliser packages with training. 156% yield increase reported among 100,000+ farmers.

🇮🇳

Kheyti

India · Climate-Smart Agriculture

Affordable modular greenhouse ("Greenhouse-in-a-Box") for smallholder farmers. 7x increase in food production, 50x improvement in water efficiency. Targets income doubling.

🇿🇲

Healthy Learners

Zambia · School-Based Health

Training teachers as community health workers to reach 763,000 students. Reduced stunting by 52%, cut morbidity by 38%, at $1.62 per child per year.

🇸🇱

Imagine Worldwide

Sierra Leone · Ed-Tech

Solar-powered tablets for autonomous literacy and numeracy learning. No internet required. Targeting less than $5 per child at scale across six African countries.

🇿🇲

iDE

Zambia, Cambodia · Agri & WASH

Farm Business Advisors connecting remote farmers to markets in Zambia. Sanitation marketing driving toilet coverage from 23% to 85% in Cambodia since 2009.

Selected partner KPIs: targets vs. actuals
Performance ratio (actual / target) for key indicators, FY22-23
Source: Partner annual reports and DDS reviews. Performance ratios normalised to targets = 100%.
Reading: Most partners met or exceeded targets on primary KPIs, but the picture is more nuanced than the bars suggest. Educate Girls' 107% reflects strong enrolment figures, yet cost per enrolled girl rose from $59 to $73 year-on-year as operations moved into harder-to-reach districts. DMI's 114% is impressive in reach terms, but reach is a leading indicator, not proof of behaviour change. The DDS process was designed to surface exactly these kinds of distinctions.

A closer look: evaluating Educate Girls

To give a sense of what this work looked like in practice, consider the due diligence on Educate Girls, one of the portfolio's largest partnerships at $3.8 million across three grant cycles. EG operates in some of India's most educationally disadvantaged districts, deploying community volunteers to identify and re-enrol out-of-school girls.

The evaluation had to weigh several competing signals. On one hand, EG's headline numbers were impressive: 245,000 girls enrolled, a successful Development Impact Bond, and IDInsight's RCT showing 28% higher learning gains in programme schools. On the other, certain KPI trajectories warranted scrutiny. Cost per enrolled girl had risen from $59 in FY21 to $100 in FY24 (estimated), partly because saturation in early districts meant reaching increasingly remote, harder-to-serve populations. The pandemic-era community learning camps (Camp Vidya) were discontinued in FY23-24 after limited impact on actual school enrolment.

The DDS needed to make a judgement: were rising costs a sign of diminishing returns, or an expected feature of scale-up into harder geographies? The answer, arrived at through a mix of quantitative analysis and qualitative triangulation, was largely the latter, but with the recommendation that EG's upcoming expansion into Uttar Pradesh be monitored closely, given the state's weaker administrative infrastructure and the consequent risk to programme fidelity.

Educate Girls: cost per enrolled girl over time
Rising unit costs reflect geographic expansion, not diminishing impact (USD)
Source: Educate Girls DDS, annual reports. FY24 is an estimate based on budget projections.
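The trajectory from $59 (FY21) to an estimated $100 (FY24) implies a compound annual growth rate in unit cost of roughly 19%, which is the kind of derived figure the DDS used to frame the "expansion vs. diminishing returns" question. A minimal sketch of that arithmetic, using only the figures stated above:

```python
# Implied annual growth in EG's cost per enrolled girl, FY21 -> FY24 (est.).
start_cost, end_cost, years = 59.0, 100.0, 3
cagr = (end_cost / start_cost) ** (1 / years) - 1
print(f"Implied unit-cost CAGR: {cagr:.1%}")
```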
The art of due diligence is not in cataloguing what an organisation reports, but in identifying the two or three things that might not be true, and stress-testing them.

Reflections

The Cartier Philanthropy experience taught me things that a PhD in economics does not. Econometrics trains you to estimate treatment effects; impact evaluation in the field trains you to decide which treatment effects are worth estimating in the first place. Working across 22 partners forced me to develop judgement about what constitutes useful evidence versus performative measurement, a distinction that is surprisingly underappreciated in development practice.

It also sharpened my ability to translate quantitative findings for non-technical audiences. Programme officers at Cartier did not want p-values; they wanted to know whether Fonkoze's graduation rate in an increasingly insecure Haiti was likely to hold, and what would happen to myAgro's unit economics if West African rainfall patterns continued to shift. Learning to answer those questions clearly, without hedging behind confidence intervals, was a skill I continue to draw on.

Perhaps most valuably, the role gave me a panoramic view of how development interventions actually work on the ground, across sectors, across continents, and across scales. That breadth of exposure, from a $400,000 grant to a Haitian NGO to a $3.8 million multi-cycle partnership with an Indian education powerhouse, has informed how I think about innovation policy, impact measurement, and the gap between what works in an RCT and what works in practice.

From impact evaluation to research

The experience of evaluating theories of change across 22 organisations directly shaped how I approach my PhD research on innovation ecosystems. The habits of triangulation and scepticism carry over.

Analytical range

In six months I worked with RCT data, logframes, financial statements, qualitative interviews, ArcGIS maps, and Tableau dashboards. That breadth of method is rare in a single role and hard to replicate in a classroom.

Communicating to decision-makers

Every DDS report ended with a funding recommendation. Writing for a decision rather than for a journal taught me a different kind of rigour: one where clarity and judgement matter more than caveats.

Global perspective

Evaluating programmes across Sub-Saharan Africa, South Asia, and the Caribbean gave me first-hand appreciation for how context shapes implementation. No model travels without adaptation.

Tools and methods

Analytical methods

Due Diligence · Impact Evaluation · Theory of Change · Logframe Analysis · KPI Design · Financial Analysis · Mixed Methods · Cost-Effectiveness Analysis

Technical tools

R · Tableau · ArcGIS · Excel · Wayback Machine

Professional skills

Donor Reporting · Stakeholder Management · Programme Officer Liaison