Foundations and nonprofits exist to make a better world. The racial equity conversation in the United States has led to a new round of questions about how they contribute to that better world. How much philanthropic capital is directed to communities of color? Does the leadership of the social sector reflect the diversity of our society?
These questions are essential—and difficult to answer. Right now, the social sector must make do with what we have and seek meaning in imperfect data. Accordingly, we wanted to take time to share the questions we are hearing at Candid on data and racial equity. Then, we wanted to explore what answers—what meaning—we can and can’t find in the data. With luck, both questions and answers can cast light upon the path ahead.
There are countless dimensions of identity: race, gender, age, sexual orientation, disability status, political beliefs, education level, socioeconomic status, and many more. In this document we will focus on the dimension of race, but much of what we share here is applicable to other dimensions.
Candid gets at least ten different types of questions about racial equity, which we can organize into three general categories: (1) Where does money go? (2) Who leads organizations? (3) What do nonprofits do?
Where does money go?
- What proportion of grant dollars goes to serve people of color?
- What proportion of grant dollars goes to address racial equity?
- What proportion of grant dollars goes to organizations led by people of color?
- What proportion of grant dollars goes to organizations in communities of color?
- What proportion of grant dollars goes to organizations engaged in advocacy and systems change?
Who leads organizations?
- What proportion of nonprofits are led by people of color?
- What proportion of foundations are led by people of color?
What do nonprofits do?
- What proportion of the population served by a nonprofit comprises people of color?
- What proportion of nonprofits works to address systemic racism?
- What proportion of nonprofits is engaged in advocacy and systems change?
It is worth emphasizing that every one of these questions is important, but they are not the same. They reflect different aspects of the struggle for justice. They often overlap in complex ways. For example, consider a grant to an organization deeply rooted in a Black neighborhood, led by Black people, that tackles the psychological consequences of racism through counseling. Under our (imperfect) definitions, that grant would be counted in the first four questions but not the fifth.
Each of these questions offers room for interpretation. Consider leadership. When someone asks if an organization is “led by people of color,” they could mean one of at least four things: (1) the CEO is a person of color; (2) the majority of the leadership team is made up of people of color; (3) most staff members are people of color; or (4) the majority of the board of directors comprises people of color. Every one of those is a relevant question for both nonprofits and foundations; each reflects organizational power in a different way.
Another tricky dimension is what it means to be “in communities of color.” Often, people mean this literally: the neighborhood where an organization is located. Racial bias often does play out as spatial bias, and geography can be a useful lens for thinking about community. But other times, communities are not easily defined on a map.
We also sometimes hear the phrase “in communities of color” unconsciously used as a proxy for “small.” There is a strong argument that small organizations are more likely to have an authentic understanding of the challenges faced in a community—but it is worth emphasizing that there are large organizations rooted in communities of color, too.
Grant intention, description, and impact
Over the last six decades, Candid has collected data on more than 17 million grants. In a recent piece, we outlined how we collect that data. Once we have the data, we categorize (or “code”) it. For this coding, we use our Philanthropy Classification System.
For many decades, our staff manually coded grants in our database. As philanthropy grew, it became obvious that we could manually code only a fraction of the grants made by all U.S. foundations (around 4 million per year). Accordingly, we built a set of algorithms that automatically code grants by subject area, population served, support strategy, and more. This coding is based on multiple sources of data—including data about recipient organizations and their communities—but the single most important input into those algorithms is funders’ descriptions of their grants.
One challenge we face is navigating the differences among funders’ unstated intentions, grant descriptions, and actual impact. To help clarify these differences, consider five hypothetical project grant descriptions:
- “to serve homeless African Americans”
- “for job training in East St. Louis”
- “for educational equity”
- “for work to address climate change”
- “for support”
These examples show how intention, description, and impact overlap and diverge. The intended beneficiary of the first grant is clear. Given the demographics of East St. Louis, it is likely—but not certain—that the majority of the beneficiaries of the second grant will be Black. The third grant is clearly working for equity, but it does not describe which dimension of equity—the program might, for example, focus on equity for people with disabilities. The fourth grant might have been made by a donor specifically motivated by the disproportionate impact of climate change on communities of color.
Our algorithms would code the first grant as serving “people of African descent.” The coding of the second, third, and fourth grants would depend on the data we have on the recipient organization; in all likelihood, we would not code these grants as for “people of African descent.”
We often see versions of the fifth example, “for support.” Often these are general operating support grants. Consider, for example, an operating support grant to an arts nonprofit in Chicago that produces art rooted in West African traditions and serves a predominantly Black audience. It would be entirely appropriate to categorize this grant as serving people of African descent. But in a case like this, our coding will depend on what information we have for the particular nonprofit.
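The coding behavior described above can be sketched in a few lines: look first for an explicit population designation in the grant description, and only if none is found fall back to what is known about the recipient organization. This is a minimal illustration, not Candid's actual system; the keyword list, the recipient lookup, and the organization identifier are all hypothetical.

```python
# Hypothetical sketch of description-first population coding with a
# recipient-data fallback. Keywords and org data are illustrative only.

EXPLICIT_KEYWORDS = {
    "african american": "people of African descent",
}

# Illustrative recipient-level data a coder might fall back on,
# e.g., an arts nonprofit known to serve a predominantly Black audience.
ORG_POPULATION_DATA = {
    "chicago-arts-org": "people of African descent",
}

def code_population(description, recipient_id=None):
    """Return a population code, preferring explicit description text."""
    text = description.lower()
    for keyword, code in EXPLICIT_KEYWORDS.items():
        if keyword in text:
            return code  # explicit designation in the grant description
    # No explicit signal: fall back to what we know about the recipient.
    return ORG_POPULATION_DATA.get(recipient_id)  # may be None

grants = [
    ("to serve homeless African Americans", None),
    ("for job training in East St. Louis", None),
    ("for educational equity", None),
    ("for support", "chicago-arts-org"),
]
for description, recipient in grants:
    print(description, "->", code_population(description, recipient))
```

Note how the sketch reproduces the pattern in the essay: only the first grant is coded from its description alone, the middle examples yield no population code, and the “for support” grant is coded only because of what is known about the recipient.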
Our colleague Seema Shah has described this as the distinction between “implicit” and “explicit” grantmaking, a distinction she draws in her analysis of philanthropic grantmaking to Black men and boys (see page 16).
At Candid we believe we are effectively capturing explicit grantmaking. Implicit grantmaking is harder to capture. In addition, we acknowledge that much grantmaking is motivated by a more general altruism and does not target a particular population group. Our data tells much of a complex story, but not all of it.
Again, each of these dimensions is important. Intention is a first step toward action. Descriptions are critical as foundations explain their work to the world. And impact is the reason we all do this work.
Putting grants data in context
Candid has 11.9 million grants coded by population focus. We consider this data a goldmine for the field, a glimpse into the way that resources are allocated for good.
We are, however, very aware of the challenges of placing this data in a broader context. It is difficult to compare our data to the universe of grants or even a given foundation’s portfolio. In particular, we would highlight two challenges: (1) donors think about giving in different ways; (2) the data is incomplete.
First, each donor has their own mental model for organizing their philanthropy. Some think about philanthropy through the lens of place—a neighborhood, a city, a state, a country, the globe. Others organize giving around population served—women, Vietnamese Americans, homeless transgender youth. Others structure their grantmaking around an issue: poverty or climate change or chamber music. Still others think about their giving in terms of institutions: a particular homeless shelter or a particular church or a particular university.
These frameworks represent a variety of ways of making sense of the world. And these frameworks intersect in complex ways. A donor might give to Spelman College because they care about education, or women, or Black people, or Atlanta … or simply because they care about Spelman College.
A second challenge is completeness. As explained in the previous section, not all grants get a population code—and those that do may not get a code on every dimension of identity. This is on purpose, because we seek to honestly reflect the state of practice. In some cases, a grant implicitly meant for a given population group is not described that way. In many more cases, a donor neither intends nor designates a population group.
Candid’s coding offers a powerful glimpse into the minds of donors and the work of philanthropy. But that does not mean it captures every dimension of giving.
The people of the social sector
So far, this essay has focused on the flow of money. What about the people who work in and volunteer for nonprofits? The nonprofit sector now employs more than 12 million people in the United States alone. Decisions about people within nonprofits matter not just to the sector but to the economy as a whole.
We have partial data on the people of the social sector: 23,276 nonprofits have added data on the demographics of their staffs and/or boards to their GuideStar profiles. We have already seen evidence that this data matters. For example, a focused effort to collect diversity data revealed significant racial disparities among environmental groups, contributing to a broader reassessment of race within the environmental community.
It is worth noting the ways that racial demographics reflect power in organizations. For example, in many cases, the percentage of people of color among frontline staff is much higher than on the board of directors.
Consideration of demographics within an organization also needs to take geographic context into account. A statewide education nonprofit in South Dakota might be unlikely to have any Black people on its board. But it would surely face criticism if it did not have any Native people.
Nonprofits exist to serve others. Accordingly, another key question is the demographics of the beneficiaries of nonprofits. Candid has limited data on this area and is working to gather more. Some 11,491 organizations have achieved a Platinum seal of transparency. To achieve a Platinum seal, a nonprofit must share data on its beneficiary group(s).
We also acknowledge that it is not always easy for nonprofits to define their beneficiary group. How should an organization working on climate change or electoral reform or street art describe its beneficiaries?
Ultimately, most donors give because they care about people. And the work of nonprofits is made possible by people. The human dimension of the social sector is critical—and just as complex as people themselves.
A path forward
Our role at Candid is to provide information to help people do good. We do not believe it is our job to make a pronouncement about the final meaning of the data. Indeed, we expect and encourage varying interpretations of the facts we provide.
With that said, we acknowledge that we are close to the data. Accordingly, we would like to offer four thoughts for the field.
First, we encourage continued exploration of our data. In this document, we’ve laid out some limitations of our data, but we believe those limitations should invite conversation. Philanthropy will increasingly confront difficult questions like those raised above—and it will need data to provide answers. For example, critics legitimately ask about how much philanthropy goes to elite institutions that often serve the already-privileged (universities, private hospitals, cultural institutions). Candid is not taking a position on these types of questions. But we absolutely believe that for society to have such a conversation, it should be rooted in data, even if that data is imperfect.
Second, we would like to suggest language for how to most accurately cite our data about population groups. It would be appropriate to say our data shows that a grant or group of grants is “explicitly designated for” a given population group. We would consider it less accurate to say that our data shows whether a percentage of total grants “goes to” that population group.
Third, going forward, in Candid’s own research we will avoid explicit comparisons of our coded data with population percentages. For example, consider this comparison: “X percent of giving is designated for Y group, but Y group makes up Z percent of the U.S. population.” Each clause in that sentence is coherent and important. But a direct comparison between the two clauses might confuse some readers. We do believe that population statistics can be highly relevant to conversations about the allocation of philanthropic dollars. In our own work, Candid will continue to use population data as general context, not explicit comparison.
Fourth, Candid sees an opportunity to make this data better. If the field is to explain itself—to justify its existence—to society, organizations need to take the time to describe their work with intention and clarity. Foundation grants require descriptions, and those descriptions can’t be afterthoughts. Nonprofits need to make sure they clearly explain their work. In both cases, Candid is working to offer a path to scale. More than 1,000 foundations already proactively share data about their grants; more than 100,000 nonprofits have updated their profiles. For the health of the social sector, more foundations and more nonprofits need to proactively share data about their work.
As a field, we can continue to improve our data. And we’re going to need to. Systemic racism has been with us for centuries. Perhaps now can be the moment when we accelerate our shared journey toward racial justice. Data can help us navigate the choices ahead.