Cultural philanthropy: Influencing the culture to improve the world

Disclosure: I personally know some of the people discussed in this article.

When you decide to use your money to do good, you’re presented with a plethora of options. You might purchase malaria nets, or give direct cash transfers to the world’s poorest. You could fund research on AI safety, or lobby for new climate change policies. Or you could walk down the street and give some money to a homeless person in your community.

But there is another kind of philanthropy—one that is much less common, but growing in importance. It’s based on the idea that the culture we live in influences the decisions of everyday people, entrepreneurs and policymakers. Recognising that influence, this kind of philanthropy wants to change that culture.

I’m going to call this “cultural philanthropy”. It is very distinct from other forms of supporting culture, e.g. building a new wing at the Met or paying for stage-hands’ balaclavas at the Royal Opera House (no, seriously). That kind of philanthropy is done out of a general love for the arts (and, often, a desire for status). “Cultural philanthropy”, as I use the phrase, is specifically defined by a clear theory of change: the idea is to use culture to disseminate ideas that will go on to change the world.

Stripe and its founders, the Collison brothers, are prominent examples of cultural philanthropists. Stripe Press, a publishing division of the payments giant, produces books, articles, podcasts and films with a view to improving the world. Whether exploring why technological development has stalled or advocating for better heating systems, Stripe Press is laser-focused on diagnosing the world’s problems and offering solutions. Neither Stripe nor the Collisons will directly make money from this (though they do hope that by increasing “the GDP of the internet”, Stripe will be able to take a cut). The primary motivation is altruistic. The hope, it seems, is that distributing these ideas might make people think about things differently, encouraging them to make better decisions, which will in turn put humanity on a better path.

Sam Bankman-Fried, a crypto billionaire and effective altruist, is another advocate for cultural philanthropy. His FTX Future Fund is explicit about it: its website claims that “books and media are enormously important for exploring ideas, shaping the culture, and focusing society’s attention on specific issues.” Unlike the Collisons, who are doing much of this culture-changing-work in-house, SBF takes a more distributed approach: he’s funding a $500,000 blogging prize for “effective ideas”. This is similar to some of the work Dustin Moskovitz and Cari Tuna, two other effective altruists, fund through Open Philanthropy, their foundation. Open Philanthropy has recently given grants to YouTubers promoting effective altruist ideas. And while it appears to be picking up steam, cultural philanthropy isn’t entirely new: the Rockefeller Foundation’s previous support for Future Perfect and the Gates Foundation’s support for The Guardian are older examples (though still quite recent!).

You might draw parallels between this and the billionaires who philanthropically own news outlets, such as Jeff Bezos with The Washington Post, Marc Benioff with Time and Laurene Powell Jobs with The Atlantic. But this is different. For one, those organisations are still run somewhat like a business—profit is not an afterthought. More importantly, Bezos, Benioff and Powell Jobs all seem interested in the idea of journalism in and of itself, as being key to a healthy democracy. They are less interested in the specific ideas those publications disseminate, and they don’t have nearly as much editorial control over them.1

Nor is it quite like the Murdochian approach to media. Rupert Murdoch’s outlets are certainly no strangers to advocacy, but I don’t think they promote ideas because Murdoch wants the world to be a certain way. He instead chases audiences, profit and power for their own sake. (The Sun switching from supporting the Tories to Labour and back again is a good example of this, as is his reluctance to back Trump at the beginning of his 2016 campaign. Murdoch just wants to back winners.)

If there is an analog to the Collison-SBF-Moskovitz approach, it is billionaire funding of think tanks. In both cases, the hope is to spread valuable ideas. But rather than doing all the work in house and focusing narrowly on who might receive those ideas, the new approach is more bottom-up: it finds ideas wherever they arise and hopes they take root in the culture.2 (It is not a coincidence, I think, that two of the people leading Stripe Press’s Works In Progress are former think-tankers; nor that the Collisons, SBF and Open Philanthropy all also fund the new Institute For Progress think tank).

The strategy has a plausible, if uncertain, theory of change. Matthew Yglesias summed it up quite well while discussing Future, a much-hyped online publication from Andreessen Horowitz:

“I think to make pro-tech, pro-markets, pro-innovation sustainable, you need a public culture that reflects those values. That means publications that propound them.”

Like many of the philanthropists funding this work, I have a pro-tech, pro-markets and pro-innovation worldview—which means I view this sort of cultural philanthropy as something that could be really good for the world. But, as with all things, views differ. Reasonable people might dislike the views of SBF, Moskovitz and the Collisons. And some people funding similar projects have very different views: most notably Peter Thiel, who promotes views I find repellent (and, in the case of René Girard’s mimetic theory, just plain weird). And while I believe most of these people—including Thiel!—have good intentions and genuinely want to make the world better, that will not always be the case. It is entirely plausible that people will masquerade as cultural philanthropists while actually spreading ideas that serve only their interests.

That isn’t necessarily a problem: people ought to be allowed to back whatever ideas they want. But we need to emphasise transparency. As money pours into cultural philanthropy, certain ideas will gain traction that otherwise wouldn’t have. Yet without transparency, the average person—even the average lawmaker—might not know that the only reason they’re suddenly hearing about a certain policy is because a couple of Silicon Valley billionaires wanted it to be heard. And if they did know that, maybe they’d think about it a bit more critically. While this problem is important, it is by no means a dealbreaker. Something as simple as clear, prominent disclosures on the bottom of each philanthropically-funded article would help, I think—as would clear statements from the philanthropists on what they are doing, and why they’re doing it.3

But transparency aside, I am broadly optimistic about the potential of cultural philanthropy. As FTX notes, culture plays a key role in how both the ruling class and the masses understand the world, and subtly shifting it in a more positive direction could have significant gains. A world in which “cheems mindset” is common parlance and serious consideration is given to the benefits of frontier technologies like artificial wombs strikes me as a very appealing one. The cultural philanthropists might help us get there.

Footnotes

1. Elon Musk’s Twitter acquisition fits in this category, too: if his stated motivations are true, he is buying the platform because he thinks it is crucial to a modern democracy, rather than because he wants to promote any specific ideas. (There is some evidence, though, that he is in fact buying it to promote more right-wing ideas, which would push it in the direction of cultural philanthropy.)

2. When I talk about the “culture”, I don’t necessarily mean popular culture. All of these philanthropists seem particularly concerned with shaping, for lack of a better term, “elite” culture—policymakers, entrepreneurs and journalists who are in a position to have a potentially large impact. But they don’t limit themselves to targeting these people, and there’s an understanding that getting mass adoption of these ideas would help things along (it’s just much harder).

3. The Blogging Prize complicates things, because people are writing with the hope of receiving funding, rather than receiving funding before writing. I also have a suspicion that some people are going to write and publish takes they don’t fully believe, purely in the hope of winning the prize. Given the prominence of the prize and the people backing/paying attention to it, this could have a fairly large distorting effect on the “marketplace of ideas”, as it were. A potential solution might be for the Prize to require all entrants to put e.g. “This blog is an entrant for the Blogging Prize, funded by the FTX Future Fund” on each post, though that’s a little crude and a pain to enforce.

13 musings from EA Global

  1. An awful lot of people are interested in effective altruism.
  2. A lot of people want to use journalism as a tool for magnifying EA’s impact.
  3. A lot of EAs (including very senior ones) are absolutely terrified of journalists and think that shutting them out is a good thing to do.
  4. There are more women in EA than I realised/expected, though they are still underrepresented.
  5. There is a lot of interesting work going on at the intersection of agriculture / climate resiliency / global development / existential risk reduction.
  6. The Barbican is a fantastic conference venue.
  7. A sizeable number of people worry that effective altruism is too dominated by discussion of AI safety.
  8. There are a lot of very bright and motivated students in EA. They are much better at networking than I ever was.
  9. We are very lucky that Kremer et al did a meta-analysis of water treatment, despite GiveWell previously deeming it to be cost ineffective. We need to figure out how to encourage more of this kind of work.
  10. Despite the large amounts of attention longtermism gets, the bulk of EA money in the next few years is expected to fund work in global health and development.
  11. Vegan food, when prepared well, is very good. Meat substitutes have got a lot better in the last couple of years, but vegan food that doesn’t try to imitate meat (which is often food inspired by Asian cooking) is always better.
  12. EA needs to adopt a “startup mindset” of aiming for high-value outcomes, even if they are unlikely to be achieved. It needs to get very comfortable with the idea of failure.
  13. Will MacAskill is absolutely ripped.

What is Bayes’ Theorem and Bayesian reasoning?

It’s a helpful way to teach yourself how to react appropriately to new information.

Introduction

Let’s say you strongly believe something to be true. For example, I might have a pile of 100 coins, and I tell you that 99 of them are fair but one is rigged. When you randomly draw a coin from the pile, you’re pretty sure the coin isn’t rigged — you’re 99% sure, in fact. You decide to flip it a few times anyway. The first time it lands on heads. The second time? Heads. The third time? Heads again. And the fourth time? Heads!

There are a few different ways you could process this information.

  1. You could ignore what you’ve just seen. Everything you knew before flipping the coin is still valid, so you still think there’s a 99% chance of it being a fair coin — the four heads in a row is just a fluke.
  2. You could decide definitively that the coin is rigged. If the coin was fair, the odds of seeing what you’ve just seen are very low: 50% * 50% * 50% * 50% = 6.25%. There’s a much higher chance of seeing four heads in a row if the coin is rigged, so you’re now sure that the coin is rigged.
  3. You could take the middle ground. Everything you knew before flipping the coin is still valid — there was a much higher chance of you picking a fair coin than a rigged one — so you’re still pretty sure the coin is fair. But you acknowledge that what you just saw would be very unlikely if the coin were fair, so you’re a bit less confident in its fairness than before — instead of being 99% confident, you’re now only, say, 86% confident.

Option 3 strikes me, and many other people, as the most sensible way of thinking. It’s less of an all-or-nothing approach, considering both what you knew beforehand and what you’ve just learnt. Helpfully, it’s also pretty easy to apply in day-to-day life. It’s called Bayesian reasoning.

Why is Bayesian reasoning useful?

We have a tendency to either under- or over-react to new pieces of evidence. The example above, where one person ignored the evidence altogether because of their prior beliefs and another let the evidence completely overrule their prior beliefs, was extreme. But without realising, you’ve probably done something similar before.

Consider the last time you read about a murder. I’d bet it made you a fair bit more scared of being murdered, even though statistically one additional murder doesn’t meaningfully change the overall likelihood of being murdered. Being more scared is a completely human reaction, but sometimes it’s helpful to step back and remember what your beliefs were before you learnt about the new event.

Similarly, we all know people who are entrenched in their views and refuse to change their minds, even when presented with important new evidence. Being open to having your mind changed is really important.

Approaching situations with a Bayesian mindset is a good way to overcome this. It helps you ensure that your beliefs change with the evidence, and that your beliefs are grounded in everything you know, not just the thing that’s come to mind most recently.

Decoding Bayesian jargon

Bayesians — people who employ Bayesian reasoning — like to use jargon to simplify how they talk about things. So before we jump in any further, let’s learn the jargon.

When we approach a situation, we often go in with a prior belief, often shortened to just “prior”. In the above example, your prior was your 99% certainty that the coin isn’t rigged. You have a whole range of priors on all sorts of topics, at all sorts of confidence levels: you might be 25% confident that aliens exist, or 75% confident that technology will help solve climate change. (Priors can be objective probabilities, as in the example where you know how many coins are rigged, or they can be your subjective estimates of something’s likelihood. We’ll discuss this more shortly…)

Every so often, you’re confronted by new evidence. In the above example, that evidence was seeing the coin land on heads four times in a row.

If you’re a Bayesian, you then use the evidence to update your priors. In the example above, this was the process where you acknowledged that the evidence you had just seen was unlikely, so you reduced your confidence in the coin’s fairness a little bit.

Bayes’ Theorem: how to apply this in practice

In most situations, all you need to do is remember that Bayesian reasoning exists. When you encounter a new piece of evidence, X, remind yourself of your prior belief, think about how likely it is that you’d see X if your prior belief was true, and think about how likely it is you’d see X in general, regardless of whether your prior belief was true or not. Taking all these things into account will help you “update” your priors.

If you want to really be an expert, though, you can learn Bayes’ Theorem. Bayes’ Theorem is the mathematical representation of what we’ve just talked about: it tells you exactly how much to update your priors. The theorem looks like this:

P(A|B) = (P(B|A) * P(A)) / P(B)

Where:

  • P(A|B) means the probability of A happening, given B (the | means ‘given’ in probability notation).
  • P(B|A) means the probability of B happening, given A.
  • P(A) means the probability of A happening.
  • P(B) means the probability of B happening.

It looks complicated, but I promise you that it’s simpler than it seems. Let’s apply it to our coin example. We’re trying to figure out the probability of the coin being fair, given that it showed four heads. First, we need to think about how likely it is that we’d get four heads if the coin was fair. Then we need to multiply that by the probability of the coin being fair, regardless of the evidence. Then we need to divide the whole thing by the probability of getting four heads, regardless of whether the coin is fair or not.

Okay, now let’s go back to the equation. Calculating the probability of the coin being fair (A), given that it showed four heads (B), is super easy:

P(A|B) = (P(B|A) * P(A)) / P(B)

Where:

  • P(A|B) means the probability of A happening, given B.
  • P(B|A) means the probability of B happening, given A.
  • P(A) means the probability of A happening.
  • P(B) means the probability of B happening.

P(B|A) — the probability of getting four heads if the coin was fair — is 50% * 50% * 50% * 50%, or 6.25% — the value we calculated earlier.

P(A) — the probability of the coin being fair — is your prior. In this case, you thought there was a 99% chance of the coin being fair. 

P(B) — the probability of getting four heads — is a little trickier to calculate. We think the coin is fair 99% of the time, so 99% of the time there’s a 6.25% chance of getting four heads. But 1% of the time the coin is rigged — and a rigged coin (which we’ll assume always lands on heads) has a 100% chance of showing four heads. So to calculate the overall probability of getting four heads, we add those two scenarios together: (99%*6.25%) + (1%*100%) = 7.1875%

So putting all of that together: 

  • P(A|B)= (6.25% * 99%)/(7.1875%)
  • P(A|B)= 86%

The real magic of this formula is that the more “surprising” the evidence, the more your prior gets updated. Instead of seeing four heads in a row, for instance, let’s say you saw ten. That’s very unlikely to happen if the coin is fair (a likelihood of about 0.1%). And if we apply Bayes’ Theorem with that new value, we get P(A|B) = 8.8%. In this case, the smart thing to do would be to assume the coin is rigged.
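
To make the arithmetic concrete, here is a minimal Python sketch of the coin example (the function name and structure are my own, and it assumes, as above, that the rigged coin always lands on heads):

```python
# Minimal sketch of the coin example above. Assumes, as in the article,
# that the one rigged coin always lands on heads.

def posterior_fair(prior_fair: float, num_heads: int) -> float:
    """P(coin is fair | num_heads heads in a row), via Bayes' Theorem."""
    p_heads_if_fair = 0.5 ** num_heads        # P(B|A)
    p_heads_if_rigged = 1.0                   # a rigged coin always shows heads
    p_heads = (prior_fair * p_heads_if_fair
               + (1 - prior_fair) * p_heads_if_rigged)    # P(B)
    return prior_fair * p_heads_if_fair / p_heads          # P(A|B)

print(posterior_fair(0.99, 4))   # ~0.86: probably still fair
print(posterior_fair(0.99, 10))  # ~0.088: probably rigged
```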

An example of when this is useful

Bayesians often like to point to cancer tests as a way to demonstrate the method’s value. Let’s say you’re about to get a test for breast cancer. On your way to the clinic, you do some googling, and you learn that only 1% of women have breast cancer. You also learn that people that do have cancer test positive only 80% of the time. Finally, you learn that people that don’t have cancer still test positive 9.6% of the time. 

The next day, you receive a positive test result for breast cancer. You’re probably freaking out! But should you? Bayes’ theorem can help you out. Let’s pull up the equation again:

P(A|B) = (P(B|A) * P(A)) / P(B)

Where:

  • P(A|B) means the probability of A happening, given B.
  • P(B|A) means the probability of B happening, given A.
  • P(A) means the probability of A happening.
  • P(B) means the probability of B happening.

In this case, we want to figure out the probability that you have cancer (A), given a positive test result (B).

P(B|A), the probability of getting a positive test result given that you do have cancer, is 80%: people that do have cancer test positive 80% of the time.

P(A), the probability of having cancer, is 1%.

P(B) is the overall probability of having a positive result. Again, we can break it down into two scenarios: the 1% of people who do have cancer get a positive result 80% of the time, and the 99% of people who don’t have cancer get a positive result 9.6% of the time. Putting that together gives us:

  • P(B) = (1%*80%)+(99%*9.6%)
  • P(B) = 10.304%

So putting everything together:

  • P(A|B)= (80%*1%)/10.304%
  • P(A|B) = 7.76%

That means you’ve got a 7.76% chance of having cancer. That’s not nothing, obviously, but it’s not an astronomically high percentage either. And chances are, it’s a lot lower than the probability you expected before doing the maths — and a lot less scary.
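
If you would rather check the arithmetic in code, here is a minimal Python sketch of the same calculation (the function and parameter names are mine, chosen for illustration):

```python
# Minimal sketch of the cancer-test example, using the numbers above.

def posterior_given_positive(prior: float, true_positive_rate: float,
                             false_positive_rate: float) -> float:
    """P(cancer | positive test), via Bayes' Theorem."""
    p_positive = (prior * true_positive_rate
                  + (1 - prior) * false_positive_rate)    # P(B)
    return prior * true_positive_rate / p_positive         # P(A|B)

print(posterior_given_positive(0.01, 0.80, 0.096))  # ~0.0776, i.e. about 7.76%
```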

Use with caution

Bayesian reasoning and Bayes’ Theorem are super useful tools for thinking. But quite often we don’t have real, calculated numbers to plug into the theorem, and the theorem’s results are only as good as the numbers you put into it.

Instead of picking a coin from a pile where I know one in 100 is rigged, I might be trying to figure out if a coin I got from the bank is fair, given that it showed heads four times in a row. Bayes’ Theorem requires me to put my prior in — the probability of a coin from a bank being fair. What number should I put in here? I’d guess 99.9%, but that isn’t a particularly educated guess — I have no idea how common rigged coins are, or how good banks are at spotting them.
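
To see how much the output depends on that guess, here is a rough Python sketch that runs the four-heads update with a few different made-up priors (these priors are illustrative, not data, and it again assumes a rigged coin always lands on heads):

```python
# Rough sketch: how the four-heads update changes as the (made-up) prior
# changes. Again assumes a rigged coin always lands on heads.

def posterior_fair(prior_fair: float, num_heads: int) -> float:
    p_heads_if_fair = 0.5 ** num_heads
    p_heads = prior_fair * p_heads_if_fair + (1 - prior_fair) * 1.0
    return prior_fair * p_heads_if_fair / p_heads

for prior in (0.9999, 0.999, 0.99, 0.9):
    print(f"prior {prior:.2%} fair -> posterior {posterior_fair(prior, 4):.1%} fair")

# prior 99.99% fair -> posterior 99.8% fair
# prior 99.90% fair -> posterior 98.4% fair
# prior 99.00% fair -> posterior 86.1% fair
# prior 90.00% fair -> posterior 36.0% fair
```

A guess of 99.9% rather than 99% moves the answer from roughly 98% to roughly 86%, which is exactly the kind of swing that made-up inputs can hide.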

We can imagine other examples where I don’t know what numbers to put into Bayes’ Theorem. Let’s say tomorrow we find a colony of bacteria on Mars, and I want to revise my estimates for the probability of intelligent life on Mars. To do that, I’d need to know the probability of finding bacteria if there was intelligent life, my prior probability for there being life on Mars, and the overall probability of there being bacteria on Mars. I don’t know any of those numbers, and even if I was to guess there’s a very good chance that my guesses would be off by an order of magnitude.

In all of these cases, I could make up some numbers, plug them into the theorem, and confidently present the theorem’s output as an “accurate” probability. But the theorem’s output would in fact be close to meaningless, because the inputs are all made up.

It’s very easy to hide our uncertainty using numbers. By assigning a numerical probability to something, we can trick ourselves and others into thinking that the probability is “objective” and “correct”, even if it has very little basis in reality. The only way we can actually learn about reality is to do what scientists have always done: investigate, learn, and test hypotheses. Even when we do this, our priors will often be subjective — influenced by what we’ve chosen to investigate, our methods for investigation, and how much weight we put on the conclusions.

Bayes’ Theorem isn’t a replacement for the knowledge-seeking process; it’s just a helpful way to synthesise the knowledge you discover in the knowledge-seeking process. But as long as you remember that, it can be a very useful tool for making sure your beliefs take into account new evidence.


Economic research and improving economic conditions

For Giving What We Can, I wrote two pieces explaining why economic research and improving economic conditions matters. From the articles:

Around 4.7 billion people (approximately 62% of the world’s population) live on less than $10 a day. And around 700 million people live on less than $1.90 per day, defined as “extreme poverty.”

Research shows that rising incomes are correlated with higher levels of self-reported happiness. Fortunately, we can help make that happen. Since 1990, over one billion people have been lifted out of extreme poverty.

Economic research has the potential to influence change on a large scale, and history suggests that it can lead directly to positive outcomes.

It’s hard to find a single estimate of the overall impact a field as wide as economic research has had. But numerous case studies provide examples of economic research having a very large impact. One example is the deworming treatment mentioned above. GiveWell decided to fund deworming treatments as a result of economic research into the area, and they estimated that their recommendation led to over 319 million treatments being provided to children between 2012 and 2019.

You can read the full posts, which include recommendations for charities to donate to, here and here.