Some recent articles

I occasionally write about this blog’s themes for The Economist. Here are a few recent pieces:

On nuclear fusion

Recent developments have made some people optimistic that “net energy gain” reactions—the holy grail where a nuclear fusion reaction produces more energy than it consumes—could soon be achieved.

On biological weapons

If a pathogen was engineered to be particularly virulent and lethal, it could kill millions of people across the globe. Researchers at Oxford University’s Future of Humanity Institute think such a weapon could even lead to human extinction.

On surviving nuclear war

Potassium iodide can help, but only up to a point. It stops the thyroid, a gland in the neck, from absorbing radioactive iodine. The pills have been distributed in the aftermath of nuclear power-plant meltdowns, and there is evidence to suggest they work.

My colleagues are constantly writing excellent pieces, too; you should subscribe to read them all.

My review of “Whole Earth” and “We Are As Gods”

I reviewed the new biography and documentary about Stewart Brand. The best part of both is where they dive into the tension between Brand’s techno-utopianism and environmentalism (even though I tend to side with Brand that the two can and should co-exist).

You can read the whole thing at The Economist; here’s an excerpt:

Mr Brand’s technophilia helped shape Silicon Valley. But it drove a wedge between him and his ecologically minded friends. He had always been an outlier, enjoying Ayn Rand’s libertarian books at university. His fascination with humans settling in space—he financed the subject’s first major conference in 1974—widened the divide. In 2009 Mr Brand distanced himself from his fellow environmentalists, advocating for genetically modified organisms and nuclear power. As for the eco-warriors, he labelled them “irrational, anti-scientific and very harmful”. In response George Monbiot, an activist, suggested that Mr Brand was a spokesperson for the fossil-fuel industry. The criticism echoed Mr Kesey’s remark decades earlier: “Stewart recognises power. And cleaves to it.” 


What is Bayes’ Theorem and Bayesian reasoning?

It’s a helpful way to teach yourself how to react appropriately to new information.

Introduction

Let’s say you strongly believe something to be true. For example, I might have a pile of 100 coins, and I tell you that 99 of them are fair but one is rigged so that it always lands on heads. When you randomly draw a coin from the pile, you’re pretty sure the coin isn’t rigged — you’re 99% sure, in fact. You decide to flip it a few times anyway. The first time it lands on heads. The second time? Heads. The third time? Heads again. And the fourth time? Heads!

There are a few different ways you could process this information.

  1. You could ignore what you’ve just seen. Everything you knew before flipping the coin is still valid, so you still think there’s a 99% chance of it being a fair coin — the four heads in a row is just a fluke.
  2. You could decide definitively that the coin is rigged. If the coin was fair, the odds of seeing what you’ve just seen are very low: 50% * 50% * 50% * 50% = 6.25%. There’s a much higher chance of seeing four heads in a row if the coin is rigged, so you’re now sure that the coin is rigged.
  3. You could take the middle ground. Everything you knew before flipping the coin is still valid — there was a much higher chance of you picking a fair coin than a rigged one — so you’re still pretty sure the coin is fair. But you acknowledge that what you just saw would be very unlikely if the coin were fair, so you’re a bit less confident in its fairness than before — instead of being 99% confident, you’re now only, say, 86% confident.

Option 3 strikes me, and many other people, as the most sensible way of thinking. It’s less of an all-or-nothing approach, considering both what you knew beforehand and what you’ve just learnt. Helpfully, it’s also pretty easy to apply in day-to-day life. It’s called Bayesian reasoning.

Why is Bayesian reasoning useful?

We have a tendency to either under- or over-react to new pieces of evidence. The example above was extreme: the first option ignored the evidence altogether because of the prior belief, while the second let the evidence completely overrule it. But without realising, you’ve probably done something similar before.

Consider the last time you read about a murder. I’d bet it made you a fair bit more scared of being murdered, even though statistically one additional murder doesn’t meaningfully change the overall likelihood of being murdered. Being more scared is a completely human reaction, but sometimes it’s helpful to step back and remember what your beliefs were before you learnt about the new event.

Similarly, we all know people who are entrenched in their views and refuse to change their minds, even when presented with important new evidence. Being open to having your mind changed is really important.

Approaching situations with a Bayesian mindset is a good way to overcome this. It helps you ensure that your beliefs change with the evidence, and that your beliefs are grounded in everything you know, not just the thing that’s come to mind most recently.

Decoding Bayesian jargon

Bayesians — people who employ Bayesian reasoning — like to use jargon to simplify how they talk about things. So before we jump in any further, let’s learn the jargon.

When we approach a situation, we often go in with a prior belief, often shortened to just “prior”. In the above example, your prior was your 99% certainty that the coin isn’t rigged. You have a whole range of priors on all sorts of topics, at all sorts of confidence levels: you might be 25% confident that aliens exist, or 75% confident that technology will help solve climate change. (Priors can be objective probabilities, as in the example where you know how many coins are rigged, or they can be your subjective estimates of something’s likelihood. We’ll discuss this more shortly…)

Every so often, you’re confronted by new evidence. In the above example, that evidence was seeing the coin land on heads four times in a row.

If you’re a Bayesian, you then use the evidence to update your priors. In the example above, this was the process where you acknowledged that the evidence you had just seen was unlikely, so you reduced your confidence in the coin’s fairness a little bit.

Bayes’ Theorem: how to apply this in practice

In most situations, all you need to do is remember that Bayesian reasoning exists. When you encounter a new piece of evidence, X, remind yourself of your prior belief, think about how likely it is that you’d see X if your prior belief was true, and think about how likely it is you’d see X in general, regardless of whether your prior belief was true or not. Taking all these things into account will help you “update” your priors.

If you want to really be an expert, though, you can learn Bayes’ Theorem. Bayes’ Theorem is the mathematical representation of what we’ve just talked about: it tells you just how much to update your priors by. The theorem looks like this:

P(A|B) = P(B|A) * P(A) / P(B)

Where:

  • P(A|B) means the probability of A happening, given B (the | means ‘given’ in probability notation).
  • P(B|A) means the probability of B happening, given A.
  • P(A) means the probability of A happening.
  • P(B) means the probability of B happening.

It looks complicated, but I promise you that it’s simpler than it seems. Let’s apply it to our coin example. We’re trying to figure out the probability of the coin being fair, given that it showed four heads. First, we need to think about how likely it is that we’d get four heads if the coin was fair. Then we need to multiply that by the probability of the coin being fair, regardless of the evidence. Then we need to divide the whole thing by the probability of getting four heads, regardless of whether the coin is fair or not.

Okay, now let’s go back to the equation. Calculating the probability of the coin being fair (A), given that it showed four heads (B), is super easy:

P(A|B) = P(B|A) * P(A) / P(B)

P(B|A) — the probability of getting four heads if the coin was fair — is 50% * 50% * 50% * 50%, or 6.25% — the value we calculated earlier.

P(A) — the probability of the coin being fair — is your prior. In this case, you thought there was a 99% chance of the coin being fair. 

P(B) — the probability of getting four heads — is a little trickier to calculate. We think coins are fair 99% of the time, so 99% of the time we think there’s a 6.25% chance of getting four heads. But 1% of the time the coin is rigged — and if the coin is rigged, it has a 100% chance of getting four heads. So to calculate the overall probability of getting four heads, we add those two scenarios together: (99%*6.25%) + (1%*100%) = 7.1875%

So putting all of that together: 

  • P(A|B)= (6.25% * 99%)/(7.1875%)
  • P(A|B)= 86%

The real magic of this formula is that the more “surprising” the evidence, the more your prior gets updated. Instead of seeing four heads in a row, for instance, let’s say you saw ten. That’s very unlikely to happen if the coin is fair (roughly a 0.1% likelihood). And if we apply Bayes’ theorem with that new value, we get P(A|B) = 8.8%. In this case, the smart thing to do would be to assume the coin is rigged.
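If you’d like to check these numbers yourself, here’s a minimal Python sketch of the update. The posterior function is my own illustration, not a standard library call, and it assumes (as above) that a rigged coin always lands on heads:

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    # Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B),
    # where P(B) sums over both scenarios (A true and A false).
    p_evidence = prior * p_evidence_if_true + (1 - prior) * p_evidence_if_false
    return prior * p_evidence_if_true / p_evidence

# Prior: 99% confident the coin is fair.
# A fair coin shows four heads with probability 0.5**4 = 6.25%;
# a rigged coin (assumed to always land heads) shows them with probability 100%.
print(posterior(0.99, 0.5**4, 1.0))   # ~0.86 -> still 86% confident the coin is fair
print(posterior(0.99, 0.5**10, 1.0))  # ~0.09 -> probably rigged after ten heads
```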

An example of when this is useful

Bayesians often like to point to cancer tests as a way to demonstrate the method’s value. Let’s say you’re about to get a test for breast cancer. On your way to the clinic, you do some googling, and you learn that only 1% of women have breast cancer. You also learn that people that do have cancer test positive only 80% of the time. Finally, you learn that people that don’t have cancer still test positive 9.6% of the time. 

The next day, you receive a positive test result for breast cancer. You’re probably freaking out! But should you? Bayes’ theorem can help you out. Let’s pull up the equation again:

P(A|B) = P(B|A) * P(A) / P(B)

In this case, we want to figure out the probability that you have cancer (A), given a positive test result (B).

P(B|A), the probability of getting a positive test result given that you do have cancer, is 80%: people that do have cancer test positive 80% of the time.

P(A), the probability of having cancer, is 1%.

P(B) is the overall probability of getting a positive result. Again, we can break it down: the 1% of people who do have cancer get a positive result 80% of the time, and the 99% of people who don’t have cancer get a positive result 9.6% of the time. Putting that together gives us:

  • P(B) = (1%*80%)+(99%*9.6%)
  • P(B) = 10.304%

So putting everything together:

  • P(A|B)= (80%*1%)/10.304%
  • P(A|B) = 7.76%

That means you’ve got a 7.76% chance of having cancer. That’s not nothing, obviously, but it’s not an astronomically high percentage either. And chances are, the probability is a lot lower than you expected before doing the maths — and a lot less scary.
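Here’s the same calculation as a short, self-contained Python sketch, using the figures quoted above (the variable names are my own):

```python
# Cancer-screening figures quoted above.
prevalence = 0.01        # P(A): 1% of women have breast cancer
sensitivity = 0.80       # P(B|A): chance of a positive test if you have cancer
false_positive = 0.096   # chance of a positive test if you don't have cancer

p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive  # P(B)
p_cancer_given_positive = sensitivity * prevalence / p_positive            # P(A|B)

print(p_positive)               # ~0.10304, i.e. 10.304%
print(p_cancer_given_positive)  # ~0.0776, i.e. about 7.76%
```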

Use with caution

Bayesian reasoning and Bayes’ theorem are super useful ways of thinking. But we quite often don’t have real, calculated numbers to plug into the theorem. And the theorem’s results are only as good as the numbers you put into it.

Instead of picking a coin from a pile where I know one in 100 is rigged, I might be trying to figure out if a coin I got from the bank is fair, given that it showed heads four times in a row. Bayes’ Theorem requires me to put my prior in — the probability of a coin from a bank being fair. What number should I put in here? I’d guess 99.9%, but that isn’t a particularly educated guess — I have no idea how common rigged coins are, or how good banks are at spotting them.
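To see how much a guessed prior matters here, consider this small illustrative sketch. It reuses the coin update from earlier and again assumes a rigged coin always lands on heads; the priors are just guesses, which is exactly the problem:

```python
def posterior(prior_fair, p_heads_if_fair, p_heads_if_rigged=1.0):
    # P(fair | four heads), with P(B) summed over fair and rigged coins.
    p_evidence = prior_fair * p_heads_if_fair + (1 - prior_fair) * p_heads_if_rigged
    return prior_fair * p_heads_if_fair / p_evidence

# How confident should I be that the bank's coin is fair after four heads,
# under a few different guessed priors?
for prior in (0.999, 0.99, 0.9):
    print(prior, round(posterior(prior, 0.5**4), 3))
# 0.999 -> 0.984, 0.99 -> 0.861, 0.9 -> 0.36: the answer swings with the guess.
```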

We can imagine other examples where I don’t know what numbers to put into Bayes’ Theorem. Let’s say tomorrow we find a colony of bacteria on Mars, and I want to revise my estimates for the probability of intelligent life on Mars. To do that, I’d need to know the probability of finding bacteria if there was intelligent life, my prior probability for there being life on Mars, and the overall probability of there being bacteria on Mars. I don’t know any of those numbers, and even if I was to guess there’s a very good chance that my guesses would be off by an order of magnitude.

In cases like these, I could make up some numbers, plug them into the theorem, and confidently present the theorem’s output as an “accurate” probability. But the theorem’s output would in fact be close to meaningless, because the inputs are all made up.

It’s very easy to hide our uncertainty using numbers. By assigning a numerical probability to something, we can trick ourselves and others into thinking that the probability is “objective” and “correct”, even if it has very little basis in reality. The only way we can actually learn about reality is to do what scientists have always done: investigate, learn, and test hypotheses. Even when we do this, our priors will often be subjective — influenced by what we’ve chosen to investigate, our methods for investigation, and how much weight we put on the conclusions.

Bayes’ Theorem isn’t a replacement for the knowledge-seeking process; it’s just a helpful way to synthesise the knowledge you discover in the knowledge-seeking process. But as long as you remember that, it can be a very useful tool for making sure your beliefs take into account new evidence.


What made Xerox PARC so successful?

One of the best books I read last year was Michael Hiltzik’s Dealers of Lightning. It’s about Xerox PARC, the incredibly influential, yet still mostly unknown, research division of Xerox. PARC developed some of personal computing’s most important technologies: Ethernet and modern GUIs are its best-known inventions, but there are dozens of other examples. PARC was also a very important talent incubator — Adobe and Pixar were founded by its alumni, while other employees went on to influential roles at Apple and Microsoft.

While reading, I tried to figure out what made PARC so successful, with an eye to how that success could be replicated today. Here’s what struck me as particularly important.

Contents

  1. Timing
  2. Exposure to outside ideas
  3. Network building
  4. Limited bureaucracy
  5. A social environment
  6. Why didn’t it work?
  7. Meta Reality Labs: the modern PARC?

All quotes are from the book and attributable to Hiltzik, unless otherwise specified.

Timing

PARC was founded in 1969, and I’m not sure it could have been founded at any other time. The late ’60s were “a buyer’s market for high-caliber research talent”: the vast expense of the Vietnam War had cut into government budgets, and a recession “[exerted] the same effect on corporate research”. That meant Xerox didn’t have to fight particularly hard to hire talented people.

Computing was also at “a historic inflection point”, making big leaps forward relatively easy to come by. A 2010s version of PARC might have had equally talented researchers producing equally excellent research, but the research wouldn’t be as revolutionary simply because the easy, game-changing problems had already been solved. (Patrick Collison talks about this a bit here — we’re mostly confronting “wicked problems” these days.)

“Historic inflection points” also attract brilliant people. Take Butler Lampson, who decided to abandon physics for computer science. “He found the task of advancing a science that history’s greatest intellects had been mining for 300 years fundamentally uninteresting. Especially when a brand-new field beckoned in which every new discovery represented a terrific leap forward in human enlightenment.”

Smart people like new things, both for the intellectual challenge and for the increased ease of fame and fortune. So you’ll often find brilliant people working on new, exciting fields where there’s a lot of opportunity for rapid progress — and because those fields attract brilliant people, the progress is indeed rapid. Tyler Cowen and Ezra Klein have noted that an awful lot of smart people are attracted to crypto right now; perhaps that’s a modern example of this kind of talent clustering.

Exposure to outside ideas

Serendipity was behind a lot of PARC’s success. It seems people would regularly hear about ideas by chance — at a conference, at a party, from an invited speaker — and decide to work on them as a result. As David Thornburg said, “we were physically adjacent to Stanford University, so there were visitors dropping in and out of the lab all the time.” This was intentional. Bert Sutherland, who managed one of PARC’s labs, had a policy “to keep [the lab’s] atmosphere enriched via continual contact with the outside world.” Learning what’s going on elsewhere can spark new ideas — and introduce you to potential solutions you’d never have come up with yourself.

PARC was also an important source of inspiration for other organisations, most famously Apple. Steve Jobs and a few other engineers toured PARC, which supposedly inspired many of the key features of Apple’s Lisa and Macintosh. According to Hiltzik, the story’s been warped over the years and Apple didn’t exactly pilfer PARC’s ideas. But the visit was still important: for Bill Atkinson, an Apple engineer, “seeing the overlapping windows on the Alto screen was … more a confidence-builder than a solution to his quandary.” In Atkinson’s words, “knowing it could be done empowered me to invent a way it could be done.”

Network building

While at ARPA, Bob Taylor — who went on to found PARC — realised that research was all about talent. “He would visit his grant recipients several times a year, but not solely to hear the researchers’ obligatory progress reports. He was engaged in something more like community outreach, developing new teams, nurturing up-and-coming young researchers, cultivating an entire new generation of virtuosi.” He also organised conferences, bringing researchers together in an attempt to forge connections (and expose them to new ideas).

This worked, both in encouraging research and in helping Taylor recruit. Because of the years he spent building a network, Taylor knew — and was trusted by — practically everyone. That made PARC a hub for many of the world’s most talented people.

Limited bureaucracy

PARC’s leaders hated bureaucracy, and employees had plenty of freedom to go off and pursue ideas. Its founders’ “experience had taught them that the only way to get the best research was to hire the best researchers they could find and leave them unburdened by directives, instructions, or deadlines.”

Some senior PARC leaders came from ARPA, which appears to have inspired this lack of bureaucracy. ARPA was rare among government agencies, in that it didn’t overburden staff with paperwork. Bob Taylor, then director of ARPA’s Information Processing Techniques Office (he later went on to help found PARC), said he “had no formal proposals for the ARPANET”, one of the most important projects in human history. Here’s the book’s account of the meeting that greenlit ARPANET:

“After listening politely for a short time, Herzfeld interrupted Taylor’s rambling presentation … All he had was a question. ‘How much money do you need to get it off the ground?’ ‘I’d say about a million dollars or so, just to start getting organized.’ ‘You’ve got it,’ Herzfeld said. ‘That,’ Taylor remembered years later of the meeting at which the Internet was born, ‘was literally a twenty-minute conversation.'”

This lack of bureaucracy only worked because PARC’s managers were humble. The whole idea of a manager “approving” a project in some way implies that the manager knows better than the researcher what’s worthwhile — which, if you’re hiring the world’s best researchers, probably isn’t true. Instead, managers at PARC seem to have often just trusted researchers, accepting that they were smarter than their bosses. PARC manager Harold Hall, who said “I had developed and honed the skill of making myself useful to people whose intellectual gifts dwarfed my own”, epitomised this humility.

A social environment

PARC wasn’t just a workplace. It was a community of people who spent most of their lives together. “There were family picnics and a softball team”, a “volleyball net for daily lunchtime matches”, and regular tennis matches, bike-rides and communal meals — all of which served as venues to talk about work while doing something other than work.

Alan Kay, whose lab was “socially … PARC’s most cohesive unit”, was a firm believer in the importance of these events. “He believed strongly that a lab’s success depended on a shared vision,” and spending so much time with other people helps to create that vision.

The sections of the book on in-person socialising significantly changed my views on remote work, and I’m now much less certain that revolutionary work can be done remotely. At the very least, it seems that regular team retreats are a necessity.

Why didn’t it work?

PARC was immensely innovative, but you could argue that it was ultimately a failure. You don’t own a Xerox laptop or use a Xerox phone, after all. Hiltzik offers many reasons for why Xerox failed to commercialise most of PARC’s technology, but one in particular sticks out.

In the ’70s, Xerox’s business was essentially based on microtransactions. They loaned copiers to businesses, and each time a customer copied a page, a meter on the machine “clicked” and Xerox was paid a small fee. Xerox’s salespeople’s commissions came from those fees.

This incentive model would not easily translate to computers: how do you charge customers for each ‘use’ of a computer? And without a per-use fee, there was no obvious revenue stream for the salespeople’s commissions. “To sell computers, Xerox would not only have to build a new kind of machine, but also a new system of compensating and motivating its more than 100,000 sales executives.” That proved difficult.

Not all of PARC’s problems can be blamed on its parent company, though. There were plenty of internal issues too. PARC founder Jack Goldman, for instance, “had little conception of the economics of product development.”

Perhaps more importantly, as PARC aged it developed a sort of sclerosis. Bob Taylor, seemingly out of misplaced confidence, ignored his colleagues’ advice that important things were happening at the Homebrew Computer Club and at Apple. “That’s not going to happen, because we have the smartest people here. I believe if you have the smartest people, you’ll end up ahead,” he reportedly said. Yet it turned out that some very smart people worked elsewhere, and they ended up leapfrogging PARC.

Meta Reality Labs: the modern PARC?

Writing in 1999, Hiltzik offered an explanation for why nothing like PARC existed anymore. One reason was the first thing we looked at: timing. “The science of computing is no longer at the historic inflection point it occupied at the start of the 1970s,” Hiltzik wrote.

That was certainly true in 1999. But today might be different. Most technologists seem fairly sure that we’re on the cusp of a new computing platform. It might not be called the metaverse and it might not be delivered through smartglasses, but the odds are good that we’ll soon be living in a hybrid and omnipresent digital world. This impending shift is much bigger than the PC-to-smartphone one, because it requires entirely new UX paradigms. Countless clunky demos show that we’re a long way from figuring out what those paradigms are.

Yet people are trying to figure it out — most notably, the people at Meta Reality Labs. Its researchers are working on new interaction mechanisms (haptic gloves and brain-computer interfaces), new operating systems, new manufacturing techniques, and all sorts of stuff I won’t even pretend to understand. This research is broad, experimental, and potentially revolutionary, much like PARC’s. In fact, Reality Labs may be the 21st century’s PARC.

The parallels aren’t perfect: for one thing, Reality Labs’ work is more obviously tied to Meta’s goals than PARC’s was to Xerox’s. But I’m not the only one to draw the link between the two. In an interview last year, Reality Labs head Michael Abrash said “[Doug] Engelbart and Xerox PARC are the only time that fundamentally the way we interact with the digital world has ever changed … AR glasses are going to require that to happen.” The next few years will show if, like PARC, Reality Labs can change the world — and if Meta, unlike Xerox, will be able to capitalise on it.

For its part, PARC still exists, albeit in a very different form. And its influence on humanity may not be over: PARC researchers are involved in the Marine Cloud Brightening Project, an effort to explore whether we can use geoengineering to help fight climate change.

You can buy Dealers of Lightning here; I highly recommend reading it.

(Potential) Technological Solutions to Climate Change

What this is and why I’ve made it

Many people and organisations are attempting to use technology to solve climate change. Their work spans a broad range of areas, including energy production (nuclear fusion, next-gen geothermal energy), new materials (carbon-free cement, biopolymers), and tech that will help us adapt to a worst-case scenario (solar geoengineering, water desalination, wildfire-fighting drones).

I struggled to find a good overview of all the climate tech solutions out there, so I decided to do some research and build my own. I think more people should be aware of the potential benefits of climate tech: I’ve noticed lots of climate-related pessimism and negativity among my friends, and think that learning about potential solutions to climate change can help alleviate that pessimism.

Limitations

I’ve focused on deep-tech/hardware rather than software solutions, in part because deep-tech interests me the most, and in part because including all software areas would make the list unwieldy.

I’ve not included technology that is commonplace today, such as electric cars, solar panels, or plant-based food.

I’m not an expert in any area of this, so may have made mistakes; if you spot a mistake please let me know in the comments or email me.

Strictly categorising things is hard, so some solutions may belong to categories other than the one I’ve put them in.

I am not endorsing any of these ideas or startups, or making any claims about their chances of success. When I mention startups working on the problems, I’ve just selected ones that I’ve come across; there may be others working on the problems that I have not named.

This list is far from comprehensive; please make suggestions for other areas I should include in the comments.

Contents

  1. Energy
  2. Agriculture
  3. Transport
  4. Materials
  5. Carbon Capture and Storage
  6. Solar Geoengineering
  7. Adaptation

(Potential) Technological Solutions to Climate Change: A Categorised List

1. Energy

1.1 Energy Production

Nuclear fusion, which would fuse atomic nuclei together to produce large amounts of clean energy.

Producing green hydrogen with new electrolysis technology.

Converting natural gas into hydrogen and solid carbon, which can be sequestered.

  • Startups working on this: C-Zero.

Converting coal-fired power stations to run on small, modular nuclear reactors.

Advanced nuclear power stations, which are easier to build and cheaper to run.

Next-generation geothermal power stations, which could provide 24/7 “baseload” power.

Producing gas from waste using gasification.

Floating wind turbines, which could allow us to deploy turbines further out to sea — where winds tend to be stronger.

  • Organisations working on this: Equinor.

Broad-spectrum solar panels, which turn more of the light spectrum into electricity, increasing the amount of power produced.

1.2 Energy Storage

Iron flow and iron air batteries, for long-duration grid energy storage.

Organic redox flow batteries, which are cheap and stable.

Electro-thermal energy storage, for long-duration grid energy storage.

  • Startups working on this: Malta.

Geomechanical pumped hydro storage, which stores energy underground in the form of pressurised water.

Gravity storage, which lifts composite bricks when energy is plentiful, then lowers them to generate electricity when it is needed.

Solid-state batteries, which have higher energy densities and can charge faster.

Coating lithium-ion batteries in graphene to increase their energy density.

Making battery anodes from silicon, increasing energy density.

  • Startups working on this: Sila.

Making carbon nanotube electrodes for “ultra-fast carbon batteries”.

Producing paper biofuel cells, which are biodegradable.

  • Startups working on this: BeFC.

1.3 Energy Transmission

Aluminium-encapsulated carbon core conductors, for long-life and efficient power transmission.

Evaporative cryogenic cooling systems for high-temperature superconductors, allowing for higher levels of power transmission.

  • Startups working on this: Veir.

Grid management software and monitoring.

2. Agriculture

Using robots and AI to increase farming efficiency.

Making pet and human food from insects.

  • Startups working on this: Ynsect.

Growing meat in a lab.

Using microbes to provide crops with nitrogen, as an alternative to synthetic fertiliser.

Creating alginate microcapsules for biofertilisers.

Using microbes to produce proteins from electricity and air.

Using microalgae to produce proteins and fatty acids.

Shocking livestock slurry with synthetic lightning to eliminate methane emissions.

Masks for cows that contain a catalytic converter to neutralise exhaled methane.

  • Startups working on this: Zelp.

Feed supplements containing garlic powder and bitter orange extracts, to reduce cattle methane emissions.

3. Transport

Replacement, decarbonised-fuel engines for diesel-powered machines.

Electric and hydrogen-powered planes.

A new ion exchange process to extract lithium (needed in electric vehicle batteries) from brine more efficiently.

Using AI to find more rare minerals to mine.

Electric freight ships.

Battery powered freight trucks.

Hydrogen fuel-cell powered freight trucks.

4. Materials

Using electro-extraction to remove and recycle minerals from e-waste.

Using a new manufacturing process to make silicon wafers for solar panels at lower cost and higher efficiency.

Using electricity to produce carbon dioxide free steel.

Producing cement with lower or no CO₂ emissions.

Using CO₂ to make concrete.

Creating cellulose and bioplastics from biomass, using aldehyde-assisted fractionation.

  • Startups working on this: Bloom.

Producing amino acids and sweeteners through fermentation.

  • Startups working on this: DMC.

Using microorganisms to turn air and greenhouse gases into a thermoplastic.

Using catalysts to replicate photosynthesis, turning CO₂ into jet fuel and other materials.

  • Startups working on this: Twelve.

Using casein to make plastic alternatives.

Growing microalgae and using them to produce ingredients for cosmetics and food products.

Using hydrothermal processing to return clothing to its raw ingredients, which can then be reused.

  • Startups working on this: Circ.

5. Carbon Capture and Storage

Pulling CO₂ from the air and turning it into rock using enhanced carbon mineralisation.

Using satellite imagery to improve forest carbon markets.

Using electrochemistry to de-acidify the ocean and enhance its carbon-storing properties.

Dissolving olivine-containing rock into the ocean, increasing its CO₂ uptake and decreasing its acidity.

Using electrochemistry to sequester the ocean’s CO₂ as a seashell-like material.

Growing kelp to capture carbon, then sinking it beneath the ocean to sequester it.

Mixing pulverised minerals into agricultural soil to accelerate carbon capture through rock weathering.

Using abundant minerals to create cost-efficient direct air capture.

Baking forestry waste in a low oxygen environment to produce biochar, a stable form of carbon.

Turning biomass into bio-oil using pyrolysis, then injecting the bio-oil underground.

  • Startups working on this: Charm.

6. Solar Geoengineering

N.B.: All of this research is very early stage, and a long way away from being used (if it ever is used).

Putting a giant solar sail in space (around 1.5 million kilometres away from Earth) to reduce the amount of sunlight reaching Earth, cooling the planet.

Injecting aerosols into the stratosphere to reflect or absorb sunlight, cooling the planet.

Adding aerosols to marine clouds to make them more reflective, cooling the planet.

7. Adaptation

7.1 Water shortages

Drawing water from the air, using solar-powered panels, to produce more drinking water.

  • Startups working on this: Source.

Adding silver iodide to clouds to encourage precipitation and alleviate drought.

Using 3D printed membranes to improve the efficiency of water desalination.

Solar-powered desalination systems.

7.2 Food shortages

Producing single cell proteins from CO₂ and hydrogen.

Producing single cell proteins from natural gas.

Scaling up seaweed farming.

Genetically modifying crops to increase resilience and yield.

  • Organisations working on this: CGIAR.

7.3 Natural disasters

Drones that can fight wildfires.

Satellites for disaster monitoring and early warning systems.

Further reading and sources

Breakthrough Energy, Bill Gates’ climate fund.

Stripe Climate, Stripe’s carbon removal fund.

Demeter

PwC State of Climate Tech 2021

ALLFED

Can tech help fight climate change? Five innovations making a difference today — Secure Futures

This is climate tech — GreenBiz

Clean Energy Ventures

The Next Generation of Climate Innovation — BCG

High-tech climate solutions that could cut emissions in the long term — Reuters

10 adaptation technologies — Climate Action

10 European startups tackling natural disasters and other emergencies — EU Startups

Seven emerging technologies that will be vital for fighting climate change — Recharge

Reflections on a meeting about space-based solar geoengineering — Harvard Solar Geoengineering Research Blog