Longtermism argues that we should prioritise the interests of the vast number of people who might live in the distant future rather than those of the relatively few people alive today.

Do we have a responsibility to care for the welfare of people in future generations? Given the tremendous efforts people are making to prevent dangerous climate change today, it seems that many people do feel some responsibility to consider how their actions impact those who are yet to be born. 

But if you take this responsibility seriously, it could have profound implications. These implications are embraced most fully by an ethical stance called ‘longtermism,’ which argues that we must consider how our actions affect the long-term future of humanity, and that we should prioritise actions with the greatest positive impact on future generations, even if they come at a high cost today. 

Longtermism is a view that emerged from the effective altruism movement, which seeks to find the best ways to make a positive impact on the world. But where effective altruism focuses on making the current or near-future world as good as it can be, longtermism takes a much broader perspective. 

Billions and billions

The longtermist argument starts by asserting that the welfare of someone living a thousand years from now is no less important than the welfare of someone living today. This is similar to Peter Singer’s argument that the welfare of someone living on the other side of the world is no less ethically important than the welfare of your family, friends or local community. We might have a stronger emotional connection to those nearer to us, but we have no principled reason to privilege their welfare over that of people more removed from us in space or time. 

Longtermists then urge us to consider that there will likely be many more people in the future than are alive today. Indeed, humanity might persist for many thousands or even millions of years, perhaps even colonising other planets. Over the lifetime of the universe, that could mean hundreds of billions of people, not to mention other sentient species or artificial intelligences that also experience pain or happiness. 

The numbers escalate quickly: if there is even a 0.1% chance that our species colonises the galaxy and persists for a billion years, the expected number of future people could run into the hundreds of trillions. 
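To make the arithmetic concrete (the specific figures below are illustrative assumptions, not estimates drawn from any particular source): suppose that, conditional on colonising the galaxy and persisting for a billion years, humanity would contain on the order of $10^{17}$ people in total. Then even a 0.1% chance of that outcome yields

$$0.001 \times 10^{17} = 10^{14}$$

future people in expectation, which is roughly a hundred trillion; a slightly larger conditional population pushes the expectation into the hundreds of trillions.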

The longtermist argument concludes that if we believe we have some responsibility to future people, and if there will be many times more future people than there are people alive today, then we ought to prioritise the interests of future generations over the interests of those alive today. 

This is no trivial conclusion. It implies that we should make whatever sacrifices are necessary today to benefit those who might live many thousands of years in the future. That means doing everything we can to eliminate existential threats that might snuff out humanity, an event that would be a tragedy not only for those who died but also for the far greater number of people denied the opportunity to be born. It also means investing everything we can in technology and infrastructure that will benefit future generations, even if our own welfare is diminished as a result. 

Not without cost

Longtermism has captured the attention and support of some very wealthy and influential individuals, such as Skype co-founder Jaan Tallinn and Facebook co-founder Dustin Moskovitz. Organisations such as 80,000 Hours also use longtermism as a framework to help guide career decisions for people looking to do the most good over their lifetime.  

However, it also has its detractors, who warn that it distracts us from present suffering and near-term threats such as climate change, or that it accelerates the development of technologies, such as superintelligent AI, that could end up doing more harm than good. 

Even supporters of longtermism have debated its plausibility as an ethical theory. Some argue that it might promote ‘fanaticism,’ where we end up prioritising actions that have a very low chance of benefiting a very high number of people in the distant future rather than focusing on achievable actions that could reliably benefit people living today. 

Others question whether we can reliably predict the impacts of our actions on the distant future. Even our most ardent efforts today might ‘wash out’ into historical insignificance within a few centuries, leaving no trace for people living a thousand or a million years hence. If so, the argument goes, we ought to focus on the near term rather than the long term. 

Longtermism is an ethical theory with real impact. It redirects our attention from those alive today to those who might live in the distant future. Some of its implications are relatively uncontroversial, such as the suggestion that we should work hard to prevent existential threats. But its bolder conclusions might be cold comfort for those who see suffering and injustice in the world today and would rather focus on correcting that than on helping to build a world for people who may or may not live a million years from now.