Artificial intelligence has untold potential to transform society for the better. It also has equal potential to cause untold harm. This is why it must be developed ethically.

Artificial intelligence is unlike any other technology humanity has developed. It will have a greater impact on society and the economy than fossil fuels, it’ll roll out faster than the internet and, at some stage, it’s likely to slip from our control and take charge of its own fate.

Unlike other technologies, AI – particularly artificial general intelligence (AGI) – is not the kind of thing that we can afford to release into the world and wait to see what happens before regulating it. That would be like genetically engineering a new virus and releasing it into the wild before knowing whether it infects people.

AI must be carefully designed with purpose, developed to be ethical and regulated responsibly. Ethics must be at the heart of this project, both in terms of how AI is developed and also how it operates.

This sentiment is the main reason why many of the world’s top AI researchers, business leaders and academics signed an open letter in March 2023 calling for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, in order to “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts”.

Some don’t think a pause goes far enough. Eliezer Yudkowsky, the lead researcher at the Machine Intelligence Research Institute, has called for a complete, worldwide and indefinite moratorium on training new AI systems. He argued that the risks posed by unrestrained AI are so great that countries ought to be willing to use military action to enforce the moratorium.

It is probably impossible to enforce a pause on AI development without backing it with the threat of military action. Few nations or businesses will willingly risk falling behind in the race to commercialise AI. However, few governments are likely to be willing to go to war to force them to pause.

While a pause is unlikely to happen, the ethical challenge facing humanity is that the pace of AI development is significantly faster than the pace at which we can deliberate and resolve ethical issues. The commercial and national security imperatives are also hastening the development and deployment of AI before safeguards have been put in place. The world now needs to move with urgency to put these safeguards in place.

Ethical by design

At the centre of ethics is the notion that we must take responsibility for how our actions impact the world, and that we should direct those actions in ways that are beneficent rather than harmful.

Likewise, if AI developers wish to be rewarded for the positive impact that AI will have on the world, such as by deriving a profit from the increased productivity afforded by the technology, then they must also accept responsibility for the negative impacts caused by AI. This is why it is in their interest (and ours) that they place ethics at the heart of AI development.

The Ethics Centre’s Ethical by Design framework can guide the development of any kind of technology to ensure it conforms to essential ethical standards. This framework should be used by those developing AI, by governments to guide AI regulation, and by the general public as a benchmark to assess whether AI conforms to the ethical standards they have every right to expect.

The framework includes eight principles:

Ought before can

Just because we can do something doesn’t mean we should. Sometimes the most ethically responsible thing is to refrain from doing it.

If we have reasonable evidence that a particular AI technology poses an unacceptable risk, then we should cease development, or at least delay until we are confident that we can reduce or manage that risk.

We have precedent in this regard. Bans on several technologies, such as human genetic modification and biological weapons, have been imposed by governments or self-imposed by researchers because those technologies pose an unacceptable risk or would violate ethical values. There is nothing in principle stopping us from deciding to do likewise with certain AI technologies, such as those that allow the production of deepfakes, or fully autonomous AI agents.

Non-instrumentalism

Most people agree we should respect the intrinsic value of humans, sentient creatures, ecosystems and healthy communities, among other things, and not reduce them to mere ‘things’ to be used for the benefit of others.

So AI developers need to be mindful of how their technologies might appropriate human labour without offering compensation, as has been highlighted with some AI image generators that were trained on the work of practising artists. It also means acknowledging that job losses caused by AI have more than an economic impact and can injure the sense of meaning and purpose that people derive from their work.

If the benefits of AI come at the cost of things with intrinsic value, then we have good reason to change the way it operates or delay its rollout to ensure that the things we value can be preserved.

Self-determination

AI should give people more freedom, not less. It must be designed to operate transparently so individuals can understand how it works, how it will affect them, and then make good decisions about whether and how to use it.

Given the risk that AI could put millions of people out of work, reducing incomes and disempowering them while generating unprecedented profits for technology companies, those companies must be willing to allow governments to redistribute that new wealth fairly.

And if there is a possibility that AGI might use its own agency and power to contest ours, then the principle of self-determination suggests that we ought to delay its development until we can ensure that humans will not have their power of self-determination diminished.

Responsibility

By its nature, AI is wide-ranging in application and potent in its effects. This underscores the need for AI developers to anticipate and design for the full range of foreseeable use cases, even those that are not core to their vision.

Taking responsibility means developing AI with an eye to reducing the possibility of these negative cases becoming a reality, and mitigating them when they prove unavoidable.

Net benefit

There are few, if any, technologies that offer pure benefit without cost. Society has proven willing to adopt technologies that provide a net benefit as long as the costs are acknowledged and mitigated. One case study is the fossil fuel industry. The energy generated by fossil fuels has transformed society and improved the living conditions of billions of people worldwide. Yet once the public became aware of the cost that carbon emissions impose on the world via climate change, it demanded that emissions be reduced in order to bring the technology towards a point of net benefit over the long term.

Similarly, AI will likely offer tremendous benefits, and people might be willing to incur even high costs if the benefits are greater still. But this does not mean that AI developers can ignore those costs or avoid taking responsibility for them.

An ethical approach means doing whatever they can to reduce the costs before they happen and mitigating them when they do, such as by working with governments to ensure there are sufficient technological safeguards against misuse and social safety nets in place should the costs rise.

Fairness

Many of the latest AI technologies have been trained on data created by humans, and they have absorbed the many biases built into that data. This has resulted in AI acting in ways that negatively discriminate against people of colour or those with disabilities. There is also a significant global disparity in access to AI and the benefits it offers. These are cases where the AI has failed the fairness test.

AI developers need to remain mindful of how their technologies might act unfairly and how the costs and benefits of AI might be distributed unfairly. Diversity and inclusion must be built into AI from the ground level through training data and methods, and AI must be continuously monitored to see if new biases emerge.
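
The continuous monitoring described above can be made concrete. The sketch below is a minimal, hypothetical illustration (not part of the Ethical by Design framework itself): it runs a simple demographic-parity check, comparing the rate of favourable model outcomes across groups and flagging disparities, which is one common way teams watch for emerging bias. The function names, group labels, sample data and the 0.8 alert threshold are all illustrative assumptions.

```python
from collections import defaultdict

def favourable_rate(outcomes):
    """Fraction of outcomes that were favourable (True)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def demographic_parity_report(records, alert_ratio=0.8):
    """records: iterable of (group_label, favourable) pairs.

    Returns each group's favourable-outcome rate and flags any group
    whose rate falls below alert_ratio times the best group's rate
    (0.8 here reflects the common 'four-fifths' rule of thumb).
    """
    by_group = defaultdict(list)
    for group, favourable in records:
        by_group[group].append(favourable)

    rates = {group: favourable_rate(o) for group, o in by_group.items()}
    best = max(rates.values(), default=0.0)
    flagged = [g for g, r in rates.items() if best > 0 and r / best < alert_ratio]
    return rates, flagged

# Illustrative data only; in practice these pairs would come from live model logs.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

rates, flagged = demographic_parity_report(decisions)
print(rates)    # group_a ~0.67, group_b ~0.33
print(flagged)  # ['group_b'] -- its rate is half the best group's, below 0.8
```

A check like this is only a starting point; in practice teams track several fairness metrics over time, because a model that passes one test at launch can still drift as data and usage change.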

Accessibility

Given the potential benefits of AI, it must be made available to everyone, including those who might have greater barriers to access, such as those with disabilities, older populations, or people living with disadvantage or in poverty. AI has the potential to dramatically improve the lives of people in each of these categories, if it is made accessible to them.

Purpose

Purpose means being directed towards some goal or solving some problem. And that problem needs to be more than just making a profit. Many AI technologies have wide applications, and many of their uses have not even been discovered yet. But this does not mean that AI should be developed without a clear goal and simply unleashed into the world to see what happens.

Purpose must be central to the development of ethical AI so that the technology is developed deliberately with human benefit in mind. Designing with purpose requires honesty and transparency at all stages, which allows people to assess whether the purpose is worthwhile and achieved ethically.

The road to ethical AI

We should continue to press for AI to be developed ethically. And if technology companies are reluctant to pay careful attention to ethics, then we should call on our governments to impose sensible regulations on them.

The goal is not to hinder AI but to ensure that it operates as intended and that its benefits flow to the greatest possible number of people. AI could usher in a fourth industrial revolution. It would pay for us to make this one even more beneficial and less disruptive than the past three.

As a Knowledge Partner in the Responsible AI Network, The Ethics Centre helps provide vision and discussion about the opportunity presented by AI.