# Don’t let AI make inequality worse, says UN adviser

Artificial intelligence is transforming the way we live, becoming an increasingly familiar part of everyday life and the workplace. A worldwide survey by consultancy McKinsey & Company found that 72 percent of businesses were using AI.

But globally, access to the technology, and the data that feeds it, is not equal, according to Renata Dwan.

Dwan is special adviser to the UN Secretary-General’s envoy on technology, and she’s part of the team building the Global Digital Compact, a proposed framework spearheaded by the UN that aims for a more inclusive, equitable, and secure digital future. AI is the latest addition to the guidelines, which include proposals to foster the fair implementation of the technology in least-developed countries.

The following interview has been edited for length and clarity.

Renata Dwan: For many countries and communities in the global south, AI represents an opportunity to leapfrog development. They can use it to modernize their health services, to automate, to increase productivity. But it also has the potential to magnify the digital divide they already face, particularly in countries that don’t have access to the data that is required for training AI models, or to new AI products and systems. So the question we have to ask ourselves is: Is AI going to be an opportunity for the majority of the world to catch up in their development journey, or to fall further back?

The matter of governance is essentially how we think about AI’s management, its regulation, and its use: how do we govern AI to realize its immense potential while also navigating its risks, not all of which we’re yet certain about?

RD: By its very structure, AI is a global technology. It relies on rare earth materials that are sourced and supplied globally. It relies on vast amounts of data that flow across borders. The products and the developers at the forefront of AI model development are working at global levels. So it is a global technology, and its governance must be global.

Now, we’re also navigating a time of great geopolitical tensions. Many governments desire sovereignty in their technology policies and capabilities, seeking to develop their own AI models, their own training, their own AI centers. However, that is not a capacity that is open to all states.

The energy requirements of data centers are huge, so harnessing those resources requires collaboration. Effectively harnessing the potential of AI, therefore, requires collaboration.

We’re at a time when it’s difficult to have conversations for political reasons. But precisely because the technology develops so quickly, we need those conversations, we need the exchange, we need the sharing of best practices so that we learn … That is one of the key reasons the UN’s proposal in the Global Digital Compact is to have an annual policy dialogue that can be supplemented and fed by forums like Doha. This is so important for our collective learning on this journey.

RD: There are two debates in the AI world right now. There’s the techno-optimist debate: that AI is going to solve absolutely all our problems, and all of us will reach wealth and happiness and live forever. And then there’s the doomsday approach: that AI is going to take control of humanity, and that there are risks around manufacturing weapons of mass destruction.

I think many of the initiatives we’ve seen at the governance level, international initiatives, are very important because they are looking at these very advanced AI models, the safety risks they present, and the need for human control to be maintained throughout. And that’s really critical. But we also need to think about the risk of AI worsening the divides that already exist within our societies, between communities, and across borders.

We need to look at how we become literate in addressing the potential threats of AI in areas such as information integrity. We need to put the emphasis on building our capacities as societies to harness AI technology for good. That requires working with tech companies much more closely than intergovernmental structures like the UN are perhaps used to. It requires us to address market limits in order to direct AI in the public interest.

This post appeared first on cnn.com