Global players list 12 most urgent concerns that AI poses to mankind


SINGAPORE – A list of the 12 most pressing concerns that artificial intelligence (AI) poses to humanity was finalised after a three-day conference of global experts held in Singapore. 

The list, which captures many existential concerns and controversies that have surfaced since AI with human-like intelligence emerged in recent years, will be a reference point to steer research goals and policy decisions globally. 

Forty-two delegates from around the world, including developers, researchers and policymakers, gathered for the workshop-style Singapore Conference on AI (Scai), held between Dec 4 and 6.

The Straits Times looks at the 12 issues highlighted by the delegates – and what they said can be done about them – in a report titled Preliminary Conversations Towards AI for the Global Good: The SCAI Questions, published on Dec 6.

1. Can AI be trusted? 

AI systems are increasingly being used to make critical decisions, but current systems sometimes “hallucinate” and produce inaccurate results. The consequences range from amusing wrong answers to potential disaster, for instance, when AI is applied to self-driving cars or decisions on the operating table.

An AI system needs to be tested against scenarios as close as possible to the tasks that it will carry out in real life, and developers should build AI tools without focusing solely on statistical performance, delegates wrote in a summary. Standardised benchmarks to assess an AI model’s reliability also need to be designed.

2. Good data, good AI 

Robust AI models are backed by large sets of data that help them to make sound analyses and decisions. But building good data sets that support useful AI has been a challenge, especially when the data is sensitive and guarded because of privacy concerns.

To strike this balance, regulation should encourage the creation and flow of data, ensure that databases are of a high quality, and ensure that the data exchanged is secure.

3. Will AI kill us? 

Among the most catastrophic scenarios listed were global-level threats such as accelerated global warming and genocide carried out with AI-enabled nuclear weapons. Delegates noted that widespread social harm and AI-assisted cybercrime are already happening, with AI-driven economic collapse plausible in the near future.

Nations must establish clear warning signs across multiple areas, developers must be subject to stringent audits, and critical systems must be stress-tested before they are linked to AI. 

4. How tight is the leash on AI?

Laws need to catch up to ensure that the risks of AI do not slip through gaps that existing policies do not cover. The authorities also need to establish ways to report breaches, and spell out clearly who is responsible for AI risks. 

They must also strike a balance so that regulation does not impede innovation, and derive frameworks on when the law should step in to crack down on AI. 

5. AI for science 

AI holds the key to solving some of humanity’s most difficult problems, and can potentially develop cures for diseases and create tools to combat climate change – which delegates said ought to be a priority for AI builders for the sake of humanity. Innovation in this space can be advanced by more cooperation across borders to share funds and knowledge. 

6. How can AI learn like humans? 

AI is getting smart, but mankind’s ability to learn naturally is still unmatched by machines. The human brain remains more adaptive, responsive and energy-efficient than any AI – a barrier machines must overcome, which will require new insights from psychology, linguistics, philosophy and other fields.

7. What are AI’s values? 

If AI is to operate without human supervision, users need to know that it will behave according to “human values”. Dubbed the “alignment problem”, the issue is made more complex by the fact that human values are themselves difficult to define.

Delegates urged that a baseline of basic human values be agreed on at a global level, addressing what kinds of AI systems nobody should be allowed to produce, and the minimum expectations for how AI should behave.

8. Fair competition and access

AI technology typically lies in the hands of profit-driven corporations, when it should ideally be used for the public good. Such concentration can affect the price and quality of AI, and potentially restrict some communities’ access to it.

For fairer competition, delegates called for the barriers to entry to develop AI to be lowered. They suggested that AI providers be regulated similarly to telecommunications and public utilities providers.

9. AI for learning 

Students can benefit from having an always-available tutor in AI, which can give immediate feedback and personalised content. Parents can get timely updates on their child’s performance, while teachers can spend less time marking and more time teaching.

Yet before these can happen, AI needs to overcome its tendency to “hallucinate”, and integrate with existing technology used in classrooms.

10. Crackdown on fake news 

Writing tools and image generators have allowed AI-made misinformation to become harder to spot. Such content can spread like wildfire through public channels.

Delegates called for ways to identify the source of information online, and for content platforms to be required to monitor their channels. Algorithms also need to be trained on languages other than English to accurately detect misinformation on global platforms.

11. AI for social good

Who looks out for the little guy? Delegates called for the voices of non-profit organisations, governments and social enterprises to be amplified, as these groups help to assess the impact of AI before it is rolled out. Developers should not rush the roll-out of the technology, so that its impact can be rigorously evaluated and wasted investments avoided.

12. Safety standards

Societies have yet to agree on the standards for auditing AI for its safety during its development. These include assessing its potential to cause security concerns, and economic or psychological damage.
