Namaste Yogis. Welcome to the Blockchain & AI Forum, where your technology questions are answered! Here no question is too mundane. As a bonus, a proverb is also included. Today’s question was submitted by Mr. Bunch from UCLA, who asked whether the United Nations is united on artificial intelligence regulations.

Mr. Bunch, you came to the right place. Recently I traveled to Geneva, Switzerland and met with two U.N. employees. We discussed numerous topics, including your question. I’ll answer you by sharing what I learned.
In 2020, the U.N. launched an initiative on regulating emerging AI technology. Markus Krebsz, Professor of Management, University of Stirling, was selected to lead the investigation. Professor Krebsz is also the Founder of the Human-AI Institute. Four years later, Professor Krebsz produced a report titled U.N. Economic Commission for Europe Working Party 6. For the sake of brevity and sanity, I’ll refer to the report as WP6. Besides, WP6 sounds like a fun rock band, and a good time! Lol. Let’s examine what’s inside WP6, commencing with the purpose, followed by findings and recommendations.
https://unece.org/sites/default/files/2024-07/ECE_CTCS_WP.6_2024_11_E.pdf
PURPOSE OF WP6
WP6 provides guidance to member states (the U.N. calls nations member states because they are all members of the U.N.) for an overall approach to regulating products and/or services with embedded AI systems or digital technologies. The author reports that the most appropriate uses of WP6 include:
• Setting legitimate regulatory objectives
• Identifying and assessing risk
• Identifying relevant international standards
• Establishing mutually recognizable conformity assessment procedures
• Setting market surveillance and other enforcement mechanisms
• Promoting convergence of national technical regulations among agencies currently in place or yet to be developed
KEY FINDINGS & RECOMMENDATIONS
Krebsz starts with an obvious but nevertheless important disclosure: zero risk is impossible. No set of AI regulations can avoid every error. Hence, constant evaluation of AI regulations is prudent.
Second, certain products/services may be intrinsically high risk to human society. Therefore, governments must assess risk and choose appropriate conformity assessment methods, says Krebsz. In high-risk situations, AI cannot be allowed to override human control and human decision-making, according to WP6. Krebsz cites the example of a sophisticated diagnostic system: even if the medical equipment can generate algorithmic decisions, human decision-making must remain part of the process.
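WP6 itself contains policy guidance, not code, but the human-in-the-loop requirement can be sketched in a few lines. Everything below is a hypothetical illustration of the principle; the names (`DiagnosticSuggestion`, `require_human_signoff`) and the medical scenario are my own invention, not part of the report.

```python
from dataclasses import dataclass

# Hypothetical sketch of a human-in-the-loop gate for a high-risk AI system.
# The AI may propose an outcome, but only a human decision finalizes it.

@dataclass
class DiagnosticSuggestion:
    finding: str       # what the model proposes
    confidence: float  # the model's own confidence score, 0.0 to 1.0

def require_human_signoff(suggestion: DiagnosticSuggestion,
                          clinician_approves: bool) -> str:
    """Finalize the outcome only with explicit human approval."""
    if clinician_approves:
        return f"ACCEPTED: {suggestion.finding}"
    return f"DEFERRED: {suggestion.finding} (sent back for human review)"

suggestion = DiagnosticSuggestion(finding="possible fracture, left radius",
                                  confidence=0.92)
print(require_human_signoff(suggestion, clinician_approves=False))
```

Note the design choice: the model’s confidence score is carried along but never used to bypass the clinician. That is the crux of the WP6 principle as I understand it.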
The third recommendation concerns AI technology embedded within products. Krebsz notes consumers may not be fully aware that certain products contain embedded AI technology. Thus, consumers might be surprised to discover the technology reacting in an unexpected manner. The solution is robust testing under various scenarios, says Krebsz. Krebsz applauds the National Institute of Standards and Technology’s AI Risk Management Framework for its work on this issue.
WP6 supplies a series of AI bullet point recommendations and policy guidance, including but not limited to:
• AI embedded systems should not impinge on human freedom nor introduce bias.
• AI technology should not further widen the digital divide, as outlined in a previous U.N. resolution.
• No AI should cause mental health problems.
• AI should be designed to encourage trustworthiness. In other words, the technology’s inner workings should be transparent.
• Scenario and simulation testing and evaluation of residual risk should be part of a national AI policy.
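The scenario-testing recommendation above can also be sketched concretely. This is purely an illustration in the spirit of WP6’s guidance; the thermostat model, thresholds, and scenarios are invented for the example, not drawn from the report.

```python
# Illustrative sketch of scenario-based testing for an embedded AI feature.
# A toy decision rule stands in for a real model; the point is that the
# scenario list deliberately includes edge cases a consumer would never
# anticipate, so residual risk gets exercised before the product ships.

def smart_thermostat(sensor_temp_c: float) -> str:
    """Toy embedded 'AI' decision rule standing in for a real model."""
    if sensor_temp_c < 18.0:
        return "heat"
    if sensor_temp_c > 26.0:
        return "cool"
    return "idle"

# Each scenario: (description, simulated input, expected behavior).
scenarios = [
    ("normal winter morning", 15.0, "heat"),
    ("comfortable afternoon", 22.0, "idle"),
    ("heat wave", 30.0, "cool"),
    ("faulty sensor spike", 500.0, "cool"),  # residual risk: must not crash
]

for name, temp, expected in scenarios:
    actual = smart_thermostat(temp)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: {name} -> {actual}")
```

In a real product the model would be far more complex, but the pattern (enumerate scenarios, simulate inputs, compare against expected behavior, and track what still fails) is the same.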
Apparently, Professor Krebsz’s report was a smashing success! The member states’ representatives adopted WP6, and all signed the corresponding resolution, also written by Krebsz!
Time to leave you with a proverb from Mali, where they say: “The bird that flies off the earth and lands on an anthill is still on the ground”.
Until next time,
Yogi Nelson
