Namaste Yogis. Welcome to the Blockchain & AI Forum, where your blockchain and artificial intelligence technology questions are answered! Here no question is too mundane. As a bonus, a proverb is also included. Today's question was submitted by Ish, who wants to know: what tools are being created to ensure artificial intelligence (AI) security?

Ish, you came to the right place. I did a little research and discovered an excellent report from the RAND Corporation. Let's review what they say, but first a word about RAND.
The RAND Corporation is a non-profit policy analysis organization established in 1948. Its mission is to advise governments and large corporations on matters of national importance. RAND was contracted by the United Kingdom to examine the range of tools designed for the development, deployment and use of trustworthy AI in the United Kingdom and the United States. Wow! No easy task, Batman! What did RAND conclude?
Let's begin with tools for trustworthy AI. RAND acknowledges the rather obvious point: there is no universally accepted definition of the term "tools for trustworthy AI". However, governments and large institutions have consistently articulated certain principles that would enhance trustworthiness in AI. Those principles include fairness, transparency, accountability, privacy, safety and explainability. Notwithstanding the lack of consensus, RAND made the attempt and offered the following definition:
Tools for trustworthy AI are specific approaches or methods to help make AI more trustworthy and can help to bridge the gap between the high-level AI principles and characteristics, on the one hand, and the practical implementation of trustworthy AI, on the other.
Based on their definition, RAND identified far more tools for developing trustworthy AI than I expected: 233, to be precise. RAND divides the tool set into three categories: technical, procedural, and educational. You may be surprised to learn that the US relies more heavily on technical professionals and academia to develop AI trust tools than the UK does. Additionally, large US corporations develop more trustworthy AI tools than their UK counterparts. The UK prefers procedural methods and relies less on academia.
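To make the "technical" category a little more concrete, here is a minimal sketch of what such a tool might look like in practice: a simple fairness check that measures the demographic parity difference between two groups in a model's predictions. The function name, data and numbers below are my own illustrative assumptions, not something drawn from the RAND report.

```python
# A minimal sketch of a "technical" tool for trustworthy AI: a fairness check
# that computes the demographic parity difference between two groups in a
# model's binary predictions. All names and data here are illustrative
# assumptions, not anything specified in the RAND report.

def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups.

    Assumes `groups` contains exactly two distinct group labels.
    """
    rates = {}
    for group in set(groups):
        # Gather the predictions that belong to this group.
        group_preds = [p for p, g in zip(predictions, groups) if g == group]
        # Fraction of this group that received a positive (1) prediction.
        rates[group] = sum(group_preds) / len(group_preds)
    rate_a, rate_b = rates.values()
    return abs(rate_a - rate_b)

# Example: loan-approval predictions (1 = approved) for applicants in groups "A" and "B".
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.50 -> a sizeable fairness gap
```

A real technical tool would of course be far more sophisticated, but even a check this simple shows what RAND's definition is getting at: turning a high-level principle like fairness into something you can actually measure.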
As expected, the RAND study closes with a series of recommendations, starting with this guiding principle:
AI tools need to be complemented by a collaborative and inclusive approach that involves multiple perspectives and actors, such as governments, businesses, civil society, academia and international organizations.
In other words, be willing to listen to voices from across the spectrum. RAND also suggests governments use the following methodology:
- Link up with relevant stakeholders to proactively track and analyze the landscape of tools for trustworthy AI in the UK, the US and beyond.
- Systematically capture experiences and lessons learnt on tools for trustworthy AI, share those insights with stakeholders and use them to anticipate potential future directions.
- Promote the consistent use of a common vocabulary for trustworthy AI among stakeholders in the UK and the US.
- Encourage the inclusion of assessment processes in the development and use of tools for trustworthy AI to gain a better understanding of their effectiveness.
- Continue to partner and build diverse coalitions with international organizations and initiatives, and to promote interoperable tools for trustworthy AI.
- Join forces to provide resources such as data and computing power to support and democratize the development of tools for trustworthy AI.
Time to end with a proverb from the UK, where they say: Every cloud has its silver lining.
Until next time,
Yogi Nelson
