Namaste Yogis. Welcome to the Blockchain & AI Forum, where technology questions are answered! Here no question is too mundane. As a bonus, a proverb is included at the end. Today’s question comes from Tiffany, who is curious about Taiwan’s recently proposed basic artificial intelligence (AI) law.

Tiffany, you came to the right place. You are spot on: Taiwan did recently propose a “basic AI law.” The Computer & Communications Industry Association (CCIA), a trade association that represents the views of its members on matters pertaining to computer and communications technology, submitted comments on the draft. Let’s examine what the CCIA says about the proposed law.
CCIA members have reason to rejoice. Why? Essentially, the CCIA finds that the draft law “generally aligns with best practices, which rely on flexible and international technical standards-based approaches to AI governance that are crucial for supporting innovation and diffusion. Imposing overly-prescriptive rules while the technology still develops could slow innovation and would likely become quickly outdated as global standards and AI technologies and applications are constantly changing.”
The CCIA did submit specific recommendations. For instance, on Section 3, Principles, the CCIA says certain terms need clearer definition, including privacy protection and data governance, transparency and explainability, fairness and non-discrimination, and accountability. Clarity is always better. However, the overarching theme of the CCIA’s comments is to water down the proposed safeguards. Take discrimination. The CCIA suggests the standard should be to “minimize” the impact of discrimination by AI, as opposed to what the proposed law says: no discrimination. If you are on the receiving end of “minimized” discrimination, the CCIA is not coming to your rescue, because apparently the CCIA is okay with a little discrimination. Not a good look for the CCIA.
The CCIA also submitted comments on Article 9, Harm Prevention. Article 9 calls on the government to prevent specified harms, including the creation of misleading or falsified information in violation of existing laws, and asks relevant agencies to develop or procure tools or methods for verification. The CCIA wants a narrower application of harm prevention with respect to text verification because, it says, the technology is still evolving and the emphasis should instead be on efficacy, contextualized to the relevant outputs. The CCIA’s explanation is too self-serving for my taste.
Let’s talk about risk reduction, because nothing in life, much less technology, is risk free. Article 12 of the proposed law would mandate that the government create a risk reduction framework. The framework would exempt pre-deployment AI research from the risk reduction measures. Sounds reasonable. The CCIA wants, once again, to define transparency and disclosure more narrowly so that they align with international standards, specifically the U.S.-based National Institute of Standards and Technology (NIST) AI Risk Management Framework.
With regard to Article 15, Data Openness, Sharing and Reuse, the CCIA has plenty to say. The CCIA notes, correctly, that the government holds vast quantities of data. It wants the government to ensure, through policies and/or rules, that this data can be made publicly available and optimized for AI training. But wait. Whose data is it? Isn’t the data yours and mine? Why should the government turn over our data to private, for-profit companies for building AI models? Apparently the CCIA believes it has a right to our data and wants to use the compulsion authority of government to get its hands on it. I want to decide who gets my data, not the CCIA.
Time to hit the road with a proverb from Taiwan: “Where there is a mountain, there is a road; where there is water, there is a bridge.”
Until next time,
Yogi Nelson