Many view Chinese and Western approaches to AI regulation as oil and water. While significant differences exist, there is more overlap than one might think. As the U.S. considers its own approach, we may see more agreement and less technological balkanization.
In 2018, when the European Union’s General Data Protection Regulation (GDPR) came into force, it stood largely alone as a piece of legislation. The two major centers of data-driven innovation and disruption — the United States and China — did not have anything comparable.
Fast forward to 2021, and stakeholders are paying similar attention to the regulation of AI. This time, however, the story looks different. Governments, academics and civil society organizations around the world have recognized both the tremendous transformative potential of AI and the risks it can introduce or accentuate.
AI-Related Regulatory Initiatives Are Proliferating Quickly
Compared with the state of data privacy regulation five years ago, AI-related regulatory initiatives appear much more broad-based, with more than 150 sets of AI principles and guidelines issued around the world so far. For example:
- The European Union proposed its AI Act in April this year, as well as associated regulations around digital services and markets.
- In the United States, the Federal Trade Commission stressed the need for truth, fairness and equity in algorithms in April. The Food and Drug Administration issued an action plan on AI-based software in medical devices in January. Five financial regulators, including the Federal Reserve and the Office of the Comptroller of the Currency, undertook an extensive industry consultation on the use of AI in March.
- Similar initiatives are underway in numerous other geographies, including the United Kingdom, India, the United Arab Emirates and Singapore.
China Actively Drafting Broad-Ranging Legislation
More recently, there has also been a flurry of regulatory activity in China. Earlier this year, several well-publicized initiatives targeted the broader technology space, including data security and sovereignty and, more recently, gaming. In August and September, these were followed by a series of initiatives aimed specifically at governing AI:
- AI ethics principles from the National Governance Committee for New Generation AI.
- Multi-ministry guidance on strengthening governance of “internet information services algorithms,” with a three-year implementation horizon.
- Draft regulation to govern algorithmic recommendation engines.
Global Convergence or Mishmash of National Standards?
These initiatives will have direct implications for firms that provide AI or AI-enabled products and services inside China. However, the broader and perhaps more significant question is whether AI-related regulatory frameworks and standards are likely to emerge as a new battleground between China and the West. Are we heading toward a balkanized set of regulatory regimes from China, the U.S., the E.U. and other significant geographies?
At first glance, a degree of fragmentation is inevitable. Agreement on a single global framework for AI regulation is unlikely, despite well-meaning efforts from international bodies. However, the existence of different regulatory frameworks does not necessarily preclude a significant level of alignment among them. For example, the six pillars in China’s ethical guidelines for new generation AI — enhancing the well-being of humankind, promoting fairness and justice, protecting privacy and security, ensuring controllability and trustworthiness, strengthening accountability and improving ethical literacy — seem well aligned with those proposed in the West.
Similarly, a comparison between the draft regulations for algorithmic recommendation engines in China and the equivalent Digital Services Act in the E.U. highlights several common requirements, in addition to some differences.
U.S. Regulators Can Pick and Choose From Overseas Approaches
Given that U.S. regulators are still firming up their approach to AI regulation, they have an opportunity to adopt — and adapt — several aspects of the E.U. and Chinese approaches, while adding others more suited to the U.S. context. In many areas, the E.U.’s draft AI, Digital Services and Digital Markets acts might provide a natural starting point for the U.S. For example, many of the E.U. requirements around transparency, fairness and openness to independent third-party oversight are likely to appeal to U.S. regulators as well.
Of the requirements proposed by China’s regulators, some are unlikely to ever make it to Western statute books. Proposals such as those seeking to ensure social order and public morality, or to spread “mainstream values” and positive energy, are unlikely to translate well to the U.S. context. However, other requirements in the Chinese proposals could well appeal to U.S. regulators. For example, the requirements in the Chinese draft rules to ensure that algorithms do not induce excessive spending have interesting parallels in the U.S. Securities and Exchange Commission’s consultation on the effect of digital engagement practices on retail online traders. Similarly, many U.S. states appear to share the Chinese regulators’ concerns about the role of AI in scheduling tasks for gig workers.
For Global Firms, Common Foundational System for AI Quality Will Be Critical
What are the implications for international firms as they seek to prepare for upcoming AI regulatory requirements around the world? Early indications are that building a foundational AI quality management system should enable them to meet country-specific requirements with light-touch customization.
Key components of such a foundational system include:
- Adequate risk assessment and mitigation systems.
- High levels of robustness, security and accuracy.
- Use of high-quality datasets to minimize risks and discriminatory outcomes.
- Logging of activity to ensure traceability of results (illustrated in the sketch below).
- Detailed documentation providing all information necessary on the system and its purpose for authorities to assess compliance.
- Provision of clear and adequate information to users.
- Appropriate human oversight measures.
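To make the traceability component concrete, the sketch below shows one way a team might record each model decision for later audit. It is a minimal illustration in Python under assumed conventions — the function, file and field names are hypothetical — not a prescribed implementation, and it would need to be adapted to a firm’s own stack and to the record-keeping rules of each jurisdiction.

```python
import json
import logging
from datetime import datetime, timezone
from uuid import uuid4

# Hypothetical audit trail: append one JSON record per prediction so that any
# result can later be traced back to the model version and inputs that produced it.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO, format="%(message)s")


def log_prediction(model_name: str, model_version: str, features: dict, prediction) -> str:
    """Record a single model decision with enough context for later review."""
    record = {
        "record_id": str(uuid4()),                            # unique ID for traceability
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when the decision was made
        "model": model_name,
        "model_version": model_version,                       # ties the output to a specific model build
        "features": features,                                 # inputs used for the decision
        "prediction": prediction,                             # output passed to the user or downstream system
    }
    logging.info(json.dumps(record))
    return record["record_id"]


# Example usage (hypothetical values):
record_id = log_prediction(
    model_name="credit_scoring",
    model_version="2021.09.1",
    features={"income": 52000, "tenure_months": 18},
    prediction="approve",
)
```

A simple, append-only record of this kind is the sort of building block that can be reused across jurisdictions, with country-specific fields or retention periods layered on top as local rules require.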