AI governance and standardization in China
2018 White Paper
In 2018, the Chinese government issued an AI white paper that highlights the key factors to consider in standards-setting for AI. Before explaining its framework for standards-setting, the white paper introduces the concept of AI, the current state of the field and its trends, and the safety, ethical, and privacy issues that AI raises. After summarizing existing AI standards and regulations, both domestic and international, it lays out the structure of its AI standardization system.
The structure consists of six parts: foundation, platforms/support, key technology, products and services, applications, and security/ethics. These can be briefly elaborated as follows.
-
Foundational standards mainly target standardizing the fundamentals of AI, such as terminology, reference architecture, data, and testing and assessment.
-
The main focuses of the platforms/support component of these standards are big data, cloud computing, intelligent perception and connection, edge computing, smart chips, and AI platforms.
-
Once the data is prepared according to the foundational standards and the necessary supporting technologies are in place as per the platforms/support standards, AI technologies should be applied in accordance with the key technology standards to perform human-like learning from data.
-
The intelligent systems built by following the above standards are then embedded into products and used in services; this, however, should be done in compliance with the products and services standards.
-
When these products and services are used in different application domains, the applications standards should be followed. The framework names smart manufacturing, smart logistics, smart cities, smart homes, smart transportation, smart finance, and smart healthcare as example application areas.
-
Throughout the entire process, from data gathering to the final deployment of AI products and services in different applications, developers and users should adhere to ethical and moral norms and uphold societal security and human rights, as per the security/ethics standards.
Eight principles for AI governance and “responsible AI”
In 2019, China's New Generation AI Governance Expert Committee published eight principles for AI governance and “responsible AI”. The principles are intended to foster the healthy development of a new generation of AI; if followed, they help ensure that AI is safe, secure, and reliable. The eight principles are as follows:
-
Harmony and friendliness: The primary objective of AI should be to enhance the common well-being of humanity. In other words, AI should serve the progress of human civilization while conforming to ethical and moral norms and upholding societal security and human rights.
-
Fairness and justice: Bias and discrimination should be avoided in every phase, from data gathering to final product development. Instead, fairness, justice, and equality of opportunity should be promoted, and the rights and interests of stakeholders should be protected.
-
Inclusivity and sharing: AI should be for everyone. Coordinated, shared, and inclusive development of AI should be encouraged so that people of different backgrounds and abilities can benefit from it. Resources should be openly accessible to avoid data and platform monopolies.
-
Respect privacy: Personal privacy should be strictly protected in the development of AI. If personal data is used, the necessary boundaries and standards should be enforced to protect it. Contributing individuals should have the right to know and to decide which of their data is used and where it is used.
-
Secure/safe and controllable: In order to attain trustworthiness, AI systems should emphasize transparency, explainability, reliability, and controllability. The robustness, safety, and security of AI systems must be given special attention, and an external assessment of these components is needed.
-
Shared responsibility: In general, AI developers, users, beneficiaries, and all other stakeholders share the responsibility of preventing the use of AI in violation of laws, regulations, ethics, morals, standards, and norms. Where needed, the specific responsibilities of these parties should also be defined.
-
Open collaboration: To promote the healthy development of AI, knowledge and ideas should be shared and exchanged across disciplines, domains, regions, and borders. At the international level, efforts should be made to build consensus on an international AI governance framework, standards, and norms.
-
Agile governance: To promote the innovative, orderly, and natural development of AI, management mechanisms, governance systems, and recommended practices should be continuously updated. Future risks associated with advances in AI should be anticipated, and the necessary adjustments made accordingly.
2021 White Paper
In 2021, China published a white paper on trusted artificial intelligence. It highlights the need for regulations and laws to guide AI development and describes the use of third parties for evaluation and verification. It suggests that AI enterprises and insurance institutions explore insurance mechanisms for AI product applications, conduct quantitative assessments of risk incidents, provide risk compensation, and help improve the trusted-AI ecosystem.