China issued new guiding principles on Monday for artificial intelligence research and applications. Experts said they will serve as an instructive framework for scientists and lawmakers to promote the "safe, controllable and responsible use" of AI for the benefit of mankind.
The document was published by the National Governance Committee for New Generation Artificial Intelligence. The committee consists of AI and public policy experts from different universities and research institutions who examine the effect of AI on laws, ethics and society.
The eight general principles in the document say scientists developing AI and its subsequent applications should respect and uphold human values and ethics and prevent their work from being misused or abused by malicious actors.
In addition, AI research should be conducted in a fair, inclusive and open manner that protects the interests of all parties involved, from developers to consumers. Also emphasized are privacy protection, international cooperation, responsible use of AI and creating timely regulations to keep up with AI's rapid development.
"AI technology is developing very fast and is changing everything in society, including economic structures, governance, national security and even inter-national relations," said Xue Lan, dean of Schwarzman College at Tsinghua University and chairman of the committee.
As a result, Xue said, AI technology has also raised many new and complex issues, including data privacy, machine ethics, AI safety and risks, and misuse of the technology, for example to spread misinformation through "deepfake videos", AI-manipulated footage that has become increasingly difficult for ordinary viewers to recognize.
Last week, the United States Congress held its first hearing on "deepfake media" and its role in degrading trust in government institutions and news outlets. Legislators warned such technology, if unregulated, could have a disastrous effect on elections.
Zeng Yi, a researcher at the Institute of Automation of the Chinese Academy of Sciences, said concern about "deepfakes" and AI's impact on society in general is shared by many countries.
"Therefore, the world needs a global collaborative mechanism to govern AI issues," he said, adding that some 40 nations and international organizations have published guidelines on the technology.
"It is crucial for China to be a part of the conversation and provide its own knowledge and experience, so everybody can learn best practices from each other and improve," he said.
However, Zeng also pointed out that many AI scientists and engineers are not trained to evaluate the long-term socioeconomic impacts of their creations. "More education for developers and the general public about the impact of AI is the key to ensuring the principles we issued today are carried into future practice."
Li Renhan, a member of the National Governance Committee for New Generation Artificial Intelligence, said China's rapid AI progress in recent years is mainly attributable to four factors: large data resources, wide application scenarios, high AI-related research output and strong government support.
Li said AI experts should communicate and interact with legislators and entrepreneurs to create timely rules that mitigate the negative impact of AI while maximizing its benefits for development.
"AI is not as uncontrollable or mystical as some people think," he said. "Our regulatory and supervision mechanisms should steer it in the right direction and leave room for exploration and growth."