In January 2023, China will introduce first-of-its-kind regulation of “deepfakes,” ramping up its control over internet content.
Deepfakes are synthetically generated or altered images and videos made using a form of artificial intelligence. The technology can alter existing footage, for example by superimposing a politician’s face onto a video, or even generate fake speech.
The result is fabricated media that appears to be real but isn’t.
Beijing announced its rules governing “deep synthesis technologies” earlier this year, and finalized them in December. They will come into effect on Jan. 10.
Here are some of the key provisions:
- Users must give consent if their image is to be used in any deep synthesis technology.
- Deep synthesis services cannot use the technology to disseminate fake news.
- Deepfake services need to authenticate the real identity of users.
- Synthetic content must have a notification of some kind to inform users that the image or video has been altered with technology.
- Content that goes against existing laws is prohibited, as is content that endangers national security and interests, damages the national image or disrupts the economy.
The powerful Cyberspace Administration of China is the regulator behind these rules.
Since the end of 2020, China has sought to rein in the power of the country’s technology giants, introducing sweeping regulation in areas ranging from antitrust to data protection. But it has also moved to regulate emerging technologies, going further than any other country in its tech rules.
Earlier this year, China introduced a rule governing how technology firms can use recommendation algorithms, in another first-of-its-kind law.
Analysts say the law pursues two goals: tightening online censorship and getting ahead of new technologies with regulation.
“Chinese authorities are clearly eager to crack down on the ability of anti-regime elements to use deepfakes of senior leaders, including Xi Jinping, to spread anti-regime statements,” Paul Triolo, the technology policy lead at consulting firm Albright Stonebridge, told CNBC.
“But the rules also illustrate that Chinese authorities are attempting to tackle tough online content issues in ways few other countries are doing, seeking to get ahead of the curve as new technologies such as AI-generated content start to proliferate online.”
Triolo added that the AI regulations that Beijing has introduced in recent years are “designed to keep content regulation and censorship efforts one step ahead of emerging technologies, ensuring that Beijing can continue to anticipate the emergence of technologies that could be used to circumvent the overall control system.”
Deep synthesis technology isn’t all bad. It has positive applications in areas such as education and health care.
But China is trying to curb its use in producing and spreading fake information.
Kendra Schaefer, Beijing-based partner at Trivium China consultancy, pointed CNBC toward her note published in February when the draft rules were announced, in which she discussed the implications of the landmark regulation.
“The interesting bit is that China is taking aim at one of the critical threats to our society in the modern age: the erosion of trust in what we see and hear, and the increasing difficulty of separating truth from lies,” the note said.
Through the introduction of such regulation, China’s various regulatory bodies have been building experience in enforcing tech rules. Some parts of the deepfake regulation remain unclear, such as how a service would prove it has a person’s consent to use their image. But on the whole, Trivium said in its note, China’s existing regulatory system will help it enforce the rules.
“China is able to institute these rules because it already has systems in place to control the transmission of content in online spaces, and regulatory bodies in place that enforce these rules,” the note said.