China striving to be first source of artificial general intelligence, says think tank
The work is hard to spot – which is bad for science, but good for paranoia
Chinese researchers published 850 papers pertaining to artificial general intelligence (AGI) between 2018 and 2022, indicating Beijing's efforts to create a thinking machine are real and active – possibly including research on brain/computer interfaces.
So says think tank the Center for Security and Emerging Technology (CSET) in a recently released report that claims Beijing's effort "challenges emerging global norms, underscoring the need for a serious open source monitoring program to serve as a foundation for outreach and mitigation."
The report is based on examination of scientific papers touching on a dozen relevant technologies – an effort that found 500 items of research concerning "routine AI applications" plus "a significant body of research … on AGI precursor technologies, indicating that China's claims to be working toward artificial general intelligence are genuine and must be taken seriously."
Those claims were first articulated in 2017, when China published a document titled the "New Generation Artificial Intelligence Development Plan". One of the goals of that plan is "to build China's first-mover advantage in the development of AI."
The CSET report suggests that plan is advancing nicely, identifies the universities that have made the biggest contributions to the AGI effort, and notes that five of the most prolific sources of AGI research are institutions located in the city of Beijing.
But that may not mean Beijing-based boffins are doing all the heavy lifting.
"While acknowledging the Beijing-area concentration, given AGI's multi-disciplinary basis and the multiple paths through which it may be realized, the possibility of breakthroughs elsewhere in China cannot be ruled out," the report states. "Limited data suggest that Beijing may be serving as China's AGI research hub for testing and deployment done elsewhere in China, in Wuhan especially."
The report asserts "China appears to be exploring multiple paths to AGI, including a potential approach not covered in this study, namely, cognitive sharing through BCIs."
BCIs are brain/computer interfaces – a meeting of wetware and hardware.
The report notes that some of China's AGI research involves researchers from overseas, but "data show that the bulk of it is situated in Chinese institutions." The authors appear to worry that it's therefore hard for the rest of the world to understand China's achievements – or lack thereof – and note that "hiding scientific research, for example, by restricting access to academic journals, may lead to false assumptions that devolve into a vicious cycle of measures and countermeasures."
To demonstrate the problems associated with that situation, the authors refer to the "missile gap" – the mistaken belief, in the late 1950s and early 1960s, that the Soviet Union's missile tech considerably exceeded that of the United States, leading to poorly informed policy responses that did little to ease the tensions of the time.
"Pursuing this train of thought further, it is highly likely that one's inability to gauge the status and intent of a potential rival through open sources will lead to clandestine efforts to procure this same information – and more – driving science further underground to no-one's long-term benefit," the authors assert.
The report concludes with a call for US policymakers contemplating AGI safeguards to recognize that rivals understand the strategic significance of the tech, and of developing it first.
"Although an unrestrained race to the top is risky, unilateral restrictions on AGI development, trust-based agreements that cannot be verified, and one-sided adherence to ELSI/ELSA (ethical, legal and social implications/aspects) protocols are risky as well." ®