THE original aim of artificial intelligence (AI) was to mimic human cognitive functions and provide insights into real-world problems. AI has gained global recognition through creative works in literature and cinema. Its applications span various sectors including military, space exploration and healthcare. In healthcare, AI aids in diagnosis, treatment suggestions and financial projections for structural health management. Cybersecurity, rooted in cybernetics, aims to safeguard networks, devices and data against damage or unauthorized access. AI, particularly through machine learning, enhances cybersecurity by automating risk identification and mitigation. Cybersecurity frameworks like NIST and ISO offer guidelines for protecting diverse domains, reflecting concerns from infrastructure to human security.
China, the US and the EU released an unprecedented joint communiqué in November 2023, pledging to work together globally to address the problems posed by cutting-edge artificial intelligence (AI) technologies, especially "frontier" AI such as generative models like ChatGPT. The communiqué raised concerns about the possible use of AI for misinformation and the significant threats it poses to biotechnology and cybersecurity. Officials from the US and China are engaging in further bilateral discussions to explore potential collaboration on risk management and regulation of AI. Notably, recent regulatory initiatives by these key actors show considerable convergence, such as China's rules, the EU's AI Act and US President Joe Biden's executive order on AI. The common objective of these regimes is to stop AI exploitation while encouraging innovation.
Ian Bremmer, Eric Schmidt and Mustafa Suleyman are among those who have proposed closer international management of AI, including an international panel, modelled on the UN's Intergovernmental Panel on Climate Change, that would advise governments on AI capabilities and emerging trends.
One major area of conflict related to AI is the ongoing dispute between China and the United States over the global semiconductor industry. In October 2022, the US Commerce Department released its first comprehensive licensing regime for the export of advanced chips and chip-making technology, the hardware needed to run the cutting-edge AI models built by OpenAI, Anthropic and other companies on the technological frontier. China responded in August 2023 by imposing export restrictions on the rare metals gallium and germanium, both of which are required for the production of semiconductors. Because international trade law under the World Trade Organization does not sufficiently restrain states from implementing export controls, tit-for-tat rivalry over chips is feasible. There is minimal chance of new formal rules being legitimately enforced by a reputable international organization, because the administration of former US President Donald Trump blocked appointments to the WTO's Appellate Body, leaving it unable to function. The results are reduced trade and heightened geopolitical tensions.
Technical standards, which have long served as the foundation for the use of any significant technology, represent another area of contention. China has been pushing its preferred standards in the technical committees of international standard-setting bodies, where it has assumed ever greater leadership responsibilities. As of 2019, China had standardization agreements in place with 39 nations and territories.
Geopolitical strife is reshaping global AI regulations and deepening disagreements about the intangible resources required for the technology. AI tools need large data repositories as well as highly specialized, smaller data pools. Businesses and nations will vie for access to various types of data, and there is likely to be more international conflict over data flows. The emerging legal framework surrounding AI will thwart collective solutions of broad scope. Driven by its commitment to open markets and national security, the United States has pushed a model of unrestricted international data transfers. At the same time, European legislation has been more circumspect when it comes to data protection. China and India have passed national laws requiring "data localization," imposing more stringent controls on cross-border data transfers.
The question of whether and when states might require the disclosure of the algorithms underlying AI instruments is beginning to spark rivalry on a global scale. Under the EU's planned AI Act, big businesses must give government authorities access to the inner workings of some models to ensure people are not harmed by them. The American approach is more convoluted and less cohesive, with Biden's executive order requiring disclosures about "dual-use foundation models" and trade agreements forbidding disclosure of "proprietary source code and algorithms." As the significance of these technical design decisions gains recognition, states are likely to attempt to compel companies to reveal them while simultaneously forbidding companies from disclosing the same information to other governments.
After initially agreeing that AI could be harmful, the great powers are now fighting over the technology's foundations, causing the legal landscape to fragment. Beyond undermining any one nation's attempts to control AI, this disjointed legal order can enable autocracies to shape public opinion and exploit information flows. It could even spark global conflict. Much would be lost if a global effort to govern AI is never truly realized.
The writer is Associate Research Officer at Centre for International Strategic Studies AJK.