On October 30, 2023, President Biden signed an executive order that charts a course for the use of AI in the United States, aiming to both harness its potential and mitigate the associated risks. Before this directive, President Trump had signed two executive orders, in 2019 and 2020, marking the first official forays into AI governance by an American president. Trump's focus centered primarily on how the government uses AI and on strategies to maintain U.S. leadership in the field. In contrast, since taking office, President Biden's approach to AI has emphasized ethical and responsible implementation, with a primary focus on ensuring that AI applications adhere to ethical standards and societal responsibility, particularly in their impact on American citizens.
Biden's signing of this directive followed a series of meetings between the President and key US tech executives, including Musk, Zuckerberg, and Pichai. During these discussions, the participants reportedly acknowledged the government's role in overseeing artificial intelligence, though they held differing perspectives on the form that oversight should take.
Shortly after these meetings, the Biden Administration issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This move signals a significant shift from the trajectory the US had previously been following, indicating more proactive governmental involvement in the development of AI.
Key Takeaways from the Executive Order
The "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" marks a pivotal move by the Biden administration, particularly amid the burgeoning growth of generative AI and the amplified discussions surrounding the potential applications of AI. This comprehensive order encompasses a wide range of recommendations addressing various crucial issues from different domain, which is indeed a reflection of the wide potential uses and therefore impacts of the AI technology. It initiates the establishment of new standards for AI safety and security, aiming to safeguard Americans from potential risks posed by AI systems. It calls upon Congress to pass bipartisan data privacy legislation, safeguarding the privacy of all Americans. Simultaneously, the executive order prioritizes the advancement of equity and civil rights. It acknowledges that irresponsible AI usage could exacerbate discrimination, bias, and abuse in critical sectors such as justice, healthcare, and housing. Another critical area of focus within the executive order is the dangers associated with increased workplace surveillance, bias, and potential job displacement due to AI implementation.
While predominantly addressing domestic concerns, the executive order also extends to global issues. It emphasizes the need to bolster American leadership internationally by expanding engagement across bilateral, multilateral, and multistakeholder platforms to collaborate on AI, and it stresses the necessity of establishing AI standards in coordination with international partners. The order also emphasizes promoting innovation and competition: in addition to recommending measures to catalyze AI research and foster a competitive ecosystem, it specifically highlights expanding opportunities for highly skilled immigrants and nonimmigrants in critical areas to study, remain, and work in the United States.
Discussions on the Directive
Since the announcement of the Executive Order, diverse technical perspectives have emerged. While some have praised the Biden Administration's initiative, others have criticized it, arguing that targeting the core models that underpin artificial intelligence systems and serve as the foundation for AI applications could impede the overall progress of the technology. In this view, placing excessive regulatory burdens on foundational models risks stifling innovation and slowing advancement.
Simultaneously, various experts have emphasized that an Executive Order does not hold the weight of law. Therefore, its effectiveness will be contingent upon how Congress proceeds hereafter. The impact and implications of this directive will become clearer based on the legislative actions or responses following its announcement. The underlying argument for this position is rooted in the historical trend of limited congressional action in regulating new technology, notably evident in the case of social media. This has facilitated the largely unchecked expansion of the tech industry, creating a regulatory void that grants tech companies substantial autonomy. Consequently, this has raised concerns about critical issues like data privacy and the societal impact of unbridled technological progress.
Analyzing the Executive Order within the Realm of Global Affairs
While technical discussions are valuable for better understanding the executive order and its impacts, it is also important to analyze the issue from the perspective of global affairs. The directive does not address only domestic matters; it also touches on global ones, such as cooperation between allies and American global leadership in AI.
First, it is crucial to highlight the unavoidable necessity for the Biden Administration to address AI. Unlike past administrations, which had the luxury of adopting a "wait and see" approach, the current administration lacks that privilege. Despite previous attempts, the US has struggled to enact crucial AI-related regulations into law and has notably lagged behind several other nations and entities in this regard. The EU, for instance, is on the verge of finalizing the comprehensive EU AI Act, which imposes strict regulations on the AI applications considered riskiest. Similarly, China, a significant AI competitor to the US, has implemented various rules concerning AI development and use. Additionally, the UK is actively striving to establish itself as a leading AI safety hub. Given this landscape, it is evident that the US had reached a juncture requiring action in this domain. The global progress in AI regulation and governance, especially by key competitors and major entities, underscores the pressing need for the US to step forward.
Second, what we can infer, even if it is not explicitly stated, is that there will be an unspoken contest over setting the standards. Whoever sets the standards, and manages to impose and spread them across a broader area, will be one step ahead in dominating and leading AI development and application, and therefore the race itself. What the US aims to do is create a global AI governance framework under its own leadership, which would help it maintain its leading position in this domain. The executive order therefore places great importance on the US's partnership with its allies, a policy Biden also pursues in his broader foreign policy.
An essential aspect worth noting in the Executive Order is its invocation of the Defense Production Act. This mandates that developers of high-powered AI systems with dual-use capabilities for both civilian and military purposes disclose critical details and safety test results to the US government if those systems pose substantial risks to national security, economic security, or public health.
It is important to mention here that the Defense Production Act is a law that grants the President broad authority to shape the domestic industrial base in times of crisis or national emergency. Passed in 1950 during the Korean War, it was designed to ensure the availability of critical materials, equipment, and services needed for national defense, and it serves as a tool to bolster the nation's industrial and technological capabilities during wars, national emergencies, or situations where national security interests are at stake.
Placing dual-use technologies like AI under such a law links AI directly to national security, elevating the issue from a domestic concern to a global-scale matter. This move should be considered within the context of an ongoing international AI arms competition, with the US and China as key players. It is also notable that there is a parallel between the new American approach and the Chinese strategy, in which central leadership steers major technology directions with the state's national security interests in mind.
Lessons Learned for Türkiye
Türkiye has witnessed significant technological advancements in recent years, striving to establish itself among the leading global technological contributors. With this vision, Türkiye has been formulating and pursuing various policies, and in the realm of AI it took a significant stride by announcing its national strategy in 2021. However, amid a global movement in which several countries and organizations are advancing AI governance and establishing standards, Türkiye needs to promptly introduce its own guidelines.
Yet, in doing so, Türkiye faces the crucial task of striking a delicate balance between fostering innovation and ensuring responsible AI use. This balance is pivotal to positioning Türkiye among the most advanced nations while safeguarding its citizens and national security interests. It is therefore essential for Türkiye to swiftly announce AI guidelines built on a strategic approach that both nurtures innovation and promotes responsible deployment.