Describing artificial intelligence (AI) as “the most consequential technology of our time,” President Biden signed a groundbreaking executive order in October. The order carries significant implications for federal agencies, private companies, and workers alike. Titled the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” it establishes a comprehensive framework to address AI’s risks and potential.
In his announcement, Biden emphasized the order as the "most significant action any government anywhere in the world has ever taken on AI safety, security, and trust." Forbes echoed the sentiment, describing it as “unparalleled in its ambition and scope, at least in recent memory.”
Among its directives, the Biden order requires federal agencies to evaluate their use of AI tools and mitigate the risks the technology poses. The order also places new safety obligations on AI developers, especially those producing advanced AI systems.
Companies must now engage in "red teaming," an intensive safety and security testing process designed to simulate cyberattacks and notify the federal government of the results before releasing new products. This mandate invokes the Defense Production Act of 1950, demonstrating the government's commitment to treating AI safety with a level of urgency comparable to managing other national crises, such as the COVID-19 pandemic and last year’s baby formula shortage.
Biden’s EO also directs federal offices to adopt formal risk management methods and resources when purchasing AI systems. Agencies must now continuously monitor and evaluate deployed AI and its impact on citizens’ rights and safety.
Federal agencies have also been tasked with establishing official standards for watermarking AI-generated content by private companies. These measures reflect a concerted effort by the federal government to ensure transparency and accountability in the rapidly evolving AI landscape.
With the global tech landscape racing to develop advanced AI systems, Biden's order could influence international norms and encourage other nations to adopt similar measures. The best outcome would be a global culture of responsible AI development. This would be a pivot from today’s attitude of churning out new AI applications as quickly as possible and mark an important step forward for the industry.
One of the biggest concerns about AI is its potential to displace human workers. The executive order addresses this in several ways. First, it directs the U.S. Department of Labor (DOL) to evaluate how federal agencies can assist workers through retraining programs and unemployment insurance. Because the directive applies to AI and “other technological advances,” it extends the order’s reach to the broad spectrum of algorithmic decision-making tools employers use.
The order also introduces AI “principles and best practices” for labor standards and job quality, setting a precedent for responsible AI deployment by addressing equity, protected activity, compensation, and health and safety implications. DOL guidance on AI that monitors or augments employee work aligns with the Fair Labor Standards Act (FLSA), signaling a commitment to safeguarding workers’ rights.
In alignment with recent updates to the Office of Federal Contract Compliance Programs’ Scheduling Letter and Itemized Listing, the order now requires federal contractors to document their use of AI, algorithms, and other technology-based selection procedures. This may reflect the federal government’s intention to regulate the integration of AI into various sectors of the American economy.
Ultimately, the Biden Administration’s executive order on AI is a massive step toward more responsible development and use of artificial intelligence. Its multifaceted approach addresses safety, security, and worker support, signaling a commitment to shaping a future where AI serves humanity responsibly and ethically.
In the days to come, it will be interesting to see how other nations react. Many will likely follow suit with similar measures to guide responsible AI practices. However, there is no guarantee the world will unite on a single approach to AI development, given the technology’s massive importance for the future.