
Biden Administration Issues 2023 Executive Order on Artificial Intelligence

08/05/2024

Summary of Key Points

  • Private companies should be aware of a 2023 Executive Order (“EO”) on artificial intelligence with wide-reaching implications, signed to promote the “safe, secure, and trustworthy development and use of artificial intelligence.”
  • The EO directs the National Science Foundation (“NSF”) to establish a National AI Research Resource (NAIRR) pilot program to coordinate government and private-sector resources, to fund at least one NSF Regional Innovation Engine that prioritizes work on AI technology, and to subsequently establish four new National AI Research Institutes.
  • The Small Business Administration has been directed to allocate funding for AI initiatives and to ensure that AI-related small businesses are eligible for its grant programs.
  • The EO requires the US Patent and Trademark Office (PTO) to provide guidance to patent examiners on issues of “inventorship and the use of AI.”
  • The EO also directs the secretaries of State and Homeland Security to explore the expansion of existing programs for highly skilled foreign talent.

Executive Order on AI

On October 30, 2023, a sweeping Executive Order (“EO”) on artificial intelligence was signed to promote the “safe, secure, and trustworthy development and use of artificial intelligence.” The EO contemplates accountability in AI development and deployment across organizations while promoting private-sector innovation through directives to create agency-level programs and frameworks that will impact nearly all facets of the emerging industry. Among other things, the EO directs the development of standards, tools, and tests to advance safety and security, promote innovation, and advance American leadership around the world. Previously, the Biden-Harris administration had secured the support and voluntary commitments of 15 leading companies in the development of safe, secure, and trustworthy AI infrastructure and applications.1 The administration has also released a “Blueprint for an AI Bill of Rights,” which incorporates many of the principles identified in the EO and is a must-read for companies in the ecosystem seeking best practices on AI compliance, particularly its annexed handbook, From Principles to Practice, which addresses integrating those principles into the technological design process.2

A Brief History of AI Regulations

The EO represents the latest directive in a surprisingly established lineage of federal legislation tracing back to the 115th Congress, which codified the first statutory definition of AI in the John S. McCain National Defense Authorization Act for Fiscal Year 2019. Thereafter, the National AI Initiative Act of 2020, which became law on January 1, 2021, sought to expand AI research and development and to further coordinate activities between the private and defense sectors.

The Act also established the National Artificial Intelligence Initiative Office within the White House Office of Science and Technology Policy. The 117th Congress, which concluded on January 3, 2023, saw the introduction of the Advancing American AI Act, the American Data Privacy and Protection Act, and the AI Training Act, while the earlier FAA Reauthorization Act of 2018 included provisions relating to the periodic review of the state of AI in aviation. In 2021, the American Bar Association published its inaugural Chapter on Artificial Intelligence.

Who Will the EO Impact?

In defining AI, the EO adopts broad terminology that could sweep in technology companies that employ only nominal aspects of AI in their operational pipelines and, in some instances, broadly expands rulemaking authority to encompass AI-adjacent and STEM-related sectors in seeking to regulate and support nascent emerging technologies. Notably, the EO uses the definition of “artificial intelligence,” or “AI,” found at 15 U.S.C. 9401(3): “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”

The express text of this definition could arguably reach any machine-based system that makes predictions, recommendations, or decisions, bringing the companies that operate such systems within the scope of the EO. The EO is guided by eight principles and priorities. Of particular relevance to businesses are Principles 1, 4, 5, and 6:

Principle 1: Requires companies to develop internal policies and frameworks to test, understand, and mitigate risks to clients prior to deployment.

Principle 4: Requires policies to be consistent with the advancement of equity and civil rights.

Principle 5: Requires companies to assess and contemplate protections for clients who use the technology in their daily lives.

Principle 6: Requires that companies create robust protections surrounding the collection, use, and retention of data in a secure and lawful manner that promotes privacy.

Implications for Startups and Private Enterprise Businesses

In 2022, for the first time, more AI startups raised first-time capital in the United States than in the next seven countries combined. That spirit of innovation is reflected in the EO's drafting, which places deliberate emphasis on catalyzing substantial progress in AI innovation. The EO anticipates the creation of tools that will allow companies to access key AI resources and data, promoting a fair and open ecosystem in which founders and employees will have access to technical assistance. Additionally, the EO directs agency-level support, including through the Federal Trade Commission's exercise of its existing authorities, to help small businesses commercialize AI breakthroughs.

Beyond the obvious need for companies to carefully track, identify, and implement an array of regulatory frameworks, the EO has broad implications for the technology applications themselves, which will require software developers to proactively address critical considerations such as intellectual property provenance. For example, the EO requires the US Patent and Trademark Office (PTO) to provide guidance to patent examiners on issues of “inventorship and the use of AI.”

This broad directive should alert in-house counsel to potential patent-eligibility issues for the intangible assets of a company that employs AI in its processing architecture. Similarly, the EO directs the US Copyright Office to study issues related to AI technology, including the treatment of copyrighted works in AI training and the scope of protection for works produced using AI. The EO's impact on both utility and design patent frameworks and on copyright protections will likely challenge underlying assumptions held by existing and emerging companies and will require significant analysis, particularly because the companies most likely to be affected rely heavily on intangible assets in their valuations and intellectual property strategies.

While the EO's effects on intellectual property could significantly impact the product offerings and capitalization strategies of private companies, the EO also directs the secretaries of State and Homeland Security to explore the expansion of existing programs for highly skilled foreign talent and to engage in rulemaking to streamline the processes by which noncitizens can meet the talent-development needs of U.S. companies. This includes directing the Department of Labor to identify AI and other STEM-related occupations for which there are insufficient US workers.

This directive is especially notable because it is expansive in scope, could apply to other emerging technology sectors and ancillary AI-adjacent fields, and marks a pronounced shift in the Biden administration's approach to addressing technical specialist gaps in identified STEM-related fields. Additionally, companies should be cognizant of the employer principles and best practices being developed by the Department of Labor through its policymaking activities to address potential labor-market displacement caused by AI technologies.

Additionally, the EO directs the National Institute of Standards and Technology (NIST) to develop a foundational framework of guidelines and best practices for “developing and deploying safe, secure and trustworthy AI systems.” Private enterprises will need to study this guidance closely to ensure compliance as individual agencies interpret and apply it within their own domains.

Similarly, the Small Business Administration has been directed to allocate funding for AI initiatives and to ensure that AI-related small businesses are eligible for its grant programs. This directive represents a sea change in AI adoption and an acknowledgment of AI's broad potential as a disruptive technology.

Finally, and perhaps of most importance to businesses, the EO directs the National Science Foundation (“NSF”) to establish a National AI Research Resource (NAIRR) pilot program to coordinate government and private-sector resources, to fund at least one NSF Regional Innovation Engine that prioritizes work on AI technology, and to subsequently establish four new National AI Research Institutes. These research centers will work with the private sector to make distributed computational, data, and modeling resources available to the research community and to tackle major societal and global challenges, including deploying AI in the areas of climate change, electric grid infrastructure, public health, and small business innovation.

What is Next?

The EO orders the development of a multitude of reports together with the creation of institutes, policies, and regulatory frameworks, including a National Security Memorandum to be prepared by the National Security Council that will further regulate AI in national security contexts. The EO also directs the Department of Commerce, acting through NIST, to develop two sets of guidelines. Of particular relevance, NIST will produce standards and procedures for AI developers to conduct structured testing of AI systems to identify potential flaws and vulnerabilities.

Companies should anticipate opportunities to provide input in response to future solicitations from the Secretary of Commerce, which will be integrated into a report to the President on the potential benefits, risks, and implications of AI, together with policy and regulatory recommendations. Of particular interest to companies operating AI systems in fintech, the Treasury Department is required to issue a report on best practices for financial institutions to manage AI-specific cybersecurity risks, and the EO outlines a process to mandate the adoption of the NIST AI Risk Management Framework by critical infrastructure owners and operators through appropriate regulatory actions.

Looking ahead, the EO lays the groundwork for future national AI-related legislation in areas such as consumer data protection, cybersecurity, innovation, and more. Additionally, private companies would be well advised to quickly develop and implement a regulatory-alert framework so they can identify and respond to feedback opportunities that will shape the agency-level reporting process, and to engage with critical forums of thought leadership, including the White House AI Council, the Artificial Intelligence Safety and Security Advisory Board at the Department of Homeland Security, the National AI Research Institutes, and others, as they navigate the emerging national regulatory frameworks created by the EO.

Conclusions

The EO represents a major shift in federal policy on AI regulation. Going forward, companies should anticipate increased regulatory burdens as well as opportunities for innovation and collaboration through various agency-level initiatives and funding programs. Companies will also need to proactively develop AI products that comply with the regulatory frameworks emerging from the principles set forth in the EO. Finally, companies will need to be aware of rules promulgated by regulatory and non-regulatory agencies alike, which may impact the core functions, operations, product offerings, and revenue structures of emerging technology companies offering AI and AI-adjacent products and services.

Companies are advised to seek experienced counsel to navigate this dynamic regulatory environment, ensure compliance, and develop a comprehensive strategic approach that secures stakeholder representation in rulemaking activities and market participation. Law firms, like WBNY, can provide regulatory compliance and advisory services, including representation in rulemaking, strategic planning and risk management, and employee educational resources and training.

As the regulatory framework for AI evolves, small businesses will increasingly need to rely on legal experts to navigate these changes effectively. Law firms with expertise in AI and technology law will be crucial partners in ensuring that these companies not only comply with new regulations but also thrive in a rapidly changing technological landscape.
 


1 Fact Sheet, White House Briefing Room, Statements and Releases, July 21, 2023: “FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI,” The White House.

2 Blueprint for an AI Bill of Rights.

* * *

If you have any questions regarding matters involving emerging technologies law, or corporate practice law in general, please contact James A. Wolff, at jwolff@wbny.com, 212-984-7795, or any of the undersigned, or your regular Warshaw Burstein attorney.