The State of AI Regulation
- ninozkarodriguez
- Jan 23, 2024
- 9 min read

AI Regulatory & Legal Developments - 2023 in Review and A Look Into 2024
By Nina Rodriguez
Regulating AI has become a pivotal focus in the United States, with significant developments emerging at the federal level in 2023. While AI tools are not yet everyday commodities, we are at a tipping point: as they explode into the mainstream, they are clearly reshaping our day-to-day lives and how we work. Today, several key sectors utilize AI, including but not limited to:
Healthcare: AI is used in diagnostics, personalized medicine, drug discovery, and patient management systems.
Finance: AI is employed for fraud detection, algorithmic trading, credit scoring, and customer service in the financial sector.
Retail: AI powers recommendation engines, demand forecasting, inventory management, and personalized shopping experiences.
Manufacturing: AI is used for predictive maintenance, quality control, supply chain optimization, and process automation in manufacturing.
Technology: AI is embedded in various tech products and services, including virtual assistants, language translation, and cybersecurity.
Education: AI is utilized for personalized learning, adaptive assessments, and administrative tasks in the education sector.
Energy: AI is used for energy grid optimization, predictive maintenance of equipment, and in the exploration of new energy sources.
Human Resources: AI is applied in talent acquisition, employee engagement, and workforce management.
Entertainment: AI is used in content recommendation, game development, and the creation of personalized entertainment experiences.
Government: AI is applied in areas like public safety, fraud detection, and administrative processes.
Real Estate: AI is used for property valuation, predictive analytics in housing markets, and virtual property tours.
This ever-expanding landscape of AI applications has prompted policymakers to focus on balancing the benefits of this emerging technology against the need to preserve U.S. global innovation leadership while managing AI’s risks to individual consumers, workers, and businesses.
Existing Regulation
In 2023, the U.S. experienced an uptick in initiatives aimed at shaping the ethical, legal, and operational aspects of AI technologies. From concerns surrounding privacy and data security to the promotion of innovation and fair competition, the regulatory landscape reflects a dynamic response to the challenges and opportunities presented by AI. In this context, examining the key points and advancements in AI regulation over the course of the year, through the lenses of our three branches of government, offers valuable insights into the evolving governance framework shaping the deployment and impact of AI in the country.
The Legislative Branch
Over the past year, AI has drawn bipartisan interest and support. For example, in its 118th session this year, Congress introduced over 40 bills designed to regulate the use of AI.[1] In its session, Congress importantly highlighted that AI as a concept has existed since the 1950s. Of most importance to us today is the development and advancement of generative AI (GenAI): specifically, the technological advancements in machine learning models, their ability to generate content, and their now ready availability to the general public. Of particular concern is the implementation of appropriate guardrails around AI usage in health care, education, and national security.
Senate Majority Leader Chuck Schumer has developed what he has coined the “SAFE Innovation Framework,” a policy roadmap designed to ensure the appropriate development and deployment of AI.[2] Senator Schumer emphasized the vast potential of AI to benefit society, acknowledging its breakthroughs in various fields. However, he also highlighted AI’s associated risks, including job displacement, misuse, disinformation, and bias. He expressed the need for the U.S. to lead in AI innovation and set standards, but cautioned against letting adversaries “like the Chinese Communist Party” shape the technology’s rules. To address these challenges, the Senator proposed a policy response with central objectives focused on security, accountability, democratic values, transparency, and innovation:
Security: Safeguard our national security with AI and determine how adversaries use it, and ensure economic security for workers by mitigating and responding to job loss;
Accountability: Support the deployment of responsible systems to address concerns around misinformation and bias, support our creators by addressing copyright concerns, protect intellectual property, and address liability;
Foundations: Require that AI systems align with our democratic values at their core, protect our elections, promote AI’s societal benefits while avoiding the potential harms, and stop the Chinese Government from writing the rules of the road on AI;
Explain: Determine what information the federal government needs from AI developers and deployers to be a better steward of the public good, and what information the public needs to know about an AI system, data, or content; and
Innovation: Support US-led innovation in AI technologies – including innovation in security, transparency and accountability – that focuses on unlocking the immense potential of AI and maintaining U.S. leadership in the technology.
The Senator stressed the importance of bipartisan efforts in the Senate to develop legislation and policies for responsible AI development and deployment, and revealed that committees are currently focused on developing such legislation, with non-committee chairs working to shape the Senate’s policy response. He indicated that the Senate’s policy response to AI would be dealt with “urgency.”
Further, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), leaders of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, introduced their own AI regulation framework in September.[3] Unlike the SAFE Innovation framework, their approach emphasizes transparency and accountability, aiming to address AI harms and safeguard consumer data. The Blumenthal-Hawley framework proposes specific policies alongside broad principles, including the creation of an independent oversight body for licensing AI development companies, the removal of Section 230 immunity for AI-generated content, enhanced national security measures, transparency requirements for AI developers, and increased consumer protection regarding personal data and generative AI involving children.
The Executive Branch
The Biden administration has taken several concrete steps toward regulating AI, both through existing legal authorities and through the creation of new avenues to address responsible AI development and deployment.
The White House has published a “Blueprint for an AI Bill of Rights” that identifies five principles to guide the “use and deployment of automated systems to protect the American public in the age of artificial intelligence”:[4]
Safe and Effective Systems
Algorithmic Discrimination Protections
Data Privacy
Notice and Explanation
Human Alternatives, Consideration, and Fallback
Further, on October 30, 2023, President Biden issued an executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.[5] The Executive Order outlines new standards for AI safety and security, privacy protection, equity and civil rights, consumer and worker advocacy, innovation and competition promotion, global leadership advancement, and responsible government use of AI. It directs actions under various headings, including the sharing of safety test results by developers of powerful AI systems, the establishment of rigorous safety testing standards by the National Institute of Standards and Technology, the prioritization of data protection legislation by Congress, and the prevention of AI-related discrimination and bias. The Executive Order emphasizes transparency and collaboration with international partners, aiming to regulate AI use in the United States comprehensively. A draft policy from the Office of Management and Budget complements the order, focusing on AI governance in federal agencies. The implementation and impact of these measures will be a work in progress, with ongoing developments.
In addition, the Federal Trade Commission has ramped up its focus on AI since 2021, when the FTC identified three existing laws that it deems “important to the developers and users of AI”:[6]
Section 5 of the FTC Act, prohibiting unfair or deceptive practices
Fair Credit Reporting Act
Equal Credit Opportunity Act
In late 2023, the FTC published a new report examining how GenAI is being used by, and is affecting, creative professionals in music, filmmaking, software development, and other fields.[7] The report comes on the heels of a public roundtable at which FTC staff and working creative professionals discussed the benefits and pitfalls of GenAI applications. Creative professionals expressed concerns about the application of AI in their respective fields, such as:
The unauthorized use of past work for AI training;
The lack of disclosure on data usage;
Competition with AI-generated content for job opportunities;
Style mimicry; and
Fake endorsements.
Participants urged AI developers to adopt an opt-in approach rather than relying on opt-outs. While some issues fall outside the FTC's jurisdiction, the report emphasizes targeted enforcement to ensure fair competition and prevent deceptive practices in AI-related markets. The FTC will monitor the generative AI industry and utilize its enforcement tools to safeguard consumers and foster fair competition. The report was unanimously approved by the Commission.
The Judicial Branch
The second half of 2023 brought a number of federal class action lawsuits against the developers of some of the most popular GenAI products, in areas including privacy, intellectual property, and tort.
In June, several plaintiffs anonymously filed suit against OpenAI LP (“OpenAI”) and Microsoft, Inc. (“Microsoft”),[8] alleging that OpenAI stole millions of people’s private information from the internet and used it to train its GenAI tools. The suit claims that OpenAI harvests data from people’s interactions with its products and from applications that have integrated ChatGPT: for example, a user’s music preferences from Spotify, conversations on Slack or Microsoft Teams, and locations on Snapchat. The company then allegedly misappropriates that “stolen” data to train its AI models, in violation of federal and state privacy laws, terms of service agreements, and the Computer Fraud and Abuse Act.
Other cases have alleged violations of copyright law, fundamentally questioning the source of the data used to train GenAI models. At the heart of most of these cases is whether the collection and utilization of publicly available data, potentially subject to copyright protection, constitute infringement.
One such case, Andersen et al. v. Stability AI Ltd., involved plaintiffs representing a potential class of artists suing Stability AI Ltd., whose AI platform generates images according to a user’s prompts. The plaintiffs allege that the company unlawfully collected billions of copyrighted images from online sources in order to train its GenAI, which then uses those images to create new images without attributing the creations to the original artists who unknowingly supplied the training material. The plaintiffs argue that this practice deprived artists of commissions and allowed the defendants to profit from the artists’ copyrighted works. The defendants, in turn, asserted that their models analyze the properties of online images to generate parameters, which are later utilized to help the model generate new and unique images from text prompts. The defendants clarified that their models do not reproduce or copy any portion of the underlying images used for training.
In a telling opinion, the district court judge noted that the images produced by the models were not "substantially similar" to the plaintiffs' art. He further noted that, given the extensive training data of "five billion compressed images," it was implausible that the plaintiffs' works were involved in the creation of these images. This take gives us a glimpse into how intellectual property claims may be handled in the AI era.
What’s in store for 2024?
Despite bipartisan interest, as well as support from leaders of major technology companies, uniform and comprehensive AI regulation remains elusive. No consensus has been reached on either substance or process: some groups are playing catch-up in learning the technology, while others clamor to develop their own versions of legislation through tailor-made yet piecemeal policies.
As AI expands into more industries, it has piqued, and will continue to pique, the attention of federal and state regulators, following a familiar cycle: panic → public pressure → internal pressure → action.
When disruptive technology is introduced, or sensationalized by the masses, it creates a fear of the unknown. Individuals faced with existential questions about consciousness and bombarded with headlines such as “100 Jobs that Will be Wiped Out by Artificial Intelligence by 2050” tend to clamor for a sense of control.
Once these fear receptors are triggered, the marketplace begins to put pressure on regulators to “do something about it.” The problem, however, is that regulators tend to be at a loss for how to deal with the complexities and constantly evolving challenges these disruptive technologies pose. As such, the solution tends to be a simple, albeit premature and quickly obsolete, one-size-fits-all approach. When backed against a wall, regulators will attempt to regulate disruptive technologies using existing rules, definitions, and processes.
As we’ve explored, current regulation seems to be centered around already existing concepts such as privacy, disclosure, and consumer protection. However, there is an obvious need for innovative resolutions to address the novel issues posed by AI. California, for example, has already set the tone by enacting temporary deepfakes legislation,[9] highlighting both the urgent threats posed by this type of technology, such as the “impact of digital content forgery technologies and deepfakes on civic engagement,” and the novel approaches required to combat them.
AI regulation changes rapidly, and it’s essential to refer to the latest legal documents, official announcements, and reliable news sources for the most recent information. Cosmorizon will continue to stay up-to-date with developing trends and advancements in the regulatory landscape.
[1] See, e.g., Congressional Research Service, Artificial Intelligence: Overview, Recent Advances, and Considerations for the 118th Congress (Aug. 4, 2023), available at https://crsreports.congress.gov/product/pdf/R/R47644.
[2] Sen. C. Schumer, SAFE Innovation Framework, available at https://www.democrats.senate.gov/imo/media/doc/schumer_ai_framework.pdf.
[3] Sen. R. Blumenthal & Sen. J. Hawley, Blumenthal & Hawley Announce Bipartisan Framework on Artificial Intelligence Legislation (Sep. 8, 2023), available at https://www.blumenthal.senate.gov/newsroom/press/release/blumenthal-and-hawley-announce-bipartisan-framework-on-artificial-intelligence-legislation.
[4] The White House, Blueprint for an AI Bill of Rights, available at https://www.whitehouse.gov/ostp/ai-bill-of-rights/.
[5] The White House, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Oct. 30, 2023), available at https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.
[6] See E. Jillson, Aiming for truth, fairness, and equity in your company’s use of AI (Apr. 19, 2021), available at https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.
[7] See FTC, Generative Artificial Intelligence and the Creative Economy Staff Report: Perspectives and Takeaways (Dec. 2023), available at https://www.ftc.gov/system/files/ftc_gov/pdf/12-15-2023AICEStaffReport.pdf.
[8] See PM v. OpenAI LP, N.D. Cal., No. 3:23-cv-03199 (June 6, 2023), available at https://www.bloomberglaw.com/public/desktop/document/PMetalvOPENAILPetalDocketNo323cv03199NDCalJun282023CourtDocket/1?doc_id=X1Q5O7KNE0B9N58DMJL3VN7K9SN.
[9] Cal. Gov. Code § 11547.5.