AI Giants Also Like ‘Fair Use’ Exemptions for Copyrighted Material

OpenAI and Google laid out visions for regulation in response to the Trump administration’s AI Action Plan, which aims to help the United States maintain its technological lead over China.
OpenAI’s comments regarding the plan take a hardline stance on AI models developed in China. The company called for a ban on AI models from Chinese lab DeepSeek, describing it as a “state-subsidized” and “state-controlled” application. OpenAI argued that DeepSeek’s models are a security risk due to China’s data laws, which could compel the company to share information with Beijing.
The company also asked the government to ease the export controls on advanced computing chips and AI model weights introduced by the Biden administration in January. That rule permits unrestricted AI chip exports to just 18 countries, mostly in Western Europe, plus Canada, Australia and Japan. OpenAI said free export status should instead be granted to all countries “that commit to democratic AI principles” by deploying AI “in ways that promote more freedom for their citizens.”
OpenAI advocated for AI-friendly copyright policies, citing “fair use” as an enabler of American innovation. Restricting AI training data to public domain content would limit model performance and competitiveness, it said. OpenAI has long maintained this position, drawing criticism from copyright holders who have sued the company for using their content without consent.
Google’s comments track some of OpenAI’s positions. Like OpenAI, Google pushed for lenient copyright rules, arguing that “text-and-data mining exceptions” should be preserved to allow AI companies to train on copyrighted content.
Google took a harder stance on export controls, writing that the tiered system introduced at the tail end of the Biden administration “may undermine economic competitiveness goals the current administration has set by imposing disproportionate burdens on U.S. cloud service providers.”
Google also advised the U.S. government to focus on long-term investments in AI research rather than cutting federal funding, and asked that government datasets be made publicly available to accelerate commercial AI development.
Both companies opposed the prospect of AI liability laws. Google argued that deployers, not developers, are better positioned to assess and manage risks. OpenAI has taken a similar position in the past, resisting attempts to impose broad accountability measures on model makers.