Gemini: AI must be approached not as a simple, inert tool, but as a complex service relationship. This relationship is fraught with unresolved legal questions, significant ethical considerations, and profound commercial risks that demand active, informed, and strategic management. A clear-eyed understanding of the technology’s limitations is crucial for its responsible use.
Generative AI: A Legal and Commercial Analysis of Service Models, Copyright, and User Rights
by Gemini 2.5, Deep Research. Warning: LLMs may hallucinate!
Executive Summary
The rapid proliferation of generative artificial intelligence (AI) has ignited a fierce debate, pitting the technology’s transformative potential against profound legal, economic, and ethical challenges. A social media post by creative industry professionals Nick Dunmur and Adam Shaw serves as a microcosm of this conflict, articulating a series of critical claims regarding the nature of AI platforms, their reliance on copyrighted material, the ownership of their outputs, and their broader economic and environmental impact. This report provides an exhaustive analysis of these claims, concluding that while the sentiments expressed are directionally correct on several key points, the underlying reality is a complex tapestry of unsettled law, significant economic disruption, and substantial hidden costs.
Key Finding 1: Generative AI is a “Service,” Not a “Tool.” The assertion that generative AI platforms are legally and operationally structured as “Software-as-a-Service” (SaaS), not as standalone tools, is fundamentally correct. This is not a mere semantic distinction; it is a critical classification with profound consequences for user rights, data privacy, provider liability, and control over the technology’s use and outputs.
Key Finding 2: AI Training Models are Built on a Legally Contested Foundation. The claim that AI models are trained on vast quantities of copyrighted works without permission or payment is substantially true and is the subject of existential, industry-shaping litigation. The primary legal defense offered by AI developers in the United States, “fair use,” is a high-risk, fact-specific, and unreliable gamble. The immense legal and financial pressure is forcing a systemic shift away from a “scrape-first” model toward a future where training data must be ethically sourced and licensed.
Key Finding 3: Users Do Not Truly “Own” AI Outputs. The claim that users do not own AI-generated outputs is legally complex but practically true in the most commercially meaningful sense. A fundamental conflict between platform Terms of Service, which often promise “ownership,” and foundational copyright law, which requires human authorship, creates a state of “phantom ownership.” Users typically receive a contractual right to use an output but not a legally defensible copyright they can protect from infringement by others, rendering such assets commercially vulnerable.
Key Finding 4: The Economic Impact is One of Value Transfer, Not Contraction. The assertion that AI contributes to a “shrinking of the economy” is inaccurate at the macroeconomic level, where generative AI is widely projected to boost productivity and GDP. However, the statement accurately reflects the severe economic disruption, value transfer, and potential for significant job displacement within the creative industries. The economic pie may grow, but its distribution is being radically altered, with value flowing from creative labor to technology capital.
Key Finding 5: The Environmental Footprint is Real and Substantial. The claim that using generative AI adds to one’s carbon footprint is unequivocally true. The energy and water consumption required for both the initial training of large models and the continuous, large-scale deployment for user queries (inference) are substantial, largely invisible to the end-user, and represent a significant and growing environmental externality.
Strategic Imperative: For both businesses and individuals, the central conclusion of this analysis is that generative AI must be approached not as a simple, inert tool, but as a complex service relationship. This relationship is fraught with unresolved legal questions, significant ethical considerations, and profound commercial risks that demand active, informed, and strategic management.
I. Deconstructing the “Service vs. Tool” Dichotomy: A Legal and Practical Analysis
The initial assertion in the analyzed social media post—that generative AI is a “service” and not the “tool” its providers wish users to believe—is the foundational argument upon which all other claims rest. This distinction is not merely semantic; it is a strategic framing battle that dictates the fundamental nature of the relationship between the user and the provider, with profound implications for control, liability, ownership, and data privacy. Classifying these platforms correctly is the essential first step in understanding the true terms of the user’s engagement.
1.1 The Legal and Commercial Framework: Defining SaaS vs. Licensed Tools
To understand the classification of generative AI, one must first distinguish between two dominant software distribution models: the traditional licensed “tool” and the modern “Software-as-a-Service” (SaaS) model.
A traditional software tool is analogous to a physical good. The user typically pays a significant, one-time upfront fee to purchase a perpetual license. The software is then installed on the user’s local machine or private server.1 In this model, the user has a high degree of control and ownership over their instance of the software. They are responsible for maintenance, updates, and security, but in return, their data and operations remain within their own environment. The software is a tangible, albeit digital, asset.2
In stark contrast, the SaaS model is a service agreement, not a product sale.3 This model is characterized by several key features:
Subscription-Based Access: Users pay a recurring fee (monthly or annually) for the right to access the software, rather than purchasing it outright. This lowers the initial cost of entry for users.3
Cloud Hosting: The software is not installed locally but is hosted on the provider’s remote servers and accessed by the user over the internet, typically through a web browser or a lightweight client application.1
Centralized Management: The SaaS provider is solely responsible for all aspects of infrastructure management, including security updates, feature improvements, bug fixes, and server maintenance. Users are always on the latest version of the software without needing to take any action.1
Service Agreement: The legal relationship is governed by a service agreement or Terms of Service (ToS), which grants the user a right to use the software under specific conditions, rather than a license that transfers ownership rights.3
While the SaaS model offers significant benefits in terms of cost-effectiveness, scalability, and accessibility, it comes with a critical trade-off: a fundamental loss of user control. The user is entirely dependent on the provider for the continued availability and functionality of the service. The provider retains control over the software, its features, and the infrastructure it runs on. This dependency creates potential risks related to data security, as user data is processed and stored on the provider’s servers, and contractual obligations that can be complex and heavily favor the provider.4
1.2 Classifying Generative AI Platforms within the SaaS Model
When examined against this framework, it becomes clear that the major generative AI platforms—including OpenAI’s ChatGPT, Anthropic’s Claude, and Midjourney—operate as SaaS products. They are accessed via subscription fees, are hosted on the providers’ cloud infrastructure, and are managed and updated centrally.5 Even providers such as Stability AI that also release downloadable model weights deliver their flagship consumer offerings as hosted services. The user does not download and install a “ChatGPT tool” on their computer; they log into a service hosted by OpenAI.
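The operational difference is visible even at the level of a single API call: with a hosted service, the user’s prompt is transmitted over the network to the provider’s infrastructure and processed there, under the provider’s Terms of Service. The sketch below is a schematic illustration of that round trip using only Python’s standard library; the endpoint URL, header names, and response shape are hypothetical placeholders, not any specific vendor’s API.

    import json
    import urllib.request

    # Calling a hosted generative AI *service*: the prompt leaves the user's machine
    # and is processed on the provider's servers. Endpoint, headers, and response
    # shape are hypothetical placeholders for illustration only.
    def generate_via_service(prompt: str, api_key: str) -> str:
        request = urllib.request.Request(
            "https://api.example-ai-provider.com/v1/generate",   # hypothetical endpoint
            data=json.dumps({"prompt": prompt}).encode("utf-8"),
            headers={
                "Authorization": f"Bearer {api_key}",
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read())["text"]           # hypothetical response field

A locally installed tool, by contrast, would perform the equivalent computation entirely on the user’s own hardware, with no prompt ever leaving their environment, which is exactly the control that the service model trades away.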
This classification is implicitly recognized in legal and professional guidance. For example, guidance for lawyers using these platforms emphasizes the need to “regularly review the generative AI vendor’s data management, security and standards” and to “establish whether the generative AI tool is a closed system within your firm’s boundaries or also operates as a training model for third parties”.7 These are considerations unique to a service relationship, where the user must trust a third-party vendor with their data and operations. The focus is on vendor management, reviewing service level agreements, and understanding data governance policies—all hallmarks of procuring a service, not buying a tool.9
The framing of generative AI as a “tool” by its proponents is a deliberate and strategic choice. A “tool”—like a camera, a paintbrush, or a word processor—is a passive instrument wielded by an active human user. This metaphor implies that the user is in complete control, the output is a direct result of their skill, and the responsibility for that output rests solely with them. This framing conveniently positions the AI company as a mere technology provider, deflecting responsibility for the platform’s outputs, its potential biases, and the legal status of the content it generates.11
However, the “service” classification more accurately reflects the operational, legal, and commercial reality. It highlights the ongoing, dependent relationship between the user and the provider. It correctly frames the provider as an active participant that controls the platform, has access to user data, dictates the terms of use, and bears a degree of responsibility for the service it delivers. The distinction is not academic; it is a proxy war over control, liability, and the nature of the value exchange. Recognizing generative AI as a service is essential to accurately assessing the risks and limitations inherent in its use.
1.3 Consequences of the “Service” Classification for Users and Businesses
Understanding generative AI as a service reveals several critical consequences that are often obscured by the “tool” metaphor.
First, there is a profound loss of user control and autonomy. Unlike a locally installed tool, a SaaS platform can be modified, restricted, or even terminated by the provider at any time, subject to the terms of the service agreement. Users have no ownership of the underlying software and are perpetually subject to the provider’s policies, which can change with little notice. This creates a significant dependency risk for businesses that integrate these services into critical workflows.4
Second, the service model introduces significant data security and confidentiality risks. When a user inputs a prompt or uploads a document to a public-facing AI service, that data is transmitted to and processed on the provider’s servers. This act immediately removes the data from the user’s direct control. The security and confidentiality of that information then depend entirely on the provider’s infrastructure, security protocols, and internal policies.6 A critical concern, particularly for businesses, is the risk that this input data could be reviewed by the provider or used to further train the AI models, potentially exposing proprietary information or client-confidential data.7 While enterprise-grade services may offer stronger data protection guarantees, the default for many consumer-facing services is that inputs are not fully private.
Third, the service model complicates liability and risk allocation. With a traditional tool, the user is generally liable for how it is used and the outputs it creates. In the SaaS model, liability is a complex issue governed by the service agreement. These agreements are often drafted to heavily limit the provider’s liability for any number of issues, including inaccurate outputs (“hallucinations”), service outages, or even outputs that infringe on third-party copyrights. The user, by agreeing to the ToS, often assumes a significant portion of the risk associated with using the service, even though they have no control over the underlying technology or its training data.11
In conclusion, the initial claim is correct: generative AI is a service. This reality shifts the user’s position from that of an empowered owner of a tool to a dependent subscriber to a service, with all the attendant risks related to control, data, and liability.
II. The Copyright Conundrum: Analyzing the Use of Creative Works in AI Training
The most explosive claim in the social media post is that generative AI platforms “are built on the backs of human authors’ creative works and without permission or payment... and without being able to rely on any exception to copyright law.” This assertion lies at the heart of a wave of high-stakes litigation that poses an existential threat to the generative AI industry. A thorough analysis reveals that the core of this claim—the unauthorized and uncompensated copying of copyrighted works for training purposes—is factually accurate. The legality of this practice, however, remains one of the most contentious and unsettled questions in modern intellectual property law.
2.1 The Foundation of Infringement: Data Acquisition and Training Methods
Generative AI models, particularly large language models (LLMs) and diffusion models for image generation, are created through a process of “training” on staggeringly large datasets. This training process fundamentally involves making digital copies of the source material to be analyzed by the machine learning algorithms.12 A significant portion of these training datasets is composed of copyrighted material—including books, articles, photographs, illustrations, and source code—that has been scraped from the public internet without the explicit permission of, or payment to, the respective rights holders.14
The scale of this copying is immense. Lawsuits filed by rights holders allege that AI companies have engaged in “industrial-scale” data harvesting, creating copies of billions of images and texts.16 Evidence presented in these cases suggests that training data has been sourced not only from the open web but also from illicit sources, such as “pirate” websites like LibGen and Z-Library, which host vast archives of copyrighted books without authorization.16 Furthermore, companies like Reddit have filed lawsuits alleging that AI firms have actively circumvented technical protections (such as robots.txt protocols and API restrictions) designed to prevent mass scraping of their user-generated content, sometimes by acquiring the data through third-party “data laundering” services.17
From a copyright law perspective, these acts of downloading, copying, and storing protected works to create a training dataset prima facie implicate the copyright owner’s exclusive right of reproduction.13 Unless a valid legal exception applies, this unauthorized copying constitutes copyright infringement.
2.2 The Primary Defense in the United States: A Deep Dive into Fair Use
In the United States, the primary legal shield raised by AI companies is the doctrine of “fair use,” codified in Section 107 of the U.S. Copyright Act. Fair use is an affirmative defense that permits the limited use of copyrighted material without permission under certain circumstances. It is not a blanket right but a flexible, case-by-case analysis based on four statutory factors, which courts must weigh together.21
The Purpose and Character of the Use: This factor often hinges on whether the new use is “transformative”—that is, whether it “adds something new, with a further purpose or different character, altering the first with new expression, meaning, or message”.21 AI companies argue that using copyrighted works to train a statistical model is a highly transformative, non-expressive use. They contend they are not republishing the original works but are using them to learn patterns of language and imagery, a purpose entirely different from that of the original creation.18 However, rights holders and the U.S. Copyright Office have pushed back, arguing that this analogy to human learning is “mistaken”.14 They assert that if the ultimate output of the AI model serves the same purpose as and competes with the original works (e.g., generating an image that competes with a stock photograph), the use is far less transformative and more commercially exploitative.13
The Nature of the Copyrighted Work: This factor considers whether the source material is more factual or creative. Fair use is more likely to apply to the use of factual works than to highly creative works like novels, poems, and original artwork.21 Since AI training datasets are indiscriminate and include vast quantities of highly creative material, this factor generally weighs against a finding of fair use.13
The Amount and Substantiality of the Portion Used: This factor examines how much of the original work was copied. AI training typically involves copying entire works, which normally weighs heavily against fair use. While some copying of an entire work can be justified if it is necessary for a transformative purpose (as in the Google Books case), courts are scrutinizing whether the industrial-scale, wholesale copying undertaken by AI companies is reasonable in light of their ultimate commercial goals.13
The Effect of the Use Upon the Potential Market for or Value of the Copyrighted Work: Often considered the single most important factor, this assesses whether the new use harms the market for the original work by acting as a substitute.21 If an AI model can generate outputs that are substantially similar to or directly compete with the works in its training data, it can cause direct market harm by displacing sales of the original. Furthermore, a key point of contention is the existence of a potential licensing market. Rights holders argue that the unauthorized use of their works usurps a burgeoning market where they could license their content for AI training. While some judges have been skeptical of this “circular” argument, the increasing number of actual licensing deals being signed suggests such a market is indeed viable.22 Recent court decisions have emphasized that plaintiffs must provide concrete evidence of market harm or infringing outputs for this factor to weigh in their favor.22
The fair use defense is proving to be an unreliable, high-stakes gamble, which is accelerating the industry’s pivot from a confrontational “scrape-first” model to a pragmatic “license-and-partner” model. The analysis is intensely fact-specific, leading to unpredictable outcomes and a lack of clear legal precedent upon which a multi-trillion-dollar industry can be securely built.18 The potential statutory damages for willful, large-scale infringement are catastrophic, representing an existential threat to even the most well-funded AI labs.15 This immense risk is evidenced by the industry’s dual-track strategy: while publicly fighting for a broad interpretation of fair use in court, major players like Google and OpenAI are simultaneously signing nine-figure licensing deals with content owners such as Reddit and various news publishers.17 This hedging strategy reveals a pragmatic acknowledgment that relying solely on a favorable court ruling is too risky. They are effectively buying legal certainty and securing access to high-quality data, which in turn is creating the very licensing market they sometimes argue in court does not exist. This pragmatic shift suggests the future of AI development will not be a legal free-for-all but a negotiated ecosystem where the value of training data is explicitly recognized and compensated.
2.3 The UK and EU Legal Landscape: Fair Dealing and TDM Exceptions
The legal situation outside the United States is markedly different and, for commercial AI developers, often more restrictive. The United Kingdom’s copyright law does not have a broad “fair use” doctrine. Instead, it relies on a set of more narrowly defined, specific exceptions known as “fair dealing”.26 These exceptions cover purposes such as non-commercial research, private study, criticism, review, and news reporting. Crucially, there is no general “transformative use” exception equivalent to that in US law.
The most relevant exception for AI training in the UK is for Text and Data Mining (TDM), but the current law explicitly limits this exception to the purpose of “non-commercial research”.27 This means that the training of commercial generative AI models, which is by its nature a commercial enterprise, falls outside this exception and is therefore prima facie infringing under UK law.29
This restrictive legal environment has created a state of profound uncertainty and policy paralysis in the UK. The government has acknowledged that the current framework satisfies neither the AI industry nor the creative sector.30 An attempt in 2022 to introduce a broad new TDM exception for commercial purposes, with no ability for rights holders to opt out, was swiftly abandoned after a fierce backlash from the creative industries.32 A subsequent effort by the UK Intellectual Property Office (UKIPO) to broker a voluntary code of practice between the two sectors also ended in failure in early 2024.29 The government is now consulting on a new approach that would mirror the European Union’s framework: creating a TDM exception for commercial purposes from which rights holders can “opt out” using machine-readable signals.32 This ongoing policy debate underscores the fact that, as of now, there is no clear legal basis for most commercial AI training activities in the UK.
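For context, the kind of machine-readable signal contemplated in that consultation already exists in embryonic form: several AI crawlers publish user-agent tokens that website owners can address in a robots.txt file. The snippet below is a minimal illustration, assuming the crawler operators honor their documented tokens (OpenAI’s GPTBot and Google’s Google-Extended); under current UK law it functions as a request rather than a legally effective rights reservation, which is precisely the gap the proposed opt-out regime is meant to close.

    # Illustrative robots.txt directives addressing AI training crawlers
    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /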
2.4 The Litigation Frontline: Landmark Cases and the Rise of Licensing
The legal uncertainty surrounding AI training has culminated in a wave of landmark lawsuits across the globe. Content creators of all types—authors such as Michael Connelly, organizations such as the Authors Guild, news publishers such as The New York Times, visual artists, and stock photo agencies such as Getty Images—have filed copyright infringement suits against nearly every major generative AI company, including OpenAI, Anthropic, Stability AI, and Midjourney.16
These legal battles are already reshaping the industry. The first major turning point was the settlement in Bartz v. Anthropic in August 2025, where Anthropic agreed to pay over $1.5 billion to a class of authors and publishers.16 This landmark settlement, the largest of its kind in copyright history, sent a clear signal to the market: the litigation risk is real, and the potential liability is substantial enough to compel a financial resolution rather than risk a trial.
A critical development emerging from the litigation is the increasing focus on the provenance of training data. The legal battle is subtly shifting from the abstract question of whether the act of training is fair use to the more concrete question of whether the data was legally acquired in the first place. The pre-settlement ruling in the Anthropic case is the clearest signal of this trend, where the judge drew a sharp distinction between training on lawfully purchased books (which could potentially be fair use) and building a training library from pirated works, which was deemed “inherently, irredeemably infringing”.18 This focus on lawful acquisition means the simple defense that data was scraped from the “public internet” is collapsing under scrutiny. Courts are now examining whether access controls were circumvented or terms of service were violated. This will inevitably force the AI industry to develop transparent and defensible “data supply chains,” creating a new market for ethically sourced, legally licensed training data and placing companies with opaque or illicitly sourced datasets at enormous legal and financial peril.
III. The Ownership Paradox: Copyright and Control of AI-Generated Outputs
The assertion that “you don’t own what the platform outputs, so don’t expect to licence the use of it, or to be able to prevent anyone else from taking that output and using it themselves” cuts to the core of the value proposition for any creative user or business. If ownership cannot be secured, the commercial utility of generative AI for creating unique, defensible assets is fundamentally undermined. An analysis of current copyright law, particularly in the United States, reveals this claim to be largely accurate, stemming from a deep and unresolved conflict between the contractual promises made by AI platforms and the foundational principles of intellectual property law.
3.1 The Bedrock of Copyright: The Human Authorship Requirement in the US
The cornerstone of United States copyright law is the principle of human authorship. For a work to be eligible for copyright protection, it must be the product of a human creator.12 The U.S. Copyright Office has been unequivocal on this point, consistently maintaining that it will refuse to register works “produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author”.12
This principle has been tested and affirmed in the context of generative AI. The Copyright Office’s official guidance states that the central question is “the extent to which the human had creative control over the work’s expression”.12 Where an AI system “determines the expressive elements of its output,” the resulting material is not considered a product of human authorship and is therefore not copyrightable. This position was legally validated in the landmark federal court case Thaler v. Perlmutter, which held that human authorship is a prerequisite for copyright protection.36
The practical application of this standard was demonstrated in the Copyright Office’s decision regarding the graphic novel Zarya of the Dawn. The author, Kristina Kashtanova, had written the text and arranged the layout of the comic, but the images were generated by the AI platform Midjourney. The Office granted copyright registration for the work as a whole, but explicitly excluded the individual AI-generated images from protection. The copyright covered only the human-authored elements: the text and the creative selection, coordination, and arrangement of the text and images.36 This decision established a critical precedent: merely providing a text prompt to an AI system is generally considered insufficient to constitute the level of creative control necessary for human authorship of the resulting output. The AI, not the user, is seen as the creator of the “expressive elements” of the image.
3.2 A Comparative View: The UK’s “Computer-Generated Works” Anomaly
The legal landscape in the United Kingdom presents a notable, and increasingly anomalous, contrast to the US position. The UK’s Copyright, Designs and Patents Act 1988 (CDPA) contains a unique provision for “computer-generated works,” defined as works generated by a computer in circumstances where there is no human author.38 Under the CDPA, such works are granted copyright protection.
However, this provision creates its own set of ambiguities. The CDPA assigns authorship of a computer-generated work to “the person by whom the arrangements necessary for the creation of the work are undertaken”.39 It is legally unclear who this person is in the context of modern generative AI. It could plausibly be argued to be the user who writes the prompt, the developers who wrote the AI software, the company that curated the training data and trained the model, or even the owners of the data on which the model was trained.39 This ambiguity leaves the question of ownership unresolved.
Furthermore, the protection granted to computer-generated works in the UK is significantly weaker than that for human-created works. The copyright term is only 50 years from the date of creation, compared to the author’s life plus 70 years for traditional works. Crucially, computer-generated works are not granted “moral rights,” which include the author’s right to be credited and to object to derogatory treatment of their work.40 This lack of moral rights is a significant drawback for creative professionals. Recognizing this divergence from international norms, the UK government is currently consulting on whether this provision for computer-generated works should be removed entirely to align UK law more closely with that of the US and the EU.32
3.3 The Contractual Overlay: The Role of Platform Terms of Service
Faced with this legal vacuum, AI companies have attempted to provide users with a sense of security through their Terms of Service (ToS). Many platforms, including OpenAI’s, include clauses that state the user “owns” the output generated from their prompts, and the company assigns any of its own rights in the output to the user.39
This contractual language, however, creates a fundamental and unresolved conflict with copyright law, resulting in a form of “phantom ownership.” The core issue is that a private company’s ToS cannot override public law. A platform can contractually assign to a user whatever rights it may hold in an output, but it cannot magically create a copyright in a work that the law itself deems uncopyrightable due to a lack of human authorship.
The “ownership” granted by the ToS is therefore a limited, contractual right that is primarily valid between the user and the AI platform. It is, in essence, a promise from the AI company that it will not assert any claim over the output and will permit the user to use it. It is not a legally recognized intellectual property right that the user can enforce against the rest of the world.
This is precisely the “shit deal” to which Nick Dunmur’s post refers. A business could use an AI service to generate a logo for a new product, believing it “owns” the logo based on the platform’s ToS. However, if a competitor starts using the exact same logo, the business may have no legal recourse for copyright infringement. Because no valid copyright ever existed in the AI-generated image, there is no right to be infringed. The business has paid a subscription fee for an asset that is, for all practical purposes, in the public domain and commercially worthless for establishing an exclusive brand identity or for licensing to others. This legal trap poses a significant and often misunderstood risk for any user intending to use AI-generated content for commercial purposes that require exclusivity and legal protection.
IV. Economic Realities: Assessing Generative AI’s Impact on the Economy and Creative Professions
The claim that using AI services contributes to a “shrinking of the economy” is a potent expression of the anxiety felt within creative communities. While this statement is factually inaccurate from a macroeconomic perspective, it serves as a powerful metaphor for the economic disruption, value transfer, and displacement that generative AI is poised to inflict upon specific sectors, particularly the creative industries. The economic story of AI is not one of simple contraction but of radical, and often painful, redistribution.
4.1 Macroeconomic Projections: A Story of Growth and Productivity
Contrary to the “shrinking economy” narrative, the consensus among economists and major consulting firms is that generative AI will be a significant driver of economic growth and productivity over the next decade. Projections from institutions like McKinsey & Company suggest that generative AI could add between $6.1 trillion and $7.9 trillion annually to the global economy through productivity gains across various sectors, including marketing, customer operations, and software engineering.41 Similarly, research from the Wharton School projects that AI will lead to a permanent increase in the level of economic activity, boosting U.S. GDP by 1.5% by 2035 and nearly 3% by 2055.42
These gains are expected to come from AI’s ability to automate certain tasks, augment human capabilities in others, and generate new efficiencies in areas like content creation, data analysis, and product design.41 However, it is important to temper this optimism. More conservative estimates, such as those from MIT economist Daron Acemoglu, project a more modest GDP boost of around 1% over the next 10 years, citing the high costs of implementation and the fact that AI is currently best suited for a relatively small percentage of tasks economy-wide.43 Furthermore, many corporate AI initiatives fail to deliver a tangible impact on revenue, and the high costs and uncertain business value of some projects may lead to their cancellation, indicating that the path to productivity gains is not guaranteed.44
4.2 Sectoral Disruption: The Economic Impact on Creative Industries
The fear of a “shrinking economy” originates from the microeconomic perspective of those whose livelihoods are directly threatened by this new technology. For creative professionals—writers, illustrators, designers, translators, and voice actors—generative AI represents a direct threat of labor displacement and skill devaluation.45
The economic mechanism of this disruption is twofold. First, AI can automate tasks previously performed by humans, leading to potential job losses. Research from Goldman Sachs suggests that generative AI could automate as much as 26% of work tasks in the arts and design sectors.47 Second, AI can lead to an oversupply of “good enough” creative content, produced at a fraction of the time and cost of human labor. This influx of low-cost content can drive down market prices, making it increasingly difficult for human professionals to compete and command fair compensation for their skills and experience.45 The dominant discourse from technology companies often frames this as a “democratization of creativity,” but for working professionals, it represents a fundamental threat to the economic value of their craft.48
This anxiety is not speculative; it is a lived reality that has already sparked major industrial action. The 2023 strikes by the Writers Guild of America (WGA) and the Screen Actors Guild (SAG-AFTRA) were, in large part, a direct response to the threat of generative AI. Writers and actors organized to demand contractual protections against having their work replaced by AI-generated scripts and to prevent their digital likenesses from being used to train AI models without consent or compensation.49 These strikes represent a clear and organized effort by creative labor to resist the devaluation of their contributions in the face of automation.
The “shrinking economy” claim is a mischaracterization of a massive value transfer. The economy as a whole may grow, but the economic value is being siphoned from individual human creators to AI platform owners and the corporations that deploy their technology for automation. To illustrate, consider a company that previously employed ten graphic designers. It may now be able to achieve similar output with one human operator managing an AI service. From a macroeconomic perspective, the company’s productivity has soared, and its profits have increased, contributing to GDP growth. However, from the perspective of the creative sector, nine jobs and their associated incomes have vanished. The money that once paid the salaries of those designers now flows in two new directions: as subscription fees to the AI platform provider and as increased profits to the shareholders of the automating company. The economic pie is growing, but it is being re-sliced in a way that transfers wealth from labor (the creative class) to capital (the tech industry and its corporate customers). This value transfer is the source of the legitimate economic anxiety within the creative industries and explains the profound disconnect between optimistic macroeconomic forecasts and the lived experience of those facing displacement.
4.3 The Duality of AI: Augmentation vs. Automation
The economic future of creative work is not necessarily a dystopian one of complete replacement. There is an alternative, more optimistic vision where AI serves as a powerful tool to augment human creativity rather than automate it away. In this scenario, AI acts as a co-pilot, handling tedious and repetitive tasks, accelerating ideation by generating variations, and enhancing personalization at scale.41 This could free up human creators to focus on higher-level strategic thinking, conceptual innovation, and the uniquely human aspects of their craft, potentially leading to new creative roles and an overall increase in the quality and scope of creative output.51
However, the path to this collaborative future is not guaranteed. The current business models and incentives driving AI development often prioritize cost-cutting and efficiency through automation over the more complex goal of human augmentation. Surveys of creative workers reveal that many are already being asked to simply review or edit AI-generated work rather than create original work themselves, a process they feel devalues their skills and diminishes their agency.46 The ultimate economic impact will depend on the choices made by businesses, the demands made by labor, and the regulatory frameworks put in place to govern the technology’s deployment.
4.4 Negative Economic Externalities
Finally, optimistic economic forecasts often fail to account for significant negative externalities. A prime example is the rise of sophisticated, AI-driven cybercrime. AI tools are being used to create more convincing phishing attacks, find new software vulnerabilities, and operate malicious campaigns with business-like efficiency. The cost of defending against these attacks and remediating their damage represents a significant and growing drain on the economy, imposing substantial costs on businesses and governments that must be factored into any holistic economic assessment of AI’s impact.52
V. The Hidden Costs: Quantifying the Environmental Footprint of Generative AI
The claim that using generative AI is “adding to your carbon footprint” is unequivocally true and points to one of the most significant and often-overlooked externalities of the AI revolution. The computational power required to create and operate large-scale AI models is immense, translating into substantial consumption of electricity and water, and a correspondingly large environmental impact. This cost is almost entirely invisible to the end-user, creating a classic “tragedy of the commons” scenario where the collective environmental burden grows unchecked.
5.1 The Two-Phase Problem: Energy Consumption in Training and Inference
The environmental impact of generative AI stems from two distinct phases of its life cycle: training and inference.
Training is the initial, one-time process of creating a foundational model. This requires processing vast datasets through powerful clusters of specialized processors (GPUs or TPUs) running continuously for weeks or months. This is an incredibly energy-intensive process. For example, researchers from Google and UC Berkeley estimated that training a single large language model in 2021 consumed 1,287 megawatt-hours of electricity—enough to power about 120 average U.S. homes for an entire year.53
Inference is the ongoing process of using the trained model to generate responses to user queries. While a single inference task consumes far less energy than the entire training process, the cumulative effect is enormous. With billions of users making trillions of queries, the energy consumption from inference now accounts for the majority of an AI model’s total energy footprint, estimated by some companies to be 60-70% of the total.54 Researchers have estimated that a single ChatGPT query consumes approximately five times more electricity than a simple Google web search.53
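The relative scale of training and inference can be made concrete with a rough back-of-the-envelope calculation. The sketch below is purely illustrative: the per-query energy figure and daily query volume are assumptions chosen for arithmetic convenience (providers do not disclose real values), while the training figure is the 1,287 MWh estimate cited above.

    # Back-of-the-envelope comparison of one-time training energy vs. cumulative
    # inference energy. All inputs are illustrative assumptions, not vendor data.

    TRAINING_ENERGY_MWH = 1287        # 2021 estimate for one large model, cited above
    ENERGY_PER_QUERY_WH = 3.0         # assumed per-query inference cost (hypothetical)
    QUERIES_PER_DAY = 100_000_000     # assumed global daily query volume (hypothetical)

    training_energy_wh = TRAINING_ENERGY_MWH * 1_000_000                 # MWh -> Wh
    queries_to_match_training = training_energy_wh / ENERGY_PER_QUERY_WH
    days_to_match_training = queries_to_match_training / QUERIES_PER_DAY

    print(f"Queries needed to equal training energy: {queries_to_match_training:,.0f}")
    print(f"At {QUERIES_PER_DAY:,} queries/day, that takes about {days_to_match_training:.1f} days")

Under these assumptions, cumulative inference energy overtakes the one-time training cost within a matter of days, which is consistent with the industry estimates above that inference now dominates the total footprint.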
The aggregate effect of this demand is staggering. The global electricity consumption of data centers, driven in part by the demands of AI, rose to 460 terawatt-hours (TWh) in 2022. By 2026, this figure is expected to approach 1,050 TWh.53 An April 2025 report from the International Energy Agency predicts that global electricity demand from data centers will more than double by 2030.56 This explosive growth in demand is putting significant pressure on electrical grids and, in some cases, is leading to the delayed retirement of fossil fuel power plants to meet the need.55
5.2 Calculating the Carbon Footprint: Operational and Embodied Emissions
The energy consumption of AI translates directly into a carbon footprint, which is composed of two main elements:
Operational Carbon: These are the emissions generated from the electricity used to power the computing hardware during training and inference. The carbon intensity of these emissions depends heavily on the energy mix of the local grid where the data center is located. A model trained in a region powered by coal will have a much higher carbon footprint than one trained in a region with abundant nuclear or renewable energy.57
Embodied Carbon: These are the emissions associated with the entire life cycle of the hardware and infrastructure itself—from mining the raw materials and manufacturing the complex processors to constructing the massive, materially-intensive data centers.56 The embodied carbon can be substantial, with one study suggesting it can account for approximately 50% of a model’s operational carbon footprint.58
Quantifying these emissions for specific models is challenging due to a lack of transparency from many developers. However, academic studies of open models provide a sense of scale. The training of the BLOOM language model, which was conducted on a French grid powered largely by low-carbon nuclear energy, was estimated to have emitted 50.5 tonnes of CO2 equivalent when accounting for both operational and embodied carbon.59 An earlier, widely cited estimate for training a large Transformer model with neural architecture search, conducted in a more carbon-intensive environment, was approximately 284 tonnes of CO2 equivalent.57
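To see how the operational and embodied components combine, the following sketch applies the roughly 50% embodied-to-operational ratio cited above to a hypothetical training run of 1,287 MWh (the figure from Section 5.1). The grid carbon intensity used here is an illustrative assumption; real grids range from well under 0.1 kg CO2e per kWh for largely nuclear or renewable supply to more than 0.7 for coal-heavy supply, which is why siting decisions matter so much.

    # Illustrative carbon estimate for a hypothetical training run.
    # Grid intensity is an assumption; the 50% embodied ratio follows the study cited above.

    TRAINING_ENERGY_KWH = 1_287_000          # 1,287 MWh, figure cited in Section 5.1
    GRID_INTENSITY_KG_PER_KWH = 0.4          # assumed grid carbon intensity (kg CO2e/kWh)
    EMBODIED_RATIO = 0.5                     # embodied carbon ~50% of operational

    operational_tonnes = TRAINING_ENERGY_KWH * GRID_INTENSITY_KG_PER_KWH / 1000
    embodied_tonnes = operational_tonnes * EMBODIED_RATIO
    total_tonnes = operational_tonnes + embodied_tonnes

    print(f"Operational: {operational_tonnes:,.0f} t CO2e")
    print(f"Embodied:    {embodied_tonnes:,.0f} t CO2e")
    print(f"Total:       {total_tonnes:,.0f} t CO2e")

The same run on a low-carbon grid, such as the largely nuclear French grid on which BLOOM was trained, would shrink the operational term by an order of magnitude; the embodied emissions of the hardware, however, are incurred regardless of where the model is trained, which is one reason they warrant separate accounting.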
5.3 Beyond Carbon: Water Consumption and E-Waste
The environmental impact of AI extends beyond carbon emissions. Data centers require vast quantities of fresh water for cooling their high-performance computing equipment. It has been estimated that for every kilowatt-hour of energy a data center consumes, it may need around two liters of water for cooling.53 In an era of increasing water scarcity, this level of consumption can place significant strain on local municipal water supplies and ecosystems, particularly in the arid regions where many data centers are located.53
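Applying the roughly two-liters-per-kilowatt-hour figure cited above to the 1,287 MWh training run discussed in Section 5.1 gives a sense of scale. The sketch is purely illustrative and ignores large real-world differences in cooling technology and climate.

    # Illustrative cooling-water estimate (assumes ~2 L per kWh, as cited above).
    TRAINING_ENERGY_KWH = 1_287_000
    LITERS_PER_KWH = 2.0

    liters = TRAINING_ENERGY_KWH * LITERS_PER_KWH
    print(f"Roughly {liters / 1_000_000:.1f} million liters of cooling water")

That is on the order of 2.6 million liters for a single training run, before any of the ongoing water demand of inference is counted.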
Furthermore, the rapid pace of innovation in AI is creating a significant electronic waste (e-waste) problem. The demand for ever-more-powerful processors leads to rapid hardware obsolescence. As new generations of GPUs are released, older models are decommissioned, contributing to the growing global stream of hazardous e-waste, of which less than a quarter is properly recycled.55
The environmental impact of generative AI represents a classic “tragedy of the commons.” The user experience is clean, digital, and seemingly weightless, with no friction or visible cost associated with running a query. The actual environmental costs—in terms of energy consumption, carbon emissions, and water usage—are incurred in distant, anonymous data centers, completely disconnected from the user’s action.53 As one source notes, “an everyday user doesn’t think too much about that” because the interface provides no feedback on the impact of their actions.53 Each individual query has a small but non-zero cost. However, when multiplied by billions of users making trillions of queries, the aggregate environmental demand becomes colossal and unsustainable.54 Without significant regulatory pressure for transparency and carbon pricing, or a major technological breakthrough in computational efficiency, the environmental footprint of the AI sector is on a trajectory that will inevitably clash with global climate goals.
5.4 Mitigation Efforts and the Transparency Gap
In response to these concerns, major technology companies are investing in mitigation strategies. These include efforts to improve the energy efficiency of their models and hardware, and commitments to power their data centers with 24/7 carbon-free energy and to replenish more water than they consume.57 However, these efforts are running against the sheer scale of the growth in demand. An August 2025 analysis from Goldman Sachs Research forecasted that about 60% of the increasing electricity demand from data centers will still be met by burning fossil fuels.56
A significant barrier to accountability is the general lack of transparency from many AI companies regarding their specific energy consumption and environmental data. This makes a full, independent accounting of the industry’s footprint difficult for users, researchers, and regulators, and hinders the ability to make informed choices about which services to use.54
VI. Synthesis and Strategic Implications for Businesses and Individual Users
The analysis of the claims surrounding generative AI reveals a technology landscape fraught with legal ambiguity, economic disruption, and significant hidden costs. The initial social media post, while emotionally charged, correctly identifies the core areas of concern. For both businesses and individual users, navigating this landscape requires a strategic shift in perspective: from viewing AI as a simple “tool” to understanding it as a complex “service” relationship with profound implications. This final section synthesizes the report’s findings into a set of actionable recommendations.
6.1 Strategic Guidance for Businesses
For businesses seeking to leverage generative AI, the potential productivity gains must be carefully weighed against a range of legal, commercial, and reputational risks. A proactive and strategic approach is essential.
Vendor Due Diligence is Paramount: Businesses must abandon the “tool” mindset and treat AI providers as critical service partners. This necessitates rigorous due diligence that goes far beyond a simple price and feature comparison. Scrutinize the provider’s Terms of Service, data privacy policies, security certifications, and data residency options. For high-risk applications, demand transparency regarding the provenance of their training data. A provider unable or unwilling to provide assurances about the legality of its data sources represents a significant supply chain risk.
Mitigating IP and Copyright Risk: The most prudent strategy is to assume that raw outputs from generative AI are not protected by copyright and may be in the public domain. Consequently, businesses should establish clear internal policies prohibiting the use of unedited AI outputs for core, defensible intellectual property, such as company logos, key product names, or unique brand identifiers. The risk of creating a commercially valuable asset that cannot be legally defended against imitation by a competitor is too great.
Develop a Tiered Use-Case Policy: Not all AI use cases carry the same level of risk. Businesses should develop a tiered policy that defines acceptable uses.
Low-Risk: Internal brainstorming, summarizing public documents, generating first drafts for heavy human revision, and assisting with code generation where the final code is thoroughly reviewed by a human developer.
High-Risk: Generating final creative work for public release, creating content that requires IP protection, inputting any client-confidential or proprietary company data, and any use in regulated industries without specific legal review.
Contractual Scrutiny and Negotiation: Do not blindly accept the default click-through Terms of Service for enterprise-level deployment. Pay meticulous attention to clauses governing ownership of inputs and outputs, data usage rights, and, most importantly, indemnification. Seek strong indemnification clauses where the AI provider agrees to defend and cover the costs if your business is sued for copyright infringement arising from the use of their service.
Account for the Total Cost of Ownership: The subscription fee is only one part of the cost. Businesses must also account for the costs of implementation, employee training, legal review of policies, and the potential financial and reputational costs of a data breach or copyright lawsuit. The promise of efficiency must be balanced against these tangible and intangible costs.
6.2 Practical Guidance for Individual Users and Creative Professionals
For individual users, particularly those in the creative professions, generative AI presents both a powerful new capability and a direct challenge to their livelihood. A clear-eyed understanding of the technology’s limitations is crucial for its responsible use.
Understand the “Deal” You Are Making: Recognize that when you use a generative AI service, you are not buying a tool; you are entering into a service agreement with significant limitations. The “ownership” you are granted over outputs is likely a contractual permission to use, not a defensible legal right that you can enforce against others. In the US, your creation is likely in the public domain. In the UK, it may have weak and temporary protection with ambiguous ownership. Do not expect to be able to license your AI-generated work exclusively or prevent others from using it.
Protect Your Intellectual Property: Be extremely cautious about what you input into public AI services. Never upload sensitive personal information, confidential client work, or your own original, unpublished creative works. The ToS may grant the provider a broad license to use your inputs to train their models, and you lose direct control over your data the moment you upload it.
Use AI as a Co-pilot, Not an Autopilot: The most effective and legally safest way to use generative AI is as a powerful assistant to augment, not replace, your own creativity. Leverage it for brainstorming, overcoming creative blocks, generating initial concepts, or automating tedious parts of your workflow. However, to establish your own authorship and create a copyrightable work, the final product must be the result of your own substantial creative input, skill, and judgment. The more you transform, edit, select, arrange, and add your own original expression to the AI’s output, the stronger your claim to authorship becomes.
Advocate for Your Rights and the Value of Human Creativity: The legal and economic frameworks governing AI are being forged right now. This is a critical moment for creative professionals to make their voices heard. Support industry organizations—such as the Association of Photographers, the Society of Authors, the Writers Guild, and others—that are actively lobbying for clearer legislation, advocating for fair and transparent licensing frameworks, and litigating to protect the rights of creators. The future value of human creativity in the age of AI will depend on the collective action taken today.
Works cited
SaaS vs. traditional software business models: How are they different? - Vendr, accessed October 27, 2025, https://www.vendr.com/blog/software-business
Is Software a Good or Service? +6 Factors - umn.edu, accessed October 27, 2025, https://ddg.wcroc.umn.edu/is-software-a-good-or-service/
What is Software-as-a-Service (SaaS)? | Traverse Legal, accessed October 27, 2025, https://www.traverselegal.com/blog/what-is-software-as-a-service/
Advantages and disadvantages of Software as a Service (SaaS) | nibusinessinfo.co.uk, accessed October 27, 2025, https://www.nibusinessinfo.co.uk/content/advantages-and-disadvantages-software-service-saas
What Is Software as a Service (SaaS)? - IBM, accessed October 27, 2025, https://www.ibm.com/think/topics/saas
The Pros and Cons of Software as a Service (SaaS) - Insight, accessed October 27, 2025, https://www.insight.com/en_US/content-and-resources/article/the-pros-and-cons-of-software-as-a-service.html
Generative AI – the essentials | The Law Society, accessed October 27, 2025, https://www.lawsociety.org.uk/topics/ai-and-lawtech/generative-ai-the-essentials
What’s the difference between AI and generative AI? | Wolters Kluwer, accessed October 27, 2025, https://www.wolterskluwer.com/en/expert-insights/whats-the-difference-between-ai-and-generative-ai
Trusted legal AI tools to power research, drafting, and analysis, accessed October 27, 2025, https://legal.thomsonreuters.com/blog/legal-ai-tools-essential-for-attorneys/
How Do You Compare Legal AI Tools? - Law.co, accessed October 27, 2025, https://law.co/blog/how-do-you-compare-legal-ai-tools
GenAI is Not a Legal Tool. Or is it? - Boston Bar Association, accessed October 27, 2025, https://bostonbar.org/journal/genai-is-not-a-legal-tool-or-is-it/
Generative Artificial Intelligence and Copyright Law | Congress.gov ..., accessed October 27, 2025, https://www.congress.gov/crs-product/LSB10922
U.S. Copyright Office Issues Guidance on Generative AI Training | Insights | Jones Day, accessed October 27, 2025, https://www.jonesday.com/en/insights/2025/05/us-copyright-office-issues-guidance-on-generative-ai-training
Copyright Office Weighs In on AI Training and Fair Use | Skadden, Arps, Slate, Meagher & Flom LLP, accessed October 27, 2025, https://www.skadden.com/insights/publications/2025/05/copyright-office-report
Copyright and AI training data—transparency to the rescue? | Journal of Intellectual Property Law & Practice | Oxford Academic, accessed October 27, 2025, https://academic.oup.com/jiplp/article/20/3/182/7922541
AI Copyright Lawsuits - UBC Wiki, accessed October 27, 2025, https://wiki.ubc.ca/AI_Copyright_Lawsuits
Reddit sues AI company Perplexity and others for ‘industrial-scale’ scraping of user comments, accessed October 27, 2025, https://apnews.com/article/reddit-perplexity-ai-copyright-scraping-lawsuit-3ad8968550dd7e11bcd285a74fb6e2ff
A Tale of Three Cases: How Fair Use Is Playing Out in AI Copyright Lawsuits | Insights, accessed October 27, 2025, https://www.ropesgray.com/en/insights/alerts/2025/07/a-tale-of-three-cases-how-fair-use-is-playing-out-in-ai-copyright-lawsuits
Aravind Srinivas’ Perplexity AI sued by Reddit for data scraping, says, ‘We will play fair but won’t…”, accessed October 27, 2025, https://www.financialexpress.com/life/technology-aravind-srinivas-perplexity-ai-sued-by-reddit-for-data-scraping-says-we-will-play-fair-but-wont-4018902/
The fight between AI companies and the websites that hate them, accessed October 27, 2025, https://www.washingtonpost.com/technology/2025/10/24/reddit-perplexity-lawsuit-ai-fairness/
The Boundaries of Playing “Fair” When Training AI - Clifford Chance, accessed October 27, 2025, https://www.cliffordchance.com/insights/resources/blogs/talking-tech/en/articles/2025/03/the-boundaries-of-playing-fair-when-training-ai.html
Fair Use and AI Training: Two Recent Decisions Highlight the Complexity of This Issue, accessed October 27, 2025, https://www.skadden.com/insights/publications/2025/07/fair-use-and-ai-training
Copyright Office Issues Key Guidance on Fair Use in Generative AI Training - Wiley Law, accessed October 27, 2025, https://www.wiley.law/alert-Copyright-Office-Issues-Key-Guidance-on-Fair-Use-in-Generative-AI-Training
Two U.S. Courts Address Fair Use in Generative AI Training Cases | Insights | Jones Day, accessed October 27, 2025, https://www.jonesday.com/en/insights/2025/06/two-us-courts-address-fair-use-in-genai-training-cases
Generative AI Lawsuits Timeline: Legal Cases vs. OpenAI, Microsoft, Anthropic, Google, Nvidia, Perplexity, Salesforce, Apple and More - Sustainable Tech Partner for IT Service Providers, accessed October 27, 2025, https://sustainabletechpartner.com/topics/ai/generative-ai-lawsuit-timeline/
Copyright Exceptions and Fair Dealing - Library - The University of Edinburgh, accessed October 27, 2025, https://library.ed.ac.uk/library-help/copyright/copyright-exceptions-and-fair-dealing
The text and data mining (TDM) exception | Library Services - University College London, accessed October 27, 2025, https://www.ucl.ac.uk/library/learning-teaching-support/ucl-copyright-advice/copyright-depth/text-and-data-mining-tdm-exception
The text and data mining copyright exception in the UK “for the sole purpose of research for a non-commercial purpose”: what does it cover? - Bristows, accessed October 27, 2025, https://www.bristows.com/news/the-text-and-data-mining-copyright-exception-in-the-uk-for-the-sole-purpose-of-research-for-a-non-commercial-purpose-what-does-it-cover/
UK fails to agree AI/copyright code of practice - Linklaters, accessed October 27, 2025, https://www.linklaters.com/en/insights/blogs/digilinks/2024/february/uk-fails-to-agree-ai---copyright-code-of-practice
Copyright and artificial intelligence: Impact on creative industries - House of Lords Library, accessed October 27, 2025, https://lordslibrary.parliament.uk/copyright-and-artificial-intelligence-impact-on-creative-industries/
Copyright and Artificial Intelligence - GOV.UK, accessed October 27, 2025, https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence
UK Government proposes copyright and AI reform mirroring EU approach - Linklaters, accessed October 27, 2025, https://www.linklaters.com/en/insights/blogs/digilinks/2025/january/uk-government-proposes-copyright-and-ai-reform-mirroring-eu-approach
Copyright Blocks the UK’s AI Ambitions - CEPA, accessed October 27, 2025, https://cepa.org/article/copyright-blocks-the-uks-ai-ambitions/
Mind the Copyright: The UK’s AI and Copyright Conundrum | Articles - Finnegan, accessed October 27, 2025, https://www.finnegan.com/en/insights/articles/mind-the-copyright-the-uks-ai-and-copyright-conundrum.html
‘Every kind of creative discipline is in danger’: Lincoln Lawyer author on the dangers of AI, accessed October 27, 2025, https://www.theguardian.com/technology/2025/oct/20/author-michael-connelly-lincoln-lawyer-ai
Copyright and Artificial Intelligence | U.S. Copyright Office, accessed October 27, 2025, https://www.copyright.gov/ai/
Who Owns the Copyright to AI-Generated Works?, accessed October 27, 2025, https://copyrightalliance.org/faqs/artificial-intelligence-copyright-ownership/
Ownership of AI-generated content in the UK - A&O Shearman, accessed October 27, 2025, https://www.aoshearman.com/en/insights/ownership-of-ai-generated-content-in-the-uk
Who owns the content generated by AI? - Marks & Clerk, accessed October 27, 2025, https://www.marks-clerk.com/insights/latest-insights/102k38x-who-owns-the-content-generated-by-ai/
Ownership Issues In AI-Generated Content: Who Owns The Copyright? - Solicitors Brighton, accessed October 27, 2025, https://www.moore-law.co.uk/ownership-issues-in-ai-generated-content-who-owns-the-copyright/
Economic potential of generative AI - McKinsey, accessed October 27, 2025, https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier
The Projected Impact of Generative AI on Future Productivity Growth, accessed October 27, 2025, https://budgetmodel.wharton.upenn.edu/issues/2025/9/8/projected-impact-of-generative-ai-on-future-productivity-growth
A new look at the economics of AI | MIT Sloan, accessed October 27, 2025, https://mitsloan.mit.edu/ideas-made-to-matter/a-new-look-economics-ai
The AI bubble: What it can & cannot do, accessed October 27, 2025, https://m.economictimes.com/news/company/corporate-trends/the-ai-bubble-what-it-can-cannot-do/articleshow/124777753.cms
Full article: AI and work in the creative industries: digital continuity or discontinuity?, accessed October 27, 2025, https://www.tandfonline.com/doi/full/10.1080/17510694.2024.2421135
Creative industry workers feel job worth and security under threat from AI - Queen Mary University of London, accessed October 27, 2025, https://www.qmul.ac.uk/media/news/2025/queen-mary-news/pr/creative-industry-workers-feel-job-worth-and-security-under-threat-from-ai-.html
How might generative AI change creative jobs? - The World Economic Forum, accessed October 27, 2025, https://www.weforum.org/stories/2023/05/generative-ai-creative-jobs/
Generative AI and Creative Work: Narratives, Values, and Impacts - arXiv, accessed October 27, 2025, https://arxiv.org/html/2502.03940v1
Artificial intelligence and new technology in creative industries - POST Parliament, accessed October 27, 2025, https://post.parliament.uk/artificial-intelligence-and-new-technology-in-creative-industries/
Generative AI As A Killer Of Creative Jobs? Hold That Thought - Forbes, accessed October 27, 2025, https://www.forbes.com/sites/joemckendrick/2024/06/23/generative-ai-as-a-killer-of-creative-jobs-hold-that-thought/
(PDF) THE IMPACT OF GENERATIVE AI ON CREATIVE INDUSTRIES: REVOLUTIONIZING ART, WRITING, AND MUSIC - ResearchGate, accessed October 27, 2025, https://www.researchgate.net/publication/391489737_THE_IMPACT_OF_GENERATIVE_AI_ON_CREATIVE_INDUSTRIES_REVOLUTIONIZING_ART_WRITING_AND_MUSIC
AI-driven cybercrime threatens India’s $5 trillion dream, accessed October 27, 2025, https://m.economictimes.com/tech/artificial-intelligence/ai-driven-cybercrime-threatens-indias-5-trillion-dream/articleshow/124834185.cms
Explained: Generative AI’s environmental impact | MIT News, accessed October 27, 2025, https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
Generative AI: energy consumption soars - Polytechnique Insights, accessed October 27, 2025, https://www.polytechnique-insights.com/en/columns/energy/generative-ai-energy-consumption-soars/
Environmental impact of artificial intelligence - Wikipedia, accessed October 27, 2025, https://en.wikipedia.org/wiki/Environmental_impact_of_artificial_intelligence
Responding to the climate impact of generative AI | MIT News, accessed October 27, 2025, https://news.mit.edu/2025/responding-to-generative-ai-climate-impact-0930
towards measuring and mitigating the environmental impacts of large language models | cifar, accessed October 27, 2025, https://cifar.ca/wp-content/uploads/2023/09/Towards-Measuring-and-Mitigating-the-Environmental-Impacts-of-Large-Language-Models.pdf
LLMCarbon: Modeling the End-To-End Carbon Footprint of Large Language Models - arXiv, accessed October 27, 2025, https://arxiv.org/html/2309.14393v2
Estimating the Carbon Footprint of BLOOM, a 176B Parameter Language Model - Journal of Machine Learning Research, accessed October 27, 2025, https://www.jmlr.org/papers/volume24/23-0069/23-0069.pdf
Measuring the environmental impact of AI inference | Google Cloud Blog, accessed October 27, 2025, https://cloud.google.com/blog/products/infrastructure/measuring-the-environmental-impact-of-ai-inference/





