Apple has long made a name for itself by doing things differently: sleeker design, stronger privacy protections and a sense of responsibility that distinguished it from its competitors. Now the company is embroiled in a controversy that feels all too familiar.
Last week, authors Grady Hendrix and Jennifer Roberson filed a putative class action against Apple in federal court. They claim Apple used their copyrighted books without permission or payment to train its “OpenELM” artificial intelligence model. The suit alleges that Apple relied on a dataset widely understood to contain pirated materials, ingesting entire books into its models without permission, credit or compensation.
The lawsuit places Apple alongside Microsoft, Meta and OpenAI, which are already facing suits from writers, publishers and news organizations. Just days earlier, Anthropic agreed to pay $1.5 billion to settle a similar class action, the largest known copyright recovery to date.
The deal signaled that plaintiffs have both the legal momentum and the financial muscle to extract concessions from tech firms. Apple, which joined the generative AI race later and more reluctantly than its rivals, now faces the same issue. It must explain why writers’ work should be exploited as raw material for a hugely profitable business.
The stakes are large. Large language models are trained on vast volumes of text, and published books are among the most valuable sources of that data. To technology companies, this kind of content is priceless. To authors, books are not datasets. They are lifework and the foundation of their livelihood. Watching those works reconstituted into AI software capable of summarizing or even imitating them, without so much as a request for permission, understandably raises questions of ownership and fairness.
Apple’s position is especially vulnerable. It has spent decades building goodwill by presenting itself as the consumer’s guardian, promoting privacy and ethical design choices. A narrative that it turned to pirated books to advance its AI is directly at odds with that reputation. Investors and consumers might begin to wonder whether Apple is willing to sacrifice its principles in a bid to catch up in a rapidly changing market.
The legal issue remains unresolved. If the courts rule in favor of authors, the economic ramifications for technology companies would be enormous. Mandatory licensing deals would reshape the economics of AI, forcing companies to negotiate directly with publishers and writers. If the courts decide in the companies’ favor, the precedent would give AI developers broad leeway to use publicly available texts, even if the original creators never consented. Either outcome would have lasting implications for intellectual property in the digital era.
What this lawsuit points to most sharply is that there is no “free” data. Behind every line of text is a writer’s craft and an ownership claim. Innovation does not have to stop, but it will have to come to terms with the fact that technological progress is being built on the unpaid labor of others. Firms that continue to disregard this fact may prosper in the short run, but at the cost of damaging both their reputations and their standing with the public in the long run.
Apple can settle, as Anthropic did, or fight the case in court. Whatever its strategy, the question remains: how should society value creative labor in an era when machines can learn from it endlessly, instantaneously and without consent? For Apple, the answer may decide not only the outcome of a lawsuit but also the credibility of its entire AI strategy.










