X, formerly known as Twitter, has introduced changes to its privacy policy stating that users’ content will now be available to train AI models. The change takes effect Nov. 15, 2024, and gives X wide-ranging rights over content posted on the site. The move has opened the company to scrutiny over its approach to privacy, data ethics and intellectual property, and creative communities, privacy advocates and legal experts are already speaking out against the updated policy.
The new terms grant X a “worldwide, non-exclusive, royalty-free license” to use user content for training its machine-learning models. User-created posts could be accessed, analyzed and used to train and improve X’s AI, a pivot that puts the platform in line with others like Reddit, which have already begun licensing data as a new revenue stream. The potential revenue is substantial, as ad income has struggled to return to pre-Elon Musk acquisition levels. At the same time, licensing user data raises ethical and transparency questions, especially as AI systems become pervasive.
Perhaps most concerning to users is the ambiguous wording around how one can opt out of this practice. Until recently, privacy settings allowed users to disallow data use for AI training under a “data sharing and personalization” menu. The new terms, however, say nothing about whether or how users can turn off data sharing, raising questions about users’ control over personal information. This ambiguity has infuriated many users who fear their personal or creative content might be used without permission, and it has produced widespread calls for greater transparency and finer control over how their content is used.
The most vociferous concerns come from creative professionals such as artists, authors and photographers who use the platform to showcase their work. They fear that their work, their intellectual property, might be used to train AI models that could eventually mimic or replace human creativity and threaten their professional livelihoods. Some users have removed personal photos and artwork from their profiles in quiet, individual protest, trying to shield their content from AI repurposing. The policy also does not specify which types of accounts are protected; even private accounts may find their data used in ways that surprise them. The creative community’s backlash signals the tension between technological advancement in AI and the desire to protect intellectual property rights in the digital age.
Adding to the alarm, X has changed its terms of service to require that any disputes arising from the new policies be heard either in the U.S. District Court for the Northern District of Texas or in Tarrant County, Texas. The choice is notable because X is headquartered in Austin, more than 100 miles from Tarrant County. Some speculate that the venue may yield rulings friendly to X’s corporate interests, given that courts in that district have generally proven very conservative. Critics argue that limiting legal recourse to a specific venue may disadvantage users who live far from Texas, and they question whether the legal maneuver serves users’ best interests.
Since Musk took over, X has felt the financial squeeze of advertiser pullbacks and a weak reception to its paid subscription service. Data licensing could become a meaningful revenue stream, putting the platform in line with other tech giants exploring similar models. But while the transition provides a financial lifeline, it raises ethical questions about the commodification of data. Users increasingly worry that their data may be monetized in ways beyond their control or understanding, heightening calls for ethical data practices in tech.
The issues surrounding X’s AI data policies are not unique to the site. Tech giants like Google and Microsoft have also come under fire for using user data to advance AI tools. The rapid rise of AI has left users and regulators scrambling to keep up with its ethical implications. As AI development continues, platforms that depend on user-generated content face an urgent need to strike the right balance between technological innovation and moral consideration.
With the Nov. 15 deadline looming, X is under increasing pressure to clarify its policies on user data privacy, opt-out options and AI data practices. The stakes are higher than ever, with users clamoring for transparency and greater control, and X’s response could retain or break its community’s trust. How the company balances its AI ambitions against users’ privacy concerns will be closely scrutinized by both its user base and the broader tech industry. For many, the update raises significant questions about the direction of AI, data privacy and ethical practices in social media, concerns likely only to grow as these technologies become more embedded in daily life.