OpenAI Faces Tighter Regulation Under EU’s Digital Services Act
Artificial intelligence is no longer operating on the edges of public life. It is now shaping how people search for information, solve personal problems, write business content, and even interpret world events. That is exactly why European regulators are paying closer attention to OpenAI. According to recent reporting, OpenAI and ChatGPT could face tighter obligations under the European Union’s Digital Services Act, or DSA, if ChatGPT Search is formally treated as a very large online search engine.
This development is important because the DSA is not a minor technical rule. It is one of the European Union’s most powerful digital laws, designed to force large online services to become more transparent, more accountable, and more responsive to public-risk concerns. If OpenAI falls more deeply into that framework, the company may have to meet stronger obligations related to transparency, user safety, illegal content reporting, and systemic risk controls.
The timing is not accidental. Europe has spent the last few years building a legal structure for the digital era, and AI is now at the center of that strategy. The debate around OpenAI and the EU AI Act has already shown that Brussels wants to regulate advanced AI before it becomes too deeply embedded in society without clear guardrails. The DSA adds a second layer by focusing less on the model itself and more on the public impact of the service built around it.
Why the DSA Changes the Story
For years, many users saw ChatGPT mainly as a chatbot. But once an AI assistant starts functioning like a search interface, a recommendation engine, and an information gateway, regulators begin to view it differently. That is the core of the current issue. Reuters reported that ChatGPT Search is set to be classified as a very large online search engine under the EU’s Digital Services Act, which would trigger tighter obligations for OpenAI.
That possible classification matters because the DSA imposes heavier responsibilities on platforms and search services that reach major scale in Europe. OpenAI’s own DSA help page states that ChatGPT Search had around 120.4 million average monthly active recipients in the European Union during the six-month period ending September 30, 2025. That is well above the 45 million average monthly active recipients at which the DSA designates a service as a very large online platform or search engine.
Once a service enters that category, it is no longer treated as a fast-growing tech product that can adjust later. It becomes a system that may influence elections, public health conversations, media visibility, consumer choices, and social trust. That is why the EU wants stronger oversight before such tools become impossible to govern effectively.
The OpenAI EU AI Act Connection
The OpenAI EU AI Act discussion is closely linked to the DSA, even though the two laws are not identical. The AI Act focuses on AI systems, risk categories, transparency duties, and provider responsibilities. The DSA focuses on how digital services operate in the public sphere, especially when they influence what users see or trust online.
OpenAI’s help documentation says it is committed to complying with applicable EU AI Act requirements and to helping customers understand the information they may need for their own compliance work. That statement sounds straightforward, but in reality, compliance under the AI Act is highly technical and may vary depending on the product, the user type, and the deployment context.
The practical issue for OpenAI is that it is now being pulled into two powerful regulatory tracks at once. Under the AI Act, the company must think like a model provider. Under the DSA, it may increasingly be treated like a major digital intermediary with social responsibilities that extend beyond model performance. That combination creates a far more demanding legal environment than a standard consumer software company would normally face.
Privacy Concerns Are Fueling Pressure
One major reason OpenAI is under such close watch is the deeply personal nature of the data users share with ChatGPT. Many people do not use AI in the same way they use a search engine. They use it like a diary, a therapist, a personal adviser, or a confidential assistant. That creates a different privacy risk profile from ordinary software usage.
The concern becomes more serious when users assume their conversations are ephemeral, private, or automatically erased. In practice, retention rules are more nuanced. OpenAI’s Europe privacy policy says personal data is retained only as long as necessary for service delivery or other legitimate business purposes, but it also says some data may be kept longer for legal, security, fraud-prevention, or safety reasons.
This is why search interest around the OpenAI retention policy keeps growing. People want to know not only where the text they type goes, but how long it lives and whether deletion actually means deletion. In a European regulatory context, these are not side questions. They are central ones.
OpenAI Retention Policy and Deleted Chats
The phrase OpenAI deleted chats sounds simple, but the reality is more layered. Public OpenAI materials explain that deleted chats are removed from the user’s account immediately and are scheduled for permanent deletion from backend systems within 30 days, unless they have already been de-identified or must be retained for security or legal reasons. In other words, certain records may live longer when that is necessary for compliance or protection of the service.
That means the question does OpenAI keep deleted chats cannot be answered with a flat no. In ordinary circumstances, the company presents deletion as part of its standard data handling process. But it also clearly leaves room for exceptions tied to legal obligations, fraud prevention, abuse detection, audit needs, and other legitimate compliance grounds.
For regulators, wording like that matters. Europe tends to examine not only whether companies disclose policies, but whether average users can realistically understand them. If people believe a deleted conversation disappears instantly while legal or safety exceptions still apply behind the scenes, regulators may ask whether the disclosure is truly clear enough for informed consent.
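To make that layered policy concrete, here is a purely hypothetical Python sketch of the kind of retention logic the public materials describe: deletion removes a chat from the account at once, permanent purging happens within a stated window, and legal or security exceptions can hold data back. None of the names below come from OpenAI’s actual systems; the 30-day window reflects the figure in OpenAI’s public materials, but everything else is illustrative.

```python
# Hypothetical sketch only -- not OpenAI's implementation. It models the
# retention behavior described in public materials: deleted chats leave
# the account immediately, are purged from backend systems within a
# stated window (30 days here), unless an exception applies.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

PURGE_WINDOW = timedelta(days=30)  # publicly stated deletion window

@dataclass
class DeletedChat:
    chat_id: str
    deleted_at: datetime
    legal_hold: bool = False      # e.g. litigation or regulatory order
    security_flag: bool = False   # e.g. fraud or abuse investigation

def due_for_permanent_deletion(chat: DeletedChat, now: datetime) -> bool:
    """True if the chat should now be purged from backend systems."""
    if chat.legal_hold or chat.security_flag:
        return False  # exception paths keep the record longer
    return now - chat.deleted_at >= PURGE_WINDOW

now = datetime.now(timezone.utc)
stale = DeletedChat("c1", deleted_at=now - timedelta(days=45))
held = DeletedChat("c2", deleted_at=now - timedelta(days=45), legal_hold=True)
print(due_for_permanent_deletion(stale, now))  # True: window has passed
print(due_for_permanent_deletion(held, now))   # False: legal hold applies
```

The point of the sketch is the branch structure: a flat promise of deletion would have no exception paths, and it is precisely those exception paths that regulators scrutinize.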
Does OpenAI Train on API Data
Another major issue is captured in a question users search for constantly: does OpenAI train on API data? This question is especially important for startups, journalists, agencies, developers, and businesses that integrate AI into their workflows. OpenAI’s business privacy materials say it does not train on business data by default, and developer-facing documentation describes data-control options, including configurations such as zero data retention for qualified use cases.
That distinction matters because many people assume all prompts are treated the same way across all OpenAI products. They are not. Consumer use, enterprise use, and API use can involve different terms, settings, and data pathways. A person casually using ChatGPT and a business using the API under a controlled configuration are not necessarily entering the same privacy environment.
This is exactly why regulators are likely to demand more precision from OpenAI in Europe. Broad privacy language is no longer enough when millions of people and businesses rely on these tools in very different contexts. The clearer OpenAI can explain how each product handles prompts, retention, and training, the stronger its compliance position will be.
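For developers, the difference is visible even at the request level. Here is a minimal sketch using the official OpenAI Python SDK; the `store` flag shown does exist on the Chat Completions endpoint, but it only controls whether a completion is persisted for dashboard features. It is not a training opt-out by itself, since training defaults and zero-data-retention arrangements are governed at the account or contract level. The model name is a placeholder.

```python
# Minimal sketch, assuming the official `openai` Python SDK and an
# OPENAI_API_KEY in the environment. The request-level `store` flag asks
# the API not to persist this completion for dashboard features such as
# evals or distillation. It is NOT a training opt-out on its own:
# training defaults and zero-data-retention terms are account-level.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize the DSA in one sentence."}],
    store=False,  # do not retain this completion for later retrieval
)
print(response.choices[0].message.content)
```

Even a small control like this shows how much the privacy posture depends on which product surface, which settings, and which contractual terms are in play.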
OpenAI Trust Center and Public Confidence
The OpenAI Trust Center has become a key part of how the company presents itself to customers, partners, and regulators. OpenAI’s trust and transparency materials emphasize security, privacy, and compliance resources, including content related to governance and legal obligations. In theory, this helps show that the company is not ignoring concerns around safety and accountability.
But a trust center alone does not end regulatory concern. Trust must be supported by enforceable processes, clear reporting channels, and verifiable safeguards. Under the DSA, authorities will care less about branding language and more about whether OpenAI has systems that work consistently at scale.
That is why the OpenAI Trust Center is important, but not sufficient by itself. It can support confidence, yet it cannot replace full legal compliance, meaningful transparency, and independent oversight where required.
People Are Telling ChatGPT About Their Most Intimate Problems
One of the most sensitive dimensions of this story is behavioral, not technical. People are telling ChatGPT about their most intimate problems because the interface feels immediate, nonjudgmental, private, and always available. Public discussion around AI companionship, emotionally intense use, and vulnerable users has made regulators far more alert to the psychological dimension of AI systems.
This matters because once users begin to depend on an AI tool for emotional processing, relationship guidance, or personal crisis reflection, the stakes become much higher. The platform is no longer just helping someone draft an email or summarize a report. It may be shaping personal decisions in moments of stress, confusion, or loneliness.
European regulators are unlikely to ignore that shift. If an AI service attracts intimate disclosures at scale, then data retention, safety messaging, age protections, and responsible design become far more urgent. This is one reason the broader OpenAI regulatory debate is moving beyond pure innovation and into social responsibility.
Why Europe Is Taking a Harder Line
The EU’s tougher stance also reflects a wider political philosophy. European policymakers generally prefer regulating powerful digital systems before they become too deeply rooted to challenge. That approach has already shaped privacy law, competition law, and platform governance across the bloc. AI is simply the next frontier.
In that context, OpenAI is an obvious focus. Its products are influential, rapidly growing, and increasingly tied to search-like behavior. Once an AI service begins to mediate knowledge at large scale, European institutions see it as part of the public information environment, not just a private innovation story.
The discussion sometimes overlaps with broader strategic concerns such as digital sovereignty and dependence on external technology ecosystems. Even seemingly distant debates, such as EU-China tensions over rare earths, reflect the same European mindset: control over future technologies is now treated as both an economic issue and a geopolitical one.
What Comes Next
OpenAI is entering a period where regulatory pressure in Europe will probably increase rather than fade. If ChatGPT Search is officially treated as a very large online search engine, the company may need to strengthen risk assessments, reporting processes, and public accountability under the DSA. At the same time, the AI Act will continue shaping the rules for model governance and provider responsibilities.
For users, businesses, and publishers, the message is clear. AI tools are no longer operating in a lightly supervised zone. What people share with them, how platforms retain it, and how AI systems influence public knowledge are now all subjects of serious regulatory attention.
In the end, the story is bigger than OpenAI alone. Europe is trying to define what responsible AI should look like before AI becomes too embedded in daily life to regulate effectively. Whether OpenAI sees this as a burden or an opportunity, it will likely shape the company’s future in one of the world’s most important digital markets.

