EU Steps Up AI Oversight
OpenAI's ChatGPT is being designated a 'very large online search engine' (VLOSE) by the European Union under its Digital Services Act (DSA). This significant regulatory step, first reported by Germany's Handelsblatt, comes as ChatGPT reportedly attracts over 120 million monthly active users in the EU, well above the 45 million average monthly active users that triggers VLOSE designation.
What the Search Engine Label Means for ChatGPT
As a VLOSE, OpenAI now faces a more demanding compliance framework. This includes assessing systemic risks related to illegal content, fundamental rights, public safety, and well-being; putting risk mitigation measures in place; undergoing annual independent audits; and providing vetted researchers with data access on request. OpenAI must also make its recommender systems transparent, offering users at least one option that does not rely on profiling. These requirements mirror those already applied to major search platforms such as Google Search and Microsoft Bing, which hold the same designation.
OpenAI's Strategic Crossroads
This regulatory development arrives at a pivotal moment for OpenAI, which recently secured significant funding that boosted its valuation. The EU's approach, centered on risk management and fundamental rights, contrasts with the typically innovation-first stance seen in the United States. By framing ChatGPT as a search engine, the EU is extending its existing digital governance model to advanced AI systems, aiming to foster technological progress while safeguarding society. This aligns with the EU's wider AI Act strategy, which categorizes AI systems by risk level.
Adapting to these enhanced transparency and accountability rules will be a crucial challenge for OpenAI in the European market. This could influence development timelines and feature rollouts specifically for the region. OpenAI has already begun establishing its European presence by setting up OpenAI Ireland Limited.
Concerns Over Innovation Pace
Some observers worry that the VLOSE classification could slow OpenAI's rapid innovation cycle in the EU. The strict compliance demands, such as risk assessments and researcher data access, may increase operational costs and complexity. This oversight could delay feature rollouts or restrict certain data-use strategies for model training in Europe, putting the region at a disadvantage compared to jurisdictions with lighter regulation. Critics suggest that while such comprehensive rules are intended to protect users, they might inadvertently hinder the very innovation they aim to govern.
The EU's deliberate push for AI regulation, through both the DSA and the AI Act, is designed to shape global AI governance. However, this could lead to market divisions and create compliance challenges for international tech companies. Such frameworks might compel firms to choose between global consistency and tailored EU adaptations, potentially impacting the availability and full functionality of advanced AI tools for European users.