10 Key Changes in the EU AI Act Deal You Need to Know About

The European Union has reached a provisional agreement to soften its groundbreaking AI Act, giving businesses more time to comply with high-risk rules and reducing regulatory overlaps. This deal, struck between EU member states and the European Parliament, aims to balance innovation with safety. Here are ten critical updates that reshape the timeline, scope, and enforcement of this landmark legislation.

1. Extended Deadlines for High-Risk AI Systems

Under the new agreement, the original compliance deadline of August 2, 2026, has been pushed back. Stand-alone high-risk AI systems now have until December 2, 2027, while AI used in products governed by EU sectoral safety rules (e.g., medical devices, machinery) must comply by August 2, 2028. This extra time allows enterprises to adapt their systems and processes without rushing. The reprieve is especially welcome for companies developing complex AI applications, as they can now align with both the AI Act and existing sectoral regulations. However, until formal adoption, the original deadline technically remains in effect.

Source: www.computerworld.com

2. Removal of Overlapping Rules for AI in Machinery

A key source of confusion was dual regulation under both the AI Act and existing machinery safety directives. The provisional deal eliminates this overlap: AI features integrated in machinery products will now follow only sector-specific safety rules. To ensure equivalent protection, the new framework includes safeguards that maintain health and safety standards. This simplification reduces administrative burden and legal uncertainty for manufacturers, who previously faced contradictory requirements. It also aligns with industry calls for a coherent, risk-based approach.

3. Narrower Definition of ‘Safety Component’

What counts as a “safety component” under the AI Act has been tightened. AI features that merely assist users or improve performance—without creating health or safety risks if they fail—will no longer automatically be classified as high-risk. For example, an AI that suggests settings on a machine but does not control critical safety functions would fall outside the most stringent rules. This change prevents overregulation of low-risk applications, allowing innovation to flourish while still protecting against genuine dangers.

4. Mechanism to Resolve Overlaps with Sectoral Laws

For broader sectors like medical devices, toys, lifts, machinery, and watercraft, the co-legislators have created a formal mechanism to resolve conflicts between the AI Act and existing EU laws. This avoids a patchwork of requirements and ensures a single, coherent compliance path. The mechanism will be activated when an AI system is already governed by sector-specific regulations, allowing the AI Act’s provisions to be adapted or deferred as appropriate. Companies in these high-stakes industries can now plan with greater certainty.

5. Delayed Deadline for AI Regulatory Sandboxes

EU member states were originally required to establish AI regulatory sandboxes—controlled environments for testing innovative AI before full deployment—by August 2, 2026. The provisional deal pushes this date back by one year to August 2, 2027. This delay gives national authorities more time to design effective sandboxes that provide meaningful guidance without stifling innovation. Startups and SMEs, in particular, will benefit from these safe spaces to trial high-risk applications under regulatory oversight.

6. Earlier Watermarking Obligations for AI-Generated Content

While many deadlines were extended, watermarking obligations are being accelerated. The European Commission had proposed a start date of February 2, 2027, but the provisional deal moves it to December 2, 2026. This means developers of generative AI systems must label AI-created content (text, images, audio, video) with clear, machine-readable markers sooner than expected. The earlier date reflects the urgency of combating disinformation and ensuring transparency as synthetic content becomes ubiquitous.
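As a purely illustrative sketch of what a machine-readable marker might look like, the snippet below prepends a JSON provenance record to AI-generated text inside an HTML comment. The marker schema, field names, and the model name `example-model-v1` are all hypothetical assumptions: the AI Act does not prescribe a specific format, and real deployments would follow whatever technical standard is ultimately endorsed.

```python
import json
from datetime import datetime, timezone

def label_ai_content(text: str, model_name: str) -> str:
    """Prepend a machine-readable provenance marker to AI-generated text.

    The marker schema here is illustrative only; it is not a format
    mandated by the AI Act.
    """
    marker = {
        "ai_generated": True,          # flags the content as synthetic
        "generator": model_name,       # which system produced it
        "created": datetime.now(timezone.utc).isoformat(),
    }
    # An HTML comment keeps the marker machine-readable without
    # changing how the text renders for human readers.
    return f"<!-- ai-provenance: {json.dumps(marker)} -->\n{text}"

labeled = label_ai_content("Quarterly summary of sales figures.",
                           "example-model-v1")
print(labeled.splitlines()[0])
```

Parsers can then detect and strip the marker mechanically, while human readers see only the content itself.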

7. Exemptions Extended to Mid-Sized Companies

Originally, only small and medium-sized enterprises (SMEs) could benefit from certain exemptions, such as lighter documentation and reporting requirements. The provisional deal extends these relaxations to small mid-cap companies—firms with up to 499 employees. This change acknowledges that mid-sized businesses often face similar resource constraints as SMEs when complying with complex regulations. By broadening the exemption, the EU encourages innovation across a larger segment of the market while still maintaining oversight for the largest players.

8. Central Supervision by the EU AI Office for General-Purpose AI

General-purpose AI systems—like large language models and foundation models—will now be centrally supervised by the newly created EU AI Office. This ensures uniform enforcement across all member states, preventing a fragmented approach that could hinder cross-border deployment. The office will handle tasks such as rule interpretation, compliance checks, and coordination with national authorities. This centralized model mirrors the successful oversight structure seen in other digital regulations, like the Digital Services Act.

9. National Authorities Retain Responsibility in Specific Areas

Despite the central supervision for general-purpose AI, national authorities keep their powers over AI systems used in law enforcement, border management, judicial proceedings, and financial institutions. This division respects member states’ sovereignty in sensitive domains where local context and legal frameworks are paramount. It also ensures that use cases with direct implications for citizens’ rights and security receive the closest scrutiny from authorities familiar with national legal systems.

10. Formal Adoption Still Pending

The provisional deal is not yet law. Both the European Parliament and the Council must formally adopt the text before August 2, 2026, to make the changes official. Until then, the original August 2, 2026 deadline for high-risk systems technically remains in force. The co-legislators aim to complete the process swiftly to provide legal certainty. Once adopted, the new timelines and rules will give businesses a clearer path forward, but companies should not wait: regardless of the final dates, preparing for compliance now is the prudent course.

This deal marks a pragmatic step by the EU to balance technological progress with responsible governance. By extending deadlines, reducing overlaps, and clarifying definitions, lawmakers have responded to industry feedback while upholding the core safety and transparency goals of the AI Act. Stakeholders now have a roadmap for compliance, but the final approval is the next critical milestone.
