What AI Systems Just Became Illegal Under the EU AI Act?

Last updated: March 2026

TL;DR: The EU AI Act's outright bans on practices such as social scoring have applied since February 2, 2025, and its strict requirements for high-risk AI systems take effect in August 2026, with fines of up to €35 million or 7% of global annual turnover for the most serious violations.

Key Takeaways

- Four categories of AI practice are banned outright: social scoring, subliminal manipulation, exploitation of vulnerable groups, and real-time biometric identification in public spaces.
- High-risk AI systems must meet documentation, logging, human-oversight, and data-governance requirements by August 2, 2026.
- Fines reach €35 million or 7% of global annual turnover for prohibited practices.
- Open-source foundation models are largely exempt unless they pose systemic risks, and decentralized, transparent architectures may ease compliance.

What’s at Stake in Europe’s AI Regulation Revolution?

The European Union has drawn a line in the sand. As of March 2026, the world’s most comprehensive AI regulation — the EU AI Act — is reshaping how artificial intelligence can be developed, deployed, and used across the 27-member bloc. The Act’s outright bans on certain systems have applied since February 2025, and the disruption deepens in August 2026, when its strict requirements for high-risk systems take effect, fundamentally altering the AI landscape.

The stakes couldn’t be higher. Companies face fines up to €35 million or 7% of global annual turnover for deploying prohibited AI systems. Yet beyond the financial penalties lies a deeper question: can traditional centralized approaches to AI development navigate this complex regulatory environment while preserving innovation? The answer may require rethinking how we build, govern, and deploy AI systems entirely.

What AI Systems Are Now Completely Prohibited?

The EU AI Act establishes four categories of banned AI practices that pose unacceptable risks to fundamental rights and human dignity, with violations carrying the heaviest penalties: up to €35 million or 7% of global annual turnover, whichever is higher.
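Because the cap scales with turnover, the exposure for a large company is often far above the €35 million floor. A minimal sketch of the penalty ceiling for prohibited practices (the figures follow the Act; the function name is ours):

```python
def max_fine_prohibited(turnover_eur: float) -> float:
    """Ceiling for prohibited-practice fines under the EU AI Act:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * turnover_eur)

# A company with EUR 2B turnover: the 7% cap (EUR 140M) dominates
print(int(max_fine_prohibited(2_000_000_000)))  # 140000000

# A smaller firm: the fixed EUR 35M floor applies
print(int(max_fine_prohibited(100_000_000)))    # 35000000
```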

The prohibited systems include:

- Social scoring systems that evaluate or classify people based on behavior or personal characteristics
- Subliminal or manipulative techniques that materially distort a person's behavior
- AI systems that exploit vulnerabilities of specific groups, such as children or people with disabilities
- Real-time remote biometric identification in publicly accessible spaces (with narrow law enforcement exceptions)

These bans reflect the EU’s commitment to protecting fundamental rights over technological capability. China’s social credit system, which tracks citizens’ behavior to assign trustworthiness scores, exemplifies exactly what Europe has rejected. Similarly, AI systems that use psychological manipulation tactics — such as voice assistants designed to exploit children’s trust — face complete prohibition.

The biometric identification ban is particularly significant. While law enforcement retains limited exceptions for preventing terrorist attacks or locating missing children, the default position prohibits real-time facial recognition in public spaces. This directly challenges the surveillance-heavy approach adopted in countries like China and raises questions about systems already deployed in cities worldwide.

Which High-Risk AI Systems Face New Requirements?

High-risk AI systems, defined as those impacting safety or fundamental rights in critical areas, must comply with strict requirements including CE marking, risk assessments, data governance standards, and human oversight starting August 2, 2026.

The Act identifies eight high-risk categories:

- Biometric identification and categorization
- Critical infrastructure management
- Education and vocational training
- Employment and worker management
- Access to essential private and public services
- Law enforcement
- Migration, asylum, and border control
- Administration of justice and democratic processes

Each high-risk system requires comprehensive documentation including:

- Technical documentation proving compliance with AI Act requirements
- Automatic logging systems recording all decisions and data inputs
- Risk management systems identifying and mitigating potential harms
- Data governance ensuring training data quality and bias minimization
- Human oversight capabilities allowing meaningful intervention
- Accuracy, robustness, and cybersecurity measures
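The automatic-logging requirement, for example, can be approached with tamper-evident decision records. A hypothetical sketch (the schema and field names are illustrative, not prescribed by the Act):

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log: list, inputs: dict, output: str, model_version: str) -> dict:
    """Append a tamper-evident record of one AI decision.

    Each entry stores the hash of the previous entry, so any later
    modification of the log is detectable - a simple audit-trail pattern.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

audit_log: list = []
log_decision(audit_log, {"applicant_id": "A-123"}, "shortlisted", "v2.1")
log_decision(audit_log, {"applicant_id": "A-124"}, "rejected", "v2.1")
print(audit_log[1]["prev_hash"] == audit_log[0]["hash"])  # True
```

An auditor can replay the chain to verify that no record was altered or removed after the fact.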

The European Commission estimates these requirements will add compliance costs of €6,000-10,000 per high-risk AI system, with ongoing monitoring expenses. For multinational corporations deploying AI across thousands of use cases, these costs compound quickly.

How Do Foundation Models Navigate New Obligations?

Foundation models trained using more than 10²⁵ floating-point operations (FLOPs) — a threshold the Act treats as indicating systemic risk, and one that captures systems like GPT-4, Claude, and Gemini — face specific obligations including systemic risk evaluations, adversarial testing, and incident reporting to the European AI Office.
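For scale, total training compute is commonly approximated as 6 × parameters × training tokens (the widely used "6ND" rule of thumb). A quick check against the 10²⁵ threshold, using hypothetical model sizes:

```python
# Rough training-compute estimate using the common 6*N*D approximation
def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs as 6 * parameters * tokens."""
    return 6 * params * tokens

# Hypothetical frontier model: 1T parameters on 15T tokens
flops = training_flops(1e12, 15e12)
print(f"{flops:.2e}")  # 9.00e+25
print(flops > 1e25)    # True: above the systemic-risk threshold

# Hypothetical specialized model: 7B parameters on 2T tokens
print(training_flops(7e9, 2e12) > 1e25)  # False: exempt from these obligations
```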

The threshold captures the most capable models while exempting smaller, specialized systems. Affected models must:

- Conduct and document systemic risk evaluations
- Perform adversarial testing of model capabilities
- Report serious incidents to the European AI Office
- Maintain adequate cybersecurity protections

Notably, open-source foundation models receive significant exemptions unless they pose systemic risks. This carve-out recognizes the different risk profiles and governance structures of open development models compared to proprietary commercial systems.

The European AI Office, established within the European Commission, will oversee foundation model compliance and coordinate with national authorities. This represents Europe’s most direct intervention in AI development, moving beyond reactive regulation toward proactive oversight of the most powerful systems.

What Compliance Challenges Do Traditional AI Companies Face?

Centralized AI companies face significant challenges in achieving EU AI Act compliance, including complex documentation requirements, cross-border liability issues, and the difficulty of implementing meaningful human oversight in automated systems.

The compliance burden creates several pain points:

Documentation Complexity: High-risk systems require extensive technical documentation that must be maintained throughout the system lifecycle. For companies with hundreds of AI applications, this creates massive administrative overhead.

Cross-Border Liability: Companies operating globally must navigate conflicting regulatory frameworks. An AI system compliant in the United States may violate EU requirements, forcing costly system modifications or market restrictions.

Human Oversight Requirements: The Act mandates “meaningful” human oversight, but defining this for complex AI systems remains ambiguous. How much human intervention is required? At what decision points? These questions lack clear answers.

Dynamic Compliance: AI systems learn and evolve, potentially drifting from their original compliance parameters. Traditional audit approaches struggle with this dynamic nature.
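A drift check can be as simple as monitoring how far recent model outputs have moved from a baseline distribution. A minimal sketch (the 0.5 threshold and the scores are illustrative, not regulatory figures):

```python
import statistics

def drift_score(baseline: list, recent: list) -> float:
    """Absolute shift in mean model score, in units of baseline std dev.

    A crude drift indicator: values above a chosen threshold (say 0.5)
    could trigger a compliance re-assessment of the deployed model.
    """
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(recent) - mu) / sigma

baseline_scores = [0.62, 0.58, 0.65, 0.60, 0.61, 0.63, 0.59, 0.64]
recent_scores = [0.75, 0.78, 0.72, 0.77, 0.74]

print(drift_score(baseline_scores, recent_scores) > 0.5)  # True: behavior shifted
```

Production systems would use richer statistics (e.g. per-feature distribution tests), but the principle is the same: compliance monitoring must be continuous, not a one-off audit.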

Supply Chain Transparency: Companies using third-party AI components must ensure their entire stack meets EU requirements, creating complex vendor management obligations.

These challenges are compounded by enforcement uncertainty. While the Act is clear about requirements, how national authorities will interpret and enforce provisions remains unclear as of March 2026.

How Can Decentralized AI Systems Address Governance Challenges?

Decentralized AI architectures offer inherent advantages for regulatory compliance through blockchain-based transparency, community governance models, and distributed accountability that can more easily adapt to evolving regulatory requirements.

Several key advantages emerge:

Built-in Transparency: Blockchain-based systems create immutable records of AI decisions, training data sources, and model modifications. This transparency directly addresses the EU AI Act’s documentation requirements while providing auditable compliance trails.

Community Governance: Decentralized networks can implement governance structures where stakeholders vote on compliance measures, risk assessments, and policy changes. This distributed decision-making can be more responsive than corporate hierarchies.

Distributed Liability: Rather than concentrating legal responsibility in a single entity, decentralized systems can distribute obligations across network participants, potentially reducing individual compliance burdens.

Modular Compliance: Decentralized marketplaces allow individual AI models to be certified independently, creating reusable compliance components rather than requiring full-stack certification for every application.

Perspective AI demonstrates these principles in practice. Built on Base blockchain, the platform creates transparent records of model usage and performance while enabling community governance of marketplace standards. Users can verify model provenance and compliance status directly through blockchain records.

The token-based incentive structure aligns stakeholder interests around compliance. Network participants benefit from maintaining high standards that attract users and avoid regulatory penalties. This creates market-driven compliance rather than top-down enforcement.

What Practical Framework Should Organizations Follow?

Organizations should adopt a three-tier approach: immediate prohibition compliance, systematic high-risk system identification, and proactive governance structure development that can adapt to evolving requirements.

Immediate Actions (March-August 2026)

  1. Prohibition Audit: Inventory all AI systems to identify any that might violate the four banned categories
  2. Geographic Segmentation: Determine which systems operate in or affect EU users
  3. Legal Review: Engage specialized counsel familiar with EU AI Act interpretation
  4. Vendor Assessment: Evaluate third-party AI services for compliance status
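Steps 1 and 2 above amount to joining an AI-system inventory against the banned categories and EU exposure. A hypothetical sketch (system names and practice tags are illustrative):

```python
from dataclasses import dataclass

# The four prohibited categories named in the Act (simplified labels)
PROHIBITED = {
    "social_scoring",
    "subliminal_manipulation",
    "vulnerability_exploitation",
    "realtime_public_biometric_id",
}

@dataclass
class AISystem:
    name: str
    practice: str          # internal tag describing what the system does
    serves_eu_users: bool  # geographic segmentation flag

def prohibition_audit(inventory: list) -> list:
    """Flag systems that combine a banned practice with EU exposure."""
    return [s.name for s in inventory
            if s.practice in PROHIBITED and s.serves_eu_users]

systems = [
    AISystem("trust-rank", "social_scoring", serves_eu_users=True),
    AISystem("spam-filter", "content_moderation", serves_eu_users=True),
    AISystem("kiosk-face-id", "realtime_public_biometric_id", serves_eu_users=False),
]
print(prohibition_audit(systems))  # ['trust-rank']
```

Systems flagged here go straight to legal review (step 3); everything else proceeds to high-risk classification.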

High-Risk System Preparation (August 2026 Deadline)

  1. Risk Classification: Systematically categorize all AI systems using the eight high-risk categories
  2. Technical Documentation: Begin comprehensive documentation for systems likely to be classified as high-risk
  3. Governance Infrastructure: Establish human oversight procedures and incident response capabilities
  4. Testing Protocols: Implement bias testing, adversarial evaluation, and performance monitoring

Long-term Governance Strategy

  1. Regulatory Monitoring: Establish processes to track evolving interpretations and enforcement patterns
  2. Technology Architecture: Consider how decentralized approaches might reduce compliance complexity
  3. Stakeholder Engagement: Build relationships with regulators, industry groups, and compliance experts
  4. Innovation Balance: Develop frameworks that maintain innovation capacity while ensuring compliance

The key insight: compliance isn’t a one-time achievement but an ongoing process requiring adaptive systems and governance structures.

What Enforcement Patterns Are Emerging Across EU Member States?

Early enforcement patterns show significant variation across EU member states, with Germany and France taking proactive stances while smaller nations focus on high-profile cases, creating a patchwork of interpretation and penalty application.

As of March 2026, enforcement approaches vary considerably:

Germany has established a dedicated AI Act enforcement unit within its Federal Office for Information Security, conducting proactive audits of major AI systems. German authorities issued the first significant fine in February 2026 — €8.5 million against a recruitment platform using biased hiring algorithms.

France focuses on foundation model oversight through its digital affairs ministry, working closely with the European AI Office. French authorities have been most active in investigating potential social scoring violations in social media recommendation systems.

Netherlands emphasizes sectoral enforcement, with financial regulators scrutinizing AI in banking while transport authorities focus on autonomous vehicle systems.

Smaller member states generally rely on complaint-driven enforcement, lacking resources for proactive monitoring but responding quickly to high-profile violations.

This fragmented approach creates compliance uncertainty. A system approved by Italian authorities might face different scrutiny in Sweden, forcing companies to meet the most stringent interpretations across all member states.

How Do Open Source and Decentralized Models Navigate EU Requirements?

Open-source AI models benefit from specific exemptions in the EU AI Act, but their downstream applications must still comply with relevant requirements, creating opportunities for compliant-by-design decentralized systems.

The Act recognizes fundamental differences between development models:

Foundation Model Exemptions: Open-source models released under free and open-source licenses are exempt from most foundation model obligations unless they pose systemic risks. This acknowledges that open development models have different risk profiles than proprietary commercial systems.

Community Governance Advantages: Open-source projects can implement transparent governance structures that naturally align with EU transparency requirements. Decision-making processes, code changes, and risk assessments can be publicly documented and community-verified.

Distributed Development Benefits: Unlike centralized development, open-source AI spreads development responsibility across contributors worldwide, making traditional regulatory approaches challenging but also reducing concentration of risk.

Decentralized AI marketplaces like Perspective AI occupy a unique position in this landscape. By enabling community governance of model standards and creating transparent blockchain records of model provenance and usage, they demonstrate how decentralized approaches can be compliant-by-design rather than compliance-as-afterthought.

The token economics also create interesting dynamics. Network participants have economic incentives to maintain compliance standards that preserve platform value and user trust. This market-driven compliance can be more effective than top-down enforcement in rapidly evolving technology sectors.

What’s Next for AI Governance Beyond the EU Act?

The EU AI Act represents just the beginning of global AI governance, with the United States developing sector-specific approaches, China implementing its own algorithmic regulations, and international coordination efforts gaining momentum through the UN and G7.

Several parallel developments are shaping the global governance landscape:

United States Approach: The Biden administration’s October 2023 Executive Order on AI emphasized standards development and federal agency coordination rather than comprehensive legislation, though the order was rescinded in January 2025. Sector-specific regulation continues to emerge — the FTC is developing AI guidelines for consumer protection, while financial regulators address AI in banking.

China’s Algorithmic Regulations: China’s approach focuses on algorithmic transparency and data security through regulations like the Algorithmic Recommendation Management Provisions, which require companies to disclose recommendation algorithms and allow user opt-outs.

International Coordination: The UN AI Advisory Body, established in 2023, is developing global governance recommendations. G7 nations have established the Hiroshima AI Process to coordinate approaches to foundation model governance.

Industry Standards: Organizations like ISO and IEEE are developing technical standards for AI safety and governance that could form the foundation for future regulations worldwide.

The trend is clear: regulatory complexity is increasing, not decreasing. Organizations building AI systems need governance approaches that can adapt to multiple regulatory frameworks simultaneously.

This complexity creates opportunities for decentralized approaches that build compliance capabilities into their fundamental architecture rather than layering compliance onto existing systems. As Perspective AI demonstrates, transparent, community-governed AI systems may navigate this regulatory landscape more effectively than traditional centralized approaches.

The question isn’t whether AI will be regulated — it’s whether current governance models can keep pace with technological change while preserving innovation. Decentralized approaches offer one promising path forward, but their success will ultimately depend on proving they can deliver both innovation and accountability in equal measure.

FAQ

What AI systems are completely banned under the EU AI Act?

The EU AI Act prohibits social scoring systems, subliminal manipulation techniques, AI systems exploiting vulnerabilities of specific groups, and real-time biometric identification in public spaces (with limited law enforcement exceptions).

When do the EU AI Act's high-risk provisions take effect?

The high-risk AI system requirements take effect on August 2, 2026, two years after the Act entered into force on August 1, 2024.

What are the maximum fines under the EU AI Act?

Fines can reach €35 million or 7% of global annual turnover (whichever is higher) for prohibited AI practices, while violations of high-risk system requirements face fines of up to €15 million or 3% of turnover.

How does the EU AI Act define high-risk AI systems?

High-risk AI systems are those used in critical infrastructure, education, employment, law enforcement, migration, or justice administration, as well as biometric identification systems and AI in medical devices.

Do open-source AI models need to comply with the EU AI Act?

Open-source foundation models are generally exempt unless they pose systemic risks, but downstream applications using these models must still comply with relevant AI Act requirements based on their specific use cases.

How can decentralized AI systems help with EU AI Act compliance?

Decentralized systems provide built-in transparency through blockchain records, enable community-driven governance for compliance monitoring, and distribute liability across networks rather than concentrating it in single entities.

Experience Compliant AI Innovation

Perspective AI's decentralized marketplace demonstrates how open, transparent AI systems can meet regulatory requirements while preserving innovation and user control.

Launch App →