
The Competence Thread: How European AI Case Law Reveals the Crisis Article 4 Was Built to Solve

February 28, 2026 | Court order

From Dutch lawyers citing phantom judgments to German courts rejecting AI-written expert reports, a pattern emerges across European enforcement actions: every case traces back to a competence failure. This analysis maps the landscape of European AI case law from late 2024 through early 2026, connecting court rulings, data protection enforcement, and copyright battles to the central thesis of Article 4 — that AI systems are only as trustworthy as the people operating them.


A pattern hiding in plain sight

Alex Blumentals, Strategic Director, TwinLadder

I have spent the past eighteen months reading every European AI enforcement action I can find. Court rulings, data protection decisions, copyright judgments, disciplinary proceedings. Hundreds of pages of legal reasoning across a dozen jurisdictions. And the more I read, the clearer a single pattern becomes.

Every case — every single one — traces back to the same root cause. Not malicious technology. Not rogue algorithms. Not even regulatory gaps. The root cause is that people did not have the competence to work with AI systems responsibly.

A Dutch lawyer pastes ChatGPT output into a court filing and cites cases that do not exist. A German expert submits an AI-written report to a court without disclosing that a machine wrote it. A financial services company automates credit decisions without human oversight. A social media platform feeds European personal data into an AI model without asking anyone's permission.

Different countries. Different legal frameworks. Different industries. Same failure: the humans in the loop lacked the skills, knowledge, or institutional support to handle AI responsibly.

This is exactly the crisis that Article 4 of the EU AI Act was designed to address. When the regulation entered application on 2 February 2025, it imposed a deceptively simple obligation: providers and deployers of AI systems must ensure "a sufficient level of AI literacy" among their staff and anyone operating AI on their behalf. The wording is deliberately vague on what "sufficient" means. But the case law tells us. Every enforcement action you will read about below defines — by negative example — what insufficient AI literacy looks like.

This is not a catalogue of cases. It is a map of competence failures. And it explains why the organisations that take Article 4 seriously will be the ones that survive what is coming next.


Part I: When professionals fail to verify — the conduct cases

Liga, Legal Counsel, TwinLadder

The professional conduct cases are perhaps the most instructive in the entire European AI enforcement landscape, because they involve people who should know better. Lawyers are trained to verify sources. Experts are appointed to provide independent judgment. When these professionals fail at the most basic task — checking whether AI output is real — the implications for every other profession are sobering.

The UK precedent: Ayinde v Haringey and Al-Haroun (June 2025)

The first major European case to address AI-fabricated citations in court arrived on 6 June 2025, when the UK High Court published its ruling in Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank [2025] EWHC 1383 (Admin).

In the Al-Haroun case, a solicitor submitted a witness statement citing 45 authorities to support the claimant's position. Eighteen of them were found not to exist; others were misquoted or inapplicable. The fabricated citations were elaborate, falsely attributing nonsensical opinions to real judges and embellished with docket numbers belonging to actual but irrelevant cases.

The solicitor had not generated these citations himself. His client had used AI to produce the research, and the solicitor submitted it without independently verifying any of it. The court was unambiguous: a lawyer is answerable for their research, arguments, and representations under their core duties to the court and to the client. AI does not change that. The solicitor and his firm were referred to the Solicitors Regulation Authority (SRA) for investigation.

Alex: What strikes me about this case is the delegation chain. The client used AI. The lawyer trusted the client. Nobody verified the output. This is not a technology problem — it is a workflow problem, an institutional competence problem. The AI worked exactly as designed: it produced plausible-sounding text. The failure was entirely human.
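
What would a verification habit look like as a workflow step? Here is a minimal sketch, with an illustrative citation pattern and invented field names that no court or regulator has endorsed: extract every neutral citation from a draft filing and put it on a checklist that defaults to unverified until a named human confirms it against an official source such as BAILII or The National Archives.

```python
import re

# Illustrative pattern for UK neutral citations, e.g. "[2025] EWHC 1383 (Admin)".
# Real filings cite many other formats; a production checker would need more patterns.
NEUTRAL_CITATION = re.compile(r"\[\d{4}\]\s+[A-Z]+\s+\d+(?:\s+\([A-Za-z]+\))?")

def citation_worklist(draft_text: str) -> list[dict]:
    """Build a checklist on which every citation starts out UNVERIFIED.

    The design point: no citation reaches a filing until a named human
    marks it checked against an official database. That is the step
    missing in Al-Haroun and the Dutch cases.
    """
    return [
        {"citation": c, "status": "UNVERIFIED", "checked_by": None}
        for c in sorted(set(NEUTRAL_CITATION.findall(draft_text)))
    ]

draft = "As held in [2025] EWHC 1383 (Admin) and applied in [2024] UKSC 12 ..."
for item in citation_worklist(draft):
    print(item)
```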

The pattern repeats: Dutch lawyers disciplined (February 2026)

Eight months after the UK ruling, the same failure appeared in the Netherlands. In February 2026, three Dutch lawyers were disciplined for citing nonexistent, ChatGPT-generated cases in court filings. Two of the three were ordered to complete AI training courses in addition to receiving formal warnings.

The disciplinary body noted that although warnings are the mildest sanction available, AI reliability in legal practice is now a priority focus for Dutch regulators. Lodewijk Smeehuijzen, law professor at the Vrije Universiteit Amsterdam, summarised the position bluntly: "You cannot cite sources that do not exist."

Alex: The Dutch case is more alarming than the UK one, precisely because it came after the UK one. Al-Haroun was international news. Every lawyer in Europe had been warned. And still, within months, three Dutch professionals made the identical mistake. This tells us something important: awareness alone does not build competence. Knowing that AI can hallucinate is not the same as having the skills and habits to catch it when it does.

The expert witness frontier: LG Darmstadt (November 2025)

The German contribution to this pattern came from a different angle. On 10 November 2025, the Regional Court of Darmstadt (LG Darmstadt) ruled that a court-appointed expert who relied extensively on AI to prepare a report — without disclosing it — could have their fee set to zero euros.

The legal basis was Section 407a(3) of the German Code of Civil Procedure (ZPO), which requires court-appointed experts to prepare reports "personally" and does not authorise delegation to third parties. The court held that an AI-generated report was inherently inadmissible as evidence because it failed this personal preparation requirement. Under Section 8a of the Judicial Remuneration and Compensation Act (JVEG), an expert is compensated only to the extent that the submitted work is admissible: no admissible work, no compensation.

Alex: This case extends the competence question beyond lawyers into the broader professional ecosystem. Court-appointed experts are trusted because of their personal expertise and independent judgment. When an expert outsources that judgment to an AI system without disclosure, they are not just cutting corners — they are undermining the entire basis of their professional role. The Darmstadt ruling says: if you cannot personally stand behind your work, you have not done the work.

The regulatory response: SRA guidance and beyond

The SRA has responded to the wave of AI cases by developing new guidance, including a forthcoming "GenAI FAQ" and a Good Practice Note on AI use and client data. In a February 2026 webinar, the SRA outlined its developing framework, emphasising that compliance officers for legal practice (COLPs) bear responsibility for regulatory compliance when new technology is introduced.

The message across jurisdictions is converging: professional bodies expect their members to develop AI competence, not merely to have AI policies. The UK, the Netherlands, and Germany have now each produced enforcement precedents that say the same thing in different legal languages — you are responsible for what you submit, regardless of which tool produced it.


Part II: GDPR as the de facto AI regulator

Liga, Legal Counsel, TwinLadder

While the EU AI Act was making its way through implementation timelines, existing data protection law has been doing the heavy lifting of AI regulation across Europe. The General Data Protection Regulation, designed for a pre-AI world, has proved remarkably adaptable — and European data protection authorities have been the most active AI regulators on the continent.

Italy: the most aggressive enforcer

Italy's Garante per la Protezione dei Dati Personali has established itself as Europe's most assertive data protection authority when it comes to AI. The trajectory is instructive.

In December 2024, the Garante imposed a EUR 15 million fine on OpenAI for GDPR violations related to ChatGPT's processing of personal data, capping a saga that had begun with the Garante's temporary ban of ChatGPT in March 2023. The fine was accompanied by an order requiring OpenAI to conduct a six-month public awareness campaign on how ChatGPT processes personal data — itself an implicit acknowledgment that AI literacy is a regulatory concern.

Then in April 2025, the Garante fined Luka Inc. EUR 5 million for GDPR violations related to its Replika chatbot. The violations were extensive: no valid legal basis for processing, inadequate transparency (the privacy policy was available only in English for Italian users), no meaningful age verification despite evidence that minors were being exposed to emotionally manipulative and sexually suggestive conversations. Critically, the Garante had already identified these deficiencies in a February 2023 enforcement decision. The April 2025 action confirmed that Luka had failed to implement the required corrective measures over two years.

Alex: The Replika case is a competence failure on two levels. First, the company deployed a conversational AI system without understanding how GDPR applies to emotional AI that processes minors' data. Second — and more damning — the company was told exactly what to fix and did not fix it. That is not a knowledge gap. That is an institutional competence deficit. The organisation lacked the capability to translate regulatory requirements into operational reality.

Hamburg: the Bridge Blueprint

In September 2025, Hamburg's Commissioner for Data Protection and Freedom of Information, Thomas Fuchs, unveiled a draft discussion paper titled "The Bridge Blueprint," aiming to connect the GDPR's principles-based requirements with the AI Act's technical requirements. The paper invited dialogue from industry, civil society, and legal practitioners on how to make this connection practical.

Hamburg also demonstrated enforcement teeth: in 2025, the DPA fined a financial services provider nearly EUR 500,000 for automatically rejecting credit card applications based solely on algorithmic processing — without human oversight or adequate explanations. This action, under Article 22 GDPR, targeted precisely the kind of automated decision-making that the AI Act's transparency requirements are designed to complement.

Alex: Hamburg's dual approach — guidance alongside enforcement — is instructive. The Bridge Blueprint acknowledges that organisations need help navigating overlapping regulatory frameworks. But the credit card fine shows that "we were confused about the rules" is not a defence. This is the competence paradox in regulatory form: the rules are complex, but you are expected to understand them. Article 4 exists to ensure you invest in that understanding.
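
One way to read the Hamburg fine is as a design constraint on pipelines: a system may score applications automatically, but an adverse outcome must not become final without meaningful human involvement. The sketch below illustrates that pattern only; the names and the threshold are my own assumptions, not anything in the decision.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    outcome: str                       # "approved", "rejected", or "pending_review"
    reasons: list[str] = field(default_factory=list)
    decided_by: str = "system"

def decide(score: float, threshold: float = 0.6) -> Decision:
    """The score comes from an upstream model; the threshold is illustrative.

    Approvals may stay automated. Rejections are never issued by the
    system alone: they are queued for a human reviewer, who must record
    reasons the applicant can understand and contest -- the safeguard
    Article 22 GDPR contemplates for adverse automated decisions.
    """
    if score >= threshold:
        return Decision("approved", ["score above threshold"])
    return Decision("pending_review", ["score below threshold"],
                    decided_by="queued_for_human_review")

print(decide(0.72).outcome)  # approved (automated)
print(decide(0.31).outcome)  # pending_review (a human must decide)
```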

Worldcoin/World: biometrics at the boundary

In December 2024, Bavaria's State Office for Data Protection Supervision (BayLDA) concluded its investigation into Worldcoin (now rebranded as World), finding that the company's iris-scanning biometric identification procedure "entails a number of fundamental data protection risks for a large number of data subjects" and does not comply with GDPR. The BayLDA ordered implementation of a GDPR-compliant data deletion procedure within one month and mandated explicit consent for certain processing steps going forward. Data previously collected without a sufficient legal basis was ordered deleted.

World has appealed the decision, arguing that its Privacy Enhancing Technologies (PETs) meet the legal definition of anonymisation. The company contends the investigation primarily concerns outdated operations replaced in 2024. The appeal is pending.

Ireland v X/Grok: training on European data without consent

The Irish Data Protection Commission's action against X (formerly Twitter) regarding its Grok AI model represents one of the most consequential data protection interventions in the AI space. In August 2024, the DPC used its Section 134 powers for the first time to prohibit X from processing personal data contained in public posts of EU/EEA users for training Grok.

After High Court proceedings, X agreed in September 2024 to permanently discontinue the processing. But the story did not end there. In April 2025, the DPC launched a formal statutory inquiry into whether the personal data had been lawfully processed when originally used as training data. Potential enforcement actions could include fines reaching 4% of X's global annual turnover.

Alex: The X/Grok case illustrates what I call "competence at scale." It is not just about whether individual employees understood data protection law. It is about whether an organisation with billions of users had the institutional competence to understand that feeding European personal data into an AI model without consent would trigger enforcement. The fact that X agreed to stop, and then the DPC still opened a formal inquiry into the original processing, tells us something important: retrospective competence matters. You cannot undo a compliance failure by promising to do better next time.

The EDPB framework: Opinion 28/2024

In December 2024, the European Data Protection Board adopted Opinion 28/2024, addressing three critical questions at the intersection of AI and data protection.

First, AI model anonymity: for a model to be considered anonymous, it should be "very unlikely" both that individuals whose data was used in training can be identified, and that personal data can be extracted through queries. This assessment must be made case-by-case.

Second, legitimate interest: the Opinion provides a three-step test for DPAs to assess whether legitimate interest is an appropriate legal basis for processing personal data in AI model development and deployment.

Third — and most consequential — the consequences of unlawful processing: where a breach of Articles 5 or 6 GDPR is established regarding the training phase, supervisory authorities may impose corrective measures including fines, temporary limitations on processing, erasure of datasets, or ordering the retraining of the AI model.

That last point deserves emphasis. The EDPB has signalled that an AI model trained on unlawfully processed data may itself need to be retrained — a remedy that could cost millions and take months. This creates an enormous incentive to get data processing right the first time, which in turn requires exactly the kind of institutional AI competence that Article 4 demands.

The enforcement frontier: CNIL and EDPB coordination

France's CNIL has taken a guidance-first approach to AI, publishing a series of practical recommendations throughout 2025 covering training data annotation, security during AI development, and the GDPR status of AI models. But the trajectory is shifting. Enforcement decisions from November 2025 through February 2026 reflect a move from guiding actors toward compliance to demanding "effective, structured, and demonstrable" compliance.

The EDPB has formally designated transparency and information provision as its 2026 coordinated enforcement theme, with every national DPA running parallel investigations into how organisations communicate their data processing practices. For AI deployers, this means the era of vague privacy notices covering AI processing is ending.

Alex: The regulatory picture across Europe's data protection authorities is converging toward a single message: you must understand what your AI systems are doing with personal data, and you must be able to explain it. This is not just a legal compliance requirement — it is a competence requirement. You cannot explain what you do not understand.


Part III: The CJEU architecture — building the doctrinal foundations

Liga, Legal Counsel, TwinLadder

While national authorities handle enforcement, the Court of Justice of the European Union is building the doctrinal framework that will shape AI regulation for decades. Two rulings and a groundbreaking preliminary reference are defining how European law treats automated decision-making.

SCHUFA: the "predominantly relies" doctrine (December 2023)

In Case C-634/21, decided 7 December 2023, the CJEU issued its first ruling on Article 22 GDPR — the right not to be subject to solely automated decision-making. The case concerned Germany's SCHUFA credit reference agency, which generates credit scores that lenders use when assessing loan applications.

The CJEU applied a broad interpretation of "decision": although SCHUFA did not itself reject the loan application, it played a "determining role" in the outcome because lenders relied heavily on its scores. This was sufficient to bring the scoring process within Article 22's scope. The obligation to comply with Article 22 therefore falls on the entity generating the automated assessment, not just the entity making the final decision.

The SCHUFA doctrine matters for AI because it captures any system where automated output significantly influences a downstream human decision. In practice, most AI-assisted decision-making works this way: the AI recommends, the human approves. SCHUFA says that if the human "predominantly relies" on the AI output, the entire process may be subject to Article 22's restrictions — including the right to human intervention and the right to contest the decision.
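
SCHUFA also suggests a practical audit question for deployers: if reviewers almost never depart from the model's recommendation, the human in the loop may be decorative and the process effectively solely automated. A rough sketch of an override-rate check follows; the 10% tripwire is my assumption, not a threshold from the judgment.

```python
def override_rate(decisions: list[tuple[str, str]]) -> float:
    """Each tuple pairs (ai_recommendation, human_final_decision)."""
    overrides = sum(1 for ai, human in decisions if ai != human)
    return overrides / len(decisions)

log = [("reject", "reject"), ("reject", "reject"),
       ("approve", "approve"), ("reject", "approve")]

rate = override_rate(log)
# An illustrative tripwire, not a legal test: a near-zero override rate
# is evidence that humans predominantly rely on the AI output, which is
# what brought SCHUFA's scoring within Article 22's scope.
if rate < 0.10:
    print(f"override rate {rate:.0%}: review whether human oversight is genuine")
else:
    print(f"override rate {rate:.0%}")
```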

Dun & Bradstreet: the right of explanation (February 2025)

The CJEU built on SCHUFA in Case C-203/22, decided 27 February 2025. The case involved a consumer in Austria who was refused a mobile phone contract based on Dun & Bradstreet's automated credit assessment.

The Court ruled that data controllers must provide concise, transparent, and intelligible explanations of "the procedure and principles actually applied" to the data subject's personal data. Crucially, the CJEU held that communicating a complex mathematical formula — such as an algorithm — or providing a detailed technical description of processing steps does not satisfy this obligation. The explanation must be understandable to the data subject, not just technically accurate.

The Court also struck down Austrian legislation that permitted blanket exclusion of access rights where disclosure would compromise trade secrets. The GDPR requires a case-by-case balancing of data subject rights against business interests; Member States cannot tip that balance categorically in favour of secrecy.

Alex: The Dun & Bradstreet ruling establishes something profound: organisations must not only use AI transparently, they must be capable of explaining it transparently. And the explanation must make sense to an ordinary person, not just to a data scientist. This is a competence requirement embedded in law. If your team cannot explain what your AI does in plain language, you are not compliant. Full stop.
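
To make the standard concrete, here is a minimal sketch of the difference between disclosing a formula and explaining a decision. The feature names, weights, and wording are invented for illustration; nothing here comes from the actual case.

```python
# Hypothetical per-feature contributions to a refused application
# (positive values push toward approval, negative toward refusal).
contributions = {
    "payment history": -0.42,
    "length of credit history": -0.18,
    "current income": 0.25,
}

# Handing over this dict, or the scoring formula behind it, would not
# satisfy the CJEU's standard. A plain-language account might:
TEMPLATES = {
    "payment history": "missed or late payments counted against the application",
    "length of credit history": "a short credit history counted against the application",
    "current income": "current income counted in favour of the application",
}

for feature, weight in sorted(contributions.items(), key=lambda kv: kv[1]):
    strength = "strongly" if abs(weight) > 0.3 else "moderately"
    print(f"- {TEMPLATES[feature]} (this factor mattered {strength})")
```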

The first AI Act reference: C-806/24 from Bulgaria

On 25 November 2024, Bulgaria's Sofia District Court made the first request for a preliminary ruling to the CJEU that directly invokes the EU AI Act. The case involves a telecoms company's automated fee calculation system. The consumer argued that the system constitutes automated decision-making subject to the AI Act's transparency and human review requirements.

The Bulgarian court submitted seventeen questions of law, citing the AI Act alongside the Unfair Terms Directive and the Consumer Rights Directive. The case is pending, but its significance is already clear: national courts are beginning to frame disputes in AI Act terms, and the CJEU will eventually need to interpret Article 4's competence requirements alongside the existing GDPR framework.

Alex: Case C-806/24 is a preview of where all of this is heading. National courts are not waiting for the AI Act to be fully enforceable — they are already asking the CJEU what it means. For organisations, this means the compliance clock is ticking faster than the official timeline suggests. By the time the CJEU rules, its interpretation will apply retroactively to conduct that started today.


Part IV: The copyright front — who owns AI knowledge?

Liga, Legal Counsel, TwinLadder

European copyright law is developing a separate but related body of AI jurisprudence, and 2025 produced three landmark rulings that are reshaping how AI companies can — and cannot — use protected content.

GEMA v OpenAI: memorisation is infringement (November 2025)

On 11 November 2025, Munich Regional Court I handed down what may be the most consequential European copyright ruling on AI to date. Germany's music collecting society GEMA brought an action against OpenAI for using protected song lyrics — including Kristina Bach's "Atemlos," Herbert Grönemeyer's "Männer," and Reinhard Mey's "Über den Wolken" — to train its GPT-4 and GPT-4o models without a licence.

The court ruled that memorisation of copyrighted content in an AI model constitutes reproduction under Section 16 of the German Copyright Act. It rejected OpenAI's argument that because the content exists only as distributed probability values across model parameters, rather than as stored copies, it does not constitute reproduction. The court held that it is sufficient that the model is capable of reproducing the content in recognisable form.

The court also rejected the text and data mining exception under Section 44b of the German Copyright Act. Since training transferred not merely abstract information but the lyrics themselves into the model parameters, this exceeded the scope of the TDM exception. The remedy was striking: OpenAI must cease storing unlicensed German lyrics on German infrastructure, and the judgment must be published in a local newspaper.

OpenAI has announced an appeal. The case will likely reach the Munich Higher Regional Court.

Getty v Stability AI: the UK diverges (November 2025)

Days before the Munich ruling, the UK High Court reached a very different conclusion in Getty Images v Stability AI (4 November 2025). The court largely rejected Getty's copyright infringement claims, finding that while the Stable Diffusion model was exposed to copyrighted works during training, the model does not store the training data. Generated images are produced without direct access to underlying training data, and the model ceased being an "infringing copy" once it no longer contained the copied content.

The divergence between Munich and London — decided within a week of each other — creates jurisdictional fragmentation that will persist until either the CJEU or national appellate courts establish clearer doctrine. The UK Government is required to publish its full report on the use of copyright works in AI development by March 2026.

Kneschke v LAION: the opt-out question (December 2025)

On 10 December 2025, the Hamburg Higher Regional Court upheld the lower court's ruling that the non-profit research organisation LAION could rely on copyright exceptions for text and data mining when it downloaded photographer Robert Kneschke's image from a stock photo agency to analyse and correlate with text descriptions for AI training datasets.

The critical finding concerned the opt-out mechanism: Kneschke's reservation of rights was stated in natural language in the stock agency's terms of service, not in a machine-readable format. The court ruled this was insufficient to constitute a valid opt-out under the TDM provisions, at least for downloads made in 2021. The Hamburg Higher Regional Court allowed a further appeal to the Federal Court of Justice, so the question is not yet settled.
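
What counts as machine-readable is still unsettled, but robots.txt is the channel crawlers most commonly consult, and GPTBot is OpenAI's published crawler name. As a small sketch, Python's standard library can evaluate such a reservation; whether a rule like this satisfies the TDM provisions is precisely the open legal question.

```python
from urllib.robotparser import RobotFileParser

# A machine-readable reservation as it might appear in a site's robots.txt.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# The AI crawler is refused while ordinary crawlers are not. A natural-
# language clause in the site's terms of service is, by contrast,
# invisible at this layer -- the gap the Hamburg courts seized on.
print(rp.can_fetch("GPTBot", "https://example.com/images/photo.jpg"))        # False
print(rp.can_fetch("SomeSearchBot", "https://example.com/images/photo.jpg")) # True
```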

Alex: The copyright cases illustrate a dimension of competence that most organisations have not yet considered: the competence to understand what rights you have, and what rights others have, in the data your AI systems use. GEMA v OpenAI tells content owners that their rights survive the training process. Kneschke v LAION tells them that those rights must be asserted in the right technical format. And Getty v Stability AI tells them that the answer may depend on which jurisdiction they are in.

For organisations deploying AI, this means you need people who understand not just how to use AI, but the legal conditions under which your AI was built. Article 4 competence is not just about prompt engineering — it is about understanding the provenance and limitations of the tools you rely on.


Part V: The Article 4 thread — pulling it all together

Alex Blumentals, Strategic Director, TwinLadder

Let me pull the thread that connects every case in this landscape.

Article 4 of the EU AI Act has been in application since 2 February 2025. It requires providers and deployers to take measures ensuring "a sufficient level of AI literacy" among their staff. National market surveillance authorities begin enforcement on 2 August 2026. Over 230 companies have already pledged voluntary compliance through the AI Pact, and the European Commission has published a living repository of AI literacy practices to guide implementation.

But here is what the case law reveals about what "sufficient" actually means in practice.

Sufficient means the ability to verify AI output. The Dutch lawyers, Al-Haroun's solicitor, and the Darmstadt expert all failed at verification. They accepted AI output at face value. Article 4 literacy means building the habit — and the institutional processes — to check.

Sufficient means understanding the legal basis for AI processing. The Garante's actions against OpenAI and Replika, Hamburg's credit card fine, Ireland's action against X/Grok — all involve organisations that deployed AI without adequate understanding of data protection requirements. Article 4 literacy means knowing what legal framework applies to your AI use case before you deploy it.

Sufficient means the ability to explain what your AI does. The CJEU's Dun & Bradstreet ruling demands explanations that ordinary people can understand. Article 4 literacy means your team can translate technical processes into plain-language descriptions.

Sufficient means understanding the provenance of your AI tools. GEMA v OpenAI shows that AI models may carry copyright liabilities from their training. Article 4 literacy means understanding where your AI came from and what risks it carries.

Sufficient means institutional capacity, not just individual knowledge. Replika was told what to fix and failed to fix it over two years. Article 4 literacy is not a one-time training event — it is an ongoing organisational capability.

What the enforcement timeline tells us

The enforcement timeline creates a window of opportunity — and a trap. Article 4 obligations already apply, but enforcement does not begin until August 2026. Many organisations will interpret this as breathing room. The case law suggests it is not.

Every case described in this analysis was decided under pre-existing law — professional conduct rules, GDPR, copyright statutes. The AI Act adds a layer; it does not replace what was already there. Organisations that wait for AI Act enforcement to begin before investing in competence are ignoring the enforcement that is already happening under other frameworks.

The Hamburg DPA's Bridge Blueprint explicitly frames this convergence: GDPR principles and AI Act requirements are not separate obligations but complementary perspectives on the same underlying question. Can your organisation deploy AI responsibly?

Spain's AEPD has already clarified that it is empowered to act against AI systems processing personal data unlawfully, even before national AI legislation is enacted. The CNIL in France is escalating from guidance to enforcement. The EDPB's 2026 coordinated enforcement theme is transparency — precisely the area where AI competence gaps are most visible.

The competence paradox in European enforcement

There is a painful irony in this landscape. The organisations being sanctioned for AI failures are, in many cases, sophisticated actors. Law firms. Financial services providers. Technology companies. These are not organisations that lack access to legal advice or compliance resources. They are organisations where the competence to handle AI responsibly was assumed rather than built.

This is the competence paradox that the Twin Ladder framework is designed to address. At Level 0 — the floor that Article 4 mandates — organisations ensure basic AI literacy: their people understand what AI can and cannot do, they know to verify outputs, they know what rules apply. At Level 1, professionals develop the skill to work alongside AI as a genuine capability multiplier, maintaining judgment while leveraging speed. At Level 2, organisations build operational competence — systematic processes that ensure AI is deployed, monitored, and governed as an integrated part of how the organisation works.

Every case in this landscape is a Level 0 failure. The Dutch lawyers had not reached Level 0. The Darmstadt expert had not reached Level 0. The financial services company fined in Hamburg had not reached Level 0. And Article 4 says Level 0 is the legal minimum.

The question for every European organisation is no longer whether to invest in AI competence. The case law has answered that. The question is whether to invest before or after you become one of the cases.


Key takeaways

For legal professionals:

  • Verification of AI-generated content is a non-delegable professional duty (Al-Haroun, Dutch lawyers cases)
  • Awareness of AI hallucination risks is not sufficient — embedded verification processes are required
  • Court-appointed experts must disclose AI use and maintain personal accountability (LG Darmstadt)
  • The SRA, Dutch Bar, and German courts are converging on the same standard: you are responsible for what you submit

For organisations deploying AI:

  • GDPR enforcement against AI is intensifying across Europe, with EUR 1.2 billion in fines issued in 2025 alone
  • Automated decision-making without human oversight triggers Article 22 GDPR liability (Hamburg DPA, SCHUFA doctrine)
  • Training AI on personal data without valid legal basis risks model retraining orders (EDPB Opinion 28/2024)
  • The DPC v X/Grok case shows that retrospective compliance does not cure original violations
  • Spain, France, and Germany are not waiting for national AI Act implementation to enforce

For AI Act compliance:

  • Article 4 AI literacy obligations are already in force as of February 2025
  • National enforcement begins August 2026, but existing law enforcement is not pausing
  • The first CJEU preliminary reference under the AI Act (C-806/24) signals judicial engagement ahead of full enforcement
  • The European Commission's AI literacy repository and AI Pact provide implementation models
  • "Sufficient" AI literacy is being defined in real time by enforcement actions — the standard is rising

For content and IP strategy:

  • AI memorisation of copyrighted content constitutes infringement in Germany (GEMA v OpenAI)
  • The UK takes a different approach, potentially creating jurisdictional fragmentation (Getty v Stability AI)
  • Machine-readable opt-outs are required for effective copyright reservation (Kneschke v LAION)
  • The Dun & Bradstreet ruling requires plain-language explanations of AI decision-making, not technical formulae

The overarching lesson: Every case in this landscape is, at its core, a competence story. The technology worked as designed. The humans did not. Article 4 exists because European regulators understood that the gap between AI capability and human competence is the most dangerous vulnerability in the system. The enforcement record proves they were right.