TWINLADDER

Twin Ladder Casebook

The Baltic Laboratory: How Europe's Smallest Legal Markets Are Running the Biggest AI Experiment

March 4, 2026 | Firm case study

Sorainen built its own AI assistant. COBALT co-developed a legal research platform with a local startup. Both bought Luminance for due diligence. Latvia promised €500 million in AI investment. Three strategies, one shared gap: none has publicly addressed whether the lawyers using these tools have the AI literacy to use them safely. The Baltics are running the experiment that the rest of Europe will learn from — or repeat.


Twin Ladder Casebook Series | TwinLadder Research | March 2026


Why the Baltics Matter

There is a specific kind of experiment that only works at small scale. You need enough complexity to be interesting but enough visibility that nothing can hide. You need real stakes -- real clients, real transactions, real regulatory consequences -- but a market compact enough that you can watch the whole thing unfold from a single vantage point. You need, in short, a laboratory.

The three Baltic states -- Estonia, Latvia, and Lithuania -- are that laboratory for legal AI in Europe.

I have watched this market for decades. Three countries, roughly six million people combined, three complex languages that share almost nothing with English, two dominant regional law firms competing for the same cross-border mandates, and a regulatory environment that is simultaneously hyperlocal and deeply European. When Harvey reaches an $8 billion valuation and Lawhive promises to replace solicitors with algorithms, those are stories about ambition and capital. What is happening in the Baltics is a story about implementation -- about what actually occurs when AI meets a legal market where every document, every ruling, every statutory provision exists in a language that large language models barely understand.

Two firms dominate the Baltic legal landscape. Sorainen, the region's largest, with more than 300 lawyers across three countries, chose to build. It assembled a twenty-person workgroup and spent a year constructing AiVar, a proprietary AI assistant. COBALT, its primary rival, chose to partner, co-developing a legal research platform with Lexu AI, a Latvian startup founded specifically to solve the Baltic legal language problem. Both firms, independently, chose to buy Luminance for M&A due diligence. And behind all of it, the Latvian government promised €500 million in AI investment and launched a national AI Centre.

Three strategies. Three different bets about what AI can and cannot do. And one gap that none of them has addressed publicly: whether the lawyers using these tools have the competence to use them safely.

This is the story of that experiment. It matters because Article 4 of the EU AI Act now demands an answer -- not eventually, not in principle, but by the time full application begins in August 2026. And if the Baltics cannot answer it -- with their small, visible, highly educated legal markets -- the rest of Europe has a serious problem.


The Builder: Sorainen and AiVar

The Strategic Bet

Sorainen did not buy an AI tool. Sorainen built one. That distinction matters more than it might first appear, because it reveals something about the firm's strategic identity that goes well beyond technology adoption.

In August 2023, the firm assembled a twenty-member workgroup -- led by developer Martiņs Aldiņs and product manager Martiņs Stamguts, with external consultant Andrejs Zilinskis -- and gave them a year to produce a working AI assistant. The result, AiVar, launched in August 2024 on Microsoft Azure OpenAI using GPT-4o, deployed to Sorainen's internal network. The Financial Times has ranked Sorainen among the top thirty most innovative law firms in Europe, and AiVar was clearly designed to reinforce that positioning.

But here is what makes the Sorainen story genuinely interesting, and it has nothing to do with the chatbot. In parallel with AiVar, Sorainen spun off Crespect, a legal technology company that received a €227,000 ERDF grant and closed a €605,000 seed investment round. By September 2025, Sorainen had fully transitioned to Crespect for practice management. Senior partner Aku Sorainen put it directly: "With Crespect we can make decisions based on data and not guesswork."

I have seen enough strategic pivots in professional services to recognise what is happening here. Sorainen is not a law firm that adopted AI. Sorainen is positioning itself as a legal technology company that happens to practise law. Two revenue streams: legal services and legal technology products. AiVar for internal productivity, Crespect for external product revenue. That is a fundamentally different business model from every other firm in the region, and it is the most interesting strategic move in Baltic legal services in at least a decade.

The Technical Reality

From an engineering perspective, AiVar deserves a straightforward assessment. It runs on GPT-4o -- the same foundation model that powers ChatGPT. The wrapper matters. The deployment to an internal network matters. The custom prompting and document ingestion pipeline that Sorainen's team built over twelve months undoubtedly matters. But the engine is identical to what anyone can access through an OpenAI subscription, and that engine carries the same fundamental limitation: over ninety percent of its training data is in English.

This is not a theoretical concern. Sorainen operates across Estonia, Latvia, and Lithuania. Its lawyers work in Estonian, Latvian, Lithuanian, and English, often in the same transaction. GPT-4o's performance on English legal text is strong and well-documented. Its performance on Latvian legal text -- with its complex declension system, jurisdiction-specific terminology, and concepts that do not map neatly onto common law equivalents -- is not documented, because Sorainen has not published any performance data.

That absence is the first red flag. When a firm claims "significant time savings" and "increased productivity" from an AI tool but publishes no specific metrics -- no before-and-after comparisons, no error rates, no sample sizes, no breakdown by language or practice area -- the claim is marketing, not evidence. This is not unique to Sorainen. It is endemic in legal AI. But in a market this small, where the stakes of getting it wrong are proportionally much higher, the absence of data is more conspicuous.

GPT-4o hallucinates. That is not a bug to be fixed in a future release; it is a structural feature of how large language models generate text. When AiVar drafts a clause referencing Latvian commercial law, the lawyer reviewing it needs to verify not just the language but the legal accuracy -- whether the cited provision exists, whether it applies to the jurisdiction in question, whether the AI has conflated a concept from one Baltic legal system with another. That verification requires exactly the kind of deep, language-specific legal knowledge that AI is supposedly making more efficient to deploy. The competence paradox sits at the heart of the builder strategy: if AiVar handles the work that junior lawyers used to do, who develops the expertise to check whether AiVar did it correctly?
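Part of this verification can be mechanised -- though only part. Below is a minimal sketch of a first-pass citation check, under stated assumptions: the citation format, the `KNOWN_PROVISIONS` register, and the sample draft are all invented for illustration and do not reflect any real Sorainen data source or Latvian statute index. Such a check catches citations to provisions that do not exist; whether an existing provision actually applies remains the lawyer's judgment.

```python
# Hypothetical first-pass check: do provisions cited in an AI-drafted clause
# exist in an authoritative register? All data here is invented.
import re

# Toy register of known provisions, keyed by (statute, section).
KNOWN_PROVISIONS = {
    ("Commercial Law", "169"),
    ("Commercial Law", "172"),
    ("Civil Law", "1415"),
}

# Illustrative citation pattern: "<Statute Name> Law section <number>".
CITATION_RE = re.compile(
    r"(?P<statute>[A-Z][a-z]+(?: [A-Z][a-z]+)* Law)\s+[Ss]ection\s+(?P<section>\d+)"
)

def unverified_citations(draft: str) -> list[tuple[str, str]]:
    """Return citations in the draft that are absent from the register.

    An empty result does NOT mean the draft is correct -- only that every
    cited provision exists. Whether it applies is still a lawyer's call.
    """
    found = [(m.group("statute"), m.group("section"))
             for m in CITATION_RE.finditer(draft)]
    return [c for c in found if c not in KNOWN_PROVISIONS]

draft = ("Pursuant to Commercial Law section 172 and "
         "Commercial Law section 999, the board shall...")
flagged = unverified_citations(draft)  # the non-existent section 999
```

Note what the sketch cannot do: it has no view of whether section 172 governs the situation at hand, which is exactly the competence the paradox above puts at risk.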

The Competence Question

The EU AI Act's Article 4 requires that organisations deploying AI systems ensure their staff possess "sufficient AI literacy." For a firm of Sorainen's size -- more than 300 lawyers across three jurisdictions -- this is not an abstract compliance exercise. It is a concrete operational question. Do the lawyers using AiVar understand what a large language model can and cannot do? Do they know when to trust the output and when to verify independently? Have they been trained to recognise the specific failure modes of GPT-4o on Baltic legal text?

Sorainen has not published an answer to any of these questions. The AiVar announcement describes the tool's capabilities. It does not describe how the firm ensures its lawyers are competent to use those capabilities safely. That gap is not unique to Sorainen -- as we will see, it runs through every Baltic legal AI deployment -- but it is most striking in the builder case, because building the tool implies the deepest commitment to its use, and therefore the greatest obligation to ensure that use is responsible.


The Partner: COBALT and Lexu AI

A Different Approach

While Sorainen was building, COBALT was partnering. The distinction is not merely tactical. It reflects a fundamentally different theory about where legal AI value comes from and how a law firm should relate to the technology it deploys.

In its partnership with Lexu AI, COBALT chose co-development -- not beta testing, not licensing, but active participation in shaping a product built specifically for the Baltic legal market. COBALT partner Ugis Zeltiņs described it as "a logical next step to our commitment of driving our firm's digital transformation." The language is corporate, but the decision is substantive. COBALT is betting that the best legal AI for Baltic markets will come from Baltic builders who understand the specific challenges of these jurisdictions, and that a law firm's role is to shape that product from the inside rather than build one from scratch.

Lexu AI itself is worth understanding on its own terms. Founded by Martiņs Odobers (CEO) and Tomass Zalamans (CTO), it positions itself as the first AI-powered legal research platform in the Baltic States. That claim is narrow and specific, and it is probably accurate -- there is nothing else quite like it in the region.

The Architecture That Matters

The engineering behind Lexu AI is genuinely interesting, and it reveals why the partnership model might produce something that the build model cannot.

Lexu uses a three-layer search architecture: vector search for semantic similarity, AI indexing for document classification and extraction, and a cross-reference network that maps relationships between cases, statutes, and legal concepts. This is not a chatbot with a search bar attached. It is a purpose-built legal research engine designed from the ground up to handle the specific challenges of Baltic legal text.

That third layer -- the cross-reference network -- is where the real value lies. In Latvian legal research, finding a relevant case is only the beginning. Understanding how that case relates to the governing statute, how it has been interpreted by subsequent decisions, and how it connects to parallel provisions in EU law requires a mapping of relationships that keyword search cannot provide and that general-purpose LLMs have no training data to support. Building that map requires deep domain expertise in Baltic legal systems combined with natural language processing specifically tuned for these languages.

Lexu AI claims it can produce a document summary in four seconds, deliver seventy-five percent time savings, and surface three times more relevant insights than manual research. These are impressive numbers. They are also, like Sorainen's claims, presented without published sample sizes, baselines, or methodology. The pattern is depressingly consistent across legal AI: bold claims, zero evidence.

What separates Lexu from AiVar, technically, is the training data. AiVar wraps GPT-4o -- an English-dominant model asked to perform in Baltic languages. Lexu AI is trained on actual Latvian court rulings from the Manas Tiesas/e.lieta system, built specifically to understand Latvian legal language and concepts. The difference is not marginal. It is the difference between asking a brilliant English-speaking lawyer to work in Latvian with a dictionary, and asking a Latvian-speaking lawyer to work in Latvian with purpose-built tools.

The limitation is scope. Lexu AI currently covers Latvian courts only. InfoCuria (CJEU) and HUDOC (ECtHR) coverage is forthcoming. Estonian and Lithuanian courts are not yet integrated. For a platform that markets itself as a Baltic solution, that is a significant gap. COBALT operates across all three Baltic states, and a research tool that works in Latvia but not in Estonia or Lithuania is, at best, a partial solution.

The Academic Connection

In June 2025, Lexu AI announced a partnership with the University of Latvia's Faculty of Law. Dean Edvins Danovskis endorsed the collaboration, and Odobers committed to delivering lectures on AI in legal practice. This is a small but significant development. It means that the next generation of Latvian lawyers will encounter legal AI not as a product pitch from a vendor but as a component of their legal education, integrated into the curriculum by the people building the tools.

Whether this produces better outcomes than Sorainen's internal approach remains to be seen. But it addresses something that internal deployment cannot: the broader question of professional competence. AiVar trains Sorainen's lawyers to use Sorainen's tool. The Lexu-University of Latvia partnership has the potential -- at least in principle -- to build a generation of lawyers who understand legal AI as a category, not just as a specific product. That distinction matters enormously as Article 4 compliance moves from aspiration to enforcement.


The Buyer: Luminance Across the Baltics

Two Firms, One Vendor

There is a third strategy in play, and both firms chose it: buying an established international product. Both Sorainen and COBALT deployed Luminance for M&A due diligence, and the convergence is telling. Build or partner for the daily work of legal research and document preparation. Buy for the high-stakes, high-volume, time-pressured world of deal execution.

COBALT rolled out Luminance across all three Baltic offices -- Estonia, Latvia, and Lithuania -- making it available to more than 180 attorneys. Sorainen tested it on a live cross-border M&A transaction: over 600 documents in five languages -- English, Estonian, Lithuanian, Latvian, and Spanish. Toomas Prangli, co-head of Sorainen's Corporate M&A practice, reported that "we were able to pinpoint areas of concern far more quickly."

The appeal of Luminance for M&A due diligence is straightforward. In a typical cross-border transaction, lawyers must review hundreds or thousands of documents under severe time pressure, identifying risk provisions, unusual clauses, and potential deal-breakers across multiple languages and jurisdictions. This is precisely the kind of pattern-matching, high-volume task where machine learning excels -- or at least where the marketing claims it excels.

The Marketing and the Reality

Luminance markets itself as delivering "eighty percent time savings" on document review. That figure, if accurate, would represent a transformative improvement in one of law's most labour-intensive processes. But independent assessments paint a more nuanced picture.

A review published by the Nevada Bar documented several practical limitations that Luminance's marketing materials tend not to emphasise. The platform does not integrate with all virtual data rooms -- a significant constraint in M&A practice, where documents arrive through whatever system the counterparty has chosen. Manual tagging is still required for many document categories. The system works primarily with Word documents; PDFs require conversion, introducing a preprocessing step that adds time and potential formatting errors. And critically, Luminance cannot cross-check contract provisions against governing statutes -- it identifies patterns within documents but does not verify those patterns against the applicable law.

Independent product reviews have noted similar constraints. Luminance is effective at surfacing anomalies and clustering similar provisions across large document sets. It is less effective at the interpretive work that makes due diligence valuable: determining whether an unusual clause is unusual-but-benign or unusual-and-dangerous, assessing whether a risk provision has practical significance given the specific regulatory context, or identifying gaps -- provisions that should be present but are not.

For Baltic cross-border transactions, the language question resurfaces with particular force. Luminance claims to be "language-agnostic," and its pattern-recognition approach does have genuine multilingual capability. But the system itself acknowledges the need for human oversight on jurisdictional nuances -- the very nuances that define Baltic legal work. When a Latvian asset purchase agreement uses terminology that has no direct equivalent in Estonian company law, Luminance can identify the clause but cannot assess whether the conceptual mismatch creates legal risk. That assessment requires a lawyer who understands both systems deeply enough to recognise the gap.

The Metrics Gap

Neither Sorainen nor COBALT has published actual performance data from their Luminance deployments. No time savings measurements against pre-Luminance baselines. No deal counts showing before-and-after completion rates. No error rates. No analysis of cases where Luminance missed something significant or flagged something irrelevant. No return-on-investment calculations.

This is a pattern that runs through every AI deployment discussed in this analysis, and it is worth pausing on. These are sophisticated, well-resourced law firms. They employ partners who routinely advise clients on the evidentiary standards required for regulatory compliance. They would never accept from a client the kind of unsubstantiated claims they make about their own technology deployments. "Significant time savings" and "pinpointing areas of concern far more quickly" are marketing statements. They are not evidence of anything.

In a profession that is about to face regulatory scrutiny on its AI competence -- Article 4 does not distinguish between AI providers and AI deployers -- this evidentiary vacuum is more than an inconvenience. It is a compliance risk. How can a firm demonstrate that its lawyers have "sufficient AI literacy" to use Luminance responsibly if it has not measured, documented, or published the tool's actual performance in its own practice?


The Language Wall

Ninety Percent English

Every deployment described above -- AiVar, Lexu AI, Luminance -- confronts the same structural reality. Over ninety percent of the data used to train large language models is in English. Estonian, Latvian, and Lithuanian are among the smallest languages in the European Union by speaker count. Their legal systems, while sophisticated, produce a correspondingly small volume of digitised case law, statutory commentary, and legal scholarship compared to English, German, or French.

This is not a gap that effort or funding will close quickly. Language models learn from data. More data in a given language produces better performance in that language. The total corpus of digitised Latvian legal text -- court rulings, legislative commentary, academic articles, practitioner guides -- is a fraction of what exists in English. And legal language is not general language. The words mean specific things in specific contexts, and those specifics vary between jurisdictions. The Latvian concept of labticības princips (good faith principle) does not map precisely onto the English "good faith" or the German Treu und Glauben. An LLM trained predominantly on English legal text may recognise the phrase but cannot reliably apply the concept as a Latvian court would.

The three Baltic states have each recognised the problem, and each has responded differently. Lithuania completed procurement for a national language model. Estonia is funding the University of Tartu to develop training data for Estonian-language AI. Latvia has made Latvian language priority a centrepiece of its national AI Centre strategy. None of these efforts has yet produced a production-ready legal AI capability.

What This Means for Each Strategy

The language wall affects each of the three strategies differently, and the differences are revealing.

For Sorainen's AiVar, the language wall is the most acute problem. GPT-4o is an English-dominant model. Its Latvian capability is incidental to its training, not intentional. When AiVar processes a Latvian contract or a Lithuanian court ruling, it is essentially performing in a language it learned as a byproduct of scraping the multilingual web, not as a result of targeted legal training. The wrapper Sorainen built can compensate for some of this through careful prompt engineering and retrieval-augmented generation, but it cannot fix the underlying limitation: the model does not understand Latvian law the way it understands English law, because it has seen vastly less of it.

For Lexu AI, the language wall is both the core problem and the core opportunity. By training specifically on Latvian court rulings from Manas Tiesas and building a purpose-designed architecture for Latvian legal language, Lexu is attempting something that general-purpose models cannot: native fluency in a specific legal jurisdiction. This is why the three-layer architecture matters -- it is not trying to make a general model work in Latvian, it is building a Latvian-first system from the ground up. The limitation is that this approach requires replication for each jurisdiction. Latvian coverage does not automatically extend to Estonian or Lithuanian courts, and expanding to those systems will require comparable investments in jurisdiction-specific training data and legal ontology.

For Luminance, the language wall is theoretically less severe because pattern recognition is less language-dependent than generative text. Sorainen's five-language M&A test -- English, Estonian, Lithuanian, Latvian, and Spanish -- demonstrates that document clustering and anomaly detection can function across languages. But the interpretive gap remains. Luminance can identify that a clause in a Latvian contract differs from the pattern established by English-language precedents. It cannot determine whether that difference reflects a Latvian legal requirement, a drafting idiosyncrasy, or a genuine risk -- and that determination is the entire point of due diligence.
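Returning briefly to the wrapper strategy: the retrieval-augmented generation mentioned above as partial compensation can be sketched minimally. Everything below is an assumption for illustration -- the snippets, the overlap-based retriever (a stand-in for a real embedding index), and the prompt template bear no relation to AiVar's actual implementation. The idea is simply that grounding the English-dominant engine in retrieved jurisdiction-specific text reduces, without eliminating, its tendency to guess.

```python
# Minimal retrieval-augmented generation sketch: retrieve jurisdiction-
# specific snippets, then constrain the model to answer from them.
# All snippets and the retrieval method are invented for illustration.

SNIPPETS = [
    "Commercial Law s.169: board members act as honest and careful managers.",
    "Labour Law s.101: notice of termination must state its justification.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank stored snippets by word overlap with the question -- a toy
    stand-in for a real vector index."""
    q = set(question.lower().split())
    scored = sorted(SNIPPETS,
                    key=lambda s: len(q & set(s.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the provisions below; say 'not found' otherwise.\n"
        f"Provisions:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt("What must a notice of termination state?")
# The prompt would then be sent to the model (API call omitted here).
```

Even with this pattern, the underlying limitation stands: retrieval supplies the right text, but interpreting it still leans on the model's thin exposure to Baltic legal language.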


While Firms Improvise, What Is the State Building?

The €500 Million Promise

On April 2, 2025, Latvia launched its national AI Centre -- the Mākslīgā intelekta centrs, or MIC. The underlying law had been adopted less than a month earlier, on March 6, 2025. The accompanying promises were substantial: €500 million in investments over five years, five international projects, two to three AI solutions deployed annually in public administration, and regulatory sandboxes for AI testing.

I have watched government AI initiatives in this region for long enough to know that announcement-day promises and five-year realities rarely correspond. The €500 million figure is an aspiration, not a commitment. It includes hoped-for private investment, projected EU funding, and national co-financing that has not yet been approved through the annual budget process. The five international projects are unnamed. The two to three annual AI solutions in public administration are undefined.

What is real: a nine-member governance board, a secretariat housed at VDAA (the national IT authority), and one concrete, funded project. The AIFA-LAT initiative, running from 2026 to 2028, has an €8.4 million total budget with €3.98 million in confirmed national co-funding. The partners are credible: Riga Technical University, University of Latvia, the Culture Information Systems Centre, VDAA, the Latvian AI Association, and the IT Cluster. The focus areas -- healthcare, cybersecurity, language and culture, industrial automation, quantum technology -- are sensible but do not include legal services as a priority domain.

The Regulatory Sandbox Opportunity

From a regulatory perspective, the most significant element of Latvia's AI Centre is not the funding figures but the commitment to regulatory sandboxes. Under the EU AI Act, regulatory sandboxes provide controlled environments where AI systems can be tested under real-world conditions with regulatory oversight but without the full weight of compliance obligations. For legal technology, this could be transformative -- a space where tools like AiVar or Lexu AI could be tested against actual case outcomes, with transparent methodology and published results, before being deployed in live practice.

The opportunity here is not just operational. It is evidential. Every firm deploying legal AI will eventually need to demonstrate compliance with Article 4. A regulatory sandbox that produces standardised, peer-reviewed performance data -- accuracy rates by language, error rates by document type, hallucination frequency on jurisdiction-specific questions -- would give firms something they currently lack entirely: independent evidence that their tools work as claimed.

Whether Latvia's AI Centre will prioritise legal technology within its sandbox programme remains to be seen. The current focus areas suggest it will not, at least initially. But the infrastructure is being built, and the EU AI Act enforcement timeline -- Article 4's AI literacy provision applied from February 2, 2025, with full application from August 2, 2026 -- creates pressure that may accelerate priorities.

The Latvian Language Model Priority

The most directly relevant element of the MIC strategy for legal AI is the commitment to Latvian language capabilities in LLMs. If the national AI Centre can produce or facilitate a foundation model with genuine Latvian language competence -- not the incidental Latvian that GPT-4o picked up from web scraping, but intentional, structured, quality-controlled Latvian training data -- it would fundamentally change the economics of legal AI in Latvia. Every tool built on top of that foundation, from AiVar to Lexu AI to whatever comes next, would benefit.

This is where the state's role is genuinely important and where the €500 million aspiration, if even partially realised, could have disproportionate impact. No single law firm or startup can fund the development of a national language model. The training data requirements are too large, the commercial return too uncertain, and the benefits too diffuse. This is a public goods problem, and it requires a public goods solution. The question is whether Latvia's government will deliver that solution on a timeline that matters -- before August 2026, when EU AI Act compliance moves from theoretical to enforceable.


The Global Mirror

What the World Is Spending

To understand what is happening in the Baltics, it helps to understand the scale of what is happening everywhere else. Harvey, the legal AI platform backed by Sequoia Capital, has reached a valuation of approximately $8 billion. Lawhive in the United Kingdom raised substantial venture funding on the premise of replacing solicitors with AI for consumer legal work. In the United States alone, more than a thousand instances of AI hallucination in legal filings have been documented since 2023 -- lawyers citing cases that do not exist, submitting briefs drafted by ChatGPT without verification, presenting AI-generated analysis as their own work product.

The Baltics are not immune to these dynamics. They are a microcosm of them. The same pressures -- competitive differentiation through technology, client demand for efficiency, fear of being left behind -- operate in Riga and Tallinn and Vilnius just as they do in London and New York. The difference is that in a market of six million people, where the major firms are known to each other and to their regulators by name, the consequences of getting it wrong are more visible and more personal.

When a Magic Circle firm deploys an AI tool that hallucinates a case citation, the error can be absorbed by the sheer mass of the institution. When a Baltic firm of 300 lawyers deploys a tool that produces an incorrect legal analysis in a language the tool was not designed to master, the reputational and professional consequences are concentrated and immediate. This is why the Baltics are the laboratory. Not because the experiment is smaller, but because the results are more legible.


The Shared Gap

Three Strategies, One Missing Piece

Build. Partner. Buy. Sorainen, COBALT, and their technology partners have collectively made significant investments in legal AI across the Baltic region. Some of those investments are genuinely impressive -- Lexu AI's architecture, Sorainen's Crespect spin-off, COBALT's cross-border Luminance deployment. The firms are not standing still. They are not pretending AI is irrelevant. They are actively, competitively, and publicly engaged with the technology.

And yet, across all three strategies, the same gap persists. None of these firms has publicly addressed how they ensure that the lawyers using these tools have the AI literacy to use them safely.

This is not a trivial omission. Article 4 of the EU AI Act states that providers and deployers of AI systems shall "take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf." The provision came into effect on February 2, 2025. It is not a recommendation. It is a legal obligation.

For Sorainen, this means that every lawyer using AiVar needs to understand what GPT-4o can and cannot do -- its hallucination tendencies, its language limitations, its inability to verify legal citations against authoritative sources. Has Sorainen conducted AI literacy training for its 300+ lawyers? Has it assessed their understanding? Has it documented the results? The AiVar announcement does not say.

For COBALT, this means that every lawyer using Lexu AI needs to understand the boundaries of its Latvian court coverage -- that it does not yet include Estonian or Lithuanian courts, that its claims of seventy-five percent time savings are unverified, that the cross-reference network, however sophisticated, is only as reliable as the data it was trained on. Has COBALT trained its attorneys to use Lexu with appropriate caution? The partnership announcement does not say.

For both firms using Luminance, this means that every lawyer conducting AI-assisted due diligence needs to understand the tool's real limitations -- the data room integration constraints, the Word-only format requirement, the inability to cross-reference against governing statutes, the gap between marketing claims of eighty percent time savings and typical operational reality. Have the firms trained their M&A teams to compensate for these limitations? Neither firm has published evidence that they have.

The Competence Paradox at Scale

There is a deeper problem beneath the compliance gap, and it runs through the entire legal profession's engagement with AI. If these tools work as advertised -- if AiVar really does handle the research and drafting that junior lawyers used to do, if Lexu really does surface insights three times faster than manual research, if Luminance really does compress due diligence timelines by eighty percent -- then the lawyers who rely on them will gradually lose the skills needed to verify their output.

This is the competence paradox, and the Baltics illustrate it with particular clarity. A junior lawyer at Sorainen who uses AiVar for her first three years of practice will have a fundamentally different skill set than one who spent those years doing the work manually. She may be more productive by conventional metrics. She may deliver more documents in less time. But will she have developed the deep, intuitive understanding of Latvian contract law that comes from reading hundreds of agreements clause by clause? Will she recognise when AiVar gets something wrong -- not obviously wrong, where the error is flagrant, but subtly wrong, where the AI produces language that sounds correct but misstates a legal concept or overlooks a jurisdictional nuance?

Nobody knows. Nobody has studied it. Nobody in the Baltic legal market has even published a framework for thinking about it. And the clock is running: the EU AI Act reaches full application in August 2026. By then, every firm deploying AI -- not just in the Baltics but across Europe -- will need to demonstrate not merely that it has AI tools, but that the humans using those tools are competent to do so.

The Baltics cannot afford to wait for the rest of Europe to solve this problem. The markets are too small, the firms too visible, the languages too challenging, and the regulatory stakes too concrete. What happens here in the next eighteen months will determine whether the Baltic laboratory produces a model for responsible legal AI adoption or a cautionary tale about the gap between technological ambition and professional competence.

What Would Evidence Look Like?

It is easy to identify the gap. It is harder to specify what filling it would require. But the outline is not mysterious. A firm that takes Article 4 seriously would, at minimum, do the following.

First, measure the tools. Publish actual performance data -- accuracy rates, hallucination frequency, language-specific error analysis -- under conditions that approximate real use. Not marketing case studies. Not testimonials from partners. Structured, replicable assessments that could withstand the scrutiny the firm would apply to evidence presented by an opposing party.

Second, train the people. Develop and deliver AI literacy programmes that go beyond "here is how to use our new tool" and address the foundational questions: what large language models are, how they fail, when to trust their output, and when to verify independently. Document that training and assess its effectiveness.

Third, monitor the practice. Establish ongoing quality assurance for AI-assisted work product. Track error rates over time. Identify patterns. Determine whether AI is genuinely improving outcomes or merely accelerating the production of work that no one has time to check.

Fourth, publish the results. The legal profession's instinct is to keep internal quality data confidential. But Article 4 compliance is not an internal matter -- it is a regulatory obligation, and regulators will eventually want to see evidence. Firms that build that evidence base now will be better positioned than those that scramble to produce it when enforcement begins.

None of this is happening in the Baltics. Not publicly. Not yet. And that is what makes this a laboratory rather than a success story. The experiment is running. The tools are deployed. The lawyers are using them. But the most important variable -- whether those lawyers are competent to use them safely -- is unmeasured.
The Experiment Continues

The Baltic legal market is doing something genuinely valuable for the rest of Europe, even if it does not intend to. By deploying three distinct AI strategies in a compact, visible, multilingual environment, it is generating the real-world evidence that the broader profession needs -- evidence about what works, what fails, and what the gap between aspiration and reality actually looks like when AI meets complex, non-English legal systems.

Sorainen's bet on building tells us something about the limits of wrapping an English-language model in a Baltic-language interface. COBALT's bet on partnering tells us something about the promise -- and the current constraints -- of purpose-built legal research tools for small-language jurisdictions. Both firms' bet on Luminance tells us something about the gap between marketing claims and operational reality in legal AI due diligence. Latvia's AI Centre tells us something about the distance between government promises and funded, delivered infrastructure.

And the shared absence -- the lack of published evidence on AI literacy, competence assessment, or quality assurance for AI-assisted legal work -- tells us something important about the entire profession. The tools are ahead of the training. The deployment is ahead of the competence. The marketing is ahead of the evidence.

The Baltics are small enough that this gap is visible. The rest of Europe should be watching, because the same gap exists everywhere. It is just harder to see.

The Twin Ladder Casebook Series examines how organisations navigate the gap between AI adoption and AI competence. Each case draws on published sources and identifies where the evidence ends and the assumptions begin. For more on Article 4 compliance and AI literacy frameworks, visit twinladder.lv.