"The Vicious Circle" --- How an Accounting Firm Forgot How to Count
Twin Ladder Casebook | February 2026
1. The Hook
She has done this reconciliation a thousand times. Perhaps two thousand. The fixed asset register, the depreciation schedules, the balancing entries that close a fiscal year --- these are the tasks that defined her professional identity for a decade before the software arrived.
Now the software is gone.
The vendor terminated the contract. The migration to the new platform will take six weeks. In the meantime, the work must be done by hand, the way it was done before. The way she used to do it.
She opens the spreadsheet. She stares at the columns. She knows what a depreciation schedule is. She can explain the concept to a junior colleague. But the sequence --- the specific steps, the order of operations, the checks that catch an error before it compounds --- is not there. It is not in her hands. It is not in her memory. It is somewhere in the system that no longer exists.
She is not incompetent. She is not unintelligent. She is a professional who, over the course of several years, allowed a machine to hold her competence for her. And now the machine is gone, and the competence went with it.
This is not a thought experiment. This happened. Researchers from Aalto University documented it, studied it, and gave it a name: the vicious circle of skill erosion.
2. The Story
The Firm, the System, the Discovery
Between 2019 and 2023, a team of researchers at Aalto University and the University of Jyväskylä conducted a case study of a Nordic accounting firm that had adopted cognitive automation software for core bookkeeping and financial management tasks. The research was led by Tapani Rinta-Kahila and Esko Penttinen of Aalto University, alongside Antti Salovaara, Wael Soliman, and Joona Ruissalo. Their findings were published in the Journal of the Association for Information Systems in 2023 under a title that captures the finding with clinical precision: "The Vicious Circles of Skill Erosion: A Case Study of Cognitive Automation."
The firm had adopted the automation platform in stages. First, the system handled data entry. Then it took over reconciliation tasks. Then fixed asset management. Then increasingly complex accounting procedures that had previously required trained professional judgment. At each stage, the logic was sound: the software was faster, more consistent, and less prone to the small errors that accumulate across a busy quarter. The accountants, freed from routine work, could focus on advisory tasks, client relationships, and higher-value analysis.
That was the theory. The reality followed a different trajectory.
What the researchers documented was a three-part erosion cycle. First, automation reliance increased steadily as accountants delegated more tasks to the system. Second, complacency set in at both the individual and organizational level --- the firm stopped investing in skills training for processes the software now handled, and individual accountants stopped practicing those processes. Third, mindful engagement with the work deteriorated across three dimensions the researchers identified as activity awareness, competence maintenance, and output assessment. Accountants stopped noticing how the work was done. They stopped maintaining the ability to do it themselves. And they stopped critically evaluating whether the outputs were correct.
The vicious circle was this: each dimension of erosion reinforced the others. The less the accountants practiced, the less capable they felt, and the more they relied on the system. The more they relied on the system, the less reason they had to practice. Complacency at the individual level became normalized at the organizational level, as the firm redirected training budgets away from foundational skills and toward system operation. The circle tightened until the moment the system was removed from the IT architecture.
That was the moment of discovery. When the software was withdrawn, the firm realized that its accountants could no longer perform fundamental accounting tasks. Not because they lacked intelligence or training, but because the specific procedural competence --- the hands-on knowledge of how to execute a fixed asset depreciation schedule from memory --- had atrophied through years of disuse. The firm was forced to retrain its staff in tasks those same staff had once performed routinely.
Rinta-Kahila and Penttinen were careful to note that this was not a failure of the individuals. It was a structural outcome of how the automation had been deployed. The system had been designed to replace human cognitive work, not to augment it. There was no mechanism to ensure that the accountants retained the skills the software was performing on their behalf. Nobody had asked the question that, in retrospect, seems obvious: what happens to the people when the system holds all the knowledge? The answer, as the firm learned during those painful weeks of retraining, is that the people become operators of a system they no longer understand. And operators who do not understand the system they operate cannot detect when that system is wrong. The erosion was invisible until the moment it became catastrophic.
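The shape of that circle is easy to caricature in code. The toy model below is a sketch under invented assumptions, not anything from the study: it supposes only that practice rebuilds skill, disuse erodes it, and declining skill accelerates delegation. Every rate is made up.

```python
# Toy model of the vicious circle: reliance -> less practice -> less
# skill -> more reliance. Every rate here is an illustrative assumption,
# not a parameter from the Rinta-Kahila et al. study.

def simulate(quarters: int = 24) -> None:
    skill = 1.0      # manual proficiency (1.0 = fully fluent)
    reliance = 0.30  # share of the task delegated to the system

    for q in range(1, quarters + 1):
        practice = 1.0 - reliance
        # Practice rebuilds skill; disuse erodes it in proportion to reliance.
        skill = max(0.0, min(1.0, skill + 0.15 * practice - 0.10 * reliance))
        # Staged rollout plus complacency: delegation creeps upward every
        # quarter, and faster as manual skill declines.
        reliance = min(0.95, reliance + 0.05 + 0.05 * (1.0 - skill))
        print(f"Q{q:02d}  reliance={reliance:.2f}  skill={skill:.2f}")

if __name__ == "__main__":
    simulate()
```

Run it and the printout behaves the way the case did: skill sits at its ceiling for the first several quarters while reliance climbs, then falls sharply once practice no longer offsets disuse. Nothing looks alarming until, abruptly, it is.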
The Classroom Mirror
The accounting firm is not an isolated case. The same dynamic appears, with different parameters, in education research. In 2025, a study published in Frontiers in Psychology by Jose, Cherian, Verghis, Varghise, and Joseph examined the cognitive paradox of AI-assisted learning. Their findings were striking: students who used AI tools answered forty-eight percent more problems correctly than their peers. By any standard productivity metric, AI was working. But when those same students were tested on conceptual understanding --- whether they grasped the underlying principles behind the problems they had solved --- they scored seventeen percent lower.
The researchers called this the "cognitive paradox of AI in education: between enhancement and erosion." AI had amplified procedural performance while simultaneously degrading the conceptual foundation on which that performance depended. The students could produce correct answers. They could not explain why the answers were correct. They had acquired the illusion of competence --- a state in which frequent successful output masks the absence of genuine understanding.
The parallel to the accounting firm is direct. The accountants could operate the system. They could not operate without it. The students could solve problems with AI. Without it, they could not do the very thing that solving problems is meant to teach. In both cases, the metrics that mattered --- productivity, accuracy, throughput --- looked excellent right up to the moment they collapsed. The illusion of competence is, by definition, invisible to the person experiencing it. You do not know what you have forgotten until the moment you need it.
3. Through the Twin Ladder Lens
The Aalto University case is a study in Level 1 failure.
In the Twin Ladder framework, Level 1 --- the Professional Twin --- describes the deployment of AI as a mirror for individual professional roles. The purpose of a Professional Twin is not to perform the work instead of the human. It is to perform the work alongside the human, creating the conditions for comparison, challenge, and judgment-building. The professional sees what the AI produces. The professional evaluates where it falls short. The professional remains actively engaged with the domain, questioning outputs, correcting errors, and understanding why the system reached the conclusions it reached.
The accounting firm did none of this. Its automation was deployed as a replacement, not an augmentation. The system did not mirror the accountants' work --- it absorbed it. There was no mechanism for comparison, no structured moment where the accountant would independently perform the reconciliation and then compare her result with the machine's. There was no deliberate preservation of the cognitive struggle that had built her competence in the first place.
This is where Robert Bjork's research on desirable difficulties becomes essential. Bjork, a cognitive psychologist at UCLA, has spent decades demonstrating a counterintuitive principle: conditions that make learning feel harder in the moment produce dramatically better long-term retention and transfer. Spacing practice over time rather than massing it, interleaving different types of problems rather than blocking them by category, testing oneself rather than rereading --- these strategies slow down apparent performance. They also build durable competence. Bjork's research has shown that interleaved practice yields retention rates of sixty-three percent compared to twenty percent for blocked practice.
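The difference between blocked and interleaved practice is concrete enough to show in a few lines. A minimal sketch, with hypothetical topic and exercise names: the same nine items, ordered two ways.

```python
# Same exercise pool, two orderings. Blocked groups items by topic;
# interleaved alternates topics. All names here are hypothetical.
topics = {
    "depreciation":   ["dep-1", "dep-2", "dep-3"],
    "accruals":       ["acc-1", "acc-2", "acc-3"],
    "reconciliation": ["rec-1", "rec-2", "rec-3"],
}

# Blocked: finish one topic entirely before starting the next.
blocked = [ex for exercises in topics.values() for ex in exercises]

# Interleaved: round-robin across topics, so every item forces the
# learner to re-identify which method applies.
interleaved = [ex for row in zip(*topics.values()) for ex in row]

print(blocked)      # ['dep-1', 'dep-2', 'dep-3', 'acc-1', ...]
print(interleaved)  # ['dep-1', 'acc-1', 'rec-1', 'dep-2', ...]
```

The interleaved order feels harder precisely because each item demands a fresh retrieval of the relevant method, and that retrieval effort is the desirable difficulty.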
The implication for AI deployment is profound. The struggle --- the effortful, sometimes frustrating process of working through a depreciation schedule by hand, of catching one's own errors, of holding a sequence in memory rather than reading it from a screen --- is not a cost to be optimized away. It is the mechanism through which competence is built and maintained. Remove the struggle, and you remove the competence. Not immediately. Not visibly. But steadily, through the vicious circle the Aalto researchers documented: from reliance to complacency to erosion to dependence.
The Twin Ladder principle is that AI should challenge human judgment, not bypass it. A properly designed Professional Twin would have required the firm's accountants to perform manual reconciliations on a regular schedule --- not because the machine could not do them, but because the humans needed to. It would have flagged discrepancies between human and machine output for discussion. It would have treated the maintenance of professional skill not as an overhead cost but as a strategic asset.
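What would that comparison step look like? A minimal sketch, assuming nothing about the firm's actual systems: human and machine each produce a figure, and any gap becomes a discussion item rather than being silently resolved in the machine's favor. The names, data shape, and tolerance are all hypothetical.

```python
# Sketch of a Professional Twin checkpoint: the accountant and the
# system each perform the reconciliation; discrepancies are surfaced
# for discussion. Names, data shape, and tolerance are hypothetical.

from dataclasses import dataclass

@dataclass
class Reconciliation:
    account: str
    human_balance: float
    machine_balance: float

def review(entries: list[Reconciliation], tolerance: float = 0.01) -> list[str]:
    flags = []
    for e in entries:
        gap = abs(e.human_balance - e.machine_balance)
        if gap > tolerance:
            # Neither figure is presumed correct: the gap itself is the
            # learning opportunity, and a human must resolve it.
            flags.append(f"{e.account}: human {e.human_balance:,.2f} "
                         f"vs machine {e.machine_balance:,.2f}")
    return flags

for item in review([
    Reconciliation("Fixed assets", 182_400.00, 182_400.00),
    Reconciliation("Accumulated depreciation", -54_120.00, -53_870.00),
]):
    print("DISCUSS:", item)
```

The design choice that matters is in the comment: the checkpoint exists to exercise judgment, not to decide automatically which side wins.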
The firm did not lack the technology for this. It lacked the framework. It deployed a powerful tool without a theory of what that tool would do to the people using it. That absence --- the missing theory of human competence in the presence of machine competence --- is what the Twin Ladder is designed to fill.
4. The Pattern
The accounting firm in Finland is not an outlier. It is an instance of a pattern that has been documented across industries, across decades, and across levels of technical sophistication.
In 1983, the British psychologist Lisanne Bainbridge published a paper in Automatica titled "Ironies of Automation." Bainbridge observed a fundamental paradox: the more reliable an automated system becomes, the less practice its human operators get at performing the tasks the system handles --- and therefore the less capable they become at intervening when the system fails. The humans are kept in the loop precisely for the moments when automation breaks down. But the automation, by working correctly most of the time, ensures that they are unpracticed and unprepared for exactly those moments. The paper has accumulated over 4,700 citations. The ironies it identified remain unresolved.
Aviation provides the most extensively documented evidence. In 2013, the United States Federal Aviation Administration issued Safety Alert for Operators 13002, warning that "continuous use of autoflight systems could lead to degradation of the pilot's ability to quickly recover the aircraft from an undesired state." Surveys of commercial pilots have found that seventy-seven percent report their manual flying skills have deteriorated due to cockpit automation. NASA research has documented that the more sophisticated automation becomes, the less mentally engaged pilots are with the systems they are nominally supervising --- and the less capable they become of recognizing and responding to anomalies.
The most recent evidence comes from medicine. In 2025, a multicentre observational study published in The Lancet Gastroenterology and Hepatology examined what happened to endoscopists at four Polish hospitals after they had worked with an AI polyp-detection system for an extended period. The AI had been introduced as part of the ACCEPT trial (Artificial Intelligence in Colonoscopy for Cancer Prevention). Nineteen experienced endoscopists --- each with over two thousand colonoscopies to their name --- participated. The finding: adenoma detection rates during non-AI-assisted colonoscopies fell from 28.4 percent before AI exposure to 22.4 percent after. That is a six-percentage-point absolute drop and a twenty-one percent relative decline. Experienced physicians, working without the AI they had grown accustomed to, missed cancerous growths they would have previously detected. The researchers attributed the decline to "the natural human tendency to over-rely on the recommendations of decision support systems."
Bainbridge in 1983. The FAA in 2013. The Lancet in 2025. The mechanism is the same. The domains are different. The pattern is universal. Automation that replaces human cognitive effort, without preserving the conditions under which that cognitive capacity is maintained, produces a workforce that is simultaneously more dependent and less capable. The accounting firm in Finland is not an anomaly. It is the rule, playing out at different speeds and different scales in every industry where AI is being deployed as a substitute for human judgment rather than a catalyst for developing it.
5. The Lesson
Every organization deploying AI is, whether it acknowledges it or not, running the same experiment the accounting firm ran. The question is whether it will learn from that experiment before it discovers, as the firm did, that its people can no longer do the work the machines are doing for them.
The lesson is not to avoid automation. The Aalto researchers do not argue for that, and the evidence does not support it. The lesson is to build competence preservation into every AI deployment as a design requirement, not an afterthought.
This means three things in practice.
First, regular manual practice. The FAA understood this when it recommended that pilots practice manual flying during low-workload conditions. The principle transfers directly: accountants should perform reconciliations by hand on a scheduled cadence. Analysts should build models without AI assistance at regular intervals. Physicians should conduct examinations without decision-support tools often enough to maintain their diagnostic judgment. Aviation has a name for this: manual flying time. Every profession that deploys AI needs its equivalent; one simple way to schedule it is sketched after the third lesson below.
Second, assessment that measures understanding, not merely output. The education research is unambiguous on this point. A forty-eight percent improvement in problem-solving means nothing if it is accompanied by a seventeen percent decline in conceptual understanding. Organizations must test whether their people understand the work, not just whether the work gets done. Can the accountant explain the depreciation method? Can the analyst identify why the model produced its recommendation? Can the physician articulate the clinical reasoning behind the diagnosis? If the answer is no, the AI has not augmented the professional. It has hollowed them out.
Third, structural humility about what automation does to people. Bainbridge identified the ironies in 1983. The aviation industry has spent four decades grappling with them. The medical profession is confronting them now. Every industry that adopts AI at scale will encounter the same dynamic. The organizations that navigate it successfully will be those that treat human competence as an asset to be deliberately maintained, not a cost to be progressively eliminated.
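Operationally, the first lesson's cadence can start as something as blunt as routing a fixed fraction of cases to unassisted handling. A minimal sketch; the 1-in-10 ratio is an invented placeholder that a real deployment would tune per role and audit for compliance.

```python
# Route every Nth case to manual, AI-free handling so that practice is
# scheduled rather than left to chance. The 1-in-10 ratio is an
# assumption, not a recommendation.

def route(case_id: int, manual_every: int = 10) -> str:
    return "manual" if case_id % manual_every == 0 else "ai_assisted"

for case_id in range(1, 22):
    if route(case_id) == "manual":
        print(f"case {case_id}: perform without AI assistance")
```

The mechanism is trivial; the guarantee is not. Practice happens on a schedule the organization can verify, rather than whenever the system happens to fail.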
The vicious circle is not inevitable. It is a design failure. The accounting firm in Finland did not set out to deskill its workforce. It set out to make its workforce more efficient. The deskilling was an unintended consequence of a deployment that lacked a theory of human competence. The Twin Ladder provides that theory. Use it before the system goes down and the spreadsheet stares back, blank and waiting.
Monday Morning Question: When was the last time the people in your organization performed their core professional tasks without AI assistance --- and could they still do it?
Sources
- Rinta-Kahila, T., Penttinen, E., Salovaara, A., Soliman, W., and Ruissalo, J. (2023). "The Vicious Circles of Skill Erosion: A Case Study of Cognitive Automation." Journal of the Association for Information Systems, 24(5), 1378--1412. https://aisel.aisnet.org/jais/vol24/iss5/2/
- Jose, B., Cherian, J., Verghis, A. M., Varghise, S. M., and Joseph, S. (2025). "The Cognitive Paradox of AI in Education: Between Enhancement and Erosion." Frontiers in Psychology, 16, 1550621. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2025.1550621/full
- Bainbridge, L. (1983). "Ironies of Automation." Automatica, 19(6), 775--779. https://ckrybus.com/static/papers/Bainbridge_1983_Automatica.pdf
- Budzyń, K., et al. (2025). "Endoscopist Deskilling Risk After Exposure to Artificial Intelligence in Colonoscopy: A Multicentre, Observational Study." The Lancet Gastroenterology and Hepatology, 10(10). https://www.thelancet.com/journals/langas/article/PIIS2468-1253(25)00133-5/abstract
- Federal Aviation Administration (2013). Safety Alert for Operators 13002: "Manual Flight Operations." https://www.faa.gov/sites/faa.gov/files/other_visit/aviation_industry/airline_operators/airline_safety/SAFO13002.pdf
- Bjork, E. L. and Bjork, R. A. (2011). "Making Things Hard on Yourself, But in a Good Way: Creating Desirable Difficulties to Enhance Learning." In Psychology and the Real World: Essays Illustrating Fundamental Contributions to Society, 56--64. https://bjorklab.psych.ucla.edu/wp-content/uploads/sites/13/2016/04/EBjork_RBjork_2011.pdf
- Aalto University News (2024). "Researchers Warn That Skill Erosion Caused by AI Could Have a Devastating and Lasting Impact on Businesses." https://www.aalto.fi/en/news/researchers-warn-that-skill-erosion-caused-by-ai-could-have-a-devastating-and-lasting-impact-on

