
The AI Tax Is Real. Use Design to Get Your Refund.


AI doesn’t just add work; it changes the work, in ways that are now empirically documented. HBR’s article “AI Doesn’t Reduce Work—It Intensifies It” confirms what I called “the AI tax” almost a year ago: AI increases the volume, velocity, and ambiguity of work unless organizations intentionally design against that outcome.

When the research catches up

In the AI tax post, I argued that AI doesn’t arrive purely as a productivity dividend; it also arrives as six categories of new work: juggling and toolchain dispersion, verification, data preparation, compliance and security, the burden of failed projects, and continuous learning and relearning. Those categories emerged from conversations with teams already using AI in practice, where I found users toggling between tools, collating outputs, and cleaning data instead of doing the “high-value” work they were promised.

The HBR piece, written by Aruna Ranganathan and Xingqi Maggie Ye, offers a rare longitudinal look at that reality, following nearly 200 employees at an American tech company over eight months to see how generative AI actually changed their work. Their conclusion is clear: AI tools don’t reduce work; they intensify it. Employees worked faster, took on a wider range of tasks, and extended their work further into the day, often without a manager asking them to do so.

Simply put, the study provides an ethnography for the AI tax’s categories of work.


Three ways AI intensifies work

The HBR research identifies three main patterns of intensification that emerge as AI tools move from rollout to daily use.

  1. Expanded work scope
    Once AI becomes available, people don’t just do the same things faster; they start doing more types of work. Product managers and researchers begin writing and reviewing code; employees perform tasks that previously would have required new headcount; and individuals reclaim work that was outsourced, postponed, or simply avoided. At one level this looks like empowerment. A closer look reveals engineers who advise colleagues on AI-assisted code, review a flood of partial pull requests, and fix the low-quality “workslop” that arrives in their queue dressed up as finished work.
  2. Blurred boundaries between work and non-work
    AI makes it easy to “just try something” throughout the day: a quick prompt during lunch, another refinement before heading to a meeting, an idea tested on the phone in bed late at night. Those micro-sessions don’t feel like extra work, but over time they eliminate breaks and recovery, creating a sustained state of cognitive engagement. Workers in the study reported that, as prompting became their default during downtime, their breaks no longer felt restorative.
  3. Multitasking and increased cognitive load
    Employees run multiple AI agents and threads in parallel, let the AI generate alternative versions while they write, and keep half an eye on the output while trying to focus on something else. The presence of a “partner” who never tires encourages constant context switching: checking, nudging, re-prompting, and adjusting. The result is an always-on mental load, even as visible throughput increases.

If you have read my AI tax post, these themes will sound very familiar, because they are the lived experience behind the categories.


How the AI Tax Explains the Intensification

In “The AI Tax” I described six ways that AI creates more work than it saves when deployed without design. The new HBR research fits neatly into that framework.

  • Juggling with AI: multitasking, switching, distractions
    The study’s third pattern, increased multitasking, is the human experience of juggling AI tools, agents, and interaction metaphors. In my post, I wrote about toolchain dispersion: one AI for scheduling, another in email, a third hidden in the CRM, each with a different interface, set of capabilities, and quirks. The result is a workday that feels like a constant juggling exercise, with dozens of small tasks competing for focus.
  • Verification: the problem of oversight and hallucinations
    Scope expansion sounds impressive until you remember that every AI-generated draft, whether a document, a snippet of code, or a marketing campaign, requires verification. The HBR study documents engineers who began spending significant time reviewing AI-assisted work done by colleagues outside their discipline, often through informal Slack exchanges and favors. This is the “shadow labor” of the AI tax: real work without a line item in the project plan, absorbed by people already at capacity.
  • Data preparation and readiness: hidden work exposed
    AI makes data problems visible. When employees eagerly expand their scope, writing analyses, reports, or prototypes they might not have attempted before, they quickly encounter scattered, mislabeled, or outdated data. That friction forces them into ad-hoc data wrangling: reconciling formats, seeking out authoritative sources, and learning just enough about the organization’s data architecture to be dangerous.
  • Compliance and security: governance lags adoption
    As AI generates content more quickly, questions of tone, bias, privacy, and regulatory risk become daily concerns rather than edge cases. The HBR article hints at this only indirectly, but the connection to my AI tax category is direct: when governance lags adoption, each step forward requires a detour to verify compliance and appropriateness. That friction isn’t visible in vendor demos, but employees feel it right away.
  • Failed projects and the abandonment cycle
    The study reflects enthusiastic early experimentation: people “just trying things out” with AI. In my post, I warned that this pattern often devolves into a cycle of pilots that never connect to real workflows, bots that are quietly abandoned, and technical debt that someone has to clean up. As each failed experiment leaves behind traces, partial automations, and skeptical users, the AI tax grows over time.
  • Learning and relearning: AI as a moving target
    Finally, both the HBR article and my AI tax post point to learning churn. Every model update, interface change, and new feature, not to mention the arrival of an entirely new tool, forces people back into training mode. Add social FOMO (“Have you tried the latest model?”) and you get a culture in which workers are expected to keep up with a constantly changing AI landscape while maintaining their existing responsibilities.

The point is not that AI cannot create value. It is that AI scales value and complexity together, and the complexity arrives first.


The mirage of free time

When AI works, when it actually speeds up a task or simplifies a workflow, a different question emerges: what happens to the freed-up time? In the AI tax article, I argued that this is not a technical question but a leadership and policy challenge. Without intentional design, freed-up time gets reabsorbed into:

  • More tasks, often vaguely defined as “strategic work” or “innovation.”
  • Informal expectations that individuals will take on additional responsibilities because “tools make it faster now.”
  • Subtle pressure to maintain or increase output rather than using time for recovery, learning, or collaboration.

The HBR study makes this dynamic visible. Employees used AI to shave time off tasks, then filled the margins with new work: helping colleagues, experimenting with additional prompts, or expanding their responsibilities into areas that were previously out of scope. They felt more productive, but no less busy. Over time, the initial excitement gave way to exhaustion and cognitive fatigue.

This is the core of the AI tax argument: if organizations do not deliberately decide how to treat the time AI saves, the default will always be intensification, not liberation, and in many cases replacement rather than augmentation.


Designing against intensification

The HBR authors suggest that to prevent intensification from becoming the default, organizations need clear “AI practices”: norms about when to use AI, when not to, and how to manage AI-enabled work sustainably. The AI tax framework aligns with that call and provides a solid starting point.

Informed by both the research and the AI tax framework, here are several design moves leaders can make:

  • Standardize the AI stack
    Reduce toolchain dispersion by choosing a small number of platforms and building around them. Consolidation reduces cognitive switching costs, simplifies governance, and makes it easier to design training that sticks rather than chasing every new feature.
  • Make verification visible and accountable
    Stop treating verification as invisible heroism. Assign review responsibilities, track the time they take, and factor that time into project plans and ROI claims. This is not just fairness; it generates the data needed to decide where AI actually helps and where it merely redistributes labor.
  • Invest in data before scale
    Many of the frustrations highlighted in the study, such as partial results, confusing outputs, and reliance on “vibe” coding, stemmed from poor data, unclear standards, or missing context. Cleaning, tagging, and aligning data is unglamorous work, but it is necessary if AI is to produce outputs that reduce work rather than create additional cleanup work.
  • Run time-boxed pilots with real endings
    Organizations should treat AI pilots as experiments with clear timelines and decision gates rather than permanent, half-finished features. At the end of a pilot, either commit and invest, or turn it off and document what was learned so you don’t repeat the same mistakes later. I also regularly argue that AI needs knowledge management, but accelerated AI adoption often crowds out its implementation.
  • Protect human time as an asset
    Perhaps most important: decide in advance how freed-up time will be reclaimed, purposefully. Instead of silently harvesting shadow productivity gains, explicitly allocate some portion for rest, reflection, mentorship, and exploration. If AI is to be an ally, it must create the conditions for better human decision-making, not just greater throughput.
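Making review time visible, as the design moves above suggest, comes down to simple arithmetic. Here is a minimal sketch of that bookkeeping (my illustration, not from the HBR study; the function and every number in it are hypothetical):

```python
# Illustrative sketch only: a back-of-the-envelope model of the "AI tax".
# All names and numbers here are hypothetical, not from the HBR study.

def net_minutes_saved(baseline_min: float,
                      ai_draft_min: float,
                      review_min: float,
                      rework_rate: float,
                      rework_min: float) -> float:
    """Net minutes saved per task once the hidden work is counted.

    gross saving = manual time minus AI-assisted drafting time
    AI tax       = mandatory review plus the expected cost of reworking
                   flawed output (probability of rework x rework time)
    """
    gross_saving = baseline_min - ai_draft_min
    ai_tax = review_min + rework_rate * rework_min
    return gross_saving - ai_tax

# A 60-minute task now drafts in 15 minutes, but needs 20 minutes of review
# and carries a 25% chance of 40 minutes of rework:
print(net_minutes_saved(60, 15, 20, 0.25, 40))   # 45 - 30 = 15.0 minutes saved

# Heavier review and a coin-flip rework rate turn the dividend into a deficit:
print(net_minutes_saved(60, 15, 30, 0.50, 40))   # 45 - 50 = -5.0 minutes
```

The point of tracking these terms separately is that only the gross saving shows up in vendor demos; the review and rework terms are the shadow labor that ROI claims need to absorb.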


From AI Tax to AI Practice

The convergence between the HBR research and the AI tax is encouraging because it suggests that we are moving out of the speculative phase of AI and into a more empirical, design-oriented phase. We now have solid evidence that, left to its own devices, AI does not reduce work; it reduces friction and invites more work.

The task for leaders is to treat these realities as design constraints rather than inconveniences. The AI tax identifies where the costs accumulate; the HBR article shows how those costs manifest in a real organization over time. Between them lies the opportunity to build “AI practices” that respect human boundaries, protect time, and ensure that intensification is a choice rather than an accident.

By Daniel Rasmus
