AI Workload Increase: 163K-Employee Study Shows 104% More Emails

We built tools to write for us, and now we spend all our time reading. Generating a six-page document takes five seconds, but parsing that document for factual accuracy still takes a human an hour. The reality setting into corporate offices and engineering departments right now is that generative tools are acting as an additional layer of work rather than a replacement for it.

Instead of leaving work at 3 PM, employees are reviewing generated code, checking AI summaries for hallucinations, and sending twice as many messages. The promised productivity explosion has quickly morphed into an undeniable AI workload increase. The data backing up this shift is getting hard to ignore, and the developers and office workers dealing with it every day are rapidly changing how they handle software development just to keep their heads above water.

Dealing with the AI Workload Increase: User Experiences and Technical Solutions

The first place this breakdown becomes obvious is in daily task management. Workers looking to offload tedious assignments frequently find themselves spending more time managing the AI than they would have spent doing the work manually.

Take the common task of spreadsheet reconciliation. A user attempts to merge two complex Excel documents using AI, hoping to save an hour or two. What actually happens is a frustrating loop of Microsoft Copilot Excel issues. The software freezes. It claims to have created a new worksheet but produces nothing. When corrected, instead of fixing the original data, it hallucinates a completely unrelated table to justify its previous mistake. After fighting the tool for nearly two hours, the user abandons the process and does it manually in thirty minutes.

The same applies to testing and documentation. Feeding a requirements document into an AI to generate Confluence test cases frequently results in a sprawling, disorganized mess. The time required to clean up and verify the generated test cases is roughly equal to writing them from scratch. We are seeing a problem of "lossy decompression." Humans naturally communicate in short, dense formats. AI takes three bullet points and inflates them into five paragraphs of polite corporate filler. A colleague then has to read all five paragraphs just to extract the original three bullets.

Beating AI Code Bloat with a Stricter AI Code Review Process

Software engineering is experiencing this friction at scale. AI models tend to produce massive amounts of code for very simple problems. A clean, efficient function that requires 30 to 50 lines of logic when written by a senior developer often balloons into 200 lines of convoluted, heavily abstracted code when generated by an LLM.

Because AI allows developers to produce hundreds of lines of code instantly, metrics like "Lines of Code (LoC)" are skyrocketing. Management sees this as a massive productivity boost. The reality is that much of this code is bloated, redundant, or generated specifically to patch bugs caused by previous AI outputs.

To handle this, engineering teams are completely restructuring their AI code review process. Relying on an LLM to build entire modules simply does not work. Successful developers restrict AI to small, highly defined snippets, keeping the overall architecture entirely under human control.
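This "snippets, not modules" division of labor can be sketched in miniature. In the hypothetical example below, the interface, the spec, and the acceptance tests are human-authored; only the one-line body of `normalize_sku` (an illustrative name, not from any real codebase) would ever be delegated to an LLM, and the tests gate whatever comes back.

```python
import re

# The interface and spec are human-owned; only the body under the
# docstring is the part a developer might let an LLM draft.
def normalize_sku(raw: str) -> str:
    """Strip whitespace, uppercase, collapse runs of '-' to one dash."""
    return re.sub(r"-{2,}", "-", raw.strip().upper())

# Human-written acceptance tests gate whatever the model produced.
assert normalize_sku("  ab--12 ") == "AB-12"
assert normalize_sku("x---y") == "X-Y"
```

The architecture, the contract, and the definition of "done" never leave human hands; the AI fills in a well-fenced blank.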

Handling the flood of generated code also requires a strict, three-layer defense system:

  1. First-Pass Review: The engineer who generated the code is strictly responsible for reviewing and testing it locally before committing. Throwing raw, unverified AI code at coworkers is becoming a serious breach of professional etiquette.

  2. Automated Checkpoints: CI/CD pipelines must be beefed up. Some teams deploy automated AI-assisted linters and reviewers as a front-line defense to catch structural flaws before a human even looks at the pull request.

  3. Human Peer Review: The final check requires a highly experienced developer backed by an aggressive unit testing strategy to ensure the bloated code doesn't harbor edge-case failures.
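As a rough sketch of what an automated checkpoint in step 2 might look like, the hypothetical linter below uses Python's `ast` module to flag any function longer than a size budget, a cheap structural check that runs before a human ever opens the pull request. The name `long_functions` and the 50-line budget are illustrative assumptions, not a standard tool.

```python
import ast

def long_functions(source: str, max_lines: int = 50) -> list[str]:
    """Flag function definitions longer than max_lines, a cheap
    front-line check against generated code bloat."""
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                flagged.append(f"{node.name}: {length} lines")
    return flagged
```

A CI job would run this over each changed file and fail the build on any hit, pushing bloat back to the engineer who generated it instead of the reviewer.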

Setting Hard Constraints in Prompts

Users getting actual value out of these tools have abandoned the idea of a frictionless, automated assistant. They treat LLMs like malicious compliance engines.

You cannot ask for a summary and expect brevity. You have to enforce a strict persona and hard output limits. Giving instructions like "Adopt a concise, terse persona," or "Output exactly six bullet points, cite sources, and strictly limit the response to one page" is the only reliable way to stop the text bloat.
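One way to make those limits enforceable rather than aspirational is to validate the response programmatically and re-prompt on failure instead of hand-editing. The sketch below (the function name and thresholds are illustrative assumptions) checks for an exact bullet count and a rough one-page character cap.

```python
def meets_constraints(text: str, bullets: int = 6, max_chars: int = 3000) -> bool:
    """Reject an LLM response that ignores hard output limits:
    exactly `bullets` bullet lines and a rough one-page length cap."""
    lines = [l for l in text.splitlines() if l.strip()]
    bullet_lines = [l for l in lines if l.lstrip().startswith(("-", "*", "\u2022"))]
    return len(bullet_lines) == bullets and len(text) <= max_chars
```

If the check fails, the cheapest fix is usually to regenerate with the constraint restated, not to trim the bloat manually.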

AI thrives on repetitive formatting. It handles boilerplate generation perfectly. It excels at bulk data formatting—taking a raw list and converting it into JSON based on strict rules. It is an excellent tool for standardizing rough notes into a professional email tone. It falls apart the second it is asked to execute complex, multi-step decision-making or deep logic parsing.
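The bulk-formatting case works precisely because it is rule-driven and therefore checkable. As an illustration, here is a hypothetical deterministic reference for one such rule set ("each line is 'key: value'; snake_case the keys"); an AI's output for the same input can be diffed against it rather than eyeballed.

```python
import json

def notes_to_json(raw: str) -> str:
    """Deterministic reference for a 'raw list -> JSON' task:
    each line is 'key: value'; keys are lowercased and snake_cased."""
    record = {}
    for line in raw.strip().splitlines():
        key, _, value = line.partition(":")
        record[key.strip().lower().replace(" ", "_")] = value.strip()
    return json.dumps(record, indent=2)
```

Tasks that can be specified this tightly are exactly the ones where the review overhead stays near zero.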

The ActivTrak AI Study Data: Why Time Gets Filled Instead of Saved

The anecdotal exhaustion felt by developers is fully backed up by workplace analytics. ActivTrak released data tracking 163,638 employees across 1,111 organizations over a three-year span. Their findings directly dismantle the idea that AI saves time in a corporate environment.

The deployment of AI tools had zero downward impact on total workload. Instead, workers found their days occupied by entirely new categories of micro-tasks. The time saved drafting a document was immediately consumed by the communication required to discuss it.

The 145% Jump in Chat and 104% in Email

The ActivTrak AI study data showed that after AI adoption, the volume of emails sent by employees spiked by 104%. The time spent interacting with instant messaging and chat systems rose by 145%. Usage of business management and tracking tools increased by 94%.

This is a classic manifestation of induced demand. If it becomes mathematically easier and faster to generate text, the organization will naturally expect more text to be generated. The friction of writing previously acted as a natural governor on corporate communication. Without that friction, the volume of emails, Slack messages, and tracking tickets explodes. Workers are not working fewer hours; they are spending the exact same number of hours managing a vastly accelerated flow of digital paperwork. AI is proving to be an additional productivity layer bolted onto the existing workday.

The Core Controversy: The Disconnect in Expectations

Managers and executives buy enterprise AI subscriptions with the belief that they are purchasing an automated workforce capable of reducing headcounts and accelerating timelines. They adjust schedules and increase individual workload allocations based on the assumption that AI has made everyone faster.

This fundamental misunderstanding of the technology creates severe burnout. Workers are given shorter deadlines, but because the AI hallucinates, they must perform 100% manual oversight on every deliverable. When Google Gemini responds to a specific research query with six irrelevant points and one completely incorrect central fact, the employee has to scrap the output and start over. But the deadline has already been artificially compressed.

Amazon AI Routing Logic Failures

This friction is highly visible in logistics and operations. Reports from Amazon employees detail how management pushes automated tools that actually drag down efficiency on the ground. The Amazon AI routing logic used for delivery drivers often lacks basic situational awareness.

The system regularly ignores the physical layout of roads, directing drivers to perform massive numbers of unsafe U-turns. It groups delivery addresses that are physically 200 feet apart into the same cluster without accounting for obstacles, barriers, or road flow that make them totally inaccessible to one another. The drivers are then forced to manually override and outthink the broken AI routing logic just to finish their shift, fighting against the software that management believes is optimizing their day.

The Shifting Definition of Productivity

We are reaching a point where organizations have to reconsider how they define output. Valuing a developer based on lines of code written makes zero sense when a script can dump 500 lines of garbage into an IDE in two seconds. Judging an employee's engagement by their email output is meaningless when AI drafts the replies.

The AI workload increase is a symptom of companies trying to use automation to squeeze more volume out of their staff, rather than improving the quality of the work. If the goal of introducing a new tool is to improve the work-life balance of an office, management has to deliberately choose not to increase task quotas when efficiency improves.

Technology has always functioned as an amplifier for existing corporate priorities. If a company prioritizes endless output, an LLM will simply help them generate endless output until the employees responsible for proofreading it burn out completely. The bottleneck has merely shifted from the typing phase to the review phase, and editing a machine's hallucinations takes just as much energy as writing the code yourself.

Frequently Asked Questions

Why is there an AI workload increase when tools are supposed to save time?

AI accelerates the creation of text and code, but it introduces high error rates and logical flaws. Employees must spend significant time verifying facts, checking for hallucinations, and editing bloated responses. The time saved typing is simply replaced by the time spent reviewing.

What does the ActivTrak AI study data reveal about productivity?

The study tracked over 160,000 employees over three years and found AI does not decrease total work hours. Instead, it triggered a 104% increase in emails and a 145% increase in instant messaging. AI freed up time on single tasks, which was instantly filled with higher volumes of communication and task management.

How can developers fix AI code bloat?

Developers solve this by restricting AI to generating small, highly specific code snippets rather than entire structural modules. Teams also rely heavily on an aggressive AI code review process, forcing local manual checks, automated CI/CD pipeline linters, and strict human peer review to catch redundant logic.

What causes the Microsoft Copilot Excel issues users report?

LLMs struggle with complex, multi-step operations and logical mapping in large spreadsheets. Copilot frequently freezes, hallucinates new tables instead of altering existing data, or fails to execute cross-sheet merges properly, requiring users to abandon the AI and complete the task manually.

Why does the Amazon AI routing logic fail in real-world scenarios?

The routing AI lacks physical context regarding local road rules and spatial barriers. It often plans dangerous amounts of U-turns or mistakenly links delivery points that are geographically close but physically separated by obstacles, forcing human drivers to manually correct the routes on the fly.

How can I stop AI from generating too much text?

You have to use absolute constraints in your prompts. Instruct the AI with specific rules like "output exactly five bullet points," "restrict length to one page," or "maintain a terse, direct persona." Without hard limits, LLMs will default to producing long, repetitive responses.
