Meta Is Recording Employee Keystrokes to Build AI That Replaces Them
Meta is installing surveillance software on its employees’ computers. The software records mouse movements, keystrokes, and periodic screenshots while employees do ordinary work tasks. It runs across hundreds of apps and websites: Google, LinkedIn, GitHub, Slack, Atlassian, Wikipedia. The whole ordinary digital infrastructure of knowledge work.
The internal name for this is the “Model Capability Initiative.” The broader program it sits inside is called the “Agent Transformation Accelerator.” Meta would like you to focus on the name and not the underlying logic, which is this: Meta’s AI agents still can’t reliably click a dropdown menu. They can write essays and summarize documents and pass the bar exam, but navigating a software interface the way a human does has remained stubbornly difficult. So Meta’s solution is to watch humans do it.
A few thousand employees’ worth of human doing.
The Last Gap
The AI story of the last three years has largely been about cognitive replacement: models that can write, analyze, code, research, and reason at a level competitive with human professionals. The capability jumps have been real and the implications for knowledge work are not trivial.
But agentic replacement, the kind where an AI can sit down at a computer and complete a multi-step task without human hand-holding, has hit a concrete ceiling. The models have absorbed most of what humans know. What they haven’t absorbed is how humans move through tools: the specific sequence of clicks that books a flight in a corporate travel portal, the keyboard shortcuts in a code editor, the way you navigate three nested menus in Salesforce to file a specific type of ticket.
That’s the gap Meta is trying to close. The “Agent Transformation Accelerator” is building AI that can perform white-collar work autonomously. The obstacle is not intelligence. It’s interface fluency. So they’re recording the interface fluency of their employees, converting it to training data, and feeding it into the model.
The program’s internal pitch told employees they could “help our models get better simply by doing your daily work.” Which is accurate, as far as it goes.
What Employees Said
Several Meta employees described the program as “dystopian” in internal messages, according to Reuters.
The specific concerns weren’t abstract. Employees noted that if the tracking software is capturing hundreds of apps and websites, it will inevitably capture sensitive content: passwords entered into authentication flows, health details discussed in Slack threads, immigration status handled in HR portals, details about unreleased products. Meta has assured employees that safeguards are in place and that the data will not be used in performance reviews.
Safeguards-in-place is a category of assurance that has a mixed track record at scale. The question is less whether the intention is good and more whether the data pipeline can actually distinguish between “how to navigate a dropdown in Salesforce” and “here is an employee’s password,” and what happens when it doesn’t.
The Timing Is Not Incidental
Earlier this year, Meta announced it would cut approximately 5 percent of its workforce, roughly 3,600 employees. Those cuts have been proceeding through 2026.
This is happening simultaneously with the Model Capability Initiative.
There is a version of this story where those are two separate things: operational efficiency on one side, AI development on the other. That version is technically possible. It is also worth noticing that the specific gap being closed by the training program, AI agents that can navigate software interfaces autonomously, is precisely the gap between “AI that assists knowledge workers” and “AI that replaces knowledge workers.” And the people generating the training data to close that gap are the same population experiencing the layoffs.
The timeline here matters. Meta is using existing employees to teach AI agents how to do their jobs. The model that learns from those sessions will be considerably more capable than the models that exist today.
What This Means
Companies have been using behavioral data to train AI for years: click patterns, search queries, purchase histories. This is different in kind. It’s the systematic observation of professional labor: the specific sequences of actions that constitute doing a knowledge job, captured at the level of individual keystrokes and cursor positions.
The purpose is explicitly to build agents that can perform those jobs without a human present. The employees doing the work know this. They’re doing it anyway, in most cases because they don’t have a meaningful opt-out.
What Meta is building is not a curiosity. It’s a benchmark. When the “Agent Transformation Accelerator” succeeds, the next question won’t be philosophical. It will be a spreadsheet calculation about how many seats the agents can fill.
The employees helping Meta reach that benchmark probably deserve to know which side of that calculation they’re on.