Quick Take
- The Tool: “colleague.skill” — a viral GitHub project that trains AI on a coworker’s digital footprint
- The Tactic: Workers map colleagues’ workflows to make their roles automatable — and redundant
- The Counter: “anti-distillation.skill” — a tool that obscures your work data from AI harvesting
- The Trigger: Chinese companies mandating “skill documentation” as part of AI efficiency drives
- What’s Next: A new arms race between AI-powered job sabotage and AI-powered self-protection
A viral GitHub project called “colleague.skill” has ignited a workplace arms race in China — where employees are quietly training AI agents on their coworkers’ emails, documents, and chat logs to make those colleagues appear replaceable, protecting their own jobs in the process.
This signals a new and deeply uncomfortable phase of AI adoption: one where the threat isn’t just AI replacing humans, but humans weaponising AI against each other. Every company mandating “knowledge documentation” for efficiency should be watching this closely.
StartupFeed Insight
What the numbers say: China’s corporate AI adoption rate is among the highest globally — and the “colleague.skill” phenomenon reveals that when companies frame knowledge documentation as efficiency, employees read it as a redundancy roadmap.
What this means for you:
- If you’re a founder: Your “knowledge management” initiative may be quietly destroying psychological safety — employees now see documentation as a threat, not a contribution
- If you’re an investor: Any HR-tech or enterprise AI company that doesn’t address the “weaponisation” problem will face adoption resistance from employees who’ve seen this story
- If you’re an employee: Your digital footprint — Slack messages, emails, documents — is now a training dataset. What you share at work is no longer just professional output; it’s potentially your replacement manual
Our prediction: By Q4 2026, at least 3 major enterprise software platforms (likely Microsoft, Atlassian, or Notion) will announce explicit “skill protection” features — privacy controls that limit how employee work data can be used for AI training. The colleague.skill phenomenon will accelerate this by 12–18 months.
How “colleague.skill” Works
The concept behind the tool is deceptively simple. “Skill distillation” — the process of breaking down how a person works into structured, repeatable steps that an AI can learn — has been a legitimate enterprise practice for years. Companies use it to document SOPs, preserve institutional knowledge, and onboard new hires faster.
What changed is who’s doing the distilling, and why.
colleague.skill lets any user upload a coworker’s digital footprint — work chat messages, emails, spreadsheets, documents, audio recordings, and screenshots — and convert that data into an AI agent that mimics the coworker’s specific workflows, decision-making patterns, and communication style. The resulting agent can, in theory, perform the same tasks as the original employee.
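The repository’s actual pipeline isn’t described in public reporting, but the distillation idea can be sketched in a few lines: reduce a corpus of someone’s messages to a structured profile (frequent phrases, recurring action words) that an agent could imitate. Everything below — the function name, the verb list, the profile fields — is a hypothetical illustration, not colleague.skill’s code.

```python
import re
from collections import Counter

# Toy "skill distillation": reduce a coworker's messages to a structured
# profile an agent could imitate. Hypothetical sketch only.
ACTION_VERBS = {"review", "approve", "escalate", "merge", "ship"}

def distill_profile(messages):
    words = re.findall(r"[a-z']+", " ".join(messages).lower())
    actions = [w for w in words if w in ACTION_VERBS]
    return {
        # Proxy for communication style: most frequent words.
        "top_phrases": Counter(words).most_common(3),
        # Proxy for repeatable workflow steps: recurring action verbs.
        "workflow_steps": Counter(actions).most_common(2),
        "avg_message_len": sum(len(m) for m in messages) / len(messages),
    }

profile = distill_profile([
    "Please review the spec and approve by Friday.",
    "I'll review the numbers, then escalate if they look off.",
])
print(profile["workflow_steps"])  # most recurring action: "review"
```

A real system would feed a far richer version of this profile to a language model as a persona; the point of the sketch is only that a few hundred messages are enough raw material to extract a recognisable working pattern.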
The strategic logic is brutal in its simplicity: if management is going to cut roles made redundant by AI, it’s better that it’s someone else’s role. By building an AI replica of a colleague and demonstrating it to management, an employee can effectively argue that the colleague’s position is automatable — while positioning themselves as the person who built the automation.
The Corporate Context That Made This Possible
This didn’t emerge in a vacuum. According to reports from India Today and The Financial Express, many Chinese companies have been asking employees to document their work in exhaustive detail — workflows, decision trees, internal communication styles — framed as a move toward efficiency and knowledge sharing.
Employees recognised the pattern. The same data being collected for “efficiency” could be used to train AI systems that replace them. The response was to get ahead of the threat — by pointing it at someone else first.
The tool went viral on GitHub and spread rapidly across Chinese social media platforms, where it sparked intense debate about workplace ethics, knowledge ownership, and the moral implications of using corporate data infrastructure to eliminate colleagues.
The Counter-Attack: “anti-distillation.skill”
The arms race didn’t take long to begin. A female developer in China reportedly created “anti-distillation.skill” — a counter-tool designed to protect employees’ expertise from being harvested by AI systems.
The tool works by subtly rewriting work documents so they remain clear and professional to human readers, while obscuring the critical details that AI systems rely on for pattern recognition. It introduces vague phrasing, removes key decision steps, and restructures information in ways that make it harder for machines to learn from. Users can control how much information they want to conceal, depending on their level of concern.
The result: a document that looks complete on the surface but is deliberately less useful for training AI. It’s the workplace equivalent of camouflage: professional enough to pass human review, opaque enough to frustrate machine learning.
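The reporting doesn’t detail anti-distillation.skill’s rewriting rules, but the principle — readable to humans, lossy to pattern extractors — can be illustrated with a toy rewriter. The function, regexes, and “level” semantics below are assumptions for illustration, not the tool’s implementation.

```python
import re

def obscure(doc: str, level: int = 1) -> str:
    """Toy 'anti-distillation' rewriter (illustrative only, not the real
    tool). Keeps the text readable while blurring the specifics that
    pattern-recognition systems rely on.
    level 0: unchanged; 1: blur concrete figures; 2: also strip step order.
    """
    if level >= 2:
        # Drop explicit step numbering so the sequence can't be learned.
        doc = re.sub(r"(?m)^\s*(Step\s*)?\d+[.)]\s*", "- ", doc)
    if level >= 1:
        # Replace concrete numbers/percentages with hedged phrasing.
        doc = re.sub(r"\b\d+(\.\d+)?%?", "a case-dependent amount", doc)
    return doc

print(obscure("Step 1. Discount 15% for renewals.", level=2))
# → "- Discount a case-dependent amount for renewals."
```

The `level` parameter mirrors the reported design choice of letting users dial in how much they conceal: a human reader still gets the gist at every level, while the details that make a workflow reproducible degrade first.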
Why This Matters Beyond China
The “colleague.skill” phenomenon is not a Chinese quirk. It’s a preview of what happens in any workplace where AI adoption is accelerating and job security is uncertain.
The underlying conditions — companies collecting employee work data, AI systems capable of replicating knowledge work, and employees under pressure to justify their roles — exist in every major economy. China is simply further along the adoption curve.
| Factor | China (2026) | India/Global (Emerging) |
| --- | --- | --- |
| Corporate AI mandates | Widespread | Growing rapidly |
| Employee skill documentation | Formally required at many firms | Informal, increasing |
| AI agents replicating knowledge work | Deployed at scale | Early adoption |
| Awareness of “distillation” risk | High — viral on social media | Low — but rising |
| Counter-tools available | Yes (anti-distillation.skill) | Not yet mainstream |
The gap between China and the rest of the world on this issue is approximately 12–18 months. Indian IT services companies, BPOs, and knowledge-work firms — where documentation and process standardisation are already deeply embedded — are particularly exposed.
The Ethical Fault Lines
The colleague.skill debate has exposed 3 distinct ethical positions that will define how companies respond:
Position 1 — “It’s just efficiency”: Companies that frame knowledge documentation as neutral productivity tooling. This position ignores the power asymmetry between employers who own the data infrastructure and employees whose livelihoods depend on it.
Position 2 — “It’s sabotage”: Employees and ethicists who argue that using a colleague’s work data to build their AI replacement — without consent — is a form of workplace sabotage, potentially actionable under data privacy laws in jurisdictions with strong employee protections.
Position 3 — “It’s survival”: Workers who argue that in a system where companies are actively building AI to replace roles, individual self-preservation is a rational response to a structural threat they didn’t create.
None of these positions is entirely wrong. That’s what makes this genuinely hard.
What’s Next
The colleague.skill story will force a reckoning that most enterprise AI deployments have been avoiding: who owns the knowledge that employees generate at work, and who has the right to use it to train AI?
Most employment contracts give companies broad rights over work product. But “work product” was written in an era when knowledge lived in documents and databases — not in the behavioural patterns, communication styles, and decision-making heuristics that AI systems can now extract from digital footprints.
Expect this to become a labour law battleground within 18 months. The first major lawsuit — an employee suing a company for using their documented workflows to train an AI that replaced them — will set a precedent that reshapes enterprise AI deployment globally.
The real question isn’t whether AI will replace jobs. It’s whether the rules governing how that replacement happens will be written by companies, by governments, or by the workers themselves.
Is your company collecting employee skill data? Are you protecting yours? Tell us on Twitter @StartupFeed_official
