Article | Sleepy.md
Unfortunately, in this day and age, the more wholeheartedly you work, the easier you are to distill into a skill that AI can replace.
These days, trending searches and media channels have been flooded with "colleague.skill." As the story continues to ferment across major social platforms, public attention is almost inevitably swallowed by grand anxieties: "AI layoffs," "capital exploitation," "the digital immortality of the working class."
While these are indeed anxiety-inducing, what makes me most anxious is a line in the project README document:
"The quality of the raw material determines the quality of the skill. It is recommended to prioritize collecting long-form content written proactively by the person > decision-making responses > daily messages."
The people the system can most easily distill, most pixel-perfectly reconstruct, are precisely the ones who work most diligently.
It's those who, after every project concludes, still sit at their desks to write a post-mortem document; those who, when faced with disagreements, are willing to spend half an hour typing a long-form response in a chat box, candidly analyzing their decision-making logic; those who are extremely responsible, meticulously entrusting all work details to the system.
Diligence, once the most admired virtue in the workplace, has now become a catalyst for accelerating workers' transformation into AI fuel.
We need to redefine a word: context.
In everyday context, context is the background of communication. But in the world of AI, especially in the world of those rapidly growing AI Agents, context is the roaring engine's fuel, the pulsating blood, the only anchor that allows models to make precise judgments in the chaos.
An AI stripped of context, no matter how amazing its parameter count, is nothing more than an amnesiac search engine. It cannot recognize who you are, cannot grasp the undercurrent hidden beneath the business logic, and has no way of knowing the long tug-of-war and trade-offs you experienced on this network woven from resource constraints and interpersonal dynamics when finalizing a decision.
And the reason "colleague.skill" has caused such a huge stir is precisely that it coldly and precisely locked onto a hoarded mountain of high-quality context: modern enterprise collaboration software.
Over the past five years, the Chinese workplace has undergone a quiet yet grueling digital transformation. Tools like Feishu, DingTalk, Notion, and others have become vast repositories of corporate knowledge.
Take Feishu as an example. ByteDance has publicly stated that the number of documents generated internally every day is massive. These densely packed characters faithfully encapsulate every brainstorm, every heated meeting, and every strategic compromise of over 100,000 employees.
This level of digital penetration far exceeds any previous era. Once upon a time, knowledge was warm: it lurked in the minds of veteran employees and drifted through casual chats in the pantry. Now, human wisdom and experience have been forcibly dehydrated and ruthlessly deposited in cold server racks in the cloud.
In this system, if you don't write documents, your work cannot be seen and new colleagues cannot collaborate with you. The efficient operation of the modern enterprise is built on every employee contributing context to the system, day after day.
Out of diligence and goodwill, hardworking employees unreservedly lay bare their paths of thinking on these cold platforms. They do it so the team's gears mesh smoothly, to prove their value to the system, and to carve out a place for themselves inside this intricate commercial behemoth. They are not voluntarily surrendering themselves; they are simply, awkwardly and diligently, following the survival rules of the modern workplace.
Yet, ironically, this contextual information left for interpersonal collaboration has become the perfect fuel for AI.
Feishu's admin console has a feature that allows super administrators to bulk-export members' documents and chat records. The project post-mortems and decision logic you accumulated over three years of countless late nights can be packaged, with a single API call, into a lifeless compressed archive in a matter of minutes.
With the rise of "colleague.skill," some extremely uncomfortable derivatives have started to appear on GitHub's Issues section and various social media platforms.
Some have built "ex.skill," feeding the AI years of WeChat chat logs so it can keep arguing, or keep being tender, in that familiar tone; others have built "unrequited-love.skill," reducing an untouchable flutter of the heart to a cold interpersonal sandbox in which probing dialogues are rehearsed again and again in search of the optimal emotional outcome; still others have built "paternalistic-boss.skill," chewing through oppressive PUA rhetoric in digital space ahead of time to construct a sad psychological line of defense for themselves.

The use cases of these skills have completely transcended the realm of work efficiency. Unconsciously, we have become accustomed to wielding the cold logic of tool treatment, dissecting and objectifying those once fleshy, lively individuals.
German philosopher Martin Buber once proposed that the foundation of human relationships boils down to two radically different modes: the “I-Thou” and the “I-It.”
In the encounter of the "I-Thou," we set aside our prejudices and gaze upon the other as a whole, dignified living being. This bond is open without reserve, full of vibrant unpredictability, and precisely because of its sincerity it is especially fragile. Once we slip into the shadow of the "I-It," however, the living person is reduced to an object to be dismantled, analyzed, categorized, and labeled. Under this utterly utilitarian scrutiny, the only question we care about is: "What use is this thing to me?"
The emergence of products like "ex.skill" signifies that the tool rationality of the "I-It" has thoroughly invaded the most intimate emotional domain.
In a genuine relationship, a person is three-dimensional, full of wrinkles, constantly flowing with contradictions and nuances, and their reactions vary based on specific circumstances and emotional interactions. Your ex may react very differently to the same sentence when waking up in the morning compared to working late at night.
However, when you distill a person into a skill, what you keep is only the residue of their functionality: the part that happened to be "useful" to you within that specific bond. The once warm, self-experiencing individual is drained of their soul in this cruel purification, alienated into a "functional interface" you can plug in and unplug at will.
It must be acknowledged that AI did not invent this chilling coldness out of thin air. Before AI emerged, we were already accustomed to labeling others, precisely measuring the “emotional value” and “social network weight” of each relationship. For example, in the dating market, we quantify a person’s attributes into grids; in the workplace, we classify colleagues as “capable” or “slackers.” AI just made this implicit, functional extraction between individuals blatantly explicit.
People have been flattened, leaving only that facet of “what is useful to me.”
In 1958, Hungarian-British philosopher Michael Polanyi published “Personal Knowledge.” In this book, he introduced a highly penetrating concept: tacit knowledge.
In a famous dictum, Polanyi stated, "We know more than we can tell."
He gave the example of learning to ride a bicycle. A skilled cyclist, riding effortlessly, keeps perfect balance through every tilt, yet he cannot convey to a novice, in words or in dry physics formulas, the subtle intuition of that moment. He knows how to ride, but he cannot articulate it. Knowledge that cannot be encoded or spoken is what Polanyi calls tacit knowledge.
The workplace is full of such tacit knowledge. A senior engineer, when troubleshooting a system failure, may quickly pinpoint the issue by glancing at the logs, but he would find it challenging to document this "intuition" built upon thousands of trial-and-error instances. An excellent salesperson may suddenly fall silent at the negotiation table, and the sense of pressure and timing that silence brings is something no sales manual can capture. An experienced HR professional may, just by observing a candidate's half-second of avoiding eye contact, sense the exaggerations on the resume.
What "colleague.skill" can extract is only what has already been written down or spoken: explicit knowledge. It can scrape your post-mortem documents but cannot capture your struggles while writing them; it can replicate your decision responses but cannot replicate the intuition behind your decisions.
What the system distills is always just a person's shadow.
If the story were to end here, it would be nothing more than another poor imitation of humanity by technology.
However, when a person is distilled into a skill, this skill does not remain static. It is used to reply to emails, write new documents, make new decisions. In other words, these AI-generated shadows begin to generate new contexts.
And these AI-generated contexts are then deposited in Feishu and DingTalk, becoming the training materials for the next round of distillation.
As early as 2023, a research team from the University of Oxford and the University of Cambridge jointly published a paper on "model collapse." The research indicated that when an AI model is iteratively trained using data generated by other AIs, the distribution of the data becomes increasingly narrow. Those rare, marginal but highly authentic human traits are rapidly erased. After just a few generations of training on synthetic data, the model completely forgets the long-tail, complex real human data and instead outputs extremely mediocre and homogenized content.
In 2024, Nature also published a research paper stating that training future generations of machine learning models on AI-generated datasets would severely taint their outputs.
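The mechanism behind model collapse can be illustrated with a deliberately tiny simulation (a sketch of the intuition, not the papers' actual method): fit a one-dimensional "model" to data, have it generate synthetic data, refit on that output, and repeat. Because every finite sample slightly underrepresents the tails, the estimated spread tends to shrink generation after generation.

```python
import random
import statistics

def fit(samples):
    """'Train' a toy one-dimensional model: estimate mean and spread."""
    return statistics.mean(samples), statistics.pstdev(samples)

def generate(model, n, rng):
    """The fitted model 'writes': draw n synthetic points from it."""
    mu, sigma = model
    return [rng.gauss(mu, sigma) for _ in range(n)]

rng = random.Random(42)
model = (0.0, 1.0)  # generation 0: the "real" human distribution
spreads = []
for generation in range(100):
    # Each generation trains only on the previous generation's output.
    synthetic = generate(model, 20, rng)
    model = fit(synthetic)
    spreads.append(model[1])

# The estimated spread drifts toward zero: rare, long-tail values are the
# first to vanish, and what survives grows ever more homogeneous.
print(f"spread after 1 generation: {spreads[0]:.3f}, after 100: {spreads[-1]:.3f}")
```

In this toy setup the collapse is purely statistical; real language models collapse along far richer dimensions, but the direction of travel is the same.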

This is like the meme images that circulate online: an originally high-resolution screenshot is shared, compressed, and reshared by countless people. Each pass loses a few pixels and adds a little noise, until the image becomes a blurry digital smear.
When real human context, with its tacit knowledge, is squeezed dry, and the system can only train itself on smeared shadows, what will be left in the end?
What's left is only correct-sounding nonsense.
When the river of knowledge dries up into an endless regurgitation and self-consumption of AI by AI, everything the system exhales will become extremely standard, extremely safe, but also irredeemably hollow. You will see countless perfectly structured reports, numerous flawlessly crafted emails, yet they will lack any human touch, devoid of any truly valuable insight.
The great defeat of knowledge is not because the human brain has become dull; the real tragedy is that we have outsourced the right to think and the responsibility to leave context to our own shadows.
Days after the explosion of "colleague.skill," a project called "anti-distill" quietly emerged on GitHub.
The author of this project did not attempt to attack large models or write any grand manifesto. They simply provided a small tool that helps workers auto-generate long texts on Feishu or DingTalk that look plausible but are stuffed with logical noise.
The purpose was simple: to bury one's core knowledge before the system can distill it. Since the system loves to fetch "actively written long texts," feed it a pile of nutritionless gibberish.
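The article does not show anti-distill's source, but the idea it describes, flooding the scraper's preferred signal with fluent nonsense, can be sketched in a few lines. Everything below is hypothetical: the phrase lists and function names are illustrative, not the project's actual code.

```python
import random

# Stock corporate phrases; a real tool would draw on far richer corpora.
OPENERS = [
    "From a strategic alignment perspective,",
    "Considering the overall landing path of the project,",
    "Against the backdrop of refined operations,",
]
SUBJECTS = [
    "our north-star metric", "the upstream-downstream linkage",
    "the closed loop of user value", "cross-team empowerment",
]
CLAIMS = [
    "still needs to be grasped with both hands",
    "must form a combined punch",
    "should be deeply deconstructed and re-precipitated",
    "calls for holistic, multi-dimensional synergy",
]

def noise_sentence(rng):
    """One grammatical, information-free sentence."""
    return f"{rng.choice(OPENERS)} {rng.choice(SUBJECTS)} {rng.choice(CLAIMS)}."

def noise_document(rng, paragraphs=3, sentences=5):
    """Stitch sentences into a long 'work document' with zero nutrition."""
    return "\n\n".join(
        " ".join(noise_sentence(rng) for _ in range(sentences))
        for _ in range(paragraphs)
    )

doc = noise_document(random.Random())
print(doc)
```

A scraper that ranks "actively written long texts" highest would ingest pages of this; a human skimming it would sense, at most, a vaguely familiar meeting.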
This project did not catch fire like "colleague.skill"; it even seemed a bit insignificant and feeble. Using magic to defeat magic still fundamentally revolves around the game rules set by capital and technology. It cannot change the trend of the system relying more and more on AI and increasingly overlooking real humans.
But this does not prevent this project from being the most tragically poetic and profoundly metaphorical scene in the entire absurd drama.
We work extremely hard to leave traces in the system: detailed documents, meticulous decisions, all trying to prove our existence and our worth inside this vast modern corporate machine, not realizing that these earnest traces will eventually become the eraser that wipes us out.
But looking at it from a different perspective, this may not necessarily be a complete deadlock.
Because what the eraser wipes away is always just the "past you." A skill packaged into a file, no matter how sophisticated its scraping logic, is essentially just a static snapshot. It is frozen in that exported moment, relying only on stale nutrients, endlessly spinning in established processes and logics. It lacks the instinct to face unknown chaos and certainly does not possess the ability to self-evolve through real-world setbacks.
When we hand over those highly standardized, formulaic experiences, we also free up our own hands. As long as we continue to reach outward and constantly break and reconstruct our cognitive boundaries, that shadow resting in the cloud will forever only follow in our footsteps.
A human is a fluid algorithm.