Perspectives · Black-Collar Workforce

How Black-Collar Workers Are Forged (Education): What Children Should Learn

To Parents: Suggestions for Cultivating the Qualities of AI Natives

Translator’s note. In contemporary professional usage, “black-collar” (黑领) refers to high-end specialists working in high-tech, information technology, engineering, automation, and scientific research — people who often work in dimly lit environments such as data centers, laboratories, and operations consoles, or who possess a distinctively creative and disruptive kind of talent. The author’s earlier essay used this term to describe workers whose value in an AI era lies in judgment, oversight, and standing on the boundary between humans and systems rather than in execution. The previous essay set out what such workers are; this one asks how to raise children to become them.

About the author. Chun Xia is Founding Partner at TSVC.

Original Chinese version. This essay was first published in Chinese on WeChat: 《黑领是怎么炼成的(教育篇):孩子该学什么》.

In the previous article, we discussed what constitutes the qualities of a black-collar worker — what kind of person is qualified to stand on the boundary between humans and systems and exercise judgment in the age of AI. But a more urgent question is this: if today’s white-collar class is becoming superfluous, then what should our children learn?

This is not a simple question of “which major to choose.” Today’s educational system is already lagging behind. From elementary school through university, it largely trains people for white-collar jobs that are disappearing. Standardized tests, drilling on practice problems, memorizing knowledge points, completing assignments according to fixed procedures — the goal of all this training is to produce people who can efficiently execute rules. But AI is becoming better and better at executing rules. When execution is no longer scarce, judgment becomes scarce.

What follows is an attempt to offer parents a framework for an answer.

1. Advice on University Studies

Today’s university majors largely reflect the demands of industrial civilization and information civilization — accounting, law, finance, management, marketing, human resources. The core capacity these majors train is the handling of institutional affairs in the third layer of reality. And the third layer of reality is precisely the domain that AI is now entering at scale.

This does not mean that these majors have no value. It means that the way and purpose of studying them needs to change.

Law should not be treated as training in memorizing statutes and retrieving cases — AI already does these things better than humans. The core value of legal education lies in training the perception of where rights begin and end, the intuition for procedural justice, and the framework-thinking required to assign responsibility. A person who has studied law should be clearer than anyone else about a single question: when the system errs, who bears the consequences?

Political science is valuable not because it requires memorizing the structure of government bodies, but because it teaches one how power is distributed, how the public interest is defined, and how mechanisms of checks and balances are designed. These are directly usable knowledge when a black-collar worker is designing a governance framework for AI.

Public policy is an underrated major. It is not as theoretical as political science nor as abstract as economics; it directly confronts the question of “how do you design a workable rule?” In the AI era, the core value of public policy is that it trains a person to design policies that can be executed, evaluated, and corrected — under conditions of incomplete information, conflicting interests, and constrained resources.

Economics (the institutional branch) deserves more attention than macroeconomics or financial engineering. Incentive alignment, internalization of externalities, the dilemmas of collective action — these concepts help a person understand why intelligent individuals can produce collectively foolish outcomes, and how rule design can prevent that.

Philosophy (especially ethics) is more practical today than at any time in the past. Deontology provides red lines that cannot be crossed; utilitarianism trains one to weigh consequences; virtue ethics reminds us that when rules cannot cover the situation, one can only fall back on one’s own judgment. None of this can be done by AI in our place.

Sociology and anthropology help a person see that technology is not neutral. It carries cross-cultural values and is embedded in social structures in complex ways. A person who has studied sociology develops the habit of asking: who benefits from this technological optimization? Who is harmed? Who gets pushed to the margins? Whose culture does it fit best, and whose does it fit least?

Psychology trains the ability to recognize cognitive biases — why are humans prone to over-trusting systems that appear intelligent? Why does judgment distort under pressure? This knowledge is used not only to understand others, but also to manage oneself.

History offers long-horizon thinking. A person who has studied history, when confronted with the short-term mania of “efficiency above all,” will instinctively ask: what happened during the last comparable technological revolution? Which policies succeeded? Which led to disaster?

Communication studies trains the capacity to communicate and to shape influence. How do you build and maintain trust without lying? When the system fails, how do you explain it to the public? This is not a matter of rhetorical tricks — it is a discipline that requires systematic study.

International law and human rights law are particularly important for students who aim to work on transnational AI governance. An AI model trained in the United States, deployed in India, and affecting European citizens — whose jurisdiction governs it? There is no simple answer to this question, but there must be a framework with which to answer it.

Cognitive science provides the theoretical foundation for the division of labor between humans and machines. AI is good at pattern recognition but lacks common sense; humans are good at causal reasoning but tire easily. Only by understanding each side’s limits can effective collaboration be designed.

Statistics and data ethics, finally, is an underrated discipline. It is not just about computing p-values; it asks: where did the data come from? When are conclusions reliable? What kinds of discrimination follow from biased training data? These are the basic skills a black-collar worker uses to audit AI systems.
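
To make the audit idea concrete, here is a minimal Python sketch — an invented lending scenario with invented numbers, not anything from the essay — of the kind of selection bias these questions are meant to catch: a model trained only on previously approved applicants sees a repayment rate that the full population does not support.

```python
# Invented illustration: selection bias in training data.
# A model trained only on applicants who were previously approved
# sees a rosier repayment rate than the real population has.

population = [
    # (previously_approved, repaid)
    (True, True), (True, True), (True, False), (True, True),
    (False, False), (False, True), (False, False), (False, False),
]

def repayment_rate(records):
    """Fraction of records in which the loan was repaid."""
    return sum(1 for _, repaid in records if repaid) / len(records)

# Training only on past approvals quietly drops half the population.
training_data = [record for record in population if record[0]]

print(f"repayment rate in training data: {repayment_rate(training_data):.2f}")  # 0.75
print(f"repayment rate in full population: {repayment_rate(population):.2f}")   # 0.50
```

No statistical test run on the training data alone would reveal the gap; only the question “where did the data come from?” does.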

If one had to give the simplest, most practical combined recommendation, it would be: law + economics + one humanities discipline (philosophy / history / anthropology). This combination provides the capacity for institutional design (law), the capacity to understand incentives (economics), and the capacity for value inquiry and long-horizon thinking (the humanities).

2. Two Unavoidable Questions Parents Always Ask

Before continuing, there are two questions that nearly every parent asks, and they deserve direct answers.

Question One: Should children study both the humanities and the sciences?

The answer is yes, but the meaning of “both” needs clarification.

The traditional meaning of studying both humanities and sciences is that a child learns mathematics, physics, and chemistry as well as literature, history, and philosophy, to become a “fully developed person.” That goal is not wrong, but in the AI era it takes on new meaning: the humanities provide the framework for value judgment (what should be done), the thinking required for institutional design (who is responsible), and the long-horizon reference points (where we came from); the sciences provide logical reasoning, abstract modeling, and an understanding of the technology itself (what AI can and cannot do).

Both are indispensable. A person who knows only the humanities easily becomes an empty talker — they know that “fairness” matters, but they do not know where algorithmic bias comes from, and they cannot communicate effectively with technical teams. A person who knows only the sciences easily becomes an instrument — they can write efficient code, but they will not ask “should this code have been written?”

A more concrete suggestion: in basic education, do not specialize too early. The high-school division between “humanities track” and “science track” is an outdated institutional arrangement, premised on the assumption that a person must, at some fixed point in time, choose to “become a humanities student” or “a science student.” Where parents have a choice, they should delay this division as long as possible and keep their children in contact with both fields. At the university level, the ideal combination is “a hard-core science or engineering major + a systematic humanities or social-science minor,” or the reverse. The point is not two transcripts, but the fusion of two modes of thinking in one person.

Question Two: Should children study AI technology?

This question deserves to be reconsidered. The consensus of the past few years has been “study programming, study AI, or you won’t be able to find a job in the future” — but recent facts show that this path is no longer safe. From late 2025 into early 2026, the job market for U.S. computer-science graduates took a sharp turn for the worse. CS graduates from Stanford and Berkeley began having difficulty finding jobs, something that would have been almost unimaginable a few years earlier.

This does not mean AI technology is unimportant. It means that the old logic of “study programming and you’ll get a good job” has been dismantled by AI itself. Companies have discovered that AI can complete a great deal of junior-level coding work, and entry-level programmer positions are being compressed. So the more accurate answer to the question of “should children study AI technology” is: yes, but the goal is no longer to become a programmer that companies will hire — it is to understand how AI works, what it is capable of, and what it is not, so that one can collaborate with it and make judgments about it.

Concretely, this kind of study includes: understanding the basic principles of machine learning (not deriving formulas by hand, but knowing that models learn patterns from data); understanding how data bias arises (what kinds of discrimination follow when training data is unbalanced); understanding the characteristics of large language models (why they “hallucinate,” why they have no real understanding); understanding the calibration of an AI system’s confidence (when it can be trusted, when it must be doubted).
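
The last item on that list, confidence calibration, is concrete enough to sketch in a few lines of Python. The code below is illustrative only, with made-up predictions rather than a real model’s output; it shows the mechanical question behind “when can it be trusted”: when the system says it is 90% sure, how often is it actually right?

```python
# Illustrative sketch: does a model's stated confidence match reality?
# The (confidence, was_correct) pairs below are invented for the example.

def calibration_report(predictions, bins=4):
    """Group predictions by stated confidence and compare the average
    confidence in each bin with the accuracy actually observed there."""
    buckets = [[] for _ in range(bins)]
    for confidence, correct in predictions:
        index = min(int(confidence * bins), bins - 1)
        buckets[index].append((confidence, correct))
    for index, bucket in enumerate(buckets):
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(1 for _, ok in bucket if ok) / len(bucket)
        low, high = index / bins, (index + 1) / bins
        print(f"confidence {low:.2f}-{high:.2f}: "
              f"stated {avg_conf:.2f}, observed accuracy {accuracy:.2f}")

# Hypothetical record: the model sounds very sure far more often than
# it is right -- exactly the gap an overseer must learn to notice.
sample = [(0.95, True), (0.92, False), (0.97, False), (0.88, True),
          (0.60, False), (0.55, True), (0.35, False), (0.30, True)]
calibration_report(sample)
```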

More importantly, the engineer of the future is no longer simply a “producer of code” but a “manager of AI.” The core skill is no longer fluency with syntax, but task decomposition, context management, and the ability to judge whether an AI’s output is acceptable. A Stanford computer-science professor put it this way: “Software engineering is far more than coding… it is creative problem-solving, identifying requirements, and designing software and systems to meet those requirements.” A director at the University of Pennsylvania’s career-services center likewise emphasized that what employers now demand is “higher-order thinking… and the skills that AI cannot replace — such as understanding what a customer’s vague requirements actually mean.”
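
What “judging whether an AI’s output is acceptable” can look like at the smallest scale is sketched below. The draft and the checks are hypothetical, not a prescribed method; the point is the shape: explicit, human-chosen acceptance criteria, with anything that fails routed back to a person rather than straight into use.

```python
# Hypothetical sketch: an acceptance gate for AI-generated text.
# The criteria here are examples; real ones come from the human overseer.

from dataclasses import dataclass, field

@dataclass
class Review:
    accepted: bool
    reasons: list = field(default_factory=list)

def review_ai_draft(draft, required_terms, max_words):
    """Apply explicit, human-chosen criteria to an AI-generated draft."""
    reasons = []
    if len(draft.split()) > max_words:
        reasons.append(f"draft exceeds the {max_words}-word limit")
    for term in required_terms:
        if term.lower() not in draft.lower():
            reasons.append(f"required point missing: {term!r}")
    return Review(accepted=not reasons, reasons=reasons)

draft = "Our refund policy covers defects reported within 30 days."
result = review_ai_draft(draft,
                         required_terms=["refund", "30 days", "receipt"],
                         max_words=50)
print("publish" if result.accepted else f"send back to a human: {result.reasons}")
```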

The conclusion, then, is this: children should study AI-related knowledge, but as literacy, not as a job skill. The goal is to be able to understand AI, collaborate with AI, and oversee AI — not to become the junior programmer that AI is replacing. If a child has a genuine interest in computer science, they should not be discouraged from pursuing it, but they need to understand: simply being able to write code is no longer enough.

3. Cultivating the Four Kinds of Judgment

Disciplinary knowledge provides the raw material, but what truly makes a person irreplaceable in the AI era is the four kinds of judgment described below. These cannot be acquired by memorizing knowledge points; they must be practiced repeatedly in concrete situations.

Value judgment: what should not be optimized?

The algorithms of short-video platforms have discovered that the more extreme the content, the faster it spreads, and so they naturally tend to push extreme content. The algorithm itself has no concept of “good” or “bad”; it is simply optimizing for time-on-platform. A person with value judgment will ask: even if the spread is more efficient, should this kind of content be restricted?

This kind of judgment does not come from data; it comes from a prior definition of “what is acceptable.” In family and school education, this means not asking only “how do we do this more effectively,” but asking, regularly, “should this thing be done at all?” The latter is the training of value judgment.

Institutional judgment: who is responsible?

When an AI generates erroneous financial advice and a user loses money, the question of who is responsible must be settled in advance: is it a problem with the model, with the platform, or with the user? Without clear institutional judgment, a system cannot operate within a rule-of-law framework.

The way to train this kind of judgment is: when something goes wrong, do not ask only “where did the error occur?” but ask “by what rule do we assign responsibility?” Letting children participate in the formulation and enforcement of household rules from an early age is more effective than any classroom course.

Systems judgment: who pays the price of optimization?

A food-delivery platform optimizes its routing algorithm; delivery times shrink, but the rate at which couriers run red lights rises along with it. The system has shifted risk from the platform onto the couriers and onto pedestrians. A person with systems judgment will ask: who is paying the price of this optimization? Is the price acceptable?

This kind of judgment requires the habit of asking “and then what?” Every optimization comes at a price; the question is who is bearing it, and whether they had a chance to say no.

Cognitive judgment: when is the system wrong?

A financial trading model produces profits week after week; everyone begins to trust it; and one day, suddenly, it crashes. The hardest kind of judgment is to start being skeptical at the moment when the system is performing best. At that moment, such skepticism appears groundless, even foolish.

The way to train this kind of judgment is to cultivate a measured wariness toward “consensus.” In school, this means encouraging children to put forward views that differ from the teacher’s; at home, this means allowing children to question parental decisions. Not for the sake of opposition, but to build the habit that any consensus, no matter how strong, is open to examination.

4. The Most Underrated Capacity: Influencing Others

Many people assume that the most important thing in the AI era is to be the smartest person. That is not it. The important thing is to be a person who can get correct judgments executed.

Imagine a scenario: an AI team decides to launch a new system, and you identify a risk in it. If you cannot persuade the engineers, the product managers, the executives, and, when necessary, the public, the system will go live as originally planned. Holding the correct judgment but being unable to turn it into action is the same as having no judgment at all.

This means a competent collaborative overseer needs three kinds of language: technical language, to persuade engineers; commercial language, to persuade executives; and public language, to communicate with society. This is a kind of cross-world translation. In school and at home, this means not training children only to “produce the correct answer,” but training them to “make others understand and accept that answer” — writing, speaking, debate, negotiation. The value of these capacities will only grow in the AI era, never shrink.

5. The Capacity for Anti-Efficiency

This may be the most counterintuitive point of all. The work of white-collar workers is to improve efficiency, but in the AI era, one of the scarcest capacities is the capacity to keep efficiency from overstepping its bounds.

For example: a company plans to fully automate its customer service. Technically, this is entirely feasible. But a competent overseer might argue for keeping human agents — not for reasons of efficiency, but because emotional issues cannot be handled by automation, edge cases cannot be covered, and brand trust may be damaged. This is a deliberate decision to lower efficiency.

In an organization that worships efficiency, this kind of decision requires real grounding and conviction. The way to train this capacity is to ask children, regularly, to consider: is faster always better? Does the gain in efficiency come at an unacceptable cost? Is there anything worth protecting through deliberate inefficiency?

6. Soft Literacy: Growing Up in the Crevices of Life

The four kinds of judgment cannot be cultivated through classroom instruction alone. They come from the accumulation of experience, and experience comes from concrete, lived practice. The forms of soft literacy described below are continuous with those discussed in the previous essay on black-collar workers — they are the soil in which judgment takes root.

Cross-cultural travel and the capacity to adapt

Travel is not measured by “how many countries you’ve been to,” but by how you respond when flights are canceled, when language barriers arise, when plans fall apart. These experiences repeatedly shatter the illusion of control and teach a person to make decisions under uncertainty.

Sports

Athletic training builds far more than physical fitness. Endurance sports train the ability to remain steady under sustained pressure; competitive sports train calm decision-making in conflict; team sports train trust and collaboration. More importantly, sports teach a person to accept failure, to respect rules, and to push through one more time at the limit — these are the easily overlooked yet essential parts of the training of judgment.

Gardening and farming

Gardening teaches one fundamental thing: a system is not a machine, but an ecology. You cannot fix a garden the way you fix an engine. Sometimes you must wait, observe; sometimes the best intervention is no intervention at all. This is an extraordinarily scarce mental habit in the face of complex systems.

Volunteer service

Volunteering offers training in empathy and in acting without expectation of return. A person who has genuinely helped others develops a different sensitivity when facing decisions where efficiency conflicts with humanity.

Chess, board games, and esports

These activities train strategic thinking, anticipation of an opponent’s moves, and decision-making under incomplete information — precisely the cognitive tools required in complex strategic environments.

Meditation and mindfulness

Maintaining inner calm in high-pressure, high-stakes decisions is not a matter of personality but a trainable capacity. Meditation offers a way to observe one’s own emotions without being swept along by them.

7. The Deeper Goal: From Instrument-Person to Collaborative Overseer

All of the above points toward a fundamental question: what kind of person are we trying to raise?

The educational systems of the industrial era — both Chinese and Western — were essentially designed to produce “instrument-people”: a workforce that could efficiently execute instructions, follow rules, and complete standardized tasks. Standardized testing, drilling, procedural assignments, deference to authority — these are the assembly line of instrument-people. Within industrial civilization, this was not a problem; the society needed many people who could execute rules.

But what AI is replacing is precisely the role of executing rules. When AI can perform standardized tasks more efficiently than humans, training people to be better executors is a path that leads to being replaced.

The goal of education in the AI era must therefore shift from “training efficient executors” to “training competent collaborative overseers.” This means children need to learn not how to be used by tools, but how to use tools — and to know when not to use them.

A “collaborative overseer,” in this sense, requires three layers of capacity.

First, learning to collaborate with AI agents. AI is no longer software in the “tool” sense; it is a “collaborator” with a degree of autonomy. It can be assigned tasks, can make decisions independently, and can interact with other AI systems. A competent collaborative overseer needs to understand how to assign tasks to AI, how to inspect its outputs, and how to correct it when it errs. This is unlike using Excel or Word — those are passive tools, while AI is a partner that requires continuous oversight and calibration.

Second, learning to manage the coordinated work of multiple AI agents. In the workplaces of the future, a single person may oversee multiple AI systems simultaneously: one for data analysis, one for customer communication, one for process orchestration. These systems may depend on each other, and they may also come into conflict. A collaborative overseer needs systems thinking — the ability to recognize how different agents interact, and to arbitrate when conflicts arise.
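
A toy sketch of that arbitration, under loudly invented assumptions (two hypothetical agents and made-up risk scores): the deciding rule is written by a human, prefers the more cautious recommendation, and escalates to a person when the agents diverge too much for either to be trusted.

```python
# Hypothetical sketch: arbitrating between two AI agents' recommendations.
# Agent names, actions, and risk scores are invented for illustration.

def arbitrate(recommendations, risk_gap_limit):
    """Pick among agent outputs using an explicit human policy:
    take the lowest-risk recommendation, unless the agents' risk
    estimates diverge so much that a person should decide instead."""
    risks = [rec["risk"] for rec in recommendations.values()]
    if max(risks) - min(risks) > risk_gap_limit:
        return "escalate to human", "agents disagree too strongly on risk"
    safest_agent, safest_rec = min(recommendations.items(),
                                   key=lambda item: item[1]["risk"])
    return safest_agent, safest_rec["action"]

agents = {
    "routing_agent": {"action": "ship via fastest carrier", "risk": 0.40},
    "support_agent": {"action": "hold order and confirm address", "risk": 0.10},
}
print(arbitrate(agents, risk_gap_limit=0.50))
# -> ('support_agent', 'hold order and confirm address')
```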

Third, and most importantly, placing the human at the center of the collaboration. This is not a slogan of “human supremacy” but an engineering reality: AI systems have no real intent, no sense of responsibility, and cannot bear the consequences of their actions. Only people can. When systems fail, the locus of accountability is human; when decisions involve value conflicts, the final call must be human; when AI’s optimization produces unacceptable costs, the authority to press pause is also human. The core capacity of a collaborative overseer is the ability to make trade-offs that AI cannot — between efficiency and humanity, between automation and control, between short-term gain and long-term risk.

Raising a collaborative overseer is not about teaching children more skills; it is about helping them develop a fundamental stance toward AI: AI is a partner one can collaborate with, but it must be overseen; tasks may be delegated to AI, but responsibility cannot be; AI can improve efficiency, but only humans can make the final trade-offs. This stance is cultivated through every daily opportunity to let children make decisions, bear consequences, and learn from mistakes.

8. A Fundamental Shift in the Educational Worldview

Beyond specific subjects and skills, parents need to understand a deeper change: today’s educational system was designed for industrial civilization, with the goal of producing people who can efficiently execute rules. In the AI era, execution is no longer scarce; judgment is. This means the goal of education must shift from “training execution” to “cultivating judgment.”

From standard answers to open questions. Industrial-era education pursued standard answers, because production lines could not tolerate deviation. But the core capacity of the AI era is not selecting the correct option among several — it is the courage to make a judgment when no ready-made answer exists, and to bear the consequences. Families and schools can deliberately reduce “multiple-choice” thinking and increase open-ended discussion of the form: what do you think? why?

From efficiency-first to value-first. Schools teach children “how to do things faster and better,” but rarely ask “whether this thing is worth doing.” Training in value judgment does not require an extra course — only the habit of asking why in everyday decisions. When a child asks, “why do I have to learn this?” do not just answer “because it’s on the test.” Discuss it seriously: in what context is this knowledge useful? What aspect of the world does it help us understand?

From memorizing knowledge to accumulating judgment through experience. AI can memorize every fact, but judgment comes from experience. Experience is not the natural accumulation of years; it is the accumulation of real decisions and their consequences. Let children take part in real decisions early — planning a route for a family trip, setting principles for allowance, drawing boundaries on screen time. These are the training grounds for judgment.

From individual achievement to systems thinking. Industrial-era education emphasized individual competitiveness — grades, rankings, prestige schools, high salaries. But the core questions of the AI era are often systemic: at what cost was this optimization achieved? Who is paying for the efficiency? A person who only cares about their own success will struggle to answer such questions. Training in systems thinking can begin with simple inquiries. When a headline announces “Platform X improved delivery speed by 20%,” sit with your child and ask: who provided the speed-up? Whose work made it possible?

9. Conclusion

Return to the original question: what should AI natives learn?

The answer can be put simply: learn what AI is not good at. AI is good at executing rules, not at making them; good at optimizing efficiency, not at deciding what should not be optimized; good at finding patterns in data, not at making judgments where there is no data; good at answering “how,” not “why” or “should.”

But these capacities cannot be transmitted by an “AI-era curriculum.” Judgment, sensitivity to value, systems thinking, the ability to remain clear-headed under pressure — these can only grow slowly out of real-life experience. What parents can do is not to choose a “safe” major for their child, but to create an environment in which the child has the chance to make judgments, bear consequences, recover from failure, and grow their own judgment through real interaction with the world.

A true AI native is not someone who has used an iPad since childhood. A true AI native is someone who, in the age of AI, still preserves human judgment — someone who can collaborate with AI, oversee AI, weigh trade-offs when efficiency and humanity collide, and bear the consequences with courage.


Translated from the Chinese original by Chun Xia, Founding Partner at TSVC. Read the original Chinese version on WeChat.