Prologue: The Shape of the Problem

People living through technological transitions tend to get the timing wrong in a specific way. They overestimate how fast things will change in the short term and underestimate how deep the changes run in the long term. The automobile did more than replace the horse. It restructured the physical geography of nations, created suburbs, collapsed rural economies, generated new forms of death, and produced a geopolitical order organized around petroleum. The people buying Model Ts in 1910 were thinking about getting to work faster. They were not thinking about the interstate highway system or the oil wars of the twenty-first century.

We are in the early phase of a technological transition that will prove at least as consequential. Artificial intelligence, in its current form as large language models and generative systems, will restructure what engineering means, who gets to do it, how firms organize technical labor, and what cognitive capacities a society retains or loses as a result.

This essay attempts to map the terrain. It is organized around several predictions, each grounded in observable trends but necessarily speculative in the details. The predictions concern the normalization of AI in engineering workflows, the consolidation of entry-level positions, the restructuring of professional hierarchies, the emergence of contract-based technical labor, and what I will call the Great Dumbing: a generational erosion of critical thinking capacity resulting from over-reliance on systems that simulate understanding without possessing it. These are facets of a single structural transformation, and they have to be understood in relation to one another.


I. The Coming Default: AI as Expected Competency

The Software Precedent

Within the next year, the effective use of AI assistants will shift from competitive advantage to baseline expectation in software engineering. This is less a bold prediction than a description of something already underway. Companies that adopted GitHub Copilot, Cursor, and similar tools in 2023 and 2024 observed measurable productivity gains in boilerplate generation, code completion, and routine debugging. The early adopters gained an edge. The late adopters noticed. The conversation has moved from "should we use AI?" to "how do we use AI well?" and is approaching "why aren't you using AI?"

This trajectory follows a pattern we have seen before. There was a time when proficiency with Git was a distinguishing skill. Before that, using an IDE rather than a plain text editor marked a certain kind of developer. These tools became invisible infrastructure, expected rather than exceptional. AI-assisted coding is on the same path, compressed into a much shorter timeline.

The implications for hiring are concrete. Job postings will begin listing "experience with AI-assisted development tools" as a requirement. Interview processes will test whether a candidate can decompose a problem into components suitable for AI assistance, evaluate the output, identify failure modes, and integrate the result into a larger system. The skill being tested shifts from "can you write code" toward "can you manage the process of code generation, including the parts you delegate to a machine."
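
To make that shift concrete, here is a minimal sketch of the new division of labor. The Python below uses a hypothetical generate_with_assistant stand-in for whatever tool a team actually adopts; the point is not the tool but the workflow, in which the engineer writes the contract and the checks, and delegates only the implementation.

```python
# Sketch: the engineer owns the specification and the verification;
# the implementation is delegated. `generate_with_assistant` is a
# hypothetical placeholder for whatever AI tool is actually in use.

def generate_with_assistant(prompt: str):
    # Stand-in for an AI-generated implementation. In practice this body
    # would come back from the assistant and be reviewed before use.
    def slugify(title: str) -> str:
        return "-".join(title.lower().split())
    return slugify

def verify(candidate) -> list[str]:
    """Run the engineer-authored checks the generated code must pass."""
    failures = []
    cases = [
        ("Hello World", "hello-world"),
        ("  leading and trailing  ", "leading-and-trailing"),
        ("already-slugged", "already-slugged"),
    ]
    for raw, expected in cases:
        got = candidate(raw)
        if got != expected:
            failures.append(f"{raw!r}: expected {expected!r}, got {got!r}")
    return failures

if __name__ == "__main__":
    candidate = generate_with_assistant("Write slugify(title): lowercase, hyphenate")
    problems = verify(candidate)
    if problems:
        print("Rejecting generated code:")
        for p in problems:
            print("  -", p)
    else:
        print("Checks pass; candidate can be integrated and reviewed.")
```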

This is a subtle but real change in what competence looks like.

Beyond Software: The Expansion into Physical Engineering

The more interesting and less discussed question is what happens when this dynamic extends beyond software into the disciplines that shape the physical world: mechanical engineering, civil engineering, electrical engineering, and their many subdisciplines.

The infrastructure for this expansion is being built now. Autodesk, the company behind AutoCAD, Revit, Inventor, and Fusion 360, sits at the center of an enormous portion of the world's engineering design workflows. They have the data. They have the user base. They have every financial incentive imaginable. It would be genuinely surprising if Autodesk, or a competitor like Dassault Systèmes, Siemens, or PTC, did not release a flagship AI model integrated into their CAD/CAM/CAE platform within the next two to three years. The technical foundations already exist. Generative design, which uses algorithms to explore design spaces under specified constraints, has been a feature in Fusion 360 since 2017. What changes with modern AI is the interface. Instead of specifying constraints through menus and parameter fields, an engineer could describe requirements in natural language and receive a set of candidate geometries, each optimized for different trade-offs.

Consider what this means for a mechanical engineer designing a bracket for an aerospace application. Today, this engineer needs deep knowledge of materials science, stress analysis, manufacturing constraints, and the specific CAD software they are using. Within five years, this engineer may need to know how to prompt an AI system with the right constraints, evaluate the structural integrity of the output using simulation tools that are themselves AI-augmented, and make judgment calls about trade-offs the AI cannot resolve on its own. The required knowledge does not disappear. The center of gravity moves from execution to evaluation, from generation to judgment.
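
A rough sketch of what "from generation to judgment" means in this workflow, again in Python and with entirely hypothetical names and numbers (propose_candidates stands in for a generative design backend; no real CAD vendor API is implied): the system proposes, automated screening enforces the hard constraints, and the choice among surviving trade-offs remains the engineer's.

```python
# Sketch of the shift from execution to evaluation. Nothing here is a real
# CAD vendor API; `propose_candidates` is a hypothetical stand-in for an
# AI-driven generative design backend, and the numbers are invented.

from dataclasses import dataclass

@dataclass
class BracketRequirements:
    max_mass_kg: float        # mass budget
    min_safety_factor: float  # against yield, from the governing load cases
    material: str             # e.g. "Ti-6Al-4V"

@dataclass
class Candidate:
    label: str
    mass_kg: float
    safety_factor: float      # as reported by (AI-augmented) simulation
    unit_cost_usd: float

def propose_candidates(req: BracketRequirements) -> list[Candidate]:
    # Hypothetical: in practice these geometries would come from the
    # generative tool and the safety factors from verified simulation runs.
    return [
        Candidate("lattice", 0.41, 1.8, 320.0),
        Candidate("topology-optimized", 0.38, 1.5, 410.0),
        Candidate("conventional-ribbed", 0.55, 2.4, 180.0),
    ]

def screen(req: BracketRequirements, candidates: list[Candidate]) -> list[Candidate]:
    """The engineer's gate: discard anything violating hard constraints.
    Choosing among survivors (mass vs. cost vs. margin) stays a human call."""
    return [
        c for c in candidates
        if c.mass_kg <= req.max_mass_kg and c.safety_factor >= req.min_safety_factor
    ]

if __name__ == "__main__":
    req = BracketRequirements(max_mass_kg=0.5, min_safety_factor=1.6, material="Ti-6Al-4V")
    for c in screen(req, propose_candidates(req)):
        print(f"{c.label}: {c.mass_kg} kg, SF {c.safety_factor}, ${c.unit_cost_usd}")
```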

The same logic applies across disciplines. A civil engineer designing a drainage system could use AI to generate candidate layouts optimized for flow capacity, cost, and environmental impact. An electrical engineer designing a PCB could use AI to suggest component placements and trace routings that minimize interference and thermal load. The AI does not need to be perfect. It needs to be good enough to accelerate the design process and shift the engineer's role from drafter to critic.

These are extrapolations of capabilities that already exist in rudimentary form, combined with economic incentives that are overwhelming. The companies that sell engineering software are in a competitive race. Whichever platform integrates AI most effectively will capture market share. The engineers who use these platforms will be expected to use the AI features. The same cycle of normalization currently playing out in software engineering will play out in physical engineering, delayed by perhaps two to four years.

The Question of Trust

There is a critical difference between AI-assisted software engineering and AI-assisted physical engineering, and it concerns the consequences of failure. A bug in a web application may cause data loss or user frustration. A flaw in a bridge design may cause death. The tolerance for error in physical engineering is orders of magnitude lower, and the regulatory frameworks reflect this.

AI will still be adopted in physical engineering. But the adoption will be mediated by a dense layer of verification, validation, and professional liability. Engineers will use AI to generate candidates, and the responsibility for the final design will remain with a licensed professional. The AI becomes a tool operating within a framework of accountability that predates it and will survive it.
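
One plausible shape for that framework, sketched below with illustrative names rather than any real compliance system, is a design record that treats automated checks as necessary but never sufficient: nothing is released until a named, licensed engineer signs off, and the record of that sign-off is what carries the accountability.

```python
# Sketch: AI-generated candidates pass automated checks, but release requires
# an explicit, recorded sign-off by a named licensed engineer. The fields and
# identifiers are illustrative assumptions, not any real compliance system.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DesignRecord:
    design_id: str
    checks_passed: bool
    signed_off_by: str | None = None
    signed_off_at: datetime | None = None

    def sign_off(self, engineer_license_id: str) -> None:
        """Only a human sign-off releases a design. Automated checks are
        necessary but never sufficient."""
        if not self.checks_passed:
            raise ValueError("Cannot sign off a design that has not passed checks")
        self.signed_off_by = engineer_license_id
        self.signed_off_at = datetime.now(timezone.utc)

    @property
    def released(self) -> bool:
        return self.checks_passed and self.signed_off_by is not None

if __name__ == "__main__":
    record = DesignRecord(design_id="bracket-rev-C", checks_passed=True)
    print(record.released)       # False: passing checks alone does not release it
    record.sign_off("PE-12345")  # hypothetical license identifier
    print(record.released)       # True: responsibility rests with the signer
```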

This creates a new kind of problem, though. If the AI generates a design that passes every check an engineer knows how to run, and that design fails in a mode the engineer did not anticipate because they relied on the AI's apparent thoroughness, who bears responsibility? The engineer, who signed off? The software company, whose model produced the design? The firm, which adopted the tool? This question has no settled answer, and the legal and regulatory battles over it will shape the adoption curve for decades.


II. The Consolidation of Entry-Level Positions

What Junior Meant

To understand what is about to happen to junior engineering positions, we need to understand what they have historically been. A junior engineer, whether in software, mechanical, civil, or electrical engineering, has traditionally occupied a specific role within an organization. They perform routine tasks under supervision. They learn by doing. They absorb institutional knowledge through proximity to senior practitioners. They make mistakes in controlled environments where the consequences are manageable. The junior position is a training ground and the mechanism by which a profession reproduces itself.

AI disrupts this mechanism in a precise and dangerous way. The routine tasks that junior engineers perform (writing boilerplate code, running standard analyses, producing initial drafts of designs, conducting literature reviews) are exactly the tasks that AI handles well. If a senior engineer can delegate these tasks to an AI system that produces output in seconds rather than days, the economic case for hiring a junior engineer to do the same work collapses.

This is already happening in software engineering, and there is no structural reason it will stay confined there as AI tools mature. Estimates of task automation in entry-level knowledge work vary widely, but even the conservative ones put a meaningful share of routine tasks, on the order of 30 to 40 percent, within reach of AI over the next few years. The implications are severe.

The Raising of the Bar

What follows from this consolidation is a radical redefinition of what "junior" means. The floor rises. An entry-level engineer in 2028 will be expected to possess skills that would have qualified them for a mid-level position in 2022. They will need to understand the fundamentals of their discipline and also the mechanics of AI-assisted workflows: how to prompt effectively, how to evaluate AI output for correctness and completeness, how to identify the specific failure modes of generative systems, and how to integrate AI-generated components into larger designs that require human judgment at the seams.

This creates a paradox. The training ground that produced competent engineers is being eliminated by the same technology that demands higher competence from engineers. Where do the engineers of 2030 learn the skills they need if the entry-level positions where those skills were historically acquired no longer exist?

Some of this learning will shift to universities, which will be forced to integrate AI tools into their curricula as core infrastructure. Some will shift to boot camps and certification programs. Some will happen through self-directed learning, as it always has for the most motivated practitioners. But there is a loss here that is difficult to quantify: the tacit knowledge, the institutional awareness, the professional judgment that comes from working within a functioning organization under the guidance of experienced practitioners. These things cannot be simulated. They can only be experienced. And if the positions where they are experienced cease to exist, they will simply not be experienced.

The profession will, in effect, be pulling up the ladder behind it.

The Flattening of Hierarchy

One plausible response to this consolidation is a flattening of professional hierarchies. If the distinction between junior and senior rests on a gradient of capability that AI compresses, if a competent practitioner with good AI tools can produce output previously requiring years of specialized experience, then the traditional hierarchy becomes difficult to justify.

We will likely see, in some organizations and eventually across sectors, a move toward a single technical title. Something like "Technical Staff" or "Member of Technical Staff," a convention long associated with Bell Labs and now common at AI labs and startups. Under this model, all engineers are expected to possess a baseline level of competence that includes AI fluency. Differentiation happens through scope of responsibility, domain expertise, and the ability to handle ambiguity and novel problems that AI cannot resolve.

This sounds egalitarian at first. In practice, it creates a different form of hierarchy, one organized around judgment rather than years of experience. The people who rise to the top will be those who can decompose novel problems, evaluate AI output critically, make decisions under uncertainty, and communicate effectively with both machines and humans. The people who stay at the bottom will be those who can only operate the tools.

The irony is that this new hierarchy may be harder to climb than the old one. The old hierarchy had a clear path: start junior, do the work, gain experience, advance. The new hierarchy demands capabilities (critical thinking, pattern recognition across domains, comfort with ambiguity) that resist any specific training program. They are, in a meaningful sense, the product of a particular kind of intellectual formation that begins long before one's first job.


III. The Oscillation: Layoffs, Shortages, and the Contractor Economy

The First Wave

The technology industry has already experienced what might be called the first wave of AI-motivated workforce reduction. The layoffs of 2023 and 2024, while driven by multiple factors including post-pandemic overcorrection and rising interest rates, were explicitly framed by some executives in terms of AI-driven productivity gains. The logic is straightforward: if a team of ten engineers using AI tools can produce the output previously requiring twenty, the other ten become redundant.

This logic will extend beyond software engineering. As AI tools mature for mechanical, civil, and electrical engineering, the same arithmetic will apply. Design teams will shrink. Analysis that required a team of specialists will be performed by a smaller group augmented by AI. The initial impact will be felt most at the entry and mid levels, where the automatable tasks are concentrated.

The Correction

The layoff logic contains a flaw that will become apparent over time. It treats engineering labor as fungible and interchangeable, reducible to units of output. Engineers possess institutional knowledge, domain expertise, client relationships, and problem-solving intuitions that no model captures. When companies lay off large numbers of technical staff, they lose organizational intelligence along with labor capacity. The AI tools that justified the layoffs cannot replace this intelligence because they were never designed to. They were designed to accelerate execution.

The result will be a correction. Companies that cut too deeply will discover they cannot function. Critical projects will stall. Quality will degrade. Customer relationships will suffer. The same executives who authorized the layoffs will authorize hiring surges, often at higher salaries than the positions they eliminated, because the market for experienced engineers will have tightened.

This oscillation (layoff, then shortage, then desperate hiring) has occurred in every previous technology cycle. What differs this time is the speed and amplitude. The AI-driven productivity gains are real enough to justify significant workforce reductions. But the resulting organizational fragility is also real, and it will surface faster than executives expect.

The Rise of the Technical Contractor

Into this volatile labor market will step a figure familiar from other industries but relatively uncommon in engineering: the independent contractor or contract firm. The analogy to construction is revealing.

In the construction industry, general contractors do not typically maintain large permanent workforces. They assemble teams for specific projects, drawing on a network of specialized subcontractors (plumbers, electricians, framers, concrete workers) who move from project to project as demand shifts. The general contractor's core skill is managing the work: scoping projects, coordinating specialists, ensuring quality, and bearing liability.

A similar model is plausible for engineering in the AI era. Companies may maintain a small core of permanent technical staff who understand the business, manage AI systems, and define requirements. For specific projects (a new product launch, a system migration, a facility design) they bring in contract engineers or contract firms who execute the work using AI tools and then move on.

This model offers companies flexibility and cost savings. It offloads the risk of permanent headcount in a rapidly changing technological landscape. It allows companies to access specialized expertise without maintaining it on payroll.

The costs are significant, though. Contract workers have weaker institutional loyalty and less incentive to invest in organizational knowledge. Quality control becomes harder when the workforce is transient. The knowledge that accumulates through sustained engagement with a codebase, a product, or a system (the knowledge that prevents catastrophic failures and enables incremental improvements) leaks out of the organization with every departing contractor.

The construction analogy is instructive in another way. Construction is an industry with thin margins, chronic quality problems, frequent cost overruns, and a high rate of worker injury and exploitation. If engineering follows the construction model, the prestige of the profession may diminish considerably.

There is also a deeper structural question. If engineering becomes predominantly contract-based, who trains the next generation? In permanent employment, companies invest in junior engineers because they expect to benefit from that investment over time. In the contract model, that incentive disappears. The cost of training falls on the individual engineer, who must invest in their own skill development between contracts, or on educational institutions, which lack the capacity to provide the practical, hands-on experience that engineering requires.


IV. The Great Dumbing

The Thesis

The predictions above concern the structure of engineering as a profession and an industry. What follows concerns something more fundamental: the cognitive capacity of the society that produces engineers.

I use the phrase "the Great Dumbing" deliberately, with full awareness that it sounds provocative. The intent is descriptive. The thesis is this: widespread, habitual reliance on AI systems for cognitive tasks (problem-solving, writing, analysis, decision-making) will erode the capacity of individuals and populations to perform these tasks independently. This erosion will be most severe in generations that grow up with AI as a default cognitive resource, never having developed the underlying skills that AI appears to provide.

This is a thesis about the relationship between practice and capacity. Cognitive skills, like physical skills, require practice to develop and maintain. The capacity for sustained analytical reasoning, for holding multiple variables in mind while working through a problem, for generating novel solutions through combinatorial thinking: these abilities are developed through years of effortful practice, most of it occurring during the formative period between childhood and early adulthood.

If AI systems handle the effortful parts of cognition (if a student can generate an essay by providing a prompt, solve a math problem by describing it in natural language, debug code by pasting it into a chat window) the student never develops the capacity to perform these tasks independently. They develop, instead, the capacity to operate the AI. This is a real skill. But it depends entirely on the continued availability and reliability of the AI system, and it does not cultivate the underlying cognitive muscles that independent problem-solving would have built.

The Generational Gradient

The impact will be uneven across generations. People who developed their cognitive skills before AI became widely available (roughly, anyone who completed their education before 2025) possess a foundation of independent capability that AI augments rather than replaces. They can use AI as a tool because they understand the domain well enough to evaluate its output. They have a baseline that exists independently of the technology.

Generation Alpha (those born roughly between 2010 and 2025, growing up with AI as a constant feature of their educational and social environment) will be the first generation for whom this foundation is compromised at scale. The reason has nothing to do with intelligence. The variable is practice. If the educational system and the broader culture provide AI as a default resource for cognitive tasks, the incentive to develop independent cognitive skills diminishes. Why struggle through a difficult math problem when you can ask an AI to explain the solution? Why develop a writing style through years of revision when you can generate polished prose on demand?

The answer ("because the struggle itself builds capacity") is true but extremely difficult to implement as pedagogy. It requires either restricting access to AI tools during formative years, which grows more unrealistic as the tools become ubiquitous, or redesigning education to focus on the cognitive skills that AI cannot provide, which requires a clarity about what those skills are that we do not yet possess.

Generation Beta (those born after 2025, who will grow up in a world where AI is as unremarkable as electricity) will face a more severe version of the same problem. For them, the question will shift from "why should I develop this skill when AI can do it?" to something more troubling: "what skill? I didn't know this was something humans needed to do." The absence of a skill is invisible to those who never possessed it. You do not miss what you never had. And you cannot know what you are missing.

The Historical Parallel

The closest historical parallel is the relationship between literacy and pre-literate societies. Before widespread literacy, humans maintained vast bodies of knowledge through oral tradition. Memorization, narrative structure, and mnemonic technique were highly developed cognitive skills. With the advent of writing and then printing, these skills atrophied. We did not notice the loss because we gained something (access to far more information than any individual could memorize) that seemed obviously superior.

The loss was real, though. The capacity for sustained memorization, for holding complex narratives in mind without external aid, for the particular form of deep attention that oral tradition requires: these are measurably diminished in literate populations compared to oral ones. We accept this trade-off because the gains from literacy are overwhelming. The question worth asking is whether the trade-off involved in AI-mediated cognition is similarly favorable.

There are reasons to think it may not be. Literacy augmented a specific cognitive function (memory) while leaving other functions intact or even strengthening them. The act of reading is itself a demanding cognitive exercise that develops attention, comprehension, and analytical thinking. Writing is even more demanding. Literate societies did not merely offload memory. They redirected cognitive effort toward new and arguably more complex tasks.

AI, by contrast, offers to augment cognition as a whole: the entire process of thinking through a problem and arriving at a conclusion. If the process is offloaded, what remains to develop the cognitive muscles that the process would have exercised? The user of AI is reducing cognitive effort, and the interface is designed to make this feel good. The output is designed to be satisfying. The entire user experience is optimized for the sensation of having accomplished something without the effort that accomplishment traditionally requires.

This design is intentional. The companies that build AI systems are incentivized to make their products as easy and as indispensable as possible. Friction reduces engagement. Difficulty reduces retention. The product is optimized for dependence, and the user's data is the mechanism by which that dependence generates revenue.

The Epistemic Dimension

There is a subtler aspect of the Great Dumbing that concerns the relationship between a person and their own beliefs.

When someone works through a problem independently, arriving at a conclusion through their own reasoning, they develop a specific kind of relationship with that conclusion. They understand the answer and the path that led to it. They can defend it, modify it, or abandon it based on new evidence. They own the reasoning in a way that is inseparable from their intellectual identity.

When someone receives an answer from an AI system, this relationship weakens. They may accept the answer. They may even be able to explain it if pressed. But they do not possess the deep, structural understanding that comes from having derived it themselves. The answer is received and consumed rather than constructed and produced.

Over time and at scale, this shift produces a population that holds beliefs without understanding their foundations. Most people have always held most of their beliefs on the basis of authority rather than independent reasoning, so the phenomenon has precedent. But AI accelerates the trend and extends it into domains where independent reasoning was previously necessary and common: education, professional practice, creative work, and technical problem-solving.

The result is a population growing increasingly dependent on systems it does not understand, operated by corporations whose interests diverge from the public good, for cognitive functions it can no longer perform independently. This is a reasonable extrapolation of current trends, and it is worth taking seriously even if the specifics prove different from what I have described.

The Data Question

The corporate dimension here is easy to overlook and difficult to overstate. The AI systems becoming cognitive infrastructure for hundreds of millions of people are built and operated by a small number of corporations. These corporations have a fiduciary obligation to their shareholders to maximize the value extracted from their users. The primary mechanism of value extraction is data.

Every prompt submitted to an AI system is a data point. Every interaction is potential training material for the next version of the model. Every user's cognitive habits, professional challenges, creative impulses, and personal vulnerabilities are captured, processed, and monetized. The system is designed to be maximally useful because maximal utility drives maximal engagement, which drives maximal data collection. User dependence serves this pipeline directly.

This structural conflict of interest is, as of this writing, entirely unregulated in any meaningful sense. The same systems reshaping how people think are owned by entities whose interests lie in shaping how people think in ways that serve corporate objectives. The parallel to social media is exact and should be alarming. We have already observed, documented, and largely failed to address the cognitive and social harms caused by algorithmically optimized social media platforms. AI systems represent a more powerful version of the same dynamic, operating at a deeper level of cognitive engagement.

Gen Alpha is growing up immersed in this environment. Gen Beta will know nothing else.


V. The Feedback Loop

The phenomena described above reinforce one another in a cycle that amplifies each component.

AI tools become standard in engineering. Entry-level positions are consolidated. The training pipeline narrows. The workforce becomes more dependent on AI tools because independent skill development has been curtailed. The tools become more powerful because they are trained on the data generated by this dependent workforce. The workforce becomes more dependent. The training pipeline narrows further.

Meanwhile, the generations entering the workforce have been shaped by the Great Dumbing. Their independent cognitive capacity is diminished relative to previous generations, because the environment in which their cognitive skills developed provided less incentive and fewer opportunities for the effortful practice that builds capacity. They are fluent users of AI tools but weaker independent reasoners. They can manage AI agents but struggle when the agents fail in novel ways.

This workforce is then organized into the flattened, contractor-based structures described earlier. There is no senior engineer to catch the error that the AI missed and the junior engineer cannot recognize. There is no institutional knowledge to draw on because the workforce is transient. There is no training pipeline to rebuild capacity because the entry-level positions where training happened have been eliminated.

The system becomes brittle. It works well under normal conditions, when the problems fall within the distribution the AI was trained on, when the environment matches the assumptions, when nothing novel happens. It fails catastrophically when something novel happens. And novel things always happen.


VI. What Remains Irreducible

The Case for Human Judgment

It would be irresponsible to present this analysis without acknowledging the countervailing forces. The most important is judgment under genuine uncertainty. AI systems, including the most advanced ones currently available, operate by pattern matching against their training data. They are extraordinarily good at this. But they are structurally unable to reason about situations that fall outside their training distribution in the way that humans can. A human engineer, confronted with a novel failure mode, can draw on physical intuition, analogical reasoning, and creative problem-solving in ways that no current AI system replicates. They can recognize that a situation is novel, that the patterns do not apply, and shift to a different mode of reasoning. This capacity is what distinguishes an engineer from a technician, and it is what will distinguish the engineers who thrive in the AI era from those who do not.

The question is whether this capacity will be cultivated or allowed to atrophy. It cannot be taught through AI-assisted workflows. It must be developed through direct engagement with difficult problems, through failure and recovery, through the specific kind of cognitive stress that builds resilience and flexibility. If the educational and professional structures that provide these experiences are dismantled, the capacity will erode, and no amount of AI augmentation will compensate.

The Value of Friction

There is a case to be made for the deliberate preservation of friction in cognitive work. Difficulty is the mechanism by which capacity is built. The athlete who trains with resistance becomes stronger. The musician who practices difficult passages develops technique. The engineer who works through problems without AI assistance develops the judgment that makes AI assistance useful rather than dangerous.

Every tool that reduces effort also reduces the development that effort would have produced. The rational response is to ensure that the development happens elsewhere: through deliberate practice, through educational design, through professional norms that value independent capability alongside tool fluency.

Whether this rational response will be implemented is uncertain at best. The incentives point the other way. Friction reduces productivity. Difficulty reduces engagement. Practice is expensive and slow. In a competitive economy optimized for short-term efficiency, the rational long-term response is exactly what will not happen unless someone deliberately intervenes.


VII. Conclusion: Navigating the Transition

What We Can Reasonably Conclude

The transformation of engineering by AI is already underway. The questions that remain open concern its speed, its depth, and its consequences. These will be determined by the decisions made by individuals, institutions, corporations, and governments over the coming years and decades.

We can reasonably conclude the following.

AI fluency will become a baseline expectation across all engineering disciplines within five to ten years. The transition will begin in software engineering, where it is already well advanced, and extend to mechanical, civil, electrical, and other disciplines as tools from Autodesk, Siemens, Dassault, and others mature.

Entry-level engineering positions will be consolidated. The bar for what constitutes a junior engineer will rise significantly. Traditional professional hierarchies will flatten, and the distinction between junior and senior may give way to more fluid organizational structures organized around judgment and scope rather than years of experience.

The engineering workforce will experience cycles of layoff and shortage as companies miscalculate the relationship between AI-augmented productivity and organizational capability. The contract-based employment model will expand, bringing flexibility at the cost of institutional continuity and professional development.

The cognitive capacity of successive generations will be shaped by their relationship with AI tools. Those who develop independent skills before becoming dependent on AI will be better positioned than those who do not. The educational and cultural systems that might address this are not currently designed to do so.

The data economy underlying AI development creates structural conflicts of interest that existing regulatory frameworks are not addressing.

What We Cannot Conclude

We cannot conclude that these trends are inevitable in their specifics. Technology adoption is shaped by regulation, culture, economics, and individual choice. The construction-contractor model may not emerge if professional norms resist it. The Great Dumbing may be mitigated if educational institutions adapt with sufficient speed. The consolidation of entry-level positions may be offset by the creation of new roles we cannot currently envision.

We also cannot conclude that the net effect will be negative. Every previous technological transition has produced both losses and gains, and the gains have generally been substantial even when the losses were severe. AI may enable forms of engineering creativity and capability that we cannot currently imagine, just as the computer enabled forms of analysis that were previously impossible.

What we can conclude is that the transition deserves serious, sustained attention from everyone it will affect. The decisions being made now, in corporate boardrooms and university departments and government agencies and individual careers, will determine whether AI augments human capability or supplants it, whether the engineering profession becomes more accessible or more stratified, whether the next generation inherits a richer or a poorer cognitive toolkit.

The shape of the future is ours to determine. But only if we are paying attention. And paying attention, it turns out, is exactly the skill most at risk.