Deep Tech: Creating Value from First Principles
What Brian Eno Taught Me About How Deep Tech Can Unlock New Technological Paradigms
In previous posts, I explored research from the strategy group at the Fuqua School of Business on Deep Tech Commercialization; the platform we built, Scientifiq.AI, for finding deep tech ideas globally; and a growing concern of mine: our overemphasis on value capture and the role of business education in it. This post shifts attention to value creation through deep tech, which can drive prosperity by providing businesses with new building blocks of progress.
What Brian Eno Taught Me About Deep Tech
If you are a music lover, you’ve probably heard of Brian Eno. If you haven’t, you’ve probably heard many of the albums he produced (or co-produced): U2’s The Unforgettable Fire, The Joshua Tree, and Achtung Baby1. I could probably go on about Brian Eno, his generative music2, and his other experimental projects. But I want to talk about something else that Brian Eno created (on his Macintosh):
Yes, Brian Eno created the Windows 95 sound. Here’s a quote from an SFGate article about this project and his process3:
The idea came up at the time when I was completely bereft of ideas. I'd been working on my own music for a while and was quite lost, actually. And I really appreciated someone coming along and saying, “Here's a specific problem -- solve it.”
The thing from the agency said, "We want a piece of music that is inspiring, universal, blah-blah, da-da-da, optimistic, futuristic, sentimental, emotional," this whole list of adjectives, and then at the bottom it said "and it must be 3 1/4 seconds long."
I thought this was so funny and an amazing thought to actually try to make a little piece of music. It's like making a tiny little jewel.
In fact, I made 84 pieces. I got completely into this world of tiny, tiny little pieces of music. I was so sensitive to microseconds at the end of this that it really broke a logjam in my own work. Then when I'd finished that and I went back to working with pieces that were like three minutes long, it seemed like oceans of time. - Brian Eno (Interview in SFGate)
Eno’s deep dive into the Windows 95 sound follows the arc of many deep innovations. At some point, combinatorial ideas run out of gas. You get only so much “new” from remixing the known.4
Motivated by curiosity, musical talent, and a passion for challenge, Eno returned to foundational principles and dove into the microstructure of sound. This research uncovered sonic landscapes he had never imagined. When applied to longer compositions, the new musical elements he discovered while creating the Windows 95 sound made it feel like he was exploring entirely different musical contours, where moments expanded into vast oceans of time.
This is the deep tech arc—(1) hitting the limits of combinatorial innovation, (2) the need to delve deep for a new conceptual framework, (3) the discovery of entirely new primitives, and (4) then a burst of innovation with a new paradigm.
Just as Eno’s research into sound unlocked a new way of composing music, deep tech ventures could unlock new technological possibilities by reimagining the fundamental paradigms of computation, energy, and medicine.
Why Should We Care About Deep Tech?
The term Deep Tech has been around for some time but is now gaining traction in investment and academic circles. Definitions vary, from “technology solutions based on substantial scientific or engineering challenges” (Wikipedia) to innovations with significant “technology risk”—e.g., the possibility that a technology may not be feasible. Other definitions highlight deep tech’s challenges, such as high R&D costs, long incubation periods, and uncertainty, or focus on specific domains like quantum computing and next-generation batteries.
Growing research also examines why deep tech ventures are harder to develop and fund. Ramana Nanda and colleagues argue that high experimentation costs limit investment. Arora et al. highlight the joint effects of tech and market risks, creating a “missing middle” that makes some deep tech harder to finance. However, we must first understand how deep tech generates value before tackling organizational and financing challenges.
In this post, I argue that deep tech matters because it creates value in ways conventional (combinatorial) innovation cannot—by creating new primitives, making the constraints of existing paradigms less relevant, and solving unsolved problems with previously impossible technologies.
How Does Combinatorial Innovation Create Value?
Before developing our concept of deep tech innovation, it is helpful first to outline a model of “normal” combinatorial innovation, which works by recombining existing, discrete components in novel ways. These components operate at different levels of abstraction, enabling both incremental improvements and entirely new technologies.
Anyone familiar with computer science knows that programming operates at multiple levels. At the lowest level, assembly language and machine code interact directly with hardware, manipulating memory and processor instructions. Higher-level languages like C, Java, and Python free developers from binary arithmetic and let them write more human-readable code. At an even higher level, frameworks and APIs provide fully formed software components, enabling rapid application development without ever knowing or touching the underlying logic of many components. Today, generative AI pushes this abstraction even further, allowing us to “write code” using natural human language.
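To make the layering concrete, here is a small toy sketch in Python (my own illustration, not from the original discussion) of the same task expressed at progressively higher levels of abstraction:

```python
# A toy illustration of abstraction levels: the same task, summing numbers,
# expressed with progressively less visibility into the underlying mechanics.

from statistics import mean

numbers = [2, 3, 5, 7, 11]

# Lower level: manage the accumulator and the iteration by hand.
total = 0
for n in numbers:
    total += n

# Higher level: the language's built-in hides the loop entirely.
total_builtin = sum(numbers)

# Higher still: a library call; the caller never sees the iteration at all.
average = mean(numbers)

print(total, total_builtin, average)  # 28 28 5.6
```

Each step up the stack trades visibility into the machinery for speed of assembly, which is exactly what makes combinatorial innovation at the top of the stack so easy.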
You can see this in other creative domains, like music. At the most fundamental level, sound waves are the raw materials. Notes and chords structure these sounds, and they combine into melodies, harmonies, and rhythms. Higher up, song structures and genres provide frameworks for variation. Developers use APIs, and musicians sample, remix, and reorganize patterns to create new music.
Combinatorial innovation reassembles existing components into new configurations. These recombinations can drive substantial productivity gains for innovators who create novel and useful linkages.
The math behind combinatorial innovation is also exponential. With k known components, there are 2^k - 1 possible non-empty bundles. Even with just five components, that is 31 possible combinations; with 100 components, the number explodes to approximately 1.27 × 10^30. Everyday combinatorial innovation has significant potential, and extensive research highlights both (a) its impact and (b) the characteristics of those who excel at it.
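For anyone who wants to check that arithmetic, here is a quick illustrative snippet (my own sketch, using the standard count of non-empty subsets):

```python
# Sanity check: with k components, the number of non-empty bundles is 2**k - 1.

def possible_bundles(k: int) -> int:
    """Count the non-empty combinations of k distinct components."""
    return 2**k - 1

print(possible_bundles(5))             # 31
print(f"{possible_bundles(100):.3e}")  # 1.268e+30
```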
However, not all combinatorial innovation is equal. “Shallow innovation” (SI), what I like to think of as “API innovation,” and now “chat-based innovation,” is becoming increasingly easy to execute across many domains. While still valuable, these forms of innovation are highly competitive because nearly anyone can create with such readily available building blocks.
There is also deeper combinatorial innovation (DCI), which is less obvious and less accessible to most people. Unlike shallow innovation, DCI involves rigorously recombining existing paradigms in novel ways or returning to the most fundamental components of established approaches. As technology matures, primitives are often abstracted away to make innovation more accessible, but this can reduce novelty and quality. By going deeper into these foundational building blocks, DCI enables the creation of breakthroughs that shallow innovation cannot achieve.
Therefore, shallow innovation operates within a given paradigm, leveraging its existing frameworks to address problems. In contrast, deeper combinatorial innovation typically does one of two things: (a) it pushes beyond the superficial limits of a paradigm while still working within its foundational building blocks, or (b) it merges distinct paradigms, such as applying computer science principles to biology. The actual test of these innovations lies in whether they solve meaningful problems and, for businesses, whether they offer a viable path to profitability, at least in the short term.
In essence, combinatorial innovation creates value by recombining existing components, frameworks, or paradigms to optimize performance (e.g., GPUs repurposed for AI), lower costs (e.g., low-code/no-code software tools), improve accessibility (e.g., smartphone cameras), and enable new applications (e.g., ride-sharing apps integrating GPS, mobile payments, and cloud computing) within existing technological constraints.
How Does Deep Tech Create Value?
While shallow and deeper combinatorial innovation remix existing components and primitives, real deep tech creates value in a different way.
Existing definitions, which emphasize “science,” “advanced engineering,” or “high cost,” serve as useful descriptors but miss the true source of value creation.
Deep tech’s significance does not come merely from applying the scientific method or tackling costly and complex problems. Its value emerges when old technological constraints, previously accepted limitations, and established primitives are replaced with an entirely new conceptual and technological framework. To understand what makes deep tech truly deep, we must see it as a venture into “continuous reality” where discretization and modularization have not yet occurred. Researchers draw circles around phenomena that haven’t been identified before to create new and generalizable primitives. These primitives then become part of a new framework for innovation.
The Failure of Existing Models
It is perhaps helpful to begin at the end.
Every technological paradigm has limits, and those limits eventually make it difficult to push the technology forward, even at great cost. This limit occurs when current frameworks and scaling laws (e.g., put x in, y comes out) stop working. Sometimes, these limits are due to engineering constraints, but more often, the underlying assumptions of the science may no longer hold. For instance, a physical “law” doesn’t behave as it should at some scale.
So, scientists and engineers stop trying to optimize within an asymptotically failing framework and start questioning whether a new one is needed. Recognizing this failure as a fundamental limit rather than a fixable inefficiency is a key step (e.g., Nelson and Winter (1977)).
After this realization takes hold, the next step is to revisit the first principles.
The Search For a More General Conceptual Framework
As the existing paradigm reaches diminishing returns, scientists, researchers, firms, and investors begin searching for a new framework. This kind of shift requires abstraction, challenging widely held assumptions, stripping away existing conceptual models that serve as the foundation for current technologies and impose constraints on progress.
The goal is to start afresh with a new model that reveals deeper patterns still unseen by humans, allowing innovators to perceive a path forward that was not visible when we looked at the world through the old model.
New Generalizable Primitives from Continuous Reality
The search for a new framework requires reconceptualizing the problem to be solved, particularly by creating new primitives. Primitives are a technological paradigm's most basic building blocks, forming the foundation for designing and constructing more complex concepts, structures, systems, and technologies.
It is important to note that while new systems may, in principle, perform the same functions as those built within the old paradigm, they are not merely more efficient versions of existing technologies. They represent an entirely different way of thinking about the problem (one example is the mRNA vaccine).
For instance, semiconductors were not simply better vacuum tubes, just as qubits are not improved transistors. They are new foundational primitives.
Development of New Innovation Paradigms
Once a new conceptual framework is developed and enough of its essential primitives are defined, the work of building new technologies on this foundation can begin. Using new frameworks and primitives distinguishes deep tech innovation from even the most advanced forms of combinatorial innovation.
Combinatorial innovation starts with existing primitives and conceptual paradigms, rearranging and recombining them in novel ways to extend the capabilities of known systems. Deep tech, on the other hand, involves changing the primitives themselves, redefining what is possible, and setting the stage for entirely new technologies, and perhaps even new industries. Once these new foundations are established, they become paradigmatic, forming the infrastructure that enables the next wave of combinatorial advances.
The End of the Classical Computing Paradigm?
Just as the limits of vacuum tubes forced a shift to an entirely new computational paradigm, today’s semiconductor industry is in need of innovations beyond incremental improvements. There are already people trying to figure out what is next in computation.
The future of computation may not just be about better chips. It might consist of a reconceptualization of what computation itself is.
Quantum Computing, Semiconductors, and Limits of Existing Scaling Laws

From Vacuum Tubes to Semiconductors
In a previous post, I talked about Intel, its early rise, and what now looks like its decline. But Intel is just one milestone in the broader journey of computing.
While the computing industry has deeper roots, a good starting point for discussing quantum computing is the vacuum tube era. This era emerged around World War II and lasted until the late 1950s. At its core was the vacuum tube itself, a tiny, lightbulb-like device that acted as an on/off switch and could, with the right inputs, perform calculations by manipulating binary states.
When the Vacuum Tube Paradigm Could No Longer Scale
Interacting with a vacuum tube-based computer back then was not easy. The ENIAC, for example, had 18,000 vacuum tubes, and programming it meant writing instructions in binary (0s and 1s), then feeding them to the machine via punch cards, paper tape, or plugboards.
These computers were incredibly costly and complex. They used lots of energy, broke down often, generated enormous heat, and required constant maintenance (imagine how many lightbulbs had to be replaced!).
Yet, these early computers were also incredibly important. One of the biggest users of vacuum tubes was AT&T, which relied on them as switches and amplifiers for long-distance calls. As America's telephone system expanded, AT&T ran up against the technological limits of this paradigm. In practical terms, connecting all of America with vacuum tubes, let alone the rest of the world, was impossible. Something different was needed: a new primitive that could overcome the inherent limitations of vacuum tubes. (My colleague Dan Gross has several papers on AT&T’s mechanization of the telephone system and its impact on the labor market.)
The vacuum tube architecture reached its physical breaking point as computational demands grew. The architecture was constrained by size, reliability, power consumption, and operational complexity. The scaling laws of the vacuum tube era effectively ran out of gas, unable to support further growth without immense cost and complexity. To sustain innovation, robust new primitives were needed.
The Search for a New Framework at Bell Labs
AT&T recognized that these diminishing returns were a real problem. Luckily, they had the financial resources (and time) to invest in deep research conducted by brilliant scientists to tackle this failure of the existing paradigm. The breakthrough came in 1947 at Bell Labs (AT&T’s internal “R” organization), where John Bardeen (Physics PhD, Princeton), Walter Brattain (Physics PhD, University of Minnesota), and William Shockley (Physics PhD, MIT) invented the transistor. (Check out The Idea Factory about Bell Labs)
The transistor replaced the vacuum tube by enabling electronic switching and signal amplification using solid-state materials rather than glass tubes. This shift required a complete rethinking of the underlying physics. It moved away from classical electrical principles toward understanding how electrons behave at the atomic level. It also required a shift in conceptual framework, leveraging new concepts from other fields, including materials science, quantum theory, and solid-state physics.
The New Primitives of Semiconductor Computing
Rethinking the conceptual underpinnings of computing hardware led to new primitives: the p-n junction and bandgap engineering, allowing engineers to precisely control electrical flow. This, in turn, enabled the transistor to function as a fast, reliable 0/1 switch. This was a massive reconceptualization of the building blocks of computing. What we thought of as the constraints of the vacuum tube era no longer applied in the same way.
The transistor, along with the new primitives that scientists learned from researching semiconductor physics, created the foundation for a new computing paradigm. These building blocks eventually led to integrated circuits, vastly expanding computational possibilities.
Redefining the Paradigm of Computing
In addition to the deep innovations that led to the transistor, other fundamental breakthroughs—such as Claude Shannon’s development of information theory—enabled the harnessing of this new paradigm.
More importantly, though, this conceptual breakthrough gave rise to much of the modern world: computing as we know it (with a device in our pocket several trillion times more powerful than the ENIAC), the evolution of culture in the form of movies made at home and shared around the world, smart cars, space travel, and much more.
However, today, semiconductor architectures are thought to be approaching their physical and economic limits, as even the deepest combinatorial innovation is running out of gas.
From Semiconductors to Quantum Computing
On February 19th, Microsoft announced what it claims to be a massive breakthrough in the history of computing: the first quantum computer powered by topological qubits.
I am a social scientist, not a quantum physicist, but I’ve been watching this evolution with great interest. For one, Duke University is a hotbed of quantum computing innovation (see the work of Jungsang Kim and Christopher Monroe, as well as dozens of other students and faculty). It’s also no longer just scientific research: several Duke faculty have launched the first publicly traded quantum computing company, IonQ.
Moreover, several scholars in my field, Strategy, have been assembling people to understand the economic implications of quantum computing (check out the website of a new conference on this topic run by Florenta Teodoridis, Milan Miric, and Avi Goldfarb).
If quantum computing works (and it looks like it may just have legs), then what we see with generative AI might be a blip compared to the transformation that is likely to happen with quantum.
Limits of Existing Computing Paradigms
The shift from vacuum tubes to semiconductors, and now from classical to quantum computing, again follows the arc of deep tech innovation. Semiconductor scaling hit limits not just of engineering but of the classical computing paradigm itself, forcing a rethinking of what computation could be.
Computing is fundamentally about how we represent and manipulate information. Classical computing encodes data in bits (discrete 0s and 1s stored in silicon transistors) processed through deterministic logic gates.
Over decades, researchers and engineers refined semiconductor manufacturing, allowing transistors to shrink exponentially in size while increasing computational power. However, fundamental constraints emerge as traditional computing and transistors approach atomic limits, notably diminishing returns from Moore’s Law. These barriers indicate that even the deepest combinatorial innovations in semiconductors are running out of gas, necessitating a new approach.
Searching for a New Framework
From my naive understanding, quantum computing isn’t just about building better semiconductors, just as semiconductors weren’t merely better vacuum tubes. While semiconductors retained their predecessors' binary computing model, quantum computing marks a complete departure from classical logic. Instead of encoding information as a series of 0s and 1s, quantum computers leverage the mind-bending (and, frankly, weird) principles of superposition and entanglement, allowing qubits, the new units of computation, to exist in multiple states simultaneously.
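To make the contrast concrete, here is a toy simulation sketch (an illustration only, not how real quantum hardware is programmed): a qubit represented as a two-element complex state vector, placed into an equal superposition by a Hadamard gate.

```python
import numpy as np

# Toy illustration of the new primitive: a qubit is a 2-element complex
# state vector rather than a single 0/1 value.
ket0 = np.array([1, 0], dtype=complex)  # the |0> state

# The Hadamard gate puts a qubit into an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

psi = H @ ket0                  # state is now (|0> + |1>) / sqrt(2)
probabilities = np.abs(psi)**2  # Born rule: measurement probabilities
print(probabilities)            # [0.5 0.5]: "both states at once" until measured
```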
This new paradigm brings opportunities as well as scientific, engineering, and business challenges. Quantum systems are able to work with information in ways that classical ones cannot, but qubits are fragile and require specialized error correction mechanisms. Unlike classical redundancy-based error correction, quantum error correction must preserve entanglement without collapsing quantum states. The transition to quantum computing is not just about increasing processing power; it will require breakthroughs in physics, materials science, and algorithm design to fundamentally change how computation works.
New Primitives of Quantum Computing
While classical systems achieve speedups through multi-core architectures, GPUs, and optimized execution models, they ultimately follow a sequential or parallelized path. Quantum computers, however, use entanglement and interference to explore multiple computational pathways simultaneously, enabling exponential speedups for certain kinds of problems (e.g., cryptography).
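The same toy state-vector picture can illustrate entanglement (again, a simulation sketch rather than real quantum programming): a Hadamard gate followed by a CNOT gate leaves two qubits in a Bell state whose measurement outcomes are perfectly correlated.

```python
import numpy as np

# Toy illustration of entanglement: two qubits prepared in a Bell state.
ket0 = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Start both qubits in |0>, put the first into superposition, then entangle.
two_qubits = np.kron(H @ ket0, ket0)  # (|00> + |10>) / sqrt(2)
bell = CNOT @ two_qubits              # (|00> + |11>) / sqrt(2)

print(np.abs(bell)**2)  # [0.5 0. 0. 0.5]: outcomes of the two qubits are correlated
```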
To unlock these advantages, every layer of the “quantum stack,” from hardware to software, must be rebuilt with new primitives.
Scaling quantum computers means overcoming fundamental physics problems. The real question isn’t whether quantum computing works—it does—but whether we can turn this new paradigm into something practical and broadly useful, solving problems classical systems never could.
New Industries and a New Paradigm for Computing
Finally, quantum computing is not an extension of classical techniques but the creation of an entirely new computational paradigm. It operates beyond binary logic to redefine what computation could be. Introducing new primitives means that industries built around classical computation, from cryptography to materials science to optimization, may undergo fundamental transformations.
This is deep tech’s essence: recognizing limits, finding broader frameworks, and building new primitives that render old constraints irrelevant. Quantum computing isn’t just an increment on the existing natural trajectory of computation; it redefines information processing, paving the way for new industries.
So, How Does Deep Tech Create Value?
Simple: Deep Tech creates value from first principles. It provides innovators with a new toolkit to solve problems that simply couldn’t have been solved within the existing paradigm.
This is why fostering deep tech innovation isn’t just important; it is essential. If we fail to invest in and support it through our research labs and universities, we risk ceding the future to those who do invest.
Brian Eno has also produced albums by Coldplay, Devo, and Talking Heads! Achtung Baby was one of the first cassette tapes I ever bought through my Columbia House subscription in middle school. Here, enjoy the early 90s.
My colleague Rick Larrick told me about his new generative AI movie that is different every time you see it!
I originally learned about this project from Eno’s Wikipedia page.
One of my favorite YouTubers is Rick Beato; his basic premise is that music is so bad nowadays because it's so easy to create and consume (cheap combinatorial innovation).