One caveat that I wish to make at the beginning of this—very lengthy—entry: I am looking at technologies that can change the world in a non-incremental fashion, regardless of whether they are technologies that are best financed by VC investments. I am also not constraining myself to technologies that will bear fruit in the next one to five years. My horizon is somewhat longer than that, spanning five to twenty years.

It’s tough to make predictions, especially about the future

Predicting the future by extrapolating from the past is foolish; it all too often leads to “linear” extrapolations that fail to account for the outsized impact of non-incremental technological developments. It is, however, entirely common even among highly intelligent experts.1

So, first, let’s look at some famously wrong predictions about three non-incremental technological developments.

The telephone:

The Telephone purports to transmit the speaking voice over telegraph wires. We found that the voice is very weak and indistinct, and grows even weaker when long wires are used between the transmitter and receiver. Technically, we do not see that this device will be ever capable of sending recognizable speech over a distance of several miles. Messer Hubbard and Bell want to install one of their “telephone devices” in every city. The idea is idiotic on the face of it. Furthermore, why would any person want to use this ungainly and impractical device when he can send a messenger to the telegraph office and have a clear written message sent to any large city in the United States?

The electricians of our company have developed all the significant improvements in the telegraph art to date, and we see no reason why a group of outsiders, with extravagant and impractical ideas, should be entertained, when they have not the slightest idea of the true problems involved. Mr. G.G. Hubbard’s fanciful predictions, while they sound rosy, are based on wild-eyed imagination and lack of understanding of the technical and economic facts of the situation, and a posture of ignoring the obvious limitations of his device, which is hardly more than a toy….

In view of these facts, we feel that Mr. G.G. Hubbard’s request for $100,000 of the sale of this patent is utterly unreasonable, since this device is inherently of no use to us. We do not recommend its purchase.

The telephone eventually replaced the telegraph as the primary means of communication across great distances. I don’t think anyone born in a developed country after the 1980s has ever had the occasion to use a telegraph to send messages.

Airplanes:

Heavier than air flying machines are impossible.

On December 17, 1903, the Wright brothers made the first controlled, powered, and sustained flight with a heavier-than-air vehicle.

Computers:

Originally one thought that if there were a half dozen large computers in this country, hidden away in research laboratories, this would take care of all requirements we had throughout the country.

Based on data from Enders Analysis, the installed base of personal computers in the world exceeded 1.5 billion in 2013. Assuming that there were 200 countries (undoubtedly an overestimate of the number of countries then extant, given that this statement was made in 1952), each with six computers per Aiken’s statement, that would be 1,200 computers worldwide.

Thoughts on Technologies

So, without further ado, here are the technologies that I think are potentially disruptive, divided into three broad categories: energy, applications of superconductors, and artificial intelligence.

I exclude certain categories of technologies such as genomics, genetic engineering, and anti-senescence treatments not because they are not potentially disruptive, but because I lack the biological and medical knowledge to reflect meaningfully on their potential.3

Preliminary observations

The human race has an energy problem, which can be defined as a set of interrelated issues: generating sufficient energy without poisoning the environment to the extent that it is no longer feasible for humanity to survive on this planet, efficiently transmitting the energy to the places where it is needed, and reducing the overall power consumption required to maintain our present lifestyles.4

Within the broad subject of energy, there are four technological developments that I am particularly interested in:

  • Advanced nuclear fission reactors
  • Nuclear fusion
  • Energy storage technologies
  • Superconducting power transmission

The first two developments cover the issue of energy generation, while the latter two deal with the related issues of energy storage and transmission.

While many clean-technology-focused investors and government programs have touted solar and wind power as the solution to humanity’s energy needs, I am somewhat skeptical of these assertions. For one thing, I remain unconvinced that solar and wind power, without significant advancements in energy storage technology, can effectively meet base load requirements, given their intermittent nature. Moreover, the much-touted “grid parity” claims by solar and wind proponents are based on a metric called the levelized cost of electricity (LCOE), which both academics and the U.S. Energy Information Administration have criticized as misleading.
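To make the criticism concrete, here is a minimal sketch of how LCOE is computed, using entirely hypothetical plant figures. The metric discounts lifetime costs and lifetime generation, but it is blind to when the energy is delivered, which is precisely why comparisons between dispatchable and intermittent sources can mislead.

```python
# A minimal LCOE sketch; all figures below are hypothetical placeholders.
def lcoe(capex, annual_opex, annual_mwh, lifetime_years, discount_rate):
    """Discounted lifetime cost divided by discounted lifetime generation ($/MWh)."""
    discounted_costs = capex + sum(
        annual_opex / (1 + discount_rate) ** t for t in range(1, lifetime_years + 1))
    discounted_energy = sum(
        annual_mwh / (1 + discount_rate) ** t for t in range(1, lifetime_years + 1))
    return discounted_costs / discounted_energy

# Note what the formula never asks: whether the annual_mwh arrive on demand or
# only when the wind blows. Two plants with identical inputs get identical LCOE.
print(lcoe(capex=1_500_000, annual_opex=20_000, annual_mwh=2_600,
           lifetime_years=25, discount_rate=0.07))
```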

Instead, my interest—undoubtedly influenced in part by my childhood exposure to physics—has been focused on nuclear fission, i.e. the splitting of atoms of uranium, thorium, or plutonium. In this, I would say I am not alone. In 2001, the Generation IV International Forum was chartered by 13 member states5 to coordinate efforts to develop advanced nuclear systems. These include:

  • Very-high-temperature reactors (VHTR), including the pebble bed reactor being developed at MIT in the United States and at Tsinghua University (清华大学) in China
  • Lead-cooled fast reactors being developed in Belgium, Russia,6 the Republic of Korea, and the United States
  • Gas-cooled fast reactors being developed by a consortium of European countries including France, the Czech Republic, Hungary and Slovakia
  • Molten salt reactors (MSR) using a thorium fuel cycle, including fluoride salt-cooled high-temperature reactors being developed at the Chinese Academy of Sciences (中国科学院) in China and at the Oak Ridge National Laboratory in the United States
  • Sodium-cooled fast reactors, which have already entered commercial operation in the Russian Federation and Japan, with experimental designs in development in China and India
  • Supercritical-water-cooled reactors being developed by the Nuclear Power Institute of China in China

It should be noted that many of these “advanced” reactor designs were initially proposed in the early days of nuclear technology (the late 1940s to the 1970s) but are only now becoming the subject of more extensive research and development. Nevertheless, these new technologies offer a range of interesting possibilities, and suggest that despite the Fukushima Daiichi nuclear disaster in 2011, there remains some measure of political will to continue developing advanced nuclear technology.

Moreover, both India and China are actively exploring the use of thorium rather than uranium as a nuclear fuel. Thorium is significantly more abundant in these two countries than uranium and offers potential non-proliferation benefits,7 reduced nuclear waste with shorter radioactive half-lives, and less environmentally damaging mining operations.8

I am firmly convinced that nuclear energy remains a promising avenue for meeting humanity’s energy needs. The advantages of nuclear energy relative to other energy sources should be obvious:

  • Nuclear energy is not intermittent and can serve (and has served) as a base load power source.
  • Nuclear energy is energy dense and requires less land area to generate the same amount of energy as solar or wind.
  • Nuclear energy generates negligible greenhouse gases (the amount of greenhouse gas generated by mining activities is relatively small compared to the energy generated by even current generation nuclear power plants).

While there are disadvantages, they are largely technical problems that are soluble:

  • There is a non-zero probability of accidents. “Three Mile Island”, “Chernobyl”, and “Fukushima Daiichi” are etched into popular consciousness. Yet, let us look more carefully at these three disasters. As of April 2015, there were 443 operational nuclear reactors generating electricity in the world, according to statistics from the Nuclear Energy Institute. The first commercial nuclear reactor was constructed in 1958. There have been three significant accidents in the last 57 years (a back-of-envelope rate estimate follows this list). Indeed, in the case of Three Mile Island, although the reactor suffered a partial core meltdown, the radioactive material was contained and most post-accident investigations indicate minimal radioactive contamination. I am not suggesting that nuclear energy is completely safe. I am, however, suggesting that a balanced analysis shows that adequate measures can mitigate many of the risks, and that newer generation reactors (including the “generation IV reactors” currently under development by the Generation IV International Forum), taking into account lessons learned, are designed to mitigate such risks.
  • There is a need to deal with the waste products produced by nuclear reactors. Historically, the United States has disposed of its nuclear waste by storing it in repositories. Yet spent fuel from conventional nuclear reactors contains both usable and unusable fissionable materials, and can be reprocessed to separate the usable fissionable materials for re-use as fuel in fast reactors. Indeed, some of the new fission reactors on the drawing board are designed to use such spent fuel as their own fuel source, thus creating a “nuclear ecosystem” where fuel burned in one reactor might become—with reprocessing—fuel for another reactor.
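On the first point, the figures quoted above can be turned into a rough rate estimate. The sketch below deliberately assumes that all 443 reactors operated for the full 57 years, which overstates the true cumulative reactor-years and therefore flatters the safety record; it is back-of-envelope arithmetic, not a risk assessment.

```python
# Back-of-envelope accident rate using only the figures quoted above.
reactors = 443              # operational reactors as of April 2015
years = 57                  # years since the first commercial reactor (1958)
significant_accidents = 3   # Three Mile Island, Chernobyl, Fukushima Daiichi

# Generous assumption: every reactor ran for all 57 years (an overestimate).
reactor_years = reactors * years                 # 25,251 reactor-years
rate = significant_accidents / reactor_years     # accidents per reactor-year

print(f"{reactor_years:,} reactor-years (overestimated) -> "
      f"roughly one significant accident per {int(1 / rate):,} reactor-years")
```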

There have been some (though not many) VC investments in advanced nuclear fission reactors, notably:

  • Transatomic Power, which received $5.5 million in three rounds of seed funding from 2012 to 2015 from a group of investors including Peter Thiel, Daniel Aegerter, the Founders Fund, and FF Science to commercialize a molten salt reactor.
  • TerraPower, which has received an undisclosed amount of funding from Bill Gates, Charles River Ventures, and Khosla Ventures, is another company trying to commercialize a different form of fission reactor, the traveling wave reactor.9

This field is likely to be of limited attractiveness to conventional VC investors, given the long development times, high upfront capital requirements, and significant regulatory and public relations hurdles.

There is one energy technology that is even more speculative than the advanced nuclear fission reactors described above: net-positive, self-sustaining nuclear fusion, i.e. the fusion of two light atoms (typically isotopes of hydrogen or helium). Ever since the 1950s, controlled, net-positive nuclear fusion has been one of the holy grails of applied nuclear physics, yet it has always remained out of reach.

The problems with nuclear fusion can be summed up as follows:

  • Starting a nuclear fusion reaction with as little energy as possible, so as to ensure that the reaction is net-positive. While researchers at the Lawrence Livermore National Laboratory achieved a remarkable milestone when the energy released by a fusion reaction exceeded the energy absorbed by the fusion fuel, the reaction was still not self-sustaining, i.e. it did not generate more energy than was used to initiate it. (This is because the energy absorbed by the fuel is only a fraction of the energy used to power the lasers, owing to unavoidable physical limits on energy efficiency; the distinction between these breakeven thresholds is sketched after this list.)
  • Containing the resultant nuclear fusion, aptly stated by the French Nobel laureate in Physics, Pierre-Gilles de Gennes: “We say that we will put the sun into a box. The idea is pretty. The problem is, we do not know how to make the box.”
  • Controlling the plasma resulting from nuclear fusion to prevent the growth of instabilities in the plasma that might otherwise shut down the fusion reaction.
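The different breakeven thresholds referred to in the first point can be made explicit with a few ratios. The numbers below are placeholders chosen only to show the ordering of the thresholds, not measured values from any experiment.

```python
# Illustrative "gain" ratios for a laser-driven fusion shot; all values are
# hypothetical placeholders, not experimental data.
E_wall_plug = 400e6   # energy drawn from the grid to fire the driver (J)
E_driver    = 2e6     # energy the lasers actually deliver to the target (J)
E_absorbed  = 15e3    # energy absorbed by the fusion fuel (J)
E_fusion    = 20e3    # energy released by fusion reactions (J)

fuel_gain        = E_fusion / E_absorbed   # > 1: the milestone described above
scientific_gain  = E_fusion / E_driver     # > 1 would be "scientific breakeven"
engineering_gain = E_fusion / E_wall_plug  # > 1 is what a power plant needs

print(fuel_gain, scientific_gain, engineering_gain)
```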

Since 2006, ITER, a massive multinational government program funded by the European Union, the United States of America, the People’s Republic of China, the Russian Federation, the Republic of India, the Republic of Korea, and Japan, has been working to develop a 500MW fusion reactor based on an updated tokamak design.10

I remain somewhat skeptical about ITER, if only because, with a few exceptions (e.g. the Manhattan Project and the Apollo program), large, monolithic, preplanned research projects often deliver less than satisfactory outcomes. Smaller, more agile “bottom-up” approaches such as those used at DARPA tend to perform better because they offer more (and earlier) points of failure. There is also a significant amount of path dependence in such large projects, because decisions made early in the life of the project close off other avenues of development.

There have also been some (though not many) commercial efforts in nuclear fusion, including at least one startup and a program within the R&D arm of a major defense and advanced technology company.

Yet while I am skeptical about both ITER and the private-sector fusion initiatives, I remain convinced of the long-term potential of nuclear fusion. It will be a massive disruption to humanity’s energy calculus, with follow-on effects that touch almost every other sphere of human activity. As such, it will undoubtedly continue to attract some of humanity’s finest minds.

Like Vinod Khosla, I believe that novel energy storage technologies will be a disruptive force in the energy sector, principally because they offer a means of transforming—to some extent—intermittent energy sources like solar, wind, and tidal power into dispatchable energy sources by storing excess energy for later use.

I am, however, skeptical that any single energy storage technology will prove to be a “magic bullet”. Rather, for this aspect of technology, I find myself preferring to be agnostic: explore all options and understand what trade-offs each option requires and where it might best be used within an energy grid that draws power from diverse sources and spans varied geography. For example, conventional pumped hydro energy storage, which has the longest history and is the most widely used option,11 is constrained to areas that have sufficiently large reservoir sites at higher elevations than the power plant. Molten salt batteries such as the sodium-sulfur (Na-S) battery being developed in Japan and the United States, by contrast, are location-agnostic and can be installed near power plants to store excess electricity for subsequent use, and thus may have wider appeal provided that costs and safety concerns can be adequately addressed. Each storage option has its own appropriate scope, and preferring one over another will often have more to do with site-specific considerations than with the overwhelming superiority of one form over another.
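To illustrate the siting constraint on pumped hydro: the recoverable energy is just gravitational potential energy scaled by a round-trip efficiency, so both a large reservoir volume and a large height difference are needed. The figures in the sketch below are illustrative assumptions, not data from any particular facility.

```python
# Why pumped hydro is site-constrained: stored energy scales with both
# reservoir volume and the height difference (head).
RHO = 1000.0      # density of water, kg/m^3
G = 9.81          # gravitational acceleration, m/s^2

def pumped_hydro_storage_mwh(volume_m3, head_m, round_trip_efficiency=0.75):
    """Recoverable energy (MWh) from lifting volume_m3 of water by head_m."""
    joules = RHO * G * head_m * volume_m3 * round_trip_efficiency
    return joules / 3.6e9  # convert J to MWh

# A hypothetical 5,000,000 m^3 upper reservoir with 300 m of head stores on
# the order of a few thousand MWh -- hence the need for large, elevated sites.
print(pumped_hydro_storage_mwh(5_000_000, 300))
```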

One of the more interesting uses of high-temperature superconductors is in the field of energy transmission through the power grid, by means of superconducting power cables. High-temperature superconductors that can be cooled by liquid nitrogen rather than liquid helium were discovered in 1987, but because these superconductors are brittle ceramic compounds rather than metals, it has been challenging to use them for applications that require long, flexible wires. Only within the last ten years have scientists developed potentially commercially viable methods of fabricating such wires.

Electricity lost in transmission in most developed countries averages six to eight percent of total electricity generated.12 Aggregated across the developed and developing world, the cumulative energy lost to resistance in power lines and transformers is colossal. Moreover, to the extent that such energy is generated from carbon-emitting sources, a significant amount of greenhouse gases and air pollutants is emitted solely to compensate for this loss, with concomitant harm to human health and the environment.
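A rough sense of scale: combining the loss percentages quoted above with an assumed global annual generation figure of roughly the right order of magnitude for the mid-2010s (the exact number is an assumption, not a sourced statistic), the lost energy is equivalent to the annual output of a few hundred large power plants.

```python
# Rough scale of transmission and distribution losses, under stated assumptions.
global_generation_twh = 23_000   # assumed annual global generation, TWh (order of magnitude)
loss_fraction = 0.07             # midpoint of the 6-8% quoted above

lost_twh = global_generation_twh * loss_fraction
# For comparison, a 1 GW plant running at a 90% capacity factor produces:
per_plant_twh = 1.0 * 0.90 * 8760 / 1000   # ~7.9 TWh per year

print(f"~{lost_twh:.0f} TWh lost per year, i.e. the output of "
      f"~{lost_twh / per_plant_twh:.0f} large 1 GW plants")
```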

High-temperature superconducting cables, cooled by liquid nitrogen, could eliminate a significant proportion of that loss by eliminating resistance in the power lines and transformers. While some energy would be consumed in cooling the cables, it would likely be less than the energy currently lost to resistance in classical lines and transformers. Moreover, such superconducting cables—in the form of HVDC power cables—would allow energy to be generated in places further removed from the places where it is consumed. This benefits renewable (solar, wind, tidal, and geothermal) and nuclear energy sources because, first, the ideal renewable generation sites are often not near large population centers, and second, nuclear reactors—for public safety and public relations reasons—will often need to be sited far from large population centers. Furthermore, if space-based solar power generation becomes viable due to substantially lower launch costs and new fabrication techniques, ground-based receiver stations will likely need to be sited far from large population centers, and again superconducting cables will be needed for efficient power transmission.

I see superconducting power cables as an “enabler” that multiplies the impact of other technologies. It enables energy to be transmitted more efficiently across long distances and thus facilitates the transition to newer and cleaner energy sources. It increases the power transmission capacity that can be packed into a given amount of space and thus allows urban power grids to scale to the increased demands placed on them by urbanization.

Several test projects using high-temperature superconducting power cables have been initiated since 2001, including:

  • A 600m, 138kV AC cable operated by the Long Island Power Authority in Long Island, which commenced operations in 2008 (and is still operating), using cables from Nexans13
  • A 500m, 22.9kV AC cable operated by LS Cable & System and KEPCO in Incheon which commenced operations in 2011 and ceased operations in 2013
  • A 500m, 80kV DC cable operated by LS Cable & System and KEPCO in Jeju Island which commenced operations in 2014

While most of these test projects are at least partly funded by government research and development grants, there are, in principle, no reasons to doubt that superconducting power cables could eventually become commercially viable without government subsidies.

Superconducting transformers and generators

Tangentially, developing flexible wires for superconducting power cables has the potential to lead to more efficient electric generators and transformers using superconducting wire rather than classical copper wire.

Preliminary observations

One of the areas of physics that I find personally fascinating is high-temperature superconductivity. While there was a flurry of VC interest in superconductors in the late 1980s, it did not result in any meaningful successes.

Yet recent developments in superconductivity—including advances in fabricating and shaping cuprate superconductors into flexible wires—suggest that high-temperature and low-temperature superconductors have a number of possible technological applications that may prove quite promising, notably superconducting quantum computers and maglev space launch platforms.

The concept of a quantum computer has been around for more than three decades. In 1982, Richard P. Feynman developed a conceptual model of a quantum system that could be used for computation. In 1985, David Deutsch demonstrated mathematically that any physical process could in principle be modeled perfectly by a quantum computer. However, it was not until Peter Shor devised a quantum algorithm for integer factorization in 1994 that quantum computing became more than a subject of academic curiosity.

The reason for this is simple. A fully functional, sufficiently large quantum computer running Shor’s algorithm could be used to crack the public-key cryptographic algorithms used in banking and in the secure transmission of data across the internet. This realization transformed quantum computing into a topic with real national security and economic implications. Indeed, it has spawned a search for post-quantum cryptographic algorithms that are not vulnerable to currently known quantum algorithms.
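The heart of Shor’s algorithm is period-finding: given a random base a coprime to the modulus N, a quantum computer finds the order r of a modulo N exponentially faster than any known classical method, and a short classical post-processing step then extracts a factor. The toy sketch below brute-forces the period classically (feasible only for tiny N) purely to show why knowing r breaks the factoring problem on which RSA-style keys rest.

```python
# Toy illustration of the classical reduction used in Shor's algorithm.
# The quantum speed-up lies entirely in finding the period r; here we find
# it by brute force, which only works for tiny numbers.
from math import gcd

def order(a, n):
    """Smallest r > 0 with a**r == 1 (mod n) -- the step a quantum computer accelerates."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_postprocessing(n, a):
    if gcd(a, n) != 1:
        return gcd(a, n)                     # lucky guess already shares a factor
    r = order(a, n)
    if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
        return None                          # unlucky choice of a; try another
    return gcd(pow(a, r // 2) - 1, n)        # nontrivial factor of n

print(shor_classical_postprocessing(15, 7))  # prints 3, a factor of 15
```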

While there are several methods of constructing quantum computers, including ion traps, cavity quantum electrodynamics, and nuclear magnetic resonance, to date one of the most promising has been superconducting circuits. This is the approach taken by research groups at IBM and Google,14 primarily because such circuits exploit macroscopic rather than microscopic quantum effects: qubits built from Josephson junctions can be fabricated with established microfabrication techniques and scaled in complexity.

The full implications of a breakthrough in quantum computing are still largely unknown. A great number of challenges exist, the most important of which is the fragility of the quantum mechanical effects upon which such technology depends (i.e. the need to prevent quantum decoherence caused by interactions with the external world). Yet quantum computing holds significant promise in a range of fields of human endeavor, including searching very large datasets, simulating quantum physical systems, tackling complex optimization problems with numerous local minima, verifying that classical software is free of bugs, and likely a range of other applications that have not yet been conceived of.

Quantum computers from D-Wave Systems Inc.

The “quantum computer” developed by D-Wave Systems Inc. operates on the principle of “quantum annealing”, a method for finding the global minimum of a mathematical function, i.e. solving a combinatorial optimization problem with many local minima. It is not a general-purpose quantum computer capable of running any quantum algorithm. Moreover, independent researchers testing the D-Wave Two system found no clear evidence that it was any faster or better than a conventional computer running classical annealing algorithms at solving an Ising spin-glass problem (precisely the kind of problem a quantum annealer is designed to solve).
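For context, the classical baseline in such benchmarks is simulated annealing on the same Ising spin-glass objective. The sketch below is a toy version of that classical algorithm on a small random instance; it is illustrative only and is neither D-Wave’s hardware procedure nor the specific benchmark code used by the researchers.

```python
# Toy simulated annealing on a small random Ising spin glass.
import math
import random

def ising_energy(spins, couplings):
    """Energy of a spin configuration: E = -sum_{i<j} J_ij * s_i * s_j."""
    return -sum(J * spins[i] * spins[j] for (i, j), J in couplings.items())

def simulated_annealing(n_spins, couplings, steps=20_000, t_start=5.0, t_end=0.01):
    spins = [random.choice((-1, 1)) for _ in range(n_spins)]
    energy = ising_energy(spins, couplings)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling schedule
        i = random.randrange(n_spins)
        spins[i] *= -1                                      # propose a single spin flip
        new_energy = ising_energy(spins, couplings)
        delta = new_energy - energy
        if delta > 0 and random.random() > math.exp(-delta / t):
            spins[i] *= -1                                  # reject the uphill move
        else:
            energy = new_energy                             # accept the move
    return spins, energy

random.seed(0)
n = 16
couplings = {(i, j): random.choice((-1.0, 1.0))
             for i in range(n) for j in range(i + 1, n) if random.random() < 0.3}
print("lowest energy found:", simulated_annealing(n, couplings)[1])
```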

In the 1970s, Dr. Gerard K. O’Neill developed a prototype magnetic mass driver, which he considered a key technology for lunar mining. The concept of a maglev surface-to-space launch platform builds on this theoretical and practical foundation, as well as on the experience gained from experimental and commercial maglev train projects, and applies it to accelerating a payload to a significant fraction of escape velocity.

In principle, the maglev space launch platform can be thought of as a “cannon” firing a launch vehicle towards space.15 The chief advantage of such a system over current rockets is that a rocket is terribly inefficient: it must deliver enough thrust not only to accelerate the payload to escape velocity but also to accelerate its own stores of propellant. For many rocket systems, the proportion of the total mass that is propellant is between 85% and 95%.16 By contrast, a maglev-based space launch platform could have either de minimis propellant needs or, at most, just the amount needed for a second-stage booster to complete the acceleration to orbital velocity.
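The 85-95% figure falls out of the Tsiolkovsky rocket equation, delta-v = v_e * ln(m0/mf). The sketch below uses ballpark values for the delta-v needed to reach low Earth orbit and for the exhaust velocity of a kerosene/oxygen engine; both are typical textbook figures rather than the specifications of any particular launcher.

```python
# Propellant fraction implied by the Tsiolkovsky rocket equation.
from math import exp

def propellant_fraction(delta_v_m_s, exhaust_velocity_m_s):
    """Fraction of initial mass that must be propellant to achieve delta_v."""
    mass_ratio = exp(delta_v_m_s / exhaust_velocity_m_s)   # m0 / mf
    return 1.0 - 1.0 / mass_ratio

# ~9.4 km/s of delta-v to reach low Earth orbit (including gravity and drag
# losses), with a kerosene/LOX engine exhausting at roughly 3.3 km/s:
print(propellant_fraction(9_400, 3_300))   # ~0.94, i.e. ~94% propellant
```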

Two models of maglev space launch platforms have been proposed:

  • The Startram (in my opinion a rather inapt name that evokes images of a Disney ride rather than a workhorse space launch platform) in 2010
  • The Maglifter in 1994

The former posits a 100km launch tunnel that allows the payload to reach a velocity of 8km/s at approximately 6,000m above sea level and does not require additional conventional rocket propellant to reach escape velocity; the latter posits an 8km catapult that allows the payload to reach a velocity of 0.27km/s at approximately 4,500m above sea level, whereupon conventional rocket or scramjet propulsion would take over to continue accelerating the payload.
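A quick kinematics check, applying v² = 2ad to the figures above, shows how different the two proposals are in the acceleration they would impose on the payload. This is back-of-envelope arithmetic only, ignoring drag and any variation in acceleration along the track.

```python
# Constant acceleration needed to reach the stated exit velocity within the
# stated track length, from v^2 = 2 * a * d.
G0 = 9.81  # standard gravity, m/s^2

def required_acceleration(exit_velocity_m_s, track_length_m):
    a = exit_velocity_m_s ** 2 / (2 * track_length_m)
    return a, a / G0   # m/s^2 and multiples of g

print(required_acceleration(8_000, 100_000))   # Startram:  ~320 m/s^2, ~33 g
print(required_acceleration(270, 8_000))       # Maglifter: ~4.6 m/s^2, ~0.5 g
```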

Based on early—and likely too optimistic—cost projections, these platforms could offer launch costs of between $50 and $100 per kilogram, at least an order of magnitude below the approximately $1,100 per kilogram that Elon Musk testified to Congress in 2004 that SpaceX could achieve with its heavy-lift or super-heavy-lift rockets.

Reducing launch costs by an order of magnitude or more not only improves the economics of existing uses of space (e.g. for communications satellites, geopositioning, and satellite imaging); it opens up entirely new uses of space. At minimum, I would expect to see reduced launch costs stimulating developments in:

  • Space-based power generation: Historically, the development of space-based solar power generation has been hampered by several challenges, most significantly—for these purposes—the cost of launching components manufactured on Earth into orbit.17
  • Microgravity or zero-gravity precision fabrication: Microgravity offers significant potential for manufacturing high-value, low-mass, low-bulk materials. One interesting application, which ties in with the general theme of superconductors running through this entry, is the use of microgravity to grow large-grain high-temperature superconductors without chemical contamination from the substrate. Another area that holds great promise is the development of unique alloys that take advantage of the microgravity environment to reduce the structural flaws created by uneven mixing of the component metals.

The long-term consequences of such a development are likely to be analogous to the discovery of the New World by European explorers in the 15th century. Reducing launch costs has the potential to take humanity beyond the confines of Earth, making us an interplanetary society and setting us on the very long path to becoming an interstellar society.18

Mass drivers coupled with beamed propulsion

In Alastair Reynolds’ science fiction novel Blue Remembered Earth, he describes the use—in the mid-22nd century—of a maglev launch system called the “blowpipe” that uses ground-based free-electron lasers to continue accelerating the payload (by means of either ablative or pulsed plasma propulsion) to escape velocity. If developments in free-electron lasers continue to progress, such a system might well obviate the need for a rocket second stage to reach escape velocity.

Maglev space launch platforms are a long-term bet

Reusable rockets such as those being built by SpaceX, or spaceplanes powered by the SABRE engine from Reaction Engines, are likely to become the dominant space launch options in the short and medium term, given the formidable engineering challenges and financial costs of building a maglev space launch platform. Yet in the long term, I believe that humanity will need space launch options that do not rely on carrying significant quantities of propellant, in order to achieve a further order-of-magnitude cost reduction. It is for this reason that I remain interested in—and optimistic about—the long-term viability of maglev space launch platforms.

Room-temperature superconductors

As a speculative aside, one area of fundamental condensed matter physics research that remains interesting is the pursuit of superconductors with critical temperatures of approximately 0° Celsius (32°F), colloquially known as “room-temperature superconductors”. While no room-temperature superconductor has yet been found or created, superconductivity has repeatedly been discovered at temperatures previously thought impossible. Accordingly, future research in condensed matter physics might yet yield a room-temperature superconductor, which would be a breakthrough of tremendous significance.

Preliminary observations

Artificial general intelligence—a machine intelligence that is capable of performing any intellectual task that a human is capable of—has been the “holy grail” of artificial intelligence research since the 1950s. This holy grail has remained elusive for so long that it is almost a truism to say that artificial general intelligence is always fifteen to twenty-five years from becoming reality.

While I acknowledge that the creation of artificial general intelligence is likely to be a monumental undertaking, I believe there are good reasons to think that it is physically possible, for the same reason that David Deutsch has expressed so eloquently:

Despite this long record of failure, AGI must be possible. And that is because of a deep property of the laws of physics, namely the universality of computation. This entails that everything that the laws of physics require a physical object to do can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory.

We can understand this by considering that the human brain is a physical object subject to the laws of physics. It stands to reason, therefore, that, like other physical objects, it can be emulated on a computer with sufficient processing power, memory, and time. Even if there are quantum physical interactions within the brain that produce human consciousness, and I remain skeptical about that, in principle a properly designed quantum computer with the aforementioned processing power, memory, and time could emulate such interactions.

Furthermore, while artificial general intelligence would potentially be the most disruptive event in the history of human technological innovation, even the more modest goal of building machine intelligences capable of replicating components of human intelligence in limited fields like facial, object, and spatial recognition, locomotion, and natural language processing would have a significant impact on human-machine interaction. Examples of this more modest level of machine intelligence currently in development or deployed include:

  • DeepFace from Facebook has already demonstrated 97.35% accuracy in identifying whether two faces in a dataset belong to the same celebrity, only a few tenths of a percent less accurate than a human (~98%).
  • The Deep Q-network developed at Google DeepMind has demonstrated the ability to learn the rules of 49 classic Atari 2600 games and the behaviors necessary to maximize scores, all with minimal prior input (just the game scores and visual images), performing nearly as well as a professional human gamer (a sketch of the underlying learning rule follows this list).
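The rule underneath the Deep Q-network is the Q-learning update, in which the estimated value of taking an action in a state is nudged toward the observed reward plus the discounted value of the best next action; DeepMind’s contribution was to approximate that value function with a convolutional network fed raw pixels. The sketch below shows the same update in a tabular toy setting, with a made-up two-state environment standing in for an Atari game.

```python
# Tabular Q-learning toy: the same update rule DQN applies with a neural network.
import random

GAMMA, ALPHA, EPSILON = 0.99, 0.1, 0.1
ACTIONS = [0, 1]
Q = {}  # (state, action) -> estimated value

def q(state, action):
    return Q.get((state, action), 0.0)

def step(state, action):
    """Hypothetical two-state environment: action 1 taken in state 'B' pays off."""
    reward = 1.0 if (state == "B" and action == 1) else 0.0
    next_state = "B" if action == 1 else "A"
    return reward, next_state

state = "A"
for _ in range(5_000):
    # Epsilon-greedy exploration, then the Q-learning update toward r + gamma * max Q(s', .)
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q(state, a))
    reward, next_state = step(state, action)
    target = reward + GAMMA * max(q(next_state, a) for a in ACTIONS)
    Q[(state, action)] = q(state, action) + ALPHA * (target - q(state, action))
    state = next_state

print(sorted(Q.items()))  # action 1 ends up valued higher in both states
```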

A brief digression on the development of artificial general intelligence

I suspect that artificial general intelligence will not emerge from any top-down programmatic approach, though beyond that I am agnostic as to which approach to developing it is likely to succeed. Moreover, I suspect that an artificial general intelligence, evolving within a machine environment and subject to very different selection pressures than biological humans, will develop into something very alien to us, and that very little in our imagination or past experience will prepare us to confront it.

The dangers of artificial intelligence

A number of very prominent thinkers, including Professor Stephen Hawking and Elon Musk, have expressed concerns about the potential existential risk of developing artificial general intelligence.

However, I am skeptical that regulatory oversight of artificial intelligence research, or even of artificial intelligence generally, will be effective in counteracting the existential risk posed by artificial general intelligence. Setting aside deliberate human circumvention of such oversight, I wonder whether there would be any way to address the possibility that the necessary breakthroughs may come from unexpected “accidents” rather than from deliberately designed scientific research programs.