Energy technologies and the Silicon Valley mindset

Horace Dediu identified the twin personalities of Google – Google A solves problems for humanity and Google B solves problems for advertisers. Google A is known for its moonshot programs and optimism; Google B pays the bills. So it was with interest that I read Ross Koningstein and David Fork's account of the termination of Google's RE<C program in 2011. Google's aspirational goal was to produce a gigawatt of renewable power more cheaply than a coal-fired plant could, and to do it in 'years', not decades.

Although Google had the best of intentions, the RE<C program represented the underlying problem of applying a Silicon Valley mindset to the energy and climate problem. Carey King summed it up neatly, noting that Google brings a mindset that is “used to solving some technological problem quickly, selling the company or idea to a larger company, and then moving on to the next great app.” To be sure, technology is going to help, but the dilemma is that energy and climate are not fundamentally ‘technological problems to be solved’. 

Why have renewables, particularly solar PV, become so connected with the Silicon Valley mindset? As a starting point, both share the element silicon and have parallel developments. 

Connecting ICT with solar PV
|                 | ICT                                   | Solar PV                                       |
|-----------------|---------------------------------------|------------------------------------------------|
| Silicon         | Silicon transistors                   | Crystalline silicon PV wafers                  |
| Miniaturization | Greater transistor density            | Thinner silicon wafers                         |
| Performance     | Faster clock speed                    | Higher efficiency                              |
| Applications    | Online media, shopping, banking, Uber | Smart grids, vehicle-to-grid integration (V2G) |
| Moore's law     | Doubling of performance every 2 years | Declining cost of solar systems                |

But the question is – how close is the analogy really? In a recent paper, I explored the role of ICT, energy and resource decoupling, and concluded that –

The deepening of the service economy towards the Infotronics phase should be seen partly as a consequence of sufficient energy supply and productive primary and secondary sectors. ICT is enabling productivity gains and new business models, but does not significantly weaken the demand for energy services, and therefore does not enable strong decoupling.

In other words, it has been the high productivity of the primary and energy sectors that has enabled the advanced economies to progress towards an ICT-based service economy – not the other way around. ICT has not meaningfully diminished energy or material consumption when measured at a global scale. Energy and material use is a function of human wants, needs and income – this includes buildings, transport, food, health care and so on. The relative composition of economies and ICT does not fundamentally alter the demand for these services.


The relevance of this is that ICT operates in the information paradigm, which is governed by a different set of principles. Change a few lines of code and you can immediately improve an app. But getting a race car around a track a couple of seconds quicker might require millions of dollars of investment. Energy supply technologies are governed by familiar physical laws. No amount of ICT can improve the performance of a solar panel at night.

A Framework for Incorporating EROI into Electrical Storage

The fundamental problem with a transition to renewable energy is that modern society has been structured around demand-based power flows. Any quantity of power is available at any time – the only limit is the circuit breaker in your mains connection. But the major scalable and affordable renewable power sources are wind and solar PV, both of which are intermittent. We could add biomass, but the degree to which biomass and biofuels can be scaled is limited and anyway, their use is contested. Until now, intermittency has been manageable because the variability generated by the modest proportion of RE is readily accommodated with the legacy infrastructure. Regions with a high penetration of VRE, including Denmark and South Australia, have access to virtual batteries in the form of interconnectors to larger grids. The question is – how do we deal with intermittency as legacy infrastructure is retired and wind and solar have to take on a greater role?

The solution is of course storage, but what sort of storage, how much, and what are the biophysical limits of storage? EROI is really about exploring the biophysical limits of storage rather than business models and markets. It may be economic to install a Tesla Powerwall based on feed-in and retail tariffs, but tariff-induced economics may not reflect the value of storage at a societal level.

In recent years, there have been important contributions to applying EROI to storage, however, there remains uncertainty as to how to apply these metrics to practical systems to derive useful or predictive information. I propose a methodology that assesses the EROI of the variable renewable energy and storage as a system, relative to the quantity of conventional generation capacity that is displaced.
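A minimal sketch of this accounting, treating the VRE plant and its storage as one system and relating their combined embodied energy to the conventional capacity displaced. The function names and all figures below are illustrative placeholders, not values or code from the paper.

```python
# Sketch of the system-level accounting: VRE + storage assessed together,
# relative to the conventional capacity they displace. All numbers invented.

def system_eroi(e_delivered, e_vre, e_storage):
    """Lifetime energy delivered divided by total embodied energy."""
    return e_delivered / (e_vre + e_storage)

def embodied_energy_per_gw(e_vre, e_storage, gw_displaced):
    """Embodied energy of the VRE + storage additions per gigawatt of
    conventional generation capacity they allow to be retired."""
    return (e_vre + e_storage) / gw_displaced

# Hypothetical system: 100 GWh/yr delivered over a 30-year life,
# 300 GWh embodied in the VRE plant, 150 GWh in storage, displacing 2 GW.
print(round(system_eroi(100 * 30, 300, 150), 1))   # 6.7
print(embodied_energy_per_gw(300, 150, 2))         # 225.0 GWh per GW
```

The point of the second function is that the same VRE and storage additions look progressively worse as each further gigawatt of displacement demands more surplus capacity and storage.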

A justification for focusing on the substitution of capacity is the German Energiewende. Between the starting point of the EEG in 2003 and 2014, total installed power generation capacity grew by 51%, although total annual generation was virtually unchanged. The emission intensity of electricity declined from 610 to 559 grams CO2/kWh over the period. Unlike historical energy transitions, such as wood to coal or coal to oil, we simply haven't seen the substitution of legacy infrastructure or the accompanying productivity gains.

[Graph: German greenhouse gas emissions]
Source: Federal Ministry for the Environment, Nature Conservation, Building and Nuclear Safety, 2015, Facts, Trends and Incentives for German Climate Policy


In a new paper in BioPhysical Economics and Resource Quality I explore these issues, with the aim of introducing a framework for further exploration. The most important outcome is the shape and behaviour of the embodied energy and marginal embodied energy curves. The first units of storage and VRE are the least energetically expensive. Using a simulation of the Texas ERCOT grid, I find that it is 4 to 41 times more energetically expensive to displace a gigawatt of generation capacity at near-100% RE than at low RE penetration. Geographic and technology diversity improve these numbers. Unlike conventional generation, which has access to essentially unlimited ‘stored sunlight’ or nucleosynthesis in the form of fuels, VRE is handicapped by the energetic demands of surplus VRE and storage.
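The distinction between the average and marginal embodied energy curves can be illustrated with a toy cumulative curve. The convex function below is invented for illustration; only the qualitative shape (first units cheapest, last units dearest) reflects the behaviour described above.

```python
# Toy cumulative embodied-energy curve for VRE + storage as the fraction of
# displaced conventional capacity rises. The functional form is hypothetical.
import numpy as np

displaced = np.linspace(0.1, 1.0, 10)              # fraction of capacity displaced
cumulative = 100 * (displaced + 4 * displaced**4)  # hypothetical embodied energy
average = cumulative / displaced                   # embodied energy per unit displaced
marginal = np.gradient(cumulative, displaced)      # cost of the next increment

# The last increment of displacement is an order of magnitude more
# energetically expensive than the first in this toy example:
print(round(marginal[-1] / marginal[0], 1))
```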




Leif Wenar – Blood Oil

From 1807, Britain worked to suppress the Atlantic slave trade until its eventual success in 1867. In the late eighteenth century nearly everyone of influence or power in Britain had a direct or indirect interest in the slave trade. In 1805–1806 the value of British West Indian sugar production reliant on slaves equalled about 4% of Britain’s national income. Kaufmann and Pape suggest that Britain voluntarily forwent 4% of national income, essentially for 60 years. By modern standards, the suggestion of embarking on a moral campaign and being willing to forgo such a magnitude of GDP seems almost unthinkable. Yet this is precisely the sort of brave thinking that Leif Wenar brings to his book Blood Oil: Tyrants, Violence, and the Rules that Run the World. Wenar’s book is a thought-provoking exploration of the morality of oil supply. He introduces the expression ‘Might Makes Right’, meaning that whoever possesses the oil gets to profit from it. Wenar draws the analogy of a criminal gang stealing a fuel delivery truck, driving it down the road, and selling the fuel. Of course this seems absurd. But Wenar claims that this is essentially what is happening in the realpolitik of oil-producing countries – we are all responsible for buying stolen goods and indirectly supporting dictatorships – think Putin, and Wahhabism in Saudi Arabia. Wenar is not anti-oil per se, but advocates a form of Clean Trade in oil as a moral solution.

Wenar makes a compelling moral argument, but I think he vastly understates the value of oil in modern society. This is the crux of the problem and why the US embarked on two grand bargains in the 1970s after US oil supply peaked – the US sought to control oil supply by undermining oil-producing democracies with oil-to-arms deals; and entered into private banking agreements to ensure that petrodollars flowed back to the US and regions that ensured US influence.


The thing I find interesting is that many environmentalists advocate the decommissioning of dams, abandonment of nuclear, coal divestment, bans on gas fracking, but nobody willingly ‘gives up’ oil. If humanity could fix its global energy problems by forgoing 4% of national income, I suspect a sizable minority would be willing to participate. But it’s not that simple. Oil is the master energy source. Such is its importance that it has a role as a quasi-monetary commodity. Economic prosperity is unfortunately going to remain tied to oil for the foreseeable future. Are electric vehicles the answer?

The Maximum Power Principle

Lotka and Odum

In 1922, Lotka proposed a ‘law of maximum energy’ for biological systems. He reasoned that what was most important to the survival of an organism was a large energetic output in the form of growth, reproduction, and maintenance. Organisms with a high output relative to their size should outcompete other species.

In 1955, Odum and Pinkerton built on Lotka’s work with the ‘maximum power principle’, stating that systems ‘perform at an optimum efficiency for maximum power output, which is always less than the maximum efficiency’.

The electrical Ohm’s Law provides a way of thinking about the relationship between maximum power and maximum efficiency. In electronic devices, the output resistance of the power source should match the input resistance of the load to maximise power throughput. However, the point of maximum power does not coincide with the point of maximum efficiency. In the case of loudspeakers, for example, the maximum sound output is achieved when the speakers match the impedance (AC resistance) of the amplifier. At this point, half the energy is dissipated in the speakers and half in the amplifier. Speakers with a much higher impedance will improve the system efficiency but with less volume, requiring a larger amplifier to reproduce an equivalent volume (negative feedback in amplifiers actually makes this a bit more complicated). The same applies to antennas, which require the antenna impedance to match the transmitter at the designated frequencies.
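The impedance-matching trade-off can be sketched numerically for a simple source-plus-load circuit. The 12 V source and 8-ohm output impedance below are arbitrary illustrative values.

```python
# Maximum power transfer: a source with internal resistance R_s delivering
# power to a load R_L. Load power peaks at R_L = R_s, where efficiency is
# only 50% -- matching the loudspeaker example above.
import numpy as np

V, R_s = 12.0, 8.0                     # e.g. an 8-ohm amplifier output stage
R_L = np.linspace(0.5, 64, 1000)       # candidate load (speaker) impedances

power = V**2 * R_L / (R_s + R_L)**2    # power dissipated in the load
efficiency = R_L / (R_s + R_L)         # fraction of source power reaching the load

best = R_L[np.argmax(power)]
print(round(best, 1))                  # 8.0 -- match the source impedance
print(round(efficiency[np.argmax(power)], 2))  # 0.5 -- half the energy is lost
```

Pushing R_L well above R_s raises the efficiency towards 100%, but the power delivered falls away, which is exactly the maximum power versus maximum efficiency tension Odum and Pinkerton describe.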


In the biological realm, Hall recounts the relationship between a tree’s leaf area index (LAI) and energy capture. The highest efficiency is achieved with a relatively low LAI, since the topmost leaves capture the most sunlight and each leaf is energetically expensive to maintain. But the usefulness of this high efficiency is offset by the limited leaf area and therefore limited total energy capture; an efficient plant would be short and outcompeted in a forest. Conversely, there is a limit beyond which additional leaf area contributes little further to energy capture.
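The diminishing returns can be roughly sketched with the standard Beer–Lambert canopy light-extinction model. This is an assumed textbook model with a typical extinction coefficient, not figures from Hall.

```python
# Canopy light capture vs leaf area index (LAI), using the Beer-Lambert
# extinction model: capture = 1 - exp(-k * LAI). The coefficient k = 0.5
# is a common textbook value, chosen purely for illustration.
import math

k = 0.5
for lai in [1, 2, 4, 6, 8]:
    capture = 1 - math.exp(-k * lai)   # fraction of incoming light intercepted
    per_leaf = capture / lai           # efficiency per unit of leaf area
    print(lai, round(capture, 2), round(per_leaf, 2))
```

Total capture (the "power" term) keeps rising with LAI while per-leaf efficiency falls, so the competitive tree carries far more leaf area than the most efficient one would.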

Perhaps the most sobering outcome of the Maximum Power Principle is thinking about the role of humanity, and what this means for human appropriation of net primary production of biomass for liquid and other fuels. William Rees cites Lotka’s maximum power principle in posing whether humans are unsustainable by nature, noting that –

by virtue of cumulative knowledge and technology, homo sapiens has become, directly or indirectly, the dominant macro-consumer in all major terrestrial and accessible marine ecosystems on the planet.

Role of energy efficiency and maximising energy throughput

The operation of electrical generators provides an example of the economic trade-off between maximising power and maximising efficiency; the revenue of an electrical generator depends on energy throughput, but the efficiency defines the fuel cost per unit of electricity. An efficient plant will increase electricity output for a given quantity of fuel, but beyond the optimal power/efficiency point, the additional gains in output do not justify the additional cost of improved efficiency. These trade-offs are routine in engineering practice.
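A stylised version of this trade-off, with invented prices and an assumed capital premium for higher efficiency (none of these figures come from real plant data):

```python
# Stylised generator trade-off: higher thermal efficiency cuts fuel cost per
# MWh sold but carries an assumed, steeply rising capital premium.
import numpy as np

eff = np.linspace(0.30, 0.50, 201)     # candidate thermal efficiencies
price = 60.0                           # $ revenue per MWh of electricity
fuel = 10.0                            # $ per MWh of fuel input
capex = 2000 * (eff - 0.30)**2         # $/MWh premium for extra efficiency

margin = price - fuel / eff - capex    # operating margin per MWh
best_eff = eff[np.argmax(margin)]
print(round(best_eff, 2))              # the optimum sits well below 0.50
```

The economically optimal efficiency lands below the technical maximum: beyond it, the capital premium outruns the fuel savings.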

The limits of energy efficiency as a goal in itself

The concept of ‘energy efficiency’ is really a human construct that is often useful for conceptualising the performance of energy systems. But energy efficiency targets, in themselves, can be counterproductive. The Australian building regulations typify this problem. The Building Code specifies deemed-to-satisfy building requirements for thermal performance, such as insulation R-value and double glazing. But the Code merely institutionalises energy efficiency as a goal in itself, rather than per-capita energy consumption. Larger homes generate a better thermal efficiency score than equivalent smaller homes because, geometrically, larger homes gain proportionally more interior space relative to exterior fabric area (better known as the square-cube law, described by Galileo). But rating tools do not penalise larger homes even though they obviously consume more energy, nor do they account for functional use or the number of occupants. This leads to the perverse outcome that the Code favours homes that consume more energy, but do so ‘more efficiently’.
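A quick sketch of the geometry, assuming a single-storey home with a square footprint and a fixed ceiling height (the dimensions are arbitrary):

```python
# Square-cube effect in home energy ratings: for a single-storey home with a
# square footprint of side L metres and fixed ceiling height h, the
# heat-losing envelope area per square metre of floor shrinks as L grows.
def envelope_per_floor_area(L, h=2.4):
    floor = L * L
    envelope = 2 * L * L + 4 * L * h   # roof + slab + four walls
    return envelope / floor            # simplifies to 2 + 4h/L

print(round(envelope_per_floor_area(8), 2))    # small home: 3.2
print(round(envelope_per_floor_area(16), 2))   # double the side length: 2.6
```

Since fabric heat loss scales with envelope area, the larger home scores better per square metre of floor even though its absolute consumption is higher.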

The EROI energy multiplier hypothesis

A common narrative in EROI discourse is the energy multiplier hypothesis. The reasoning is that it doesn’t matter what the EROI is, provided it’s greater than unity. From this reasoning, it follows that even very low EROIs (say 1.2:1) aren’t a problem, because you can multiply sources to give the required net energy. The problem with this hypothesis is that it doesn’t account for the difference between consumption and investment. The multiplier hypothesis implicitly assumes that we reinvest 100% of our production. If that were true, then very low EROIs might be workable in a subsistence economy.

But we don’t reinvest 100% of production. We eat, commute, consume health care and education, go to the movies, and so on. As a modern society, we choose to reinvest 20 or so percent of our production. Energy industries exist to support society, not themselves. So if we take the 20% that is surplus energy (for an EROI of 1.2:1) and multiply it by the 20% that is available for reinvestment, we get only 4% that is available to ‘grow’ the energy source.
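The arithmetic, using the round numbers above:

```python
# Multiplier arithmetic with the round numbers from the text: an EROI of
# ~1.2:1 leaves ~20% as surplus energy, society reinvests ~20% of its
# production, so only ~4% is available to 'grow' the energy source.
surplus_share = 0.2        # net energy share at an EROI of ~1.2:1
reinvestment_share = 0.2   # fraction of production society reinvests

growth_share = surplus_share * reinvestment_share
print(round(growth_share, 2))   # 0.04
```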

Consider the example of teachers. Let’s say that a university-employed teacher teaches only 0.2 new graduate teachers over their entire 30-year career (i.e. it takes 5 qualified teachers to create 1 graduate teacher over a 30-year period). This is an education ‘EROI’ of 1.2:1, and clearly this is ridiculous. For comparison, the overall University of Melbourne student/staff ratio is about 18:1. Assuming a 3-year average program, this equates to a crude educational ‘EROI’ of about 6:1 over 3 years. A university career of 30 years therefore gives a lifetime ‘EROI’ of about 60:1 (30/3 × 6). These are the sort of numbers we expect. We train teachers so that they can go out and teach students, not teachers’ teachers. Even with this high ratio, Australian higher education is facing enormous challenges.

Consider the example of a pre-industrial farm. Say it produces 1000 kg of potatoes per annum and the farmers consume 1000 kg. This is an annual EROI of 1:1. That would be called subsistence farming, and the family could not purchase anything from the local market. If the farm produced 2000 kg, it could eat 1000 kg, sell 1000 kg, and use the money to buy something useful – maybe tools to improve on-farm productivity, but not much more. The problem with 2:1 is that half the community would need to be farmers. It wasn’t until around 1840 that the UK reached an EROI of 5:1 for energy, marking a key milestone in the Industrial Revolution. But by modern living standards, 1840 Britain is hardly the sort of society most of us would aspire to. Modern Australia employs around 2% of the population to produce all its food, plus much more for export. How does it do this? With high-EROI fossil-fuelled farm equipment and modern agricultural technology.
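The same logic reduces to a one-line rule of thumb: if each person consumes one unit of food energy, the fraction of the population needed on the land is roughly the reciprocal of the food system's 'EROI'. This is a stylised simplification, not a rigorous model.

```python
# Rule of thumb: farmers needed ~ 1 / EROI of the food system,
# assuming each person consumes one unit of food energy per year.
for eroi in [1, 2, 5, 50]:
    farmers = 1 / eroi     # fraction of the population needed on the land
    print(eroi, f"{farmers:.0%}")
```

An EROI of 2:1 demands half the community farming, matching the farm example above, while something like 50:1 corresponds to modern Australia's ~2%.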

The point of all this is that most of the energy we produce is for consumption, not reinvestment. It’s hard to pin down a firm number, but modern society is likely to require an EROI of at least 10:1 to maintain living standards, and probably higher. For those of us pursuing EROI, there is still a risk of returning to a type of classical-economics, objectivist energy-as-value theory of production. The key is that energy is an enabler of economic activity but often not the primary driver. At high EROIs, energy systems are not EROI-constrained, and other factors will dominate the viability of the energy source.

Cost-constrained versus EROI-constrained electricity generation

One of the challenges for EROI researchers is explaining in simple terms why EROI matters. I’m planning on doing a series of short posts exploring the relevance of EROI.

This first post simply makes the observation that embodied energy (the reciprocal of EROI) and cost are not necessarily correlated. In electricity generation, the general rule is that ‘mega-projects’ requiring regulatory and environmental approval, oversight, and long lead times, with higher technical and other risks, are more likely to be cost-constrained. This is because a large cost share goes to low-energy-intensity service, administrative, and debt-servicing costs. Paying a consultant $200 an hour while working in an office is not very energy intensive. On the other hand, modular technologies, such as solar and wind projects, are much easier to gain approval for, less risky, and quickly built. Much of their cost is due to manufacturing and materials, which have a much higher energy intensity. This doesn’t include the cost of firming variable renewable energy. In countries such as Spain, Portugal and Austria, a significant part of the cost reduction is due to the streamlining of approvals and ‘soft’ costs. These have the effect of lowering the consumer cost without substantially reducing the embodied energy footprint.

Figure 1 illustrates this by dividing a graph of cost versus EROI into four quadrants. Nuclear and coal with carbon capture are net-energy positive but have a high CAPEX, and can be classified as ‘cost-constrained’. On the other hand, some forms of solar and biofuels may appear quite cheap, but may be ‘EROI-constrained’. Central receiver concentrated solar is expensive, both cost-wise and energetically. The graph is meant to be illustrative of Australian costs at present. If the EROI of an energy source is greater than 20:1, then we probably don’t have to worry about the EROI unless it is declining. A problem we face is that the set of available options that are neither EROI- nor cost-constrained is small. In Australia, even unsequestered coal-fired generation has become expensive.
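The four quadrants can be sketched as a simple classification. The 20:1 EROI threshold is the one discussed above; the cost threshold is an arbitrary placeholder, not a value from the figure.

```python
# Quadrant classification of generation options by cost and EROI.
# cost_limit is an invented placeholder; eroi_limit = 20 follows the text.
def classify(cost_per_mwh, eroi, cost_limit=100, eroi_limit=20):
    cost_ok = cost_per_mwh <= cost_limit
    eroi_ok = eroi >= eroi_limit
    if cost_ok and eroi_ok:
        return "unconstrained"
    if eroi_ok:
        return "cost-constrained"            # e.g. nuclear, coal with CCS
    if cost_ok:
        return "EROI-constrained"            # e.g. some solar, biofuels
    return "cost- and EROI-constrained"      # e.g. central receiver CSP

print(classify(150, 40))   # cost-constrained
print(classify(60, 5))     # EROI-constrained
```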


Figure 1 – Australian electricity generation costs and EROI


Energetic Implications of a Post-industrial Information Economy

The decoupling of energy and resources from economic growth is the Holy Grail of sustainable development. The observation that there seems to be a concentration of wealth in Australian localities associated with Information and Communication Technology (ICT) services, and a growing role for artificial intelligence services, would seem to strengthen the decoupling hypothesis. At face value, we seem to be less dependent on the high-energy intensity primary and secondary sectors — agriculture, mining, manufacturing and transport.

But what is the evidence for decoupling? In a new paper in BioPhysical Economics and Resource Quality I explore some of the linkages between ICT and energy consumption.

Australian primary energy consumption by fuel, and real GDP, 1900–2014. Sources: ABS, Butlin, Dyster & Meredith, Office of the Chief Economist, Vamplew. The fall in coal post-2010 is due to a doubling of retail electricity prices.

Here’s a section from the paper discussing dematerialisation –

An early version of dematerialization was Buckminster Fuller’s concept of ‘Ephemeralization’ — doing more and more with less and less until eventually you can do everything with nothing. In a contemporary ICT-based version, Kurzweil hypothesised that computing power will eventually cross a critical boundary (the so-called singularity), after which dematerialised economic growth will accelerate sharply. Kurzweil argued that there is a rapidly increasing knowledge and information content in products and services, and that these are not constrained by material resources.

Using Fuller as a backdrop, Lee uses the concrete example of the introduction of Google Maps onto smartphones to argue that information technology is a ‘magic wand’ that ‘in one stroke, transformed millions of Android phones into sophisticated navigation devices’. In Lee’s conception, the smartphone is assumed to be a low energy footprint device that substitutes for a host of real-world products—at zero marginal cost, Google Maps is said to be substituting for paper maps and dedicated navigation devices.

But the reverse is true – Nokia, Google and Apple all have multi-billion dollar ‘real world’ investments in mapping hardware, software development and data. Furthermore, GPS piggybacks on the large sunk investment of the Navstar GPS satellite system. Google has bundled ‘free’ maps to improve the perceived value of Android, from which it reportedly made $31 billion in revenue and $22 billion in profit during the past seven years. Meanwhile, GPS devices are penetrating cameras and fitness devices, far exceeding the material and energy footprint of paper-based maps and atlases. Hence, far from dematerialising, the ‘magic’ of GPS-enabled devices carries a far-reaching energy and material footprint.