The EROI energy multiplier hypothesis

A common narrative in EROI discourse is the energy multiplier hypothesis. The reasoning is that it doesn’t matter what the EROI is, provided it’s greater than unity. From this reasoning, it follows that even very low EROIs (say 1.2:1) aren’t a problem, because you can multiply sources to deliver the required net energy. The problem with this hypothesis is that it doesn’t account for the difference between consumption and investment. The multiplier hypothesis implicitly assumes that we reinvest 100% of our production. If that were true, then very low EROIs might be workable in a subsistence economy.

But we don’t reinvest 100% of production. We eat, commute, consume health care and education, go to the movies, etc. As a modern society, we choose to reinvest around 20% of our production. Energy industries exist to support society, not themselves. So if we take the 20% that is surplus energy (for an EROI of 1.2:1) and multiply it by the 20% that is available for reinvestment, we get only 4% that is available to ‘grow’ the energy source.
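To make the arithmetic concrete, here’s a minimal sketch in Python. The function name and figures are mine, for illustration only, and it follows the convention above where the surplus for a 1.2:1 source is 20% (i.e. EROI − 1, relative to the energy invested):

```python
def reinvestable_share(eroi: float, reinvestment_share: float = 0.20) -> float:
    """Rough share of production available to 'grow' an energy source.

    surplus = EROI - 1 (relative to the energy invested), of which
    society makes only ~20% available for reinvestment.
    """
    surplus = eroi - 1.0
    return surplus * reinvestment_share

print(reinvestable_share(1.2))   # 0.04 -> only ~4% available to grow the source
print(reinvestable_share(10.0))  # 1.8  -> a high-EROI source is not growth-constrained
```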

Consider the example of teachers. Let’s say that a university-employed teacher only teaches 0.2 new graduate teachers over their entire 30-year career (i.e. it takes 5 qualified teachers to produce 1 graduate teacher over a 30-year period). This is an education ‘EROI’ of 1.2:1, and clearly this is ridiculous. For example, the overall University of Melbourne student/staff ratio is about 18:1. Assuming a 3-year average program, this equates to a crude education ‘EROI’ of about 6:1 over 3 years. A university career of 30 years therefore gives a lifetime ‘EROI’ of about 60:1 (30/3 × 6). These are the sort of numbers we expect: we train teachers so that they can go out and teach students, not teachers’ teachers. Even with this high ratio, Australian higher education is facing enormous challenges.
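As a worked equation, the crude arithmetic above is:

$$
\text{lifetime EROI} \approx \underbrace{\tfrac{18}{3}}_{\text{graduates per staff per 3-year program}} \times \underbrace{\tfrac{30}{3}}_{\text{programs per 30-year career}} = 6 \times 10 = 60
$$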

Consider the example of a pre-industrial farm. Say it produces 1000 kg of potatoes per annum and the farmers consume 1000 kg. This is an annual EROI of 1:1. That would be called subsistence farming, and the family could not purchase anything from the local market. If the farm produced 2000 kg, it could eat 1000 kg, sell 1000 kg, and use the money to buy something useful – maybe tools to improve on-farm productivity, but not much more. The problem with 2:1 is that each farming family can feed only one other family, so half the community would need to be farmers. It wasn’t until around 1840 that the UK reached an EROI of 5:1 for energy, marking a key milestone in the Industrial Revolution. But by modern living standards, 1840 Britain is hardly the sort of society most of us would aspire to. Modern Australia employs around 2% of the population to produce all of its food, plus much more for export. How does it do this? With high-EROI fossil-fuelled farm equipment and modern agricultural technology.

The point of all this is that most of the energy we produce is for consumption, not for reinvestment. It’s hard to pin down a firm number, but modern society is likely to require an EROI of at least 10:1 to maintain living standards, and probably higher. For those of us pursuing EROI, there is still a risk of slipping back into a classical-economics, objectivist energy-as-value theory of production. The key is that energy is an enabler of economic activity, but often not the primary driver. At high EROI, energy systems are not EROI-constrained and other factors will dominate the viability of the energy source.

Cost-constrained versus EROI-constrained electricity generation

One of the challenges for EROI researchers is explaining in simple terms why EROI matters. I’m planning on doing a series of short posts exploring the relevance of EROI.

This first post simply makes the observation that embodied energy (the reciprocal of EROI) and cost are not necessarily correlated. In electricity generation, the general rule is that ‘mega-projects’ – those requiring regulatory and environmental approval and oversight, with long lead times and higher technical and other risks – are more likely to be cost-constrained. This is because a large share of their cost goes to low-energy-intensity service, administrative and debt-servicing costs. Paying a consultant $200 an hour to work in an office is not very energy intensive. On the other hand, modular technologies, such as solar and wind projects, are much easier to get approved, less risky, and quick to build. Much of their cost is due to manufacturing and materials, which have a much higher energy intensity. This doesn’t include the cost of firming variable renewable energy. In countries such as Spain, Portugal and Austria, a significant part of the cost reduction is due to the streamlining of approvals and ‘soft’ costs. These have the effect of lowering the consumer cost without substantially reducing the embodied energy footprint.
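As a minimal sketch of why this matters, consider a toy hybrid (cost-based) embodied-energy calculation. The cost shares and energy intensities below are illustrative assumptions, not sourced estimates:

```python
# Toy hybrid embodied-energy calculation. All cost shares and energy
# intensities (MJ per dollar) are illustrative assumptions only.
def embodied_energy_mj(costs: dict, intensity_mj_per_dollar: dict) -> float:
    """Embodied energy as the sum of cost component x energy intensity."""
    return sum(c * intensity_mj_per_dollar[k] for k, c in costs.items())

intensity = {"materials": 8.0, "services": 1.0, "finance": 0.5}  # MJ/$

# 'Mega-project': most cost in low-energy-intensity services and finance
mega = {"materials": 2e9, "services": 4e9, "finance": 4e9}
# Modular project: most cost in manufacturing and materials
modular = {"materials": 6e8, "services": 2e8, "finance": 2e8}

for name, costs in [("mega", mega), ("modular", modular)]:
    mj_per_dollar = embodied_energy_mj(costs, intensity) / sum(costs.values())
    print(name, round(mj_per_dollar, 1))  # mega ~2.2 MJ/$, modular ~5.1 MJ/$
```

Per dollar spent, the modular project embodies more than twice the energy of the mega-project, so the cheaper option is not necessarily the lower-energy one.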

Figure 1 illustrates this by dividing a graph of cost versus EROI into four quadrants. Nuclear and coal with carbon capture are net-energy positive but have a high CAPEX, and can be classified as ‘cost-constrained’. On the other hand, some forms of solar and biofuels may appear quite cheap, but may be ‘EROI-constrained’. Central receiver concentrated solar is expensive, both cost-wise and energetically. The graph is meant to be illustrative of Australian costs at present. If the EROI of an energy source is greater than 20:1, then we probably don’t have to worry about its EROI unless it is declining. A problem we face is that the set of available options that are neither EROI- nor cost-constrained is small. In Australia, even unsequestered coal-fired generation has become expensive.

 

Figure 1 – Australian electricity generation costs and EROI

 

Energetic Implications of a Post-industrial Information Economy

The decoupling of energy and resources from economic growth is the Holy Grail of sustainable development. The apparent concentration of wealth in Australian localities associated with Information and Communication Technology (ICT) services, and the growing role of artificial intelligence services, would seem to strengthen the decoupling hypothesis. At face value, we seem to be less dependent on the high-energy-intensity primary and secondary sectors — agriculture, mining, manufacturing and transport.

But what is the evidence for decoupling? In a new paper in BioPhysical Economics and Resource Quality I explore some of the linkages between ICT and energy consumption.

Figure – Australian primary energy consumption by fuel, and real GDP, 1900–2014. Sources: ABS; Butlin; Dyster & Meredith; Office of the Chief Economist; Vamplew. The fall in coal post-2010 is due to a doubling of retail electricity prices.

Here’s a section from the paper discussing dematerialisation –

An early version of dematerialization was Buckminster Fuller’s concept of ‘Ephemeralization’ — doing more and more with less and less until eventually you can do everything with nothing. In a contemporary ICT-based version, Kurzweil hypothesised that computing power will eventually cross a critical boundary (the so-called singularity), after which dematerialised economic growth will accelerate sharply. Kurzweil argued that there is a rapidly increasing knowledge and information content in products and services, and that these are not constrained by material resources.

Using Fuller as a backdrop, Lee uses the concrete example of the introduction of Google Maps onto smartphones to argue that information technology is a ‘magic wand’ that ‘in one stroke, transformed millions of Android phones into sophisticated navigation devices’. In Lee’s conception, the smartphone is assumed to be a low energy footprint device that substitutes for a host of real-world products—at zero marginal cost, Google Maps is said to be substituting for paper maps and dedicated navigation devices.

But the reverse is true – Nokia, Google and Apple all have multi-billion-dollar ‘real world’ investments in mapping hardware, software development and data. Furthermore, GPS piggybacks on the large sunk investment of the Navstar GPS satellite system. Google has bundled ‘free’ maps to improve the perceived value of Android, from which it reportedly made $31 billion in revenue and $22 billion in profit during the past seven years. Furthermore, GPS devices are penetrating cameras and fitness devices, with a material and energy footprint far exceeding that of paper-based maps and atlases. Hence, far from dematerialising, the ‘magic’ of GPS-enabled devices carries a far-reaching energy and material footprint.

What does the loss of Hazelwood mean for reliability?

Fairfax reported that “Victoria is facing an unprecedented 72 days of possible power supply shortfalls over the next two years following the shutdown of the Hazelwood plant next week.” This was picked up by other media, but a little more sense was injected by Giles Parkinson, Dylan McConnell and Tony Wood, via RenewEconomy and Radio National Breakfast. Since AEMO uses a probabilistic approach to reliability, I thought it would be helpful to graphically illustrate the meaning of reliability with probability density functions.


The figure is a stylised representation of the annual demand distribution (left plot) and the generator availability distribution, with and without Hazelwood (right plots). I have added dashed lines for the case where the VIC–NSW interconnector is included. Note that this is a stylised diagram and that generator availability changes throughout the year. It also doesn’t include semi-dispatchable and non-dispatchable power (i.e. wind and solar), since these have different contributions to reliability. Wind can normally be assumed to contribute 5 to 10% of rated capacity in Victoria. The plots are probability density functions (PDFs) and can be converted to a cumulative distribution function (CDF), more commonly known as a load duration curve. In this case, I used the CDF algorithm designed by Preston and converted it to a PDF.

The loss-of-load probability (LOLP) can be calculated for each hour based on the generator availability. It can be thought of as the area bounded by the intersection of the demand and supply curves. The LOLPs for the peak hour of each day can be added to give the loss-of-load expectation (LOLE) for a year. Most jurisdictions use LOLE as the standard reserve margin planning metric. The United States standard is a LOLE of 0.1 (‘one day in ten years’), meaning that an outage (of any duration) should occur on only one day in 10 years on average. A LOLE of 2.9 hours per year is used within the reliability standards of France, Ireland and Belgium. Australia applies an expected unserved energy (EUE) standard of 0.002% of annual consumption.
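A minimal numerical sketch of these two metrics, assuming we already have a discretised available-capacity distribution (built, for example, by the unit-by-unit convolution sketched after the next paragraph) and a series of daily peak demands. All numbers are illustrative:

```python
import numpy as np

def lolp(peak_demand_mw: float, cap_mw: np.ndarray, cap_prob: np.ndarray) -> float:
    """Loss-of-load probability for one hour: P(available capacity < demand)."""
    return cap_prob[cap_mw < peak_demand_mw].sum()

def lole_days(daily_peaks_mw: np.ndarray, cap_mw: np.ndarray, cap_prob: np.ndarray) -> float:
    """Loss-of-load expectation: sum of daily peak-hour LOLPs over a year."""
    return sum(lolp(d, cap_mw, cap_prob) for d in daily_peaks_mw)

# Toy availability distribution: 90% chance of 10 GW, 10% chance of 8 GW
cap_mw, cap_prob = np.array([8000.0, 10000.0]), np.array([0.1, 0.9])
peaks = np.full(365, 9000.0)  # a year of identical 9,000 MW daily peaks
print(lole_days(peaks, cap_mw, cap_prob))  # 36.5 days/yr -- far above the 0.1 standard
```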

I have assumed a forced outage rate (FOR) of 5% to calculate the probability density functions (PDFs), assuming that no units are on scheduled maintenance. I have included all the generators in the table below. AEMO has precise data on forced outages, but as far as I know this information is not generally available. The areas bounded by the curves should be seen as stylised and not precise.
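For reference, an availability PDF of this kind can be built by convolving each unit’s two-state (on/off) outage distribution – a standard capacity outage probability table. A sketch, using the flat 5% FOR and a handful of units from the appendix table:

```python
import numpy as np

FOR = 0.05                                # assumed forced outage rate, all units
units_mw = [200] * 8 + [560] * 3 + [500]  # e.g. Hazelwood plus Loy Yang A only

# Discretise capacity in 1 MW steps and convolve unit by unit.
pdf = np.array([1.0])                     # before adding units: P(0 MW) = 1
for mw in units_mw:
    unit = np.zeros(mw + 1)
    unit[0] = FOR                         # unit forced out
    unit[mw] = 1.0 - FOR                  # unit at full capacity
    pdf = np.convolve(pdf, unit)

cap_mw = np.arange(len(pdf))              # MW grid for the distribution
print(cap_mw[pdf.argmax()])               # mode: all 3,780 MW available (p ~ 0.54)
```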

The probability of unserved energy is determined by the intersection of the right tail of the demand distribution with the left tail of the availability distribution. I have used a kernel density estimation (KDE) to draw the demand graph. The KDE is a non-parametric way to produce a smooth curve which can be extrapolated with a given confidence. Essentially, AEMO extrapolates the right tail of the demand function and compares this to the availability function. AEMO refers to this extrapolation as the “probability of exceedance” (POE). If demand is greater than the reserve capacity, a “reserve shortfall” is flagged. This simply means that there is a non-zero probability of a demand shortfall. AEMO’s actual method is described here.
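A sketch of the KDE step with SciPy. The demand series here is synthetic; in practice it would be a year of hourly demand observations, and the quantile at the end is only a crude stand-in for AEMO’s POE extrapolation:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)
demand_mw = rng.gamma(shape=30.0, scale=180.0, size=8760)  # synthetic hourly demand

kde = gaussian_kde(demand_mw)                    # non-parametric smoothed PDF
grid = np.linspace(demand_mw.min(), 1.1 * demand_mw.max(), 500)
density = kde(grid)                              # smooth demand PDF on the grid

# Crude stand-in for the POE metric: the 10% probability-of-exceedance demand
poe10_mw = np.quantile(demand_mw, 0.90)
print(round(poe10_mw), "MW")
```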

From the graph, it is clear that the loss of Hazelwood has extinguished Victoria’s surplus capacity and raised the possibility of unserved energy. The headline “72 days” is highly misleading, but reserve margins have nonetheless tightened significantly.

Appendix – Generators included (number of units × unit capacity in MW; unnamed rows are additional unit groups at the preceding station)

Hazelwood         8 × 200
Loy Yang A        3 × 560
                  1 × 500
Loy Yang B        2 × 500
Mortlake          2 × 283
Newport           1 × 510
Somerton          4 × 40
Valley Power      6 × 50
Yallourn          2 × 380
                  2 × 360
Bogong            2 × 80
                  6 × 25
Dartmouth         1 × 185
Eildon            2 × 60
                  2 × 7.5
Hume              1 × 29
Laverton North    2 × 156
Murray 1          10 × 95
Murray 2          4 × 138
Jeeralang A       4 × 53
Jeeralang B       3 × 76
Bairnsdale        2 × 47
West Kiewa        4 × 15

EROI of the Australian electricity supply industry

I recently did a presentation on the EROI of the Australian electricity supply industry. The key aims were –

  • Calculate how much energy it takes to build, run and maintain the Australian electricity supply industry
  • Disaggregate the feedstock fuels (coal, gas, etc) from the operational energy of the system
  • Disaggregate generation, transmission, distribution & on-selling
  • Establish a net-energy baseline for Australia for future work

The analysis required calculating the direct and indirect energy of the system using various energy accounts (ABS, BREE, the Energy Efficiency Opportunities (EEO) program, NGER greenhouse reporting data, AEMO). The presentation, covering 2013-14, is here.
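In rough terms, the headline metric takes the form below. This is a sketch of the definition, with feedstock fuels excluded from the denominator:

$$
\mathrm{EROI}_{\text{PE-eq}} = \frac{E_{\text{el}} / \eta_{\text{th}}}{E_{\text{direct}} + E_{\text{indirect}}}
$$

where $E_{\text{el}}$ is delivered electricity, $\eta_{\text{th}}$ is an average thermal conversion efficiency (the primary-energy-equivalent scaling), $E_{\text{direct}}$ is the operational energy of the system, and $E_{\text{indirect}}$ is the embodied energy of the goods and services the system consumes.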

The main conclusions are –

  • The EROI is around 40:1 using primary-energy-equivalent scaling, so the system is not EROI-constrained
  • This is mostly due to the availability and proximity of coal in relation to the major demand centres
  • The high EROI would permit an ambitious abatement strategy based on lower-EROI generation, although there is a limit to how far this could proceed
  • Unlike oil supply, electricity systems are mostly cost-constrained rather than EROI-constrained, although a large-scale shift to renewables would most likely change this. Most of the recent cost increases are low-energy-intensity costs associated with transmission and distribution


Pumped hydro storage – an Australian overview

A pumped hydro primer

Nearly all electrical storage to date has been pumped hydro storage (PHS), which makes up 97%, or 142 GW, of global power capacity for electrical storage. The three leading PHS countries are Japan with 26 GW, China with 24 GW and the US with 22 GW. The Eurelectric region, comprising the 34 European countries of the Eurelectric synchronous regions, has a total installed capacity of 35 GW.

At a global scale, other utility scale storage includes thermal storage (e.g. concentrated solar thermal) at 1.7 GW, which assuming 6 hours storage equates to around 10 GWh. Other storage includes electro-mechanical (e.g. flywheel) at 1.4 GW, battery at 0.75 GW, and hydrogen at 0.003 GW (United States Department of Energy (DOE) 2016).

The storage capacity of most PHS facilities in the US, Japan and China ranges from 8 to 25 GWh per GW of installed capacity, corresponding to a typical daily arbitrage cycle with spare capacity. In Europe, the storage capacity of 2,500 GWh is dominated by Spain with 1,530 GWh. US storage capacity equates to around 545 GWh.

Australia’s PHS

Australia has three PHS plants – Wivenhoe, Shoalhaven and Tumut 3. Wivenhoe usually operates with about a 0.8 GWh pump cycle, Shoalhaven about 0.7 GWh, and Tumut 3 about 1.5 GWh. Tumut 3’s capacity is 1,800 MW (after being upgraded from 1,500 MW in 2011), but only 3 of the 6 generators have pumps. These plants total about 3 GWh of storage, though the actual capacity may be greater. Pumping power capacity is Tumut 3, 473 MW; Shoalhaven, 240 MW; and Wivenhoe, 550 MW. To get a sense of scale, the NEM supplies about 600 GWh of energy per day.

The role of PHS

PHS has historically operated in unison with coal and nuclear baseload. In the US, the deployment of PHS was relatively slow until the 1960s, but developed in parallel with nuclear during the 1960s and 70s, and subsequently slowed in the 1980s when nuclear deployment came to a standstill. Since the 1980s, PHS has been superseded by gas turbines (i.e. utilising stored sunlight), which have a low capital cost and quick build time, and present lower risk for investors.

Baseload PHS usually operates with a daily arbitrage cycle between overnight off-peak and daytime peak. The daily cycling maximises energy throughput for a given storage capacity and underpins the economic return for PHS. Since the deregulation of electricity markets, the use of pumped hydro has expanded to cover a range of additional services. PHS can also be used for load-following intermittent renewables, provided that continuous power is available for charging. In Australia, PHS charging simply utilises whatever generation is available – whether coal, gas, wind or solar. In practice, PHS is most likely relying on overnight coal baseload and, with increasing wind penetration, surplus wind.

Utilisation of Australia’s PHS

Interestingly, Australia’s PHS plants aren’t used that much. Only 118 GWh and 172 GWh were consumed in pumping by these plants in 2014 and 2015 respectively (I’ve uploaded my spreadsheet here). Total pumping capacity is about 1,391 MW, giving capacity factors of about 1.0% and 1.5% respectively. Given the sunk cost, I’m not sure why these plants aren’t used more, and whether price gaming may be part of the explanation. More likely, they simply require a much higher arbitrage spread than is often assumed. Traditionally, a low off-peak and high peak price supported PHS, but price volatility is also seen as essential with greater penetration of renewables. South Australia has a more volatile market, which improves the volatility economics for the potential seawater scheme on the Spencer Gulf, but may not provide the certainty of a regular arbitrage cycle. The problem with relying on volatility, of course, is that additional supply cannibalises its own economics.
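For reference, the pumping capacity factor is just energy pumped over energy at continuous full pumping load; for 2014:

$$
\mathrm{CF}_{2014} = \frac{118\ \text{GWh}}{1.391\ \text{GW} \times 8760\ \text{h}} \approx 1.0\%
$$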

The proposed Tantangara-Talbingo scheme

I contacted Peter Lang, who did an estimate at BraveNewClimate for a much larger Tantangara-Blowering scheme in 2010. The current proposal is for a similar but smaller scheme linking the Tantangara and Talbingo reservoirs. The topology is that Tantangara (1,230 metres above sea level) sits near the top of the hill, and Talbingo (550 metres) is upstream of Blowering (379 metres).

Figure – Snowy-Tumut development (2016)

Peter put together some rough costings for the proposed Snowy PHS –

Tantangara-Talbingo (TT) head is 686 m versus an average head of 850 m for Tantangara-Blowering (TB); the generating capacity of TT is stated to be 2 GW versus 8 GW for TB, but with only the three tunnels used for generating: 8 GW/3 × 80% = 2.1 GW. This implies the tunnel diameters and flow rates are the same in the two projects.

Tantangara-Talbingo tunnel length is 27 km versus 53 km for TB – i.e. about half the tunnel length. This should reduce the cost of the tunnels by about 40%, and the cost of the project by about 24% – that is, about $1.5B in 2010 A$. Therefore, based on my 2010 estimate for TB, the $2 billion for 2 GW for TT seems roughly reasonable.

But it does not fit with overseas experience – US costs for PHS are around $3 to 4 billion per GW. UK DECC (p. 57) gives a figure of GBP 3.4 billion per GW. That’s around A$5.5B per GW (using GBP 1 = AUD 1.6). Of course there are differences (no dams, no land reclamation; on the other hand, three tunnels but only one productive, and highly inflexible because of the tunnel length and the mass of water in the tunnels that has to be accelerated and decelerated).
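Restating the two arithmetic checks in Peter’s notes as a small sketch (all input figures are his, from the quote above; the 80% factor appears to be roughly the 686/850 head ratio):

```python
# Capacity check: TB's 8 GW scaled to three tunnels at 80%
tt_capacity_gw = 8 / 3 * 0.80
print(round(tt_capacity_gw, 1))   # ~2.1 GW, consistent with the stated 2 GW

# Cost check: UK DECC's GBP 3.4B per GW converted at GBP 1 = AUD 1.6
decc_aud_per_gw = 3.4e9 * 1.6
print(decc_aud_per_gw / 1e9)      # ~5.4, i.e. around A$5.5B per GW
```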

What does it all mean?

What does all this mean for the Snowy upgrade? More storage has got to be better as more intermittency is added, but is it economically viable? Why aren’t the existing PHS facilities being used more, and why would the proposed expansion fare better? Is the market structured for merchant storage? Is there too much emphasis on intermittent renewables rather than low-emission baseload or dispatchable renewables? What scale of PHS will be required at higher penetration of intermittent renewables?

As I see it, the bigger problem is that we simply don’t have markets designed to work with a changing generation mix and storage. Markets can work if given the right long-term signals and policy stability, but require technology agnosticism. Some progress might be on the horizon with the proposed AEMC rule change to reduce the settlement period from 30 minutes to 5. This will provide greater value for fast-ramping generation that can capture market transients, relative to OCGTs. But how do we value storage in an energy-only market?

These are interesting questions requiring resolution.

Thanks to Peter Lang for information and insights on Australia’s pumped hydro.

The social licence of coal

I’m (just) old enough to remember the Australian nuclear disarmament rallies (and associated opposition to nuclear power) of the 1970s, the fervent opposition to the Newport gas-fired power plant in Melbourne, and the Tasmanian dams protests of the 1980s, along with the logging and woodchipping protests, and so on. Just about every energy source attracts opposition. The interesting thing is that although there’s a broad understanding of the need to transition away from unabated coal, there doesn’t seem to be the same acute feeling against coal-fired electricity as in those historic conservation campaigns. Indeed, historically, coal was often advocated as an energy source that could complement renewables, provide energy security, and substitute for oil and gas.

A parallel narrative is the broad support for renewables. But much of the community is yet to appreciate the practical constraints of high-penetration renewable scenarios, and the inevitable synergy with gas in the absence of dispatchable renewables.

I would argue that this helps to explain at least part of the political stalemate in Australian climate policy – no consensus on CO2 pricing, but support for renewables (even if recently equivocal from the Government). Indeed, Australian electricity seems to be on a trajectory that will emulate the original aspirations of the 1980 forerunner to the German Energiewende – 50 to 55% coal and 45 to 50% renewable energy by 2030 (with a larger role for gas in Australia due to indigenous resources). The social licence of coal seems to be a key factor, and I’ve put together a few observations, in no particular order –

  1. As a pioneer nation, Australia’s economic roots lie in agriculture and mining. Australia readily exploited the power of steam. Local coal was an antidote to a reliance on imported fuels – oil and natural gas were not developed until the 1960s. Furthermore, affordable electricity was essential for incubating a manufacturing industry.
  2. Prior to climate change, people really weren’t that worried about coal. Even the Australian medical and academic communities seem to have been relatively unconcerned with studying the health impacts of coal-fired power (see two recent reports by the ATSE and BZE). Most health studies related to the mining of coal rather than its combustion.
  3. The Australian geographic distribution of power plants in low-population-density areas has mitigated the worst effects of pollution. Australian coal is low in sulphur, lessening the likelihood of acid rain.
  4. During the 1970s and 80s, despite pressing for more funding for ‘alternative energy’, environmental advocates treated coal as a relatively benign fuel. For example, in advocating a policy of environmental protection, Hugh Saddler (1981, pp. 119-120) argued that coal was a more economic and less risky option than nuclear. In arguing the case against Tasmanian hydro development, Peter Thompson, representing the Australian Conservation Foundation, noted that coal plants ‘pose relatively few air pollution problems if the operation is adequately planned, sited and built to the highest standards of quality’ (Thompson 1981, p. 125). Similarly, during the Franklin River campaign in 1981, Bob Brown stated that ‘a new coal fired power station is the manifestly best option built on Tasmanian coal fields.’
  5. A similar view was held in Germany and the US. During the 1970s, the German Government actively promoted the expansion of coal for electricity and combined heat and power (Guilmot et al. 1986, p. 20). Even the contemporary Energiewende was originally conceived around Germany’s substantial coal resources. The Energiewende emerged from a study by the German Öko-Institut in 1980 that grew out of concerns about oil security following the first oil crisis, and about the safety of nuclear energy (Krause et al. 1981; Joas et al. 2016; Maubach 2014; Morris & Jungjohann 2016). The study, titled ‘Energy turnaround, growth and prosperity without oil and uranium’, envisaged a German energy supply derived from 50 to 55% coal and 45 to 50% renewable energy by 2030.
  6. In 1977, pro-conservation US President Carter proposed an 80% increase in coal production for power generation and liquid fuels, arguing for ‘the expanded use of coal, supplemented by nuclear power and renewable resources, to fill the growing gap created by rising energy demand’ (Stobaugh & Yergin 1979, p. 80).
  7. From Lowy and other polling, the willingness to pay a higher cost for electricity is very limited – in 2011, 39% were prepared to pay no more and a further 32% were prepared to pay no more than $20 a month. The majority of Australians like the idea of an energy transition but aren’t willing to pay for it. Although the Murdoch Press and elements of the Lib-Nat Coalition are critical of the LRET, the community seems to support both the LRET and rooftop solar.
  8. The IEA projections for India illustrate the way in which a ‘techno-optimist’ narrative of growing renewables (or nuclear) can co-exist with the stark reality of growing coal-fired generation. According to the IEA ‘New Policies Scenario’, solar PV in India is projected to grow around 60-fold by 2040, wind around 7-fold, and nuclear around 6-fold. But despite coal’s share falling to 57% of generation, coal-fired generation is projected to nearly double in absolute terms because of the sheer scale of India’s demand growth. The reasons for India’s increasing demand for coal are simple – coal is cheap and easily shipped, requires no pretreatment, and most importantly, provides fit-for-purpose dispatchable generation. It does not require smart grids or storage to provide dispatchable power, nor the institutional and community support that nuclear requires.
Figure 2.22, IEA India Energy Outlook