Wednesday, March 31, 2010

API, EPA and Hydrofracking gas shale

There have been ongoing discussions around the country about the possibility of drinking water contamination as a result of hydrofracking natural gas wells, most particularly those that would be placed in the Marcellus shale, which in part underlies the sections of New York State that provide the watershed for New York City. To put this in context, it should be borne in mind that hydraulic fractures are generally relatively short and occur in the shale at depths of thousands of feet, while ground water is usually obtained from wells that are less than 500 ft deep.

I first mentioned this at the time of the House hearing on the topic and have returned to it intermittently over the past few months as the issue has been dragged more and more before the public. I have also covered the basic technology behind hydrofracking natural gas wells, including a link to a video that illustrates how hydrofracking improves production and makes the well productive. And there is a Primer on the subject (that includes the composition of a typical fracking fluid).

Well, a couple of weeks ago the EPA announced that it was going to conduct a study of the process. More precisely:
The U.S. Environmental Protection Agency (EPA) announced that it will conduct a comprehensive research study to investigate the potential adverse impact that hydraulic fracturing may have on water quality and public health. Natural gas plays a key role in our nation’s clean energy future and the process known as hydraulic fracturing is one way of accessing that vital resource. There are concerns that hydraulic fracturing may impact ground water and surface water quality in ways that threaten human health and the environment. To address these concerns and strengthen our clean energy future and in response to language inserted into the fiscal year 2010 Appropriations Act, EPA is re-allocating $1.9 million for this comprehensive, peer-reviewed study for FY10 and requesting funding for FY11 in the president’s budget proposal.
One wonders, given the input at the Congressional hearing from the state agencies (which have been doing most of the monitoring for the past several decades, and said that there wasn't a problem), who will carry out the research study, and who will be doing the peer review? But perhaps I am being a little cynical so early in the process.

With all that as background, last Thursday the American Petroleum Institute (API) held a phone conference for a number of those of us who blog on energy, so that we could put questions on hydrofracking to a group of industry experts.

API had previously noted that EPA had already carried out such a study in 2004, although that one dealt more specifically with hydrofracking coal seams to extract coal bed methane. That study had concluded:
Based on the information collected and reviewed, EPA has concluded that the injection of hydraulic fracturing fluids into CBM wells poses little or no threat to USDWs and does not justify additional study at this time.
It should be noted that coal seams are typically significantly shallower than a typical gas shale, generally by several thousand feet.

And since I haven’t defined an underground source of drinking water (USDW), let me do that by quoting that earlier EPA document:
A USDW is defined as an aquifer or a portion of an aquifer that:
A. 1. Supplies any public water system; or
2. Contains a sufficient quantity of groundwater to supply a public water system; and
i. currently supplies drinking water for human consumption; or
ii. contains fewer than 10,000 milligrams per liter (mg/L) total dissolved solids (TDS); and
B. Is not an exempted aquifer.

NOTE: Although aquifers with greater than 500 mg/L TDS are rarely used for drinking water supplies without treatment, the Agency believes that protecting waters with less than 10,000 mg/L TDS will ensure an adequate supply for present and future generations
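The nested "or/and" logic of that definition is a little hard to follow in prose. Written out as a small test function it is easier to see; this is a sketch of my own reading, with only the 10,000 mg/L threshold taken from the definition above:

```python
def is_usdw(supplies_public_system, could_supply_public_system,
            currently_supplies_drinking_water, tds_mg_per_l, exempted):
    """Rough reading of the EPA USDW definition quoted above."""
    # (A) supplies a public water system, OR could supply one AND either
    # currently provides drinking water or has TDS below 10,000 mg/L
    qualifies = supplies_public_system or (
        could_supply_public_system and
        (currently_supplies_drinking_water or tds_mg_per_l < 10_000)
    )
    # (B) and is not an exempted aquifer
    return qualifies and not exempted

# A brine aquifer at 35,000 mg/L TDS that serves no system would not qualify
print(is_usdw(False, True, False, 35_000, False))  # False
```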

API noted, both then and on the conference call, that hydrofracking is an integral part of much of the oil and gas industry, and has been for over sixty years, during which time more than a million wells have used the technology.

The transcript of our conversation is now available on the web and I am only going to summarize portions of it, since the full transcript runs to 21 pages. I’ll also add the odd comment of my own.

The experts fielded by API to talk with us were:
Sara Banaszak, Senior Economist
John Felmy, Chief Economist
Stephanie Meadows, Senior Policy Advisor
Erik Milito, the Group Director for Upstream/Industry Operations
Richard Ranger, Senior Policy Advisor
Andy Radford, Senior Policy Advisor

Jane Van Ryan of API acted as Moderator, and had invited me to join the others in the conference.

Gail Tverberg (representing TOD) opened the discussion by asking about the fate of all the fluids used in generating the hydrofrack, which can run from hundreds of thousands to millions of gallons. Stephanie Meadows answered that most of the water comes back out of the ground, though it may take weeks or months to recover most of the fluid (Richard Ranger noted that the fractures that are generated don't typically extend that far; the distance is usually measured in feet), and recovery rates can range from 30 to 70% of the fluid injected. The rest can slowly trickle out during production, but can remain within the producing formation until then.

(This was something that I had wanted clarified, and in the three questions I had submitted before the conference I asked about the risk of various shales being water sensitive. The concern is that if the water in the fracking fluid is in contact with the shale for a significant time, it can wet and weaken certain shales to the point that they soften and deform, which would prematurely close the fractures and lower the volume produced, both as a rate and as a total amount. You can inhibit the wetting by adding different polymers (as they do when dealing with, for example, the Gumbo shale in Texas, when they drill through it using a water-based mud), and if I remember correctly, the ones that they often use are also used to keep the froth in beer from collapsing.)

Jazz Shaw (The Moderate Voice) mentioned that there continue to be repeated claims of contamination. He cited Maurice Hinchey, a Congressman from New York, who claimed on a recent CNN program that there were multiple cases of groundwater contamination from hydraulic fracturing, and refused to back down even when the host pointed out that the claim could not be substantiated (which parallels the comment I noted above from the state agencies that currently monitor hydrofracking operations, when they testified before Congress). It was also pointed out that the Ground Water Protection Council had been unable to find any instances of this occurring.

Erik Milito pointed out that of the million-plus wells that have been hydrofracked, while there had been some surface spills (which were not defended), there was not one instance over the 60 years that the practice has been in existence where a hydrofrac in the formation had led to groundwater contamination. At present 90% of the natural gas wells that are drilled are hydrofracked. Nor is the technique only practiced in shale; one of my questions related to its use in Colorado, where I was told that:
Virtually every well drilled into the tight Codell sandstone of the Wattenberg field in the Weld County area, or into the Williams Fork sands of the Mesa Verde group in the Piceance Basin on the Western Slope, involves hydraulic fracturing for well completion. HF is also used in a substantial amount of the coalbed methane production in the San Juan Basin in the southwest part of the state.
In later discussion it was also pointed out that Colorado requires that the ingredients in the fracking fluid be listed, but not the specific amounts or recipes (in much the same way that Coke lists its ingredients but not the formula, so that no one can gain the commercial benefit of copying the recipe). Other states also follow that requirement.

Richard Ranger pointed out that, as a protective measure, drillers are increasingly working with state agencies to take water samples both before and after drilling and fracking the wells, to substantiate the claims of no impact. He also noted that the state agencies work closely together through groups such as the Interstate Oil and Gas Compact Commission to ensure that the wells that are drilled are properly monitored, and that nothing is done without following a detailed permitting process. Knowledge gained, for example in Texas, is quickly transferred to Pennsylvania.

When it came to the impact of any proposed regulation (a question asked by Rich Trzupek of Big Journalism), the economists on the panel noted that up to 60% of current natural gas production comes from hydrofracked wells, and that slowing or stopping that amount of natural gas would obviously have a significant impact. It was also noted that natural gas increasingly provides a fall-back reserve of power, should the wind not blow or the sun not shine, to support renewable energy supplies. (And in a subsequent follow-up, API has pointed to the jobs that could be gained by growing the natural gas industry to get natural gas from the Marcellus shale in Pennsylvania.)

Tim Hurst of Ecopolitology raised the question of the EPA project, and was assured that, at the appropriate time, API would have a response.

Gail asked about the source of the water, and in response Richard Ranger noted that while a typical 7 to 10,000 ft well might use 3 million gallons of water, this is the amount used by a typical golf course in a week, a 5-acre cornfield in a season, or a municipality of 8 million people (e.g. New York City) in 4 minutes. The amount of water required to generate a million Btu from natural gas is about 10% of that required to produce the same amount from coal, and about 0.1% of that required to get the same amount of energy from corn-based ethanol.
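Those comparisons are easy enough to sanity-check. A quick back-of-envelope calculation, assuming New York City consumes roughly a billion gallons a day (my round number, not one from the call), does land at about 4 minutes:

```python
# Rough check of the "4 minutes of New York City" comparison.
frack_job_gal = 3_000_000          # water for a typical 7-10,000 ft well (from the call)
nyc_gal_per_day = 1_000_000_000    # assumed NYC consumption, ~1 billion gal/day

minutes = frack_job_gal / (nyc_gal_per_day / (24 * 60))
print(f"One frack job = {minutes:.1f} minutes of NYC water use")  # ~4.3 minutes
```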

Geoff Styles of Energy Outlook asked about the use of diesel, but it was pointed out that while diesel is sometimes used as the basis for drilling fluids and muds, where water-based muds might create problems in reacting with the rock, it is not used in the hydrofracking process.

One possible risk of hydrofracking that was posited was that the fractures would intersect other wells drilled in the same location, but Andy Radford pointed out that fractures are grown under tight enough control, and to limited enough ranges, to ensure that this does not happen, and that when wells are spent they are sealed sufficiently well to ensure that there is no risk of subsequent leakage.

The conference went on for over an hour, so I would recommend that those interested in more detail review the entire transcript. It was, as I have tried to illustrate, quite informative.



Tuesday, March 30, 2010

Secretary Chu in Newsweek

My dependence on good internet service has been underlined by a week in which it has not been available. Unfortunately the motel we were staying at in Maine had a problem with its server, and we returned home to find a problem with our own internet connection. Thus a couple of the posts that I was completing will be a little delayed until I have that indispensable tool, a good connection. This one has been prepared largely without one, and posted with just enough access to find a couple of references.

In the meantime I have been perusing this week's NEWSWEEK on my Kindle, and noted that the Secretary of Energy had given a new interview. The first thing that he said was in response to a request to define President Obama's energy policy. Here is what he said:
We look at all the factors and we say, how can we get to the lowest possible level of carbon as quickly as possible and not only at the lowest cost but with the greatest possible economic opportunity for the U.S.?
That’s it!

The rest of the interview was not much more constructive (one of the benefits of the Kindle is that it counts words; in this case the article, including questions, ran to 782 words, the attempted length of this piece). There was no mention of peak oil, energy security, or prices in the article. In response to the criticism that the Stimulus package did not make enough investment in energy, the Secretary said that they would fund projects for up to three years maximum, and encouraged innovators to "swing for the fences." Whatever that means (in context)! I will admit to having helped put a couple or more proposals into the DOE hopper, though most came back rather quickly and negatively (one was successful). What struck me about the process, and the attitude evident in the Secretary's remarks, was the focus on short-term benefits and application. Most of the work is also oriented to larger group efforts, with much less focus on the smaller innovator to stimulate new ideas. If you can't claim a home run within the remaining life of this Administration, don't bother applying.

He appears to hope for a start to a smarter electric grid, to double renewable energy contributions by 2012, and to get the nuclear power plant construction industry restarted in this term. But he also recognizes that carbon capture and sequestration is at least 10 years away from deployment. His "blue sky" hopes are for cheap (below $2 per watt) photovoltaic systems (current costs he quoted as over $4 per watt), and he still looks to the generation of fuels such as gasoline directly from biomass. This is not the ethanol production that the industry and government are still heavily involved in, but rather looks back to the work he was supporting while at LBNL, using naturally occurring organisms to do the digestion and fuel generation.

But he returned to the need to put a price on carbon, and for a cap-and-trade bill, to make sure that the point of his focus would not be missed.

Sigh! He sounds as though the "scientist in a tower" description still fits him like a glove. There are considerable issues in regard to the changing energy supply of the planet that should be giving him pause in his charge against carbon. Increasing numbers of people are pointing to a coming crisis in oil supply. The British government, at the urging of folk such as the head of Virgin Atlantic, has decided that perhaps it is about time that it took its head out of the sand and took a hard look at the situation. Of course it is also taking a look at the reality of climate change predictions, though with the coming of a general election it is not clear whether either effort will amount to much.

I am increasingly struck by the perception that many of the folk who write about both climate change and energy supply do so with a very complacent attitude toward the continuing situation. The potential impact on climate from a severe eruption of the Laki suite of volcanoes in Iceland, for example, seems to be totally ignored. (A quick skim through some of the scientific papers suggests that the major eruption follows within a couple of years of the current eruption of the smaller volcano.) The trouble is that should it happen, running around in a panic for a couple of days is going to do nothing constructive in stopping folk from being killed.

Well we will have to see, in the relatively short term, whether that complacency is warranted. Being a Cassandra is unlikely to get more recognition this time around.


Saturday, March 27, 2010

Using heat to refine kerogen from oil shale

One of the problems with the oil (kerogen) in oil shale is that it is not mature enough (i.e. close enough to being an oil) to flow easily through the rock. In earlier parts of this particular theme I have written about mining the rock and then heating it in retorts as a way of transforming the kerogen and recovering it for use. I have also, somewhat tongue in cheek, discussed using nuclear weapons to heat the rock so that the transformation can take place without moving the rock, while breaking the rock at the same time, and the unlikely potential for burning some of the oil within the deposit to power the transformation of the rest. While that might work in a heavy oil sand, it is not likely to be practical for the finer-grained shales. But there are ways of adding somewhat less heat to the rock than using a nuclear bomb, and that will be the topic for today.

This is a continuation of the technical posts that I usually write on Sundays, but I am trying to catch up after the eye problem, and so will try to get the last one or two posts on the immediate topic of oil shale up within the next week, before moving on to a new set of subjects.

While I am largely going to bypass the use of nuclear power (apart from that of providing electrical power) in this piece, the potential use of nuclear power to heat penetrators that allow rapid drilling of weak rock has been partially demonstrated. As I have mentioned previously, Los Alamos National Lab, in looking at different methods for drilling, had come up with the idea of using a small nuclear reactor to provide sufficient heat to a ceramic probe that it would melt its way into the ground, pushing the molten rock to one side and providing a glass lining to the resulting tunnel.

By the way, while this has not been used to create the network of tunnels under this country beloved of some imaginations, it has been demonstrated. Not with a nuclear source, but with more conventional heating: Los Alamos melted drainage holes at the Tyuonyi pueblo plaza in 1973. A total of eight drainage holes were drilled at this archeological site in the Bandelier National Monument.
The first significant step in the Subterrene technology transfer program occurred when eight water drainage holes were melted with a field demonstration unit at the Rainbow House and Tyuonyi archaeological ruins at Bandelier National Monument in New Mexico, in cooperation with the National Park Service. By utilizing a consolidation penetrator, the required glass-lined drainage holes were made without creating debris or endangering the ruins from mechanical vibrations.

At around the same time Dr George Clark, at what was then the University of Missouri-Rolla (now Missouri University of Science and Technology) had used ceramic electrical heaters in rock to raise rock temperatures enough to fracture and break out blocks of granite.

Field tests have therefore been able to take rock up to temperatures that are high enough to melt rock, using electrical heaters placed in holes in the rock. Which is a good introduction to the Mahogany Project in which Shell have been using electrical heaters to heat oil shale in place, to high enough temperatures that the kerogen transforms into a light oil. The investigation has been going on for some 25 years starting in the laboratory, and has progressed through an initial field trial.

Small holes are drilled down through the rock to house the electric heating coils, which slowly raise the temperature of the rock to between 600 and 750 deg F, at which temperature the kerogen will convert, depending on what is there, to a mixture of light oil and natural gas. These fuels can be recovered by drilling conventional wells into the rock, with typical depths at the test site being in the 1,000 to 2,000 ft range.
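For a rough feel for the energy this takes, here is a back-of-envelope estimate of the heat needed just to bring the rock itself up to conversion temperature. The specific heat and starting temperature are my assumptions, not Shell figures, and losses to surrounding rock and groundwater are ignored:

```python
# Back-of-envelope: heat to raise a tonne of oil shale to conversion temperature.
c_rock = 1.0                         # kJ/(kg.K), assumed specific heat of the shale
t_start_f, t_conv_f = 60.0, 675.0    # deg F; midpoint of the 600-750 F range
dT_k = (t_conv_f - t_start_f) * 5.0 / 9.0   # convert the F interval to kelvin

heat_kj_per_tonne = 1000 * c_rock * dT_k    # 1,000 kg per tonne
print(f"~{heat_kj_per_tonne / 1e3:.0f} MJ per tonne of rock")  # ~342 MJ/tonne
```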

The Shell Mahogany Technology

The field trial placed heaters in a grid over a 30 ft by 40 ft test area and found that a third of the volume produced was natural gas, which came from the lower-grade layers of the shale above the layers with the highest concentrations of kerogen (the Mahogany layer), which produced the light oil.

Array of heaters at a Shell test site

A total of 1,700 barrels of the light oil was recovered during the test period.

Production from the Shell test wells in oil shale

While the Bureau of Land Management has approved further sites for tests, the program is waiting on the price of oil, which will determine whether or not it will be sufficiently economically viable to move forward. At present this decision is anticipated to be in the middle of this decade, by which time it may be a little clearer whether the Cornucopians or some of the rest of us have been more accurate in our predictions of the future availability of sufficient oil to meet global demands at an affordable price. But it is the level of that affordable price that will decide whether the oil shale program is viable.

The costs of the project will not just have to cover the heating of the rock. One of the problems with the site is that there is some migration of water through the rock, and this can create two problems. The first is that it pulls heat away from the transformation process, and the second is that it can interfere with the overall process itself. To stop the water flow (and concurrently the risk of transformed oil and gas migrating away from the collector wells) Shell has been looking into building an ice wall around the site to hold the water back.

Ground freezing is growing more popular as a tool for dealing with water underground. It has been used, for example, to stabilize the ground while the Boston Big Dig (the Central Artery/Tunnel project) was built, and in stabilizing the ground for some of the underground stations in the London Tube network (including after the collapse of one of the excavations). It has also been used to hold back the water while uranium ore was mined at MacArthur River. Simply described, a dual pipe system is placed in vertical holes, and a freezing solution (usually a brine) is circulated through them, lowering the temperature of the rock to the point that the water freezes. Since the lowered temperature is distributed around the holes, there is no need to intersect any of the fractures or voids, and the frozen water also helps to strengthen the rock where needed.

For the Mahogany Project test, which began in 2007, the freezing liquid was ammonia, and the test used a pattern of 157 holes drilled eight feet apart, to a depth of 1,800 ft. The test removed the groundwater from within the well, but did not heat the rock to produce the oil and gas.

It will be interesting to see how this project turns out. It has been suggested that the technology would need a dedicated power source of some 1.2 gigawatts, in order to yield a production of 100,000 bd. Shell estimates it will yield 3-4 energy units for every unit consumed.
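Those two figures can be loosely cross-checked. Using an assumed 6.1 GJ of energy per barrel of light oil and an assumed 40% thermal efficiency for the power source (neither number is Shell's), the 1.2 GW and 100,000 bd figures do bracket the claimed 3-4:1 return:

```python
# Cross-check of the 1.2 GW / 100,000 bd / 3-4:1 figures.
power_w = 1.2e9              # suggested dedicated generating capacity
barrels_per_day = 100_000
gj_per_barrel = 6.1          # assumed energy content of a barrel of light oil
plant_eff = 0.40             # assumed thermal efficiency of the power source

elec_gj_per_bbl = power_w * 86_400 / barrels_per_day / 1e9
print(f"Electric input: {elec_gj_per_bbl:.2f} GJ/bbl")                            # ~1.04
print(f"Return on electricity: {gj_per_barrel / elec_gj_per_bbl:.1f}:1")          # ~5.9
print(f"Return on primary fuel: {gj_per_barrel / (elec_gj_per_bbl / plant_eff):.1f}:1")  # ~2.4
```

Shell's 3-4:1 estimate sits between the electricity-only and primary-fuel ratios, which seems consistent.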

Layout of freezing pipes for the Shell Mahogany tests.

As usual with these technical posts, I can only briefly outline a process. If something is not clear, please ask in comments; or if you know of more information, we all gain from reading of it.


Wednesday, March 24, 2010

Temperatures in Nevada, and Death Valley

Well, trying to find the temperatures for the different states and compare the USHCN data with the GISS data is proving to be more interesting than I had thought. And it is taking a fair bit more time. So when I got to Nevada, as I moved west from Missouri, I thought that today's state would be a faster run-through. It turns out that there were a couple of other issues along the way, and so even for a smaller population of stations there are some problems. (I started writing this just after inputting the initial USHCN data into the table.) Nevada has Death Valley, which, although it has some of the highest temperatures in the Union, did not make either the GISS or USHCN lists, since it has only been providing data since 1910.


Finding the temperature relationships started out easily. Checking with the USHCN stations first, there are only 13 stations listed, so a quick modification to the generic file, and renaming it, made it easy to get and put up the basic temperature data. There was no data before 1901 for Boulder City, and there were seven years missing for Searchlight. So, as I described earlier, I filled in estimates for each of these values by noting that Boulder City (for example) was, at 67.13 deg, 16.49 deg warmer than the state average of 50.64 deg, and that 1895 was 3.54 deg colder than that overall average. This suggests that the Boulder City temp was 63.59 deg that year. So I filled that number in the space, and repeated the procedure to find an estimate for the rest of the missing numbers. And then, as I started to get the GISS data, I started to run into trouble.
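As an aside, for anyone wanting to replicate the infilling just described, it amounts to this (a sketch of my own, using the Boulder City numbers):

```python
# Estimate a missing station-year value from the station's average offset
# from the state mean, plus that year's state-wide anomaly.
state_mean = 50.64                    # long-term Nevada average, deg F
station_offset = 67.13 - state_mean   # Boulder City runs 16.49 deg warm
year_anomaly = -3.54                  # 1895 was 3.54 deg colder than average

estimate = state_mean + station_offset + year_anomaly
print(f"Boulder City, 1895 estimate: {estimate:.2f} deg F")  # 63.59
```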


I looked at the Chiefio site to find the stations that GISS uses in Nevada. Not being sure which station was in each state, I went through the list and checked (by using the coordinates and Google Earth) to make sure which state each station was in. In the process I discovered that there were actually four GISS stations in Kansas, and not just the three that I listed. So I will have to go back and redo that state, adding in the info from the GISS station in Goodland, KS. (But I will put that off until this evening.)

The next problem that I ran into is that Chiefio cites two stations in Nevada as being GISS stations. These are in Las Vegas and Ely. But when I open the Las Vegas data table, the data only goes back to 1937. Odd; I thought that one of the criteria for selection was that there had to be a continuous record going back to 1895. So can I do the same correction to find the earlier data? Actually, given that I am looking for a difference between the two sets of data, and that there are only two stations in the GISS record, I don't think that I can. I wonder what the difference is? Well, let me enter the data and then take a look. (Which is not the purely scientific way of doing this, but I am doing all this out of personal curiosity, so press on . . . .) And it gets a bit stranger, since when I go to the Ely data there is only data since 1947. Now in Nevada there are some 35 stations listed by GISS (though I think the odd one or two are in other states), and a number of these have data going back to 1895 or earlier (see the USHCN list), so why pick two stations that only have partial data sets?

I will have to think a bit about that one, but in the meantime how am I to handle the lack of complete sets of data to compare? Well, in the circumstances it is a little difficult to do a meaningful comparison, since Las Vegas is the station with the lowest elevation in the set I am looking at, and thus has the highest temperature, so it will have a significant influence when it is brought into the averages. In fact the difference between the GISS data and the USHCN data consistently increases over the years for which there is GISS data.


And the average difference over the time period where there are values for both GISS stations is 4.37 degrees. The impact of the added GISS stations on the overall average can be seen over the period from 1895, with the red showing the addition of the GISS data after 1945.


Looking at the influences on those temperatures, as previously, there is a strong correlation with latitude:


This is now becoming quite consistent over the states, as is the correlation with elevation:


But here in Nevada there is no significant correlation with longitude, nor is there much of a correlation with population, probably because there are not a lot of folk living near most of the stations. And there is not a significant trend looking at overall standard deviation of the data.

I am going to try normalizing the data to the center of the state, by just using latitude (I believe that GISS uses distance, but since the correlation shown above is relatively consistent with latitude, I will use that). The center of the state is given (by GISS) as 38.9 N, 116.4 W. So if I adjust for that using the equation on the latitude plot, then I get a figure of:


One can use the derived equation to determine the temperature profile for a station that was at the center of the state, and at the mean elevation for the state, which turns out to be 1,676 m. And that gives a plot:


Note that I have not used the GISS data in this series because, had I done so, since the GISS data is hotter than the average, including it only in the latter parts of the curve would give a relatively cooler initial period and a hotter latter half. So I left the GISS stations out of this plot.
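Going back to the normalization itself, for those who want to reproduce it, the adjustment amounts to the following sketch. The two slopes here are placeholders of mine; the real values come from the fitted latitude and elevation plots above:

```python
# Normalize each station's temperature to the state center (38.9 N, 116.4 W)
# and to the state mean elevation of 1,676 m, using fitted linear slopes.
LAT_SLOPE = -2.0      # deg F per degree of latitude (placeholder; use the fitted value)
ELEV_SLOPE = -0.012   # deg F per meter of elevation (placeholder; use the fitted value)
CENTER_LAT, MEAN_ELEV_M = 38.9, 1676.0

def normalize(temp_f, lat, elev_m):
    """Shift a station reading to the reference latitude and elevation."""
    return (temp_f
            - LAT_SLOPE * (lat - CENTER_LAT)
            - ELEV_SLOPE * (elev_m - MEAN_ELEV_M))

# Example: a hypothetical station at 36.2 N and 750 m reading 66.0 deg F
print(f"{normalize(66.0, 36.2, 750.0):.2f} deg F at the state center")
```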

And when we look at standard deviation, having adjusted to the center of the state and to the state mean elevation, then the plot shows a definite decrease in scatter as the years have worn on.


Which is interesting, but remember that this is a relatively small sample. And so, on to correcting Kansas, and then to see what California adds to the picture. That is the state that Chiefio first got my attention by writing about, and it also brings us down to the ocean, from which, until now, we have been far.

Today there were more questions about the data than about the hypotheses; hopefully we can get on to other things next time.


Tuesday, March 23, 2010

The volcano in Iceland

The current volcanic eruption in Iceland raises a question of the Precautionary Principle, but one calling for an action opposed to that most commonly sought in ameliorating damaging climate change. To summarize the situation in somewhat condensed form: the current eruption in Iceland at the Eyjafjallajokull site has now been going on for a couple of days (video) and is throwing a plume of ash up to 5 miles into the air.

View of the plume from the Eyjafjallajokull eruption (scienceblogs). More photos here

What should be a concern, however, is that this particular volcano, although in itself not that large, is often a precursor to the eruption of the Katla volcano, which is nearby but further north-east. While the current eruption started away from a glacier, Katla lies under the Myrdalsjoekull glacier, and the precedents are not good.
"Eyjafjallajokull has blown three times in the past thousand years," Dr McGarvie told The Times, "in 920AD, in 1612 and between 1821 and 1823. Each time it set off Katla." The likelihood of Katla blowing could become clear "in a few weeks or a few months", he said.


The follow-on Katla eruption has, historically, had much greater impact, since it has had the power to inject large volumes of particles and gases into the atmosphere, creating a cloud sufficiently dense that sunlight is reduced, harvests are affected, and famines result. For example, when the nearby Laki fissures erupted in 1783 the reported results included:
Iceland's Laki volcano erupted in 1783, freeing gases that turned into smog. The smog floated across the Jet Stream, changing weather patterns. Many died from gas poisoning in the British Isles. Crop production fell in western Europe. Famine spread. Some even linked the eruption, which helped fuel famine, to the French Revolution. Painters in the 18th century illustrated fiery sunsets in their works.

The winter of 1784 was also one of the longest and coldest on record in North America. New England reported a record stretch of below-zero temperatures and New Jersey reported record snow accumulation. The Mississippi River also reportedly froze in New Orleans.
The last eruption of Katla was not that severe, although there was severe flooding. But it can be violent, throwing large volumes of material into the air. The consequences of the Laki eruption of 1783-4 (it lasted for eight months, ejected over 9 cubic miles of basaltic lava, and a quarter of the population of Iceland died from starvation) included an evident effect on winter temperatures in the US.

(Sigurdsson 1982)

But it is apparently not the fine particles that cause the most climate impact; rather it is the large volumes of gases that are also ejected and turn into acid aerosols that contribute to the disastrous consequences. It has been projected that some 8 million tons of fluorine and 120 million tons of sulphur dioxide were released. To put this in context, during the Mount Pinatubo eruption (which also had a short-term cooling effect on global climate) the volcano is projected to have emitted some 20 million tons of sulfur dioxide, and this lowered the global temperature by perhaps 0.8 deg C in 1992. (The aerosols reflect more solar energy back into space.)

Mount Pinatubo erupting (USGS photo)

The fissures themselves, from which the Laki lava flowed, stretch some 16 miles, though not as a continuous crack; rather, the lava flowed from about 130 craters.

Laki craters (climate4you)

The general location of Katla relative to the earlier Laki and Eldgja eruptive flows is shown here:

Location of the Laki, Katla and Eldgja fissures; the current eruption is further south and west, on the hidden side of the Myrdalsjokull glacier.

The severity of the impacts of these eruptions on the global climate, and the resulting devastation of harvests, is a matter of record. We know that there are more coming and, from past history, that the damage will be great.

That record suggests that there will soon be another serious eruption that, on past performance, will also eject large quantities of sulfur dioxide gases into the atmosphere and induce another period of cooling.

Since the previous evidence has shown that this cooling has not been good for global health, perhaps it might be about time that folks started working on what can be done to stop, or collect, these large volumes of gases before they become too destructive. Certainly the problem of neutralizing the gas, or otherwise collecting or storing it, is significant (but then so is that for carbon dioxide). However, we know where it is going to come from, and some of the mechanics of how it will migrate upwards. There are large unknowns in terms of how we might capture, control, or neutralize a lot of the acid-forming gases, but the size of the problem has never stopped us before.

And if the problem were to be addressed by the nations of the world, perhaps we could, in time, assuage some of the known negative effects of the global cooling that we can now expect as the Icelandic eruption continues. This cooling may well already be more certain than near-term global warming, and more immediately likely to occur; so, under the same Precautionary Principles that are driving the huge investments in climate change already, ought we not be spending our time right now trying to find ways to stop the impact of these massive eruptions on the health of the globe?

A silly thought, perhaps, but one that suggests that these problems don't always have only one side, and that we might find out by direct observation the downsides of too much cooling and its effect on people and agriculture, perhaps sooner than we would like.


Saturday, March 20, 2010

A tear in the retina

Just over a week ago, on Thursday morning, I got into the car to make the three-mile drive to the University. Shortly after leaving the house I noticed what appeared to be something like a # sign in the upper right of my vision. This turned into a black line that, as I continued along the road, reached all the way down the right side of my vision. And it was followed by waves of small spots that swept across my view from the right, gradually first blurring and then cutting back on what I could see as I pulled into the parking lot.

A quick call to my optometrist and I had an appointment for that afternoon. But as the morning wore on I gradually reached the point where I could not see out of my right eye. When I was examined the diffused blood within the eye made it impossible for the optometrist to see anything within the eye itself, and I was given an appointment the following morning at the Barnes Retina Institute in St Louis.


At the Institute the eye was still too blocked for even what seemed the very brightest of lights to penetrate well enough to show the problem. But with an ultrasonic probe it was possible to see that there was at least one tear in the back of my retina, which had caused the bleeding that was the evident result of the problem. I was accordingly given an appointment for surgery this past Monday morning. The Actress was kind and diligent enough to repeatedly take me into (and from) St. Louis, even though she herself was engaged in rehearsals for performances that have been going on in the late evenings of this week.

The surgery took about an hour; there were apparently two sites that needed to be tied back to the retinal wall, using a laser to do the "welding." The majority of the debris in the eye was drained out, and an air bubble injected to hold the area that had been worked on in place until it could re-establish. (While it was done under a local anesthetic, I was given a general one to allow that to happen, and my memory of the whole event is extremely clouded.)

After a check on the Tuesday to ensure that there was no infection, I was able to come home (we had spent the previous night in St Louis as a precaution). By Wednesday I could see out of the top of the right eye (looking down through the air bubble gave a clear but offset view), and over the week the air bubble has shrunk: with my head six inches from the table it now covers the area of my palm when placed on the table, and when I look at the laptop screen it covers only part of the keyboard as I type. I can almost read these words as I write them (the definition comes from my left eye), since there is still some small debris in the eye, and that will only be absorbed through time.

I am left very impressed with the strides of modern medicine, the benefits of Medicare (I retired at the end of last month), and the skills of the team that so rapidly brought me back to functionality. As my eye strengthens I will be returning to more regular posting. I gather that the air bubble, which is steadily shrinking, should be gone by the beginning of the week.

Thanks again for the good wishes.


Sunday, March 14, 2010

A slight problem

Because of the need for an unexpected minor operation I will not be able to post for a couple of days or more and posts may be limited in the near term. I regret the necessity, and will be back as soon as feasible.

While waiting you might want to keep up with the Iditarod, which seems to be heading for an exciting finish.


Saturday, March 13, 2010

Utah Temperatures and Colorado UHI

This is now the fourth in an originally unplanned series looking at the differences between GISS and USHCN average temperatures, and some trends in the data that are not necessarily obvious when one only looks at global data. Tracking westward from Colorado, Utah has the next set of temperature data that I will look at, and I will also comment on Anthony Watts' note on Colorado UHI, which he posted since my post on that state.

Utah has a different topography: while in Colorado the land rises as we head west, in Utah that is no longer the case. So for the hypothesis of the week I will assume that while there will be a correlation with elevation, there will be none with longitude. Let's see if I'm correct. And while there was little correlation with latitude in Colorado, because of the strong effect of elevation and the wide range it covered, I am going to hypothesize that we will see a significant effect of latitude again, even though the elevation is considerably higher.

As usual I am writing this as I carry out the tabulation, and so I begin, following the initial procedure, by going to the USHCN web site and, yikes, there are 40 weather stations listed for Utah. OK, there will be a slight pause while I enter all these into the data table . . . (40 stations, mutter, mutter).


Hmm! We have the same problem of missing data that showed up in Colorado, but with more stations. For some stations there is no data in specific years before 1900. I am going to apply the same procedure as I did for Colorado and calculate an assumed temperature for those sites. First is Blanding, for which there is no temperature for 1896. Blanding, at an average of 51.15 deg, is 2.8 deg warmer than the rest of the state; since the state average that year was 47.5 deg, Blanding should be at 50.3 deg. Alternatively, the 1896 state average of 47.5 deg was 0.8 deg below the long-term average annual temp, and since the annual temp for Blanding is 51.15 deg, the 1896 temp would be 50.35 deg. Taking the average of these two estimates gives 50.33 deg, which is what I insert.

I follow the same procedure to calculate the temperatures for Bluff in 1895, 1896, 1897, and 1898; Farmington in 1895; Morgan Power in 1895; and Snake Creek in 1895. (If you get the table, these values are marked in red.)
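In code, the averaged two-way estimate for Blanding works out like this (a sketch of my own procedure, using the numbers above):

```python
# Two ways to estimate Blanding's missing 1896 value, then average them.
state_1896, state_mean = 47.5, 48.3       # that year's state mean, long-term mean
blanding_mean, blanding_offset = 51.15, 2.8

est_from_state_year = state_1896 + blanding_offset                  # 50.30
est_from_year_anomaly = blanding_mean + (state_1896 - state_mean)   # 50.35
infill = (est_from_state_year + est_from_year_anomaly) / 2
print(f"Infill value: {infill:.2f}")   # ~50.33, the value inserted in the table
```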

Again checking on Chiefio's list of the GISS stations, I find that only one of the stations in Utah has survived the cut: the one in Salt Lake City. So I go to get the GISS data for that station.

Adjusting the tabulation to calculate averages for the different numbers of USHCN and GISS stations reveals that the GISS station reads 4 degrees higher than the state average from the USHCN stations.

Looking for the populations of the different cities and towns, Deseret is now apparently preferentially called Delta. Modena, UT is also not in the city-data file, so I got its population of 35 from Century 21. The population for the Snake Creek Powerhouse also requires a little detective work; two communities have merged to form Midway, and so I will use that number. And for Zion National Park I used Springdale.

And so all the data is in the table and what do we find? First of all, looking at the primary hypothesis of the day:


There is realistically no correlation here, so the first hypothesis is apparently valid.


As with the Colorado data, this again suggests that the trend that we saw with the Kansas data in regard to longitude was really a reflection of the increasing height above sea level. And at the high elevations, looking at latitude, there is a trend:


Not as strong as elevation, given the widely varying heights of the stations around the state, but it is still there.

In regard to the population distribution, Utah is a state with a large number of small communities that also have weather stations, so I shrank the scale of the plot shown to below 10,000 population:


And the log correlation still shows.

In terms of the standard deviation plot that I have calculated for all the states so far:


There has been a slight increase in scatter over the years (which would validate Anthony Watts' observation on station quality), though it is not at a significant level (which I have set at an r-squared of 0.05).
However Utah is a state that has seen a warming over the years:


But the growth of the small communities, relative to the single GISS station in Utah, and the greater sensitivity of temperature to the growth of small communities, mean that the difference between the GISS and USHCN averages has declined over the years, due to the makeup of the USHCN average.


Which brings me back to Anthony Watts' recent post on Colorado data. What he and Stephen Goddard have done is look at the relative population growth of Boulder and Ft Collins in Colorado.

Growth in relative populations of Boulder and Ft Collins in Colorado

Now, taking the data that I had discussed last Saturday, it was fairly easy to pull out the data for Ft Collins and Boulder and find the difference, and plot it against time:



I have taken it back a little further than WUWT, since I had the data, but the two curves are otherwise the same. And it does show the effect of the Urban Heat Island, from the point of view that the most significant change between the two towns has been their relative growth.

And so it continues, perhaps next week we will look at Nevada?



Wednesday, March 10, 2010

Carbon Sequestration sites and their success

There are a number of questions on the ease with which carbon dioxide can be sequestered underground, and I alluded to some of them in yesterday’s post. That led me to a quick review of the status of the concept, and I thought I would pass on information from some of the papers that I looked at. Some of the different options that can be used for carbon dioxide injection underground are illustrated by a review of the Polish program.

Different options for carbon dioxide disposal underground.

Of these the use for enhancing oil recovery has, perhaps the longest history. Some sense of the work can, perhaps, be seen by looking at CO2 injection at the Cranfield site in Mississippi.


The site is in an oilfield that was discovered in 1943 and abandoned in 1966. Since that time, under the influence of a strong aquifer drive, it has returned to its original reservoir pressure. There is a layer of residual oil under a gas cap.

Section through the Cranfield site

The site is actually a dome, folding in both directions, so that the residual oil forms a ring. It is a part of the Tuscaloosa Formation, which MIT has calculated should be able to retain some 10,000 million metric tons of CO2. Adjacent continuations in East Texas and the Gulf would add an additional 187,000 million tons of capacity. Validation of the performance of the test site would thus go a long way to answering some of the critics of the technology. Because of the limited volume of oil available, the project is also looking into injecting CO2 into the brine interval during the third phase of the program.

At Cranfield the CO2 has been injected continuously, starting in July 2008, at a rate of 500,000 tons per year, so that, as Professor Economides discussed, the injection pressure remains high. At present the analysis of the samples shows little change in the water chemistry as a result of the injection. Last November it became the fifth site in the world to store more than a million tons of CO2. Monitoring of the pressures, as the third stage has begun, does show a pressure increase, although this may be injection-rate sensitive.


Monitored pressures for Cranfield 3 (U of Texas)

A second site is being prepared in Alabama at the Citronelle oil field, near Mobile. Both carbon dioxide and water will be injected at that site, with the intent that the CO2 will allow an additional 15 to 20% increase in overall production from the field, before the site is left to sequester the CO2.
In the United States, CO2 injection has already helped recover nearly 1.5 billion barrels of oil from mature oil fields, yet the technology has not been deployed widely. It is estimated that nearly 400 billion barrels of oil still remain trapped in the ground. Funded through the D.O.E.'s Office of Fossil Energy, the primary goal of the Citronelle Plan is to demonstrate that remaining oil can be economically produced using CO2-EOR technology in untested areas of the United States, thereby reducing dependency on oil imports, providing domestic jobs, and preventing the release of CO2 into the atmosphere. . . . . . . When the 5-month injection is completed, incremental oil recovery is anticipated to be 60 percent greater than that of conventional secondary oil recovery by water flood. A recent study by Advanced Resources International of Arlington, Va., estimates that approximately 64 million additional barrels of oil could be recovered from the Citronelle Field by using this tertiary recovery method.

In the last Oil and Gas Journal survey (April 2008) they found 100 ongoing miscible CO2 projects and 5 immiscible ones, with enhanced oil production running at 250,000 bd at the beginning of 2008.
Costs for CO2 EOR have been given as $20.86/boe, divided out as follows:
* $3.68/boe for CO2.
* $5.72/boe for power and fuel.
* $3.34/boe for labor and overhead.
* $2.00/boe for equipment rental.
* $1.36/boe for chemicals.
* $3.05/boe for workovers.
* $1.71/boe for miscellaneous.
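As a quick check, the itemized components do sum to the quoted total:

```python
# The itemized CO2-EOR costs above, summed as a check against the $20.86/boe total.
costs = {"CO2": 3.68, "power and fuel": 5.72, "labor and overhead": 3.34,
         "equipment rental": 2.00, "chemicals": 1.36, "workovers": 3.05,
         "miscellaneous": 1.71}
print(f"Total: ${sum(costs.values()):.2f}/boe")  # $20.86/boe
```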

One of the Centers most active in the monitoring of CO2 plumes as they migrate from the wells out into the formation is at the University of Texas-Austin. Sue Hovorka, for example, monitored a CO2 plume migration after it was injected as part of a test in the Frio Blue sand, although in that test the injection was of the gas.
Several times a day during injection, trucks hauling 20-ton tanks of cold liquified CO2 arrive at the test site, where it is transferred to two 70-ton storage tanks. The CO2, which comes from a natural reservoir near a Mississippi salt dome, is transported most of the way by train.

During injection, the liquid CO2 is pumped through a heat exchanger, which warms it up to 21 degrees C (70 degrees F), converting it to a gas. Then it is pumped through the injection well head and a mile down the well. The CO2 enters the porous sandstone and brine through perforations in the well casing and spreads out in a plume.
She also described, briefly, how the process was supposed to work.
Before the first tests, the scientists had predicted that an effect called residual saturation, caused by capillary forces, would cause the brine-filled pores in the stone to trap and hold about 20 percent CO2. The other 80 percent moves on to the next set of pores, and as it moves, it’s continuously diminished. In other words, the plume smears out. Hovorka said the effect is intuitive.

“It’s the same reason you can’t get grease off the stove,” she said. “You can’t wash it loose with water, you have to use soap.”

The 2004 test confirmed this prediction and now initial results from the 2006 test seem to reconfirm it. “It means we got the physics right,” said Hovorka. It also means she and her colleagues can predict the CO2-trapping ability of other sites before injection begins, a powerful and necessary tool for carbon sequestration to become a common practice.
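The "smearing" Hovorka describes is easy to see numerically: if each set of pores the plume enters traps about 20 percent of what arrives, the mobile fraction decays geometrically. This is a toy model of my own, not her group's simulation:

```python
# Toy model of residual trapping: each pore set retains ~20% of the CO2
# that reaches it, so the mobile plume diminishes geometrically.
trap_fraction = 0.20
mobile = 1.0
for step in range(1, 11):
    mobile *= (1.0 - trap_fraction)
    print(f"after pore set {step:2d}: {mobile:5.1%} of the CO2 still mobile")
# After ~10 pore sets, barely 10% of the plume is still moving.
```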

Polish trials have looked at displacing natural gas with CO2 in a program that has been going on for over 12 years. Part of the process at the Borzecin site was to inject the gas into the underlying aquifer beneath the natural gas pocket. The CO2 dissolves into the water, and so the migration to the gas pocket occurs only very slowly (the gas is at 1,500 psi, just above the critical pressure, when it enters the reservoir). The injection displaces natural gas that had previously been dissolved in the aquifer, yielding, as natural gas from the production wells, about 60% of the injected volume of CO2.

One effect that this test showed, which perhaps Professor Economides had not considered, is that the dissolved CO2 appears to have interacted with the water, over time, to form carbonic acid, which ate into the carbonate rock and increased the permeability of the formation, lowering the pressure required for injection rather than, as he had anticipated, having it rise. The site has now accepted more than 1.4 million scm.

CO2 has also been tested as a means of displacing methane from unmined coal seams. The initial project was completed in 2005
During the project 203 tonnes of CO2 were supplied to the site and stored in tankers. The CO2 is taken from the tankers where it is already stored under pressure and then injected at the injection well (MS-3 well). The injection well was a new well drilled down to a depth of 1120m for the purpose of this pilot project. The target seams were thin coal layers that were bounded (above and below) by highly impermeable shales. The pre-existing coal bed methane (CBM) production well (MS-4) is 150m from the injection well. A tank by the production well stores the saline water which is a by-product. This is emptied and disposed of on a weekly basis. The produced gas (naturally - 97% methane, 2% CO2) is flared. Since December 2004 there has been a gradual rise in CO2 content of the produced gas, the latest figure is 8% which may represent breakthrough of injected CO2 at the production well.
Modeling of the process is not yet fully functional, and in contrast to the more conventional reservoirs for oil and natural gas, the large fracture patterns in coal, known as cleat, play a greater part in the performance of the coal beds and must be included in the analysis.

Nevertheless the tests of the different methods for storing, and using, CO2 injected into the ground have been successful. The most widely recognized, however, is that carried out by Statoil, with Sleipner the most documented. Sleipner has been injecting CO2 (which makes up an unacceptable 9% of the natural gas extracted at the site) at a level of a million tons a year since 1996, and by 2004 the length of that record had made it possible to map the migration of the CO2 over time. The initial injection is at a depth of 1,000 m below sea level.


Pattern of injected CO2 flows from the injection well at Sleipner after 3 years

If I read the plots correctly, the injection point is aligned with the deepest point in the picture, and the flow path is about 2 miles long at its greatest extent.

The site continues to be monitored as injection continues, with the migrating gas held under the containment of the cap rock.

Seismic surveys of CO2 migration at Sleipner

It is expected that the CO2 will slowly dissolve into the brine (over hundreds of years). The scale of the above is exaggerated vertically since the height of the plume is around 600 ft.

The success of the program has led to the Snohvit Project, which again takes the CO2 from a natural gas supply (in this case at 5% CO2) and stores it underground.

The success of these projects, and the evolution from the simple models initially assumed to the more complex behavior actually observed (with some sites continuing to accept high volumes of CO2, and others sustaining high injection rates), combine to suggest that Professor Economides' models may be overly conservative.


Tuesday, March 9, 2010

The Professors Economides and carbon sequestration

One of the problems in ensuring that there is sufficient energy available at some future date is that the supply, in virtually all cases, requires that a significant infrastructure is in place to deliver it. Thus, for those of us who have become used to having a light come on, when a switch is flicked, there needs to be some form of generator hidden at the other end of the wires to which the switch is connected. Stepping further back beyond the generator there must be some energy source, whether it be the sun’s rays; the wind blowing across the moors; or the fuel that fired the power station from which the electricity flowed.

Putting that infrastructure in place takes time. And the existing plant, over time and even with good maintenance, will slowly decay and require replacement. At the same time, just being able to generate power doesn't necessarily allow me to flick the switch and see the computer screen as I type this. There has to be a system, typically an electric grid of power cables, which carries the power to where it is needed. As Boone Pickens found out, putting wind turbines into West Texas was an exercise in futility, until such time as there is a network that will deliver the power to the folk that need it in East Texas, more than a few miles, and a few years of construction and permitting, away. As "just in time" manufacturing has grown as a cost-control practice, ordering large pieces of equipment carries with it longer delivery times, since the minerals must be mined, processed, produced, transported, made into parts, and assembled before the final product can be brought to a site and installed. The larger the facility, in many cases, the longer that process takes, and the more paperwork is likely to be involved in getting the installation into the ground. It is a lengthy process even when the need is generally recognized. If there is political opposition - regardless of merit - then you just added years to the delivery, construction and operational start times.

Power station construction is one of those things where the process is more likely to be beset with delays than to see dramatically earlier-than-scheduled delivery. And if the fuel comes out of the ground, whether coal, oil, natural gas or uranium, then delays can compound, particularly at politically sensitive points, such as around election times.

I was thinking of this today, as I came across Chris Booker's comments on the Conservative leadership's views on electricity generation, at a time when the UK must, before long, hold a General Election. Given that Britain must replace 14 of its current generating plants under EU mandate, taking with them 40% of the current generating capacity of the country, he draws attention to the Conservative view that coal-fired power can only be allowed if there is full carbon capture and storage.

What makes it particularly interesting and relevant is that, after some 100 coal-fired power stations in the United States have been cancelled or postponed, we have folk such as the head of one of the largest generating companies in the Midwest virtually echoing the Conservative leader. And it is much the same position with the Department of Energy and the current Administration. Carbon sequestration is also currently an area of considerable research and planned development.
In October 2009, as part of President Obama's stimulus package, the US invested $1.4 billion in 12 CCS demonstration projects. In December 2009, the US announced that a further $979 million was to be invested in 3 more such projects. The same month the EU pledged over $1.4 billion towards building 12 similar CCS projects by 2015. The UK also proposes to fund and bring online 4 CCS demonstration projects by 2020.

Now there has been the odd small problem with leakage at one of the few places actually carrying this out at present. Yet the current site at Sleipner has been injecting CO2 for years, though the gas it re-injects was initially part of the natural gas mix that the wells at the site produce.
Using a simple metallic tube measuring 50 centimetres (20 inches) in diameter, the platform operator, Norwegian oil and gas group StatoilHydro, has injected some 10 million tonnes of CO2 into a deep saline aquifer one kilometre (0.6 miles) under the sea.

"We bury every year the same amount of CO2 as emitted by 300,000 to 400,000 cars," said Helge Smaamo, the manager of the Sleipner rig, a structure so large that the 240 employees ride three-wheeled scooters to get around.
But that is not the concern that has been highlighted, nor is it the volume of gas burned to run the compressors that liquefy the CO2 and raise it to injection pressure. Rather it is the consequences of the injection pressure itself.

Last October Michael Economides and his wife presented a paper in Houston, which he has since abbreviated on his web site, and to which Chris Booker has referred.

Simplistically, he examines the relative volume, within an existing rock formation, that carbon dioxide can occupy if it is injected into that rock and left there in storage. The number that he comes up with is 1% of the pore volume.

The problem that they identify (with computer models) is that as the carbon dioxide (compressed so that it is delivered and stored as a liquid) is injected at a relatively steady rate, it flows into a formation where there is already a pre-existing fluid. Thus, as the new fluid is pushed into the rock, in the reverse of the process that happens when oil flows out of the rock into a well, the driving pressure must increase to move the CO2 away from the well. This injection pressure must continue to build over time to keep the fluids moving, but it can only displace some of the existing fluid; the rest it has to compress. Achieving that compression by raising the pressure has a couple of serious snags. The first is that liquid is virtually incompressible, and the second is that the pressure must therefore be increased significantly, to and above the levels that the rock can easily stand.

What do I mean by that? Well, if you remember back to one of the tech talks, I mentioned that we can crack a rock by raising the pressure in the well above the strength of the surrounding rock. We do that to create paths through which the oil can flow into the well, driven by the pressure differential between the fluid in the rock and the pressure in the well, which we lower after fracturing the well. We lower the pressure because we don't want the crack to continue growing. If it did, it could easily grow through the reservoir rock, and then up through the cap rock that is holding the oil (or gas) confined within the reservoir rock. Once we crack through that cap rock, the reservoir isn't a reservoir any longer, since it has a leak path through which the CO2 can escape.

As a result, we can only inject CO2 into that reservoir until the pressure in the well starts to approach the fracture pressure of the rock. But we don't start at zero pressure; we start at the existing pressure of the fluid in the rock, and it is only within the difference between these two pressures that we can drive the CO2 into the rock. And that range, the Economides have calculated, will only allow about a 1% change in fluid volume before the pressure becomes excessive.
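To see where a number of that order comes from, note that for a closed, liquid-filled volume the fractional volume change is just the compressibility multiplied by the pressure rise. Here is a minimal sketch in Python; the compressibility and pressure values are representative figures I have assumed for illustration, not the inputs used in the Economides paper:

# Sketch of the pressure-limited storage argument; all inputs here are
# assumed representative values, not taken from the paper.
c_total = 1e-5        # total brine-plus-pore compressibility, 1/psi
p_initial = 3000.0    # initial formation pore pressure, psi
p_fracture = 4000.0   # fracture pressure of the formation, psi

delta_p = p_fracture - p_initial     # usable pressure window, psi
fraction = c_total * delta_p         # dV/V = c * dP for a closed volume

print("Pressure window: %.0f psi" % delta_p)
print("Storable CO2, as a fraction of pore volume: %.1f%%" % (100 * fraction))
# With these inputs the answer is 1.0% of the pore volume, the same
# order as the figure the Economides calculate.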

Now I have to add two caveats to this argument. The first is that it does not hold when the liquid is injected into a reservoir where the fluid is still oil, since CO2 will dissolve into the oil, lowering its viscosity; this has been used to enhance oil recovery (EOR) and get perhaps another 10% of production out of formations where it is feasible. And even when the oil can't be recovered, such a reservoir can accommodate larger levels of injection than those forecast.

The other point is that there is nothing to stop the site from extracting the contained fluid (generally water, though often brackish) to allow the CO2 to displace it at a lower pressure (which could be considered a reverse water flood). It may be a bit more expensive, but at least it would reduce the storage volume required, relative to the huge areas that the Professors Economides currently anticipate, which they argue would make carbon injection infeasible.

I suppose someone will realize that after a bit, but in the meantime, while the debate continues and the example sites fail to yield reassuring results, the power stations are not being built, and the future energy needs of both the UK and the USA hang further in the balance.


Sunday, March 7, 2010

Producing oil shale by burning it in place

This is part of the continuing series that I have been writing about oil shale. And, while I just digressed into talking about using nuclear devices to break the rock and heat it, the key problems that those posts highlighted remain: the first is that the oil is not really oil and won't flow to the well, and the second is that there are no easy paths for the oil to flow through, even if it could. And this creates a problem when it comes to getting the kerogen (or oil, for simplicity) separated from the rock around it. As I said in the first post on this topic, the oil can be separated in a retort, after being mined. The retorting can be self-energized and, by heating the oil, it can be transformed into a form of bitumen that can then be further refined into a commercial grade. And if you think it is easy, there is this quote I found at Econbrowser that might give you some perspective. He quotes Bubba, of Belly of the Beast:
If you heat this shale to 700 degrees F you will turn this organic carbon (kerogen) into the nastiest, stinkiest, gooiest, pile of oil-like crap that you can imagine. Then if you send it through the gnarliest oil refinery on the planet you can make this s*** into transportation fuel. In the mean time you have created all kinds of nasty byproducts, have polluted the air and groundwater of wherever you have extracted it.

Mining shale and then processing out the oil is, therefore, fairly expensive, both in terms of energy and hard dollars. At the same time, once the oil is extracted, the spent shale has to be disposed of, which costs more money. Given all these expenses and problems, it is not surprising that, from the beginning, the idea of trying to create the initial retort in the rock, and making that transition to oil in place, looked as though it might be a winner.


There has been considerable technical success in recent times in getting natural gas from the tight shale around the country, but natural gas is, comparatively, easy to extract if some additional cracks are artificially driven through the rock to create the needed permeability.

Unfortunately that only potentially treats one of the problems with the oil shale. The other is that the oil will not move, even if the cracks are there, unless it is heated to the point that it will either vaporize or transform into a flowable hydrocarbon. And this takes a lot of heat. Thus the attraction of having a nuclear device to create a cavity, radically fracture the rock around the cavity, and generate enough heat to start an underground fire that could be sustained and controlled by adding additional air, and from which the oil could be released.

OK, so accepting that we can't use nukes, can we do this another way? For reasons of space and time I'm going to talk about the more conventional retorting today, based on the idea of doing most of the processing of the oil in place. Why do we need to do that? Well, it gets very expensive to mine and move that rock from the deeper deposits, and though that has been, and is being, done for metal ores, the costs involved are still much higher than oil will normally bear.

If we can process the rock in place, so that the oil is heated sufficiently, then we save the transportation costs. So what will we need? For the more conventional approach we still need some sort of cavity in which to start the fire, and to allow it to spread. Then there has to be air fed to the fire to keep it going (and this will require that boreholes be drilled down into the area to sustain the air flow). And then there has to be some way of getting the mobilized oil out of the ground, so that it all doesn't end up being burned down there.

It is an idea that has been suggested for a number of different energy sources. And it is why I included a post on in-situ combustion processes at the beginning of this series. The first dealt with burning coal in place, and then I wrote about the THAI process that is being investigated in Canada for producing the heavy oil in the sands above Fort McMurray.

It might be helpful to insert a slight digression here. In a normal oil refinery, the heavy oils, or residuum, that come out of the bottom of the initial fractionating column have almost no light hydrocarbons left in them, and so are sent to a coker where, at a temperature of around 1,200 degrees, the final hydrocarbons are driven off and cracked into lighter fractions, leaving the carbon residue known as coke (or petroleum coke, to distinguish it from that made from coal). From my youth I can tell you that coke is a much harder fuel to ignite than conventional coal, since it no longer has any volatiles left in it. Thus, for example, even after the intensity of the fires in the Kuwaiti oil field, coke was deposited around the burning wells and required barrels of C-4 to break it up, so that the fire fighters could reach the top of the well, put out the fire, and replace the fixtures. The reason that I mention this is that Petrobank are burning this coke to provide the heat for the reactions, and the modifications from the first test to the second have shown that the process needs a lot of air supplied to the burning zone, over the full face of the burn, to sustain the fire. I'll come back to that in a bit.

The situation with the oil shale is a little more complex than for oil sand, since the structure of the rock is tighter than the sands in Alberta, and the oil has to be heated to a significantly higher temperature before it will transition and move. The first underground experiments were carried out by Sinclair, in 1953 and 1954 (so we are back to paper references; see Ref. 1 at the end). In those days drilling technology wasn't as advanced, and so, for the first experiments, they drilled a hole near the outcrop of the shale and then created a crack from the well to the outcrop by pressurizing air in the well until the rock fractured (a simple variant on hydrofracing a well). By adding sand, the crack could be propped open so that air could get into it. It took a couple of tries to get it working, but they were able to start fires in the oil shale at the well and then, by continuously pumping down air, carry the fire along the crack. The heat of the fire changed the kerogen to oil, in the same way as in a retort, and oil was seen coming out of the crack at the outcrop. The rock around the well was, however, fairly fractured from being near the outcrop, so that the air passage needed to encourage the flame to progress was available. It is worth quoting some of the conclusions of that work:
Under field conditions - particularly if the operation requires high pressures - volumetric conformance and thermal efficiency can differ significantly from model predictions. The burning zone probably will expand to more closely follow the retorting isotherm and shorten heat transfer distances. In addition, convection may become significant. To illustrate, shale retorted under simulated overburden pressures in the laboratory does not spall or crack as it does at low pressure. Instead, a consolidated rock having high porosity and low permeability remains after pyrolysis of the kerogen. Bulk volume is greater than in the un-retorted state. It is possible that some of the injected air will move through this permeable matrix of spent shale to more fully utilize the fuel content of the spent shale and accelerate heat transfer to raw shale over the rates computed from the mathematical model.

Coring of the oil shale as a precursor to the aborted nuclear shot at Rio Blanco (Ref. 2) showed that at depth the shale appeared to have considerable jointing, which would be a real help in any in-situ retorting method, as Socony anticipated (Ref. 3). When looked at under a microscope, the retorted shale also had a number of voids, left by the volatilized kerogen, that provided some permeability to the shale (Ref. 4).

It is the presence, or absence, of cracks, voids and other passages that controls the success of conventional in-situ retorting of oil shale. Cyclic hydro-fracing or air fracing of the shale can induce a series of fractures around a well bore at depth, but these are going to be relatively narrow. There is not the mobility within the structure that one gets from the oil sands. Further, the formation has to be heated to a much higher temperature to induce the transition, first to bitumen and then to crude. In the tight rock that exists under pressure at depth, the only path that air has to the fire is from boreholes drilled to that depth (in contrast with close-to-surface conditions, where ground fracturing will open cracks to the surface). With the cracks being relatively narrow, the air that must be supplied to the fire must be at a relatively high pressure, and in considerable volumes.

Without an underground cavity, into which some of the rock can displace, or a means for removing some of the rock to allow multiple fractures of the shale, and fracture opening to allow air access, starting and sustaining a large underground fire will be a significant undertaking.

Unfortunately, also, "Lean shale tends to be brittle, fracturing under stress, while rich shale tends to be tough and resilient, resisting fracture by bending, and tending to yield plastically under stress" (Ref. 5). This is going to make it harder to grow the cracks where we need them to be.

The other problem with in-situ retorting is controlling the flame front so that it goes where you want it to. It is hard to control where the fractures go underground, and the path that the air takes, so to make sure that all the shale is retorted, much more air has to be pumped underground than might otherwise be needed. And this is where it gets frustrating because, though it may take only 260 Btu to raise a pound of shale to 900 degF (Ref. 6), and that heat can come from the carbon content of the shale (the coke above), getting enough air down there, and having somewhere for the released oil and gas to go, can take a lot more energy.
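To put that 260 Btu figure in context, here is a rough energy budget per barrel; the shale grade and the energy content of crude are values I have assumed, with only the 260 Btu/lb coming from Ref. 6:

# Rough retorting energy budget; the shale grade and crude energy
# content are assumptions of mine, only the 260 Btu/lb is from Ref. 6.
heat_per_lb = 260.0          # Btu to raise 1 lb of shale to 900 degF
lb_per_ton = 2000.0          # short ton
grade_gal_per_ton = 25.0     # assumed Fischer-assay oil yield
gal_per_bbl = 42.0
btu_per_bbl_oil = 5.8e6      # typical energy content of a barrel of crude

heat_per_ton = heat_per_lb * lb_per_ton          # Btu per ton of shale heated
bbl_per_ton = grade_gal_per_ton / gal_per_bbl    # barrels recovered per ton
heat_per_bbl = heat_per_ton / bbl_per_ton        # heating Btu per barrel out

print("Heating energy: %.2f MMBtu per barrel" % (heat_per_bbl / 1e6))
print("Share of the barrel's own energy: %.0f%%" % (100 * heat_per_bbl / btu_per_bbl_oil))
# About 0.87 MMBtu/bbl, some 15% of the barrel's energy content, before
# counting air compression, drilling, or heat lost to the surrounding rock.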

For example, if two wells are drilled, say 500 ft apart, and a crack run between them, then the air supply to the burning front, and the flow from it, is going to be limited by the width of the crack. These processes are relatively slow. A model of the process (Ref. 7) has shown that it can take 10 years for the front to move from one well to the next. During that time air has to be continuously injected, and the volume of air required per barrel of oil recovered can be calculated.

Depending on the temperature at which the air is injected (since it shouldn't cool the fire), it can take between 24,000 and 86,000 scf (standard cubic feet) of air per barrel. To get that air into the fire effectively, it would have to be pumped into the well at 2,500 psi (a conventional air compressor runs at around 120 psi). Generating a flow of 50,000 barrels a day was found to require an air compressor system of some 272,000 horsepower. To cut a longer story short, this turns out not to be economic, at 1968 costs.
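That last figure can be sanity checked against the ideal isothermal compression work (the limit that a multistage, intercooled compressor train approaches); everything below, other than the quoted rates and pressure, is a standard value that I am supplying:

import math

# Check on the quoted compressor power, assuming ideal isothermal
# compression; only the air rate, oil rate and delivery pressure
# come from the post.
scf_per_bbl = 24_000        # lower end of the quoted air requirement
bbl_per_day = 50_000
p_in = 101_325.0            # atmospheric suction pressure, Pa
p_out = 2_500 * 6894.76     # 2,500 psi delivery, converted to Pa
m3_per_scf = 0.0283168

q = scf_per_bbl * bbl_per_day * m3_per_scf / 86_400   # intake flow, m3/s
power_w = p_in * q * math.log(p_out / p_in)           # isothermal work rate, W

print("Ideal isothermal power: %.0f hp" % (power_w / 745.7))
# Comes out near 275,000 hp, right on top of the quoted 272,000 hp;
# a real compressor train would need somewhat more.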

Hmm! Well I am not quite finished, but perhaps this explains in part why Shell are using heaters, rather than fire. I will have a short discussion of that, next time.

Ref. 1 Grant B.F. "Retorting Oil Shale Underground - Problems and Possibilities", 1st Oil Shale Symposium, CSM, 1964.

Ref. 2 Stanfield K.E. "Progress Report on Bureau of Mines - Atomic Energy Commission Corehole, Rio Blanco County, Colorado", 3rd Oil Shale Symposium, CSM, 1966.

Ref. 3 Sandberg C.R., "Method for recovery of hydrocarbons by in situ heating of oil shale", US Patent 3,205,942, 1965.

Ref. 4 Hill G.R. and Dougan P. "The characteristics of a low temperature in-situ shale oil", 4th Oil Shale Symposium, CSM, 1967.

Ref. 5 Budd C.H., McLamore T.T., and Gray K.E. "Microscopic examination of mechanically deformed oil shale", 42nd Petr. Engrs Fall Mtg, SPE 1826, 1967.

Ref. 6 Carpenter H.C. and Sohns H.W. "Application of above ground retorting variables to in situ oil shale processing", 5th Oil Shale Symposium, CSM, 1968.

Ref. 7 Barnes A.L. and Ellington R.T. "A look at in situ oil shale retorting methods based on limited heat transfer contact surfaces", 5th Oil Shale Symposium, CSM, 1968.
