Monday, October 4, 2010

Refining the Peak Oil Rosy Scenario Part 4: Testing Hubbert’s linear and nonlinear logistic models


I've always seen modeling as a stepping stone—Tyra Banks

A lot of modeling is how much crap you can take—Lauren Hutton

Logistic models
Before delving into simulated data generation and modeling, I want to make a few comments about the logistic model that I will be considering.  Hubbert briefly noted in "Techniques" (p. 49) that the logistic equation was originally derived in the 1800s to model human population growth.  It should not be too surprising that oil production and consumption follow a logistic growth model as well.  Indeed, Hubbert was well aware of the inter-relationship between exponentially increasing population growth and the exponentially increasing amount of energy made available by fossil fuels:


Prior to 1800 most of the energy available to man was that derivable from his food, the labor of his animals, and the wood he used for fuel. On a world-wide basis it is doubtful if the sum of these exceeded 10,000 kilogram-calories per man per day, and this was a several-fold increase over the energy from food alone.

After 1800 there was superposed on top of these sources the energy from fossil fuels. From a world average of 300 kilogram-calories per capita per day in 1800 the energy from coal and petroleum increased to 9,880 by 1900, and to 22,100 by 1940. In the areas of high industrialization this quantity is much larger. In the United States, for example, the energy from coal and petroleum consumed per day per capita amounted in 1940 to 114,000 kilogram-calories (2); and from coal, petroleum, and natural gas, to 129,000.
....
Viewed on such a time scale, the curve of human population would be flat and only slightly above zero for all preceding human history, and then it too would be seen to rise abruptly and almost vertically to a maximum value of several billion. Thereafter, depending largely upon what energy supplies are available, it might stabilize at a maximum value, as in Curve I, or more probably to a lower and more nearly optimum value, as in Curve II. However, should cultural degeneration occur so that the available energy resources should not be utilized, the human population would undoubtedly be reduced to a number appropriate to an agrarian existence, as in Curve III.

These sharp breaks in all the foregoing curves can be ascribed quite definitely, directly or indirectly, to the tapping of the large supplies of energy stored up in the fossil fuels. The release of this energy is a unidirectional and irreversible process. It can only happen once, and the historical events associated with this release are necessarily without precedent, and are intrinsically incapable of repetition. MK Hubbert, Energy from Fossil Fuels, Science. February 4, 1949, Vol. 109, p. 103-109.
                          

And, more recently, as more bluntly put by Roscoe Bartlett, modern industrial society literally “eats” oil:  
What we do here to look back through history when the industrial revolution first started with the brown line there which is wood and it was stuttering of course when we found coal. And then we found oil and look what happened. And the ordinate here is quadrillion BTU’s and the abscissa, of course, is time. I would just like to note that the population curve of the world roughly follows the curve for oil here. We started out with about a billion people and now we have about 7 billion people almost literally eating oil and gas because of the enormous amounts of energy that go into producing food. Almost half the energy that goes into producing a bushel of corn comes from the natural gas that we use to produce the nitrogen fertilizer.  Roscoe Bartlett, R-MD, HEARING BEFORE THE SUBCOMMITTEE ON ENERGY AND AIR QUALITY, DECEMBER 7, 2005, Serial No. 109-41.  Available online here or here.
At least for modern industrial society, I think of fossil fuels as being essentially equivalent to a food source that "fuels" the growth of human population—the faster we pump the oil, the faster the population can grow.  Obviously, oil and other fossil fuels are not a direct food source; rather, they indirectly make food more available, through a variety of means that I will not go into here.  Therefore, we should not be surprised to see that oil production and consumption follow a logistic model, just as human population growth follows a logistic model.  Rates of oil consumption and population growth are joined at the hip.

My Modeling Goals

Like Tyra Banks, I see modeling as a stepping stone to more interesting things.  My long-term goal is to use Hubbert’s linear logistic equation (equation 27 in Part 3) to estimate “a,” the rate constant for oil production decline, as well as the rate constant for oil consumption, so that I can make new estimates of the prospects for oil importation into the USA using the export-land model over the next few years.  It is appropriate first, however, to do some modeling to check the ability of Hubbert’s linear logistic equation to estimate these rates.

I agree with Lauren Hutton, however, that those who model have to take a lot of crap.  You are not analyzing real data, so it can seem like you're wasting your time.  You always get critics who say that you didn't model the right thing.  Still other critics will not be very happy if your modeling exposes some flaws in the model.  I am aware that there was some controversy when Jeff Brown and colleagues first applied the linear equation to their export-land model (e.g., see Robert Rapier’s articles part 1 and part 2).  As a former modeler (of spectroscopic data), I am sympathetic to the points that Rapier was trying to make.

For instance, in part I of his critique, Rapier used Texas oil production data to show that, depending on which range of years one used, the estimates of Q∞ from the horizontal intercept of the plot could vary quite dramatically.  For example, taking oil production data available from 1960 versus 1980 versus 2006 gave different estimates of Q∞ and hence Q∞/2, the point where one-half of the recoverable oil has been extracted.  Although Q∞/2 is often implied to correspond to "peak oil" (i.e., the year of peak oil production), it is not the same thing.  Q∞/2 coincides with peak oil only if the production (dQ/dt) versus time plot is symmetric—i.e., a rosy scenario.

Although this is important for estimating the year of Q∞/2 (which may equal peak oil production), that is not what I am interested in doing here.  I am interested in estimating the value of “a,” which corresponds to the vertical intercept.  In particular, I am interested in being able to estimate recent changes in production and consumption, because these are the numbers that will most affect my estimates of the oil available for importation into the USA versus rates of oil consumption in the USA.

Nevertheless I think that modeling is a worthwhile exercise to validate the procedure and gain some insights into how best to apply the model when examining real data.

One final note: I have presented this analysis in what some might think to be excruciating detail.  I do this for two reasons: a) I want to make it clear that anyone with a spreadsheet and some high-school math can do this and reproduce this—you don’t have to be an ivory tower scientist to model! However, you do have to be willing to put in some time.  b) This is my blog, and I am not constrained by space!

Simulated data generation and assumptions

For those of you who have never done modeling before, I want to point out what may be obvious to those with experience—the strength of modeling is that you know the true answer to the problem that you are trying to solve, because you have generated the simulated data set based on that answer.

Here, I have assumed some reasonable values of the parameters in Hubbert's logistic equation, based on Hubbert's own analysis of USA (lower 48 states) oil production, and used these to simulate a data set of Q versus t.  Then, I used this simulated data set to test the ability of Hubbert’s linear logistic model to predict those parameters. 

As a starting point, I used the logistic equation and parameter values for Q∞, a, b and the production start time (to) as presented in Hubbert’s Figure 15 shown at the end of part 3:

                    Q = Q∞ / (1 + b∙e–a(t-1900))                          [1]

Q∞= 170 bbls (i.e., 170 billion barrels of oil)
a = 0.0687 yr-1 (i.e., an exponential rate of consumption equal to 6.87 percent per year)
to = 1900 yrs
b = 46.8

Notice that b is the same as No in Hubbert's equation (38) (see Part 3 of this series).  Recalling that No = (Q∞-Qo)/Qo and given that Q∞ is assumed to equal 170, we can solve for Qo:
                  46.8 = (170 – Qo)/ Qo

                  Qo = 170/(46.8+1) = 3.556 bbls

The value of “b” reflects Hubbert’s assumption (or his knowing, from data not explicitly presented in "Techniques") of a certain amount of cumulative oil production by 1900 (recall his comment that "the choice of the date for t = 0 is arbitrary so long as it is within the range of the production cycle so that No will have a determinate finite value").    

Using the above equation [1] and these parameters, I can use a spreadsheet to generate a simulated data set of Q versus t (in years), and then use this to calculate dQ/dt.  This is easy in a spreadsheet: for the first year, dQ equals Q(1901)-Q(1900) and dt equals 1901-1900; for the second year, dQ equals Q(1902)-Q(1901) and dt equals 1902-1901; and so on.  Figure 1 shows a plot of dQ/dt versus t for the simulated data ranging from 1901 to 2010.  I will refer to this plot as the production curve.
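For readers who prefer code to spreadsheets, here is a minimal Python sketch of the same data generation (the variable names are mine; the parameter values are those listed above):

```python
import numpy as np

# Parameter values from Hubbert's Figure 15, as given above
Q_inf = 170.0   # ultimate cumulative production, billion barrels
a = 0.0687      # exponential rate constant, 1/yr
b = 46.8        # b = No = (Q_inf - Qo)/Qo
t0 = 1900       # reference year

years = np.arange(1900, 2011)

# Cumulative production Q(t) from equation [1]
Q = Q_inf / (1.0 + b * np.exp(-a * (years - t0)))

# Yearly production by first differences (dt = 1 yr):
# dQ/dt for 1901 is Q(1901) - Q(1900), and so on
dQdt = np.diff(Q)
prod_years = years[1:]   # 1901 .. 2010, for the Figure 1 plot
```

Plotting `dQdt` against `prod_years` reproduces the bell-shaped production curve of Figure 1, with the peak falling in the mid-1950s.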



Analyzing simulated data using Hubbert’s linear logistic model

Okay, now we have our data set for which we absolutely know the correct answers: Q∞ = 170 bbls and a = 0.0687 yr-1.  When we linearize this data according to Hubbert’s linear logistic equation and perform linear regression analysis to calculate the intercepts, do we get these answers back?

To linearize the simulated data, first recall the form of the linear logistic model:
                               (dQ/dt)/Q = a - (a/Q∞)Q                      [2]
Therefore, from my data set of (Q, t) values I need to calculate the production rate (dQ/dt) as shown in Figure 1, divide this by Q (e.g., Q in 1901, 1902, etc.), and then plot (dQ/dt)/Q against Q.  Figure 2 shows the result:


If I then perform linear regression analysis of the entire data set from 1901 to 2010, using the SLOPE, INTERCEPT and RSQ features of Excel, here is the result:

slope = -3.9 x 10^-4
y-intercept = 0.06695 = a
x-intercept = 171.5 = Q∞
r2 = 0.9998

The plot looks pretty linear to the human eye and the r2 supports that this is a substantially linear plot.  Still, it is somewhat surprising to me, given that I am using “perfect” (simulated) data, that the y-intercept and x-intercept are not exactly equal to “a” and Q∞, respectively.  Nevertheless, the percentage error from the true value for both parameters (i.e., “a” underestimated by 2.5 percent and Q∞ overestimated by 0.9 percent) is fairly small and certainly would be good enough for my purposes. 
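The linearization and regression just described can be sketched in Python as well.  One assumption of mine: I divide each year's production by the cumulative Q at the end of that year; a spreadsheet layout that divides by the start-of-year Q instead shifts the intercepts slightly, which is related to the small discrepancies noted above:

```python
import numpy as np

# Regenerate the simulated data from equation [1]
Q_inf, a, b, t0 = 170.0, 0.0687, 46.8, 1900
years = np.arange(1900, 2011)
Q = Q_inf / (1.0 + b * np.exp(-a * (years - t0)))

dQdt = np.diff(Q)   # yearly production, 1901..2010
Qx = Q[1:]          # cumulative Q for each production year

# Linearized form [2]: (dQ/dt)/Q = a - (a/Q_inf)*Q
y = dQdt / Qx

# Ordinary linear regression, as with Excel's SLOPE/INTERCEPT
slope, intercept = np.polyfit(Qx, y, 1)
a_est = intercept                 # y-intercept estimates "a"
Q_inf_est = -intercept / slope    # x-intercept estimates Q_inf
```

The estimates land close to, but not exactly on, the true values of 0.0687 and 170, for the discretization reasons discussed here and in the comments below the post.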

Accuracy when using limited time segments
Of course, we are never going to have a full data set such as that shown in Figure 1.  How well does the linear model predict “a” and Q∞ if we only use partial data, and does it matter where in the production curve of Figure 1 we take the partial data from?

To address these questions, I performed linear regression analysis to calculate the y-intercept (“a”) and the x-intercept (Q∞) for independent 5-year segments along the whole span of the data set from 1901 to 2010.  Then, for each of these 5-year segments, I calculated the percentage error in the estimated values of “a” and Q∞ as compared to the known true values (0.0687 and 170, respectively).
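This segment-by-segment procedure can be sketched as follows, assuming the same yearly-difference construction as before (the segment boundaries and error definitions follow the text; the exact numbers depend on the spreadsheet layout):

```python
import numpy as np

# Simulated data from equation [1], linearized as in equation [2]
Q_inf, a_true, b, t0 = 170.0, 0.0687, 46.8, 1900
years = np.arange(1900, 2011)
Q = Q_inf / (1.0 + b * np.exp(-a_true * (years - t0)))
dQdt = np.diff(Q)
Qx = Q[1:]
y = dQdt / Qx

results = []
for start in range(0, len(y) - 4, 5):   # independent 5-year segments
    seg = slice(start, start + 5)
    slope, intercept = np.polyfit(Qx[seg], y[seg], 1)
    a_est = intercept
    Qinf_est = -intercept / slope
    results.append((1901 + start,                       # first year of segment
                    100 * Qx[seg].mean() / Q_inf,       # average "% of Q_inf"
                    100 * (a_est - a_true) / a_true,    # % error in "a"
                    100 * (Qinf_est - Q_inf) / Q_inf))  # % error in Q_inf
```

Each tuple in `results` corresponds to one row of the table discussed next: the segment's first year, the average "% of Q∞," and the percentage errors in the two intercept-derived parameters.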

Here are some examples for the 1901 to 1905, 1951 to 1955, and 2006 to 2010 5-year segments:

The term Average “% of Q∞” simply refers to the average value of Q within the time segment of interest, divided by the true value of Q∞.  This term helps show where we are on the production curve shown in Figure 1.

The table shows a trend for “a” to be underestimated by a few percent when we analyze time segments at the beginning of the curve and to be overestimated when using time segments at the end of the curve.  There is a trend for Q∞ to be overestimated when we analyze time segments at the beginning of the curve, and the error progressively decreases as we use time segments towards the end of the curve.

These trends are more clearly shown in plots of the % of true values versus time segment, as shown in the two figures below (for the horizontal axis I have just depicted the first year of each five-year segment).  Figure 3 shows the percentage difference of the predicted value of “a” from the true value as a function of the percentage of Q∞ along the production curve, and Figure 4 shows the analogous percentage difference for the predicted value of Q∞:

As you can see, for all time segments the absolute value of the % error never exceeds about 3.5% for the prediction of “a” and never exceeds 7% for the prediction of Q∞.  Moreover, if you only consider time segments where Q is in the range of 20 to 80% of Q∞ (i.e., the likely range we will encounter when analyzing real data), the error in predicting “a” is never greater than about -3% to +1% and the error in predicting Q∞ is never greater than about +4.5%.

I have repeated this exercise using values of “a” ranging from 50% to 150%, and values of “b” ranging from 50% to 200%, of the values used to generate the simulated data in Figure 1.

For Q in the range of 20 to 80% of Q∞, I never see the error in “a” fall outside of about -4.4% to +1.5%, and the error in Q∞ is never greater than about +7%.  These upper error bounds all occurred for the same set of simulated data, where “a” was set equal to 0.10305 (i.e., a rate 50 percent greater than that used in Figure 1).

Therefore, it looks like the linear logistic equation can make reasonably accurate predictions of “a” and Q∞ under the conditions studied here.  That is, provided that the exponential rate constant for production, “a,” remains constant throughout the entire production curve.

Accuracy when the rate constant for production declines post peak production
This really gets to the gist of the matter: how well does Hubbert’s linear logistic equation (27) predict oil production when there is a decline in the exponential rate constant?  Being sure that an accurate estimate of the rate constant can be made using this model is pretty fundamental to being able to apply it to real data for the purposes of refining the export-land model for the USA, as outlined in Part 1 of this series.

To test this, I have taken the simulated data from Figure 1 (“a” = 0.0687) and assumed that after the peak year (i.e., 1956, the year where the yearly production rate, dQ/dt, reaches its maximum), for each year to follow, the exponential rate constant “a” decreases by 2 percent per year.  That is, in 1957, a equals a(1956) x 0.98; in 1958, a equals a(1957) x 0.98; etc.

Here’s what the production curve using these assumptions looks like:

Figure 5 shows simulated production data with Q∞ = 170 and a = 0.0687, with “a” then decreasing 2%/yr from 1957 on.  For reference, I also show the previous data set where “a” = 0.0687 throughout the entire production curve.

I note here that for each of the subsequent years, 1957 and on, where “a” changes, one needs to apply an adjusted logistic equation, in which not only “a” is changed as described above, but “b” and to are also adjusted to reflect the accumulation, Q, up to the transition year.  Recall again that b = No = (Q∞-Qo)/Qo; the values of Qo and to for the subsequent years with the altered value of “a” are adjusted accordingly.  For instance, in 1956, Q = 85.056.  Therefore, for the subsequent year, 1957, where “a” is decreased to 0.067326, “b” is given by:

b = (Q∞-Qo)/Qo = (170 – 85.056)/ 85.056 = 0.9987,  and to equals 1956. 

Similar adjustments have to be made for each subsequent year along the production curve.
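The year-by-year re-anchoring described above can be sketched as follows (a Python translation of my reading of the spreadsheet construction, not the original workbook):

```python
import numpy as np

# Growth-side parameters from equation [1]
Q_inf, t0, a0, b0 = 170.0, 1900, 0.0687, 46.8

years = np.arange(1900, 2011)
Q = np.empty(len(years))
a_t = a0
for i, t in enumerate(years):
    if t <= 1956:
        # constant "a" up to the 1956 peak
        Q[i] = Q_inf / (1.0 + b0 * np.exp(-a0 * (t - t0)))
    else:
        # decrease "a" by 2%/yr and re-anchor b and to at the prior year
        a_t *= 0.98
        Qo = Q[i - 1]
        b_t = (Q_inf - Qo) / Qo
        Q[i] = Q_inf / (1.0 + b_t * np.exp(-a_t))  # one-year step from to = t-1

dQdt = np.diff(Q)  # yearly production for the Figure 5 curve
```

Because each post-peak year re-anchors the logistic at the previous year's cumulative Q, only a single one-year step with the current "a" is needed per year; the value of "b" for 1957 comes out near 1, as in the worked example above.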

Figure 6 shows the linearization of the simulated production data:
Next, I estimated “a” and Q∞ for sequential 5-year segments and then estimated the error in the predicted values of these parameters as compared to the true values, similar to that described above in the context of Table 1 and Figures 3-4—with one modification.  In the present case, because the value of “a” is modeled to steadily decrease after the peak production year (1956), there is no single true “a” value within any of these post-peak 5-year segments.  Therefore, I have calculated the average “a” for each of the 5-year segments and used this as the true value of “a” in my evaluation of the linear logistic equation’s ability to accurately predict “a” and Q∞.

Figures 7 and 8 show the percentage difference in the predicted values of “a” and Q∞, respectively:

Up until the production maximum at % of Q∞ = 50, the prediction results, of course, are the same as depicted in Figures 3 and 4.  However, once we enter the range of data where % of Q∞ > 50 and “a” is assumed to decrease 2%/yr, the predicted “a” and Q∞ vary wildly from the true values, especially at the transition from a constant “a” to a declining “a.”  The error in the predicted value of “a” gets progressively worse as Q → Q∞ and approaches a 200% over-estimation at % of Q∞ ~ 90.  The error in the prediction of Q∞ on a percentage basis is not as bad, but it still badly under-estimates the true value at the transition and then progressively improves as Q → Q∞.  Still, there is a systematic underestimation of Q∞ in the range of about -5% to -16%.

I will not show the data here, but for analogous simulated data with “a” assumed to decrease by 4%/yr following peak production in 1956, the overestimation of “a” and underestimation of Q∞ are even worse (“a” overestimated by as much as 600% and Q∞ underestimated by as much as -200%).

These results are disappointing to say the least. 

We need a better model.

A constrained logistic nonlinear analysis of simulated data assuming a declining production rate constant

Given the above disappointing results, I have abandoned Hubbert’s linear logistic equation in favor of directly applying the non-linear logistic equation, that is, Hubbert’s equation (38).  More specifically, I am using the time derivative of the equation in Hubbert’s Fig. 15, shown in Part 3 of this series, or in equation [1] shown above in this Part 4:

                              dQ/dt = [Q∞/(1 + No∙e–a(t-to)) – Q∞/(1 + No∙e–a(t-Δt-to))]/Δt,            [3]

where No = (Q∞-Qo)/Qo, and Δt = time increment (1 year).  That is, the predicted production for a given year is the difference between the logistic’s cumulative Q for that year and for the year before, divided by the one-year increment.

One technical problem to overcome is how to perform a non-linear least squares (NLLS) analysis, preferably within the confines of an Excel spreadsheet, and preferably without laying out a lot of money for a fancy software package.  Fortunately, Excel’s SOLVER feature (in the Tools menu) provides a means of doing this, and there is a nice example on-line of how to set this up (Jeff Graham, Regression Using Excel's Solver, ICTCM, vol. 13, Paper C013, 2000).

In brief, the NLLS analysis was done by setting up a column with formulas to estimate dQ/dt according to equation [3] for each production year.  Then I subtracted this predicted result from the actual or measured value (in this case the simulated production data set).  Then I took the sum of the squares of the differences for the set of cells subject to the analysis, and put this into a single cell that SOLVER minimizes by adjusting the values of the three cells holding the parameters “a,” Q∞ and Qo.  Note: the exact same three cells holding these parameters have to appear in each of the formulas representing equation [3] in the column used to estimate dQ/dt.
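For those without Excel, the same NLLS fit can be sketched with SciPy in place of SOLVER.  Here equation [3] is read as the one-year difference of the logistic, and the seed values and bounds are my own choices, not anything from the original spreadsheet:

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulated "measured" production (constant "a", as in Figure 1)
Q_inf_true, a_true, b_true, t0 = 170.0, 0.0687, 46.8, 1900
years = np.arange(1900, 2011)
Q = Q_inf_true / (1.0 + b_true * np.exp(-a_true * (years - t0)))
dQdt = np.diff(Q)
t = years[1:].astype(float)

# Equation [3] as the one-year difference of the logistic:
# predicted production for year t is Q(t) - Q(t-1)
def model(t, a, Q_inf, Qo):
    No = (Q_inf - Qo) / Qo
    Qt = Q_inf / (1.0 + No * np.exp(-a * (t - t0)))
    Qp = Q_inf / (1.0 + No * np.exp(-a * (t - 1 - t0)))
    return Qt - Qp

# Least-squares fit of "a", Q_inf and Qo (the SOLVER analogue)
p0 = [0.05, 150.0, 3.0]                        # rough seed values
bounds = ([0.01, 100.0, 0.5], [0.2, 300.0, 20.0])
popt, _ = curve_fit(model, t, dQdt, p0=p0, bounds=bounds)
a_fit, Qinf_fit, Qo_fit = popt
```

Because the simulated data were generated from the same logistic, the fit should recover the true parameters almost exactly, which is a useful check that the setup is working before moving on to the declining-"a" cases.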

After doing a few trial runs of NLLS analysis on the simulated data shown in Figure 6, it became apparent to me that good predictions of “a” could be obtained for that portion of the production curve simulating the transition from a constant value to a steadily decreasing value for “a,” if Q∞ was constrained to a fixed value. 

For the analysis of the simulated data sets to follow, I have adopted the following procedure:

1) Use the production curve corresponding to Q in the range from 20% to 50% of Q∞ to estimate “a,” Q∞ and Qo.  I will call these the growth-side production parameters.

2) Use the values of the growth-side production parameters as the seed values for the spreadsheet column used to estimate dQ/dt for the successive 5-year segments on the decline side of the production curve, and fix Q∞ (and, indirectly, Qo) to the seed value determined in step (1).

3) Run the NLLS analysis for the successive 5-year segments to predict “a,” and compare this to the average “a” for the 5-year segment being analyzed to determine the % difference from this true value of “a.”
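Here is one possible reading of steps (2) and (3) in code.  I have assumed that, for each decline-side segment, to is re-anchored to the year before the segment and Qo is taken as the cumulative production at that year (one way to fix Qo "indirectly"), with Q∞ fixed at the growth-side estimate and only "a" fit:

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulated data: "a" declines 2%/yr after the 1956 peak
Q_inf_true, t0, a0, b0 = 170.0, 1900, 0.0687, 46.8
years = np.arange(1900, 2011)
Q = np.empty(len(years))
a_by_year = np.full(len(years), a0)
for i, t in enumerate(years):
    if t <= 1956:
        Q[i] = Q_inf_true / (1.0 + b0 * np.exp(-a0 * (t - t0)))
    else:
        a_by_year[i] = a_by_year[i - 1] * 0.98
        b_t = (Q_inf_true - Q[i - 1]) / Q[i - 1]
        Q[i] = Q_inf_true / (1.0 + b_t * np.exp(-a_by_year[i]))
dQdt = np.diff(Q)

# Step 1 (growth side) would give roughly this; value taken from the text
Qinf_fix = 170.4

# One-parameter model: logistic re-anchored at year ts with known Qo
def make_model(Qo, ts):
    No = (Qinf_fix - Qo) / Qo
    def f(t, a):
        Qt = Qinf_fix / (1.0 + No * np.exp(-a * (t - ts)))
        Qp = Qinf_fix / (1.0 + No * np.exp(-a * (t - 1 - ts)))
        return Qt - Qp
    return f

# Steps 2-3: fit "a" on successive decline-side 5-year segments
results = []
a_seed = 0.067
for s in range(1957, 2003, 5):              # segments 1957-61 ... 2002-06
    ts = s - 1
    Qo = Q[ts - 1900]                       # cumulative Q at the anchor year
    tt = np.arange(s, s + 5, dtype=float)
    data = dQdt[s - 1901 : s - 1901 + 5]
    popt, _ = curve_fit(make_model(Qo, ts), tt, data, p0=[a_seed])
    a_fit = popt[0]
    a_seed = a_fit                          # seed the next segment's fit
    a_true = a_by_year[s - 1900 : s - 1900 + 5].mean()
    results.append((s, a_fit, a_true, 100 * (a_fit - a_true) / a_true))
```

With Q∞ (and the segment anchor) held fixed, the one-parameter fits track the declining segment-average "a" closely, which is the behavior reported for this procedure below.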

Results for non-linear least squares analysis of simulated data

Figure 9 shows a composite of the production curve plots of 5 simulated data sets that were subjected to this analysis.  For all five data sets, the growth-side production parameters are the same as before (Figure 1):
     Q∞ =  170 bbls;
     Qo =  3.556 bbls (equivalent to “b” = 46.8); and
     “a” = 0.0687 yr-1
Thereafter, following the peak production in 1956, the value of “a” is yearly decreased by the different percentages shown in the legend to Figure 9.

The NLLS analysis of the production curve data for Q in the range from 20% to 50% of Q∞ (1934 to 1954 in Figure 9) gave the following predicted values for the growth-side production parameters:
     Q∞ = 170.4 bbls;
     Qo = 3.579 bbls; and
     “a” = 0.06857 yr-1
This is in pretty good agreement with the “true” values summarized above (as to be expected when dealing with simulated data). 

These values were then used as the seed values for the NLLS analysis of successive 5-year segments on the decline side of the production curves for each of the 5 simulated data sets.

Some example results for the simulated data set where “a” is decreased by 2%/yr are presented in Table 2:
The predictions of “a” are in excellent agreement with the average true “a” values for these time segments.  The agreement is much better than the predictions obtained using the linear logistic model (see Figure 7 above).  This is due, at least in part, to fixing the value of Q∞ for the NLLS analysis done on the decline-side part of the production curve.

There is a slight trend towards underestimating the values of “a.”  This becomes clearer in a plot of the % of the true value versus % of Q∞, for all five simulated data sets, as shown in Figure 10:
The trend towards underestimating “a” progresses as Q → Q∞.  This is due to the slightly higher-than-true value of Q∞ (170.4 versus 170) that was set as the fixed value.

Overall, however, I am satisfied that the three-step procedure outlined above could, at least in principle, make accurate predictions of a declining production rate constant.

A constrained logistic nonlinear analysis of simulated data assuming two different declining production rate constants

One final test before evaluating some “real” data: how well does the NLLS analysis procedure predict “a” for simulated data where, following peak production, there is an initial rapid decline in “a” followed by a milder decline?

To test this, I again assumed growth-side production parameters the same as those used to generate the simulated data shown in Figure 9.  Thereafter, following the peak production in 1956, the value of “a” was decreased by 8%/yr from 1957 to 1970 and then by 2%/yr thereafter.

Figure 11 shows a plot of the production curve for this simulated data set, along with a reference curve showing the data set where “a” was fixed at 0.0687 yr-1 throughout the entire production curve.




Once again, excellent predictive results were obtained.  As shown in Figure 12, the % difference from the true value (again, the average value of “a” for the same 5-year segment) is less than 1 percent over the range of % of Q∞ covered by the data set.


Well, that was a long one, but well worth the effort!  I hope...

Next up, let’s see how the NLLS process handles some real data.

5 comments:

  1. > r2 = .9998

    > The plot looks pretty linear to the human eye and the r2 supports that this is a substantially linear plot. Still, it is somewhat surprising to me, given that I am using "perfect" (simulated) data, that the y-intercept and x-intercept are not exactly equal to "a" and Q∞, respectively.

    The explanation is that you have used 200 data points to get from t0 to t∞, and ideally you would use ∞ data points. To put that another way, you have charted what appears to be a smooth curve, but the chart should really have been drawn as a column type, with 200 columns, their areas representing the quantity of oil produced in the year, and 200 abrupt steps from one column to the next. If you had used monthly data, you would have 2,400 data points, the steps would have been smaller, and r2 would have been even closer to 1.000.

    ReplyDelete
  2. Dave

    It's a good idea and that may be it. I haven't gone back to test this, but my feeling is that it has to do with the value of Qo I assumed from b = 46.8. I did this before I appreciated how sensitive the linearity of equation [2] is to the value of Qo, as part 6 in this series showed.

    ReplyDelete
  3. I found the comparison between Tyra Banks and Hubbert’s linear logistic to be hilarious. The thing i liked about your post, is that you turned a very heavy topic to a light fun reading material.

    ReplyDelete
    Thanks MC—yes, I was trying to find some way to brighten up what is otherwise a pretty dry subject for most people. And, on this point, Tyra and I are in complete agreement! I never thought I would say this, but the girl has some good insights!

    Going into this analysis, as a prequel to looking at real data from real countries, I had high hopes for the linear logistic model, if for no other reason than that many others had used it. However, after my experiences with modeling simulated data, I lost my enthusiasm.

    Instead, as discussed in the article, I turned to non-linear least squares (NLLS) analysis of the logistic model, which is still fairly easy to implement within an Excel spread sheet environment, and, seems to give more reasonable results. There still might be some situation where I would have to use the linear model, but I haven’t found it yet.

    ReplyDelete

Your comments, questions and suggestions are welcome! However, comments with cursing or ad hominem attacks will be removed.