Forecasting

Citation metadata

Editor: Sonya D. Hill
Date: 2012
Encyclopedia of Management
Publisher: Gale, a Cengage Company
Document Type: Topic overview
Pages: 6
Content Level: (Level 4)


Forecasting

Forecasting involves the generation of a number, set of numbers, or scenario that corresponds to a future occurrence, and it is helpful for both short-range and long-range planning. Properly prepared forecasts should be able to address blips that arise from one-off events as well as from major seasonal factors. Spreadsheets can be used for analyzing past trends and determining forecasts.

Forecasting is based on a number of assumptions:

  1. The past will repeat itself. In other words, what has happened in the past will happen again in the future.
  2. As the forecast horizon shortens, forecast accuracy increases. For instance, a forecast for tomorrow will be more accurate than a forecast for next month; a forecast for next month will be more accurate than a forecast for next year; and a forecast for next year will be more accurate than a forecast for 10 years in the future.
  3. Forecasting in the aggregate is more accurate than forecasting individual items. This means that a company will be able to forecast total demand over its entire spectrum of products more accurately than it will be able to forecast individual stock-keeping units. For example, General Motors can more accurately forecast the total number of cars needed for next year than the total number of white Chevrolet Impalas with a certain option package.
  4. Forecasts are almost never totally accurate, although some are very close. Therefore, it is wise to offer a forecast “range.”

FORECASTING LIMITATIONS

While many forecasting models exist to support businesses across a range of decision making, forecasting does have inherent limitations of which all businesses should be aware. Many companies ruin their forecasting processes with unnecessary additions and qualifications that show an intrinsic misunderstanding of what forecasting is and what it can do.

An Institute of Business Forecasting and Planning article by Michael Gilliland, “What We Learned About Forecasting in 2010,” stated a primary issue: “For behaviors that are not amenable to statistical forecasting methods, recognition of this ‘unforecastability’ is a key first step. We wisely do not apply super-human efforts to forecast Heads or Tails in the tossing of a fair coin, because we recognize the randomness and our inability to improve upon a simple guess.” If all possible situations have an equal chance of occurring, forecasting cannot help businesses determine which is more likely. Customer demand, for instance, is so complex a variable that it can never be forecast with complete accuracy.

Forecasting can also become heavily politicized. If too many analysts are involved in a forecasting project, or have conflicting aims, bosses to impress, or other motivations, then the forecasting can become heavily skewed. Special interests and biases are a rotten foundation to build forecasting on and should be avoided at all costs.

Sometimes forecasting models do not fit forecasting requirements correctly. Just because a particular forecasting
model fits the history of the business does not mean it will fit the future. Companies need to examine each situation individually to ascertain the best forecasting model without assuming the past model will still work best. In the same step, companies should try to reduce waste associated with forecasting. Michael Gilliland's 2010 book, The Business Forecasting Deal, suggests a lean approach to forecasting that, like lean manufacturing, examines forecasting procedures for signs of any steps that do not add direct value to the forecast, and removes them. Wasted resources are still wasted when applied unnecessarily to forecasting.

Lastly, companies need increased faith in their forecasting techniques. Some companies allow analysts to apply “overrides” on sets of data that do not match their personal experiences or judgments, but this is usually a mistake. According to Gilliland, “Analysis found that the overall forecasting process was generating its value from the statistical forecasting system, not from judgmental overrides made by planners/forecasters. Overrides were found to make the forecast worse 60% of the time.”

GOOD FORECASTS

In Operations Management, William J. Stevenson lists a number of characteristics that are common to a good forecast:

  • Accurate—some degree of accuracy should be determined and stated so that comparison can be made to alternative forecasts.
  • Reliable—the forecast method should consistently provide a good forecast if the user is to establish some degree of confidence.
  • Timely—a certain amount of time is needed to respond to the forecast so the forecasting horizon must allow for the time necessary to make changes.
  • Easy to use and understand—users of the forecast must be confident and comfortable working with it.
  • Cost-effective—the cost of making the forecast should not outweigh the benefits obtained from the forecast.

Forecasting techniques range from the simple to the extremely complex. These techniques are usually classified as being qualitative or quantitative.

QUALITATIVE TECHNIQUES

Qualitative forecasting techniques are more subjective than quantitative forecasting techniques. Qualitative techniques are more useful in the earlier stages of the product life cycle, when less past data exists for use in quantitative methods. Qualitative methods include the Delphi technique, the nominal group technique (NGT), sales force opinions, executive opinions, market research, and rule developing experimentation.

The Delphi Technique. The Delphi technique uses a panel of experts to produce a forecast. Each expert is asked to provide a forecast specific to the need at hand. After the initial forecasts are made, each expert reads what every other expert wrote and is influenced by their views. A subsequent forecast is then made by each expert. Each expert then reads again what every other expert wrote and is again influenced by the perceptions of the others. This process repeats until the experts approach agreement on the needed scenario or numbers.

Nominal Group Technique. The nominal group technique is similar to the Delphi technique in that it utilizes a group of participants, usually experts. After the participants respond to forecast-related questions, they rank their responses in order of perceived relative importance. Then the rankings are collected and aggregated. Eventually, the group should reach a consensus regarding the priorities of the ranked issues.

Sales Force Opinions. The sales staff is often a good source of information regarding future demand. The sales manager may ask for input from each salesperson and aggregate their responses into a sales force composite forecast. Caution should be exercised when using this technique as the members of the sales force may not be able to distinguish between what customers say and what they actually do. Also, if the forecasts will be used to establish sales quotas, the sales force may be tempted to provide lower estimates.

Executive Opinions. Sometimes upper-level managers meet and develop forecasts based on their knowledge of their areas of responsibility. This is sometimes referred to as a jury of executive opinion.

Market Research. In market research, consumer surveys are used to establish potential demand. Such market research usually involves constructing a questionnaire that solicits personal, demographic, economic, and marketing information. On occasion, market researchers collect such information in person at retail outlets and malls, where the consumer can experience—taste, feel, smell, and see—a particular product. The researcher must be careful that the sample of people surveyed is representative of the desired consumer target.

Rule Developing Experimentation. Rule developing experimentation (RDE) is part qualitative and part quantitative. It resembles market research but uses randomized programs to help identify the most effective product packages. Companies create categories of possible features for a product, and the RDE program randomly assigns these features into a large number of possible products. This can be applied to food items, technology products, and


many other possible goods. Companies then test these possible products with customers and identify the features or bundles of features that resonate the most with customers. RDE allows businesses to identify products with the most potential sales, or different customer groups with separate needs that were not previously acknowledged. Since the product prototype creation is randomized, human error is diminished; ideally, this reveals new possibilities and options.

QUANTITATIVE TECHNIQUES

Quantitative forecasting techniques are more objective than qualitative forecasting methods. Quantitative forecasts can be time-series forecasts (i.e., a projection of the past into the future) or forecasts based on associative models (i.e., based on one or more explanatory variables). Time-series data may have underlying behaviors that need to be identified by the forecaster. In addition, the forecast may need to identify the causes of the behavior. Some of these behaviors may be patterns or simply random variations. Among the patterns are:

  • Trends, which are long-term movements (up or down) in the data.
  • Seasonality, which produces short-term variations that are usually related to the time of year, month, or even a particular day, as witnessed by retail sales at Christmas or the spikes in banking activity on the first of the month and on Fridays.
  • Cycles, which are wavelike variations lasting more than a year that are usually tied to economic or political conditions.
  • Irregular variations that do not reflect typical behavior, such as a period of extreme weather or a union strike.
  • Random variations, which encompass all nontypical behaviors not accounted for by the other classifications.

Among the time-series models, the simplest is the naïve forecast. A naïve forecast simply uses the actual demand for the past period as the forecasted demand for the next period. This makes the assumption that the past will repeat. It also assumes that any trends, seasonality, or cycles are either reflected in the previous period's demand or do not exist. An example of naïve forecasting is presented in Table 1.
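The naïve method can be sketched in a few lines of Python (a language chosen here for illustration; the article itself presents no code). It simply shifts the demand series forward one period; the figures are those from Table 1:

```python
def naive_forecast(demand):
    """Forecast each period as the previous period's actual demand.
    No forecast is possible for the first period."""
    return [None] + list(demand)

demand = [45, 60, 72, 58, 40]        # Jan-May actual demand (000's)
print(naive_forecast(demand))        # [None, 45, 60, 72, 58, 40]
```

The final value, 40, is the June forecast: last period's actual demand carried forward.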

Another simple technique is the use of averaging. To make a forecast using averaging, one simply takes the average of some number of periods of past data, summing the demand from those periods and dividing the total by the number of periods. This technique has been found to be very effective for short-range forecasting.

Table 1: Naïve Forecasting
Period Actual demand (000's) Forecast (000's)
January 45  
February 60 45
March 72 60
April 58 72
May 40 58
June   40



Variations of averaging include the moving average, the weighted average, and the weighted moving average. A moving average takes a predetermined number of periods, sums their actual demand, and divides by the number of periods to reach a forecast. For each subsequent period, the oldest period of data drops off and the latest period is added. Assuming a three-month moving average and using the data from Table 1, add 45 (January), 60 (February), and 72 (March) and divide by three to arrive at a forecast for April:

(45 + 60 + 72) ÷ 3 = 177 ÷ 3 = 59

To arrive at a forecast for May, drop January's demand from the equation and add the demand from April. Table 2 presents an example of a three-month moving average forecast.
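A sketch of the rolling computation in Python, assuming the same demand figures and rounding to whole units as in Table 2:

```python
def moving_average(demand, n=3):
    """n-period moving average: the forecast for period t averages
    the actual demand of periods t-n through t-1."""
    forecasts = []
    for t in range(len(demand) + 1):
        if t < n:
            forecasts.append(None)                    # not enough history yet
        else:
            forecasts.append(round(sum(demand[t - n:t]) / n))
    return forecasts

demand = [45, 60, 72, 58, 40]        # Jan-May actual demand (000's)
print(moving_average(demand))        # [None, None, None, 59, 63, 57]
```

The three numeric values are the April, May, and June forecasts; each new month's forecast drops the oldest month and adds the latest.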

A weighted average applies a predetermined weight to each month of past data, sums the past data from each period, and divides by the total of the weights. If the forecaster adjusts the weights so that their sum is equal to 1, then the weights are multiplied by the actual demand of each applicable period. The results are then summed to achieve a weighted forecast. Generally, the more recent the data is, the higher the weight, and the older the data the smaller the weight. Using the demand example, a weighted average using weights of .4, .3, .2, and .1 would yield the forecast for June as:

Table 2: Three-Month Moving Average Forecast
Period Actual demand (000's) Forecast (000's)
January 45  
February 60  
March 72  
April 58 59
May 40 63
June   57

Table 3: Three-Month Weighted Moving Average Forecast
Period Actual demand (000's) Forecast (000's)
January 45  
February 60  
March 72  
April 58 55
May 40 63
June   61

60(0.1) + 72(0.2) + 58(0.3) + 40(0.4) = 53.8
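With weights that sum to one, the computation above reduces to multiplying each period's demand by its weight and summing. A sketch:

```python
def weighted_average(demand, weights):
    """Weighted average forecast; `weights` are assumed to sum to 1
    and are listed in the same (oldest-first) order as `demand`."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(d * w for d, w in zip(demand, weights))

# Feb-May demand (000's); the most recent month is weighted most heavily
forecast = weighted_average([60, 72, 58, 40], [0.1, 0.2, 0.3, 0.4])
print(round(forecast, 1))            # 53.8
```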

Forecasters may also use a combination of the weighted average and moving average forecasts. A weighted moving average forecast assigns weights to a predetermined number of periods of actual data and computes the forecast the same way as described above. As with all moving forecasts, as each new period is added, the data from the oldest period is discarded. Table 3 shows a three-month weighted moving average forecast utilizing the weights .5, .3, and .2 (applied here to the oldest, middle, and most recent month of each window, respectively).
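The rolling weighted version can be sketched the same way; note that to reproduce the figures in Table 3 the weights are applied oldest month first:

```python
def weighted_moving_average(demand, weights):
    """Rolling weighted average; `weights` are listed oldest-first
    and are assumed to sum to 1."""
    n = len(weights)
    forecasts = [None] * n                         # no forecast for the first n periods
    for t in range(n, len(demand) + 1):
        window = demand[t - n:t]
        forecasts.append(round(sum(d * w for d, w in zip(window, weights))))
    return forecasts

demand = [45, 60, 72, 58, 40]                      # Jan-May actual demand (000's)
print(weighted_moving_average(demand, [0.5, 0.3, 0.2]))
# [None, None, None, 55, 63, 61]
```

The three numeric values are the April, May, and June forecasts, rounded to whole units.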

A more complex form of weighted moving average is exponential smoothing, so named because the weight falls off exponentially as the data ages. Exponential smoothing takes the previous period's forecast and adjusts it by a predetermined smoothing constant, α (called alpha; the value for alpha is less than one), multiplied by the difference between the previous forecast and the demand that actually occurred during the previously forecasted period (called forecast error). Exponential smoothing is expressed formulaically as such:

New forecast = previous forecast + alpha (actual demand − previous forecast), or F(t) = F(t−1) + α[A(t−1) − F(t−1)]

Exponential smoothing requires the forecaster to begin the forecast in a past period and work forward to the period for which a current forecast is needed. A substantial amount of past data and a beginning or initial forecast are also necessary. The initial forecast can be an actual forecast from a previous period, the actual demand from a previous period, or an estimate obtained by averaging all or part of the past data. Some heuristics exist for computing an initial forecast. For example, the heuristic N = (2 ÷ α) − 1 with an alpha of .5 yields an N of 3, indicating that the user should average the first three periods of data to get an initial forecast. However, the accuracy of the initial forecast is not critical if one is using large amounts of data, since exponential smoothing is “self-correcting.” Given enough periods of past data, exponential smoothing will eventually make enough corrections to compensate for a reasonably inaccurate initial forecast. Using the data from the other examples, an initial forecast of 50, and an alpha of .7, a forecast for February is computed as such:

Table 4
Period Actual demand (000's) Forecast (000's)
January 45 50.00
February 60 46.50
March 72 55.95
April 58 67.19
May 40 60.76
June   46.23


New forecast (February) = 50 + 0.7(45 − 50) = 46.5

Next, the forecast for March:

New forecast (March) = 46.5 + 0.7(60 − 46.5) = 55.95

This process continues until the forecaster reaches the desired period. In Table 4 this would be for the month of June, since the actual demand for June is not known.
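The recursion can be sketched as follows; exact decimal arithmetic is used so each step matches a by-hand calculation of the formula, and results are rounded half-up to two decimals:

```python
from decimal import Decimal, ROUND_HALF_UP

def exponential_smoothing(demand, alpha, initial_forecast):
    """Simple exponential smoothing: new F = old F + alpha * (A - old F).
    The unrounded value is carried forward; rounding happens only on output."""
    forecasts = [Decimal(initial_forecast)]
    for actual in demand:
        prev = forecasts[-1]
        forecasts.append(prev + alpha * (Decimal(actual) - prev))
    return [f.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP) for f in forecasts]

demand = [45, 60, 72, 58, 40]                        # Jan-May actual demand (000's)
print(exponential_smoothing(demand, Decimal("0.7"), 50))
# [Decimal('50.00'), Decimal('46.50'), Decimal('55.95'),
#  Decimal('67.19'), Decimal('60.76'), Decimal('46.23')]
```

The last value is the June forecast, the only period for which no actual demand yet exists.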

An extension of exponential smoothing can be used when time-series data exhibits a linear trend. This method is known by several names: double smoothing; trend-adjusted exponential smoothing; forecast including trend; and Holt's Model. Without adjustment, simple exponential smoothing results will lag the trend; that is, the forecast will always be low if the trend is increasing, or high if the trend is decreasing. With this model there are two smoothing constants, α and β, with β representing the trend component.
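A sketch of one common formulation (the function name and initial values are illustrative, not from the text): the level and trend are smoothed separately, and each forecast is their sum. On a perfectly linear series the method tracks the trend without the lag described above:

```python
def holts_method(demand, alpha, beta, level, trend):
    """Trend-adjusted (double) exponential smoothing, one common formulation:
    level  = alpha * actual + (1 - alpha) * (level + trend)
    trend  = beta * (level - previous level) + (1 - beta) * trend
    The forecast for each period is the prior level plus the prior trend."""
    forecasts = []
    for actual in demand:
        forecasts.append(level + trend)          # forecast made before seeing `actual`
        prev_level = level
        level = alpha * actual + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    forecasts.append(level + trend)              # forecast for the next, unseen period
    return forecasts

# A series rising by exactly 10 per period: with alpha = beta = 1 the
# method locks onto the trend and projects the next value without lag.
print(holts_method([10, 20, 30, 40], 1.0, 1.0, 0.0, 10.0))
# [10.0, 20.0, 30.0, 40.0, 50.0]
```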

An extension of Holt's Model, called the Holt-Winters method, takes into account both trend and seasonality. There are two versions, multiplicative and additive, with the multiplicative being the most widely used. In the additive model, seasonality is expressed as a quantity to be added to or subtracted from the series average. The multiplicative model expresses seasonality as a percentage—known as seasonal relatives or seasonal indexes—of the average (or trend). These relatives are then multiplied by the forecast values in order to incorporate seasonality. A relative of 0.8 would indicate demand that is 80 percent of the average, while 1.10 would indicate demand that is 10 percent above the average.
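A minimal illustration of the multiplicative version (the average and relatives below are hypothetical, not from the article's tables):

```python
# Multiplicative seasonality: each seasonal relative scales the
# deseasonalized average. Hypothetical quarterly figures.
average = 100                              # deseasonalized average demand (000's)
relatives = [0.8, 1.25, 0.75, 1.2]         # seasonal indexes; they average to 1.0
print([average * r for r in relatives])    # [80.0, 125.0, 75.0, 120.0]
```

A quarter with a relative of 0.8 is forecast at 80 percent of the average, exactly as described above.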

EVALUATING FORECASTS

Forecast accuracy can be determined by computing the bias, mean absolute deviation (MAD), mean square error (MSE), or mean absolute percent error (MAPE) for the forecast using different values for alpha. Bias is the sum
of the forecast errors [∑(FE)]. For the exponential smoothing example above, the computed bias would be:

(60 − 46.5) + (72 − 55.95) + (58 − 67.19) + (40 − 60.76) = −0.40

If one assumes that a low bias indicates an overall low forecast error, one could compute the bias for a number of potential values of alpha and assume that the one with the lowest bias would be the most accurate. However, caution must be observed in that wildly inaccurate forecasts may yield a low bias if they tend to be both over forecast and under forecast (negative and positive). For example, over three periods a firm may use a particular value of alpha to over forecast by 75,000 units (-75,000), under forecast by 100,000 units (+100,000), and then over forecast by 25,000 units (-25,000), yielding a bias of zero (-75,000 + 100,000 − 25,000 = 0). By comparison, another alpha yielding over forecasts of 2,000 units, 1,000 units, and 3,000 units would result in a bias of 5,000 units. If normal demand was 100,000 units per period, the first alpha would yield forecasts that were off by as much as 100 percent while the second alpha would be off by a maximum of only 3 percent, even though the bias in the first forecast was zero.

A safer measure of forecast accuracy is the mean absolute deviation (MAD). To compute the MAD, the forecaster sums the absolute values of the forecast errors and then divides by the number of forecasts (∑|FE| ÷ N). By taking the absolute value of the forecast errors, the offsetting of positive and negative values is avoided. This means that both an over forecast of 50 and an under forecast of 50 are off by 50. Using the data from the exponential smoothing example, MAD can be computed as follows:

(|60 − 46.5| + |72 − 55.95| + |58 − 67.19| + |40 − 60.76|) ÷ 4 = 14.88

Therefore, the forecaster is off an average of 14.88 units per forecast. When compared to the results of other alphas, the forecaster will know that the alpha with the lowest MAD is yielding the most accurate forecast.

Mean square error (MSE) can also be utilized in the same fashion. MSE is the sum of the squared forecast errors divided by N − 1 [∑(FE²) ÷ (N − 1)]. Squaring the forecast errors eliminates the possibility of offsetting negative numbers, since none of the results can be negative. Utilizing the same data as above, the MSE would be:

[(13.5)² + (16.05)² + (−9.19)² + (−20.76)²] ÷ 3 = 318.43

As with MAD, the forecaster may compare the MSE of forecasts derived using various values of alpha and assume the alpha with the lowest MSE is yielding the most accurate forecast.

The mean absolute percent error (MAPE) is the average absolute percent error. To arrive at the MAPE, one takes the sum of the ratios between the absolute forecast errors and the actual demands, multiplies by 100 (to get the percentage), and divides by N [∑(|Actual demand − forecast| ÷ Actual demand) × 100 ÷ N]. Using the data from the exponential smoothing example, MAPE can be computed as follows:

[(13.5/60 + 16.05/72 + 9.19/58 + 20.76/40) × 100] ÷ 4 = 28.13%

As with MAD and MSE, the lower the relative error the more accurate the forecast.
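The four measures can be computed together from any list of paired actuals and forecasts. A sketch, using the February-May actual demands and the smoothed forecasts recomputed from the formula (MSE divides by N − 1, as in the text):

```python
def accuracy(actuals, forecasts):
    """Bias, MAD, MSE (N - 1 denominator), and MAPE for paired lists."""
    errors = [a - f for a, f in zip(actuals, forecasts)]
    n = len(errors)
    bias = sum(errors)
    mad = sum(abs(e) for e in errors) / n
    mse = sum(e * e for e in errors) / (n - 1)
    mape = 100 * sum(abs(e) / a for e, a in zip(errors, actuals)) / n
    return bias, mad, mse, mape

# Feb-May actual demand and the corresponding smoothed forecasts (000's)
actuals = [60, 72, 58, 40]
forecasts = [46.5, 55.95, 67.19, 60.76]
bias, mad, mse, mape = accuracy(actuals, forecasts)
print(round(bias, 2), round(mad, 2), round(mse, 2), round(mape, 2))
```

A forecaster would run this for several candidate alphas and keep the one with the smallest MAD, MSE, or MAPE, bearing in mind the caution above about near-zero bias from offsetting errors.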

It should be noted that in some cases the ability of the forecast to change quickly to respond to changes in data patterns is considered to be more important than accuracy. Therefore, one's choice of forecasting method should reflect the relative balance of importance between accuracy and responsiveness, as determined by the forecaster.

MAKING A FORECAST

Stevenson lists the following as the basic steps in the forecasting process:

  • Determine the forecast's purpose. Factors such as how and when the forecast will be used, the degree of accuracy needed, and the level of detail desired determine the cost (time, money, employees) that can be dedicated to the forecast and the type of forecasting method to be utilized.
  • Establish a time horizon. This occurs after one has determined the purpose of the forecast. Longer-term forecasts require longer time horizons and vice versa. Accuracy is again a consideration.
  • Select a forecasting technique. The technique selected depends upon the purpose of the forecast, the time horizon desired, and the allowed cost.
  • Gather and analyze data. The amount and type of data needed is governed by the forecast's purpose, the forecasting technique selected, and any cost considerations.
  • Make the forecast.
  • Monitor the forecast. Evaluate the performance of the forecast and modify, if necessary.
  • Establish cause and effect relationships that add validation to a forecast.

MACHINE LEARNING

While forecasting models still have their place, companies are developing programs that are capable of learning and refining forecasting procedures without human intervention, known in general as machine learning. Neural networks and genetic programming are both advanced computer programming methods that give computers the ability to associate data sets and choose factors based on the situation. Neural networks receive their name from the way they store and access information in models based on the human brain, adapting to situations based on programmed decision trees. In his 2010 book Advances in Neural Networks, James Kwok stated, “The results obtained by these approaches in academic competitions or business forecasting are comparable with each other and often outperform conventional statistical approaches of ARMA, ARIMA or exponential-smoothing methods.”

According to Applied Time Series Analysis and Innovative Computing, a 2010 book by Sio-long Ao, evolutionary computing models show even more forecasting promise than traditional neural networks or genetic programming. Ao noted: “The model was shown to improve upon the performance of genetic programming and other benchmark models for a set of simulated and real-time series.” Evolutionary systems are essentially dynamic genetic programs that incorporate evolutionary algorithms, allowing the system to improve itself as it operates.

BIBLIOGRAPHY

Ao, Sio-long. Applied Time Series Analysis and Innovative Computing. New York: Springer, 2010.

Finch, Byron J. Operations Now: Profitability, Processes, Performance. 2nd ed. Boston: McGraw-Hill Irwin, 2006.

“Forecasting Principles.” Forecastingprinciples.com. Available from http://www.forecastingprinciples.com.

Gilliland, Michael. The Business Forecasting Deal: Exposing Myths, Eliminating Bad Practices, Providing Practical Solutions. Hoboken, NJ: John Wiley & Sons, 2010.

———. “IBF Year End Blog–What We Learned about Forecasting in 2010.” Institute of Business Forecasting and Planning, 21 December 2010. Available from http://www.demand-planning.com/2010/12/21/ibf-year-end-blog-%e2%80%93-what-we-learned-about-forecasting-in-2010/ .

Greene, William H. Econometric Analysis. 5th ed. Upper Saddle River, NJ: Prentice Hall, 2003.

Hanke, John E., and Dean Wichern. Business Forecasting. 9th ed. Upper Saddle River, NJ: Prentice Hall, 2008.

Kwok, James. Advances in Neural Networks - ISNN 2010. New York: Springer, 2010.

Moskowitz, Howard R., and Alex Gofman. Selling Blue Elephants. Upper Saddle River, NJ: Wharton School Publishing, 2007.

Stevenson, William J. Operations Management. 8th ed. Boston: McGraw-Hill Irwin, 2005.

Stutely, R. Definitive Guide to Business Finance: What Smart Managers Do with the Numbers. Upper Saddle River, NJ: Prentice Hall, 2007.
