
Key terminology & process for timeseries analysis including exponential smoothing

Timeseries analysis is incredibly powerful but can get quite confusing. There is a lot of terminology we need to understand before we can really progress with making a forecast. Ultimately, timeseries analysis is all about analysing and forecasting data that is indexed in equally spaced increments of time: seconds, minutes, hours, days, weeks, months, and so on.

Not all timeseries are created equal; some have a seasonal impact and some have trends built into the data. We need to account for seasonality and trends by making the data stationary. Below, we have an example of a trending dataset:


And now, we have an example of a seasonal dataset. You can see clear seasonality, as each weekend sees a spike in the dataset.

Making Timeseries Stationary

In order to make a timeseries stationary, we need to remove the effects of seasonality and trends from the dataset. We do this with a concept called differencing. This is where we take the difference between consecutive observations. Let’s look at an example.

In the left-hand chart below, you can see our dataset. There is a strong, consistent upward trend, which we need to remove using differencing. You can see the result of doing this in the right-hand chart.


The data used for this is below. You can see that we’ve simply taken the difference between the current day and the day immediately prior to it to generate the stationary dataset.

Day    Value    Difference
mon    5
tue    7        2
wed    9        2
thu    13       4
fri    11       -2
sat    12       1
sun    14       2
mon    17       3
tue    14       -3
wed    16       2
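
If you’re working in pandas, a one-line diff reproduces the table above. This is just a minimal sketch, assuming the values live in a pandas Series:

import pandas as pd

days = ["mon", "tue", "wed", "thu", "fri", "sat", "sun", "mon", "tue", "wed"]
values = pd.Series([5, 7, 9, 13, 11, 12, 14, 17, 14, 16], index=days)

# Take the difference between each observation and the one immediately
# before it; the first value has nothing before it, so it becomes NaN
stationary = values.diff()
print(stationary)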

Testing for Stationarity

Sometimes it’s not so clear whether your dataset is stationary. In this instance we can use the Augmented Dickey-Fuller (ADF) test, which tests for stationarity for us. If the p-value it returns is less than 0.05, we reject the null hypothesis that a unit root is present and can treat the dataset as stationary. In the output below, the second item is the p-value. You can see it’s 0.99, which is well above 0.05, hence this is not a stationary dataset.

from statsmodels.tsa.stattools import adfuller

# A clearly trending (and therefore non-stationary) dataset
x = [5, 7, 9, 13, 11, 12, 14, 17, 14, 16, 15, 12, 11, 15, 19, 17, 21, 23, 27, 25, 29, 31, 33, 31, 37, 35, 34, 31, 39, 40, 42, 41, 43, 42, 44]
df_test = adfuller(x, autolag="AIC")
print(df_test)

'''OUTPUT
(0.8053272991071334,   # test statistic
 0.9917216757594437,   # p-value
 9,                    # number of lags used
 25,                   # number of observations used
 {'1%': -3.7238633119999998, '5%': -2.98648896, '10%': -2.6328004},  # critical values
 115.63418179131537)'''  # maximised information criterion

Model Selection

Now we have come to the point of deciding what model we’re going to use for our prediction. We have a number of choices here:

Naive Model

We could go completely naive and forecast that the future values of Y are simply equal to the current value of Y. In the chart below, all the predicted values are within the red box and are all equal to the last observed Y. Here, we are saying that the most likely value for Y tomorrow is the value of Y today. Of course, with the trend we see in our data, this is unlikely to hold.
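
To make that concrete, here is a minimal sketch (the history values are made up); no library is needed, since every future step simply repeats the last observation:

# Naive forecast: every future value equals the last observed value
history = [5, 7, 9, 13, 11, 12, 14]
forecast = [history[-1]] * 3  # predict 3 steps ahead
print(forecast)  # [14, 14, 14]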

Averaged Model

We could also look at an averaged model, where we give every observation equal weight and average them all. If there is a trend or seasonality, this is a totally rubbish approach, as you can see below; but if your data is quite flat, it may not be too bad.
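
Again as a sketch with illustrative data, the averaged model forecasts the mean of everything seen so far:

# Averaged forecast: every future value equals the mean of all observations
history = [5, 7, 9, 13, 11, 12, 14]
forecast = [sum(history) / len(history)] * 3  # predict 3 steps ahead
print(forecast)  # roughly [10.14, 10.14, 10.14]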


Exponential Smoothing

When we have seasonality or trends in our data, we can use exponential smoothing. Here, the most recent observation is given the most weight, and the weights of previous observations decay exponentially the further back in time they sit. The idea of exponential smoothing is to assume that future values will be similar to those of the recent past.
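
Here is a minimal sketch of simple exponential smoothing using statsmodels; the data and the alpha value of 0.8 are illustrative assumptions, not tuned choices:

from statsmodels.tsa.holtwinters import SimpleExpSmoothing

data = [12.0, 14.0, 13.0, 15.0, 14.0, 16.0, 15.0]

# alpha (smoothing_level) of 0.8 weights recent observations heavily
model = SimpleExpSmoothing(data, initialization_method="estimated")
fit = model.fit(smoothing_level=0.8, optimized=False)
print(fit.forecast(3))  # flat forecast: the simple model cannot project a trend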

Exponential smoothing models compare favourably to a simple moving average model, but they cannot project trends unless we implement double exponential smoothing (Holt’s method) or triple exponential smoothing (the Holt-Winters model).

In the simple model, we adjust alpha to control the weight of historic observations in the model. Double exponential smoothing introduces a trend component to the model; the trend can be additive (for linear trends) or multiplicative (for exponential trends). It also supports the ability to dampen trends over time, which we use when it is unrealistic to expect a trend to continue as it has historically. The hyperparameters we can tune with our double exponential smoothing model are listed below, with a short sketch after the list:

  • Alpha: the smoothing factor
  • Beta: the smoothing factor for the trend
  • Type: additive (linear) or multiplicative (exponential) trend type
  • Dampening type: additive (linear) or multiplicative (exponential) dampening
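
As a rough sketch of double exponential smoothing, statsmodels provides a Holt class; the alpha, beta and damping values below are arbitrary illustrations, not tuned choices:

from statsmodels.tsa.holtwinters import Holt

data = [5, 7, 9, 13, 11, 12, 14, 17, 14, 16, 15, 19, 21, 23]

# Additive (linear) trend with damping applied to it
model = Holt(data, exponential=False, damped_trend=True,
             initialization_method="estimated")
fit = model.fit(smoothing_level=0.8, smoothing_trend=0.2,
                damping_trend=0.9, optimized=False)
print(fit.forecast(5))  # unlike the simple model, this follows the trend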

Triple exponential smoothing brings a new hyperparameter, Gamma, which controls the seasonal component for us. In addition to the above, we have further hyperparameters to play with (again, a sketch follows the list):

  • Seasonality type: Additive or multiplicative
  • Period: how many time steps in a seasonal period (e.g. in our weekly seasonality example above, there are 7 time steps in each season).
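
Here is a sketch of triple exponential smoothing (Holt-Winters) in statsmodels, assuming three weeks of made-up daily data with a weekend spike:

from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Three weeks of daily observations; each week ends with a weekend spike
data = [10, 11, 10, 12, 11, 20, 22,
        11, 12, 11, 13, 12, 21, 23,
        12, 13, 12, 14, 13, 22, 24]

model = ExponentialSmoothing(data, trend="add", seasonal="add",
                             seasonal_periods=7,
                             initialization_method="estimated")
fit = model.fit()  # alpha, beta and gamma are optimised for us
print(fit.forecast(7))  # forecast the next full week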

ARIMA

Finally, let’s talk about ARIMA models. We use these when we want to capture the effects of autocorrelation, which is where earlier observations influence later ones. Trend and seasonality are both examples of autocorrelation: with an upward trend, there is a good chance that the value at time T is higher than the value at T-1, and with yearly seasonality, July 2019 will be a good indicator of the weather in July 2020.
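
One quick way to check for autocorrelation before reaching for ARIMA is pandas’ built-in autocorr, shown here on illustrative trending data:

import pandas as pd

series = pd.Series([5, 7, 9, 13, 11, 12, 14, 17, 14, 16, 15, 19, 21, 23])

# Correlation between the series and a copy of itself shifted by one step;
# values close to 1 indicate strong positive autocorrelation (e.g. a trend)
print(series.autocorr(lag=1))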

ARIMA is a big topic (and the most popular timeseries algorithm), so I will discuss this in more detail in my next post.
