.. _sec_forecasting_quickstart:

Forecasting Time Series - Quick Start
=====================================

Via a simple ``fit()`` call, AutoGluon can train and tune

- simple forecasting models (e.g., ARIMA, ETS, Theta),
- powerful deep learning models (e.g., DeepAR, Temporal Fusion Transformer),
- tree-based models (e.g., XGBoost, CatBoost, LightGBM),
- an ensemble that combines predictions of other models

to produce multi-step ahead *probabilistic* forecasts for univariate time series data.

This tutorial demonstrates how to quickly start using AutoGluon to generate hourly forecasts for the M4 forecasting competition dataset.

Loading time series data as a ``TimeSeriesDataFrame``
-----------------------------------------------------

First, we import some required modules

.. code:: python

    import pandas as pd
    from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor

To use ``autogluon.timeseries``, we will only need the following two classes:

- ``TimeSeriesDataFrame`` stores a dataset consisting of multiple time series.
- ``TimeSeriesPredictor`` takes care of fitting, tuning and selecting the best forecasting models, as well as generating new forecasts.

We load a subset of the M4 hourly dataset as a ``pandas.DataFrame``

.. code:: python

    df = pd.read_csv("https://autogluon.s3.amazonaws.com/datasets/timeseries/m4_hourly_subset/train.csv")
    df.head()

.. parsed-literal::
    :class: output

      item_id            timestamp  target
    0      H1  1750-01-01 00:00:00   605.0
    1      H1  1750-01-01 01:00:00   586.0
    2      H1  1750-01-01 02:00:00   586.0
    3      H1  1750-01-01 03:00:00   559.0
    4      H1  1750-01-01 04:00:00   511.0
AutoGluon expects time series data in long format. Each row of the data frame contains a single observation (timestep) of a single time series, represented by

- unique ID of the time series (``"item_id"``) as int or str
- timestamp of the observation (``"timestamp"``) as a ``pandas.Timestamp`` or compatible format
- numeric value of the time series (``"target"``)

The raw dataset should always follow this format with at least three columns for unique ID, timestamp, and target value, but the names of these columns can be arbitrary. It is important, however, that we provide the names of the columns when constructing a ``TimeSeriesDataFrame`` that is used by AutoGluon. AutoGluon will raise an exception if the data doesn’t match the expected format.

.. code:: python

    train_data = TimeSeriesDataFrame.from_data_frame(
        df,
        id_column="item_id",
        timestamp_column="timestamp"
    )
    train_data.head()

.. parsed-literal::
    :class: output

                                 target
    item_id timestamp
    H1      1750-01-01 00:00:00   605.0
            1750-01-01 01:00:00   586.0
            1750-01-01 02:00:00   586.0
            1750-01-01 03:00:00   559.0
            1750-01-01 04:00:00   511.0
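The column names in the raw data need not be ``"item_id"``, ``"timestamp"`` and ``"target"``. Below is a minimal sketch, assuming a hypothetical data frame whose columns are named ``"sensor"``, ``"time"`` and ``"reading"`` (these names are illustrative and not part of the M4 dataset):

.. code:: python

    # Hypothetical raw data with custom column names ("sensor", "time", "reading")
    raw_df = pd.DataFrame(
        {
            "sensor": ["A", "A", "B", "B"],
            "time": pd.to_datetime(["2020-01-01 00:00", "2020-01-01 01:00"] * 2),
            "reading": [1.0, 2.0, 3.0, 4.0],
        }
    )
    # Map the custom ID and timestamp columns to AutoGluon's expected roles
    ts_df = TimeSeriesDataFrame.from_data_frame(
        raw_df, id_column="sensor", timestamp_column="time"
    )

The value columns are carried over as-is; when later creating a ``TimeSeriesPredictor`` for such data, we would pass ``target="reading"``.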
We refer to each individual time series stored in a ``TimeSeriesDataFrame`` as an *item*. For example, items might correspond to different products in demand forecasting, or to different stocks in financial datasets. This setting is also referred to as a *panel* of time series. Note that this is *not* the same as multivariate forecasting: AutoGluon generates forecasts for each time series individually, without modeling interactions between different items (time series).

``TimeSeriesDataFrame`` inherits from `pandas.DataFrame <https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html>`__, so all attributes and methods of ``pandas.DataFrame`` are available in a ``TimeSeriesDataFrame``. It also provides other utility functions, such as loaders for different data formats (see :class:`autogluon.timeseries.TimeSeriesDataFrame` for details).

Training time series models with ``TimeSeriesPredictor.fit``
------------------------------------------------------------

To forecast future values of the time series, we need to create a ``TimeSeriesPredictor`` object.

Models in ``autogluon.timeseries`` forecast time series *multiple steps* into the future. We choose the number of these steps, known as the *prediction length* (or *forecast horizon*), depending on our task. For example, our dataset contains time series measured at hourly *frequency*, so we set ``prediction_length = 48`` to train models that forecast up to 48 hours into the future.

We instruct AutoGluon to save trained models in the folder ``./autogluon-m4-hourly``. We also specify that AutoGluon should rank models according to `symmetric mean absolute percentage error (sMAPE) <https://en.wikipedia.org/wiki/Symmetric_mean_absolute_percentage_error>`__, and that the data we want to forecast is stored in the column ``"target"`` of the ``TimeSeriesDataFrame``.

.. code:: python

    predictor = TimeSeriesPredictor(
        prediction_length=48,
        path="autogluon-m4-hourly",
        target="target",
        eval_metric="sMAPE",
    )

    predictor.fit(
        train_data,
        presets="medium_quality",
        time_limit=600,
    )

.. parsed-literal::
    :class: output

    ================ TimeSeriesPredictor ================
    TimeSeriesPredictor.fit() called
    Setting presets to: medium_quality
    Fitting with arguments:
    {'enable_ensemble': True, 'evaluation_metric': 'sMAPE', 'hyperparameter_tune_kwargs': None, 'hyperparameters': 'medium_quality', 'prediction_length': 48, 'random_seed': None, 'target': 'target', 'time_limit': 600}
    Provided training data set with 148060 rows, 200 items (item = single time series). Average time series length is 740.3.
    Training artifacts will be saved to: /home/ci/autogluon/docs/_build/eval/tutorials/timeseries/autogluon-m4-hourly
    =====================================================
    AutoGluon will save models to autogluon-m4-hourly/
    AutoGluon will gauge predictive performance using evaluation metric: 'sMAPE'
    This metric's sign has been flipped to adhere to being 'higher is better'. The reported score can be multiplied by -1 to get the metric value.
    Provided dataset contains following columns:
        target: 'target'
    tuning_data is None. Will use the last prediction_length = 48 time steps of each time series as a hold-out validation set.
    Starting training. Start time is 2023-02-06 22:59:50
    Models that will be trained: ['Naive', 'SeasonalNaive', 'ETS', 'Theta', 'ARIMA', 'AutoETS', 'AutoGluonTabular', 'DeepAR']
    Training timeseries model Naive. Training for up to 599.88s of the 599.88s of remaining time.
        -0.4341       = Validation score (-sMAPE)
        0.00    s     = Training runtime
        5.98    s     = Validation (prediction) runtime
    Training timeseries model SeasonalNaive. Training for up to 593.89s of the 593.89s of remaining time.
    -0.1686       = Validation score (-sMAPE)
        0.00    s     = Training runtime
        0.40    s     = Validation (prediction) runtime
    Training timeseries model ETS. Training for up to 593.47s of the 593.47s of remaining time.
        -0.2700       = Validation score (-sMAPE)
        0.00    s     = Training runtime
        36.30   s     = Validation (prediction) runtime
    Training timeseries model Theta. Training for up to 557.15s of the 557.15s of remaining time.
        -0.2236       = Validation score (-sMAPE)
        0.00    s     = Training runtime
        16.98   s     = Validation (prediction) runtime
    Training timeseries model ARIMA. Training for up to 540.15s of the 540.15s of remaining time.
        -0.5269       = Validation score (-sMAPE)
        0.00    s     = Training runtime
        22.32   s     = Validation (prediction) runtime
    Training timeseries model AutoETS. Training for up to 517.82s of the 517.82s of remaining time.
        -0.2381       = Validation score (-sMAPE)
        0.00    s     = Training runtime
        120.12  s     = Validation (prediction) runtime
    Training timeseries model AutoGluonTabular. Training for up to 397.69s of the 397.69s of remaining time.
        -0.1089       = Validation score (-sMAPE)
        48.85   s     = Training runtime
        3.99    s     = Validation (prediction) runtime
    Training timeseries model DeepAR. Training for up to 344.84s of the 344.84s of remaining time.
        -0.1405       = Validation score (-sMAPE)
        109.03  s     = Training runtime
        2.91    s     = Validation (prediction) runtime
    Fitting simple weighted ensemble.
        -0.1079       = Validation score (-sMAPE)
        8.20    s     = Training runtime
        12.87   s     = Validation (prediction) runtime
    Training complete. Models trained: ['Naive', 'SeasonalNaive', 'ETS', 'Theta', 'ARIMA', 'AutoETS', 'AutoGluonTabular', 'DeepAR', 'WeightedEnsemble']
    Total runtime: 382.80 s
    Best model: WeightedEnsemble
    Best model score: -0.1079

Here we used the ``"medium_quality"`` presets and limited the training time to 10 minutes (600 seconds). The presets define which models AutoGluon will try to fit. For ``medium_quality`` presets, these are simple baselines (``Naive``, ``SeasonalNaive``), statistical models (``ARIMA``, ``ETS``, ``Theta``), tree-based models (XGBoost, LightGBM and CatBoost) wrapped by ``AutoGluonTabular``, a deep learning model ``DeepAR``, and a weighted ensemble combining these. Other available presets for ``TimeSeriesPredictor`` are ``"fast_training"``, ``"high_quality"`` and ``"best_quality"``. Higher quality presets will usually produce more accurate forecasts but take longer to train and may produce less computationally efficient models.

Inside ``fit()``, AutoGluon will train as many models as possible within the given time limit. Trained models are then ranked based on their performance on an internal validation set. By default, this validation set is constructed by holding out the last ``prediction_length`` timesteps of each time series in ``train_data``.

Generating forecasts with ``TimeSeriesPredictor.predict``
---------------------------------------------------------

We can now use the fitted ``TimeSeriesPredictor`` to forecast the future time series values. By default, AutoGluon will make forecasts using the model that had the best score on the internal validation set. The forecast always includes predictions for the next ``prediction_length`` timesteps, starting from the end of each time series in ``train_data``.

.. code:: python

    predictions = predictor.predict(train_data)
    predictions.head()

.. parsed-literal::
    :class: output

    Global seed set to 123
    Model not specified in predict, will default to the model with the best validation score: WeightedEnsemble
    /home/ci/opt/venv/lib/python3.8/site-packages/joblib/externals/loky/process_executor.py:700: UserWarning: A worker stopped while some jobs were given to the executor. This can be caused by a too short worker timeout or by a memory leak.
      warnings.warn(
.. parsed-literal::
    :class: output

                                       mean         0.1         0.2         0.3         0.4         0.5         0.6         0.7         0.8         0.9
    item_id timestamp
    H1      1750-01-30 04:00:00  656.810422  586.976725  621.229876  638.542440  649.120869  657.112781  664.543691  673.450612  687.353248  719.313496
            1750-01-30 05:00:00  586.721462  516.741268  551.084220  568.473991  579.070680  587.018400  594.447359  603.391556  617.348995  649.502593
            1750-01-30 06:00:00  550.046321  479.535761  513.847416  531.439196  542.150099  550.336225  557.916722  566.983562  581.093363  613.402435
            1750-01-30 07:00:00  513.119547  442.413219  476.734200  494.510521  505.241599  513.311314  521.063947  530.214256  544.216720  576.726283
            1750-01-30 08:00:00  487.204191  416.089319  450.500906  468.124981  479.078713  487.267045  494.933571  504.437055  518.601267  551.364340
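As the log message above shows, ``predict()`` defaults to the model with the best validation score. Below is a short sketch, assuming that ``predict()`` accepts a ``model`` argument naming one of the trained models (as the log message suggests):

.. code:: python

    # Sketch: request forecasts from a specific trained model instead of the best one.
    # Assumes predict() accepts a ``model`` argument, as the log message suggests.
    predictions_deepar = predictor.predict(train_data, model="DeepAR")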
AutoGluon produces a *probabilistic* forecast: in addition to predicting the mean (expected value) of the time series in the future, models also provide the quantiles of the forecast distribution. The quantile forecasts give us an idea about the range of possible outcomes. For example, if the ``"0.1"`` quantile is equal to ``500.0``, it means that the model predicts a 10% chance that the target value will be below ``500.0``.

We will now visualize the forecast and the actually observed values for one of the time series in the dataset. We plot the mean forecast, as well as the 10% and 90% quantiles to show the range of potential outcomes.

.. code:: python

    import matplotlib.pyplot as plt

    # TimeSeriesDataFrame can also be loaded directly from a file
    test_data = TimeSeriesDataFrame.from_path("https://autogluon.s3.amazonaws.com/datasets/timeseries/m4_hourly_subset/test.csv")

    plt.figure(figsize=(20, 3))

    item_id = "H1"
    y_past = train_data.loc[item_id]["target"]
    y_pred = predictions.loc[item_id]
    y_test = test_data.loc[item_id]["target"][-48:]

    plt.plot(y_past[-200:], label="Past time series values")
    plt.plot(y_pred["mean"], label="Mean forecast")
    plt.plot(y_test, label="Future time series values")
    plt.fill_between(
        y_pred.index,
        y_pred["0.1"],
        y_pred["0.9"],
        color="red",
        alpha=0.1,
        label="10%-90% prediction interval",
    )
    plt.legend();

.. parsed-literal::
    :class: output

    Loaded data from: https://autogluon.s3.amazonaws.com/datasets/timeseries/m4_hourly_subset/test.csv | Columns = 3 / 3 | Rows = 157660 -> 157660

.. figure:: output_forecasting-quickstart_a33e23_11_1.png

Evaluating the performance of different models
----------------------------------------------

We can view the performance of each model AutoGluon has trained via the ``leaderboard()`` method. We provide the test data set to the leaderboard function to see how well our fitted models are doing on the unseen test data. The leaderboard also includes the validation scores computed on the internal validation dataset.

In AutoGluon leaderboards, higher scores always correspond to better predictive performance. Therefore our sMAPE scores are multiplied by ``-1``, such that higher "negative sMAPE"s correspond to more accurate forecasts.

.. code:: python

    # The test score is computed using the last
    # prediction_length=48 timesteps of each time series in test_data
    predictor.leaderboard(test_data, silent=True)

.. parsed-literal::
    :class: output

    Additional data provided, testing on additional data. Resulting leaderboard will be sorted according to test score (`score_test`).
.. parsed-literal::
    :class: output

                  model  score_test  score_val  pred_time_test  pred_time_val  fit_time_marginal  fit_order
    0  WeightedEnsemble   -0.104160  -0.107873        6.831072      12.870274           8.203957          9
    1  AutoGluonTabular   -0.105318  -0.108919        4.313360       3.987516          48.853924          7
    2     SeasonalNaive   -0.119063  -0.168566        0.639290       0.402309           0.002878          2
    3            DeepAR   -0.139969  -0.140460        2.693217       2.905922         109.032125          8
    4             Theta   -0.194352  -0.223630       17.335417      16.984238           0.001529          4
    5           AutoETS   -0.195432  -0.238137      125.045698     120.121201           0.001530          6
    6               ETS   -0.217874  -0.269993       37.138881      36.303076           0.001524          3
    7             Naive   -0.453291  -0.434068        0.187922       5.976836           0.003573          1
    8             ARIMA   -0.518139  -0.526885       22.211391      22.320221           0.001539          5
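Since the predictor and all trained models were saved to the ``autogluon-m4-hourly`` folder specified earlier, they can be loaded back in a later session without retraining. A minimal sketch:

.. code:: python

    # Reload the trained predictor from the path passed to TimeSeriesPredictor above
    predictor = TimeSeriesPredictor.load("autogluon-m4-hourly")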
Summary
-------

We used ``autogluon.timeseries`` to make probabilistic multi-step forecasts on the M4 Hourly dataset. Check out :ref:`sec_forecasting_indepth` to learn about the advanced capabilities of AutoGluon for time series forecasting.