diff --git a/.vscode/settings.json b/.vscode/settings.json new file mode 100644 index 0000000..0ce1cb9 --- /dev/null +++ b/.vscode/settings.json @@ -0,0 +1,3 @@ +{ + "julia.environmentPath": "/Users/paltmeyer/code/ConformalPrediction.jl" +} \ No newline at end of file diff --git a/_freeze/docs/src/how_to_guides/timeseries/execute-results/md.json b/_freeze/docs/src/how_to_guides/timeseries/execute-results/md.json index 0895369..6870453 100644 --- a/_freeze/docs/src/how_to_guides/timeseries/execute-results/md.json +++ b/_freeze/docs/src/how_to_guides/timeseries/execute-results/md.json @@ -1,7 +1,8 @@ { - "hash": "e2af0dfe2227227b5486bbc15edd6cbf", + "hash": "4f8c3718530edd51fadb56729573aeda", "result": { - "markdown": "---\ntitle: How to Conformalize a Time Series Model\n---\n\n\n\n\n\nTime series data is prevalent across various domains, such as finance, weather forecasting, energy, and supply chains. However, accurately quantifying uncertainty in time series predictions is often a complex task due to inherent temporal dependencies, non-stationarity, and noise in the data. In this context, Conformal Prediction offers a valuable solution by providing prediction intervals which offer a sound way to quantify uncertainty. \n\nThis how-to guide demonstrates how you can conformalize a time series model using Ensemble Batch Prediction Intervals (EnbPI) [@xu2021conformal]. This method enables the updating of prediction intervals whenever new observations are available. This dynamic update process allows the method to adapt to changing conditions, accounting for the potential degradation of predictions or the increase in noise levels in the data.\n\n## The Task at Hand \n\nInspired by [MAPIE](https://mapie.readthedocs.io/en/latest/examples_regression/4-tutorials/plot_ts-tutorial.html), we employ the Victoria electricity demand dataset. This dataset contains hourly electricity demand (in GW) for Victoria state in Australia, along with corresponding temperature data (in Celsius degrees). \n\n\n::: {.cell execution_count=2}\n``` {.julia .cell-code}\nusing CSV, DataFrames\ndf = CSV.read(\"./dev/artifacts/electricity_demand.csv\", DataFrame)\n```\n:::\n\n\n## Feature engineering\n\nIn this how-to guide, we only focus on date, time and lag features.\n\n### Date and Time-related features\n\nWe create temporal features out of the date and hour:\n\n::: {.cell execution_count=3}\n``` {.julia .cell-code}\nusing Dates\ndf.Datetime = Dates.DateTime.(df.Datetime, \"yyyy-mm-dd HH:MM:SS\")\ndf.Weekofyear = Dates.week.(df.Datetime)\ndf.Weekday = Dates.dayofweek.(df.Datetime)\ndf.hour = Dates.hour.(df.Datetime) \n```\n:::\n\n\nAdditionally, to simulate sudden changes caused by unforeseen events, such as blackouts or lockdowns, we deliberately reduce the electricity demand by 2GW from February 22nd onward. \n\n::: {.cell execution_count=4}\n``` {.julia .cell-code}\ncondition = df.Datetime .>= Date(\"2014-02-22\")\ndf[condition, :Demand] .= df[condition, :Demand] .- 2\n```\n:::\n\n\n### Lag features\n\n::: {.cell execution_count=5}\n``` {.julia .cell-code}\nusing ShiftedArrays\nn_lags = 5\nfor i = 1:n_lags\n DataFrames.transform!(df, \"Demand\" => (x -> ShiftedArrays.lag(x, i)) => \"lag_hour_$i\")\nend\n\ndf_dropped_missing = dropmissing(df)\ndf_dropped_missing\n```\n:::\n\n\n## Train-test split\n\nAs usual, we split the data into train and test sets. 
We use the first 90% of the data for training and the remaining 10% for testing.\n\n::: {.cell execution_count=6}\n``` {.julia .cell-code}\nfeatures_cols = DataFrames.select(df_dropped_missing, Not([:Datetime, :Demand]))\nX = Matrix(features_cols)\ny = Matrix(df_dropped_missing[:, [:Demand]])\nsplit_index = floor(Int, 0.9 * size(y , 1)) \nprintln(split_index)\nX_train = X[1:split_index, :]\ny_train = y[1:split_index, :]\nX_test = X[split_index+1 : size(y,1), :]\ny_test = y[split_index+1 : size(y,1), :]\n```\n:::\n\n\n## Loading model using MLJ interface\n\nAs our baseline model, we use a boosted tree regressor:\n\n::: {.cell execution_count=7}\n``` {.julia .cell-code}\nusing MLJ\nEvoTreeRegressor = @load EvoTreeRegressor pkg=EvoTrees verbosity=0\nmodel = EvoTreeRegressor(nrounds =100, max_depth=10, rng=123)\n```\n:::\n\n\n## Conformal time series\n\nNext, we conformalize the model using EnbPI. First, we will proceed without updating training set residuals to build prediction intervals. The result is shown in the following figure:\n\n::: {.cell execution_count=8}\n``` {.julia .cell-code}\nusing ConformalPrediction\n\nconf_model = conformal_model(model; method=:time_series_ensemble_batch, coverage=0.95)\nmach = machine(conf_model, X_train, y_train)\ntrain = [1:split_index;]\nfit!(mach, rows=train)\n\ny_pred_interval = MLJ.predict(conf_model, mach.fitresult, X_test)\nlb = [ minimum(tuple_data) for tuple_data in y_pred_interval]\nub = [ maximum(tuple_data) for tuple_data in y_pred_interval]\ny_pred = [mean(tuple_data) for tuple_data in y_pred_interval]\n```\n:::\n\n\n::: {.cell execution_count=9}\n\n::: {.cell-output .cell-output-display execution_count=10}\n![](timeseries_files/figure-commonmark/cell-10-output-1.svg){}\n:::\n:::\n\n\nWe can use `partial_fit` method in EnbPI implementation in ConformalPrediction in order to adjust prediction intervals to sudden change points on test sets that have not been seen by the model during training. In the below experiment, sample_size indicates the batch of new observations. You can decide if you want to update residuals by sample_size or update and remove first $n$ residuals (shift_size = n). The latter will allow to remove early residuals that will not have a positive impact on the current observations. 
\n\nThe chart below compares the results to the previous experiment without updating residuals:\n\n::: {.cell execution_count=10}\n``` {.julia .cell-code}\nsample_size = 10\nshift_size = 10\nlast_index = size(X_test , 1)\nlb_updated , ub_updated = ([], [])\nfor step in 1:sample_size:last_index\n if last_index - step < sample_size\n y_interval = MLJ.predict(conf_model, mach.fitresult, X_test[step:last_index , :])\n partial_fit(mach.model , mach.fitresult, X_test[step:last_index , :], y_test[step:last_index , :], shift_size)\n else\n y_interval = MLJ.predict(conf_model, mach.fitresult, X_test[step:step+sample_size-1 , :])\n partial_fit(mach.model , mach.fitresult, X_test[step:step+sample_size-1 , :], y_test[step:step+sample_size-1 , :], shift_size) \n end \n lb_updatedᵢ= [ minimum(tuple_data) for tuple_data in y_interval]\n push!(lb_updated,lb_updatedᵢ)\n ub_updatedᵢ = [ maximum(tuple_data) for tuple_data in y_interval]\n push!(ub_updated, ub_updatedᵢ)\nend\nlb_updated = reduce(vcat, lb_updated)\nub_updated = reduce(vcat, ub_updated)\n```\n:::\n\n\n::: {.cell execution_count=11}\n\n::: {.cell-output .cell-output-display execution_count=12}\n![](timeseries_files/figure-commonmark/cell-12-output-1.svg){}\n:::\n:::\n\n\n## Results\n\nIn time series problems, unexpected incidents can lead to sudden changes, and such scenarios are highly probable. As illustrated earlier, the model's training data lacks information about these change points, making it unable to anticipate them. The top figure demonstrates that when residuals are not updated, the prediction intervals solely rely on the distribution of residuals from the training set. Consequently, these intervals fail to encompass the true observations after the change point, resulting in a sudden drop in coverage.\n\nHowever, by partially updating the residuals, the method becomes adept at capturing the increasing uncertainties in model predictions. It is important to note that the changes in uncertainty occur approximately one day after the change point. This delay is attributed to the requirement of having a sufficient number of new residuals to alter the quantiles obtained from the residual distribution.\n\n## References\n\n", + "engine": "jupyter", + "markdown": "---\ntitle: How to Conformalize a Time Series Model\n---\n\n\n\n\n\nTime series data is prevalent across various domains, such as finance, weather forecasting, energy, and supply chains. However, accurately quantifying uncertainty in time series predictions is often a complex task due to inherent temporal dependencies, non-stationarity, and noise in the data. In this context, Conformal Prediction offers a valuable solution by providing prediction intervals which offer a sound way to quantify uncertainty. \n\nThis how-to guide demonstrates how you can conformalize a time series model using Ensemble Batch Prediction Intervals (EnbPI) [@xu2021conformal]. This method enables the updating of prediction intervals whenever new observations are available. This dynamic update process allows the method to adapt to changing conditions, accounting for the potential degradation of predictions or the increase in noise levels in the data.\n\n## The Task at Hand \n\nInspired by [MAPIE](https://mapie.readthedocs.io/en/latest/examples_regression/4-tutorials/plot_ts-tutorial.html), we employ the Victoria electricity demand dataset. This dataset contains hourly electricity demand (in GW) for Victoria state in Australia, along with corresponding temperature data (in Celsius degrees). 
\n\n\n::: {.cell execution_count=2}\n``` {.julia .cell-code}\nusing CSV, DataFrames\ndf = CSV.read(\"./dev/artifacts/electricity_demand.csv\", DataFrame)\n```\n:::\n\n\n## Feature engineering\n\nIn this how-to guide, we focus only on date, time, and lag features.\n\n### Date and Time-related features\n\nWe create temporal features out of the date and hour:\n\n::: {.cell execution_count=3}\n``` {.julia .cell-code}\nusing Dates\ndf.Datetime = Dates.DateTime.(df.Datetime, \"yyyy-mm-dd HH:MM:SS\")\ndf.Weekofyear = Dates.week.(df.Datetime)\ndf.Weekday = Dates.dayofweek.(df.Datetime)\ndf.hour = Dates.hour.(df.Datetime) \n```\n:::\n\n\nAdditionally, to simulate sudden changes caused by unforeseen events, such as blackouts or lockdowns, we deliberately reduce the electricity demand by 2 GW from February 22nd onward. \n\n::: {.cell execution_count=4}\n``` {.julia .cell-code}\ndf.Demand_updated = copy(df.Demand)\ncondition = df.Datetime .>= Date(\"2014-02-22\")\ndf[condition, :Demand_updated] .= df[condition, :Demand_updated] .- 2\n```\n:::\n\n\nThis is how the data looks after our manipulation:\n\n``` julia\ncutoff_point = 200\nplot(df[cutoff_point:split_index, [:Datetime]].Datetime, df[cutoff_point:split_index, :].Demand ,\n    label=\"training data\", color=:green, xlabel = \"Date\" , ylabel=\"Electricity demand(GW)\")\nplot!(df[split_index+1 : size(df,1), [:Datetime]].Datetime, df[split_index+1 : size(df,1), : ].Demand,\n    label=\"test data\", color=:orange, xlabel = \"Date\" , ylabel=\"Electricity demand(GW)\")\nplot!(df[split_index+1 : size(df,1), [:Datetime]].Datetime, df[split_index+1 : size(df,1), : ].Demand_updated, label=\"updated test data\", color=:red, linewidth=1, framestyle=:box)\nplot!(legend=:outerbottom, legendcolumns=3)\nplot!(size=(850,400), left_margin = 5Plots.mm)\n```\n\n### Lag features\n\n::: {.cell execution_count=5}\n``` {.julia .cell-code}\nusing ShiftedArrays\nn_lags = 5\nfor i = 1:n_lags\n    DataFrames.transform!(df, \"Demand\" => (x -> ShiftedArrays.lag(x, i)) => \"lag_hour_$i\")\nend\n\ndf_dropped_missing = dropmissing(df)\ndf_dropped_missing\n```\n:::\n\n\n## Train-test split\n\nAs usual, we split the data into train and test sets. We use the first 90% of the data for training and the remaining 10% for testing.\n\n::: {.cell execution_count=6}\n``` {.julia .cell-code}\nfeatures_cols = DataFrames.select(df_dropped_missing, Not([:Datetime, :Demand, :Demand_updated]))\nX = Matrix(features_cols)\ny = Matrix(df_dropped_missing[:, [:Demand_updated]])\nsplit_index = floor(Int, 0.9 * size(y , 1)) \nprintln(split_index)\nX_train = X[1:split_index, :]\ny_train = y[1:split_index, :]\nX_test = X[split_index+1 : size(y,1), :]\ny_test = y[split_index+1 : size(y,1), :]\n```\n:::\n\n\n## Loading model using MLJ interface\n\nAs our baseline model, we use a boosted tree regressor:\n\n::: {.cell execution_count=7}\n``` {.julia .cell-code}\nusing MLJ\nEvoTreeRegressor = @load EvoTreeRegressor pkg=EvoTrees verbosity=0\nmodel = EvoTreeRegressor(nrounds =100, max_depth=10, rng=123)\n```\n:::\n\n\n## Conformal time series\n\nNext, we conformalize the model using EnbPI. First, we will proceed without updating training set residuals to build prediction intervals. The result is shown in the following figure:\n\n::: {.cell execution_count=8}\n``` {.julia .cell-code}\nusing ConformalPrediction\n\nconf_model = conformal_model(model; method=:time_series_ensemble_batch, coverage=0.95)\nmach = machine(conf_model, X_train, y_train)\ntrain = [1:split_index;]\nfit!(mach, rows=train)\n\ny_pred_interval = MLJ.predict(conf_model, mach.fitresult, X_test)\nlb = [ minimum(tuple_data) for tuple_data in y_pred_interval]\nub = [ maximum(tuple_data) for tuple_data in y_pred_interval]\ny_pred = [mean(tuple_data) for tuple_data in y_pred_interval]\n```\n:::\n\n\n``` julia\n#| echo: false\n#| output: true\ncutoff_point = findfirst(df_dropped_missing.Datetime .== Date(\"2014-02-15\"))\nplot(df_dropped_missing[cutoff_point:split_index, [:Datetime]].Datetime, y_train[cutoff_point:split_index] ,\n    label=\"train\", color=:green , xlabel = \"Date\" , ylabel=\"Electricity demand(GW)\", linewidth=1)\nplot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime,\n    y_test, label=\"test\", color=:red)\nplot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime ,\n    y_pred, label =\"prediction\", color=:blue)\nplot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime,\n    lb, fillrange = ub, fillalpha = 0.2, label = \"prediction interval w/o EnbPI\",\n    color=:lake, linewidth=0, framestyle=:box)\nplot!(legend=:outerbottom, legendcolumns=4, legendfontsize=6)\nplot!(size=(850,400), left_margin = 5Plots.mm)\n```\n\nWe can use the `partial_fit` method of the EnbPI implementation in ConformalPrediction to adjust prediction intervals to sudden change points in test data that the model has not seen during training. In the experiment below, `sample_size` indicates the number of new observations per batch. You can decide whether to only append the residuals of each new batch, or to also remove the first $n$ residuals (`shift_size` = n). The latter removes early residuals that no longer reflect the current observations. 
\n\nThe chart below compares the results to the previous experiment without updating residuals:\n\n::: {.cell execution_count=9}\n``` {.julia .cell-code}\nsample_size = 30\nshift_size = 100\nlast_index = size(X_test , 1)\nlb_updated , ub_updated = ([], [])\nfor step in 1:sample_size:last_index\n if last_index - step < sample_size\n y_interval = MLJ.predict(conf_model, mach.fitresult, X_test[step:last_index , :])\n partial_fit(mach.model , mach.fitresult, X_test[step:last_index , :], y_test[step:last_index , :], shift_size)\n else\n y_interval = MLJ.predict(conf_model, mach.fitresult, X_test[step:step+sample_size-1 , :])\n partial_fit(mach.model , mach.fitresult, X_test[step:step+sample_size-1 , :], y_test[step:step+sample_size-1 , :], shift_size) \n end \n lb_updatedᵢ= [ minimum(tuple_data) for tuple_data in y_interval]\n push!(lb_updated,lb_updatedᵢ)\n ub_updatedᵢ = [ maximum(tuple_data) for tuple_data in y_interval]\n push!(ub_updated, ub_updatedᵢ)\nend\nlb_updated = reduce(vcat, lb_updated)\nub_updated = reduce(vcat, ub_updated)\n```\n:::\n\n\n``` julia\n#| echo: false\n#| output: true\nplot(df_dropped_missing[cutoff_point:split_index, [:Datetime]].Datetime, y_train[cutoff_point:split_index] ,\n label=\"train\", color=:green , xlabel = \"Date\" , ylabel=\"Electricity demand(GW)\", linewidth=1)\nplot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime, y_test,\n label=\"test\", color=:red)\nplot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime ,\n y_pred, label =\"prediction\", color=:blue)\nplot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime,\n lb_updated, fillrange = ub_updated, fillalpha = 0.2, label = \"EnbPI\",\n color=:lake, linewidth=0, framestyle=:box)\nplot!(legend=:outerbottom, legendcolumns=4)\nplot!(size=(850,400), left_margin = 5Plots.mm)\n```\n\n## Results\n\nIn time series problems, unexpected incidents can lead to sudden changes, and such scenarios are highly probable. As illustrated earlier, the model's training data lacks information about these change points, making it unable to anticipate them. The top figure demonstrates that when residuals are not updated, the prediction intervals solely rely on the distribution of residuals from the training set. Consequently, these intervals fail to encompass the true observations after the change point, resulting in a sudden drop in coverage.\n\nHowever, by partially updating the residuals, the method becomes adept at capturing the increasing uncertainties in model predictions. It is important to note that the changes in uncertainty occur approximately one day after the change point. 
This delay arises because a sufficient number of new residuals is needed before the quantiles obtained from the residual distribution change.\n\n## References\n\n", "supporting": [ "timeseries_files" ], diff --git a/_freeze/docs/src/how_to_guides/timeseries/figure-commonmark/electricity_demand.svg b/_freeze/docs/src/how_to_guides/timeseries/figure-commonmark/electricity_demand.svg new file mode 100644 index 0000000..8eecf70 [SVG figure markup omitted] diff --git a/_freeze/docs/src/how_to_guides/timeseries/figure-commonmark/electricity_demand_pred.svg b/_freeze/docs/src/how_to_guides/timeseries/figure-commonmark/electricity_demand_pred.svg new file mode 100644 index 0000000..f7d545a [SVG figure markup omitted] diff --git a/_freeze/docs/src/how_to_guides/timeseries/figure-commonmark/electricity_demand_pred_enbpi.svg b/_freeze/docs/src/how_to_guides/timeseries/figure-commonmark/electricity_demand_pred_enbpi.svg new file mode 100644 index 0000000..8ff0545 [SVG figure markup omitted] diff --git a/docs/src/how_to_guides/timeseries.md b/docs/src/how_to_guides/timeseries.md index 8c44478..757da45 100644 --- a/docs/src/how_to_guides/timeseries.md +++ b/docs/src/how_to_guides/timeseries.md @@ -32,8 +32,22 @@ df.hour = Dates.hour.(df.Datetime) Additionally, to simulate sudden changes caused by unforeseen events, such as blackouts or lockdowns, we deliberately reduce the electricity demand by 2 GW from February 22nd onward. ``` julia +df.Demand_updated = copy(df.Demand) condition = df.Datetime .>= Date("2014-02-22") -df[condition, :Demand] .= df[condition, :Demand] .- 2 +df[condition, :Demand_updated] .= df[condition, :Demand_updated] .- 2 ``` + +This is how the data looks after our manipulation: + +``` julia +cutoff_point = 200 +plot(df[cutoff_point:split_index, [:Datetime]].Datetime, df[cutoff_point:split_index, :].Demand , +    label="training data", color=:green, xlabel = "Date" , ylabel="Electricity demand(GW)") +plot!(df[split_index+1 : size(df,1), [:Datetime]].Datetime, df[split_index+1 : size(df,1), : ].Demand, +    label="test data", color=:orange, xlabel = "Date" , ylabel="Electricity demand(GW)") +plot!(df[split_index+1 : size(df,1), [:Datetime]].Datetime, df[split_index+1 : size(df,1), : ].Demand_updated, label="updated test data", color=:red, linewidth=1, framestyle=:box) +plot!(legend=:outerbottom, legendcolumns=3) +plot!(size=(850,400), left_margin = 5Plots.mm) ``` ### Lag features @@ -54,9 +68,9 @@ df_dropped_missing As usual, we split the data into train and test sets. We use the first 90% of the data for training and the remaining 10% for testing.
``` julia -features_cols = DataFrames.select(df_dropped_missing, Not([:Datetime, :Demand])) +features_cols = DataFrames.select(df_dropped_missing, Not([:Datetime, :Demand, :Demand_updated])) X = Matrix(features_cols) -y = Matrix(df_dropped_missing[:, [:Demand]]) +y = Matrix(df_dropped_missing[:, [:Demand_updated]]) split_index = floor(Int, 0.9 * size(y , 1)) println(split_index) X_train = X[1:split_index, :] @@ -93,15 +107,30 @@ ub = [ maximum(tuple_data) for tuple_data in y_pred_interval] y_pred = [mean(tuple_data) for tuple_data in y_pred_interval] ``` -![](timeseries_files/figure-commonmark/cell-10-output-1.svg) +``` julia +#| echo: false +#| output: true +cutoff_point = findfirst(df_dropped_missing.Datetime .== Date("2014-02-15")) +plot(df_dropped_missing[cutoff_point:split_index, [:Datetime]].Datetime, y_train[cutoff_point:split_index] , +    label="train", color=:green , xlabel = "Date" , ylabel="Electricity demand(GW)", linewidth=1) +plot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime, +    y_test, label="test", color=:red) +plot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime , +    y_pred, label ="prediction", color=:blue) +plot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime, +    lb, fillrange = ub, fillalpha = 0.2, label = "prediction interval w/o EnbPI", +    color=:lake, linewidth=0, framestyle=:box) +plot!(legend=:outerbottom, legendcolumns=4, legendfontsize=6) +plot!(size=(850,400), left_margin = 5Plots.mm) +``` We can use the `partial_fit` method of the EnbPI implementation in ConformalPrediction to adjust prediction intervals to sudden change points in test data that the model has not seen during training. In the experiment below, `sample_size` indicates the number of new observations per batch. You can decide whether to only append the residuals of each new batch, or to also remove the first *n* residuals (`shift_size` = n). The latter removes early residuals that no longer reflect the current observations, as illustrated by the sketch below.
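To build intuition for what `partial_fit` does to the stored non-conformity scores, here is a minimal, self-contained sketch of the append-then-shift bookkeeping on a toy buffer. The values are hypothetical and this is not the package implementation (that lives in `partial_fit` in `src/conformal_models/transductive_regression.jl`, shown at the end of this diff); it only mimics the update logic.

``` julia
# Hypothetical non-conformity scores accumulated so far (oldest first).
scores = [0.5, 0.6, 0.4, 0.7]

# Scores from a new batch of observations; after a change point,
# residuals tend to be larger.
new_scores = [1.9, 2.1]

# Update step: append the residuals of the new batch.
append!(scores, new_scores)

# Shift step: drop the `shift_size` oldest residuals.
shift_size = 2
if shift_size <= length(scores)
    scores = scores[(shift_size + 1):end]
end

scores  # [0.4, 0.7, 1.9, 2.1]: the quantiles now reflect the recent regime
```

With `shift_size = 0` the buffer only grows, so old residuals keep diluting the new regime; dropping the oldest scores lets the interval width react to the change point faster.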
The chart below compares the results to the previous experiment without updating residuals: ``` julia -sample_size = 10 -shift_size = 10 +sample_size = 30 +shift_size = 100 last_index = size(X_test , 1) lb_updated , ub_updated = ([], []) for step in 1:sample_size:last_index @@ -121,7 +150,21 @@ lb_updated = reduce(vcat, lb_updated) ub_updated = reduce(vcat, ub_updated) ``` -![](timeseries_files/figure-commonmark/cell-12-output-1.svg) +``` julia +#| echo: false +#| output: true +plot(df_dropped_missing[cutoff_point:split_index, [:Datetime]].Datetime, y_train[cutoff_point:split_index] , +    label="train", color=:green , xlabel = "Date" , ylabel="Electricity demand(GW)", linewidth=1) +plot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime, y_test, +    label="test", color=:red) +plot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime , +    y_pred, label ="prediction", color=:blue) +plot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime, +    lb_updated, fillrange = ub_updated, fillalpha = 0.2, label = "EnbPI", +    color=:lake, linewidth=0, framestyle=:box) +plot!(legend=:outerbottom, legendcolumns=4) +plot!(size=(850,400), left_margin = 5Plots.mm) +``` ## Results diff --git a/docs/src/how_to_guides/timeseries.qmd b/docs/src/how_to_guides/timeseries.qmd index 74f3bea..fbd0ae2 100644 --- a/docs/src/how_to_guides/timeseries.qmd +++ b/docs/src/how_to_guides/timeseries.qmd @@ -38,8 +38,21 @@ df.hour = Dates.hour.(df.Datetime) Additionally, to simulate sudden changes caused by unforeseen events, such as blackouts or lockdowns, we deliberately reduce the electricity demand by 2 GW from February 22nd onward. ```{julia} +df.Demand_updated = copy(df.Demand) condition = df.Datetime .>= Date("2014-02-22") -df[condition, :Demand] .= df[condition, :Demand] .- 2 +df[condition, :Demand_updated] .= df[condition, :Demand_updated] .- 2 +``` + +This is how the data looks after our manipulation: + +``` julia +cutoff_point = 200 +plot(df[cutoff_point:split_index, [:Datetime]].Datetime, df[cutoff_point:split_index, :].Demand , +    label="training data", color=:green, xlabel = "Date" , ylabel="Electricity demand(GW)") +plot!(df[split_index+1 : size(df,1), [:Datetime]].Datetime, df[split_index+1 : size(df,1), : ].Demand, +    label="test data", color=:orange, xlabel = "Date" , ylabel="Electricity demand(GW)") +plot!(df[split_index+1 : size(df,1), [:Datetime]].Datetime, df[split_index+1 : size(df,1), : ].Demand_updated, label="updated test data", color=:red, linewidth=1, framestyle=:box) +plot!(legend=:outerbottom, legendcolumns=3) +plot!(size=(850,400), left_margin = 5Plots.mm) ``` ### Lag features @@ -61,9 +74,9 @@ df_dropped_missing As usual, we split the data into train and test sets. We use the first 90% of the data for training and the remaining 10% for testing.
```{julia} -features_cols = DataFrames.select(df_dropped_missing, Not([:Datetime, :Demand])) +features_cols = DataFrames.select(df_dropped_missing, Not([:Datetime, :Demand, :Demand_updated])) X = Matrix(features_cols) -y = Matrix(df_dropped_missing[:, [:Demand]]) +y = Matrix(df_dropped_missing[:, [:Demand_updated]]) split_index = floor(Int, 0.9 * size(y , 1)) println(split_index) X_train = X[1:split_index, :] @@ -100,17 +113,21 @@ ub = [ maximum(tuple_data) for tuple_data in y_pred_interval] y_pred = [mean(tuple_data) for tuple_data in y_pred_interval] ``` -```{julia} +``` julia #| echo: false #| output: true - cutoff_point = findfirst(df_dropped_missing.Datetime .== Date("2014-02-15")) -p1 = plot(df_dropped_missing[cutoff_point:split_index, [:Datetime]].Datetime, y_train[cutoff_point:split_index] , label="train", color=:blue, legend=:bottomleft) -plot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime, y_test, label="test", color=:orange) -plot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime ,y_pred, label ="prediction", color=:green) +plot(df_dropped_missing[cutoff_point:split_index, [:Datetime]].Datetime, y_train[cutoff_point:split_index] , +    label="train", color=:green , xlabel = "Date" , ylabel="Electricity demand(GW)", linewidth=1) plot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime, -    lb, fillrange = ub, fillalpha = 0.2, label = "PI without update of residuals", color=:green, linewidth=0) - +    y_test, label="test", color=:red) +plot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime , +    y_pred, label ="prediction", color=:blue) +plot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime, +    lb, fillrange = ub, fillalpha = 0.2, label = "prediction interval w/o EnbPI", +    color=:lake, linewidth=0, framestyle=:box) +plot!(legend=:outerbottom, legendcolumns=4, legendfontsize=6) +plot!(size=(850,400), left_margin = 5Plots.mm) ``` We can use the `partial_fit` method of the EnbPI implementation in ConformalPrediction to adjust prediction intervals to sudden change points in test data that the model has not seen during training. In the experiment below, `sample_size` indicates the number of new observations per batch. You can decide whether to only append the residuals of each new batch, or to also remove the first $n$ residuals (`shift_size` = n). The latter removes early residuals that no longer reflect the current observations; the sketch after this paragraph shows how to quantify the effect.
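If you want to quantify the comparison rather than eyeball the charts, a quick check of empirical coverage and average interval width can help. The sketch below is not part of the package: `empirical_coverage` and `mean_width` are hypothetical helpers, and the snippet assumes `y_test`, `lb`, `ub`, `lb_updated`, and `ub_updated` as computed by the surrounding cells, so it should be run after the update loop below.

``` julia
using Statistics

# Fraction of test points whose true value falls inside the interval.
empirical_coverage(y, lo, hi) = mean(lo .<= vec(y) .<= hi)

# Average interval width, a proxy for how adaptive the intervals are.
mean_width(lo, hi) = mean(hi .- lo)

println("coverage w/o residual updates:   ", empirical_coverage(y_test, lb, ub))
println("coverage with partial_fit:       ", empirical_coverage(y_test, lb_updated, ub_updated))
println("mean width w/o residual updates: ", mean_width(lb, ub))
println("mean width with partial_fit:     ", mean_width(lb_updated, ub_updated))
```

At the nominal level of 0.95, coverage well below that value after the change point indicates intervals that failed to adapt.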
@@ -119,8 +136,8 @@ ```{julia} -sample_size = 10 -shift_size = 10 +sample_size = 30 +shift_size = 100 last_index = size(X_test , 1) lb_updated , ub_updated = ([], []) for step in 1:sample_size:last_index @@ -140,18 +157,20 @@ lb_updated = reduce(vcat, lb_updated) ub_updated = reduce(vcat, ub_updated) ``` -```{julia} +``` julia #| echo: false #| output: true - -p2 = plot(df_dropped_missing[cutoff_point:split_index, [:Datetime]].Datetime, y_train[cutoff_point:split_index] , label="train", color=:blue, legend=:bottomleft) -plot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime, y_test, label="test", color=:orange) -plot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime ,y_pred, label ="prediction", color=:green) +plot(df_dropped_missing[cutoff_point:split_index, [:Datetime]].Datetime, y_train[cutoff_point:split_index] , +    label="train", color=:green , xlabel = "Date" , ylabel="Electricity demand(GW)", linewidth=1) +plot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime, y_test, +    label="test", color=:red) +plot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime , +    y_pred, label ="prediction", color=:blue) plot!(df_dropped_missing[split_index+1 : size(y,1), [:Datetime]].Datetime, -    lb_updated, fillrange = ub_updated, fillalpha = 0.2, label = "PI with adjusted residuals", color=:green, linewidth=0) - - plot(p1,p2, layout= (2,1)) - +    lb_updated, fillrange = ub_updated, fillalpha = 0.2, label = "EnbPI", +    color=:lake, linewidth=0, framestyle=:box) +plot!(legend=:outerbottom, legendcolumns=4) +plot!(size=(850,400), left_margin = 5Plots.mm) ``` ## Results diff --git a/docs/src/how_to_guides/timeseries_files/figure-commonmark/cell-10-output-1.svg b/docs/src/how_to_guides/timeseries_files/figure-commonmark/cell-10-output-1.svg deleted file mode 100644 index b462309..0000000 [SVG figure markup omitted] diff --git a/docs/src/how_to_guides/timeseries_files/figure-commonmark/cell-12-output-1.svg b/docs/src/how_to_guides/timeseries_files/figure-commonmark/cell-12-output-1.svg deleted file mode 100644 index 2b794af..0000000 [SVG figure markup omitted] diff --git a/docs/src/how_to_guides/timeseries_files/figure-commonmark/electricity_demand.svg b/docs/src/how_to_guides/timeseries_files/figure-commonmark/electricity_demand.svg new file mode 100644 index 0000000..8eecf70 [SVG figure markup omitted] diff --git a/docs/src/how_to_guides/timeseries_files/figure-commonmark/electricity_demand_pred.svg b/docs/src/how_to_guides/timeseries_files/figure-commonmark/electricity_demand_pred.svg new file mode 100644 index 0000000..f7d545a [SVG figure markup omitted]
diff --git a/docs/src/how_to_guides/timeseries_files/figure-commonmark/electricity_demand_pred_enbpi.svg b/docs/src/how_to_guides/timeseries_files/figure-commonmark/electricity_demand_pred_enbpi.svg new file mode 100644 index 0000000..8ff0545 [SVG figure markup omitted] diff --git a/src/conformal_models/transductive_regression.jl b/src/conformal_models/transductive_regression.jl index 864604e..4fed232 100644 --- a/src/conformal_models/transductive_regression.jl +++ b/src/conformal_models/transductive_regression.jl @@ -858,6 +858,10 @@ function partial_fit( ŷₜ = _aggregate(ŷ, aggregate) push!(conf_model.scores, @.(conf_model.heuristic(y, ŷₜ))...) conf_model.scores = filter(!isnan, conf_model.scores) +    if shift_size > length(conf_model.scores) +        @warn "shift_size exceeds the number of stored non-conformity scores; skipping the shift" +        return conf_model.scores +    end conf_model.scores = conf_model.scores[(shift_size + 1):length(conf_model.scores)] return conf_model.scores end
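To illustrate what the new guard in `partial_fit` protects against, here is a standalone sketch with hypothetical values (not a test from the repository). Julia treats empty out-of-bounds ranges as valid indices, so without the guard an oversized `shift_size` would silently empty the score buffer rather than fail loudly:

``` julia
scores = [0.5, 0.6, 0.4]  # hypothetical non-conformity scores
shift_size = 5            # larger than the buffer

# Unguarded shift: (shift_size + 1):length(scores) is the empty range 6:3,
# so the slice discards every stored score without raising an error.
unguarded = scores[(shift_size + 1):length(scores)]  # Float64[]

# Guarded shift, mirroring the patched partial_fit:
if shift_size > length(scores)
    @warn "shift_size exceeds the number of stored non-conformity scores; skipping the shift"
else
    scores = scores[(shift_size + 1):end]
end

scores  # still [0.5, 0.6, 0.4]: the calibration buffer is preserved
```

Keeping the scores intact means `predict` can still form intervals from the existing residual distribution until enough new observations arrive to perform the shift.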