---
license: apache-2.0
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: name
    dtype: string
  - name: seed
    dtype: int64
  - name: weight
    dtype: string
  - name: context_sources
    sequence: string
  - name: skills
    sequence: string
  - name: background
    dtype: string
  - name: scenario
    dtype: string
  - name: constraints
    dtype: string
  - name: seasonal_period
    dtype: int64
  - name: past_time
    dtype: string
  - name: future_time
    dtype: string
  - name: metric_scaling
    dtype: float64
  - name: region_of_interest
    sequence: int64
  - name: constraint_min
    dtype: float64
  - name: constraint_max
    dtype: float64
  - name: constraint_variable_max_index
    sequence: int64
  - name: constraint_variable_max_values
    sequence: float64
  splits:
  - name: test
    num_bytes: 1513965
    num_examples: 355
  download_size: 213607
  dataset_size: 1513965
task_categories:
- time-series-forecasting
language:
- en
pretty_name: Context is Key
size_categories:
- n<1K
---
# Context is Key dataset

This dataset contains the samples from the [Context is Key benchmark](https://arxiv.org/abs/2410.18959).

While we encourage users of the benchmark to instantiate it using its [Code repository](https://github.com/ServiceNow/context-is-key-forecasting),
we understand that using this dataset can be more convenient.

## Splits

Context is Key is meant to be used as a benchmark, so it only has a test split.
The splits in this dataset are therefore used to represent successive versions of the dataset, created when correcting minor errors found after its initial release.

* **test**: The latest version of the dataset.
* **ICML2025**: The version of the dataset used for the experiments whose results were published at ICML 2025.

The differences between **test** and **ICML2025** are in the `FullCausalContextImplicitEquationBivarLinSVAR` and `FullCausalContextExplicitEquationBivarLinSVAR` tasks,
where the context contained unscaled numbers in **ICML2025** and scaled numbers in **test**.

## Features

| Feature    | Content |
| -------- | ------- |
| name | The name of the task, also the name of the class generating the task in the [code](https://github.com/ServiceNow/context-is-key-forecasting) |
| seed | An integer between 1 and 5, to distinguish various instances of the same task |
| weight | A fraction indicating the relative weight this task has in aggregated RCRPS results |
| context_sources | A list of strings indicating whether the context contains past, future, causal, ... information |
| skills | A list of strings indicating skills which should help models accurately solve the task |
| background | Part of the textual context (mostly the part which doesn't depend on the instance) |
| scenario | Part of the textual context (mostly the part which does depend on the instance) |
| constraints | Part of the textual context (explicit constraints on valid forecasts) |
| seasonal_period | A reasonable guess at the seasonal period of the time series, for models which require it. -1 if there is no seasonal periodicity. |
| past_time | Pandas DataFrame converted to JSON containing the historical portion of the time series |
| future_time | Pandas DataFrame converted to JSON containing the portion of the time series to be forecasted |
| metric_scaling | Multiplier of the RCRPS metric, to handle the changes in scale between tasks |
| region_of_interest | List of indices of *future_time* which should have more weight in the RCRPS metric |
| constraint_min | Any forecasted value below this value will be penalized in the RCRPS metric |
| constraint_max | Any forecasted value above this value will be penalized in the RCRPS metric |
| constraint_variable_max_index | A list of indices for which there is a maximum constraint |
| constraint_variable_max_values | A list of maximum values; any forecasted value above these at the associated indices will be penalized in the RCRPS metric |
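
As an illustration of how the constraint features fit together, here is a minimal sketch that checks a forecast against the scalar bounds and the per-index maxima described above. It is not the actual RCRPS penalty computation (which lives in the repository's script); the `entry` dict is a hypothetical sample, and it assumes absent scalar constraints are stored as NaN.

```python
import math

def constraint_violations(forecast, entry):
    """Return the sorted indices of `forecast` that violate the task's constraints."""
    violations = []
    for i, value in enumerate(forecast):
        # Scalar lower/upper bounds apply to every forecasted value.
        if not math.isnan(entry["constraint_min"]) and value < entry["constraint_min"]:
            violations.append(i)
        elif not math.isnan(entry["constraint_max"]) and value > entry["constraint_max"]:
            violations.append(i)
    # Per-index maxima: the two constraint_variable_max_* features are parallel lists.
    for idx, max_value in zip(entry["constraint_variable_max_index"],
                              entry["constraint_variable_max_values"]):
        if forecast[idx] > max_value and idx not in violations:
            violations.append(idx)
    return sorted(violations)

# Hypothetical entry: a lower bound of 0 and a per-index maximum at index 2.
entry = {
    "constraint_min": 0.0,
    "constraint_max": float("nan"),
    "constraint_variable_max_index": [2],
    "constraint_variable_max_values": [5.0],
}
print(constraint_violations([1.0, -0.5, 6.0], entry))  # → [1, 2]
```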

Users of the benchmark should only give the *background*, *scenario*, *constraints*, *seasonal_period*, and *past_time* features to their model,
together with the timestamps of *future_time*.
The other features are there to compute the RCRPS metric and to classify the tasks.
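
For example, the textual context a model is allowed to see could be assembled from those features as below. The exact prompt format is up to the user; the `entry` dict here is a hypothetical sample.

```python
def build_context(entry):
    """Concatenate the textual context fields a model is allowed to see."""
    parts = [entry["background"], entry["scenario"], entry["constraints"]]
    # Skip empty fields so the prompt stays compact.
    return "\n\n".join(p for p in parts if p)

# Hypothetical sample; real entries come from the dataset's test split.
entry = {
    "background": "The series is hourly electricity demand.",
    "scenario": "A heat wave is forecast for the coming days.",
    "constraints": "",  # no explicit constraints for this hypothetical task
}
context = build_context(entry)
print(context)
```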

Note: to convert *past_time* and *future_time* back to Pandas DataFrames, use the following snippet: `pd.read_json(StringIO(entry["past_time"]))` (with `import pandas as pd` and `from io import StringIO`).
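
A minimal round-trip sketch of that conversion, using a synthetic DataFrame as a stand-in for `entry["past_time"]` (which, per the feature table, is a DataFrame serialized to JSON):

```python
from io import StringIO
import pandas as pd

# Synthetic stand-in for entry["past_time"]: a small time-indexed DataFrame
# serialized with to_json, then read back the same way the dataset's
# past_time/future_time strings can be.
df = pd.DataFrame(
    {"value": [1.0, 2.0, 3.0]},
    index=pd.to_datetime(["2020-01-01", "2020-01-02", "2020-01-03"]),
)
serialized = df.to_json()

past_time = pd.read_json(StringIO(serialized))
print(past_time.shape)  # → (3, 1)
```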

## Computing the RCRPS metric

Code to compute the RCRPS metric is available in the [`compute_rcrps_with_hf_dataset.py`](https://huggingface.co/datasets/ServiceNow/context-is-key/blob/main/compute_rcrps_with_hf_dataset.py) script inside this dataset repository.
Please look at the `__main__` section of the script for an example of how to use it.