MONSTER: Monash Scalable Time Series Evaluation Repository

Paper: arXiv:2502.15122 (preprint; also listed on Hugging Face Papers)
Code: GitHub

We introduce MONSTER (the MONash Scalable Time Series Evaluation Repository), a collection of large datasets for time series classification. The field of time series classification has benefitted from the common benchmarks set by the UCR and UEA time series classification repositories. However, the datasets in these benchmarks are small, with median sizes of 217 and 255 examples, respectively. In consequence, they favour a narrow subspace of models optimised to achieve low classification error on a wide variety of smaller datasets, that is, models that minimise variance, and they give little weight to computational issues such as scalability. Our hope is to diversify the field by introducing benchmarks using larger datasets. We believe there is enormous potential for new progress in the field by engaging with the theoretical and practical challenges of learning effectively from larger quantities of data.

Please cite as:

@article{dempster_etal_2025,
  author  = {Dempster, Angus and Foumani, Navid Mohammadi and Tan, Chang Wei and Miller, Lynn and Mishra, Amish and Salehi, Mahsa and Pelletier, Charlotte and Schmidt, Daniel F and Webb, Geoffrey I},
  title   = {MONSTER: Monash Scalable Time Series Evaluation Repository},
  year    = {2025},
  journal = {arXiv:2502.15122},
}

Downloading Data

hf_hub_download

from huggingface_hub import hf_hub_download
import numpy as np

# Download a single array file from the Hub; memory-mapping (mmap_mode="r")
# reads the (potentially large) array from disk on demand rather than
# loading it into memory all at once.
path = hf_hub_download(repo_id="monster-monash/Pedestrian", filename="Pedestrian_X.npy", repo_type="dataset")

X = np.load(path, mmap_mode="r")
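
The same pattern extends to the class labels and to the other datasets in the repository. The helper below is a minimal sketch, not part of any official API; in particular, it assumes the labels follow the same naming pattern as the data (i.e., Pedestrian_y.npy alongside Pedestrian_X.npy).

from huggingface_hub import hf_hub_download
import numpy as np

def load_monster(name):
    # Download the data file and the (assumed) label file for the given dataset.
    x_path = hf_hub_download(repo_id=f"monster-monash/{name}", filename=f"{name}_X.npy", repo_type="dataset")
    y_path = hf_hub_download(repo_id=f"monster-monash/{name}", filename=f"{name}_y.npy", repo_type="dataset")
    X = np.load(x_path, mmap_mode="r")  # lazy, read-only memory map
    y = np.load(y_path)                 # labels are comparatively small
    return X, y

X, y = load_monster("Pedestrian")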

load_dataset

from datasets import load_dataset

# Folds are exposed as Hugging Face configuration names: "fold_0", etc.
dataset = load_dataset("monster-monash/Pedestrian", "fold_0", trust_remote_code=True)
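
load_dataset returns a DatasetDict keyed by split. Continuing from the snippet above, a minimal sketch of inspecting the result and switching to NumPy output; split and column names vary by dataset, so print the object rather than assuming them:

# Show the available splits, columns, and sizes before indexing.
print(dataset)

# with_format("numpy") makes indexing return NumPy arrays instead of Python lists.
dataset = dataset.with_format("numpy")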

(More to come...)

🦖