"TTM's computational requirements are low by design: it can be pre-trained efficiently on a handful of high-memory GPUs within hours and fine-tuned on a single GPU, enabling fast experimentation and deployment even in limited-resource settings. This contrasts sharply with large transformer-based models requiring weeks of training on dozens of GPUs".
Specific datasets used for pre-training and evaluating TTM:
1. Pre-training data is drawn mainly from the Monash Time Series Forecasting repository (~244M to ~1B samples); see the loading sketch after this list.
2. Datasets such as Australian Electricity Demand are included within the Monash collection.
3. The Informer benchmark datasets are used for fine-tuning and evaluation rather than pre-training.
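Since the Monash datasets are publicly mirrored on the Hugging Face Hub, a quick way to inspect one of them (here Australian Electricity Demand) is sketched below. The dataset id `monash_tsf`, the config name, and the `target` field are assumptions based on the public Hub listing and may need adjusting for your `datasets` version.

```python
# Hedged sketch: loading one Monash series from the Hugging Face Hub.
# Dataset id, config name, and field names are assumptions from the public
# "monash_tsf" listing; script-based datasets may need trust_remote_code=True.
from datasets import load_dataset

ds = load_dataset("monash_tsf", "australian_electricity_demand",
                  trust_remote_code=True)
series = ds["train"][0]["target"]   # raw univariate series (list of floats)
print(len(series), series[:5])
```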