AI for Time Series and Anomaly Detection
Journal of Artificial Intelligence and Big Data | Vol 4, Issue 2
Table 1. Comparison of Traditional, Machine Learning, and Deep Learning Approaches for Time Series Forecasting and Anomaly Detection
| Approach Type | Representative Models / Techniques | Key Features | Strengths | Limitations | Key References |
|---|---|---|---|---|---|
| Traditional Statistical Models | ARIMA, SARIMA, Holt-Winters, Exponential Smoothing | Assume linearity and stationarity; rely on historical trends | Simple, interpretable, computationally efficient | Poor for nonlinear/multivariate data; sensitive to noise and nonstationarity | Hyndman & Athanasopoulos (2021); Zhang & Kim (2022) |
| Statistical Anomaly Detection | Z-score, Grubbs’ test, Control Charts | Detects deviations from mean or standard deviation thresholds | Easy to implement; interpretable | Breaks down on non-Gaussian data; fixed thresholds cannot track drifting baselines | Ahmed et al. (2023) |
| Machine Learning Models | SVM, Random Forest, Gradient Boosting, Prophet, Hybrid ARIMA-ML | Data-driven, nonlinear modeling | No need for strict statistical assumptions; flexible | Heavy feature engineering; limited temporal awareness | Wang & Zhou (2023); Pérez-Chacón et al. (2022) |
| Deep Learning Models (Sequential) | RNN, LSTM, GRU | Capture temporal dependencies; learn directly from data | Effective for sequence learning; strong predictive accuracy | Vanishing gradients; sequential computation limits scalability | Lim & Zohren (2021) |
| Deep Learning Models (Convolutional) | Temporal Convolutional Networks (TCN) | Uses dilated convolutions for long-term patterns | Parallelizable; efficient | May overlook global temporal context | Bai et al. (2023) |
| Transformer-Based Models | Temporal Fusion Transformer (TFT), Informer, TimesNet | Self-attention for long-range dependencies; interpretable embeddings | High scalability; superior multivariate handling | Requires large datasets and tuning | Xu et al. (2024); Lai et al. (2023) |
| AI-Based Anomaly Detection | Autoencoder, VAE, GAN, GNN, Attention-based models | Learn representations of normal behavior to flag deviations | Works in unsupervised settings; handles multivariate data | Limited interpretability; high computation | Darban et al. (2022); Iqbal et al. (2024); Chiranjeevi et al. (2024) |
| Emerging Hybrid / Edge Models | Physics-informed NN, Federated Learning, XAI frameworks | Combines interpretability, causality, and scalability | Explainable; data-efficient; privacy-preserving | Still developing; less standardized | Lee & Park (2024); Chen et al. (2024); Méndez et al. (2024) |
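To make the traditional statistical row concrete, the sketch below implements simple exponential smoothing, the building block underlying Holt-Winters and related methods in Table 1. The series values and smoothing factor `alpha` are illustrative, not taken from any cited study.

```python
def exponential_smoothing(series, alpha):
    """Simple exponential smoothing: each smoothed value is a weighted
    average of the current observation (weight alpha) and the previous
    smoothed value (weight 1 - alpha)."""
    smoothed = [series[0]]  # initialize with the first observation
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

# Illustrative series; the last smoothed value serves as a one-step forecast.
print(exponential_smoothing([10.0, 12.0, 11.0, 13.0, 12.5], alpha=0.5))
```

A higher `alpha` tracks recent observations more closely; a lower `alpha` yields a smoother, slower-reacting forecast, which is the linearity/stationarity trade-off the table's "Limitations" column alludes to.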
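The statistical anomaly detection row can likewise be sketched with a minimal z-score detector using only the Python standard library; the sample data and threshold are illustrative assumptions.

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Return indices of points whose absolute z-score exceeds the
    threshold, i.e. points far from the mean in standard-deviation units."""
    mean = statistics.fmean(values)
    std = statistics.pstdev(values)  # population standard deviation
    return [i for i, v in enumerate(values)
            if std > 0 and abs(v - mean) / std > threshold]

# The spike at index 5 is flagged; the rest stay within 2 standard deviations.
print(zscore_anomalies([10, 11, 9, 10, 12, 50, 10, 11], threshold=2.0))
```

Note how the detector assumes roughly Gaussian, stationary data: a single large outlier inflates the mean and standard deviation, which is exactly the sensitivity the table lists as a limitation of this approach family.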