Accurate crop yield forecasting plays a pivotal role in ensuring food security, guiding policy decisions, and optimizing resource management. This review brings together more than fifty years of progress in time series forecasting for agriculture, tracing the evolution from classical statistical approaches (1970s–1990s) to advanced time series models (1990s–2010s) and, most recently, to deep learning architectures (2015–2025). The methods examined include multiple regression, principal component analysis (PCA), logistic regression, autoregressive integrated moving average (ARIMA) models, state space formulations, and a growing array of deep learning techniques such as long short-term memory (LSTM) networks, convolutional neural networks (CNNs), and Transformer-based models. Through a structured comparative lens, this review assesses the strengths, limitations, data requirements, and computational demands of each methodological category. The findings underscore that no single approach is universally optimal; the choice of model depends on data availability, forecast horizon, computational capacity, and the specific agricultural context. Recent advances highlight the promise of hybrid models that integrate complementary techniques, improving predictive accuracy while preserving interpretability. A central challenge identified is climate non-stationarity, which calls for adaptive forecasting methods. At the same time, the convergence of advanced analytics, satellite remote sensing, IoT sensor networks, and climate science is opening unprecedented opportunities for agricultural prediction. Looking ahead, future research directions include climate-adaptive forecasting systems, hybrid frameworks that combine mechanistic and learning-based approaches, and explainable artificial intelligence tailored to agricultural applications.