Motivated by recent advances in large language models for Natural Language Processing (NLP), we design a time-series foundation model for forecasting whose out-of-the-box zero-shot performance on a variety of public datasets comes close to the accuracy of state-of-the-art supervised forecasting models for each individual dataset. Our model is based on pretraining a patched-decoder style attention model on a large time-series corpus, and can work well across different forecasting history lengths, prediction lengths and temporal granularities.
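The patched-decoder idea can be sketched compactly: slice the context window into fixed-length patches, embed each patch as a token, and run a causal Transformer over the token sequence. The snippet below is a minimal, assumed rendering of that pattern in PyTorch; the patch length, model sizes, and output head are illustrative choices, not the model's actual configuration.

```python
# Minimal sketch of a patched-decoder forecaster (assumed layout, not
# the published architecture): fixed-size input patches feed a causal
# Transformer, and the last token predicts the forecast horizon.
import torch
import torch.nn as nn

class PatchedDecoderForecaster(nn.Module):
    def __init__(self, patch_len=32, horizon_len=128, d_model=256,
                 n_heads=4, n_layers=4):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(patch_len, d_model)   # patch -> token embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, horizon_len)  # token -> output patch

    def forward(self, series):
        # series: (batch, context_len), with context_len % patch_len == 0
        b, t = series.shape
        patches = series.view(b, t // self.patch_len, self.patch_len)
        tokens = self.embed(patches)
        # causal mask so each patch token attends only to earlier patches
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        hidden = self.decoder(tokens, mask=mask)
        return self.head(hidden[:, -1])              # (batch, horizon_len)

model = PatchedDecoderForecaster()
forecast = model(torch.randn(8, 256))  # 8 series with 256-step histories
print(forecast.shape)                  # torch.Size([8, 128])
```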
A high carry predicts future crypto price crashes. The results further imply "excess volatility" of crypto futures relative to spot prices: the estimates suggest that changes in futures prices are about ten times more volatile than changes in spot prices.
The crypto futures basis tends to be elevated when smaller entities seek leveraged upside exposure.
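For reference, the carry (basis) being discussed is the percentage gap between futures and spot prices, annualized. A minimal sketch, assuming a simple 365-day convention and illustrative prices:

```python
# Hedged sketch of annualized futures carry (basis); the day-count
# convention and the example inputs are illustrative assumptions.
def annualized_carry(futures_price, spot_price, days_to_expiry):
    """Annualized carry: (F/S - 1) scaled to a full year."""
    return (futures_price / spot_price - 1.0) * (365.0 / days_to_expiry)

# Example: a 3-month future trading 2% above spot
print(annualized_carry(51_000, 50_000, 91))  # ~0.08, i.e. ~8% annualized
```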
This work thus provides strong empirical evidence towards developing scaling laws for reinforcement learning.
We document return predictability from deep-learning models that cannot be explained by common risk factors or limits to arbitrage.
A strong positive effect of debt refinancing risk, as measured by refinancing intensity, on excess bond returns in the subsequent year supports the rollover risk channel.
Signals constructed as linear combinations of exogenous variables.
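A minimal sketch of that construction, assuming the weights of the linear combination are fit by ordinary least squares on next-period returns; the data and variable names are illustrative:

```python
# Sketch: a trading signal as a linear combination of exogenous variables,
# with weights estimated by OLS (an assumed fitting choice).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))        # exogenous variables (e.g. macro factors)
forward_returns = X @ np.array([0.5, -0.2, 0.1]) + rng.normal(scale=0.5, size=500)

# weights of the linear combination, estimated by least squares
w, *_ = np.linalg.lstsq(X, forward_returns, rcond=None)
signal = X @ w                        # the signal itself
print(np.round(w, 2))
```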
Statistical arbitrage portfolios built with graph clustering algorithms.
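One common way to realize this, sketched below under assumed choices (correlation distance, average linkage), is to cluster assets by their return correlations and trade spreads within each cluster; this is an illustration, not the specific algorithm from the linked work:

```python
# Sketch: group assets by clustering a correlation graph of returns,
# then trade mean-reverting spreads within each cluster.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
returns = rng.normal(size=(250, 10))     # 250 days x 10 assets (synthetic)
corr = np.corrcoef(returns.T)
dist = np.sqrt(0.5 * (1.0 - corr))       # correlation distance
np.fill_diagonal(dist, 0.0)

Z = linkage(squareform(dist, checks=False), method="average")
clusters = fcluster(Z, t=4, criterion="maxclust")
print(clusters)  # cluster label per asset; candidates for within-cluster spreads
```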
SPRING is an LLM-based policy that outperforms reinforcement learning algorithms in an interactive environment requiring multi-task planning and reasoning. A group of researchers from Carnegie Mellon University, NVIDIA, Ariel University, and Microsoft investigated the use of Large Language Models (LLMs) for understanding and reasoning with human knowledge in the context of games. They propose a two-stage approach called SPRING: first studying an academic paper, then using a Question-Answer (QA) framework to reason about the knowledge obtained. In the first stage, the authors read the LaTeX source code of the original paper by Hafner (2021).
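The QA stage can be pictured as a chain of prompts in which earlier answers condition later questions. The snippet below is a hypothetical illustration only: ask_llm is a stand-in for any LLM completion call, and the questions are not the paper's exact QA graph.

```python
# Rough, hypothetical sketch of a SPRING-style QA chain: each answer is
# appended to the context for the next question, and the final answer is
# parsed into an environment action.
def ask_llm(prompt: str) -> str:
    # placeholder: substitute a real LLM completion call here
    return "collect wood"  # canned answer for illustration

def spring_act(paper_knowledge: str, observation: str) -> str:
    questions = [
        "What are the top sub-tasks to focus on right now?",
        "Which sub-task is achievable from the current state?",
        "What is the single best action to take next?",
    ]
    answers: list[str] = []
    for q in questions:
        context = "\n".join(answers)
        prompt = (f"{paper_knowledge}\n\nObservation: {observation}\n"
                  f"{context}\nQ: {q}\nA:")
        answers.append(ask_llm(prompt))
    return answers[-1]

print(spring_act("(summary of Hafner, 2021)", "You see a tree and grass."))
```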