The back-propagation algorithm is the cornerstone of deep learning. Despite its importance, few variations of the algorithm have been attempted. This work presents an approach to discover new variations of the back-propagation equation. We use a domain-specific language to describe update equations as a list of primitive functions. An evolution-based method is used to discover new propagation rules that maximize the generalization performance after a few epochs of training. We find several update equations that train faster than standard back-propagation over short training times, and perform similarly to standard back-propagation at convergence.
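As a rough illustration of the idea only (not the paper's actual search space or controller), the sketch below builds candidate update rules from a handful of primitive functions and scores each by the loss reached after a few training steps on a toy regression problem; the primitives, the random-search loop, and the task are all assumptions made for the example.

```python
# Minimal sketch (not the paper's search space): candidate update rules are
# built from a small set of primitives and scored by the loss after a few
# training steps on a toy problem; the best candidate is kept.
import numpy as np

rng = np.random.default_rng(0)

# Primitive operands available to an update rule (illustrative assumptions):
# the raw gradient g, its sign, a clipped version, and a weight-decay term.
PRIMITIVES = {
    "g":       lambda g, w: g,
    "sign_g":  lambda g, w: np.sign(g),
    "clip_g":  lambda g, w: np.clip(g, -1.0, 1.0),
    "w_decay": lambda g, w: g + 1e-2 * w,
}

def make_rule(name, lr):
    """An update rule maps (gradient, weights) to a weight delta."""
    return lambda g, w: -lr * PRIMITIVES[name](g, w)

def evaluate(rule, steps=50):
    """Train a small linear model for a few steps and return the final loss."""
    X = rng.normal(size=(100, 5))
    true_w = rng.normal(size=5)
    y = X @ true_w
    w = np.zeros(5)
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w + rule(grad, w)
    return np.mean((X @ w - y) ** 2)

# Tiny stand-in for the evolutionary search: random proposals over
# (primitive, learning rate), keeping the best short-horizon loss.
best = None
for _ in range(30):
    name = rng.choice(list(PRIMITIVES))
    lr = 10 ** rng.uniform(-3, -1)
    loss = evaluate(make_rule(name, lr))
    if best is None or loss < best[0]:
        best = (loss, name, lr)
print("best rule:", best)
```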
In our exercise, we built a database using only photos from public websites
By using neural nets, we are able to outperform cache-optimized B-Trees by up to 70% in speed while saving an order of magnitude in memory over several real-world data sets.
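A minimal sketch of the learned-index idea behind that claim: fit a model that approximates the position of a key in a sorted array, then correct the prediction with a search bounded by the model's maximum error. The linear model and synthetic keys below are assumptions for illustration; the paper itself uses a recursive model index.

```python
# Learned-index sketch: a model predicts the position of a key in a sorted
# array; a bounded local search fixes up the prediction error.
import bisect
import numpy as np

keys = np.sort(np.random.default_rng(1).uniform(0, 1e6, size=100_000))

# Stand-in model: a linear fit of position vs. key (the paper learns the
# data's CDF with a hierarchy of models instead).
positions = np.arange(len(keys))
slope, intercept = np.polyfit(keys, positions, deg=1)

# The worst-case prediction error bounds the search window.
pred = np.clip(slope * keys + intercept, 0, len(keys) - 1).astype(int)
max_err = int(np.max(np.abs(pred - positions)))

def lookup(key):
    """Predict the position, then binary-search only the error window."""
    guess = int(np.clip(slope * key + intercept, 0, len(keys) - 1))
    lo = max(0, guess - max_err)
    hi = min(len(keys), guess + max_err + 1)
    i = lo + bisect.bisect_left(keys[lo:hi], key)
    return i if i < len(keys) and keys[i] == key else None

print(lookup(keys[1234]))   # index of an existing key
print(lookup(-1.0))         # None: key not present
```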
A major attraction of the Black–Litterman approach for portfolio optimization is the potential for integrating subjective views on expected returns. In this article, the authors provide a new approach for deriving the views and their uncertainty using predictive regressions estimated in a Bayesian framework. The authors show that the Bayesian estimation of predictive regressions fits perfectly with the idea of Black–Litterman. The subjective element is introduced in terms of the investors’ belief about the degree of predictability of the regression. In this setup, the uncertainty of views is derived naturally from the Bayesian regression, rather than by using the covariance of returns. Finally, the authors show that this approach of integrating uncertainty about views is the main reason this method outperforms other strategies.
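A minimal sketch of the Black–Litterman posterior combination that the article builds on. The view vector q and its uncertainty Omega are exactly the parts the authors derive from Bayesian predictive regressions; here they are placeholder numbers, as are Sigma, pi, and tau.

```python
# Black-Litterman posterior mean: precision-weighted blend of equilibrium
# returns and subjective views. All inputs below are placeholder values.
import numpy as np

Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])   # asset return covariance
pi = np.array([0.05, 0.07])                       # equilibrium (prior) returns
tau = 0.05                                        # prior scaling parameter

# One absolute view: asset 2 is expected to return 10%.
P = np.array([[0.0, 1.0]])      # view-picking matrix
q = np.array([0.10])            # view returns
Omega = np.array([[0.02]])      # view uncertainty; the article derives this
                                # from the Bayesian regression, fixed here

tau_Sigma_inv = np.linalg.inv(tau * Sigma)
Omega_inv = np.linalg.inv(Omega)

# Posterior mean: equilibrium returns and views, weighted by their precisions.
A = tau_Sigma_inv + P.T @ Omega_inv @ P
b = tau_Sigma_inv @ pi + P.T @ Omega_inv @ q
mu_bl = np.linalg.solve(A, b)
print("posterior expected returns:", mu_bl)
```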
In this article, the author introduces the Hierarchical Risk Parity (HRP) approach to address three major concerns of quadratic optimizers, in general, and Markowitz’s critical line algorithm (CLA), in particular: instability, concentration, and underperformance. HRP applies modern mathematics (graph theory and machine-learning techniques) to build a diversified portfolio based on the information contained in the covariance matrix. However, unlike quadratic optimizers, HRP does not require the invertibility of the covariance matrix. In fact, HRP can compute a portfolio on an ill-degenerated or even a singular covariance matrix—an impossible feat for quadratic optimizers. Monte Carlo experiments show that HRP delivers lower out-of-sample variance than CLA, even though minimum variance is CLA’s optimization objective. HRP also produces less risky portfolios out of sample compared to traditional risk parity methods.
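A simplified sketch of the three HRP stages described above: tree clustering on a correlation-based distance, quasi-diagonalization (ordering assets by the dendrogram), and recursive bisection with inverse-variance splits. Single-linkage clustering and the small covariance matrix are assumptions; this is not López de Prado's reference implementation.

```python
# Simplified HRP: cluster on correlation distance, order assets by the
# dendrogram, then split capital top-down using inverse-variance allocations.
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import squareform

def hrp_weights(cov):
    corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))
    dist = np.sqrt(0.5 * (1.0 - corr))           # correlation distance
    np.fill_diagonal(dist, 0.0)
    order = leaves_list(linkage(squareform(dist, checks=False), method="single"))

    weights = np.ones(len(cov))
    clusters = [list(order)]                      # all assets, dendrogram order
    while clusters:
        cluster = clusters.pop()
        if len(cluster) < 2:
            continue
        left, right = cluster[: len(cluster) // 2], cluster[len(cluster) // 2:]
        # Variance of each half under an inverse-variance sub-portfolio.
        var = []
        for half in (left, right):
            sub = cov[np.ix_(half, half)]
            ivp = 1.0 / np.diag(sub)
            ivp /= ivp.sum()
            var.append(ivp @ sub @ ivp)
        alpha = 1.0 - var[0] / (var[0] + var[1])
        weights[left] *= alpha                    # split capital between halves
        weights[right] *= 1.0 - alpha
        clusters += [left, right]
    return weights / weights.sum()

cov = np.array([[0.09, 0.02, 0.01],
                [0.02, 0.04, 0.015],
                [0.01, 0.015, 0.0625]])
print(hrp_weights(cov))
```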
Jupyter notebooks are a tool for exploration, not for production.
Accepted papers at BayesOpt 2017 (NIPS Workshop on Bayesian Optimization), December 9, 2017, Long Beach, USA.
We introduced ROBO, a flexible Bayesian optimization framework in Python. For standard GP-based blackbox optimization, its performance is on par with Spearmint while using the permissive BSD license. Most importantly, to the best of our knowledge, ROBO is the first BO package that includes Bayesian neural network models and that implements specialized BO methods that go beyond the blackbox paradigm to allow orders of magnitude speedup.
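For context, the loop below is a generic GP-based Bayesian optimization sketch with an expected-improvement acquisition, built on scikit-learn rather than ROBO's own API (not shown here); the objective, kernel, and candidate grid are assumptions made for the example.

```python
# Generic Bayesian optimization loop: GP surrogate + expected improvement.
# Illustrates what frameworks like ROBO automate; this is not ROBO's API.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    return np.sin(3 * x) + 0.5 * x             # blackbox function to minimize

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(3, 1))             # a few random initial points
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(15):
    gp.fit(X, y)
    # Expected improvement over a dense grid of candidate points.
    cand = np.linspace(-2, 2, 500).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next)[0])

print("best x:", X[np.argmin(y)], "best value:", y.min())
```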
volumes:
1) vectors and matrices
2) derivatives
3) integrals