Limbo’s documentation

Limbo (LIbrary for Model-Based Optimization) is an open-source C++11 library for Gaussian processes and data-efficient optimization (e.g., Bayesian optimization, see [2][9]) that is designed to be both highly flexible and very fast. It can be used as a state-of-the-art optimization library or as a framework for experimenting with novel algorithms built from "plugin" components. Limbo is currently mostly used for data-efficient policy search in robot learning [2] and online adaptation, because computation time matters when using the low-power embedded computers of robots. For example, Limbo was the key library in developing a new algorithm that allows a legged robot to learn a new gait after mechanical damage in about 10-15 trials (2 minutes) [1], and a 4-DOF manipulator to learn neural network policies for goal reaching in about 5 trials [4].

The implementation of Limbo follows a policy-based design [1] that leverages C++ templates: this allows it to be highly flexible without the cost induced by classic object-oriented designs (the cost of virtual functions). The regression benchmarks show that the query time of Limbo's Gaussian processes is several orders of magnitude lower than that of GPy (a state-of-the-art Python library for Gaussian processes) for similar accuracy (the learning time depends mostly on the optimization algorithm chosen to optimize the hyper-parameters). The black-box optimization benchmarks demonstrate that Limbo is about 2 times faster than BayesOpt (a C++ library for data-efficient optimization [8]) for similar accuracy and data-efficiency. In practice, changing one of the components of an algorithm in Limbo (e.g., the acquisition function) usually requires changing only a template definition in the source code. This design allows users to rapidly experiment and test new ideas while keeping the software as fast as specialized code.

Limbo takes advantage of multi-core architectures to parallelize the internal optimization processes (optimization of the acquisition function, optimization of the hyper-parameters of a Gaussian process) and it vectorizes many of the linear algebra operations (via the Eigen 3 library and optional bindings to Intel’s MKL).

The library is distributed under the CeCILL-C license via a GitHub repository. The code is standard-compliant, but it is currently mostly developed for GNU/Linux and Mac OS X with both the GCC and Clang compilers. New contributors can rely on a full API reference, and their contributions are checked via a continuous integration platform (automatic unit-testing routines).

Limbo is currently used in the ERC project ResiBots, which is focused on data-efficient trial-and-error learning for robot damage recovery, and in the H2020 project PAL, which uses social robots to help people cope with diabetes. It has been instrumental in many scientific publications since 2015 [1][5][11][4][10][3].

Limbo shares many ideas with Sferes2, a similar framework for evolutionary computation.


[1] Andrei Alexandrescu. Modern C++ Design: Generic Programming and Design Patterns Applied. Addison-Wesley, 2001.
[2] Eric Brochu, Vlad M. Cora, and Nando de Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599, 2010.
[3] Konstantinos Chatzilygeroudis and Jean-Baptiste Mouret. Using parameterized black-box priors to scale up model-based policy search for robotics. In International Conference on Robotics and Automation (ICRA), 2018.
[4] Konstantinos Chatzilygeroudis, Roberto Rama, Rituraj Kaushik, Dorian Goepp, Vassilis Vassiliades, and Jean-Baptiste Mouret. Black-box data-efficient policy search for robotics. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada, September 2017.
[5] Konstantinos Chatzilygeroudis, Vassilis Vassiliades, and Jean-Baptiste Mouret. Reset-free trial-and-error learning for robot damage recovery. Robotics and Autonomous Systems, 2018.
[1] Antoine Cully, Jeff Clune, Danesh Tarapore, and Jean-Baptiste Mouret. Robots that can adapt like animals. Nature, 521(7553):503–507, May 2015. doi:10.1038/nature14422.
[2] Daniel J. Lizotte, Tao Wang, Michael H. Bowling, and Dale Schuurmans. Automatic gait optimization with Gaussian process regression. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), volume 7, 944–949, 2007.
[8] Ruben Martinez-Cantin. BayesOpt: a Bayesian optimization library for nonlinear optimization, experimental design and bandits. Journal of Machine Learning Research, 15:3915–3919, 2014.
[9] J. Mockus. Bayesian Approach to Global Optimization: Theory and Applications. Kluwer Academic, 2013.
[10] Rémi Pautrat, Konstantinos Chatzilygeroudis, and Jean-Baptiste Mouret. Bayesian optimization with automatic prior selection for data-efficient direct policy search. In International Conference on Robotics and Automation (ICRA), 2018.
[11] Danesh Tarapore, Jeff Clune, Antoine Cully, and Jean-Baptiste Mouret. How do different encodings influence the performance of the MAP-Elites algorithm? In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO). ACM, 2016. doi:10.1145/2908812.2908875.