I am a final-year PhD student in Probabilistic Machine Learning at the University of Cambridge, supervised by Dr. José Miguel Hernández-Lobato. My research focuses on model uncertainty in deep learning, which I apply both to make models more robust and deployable and to make them more compact and efficient.
I will be graduating around April 2021 and am actively looking for postdoctoral or industry research positions.
My research focuses on two connected areas. First, I am interested in Bayesian inference and Bayesian deep learning [2,3,6]. I want to use Bayesian methods to understand model uncertainty in neural networks. Model uncertainty can then be used to build robust, deployable deep learning models that avoid the typical failure modes of traditionally trained networks, such as poor calibration and overconfident predictions.
Second, the model uncertainty obtained from Bayesian methods can be used to estimate the information stored within a neural network. We use this to derive an efficient model compression algorithm that can reduce the size of the network by a factor of 100. A similar approach can be applied to images, where we exploit uncertainty in their latent representations for effective lossless and lossy image compression.
I have also worked on Bayesian optimization, with the goal of optimizing hardware accelerator design parameters.
October 2017 - April 2021, Cambridge, UK
PhD in Probabilistic Machine Learning, University of Cambridge
Working on model uncertainty, robustness, model compression and image compression with José Miguel Hernández-Lobato.
June 2020 - September 2020, Remote
Research Intern, Google Health
Worked on robust and efficient ensemble models with Dustin Tran, Andrew M. Dai and Balaji Lakshminarayanan.
June 2019 - September 2019, Cambridge, MA
Research Intern, Google Brain
Worked on Bayesian neural networks with Jasper Snoek and Dustin Tran.
October 2016 - August 2017, Cambridge, UK
MPhil in Machine Learning, Speech and Language Technology, University of Cambridge
Thesis: Designing neural network hardware accelerators using deep Gaussian processes.
June 2016 - September 2016, London, UK
Software Engineer Intern, Facebook
Worked on locations on Facebook.
October 2013 - June 2016, Cambridge, UK
BA in Computer Science with Mathematics, University of Cambridge
June 2015 - September 2015, Menlo Park, CA
Software Engineer Intern, Facebook
Worked on News Feed ads.
June 2013, Colombia
Participant at the International Mathematical Olympiad
Earned a Bronze Medal
June 2012, Italy
Participant at the International Olympiad in Informatics
Earned a Silver Medal
Supervised courses: Machine Learning, Discrete Mathematics, Artificial Intelligence.
Gergely Flamich (joint first author), Marton Havasi (joint first author), José Miguel Hernández-Lobato
We propose an image compression algorithm based on relative entropy coding. arXiv
Marton Havasi, Rodolphe Jenatton, Stanislav Fort, Jeremiah Zhe Liu, Jasper Snoek, Balaji Lakshminarayanan, Andrew M. Dai, Dustin Tran
We propose an algorithm for training robust and efficient ensemble models. PDF
Marton Havasi, Jasper Snoek, Dustin Tran, Jonathan Gordon and José Miguel Hernández-Lobato
We propose an algorithm for improving the expressivity of mean-field variational inference in Bayesian neural networks. PDF
Marton Havasi, Robert Peharz and José Miguel Hernández-Lobato
We propose a non-deterministic compression method for neural networks that samples from a variational distribution. arXiv
Kshitij Bhardwaj, Marton Havasi, Yuan Yao, David M Brooks, José Miguel Hernández Lobato, Gu-Yeon Wei PDF
Marton Havasi, José Miguel Hernández-Lobato and Juan José Murillo Fuentes
We applied an MCMC sampling method to perform inference in deep Gaussian processes. arXiv