I am currently a perception systems engineer working on object detection (lidar and computer vision) for autonomous vehicles. My research background is in neural compression, probabilistic methods, reliable deep learning, and interpretability.
Quick links: Google Scholar, LinkedIn
My research focuses on probabilistic machine learning and its applications. I am interested in Bayesian inference and Bayesian deep learning [3,4,6]. I want to use Bayesian methods to understand model uncertainty in neural networks. Model uncertainty can then be used to build robust and deployable deep learning models that avoid the typical failure modes of traditionally trained networks, such as poor calibration and overconfident predictions.
Model uncertainty from Bayesian methods can be used to estimate the information stored within a neural network. We use this to derive an efficient model compression algorithm that can reduce the size of the network by a factor of 100 [5]. A similar approach can be applied to images, where we exploit uncertainty in their latent representations for effective lossless and lossy image compression [2].
Lastly, uncertainty is critical in interpretable machine learning. We show that correctly modelling uncertainty significantly improves the reliability of concept bottleneck models [1].
March 2023 - Present, Boston, MA
Perception System Engineer, Motional
Topics: machine learning, engineering.
September 2021 - March 2023, Cambridge, MA
Postdoctoral Researcher, Harvard School of Engineering and Applied Sciences
Topics: interpretability in machine learning.
October 2017 - April 2021, Cambridge, UK
PhD in Probabilistic Machine Learning, University of Cambridge
Advisor: José Miguel Hernández-Lobato.
Topics: model uncertainty, robustness, model compression and image compression
June 2020 - September 2020, Remote
Research Intern, Google Health
Collaborators: Dustin Tran, Andrew M. Dai and Balaji Lakshminarayanan
Topics: robust and efficient ensemble models
June 2019 - September 2019, Cambridge, MA
Research Intern, Google Brain
Collaborators: Jasper Snoek and Dustin Tran
Topics: Bayesian neural networks
October 2016 - August 2017, Cambridge, UK
MPhil in Machine Learning, Speech and Language Technology, University of Cambridge
Thesis: Designing neural network hardware accelerators using deep Gaussian processes.
June 2016 - September 2016, London, UK
Software Engineer Intern, Facebook
Team: locations on Facebook.
October 2013 - June 2016, Cambridge, UK
BA in Computer Science with Mathematics, University of Cambridge
Churchill College
June 2015 - September 2015, Menlo Park, CA
Software Engineer Intern, Facebook
Team: news feed ads.
June 2013, Colombia
Participant at the International Mathematical Olympiad
Earned a Bronze Medal
June 2012, Italy
Participant at the International Olympiad in Informatics
Earned a Silver Medal
Marton Havasi, Sonali Parbhoo, Finale Doshi-Velez
We address the problem of concept leakage and improve the reliability of concept bottleneck models.
Gergely Flamich (joint first author), Marton Havasi (joint first author), José Miguel Hernández-Lobato
We propose an image compression algorithm based on relative entropy coding.
arXiv
Marton Havasi, Rodolphe Jenatton, Stanislav Fort, Jeremiah Zhe Liu, Jasper Snoek, Balaji Lakshminarayanan, Andrew M. Dai, Dustin Tran
We propose an algorithm for training robust and efficient ensemble models.
PDF
Marton Havasi, Jasper Snoek, Dustin Tran, Jonathan Gordon and José Miguel Hernández-Lobato
We propose an algorithm for improving the expressivity of mean-field variational inference in Bayesian neural networks.
PDF
Marton Havasi, Robert Peharz and José Miguel Hernández-Lobato
We propose a non-deterministic compression method for neural networks that samples from a variational distribution.
arXiv
Marton Havasi, José Miguel Hernández-Lobato and Juan José Murillo Fuentes
We apply an MCMC sampling method to perform inference in deep Gaussian processes.
arXiv
Supervised courses: Machine Learning, Discrete Mathematics, Artificial Intelligence.
Projects: