Marton Havasi


I am currently a Perception System Engineer working on object detection (lidar + computer vision) for autonomous vehicles. My research background is in neural compression, probabilistic methods, reliable deep learning and interpretability.

Quick links: Google Scholar, LinkedIn

Research


My research focuses on probabilistic machine learning and its applications. I am interested in Bayesian inference and Bayesian deep learning [3,4,6]. I want to use Bayesian methods to understand model uncertainty in neural networks. Model uncertainty can then be used to build robust, deployable deep learning models that avoid the typical failure modes of traditionally trained networks, such as poor calibration and overconfident predictions.
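
As a toy illustration of how model uncertainty surfaces in predictions, consider the following sketch (my own Python, not code from the papers below): draw several weight samples from the posterior, average their class probabilities, and measure the entropy of the average. Disagreement between the samples shows up as high predictive entropy, which is exactly the signal a well-calibrated model should emit on inputs it cannot classify.

    import numpy as np

    def predictive_entropy(member_probs):
        # member_probs: (S, C) class probabilities from S posterior
        # weight samples (or ensemble members).
        mean_probs = member_probs.mean(axis=0)  # marginal predictive p(y|x)
        return -np.sum(mean_probs * np.log(mean_probs + 1e-12))

    # Three posterior samples that disagree -> entropy near log(2),
    # the maximum for two classes.
    probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
    print(predictive_entropy(probs))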

Model uncertainty from Bayesian methods can also be used to estimate the information stored within a neural network. We use this to derive an efficient model compression algorithm that can reduce the size of the network by a factor of 100 [5]. A similar approach can be applied to images, where we exploit the uncertainty in their latent representations for effective lossless and lossy image compression [2].
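
The trick that makes this work in [5] (and, in refined form, in [2]) is compact enough to sketch; the function names below are my own, and the published algorithms add blocking and adaptive variants on top. Encoder and decoder share a source of randomness: the encoder draws K candidates from a prior p that the decoder also knows, picks one with probability proportional to the importance weight q/p under the variational posterior q, and transmits only the chosen index, which costs log2(K) bits.

    import numpy as np

    def encode(q_logpdf, p_logpdf, K=1024, seed=0):
        rng = np.random.default_rng(seed)    # randomness shared with the decoder
        z = rng.standard_normal(K)           # K candidates from the prior N(0, 1)
        log_w = q_logpdf(z) - p_logpdf(z)    # importance weights q(z)/p(z)
        w = np.exp(log_w - log_w.max())
        return rng.choice(K, p=w / w.sum())  # transmit this index: log2(K) bits

    def decode(k, K=1024, seed=0):
        rng = np.random.default_rng(seed)    # regenerate the same candidates
        return rng.standard_normal(K)[k]     # approximate sample from q

    # Communicate an approximate sample from q = N(1, 0.5^2) via p = N(0, 1).
    logpdf = lambda z, m, s: -0.5 * ((z - m) / s) ** 2 - np.log(s)
    k = encode(lambda z: logpdf(z, 1.0, 0.5), lambda z: logpdf(z, 0.0, 1.0))
    print(decode(k))  # typically lands near 1.0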

Lastly, uncertainty is critical in interpretable machine learning. We show that correctly modelling uncertainty significantly improves the reliability of concept bottleneck models [1].
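
For context, a concept bottleneck model routes every prediction through a small set of human-interpretable concepts. A minimal PyTorch sketch of the architecture (mine, not the models evaluated in [1]) might look as follows; leakage occurs when the label head exploits incidental information encoded in the soft concept predictions rather than the concepts themselves.

    import torch.nn as nn

    class ConceptBottleneck(nn.Module):
        # x -> interpretable concepts -> label. The label head sees only
        # the concepts, so predictions can be inspected, and intervened
        # on, at the concept level.
        def __init__(self, n_features, n_concepts, n_classes):
            super().__init__()
            self.concepts = nn.Sequential(
                nn.Linear(n_features, 64), nn.ReLU(),
                nn.Linear(64, n_concepts), nn.Sigmoid(),  # concept probabilities
            )
            self.label = nn.Linear(n_concepts, n_classes)

        def forward(self, x):
            c = self.concepts(x)     # trained against concept annotations
            return c, self.label(c)  # trained against class labels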

Experience


  • March 2023 - Present, Boston, MA
    Perception System Engineer, Motional
    Topics: machine learning, engineering.

  • September 2021 - March 2023, Cambridge, MA
    Postdoctoral Researcher, Harvard School of Engineering and Applied Sciences
    Topics: interpretability in machine learning.

  • October 2017 - April 2021, Cambridge, UK
    PhD in Probabilistic Machine Learning, University of Cambridge
    Advisor: José Miguel Hernández-Lobato.
    Topics: model uncertainty, robustness, model compression and image compression.

  • June 2020 - September 2020, Remote
    Research Intern, Google Health
    Collaborators: Dustin Tran, Andrew M. Dai and Balaji Lakshminarayanan
    Topics: robust and efficient ensemble models.

  • June 2019 - September 2019, Cambridge, MA
    Research Intern, Google Brain
    Collaborators: Jasper Snoek and Dustin Tran
    Topics: Bayesian neural networks.

  • October 2016 - August 2017, Cambridge, UK
    MPhil in Machine Learning, Speech and Language Technology, University of Cambridge
    Thesis: Designing neural network hardware accelerators using deep Gaussian processes.

  • June 2016 - September 2016, London, UK
    Software Engineer Intern, Facebook
    Team: locations on Facebook.

  • October 2013 - June 2016, Cambridge, UK
    BA in Computer Science with Mathematics, University of Cambridge
    Churchill College

  • June 2015 - September 2015, Menlo Park, CA
    Software Engineer Intern, Facebook
    Team: news feed ads.

  • June 2013, Colombia
    Participant at the International Mathematical Olympiad
    Earned a Bronze Medal

  • June 2012, Italy
    Participant at the International Olympiad in Informatics
    Earned a Silver Medal

Selected Publications


[1] Addressing Leakage in Concept Bottleneck Models, NeurIPS 2022

Marton Havasi, Sonali Parbhoo, Finale Doshi-Velez
We address the problem of concept leakage and improve the reliability of concept bottleneck models.

[2] Compressing Images by Encoding Their Latent Representations with Relative Entropy Coding, NeurIPS 2020

Gergely Flamich (joint first author), Marton Havasi (joint first author), José Miguel Hernández-Lobato
We propose an image compression algorithm based on relative entropy coding. arXiv

[3] Training independent subnetworks for robust prediction, ICLR 2021

Marton Havasi, Rodolphe Jenatton, Stanislav Fort, Jeremiah Zhe Liu, Jasper Snoek, Balaji Lakshminarayanan, Andrew M. Dai, Dustin Tran
We propose an algorithm for training robust and efficient ensemble models. PDF

[4] Refining the variational posterior through iterative optimization, Entropy Special Issue 2021

Marton Havasi, Jasper Snoek, Dustin Tran, Jonathan Gordon and José Miguel Hernández-Lobato
We propose an algorithm for improving the expressivity of mean-field variational inference in Bayesian neural networks. PDF

[5] Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters, ICLR 2019

Marton Havasi, Robert Peharz and José Miguel Hernández-Lobato
We propose a non-deterministic compression method for neural networks that samples from a variational distribution. arXiv

[6] Inference in Deep Gaussian Processes using Stochastic Gradient Hamiltonian Monte Carlo, NeurIPS 2018

Marton Havasi, José Miguel Hernández-Lobato and Juan José Murillo-Fuentes
We apply an MCMC sampling method to perform inference in deep Gaussian processes. arXiv



Teaching


Supervised courses: Machine Learning, Discrete Mathematics, Artificial Intelligence.

Projects:

  • Zoltan Molnar-Saska, Undergraduate dissertation: Training robust agents in cooperative multi-agent reinforcement learning.
  • Gergely Flamich, MPhil dissertation: Compression without quantization.
  • Tudor Paraschivescu, Undergraduate dissertation: Library for MIRACLE compression.
  • Carissa Wu, Undergraduate project: Learning Optimal Summaries of Clinical Time-series with Concept Bottleneck Models.
  • Katrina Brown, Undergraduate project: Diverse concept proposals in concept bottleneck models.
  • Two capstone teams for course APCOMP 297R.