Singular Learning Theory (SLT)
My research currently focuses on applying Singular Learning Theory to the understanding of deep learning models such as neural networks and transformers. Motivated by a newfound interest in AI Safety and Alignment, I am working on the Developmental Interpretability agenda.
Writing
Phase Transitions in Neural Networks - Master’s Thesis
Supervisor: Dr. Daniel Murfet.
Date: October 2021
Summary: The thesis studies Sumio Watanabe’s Singular Learning Theory (SLT) and explores how it can be used to explain why neural networks generalise so well, and how to think about and analyse phase transitions in deep learning. I illustrate some important aspects of Watanabe’s theory for small neural networks by examining the relationship between singularities, phases and phase transitions, and demonstrate the existence of both first- and second-order phase transitions in the Bayesian posterior of simple ReLU neural networks as the true distribution is varied.
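As a rough sketch of the setting (standard SLT notation, not an excerpt from the thesis): given n samples D_n and empirical negative log likelihood L_n(w), the Bayesian posterior over parameters and Watanabe’s asymptotic expansion of the free energy are

$$
p(w \mid D_n) = \frac{\varphi(w)\, e^{-n L_n(w)}}{Z_n},
\qquad
F_n = -\log Z_n = n L_n(w_0) + \lambda \log n + O_p(\log \log n),
$$

where φ is the prior, w_0 is a true parameter, and λ is the real log canonical threshold (RLCT), an invariant of the singularity structure of the set of true parameters. Because λ, rather than a naive parameter count, controls the log n term, changing the true distribution can change which singularities dominate the posterior, and this is the sense in which phases and phase transitions appear.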
You can find the code used to run the Bayesian posterior experiments with Hamiltonian Monte Carlo (HMC) here.
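The linked repository contains the actual experiment code. Purely to illustrate the kind of computation involved, here is a minimal HMC sketch in NumPyro; the library, architecture, priors and noise scale are placeholder choices and need not match the repository:

import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, HMC

def relu_net(x, y=None):
    # Two-layer feedforward ReLU network with 2 hidden units and Gaussian output noise.
    w1 = numpyro.sample("w1", dist.Normal(0.0, 1.0).expand([1, 2]).to_event(2))
    b1 = numpyro.sample("b1", dist.Normal(0.0, 1.0).expand([2]).to_event(1))
    w2 = numpyro.sample("w2", dist.Normal(0.0, 1.0).expand([2, 1]).to_event(2))
    f = jnp.maximum(x @ w1 + b1, 0.0) @ w2
    numpyro.sample("obs", dist.Normal(f.squeeze(-1), 0.1), obs=y)

# Synthetic data from a "true" ReLU function; varying this truth is what moves
# the posterior between phases.
key_data, key_mcmc = random.split(random.PRNGKey(0))
x = jnp.linspace(-2.0, 2.0, 200)[:, None]
y = jnp.maximum(0.5 * x, 0.0).squeeze(-1) + 0.1 * random.normal(key_data, (200,))

mcmc = MCMC(HMC(relu_net), num_warmup=1000, num_samples=2000)
mcmc.run(key_mcmc, x, y)
posterior_samples = mcmc.get_samples()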
If you would like to cite this work, please use the following BibTeX reference:
@mastersthesis{carroll2021phase,
  title={Phase Transitions in Neural Networks},
  author={Liam Carroll},
  month={October},
  year={2021},
  school={The University of Melbourne},
  url={http://therisingsea.org/notes/MSc-Carroll.pdf},
  type={Master's Thesis},
}
Distilling Singular Learning Theory on LessWrong
Thanks to a grant from the Long-Term Future Fund, I have written a LessWrong sequence called Distilling SLT, which translates the key lessons, claims and findings of my master's thesis into a more palatable format. The posts were published to coincide with the inaugural Workshop on Singular Learning Theory and Alignment.
If you would like to cite this work, please use the following BibTeX reference:
@misc{DSLT2023lesswrong,
  title={Distilling Singular Learning Theory},
  author={Liam Carroll},
  year={2023},
  howpublished={\url{https://www.lesswrong.com/s/czrXjvCLsqGepybHC}},
}
Growth and Form in a Toy Model of Superposition on LessWrong
This post distills Dynamical and Bayesian Phase Transitions in a Toy Model of Superposition by Chen et al. (2023), which studies the developmental stages of the Toy Model of Superposition, understanding growth and form from the perspective of SLT. This work was supported by Lightspeed Grants.
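As a quick reminder of the setup (paraphrased from memory rather than quoted from the paper or the post): the Toy Model of Superposition learns to reconstruct sparse inputs x through a low-dimensional bottleneck,

$$
\hat{x} = \mathrm{ReLU}(W^{\top} W x + b), \qquad W \in \mathbb{R}^{m \times n}, \quad m < n,
$$

trained against a reconstruction loss of the form E_x‖x − x̂‖², possibly with per-feature importance weights. The developmental stages in question are qualitative changes in the learned geometry of the columns of W over the course of training.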
If you would like to cite this work, please use the following BibTeX reference:
@misc{TMS1_2023lesswrong,
  title={Growth and Form in a Toy Model of Superposition},
  author={Liam Carroll and Edmund Lau},
  year={2023},
  howpublished={\url{https://www.lesswrong.com/posts/jvGqQGDrYzZM4MyaN/growth-and-form-in-a-toy-model-of-superposition}},
}
Talks
Talk at SLT Summit for Alignment, June 2023
In this talk I present the key ideas of the Singular Learning Theory perspective on phase transitions in statistical models. I show toy examples of simple loss landscapes that demonstrate why the RLCT (real log canonical threshold) is so important to phase transitions, and present the work from my master's thesis demonstrating examples of first- and second-order phase transitions in two-layer feedforward ReLU neural networks.
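To give a flavour of the argument (an illustrative back-of-the-envelope calculation, not an excerpt from the talk): if two regions of parameter space, thought of as phases, have local free energies

$$
F_n(\mathcal{W}_i) \approx n L_i + \lambda_i \log n, \qquad i = 1, 2,
$$

then a phase with higher loss but lower RLCT (L_1 > L_2 and λ_1 < λ_2) has the lower free energy, and hence dominates the posterior, at small n, but is overtaken by the more accurate phase once n / log n exceeds roughly (λ_2 − λ_1)/(L_1 − L_2). That crossover is a first-order phase transition, and its location is set by the RLCTs rather than by parameter counts.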
Master’s Completion Talk
In this talk, given to an audience of fellow Master's students, I explain how to interpret the phase transitions demonstrated in my thesis through the lens of Singular Learning Theory.