Kyle Daruwalla
NeuroAI Scholar at CSHL

Our work on the Neural Tangent Ensemble was accepted as a spotlight at NeurIPS 2024! Bayesian ensembles are great for many applications, such as continual learning, but maintaining an ensemble of large neural networks is computationally expensive. In this work, we use neural tangent kernel (NTK) theory to show that a single network can be viewed as an ensemble of functions. We call this the Neural Tangent Ensemble (NTE) and derive a belief update rule for weighting the members of the ensemble. It turns out this rule is closely related to (single-sample) SGD! Our work focuses on continual learning as an example application, but the NTE is applicable anywhere you would use a Bayesian ensemble. Read the paper here, visit us at NeurIPS, or contact us if you have any questions about the NTE!
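For readers who want a sense of where the ensemble comes from, here is a minimal sketch of the standard NTK linearization that motivates this view; the notation is generic, and the precise construction is in the paper. Around the initialization \(\theta_0\), the network is well approximated by its first-order Taylor expansion:

\[
f(x; \theta) \approx f(x; \theta_0) + \sum_{i=1}^{P} \left(\theta_i - \theta_{0,i}\right) \frac{\partial f(x; \theta_0)}{\partial \theta_i}.
\]

Reading each partial derivative \(\partial f(x; \theta_0) / \partial \theta_i\) as a fixed function of the input \(x\), the network becomes a weighted combination of \(P\) such functions, with the centered parameters \(\theta_i - \theta_{0,i}\) playing the role of ensemble weights. Applying a Bayesian belief update to those weights is what yields an update closely resembling single-sample SGD.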