Date: Monday, April 27, 2026
Time: 11:00 – 12:00
Location: Heinzel Seminar Room, Office Building West
Speaker: Valentina Njaradi (University College London)
Title: An Analytically Tractable Model of Optimal Schema Learning and Few-Shot Generalization
Abstract:
Learning to generalize rapidly from limited experience is a fundamental challenge for both biological and artificial systems. A key strategy is the extraction of reusable low-dimensional structure from large amounts of data, which then supports efficient adaptation to new tasks, instantiated in biological systems as schema learning and in machine learning as representation learning followed by linear probing or fine-tuning. We present a tractable analytical theory of this two-stage process: structure extraction is formalized as a linear autoencoder capturing environmental regularities, and adaptation as downstream linear regression on the learned latent space. In the high-dimensional regime, we derive exact closed-form expressions for the generalization and training errors as functions of representation dimensionality, data availability, and task parameters. We find that optimal representations take qualitatively different forms depending on the learning objective: minimizing training error favours high-dimensional representations that fit the adaptation data well but transfer poorly, while minimizing generalization error yields compressed, lower-dimensional representations that support rapid learning of new tasks. These results reproduce key empirical findings from neuroscience (accelerated learning under strong schemas, schema-induced memory distortions, and rapid cortical integration of schema-consistent memories) and provide precise conditions under which learning compressed representations during pretraining aids downstream generalization.
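To make the two-stage setup concrete, here is a minimal numerical sketch, not the speaker's actual model: a synthetic linear-latent environment, a PCA-style linear autoencoder standing in for schema extraction (the optimal linear autoencoder spans the top principal subspace), and least-squares regression on the learned codes for few-shot adaptation. All dimensions, sample sizes, and noise levels below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic environment: D-dimensional observations driven by r latent factors.
D, r = 100, 5
W_true = rng.normal(size=(D, r)) / np.sqrt(r)   # ground-truth structure
beta = rng.normal(size=r)                       # downstream task weights

def sample(n, noise=0.3):
    """Draw n (x, y) pairs: x = W z + noise, with y linear in the latents z."""
    z = rng.normal(size=(n, r))
    x = z @ W_true.T + noise * rng.normal(size=(n, D))
    y = z @ beta
    return x, y

# Stage 1 ("schema" extraction): fit a k-dimensional linear autoencoder on
# plentiful pretraining data. The optimal linear autoencoder spans the
# top-k principal subspace, so a truncated SVD suffices.
X_pre, _ = sample(2000)
mu = X_pre.mean(axis=0)
_, _, Vt = np.linalg.svd(X_pre - mu, full_matrices=False)

def encode(x, k):
    """Project observations onto the top-k learned directions."""
    return (x - mu) @ Vt[:k].T

# Stage 2 (few-shot adaptation): linear regression on the latent codes,
# fit on a small task-specific sample and probed on held-out data.
X_tr, y_tr = sample(20)
X_te, y_te = sample(5000)

for k in (2, 5, 20, 80):
    H_tr, H_te = encode(X_tr, k), encode(X_te, k)
    w, *_ = np.linalg.lstsq(H_tr, y_tr, rcond=None)
    train_err = np.mean((H_tr @ w - y_tr) ** 2)
    gen_err = np.mean((H_te @ w - y_te) ** 2)
    print(f"k={k:3d}  train error={train_err:.3f}  generalization error={gen_err:.3f}")
```

With these toy settings one expects the compressed code at k = r to generalize best, while larger k drives the training error toward zero at the cost of transfer, in the spirit of the trade-off the abstract describes.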