Samy Jelassi

I am a Postdoctoral Fellow at the School of Engineering and Applied Sciences (SEAS) at Harvard University. My hosts are Boaz Barak and Sham Kakade.

I study the algorithms and architectures that make large language models work. My research focuses on architecture design, optimization, and long-context capabilities, and I have also worked on post-training and reinforcement learning.

Before coming to Harvard, I did my PhD at Princeton University, advised by Boris Hanin. During that time, I interned at Facebook AI Research, Google DeepMind, and Google Research. Before that, I completed my undergraduate studies at the École Normale Supérieure de Lyon in France.

Selected Works

(full list)

Let's (not) just put things in Context: Test-time Training for Long-context LLMs
Rachit Bansal, Aston Zhang, Rishabh Tiwari, Lovish Madaan, Sai Surya Duvvuri, Fnu Devvrit, David Brandfonbrener, David Alvarez-Melis, Prajjwal Bhargava, Mihir Kale, Samy Jelassi
Submitted, 2025.

Echo Chamber: RL Post-training Amplifies Behaviors Learned in Pretraining
Rosie Zhao*, Alexandru Meterez*, Sham Kakade, Cengiz Pehlevan, Samy Jelassi†, Eran Malach†
2nd Conference on Language Modeling (COLM), 2025.
*Equal contribution  |  †Equal senior contribution

Mixture of Parrots: Experts improve memorization more than reasoning
Samy Jelassi, Clara Mohri, David Brandfonbrener, Alex Gu, Nikhil Vyas, Nikhil Anand, David Alvarez-Melis, Yuanzhi Li, Sham M. Kakade, Eran Malach
13th International Conference on Learning Representations (ICLR), 2025.
[Blog]

Repeat after me: Transformers are better than state space models at copying
Samy Jelassi, David Brandfonbrener, Sham M. Kakade, Eran Malach
41st International Conference on Machine Learning (ICML), 2024.
[Blog]