Andrea Santilli

NLP Researcher

GLADIA, Sapienza University of Rome

Biography

I am a Research Scientist working on Natural Language Processing and Large Language Models. I hold a PhD in Computer Science from GLADIA at Sapienza University of Rome, where my doctoral research focused on building effective, efficient, and reliable Large Language Models. Previously, I worked as a Research Scientist at Nous Research and at Apple on the MLR team (see Experience below for the full list).

My current research interests center on improving language models’ robustness and reliability through uncertainty estimation and mechanistic interpretability (see pub1 and pub2). In the past, I have worked on a wide range of topics, including syntax in transformers (see KERMIT), efficient decoding methods (we introduced Parallel Jacobi Decoding, which roughly doubles decoding speed and has since been adopted by lmsys; a toy sketch of the idea appears below), and instruction tuning for LLMs (a paradigm we introduced that is now a standard component of modern LLM training pipelines). I have also worked on instruction tuning for the Italian language (see Camoscio), as well as related areas such as privacy preservation in LLMs, audio LLMs, and multimodal neural databases. A full list of publications is available on my Google Scholar profile.
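For the curious, here is a minimal Python sketch of the fixed-point idea behind Parallel Jacobi Decoding. It is illustrative only: the toy next_token function is a hypothetical stand-in for a model’s greedy next-token prediction, and a real implementation refines all block positions in one batched forward pass rather than in a Python loop.

def next_token(prefix):
    # Toy deterministic "model": stands in for a greedy argmax over logits.
    return (sum(prefix) + 1) % 50

def jacobi_decode(prompt, block_len=8):
    # Guess a whole block of future tokens, then repeatedly recompute every
    # position in parallel from the current guess (a Jacobi fixed-point
    # iteration). Position i stabilizes once positions 0..i-1 have, so at
    # most block_len iterations are needed; with real models many positions
    # stabilize early, which is where the speedup over one-token-at-a-time
    # decoding comes from.
    guess = [0] * block_len
    for _ in range(block_len):
        new = [next_token(prompt + guess[:i]) for i in range(block_len)]
        if new == guess:  # fixed point: matches greedy autoregressive output
            break
        guess = new
    return guess

print(jacobi_decode([3, 1, 4]))  # same tokens greedy decoding would produce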

In the news: The BigScience project, to which I contributed, has been covered by outlets such as MIT Technology Review and The Washington Post. I was also featured in La Repubblica as one of the “500 Italians who matter in AI” (article in Italian). More recently, our work on LLM injectivity (a.k.a. the Pringle paper) received broad attention, reaching roughly 5 million views!

If you would like to connect, feel free to reach out on X, LinkedIn, or through the contact form below.

Interests
  • Large Language Models
  • Natural Language Processing
  • Representation Learning
Education
  • PhD in Computer Science, 2025

    Sapienza University of Rome

  • MSc in Computer Science, 2020

    University of Roma Tor Vergata

  • BSc in Computer Science, 2018

    University of Roma Tor Vergata

Experience

Nous Research
Research Scientist
Mar 2025 – Jun 2025, Remote
Conducted post-training research on LLMs, focusing on enhancing their robustness, reliability, and alignment.
Apple
MLR Research Scientist
Apr 2024 – Oct 2024, Barcelona
Researched robustness and reliability of foundation models through uncertainty estimation in the MLR group, resulting in publications at ACL 2025 (Main) and the NeurIPS Safe Generative AI Workshop 2024.
BigScience - Hugging Face
Open Science Researcher
Jun 2021 – Jun 2022, Remote
Researcher in Hugging Face’s BigScience workshop on large language models. Worked in the prompt-engineering working group, introducing the now-popular instruction-tuning training paradigm. Three publications: T0, BLOOM, PromptSource.
Pi School, School of Artificial Intelligence
Research Engineer
Oct 2019 – Dec 2019, Rome
Worked on a European Commission project (“Started Project”) promoting entrepreneurship and technology transfer in R&D through NLP-based tools.

Publications

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
TMLR
Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their …

Grants Awarded

Activation-Level Control for Reliable LLM Behavior
This project advances methods for making AI behavior more predictable and reliable under targeted interventions. It addresses a key safety challenge: reducing harmful or undesirable behaviors without introducing unintended side effects or degrading core capabilities. The work aims to improve the precision of behavioral control in advanced AI systems and to provide evaluation tools that clarify when interventions can be trusted to generalize. Status: in progress. Budget: $80,000.
Our project on efficient Machine Translation (MT) was selected as the winner of the category ‘Machine Learning Algorithms For Translation’ among proposals submitted by world experts and professors (7% acceptance rate). We developed a novel decoding algorithm that speeds up autoregressive transformers by up to 2x and published the results at ACL 2023. PI: Andrea Santilli. Budget: €20,000.
Multimodal Artificial Intelligence for 3D shape analysis, modeling and applications
Joint project on multimodal 3D and NLP applications between our research group GLADIA at Sapienza and Maks Ovsjanikov’s group at École Polytechnique. PI: Simone Melzi, Maks Ovsjanikov. Budget: €10,000.

Contact