I’m on the job market; please reach out if you have an interesting position or project.
I’m a soon-to-graduate Ph.D. student in Computer Science at GLADIA, Sapienza University of Rome, one of Europe’s top-ranked universities in AI. I’ve just submitted my thesis, “Effective, Efficient and Reliable Large Language Models” (available after my defense). Previously, I was a research scientist intern at Apple on the MLR team (see the Experience section for a full list).
My current interests are improving language models’ robustness and reliability through uncertainty estimation and mechanistic interpretability (see pub1 and pub2). In the past, I’ve worked on a diverse set of topics: syntax in transformers (see our publication KERMIT), efficient decoding techniques (we introduced Parallel Jacobi Decoding, which doubles decoding speed and has since been adopted by lmsys), instruction tuning for LLMs (which we introduced and which is now part of every LLM training pipeline), and instruction tuning for the Italian language (see Camoscio). I’ve also worked on tangential topics such as preserving privacy in LLMs, audio LLMs, and multimodal neural databases (check out my publications).
If you’d like to connect or have an exciting project, feel free to reach out on X, LinkedIn, or through the contact form below!
PhD in Computer Science, 2024
Sapienza University of Rome
MSc in Computer Science, 2020
University of Roma Tor Vergata
BSc in Computer Science, 2018
University of Roma Tor Vergata