I am a Research Scientist at Deezer, working on NLP and multimodal representation learning. I hold a PhD in Computer Science & Engineering from the University of Bologna (Language Technologies Lab), where I investigated how AI interprets human expressive signals across voice, language, and music.
- Multimodal Learning (speech × text × audio × music)
- Music Information Retrieval & creative AI
- Speech & paralinguistic signal modeling
- Large Language Models & cross-domain transfer
- Explainable & perceptually-grounded model interpretation
- Multimodal Argument Mining & political discourse analysis
- AI for clinical assessment (Parkinson's, depression)
| Platform | Link |
|---|---|
| 🌐 Website | https://helemanc.github.io |
| 💼 LinkedIn | https://www.linkedin.com/in/eleonora-mancini/ |
| 🎓 Google Scholar | https://scholar.google.com/citations?user=1Qk3rogAAAAJ |
| ✉️ Email | [email protected] |
Python · PyTorch · TensorFlow · HuggingFace Transformers · SpeechBrain
Large-scale training on HPC clusters (Compute Canada, CINECA Leonardo, UniBo HPC)


