Dan R. DeGenaro

Incoming PhD Student @ Georgetown Computer Science


Poulton Hall 240

1421 37th Street NW

Washington, DC 20057

I’m an incoming PhD student at Georgetown University, where I will be working with Dr. Sarah Bargal on multimodal intelligent systems, that is, systems that integrate text, vision, and other modalities such as audio. I’m also affiliated with the PICoL Lab led by Dr. Ethan Wilcox.

I am interested in the development of safe, ethical, and energy-efficient multimodal intelligent systems that serve the needs of everyday people while respecting important rights such as privacy, copyright, and the right to be forgotten.

I am also interested in low-resource machine translation and speech recognition, multilingual NLP, and information-theoretic approaches to language modeling and linguistics.

news

Jun 25, 2025 Teaching as a MITES Semester Project Course Instructor for the second year running!
May 16, 2025 Completed my Master of Science in Computational Linguistics!
May 15, 2025 Workshop paper accepted to ACL 2025!
Apr 22, 2025 Won Georgetown University’s Graduate Student Teaching Assistant Award!
Apr 04, 2025 Demo paper accepted to SIGIR 2025!


selected publications

  1. Experiments in Mamba Sequence Modeling and NLLB-200 Fine-Tuning for Low Resource Multilingual Machine Translation
    Dan DeGenaro and Tom Lupicki
    In Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024), Jun 2024
  2. MMMORRF: Multimodal Multilingual MOdularized Reciprocal Rank Fusion
    Saron Samuel, Dan DeGenaro, Jimena Guallar-Blasco, and 12 more authors
    In Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval, Padua, Italy, Jun 2025
  3. FORTIFY: Generative Model Fine-tuning with ORPO for ReTrieval Expansion of InFormal NoisY Text
    Dan DeGenaro, Eugene Yang, David Etter, and 5 more authors
    In Proceedings of the 1st Workshop on Multimodal Augmented Generation via Multimodal Retrieval (MAGMaR 2025), at ACL 2025, Aug 2025