You enter a dark forest. Standing in front of you is:

An associate professor named Hal Daumé III. He wields appointments in Computer Science and Language Science at UMD, where he and his wonderful advisees study how to get machines to become more adept at human language by developing models and algorithms that allow them to learn from data. (Keywords: natural language processing and machine learning.) The two major questions that drive their research these days are:

    (1) how can we get computers to learn language
        through natural interaction with people/users?

and (2) how can we do this in a way that promotes fairness,
        transparency and explainability in the learned models?

He's discussed interactive learning informally on a recent Talking Machines podcast and more technically in recent talks, and has discussed fairness/bias in broad terms in a recent blog post. Hal is committed to promoting an inclusive scientific environment; if you are thinking of inviting him for a talk or to participate in an event, please ensure that the event is consistent with this goal (see the first question on the FAQ).

Hal is super fortunate to have awesome colleagues in the Computational Linguistics and Information Processing Lab (which he currently directs). He maintains the structured prediction framework in VW. If you want to contact him, email is your best bet; you can also find him as @haldaume3 on Twitter. Or, in person, in the CLIP lab (AVW 3126) or his office (AVW 3227). If you're a prospective grad student or grad applicant, please read his FAQ!

Recent Publications:

Datasheets for Datasets
Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III and Kate Crawford
arxiv, 2018
[Abstract] [BibTeX]

Residual Loss Prediction: Reinforcement Learning with no Incremental Feedback
Hal Daumé III, John Langford and Amr Sharaf
ICLR, 2018
[Abstract] [BibTeX]

Hierarchical Imitation and Reinforcement Learning
Hoang M Le, Nan Jiang, Alekh Agarwal, Miroslav Dudík, Yisong Yue and Hal Daumé III
arxiv, 2018
[Abstract] [BibTeX]

The UMD Neural Machine Translation Systems [at WMT17 Bandit Learning Task]
Amr Sharaf, Shi Feng, Khanh Nguyen, Kianté Brantley and Hal Daumé III
WMT, 2017
[Abstract] [BibTeX]

Reinforcement Learning for Bandit Neural Machine Translation with Simulated Human Feedback
Khanh Nguyen, Hal Daumé III and Jordan Boyd-Graber
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017
[Abstract] [BibTeX]

More papers please!

Recent Talks:

Learning language through interaction
December 2016, Georgetown, Amazon, USC, GATech

Bias in AI
November 2016, UMD MCWIC Diversity Summit
[PDF] [ODP] [PPTx (exported)] [Blog Post]

Locally optimal learning to search and distant supervision
December 2015, UMD CS Research Seminar
[PDF] [ODP] [Video]

Imitation learning and recurrent neural networks mashup
December 2015, CIFAR NCAP Workshop

Algorithms that learn to think on their feet
October 2015, UC Santa Cruz

More talks please!

Contact information:
    email: me AT hal3 DOT name               skype: haldaume3
    phone: 301-405-1073                    twitter: haldaume3
   office: AVW 3227                         github: hal3
I can't reply to every prospective student's email; please read this before emailing me.

credits: design and font inspired by Seth Able's LoRD, some images converted to ANSI using ManyTools, original drawing of me by anonymous.
last updated on twenty-three march, two thousand eighteen.