Natural Language Processing is one of the core areas of artificial intelligence. Currently, most research efforts in this area aim at better performance on major benchmarks and downstream tasks. However, it is vital to acknowledge that these systems will eventually be deployed in a concrete sociotechnical environment, and thus their development and their decisions will directly or indirectly affect individuals and, more generally, society as a whole. For instance, NLP systems are prone to encode and amplify unfair stereotypes, and they often simply fail to adequately represent terms referring to minority groups, which can lead to further discrimination against marginalised individuals. This is the first part of a two-part talk, in which Prof. Dirk Hovy will give a tutorial on general ethical issues around NLP systems. Concretely, he will introduce important concepts and outline potential sources of discrimination along the machine learning pipeline.

Biography

Dirk Hovy is an associate professor of computer science at Bocconi University in Milan, Italy. Before that, he was faculty and a postdoc in Copenhagen, received a PhD from USC, and earned a master's degree in linguistics in Germany. He is interested in the interaction between language, society, and machine learning, or what language can tell us about society, and what computers can tell us about language. He has authored over 50 articles on these topics, three of which received best paper awards. He has organized one conference and several workshops (on abusive language, ethics in NLP, and computational social science). Outside of work, Dirk enjoys cooking, running, and leather-crafting. For up-to-date information, see http://www.dirkhovy.com