Date: Oct 06, 2023
Time: 4:30 PM - 5:30 PM
Abstract: Large language models (LLMs) have driven a major breakthrough in NLP, both in understanding natural language queries, commands, and questions, and in generating relevant, coherent, grammatical, human-like text. LLMs such as ChatGPT have become products used by many for getting advice, writing essays, troubleshooting and writing code, creative writing, and more. This calls for a reality check: which NLP tasks have LLMs solved? What challenges remain, and which new opportunities have LLMs created? In this talk, I will discuss several areas of NLP that can benefit from LLMs but remain challenging for them: grounding, i.e., interpreting language based on non-linguistic context; reasoning; and real-world applications. Finally, I will argue that the standard benchmarking and evaluation techniques used in NLP need to change drastically in order to provide a more realistic picture of current capabilities.
Bio: Vered Shwartz is an Assistant Professor of Computer Science at the University of British Columbia and a CIFAR AI Chair at the Vector Institute. Her research focuses on natural language processing, with the fundamental goal of building computer programs that can interact with people in natural language. In her work, she teaches machines to apply the human-like commonsense reasoning required to resolve ambiguities and interpret underspecified language. Before joining UBC, Vered completed her PhD in Computer Science at Bar-Ilan University and was a postdoctoral researcher at the Allen Institute for AI and the University of Washington.