AIML Special Presentation: Meaning and Intelligence in Language Models — From Philosophy to Appropriate LLM Responses
- Date: Mon, 5 May 2025, 2:00 pm - 3:00 pm
- Location: AIML
- Speaker: Professor Chris Manning, Associate Director of the Stanford Institute for Human-Centered Artificial Intelligence
Abstract: Language models have been around for decades but have suddenly taken the world by storm. In a surprising third act for anyone who was doing NLP in the 70s, 80s, 90s, or 2000s, artificial intelligence is now, in much of the popular media, synonymous with language models. In this talk, Prof Chris Manning will take a look backward at where language models came from and why they were so slow to emerge, a look inward to offer some thoughts on meaning, intelligence, and what language models understand and know, and a look at some recent work on steering language models to respond well to people's questions and commands. In the first part, he will argue that material beyond language is not necessary for meaning and understanding, though it is very useful in most cases, and that adaptability and learning are vital to intelligence, so the current strategy of building from huge amounts of curated data will not truly get us there, even though LLMs have many good uses. In the second part, he will introduce Direct Preference Optimization (DPO), a recent way of learning to steer LLMs from human preference data without the complex iterative training of traditional reinforcement learning methods. DPO leverages a mapping between reward functions and optimal policies to show that a suitably constrained reward maximization problem can be optimized exactly with a single training step. This method has opened up the steering of LLMs to a broad array of smaller players and is also usable for other goals, such as improving the factuality of models.
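For readers curious about the DPO objective mentioned above, the following is a minimal sketch of the per-pair loss, not the talk's implementation. The function name and scalar formulation are illustrative: inputs are summed log-probabilities of the human-preferred and rejected responses under the trained policy and a frozen reference model.

```python
import math

def dpo_loss(pi_logp_w, pi_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Illustrative DPO loss for one preference pair.

    pi_logp_w / pi_logp_l: policy log-probs of the chosen (w) and
    rejected (l) responses; ref_logp_*: the same under the frozen
    reference model. beta scales the implicit KL constraint that
    keeps the policy close to the reference.
    """
    # Implicit per-response rewards: beta * log(pi / ref).
    r_w = beta * (pi_logp_w - ref_logp_w)
    r_l = beta * (pi_logp_l - ref_logp_l)
    # Negative log-sigmoid of the reward margin: minimized when the
    # policy raises the chosen response relative to the rejected one
    # more than the reference model does.
    margin = r_w - r_l
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

If the policy matches the reference exactly, the margin is zero and the loss is log 2; favoring the chosen response drives it lower. Because this is an ordinary differentiable loss over logged preference data, it can be minimized directly, with no reward-model rollout loop as in traditional RLHF.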
Bio: Professor Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Linguistics and Computer Science at Stanford University and an Associate Director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI). From 2010, Manning pioneered Natural Language Understanding and Inference using Deep Learning, with impactful research on sentiment analysis, paraphrase detection, the GloVe model of word vectors, attention, neural machine translation, question answering, self-supervised model pre-training, tree-recursive neural networks, machine reasoning, summarization, and dependency parsing, work for which he has received two ACL Test of Time Awards and the IEEE John von Neumann Medal (2024). He earlier led the development of empirical, probabilistic approaches to NLP, computational linguistics, and language understanding, defining and building theories and systems for natural language inference, syntactic parsing, machine translation, and multilingual language processing, work for which he won ACL, Coling, EMNLP, and CHI Best Paper Awards. In NLP education, Manning coauthored foundational textbooks on statistical NLP (Manning and Schütze, 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), and his online CS224N Natural Language Processing with Deep Learning course videos have been watched by hundreds of thousands. In linguistics, Manning is a principal developer of Stanford Dependencies and Universal Dependencies, and has authored monographs on ergativity and complex predicates. He is the founder of the Stanford NLP group (@stanfordnlp) and was an early proponent of open source software in NLP with Stanford CoreNLP and Stanza. He is a member of NAE and AAAS; an ACM Fellow, an AAAI Fellow, and an ACL Fellow; and was President of the ACL in 2015. Manning earned a B.A. (Hons) from The Australian National University, a Ph.D. from Stanford in 1994, and an Honorary Doctorate from U. Amsterdam in 2023. He held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford.

Prof Anton van den Hengel welcomes Prof Chris Manning to AIML.

Prof Chris Manning presents to the AIML community.

AIML members attend Prof Chris Manning's seminar.