BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//RENCI - ECPv5.4.0.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:RENCI
X-ORIGINAL-URL:https://archive.renci.org
X-WR-CALDESC:Events for RENCI
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20220313T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20221106T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20220408T120000
DTEND;TZID=America/New_York:20220408T130000
DTSTAMP:20260428T081012Z
CREATED:20220328T202727Z
LAST-MODIFIED:20220328T202727Z
UID:19321-1649419200-1649422800@archive.renci.org
SUMMARY:Seminar: Neural networks as cognitive models of syntax
DESCRIPTION:Speakers of a language generalize their knowledge of syntax in a systematic way to constructions they have never encountered before. This observation has motivated the influential position in linguistics that humans are innately endowed with syntax-specific inductive biases. The applied success of deep learning systems that are not designed with such biases invites a reconsideration of this position. In this talk\, Prof. Tal Linzen\, Assistant Professor of Linguistics and Data Science at NYU\, will review work that uses paradigms from psycholinguistics to examine the syntactic generalization capabilities of contemporary neural network architectures. Alongside some successes\, this work suggests that human-like generalization requires stronger inductive biases than those expressed in standard neural network architectures.
URL:https://archive.renci.org/event/seminar-neural-networks-as-cognitive-models-of-syntax/
LOCATION:Hanes Hall\, University of North Carolina at Chapel Hill\, Chapel Hill\, NC\, 27514\, United States
CATEGORIES:Webinar
ATTACH;FMTTYPE=image/jpeg:https://archive.renci.org/wp-content/uploads/2022/03/tallinzen_very_small.jpeg
END:VEVENT
END:VCALENDAR