About Us
Our lab’s goal is to develop general-purpose AI algorithms that represent, comprehend, and reason about diverse forms of data at large scale. Toward this end, we focus on foundational problems in AI and NLP, and we publish at NLP conferences (ACL, NAACL, EMNLP), AI and ML conferences (AAAI, ICLR), and computer vision conferences (CVPR, ICCV, ECCV). We build algorithms that balance three competing desiderata: interpretability, robustness with high performance, and efficiency and scalability. Our work falls into the following categories:
- General-purpose NLP. Building NLP models that go beyond solving individual tasks and can learn new tasks from their descriptions or a few examples.
- Reasoning and question answering. Building benchmarks and algorithms that offer rich natural language comprehension through open-domain, multilingual, multi-hop, and interpretable reasoning; developing some of the first deep neural models for general reading comprehension (BiDAF), open-domain QA, cross-lingual QA, and multi-hop reasoning, as well as symbolic methods for solving math and geometry word problems.
- Knowledge acquisition from multi-modal data. Devising general high-performance algorithms to extract knowledge from textual and visual data; developing some of the first work on extracting knowledge from scientific text.
- Representation learning. Integrating symbolic representations into neural models to encode knowledge acquired from diverse structured and unstructured resources as knowledge-rich dense vectors; designing neural architectures that efficiently encode textual and visual data.
We are affiliated with UW-NLP and UW-AI.