We build models that answer diverse types of questions requiring understanding of and reasoning over a given context.
We build unified models that perform a wide range of tasks, learn from few-shot examples, and remain robust to out-of-distribution inputs.
We develop algorithms to extract knowledge from unstructured text, with applications to downstream tasks like knowledge base construction and text generation.
Our work on interactive language agents and text generation focuses on producing engaging and correct conversational, educational, or technical text, especially in grounded environments.
We develop new transformations and neural architectures that allow models to learn richer representations effectively and efficiently across domains and tasks, including vision and sequence modeling.