General-purpose, few-shot, robust NLP

While prior work has focused on building models for a single, specific task, our aim is a unified model that performs a wide range of tasks, such as sentiment analysis, natural language inference, and question answering, and ideally learns each from only a few examples. Our group pursues this goal with techniques based on instructions, in-context learning, and meta-training. We also develop state-of-the-art methods and new evaluation protocols for generalizable and robust NLP models.
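As a rough illustration of the in-context learning idea, a single model can be steered to a new task purely through its prompt: a task instruction followed by a few labeled demonstrations, with no parameter updates. The sketch below (with an illustrative instruction, example texts, and labels chosen for demonstration, not drawn from any particular benchmark) shows how such a few-shot prompt might be assembled:

```python
# Minimal sketch of few-shot in-context learning: instead of fine-tuning a
# separate model per task, one model receives a task instruction plus a
# handful of labeled examples in its prompt, then labels a new input.
# The instruction wording and examples below are hypothetical.

def build_few_shot_prompt(instruction, examples, query):
    """Assemble an instruction, labeled demonstrations, and a query into one prompt."""
    lines = [instruction, ""]
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    # The query is left unlabeled; the model is expected to complete the label.
    lines.append(f"Input: {query}")
    lines.append("Label:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each input as positive or negative.",
    [
        ("The movie was a delight from start to finish.", "positive"),
        ("I regret buying this product.", "negative"),
    ],
    "The service exceeded my expectations.",
)
print(prompt)
```

Swapping in a different instruction and demonstrations retargets the same model to a different task (e.g. natural language inference), which is what makes the approach general-purpose.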

Related Publications