Do Text-to-Text Multi-Task Learners Suffer from Task Conflict?

David Mueller, Nicholas Andrews, Mark Dredze

Research output: Contribution to conference › Paper › peer-review

Abstract

Traditional multi-task learning architectures learn a single model across multiple tasks through a shared encoder followed by task-specific decoders. Learning these models often requires specialized training algorithms that address task conflict in the shared parameter updates, which can otherwise lead to negative transfer. A newer form of multi-task learning within NLP homogenizes multi-task architectures into a shared encoder and a language model decoder, which performs surprisingly well across a range of diverse tasks (Raffel et al., 2020). Does this new architecture suffer from task conflicts that require specialized training algorithms? We study how certain factors in the shift towards text-to-text models affect multi-task conflict and negative transfer, finding that both directional conflict and transfer are surprisingly constant across architectures.
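As a rough illustration of the "directional conflict" the abstract refers to (not the authors' implementation), a common diagnostic is the cosine similarity between per-task gradients on the shared parameters: a negative value indicates that the tasks' updates pull the shared encoder in opposing directions. The sketch below assumes a hypothetical PyTorch setup with a shared encoder and two task-specific heads; all names and shapes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical shared encoder with two task-specific heads (names are illustrative).
shared_encoder = nn.Linear(32, 64)
task_heads = {"task_a": nn.Linear(64, 10), "task_b": nn.Linear(64, 10)}

def task_gradient(task_name, inputs, labels):
    """Flattened gradient of one task's loss w.r.t. the shared encoder parameters."""
    logits = task_heads[task_name](shared_encoder(inputs))
    loss = F.cross_entropy(logits, labels)
    grads = torch.autograd.grad(loss, list(shared_encoder.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

# Toy batches standing in for real task data.
x_a, y_a = torch.randn(8, 32), torch.randint(0, 10, (8,))
x_b, y_b = torch.randn(8, 32), torch.randint(0, 10, (8,))

g_a = task_gradient("task_a", x_a, y_a)
g_b = task_gradient("task_b", x_b, y_b)

# Directional conflict: cosine similarity < 0 means the task gradients point in
# opposing directions on the shared parameters.
conflict = F.cosine_similarity(g_a.unsqueeze(0), g_b.unsqueeze(0)).item()
print(f"cosine similarity between task gradients: {conflict:.3f}")
```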

Original language: English (US)
Pages: 2843-2858
Number of pages: 16
State: Published - 2022
Event: 2022 Findings of the Association for Computational Linguistics: EMNLP 2022 - Abu Dhabi, United Arab Emirates
Duration: Dec 7, 2022 - Dec 11, 2022

Conference

Conference: 2022 Findings of the Association for Computational Linguistics: EMNLP 2022
Country/Territory: United Arab Emirates
City: Abu Dhabi
Period: 12/7/22 - 12/11/22

ASJC Scopus subject areas

  • Computational Theory and Mathematics
  • Computer Science Applications
  • Information Systems
