Challenges of Self-Supervised Learning for Unified, Multi-Modal, Multi-Task Transformer Models

Graham Annett, Tim Andersen, Robert Annett

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The recent success of multi-modal multi-task transformer models, combined with their ability to learn in a scalable self-supervised fashion, has presented evidence that omnipotent models trained on heterogeneous data and tasks are within the realm of possibility. This paper presents several research questions and impediments related to the training of generalized transformer architectures.

Original language: English
Title of host publication: Proceedings - 2022 International Conference on Computational Science and Computational Intelligence, CSCI 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 293-297
Number of pages: 5
ISBN (Electronic): 9798350320282
DOIs
State: Published - 2022
Event: 2022 International Conference on Computational Science and Computational Intelligence, CSCI 2022 - Las Vegas, United States
Duration: 14 Dec 2022 – 16 Dec 2022

Publication series

Name: Proceedings - 2022 International Conference on Computational Science and Computational Intelligence, CSCI 2022

Conference

Conference: 2022 International Conference on Computational Science and Computational Intelligence, CSCI 2022
Country/Territory: United States
City: Las Vegas
Period: 14/12/22 – 16/12/22

Keywords

  • multi-modal
  • multi-task
  • self-supervised learning
