
Transfer Learning



This workshop introduces the foundational concepts and practical applications of transfer learning, a powerful deep learning technique that lets AI models reuse pre-trained knowledge to improve performance on new tasks. Sessions cover transfer learning techniques such as feature extraction and fine-tuning, with hands-on practice applying them to computer vision and language models.

Prerequisites:

  • Active SCINet Account
  • Familiarity with accessing Open OnDemand on Atlas and launching a JupyterLab session (we will offer a pre-workshop help session for those who need assistance with this)
  • Basic Python programming skills (how to read Python syntax, call functions, use arguments, etc.).
  • Basic understanding of deep learning principles (understanding the basic structure of a deep neural network, what parameters and hyperparameters are, how to read model evaluation metrics, etc.).

Objectives – By the end of this workshop, participants will be able to:

  • Define transfer learning and explain its advantages in deep learning.
  • Differentiate between various transfer learning techniques, including domain adaptation, feature extraction, fine-tuning, and LoRA.
  • Implement transfer learning in computer vision and large language models (LLMs) using Python and Jupyter Notebooks.
  • Evaluate the effectiveness of transfer learning models compared to other training regimes, such as training from scratch on a limited dataset.
  • Troubleshoot common challenges in transfer learning, such as catastrophic forgetting and negative transfer.
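Of the techniques listed above, LoRA (low-rank adaptation) can be summarized in a few lines: the frozen pre-trained weight W is left untouched, and only a low-rank update B·A is trained. A minimal NumPy sketch of the idea (the layer size, rank, and alpha values are illustrative, not from the workshop materials):

```python
import numpy as np

d_out, d_in, r = 768, 768, 8          # layer size and rank (illustrative)
alpha = 16                            # scaling hyperparameter

W = np.random.randn(d_out, d_in)      # frozen pre-trained weight
A = np.random.randn(r, d_in) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))              # trainable; zero-init so the update starts at 0

def forward(x):
    # Effective weight is W + (alpha / r) * B @ A; W itself is never updated.
    return x @ (W + (alpha / r) * (B @ A)).T

# Only A and B are trained, a small fraction of the full parameter count.
full_params = W.size
lora_params = A.size + B.size
print(lora_params / full_params)  # ~0.02, i.e. about 2% of the parameters
```

Because B starts at zero, the adapted model initially behaves exactly like the pre-trained one, which also helps guard against the catastrophic-forgetting issues mentioned above.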