Postdoctoral Researcher opportunity (2 years) - Efficient Transformer Fine-Tuning

EIDOS Lab invites applications for a two-year postdoctoral researcher position focused on developing cutting-edge methods for efficient fine-tuning of Large Vision-Language Models (LVLMs). The research will explore Low-Rank Adaptation (LoRA) combined with advanced pruning, quantization, and simplification techniques to address the scalability and resource-constraint challenges of real-world applications.

The position is funded by the European project DARE. The goal is to design novel methodologies for replacing the fully connected layers of Transformers with lower-rank trainable matrices, enabling reduced-parameter fine-tuning, faster inference, and enhanced model compressibility.
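
To give a flavor of the underlying idea: in low-rank adaptation, a frozen pretrained linear layer is augmented with a trainable low-rank update, so only two small factor matrices are learned during fine-tuning. The sketch below is a minimal illustration in PyTorch; the class name LoRALinear and the hyperparameters r and alpha are placeholders for exposition, not part of the project's actual codebase.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha / r) * B (A x)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pretrained weights; only the low-rank factors are trained.
        for p in self.base.parameters():
            p.requires_grad_(False)
        # A maps down to rank r, B maps back up; B starts at zero so the
        # adapted layer initially matches the pretrained one exactly.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Example: adapting a 768x768 projection with rank 8 trains only
# 2 * 8 * 768 = 12,288 parameters instead of 768 * 768 + 768.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 12288
```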

Qualifications:

Required:

  1. PhD in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
  2. Strong background in deep learning, particularly with Transformer architectures.
  3. Proficiency in Python and deep learning frameworks (e.g., PyTorch, TensorFlow).

Preferred:

  1. Experience with model optimization techniques such as pruning, quantization, or sparsity-based approaches.
  2. Experience in LoRA or other efficient fine-tuning methods.
  3. Strong publication record in top-tier conferences and journals.

How to apply

For more information, please email marco.grangetto@unito.it