Postdoctoral Researcher opportunity (2 years) - Efficient Transformer Fine-Tuning
EIDOS Lab invites applications for a two-year postdoctoral researcher position focused on developing cutting-edge methods for efficient fine-tuning of Large Vision-Language Models (LVLMs). The research will explore Low-Rank Adaptation (LoRA) combined with advanced pruning, quantization, and simplification techniques to address the scalability and resource-constraint challenges of real-world applications.
The position is funded by the European project DARE. The goal is to design novel methodologies for replacing the fully connected layers of Transformers with lower-rank trainable matrices, enabling fine-tuning with far fewer trainable parameters, faster inference, and improved model compressibility.
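To illustrate the core idea, here is a minimal sketch of the standard LoRA formulation in PyTorch. It is generic and illustrative, not the project's actual methodology; the class name LoRALinear and the rank/alpha parameters are hypothetical. A frozen pretrained linear layer with weight W is augmented by a trainable low-rank product B·A, so only rank × (in_features + out_features) parameters are updated during fine-tuning:

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Wraps a pretrained nn.Linear with a trainable low-rank update (illustrative sketch)."""
        def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            self.base.weight.requires_grad_(False)  # freeze pretrained weights
            if self.base.bias is not None:
                self.base.bias.requires_grad_(False)
            # Standard LoRA init: A is small random, B is zero, so the
            # low-rank correction starts at zero and training is stable.
            self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
            self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
            self.scaling = alpha / rank

        def forward(self, x):
            # y = frozen base output + scaled low-rank correction x A^T B^T
            return self.base(x) + self.scaling * (x @ self.lora_A.t() @ self.lora_B.t())

After fine-tuning, the product B·A can be merged back into the frozen weight matrix, so the adapted model runs at the same inference cost as the original.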
Qualifications:
Required:
- PhD in Computer Science, Artificial Intelligence, Machine Learning or a related field.
- Strong background in deep learning, particularly with Transformer architectures.
- Proficiency in Python and deep learning frameworks (e.g., PyTorch, TensorFlow).
Preferred:
- Experience with model optimization techniques such as pruning, quantization, or sparsity-based approaches.
- Experience in LoRA or other efficient fine-tuning methods.
- Strong publication record in top-tier conferences and journals.
How to apply
For more information, please email marco.grangetto@unito.it.