LAMDA-CL Lab is part of the LAMDA group at Nanjing University, with a research focus on continual, lifelong, and incremental learning: the ability of models to learn continuously without forgetting, adapt to new tasks, and generalize over time, with or without access to past data. We aim to empower intelligent systems to learn across tasks, domains, and modalities, enabling them to grow more capable and versatile as data evolves.
Our lab is composed of faculty members, PhD students, and interns working together to push the frontier of general-purpose, adaptive AI. Members of LAMDA-CL actively contribute to top AI venues such as CVPR, ICML, NeurIPS, ICLR, TPAMI, and IJCV.
We investigate both theoretical foundations and practical algorithms for continual learning, with emphasis on:
- Class-Incremental Learning (CIL): Adapting models to new classes without retraining from scratch or accessing full historical data.
- Domain-Incremental Learning (DIL): Transferring and adapting models to new environments and data distributions.
- Pre-trained Model Adaptation: Leveraging foundation models for efficient continual learning with minimal forgetting.
- Few-Shot and Memory-Efficient Continual Learning: Solving incremental tasks with limited exemplars or task labels.
- Toolbox and benchmark development: Providing accessible, extensible toolkits for reproducible continual learning research.
- ...
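To make the class-incremental setting above concrete, here is a minimal, self-contained sketch: a nearest-class-mean classifier that learns new classes task by task, keeping only a few exemplars per old class instead of the full history. The data generator, class names, and `NCMIncrementalClassifier` are illustrative assumptions for this sketch, not an API of any LAMDA-CL toolkit.

```python
import random

def make_class(center, n=20, spread=0.5, seed=0):
    """Generate toy 2-D points scattered around a class center."""
    rng = random.Random(seed)
    return [(center[0] + rng.uniform(-spread, spread),
             center[1] + rng.uniform(-spread, spread)) for _ in range(n)]

class NCMIncrementalClassifier:
    """Learns new classes over time; keeps a small exemplar set per old
    class so class means can be maintained without full historical data."""
    def __init__(self, exemplars_per_class=5):
        self.exemplars = {}   # class label -> stored exemplar points
        self.k = exemplars_per_class

    def learn_task(self, task_data):
        # task_data maps label -> points for the NEW classes in this task only.
        for label, points in task_data.items():
            # Naive selection; real methods (e.g. herding) choose more carefully.
            self.exemplars[label] = points[:self.k]

    def predict(self, x):
        # Classify over ALL classes seen so far (the class-incremental setting:
        # no task identity is given at test time).
        def dist_to_mean(label):
            pts = self.exemplars[label]
            mx = sum(p[0] for p in pts) / len(pts)
            my = sum(p[1] for p in pts) / len(pts)
            return (x[0] - mx) ** 2 + (x[1] - my) ** 2
        return min(self.exemplars, key=dist_to_mean)

# Two sequential tasks, each introducing new classes.
clf = NCMIncrementalClassifier()
clf.learn_task({"cat": make_class((0, 0), seed=1),
                "dog": make_class((5, 5), seed=2)})
clf.learn_task({"bird": make_class((0, 5), seed=3)})

print(clf.predict((0.2, -0.1)))  # near the first task's "cat" cluster
print(clf.predict((0.1, 4.9)))   # near the later-learned "bird" cluster
```

Because old classes survive only as exemplar means, the classifier still recognizes "cat" after learning "bird"; replacing the naive exemplar selection and the mean-based classifier with learned representations is where real CIL methods come in.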
We contribute to the community with practical and research-oriented toolkits:
- PyCIL: An open-source Python toolbox for class-incremental learning.
- PILOT: A toolkit for pre-trained model-based continual learning.
We are always looking for motivated PhD students, postdocs, Master's students, and visiting interns who share our interests in lifelong learning, generalization, and pre-trained models. If you are passionate about building adaptive, intelligent systems that learn continually from the world, we welcome you to join our team.
Feel free to check out our latest updates, explore our research, and reach out to us for potential collaboration opportunities.