PEFT & LoRA
Parameter-efficient fine-tuning techniques
Low-Rank Adaptation (LoRA), adapters, and prompt tuning with the HuggingFace PEFT library.
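For orientation, here is a minimal sketch of the LoRA workflow with the `peft` library. The base model (`facebook/opt-350m`) and the hyperparameter values are illustrative assumptions, not settings prescribed by this guide.

```python
# A minimal sketch of LoRA fine-tuning with HuggingFace PEFT.
# The base model and hyperparameters below are illustrative choices.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)

# Wrap the frozen base model with trainable LoRA adapters.
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable

# Train `model` with an ordinary PyTorch loop or transformers.Trainer;
# the frozen base weights receive no gradients. Afterwards,
# model.save_pretrained("opt-350m-lora") stores just the small adapter.
```

The key idea is that LoRA replaces a full weight update with a product of two low-rank matrices, so the number of trainable parameters scales with the rank `r` rather than with the full weight dimensions, and the saved adapter is a small fraction of the base checkpoint.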