Category: Fine-Tuning

4 products in this category


Unsloth

Verified

LLM fine-tuning up to 5x faster with up to 80% less memory

Unsloth makes LLM fine-tuning up to 5× faster and uses up to 80% less GPU memory through custom CUDA kernels and optimized training paths. It is compatible with the Hugging Face ecosystem, LoRA, QLoRA, and popular base models.
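To make the LoRA/QLoRA terms above concrete: LoRA avoids updating a full d×d weight matrix W by training two small matrices A (r×d) and B (d×r) with rank r much smaller than d, so the effective weight is W + B·A. The sketch below is a plain-Python illustration of that idea only; none of the names are Unsloth's API.

```python
def matmul(X, Y):
    """Multiply two matrices represented as lists of rows."""
    inner, cols = len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner))
             for j in range(cols)] for i in range(len(X))]

def lora_weight(W, A, B):
    """Return W + B @ A, the LoRA-adapted weight (W stays frozen)."""
    delta = matmul(B, A)
    return [[W[i][j] + delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

d, r = 4, 1  # rank-1 adapter: 2*d*r = 8 trainable values vs d*d = 16
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen base weight
A = [[0.1] * d]                 # r x d, trainable
B = [[1.0] for _ in range(d)]   # d x r, trainable

W_adapted = lora_weight(W, A, B)
print(W_adapted[0])  # first row: [1.1, 0.1, 0.1, 0.1]
```

The memory saving frameworks like Unsloth exploit comes from training only A and B (2·d·r values) instead of W (d² values); QLoRA additionally keeps the frozen W in 4-bit precision.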


Axolotl

Verified

Streamlined LLM fine-tuning with LoRA, QLoRA, and full fine-tuning

Axolotl is an open-source fine-tuning framework that simplifies training LLMs with LoRA, QLoRA, FSDP, and DeepSpeed. It supports a wide range of base models and dataset formats with a single YAML config.
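The "single YAML config" workflow looks roughly like the fragment below. Field names follow Axolotl's documented config schema, but the specific model, dataset, and values are illustrative placeholders; check the project's docs for the current schema before use.

```yaml
# Minimal sketch of an Axolotl QLoRA config (illustrative values)
base_model: NousResearch/Llama-2-7b-hf

load_in_4bit: true        # quantize the frozen base model (QLoRA)
adapter: qlora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true  # attach adapters to all linear layers

datasets:
  - path: mhenrichsen/alpaca_2k_test
    type: alpaca          # built-in prompt format

sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 0.0002

output_dir: ./outputs/qlora-llama2
```

Training then reduces to pointing the Axolotl CLI at this file, which is what "streamlined" means in practice: the config, not custom training code, selects the model, adapter method, and dataset format.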


Hugging Face Transformers

Featured

The standard library for working with pretrained ML models

Transformers by Hugging Face is the most widely used open-source library for working with pretrained models across NLP, vision, audio, and multimodal tasks. It supports PyTorch, TensorFlow, and JAX, with 500k+ models available on the Hub.


SuperML Java

Verified

A comprehensive, modular Java machine learning framework inspired by scikit-learn, developed by the SuperML community.

SuperML Java 3.1.2 is a 21-module machine learning library for Java aimed at enterprise-grade performance, delivering 400K+ predictions/second with all 21 modules building successfully. It is published on Maven Central.