Red Hat OpenShift AI 3.3 introduces modular example AI pipelines for fine-tuning large language models in enterprise environments. The pipelines use Kubeflow Trainer to distribute workloads and support both supervised fine-tuning (SFT) and orthogonal subspace fine-tuning (OSFT). A four-step workflow covers dataset preparation, fine-tuning, evaluation, and model registration.
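The four-step workflow can be sketched as a chain of pipeline steps. This is a minimal illustrative sketch only: every function name, parameter, and return value below is hypothetical, not the actual OpenShift AI pipeline component API, and a real run would execute each step as a distributed Kubeflow Trainer job rather than a local function call.

```python
# Hypothetical sketch of the four-step fine-tuning workflow.
# None of these names come from the OpenShift AI pipelines themselves.

def download_dataset(dataset_uri: str) -> dict:
    """Step 1: fetch and prepare the training dataset."""
    return {"uri": dataset_uri, "records": 1000}  # placeholder metadata

def fine_tune(dataset: dict, method: str = "sft") -> dict:
    """Step 2: run SFT or OSFT on the prepared dataset."""
    assert method in ("sft", "osft"), "pipelines support SFT and OSFT"
    return {"model": f"{method}-tuned-model", "dataset": dataset["uri"]}

def evaluate(model: dict) -> dict:
    """Step 3: score the fine-tuned model on held-out data."""
    return {"model": model["model"], "score": 0.0}  # placeholder metric

def register_model(evaluation: dict) -> str:
    """Step 4: register the evaluated model for deployment."""
    return f"registry/{evaluation['model']}"

def pipeline(dataset_uri: str, method: str = "sft") -> str:
    """Chain the four steps in order; a real pipeline would run them
    as separate, distributed tasks orchestrated by Kubeflow."""
    dataset = download_dataset(dataset_uri)
    model = fine_tune(dataset, method=method)
    evaluation = evaluate(model)
    return register_model(evaluation)
```

For example, `pipeline("s3://bucket/data.jsonl", method="osft")` walks all four steps and returns the registered model reference `"registry/osft-tuned-model"`.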

Table of contents
- The challenge: Moving beyond manual one-offs
- Fine-tuning pipeline: Data preparation, fine-tuning, evaluation, and model registration
- Choosing the right path: Our new pipeline options
- Customize the pipelines for your environment
- Why this approach matters: Your pipeline, your way