Gemma is a family of lightweight, open models from Google, capable of tasks like text generation and code completion. Using Ray on Vertex AI, this tutorial walks through setting up supervised tuning for Gemma: creating a Docker repository, configuring a custom Ray cluster, and fine-tuning the model with HuggingFace Transformers. Key steps include using the Cloud SDK, managing Cloud Storage buckets, and tracking the training job with TensorBoard. It also covers generating predictions and evaluating model performance with metrics such as ROUGE.
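To give a flavor of the evaluation step mentioned above, here is a minimal pure-Python sketch of ROUGE-1, which scores a generated summary by unigram overlap with a reference. This is an illustrative, self-contained approximation; the tutorial itself uses a dedicated evaluation library rather than this hand-rolled function.

```python
from collections import Counter

def rouge1(candidate: str, reference: str) -> dict:
    """Compute ROUGE-1 precision, recall, and F1 from unigram overlap."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Clipped overlap: each unigram counts at most as often as it
    # appears in the reference (Counter intersection takes the min).
    overlap = sum((cand & ref).values())
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

scores = rouge1("the cat sat on the mat", "the cat is on the mat")
print(scores)
```

Here five of the six candidate unigrams also appear in the reference, so precision, recall, and F1 all come out to 5/6. Production evaluation would also handle stemming and ROUGE-2/ROUGE-L variants.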
Table of contents
Prerequisite
What you need
Docker Image Repository
Vertex AI TensorBoard Instance
How to set a Ray cluster on Vertex AI
Create the Ray Cluster
Fine-Tune Gemma with Ray on Vertex AI
Check training artifacts and monitor the training
Validate Gemma training on Vertex AI
Evaluate the tuned model
Serving tuned Gemma model with Ray Data for offline predictions
Summary
References