Dimensionality reduction is a technique for mapping a high-dimensional feature space to a lower-dimensional one while preserving most of the information in the data. When building machine learning models, we often deal with datasets that have thousands or even millions of features. Kernel PCA generalizes Principal Component Analysis (PCA) to nonlinear data. In this article, we will learn how to reduce the dimensionality of nonlinear data using kernel PCA.
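As a taste of the idea, here is a minimal sketch of kernel PCA with an RBF (Gaussian) kernel, written from scratch in NumPy; the function name `rbf_kernel_pca`, the toy two-circles dataset, and the `gamma` value are illustrative choices, not from the original article:

```python
import numpy as np

# Toy nonlinear data: two noisy concentric circles, which linear PCA
# cannot untangle but an RBF kernel can.
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 200)
radii = np.where(np.arange(200) < 100, 0.3, 1.0)  # inner vs. outer circle
X = np.c_[radii * np.cos(angles), radii * np.sin(angles)]
X += rng.normal(scale=0.05, size=X.shape)

def rbf_kernel_pca(X, n_components=1, gamma=10.0):
    # Pairwise squared Euclidean distances, then the RBF kernel matrix.
    sq = np.sum(X**2, axis=1)
    dists = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    K = np.exp(-gamma * dists)
    # Center the kernel matrix (equivalent to centering in feature space).
    n = len(K)
    one_n = np.full((n, n), 1.0 / n)
    K = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Top eigenvectors of the centered kernel matrix give the projections.
    vals, vecs = np.linalg.eigh(K)               # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]  # pick the largest ones
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 1e-12))

X_kpca = rbf_kernel_pca(X, n_components=1)
print(X_kpca.shape)  # (200, 1)
```

In practice you would use a library implementation such as scikit-learn's `KernelPCA` rather than hand-rolling the eigendecomposition; the article's implementation steps below take that route.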

6 min read · From section.io
Table of contents
- Prerequisites
- Introduction to Kernel PCA
- Python Implementation
- Step 2: Implementing the Kernel PCA and a Logistic Regression
- Step 4: Visualizing the training set
- Step 5: Visualizing the test set
- Conclusion
