Machine Learning Library (MLlib) Guide
MLlib is Spark’s scalable machine learning library consisting of common learning algorithms and utilities, including classification, regression, clustering, collaborative filtering, dimensionality reduction, as well as underlying optimization primitives. Guides for individual algorithms are listed below.
The API is divided into two parts:

- The original `spark.mllib` API is the primary API.
- The “Pipelines” `spark.ml` API is a higher-level API for constructing ML workflows.
We list major functionality from both below, with links to detailed guides.
MLlib types, algorithms and utilities
This lists functionality included in `spark.mllib`, the main MLlib API; a minimal usage sketch follows the list.
- Data types
- Basic statistics
- summary statistics
- correlations
- stratified sampling
- hypothesis testing
- random data generation
- Classification and regression
- linear models (SVMs, logistic regression, linear regression)
- naive Bayes
- decision trees
- ensembles of trees (Random Forests and Gradient-Boosted Trees)
- isotonic regression
- Collaborative filtering
- alternating least squares (ALS)
- Clustering
- Dimensionality reduction
- singular value decomposition (SVD)
- principal component analysis (PCA)
- Feature extraction and transformation
- Frequent pattern mining
- FP-growth
- Optimization (developer)
- stochastic gradient descent
- limited-memory BFGS (L-BFGS)
- PMML model export
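For example, here is a minimal sketch of the summary statistics utility listed above. It assumes only an existing SparkContext `sc`; the data is illustrative:

```scala
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.stat.Statistics

// A small RDD of feature vectors.
val observations = sc.parallelize(Seq(
  Vectors.dense(1.0, 10.0, 100.0),
  Vectors.dense(2.0, 20.0, 200.0),
  Vectors.dense(3.0, 30.0, 300.0)
))

// Column-wise summary statistics over the whole RDD.
val summary = Statistics.colStats(observations)
println(summary.mean)     // mean of each column
println(summary.variance) // variance of each column
```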
MLlib is under active development. The APIs marked `Experimental`/`DeveloperApi` may change in future releases, and the migration guide below will explain all changes between releases.
spark.ml: high-level APIs for ML pipelines
Spark 1.2 introduced a new package called `spark.ml`, which aims to provide a uniform set of high-level APIs that help users create and tune practical machine learning pipelines.
Graduated from Alpha! The Pipelines API is no longer an alpha component, although many elements of it are still `Experimental` or `DeveloperApi`.
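As a flavor of the Pipelines API, here is a minimal sketch of a text-classification pipeline. It assumes a DataFrame named `training` with “text” and “label” columns; the column names and parameter values are illustrative:

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}

// Split raw text into words, hash the words into feature vectors,
// and fit a logistic regression model on the result.
val tokenizer = new Tokenizer()
  .setInputCol("text")
  .setOutputCol("words")
val hashingTF = new HashingTF()
  .setInputCol(tokenizer.getOutputCol)
  .setOutputCol("features")
val lr = new LogisticRegression()
  .setMaxIter(10)
  .setRegParam(0.01)

// Chain all three stages and fit them in a single call.
val pipeline = new Pipeline()
  .setStages(Array(tokenizer, hashingTF, lr))
val model = pipeline.fit(training)
```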
Note that we will keep supporting and adding features to `spark.mllib` alongside the development of `spark.ml`. Users should be comfortable using `spark.mllib` features and can expect more features to come. Developers should contribute new algorithms to `spark.mllib` and can optionally contribute to `spark.ml`.
More detailed guides for `spark.ml` include:

- spark.ml programming guide: overview of the Pipelines API and major concepts
- Feature transformers: details on transformers supported in the Pipelines API, including a few not in the lower-level `spark.mllib` API
- Ensembles: details on ensemble learning methods in the Pipelines API
Dependencies
MLlib uses the linear algebra package Breeze, which depends on netlib-java for optimised numerical processing. If natives are not available at runtime, you will see a warning message and a pure JVM implementation will be used instead.
To learn more about the benefits and background of system optimised natives, you may wish to watch Sam Halliday’s ScalaX talk on High Performance Linear Algebra in Scala.
Due to licensing issues with runtime proprietary binaries, we do not include `netlib-java`’s native proxies by default. To configure `netlib-java`/Breeze to use system optimised binaries, include `com.github.fommil.netlib:all:1.1.2` (or build Spark with `-Pnetlib-lgpl`) as a dependency of your project and read the netlib-java documentation for your platform’s additional installation instructions.
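With sbt, for example, the extra dependency might look like the line below. Treat this as a sketch: the `pomOnly()` qualifier reflects that the `all` artifact is published as a POM-only module, and the exact form depends on your build tool.

```scala
// build.sbt (sketch): pull in netlib-java's native system proxies.
libraryDependencies += "com.github.fommil.netlib" % "all" % "1.1.2" pomOnly()
```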
To use MLlib in Python, you will need NumPy version 1.4 or newer.
Migration Guide
For the `spark.ml` package, please see the spark.ml Migration Guide.
From 1.3 to 1.4
In the `spark.mllib` package, there were several breaking changes, but all in `DeveloperApi` or `Experimental` APIs:
- Gradient-Boosted Trees
  - (Breaking change) The signature of the `Loss.gradient` method was changed. This is only an issue for users who wrote their own losses for GBTs.
  - (Breaking change) The `apply` and `copy` methods for the case class `BoostingStrategy` have been changed because of a modification to the case class fields. This could be an issue for users who use `BoostingStrategy` to set GBT parameters.
- (Breaking change) The return value of `LDA.run` has changed. It now returns an abstract class `LDAModel` instead of the concrete class `DistributedLDAModel`. The object of type `LDAModel` can still be cast to the appropriate concrete type, which depends on the optimization algorithm.
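For instance, here is a minimal sketch of recovering the old concrete type under the default EM optimizer; it assumes `corpus` is an existing `RDD[(Long, Vector)]` of (document ID, term-count vector) pairs:

```scala
import org.apache.spark.mllib.clustering.{DistributedLDAModel, LDA, LDAModel}

// `run` now returns the abstract LDAModel rather than DistributedLDAModel.
val ldaModel: LDAModel = new LDA().setK(10).run(corpus)

// Under the default EM optimizer the fitted model is distributed,
// so the pre-1.4 concrete type can be recovered with an explicit cast.
val distributedModel = ldaModel.asInstanceOf[DistributedLDAModel]
```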
Previous Spark Versions
Earlier migration guides are archived on this page.