pyspark.mllib.util.MLUtils
Helper methods to load, save and pre-process data used in MLlib.
New in version 1.0.0.
Methods
appendBias(data)
Returns a new vector with 1.0 (bias) appended to the end of the input vector.
convertMatrixColumnsFromML(dataset, *cols)
Converts matrix columns in an input DataFrame to the pyspark.mllib.linalg.Matrix type from the new pyspark.ml.linalg.Matrix type under the spark.ml package.
convertMatrixColumnsToML(dataset, *cols)
Converts matrix columns in an input DataFrame from the pyspark.mllib.linalg.Matrix type to the new pyspark.ml.linalg.Matrix type under the spark.ml package.
convertVectorColumnsFromML(dataset, *cols)
Converts vector columns in an input DataFrame to the pyspark.mllib.linalg.Vector type from the new pyspark.ml.linalg.Vector type under the spark.ml package.
convertVectorColumnsToML(dataset, *cols)
Converts vector columns in an input DataFrame from the pyspark.mllib.linalg.Vector type to the new pyspark.ml.linalg.Vector type under the spark.ml package.
loadLabeledPoints(sc, path[, minPartitions])
Load labeled points saved using RDD.saveAsTextFile.
loadLibSVMFile(sc, path[, numFeatures, …])
Loads labeled data in the LIBSVM format into an RDD of LabeledPoint.
loadVectors(sc, path)
Loads vectors saved using RDD[Vector].saveAsTextFile with the default number of partitions.
saveAsLibSVMFile(data, dir)
Save labeled data in LIBSVM format.
Methods Documentation
appendBias(data)
Returns a new vector with 1.0 (bias) appended to the end of the input vector.
New in version 1.5.0.
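A minimal usage sketch of appendBias on a dense vector (the printed repr assumes PySpark's standard DenseVector formatting):

>>> from pyspark.mllib.linalg import Vectors
>>> from pyspark.mllib.util import MLUtils
>>> MLUtils.appendBias(Vectors.dense([1.0, 2.0]))  # 1.0 appended as the bias term
DenseVector([1.0, 2.0, 1.0])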
convertMatrixColumnsFromML(dataset, *cols)
Converts matrix columns in an input DataFrame to the pyspark.mllib.linalg.Matrix type from the new pyspark.ml.linalg.Matrix type under the spark.ml package.
New in version 2.0.0.
Parameters
dataset : pyspark.sql.DataFrame
    input dataset
*cols : str
    Matrix columns to be converted. Old matrix columns will be ignored. If unspecified, all new matrix columns will be converted except nested ones.
Returns
pyspark.sql.DataFrame
    the input dataset with new matrix columns converted to the old matrix type
Examples
>>> import pyspark
>>> from pyspark.ml.linalg import Matrices
>>> from pyspark.mllib.util import MLUtils
>>> df = spark.createDataFrame(
...     [(0, Matrices.sparse(2, 2, [0, 2, 3], [0, 1, 1], [2, 3, 4]),
...     Matrices.dense(2, 2, range(4)))], ["id", "x", "y"])
>>> r1 = MLUtils.convertMatrixColumnsFromML(df).first()
>>> isinstance(r1.x, pyspark.mllib.linalg.SparseMatrix)
True
>>> isinstance(r1.y, pyspark.mllib.linalg.DenseMatrix)
True
>>> r2 = MLUtils.convertMatrixColumnsFromML(df, "x").first()
>>> isinstance(r2.x, pyspark.mllib.linalg.SparseMatrix)
True
>>> isinstance(r2.y, pyspark.ml.linalg.DenseMatrix)
True
convertMatrixColumnsToML(dataset, *cols)
Converts matrix columns in an input DataFrame from the pyspark.mllib.linalg.Matrix type to the new pyspark.ml.linalg.Matrix type under the spark.ml package.
New in version 2.0.0.
Parameters
dataset : pyspark.sql.DataFrame
    input dataset
*cols : str
    Matrix columns to be converted. New matrix columns will be ignored. If unspecified, all old matrix columns will be converted except nested ones.
Returns
pyspark.sql.DataFrame
    the input dataset with old matrix columns converted to the new matrix type
Examples
>>> import pyspark
>>> from pyspark.mllib.linalg import Matrices
>>> from pyspark.mllib.util import MLUtils
>>> df = spark.createDataFrame(
...     [(0, Matrices.sparse(2, 2, [0, 2, 3], [0, 1, 1], [2, 3, 4]),
...     Matrices.dense(2, 2, range(4)))], ["id", "x", "y"])
>>> r1 = MLUtils.convertMatrixColumnsToML(df).first()
>>> isinstance(r1.x, pyspark.ml.linalg.SparseMatrix)
True
>>> isinstance(r1.y, pyspark.ml.linalg.DenseMatrix)
True
>>> r2 = MLUtils.convertMatrixColumnsToML(df, "x").first()
>>> isinstance(r2.x, pyspark.ml.linalg.SparseMatrix)
True
>>> isinstance(r2.y, pyspark.mllib.linalg.DenseMatrix)
True
convertVectorColumnsFromML(dataset, *cols)
Converts vector columns in an input DataFrame to the pyspark.mllib.linalg.Vector type from the new pyspark.ml.linalg.Vector type under the spark.ml package.
New in version 2.0.0.
Parameters
dataset : pyspark.sql.DataFrame
    input dataset
*cols : str
    Vector columns to be converted. Old vector columns will be ignored. If unspecified, all new vector columns will be converted except nested ones.
Returns
pyspark.sql.DataFrame
    the input dataset with new vector columns converted to the old vector type
Examples
>>> import pyspark
>>> from pyspark.ml.linalg import Vectors
>>> from pyspark.mllib.util import MLUtils
>>> df = spark.createDataFrame(
...     [(0, Vectors.sparse(2, [1], [1.0]), Vectors.dense(2.0, 3.0))],
...     ["id", "x", "y"])
>>> r1 = MLUtils.convertVectorColumnsFromML(df).first()
>>> isinstance(r1.x, pyspark.mllib.linalg.SparseVector)
True
>>> isinstance(r1.y, pyspark.mllib.linalg.DenseVector)
True
>>> r2 = MLUtils.convertVectorColumnsFromML(df, "x").first()
>>> isinstance(r2.x, pyspark.mllib.linalg.SparseVector)
True
>>> isinstance(r2.y, pyspark.ml.linalg.DenseVector)
True
convertVectorColumnsToML(dataset, *cols)
Converts vector columns in an input DataFrame from the pyspark.mllib.linalg.Vector type to the new pyspark.ml.linalg.Vector type under the spark.ml package.
New in version 2.0.0.
Parameters
dataset : pyspark.sql.DataFrame
    input dataset
*cols : str
    Vector columns to be converted. New vector columns will be ignored. If unspecified, all old vector columns will be converted except nested ones.
Returns
pyspark.sql.DataFrame
    the input dataset with old vector columns converted to the new vector type
Examples
>>> import pyspark
>>> from pyspark.mllib.linalg import Vectors
>>> from pyspark.mllib.util import MLUtils
>>> df = spark.createDataFrame(
...     [(0, Vectors.sparse(2, [1], [1.0]), Vectors.dense(2.0, 3.0))],
...     ["id", "x", "y"])
>>> r1 = MLUtils.convertVectorColumnsToML(df).first()
>>> isinstance(r1.x, pyspark.ml.linalg.SparseVector)
True
>>> isinstance(r1.y, pyspark.ml.linalg.DenseVector)
True
>>> r2 = MLUtils.convertVectorColumnsToML(df, "x").first()
>>> isinstance(r2.x, pyspark.ml.linalg.SparseVector)
True
>>> isinstance(r2.y, pyspark.mllib.linalg.DenseVector)
True
loadLabeledPoints(sc, path[, minPartitions])
Load labeled points saved using RDD.saveAsTextFile.
Parameters
sc : pyspark.SparkContext
    Spark context
path : str
    file or directory path in any Hadoop-supported file system URI
minPartitions : int, optional
    min number of partitions
Returns
pyspark.RDD
    labeled data stored as an RDD of LabeledPoint
Examples
>>> from tempfile import NamedTemporaryFile
>>> from pyspark.mllib.util import MLUtils
>>> from pyspark.mllib.regression import LabeledPoint
>>> from pyspark.mllib.linalg import Vectors
>>> examples = [LabeledPoint(1.1, Vectors.sparse(3, [(0, -1.23), (2, 4.56e-7)])),
...             LabeledPoint(0.0, Vectors.dense([1.01, 2.02, 3.03]))]
>>> tempFile = NamedTemporaryFile(delete=True)
>>> tempFile.close()
>>> sc.parallelize(examples, 1).saveAsTextFile(tempFile.name)
>>> MLUtils.loadLabeledPoints(sc, tempFile.name).collect()
[LabeledPoint(1.1, (3,[0,2],[-1.23,4.56e-07])), LabeledPoint(0.0, [1.01,2.02,3.03])]
loadLibSVMFile(sc, path[, numFeatures, …])
Loads labeled data in the LIBSVM format into an RDD of LabeledPoint. The LIBSVM format is a text-based format used by LIBSVM and LIBLINEAR. Each line represents a labeled sparse feature vector using the following format:

label index1:value1 index2:value2 …

where the indices are one-based and in ascending order. This method parses each line into a LabeledPoint, where the feature indices are converted to zero-based.
Parameters
sc : pyspark.SparkContext
    Spark context
path : str
    file or directory path in any Hadoop-supported file system URI
numFeatures : int, optional
    number of features, which will be determined from the input data if a nonpositive value is given. This is useful when the dataset is already split into multiple files and you want to load them separately, because some features may not be present in certain files, which leads to inconsistent feature dimensions. A usage sketch follows the examples below.
Returns
pyspark.RDD
    labeled data stored as an RDD of LabeledPoint
Examples
>>> from tempfile import NamedTemporaryFile
>>> from pyspark.mllib.util import MLUtils
>>> from pyspark.mllib.regression import LabeledPoint
>>> tempFile = NamedTemporaryFile(delete=True)
>>> _ = tempFile.write(b"+1 1:1.0 3:2.0 5:3.0\n-1\n-1 2:4.0 4:5.0 6:6.0")
>>> tempFile.flush()
>>> examples = MLUtils.loadLibSVMFile(sc, tempFile.name).collect()
>>> tempFile.close()
>>> examples[0]
LabeledPoint(1.0, (6,[0,2,4],[1.0,2.0,3.0]))
>>> examples[1]
LabeledPoint(-1.0, (6,[],[]))
>>> examples[2]
LabeledPoint(-1.0, (6,[1,3,5],[4.0,5.0,6.0]))
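As noted for numFeatures above, pre-split data should be loaded with an explicit feature count so every part agrees on the dimension. A minimal sketch, where the paths and the dimension 780 are hypothetical:

>>> part1 = MLUtils.loadLibSVMFile(sc, "data/train-part1.txt", numFeatures=780)
>>> part2 = MLUtils.loadLibSVMFile(sc, "data/train-part2.txt", numFeatures=780)
>>> training = part1.union(part2)  # both RDDs now share the same feature dimension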
saveAsLibSVMFile(data, dir)
Save labeled data in LIBSVM format.
Parameters
data : pyspark.RDD
    an RDD of LabeledPoint to be saved
dir : str
    directory to save the data
Examples
>>> from tempfile import NamedTemporaryFile
>>> from fileinput import input
>>> from glob import glob
>>> from pyspark.mllib.util import MLUtils
>>> from pyspark.mllib.regression import LabeledPoint
>>> from pyspark.mllib.linalg import Vectors
>>> examples = [LabeledPoint(1.1, Vectors.sparse(3, [(0, 1.23), (2, 4.56)])),
...             LabeledPoint(0.0, Vectors.dense([1.01, 2.02, 3.03]))]
>>> tempFile = NamedTemporaryFile(delete=True)
>>> tempFile.close()
>>> MLUtils.saveAsLibSVMFile(sc.parallelize(examples), tempFile.name)
>>> ''.join(sorted(input(glob(tempFile.name + "/part-0000*"))))
'0.0 1:1.01 2:2.02 3:3.03\n1.1 1:1.23 3:4.56\n'
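saveAsLibSVMFile pairs naturally with loadLibSVMFile; a minimal round-trip sketch continuing the example above:

>>> reloaded = MLUtils.loadLibSVMFile(sc, tempFile.name)  # reads the part-* files back
>>> reloaded.count()  # both labeled points survive the round trip
2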