mlpack.sparse_coding

sparse_coding(...)
Sparse Coding

>>> from mlpack import sparse_coding

An implementation of Sparse Coding with Dictionary Learning, which achieves sparsity via an l1-norm regularizer on the codes (LASSO) or an (l1+l2)-norm regularizer on the codes (the Elastic Net). Given a dense data matrix X with d dimensions and n points, sparse coding seeks to find a dense dictionary matrix D with k atoms in d dimensions, and a sparse coding matrix Z with n points in k dimensions.

The original data matrix X can then be reconstructed as Z * D. Therefore, this program finds a representation of each point in X as a sparse linear combination of atoms in the dictionary D.
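
As a rough illustration of these shapes (using NumPy only; the matrices below are random placeholders rather than a real dictionary or real codes):

>>> # Shape check: Z (n x k) times D (k x d) has the same shape as X
>>> # (n points in d dimensions).
>>> import numpy as np
>>> d, n, k = 10, 100, 5
>>> Z = np.random.rand(n, k)   # codes: n points in k dimensions
>>> D = np.random.rand(k, d)   # dictionary: k atoms in d dimensions
>>> X_approx = Z.dot(D)        # reconstruction, shape (n, d)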

The sparse coding is found with an algorithm which alternates between a dictionary step, which updates the dictionary D, and a sparse coding step, which updates the sparse coding matrix Z.
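
A minimal sketch of this alternation, not mlpack's implementation, is given below; it uses scikit-learn's Lasso solver for the coding step and a plain least-squares update for the dictionary step, only to make the structure of the algorithm concrete:

>>> import numpy as np
>>> from sklearn.linear_model import Lasso
>>> X = np.random.rand(100, 10)                        # n=100 points, d=10 dimensions
>>> k, lambda1 = 5, 0.1
>>> D = X[np.random.choice(len(X), k, replace=False)]  # initialize atoms from data points
>>> Z = np.zeros((len(X), k))
>>> for _ in range(10):
...     # Sparse coding step: solve a LASSO problem for each point given D.
...     lasso = Lasso(alpha=lambda1, fit_intercept=False)
...     for i in range(len(X)):
...         Z[i] = lasso.fit(D.T, X[i]).coef_
...     # Dictionary step: least-squares update of D given the codes Z.
...     D = np.linalg.lstsq(Z, X, rcond=None)[0]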

Once a dictionary D is found, the sparse coding model may be used to encode other matrices, and it may be saved for later use.

To run this program, either an input matrix or an already-saved sparse coding model must be specified. An input matrix may be specified with the 'training' option, along with the number of atoms in the dictionary (specified with the 'atoms' parameter). It is also possible to specify an initial dictionary for the optimization, with the 'initial_dictionary' parameter. An input model may be specified with the 'input_model' parameter.
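
For instance, a call that also supplies a starting dictionary might look like the sketch below, where 'data' and 'init_dict' stand in for user-provided matrices and 'init_dict' is assumed to hold 200 atoms to match the 'atoms' setting:

>>> # Hypothetical run with a user-supplied 200-atom initial dictionary.
>>> output = sparse_coding(training=data, atoms=200,
...                        initial_dictionary=init_dict, lambda1=0.1)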

As an example, to build a sparse coding model on the dataset 'data' using 200 atoms and an l1-regularization parameter of 0.1, saving the model into 'model', use

>>> output = sparse_coding(training=data, atoms=200, lambda1=0.1)
>>> model = output['output_model']
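
The model object can then be kept for later sessions; one simple approach, assuming the returned model object supports pickling, is:

>>> # Persist the model to disk (assumes the model object is picklable).
>>> import pickle
>>> with open('sparse_coding_model.pkl', 'wb') as f:
...     pickle.dump(model, f)
>>> # Later, load it back before calling sparse_coding(input_model=...).
>>> with open('sparse_coding_model.pkl', 'rb') as f:
...     model = pickle.load(f)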

Then, this model could be used to encode a new matrix, 'otherdata', and save the output codes to 'codes':

>>> output = sparse_coding(input_model=model, test=otherdata)
>>> codes = output['codes']
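
As a rough sanity check, the codes can be combined with the learned dictionary to approximate the encoded data. This assumes the binding also returns the dictionary under a 'dictionary' key and that the returned matrices follow the row-wise orientation described above (points as rows of Z, atoms as rows of D):

>>> # Approximate reconstruction of 'otherdata' via Z * D; the 'dictionary'
>>> # output key and the matrix orientation are assumptions here.
>>> import numpy as np
>>> dictionary = output['dictionary']
>>> reconstruction = np.dot(codes, dictionary)   # should roughly match otherdata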

Input options:

Output options:

The return value from the binding is a dict containing the following elements: