Package | Description
---|---
gov.sandia.cognition.learning.algorithm | Provides general interfaces for learning algorithms.
gov.sandia.cognition.learning.algorithm.annealing | Provides the Simulated Annealing algorithm.
gov.sandia.cognition.learning.algorithm.clustering | Provides clustering algorithms.
gov.sandia.cognition.learning.algorithm.ensemble | Provides ensemble methods.
gov.sandia.cognition.learning.algorithm.factor.machine | Provides factorization machine algorithms.
gov.sandia.cognition.learning.algorithm.genetic | Provides a genetic algorithm implementation.
gov.sandia.cognition.learning.algorithm.hmm | Provides hidden Markov model (HMM) algorithms.
gov.sandia.cognition.learning.algorithm.minimization | Provides minimization algorithms.
gov.sandia.cognition.learning.algorithm.minimization.line | Provides line (scalar) minimization algorithms.
gov.sandia.cognition.learning.algorithm.pca | Provides implementations of Principal Components Analysis (PCA).
gov.sandia.cognition.learning.algorithm.perceptron | Provides the Perceptron algorithm and some of its variations.
gov.sandia.cognition.learning.algorithm.perceptron.kernel | Provides kernel-based variants of the Perceptron algorithm.
gov.sandia.cognition.learning.algorithm.regression | Provides regression algorithms, such as Linear Regression.
gov.sandia.cognition.learning.algorithm.root | Provides algorithms for finding the roots, or zero crossings, of scalar functions.
gov.sandia.cognition.learning.algorithm.svm | Provides implementations of Support Vector Machine (SVM) learning algorithms.
gov.sandia.cognition.statistics.bayesian | Provides algorithms for computing Bayesian estimates of parameters.
gov.sandia.cognition.statistics.distribution | Provides statistical distributions.
gov.sandia.cognition.text.topic | Provides topic modeling algorithms.

Modifier and Type | Class and Description
---|---
class | `AbstractAnytimeSupervisedBatchLearner<InputType,OutputType,ResultType extends Evaluator<? super InputType,? extends OutputType>>`: The AbstractAnytimeSupervisedBatchLearner abstract class extends the AbstractAnytimeBatchLearner to implement the SupervisedBatchLearner interface.

Modifier and Type | Method and Description
---|---
`AbstractAnytimeBatchLearner<DataType,ResultType>` | `AbstractAnytimeBatchLearner.clone()`

Modifier and Type | Class and Description
---|---
class | `SimulatedAnnealer<CostParametersType,AnnealedType>`: The SimulatedAnnealer class implements the simulated annealing algorithm using the provided cost function and perturbation function (a sketch of the idea follows this table).
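
As an illustration of the cost-function/perturbation-function pairing described above, here is a minimal, self-contained simulated-annealing sketch. It is not the Foundry `SimulatedAnnealer` API; all names and parameter choices (geometric cooling, Gaussian perturbation) are illustrative assumptions.

```java
import java.util.Random;
import java.util.function.ToDoubleFunction;
import java.util.function.UnaryOperator;

/** Minimal simulated-annealing sketch (illustrative; not the Foundry API). */
public final class AnnealingSketch {
    public static <S> S anneal(S initial,
                               ToDoubleFunction<S> cost,
                               UnaryOperator<S> perturb,
                               double startTemp, double coolingRate,
                               int iterations, Random rng) {
        S current = initial, best = initial;
        double currentCost = cost.applyAsDouble(current), bestCost = currentCost;
        double temp = startTemp;
        for (int i = 0; i < iterations; i++) {
            S candidate = perturb.apply(current);
            double candidateCost = cost.applyAsDouble(candidate);
            double delta = candidateCost - currentCost;
            // Always accept improvements; accept worse states with probability exp(-delta/T).
            if (delta <= 0 || rng.nextDouble() < Math.exp(-delta / temp)) {
                current = candidate;
                currentCost = candidateCost;
                if (currentCost < bestCost) { best = current; bestCost = currentCost; }
            }
            temp *= coolingRate;  // geometric cooling schedule
        }
        return best;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        // Minimize f(x) = (x - 3)^2 starting from x = 0.
        double result = anneal(0.0,
                x -> (x - 3.0) * (x - 3.0),
                x -> x + rng.nextGaussian(),
                10.0, 0.995, 5000, rng);
        System.out.println(result);
    }
}
```
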

Modifier and Type | Class and Description
---|---
class | `AffinityPropagation<DataType>`: The AffinityPropagation algorithm requires three parameters: a divergence function, a value to use for self-divergence, and a damping factor (called lambda in the paper; 0.5 is the default).
class | `AgglomerativeClusterer<DataType,ClusterType extends Cluster<DataType>>`: The AgglomerativeClusterer implements an agglomerative clustering algorithm, which is a type of hierarchical clustering algorithm.
class | `DBSCANClusterer<DataType extends Vectorizable,ClusterType extends Cluster<DataType>>`: The DBSCAN algorithm requires three parameters: a distance metric, a value for the neighborhood radius, and a value for the minimum number of surrounding neighbors for a point to be considered non-noise.
class | `KMeansClusterer<DataType,ClusterType extends Cluster<DataType>>`: The KMeansClusterer class implements the standard k-means (k-centroids) clustering algorithm (see the sketch after this table).
class | `KMeansClustererWithRemoval<DataType,ClusterType extends Cluster<DataType>>`: Creates a k-means clustering algorithm that removes clusters that do not have sufficient membership to pass a simple statistical significance test.
class | `MiniBatchKMeansClusterer<DataType extends Vector>`: Approximates k-means clustering by working on random subsets of the data.
class | `OptimizedKMeansClusterer<DataType>`: This class implements an optimized version of the k-means algorithm that makes use of the triangle inequality to compute the same answer as k-means while using fewer distance calculations.
class | `ParallelizedKMeansClusterer<DataType,ClusterType extends Cluster<DataType>>`: This is a parallel implementation of the k-means clustering algorithm.
class | `PartitionalClusterer<DataType,ClusterType extends Cluster<DataType>>`: The PartitionalClusterer implements a partitional clustering algorithm, which is a type of hierarchical clustering algorithm.
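
The standard k-means loop referenced above alternates an assignment step and a centroid-update step. The following is a minimal sketch on one-dimensional data; it is not the Foundry `KMeansClusterer` API, and the initialization strategy shown (random data points) is an illustrative assumption.

```java
import java.util.Arrays;
import java.util.Random;

/** Minimal standard k-means (k-centroids) sketch on 1-D data; illustrative only. */
public final class KMeansSketch {
    public static double[] cluster(double[] data, int k, int maxIterations, Random rng) {
        // Initialize centroids by picking k random data points.
        double[] centroids = new double[k];
        for (int j = 0; j < k; j++) {
            centroids[j] = data[rng.nextInt(data.length)];
        }
        int[] assignment = new int[data.length];
        for (int iter = 0; iter < maxIterations; iter++) {
            // Assignment step: attach each point to its nearest centroid.
            for (int i = 0; i < data.length; i++) {
                int best = 0;
                for (int j = 1; j < k; j++) {
                    if (Math.abs(data[i] - centroids[j]) < Math.abs(data[i] - centroids[best])) {
                        best = j;
                    }
                }
                assignment[i] = best;
            }
            // Update step: move each centroid to the mean of its members.
            double[] sums = new double[k];
            int[] counts = new int[k];
            for (int i = 0; i < data.length; i++) {
                sums[assignment[i]] += data[i];
                counts[assignment[i]]++;
            }
            for (int j = 0; j < k; j++) {
                if (counts[j] > 0) {
                    centroids[j] = sums[j] / counts[j];
                }
            }
        }
        return centroids;
    }

    public static void main(String[] args) {
        double[] data = {1.0, 1.2, 0.8, 5.0, 5.1, 4.9};
        System.out.println(Arrays.toString(cluster(data, 2, 20, new Random(1))));
    }
}
```
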

Modifier and Type | Class and Description
---|---
class | `AbstractBaggingLearner<InputType,OutputType,MemberType,EnsembleType extends Evaluator<? super InputType,? extends OutputType>>`: Learns an ensemble by randomly sampling with replacement (duplicates allowed) some percentage of the size of the data (defaults to 100%) on each iteration to train a new ensemble member (the resampling step is sketched after this table).
class | `AdaBoost<InputType>`: The AdaBoost class implements the Adaptive Boosting (AdaBoost) algorithm formulated by Yoav Freund and Robert Schapire.
class | `BaggingCategorizerLearner<InputType,CategoryType>`: Learns a categorization ensemble by randomly sampling with replacement (duplicates allowed) some percentage of the size of the data (defaults to 100%) on each iteration to train a new ensemble member.
class | `BaggingRegressionLearner<InputType>`: Learns an ensemble for regression by randomly sampling with replacement (duplicates allowed) some percentage of the size of the data (defaults to 100%) on each iteration to train a new ensemble member.
class | `BinaryBaggingLearner<InputType>`: The BinaryBaggingLearner implements the Bagging learning algorithm.
class | `CategoryBalancedBaggingLearner<InputType,CategoryType>`: An extension of the basic bagging learner that attempts to sample bags that have equal numbers of examples from every category.
class | `CategoryBalancedIVotingLearner<InputType,CategoryType>`: An extension of IVoting for dealing with skew problems that makes sure that there are an equal number of examples from each category in each sample that an ensemble member is trained on.
class | `IVotingCategorizerLearner<InputType,CategoryType>`: Learns an ensemble in a method similar to bagging, except that on each iteration the bag is built from two parts, each sampled from elements of disjoint sets.
class | `MultiCategoryAdaBoost<InputType,CategoryType>`: An implementation of a multi-class version of the Adaptive Boosting (AdaBoost) algorithm, known as AdaBoost.M1.
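
The bagging learners above all share the same resampling step: each ensemble member is trained on a bag drawn by sampling with replacement, by default of the same size as the original data. A minimal sketch of that step (illustrative only, not the Foundry API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/** Sketch of the bagging resampling step: sample a bag with replacement per member. */
public final class BaggingSketch {
    /** Draws one bag of the requested size by sampling indices with replacement. */
    public static <T> List<T> sampleBag(List<T> data, int bagSize, Random rng) {
        List<T> bag = new ArrayList<>(bagSize);
        for (int i = 0; i < bagSize; i++) {
            bag.add(data.get(rng.nextInt(data.size())));
        }
        return bag;
    }

    public static void main(String[] args) {
        List<String> examples = List.of("a", "b", "c", "d", "e");
        Random rng = new Random(7);
        int ensembleSize = 3;
        for (int m = 0; m < ensembleSize; m++) {
            // Default bag size is the full data size (100%); duplicates are expected.
            List<String> bag = sampleBag(examples, examples.size(), rng);
            System.out.println("member " + m + " trains on " + bag);
        }
    }
}
```
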

Modifier and Type | Class and Description
---|---
class | `AbstractFactorizationMachineLearner`: An abstract class for learning `FactorizationMachine`s.
class | `FactorizationMachineAlternatingLeastSquares`: Implements an Alternating Least Squares (ALS) algorithm for learning a Factorization Machine.
class | `FactorizationMachineStochasticGradient`: Implements a Stochastic Gradient Descent (SGD) algorithm for learning a Factorization Machine (the model form is sketched after this table).
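
Both learners above fit the standard factorization machine model, y(x) = w0 + Σ_i w_i x_i + Σ_{i<j} ⟨v_i, v_j⟩ x_i x_j. The sketch below evaluates that model using the usual O(k·n) reformulation of the pairwise term; it is illustrative only and is not the Foundry `FactorizationMachine` API.

```java
/** Sketch of factorization machine prediction; illustrative only. */
public final class FactorizationMachineSketch {
    public static double predict(double bias, double[] weights, double[][] factors, double[] x) {
        double result = bias;
        for (int i = 0; i < x.length; i++) {
            result += weights[i] * x[i];
        }
        int k = factors[0].length;  // number of latent factors per feature
        for (int f = 0; f < k; f++) {
            double sum = 0.0, sumOfSquares = 0.0;
            for (int i = 0; i < x.length; i++) {
                double term = factors[i][f] * x[i];
                sum += term;
                sumOfSquares += term * term;
            }
            // 0.5 * ((sum_i v_if x_i)^2 - sum_i (v_if x_i)^2) equals the pairwise interactions.
            result += 0.5 * (sum * sum - sumOfSquares);
        }
        return result;
    }

    public static void main(String[] args) {
        double[] x = {1.0, 0.0, 2.0};
        double[] w = {0.1, -0.2, 0.05};
        double[][] v = {{0.3, 0.1}, {0.0, 0.2}, {-0.1, 0.4}};
        System.out.println(predict(0.5, w, v, x));
    }
}
```
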

Modifier and Type | Class and Description
---|---
class | `GeneticAlgorithm<CostParametersType,GenomeType>`: The GeneticAlgorithm class implements a generic genetic algorithm that minimizes a given cost function and uses a given reproduction function to generate the population (see the sketch after this table).
class | `ParallelizedGeneticAlgorithm<CostParametersType,GenomeType>`: This is a parallel implementation of the genetic algorithm.
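
The cost-function-plus-reproduction-function structure described above can be illustrated with a tiny generational loop over real-valued genomes. This is a sketch under assumed choices (truncation selection, Gaussian mutation), not the Foundry `GeneticAlgorithm` API.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;
import java.util.function.ToDoubleFunction;

/** Minimal generational genetic-algorithm sketch; illustrative only. */
public final class GeneticSketch {
    public static double evolve(ToDoubleFunction<Double> cost, int populationSize,
                                int generations, Random rng) {
        List<Double> population = new ArrayList<>();
        for (int i = 0; i < populationSize; i++) {
            population.add(rng.nextDouble() * 20.0 - 10.0);  // random genomes in [-10, 10]
        }
        for (int g = 0; g < generations; g++) {
            // Selection: keep the better half of the population by cost.
            population.sort(Comparator.comparingDouble(cost::applyAsDouble));
            List<Double> parents = new ArrayList<>(population.subList(0, populationSize / 2));
            // Reproduction: refill the population with mutated copies of the parents.
            List<Double> next = new ArrayList<>(parents);
            while (next.size() < populationSize) {
                double parent = parents.get(rng.nextInt(parents.size()));
                next.add(parent + 0.1 * rng.nextGaussian());  // Gaussian mutation
            }
            population = next;
        }
        population.sort(Comparator.comparingDouble(cost::applyAsDouble));
        return population.get(0);
    }

    public static void main(String[] args) {
        // Minimize f(x) = (x + 2)^2; the best genome should approach -2.
        System.out.println(evolve(x -> (x + 2.0) * (x + 2.0), 30, 200, new Random(5)));
    }
}
```
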

Modifier and Type | Class and Description
---|---
class | `AbstractBaumWelchAlgorithm<ObservationType,DataType>`: Partial implementation of the Baum-Welch algorithm.
class | `BaumWelchAlgorithm<ObservationType>`: Implements the Baum-Welch algorithm, also known as the "forward-backward algorithm", an expectation-maximization (EM) algorithm for Hidden Markov Models (HMMs); the forward recursion it builds on is sketched after this table.
class | `ParallelBaumWelchAlgorithm<ObservationType>`: A parallelized implementation of some of the methods of the Baum-Welch algorithm.
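
Baum-Welch itself is a full EM loop; the piece shown below is only the forward ("alpha") recursion that the forward-backward computation is built on, for a discrete-observation HMM. It is a self-contained sketch, not the Foundry `BaumWelchAlgorithm` API.

```java
/** Sketch of the HMM forward recursion: computes P(observations); illustrative only. */
public final class HmmForwardSketch {
    /**
     * @param initial      initial state distribution, pi[i]
     * @param transition   transition[i][j] = P(state j at t+1 | state i at t)
     * @param emission     emission[i][o] = P(observation o | state i)
     * @param observations sequence of observation symbol indices
     */
    public static double observationLikelihood(double[] initial, double[][] transition,
                                               double[][] emission, int[] observations) {
        int numStates = initial.length;
        double[] alpha = new double[numStates];
        // Initialization: alpha_1(i) = pi_i * b_i(o_1)
        for (int i = 0; i < numStates; i++) {
            alpha[i] = initial[i] * emission[i][observations[0]];
        }
        // Induction: alpha_{t+1}(j) = (sum_i alpha_t(i) * a_ij) * b_j(o_{t+1})
        for (int t = 1; t < observations.length; t++) {
            double[] next = new double[numStates];
            for (int j = 0; j < numStates; j++) {
                double sum = 0.0;
                for (int i = 0; i < numStates; i++) {
                    sum += alpha[i] * transition[i][j];
                }
                next[j] = sum * emission[j][observations[t]];
            }
            alpha = next;
        }
        // Termination: P(O) = sum_i alpha_T(i)
        double total = 0.0;
        for (double a : alpha) {
            total += a;
        }
        return total;
    }

    public static void main(String[] args) {
        double[] pi = {0.6, 0.4};
        double[][] a = {{0.7, 0.3}, {0.4, 0.6}};
        double[][] b = {{0.5, 0.5}, {0.1, 0.9}};
        System.out.println(observationLikelihood(pi, a, b, new int[]{0, 1, 1}));
    }
}
```
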

Modifier and Type | Class and Description
---|---
class | `AbstractAnytimeFunctionMinimizer<InputType,OutputType,EvaluatorType extends Evaluator<? super InputType,? extends OutputType>>`: A partial implementation of a minimization algorithm that is iterative, stoppable, and approximate.
class | `FunctionMinimizerBFGS`: Implementation of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) Quasi-Newton nonlinear minimization algorithm.
class | `FunctionMinimizerConjugateGradient`: The conjugate gradient method is a class of algorithms for finding the unconstrained local minimum of a nonlinear function.
class | `FunctionMinimizerDFP`: Implementation of the Davidon-Fletcher-Powell (DFP) formula for a Quasi-Newton minimization update.
class | `FunctionMinimizerDirectionSetPowell`: Implementation of the derivative-free unconstrained nonlinear direction-set minimization algorithm called "Powell's Method" by Numerical Recipes.
class | `FunctionMinimizerFletcherReeves`: This is an implementation of the Fletcher-Reeves conjugate gradient minimization procedure.
class | `FunctionMinimizerGradientDescent`: This is an implementation of the classic Gradient Descent algorithm, also known as Steepest Descent, Backpropagation (for neural nets), or Hill Climbing (see the sketch after this table).
class | `FunctionMinimizerLiuStorey`: This is an implementation of the Liu-Storey conjugate gradient minimization procedure.
class | `FunctionMinimizerNelderMead`: Implementation of the Downhill Simplex minimization algorithm, also known as the Nelder-Mead method.
class | `FunctionMinimizerPolakRibiere`: This is an implementation of the Polak-Ribiere conjugate gradient minimization procedure.
class | `FunctionMinimizerQuasiNewton`: This is an abstract implementation of the Quasi-Newton minimization method, sometimes called a "variable-metric method." This family of minimization algorithms uses first-order gradient information to find a local minimum of a scalar function.
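
The simplest member of this family is plain gradient (steepest) descent with a fixed step size, sketched below for a one-dimensional function. This is illustrative only; the Foundry minimizers above use line searches and curvature information rather than a fixed learning rate.

```java
import java.util.function.DoubleUnaryOperator;

/** Sketch of classic gradient descent with a fixed learning rate; illustrative only. */
public final class GradientDescentSketch {
    public static double minimize(DoubleUnaryOperator gradient, double initial,
                                  double learningRate, int iterations) {
        double x = initial;
        for (int i = 0; i < iterations; i++) {
            // Step against the gradient direction.
            x -= learningRate * gradient.applyAsDouble(x);
        }
        return x;
    }

    public static void main(String[] args) {
        // Minimize f(x) = x^2 - 4x + 7, whose gradient is 2x - 4 (minimum at x = 2).
        System.out.println(minimize(x -> 2.0 * x - 4.0, 0.0, 0.1, 100));
    }
}
```
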

Modifier and Type | Class and Description
---|---
class | `AbstractAnytimeLineMinimizer<EvaluatorType extends Evaluator<java.lang.Double,java.lang.Double>>`: Partial AnytimeAlgorithm implementation of a LineMinimizer.
class | `LineMinimizerBacktracking`: Implementation of the backtracking line-minimization algorithm (see the sketch after this table).
class | `LineMinimizerDerivativeBased`: This is an implementation of a line-minimization algorithm proposed by Fletcher that makes extensive use of first-order derivative information.
class | `LineMinimizerDerivativeFree`: This is an implementation of a LineMinimizer that does not require derivative information.
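
Backtracking line minimization repeatedly shrinks a trial step until a sufficient-decrease (Armijo) condition holds. A minimal one-dimensional sketch, illustrative only and not the Foundry `LineMinimizerBacktracking` API:

```java
import java.util.function.DoubleUnaryOperator;

/** Sketch of backtracking line search with the Armijo condition; illustrative only. */
public final class BacktrackingSketch {
    public static double backtrack(DoubleUnaryOperator f, double x, double gradient,
                                   double initialStep, double shrink, double c) {
        double step = initialStep;
        double fx = f.applyAsDouble(x);
        double direction = -gradient;  // descend along the negative gradient
        // Shrink until f(x + step * d) <= f(x) + c * step * gradient * d.
        while (f.applyAsDouble(x + step * direction) > fx + c * step * gradient * direction) {
            step *= shrink;
        }
        return step;
    }

    public static void main(String[] args) {
        DoubleUnaryOperator f = x -> x * x;   // f(x) = x^2, gradient 2x
        double x = 5.0;
        double step = backtrack(f, x, 2.0 * x, 1.0, 0.5, 1e-4);
        System.out.println("accepted step = " + step + ", new x = " + (x - step * 2.0 * x));
    }
}
```
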

Modifier and Type | Class and Description
---|---
class | `GeneralizedHebbianAlgorithm`: Implementation of the Generalized Hebbian Algorithm, also known as Sanger's Rule, which is a generalization of Oja's Rule (the single-component case is sketched after this table).
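
Sanger's Rule extends Oja's Rule from one principal component to several. The sketch below shows only the single-component Oja update, w ← w + η·y·(x − y·w) with y = w·x, applied to a toy data stream; it is illustrative and not the Foundry `GeneralizedHebbianAlgorithm` API.

```java
import java.util.Random;

/** Sketch of Oja's rule for estimating the first principal component; illustrative only. */
public final class OjaRuleSketch {
    /** One Oja update: w <- w + eta * y * (x - y * w), where y = w . x. */
    public static void update(double[] w, double[] x, double eta) {
        double y = 0.0;
        for (int i = 0; i < w.length; i++) {
            y += w[i] * x[i];
        }
        for (int i = 0; i < w.length; i++) {
            w[i] += eta * y * (x[i] - y * w[i]);
        }
    }

    public static void main(String[] args) {
        Random rng = new Random(3);
        double[] w = {1.0, 0.0};
        // Zero-mean data stretched along the (1, 1) direction; w should rotate toward it.
        for (int t = 0; t < 5000; t++) {
            double s = rng.nextGaussian();
            double[] x = {s + 0.1 * rng.nextGaussian(), s + 0.1 * rng.nextGaussian()};
            update(w, x, 0.01);
        }
        System.out.println(w[0] + ", " + w[1]);  // roughly (0.707, 0.707)
    }
}
```
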

Modifier and Type | Class and Description
---|---
class | `BatchMultiPerceptron<CategoryType>`: Implements a multi-class version of the standard batch Perceptron learning algorithm.
class | `Perceptron`: The Perceptron class implements the standard Perceptron learning algorithm that learns a binary classifier based on vector input (the mistake-driven update is sketched after this table).
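
The standard Perceptron update is mistake-driven: when an example is misclassified, add (label × input) to the weights. A minimal, self-contained sketch (not the Foundry `Perceptron` API):

```java
/** Sketch of the standard Perceptron update for a linear binary classifier. */
public final class PerceptronSketch {
    public static double[] train(double[][] inputs, int[] labels, int epochs) {
        double[] w = new double[inputs[0].length];
        double bias = 0.0;
        for (int epoch = 0; epoch < epochs; epoch++) {
            for (int n = 0; n < inputs.length; n++) {
                double activation = bias;
                for (int i = 0; i < w.length; i++) {
                    activation += w[i] * inputs[n][i];
                }
                int prediction = activation >= 0.0 ? +1 : -1;
                if (prediction != labels[n]) {
                    // Mistake-driven update: move the separating hyperplane toward the example.
                    for (int i = 0; i < w.length; i++) {
                        w[i] += labels[n] * inputs[n][i];
                    }
                    bias += labels[n];
                }
            }
        }
        double[] model = new double[w.length + 1];
        System.arraycopy(w, 0, model, 0, w.length);
        model[w.length] = bias;     // last entry is the bias term
        return model;
    }

    public static void main(String[] args) {
        double[][] x = {{2, 1}, {3, 2}, {-1, -1}, {-2, -3}};
        int[] y = {+1, +1, -1, -1};
        double[] model = train(x, y, 10);
        System.out.println("w = (" + model[0] + ", " + model[1] + "), bias = " + model[2]);
    }
}
```
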

Modifier and Type | Class and Description
---|---
class | `KernelAdatron<InputType>`: The KernelAdatron class implements an online version of the Support Vector Machine learning algorithm.
class | `KernelPerceptron<InputType>`: The KernelPerceptron class implements the kernel version of the Perceptron algorithm (see the sketch after this table).
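
In the kernel version of the Perceptron, the explicit weight vector is replaced by a coefficient per training example, and predictions are made with kernel evaluations. A minimal sketch using a polynomial kernel (illustrative only, not the Foundry `KernelPerceptron` API):

```java
import java.util.function.ToDoubleBiFunction;

/** Sketch of the kernel Perceptron: per-example coefficients plus a kernel function. */
public final class KernelPerceptronSketch {
    public static double[] train(double[][] inputs, int[] labels, int epochs,
                                 ToDoubleBiFunction<double[], double[]> kernel) {
        double[] alpha = new double[inputs.length];  // per-example mistake counts
        for (int epoch = 0; epoch < epochs; epoch++) {
            for (int n = 0; n < inputs.length; n++) {
                double score = 0.0;
                for (int m = 0; m < inputs.length; m++) {
                    score += alpha[m] * labels[m] * kernel.applyAsDouble(inputs[m], inputs[n]);
                }
                if ((score >= 0.0 ? +1 : -1) != labels[n]) {
                    alpha[n] += 1.0;  // mistake: give this example more weight
                }
            }
        }
        return alpha;
    }

    public static void main(String[] args) {
        // Polynomial kernel (1 + x.y)^2 lets the perceptron separate non-linear patterns.
        ToDoubleBiFunction<double[], double[]> kernel = (a, b) -> {
            double dot = 0.0;
            for (int i = 0; i < a.length; i++) {
                dot += a[i] * b[i];
            }
            return (1.0 + dot) * (1.0 + dot);
        };
        double[][] x = {{1}, {-1}, {3}, {-3}};   // |x| <= 2 is one class, |x| > 2 the other
        int[] y = {+1, +1, -1, -1};
        System.out.println(java.util.Arrays.toString(train(x, y, 20, kernel)));
    }
}
```
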

Modifier and Type | Class and Description
---|---
class | `AbstractLogisticRegression<InputType,OutputType,FunctionType extends Evaluator<? super InputType,OutputType>>`: Abstract partial implementation for logistic regression classes.
class | `AbstractParameterCostMinimizer<ResultType extends VectorizableVectorFunction,CostFunctionType extends SupervisedCostFunction<Vector,Vector>>`: Partial implementation of ParameterCostMinimizer.
class | `FletcherXuHybridEstimation`: The Fletcher-Xu hybrid estimation algorithm for solving nonlinear least-squares parameter-estimation problems.
class | `GaussNewtonAlgorithm`: Implementation of the Gauss-Newton parameter-estimation procedure.
class | `KernelBasedIterativeRegression<InputType>`: The KernelBasedIterativeRegression class implements an online version of the Support Vector Regression algorithm.
class | `KernelWeightedRobustRegression<InputType,OutputType>`: KernelWeightedRobustRegression takes a supervised learning algorithm that operates on a weighted collection of InputOutputPairs and, at each iteration, modifies the weight of a sample based on the dataset output and its corresponding estimate from the Evaluator produced by the supervised learning algorithm.
class | `LeastSquaresEstimator`: Abstract implementation of iterative least-squares estimators.
class | `LevenbergMarquardtEstimation`: Implementation of the nonlinear regression algorithm known as Levenberg-Marquardt Estimation (or LMA).
class | `LogisticRegression`: Performs Logistic Regression by means of the iterative reweighted least squares (IRLS) algorithm, where the logistic function has an explicit bias term and a diagonal L2 regularization term (the model being fit is sketched after this table).
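
The logistic model being fit is p(y = 1 | x) = 1 / (1 + exp(-(w·x + b))). The sketch below fits it with plain gradient descent to keep the code short; the Foundry `LogisticRegression` class instead uses IRLS with an L2 term, so this is an illustrative substitute, not the library's method.

```java
/** Sketch of fitting a 1-D logistic model by gradient descent on the log loss. */
public final class LogisticRegressionSketch {
    public static double[] fit(double[] x, int[] y, double learningRate, int iterations) {
        double w = 0.0, b = 0.0;
        for (int it = 0; it < iterations; it++) {
            double gradW = 0.0, gradB = 0.0;
            for (int n = 0; n < x.length; n++) {
                double p = 1.0 / (1.0 + Math.exp(-(w * x[n] + b)));
                double error = p - y[n];          // derivative of the log loss
                gradW += error * x[n];
                gradB += error;
            }
            w -= learningRate * gradW / x.length;
            b -= learningRate * gradB / x.length;
        }
        return new double[]{w, b};
    }

    public static void main(String[] args) {
        double[] x = {-2, -1, 0, 1, 2, 3};
        int[] y = {0, 0, 0, 1, 1, 1};
        double[] model = fit(x, y, 0.5, 2000);
        System.out.println("w = " + model[0] + ", b = " + model[1]);
    }
}
```
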

Modifier and Type | Class and Description
---|---
class | `AbstractBracketedRootFinder`: Partial implementation of RootFinder that maintains a bracket on the root.
class | `AbstractRootFinder`: Partial implementation of RootFinder.
class | `RootBracketExpander`: The root-bracketing expansion algorithm.
class | `RootFinderBisectionMethod`: Bisection algorithm for root finding (see the sketch after this table).
class | `RootFinderFalsePositionMethod`: The false-position algorithm for root finding.
class | `RootFinderNewtonsMethod`: Newton's method, sometimes called the Newton-Raphson method, uses first-order derivative information to iteratively locate a root.
class | `RootFinderRiddersMethod`: The root-finding algorithm due to Ridders.
class | `RootFinderSecantMethod`: The secant algorithm for root finding.
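
Bisection is the simplest bracketed root finder: keep halving an interval [a, b] on which f changes sign. A minimal sketch (illustrative only, not the Foundry `RootFinderBisectionMethod` API):

```java
import java.util.function.DoubleUnaryOperator;

/** Sketch of the bisection root finder; illustrative only. */
public final class BisectionSketch {
    public static double findRoot(DoubleUnaryOperator f, double a, double b, double tolerance) {
        double fa = f.applyAsDouble(a);
        if (fa * f.applyAsDouble(b) > 0.0) {
            throw new IllegalArgumentException("Bracket must straddle a sign change.");
        }
        while (b - a > tolerance) {
            double mid = 0.5 * (a + b);
            double fm = f.applyAsDouble(mid);
            if (fa * fm <= 0.0) {
                b = mid;           // root lies in [a, mid]
            } else {
                a = mid;           // root lies in [mid, b]
                fa = fm;
            }
        }
        return 0.5 * (a + b);
    }

    public static void main(String[] args) {
        // Root of f(x) = x^3 - 2 (the cube root of 2), bracketed by [1, 2].
        System.out.println(findRoot(x -> x * x * x - 2.0, 1.0, 2.0, 1e-10));
    }
}
```
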

Modifier and Type | Class and Description
---|---
class | `PrimalEstimatedSubGradient`: An implementation of the Primal Estimated Sub-Gradient Solver (PEGASOS) algorithm for learning a linear support vector machine (SVM); a simplified version of the update is sketched after this table.
class | `SequentialMinimalOptimization<InputType>`: An implementation of the Sequential Minimal Optimization (SMO) algorithm for training a Support Vector Machine (SVM), which is a kernel-based binary categorizer.
class | `SuccessiveOverrelaxation<InputType>`: The SuccessiveOverrelaxation class implements the Successive Overrelaxation (SOR) algorithm for learning a Support Vector Machine (SVM).
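
A Pegasos-style linear SVM update works on one randomly chosen example per step: shrink the weights toward zero (the L2 regularizer's gradient) and, on a margin violation, add a scaled copy of the example (the hinge-loss sub-gradient). The sketch below is a simplified version without the optional projection step; it is illustrative only, not the Foundry `PrimalEstimatedSubGradient` API.

```java
import java.util.Random;

/** Sketch of a Pegasos-style stochastic sub-gradient linear SVM; illustrative only. */
public final class PegasosSketch {
    public static double[] train(double[][] x, int[] y, double lambda, int steps, Random rng) {
        double[] w = new double[x[0].length];
        for (int t = 1; t <= steps; t++) {
            int n = rng.nextInt(x.length);
            double eta = 1.0 / (lambda * t);        // decaying step size
            double margin = 0.0;
            for (int i = 0; i < w.length; i++) {
                margin += w[i] * x[n][i];
            }
            margin *= y[n];
            for (int i = 0; i < w.length; i++) {
                w[i] *= (1.0 - eta * lambda);       // gradient of the L2 regularizer
                if (margin < 1.0) {
                    w[i] += eta * y[n] * x[n][i];   // hinge-loss sub-gradient on a violation
                }
            }
        }
        return w;
    }

    public static void main(String[] args) {
        double[][] x = {{2, 1}, {3, 2}, {-1, -2}, {-2, -1}};
        int[] y = {+1, +1, -1, -1};
        double[] w = train(x, y, 0.1, 10000, new Random(0));
        System.out.println("w = (" + w[0] + ", " + w[1] + ")");
    }
}
```
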

Modifier and Type | Class and Description
---|---
class | `AbstractMarkovChainMonteCarlo<ObservationType,ParameterType>`: Partial abstract implementation of MarkovChainMonteCarlo.
class | `DirichletProcessMixtureModel<ObservationType>`: An implementation of Dirichlet Process clustering, which estimates the number of clusters and the centroids of the clusters from a set of data.
class | `MetropolisHastingsAlgorithm<ObservationType,ParameterType>`: An implementation of the Metropolis-Hastings MCMC algorithm, which is the most general formulation of MCMC but can be slow (see the sketch after this table).
class | `ParallelDirichletProcessMixtureModel<ObservationType>`: A parallelized version of vanilla Dirichlet Process Mixture Model learning.
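
The core of Metropolis-Hastings is easy to show in one dimension: propose a move from a symmetric random-walk kernel and accept it with probability min(1, p(x')/p(x)), where p only needs to be known up to a normalizing constant. This is a self-contained sketch, not the Foundry `MetropolisHastingsAlgorithm` API.

```java
import java.util.Random;
import java.util.function.DoubleUnaryOperator;

/** Sketch of random-walk Metropolis-Hastings sampling in one dimension. */
public final class MetropolisHastingsSketch {
    public static double[] sample(DoubleUnaryOperator unnormalizedDensity, double initial,
                                  double proposalStdDev, int numSamples, Random rng) {
        double[] samples = new double[numSamples];
        double current = initial;
        double currentDensity = unnormalizedDensity.applyAsDouble(current);
        for (int t = 0; t < numSamples; t++) {
            double proposal = current + proposalStdDev * rng.nextGaussian();
            double proposalDensity = unnormalizedDensity.applyAsDouble(proposal);
            // Symmetric proposal, so the acceptance ratio is just the density ratio.
            if (rng.nextDouble() < proposalDensity / currentDensity) {
                current = proposal;
                currentDensity = proposalDensity;
            }
            samples[t] = current;
        }
        return samples;
    }

    public static void main(String[] args) {
        // Target: standard normal, known only up to its normalizing constant.
        double[] draws = sample(x -> Math.exp(-0.5 * x * x), 0.0, 1.0, 50000, new Random(11));
        double mean = 0.0;
        for (double d : draws) {
            mean += d;
        }
        System.out.println("sample mean ~ " + mean / draws.length);
    }
}
```
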

Modifier and Type | Class and Description
---|---
static class | `MixtureOfGaussians.EMLearner`: An Expectation-Maximization based "soft" assignment learner (see the sketch after this table).
static class | `ScalarMixtureDensityModel.EMLearner`: An EM learner that estimates a mixture model from data.
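
EM for a Gaussian mixture alternates a "soft" assignment step (responsibilities) with a parameter re-estimation step. The sketch below does this for a two-component, one-dimensional mixture; it is illustrative only and not the Foundry `EMLearner` API, and the initialization from the first and last data points is an assumption made for brevity.

```java
/** Sketch of EM for a two-component 1-D Gaussian mixture; illustrative only. */
public final class GaussianMixtureEmSketch {
    static double gaussian(double x, double mean, double variance) {
        return Math.exp(-0.5 * (x - mean) * (x - mean) / variance)
                / Math.sqrt(2.0 * Math.PI * variance);
    }

    public static double[] fit(double[] data, int iterations) {
        double weight = 0.5;                      // mixing weight of component 0
        double mean0 = data[0], mean1 = data[data.length - 1];
        double var0 = 1.0, var1 = 1.0;
        double[] resp = new double[data.length];  // responsibility of component 0
        for (int it = 0; it < iterations; it++) {
            // E-step: posterior probability that each point came from component 0.
            for (int n = 0; n < data.length; n++) {
                double p0 = weight * gaussian(data[n], mean0, var0);
                double p1 = (1.0 - weight) * gaussian(data[n], mean1, var1);
                resp[n] = p0 / (p0 + p1);
            }
            // M-step: responsibility-weighted re-estimates of the parameters.
            double sum0 = 0, sum1 = 0, m0 = 0, m1 = 0;
            for (int n = 0; n < data.length; n++) {
                sum0 += resp[n];
                sum1 += 1.0 - resp[n];
                m0 += resp[n] * data[n];
                m1 += (1.0 - resp[n]) * data[n];
            }
            mean0 = m0 / sum0;
            mean1 = m1 / sum1;
            double v0 = 0, v1 = 0;
            for (int n = 0; n < data.length; n++) {
                v0 += resp[n] * (data[n] - mean0) * (data[n] - mean0);
                v1 += (1.0 - resp[n]) * (data[n] - mean1) * (data[n] - mean1);
            }
            var0 = Math.max(v0 / sum0, 1e-6);     // guard against collapsing variances
            var1 = Math.max(v1 / sum1, 1e-6);
            weight = sum0 / data.length;
        }
        return new double[]{weight, mean0, var0, mean1, var1};
    }

    public static void main(String[] args) {
        double[] data = {-2.1, -1.9, -2.0, -2.2, 3.0, 3.1, 2.9, 3.2};
        double[] params = fit(data, 50);
        System.out.println("weight=" + params[0] + " mean0=" + params[1]
                + " var0=" + params[2] + " mean1=" + params[3] + " var1=" + params[4]);
    }
}
```
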

Modifier and Type | Class and Description
---|---
class | `LatentDirichletAllocationVectorGibbsSampler`: A Gibbs sampler for performing Latent Dirichlet Allocation (LDA); the per-token sampling step is sketched after this table.
class | `ParallelLatentDirichletAllocationVectorGibbsSampler`: A parallel implementation of LatentDirichletAllocationVectorGibbsSampler.
class | `ProbabilisticLatentSemanticAnalysis`: An implementation of the Probabilistic Latent Semantic Analysis (PLSA) algorithm.
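
A collapsed Gibbs sampler for LDA repeatedly resamples the topic of one token at a time from p(topic k | rest) ∝ (docTopicCount[d][k] + α) · (topicWordCount[k][w] + β) / (topicCount[k] + V·β). The sketch below shows just that per-token step given the count arrays; it is illustrative only (the count bookkeeping and the outer sweep over tokens are omitted) and is not the Foundry sampler's API.

```java
import java.util.Random;

/** Sketch of one collapsed Gibbs sampling step for LDA; illustrative only. */
public final class LdaGibbsStepSketch {
    public static int sampleTopic(int doc, int word, int[][] docTopicCount,
                                  int[][] topicWordCount, int[] topicCount,
                                  double alpha, double beta, int vocabularySize, Random rng) {
        int numTopics = topicCount.length;
        double[] weights = new double[numTopics];
        double total = 0.0;
        for (int k = 0; k < numTopics; k++) {
            weights[k] = (docTopicCount[doc][k] + alpha)
                    * (topicWordCount[k][word] + beta)
                    / (topicCount[k] + vocabularySize * beta);
            total += weights[k];
        }
        // Draw a topic index in proportion to the unnormalized weights.
        double u = rng.nextDouble() * total;
        for (int k = 0; k < numTopics; k++) {
            u -= weights[k];
            if (u <= 0.0) {
                return k;
            }
        }
        return numTopics - 1;  // guard against floating-point rounding
    }

    public static void main(String[] args) {
        // Toy counts: 2 topics, vocabulary of 3 words, current token already excluded.
        int[][] docTopic = {{3, 1}};
        int[][] topicWord = {{4, 1, 1}, {1, 4, 1}};
        int[] topicTotal = {6, 6};
        System.out.println(sampleTopic(0, 0, docTopic, topicWord, topicTotal,
                0.1, 0.01, 3, new Random(2)));
    }
}
```
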