From c9cfbd1b1f6a9fc75bd1298af882010aaf5fd4cd Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Sun, 3 Nov 2024 22:14:07 +0000 Subject: [PATCH] build based on 1449814 --- dev/.documenter-siteinfo.json | 2 +- dev/accessor_functions/index.html | 6 ++--- dev/anatomy_of_an_implementation/index.html | 22 +++++++++--------- dev/common_implementation_patterns/index.html | 2 +- dev/fit_update/index.html | 10 ++++---- dev/index.html | 2 +- dev/kinds_of_target_proxy/index.html | 2 +- dev/objects.inv | Bin 2289 -> 2296 bytes dev/obs/index.html | 2 +- dev/patterns/classification/index.html | 2 +- dev/patterns/clusterering/index.html | 2 +- dev/patterns/density_estimation/index.html | 2 +- dev/patterns/dimension_reduction/index.html | 2 +- dev/patterns/ensembling/index.html | 2 +- dev/patterns/feature_engineering/index.html | 2 +- dev/patterns/gradient_descent/index.html | 2 +- .../incremental_algorithms/index.html | 2 +- dev/patterns/iterative_algorithms/index.html | 2 +- dev/patterns/meta_algorithms/index.html | 2 +- .../missing_value_imputation/index.html | 2 +- dev/patterns/outlier_detection/index.html | 2 +- dev/patterns/regression/index.html | 2 +- dev/patterns/static_algorithms/index.html | 2 +- .../supervised_bayesian_algorithms/index.html | 2 +- .../supervised_bayesian_models/index.html | 2 +- dev/patterns/survival_analysis/index.html | 2 +- .../time_series_classification/index.html | 2 +- .../time_series_forecasting/index.html | 2 +- dev/patterns/transformers/index.html | 2 +- dev/predict_transform/index.html | 11 ++++----- dev/reference/index.html | 6 ++--- dev/search_index.js | 2 +- dev/target_weights_features/index.html | 4 ++-- dev/testing_an_implementation/index.html | 2 +- dev/traits/index.html | 8 +++---- 35 files changed, 60 insertions(+), 61 deletions(-) diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index 2cef845..bf59f73 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-11-03T05:07:28","documenter_version":"1.7.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-11-03T22:14:03","documenter_version":"1.7.0"}} \ No newline at end of file diff --git a/dev/accessor_functions/index.html b/dev/accessor_functions/index.html index c7a126a..bae16d0 100644 --- a/dev/accessor_functions/index.html +++ b/dev/accessor_functions/index.html @@ -1,6 +1,6 @@ -Accessor Functions · LearnAPI.jl

Accessor Functions

The sole argument of an accessor function is the output, model, of fit. Learners are free to implement any number of these, or none of them. Only LearnAPI.strip has a fallback, namely the identity.

Learner-specific accessor functions may also be implemented. The names of all accessor functions are included in the list returned by LearnAPI.functions(learner).

Implementation guide

All new implementations must implement LearnAPI.learner. While all others are optional, any implemented accessor functions must be added to the list returned by LearnAPI.functions.

Reference

LearnAPI.learnerFunction
LearnAPI.learner(model)
-LearnAPI.learner(stripped_model)

Recover the learner used to train model or the output, stripped_model, of LearnAPI.strip(model).

In other words, if model = fit(learner, data...), for some learner and data, then

LearnAPI.learner(model) == learner == LearnAPI.learner(LearnAPI.strip(model))

is true.

New implementations

Implementation is compulsory for new learner types. The behaviour described above is the only contract. You must include :(LearnAPI.learner) in the return value of LearnAPI.functions(learner).

source
LearnAPI.extrasFunction
LearnAPI.extras(model)

Return miscellaneous byproducts of a learning algorithm's execution, from the object model returned by a call of the form fit(learner, data).

For "static" learners (those without training data) it may be necessary to first call transform or predict on model.

See also fit.

New implementations

Implementation is discouraged for byproducts already covered by other LearnAPI.jl accessor functions: LearnAPI.learner, LearnAPI.coefficients, LearnAPI.intercept, LearnAPI.tree, LearnAPI.trees, LearnAPI.feature_importances, LearnAPI.training_labels, LearnAPI.training_losses, LearnAPI.training_predictions, LearnAPI.training_scores and LearnAPI.components.

If implemented, you must include :(LearnAPI.extras) in the tuple returned by the LearnAPI.functions trait.

source
Base.stripFunction
LearnAPI.strip(model; options...)

Return a version of model that will generally have a smaller memory allocation than model, suitable for serialization. Here model is any object returned by fit. Accessor functions that can be called on model may not work on LearnAPI.strip(model), but predict, transform and inverse_transform will work, if implemented. Check LearnAPI.functions(LearnAPI.learner(model)) to see what the original model implements.

Implementations may provide learner-specific keyword options to control how much of the original functionality is preserved by LearnAPI.strip.

Typical workflow

model = fit(learner, (X, y)) # or `fit(learner, X, y)`
+Accessor Functions · LearnAPI.jl

Accessor Functions

The sole argument of an accessor function is the output, model, of fit. Learners are free to implement any number of these, or none of them. Only LearnAPI.strip has a fallback, namely the identity.

Learner-specific accessor functions may also be implemented. The names of all accessor functions are included in the list returned by LearnAPI.functions(learner).

Implementation guide

All new implementations must implement LearnAPI.learner. While all others are optional, any implemented accessor functions must be added to the list returned by LearnAPI.functions.
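For illustration, here is a hedged usage sketch of these contracts; the learner type MyRidge and the data X, y are hypothetical:

learner = MyRidge(lambda=0.1)    # hypothetical learner
model = fit(learner, (X, y))     # hypothetical data

@assert :(LearnAPI.learner) in LearnAPI.functions(learner)  # always present

# call an optional accessor function only if it is declared:
if :(LearnAPI.coefficients) in LearnAPI.functions(learner)
    LearnAPI.coefficients(model)
end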

Reference

LearnAPI.learnerFunction
LearnAPI.learner(model)
+LearnAPI.learner(stripped_model)

Recover the learner used to train model or the output, stripped_model, of LearnAPI.strip(model).

In other words, if model = fit(learner, data...), for some learner and data, then

LearnAPI.learner(model) == learner == LearnAPI.learner(LearnAPI.strip(model))

is true.

New implementations

Implementation is compulsory for new learner types. The behaviour described above is the only contract. You must include :(LearnAPI.learner) in the return value of LearnAPI.functions(learner).

source
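A minimal implementation sketch, assuming the output of fit is a struct that stores the learner used in training (the types MyRidge and MyRidgeFitted are hypothetical):

struct MyRidgeFitted
    learner::MyRidge                            # the learner used in training
    coefficients::Vector{Pair{Symbol,Float64}}
end

# compulsory accessor function, satisfying the contract above:
LearnAPI.learner(model::MyRidgeFitted) = model.learner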
LearnAPI.extrasFunction
LearnAPI.extras(model)

Return miscellaneous byproducts of a learning algorithm's execution, from the object model returned by a call of the form fit(learner, data).

For "static" learners (those without training data) it may be necessary to first call transform or predict on model.

See also fit.

New implementations

Implementation is discouraged for byproducts already covered by other LearnAPI.jl accessor functions: LearnAPI.learner, LearnAPI.coefficients, LearnAPI.intercept, LearnAPI.tree, LearnAPI.trees, LearnAPI.feature_names, LearnAPI.feature_importances, LearnAPI.training_labels, LearnAPI.training_losses, LearnAPI.training_predictions, LearnAPI.training_scores and LearnAPI.components.

If implemented, you must include :(LearnAPI.extras) in the tuple returned by the LearnAPI.functions trait.

source
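A hedged sketch, assuming the fit output of a hypothetical learner records a byproduct, such as the number of solver iterations, not covered by the accessor functions listed above:

# `iterations` is an assumed field of the hypothetical fit output:
LearnAPI.extras(model::MyRidgeFitted) = (; iterations=model.iterations)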
Base.stripFunction
LearnAPI.strip(model; options...)

Return a version of model that will generally have a smaller memory allocation than model, suitable for serialization. Here model is any object returned by fit. Accessor functions that can be called on model may not work on LearnAPI.strip(model), but predict, transform and inverse_transform will work, if implemented. Check LearnAPI.functions(LearnAPI.learner(model)) to see what the original model implements.

Implementations may provide learner-specific keyword options to control how much of the original functionality is preserved by LearnAPI.strip.

Typical workflow

model = fit(learner, (X, y)) # or `fit(learner, X, y)`
 ŷ = predict(model, Point(), Xnew)
 
 small_model = LearnAPI.strip(model)
@@ -12,4 +12,4 @@
 transform(LearnAPI.strip(model; options...), args...; kwargs...) ==
     transform(model, args...; kwargs...)
 inverse_transform(LearnAPI.strip(model; options), args...; kwargs...) ==
-    inverse_transform(model, args...; kwargs...)

Additionally:

LearnAPI.strip(LearnAPI.strip(model)) == LearnAPI.strip(model)
source
LearnAPI.coefficientsFunction
LearnAPI.coefficients(model)

For a linear model, return the learned coefficients. The value returned has the form of an abstract vector of feature_or_class::Symbol => coefficient::Real pairs (e.g., [:gender => 0.23, :height => 0.7, :weight => 0.1]) or, in the case of multi-targets, feature::Symbol => coefficients::AbstractVector{<:Real} pairs.

The model reports coefficients if :(LearnAPI.coefficients) in LearnAPI.functions(LearnAPI.learner(model)).

See also LearnAPI.intercept.

New implementations

Implementation is optional.

If implemented, you must include :(LearnAPI.coefficients) in the tuple returned by the LearnAPI.functions trait.

source
LearnAPI.interceptFunction
LearnAPI.intercept(model)

For a linear model, return the learned intercept. The value returned is Real (single target) or an AbstractVector{<:Real} (multi-target).

The model reports intercept if :(LearnAPI.intercept) in LearnAPI.functions(LearnAPI.learner(model)).

See also LearnAPI.coefficients.

New implementations

Implementation is optional.

If implemented, you must include :(LearnAPI.intercept) in the tuple returned by the LearnAPI.functions trait.

source
LearnAPI.treeFunction
LearnAPI.tree(model)

Return a user-friendly tree, in the form of a root object implementing the following interface defined in AbstractTrees.jl:

  • subtypes AbstractTrees.AbstractNode{T}
  • implements AbstractTrees.children()
  • implements AbstractTrees.printnode()

Such a tree can be visualized using the TreeRecipe.jl package, for example.

See also LearnAPI.trees.

New implementations

Implementation is optional.

If implemented, you must include :(LearnAPI.tree) in the tuple returned by the LearnAPI.functions trait.

source
LearnAPI.treesFunction
LearnAPI.trees(model)

For some ensemble model, return a vector of trees. See LearnAPI.tree for the form of such trees.

See also LearnAPI.tree.

New implementations

Implementation is optional.

If implemented, you must include :(LearnAPI.trees) in the tuple returned by the LearnAPI.functions trait.

source
LearnAPI.feature_importancesFunction
LearnAPI.feature_importances(model)

Return the learner-specific feature importances of a model output by fit(learner, ...) for some learner. The value returned has the form of an abstract vector of feature::Symbol => importance::Real pairs (e.g., [:gender => 0.23, :height => 0.7, :weight => 0.1]).

The learner supports feature importances if :(LearnAPI.feature_importances) in LearnAPI.functions(learner).

If a learner is sometimes unable to report feature importances then LearnAPI.feature_importances will return all importances as 0.0, as in [:gender => 0.0, :height => 0.0, :weight => 0.0].

New implementations

Implementation is optional.

If implemented, you must include :(LearnAPI.feature_importances) in the tuple returned by the LearnAPI.functions trait.

source
LearnAPI.training_lossesFunction
LearnAPI.training_losses(model)

Return the training losses obtained when running model = fit(learner, ...) for some learner.

See also fit.

New implementations

Implement for iterative algorithms that compute and record training losses as part of training (e.g. neural networks).

If implemented, you must include :(LearnAPI.training_losses) in the tuple returned by the LearnAPI.functions trait.

source
LearnAPI.training_predictionsFunction
LearnAPI.training_predictions(model)

Return internally computed training predictions when running model = fit(learner, ...) for some learner.

See also fit.

New implementations

Implement for iterative algorithms that compute and record training predictions as part of training (e.g. neural networks).

If implemented, you must include :(LearnAPI.training_predictions) in the tuple returned by the LearnAPI.functions trait.

source
LearnAPI.training_scoresFunction
LearnAPI.training_scores(model)

Return the training scores obtained when running model = fit(learner, ...) for some learner.

See also fit.

New implementations

Implement for learners, such as outlier detection algorithms, which associate a score with each observation during training, where these scores are of interest in later processes (e.g., in defining normalized scores for new data).

If implemented, you must include :(LearnAPI.training_scores) in the tuple returned by the LearnAPI.functions trait.

source
LearnAPI.training_labelsFunction
LearnAPI.training_labels(model)

Return the training labels obtained when running model = fit(learner, ...) for some learner.

See also fit.

New implementations

If implemented, you must include :(LearnAPI.training_labels) in the tuple returned by the LearnAPI.functions trait.

source
LearnAPI.componentsFunction
LearnAPI.components(model)

For a composite model, return the component models (fit outputs). These will be in the form of a vector of named pairs, property_name::Symbol => component_model. Here property_name is the name of some learner-valued property (hyper-parameter) of learner = LearnAPI.learner(model).

A composite model is one for which the corresponding learner includes one or more learner-valued properties, and for which LearnAPI.is_composite(learner) is true.

See also is_composite.

New implementations

Implement if and only if model is a composite model.

If implemented, you must include :(LearnAPI.components) in the tuple returned by the LearnAPI.functions trait.

source
+ inverse_transform(model, args...; kwargs...)

Additionally:

LearnAPI.strip(LearnAPI.strip(model)) == LearnAPI.strip(model)
source
LearnAPI.coefficientsFunction
LearnAPI.coefficients(model)

For a linear model, return the learned coefficients. The value returned has the form of an abstract vector of feature_or_class::Symbol => coefficient::Real pairs (e.g., [:gender => 0.23, :height => 0.7, :weight => 0.1]) or, in the case of multi-targets, feature::Symbol => coefficients::AbstractVector{<:Real} pairs.

The model reports coefficients if :(LearnAPI.coefficients) in LearnAPI.functions(LearnAPI.learner(model)).

See also LearnAPI.intercept.

New implementations

Implementation is optional.

If implemented, you must include :(LearnAPI.coefficients) in the tuple returned by the LearnAPI.functions trait.

source
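Continuing the hypothetical MyRidgeFitted sketch from the LearnAPI.learner example above, which already stores the coefficient pairs, implementation can be a one-liner:

LearnAPI.coefficients(model::MyRidgeFitted) = model.coefficients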
LearnAPI.interceptFunction
LearnAPI.intercept(model)

For a linear model, return the learned intercept. The value returned is Real (single target) or an AbstractVector{<:Real} (multi-target).

The model reports intercept if :(LearnAPI.intercept) in LearnAPI.functions(LearnAPI.learner(model)).

See also LearnAPI.coefficients.

New implementations

Implementation is optional.

If implemented, you must include :(LearnAPI.intercept) in the tuple returned by the LearnAPI.functions trait.

source
LearnAPI.treeFunction
LearnAPI.tree(model)

Return a user-friendly tree, in the form of a root object implementing the following interface defined in AbstractTrees.jl:

  • subtypes AbstractTrees.AbstractNode{T}
  • implements AbstractTrees.children()
  • implements AbstractTrees.printnode()

Such a tree can be visualized using the TreeRecipe.jl package, for example.

See also LearnAPI.trees.

New implementations

Implementation is optional.

If implemented, you must include :(LearnAPI.tree) in the tuple returned by the LearnAPI.functions trait.

source
LearnAPI.treesFunction
LearnAPI.trees(model)

For some ensemble model, return a vector of trees. See LearnAPI.tree for the form of such trees.

See also LearnAPI.tree.

New implementations

Implementation is optional.

If implemented, you must include :(LearnAPI.trees) in the tuple returned by the LearnAPI.functions trait.

source
LearnAPI.feature_namesFunction
LearnAPI.feature_names(model)

Return the names of features encountered when fitting or updating some learner to obtain model.

The value returned is a vector of symbols.

This method is implemented if :(LearnAPI.feature_names) in LearnAPI.functions(learner).

See also fit.

New implementations

If implemented, you must include :(LearnAPI.feature_names) in the tuple returned by the LearnAPI.functions trait.

source
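A hedged sketch, assuming the fit output records the training column names in a feature_names field (names hypothetical):

LearnAPI.feature_names(model::MyRidgeFitted) = model.feature_names  # Vector{Symbol}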
LearnAPI.feature_importancesFunction
LearnAPI.feature_importances(model)

Return the learner-specific feature importances of a model output by fit(learner, ...) for some learner. The value returned has the form of an abstract vector of feature::Symbol => importance::Real pairs (e.g., [:gender => 0.23, :height => 0.7, :weight => 0.1]).

The learner supports feature importances if :(LearnAPI.feature_importances) in LearnAPI.functions(learner).

If a learner is sometimes unable to report feature importances then LearnAPI.feature_importances will return all importances as 0.0, as in [:gender => 0.0, :height => 0.0, :weight => 0.0].

New implementations

Implementation is optional.

If implemented, you must include :(LearnAPI.feature_importances) in the tuple returned by the LearnAPI.functions trait.

source
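A sketch for a hypothetical tree-ensemble fit output that stores per-feature split gains; the type and field names are assumptions:

function LearnAPI.feature_importances(model::MyForestFitted)
    total = sum(model.gains)
    return [name => gain/total for (name, gain) in zip(model.feature_names, model.gains)]
end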
LearnAPI.training_lossesFunction
LearnAPI.training_losses(model)

Return the training losses obtained when running model = fit(learner, ...) for some learner.

See also fit.

New implementations

Implement for iterative algorithms that compute and record training losses as part of training (e.g. neural networks).

If implemented, you must include :(LearnAPI.training_losses) in the tuple returned by the LearnAPI.functions trait.

source
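A sketch for an iterative learner whose fit loop pushes one loss per epoch onto a losses field (type and field hypothetical):

LearnAPI.training_losses(model::MyNetFitted) = model.losses  # e.g., a Vector{Float64}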
LearnAPI.training_predictionsFunction
LearnAPI.training_predictions(model)

Return internally computed training predictions when running model = fit(learner, ...) for some learner.

See also fit.

New implementations

Implement for iterative algorithms that compute and record training predictions as part of training (e.g. neural networks).

If implemented, you must include :(LearnAPI.training_predictions) in the tuple returned by the LearnAPI.functions trait.

source
LearnAPI.training_scoresFunction
LearnAPI.training_scores(model)

Return the training scores obtained when running model = fit(learner, ...) for some learner.

See also fit.

New implementations

Implement for learners, such as outlier detection algorithms, which associate a score with each observation during training, where these scores are of interest in later processes (e.g., in defining normalized scores for new data).

If implemented, you must include :(LearnAPI.training_scores) in the tuple returned by the LearnAPI.functions trait.

source
LearnAPI.training_labelsFunction
LearnAPI.training_labels(model)

Return the training labels obtained when running model = fit(learner, ...) for some learner.

See also fit.

New implementations

If implemented, you must include :(LearnAPI.training_labels) in the tuple returned by the LearnAPI.functions trait.

source
LearnAPI.componentsFunction
LearnAPI.components(model)

For a composite model, return the component models (fit outputs). These will be in the form of a vector of named pairs, property_name::Symbol => component_model. Here property_name is the name of some learner-valued property (hyper-parameter) of learner = LearnAPI.learner(model).

A composite model is one for which the corresponding learner includes one or more learner-valued properties, and for which LearnAPI.is_composite(learner) is true.

See also is_composite.

New implementations

Implement if and only if model is a composite model.

If implemented, you must include :(LearnAPI.components) in the tuple returned by the LearnAPI.functions trait.

source
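A hedged sketch for a hypothetical composite learner with two learner-valued properties, preprocessor and regressor:

LearnAPI.components(model::MyPipelineFitted) =
    [:preprocessor => model.preprocessor_model, :regressor => model.regressor_model]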
diff --git a/dev/anatomy_of_an_implementation/index.html b/dev/anatomy_of_an_implementation/index.html index 77f51de..05c9c33 100644 --- a/dev/anatomy_of_an_implementation/index.html +++ b/dev/anatomy_of_an_implementation/index.html @@ -69,13 +69,13 @@ ytrain = y[train] model = fit(learner, (Xtrain, ytrain)) # `fit(learner, Xtrain, ytrain)` will also work ŷ = predict(model, Tables.subset(X, test))
4-element Vector{Float64}:
- 1.223460926373845
- 1.9655438267016696
- 2.013257105832323
- 2.165906291335041

Extracting coefficients:

LearnAPI.coefficients(model)
3-element Vector{Pair{Symbol, Float64}}:
- :a => 1.3432032404289198
- :b => 0.0894047227153813
- :c => 1.9243065489417621

Serialization/deserialization:

using Serialization
+ 1.1799889093605174
+ 0.9113591242652774
+ 2.9852540052717593
+ 2.6583570214105947

Extracting coefficients:

LearnAPI.coefficients(model)
3-element Vector{Pair{Symbol, Float64}}:
+ :a => 0.8741690704515571
+ :b => 0.599971004279929
+ :c => 1.9374537027375354

Serialization/deserialization:

using Serialization
 small_model = LearnAPI.strip(model)
 filename = tempname()
 serialize(filename, small_model)
recovered_model = deserialize(filename)
@@ -126,7 +126,7 @@
 model = fit(learner, MLUtils.getobs(observations_for_fit, train))
 observations_for_predict = obs(model, X)
 ẑ = predict(model, MLUtils.getobs(observations_for_predict, test))
4-element Vector{Float64}:
- 1.8791289909932145
- 2.3890585093864587
- 2.378916821580873
- 2.637900579797121
@assert ẑ == ŷ

For an application of obs to efficient cross-validation, see here.


¹ In LearnAPI.jl a table is any object X implementing the Tables.jl interface, additionally satisfying Tables.istable(X) == true and implementing DataAPI.nrow (and whence MLUtils.numobs). Tables that are also (unnamed) tuples are disallowed.

² An implementation can provide further accessor functions, if necessary, but like the native ones, they must be included in the LearnAPI.functions declaration.

³ The last index must be the observation index.

⁴ The data = (X, y) pattern implemented here is not the only supported pattern. For example, data might be a single table containing both features and target variable. In this case, it will be necessary to overload LearnAPI.features in addition to LearnAPI.target; the name of the target column would need to be a hyperparameter.

+ 2.6618786123131226 + 0.433555022591084 + 2.6745771400103076 + 2.2192865576847436
@assert ẑ == ŷ

For an application of obs to efficient cross-validation, see here.


¹ In LearnAPI.jl a table is any object X implementing the Tables.jl interface, additionally satisfying Tables.istable(X) == true and implementing DataAPI.nrow (and whence MLUtils.numobs). Tables that are also (unnamed) tuples are disallowed.

² An implementation can provide further accessor functions, if necessary, but like the native ones, they must be included in the LearnAPI.functions declaration.

³ The last index must be the observation index.

⁴ The data = (X, y) pattern implemented here is not the only supported pattern. For example, data might be a single table containing both features and target variable. In this case, it will be necessary to overload LearnAPI.features in addition to LearnAPI.target; the name of the target column would need to be a hyperparameter.
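A hedged sketch of that single-table pattern, assuming two-argument accessor signatures and a target hyperparameter naming the target column (the learner type MyLearner and all field names are hypothetical):

import Tables

LearnAPI.target(learner::MyLearner, data) = Tables.getcolumn(data, learner.target)

function LearnAPI.features(learner::MyLearner, data)
    columns = Tables.columntable(data)          # NamedTuple of column vectors
    keep = Tuple(n for n in Tables.columnnames(columns) if n != learner.target)
    return NamedTuple{keep}(columns)            # drop the target column
end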

diff --git a/dev/common_implementation_patterns/index.html b/dev/common_implementation_patterns/index.html index 7e8e630..ce4e9ce 100644 --- a/dev/common_implementation_patterns/index.html +++ b/dev/common_implementation_patterns/index.html @@ -1,2 +1,2 @@ -Common Implementation Patterns · LearnAPI.jl

Common Implementation Patterns

Important

This section is only an implementation guide. The definitive specification of the Learn API is given in Reference.

This guide is intended to be consulted after reading Anatomy of an Implementation, which introduces the main interface objects and terminology.

Although an implementation is defined purely by the methods and traits it implements, many implementations fall into one (or more) of the following informally understood patterns or "tasks":

  • Regression: Supervised learners for continuous targets

  • Classification: Supervised learners for categorical targets

  • Clustering: Algorithms that group data into clusters for classification and possibly dimension reduction. May be true learners (generalize to new data) or static.

  • Gradient Descent: Including neural networks.

  • Iterative Algorithms

  • Incremental Algorithms: Algorithms that can be updated with new observations.

  • Feature Engineering: Algorithms for selecting or combining features

  • Dimension Reduction: Transformers that learn to reduce feature space dimension

  • Missing Value Imputation

  • Transformers: Other transformers, such as standardizers, and categorical encoders.

  • Static Algorithms: Algorithms that do not learn, in the sense they must be re-executed for each new data set (do not generalize), but which have hyperparameters and/or deliver ancillary information about the computation.

  • Ensembling: Algorithms that blend predictions of multiple algorithms

  • Time Series Forecasting

  • Time Series Classification

  • Survival Analysis

  • Density Estimation: Algorithms that learn a probability distribution

  • Bayesian Algorithms

  • Outlier Detection: Supervised, unsupervised, or semi-supervised learners for anomaly detection.

  • Text Analysis

  • Audio Analysis

  • Natural Language Processing

  • Image Processing

  • Meta-algorithms

+Common Implementation Patterns · LearnAPI.jl

Common Implementation Patterns

Important

This section is only an implementation guide. The definitive specification of the Learn API is given in Reference.

This guide is intended to be consulted after reading Anatomy of an Implementation, which introduces the main interface objects and terminology.

Although an implementation is defined purely by the methods and traits it implements, many implementations fall into one (or more) of the following informally understood patterns or "tasks":

  • Regression: Supervised learners for continuous targets

  • Classification: Supervised learners for categorical targets

  • Clustering: Algorithms that group data into clusters for classification and possibly dimension reduction. May be true learners (generalize to new data) or static.

  • Gradient Descent: Including neural networks.

  • Iterative Algorithms

  • Incremental Algorithms: Algorithms that can be updated with new observations.

  • Feature Engineering: Algorithms for selecting or combining features

  • Dimension Reduction: Transformers that learn to reduce feature space dimension

  • Missing Value Imputation

  • Transformers: Other transformers, such as standardizers, and categorical encoders.

  • Static Algorithms: Algorithms that do not learn, in the sense they must be re-executed for each new data set (do not generalize), but which have hyperparameters and/or deliver ancillary information about the computation.

  • Ensembling: Algorithms that blend predictions of multiple algorithms

  • Time Series Forecasting

  • Time Series Classification

  • Survival Analysis

  • Density Estimation: Algorithms that learn a probability distribution

  • Bayesian Algorithms

  • Outlier Detection: Supervised, unsupervised, or semi-supervised learners for anomaly detection.

  • Text Analysis

  • Audio Analysis

  • Natural Language Processing

  • Image Processing

  • Meta-algorithms

diff --git a/dev/fit_update/index.html b/dev/fit_update/index.html index 99b7331..0e8bdec 100644 --- a/dev/fit_update/index.html +++ b/dev/fit_update/index.html @@ -24,13 +24,13 @@ LearnAPI.extras(model)

See also Static Algorithms

Density estimation

In density estimation, fit consumes no features, only a target variable; predict, which consumes no data, returns the learned density:

model = fit(learner, y) # no features
 predict(model)  # shortcut for  `predict(model, SingleDistribution())`, or similar

A one-liner will typically be implemented as well:

predict(learner, y)

See also Density Estimation.

Implementation guide

Training

Exactly one of the following must be implemented:

method | fallback
fit(learner, data; verbosity=LearnAPI.default_verbosity()) | none
fit(learner; verbosity=LearnAPI.default_verbosity()) | none

Updating

method | fallback | compulsory?
update(model, data; verbosity=..., hyperparameter_updates...) | none | no
update_observations(model, data; verbosity=..., hyperparameter_updates...) | none | no
update_features(model, data; verbosity=..., hyperparameter_updates...) | none | no

There are some contracts governing the behaviour of the update methods, as they relate to a previous fit call. Consult the document strings for details.

Reference

LearnAPI.fitFunction
fit(learner, data; verbosity=LearnAPI.default_verbosity())
 fit(learner; verbosity=LearnAPI.default_verbosity())

Execute the machine learning or statistical algorithm with configuration learner using the provided training data, returning an object, model, on which other methods, such as predict or transform, can be dispatched. LearnAPI.functions(learner) returns a list of methods that can be applied to either learner or model.

For example, a supervised classifier might have a workflow like this:

model = fit(learner, (X, y))
-ŷ = predict(model, Xnew)

The signature fit(learner; verbosity=...) (no data) is provided by learners that do not generalize to new observations (called static algorithms). In that case, transform(model, data) or predict(model, ..., data) carries out the actual algorithm execution, writing any byproducts of that operation to the mutable object model returned by fit.

Use verbosity=0 for warnings only, and -1 for silent training.

See also LearnAPI.default_verbosity, predict, transform, inverse_transform, LearnAPI.functions, obs.

Extended help

New implementations

Implementation of exactly one of the signatures is compulsory. If fit(learner; verbosity=...) is implemented, then the trait LearnAPI.is_static must be overloaded to return true.

The signature must include verbosity with LearnAPI.default_verbosity() as default.

If data encapsulates a target variable, as defined in LearnAPI.jl documentation, then LearnAPI.target(data) must be overloaded to return it. If predict or transform are implemented and consume data, then LearnAPI.features(data) must return something that can be passed as data to these methods. A fallback returns first(data) if data is a tuple, and data otherwise.

The LearnAPI.jl specification has nothing to say regarding fit signatures with more than two arguments. For convenience, an implementation is free to implement a slurping signature, such as fit(learner, X, y, extras...) = fit(learner, (X, y, extras...)), but LearnAPI.jl does not guarantee such signatures are actually implemented.

Assumptions about data

By default, it is assumed that data supports the LearnAPI.RandomAccess interface; this includes all matrices, with observations-as-columns, most tables, and tuples thereof. See LearnAPI.RandomAccess for details. If this is not the case, then an implementation must either: (i) overload obs to articulate how provided data can be transformed into a form that does support LearnAPI.RandomAccess; or (ii) overload the trait LearnAPI.data_interface to specify a more relaxed data API. Refer to the document strings for details.

source
LearnAPI.updateFunction
update(model, data; verbosity=LearnAPI.default_verbosity(), hyperparam_replacements...)

Return an updated version of the model object returned by a previous fit or update call, but with the specified hyperparameter replacements, in the form p1=value1, p2=value2, ....

learner = MyForest(ntrees=100)
+ŷ = predict(model, Xnew)

The signature fit(learner; verbosity=...) (no data) is provided by learners that do not generalize to new observations (called static algorithms). In that case, transform(model, data) or predict(model, ..., data) carries out the actual algorithm execution, writing any byproducts of that operation to the mutable object model returned by fit.

Use verbosity=0 for warnings only, and -1 for silent training.

See also LearnAPI.default_verbosity, predict, transform, inverse_transform, LearnAPI.functions, obs.

Extended help

New implementations

Implementation of exactly one of the signatures is compulsory. If fit(learner; verbosity=...) is implemented, then the trait LearnAPI.is_static must be overloaded to return true.

The signature must include verbosity with LearnAPI.default_verbosity() as default.

If data encapsulates a target variable, as defined in LearnAPI.jl documentation, then LearnAPI.target(data) must be overloaded to return it. If predict or transform are implemented and consume data, then LearnAPI.features(data) must return something that can be passed as data to these methods. A fallback returns first(data) if data is a tuple, and data otherwise.

The LearnAPI.jl specification has nothing to say regarding fit signatures with more than two arguments. For convenience, an implementation is free to implement a slurping signature, such as fit(learner, X, y, extras...) = fit(learner, (X, y, extras...)), but LearnAPI.jl does not guarantee such signatures are actually implemented.
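For instance, such a convenience overload, delegating to the canonical single-data signature, might look like this (the learner type MyRidge is hypothetical):

LearnAPI.fit(learner::MyRidge, X, y, extras...; kwargs...) =
    fit(learner, (X, y, extras...); kwargs...)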

Assumptions about data

By default, it is assumed that data supports the LearnAPI.RandomAccess interface; this includes all matrices, with observations-as-columns, most tables, and tuples thereof. See LearnAPI.RandomAccess for details. If this is not the case, then an implementation must either: (i) overload obs to articulate how provided data can be transformed into a form that does support LearnAPI.RandomAccess; or (ii) overload the trait LearnAPI.data_interface to specify a more relaxed data API. Refer to the document strings for details.

source
LearnAPI.updateFunction
update(model, data; verbosity=LearnAPI.default_verbosity(), hyperparam_replacements...)

Return an updated version of the model object returned by a previous fit or update call, but with the specified hyperparameter replacements, in the form p1=value1, p2=value2, ....

learner = MyForest(ntrees=100)
 
 # train with 100 trees:
 model = fit(learner, data)
 
 # add 50 more trees:
-model = update(model, data; ntrees=150)

Provided that data is identical with the data presented in a preceding fit call and there is at most one hyperparameter replacement, as in the above example, execution is semantically equivalent to the call fit(learner, data), where learner is LearnAPI.learner(model) with the specified replacements. In some cases (typically, when changing an iteration parameter) there may be a performance benefit to using update instead of retraining ab initio.

If data differs from that in the preceding fit or update call, or there is more than one hyperparameter replacement, then behaviour is learner-specific.

See also fit, update_observations, update_features.

New implementations

Implementation is optional. The signature must include verbosity. If implemented, you must include :(LearnAPI.update) in the tuple returned by the LearnAPI.functions trait.

See also LearnAPI.clone

source
LearnAPI.update_observationsFunction
update_observations(
+model = update(model, data; ntrees=150)

Provided that data is identical with the data presented in a preceding fit call and there is at most one hyperparameter replacement, as in the above example, execution is semantically equivalent to the call fit(learner, data), where learner is LearnAPI.learner(model) with the specified replacements. In some cases (typically, when changing an iteration parameter) there may be a performance benefit to using update instead of retraining ab initio.

If data differs from that in the preceding fit or update call, or there is more than one hyperparameter replacement, then behaviour is learner-specific.

See also fit, update_observations, update_features.

New implementations

Implementation is optional. The signature must include verbosity. If implemented, you must include :(LearnAPI.update) in the tuple returned by the LearnAPI.functions trait.

See also LearnAPI.clone

source
LearnAPI.update_observationsFunction
update_observations(
     model,
     new_data;
     parameter_replacements...,
@@ -41,10 +41,10 @@
 model = fit(learner, data)
 
 # train for two more epochs using new data and new learning rate:
-model = update_observations(model, new_data; epochs=2, learning_rate=0.1)

When following the call fit(learner, data), the update call is semantically equivalent to retraining ab initio using a concatenation of data and new_data, provided there are no hyperparameter replacements (which rules out the example above). Behaviour is otherwise learner-specific.

See also fit, update, update_features.

Extended help

New implementations

Implementation is optional. The signature must include verbosity. If implemented, you must include :(LearnAPI.update_observations) in the tuple returned by the LearnAPI.functions trait.

See also LearnAPI.clone.

source
LearnAPI.update_featuresFunction
update_features(
+model = update_observations(model, new_data; epochs=2, learning_rate=0.1)

When following the call fit(learner, data), the update call is semantically equivalent to retraining ab initio using a concatenation of data and new_data, provided there are no hyperparameter replacements (which rules out the example above). Behaviour is otherwise learner-specific.

See also fit, update, update_features.

Extended help

New implementations

Implementation is optional. The signature must include verbosity. If implemented, you must include :(LearnAPI.update_observations) in the tuple returned by the LearnAPI.functions trait.

See also LearnAPI.clone.

source
LearnAPI.update_featuresFunction
update_features(
     model,
     new_data;
     parameter_replacements...,
     verbosity=LearnAPI.default_verbosity(),
-)

Return an updated version of the model object returned by a previous fit or update call given the new features encapsulated in new_data. One may additionally specify hyperparameter replacements in the form p1=value1, p2=value2, ....

When following the call fit(learner, data), the update call is semantically equivalent to retraining ab initio using a concatenation of data and new_data, provided there are no hyperparameter replacements. Behaviour is otherwise learner-specific.

See also fit, update, update_observations.

Extended help

New implementations

Implementation is optional. The signature must include verbosity. If implemented, you must include :(LearnAPI.update_features) in the tuple returned by the LearnAPI.functions trait.

See also LearnAPI.clone.

source
LearnAPI.default_verbosityFunction
LearnAPI.default_verbosity()
-LearnAPI.default_verbosity(level::Int)

Respectively return, or set, the default verbosity level for LearnAPI.jl methods that support it, which includes fit, update, update_observations, and update_features. The effect in a top-level call is generally:

level | behaviour
1 | informational
0 | warnings only
-1 | silent

Methods consuming verbosity generally call other verbosity-supporting methods at one level lower, so increasing verbosity beyond 1 may be useful.

source
+)

Return an updated version of the model object returned by a previous fit or update call given the new features encapsulated in new_data. One may additionally specify hyperparameter replacements in the form p1=value1, p2=value2, ....

When following the call fit(learner, data), the update call is semantically equivalent to retraining ab initio using a concatenation of data and new_data, provided there are no hyperparameter replacements. Behaviour is otherwise learner-specific.

See also fit, update, update_observations.

Extended help

New implementations

Implementation is optional. The signature must include verbosity. If implemented, you must include :(LearnAPI.update_features) in the tuple returned by the LearnAPI.functions trait.

See also LearnAPI.clone.

source
LearnAPI.default_verbosityFunction
LearnAPI.default_verbosity()
+LearnAPI.default_verbosity(level::Int)

Respectively return, or set, the default verbosity level for LearnAPI.jl methods that support it, which includes fit, update, update_observations, and update_features. The effect in a top-level call is generally:

level | behaviour
1 | informational
0 | warnings only
-1 | silent

Methods consuming verbosity generally call other verbosity-supporting methods at one level lower, so increasing verbosity beyond 1 may be useful.

source
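A hedged usage sketch of the behaviour just described, with learner and data hypothetical:

LearnAPI.default_verbosity(0)              # make warnings-only the default
model = fit(learner, data)                 # trains at verbosity 0
model = fit(learner, data; verbosity=-1)   # silent, overriding the default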
diff --git a/dev/index.html b/dev/index.html index 0a77b7b..e152dc4 100644 --- a/dev/index.html +++ b/dev/index.html @@ -32,4 +32,4 @@ # Recover saved model and algorithm configuration ("learner"): recovered_model = deserialize("my_random_forest.jls") @assert LearnAPI.learner(recovered_model) == forest -@assert predict(recovered_model, Point(), Xnew) == ŷ

Distribution and Point are singleton types owned by LearnAPI.jl. They allow dispatch based on the kind of target proxy, a key LearnAPI.jl concept. LearnAPI.jl places more emphasis on the notion of target variables and target proxies than on the usual supervised/unsupervised learning dichotomy. From this point of view, a supervised learner is simply one in which a target variable exists, and happens to appear as an input to training but not to prediction.

Data interfaces

Algorithms are free to consume data in any format. However, a method called obs (read as "observations") gives users and meta-algorithms access to an algorithm-specific representation of input data, which is also guaranteed to implement a standard interface for accessing individual observations, unless the algorithm explicitly opts out. Moreover, the fit and predict methods will also be able to consume these alternative data representations, for performance benefits in some situations.

The fallback data interface is the MLUtils.jl getobs/numobs interface (here tagged as LearnAPI.RandomAccess()) and if the input consumed by the algorithm already implements that interface (tables, arrays, etc.) then overloading obs is completely optional. Plain iteration interfaces, with or without knowledge of the number of observations, can also be specified (to support, e.g., data loaders reading images from disk).

Learning more

+@assert predict(recovered_model, Point(), Xnew) == ŷ

Distribution and Point are singleton types owned by LearnAPI.jl. They allow dispatch based on the kind of target proxy, a key LearnAPI.jl concept. LearnAPI.jl places more emphasis on the notion of target variables and target proxies than on the usual supervised/unsupervised learning dichotomy. From this point of view, a supervised learner is simply one in which a target variable exists, and happens to appear as an input to training but not to prediction.

Data interfaces

Algorithms are free to consume data in any format. However, a method called obs (read as "observations") gives users and meta-algorithms access to an algorithm-specific representation of input data, which is also guaranteed to implement a standard interface for accessing individual observations, unless the algorithm explicitly opts out. Moreover, the fit and predict methods will also be able to consume these alternative data representations, for performance benefits in some situations.

The fallback data interface is the MLUtils.jl getobs/numobs interface (here tagged as LearnAPI.RandomAccess()) and if the input consumed by the algorithm already implements that interface (tables, arrays, etc.) then overloading obs is completely optional. Plain iteration interfaces, with or without knowledge of the number of observations, can also be specified (to support, e.g., data loaders reading images from disk).
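As a hedged sketch of the resulting workflow, with learner, X, y and the index vector train hypothetical:

import MLUtils

observations = obs(learner, (X, y))                        # learner-specific representation
model = fit(learner, MLUtils.getobs(observations, train))  # train on a subsample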

Learning more

diff --git a/dev/kinds_of_target_proxy/index.html b/dev/kinds_of_target_proxy/index.html index 5fc8a66..45d7da6 100644 --- a/dev/kinds_of_target_proxy/index.html +++ b/dev/kinds_of_target_proxy/index.html @@ -1,2 +1,2 @@ -Kinds of Target Proxy · LearnAPI.jl

Kinds of Target Proxy

The available kinds of target proxy (used for predict dispatch) are classified by subtypes of LearnAPI.KindOfProxy. These types are intended for dispatch only and have no fields.

LearnAPI.KindOfProxyType
LearnAPI.KindOfProxy

Abstract type whose concrete subtypes T each represent a different kind of proxy for some target variable, associated with some learner. Instances T() are used to request the form of target predictions in predict calls.

See LearnAPI.jl documentation for an explanation of "targets" and "target proxies".

For example, Distribution is a concrete subtype of IID <: LearnAPI.KindOfProxy and a call like predict(model, Distribution(), Xnew) returns a data object whose observations are probability density/mass functions, assuming learner = LearnAPI.learner(model) supports predictions of that form, which is true if Distribution() in LearnAPI.kinds_of_proxy(learner).

Proxy types are grouped under three abstract subtypes:

  • LearnAPI.IID: The main type, for proxies consisting of uncorrelated individual components, one for each input observation

  • LearnAPI.Joint: For learners that predict a single probabilistic structure encapsulating correlations between target predictions for different input observations

  • LearnAPI.Single: For learners, such as density estimators, that are trained on a target variable only (no features); predict consumes no data and the returned target proxy is a single probabilistic structure.

For lists of all concrete instances, refer to documentation for the relevant subtype.

source

Simple target proxies

LearnAPI.IIDType
LearnAPI.IID <: LearnAPI.KindOfProxy

Abstract subtype of LearnAPI.KindOfProxy. If kind_of_proxy is an instance of LearnAPI.IID then, given data consisting of $n$ observations, the following must hold:

  • ŷ = LearnAPI.predict(model, kind_of_proxy, data) is data also consisting of $n$ observations.

  • The $j$th observation of ŷ, for any $j$, depends only on the $j$th observation of the provided data (no correlation between observations).

See also LearnAPI.KindOfProxy.

Extended help

type | form of an observation
Point | same as target observations; may have the interpretation of a 50% quantile, 50% expectile or mode
Sampleable | object that can be sampled to obtain object of the same form as target observation
Distribution | explicit probability density/mass function whose sample space is all possible target observations
LogDistribution | explicit log-probability density/mass function whose sample space is possible target observations
Probability¹ | numerical probability or probability vector
LogProbability¹ | log-probability or log-probability vector
Parametric¹ | a list of parameters (e.g., mean and variance) describing some distribution
LabelAmbiguous | collections of labels (in case of multi-class target) but without a known correspondence to the original target labels (and of possibly different number) as in, e.g., clustering
LabelAmbiguousSampleable | sampleable version of LabelAmbiguous; see Sampleable above
LabelAmbiguousDistribution | pdf/pmf version of LabelAmbiguous; see Distribution above
LabelAmbiguousFuzzy | same as LabelAmbiguous but with multiple values of indeterminate number
Quantile² | same as target but with quantile interpretation
Expectile² | same as target but with expectile interpretation
ConfidenceInterval² | confidence interval
Fuzzy | finite but possibly varying number of target observations
ProbabilisticFuzzy | as for Fuzzy but labeled with probabilities (not necessarily summing to one)
SurvivalFunction | survival function
SurvivalDistribution | probability distribution for survival time
SurvivalHazardFunction | hazard function for survival time
OutlierScore | numerical score reflecting degree of outlierness (not necessarily normalized)
Continuous | real-valued approximation/interpolation of a discrete-valued target, such as a count (e.g., number of phone calls)

¹Provided for completeness but discouraged to avoid ambiguities in representation.

²The level will be controlled by a hyper-parameter; models providing only quantiles or expectiles at 50% will provide Point instead.

source

Proxies for density estimation algorithms

LearnAPI.SingleType
Single <: KindOfProxy

Abstract subtype of LearnAPI.KindOfProxy. It applies only to learners for which predict has no data argument, i.e., is of the form predict(model, kind_of_proxy). An example is an algorithm learning a probability distribution from samples, where we regard the samples as drawn from the "target" variable. If, in this case, kind_of_proxy is an instance of LearnAPI.Single, then predict(model) returns a single object representing a probability distribution.

type T | form of output of predict(model, ::T)
SingleSampleable | object that can be sampled to obtain a single target observation
SingleDistribution | explicit probability density/mass function for sampling the target
SingleLogDistribution | explicit log-probability density/mass function for sampling the target
source

Joint probability distributions

LearnAPI.JointType
Joint <: KindOfProxy

Abstract subtype of LearnAPI.KindOfProxy. If kind_of_proxy is an instance of LearnAPI.Joint then, given data consisting of $n$ observations, predict(model, kind_of_proxy, data) represents a single probability distribution for the sample space $Y^n$, where $Y$ is the space from which the target variable takes its values.

type T | form of output of predict(model, ::T, data)
JointSampleable | object that can be sampled to obtain a vector whose elements have the form of target observations; the vector length matches the number of observations in data.
JointDistribution | explicit probability density/mass function whose sample space is vectors of target observations; the vector length matches the number of observations in data
JointLogDistribution | explicit log-probability density/mass function whose sample space is vectors of target observations; the vector length matches the number of observations in data
source
+Kinds of Target Proxy · LearnAPI.jl

Kinds of Target Proxy

The available kinds of target proxy (used for predict dispatch) are classified by subtypes of LearnAPI.KindOfProxy. These types are intended for dispatch only and have no fields.

LearnAPI.KindOfProxyType
LearnAPI.KindOfProxy

Abstract type whose concrete subtypes T each represent a different kind of proxy for some target variable, associated with some learner. Instances T() are used to request the form of target predictions in predict calls.

See LearnAPI.jl documentation for an explanation of "targets" and "target proxies".

For example, Distribution is a concrete subtype of IID <: LearnAPI.KindOfProxy and a call like predict(model, Distribution(), Xnew) returns a data object whose observations are probability density/mass functions, assuming learner = LearnAPI.learner(model) supports predictions of that form, which is true if Distribution() in LearnAPI.kinds_of_proxy(learner).
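In code, the guarded dispatch pattern just described might read as follows, with model, learner and Xnew hypothetical:

if LearnAPI.Distribution() in LearnAPI.kinds_of_proxy(learner)
    ŷ = predict(model, LearnAPI.Distribution(), Xnew)  # one density/mass function per observation
end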

Proxy types are grouped under three abstract subtypes:

  • LearnAPI.IID: The main type, for proxies consisting of uncorrelated individual components, one for each input observation

  • LearnAPI.Joint: For learners that predict a single probabilistic structure encapsulating correlations between target predictions for different input observations

  • LearnAPI.Single: For learners, such as density estimators, that are trained on a target variable only (no features); predict consumes no data and the returned target proxy is a single probabilistic structure.

For lists of all concrete instances, refer to documentation for the relevant subtype.

source

Simple target proxies

LearnAPI.IIDType
LearnAPI.IID <: LearnAPI.KindOfProxy

Abstract subtype of LearnAPI.KindOfProxy. If kind_of_proxy is an instance of LearnAPI.IID then, given data consisting of $n$ observations, the following must hold:

  • ŷ = LearnAPI.predict(model, kind_of_proxy, data) is data also consisting of $n$ observations.

  • The $j$th observation of ŷ, for any $j$, depends only on the $j$th observation of the provided data (no correlation between observations).

See also LearnAPI.KindOfProxy.

Extended help

type | form of an observation
Point | same as target observations; may have the interpretation of a 50% quantile, 50% expectile or mode
Sampleable | object that can be sampled to obtain object of the same form as target observation
Distribution | explicit probability density/mass function whose sample space is all possible target observations
LogDistribution | explicit log-probability density/mass function whose sample space is possible target observations
Probability¹ | numerical probability or probability vector
LogProbability¹ | log-probability or log-probability vector
Parametric¹ | a list of parameters (e.g., mean and variance) describing some distribution
LabelAmbiguous | collections of labels (in case of multi-class target) but without a known correspondence to the original target labels (and of possibly different number) as in, e.g., clustering
LabelAmbiguousSampleable | sampleable version of LabelAmbiguous; see Sampleable above
LabelAmbiguousDistribution | pdf/pmf version of LabelAmbiguous; see Distribution above
LabelAmbiguousFuzzy | same as LabelAmbiguous but with multiple values of indeterminate number
Quantile² | same as target but with quantile interpretation
Expectile² | same as target but with expectile interpretation
ConfidenceInterval² | confidence interval
Fuzzy | finite but possibly varying number of target observations
ProbabilisticFuzzy | as for Fuzzy but labeled with probabilities (not necessarily summing to one)
SurvivalFunction | survival function
SurvivalDistribution | probability distribution for survival time
SurvivalHazardFunction | hazard function for survival time
OutlierScore | numerical score reflecting degree of outlierness (not necessarily normalized)
Continuous | real-valued approximation/interpolation of a discrete-valued target, such as a count (e.g., number of phone calls)

¹Provided for completeness but discouraged to avoid ambiguities in representation.

²The level will be controlled by a hyper-parameter; models providing only quantiles or expectiles at 50% will provide Point instead.

source

Proxies for density estimation algorithms

LearnAPI.SingleType
Single <: KindOfProxy

Abstract subtype of LearnAPI.KindOfProxy. It applies only to learners for which predict has no data argument, i.e., is of the form predict(model, kind_of_proxy). An example is an algorithm learning a probability distribution from samples, where we regard the samples as drawn from the "target" variable. If, in this case, kind_of_proxy is an instance of LearnAPI.Single, then predict(model) returns a single object representing a probability distribution.

type T | form of output of predict(model, ::T)
SingleSampleable | object that can be sampled to obtain a single target observation
SingleDistribution | explicit probability density/mass function for sampling the target
SingleLogDistribution | explicit log-probability density/mass function for sampling the target
source

Joint probability distributions

LearnAPI.JointType
Joint <: KindOfProxy

Abstract subtype of LearnAPI.KindOfProxy. If kind_of_proxy is an instance of LearnAPI.Joint then, given data consisting of $n$ observations, predict(model, kind_of_proxy, data) represents a single probability distribution for the sample space $Y^n$, where $Y$ is the space from which the target variable takes its values.

type T | form of output of predict(model, ::T, data)
JointSampleable | object that can be sampled to obtain a vector whose elements have the form of target observations; the vector length matches the number of observations in data.
JointDistribution | explicit probability density/mass function whose sample space is vectors of target observations; the vector length matches the number of observations in data
JointLogDistribution | explicit log-probability density/mass function whose sample space is vectors of target observations; the vector length matches the number of observations in data
source
diff --git a/dev/objects.inv b/dev/objects.inv index 192e7aa1bc3489738a061b88b38c10ad5569c356..f2f796fb35949cdf5070768d0e3851249fb977d8 100644 GIT binary patch delta 299 zcmV+`0o4BS5%>|X*9U(-#x<|jM>o8{{+_WQYK|WUmeUo~+aE}S8j5oDG)Hb1mL|pV z%K?rzIsSgLXnBs6Go7nueLP(R-V~mvt^#JmZCo-@T-Q?DT!}qs3Hg@LJ3+VF4Ni>s z>($ee`3dZj>zN@Os}{Th)mY!`y!P-~^V9y`SHc=v;D<%CC{%x*bxpa%6sY!yW&KxI zO1^??-}ht1@!(o^aa_X6l-BF%aZZ)Q x+~}XJ7R6hvpUWfJ4cn=p!sZuT_uJpXjIjsOCB}oB)x~iMD^pspr^h*A5p$z|JX#cQ qv3@RRzh$gU{=6K7UN-O<6SIDTZ_qA(JOA7oMo4n%_5TONWl#ztWRv0m diff --git a/dev/obs/index.html b/dev/obs/index.html index e51dca3..8429aa7 100644 --- a/dev/obs/index.html +++ b/dev/obs/index.html @@ -49,4 +49,4 @@ predict_observations = obs(model, X) ẑ = predict(model, Point(), MLUtils.getobs(predict_observations, 101:150)) @assert ẑ == ŷ

See also LearnAPI.data_interface.

Extended help

New implementations

Implementation is typically optional.

For each supported form of data in fit(learner, data), it must be true that model = fit(learner, observations) is equivalent to model = fit(learner, data), whenever observations = obs(learner, data). For each supported form of data in calls predict(model, ..., data) and transform(model, data), where implemented, the calls predict(model, ..., observations) and transform(model, observations) must be supported alternatives with the same output, whenever observations = obs(model, data).

If LearnAPI.data_interface(learner) == RandomAccess() (the default), then fit, predict and transform must additionally accept obs output that has been subsampled using MLUtils.getobs, with the obvious interpretation applying to the outcomes of such calls (e.g., if all observations are subsampled, then outcomes should be the same as if using the original data).

Implicit in the preceding requirements is that obs(learner, _) and obs(model, _) are involutive, meaning both the following hold:

obs(learner, obs(learner, data)) == obs(learner, data)
-obs(model, obs(model, data)) == obs(model, data)

If one overloads obs, one typically needs additional overloadings to guarantee involutivity.

The fallback for obs is obs(model_or_learner, data) = data, and the fallback for LearnAPI.data_interface(learner) is LearnAPI.RandomAccess(). For details refer to the LearnAPI.data_interface document string.

In particular, if the data to be consumed by fit, predict or transform consists only of suitable tables and arrays, then obs and LearnAPI.data_interface do not need to be overloaded. However, the user will get no performance benefits by using obs in that case.

If overloading obs(learner, data) to output new model-specific representations of data, it may be necessary to also overload LearnAPI.features(learner, observations), LearnAPI.target(learner, observations) (supervised learners), and/or LearnAPI.weights(learner, observations) (if weights are supported), for each kind of output observations of obs(learner, data). Moreover, the outputs of these methods, applied to observations, must also implement the interface specified by LearnAPI.data_interface(learner).

Sample implementation

Refer to the "Anatomy of an Implementation" section of the LearnAPI.jl manual.

source

Data interfaces

New implementations must overload LearnAPI.data_interface(learner) if the output of obs does not implement LearnAPI.RandomAccess. (Arrays, most tables, and all tuples thereof implement RandomAccess.)

LearnAPI.RandomAccessType
LearnAPI.RandomAccess

A data interface type. We say that data implements the RandomAccess interface if data implements the methods getobs and numobs from MLUtils.jl. The first method allows one to grab observations specified by an arbitrary index set, as in MLUtils.getobs(data, [2, 3, 5]), while the second method returns the total number of available observations, which is assumed to be known and finite.

All arrays implement RandomAccess, with the last index being the observation index (observations-as-columns in matrices).

A Tables.jl compatible table data implements RandomAccess if Tables.istable(data) is true and if data implements DataAPI.nrow. This includes many tables, and in particular, DataFrames. Tables that are also tuples are explicitly excluded.

Any tuple of objects implementing RandomAccess also implements RandomAccess.

If LearnAPI.data_interface(learner) takes the value RandomAccess(), then obs(learner, ...) is guaranteed to return objects implementing the RandomAccess interface, and the same holds for obs(model, ...), whenever LearnAPI.learner(model) == learner.

Implementing RandomAccess for new data types

Typically, implementing RandomAccess for a new data type requires only implementing Base.getindex and Base.length, which are the fallbacks for MLUtils.getobs and MLUtils.numobs, and this avoids making MLUtils.jl a package dependency.

See also LearnAPI.FiniteIterable, LearnAPI.Iterable.

source
LearnAPI.FiniteIterableType
LearnAPI.FiniteIterable

A data interface type. We say that data implements the FiniteIterable interface if it implements Julia's iterate interface, including Base.length, and if Base.IteratorSize(typeof(data)) == Base.HasLength(). For example, this is true if:

  • data implements the LearnAPI.RandomAccess interface (arrays and most tables)

  • data isa MLUtils.DataLoader, which includes output from MLUtils.eachobs.

If LearnAPI.data_interface(learner) takes the value FiniteIterable(), then obs(learner, ...) is guaranteed to return objects implementing the FiniteIterable interface, and the same holds for obs(model, ...), whenever LearnAPI.learner(model) == learner.

See also LearnAPI.RandomAccess, LearnAPI.Iterable.

source
LearnAPI.IterableType
LearnAPI.Iterable

A data interface type. We say that data implements the Iterable interface if it implements Julia's basic iterate interface. (Such objects may not implement MLUtils.numobs or Base.length.)

If LearnAPI.data_interface(learner) takes the value Iterable(), then obs(learner, ...) is guaranteed to return objects implementing Iterable, and the same holds for obs(model, ...), whenever LearnAPI.learner(model) == learner.

See also LearnAPI.FiniteIterable, LearnAPI.RandomAccess.

source
+obs(model, obs(model, data)) == obs(model, data)

If one overloads obs, one typically needs additional overloadings to guarantee involutivity.

The fallback for obs is obs(model_or_learner, data) = data, and the fallback for LearnAPI.data_interface(learner) is LearnAPI.RandomAccess(). For details refer to the LearnAPI.data_interface document string.

In particular, if the data to be consumed by fit, predict or transform consists only of suitable tables and arrays, then obs and LearnAPI.data_interface do not need to be overloaded. However, the user will get no performance benefits by using obs in that case.

If overloading obs(learner, data) to output new model-specific representations of data, it may be necessary to also overload LearnAPI.features(learner, observations), LearnAPI.target(learner, observations) (supervised learners), and/or LearnAPI.weights(learner, observations) (if weights are supported), for each kind of output observations of obs(learner, data). Moreover, the outputs of these methods, applied to observations, must also implement the interface specified by LearnAPI.data_interface(learner).

Sample implementation

Refer to the "Anatomy of an Implementation" section of the LearnAPI.jl manual.

source

Data interfaces

New implementations must overload LearnAPI.data_interface(learner) if the output of obs does not implement LearnAPI.RandomAccess. (Arrays, most tables, and all tuples thereof implement RandomAccess.)

LearnAPI.RandomAccessType
LearnAPI.RandomAccess

A data interface type. We say that data implements the RandomAccess interface if data implements the methods getobs and numobs from MLUtils.jl. The first method allows one to grab observations specified by an arbitrary index set, as in MLUtils.getobs(data, [2, 3, 5]), while the second method returns the total number of available observations, which is assumed to be known and finite.

All arrays implement RandomAccess, with the last index being the observation index (observations-as-columns in matrices).
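For example, for a matrix:

using MLUtils
X = rand(3, 10)               # 10 observations (the columns)
MLUtils.numobs(X)             # 10
MLUtils.getobs(X, [2, 3, 5])  # 3×3 matrix comprising observations 2, 3 and 5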

A Tables.jl compatible table data implements RandomAccess if Tables.istable(data) is true and if data implements DataAPI.nrow. This includes many tables, and in particular, DataFrames. Tables that are also tuples are explicitly excluded.

Any tuple of objects implementing RandomAccess also implements RandomAccess.

If LearnAPI.data_interface(learner) takes the value RandomAccess(), then obs(learner, ...) is guaranteed to return objects implementing the RandomAccess interface, and the same holds for obs(model, ...), whenever LearnAPI.learner(model) == learner.

Implementing RandomAccess for new data types

Typically, implementing RandomAccess for a new data type requires only implementing Base.getindex and Base.length, which are the fallbacks for MLUtils.getobs and MLUtils.numobs, and this avoids making MLUtils.jl a package dependency.
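For example, here is a minimal sketch for a custom data type wrapping a vector of documents (the Corpus type is invented for illustration):

struct Corpus
    documents::Vector{String}
end
Base.getindex(corpus::Corpus, I::AbstractVector) = Corpus(corpus.documents[I])  # fallback for `MLUtils.getobs`
Base.getindex(corpus::Corpus, i::Integer) = corpus.documents[i]
Base.length(corpus::Corpus) = length(corpus.documents)                          # fallback for `MLUtils.numobs`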

See also LearnAPI.FiniteIterable, LearnAPI.Iterable.

source
LearnAPI.FiniteIterableType
LearnAPI.FiniteIterable

A data interface type. We say that data implements the FiniteIterable interface if it implements Julia's iterate interface, including Base.length, and if Base.IteratorSize(typeof(data)) == Base.HasLength(). For example, this is true if:

  • data implements the LearnAPI.RandomAccess interface (arrays and most tables)

  • data isa MLUtils.DataLoader, which includes output from MLUtils.eachobs.
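For example, single observations of a matrix can be iterated as follows:

using MLUtils
X = rand(3, 100)     # 100 observations (the columns)
for x in eachobs(X)  # `eachobs(X)` is a `DataLoader`, so implements `FiniteIterable`
    # do something with the observation `x`, a 3-vector
end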

If LearnAPI.data_interface(learner) takes the value FiniteIterable(), then obs(learner, ...) is guaranteed to return objects implementing the FiniteIterable interface, and the same holds for obs(model, ...), whenever LearnAPI.learner(model) == learner.

See also LearnAPI.RandomAccess, LearnAPI.Iterable.

source
LearnAPI.IterableType
LearnAPI.Iterable

A data interface type. We say that data implements the Iterable interface if it implements Julia's basic iterate interface. (Such objects may not implement MLUtils.numobs or Base.length.)

If LearnAPI.data_interface(learner) takes the value Iterable(), then obs(learner, ...) is guaranteed to return objects implementing Iterable, and the same holds for obs(model, ...), whenever LearnAPI.learner(model) == learner.

See also LearnAPI.FiniteIterable, LearnAPI.RandomAccess.

source
diff --git a/dev/patterns/classification/index.html b/dev/patterns/classification/index.html index 030c749..2e5d842 100644 --- a/dev/patterns/classification/index.html +++ b/dev/patterns/classification/index.html @@ -1,2 +1,2 @@ -Classification · LearnAPI.jl
+Classification · LearnAPI.jl
diff --git a/dev/patterns/clusterering/index.html b/dev/patterns/clusterering/index.html index 0c8f93d..54b5150 100644 --- a/dev/patterns/clusterering/index.html +++ b/dev/patterns/clusterering/index.html @@ -1,2 +1,2 @@ -Clusterering · LearnAPI.jl
+Clustering · LearnAPI.jl
diff --git a/dev/patterns/density_estimation/index.html b/dev/patterns/density_estimation/index.html index 66a1327..deb4646 100644 --- a/dev/patterns/density_estimation/index.html +++ b/dev/patterns/density_estimation/index.html @@ -1,2 +1,2 @@ -Density Estimation · LearnAPI.jl
+Density Estimation · LearnAPI.jl
diff --git a/dev/patterns/dimension_reduction/index.html b/dev/patterns/dimension_reduction/index.html index e3cabcf..8b5141c 100644 --- a/dev/patterns/dimension_reduction/index.html +++ b/dev/patterns/dimension_reduction/index.html @@ -1,2 +1,2 @@ -Dimension Reduction · LearnAPI.jl
+Dimension Reduction · LearnAPI.jl
diff --git a/dev/patterns/ensembling/index.html b/dev/patterns/ensembling/index.html index f10c6bf..29eaffc 100644 --- a/dev/patterns/ensembling/index.html +++ b/dev/patterns/ensembling/index.html @@ -1,2 +1,2 @@ -Ensembling · LearnAPI.jl
+Ensembling · LearnAPI.jl
diff --git a/dev/patterns/feature_engineering/index.html b/dev/patterns/feature_engineering/index.html index 57ade28..5bdcde6 100644 --- a/dev/patterns/feature_engineering/index.html +++ b/dev/patterns/feature_engineering/index.html @@ -1,2 +1,2 @@ -Feature Engineering · LearnAPI.jl
+Feature Engineering · LearnAPI.jl
diff --git a/dev/patterns/gradient_descent/index.html b/dev/patterns/gradient_descent/index.html index 6f337b5..15620fe 100644 --- a/dev/patterns/gradient_descent/index.html +++ b/dev/patterns/gradient_descent/index.html @@ -1,2 +1,2 @@ -Gradient Descent · LearnAPI.jl
+Gradient Descent · LearnAPI.jl
diff --git a/dev/patterns/incremental_algorithms/index.html b/dev/patterns/incremental_algorithms/index.html index 3060ca0..a9ed0fc 100644 --- a/dev/patterns/incremental_algorithms/index.html +++ b/dev/patterns/incremental_algorithms/index.html @@ -1,2 +1,2 @@ -Incremental Algorithms · LearnAPI.jl
+Incremental Algorithms · LearnAPI.jl
diff --git a/dev/patterns/iterative_algorithms/index.html b/dev/patterns/iterative_algorithms/index.html index c29d1fe..a6a975d 100644 --- a/dev/patterns/iterative_algorithms/index.html +++ b/dev/patterns/iterative_algorithms/index.html @@ -1,2 +1,2 @@ -Iterative Algorithms · LearnAPI.jl
+Iterative Algorithms · LearnAPI.jl
diff --git a/dev/patterns/meta_algorithms/index.html b/dev/patterns/meta_algorithms/index.html index 30d4036..5c23097 100644 --- a/dev/patterns/meta_algorithms/index.html +++ b/dev/patterns/meta_algorithms/index.html @@ -1,2 +1,2 @@ -Meta-algorithms · LearnAPI.jl
+Meta-algorithms · LearnAPI.jl
diff --git a/dev/patterns/missing_value_imputation/index.html b/dev/patterns/missing_value_imputation/index.html index 04025ce..bd8c974 100644 --- a/dev/patterns/missing_value_imputation/index.html +++ b/dev/patterns/missing_value_imputation/index.html @@ -1,2 +1,2 @@ -Missing Value Imputation · LearnAPI.jl
+Missing Value Imputation · LearnAPI.jl
diff --git a/dev/patterns/outlier_detection/index.html b/dev/patterns/outlier_detection/index.html index ca21d89..5f00ca9 100644 --- a/dev/patterns/outlier_detection/index.html +++ b/dev/patterns/outlier_detection/index.html @@ -1,2 +1,2 @@ -Outlier Detection · LearnAPI.jl
+Outlier Detection · LearnAPI.jl
diff --git a/dev/patterns/regression/index.html b/dev/patterns/regression/index.html index e5c2744..3045f35 100644 --- a/dev/patterns/regression/index.html +++ b/dev/patterns/regression/index.html @@ -1,2 +1,2 @@ -Regression · LearnAPI.jl
+Regression · LearnAPI.jl
diff --git a/dev/patterns/static_algorithms/index.html b/dev/patterns/static_algorithms/index.html index 037dc75..c4d958e 100644 --- a/dev/patterns/static_algorithms/index.html +++ b/dev/patterns/static_algorithms/index.html @@ -1,2 +1,2 @@ -Static Algorithms · LearnAPI.jl
+Static Algorithms · LearnAPI.jl
diff --git a/dev/patterns/supervised_bayesian_algorithms/index.html b/dev/patterns/supervised_bayesian_algorithms/index.html index 277bc5a..0006edf 100644 --- a/dev/patterns/supervised_bayesian_algorithms/index.html +++ b/dev/patterns/supervised_bayesian_algorithms/index.html @@ -1,2 +1,2 @@ -Supervised Bayesian Models · LearnAPI.jl
+Supervised Bayesian Models · LearnAPI.jl
diff --git a/dev/patterns/supervised_bayesian_models/index.html b/dev/patterns/supervised_bayesian_models/index.html index 65ccf17..b85808e 100644 --- a/dev/patterns/supervised_bayesian_models/index.html +++ b/dev/patterns/supervised_bayesian_models/index.html @@ -1,2 +1,2 @@ -Supervised Bayesian Algorithms · LearnAPI.jl
+Supervised Bayesian Algorithms · LearnAPI.jl
diff --git a/dev/patterns/survival_analysis/index.html b/dev/patterns/survival_analysis/index.html index 6296297..ef07971 100644 --- a/dev/patterns/survival_analysis/index.html +++ b/dev/patterns/survival_analysis/index.html @@ -1,2 +1,2 @@ -Survival Analysis · LearnAPI.jl
+Survival Analysis · LearnAPI.jl
diff --git a/dev/patterns/time_series_classification/index.html b/dev/patterns/time_series_classification/index.html index 9487abb..c6e40e5 100644 --- a/dev/patterns/time_series_classification/index.html +++ b/dev/patterns/time_series_classification/index.html @@ -1,2 +1,2 @@ -Time Series Classification · LearnAPI.jl
+Time Series Classification · LearnAPI.jl
diff --git a/dev/patterns/time_series_forecasting/index.html b/dev/patterns/time_series_forecasting/index.html index 6ba0ad4..aa7394c 100644 --- a/dev/patterns/time_series_forecasting/index.html +++ b/dev/patterns/time_series_forecasting/index.html @@ -1,2 +1,2 @@ -Time Series Forecasting · LearnAPI.jl
+Time Series Forecasting · LearnAPI.jl
diff --git a/dev/patterns/transformers/index.html b/dev/patterns/transformers/index.html index 12d1d45..b2dc6e4 100644 --- a/dev/patterns/transformers/index.html +++ b/dev/patterns/transformers/index.html @@ -1,2 +1,2 @@ -Transformers · LearnAPI.jl

Transformers

Check out the following examples:

  • Truncated SVD (https://github.com/JuliaAI/LearnTestAPI.jl/blob/dev/test/patterns/dimension_reduction.jl), from the LearnTestAPI.jl test suite
+Transformers · LearnAPI.jl

Transformers

Check out the following examples:

  • Truncated SVD (https://github.com/JuliaAI/LearnTestAPI.jl/blob/dev/test/patterns/dimension_reduction.jl), from the LearnTestAPI.jl test suite
diff --git a/dev/predict_transform/index.html b/dev/predict_transform/index.html index 7cfff86..6ac1b47 100644 --- a/dev/predict_transform/index.html +++ b/dev/predict_transform/index.html @@ -5,12 +5,11 @@ Xnew_reduced = transform(model, Xnew)

Apply an approximate right inverse:

inverse_transform(model, Xnew_reduced)

Fit and transform in one line:

transform(learner, data) # `fit` implied

An advanced workflow

fitobs = obs(learner, (X, y)) # learner-specific repr. of data
 model = fit(learner, MLUtils.getobs(fitobs, 1:100))
 predictobs = obs(model, MLUtils.getobs(X, 101:150))
-ŷ = predict(model, Point(), predictobs)

Implementation guide

method | compulsory? | fallback
predict | no | none
transform | no | none
inverse_transform | no | none

Predict or transform?

If the learner has a notion of target variable, then use predict to output each supported kind of target proxy (Point(), Distribution(), etc).

For output not associated with a target variable, implement transform instead, which does not dispatch on LearnAPI.KindOfProxy, but can be optionally paired with an implementation of inverse_transform, for returning (approximate) right or left inverses to transform.

Of course, one learner can implement both a predict and a transform method. For example, a K-means clustering algorithm can predict labels and transform to reduce dimension using distances from the cluster centres.

One-liners combining fit and transform/predict

Learners may optionally overload transform to apply fit first, using the supplied data if required, and then immediately transform the same data. The same applies to predict. In that case the first argument of transform/predict is a learner instead of the output of fit:

predict(learner, kind_of_proxy, data) # `fit` implied
-transform(learner, data) # `fit` implied

For example, if fit(learner, X) is defined, then predict(learner, X) will be shorthand for

model = fit(learner, X)
-predict(model, X)

Reference

LearnAPI.predictFunction
predict(model, kind_of_proxy::LearnAPI.KindOfProxy, data)
+ŷ = predict(model, Point(), predictobs)

Implementation guide

method | compulsory? | fallback
predict | no | none
transform | no | none
inverse_transform | no | none

Predict or transform?

If the learner has a notion of target variable, then use predict to output each supported kind of target proxy (Point(), Distribution(), etc).

For output not associated with a target variable, implement transform instead, which does not dispatch on LearnAPI.KindOfProxy, but can be optionally paired with an implementation of inverse_transform, for returning (approximate) right or left inverses to transform.

Of course, one learner can implement both a predict and a transform method. For example, a K-means clustering algorithm can predict labels and transform to reduce dimension using distances from the cluster centres.
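Schematically, assuming a hypothetical KMeans learner:

learner = KMeans(k=3)                   # hypothetical clusterer
model = fit(learner, X)
labels = predict(model, Point(), Xnew)  # cluster labels
W = transform(model, Xnew)              # reduced representation: distances to the 3 centres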

One-liners combining fit and transform/predict

Learners may additionally overload transform to apply fit first, using the supplied data if required, and then immediately transform the same data. In that case the first argument of transform is a learner instead of the output of fit:

transform(learner, data) # `fit` implied

This will be shorthand for

model = fit(learner, X) # or `fit(learner)` in the static case
+transform(model, X)

The same remarks apply to predict, as in

predict(learner, kind_of_proxy, data) # `fit` implied

LearnAPI.jl does not, however, guarantee the provision of these one-liners.

Reference

LearnAPI.predictFunction
predict(model, kind_of_proxy::LearnAPI.KindOfProxy, data)
 predict(model, data)

The first signature returns target predictions, or proxies for target predictions, for input features data, according to some model returned by fit. Where supported, these are literally target predictions if kind_of_proxy = Point(), and probability density/mass functions if kind_of_proxy = Distribution(). List all options with LearnAPI.kinds_of_proxy(learner), where learner = LearnAPI.learner(model).

model = fit(learner, (X, y))
-predict(model, Point(), Xnew)

The shortcut predict(model, data) calls the first method with learner-specific kind_of_proxy, namely the first element of LearnAPI.kinds_of_proxy(learner), which lists all supported target proxies.

The argument model is anything returned by a call of the form fit(learner, ...).

If LearnAPI.features(LearnAPI.learner(model)) == nothing, then the argument data is omitted in both signatures. An example is density estimators.

See also fit, transform, inverse_transform.

Extended help

Note predict must not mutate any argument, except in the special case LearnAPI.is_static(learner) == true.

New implementations

If there is no notion of a "target" variable in the LearnAPI.jl sense, or you need an operation with an inverse, implement transform instead.

Implementation is optional. Only the first signature (with or without the data argument) is implemented, but each kind_of_proxy::KindOfProxy that gets an implementation must be added to the list returned by LearnAPI.kinds_of_proxy(learner). List all available kinds of proxy by doing LearnAPI.kinds_of_proxy().

If data is not present in the implemented signature (e.g., for density estimators) then LearnAPI.features(learner, data) must return nothing.

If implemented, you must include :(LearnAPI.predict) in the tuple returned by the LearnAPI.functions trait.

If, additionally, LearnAPI.strip(model) is overloaded, then the following identity must hold:

predict(LearnAPI.strip(model), args...) == predict(model, args...)

If LearnAPI.is_static(learner) is true, then predict may mutate its first argument, but not in a way that alters the result of a subsequent call to predict, transform or inverse_transform. See more at fit.

Assumptions about data

By default, it is assumed that data supports the LearnAPI.RandomAccess interface; this includes all matrices, with observations-as-columns, most tables, and tuples thereof. See LearnAPI.RandomAccess for details. If this is not the case then an implementation must either: (i) overload obs to articulate how provided data can be transformed into a form that does support LearnAPI.RandomAccess; or (ii) overload the trait LearnAPI.data_interface to specify a more relaxed data API. Refer to the document strings for details.

source
LearnAPI.transformFunction
transform(model, data)

Return a transformation of some data, using some model, as returned by fit.

Example

Below, X and Xnew are data of the same form.

For a learner that generalizes to new data ("learns"):

model = fit(learner, X; verbosity=0)
+predict(model, Point(), Xnew)

The shortcut predict(model, data) calls the first method with learner-specific kind_of_proxy, namely the first element of LearnAPI.kinds_of_proxy(learner), which lists all supported target proxies.
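In other words, predict(model, data) behaves as the following schematic expansion:

kind = first(LearnAPI.kinds_of_proxy(LearnAPI.learner(model)))
predict(model, kind, data)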

The argument model is anything returned by a call of the form fit(learner, ...).

If LearnAPI.features(LearnAPI.learner(model)) == nothing, then the argument data is omitted in both signatures. An example is density estimators.

See also fit, transform, inverse_transform.

Extended help

Note predict must not mutate any argument, except in the special case LearnAPI.is_static(learner) == true.

New implementations

If there is no notion of a "target" variable in the LearnAPI.jl sense, or you need an operation with an inverse, implement transform instead.

Implementation is optional. Only the first signature (with or without the data argument) is implemented, but each kind_of_proxy::KindOfProxy that gets an implementation must be added to the list returned by LearnAPI.kinds_of_proxy(learner). List all available kinds of proxy by doing LearnAPI.kinds_of_proxy().

If data is not present in the implemented signature (e.g., for density estimators) then LearnAPI.features(learner, data) must return nothing.

If implemented, you must include :(LearnAPI.predict) in the tuple returned by the LearnAPI.functions trait.

If, additionally, LearnAPI.strip(model) is overloaded, then the following identity must hold:

predict(LearnAPI.strip(model), args...) == predict(model, args...)

If LearnAPI.is_static(learner) is true, then predict may mutate its first argument, but not in a way that alters the result of a subsequent call to predict, transform or inverse_transform. See more at fit.

Assumptions about data

By default, it is assumed that data supports the LearnAPI.RandomAccess interface; this includes all matrices, with observations-as-columns, most tables, and tuples thereof. See LearnAPI.RandomAccess for details. If this is not the case then an implementation must either: (i) overload obs to articulate how provided data can be transformed into a form that does support LearnAPI.RandomAccess; or (ii) overload the trait LearnAPI.data_interface to specify a more relaxed data API. Refer to the document strings for details.

source
LearnAPI.transformFunction
transform(model, data)

Return a transformation of some data, using some model, as returned by fit.

Example

Below, X and Xnew are data of the same form.

For a learner that generalizes to new data ("learns"):

model = fit(learner, X; verbosity=0)
 transform(model, Xnew)

or, in one step (where supported):

W = transform(learner, X) # `fit` implied

For a static (non-generalizing) transformer:

model = fit(learner)
-W = transform(model, X)

or, in one step (where supported):

W = transform(learner, X) # `fit` implied

Note transform does not mutate any argument, except in the special case LearnAPI.is_static(learner) == true.

See also fit, predict, inverse_transform.

Extended help

New implementations

Implementation for new LearnAPI.jl learners is optional. If implemented, you must include :(LearnAPI.transform) in the tuple returned by the LearnAPI.functions trait.

An implementation is free to implement transform signatures with additional positional arguments (e.g., data-slurping signatures) but LearnAPI.jl is silent about their interpretation or existence.

If, additionally, LearnAPI.strip(model) is overloaded, then the following identity must hold:

transform(LearnAPI.strip(model), args...) == transform(model, args...)

If LearnAPI.is_static(learner) is true, then transform may mutate its first argument, but not in a way that alters the result of a subsequent call to predict, transform or inverse_transform. See more at fit.

Assumptions about data

By default, it is assumed that data supports the LearnAPI.RandomAccess interface; this includes all matrices, with observations-as-columns, most tables, and tuples thereof. See LearnAPI.RandomAccess for details. If this is not the case then an implementation must either: (i) overload obs to articulate how provided data can be transformed into a form that does support LearnAPI.RandomAccess; or (ii) overload the trait LearnAPI.data_interface to specify a more relaxed data API. Refer to the document strings for details.

source
LearnAPI.inverse_transformFunction
inverse_transform(model, data)

Inverse transform data according to some model returned by fit. Here "inverse" is to be understood broadly, e.g., as an approximate right or left inverse for transform.

Example

In the following, learner is some dimension-reducing algorithm that generalizes to new data (such as PCA); Xtrain is the training input and Xnew the input to be reduced:

model = fit(learner, Xtrain)
+W = transform(model, X)

or, in one step (where supported):

W = transform(learner, X) # `fit` implied

Note transform does not mutate any argument, except in the special case LearnAPI.is_static(learner) == true.

See also fit, predict, inverse_transform.

Extended help

New implementations

Implementation for new LearnAPI.jl learners is optional. If implemented, you must include :(LearnAPI.transform) in the tuple returned by the LearnAPI.functions trait.

An implementation is free to implement transform signatures with additional positional arguments (e.g., data-slurping signatures) but LearnAPI.jl is silent about their interpretation or existence.

If, additionally, LearnAPI.strip(model) is overloaded, then the following identity must hold:

transform(LearnAPI.strip(model), args...) == transform(model, args...)

If LearnAPI.is_static(learner) is true, then transform may mutate its first argument, but not in a way that alters the result of a subsequent call to predict, transform or inverse_transform. See more at fit.

Assumptions about data

By default, it is assumed that data supports the LearnAPI.RandomAccess interface; this includes all matrices, with observations-as-columns, most tables, and tuples thereof. See LearnAPI.RandomAccess for details. If this is not the case then an implementation must either: (i) overload obs to articulate how provided data can be transformed into a form that does support LearnAPI.RandomAccess; or (ii) overload the trait LearnAPI.data_interface to specify a more relaxed data API. Refer to the document strings for details.

source
LearnAPI.inverse_transformFunction
inverse_transform(model, data)

Inverse transform data according to some model returned by fit. Here "inverse" is to be understood broadly, e.g., as an approximate right or left inverse for transform.

Example

In the following, learner is some dimension-reducing algorithm that generalizes to new data (such as PCA); Xtrain is the training input and Xnew the input to be reduced:

model = fit(learner, Xtrain)
 W = transform(model, Xnew)       # reduced version of `Xnew`
-Ŵ = inverse_transform(model, W)  # embedding of `W` in original space

See also fit, transform, predict.

Extended help

New implementations

Implementation is optional. If implemented, you must include :(LearnAPI.inverse_transform) in the tuple returned by the LearnAPI.functions trait.

If, additionally, LearnAPI.strip(model) is overloaded, then the following identity must hold:

inverse_transform(LearnAPI.strip(model), args...) == inverse_transform(model, args...)
source
+Ŵ = inverse_transform(model, W) # embedding of `W` in original space

See also fit, transform, predict.

Extended help

New implementations

Implementation is optional. If implemented, you must include :(LearnAPI.inverse_transform) in the tuple returned by the LearnAPI.functions trait.

If, additionally, LearnAPI.strip(model) is overloaded, then the following identity must hold:

inverse_transform(LearnAPI.strip(model), args...) == inverse_transform(model, args...)
source diff --git a/dev/reference/index.html b/dev/reference/index.html index e6fb752..0ed798a 100644 --- a/dev/reference/index.html +++ b/dev/reference/index.html @@ -1,16 +1,16 @@ Overview · LearnAPI.jl

Reference

Here we give the definitive specification of the LearnAPI.jl interface. For informal guides see Anatomy of an Implementation and Common Implementation Patterns.

Important terms and concepts

The LearnAPI.jl specification is predicated on a few basic, informally defined notions:

Data and observations

ML/statistical algorithms are typically applied in conjunction with resampling of observations, as in cross-validation. In this document data will always refer to objects encapsulating an ordered sequence of individual observations.

A DataFrame instance, from DataFrames.jl, is an example of data, the observations being the rows. Typically, data provided to LearnAPI.jl algorithms will implement the MLUtils.jl getobs/numobs interface for accessing individual observations, but implementations can opt out of this requirement; see obs and LearnAPI.data_interface for details.

Note

In the MLUtils.jl convention, observations in tables are the rows but observations in a matrix are the columns.

Hyperparameters

Besides the data it consumes, a machine learning algorithm's behavior is governed by a number of user-specified hyperparameters, such as the number of trees in a random forest. In LearnAPI.jl, one is allowed to have hyperparameters that are not data-generic. For example, a class weight dictionary, which will only make sense for a target taking values in the set of dictionary keys, can be specified as a hyperparameter.
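For example, a sketch of such a configuration, assuming a hypothetical ClassWeightedClassifier constructor:

learner = ClassWeightedClassifier(
    class_weights = Dict("sick" => 2.0, "healthy" => 1.0),  # only meaningful for these target values
    epochs = 100,
)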

Targets and target proxies

Context

After training, a supervised classifier predicts labels on some input which are then compared with ground truth labels using some accuracy measure, to assess the performance of the classifier. Alternatively, the classifier predicts class probabilities, which are instead paired with ground truth labels using a proper scoring rule, say. In outlier detection, "outlier"/"inlier" predictions, or probability-like scores, are similarly compared with ground truth labels. In clustering, integer labels assigned to observations by the clustering algorithm can be paired with human labels using, say, the Rand index. In survival analysis, predicted survival functions or probability distributions are compared with censored ground truth survival times. And so on ...

Definitions

More generally, whenever we have a variable (e.g., a class label) that can, at least in principle, be paired with a predicted value, or some predicted "proxy" for that variable (such as a class probability), then we call the variable a target variable, and the predicted output a target proxy. In this definition, it is immaterial whether or not the target appears in training (the algorithm is supervised) or whether or not predictions generalize to new input observations (the algorithm "learns").

LearnAPI.jl provides singleton target proxy types for prediction dispatch. These are also used to distinguish performance metrics provided by the package StatisticalMeasures.jl.

Learners

An object implementing the LearnAPI.jl interface is called a learner, although it is more accurately "the configuration of some machine learning or statistical algorithm".¹ A learner encapsulates a particular set of user-specified hyperparameters as the object's properties (which conceivably differ from its fields). It does not store learned parameters.

Informally, we will sometimes use the word "model" to refer to the output of fit(learner, ...) (see below), something which typically does store learned parameters.

For learner to be a valid LearnAPI.jl learner, LearnAPI.constructor(learner) must be defined and return a keyword constructor enabling recovery of learner from its properties:

properties = propertynames(learner)
 named_properties = NamedTuple{properties}(getproperty.(Ref(learner), properties))
-@assert learner == LearnAPI.constructor(learner)(; named_properties...)

which can be tested with @assert LearnAPI.clone(learner) == learner.

Note that if learner is an instance of a mutable struct, this requirement generally requires overloading Base.== for the struct.

Important

No LearnAPI.jl method is permitted to mutate a learner. In particular, one should make deep copies of RNG hyperparameters before using them in a new implementation of fit.

Composite learners (wrappers)

A composite learner is one with at least one property that can take other learners as values; for such learners LearnAPI.is_composite(learner) must be true (fallback is false). Generally, the keyword constructor provided by LearnAPI.constructor must provide default values for all properties that are not learner-valued. Instead, these learner-valued properties can have a nothing default, with the constructor throwing an error if the constructor call does not explicitly specify a new value.

Any object learner for which LearnAPI.functions(learner) is non-empty is understood to have a valid implementation of the LearnAPI.jl interface.

Example

Below is an example of a learner type with a valid constructor:

struct GradientRidgeRegressor{T<:Real}
+@assert learner == LearnAPI.constructor(learner)(; named_properties...)

which can be tested with @assert LearnAPI.clone(learner) == learner.

Note that if learner is an instance of a mutable struct, this requirement generally requires overloading Base.== for the struct.

Important

No LearnAPI.jl method is permitted to mutate a learner. In particular, one should make deep copies of RNG hyperparameters before using them in a new implementation of fit.

Composite learners (wrappers)

A composite learner is one with at least one property that can take other learners as values; for such learners LearnAPI.is_composite(learner) must be true (fallback is false). Generally, the keyword constructor provided by LearnAPI.constructor must provide default values for all properties that are not learner-valued. Instead, these learner-valued properties can have a nothing default, with the constructor throwing an error if the constructor call does not explicitly specify a new value.
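A minimal sketch of a conforming constructor for a hypothetical composite learner (all names invented for illustration):

struct MyEnsemble
    atom       # learner-valued property: no meaningful default
    n::Int     # not learner-valued: gets a default
end
function MyEnsemble(; atom=nothing, n=10)
    isnothing(atom) && throw(ArgumentError("You must specify `atom=...`."))
    return MyEnsemble(atom, n)
end
LearnAPI.constructor(::MyEnsemble) = MyEnsemble
LearnAPI.is_composite(::MyEnsemble) = true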

Any object learner for which LearnAPI.functions(learner) is non-empty is understood to have a valid implementation of the LearnAPI.jl interface.

Example

Below is an example of a learner type with a valid constructor:

struct GradientRidgeRegressor{T<:Real}
     learning_rate::T
     epochs::Int
     l2_regularization::T
 end
 GradientRidgeRegressor(; learning_rate=0.01, epochs=10, l2_regularization=0.01) =
     GradientRidgeRegressor(learning_rate, epochs, l2_regularization)
-LearnAPI.constructor(::GradientRidgeRegressor) = GradientRidgeRegressor

Documentation

Attach public LearnAPI.jl-related documentation for a learner to its constructor, rather than to the struct defining its type. In this way, a learner can implement multiple interfaces, in addition to the LearnAPI interface, with separate document strings for each.

Methods

Compulsory methods

All new learner types must implement fit, LearnAPI.learner, LearnAPI.constructor and LearnAPI.functions.

Most learners will also implement predict and/or transform. For a minimal (but useless) implementation, see the implementation of SmallLearner here.

List of methods

  • fit: for (i) training or updating learners that generalize to new data; or (ii) wrapping learner in an object that is possibly mutated by predict/transform, to record byproducts of those operations, in the special case of non-generalizing learners (called here static algorithms)

  • update: for updating learning outcomes after hyperparameter changes, such as increasing an iteration parameter.

  • update_observations, update_features: update learning outcomes by presenting additional training data.

  • predict: for outputting targets or target proxies (such as probability density functions)

  • transform: similar to predict, but for arbitrary kinds of output, and which can be paired with an inverse_transform method

  • inverse_transform: for inverting the output of transform ("inverting" broadly understood)

  • LearnAPI.target, LearnAPI.weights, LearnAPI.features: for extracting relevant parts of training data, where defined.

  • obs: method for exposing to the user learner-specific representations of data, which are additionally guaranteed to implement the observation access API specified by LearnAPI.data_interface(learner).

  • Accessor functions: these include functions like LearnAPI.feature_importances and LearnAPI.training_losses, for extracting, from training outcomes, information common to many learners. This includes LearnAPI.strip(model) for replacing a learning outcome model with a serializable version that can still predict or transform.

  • Learner traits: methods that promise specific learner behavior or record general information about the learner. Only LearnAPI.constructor and LearnAPI.functions are universally compulsory.

Utilities

LearnAPI.cloneFunction
LearnAPI.clone(learner; replacements...)

Return a shallow copy of learner with the specified hyperparameter replacements.

clone(learner; epochs=100, learning_rate=0.01)

A LearnAPI.jl contract ensures that LearnAPI.clone(learner) == learner.

source
LearnAPI.@traitMacro
@trait(LearnerType, trait1=value1, trait2=value2, ...)

Overload a number of traits for learners of type LearnerType. For example, the code

@trait(
+LearnAPI.constructor(::GradientRidgeRegressor) = GradientRidgeRegressor

Documentation

Attach public LearnAPI.jl-related documentation for a learner to its constructor, rather than to the struct defining its type. In this way, a learner can implement multiple interfaces, in addition to the LearnAPI interface, with separate document strings for each.

Methods

Compulsory methods

All new learner types must implement fit, LearnAPI.learner, LearnAPI.constructor and LearnAPI.functions.

Most learners will also implement predict and/or transform. For a minimal (but useless) implementation, see the implementation of SmallLearner here.

List of methods

  • fit: for (i) training learners that generalize to new data; or (ii) wrapping learner in an object that is possibly mutated by predict/transform, to record byproducts of those operations, in the special case of non-generalizing learners (called here static algorithms)

  • update: for updating learning outcomes after hyperparameter changes, such as increasing an iteration parameter.

  • update_observations, update_features: update learning outcomes by presenting additional training data.

  • predict: for outputting targets or target proxies (such as probability density functions)

  • transform: similar to predict, but for arbitrary kinds of output, and which can be paired with an inverse_transform method

  • inverse_transform: for inverting the output of transform ("inverting" broadly understood)

  • LearnAPI.target, LearnAPI.weights, LearnAPI.features: for extracting relevant parts of training data, where defined.

  • obs: method for exposing to the user learner-specific representations of data, which are additionally guaranteed to implement the observation access API specified by LearnAPI.data_interface(learner).

  • Accessor functions: these include functions like LearnAPI.feature_importances and LearnAPI.training_losses, for extracting, from training outcomes, information common to many learners. This includes LearnAPI.strip(model) for replacing a learning outcome model with a serializable version that can still predict or transform.

  • Learner traits: methods that promise specific learner behavior or record general information about the learner. Only LearnAPI.constructor and LearnAPI.functions are universally compulsory.

Utilities

LearnAPI.cloneFunction
LearnAPI.clone(learner; replacements...)

Return a shallow copy of learner with the specified hyperparameter replacements.

clone(learner; epochs=100, learning_rate=0.01)

A LearnAPI.jl contract ensures that LearnAPI.clone(learner) == learner.

source
LearnAPI.@traitMacro
@trait(LearnerType, trait1=value1, trait2=value2, ...)

Overload a number of traits for learners of type LearnerType. For example, the code

@trait(
     RidgeRegressor,
     tags = ("regression", ),
     doc_url = "https://some.cool.documentation",
 )

is equivalent to

LearnAPI.tags(::RidgeRegressor) = ("regression", )
-LearnAPI.doc_url(::RidgeRegressor) = "https://some.cool.documentation"
source

¹ We acknowledge users may not like this terminology, and may know "learner" by some other name, such as "strategy", "options", "hyperparameter set", "configuration", "algorithm", or "model". Consensus on this point is difficult; see, e.g., this Julia Discourse discussion.

+LearnAPI.doc_url(::RidgeRegressor) = "https://some.cool.documentation"
source

¹ We acknowledge users may not like this terminology, and may know "learner" by some other name, such as "strategy", "options", "hyperparameter set", "configuration", "algorithm", or "model". Consensus on this point is difficult; see, e.g., this Julia Discourse discussion.

diff --git a/dev/search_index.js b/dev/search_index.js index e2f5bb6..1677ca6 100644 --- a/dev/search_index.js +++ b/dev/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"patterns/regression/#Regression","page":"Regression","title":"Regression","text":"","category":"section"},{"location":"patterns/regression/","page":"Regression","title":"Regression","text":"See these examples from the JuliaTestAPI.jl test suite:","category":"page"},{"location":"patterns/regression/","page":"Regression","title":"Regression","text":"ridge regression","category":"page"},{"location":"patterns/missing_value_imputation/#Missing-Value-Imputation","page":"Missing Value Imputation","title":"Missing Value Imputation","text":"","category":"section"},{"location":"patterns/iterative_algorithms/#Iterative-Algorithms","page":"Iterative Algorithms","title":"Iterative Algorithms","text":"","category":"section"},{"location":"patterns/iterative_algorithms/","page":"Iterative Algorithms","title":"Iterative Algorithms","text":"See these examples from the JuliaTestAI.jl test suite:","category":"page"},{"location":"patterns/iterative_algorithms/","page":"Iterative Algorithms","title":"Iterative Algorithms","text":"bagged ensembling\nperceptron classifier","category":"page"},{"location":"patterns/survival_analysis/#Survival-Analysis","page":"Survival Analysis","title":"Survival Analysis","text":"","category":"section"},{"location":"predict_transform/#operations","page":"predict/transform","title":"predict, transform and inverse_transform","text":"","category":"section"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"predict(model, kind_of_proxy, data)\ntransform(model, data)\ninverse_transform(model, data)","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"Versions without the data argument may apply, for example in Density estimation.","category":"page"},{"location":"predict_transform/#predict_workflow","page":"predict/transform","title":"Typical worklows","text":"","category":"section"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"Train some supervised learner:","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"model = fit(learner, (X, y))","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"Predict probability distributions:","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"ŷ = predict(model, Distribution(), Xnew)","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"Generate point predictions:","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"ŷ = predict(model, Point(), Xnew)","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"Train a dimension-reducing learner:","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"model = fit(learner, X)\nXnew_reduced = transform(model, Xnew)","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"Apply an approximate right 
inverse:","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"inverse_transform(model, Xnew_reduced)","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"Fit and transform in one line:","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"transform(learner, data) # `fit` implied","category":"page"},{"location":"predict_transform/#An-advanced-workflow","page":"predict/transform","title":"An advanced workflow","text":"","category":"section"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"fitobs = obs(learner, (X, y)) # learner-specific repr. of data\nmodel = fit(learner, MLUtils.getobs(fitobs, 1:100))\npredictobs = obs(model, MLUtils.getobs(X, 101:150))\nŷ = predict(model, Point(), predictobs)","category":"page"},{"location":"predict_transform/#predict_guide","page":"predict/transform","title":"Implementation guide","text":"","category":"section"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"method compulsory? fallback\npredict no none\ntransform no none\ninverse_transform no none","category":"page"},{"location":"predict_transform/#Predict-or-transform?","page":"predict/transform","title":"Predict or transform?","text":"","category":"section"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"If the learner has a notion of target variable, then use predict to output each supported kind of target proxy (Point(), Distribution(), etc).","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"For output not associated with a target variable, implement transform instead, which does not dispatch on LearnAPI.KindOfProxy, but can be optionally paired with an implementation of inverse_transform, for returning (approximate) right or left inverses to transform.","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"Of course, the one learner can implement both a predict and transform method. For example a K-means clustering algorithm can predict labels and transform to reduce dimension using distances from the cluster centres.","category":"page"},{"location":"predict_transform/#one_liners","page":"predict/transform","title":"One-liners combining fit and transform/predict","text":"","category":"section"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"Learners may optionally overload transform to apply fit first, using the supplied data if required, and then immediately transform the same data. The same applies to predict. 
In that case the first argument of transform/predict is an learner instead of the output of fit:","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"predict(learner, kind_of_proxy, data) # `fit` implied\ntransform(learner, data) # `fit` implied","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"For example, if fit(learner, X) is defined, then predict(learner, X) will be shorthand for","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"model = fit(learner, X)\npredict(model, X)","category":"page"},{"location":"predict_transform/#predict_ref","page":"predict/transform","title":"Reference","text":"","category":"section"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"predict\ntransform\ninverse_transform","category":"page"},{"location":"predict_transform/#LearnAPI.predict","page":"predict/transform","title":"LearnAPI.predict","text":"predict(model, kind_of_proxy::LearnAPI.KindOfProxy, data)\npredict(model, data)\n\nThe first signature returns target predictions, or proxies for target predictions, for input features data, according to some model returned by fit. Where supported, these are literally target predictions if kind_of_proxy = Point(), and probability density/mass functions if kind_of_proxy = Distribution(). List all options with LearnAPI.kinds_of_proxy(learner), where learner = LearnAPI.learner(model).\n\nmodel = fit(learner, (X, y))\npredict(model, Point(), Xnew)\n\nThe shortcut predict(model, data) calls the first method with learner-specific kind_of_proxy, namely the first element of LearnAPI.kinds_of_proxy(learner), which lists all supported target proxies.\n\nThe argument model is anything returned by a call of the form fit(learner, ...).\n\nIf LearnAPI.features(LearnAPI.learner(model)) == nothing, then the argument data is omitted in both signatures. An example is density estimators.\n\nSee also fit, transform, inverse_transform.\n\nExtended help\n\nNote predict must not mutate any argument, except in the special case LearnAPI.is_static(learner) == true.\n\nNew implementations\n\nIf there is no notion of a \"target\" variable in the LearnAPI.jl sense, or you need an operation with an inverse, implement transform instead.\n\nImplementation is optional. Only the first signature (with or without the data argument) is implemented, but each kind_of_proxy::KindOfProxy that gets an implementation must be added to the list returned by LearnAPI.kinds_of_proxy(learner). List all available kinds of proxy by doing LearnAPI.kinds_of_proxy().\n\nIf data is not present in the implemented signature (eg., for density estimators) then LearnAPI.features(learner, data) must return nothing.\n\nIf implemented, you must include :(LearnAPI.predict) in the tuple returned by the LearnAPI.functions trait. \n\nIf, additionally, LearnAPI.strip(model) is overloaded, then the following identity must hold:\n\npredict(LearnAPI.strip(model), args...) == predict(model, args...)\n\nIf LearnAPI.is_static(learner) is true, then predict may mutate it's first argument, but not in a way that alters the result of a subsequent call to predict, transform or inverse_transform. 
See more at fit.\n\nAssumptions about data\n\nBy default, it is assumed that data supports the LearnAPI.RandomAccess interface; this includes all matrices, with observations-as-columns, most tables, and tuples thereof. See LearnAPI.RandomAccess for details. If this is not the case then an implementation must either: (i) overload obs to articulate how provided data can be transformed into a form that does support LearnAPI.RandomAccess; or (ii) overload the trait LearnAPI.data_interface to specify a more relaxed data API. Refer tbo document strings for details.\n\n\n\n\n\n","category":"function"},{"location":"predict_transform/#LearnAPI.transform","page":"predict/transform","title":"LearnAPI.transform","text":"transform(model, data)\n\nReturn a transformation of some data, using some model, as returned by fit.\n\nExample\n\nBelow, X and Xnew are data of the same form.\n\nFor a learner that generalizes to new data (\"learns\"):\n\nmodel = fit(learner, X; verbosity=0)\ntransform(model, Xnew)\n\nor, in one step (where supported):\n\nW = transform(learner, X) # `fit` implied\n\nFor a static (non-generalizing) transformer:\n\nmodel = fit(learner)\nW = transform(model, X)\n\nor, in one step (where supported):\n\nW = transform(learner, X) # `fit` implied\n\nNote transform does not mutate any argument, except in the special case LearnAPI.is_static(learner) == true.\n\nSee also fit, predict, inverse_transform.\n\nExtended help\n\nNew implementations\n\nImplementation for new LearnAPI.jl learners is optional. If implemented, you must include :(LearnAPI.transform) in the tuple returned by the LearnAPI.functions trait. \n\nAn implementation is free to implement transform signatures with additional positional arguments (eg., data-slurping signatures) but LearnAPI.jl is silent about their interpretation or existence.\n\nIf, additionally, LearnAPI.strip(model) is overloaded, then the following identity must hold:\n\ntransform(LearnAPI.strip(model), args...) == transform(model, args...)\n\nIf LearnAPI.is_static(learner) is true, then transform may mutate it's first argument, but not in a way that alters the result of a subsequent call to predict, transform or inverse_transform. See more at fit.\n\nAssumptions about data\n\nBy default, it is assumed that data supports the LearnAPI.RandomAccess interface; this includes all matrices, with observations-as-columns, most tables, and tuples thereof. See LearnAPI.RandomAccess for details. If this is not the case then an implementation must either: (i) overload obs to articulate how provided data can be transformed into a form that does support LearnAPI.RandomAccess; or (ii) overload the trait LearnAPI.data_interface to specify a more relaxed data API. Refer tbo document strings for details.\n\n\n\n\n\n","category":"function"},{"location":"predict_transform/#LearnAPI.inverse_transform","page":"predict/transform","title":"LearnAPI.inverse_transform","text":"inverse_transform(model, data)\n\nInverse transform data according to some model returned by fit. 
Here \"inverse\" is to be understood broadly, e.g, an approximate right or left inverse for transform.\n\nExample\n\nIn the following, learner is some dimension-reducing algorithm that generalizes to new data (such as PCA); Xtrain is the training input and Xnew the input to be reduced:\n\nmodel = fit(learner, Xtrain)\nW = transform(model, Xnew) # reduced version of `Xnew`\nŴ = inverse_transform(model, W) # embedding of `W` in original space\n\nSee also fit, transform, predict.\n\nExtended help\n\nNew implementations\n\nImplementation is optional. If implemented, you must include :(LearnAPI.inverse_transform) in the tuple returned by the LearnAPI.functions trait. \n\nIf, additionally, LearnAPI.strip(model) is overloaded, then the following identity must hold:\n\ninverse_transform(LearnAPI.strip(model), args...) == inverse_transform(model, args...)\n\n\n\n\n\n","category":"function"},{"location":"patterns/ensembling/#Ensembling","page":"Ensembling","title":"Ensembling","text":"","category":"section"},{"location":"patterns/ensembling/","page":"Ensembling","title":"Ensembling","text":"See these examples from the JuliaTestAPI.jl test suite:","category":"page"},{"location":"patterns/ensembling/","page":"Ensembling","title":"Ensembling","text":"bagged ensembling of a regression model","category":"page"},{"location":"patterns/supervised_bayesian_algorithms/#Supervised-Bayesian-Models","page":"Supervised Bayesian Models","title":"Supervised Bayesian Models","text":"","category":"section"},{"location":"patterns/classification/#Classification","page":"Classification","title":"Classification","text":"","category":"section"},{"location":"patterns/classification/","page":"Classification","title":"Classification","text":"See these examples from the JuliaTestAPI.jl test suite:","category":"page"},{"location":"patterns/classification/","page":"Classification","title":"Classification","text":"perceptron classifier","category":"page"},{"location":"patterns/density_estimation/#Density-Estimation","page":"Density Estimation","title":"Density Estimation","text":"","category":"section"},{"location":"patterns/density_estimation/","page":"Density Estimation","title":"Density Estimation","text":"See these examples from the JuliaTestAPI.jl test suite:","category":"page"},{"location":"patterns/density_estimation/","page":"Density Estimation","title":"Density Estimation","text":"normal distribution estimator","category":"page"},{"location":"patterns/gradient_descent/#Gradient-Descent","page":"Gradient Descent","title":"Gradient Descent","text":"","category":"section"},{"location":"patterns/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"See these examples from the JuliaTestAI.jl test suite:","category":"page"},{"location":"patterns/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"perceptron classifier","category":"page"},{"location":"patterns/transformers/#transformers","page":"Transformers","title":"Transformers","text":"","category":"section"},{"location":"patterns/transformers/","page":"Transformers","title":"Transformers","text":"Check out the following examples:","category":"page"},{"location":"patterns/transformers/","page":"Transformers","title":"Transformers","text":"[Truncated SVD]((https://github.com/JuliaAI/LearnTestAPI.jl/blob/dev/test/patterns/dimension_reduction.jl (from the TestLearnAPI.jl test suite)","category":"page"},{"location":"common_implementation_patterns/#patterns","page":"Common Implementation Patterns","title":"Common 
Implementation Patterns","text":"","category":"section"},{"location":"common_implementation_patterns/","page":"Common Implementation Patterns","title":"Common Implementation Patterns","text":"important: Important\nThis section is only an implementation guide. The definitive specification of the LearnAPI.jl interface is given in Reference.","category":"page"},{"location":"common_implementation_patterns/","page":"Common Implementation Patterns","title":"Common Implementation Patterns","text":"This guide is intended to be consulted after reading Anatomy of an Implementation, which introduces the main interface objects and terminology.","category":"page"},{"location":"common_implementation_patterns/","page":"Common Implementation Patterns","title":"Common Implementation Patterns","text":"Although an implementation is defined purely by the methods and traits it implements, many implementations fall into one (or more) of the following informally understood patterns or \"tasks\":","category":"page"},{"location":"common_implementation_patterns/","page":"Common Implementation Patterns","title":"Common Implementation Patterns","text":"Regression: Supervised learners for continuous targets\nClassification: Supervised learners for categorical targets\nClustering: Algorithms that group data into clusters for classification and possibly dimension reduction. May be true learners (generalize to new data) or static.\nGradient Descent: Including neural networks.\nIterative Algorithms\nIncremental Algorithms: Algorithms that can be updated with new observations.\nFeature Engineering: Algorithms for selecting or combining features\nDimension Reduction: Transformers that learn to reduce feature space dimension\nMissing Value Imputation\nTransformers: Other transformers, such as standardizers and categorical encoders.\nStatic Algorithms: Algorithms that do not learn, in the sense that they must be re-executed for each new data set (do not generalize), but which have hyperparameters and/or deliver ancillary information about the computation.\nEnsembling: Algorithms that blend predictions of multiple algorithms\nTime Series Forecasting\nTime Series Classification\nSurvival Analysis\nDensity Estimation: Algorithms that learn a probability distribution\nBayesian Algorithms\nOutlier Detection: Supervised, unsupervised, or semi-supervised learners for anomaly detection.\nText Analysis\nAudio Analysis\nNatural Language Processing\nImage Processing\nMeta-algorithms","category":"page"},{"location":"traits/#traits","page":"Learner Traits","title":"Learner Traits","text":"","category":"section"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"Learner traits are simply functions whose sole argument is a learner.","category":"page"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"Traits promise specific learner behavior, such as: This learner can make point or probabilistic predictions or This learner is supervised (sees a target in training). 
They may also record more mundane information, such as a package license.","category":"page"},{"location":"traits/#trait_summary","page":"Learner Traits","title":"Trait summary","text":"","category":"section"},{"location":"traits/#traits_list","page":"Learner Traits","title":"Overloadable traits","text":"","category":"section"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"In the examples column of the table below, Continuous is a name owned by the package ScientificTypesBase.jl.","category":"page"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"trait return value fallback value example\nLearnAPI.constructor(learner) constructor for generating new or modified versions of learner (no fallback) RidgeRegressor\nLearnAPI.functions(learner) functions you can apply to learner or associated model (traits excluded) () (:fit, :predict, :LearnAPI.strip, :(LearnAPI.learner), :obs)\nLearnAPI.kinds_of_proxy(learner) instances, kind, of KindOfProxy for which an implementation of LearnAPI.predict(learner, kind, ...) is guaranteed () (Distribution(), Interval())\nLearnAPI.tags(learner) lists one or more suggestive learner tags from LearnAPI.tags() () (\"regression\", \"probabilistic\")\nLearnAPI.is_pure_julia(learner) true if implementation is 100% Julia code false true\nLearnAPI.pkg_name(learner) name of package providing core code (may be different from package providing LearnAPI.jl implementation) \"unknown\" \"DecisionTree\"\nLearnAPI.pkg_license(learner) name of license of package providing core code \"unknown\" \"MIT\"\nLearnAPI.doc_url(learner) url providing documentation of the core code \"unknown\" \"https://en.wikipedia.org/wiki/Decision_tree_learning\"\nLearnAPI.load_path(learner) string locating name returned by LearnAPI.constructor(learner), beginning with a package name \"unknown\" \"FastTrees.LearnAPI.DecisionTreeClassifier\"\nLearnAPI.is_composite(learner) true if one or more properties of learner may be a learner false true\nLearnAPI.human_name(learner) human name for the learner; should be a noun type name with spaces \"elastic net regressor\"\nLearnAPI.iteration_parameter(learner) symbolic name of an iteration parameter nothing :epochs\nLearnAPI.data_interface(learner) interface implemented by objects returned by obs LearnAPI.RandomAccess() (supports MLUtils.getobs/numobs) LearnAPI.Iterable() (supports iterate)\nLearnAPI.fit_observation_scitype(learner) upper bound on scitype(observation) for observation in data ensuring fit(learner, data) works Union{} Tuple{AbstractVector{Continuous}, Continuous}\nLearnAPI.target_observation_scitype(learner) upper bound on the scitype of each observation of the target Any Continuous\nLearnAPI.is_static(learner) true if fit consumes no data false true","category":"page"},{"location":"traits/#Derived-Traits","page":"Learner Traits","title":"Derived Traits","text":"","category":"section"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"The following are provided for convenience but should not be overloaded by new learners:","category":"page"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"trait return value example\nLearnAPI.name(learner) learner type name as string \"PCA\"\nLearnAPI.is_learner(learner) true if learner is LearnAPI.jl-compliant true\nLearnAPI.target(learner) true if fit sees a target variable; see LearnAPI.target false\nLearnAPI.weights(learner) true if fit supports per-observation weights; see LearnAPI.weights 
false","category":"page"},{"location":"traits/#Implementation-guide","page":"Learner Traits","title":"Implementation guide","text":"","category":"section"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"A single-argument trait is declared following this pattern:","category":"page"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"LearnAPI.is_pure_julia(learner::MyLearnerType) = true","category":"page"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"A shorthand for single-argument traits is available:","category":"page"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"@trait MyLearnerType is_pure_julia=true","category":"page"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"Multiple traits can be declared like this:","category":"page"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"@trait(\n    MyLearnerType,\n    is_pure_julia = true,\n    pkg_name = \"MyPackage\",\n)","category":"page"},{"location":"traits/#trait_contract","page":"Learner Traits","title":"The global trait contract","text":"","category":"section"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"To ensure that trait metadata can be stored in an external learner registry, LearnAPI.jl requires:","category":"page"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"Finiteness: The value of a trait is the same for all learners with the same value of LearnAPI.constructor(learner). This typically means trait values do not depend on type parameters! If LearnAPI.is_composite(learner) == true, this requirement is dropped.\nLow-level deserializability: It should be possible to evaluate the trait value when LearnAPI is the only imported module.","category":"page"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"Because of 1, combining a lot of functionality into one learner (e.g., the learner can perform both classification and regression) can mean traits are necessarily less informative (as in LearnAPI.target_observation_scitype(learner) = Any).","category":"page"},
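To illustrate the finiteness requirement, here is a minimal hypothetical sketch; the type MyLearner and its trait declarations are assumptions, not part of LearnAPI.jl:

struct MyLearner{T<:Real}
    lambda::T
end

# OK: the trait value is the same for every learner sharing the constructor MyLearner:
LearnAPI.is_pure_julia(::MyLearner) = true

# Not OK (violates finiteness): the value would depend on a type parameter:
# LearnAPI.is_pure_julia(::MyLearner{Float32}) = false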
{"location":"traits/#Reference","page":"Learner Traits","title":"Reference","text":"","category":"section"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"LearnAPI.constructor\nLearnAPI.functions\nLearnAPI.kinds_of_proxy\nLearnAPI.tags\nLearnAPI.is_pure_julia\nLearnAPI.pkg_name\nLearnAPI.pkg_license\nLearnAPI.doc_url\nLearnAPI.load_path\nLearnAPI.is_composite\nLearnAPI.human_name\nLearnAPI.data_interface\nLearnAPI.iteration_parameter\nLearnAPI.fit_observation_scitype\nLearnAPI.target_observation_scitype\nLearnAPI.is_static","category":"page"},{"location":"traits/#LearnAPI.constructor","page":"Learner Traits","title":"LearnAPI.constructor","text":"LearnAPI.constructor(learner)\n\nReturn a keyword constructor that can be used to clone learner:\n\njulia> learner.lambda\n0.1\njulia> C = LearnAPI.constructor(learner)\njulia> learner2 = C(lambda=0.2)\njulia> learner2.lambda\n0.2\n\nNew implementations\n\nAll new implementations must overload this trait.\n\nAttach public LearnAPI.jl-related documentation for learner to the constructor, not the learner struct.\n\nIt must be possible to recover learner from the constructor returned, as follows:\n\nproperties = propertynames(learner)\nnamed_properties = NamedTuple{properties}(getproperty.(Ref(learner), properties))\n@assert learner == LearnAPI.constructor(learner)(; named_properties...)\n\nwhich can be tested with @assert LearnAPI.clone(learner) == learner.\n\nThe keyword constructor provided by LearnAPI.constructor must provide default values for all properties, with the exception of those that can take other LearnAPI.jl learners as values. These can be provided with the default nothing, with the constructor throwing an error if the default value persists.\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.functions","page":"Learner Traits","title":"LearnAPI.functions","text":"LearnAPI.functions(learner)\n\nReturn a tuple of expressions representing functions that can be meaningfully applied with learner, or an associated model (object returned by fit(learner, ...)), as the first argument. Learner traits (methods for which learner is the only argument) are excluded.\n\nThe returned tuple may include expressions like :(DecisionTree.print_tree), which reference functions not owned by LearnAPI.jl.\n\nThe understanding is that learner is a LearnAPI-compliant object whenever the return value is non-empty.\n\nExtended help\n\nNew implementations\n\nAll new implementations must implement this trait. Here's a checklist for elements in the return value:\n\nexpression implementation compulsory? include in returned tuple?\n:(LearnAPI.fit) yes yes\n:(LearnAPI.learner) yes yes\n:(LearnAPI.strip) no yes\n:(LearnAPI.obs) no yes\n:(LearnAPI.features) no yes, unless fit consumes no data\n:(LearnAPI.target) no only if implemented\n:(LearnAPI.weights) no only if implemented\n:(LearnAPI.update) no only if implemented\n:(LearnAPI.update_observations) no only if implemented\n:(LearnAPI.update_features) no only if implemented\n:(LearnAPI.predict) no only if implemented\n:(LearnAPI.transform) no only if implemented\n:(LearnAPI.inverse_transform) no only if implemented\n<accessor functions> no only if implemented\n\nAlso include any implemented accessor functions, both those owned by LearnAPI.jl and any learner-specific ones. 
The LearnAPI.jl accessor functions are: LearnAPI.extras, LearnAPI.learner, LearnAPI.coefficients, LearnAPI.intercept, LearnAPI.tree, LearnAPI.trees, LearnAPI.feature_importances, LearnAPI.training_labels, LearnAPI.training_losses, LearnAPI.training_predictions, LearnAPI.training_scores and LearnAPI.components (LearnAPI.strip is always included).\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.kinds_of_proxy","page":"Learner Traits","title":"LearnAPI.kinds_of_proxy","text":"LearnAPI.kinds_of_proxy(learner)\n\nReturns a tuple of all instances, kind, for which predict(learner, kind, data...) has a guaranteed implementation. Each such kind subtypes LearnAPI.KindOfProxy. Examples are Point() (for predicting actual target values) and Distribution() (for predicting probability mass/density functions).\n\nThe call predict(model, data) always returns predict(model, kind, data), where kind is the first element of the trait's return value.\n\nSee also LearnAPI.predict, LearnAPI.KindOfProxy.\n\nExtended help\n\nNew implementations\n\nMust be overloaded whenever predict is implemented.\n\nElements of the returned tuple must be instances of LearnAPI.KindOfProxy. List all possibilities by running LearnAPI.kinds_of_proxy().\n\nSuppose, for example, we have the following implementation of a supervised learner returning only probabilistic predictions:\n\nLearnAPI.predict(learner::MyNewLearnerType, ::LearnAPI.Distribution, Xnew) = ...\n\nThen we can declare\n\n@trait MyNewLearnerType kinds_of_proxy = (LearnAPI.Distribution(),)\n\nLearnAPI.jl provides the fallback for predict(model, data).\n\nFor more on target variables and target proxies, refer to the LearnAPI documentation.\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.tags","page":"Learner Traits","title":"LearnAPI.tags","text":"LearnAPI.tags(learner)\n\nLists one or more suggestive learner tags. Call LearnAPI.tags() to list all possibilities.\n\nwarning: Warning\nThe value of this trait guarantees no particular behavior. The trait is intended for informal classification purposes only.\n\nNew implementations\n\nThis trait should return a tuple of strings, as in (\"classifier\", \"text analysis\").\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.is_pure_julia","page":"Learner Traits","title":"LearnAPI.is_pure_julia","text":"LearnAPI.is_pure_julia(learner)\n\nReturns true if training learner requires evaluation of pure Julia code only.\n\nNew implementations\n\nThe fallback is false.\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.pkg_name","page":"Learner Traits","title":"LearnAPI.pkg_name","text":"LearnAPI.pkg_name(learner)\n\nReturn the name of the package module which supplies the core training algorithm for learner. This is not necessarily the package providing the LearnAPI interface.\n\nReturns \"unknown\" if the learner implementation has not overloaded the trait.
\n\nNew implementations\n\nMust return a string, as in \"DecisionTree\".\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.pkg_license","page":"Learner Traits","title":"LearnAPI.pkg_license","text":"LearnAPI.pkg_license(learner)\n\nReturn the name of the software license, such as \"MIT\", applying to the package where the core algorithm for learner is implemented.\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.doc_url","page":"Learner Traits","title":"LearnAPI.doc_url","text":"LearnAPI.doc_url(learner)\n\nReturn a URL where the core algorithm for learner is documented.\n\nReturns \"unknown\" if the learner implementation has not overloaded the trait.\n\nNew implementations\n\nMust return a string, such as \"https://en.wikipedia.org/wiki/Decision_tree_learning\".\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.load_path","page":"Learner Traits","title":"LearnAPI.load_path","text":"LearnAPI.load_path(learner)\n\nReturn a string indicating where in code the definition of the learner's constructor can be found, beginning with the name of the package module defining it. By \"constructor\" we mean the return value of LearnAPI.constructor(learner).\n\nImplementation\n\nFor example, a return value of \"FastTrees.LearnAPI.DecisionTreeClassifier\" means the following Julia code will not error:\n\nimport FastTrees\nimport LearnAPI\n@assert FastTrees.LearnAPI.DecisionTreeClassifier == LearnAPI.constructor(learner)\n\nReturns \"unknown\" if the learner implementation has not overloaded the trait.\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.is_composite","page":"Learner Traits","title":"LearnAPI.is_composite","text":"LearnAPI.is_composite(learner)\n\nReturns true if one or more properties (fields) of learner may themselves be learners, and false otherwise.\n\nSee also LearnAPI.components.\n\nNew implementations\n\nThis trait should be overloaded if one or more properties (fields) of learner may take learner values. Fallback return value is false. The keyword constructor for such a learner need not prescribe defaults for learner-valued properties. Implementation of the accessor function LearnAPI.components is recommended.\n\nThe value of the trait must depend only on the type of learner.\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.human_name","page":"Learner Traits","title":"LearnAPI.human_name","text":"LearnAPI.human_name(learner)\n\nReturn a human-readable string representation of typeof(learner). Primarily intended for auto-generation of documentation.\n\nNew implementations\n\nOptional. A fallback takes the type name, inserts spaces and removes capitalization. For example, KNNRegressor becomes \"knn regressor\". Better would be to overload the trait to return \"K-nearest neighbors regressor\". Ideally, this is a \"concrete\" noun like \"ridge regressor\" rather than an \"abstract\" noun like \"ridge regression\".\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.data_interface","page":"Learner Traits","title":"LearnAPI.data_interface","text":"LearnAPI.data_interface(learner)\n\nReturn the data interface supported by learner for accessing individual observations in representations of input data returned by obs(learner, data) or obs(model, data), whenever learner == LearnAPI.learner(model). 
Here data is fit-, predict-, or transform-consumable data.\n\nPossible return values are LearnAPI.RandomAccess, LearnAPI.FiniteIterable, and LearnAPI.Iterable.\n\nSee also obs.\n\nNew implementations\n\nThe fallback returns LearnAPI.RandomAccess, which applies to arrays, most tables, and tuples of these. See the doc-string for details.\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.iteration_parameter","page":"Learner Traits","title":"LearnAPI.iteration_parameter","text":"LearnAPI.iteration_parameter(learner)\n\nThe name of the iteration parameter of learner, or nothing if the algorithm is not iterative.\n\nNew implementations\n\nImplement if algorithm is iterative. Returns a symbol or nothing.\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.fit_observation_scitype","page":"Learner Traits","title":"LearnAPI.fit_observation_scitype","text":"LearnAPI.fit_observation_scitype(learner)\n\nReturn an upper bound S on the scitype of individual observations guaranteed to work when calling fit: if observations = obs(learner, data) and ScientificTypes.scitype(o) <: S for each o in observations, then the call fit(learner, data) is supported.\n\nHere, \"for each o in observations\" is understood in the sense of LearnAPI.data_interface(learner). For example, if LearnAPI.data_interface(learner) == LearnAPI.RandomAccess(), then this means \"for o in MLUtils.eachobs(observations)\".\n\nSee also LearnAPI.target_observation_scitype.\n\nNew implementations\n\nOptional. The fallback return value is Union{}.\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.target_observation_scitype","page":"Learner Traits","title":"LearnAPI.target_observation_scitype","text":"LearnAPI.target_observation_scitype(learner)\n\nReturn an upper bound S on the scitype of each observation of an applicable target variable. Specifically:\n\nIf :(LearnAPI.target) in LearnAPI.functions(learner) (i.e., fit consumes target variables) then \"target\" means anything returned by LearnAPI.target(learner, data), where data is an admissible argument in the call fit(learner, data).\nS will always be an upper bound on the scitype of (point) observations that could be conceivably extracted from the output of predict.\n\nTo illustrate the second case, suppose we have\n\nmodel = fit(learner, data)\nŷ = predict(model, Sampleable(), data_new)\n\nThen each individual sample generated by each \"observation\" of ŷ (a vector of sampleable objects, say) will be bound in scitype by S.\n\nSee also LearnAPI.fit_observation_scitype.\n\nNew implementations\n\nOptional. The fallback return value is Any.\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.is_static","page":"Learner Traits","title":"LearnAPI.is_static","text":"LearnAPI.is_static(learner)\n\nReturns true if fit is called with no data arguments, as in fit(learner). That is, learner does not generalize to new data, and data is only provided at the predict or transform step.\n\nFor example, some clustering algorithms are applied with this workflow, to assign labels to the observations in X:\n\nmodel = fit(learner) # no training data\nlabels = predict(model, X) # may mutate `model`!\n\n# extract some byproducts of the clustering algorithm (e.g., outliers):\nLearnAPI.extras(model)\n\nNew implementations\n\nThis trait, falling back to false, may only be overloaded when fit has no data arguments. See more at fit.\n\n\n\n\n\n","category":"function"},
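For instance, a hypothetical static transformer, one whose fit consumes no data, might make the declaration below; MyStaticTransformer is an assumed type, not part of LearnAPI.jl:

LearnAPI.is_static(::MyStaticTransformer) = true  # `fit(learner)` takes no data arguments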
{"location":"target_weights_features/#input","page":"target/weights/features","title":"target, weights, and features","text":"","category":"section"},{"location":"target_weights_features/","page":"target/weights/features","title":"target/weights/features","text":"Methods for extracting parts of training data:","category":"page"},{"location":"target_weights_features/","page":"target/weights/features","title":"target/weights/features","text":"LearnAPI.target(learner, data) -> \nLearnAPI.weights(learner, data) -> \nLearnAPI.features(learner, data) -> ","category":"page"},{"location":"target_weights_features/","page":"target/weights/features","title":"target/weights/features","text":"Here data is something supported in a call of the form fit(learner, data).","category":"page"},{"location":"target_weights_features/#Typical-workflow","page":"target/weights/features","title":"Typical workflow","text":"","category":"section"},{"location":"target_weights_features/","page":"target/weights/features","title":"target/weights/features","text":"Not typically appearing in a general user's workflow but useful in meta-algorithms, such as cross-validation (see the example in obs and Data Interfaces).","category":"page"},{"location":"target_weights_features/","page":"target/weights/features","title":"target/weights/features","text":"Supposing learner is a supervised classifier predicting a one-dimensional vector target:","category":"page"},{"location":"target_weights_features/","page":"target/weights/features","title":"target/weights/features","text":"model = fit(learner, data)\nX = LearnAPI.features(learner, data)\ny = LearnAPI.target(learner, data)\nŷ = predict(model, Point(), X)\ntraining_loss = sum(ŷ .!= y)","category":"page"},{"location":"target_weights_features/#Implementation-guide","page":"target/weights/features","title":"Implementation guide","text":"","category":"section"},{"location":"target_weights_features/","page":"target/weights/features","title":"target/weights/features","text":"method fallback compulsory?\nLearnAPI.target returns nothing no\nLearnAPI.weights returns nothing no\nLearnAPI.features see docstring if fallback insufficient","category":"page"},{"location":"target_weights_features/#Reference","page":"target/weights/features","title":"Reference","text":"","category":"section"},{"location":"target_weights_features/","page":"target/weights/features","title":"target/weights/features","text":"LearnAPI.target\nLearnAPI.weights\nLearnAPI.features","category":"page"},{"location":"target_weights_features/#LearnAPI.target","page":"target/weights/features","title":"LearnAPI.target","text":"LearnAPI.target(learner, data) -> target\n\nReturn, for each form of data supported in a call of the form fit(learner, data), the target variable part of data. If nothing is returned, the learner does not see a target variable in training (is unsupervised).\n\nThe returned object y has the same number of observations as data. If data is the output of an obs call, then y is additionally guaranteed to implement the data interface specified by LearnAPI.data_interface(learner).\n\nExtended help\n\nWhat is a target variable?\n\nExamples of target variables are house prices in real estate pricing estimates, the \"spam\"/\"not spam\" labels in an email spam filtering task, \"outlier\"/\"inlier\" labels in outlier detection, cluster labels in clustering problems, and censored survival times in survival analysis. For more on targets and target proxies, see the \"Reference\" section of the LearnAPI.jl documentation.
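For example, a hypothetical supervised learner trained with calls of the form fit(learner, (X, y)) might make the minimal declaration below; the type MyClassifier is an assumption, not part of LearnAPI.jl:

# the target is the last component of the training data tuple:
LearnAPI.target(::MyClassifier, data) = last(data)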
\n\nNew implementations\n\nA fallback returns nothing. The method must be overloaded if fit consumes data including a target variable.\n\nIf overloading obs, ensure that the return value, unless nothing, implements the data interface specified by LearnAPI.data_interface(learner), in the special case that data is the output of an obs call.\n\nIf overloaded, you must include :(LearnAPI.target) in the tuple returned by the LearnAPI.functions trait.\n\n\n\n\n\n","category":"function"},{"location":"target_weights_features/#LearnAPI.weights","page":"target/weights/features","title":"LearnAPI.weights","text":"LearnAPI.weights(learner, data) -> weights\n\nReturn, for each form of data supported in a call of the form fit(learner, data), the per-observation weights part of data. Where nothing is returned, no weights are part of data, which is to be interpreted as uniform weighting.\n\nThe returned object w has the same number of observations as data. If data is the output of an obs call, then w is additionally guaranteed to implement the data interface specified by LearnAPI.data_interface(learner).\n\nExtended help\n\nNew implementations\n\nOverloading is optional. A fallback returns nothing.\n\nIf overloading obs, ensure that the return value, unless nothing, implements the data interface specified by LearnAPI.data_interface(learner), in the special case that data is the output of an obs call.\n\nIf overloaded, you must include :(LearnAPI.weights) in the tuple returned by the LearnAPI.functions trait.\n\n\n\n\n\n","category":"function"},{"location":"target_weights_features/#LearnAPI.features","page":"target/weights/features","title":"LearnAPI.features","text":"LearnAPI.features(learner, data)\n\nReturn, for each form of data supported in a call of the form fit(learner, data), the \"features\" part of data (as opposed to the target variable, for example).\n\nThe returned object X may always be passed to predict or transform, where implemented, as in the following sample workflow:\n\nmodel = fit(learner, data)\nX = LearnAPI.features(learner, data)\nŷ = predict(model, kind_of_proxy, X) # e.g., `kind_of_proxy = Point()`\n\nFor supervised models (i.e., where :(LearnAPI.target) in LearnAPI.functions(learner)), ŷ above is generally intended to be an approximate proxy for LearnAPI.target(learner, data), the training target.\n\nThe object X returned by LearnAPI.features has the same number of observations as data. If data is the output of an obs call, then X is additionally guaranteed to implement the data interface specified by LearnAPI.data_interface(learner).\n\nExtended help\n\nNew implementations\n\nFor density estimators, whose fit typically consumes only a target variable, you should overload this method to return nothing.\n\nIt must otherwise be possible to pass the return value X to predict and/or transform, and X must have the same number of observations as data. A fallback returns first(data) if data is a tuple, and otherwise returns data.\n\nFurther overloadings may be necessary to handle the case that data is the output of obs(learner, data), if obs is being overloaded. 
In this case, be sure that X, unless nothing, implements the data interface specified by LearnAPI.data_interface(learner).\n\n\n\n\n\n","category":"function"},{"location":"patterns/feature_engineering/#Feature-Engineering","page":"Feature Engineering","title":"Feature Engineering","text":"","category":"section"},{"location":"patterns/feature_engineering/","page":"Feature Engineering","title":"Feature Engineering","text":"See these examples from the LearnTestAPI.jl test suite:","category":"page"},{"location":"patterns/feature_engineering/","page":"Feature Engineering","title":"Feature Engineering","text":"feature selectors","category":"page"},{"location":"fit_update/#fit_docs","page":"fit/update","title":"fit, update, update_observations, and update_features","text":"","category":"section"},{"location":"fit_update/#Training","page":"fit/update","title":"Training","text":"","category":"section"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"fit(learner, data; verbosity=LearnAPI.default_verbosity()) -> model\nfit(learner; verbosity=LearnAPI.default_verbosity()) -> static_model","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"A \"static\" algorithm is one that does not generalize to new observations (e.g., some clustering algorithms); there is no training data and the algorithm is executed by predict or transform, which receive the data. See the example below.","category":"page"},{"location":"fit_update/#Updating","page":"fit/update","title":"Updating","text":"","category":"section"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"update(model, data; verbosity=..., param1=new_value1, param2=new_value2, ...) -> updated_model\nupdate_observations(model, new_data; verbosity=..., param1=new_value1, ...) -> updated_model\nupdate_features(model, new_data; verbosity=..., param1=new_value1, ...) 
-> updated_model","category":"page"},{"location":"fit_update/#Typical-workflows","page":"fit/update","title":"Typical workflows","text":"","category":"section"},{"location":"fit_update/#Supervised-models","page":"fit/update","title":"Supervised models","text":"","category":"section"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"Supposing Learner is some supervised classifier type, with an iteration parameter n:","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"learner = Learner(n=100)\nmodel = fit(learner, (X, y))\n\n# Predict probability distributions:\nŷ = predict(model, Distribution(), Xnew)\n\n# Inspect some byproducts of training:\nLearnAPI.feature_importances(model)\n\n# Add 50 iterations and predict again:\nmodel = update(model, (X, y); n=150)\npredict(model, Distribution(), X)","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"See also Classification and Regression.","category":"page"},{"location":"fit_update/#Transformers","page":"fit/update","title":"Transformers","text":"","category":"section"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"A dimension-reducing transformer, learner, might be used in this way:","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"model = fit(learner, X)\ntransform(model, X) # or `transform(model, Xnew)`","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"or, if implemented, using a single call:","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"transform(learner, X) # `fit` implied","category":"page"},{"location":"fit_update/#static_algorithms","page":"fit/update","title":"Static algorithms (no \"learning\")","text":"","category":"section"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"Suppose learner is some clustering algorithm that cannot be generalized to new data (e.g., 
DBSCAN):","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"model = fit(learner) # no training data\nlabels = predict(model, X) # may mutate `model`\n\n# Or, in one line:\nlabels = predict(learner, X)\n\n# But the two-line version exposes byproducts of the clustering algorithm (e.g., outliers):\nLearnAPI.extras(model)","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"See also Static Algorithms.","category":"page"},{"location":"fit_update/#Density-estimation","page":"fit/update","title":"Density estimation","text":"","category":"section"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"In density estimation, fit consumes no features, only a target variable; predict, which consumes no data, returns the learned density:","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"model = fit(learner, y) # no features\npredict(model) # shortcut for `predict(model, SingleDistribution())`, or similar","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"A one-liner will typically be implemented as well:","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"predict(learner, y)","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"See also Density Estimation.","category":"page"},{"location":"fit_update/#Implementation-guide","page":"fit/update","title":"Implementation guide","text":"","category":"section"},{"location":"fit_update/#Training-2","page":"fit/update","title":"Training","text":"","category":"section"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"Exactly one of the following must be implemented:","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"method fallback\nfit(learner, data; verbosity=LearnAPI.default_verbosity()) none\nfit(learner; verbosity=LearnAPI.default_verbosity()) none","category":"page"},{"location":"fit_update/#Updating-2","page":"fit/update","title":"Updating","text":"","category":"section"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"method fallback compulsory?\nupdate(model, data; verbosity=..., hyperparameter_updates...) none no\nupdate_observations(model, data; verbosity=..., hyperparameter_updates...) none no\nupdate_features(model, data; verbosity=..., hyperparameter_updates...) none no","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"There are some contracts governing the behaviour of the update methods, as they relate to a previous fit call. Consult the document strings for details.","category":"page"},{"location":"fit_update/#Reference","page":"fit/update","title":"Reference","text":"","category":"section"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"fit\nupdate\nupdate_observations\nupdate_features\nLearnAPI.default_verbosity","category":"page"},{"location":"fit_update/#LearnAPI.fit","page":"fit/update","title":"LearnAPI.fit","text":"fit(learner, data; verbosity=LearnAPI.default_verbosity())\nfit(learner; verbosity=LearnAPI.default_verbosity())\n\nExecute the machine learning or statistical algorithm with configuration learner using the provided training data, returning an object, model, on which other methods, such as predict or transform, can be dispatched. 
LearnAPI.functions(learner) returns a list of methods that can be applied to either learner or model.\n\nFor example, a supervised classifier might have a workflow like this:\n\nmodel = fit(learner, (X, y))\nŷ = predict(model, Xnew)\n\nThe signature fit(learner; verbosity=...) (no data) is provided by learners that do not generalize to new observations (called static algorithms). In that case, transform(model, data) or predict(model, ..., data) carries out the actual algorithm execution, writing any byproducts of that operation to the mutable object model returned by fit.\n\nUse verbosity=0 for warnings only, and -1 for silent training.\n\nSee also LearnAPI.default_verbosity, predict, transform, inverse_transform, LearnAPI.functions, obs.\n\nExtended help\n\nNew implementations\n\nImplementation of exactly one of the signatures is compulsory. If fit(learner; verbosity=...) is implemented, then the trait LearnAPI.is_static must be overloaded to return true.\n\nThe signature must include verbosity with LearnAPI.default_verbosity() as default.\n\nIf data encapsulates a target variable, as defined in the LearnAPI.jl documentation, then LearnAPI.target(learner, data) must be overloaded to return it. If predict or transform are implemented and consume data, then LearnAPI.features(learner, data) must return something that can be passed as data to these methods. A fallback returns first(data) if data is a tuple, and data otherwise.\n\nThe LearnAPI.jl specification has nothing to say regarding fit signatures with more than two arguments. For convenience, for example, an implementation is free to implement a slurping signature, such as fit(learner, X, y, extras...) = fit(learner, (X, y, extras...)), but LearnAPI.jl does not guarantee such signatures are actually implemented.\n\nAssumptions about data\n\nBy default, it is assumed that data supports the LearnAPI.RandomAccess interface; this includes all matrices, with observations-as-columns, most tables, and tuples thereof. See LearnAPI.RandomAccess for details. If this is not the case then an implementation must either: (i) overload obs to articulate how provided data can be transformed into a form that does support LearnAPI.RandomAccess; or (ii) overload the trait LearnAPI.data_interface to specify a more relaxed data API. Refer to document strings for details.\n\n\n\n\n\n","category":"function"},{"location":"fit_update/#LearnAPI.update","page":"fit/update","title":"LearnAPI.update","text":"update(model, data; verbosity=LearnAPI.default_verbosity(), hyperparam_replacements...)\n\nReturn an updated version of the model object returned by a previous fit or update call, but with the specified hyperparameter replacements, in the form p1=value1, p2=value2, ....\n\nlearner = MyForest(ntrees=100)\n\n# train with 100 trees:\nmodel = fit(learner, data)\n\n# add 50 more trees:\nmodel = update(model, data; ntrees=150)\n\nProvided that data is identical to the data presented in a preceding fit call and there is at most one hyperparameter replacement, as in the above example, execution is semantically equivalent to the call fit(learner, data), where learner is LearnAPI.learner(model) with the specified replacements. 
In some cases (typically, when changing an iteration parameter) there may be a performance benefit to using update instead of retraining ab initio.\n\nIf data differs from that in the preceding fit or update call, or there is more than one hyperparameter replacement, then behaviour is learner-specific.\n\nSee also fit, update_observations, update_features.\n\nNew implementations\n\nImplementation is optional. The signature must include verbosity. If implemented, you must include :(LearnAPI.update) in the tuple returned by the LearnAPI.functions trait.\n\nSee also LearnAPI.clone.\n\n\n\n\n\n","category":"function"},{"location":"fit_update/#LearnAPI.update_observations","page":"fit/update","title":"LearnAPI.update_observations","text":"update_observations(\n    model,\n    new_data;\n    parameter_replacements...,\n    verbosity=LearnAPI.default_verbosity(),\n)\n\nReturn an updated version of the model object returned by a previous fit or update call given the new observations present in new_data. One may additionally specify hyperparameter replacements in the form p1=value1, p2=value2, ....\n\nlearner = MyNeuralNetwork(epochs=10, learning_rate=0.01)\n\n# train for ten epochs:\nmodel = fit(learner, data)\n\n# train for two more epochs using new data and new learning rate:\nmodel = update_observations(model, new_data; epochs=2, learning_rate=0.1)\n\nWhen following the call fit(learner, data), the update call is semantically equivalent to retraining ab initio using a concatenation of data and new_data, provided there are no hyperparameter replacements (which rules out the example above). Behaviour is otherwise learner-specific.\n\nSee also fit, update, update_features.\n\nExtended help\n\nNew implementations\n\nImplementation is optional. The signature must include verbosity. If implemented, you must include :(LearnAPI.update_observations) in the tuple returned by the LearnAPI.functions trait.\n\nSee also LearnAPI.clone.\n\n\n\n\n\n","category":"function"},{"location":"fit_update/#LearnAPI.update_features","page":"fit/update","title":"LearnAPI.update_features","text":"update_features(\n    model,\n    new_data;\n    parameter_replacements...,\n    verbosity=LearnAPI.default_verbosity(),\n)\n\nReturn an updated version of the model object returned by a previous fit or update call given the new features encapsulated in new_data. One may additionally specify hyperparameter replacements in the form p1=value1, p2=value2, ....\n\nWhen following the call fit(learner, data), the update call is semantically equivalent to retraining ab initio using a concatenation of data and new_data, provided there are no hyperparameter replacements. Behaviour is otherwise learner-specific.\n\nSee also fit, update, update_observations.\n\nExtended help\n\nNew implementations\n\nImplementation is optional. The signature must include verbosity. If implemented, you must include :(LearnAPI.update_features) in the tuple returned by the LearnAPI.functions trait.\n\nSee also LearnAPI.clone.\n\n\n\n\n\n","category":"function"},
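A hypothetical usage sketch for update_features follows; the learner MyRegressor and the data names are assumptions, not part of LearnAPI.jl:

learner = MyRegressor()     # hypothetical learner supporting `update_features`
model = fit(learner, data)

# newly measured variables for the same observations become available later:
model = update_features(model, new_data)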
{"location":"fit_update/#LearnAPI.default_verbosity","page":"fit/update","title":"LearnAPI.default_verbosity","text":"LearnAPI.default_verbosity()\nLearnAPI.default_verbosity(level::Int)\n\nRespectively return, or set, the default verbosity level for LearnAPI.jl methods that support it, which includes fit, update, update_observations, and update_features. The effect in a top-level call is generally:\n\nlevel behaviour\n1 informational\n0 warnings only\n-1 silent\n\nMethods consuming verbosity generally call other verbosity-supporting methods at one level lower, so increasing verbosity beyond 1 may be useful.\n\n\n\n\n\n","category":"function"},{"location":"kinds_of_target_proxy/#proxy_types","page":"Kinds of Target Proxy","title":"Kinds of Target Proxy","text":"","category":"section"},{"location":"kinds_of_target_proxy/","page":"Kinds of Target Proxy","title":"Kinds of Target Proxy","text":"The available kinds of target proxy (used for predict dispatch) are classified by subtypes of LearnAPI.KindOfProxy. These types are intended for dispatch only and have no fields.","category":"page"},{"location":"kinds_of_target_proxy/","page":"Kinds of Target Proxy","title":"Kinds of Target Proxy","text":"LearnAPI.KindOfProxy","category":"page"},{"location":"kinds_of_target_proxy/#LearnAPI.KindOfProxy","page":"Kinds of Target Proxy","title":"LearnAPI.KindOfProxy","text":"LearnAPI.KindOfProxy\n\nAbstract type whose concrete subtypes T each represent a different kind of proxy for some target variable, associated with some learner. Instances T() are used to request the form of target predictions in predict calls.\n\nSee LearnAPI.jl documentation for an explanation of \"targets\" and \"target proxies\".\n\nFor example, Distribution is a concrete subtype of IID <: LearnAPI.KindOfProxy and a call like predict(model, Distribution(), Xnew) returns a data object whose observations are probability density/mass functions, assuming learner = LearnAPI.learner(model) supports predictions of that form, which is true if Distribution() in LearnAPI.kinds_of_proxy(learner).\n\nProxy types are grouped under three abstract subtypes:\n\nLearnAPI.IID: The main type, for proxies consisting of uncorrelated individual components, one for each input observation\nLearnAPI.Joint: For learners that predict a single probabilistic structure encapsulating correlations between target predictions for different input observations\nLearnAPI.Single: For learners, such as density estimators, that are trained on a target variable only (no features); predict consumes no data and the returned target proxy is a single probabilistic structure.\n\nFor lists of all concrete instances, refer to documentation for the relevant subtype.\n\n\n\n\n\n","category":"type"},{"location":"kinds_of_target_proxy/#Simple-target-proxies","page":"Kinds of Target Proxy","title":"Simple target proxies","text":"","category":"section"},{"location":"kinds_of_target_proxy/","page":"Kinds of Target Proxy","title":"Kinds of Target Proxy","text":"LearnAPI.IID","category":"page"},{"location":"kinds_of_target_proxy/#LearnAPI.IID","page":"Kinds of Target Proxy","title":"LearnAPI.IID","text":"LearnAPI.IID <: LearnAPI.KindOfProxy\n\nAbstract subtype of LearnAPI.KindOfProxy. 
If kind_of_proxy is an instance of LearnAPI.IID then, given data consisting of n observations, the following must hold:\n\nŷ = LearnAPI.predict(model, kind_of_proxy, data) is data also consisting of n observations.\nThe jth observation of ŷ, for any j, depends only on the jth observation of the provided data (no correlation between observations).\n\nSee also LearnAPI.KindOfProxy.\n\nExtended help\n\ntype form of an observation\nPoint same as target observations; may have the interpretation of a 50% quantile, 50% expectile or mode\nSampleable object that can be sampled to obtain object of the same form as target observation\nDistribution explicit probability density/mass function whose sample space is all possible target observations\nLogDistribution explicit log-probability density/mass function whose sample space is possible target observations\nProbability¹ numerical probability or probability vector\nLogProbability¹ log-probability or log-probability vector\nParametric¹ a list of parameters (e.g., mean and variance) describing some distribution\nLabelAmbiguous collections of labels (in case of multi-class target) but without a known correspondence to the original target labels (and of possibly different number) as in, e.g., clustering\nLabelAmbiguousSampleable sampleable version of LabelAmbiguous; see Sampleable above\nLabelAmbiguousDistribution pdf/pmf version of LabelAmbiguous; see Distribution above\nLabelAmbiguousFuzzy same as LabelAmbiguous but with multiple values of indeterminate number\nQuantile² same as target but with quantile interpretation\nExpectile² same as target but with expectile interpretation\nConfidenceInterval² confidence interval\nFuzzy finite but possibly varying number of target observations\nProbabilisticFuzzy as for Fuzzy but labeled with probabilities (not necessarily summing to one)\nSurvivalFunction survival function\nSurvivalDistribution probability distribution for survival time\nSurvivalHazardFunction hazard function for survival time\nOutlierScore numerical score reflecting degree of outlierness (not necessarily normalized)\nContinuous real-valued approximation/interpolation of a discrete-valued target, such as a count (e.g., number of phone calls)\n\n¹Provided for completeness but discouraged to avoid ambiguities in representation.\n\n²The level will be controlled by a hyper-parameter; models providing only quantiles or expectiles at 50% will provide Point instead.\n\n\n\n\n\n","category":"type"},{"location":"kinds_of_target_proxy/#Proxies-for-density-estimation-algorithms","page":"Kinds of Target Proxy","title":"Proxies for density estimation algorithms","text":"","category":"section"},{"location":"kinds_of_target_proxy/","page":"Kinds of Target Proxy","title":"Kinds of Target Proxy","text":"LearnAPI.Single","category":"page"},{"location":"kinds_of_target_proxy/#LearnAPI.Single","page":"Kinds of Target Proxy","title":"LearnAPI.Single","text":"Single <: KindOfProxy\n\nAbstract subtype of LearnAPI.KindOfProxy. It applies only to learners for which predict has no data argument, i.e., is of the form predict(model, kind_of_proxy). An example is an algorithm learning a probability distribution from samples, and we regard the samples as drawn from the \"target\" variable. 
If, in this case, kind_of_proxy is an instance of LearnAPI.Single, then predict(model) returns a single object representing a probability distribution.\n\ntype T form of output of predict(model, ::T)\nSingleSampleable object that can be sampled to obtain a single target observation\nSingleDistribution explicit probability density/mass function for sampling the target\nSingleLogDistribution explicit log-probability density/mass function for sampling the target\n\n\n\n\n\n","category":"type"},{"location":"kinds_of_target_proxy/#Joint-probability-distributions","page":"Kinds of Target Proxy","title":"Joint probability distributions","text":"","category":"section"},{"location":"kinds_of_target_proxy/","page":"Kinds of Target Proxy","title":"Kinds of Target Proxy","text":"LearnAPI.Joint","category":"page"},{"location":"kinds_of_target_proxy/#LearnAPI.Joint","page":"Kinds of Target Proxy","title":"LearnAPI.Joint","text":"Joint <: KindOfProxy\n\nAbstract subtype of LearnAPI.KindOfProxy. If kind_of_proxy is an instance of LearnAPI.Joint then, given data consisting of n observations, predict(model, kind_of_proxy, data) represents a single probability distribution for the sample space Y^n, where Y is the space from which the target variable takes its values.\n\ntype T form of output of predict(model, ::T, data)\nJointSampleable object that can be sampled to obtain a vector whose elements have the form of target observations; the vector length matches the number of observations in data.\nJointDistribution explicit probability density/mass function whose sample space is vectors of target observations; the vector length matches the number of observations in data\nJointLogDistribution explicit log-probability density/mass function whose sample space is vectors of target observations; the vector length matches the number of observations in data\n\n\n\n\n\n","category":"type"},{"location":"patterns/supervised_bayesian_models/#Supervised-Bayesian-Algorithms","page":"Supervised Bayesian Algorithms","title":"Supervised Bayesian Algorithms","text":"","category":"section"},{"location":"testing_an_implementation/#Testing-an-Implementation","page":"Testing an Implementation","title":"Testing an Implementation","text":"","category":"section"},{"location":"testing_an_implementation/","page":"Testing an Implementation","title":"Testing an Implementation","text":"🚧","category":"page"},{"location":"testing_an_implementation/","page":"Testing an Implementation","title":"Testing an Implementation","text":"warning: Warning\nUnder construction","category":"page"},{"location":"patterns/time_series_classification/#Time-Series-Classification","page":"Time Series Classification","title":"Time Series Classification","text":"","category":"section"},{"location":"anatomy_of_an_implementation/#Anatomy-of-an-Implementation","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"This section explains a detailed implementation of LearnAPI.jl for naive ridge regression with no intercept. The kind of workflow we want to enable has been previewed in Sample workflow. 
Readers can also refer to the demonstration of the implementation given later.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"The core LearnAPI.jl pattern looks like this:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"model = fit(learner, data)\npredict(model, newdata)","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Here learner specifies hyperparameters, while model stores learned parameters and any byproducts of algorithm execution.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"A transformer ordinarily implements transform instead of predict. For more on predict versus transform, see Predict or transform?","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"note: Note\nNew implementations of fit, predict, etc., always have a single data argument as above. For convenience, a signature such as fit(learner, X, y), calling fit(learner, (X, y)), can be added, but the LearnAPI.jl specification is silent on the meaning or existence of signatures with extra arguments.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"note: Note\nIf the data object consumed by fit, predict, or transform is not a suitable table¹, array³, tuple of tables and arrays, or some other object implementing the MLUtils.jl getobs/numobs interface, then an implementation must: (i) overload obs to articulate how provided data can be transformed into a form that does support this interface, as illustrated below under Providing a separate data front end, and which may additionally enable certain performance benefits; or (ii) overload the trait LearnAPI.data_interface to specify a more relaxed data API.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"The first line below imports the lightweight package LearnAPI.jl whose methods we will be extending. 
The second imports libraries needed for the core algorithm.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"using LearnAPI\nusing LinearAlgebra, Tables\nnothing # hide","category":"page"},{"location":"anatomy_of_an_implementation/#Defining-learners","page":"Anatomy of an Implementation","title":"Defining learners","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Here's a new type whose instances specify ridge regression hyperparameters:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"struct Ridge{T<:Real}\n lambda::T\nend\nnothing # hide","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Instances of Ridge are learners, in LearnAPI.jl parlance.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Associated with each new type of LearnAPI.jl learner will be a keyword argument constructor, providing default values for all properties (typically, struct fields) that are not other learners, and we must implement LearnAPI.constructor(learner), for recovering the constructor from an instance:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"\"\"\"\n Ridge(; lambda=0.1)\n\nInstantiate a ridge regression learner, with regularization of `lambda`.\n\"\"\"\nRidge(; lambda=0.1) = Ridge(lambda)\nLearnAPI.constructor(::Ridge) = Ridge\nnothing # hide","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"For example, in this case, if learner = Ridge(0.2), then LearnAPI.constructor(learner)(lambda=0.2) == learner is true. 
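As a quick sanity check of this contract (a sketch only, not part of the implementation):

learner = Ridge(0.2)
@assert LearnAPI.constructor(learner)(lambda=0.2) == learner
@assert LearnAPI.clone(learner) == learner  # equivalent convenience check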
Note that we attach the docstring to the constructor, not the struct.","category":"page"},{"location":"anatomy_of_an_implementation/#Implementing-fit","page":"Anatomy of an Implementation","title":"Implementing fit","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"A ridge regressor requires two types of data for training: input features X, which here we suppose are tabular¹, and a target y, which we suppose is a vector.⁴","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"It is convenient to define a new type for the fit output, which will include coefficients labelled by feature name for inspection after training:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"struct RidgeFitted{T,F}\n    learner::Ridge\n    coefficients::Vector{T}\n    named_coefficients::F\nend\nnothing # hide","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Note that we also include learner in the struct, for it must be possible to recover learner from the output of fit; see Accessor functions below.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"The core implementation of fit looks like this:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"function LearnAPI.fit(learner::Ridge, data; verbosity=LearnAPI.default_verbosity())\n\n    X, y = data\n\n    # data preprocessing:\n    table = Tables.columntable(X)\n    names = Tables.columnnames(table) |> collect\n    A = Tables.matrix(table, transpose=true)\n\n    lambda = learner.lambda\n\n    # apply core algorithm:\n    coefficients = (A*A' + lambda*I)\\(A*y) # vector\n\n    # determine named coefficients:\n    named_coefficients = [names[j] => coefficients[j] for j in eachindex(names)]\n\n    # make some noise, if allowed:\n    verbosity > 0 && @info \"Coefficients: $named_coefficients\"\n\n    return RidgeFitted(learner, coefficients, named_coefficients)\nend","category":"page"},{"location":"anatomy_of_an_implementation/#Implementing-predict","page":"Anatomy of an Implementation","title":"Implementing predict","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Users will be able to call predict like this:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"predict(model, Point(), Xnew)","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"where Xnew is a table (of the same form as X above). The argument Point() signals that literal predictions of the target variable are sought, as opposed to some proxy for the target, such as probability density functions. Point is an example of a LearnAPI.KindOfProxy type. 
Targets and target proxies are discussed here.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"We provide this implementation for our ridge regressor:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.predict(model::RidgeFitted, ::Point, Xnew) =\n Tables.matrix(Xnew)*model.coefficients","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"If the kind of proxy is omitted, as in predict(model, Xnew), then a fallback grabs the first element of the tuple returned by LearnAPI.kinds_of_proxy(learner), which we overload appropriately below.","category":"page"},{"location":"anatomy_of_an_implementation/#Extracting-the-target-from-training-data","page":"Anatomy of an Implementation","title":"Extracting the target from training data","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"The fit method consumes data which includes a target variable, i.e., the learner is a supervised learner. We must therefore declare how the target variable can be extracted from training data, by implementing LearnAPI.target:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.target(learner, data) = last(data)","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"There is a similar method, LearnAPI.features, for declaring how training features can be extracted (something that can be passed to predict), but this method has a fallback which suffices here: it returns first(data) if data is a tuple, and data otherwise.","category":"page"},{"location":"anatomy_of_an_implementation/#Accessor-functions","page":"Anatomy of an Implementation","title":"Accessor functions","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"An accessor function has the output of fit as its sole argument. 
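For example, assuming the implementations given below, a user may inspect a trained model like this:\n\nmodel = fit(learner, data)\nLearnAPI.learner(model) # recovers the Ridge instance\nLearnAPI.coefficients(model) # returns the named coefficients\n\n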
Every new implementation must implement the accessor function LearnAPI.learner for recovering a learner from a fitted object:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.learner(model::RidgeFitted) = model.learner","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Other accessor functions extract learned parameters or some standard byproducts of training, such as feature importances or training losses.² Here we implement an accessor function to extract the linear coefficients:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.coefficients(model::RidgeFitted) = model.named_coefficients\nnothing #hide","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"The LearnAPI.strip(model) accessor function is for returning a version of model suitable for serialization (typically smaller and data anonymized). It has a fallback that just returns model but for the sake of illustration, we overload it to dump the named version of the coefficients:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.strip(model::RidgeFitted) =\n RidgeFitted(model.learner, model.coefficients, nothing)","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Crucially, we can still use LearnAPI.strip(model) in place of model to make new predictions.","category":"page"},{"location":"anatomy_of_an_implementation/#Learner-traits","page":"Anatomy of an Implementation","title":"Learner traits","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Learner traits record extra generic information about a learner, or make specific promises of behavior. They are methods that have a learner as the sole argument, and so we regard LearnAPI.constructor defined above as a trait.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Because we have implemented predict, we are required to overload the LearnAPI.kinds_of_proxy trait. 
Because we can only make point predictions of the target, we make this definition:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.kinds_of_proxy(::Ridge) = (Point(),)","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"A macro provides a shortcut, convenient when multiple traits are to be defined:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"@trait(\n Ridge,\n constructor = Ridge,\n kinds_of_proxy=(Point(),),\n tags = (:regression,),\n functions = (\n :(LearnAPI.fit),\n :(LearnAPI.learner),\n :(LearnAPI.strip),\n :(LearnAPI.obs),\n :(LearnAPI.features),\n :(LearnAPI.target),\n :(LearnAPI.predict),\n :(LearnAPI.coefficients),\n )\n)\nnothing # hide","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"The last trait, functions, returns a list of all LearnAPI.jl methods that can be meaningfully applied to the learner or associated model. See LearnAPI.functions for a checklist. LearnAPI.functions and LearnAPI.constructor are the only universally compulsory traits. However, it is worthwhile studying the list of all traits to see which might apply to a new implementation, to enable maximum buy-in to functionality provided by third party packages, and to assist third party algorithms that match machine learning algorithms to user-defined tasks.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Note that we know Ridge instances are supervised learners because :(LearnAPI.target) in LearnAPI.functions(learner) holds for every instance learner. With some exceptions, the value of a trait should depend only on the type of the argument.","category":"page"},{"location":"anatomy_of_an_implementation/#Signatures-added-for-convenience","page":"Anatomy of an Implementation","title":"Signatures added for convenience","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"We add one fit signature for user convenience only. The LearnAPI.jl specification has nothing to say about fit signatures with more than two positional arguments.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.fit(learner::Ridge, X, y; kwargs...) 
= fit(learner, (X, y); kwargs...)","category":"page"},{"location":"anatomy_of_an_implementation/#workflow","page":"Anatomy of an Implementation","title":"Demonstration","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"We now illustrate how to interact directly with Ridge instances using the methods just implemented.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"# synthesize some data:\nn = 10 # number of observations\ntrain = 1:6\ntest = 7:10\na, b, c = rand(n), rand(n), rand(n)\nX = (; a, b, c)\ny = 2a - b + 3c + 0.05*rand(n)\nnothing # hide","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"learner = Ridge(lambda=0.5)\nforeach(println, LearnAPI.functions(learner))","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Training and predicting:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Xtrain = Tables.subset(X, train)\nytrain = y[train]\nmodel = fit(learner, (Xtrain, ytrain)) # `fit(learner, Xtrain, ytrain)` will also work\nŷ = predict(model, Tables.subset(X, test))","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Extracting coefficients:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.coefficients(model)","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Serialization/deserialization:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"using Serialization\nsmall_model = LearnAPI.strip(model)\nfilename = tempname()\nserialize(filename, small_model)","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"recovered_model = deserialize(filename)\n@assert LearnAPI.learner(recovered_model) == learner\n@assert predict(recovered_model, X) == predict(model, X)","category":"page"},{"location":"anatomy_of_an_implementation/#Providing-a-separate-data-front-end","page":"Anatomy of an Implementation","title":"Providing a separate data front end","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"using LearnAPI\nusing LinearAlgebra, Tables\n\nstruct Ridge{T<:Real}\n lambda::T\nend\n\nRidge(; lambda=0.1) = Ridge(lambda)\n\nstruct RidgeFitted{T,F}\n learner::Ridge\n coefficients::Vector{T}\n named_coefficients::F\nend\n\nLearnAPI.learner(model::RidgeFitted) = model.learner\nLearnAPI.coefficients(model::RidgeFitted) = model.named_coefficients\nLearnAPI.strip(model::RidgeFitted) =\n RidgeFitted(model.learner, model.coefficients, nothing)\n\n@trait(\n Ridge,\n constructor = Ridge,\n kinds_of_proxy=(Point(),),\n tags = (:regression,),\n functions = (\n :(LearnAPI.fit),\n :(LearnAPI.learner),\n 
:(LearnAPI.strip),\n :(LearnAPI.obs),\n :(LearnAPI.features),\n :(LearnAPI.target),\n :(LearnAPI.predict),\n :(LearnAPI.coefficients),\n )\n)\n\nn = 10 # number of observations\ntrain = 1:6\ntest = 7:10\na, b, c = rand(n), rand(n), rand(n)\nX = (; a, b, c)\ny = 2a - b + 3c + 0.05*rand(n)","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"An implementation may optionally implement obs, to expose to the user (or some meta-algorithm like cross-validation) the representation of input data internal to fit or predict, such as the matrix version A of X in the ridge example. That is, we may factor out of fit (and also predict) the data pre-processing step, obs, to expose its outcomes. These outcomes become alternative user inputs to fit. To see the use of obs in action, see below.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Here we specifically wrap all the pre-processed data into a single object, for which we introduce a new type:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"struct RidgeFitObs{T,M<:AbstractMatrix{T}}\n A::M # `p` x `n` matrix\n names::Vector{Symbol} # features\n y::Vector{T} # target\nend","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Now we overload obs to carry out the data pre-processing previously in fit, like this:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"function LearnAPI.obs(::Ridge, data)\n X, y = data\n table = Tables.columntable(X)\n names = Tables.columnnames(table) |> collect\n return RidgeFitObs(Tables.matrix(table)', names, y)\nend","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"We informally refer to the output of obs as \"observations\" (see The obs contract below). The previous core fit signature is now replaced with two methods - one to handle \"regular\" input, and one to handle the pre-processed data (observations), which appears first below:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"function LearnAPI.fit(learner::Ridge, observations::RidgeFitObs; verbosity=LearnAPI.default_verbosity())\n\n lambda = learner.lambda\n\n A = observations.A\n names = observations.names\n y = observations.y\n\n # apply core learner:\n coefficients = (A*A' + lambda*I)\\(A*y) # vector\n\n # determine named coefficients:\n named_coefficients = [names[j] => coefficients[j] for j in eachindex(names)]\n\n # make some noise, if allowed:\n verbosity > 0 && @info \"Coefficients: $named_coefficients\"\n\n return RidgeFitted(learner, coefficients, named_coefficients)\n\nend\n\nLearnAPI.fit(learner::Ridge, data; kwargs...) 
=\n fit(learner, obs(learner, data); kwargs...)","category":"page"},{"location":"anatomy_of_an_implementation/#The-obs-contract","page":"Anatomy of an Implementation","title":"The obs contract","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Providing fit signatures matching the output of obs is the first part of the obs contract. Since obs(learner, data) should evidently support all data that fit(learner, data) supports, we must be able to apply obs(learner, _) to its own output (observations below). This leads to the additional \"no-op\" declaration","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.obs(::Ridge, observations::RidgeFitObs) = observations","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"In other words, we ensure that obs(learner, _) is involutive.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"The second part of the obs contract is this: The output of obs must implement the interface specified by the trait LearnAPI.data_interface(learner). Assuming this is LearnAPI.RandomAccess() (the default), it usually suffices to overload Base.getindex and Base.length:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Base.getindex(data::RidgeFitObs, I) =\n RidgeFitObs(data.A[:,I], data.names, data.y[I])\nBase.length(data::RidgeFitObs) = length(data.y)","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"We do something similar for predict, but there's no need for a new type in this case:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.obs(::RidgeFitted, Xnew) = Tables.matrix(Xnew)'\nLearnAPI.obs(::RidgeFitted, observations::AbstractArray) = observations # involutivity\n\nLearnAPI.predict(model::RidgeFitted, ::Point, observations::AbstractMatrix) =\n observations'*model.coefficients\n\nLearnAPI.predict(model::RidgeFitted, ::Point, Xnew) =\n predict(model, Point(), obs(model, Xnew))","category":"page"},{"location":"anatomy_of_an_implementation/#target-and-features-methods","page":"Anatomy of an Implementation","title":"target and features methods","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"We provide an additional overloading of LearnAPI.target to handle the additional supported data argument of fit:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.target(::Ridge, observations::RidgeFitObs) = observations.y","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Similarly, we must overload LearnAPI.features, which extracts features from training data (objects that can be passed to predict) like 
this","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.features(::Ridge, observations::RidgeFitObs) = observations.A","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"as the fallback mentioned above is no longer adequate.","category":"page"},{"location":"anatomy_of_an_implementation/#Important-notes:","page":"Anatomy of an Implementation","title":"Important notes:","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"The observations to be consumed by fit are returned by obs(learner::Ridge, ...), while those consumed by predict are returned by obs(model::RidgeFitted, ...). We need the different signatures because the forms of data consumed by fit and predict are generally different.\nWe need the adjoint operator, ', because the last dimension in arrays is the observation dimension, according to the MLUtils.jl convention. Remember, Xnew is a table here.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Since LearnAPI.jl provides fallbacks for obs that simply return the unadulterated data argument, overloading obs is optional. This is provided that the data appearing in publicized fit/predict signatures consists only of objects implementing the LearnAPI.RandomAccess interface (most tables¹, arrays³, and tuples thereof).","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"To opt out of supporting the MLUtils.jl interface altogether, an implementation must overload the trait, LearnAPI.data_interface(learner). See Data interfaces for details.","category":"page"},{"location":"anatomy_of_an_implementation/#Addition-of-signatures-for-user-convenience","page":"Anatomy of an Implementation","title":"Addition of signatures for user convenience","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"As above, we add a signature which plays no role vis-à-vis LearnAPI.jl.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.fit(learner::Ridge, X, y; kwargs...) 
= fit(learner, (X, y); kwargs...)","category":"page"},{"location":"anatomy_of_an_implementation/#advanced_demo","page":"Anatomy of an Implementation","title":"Demonstration of an advanced obs workflow","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"We can now train and predict using internal data representations, resampled using the generic MLUtils.jl interface:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"import MLUtils\nlearner = Ridge()\nobservations_for_fit = obs(learner, (X, y))\nmodel = fit(learner, MLUtils.getobs(observations_for_fit, train))\nobservations_for_predict = obs(model, X)\nẑ = predict(model, MLUtils.getobs(observations_for_predict, test))","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"@assert ẑ == ŷ","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"For an application of obs to efficient cross-validation, see here.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"¹ In LearnAPI.jl a table is any object X implementing the Tables.jl interface, additionally satisfying Tables.istable(X) == true and implementing DataAPI.nrow (and whence MLUtils.numobs). Tables that are also (unnamed) tuples are disallowed.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"² An implementation can provide further accessor functions, if necessary, but like the native ones, they must be included in the LearnAPI.functions declaration.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"³ The last index must be the observation index.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"⁴ The data = (X, y) pattern implemented here is not the only supported pattern. For example, data might be a single table containing both features and target variable. 
In this case, it will be necessary to overload LearnAPI.features in addition to LearnAPI.target; the name of the target column would need to be a hyperparameter.","category":"page"},{"location":"patterns/static_algorithms/#Static-Algorithms","page":"Static Algorithms","title":"Static Algorithms","text":"","category":"section"},{"location":"patterns/static_algorithms/","page":"Static Algorithms","title":"Static Algorithms","text":"See these examples from the JuliaTestAPI.jl test suite:","category":"page"},{"location":"patterns/static_algorithms/","page":"Static Algorithms","title":"Static Algorithms","text":"feature selection","category":"page"},{"location":"patterns/meta_algorithms/#Meta-algorithms","page":"Meta-algorithms","title":"Meta-algorithms","text":"","category":"section"},{"location":"patterns/meta_algorithms/","page":"Meta-algorithms","title":"Meta-algorithms","text":"Many meta-algorithms can be implemented as wrappers. An example is this bagged ensemble algorithm from tests.","category":"page"},{"location":"patterns/clusterering/#Clusterering","page":"Clustering","title":"Clustering","text":"","category":"section"},{"location":"reference/#reference","page":"Overview","title":"Reference","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"Here we give the definitive specification of the LearnAPI.jl interface. For informal guides see Anatomy of an Implementation and Common Implementation Patterns.","category":"page"},{"location":"reference/#scope","page":"Overview","title":"Important terms and concepts","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"The LearnAPI.jl specification is predicated on a few basic, informally defined notions:","category":"page"},{"location":"reference/#Data-and-observations","page":"Overview","title":"Data and observations","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"ML/statistical algorithms are typically applied in conjunction with resampling of observations, as in cross-validation. In this document, data will always refer to objects encapsulating an ordered sequence of individual observations.","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"A DataFrame instance, from DataFrames.jl, is an example of data, the observations being the rows. Typically, data provided to LearnAPI.jl algorithms will implement the MLUtils.jl getobs/numobs interface for accessing individual observations, but implementations can opt out of this requirement; see obs and LearnAPI.data_interface for details.","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"note: Note\nIn the MLUtils.jl convention, observations in tables are the rows but observations in a matrix are the columns.","category":"page"},{"location":"reference/#hyperparameters","page":"Overview","title":"Hyperparameters","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"Besides the data it consumes, a machine learning algorithm's behavior is governed by a number of user-specified hyperparameters, such as the number of trees in a random forest. In LearnAPI.jl, one is allowed to have hyperparameters that are not data-generic. 
For example, a class weight dictionary, which will only make sense for a target taking values in the set of dictionary keys, can be specified as a hyperparameter.","category":"page"},{"location":"reference/#proxy","page":"Overview","title":"Targets and target proxies","text":"","category":"section"},{"location":"reference/#Context","page":"Overview","title":"Context","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"After training, a supervised classifier predicts labels on some input which are then compared with ground truth labels using some accuracy measure, to assess the performance of the classifier. Alternatively, the classifier predicts class probabilities, which are instead paired with ground truth labels using a proper scoring rule, say. In outlier detection, \"outlier\"/\"inlier\" predictions, or probability-like scores, are similarly compared with ground truth labels. In clustering, integer labels assigned to observations by the clustering algorithm can be paired with human labels using, say, the Rand index. In survival analysis, predicted survival functions or probability distributions are compared with censored ground truth survival times. And so on ...","category":"page"},{"location":"reference/#Definitions","page":"Overview","title":"Definitions","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"More generally, whenever we have a variable (e.g., a class label) that can, at least in principle, be paired with a predicted value, or some predicted \"proxy\" for that variable (such as a class probability), then we call the variable a target variable, and the predicted output a target proxy. In this definition, it is immaterial whether or not the target appears in training (the algorithm is supervised) or whether or not predictions generalize to new input observations (the algorithm \"learns\").","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"LearnAPI.jl provides singleton target proxy types for prediction dispatch. These are also used to distinguish performance metrics provided by the package StatisticalMeasures.jl.","category":"page"},{"location":"reference/#learners","page":"Overview","title":"Learners","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"An object implementing the LearnAPI.jl interface is called a learner, although it is more accurately \"the configuration of some machine learning or statistical algorithm\".¹ A learner encapsulates a particular set of user-specified hyperparameters as the object's properties (which conceivably differ from its fields). It does not store learned parameters.","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"Informally, we will sometimes use the word \"model\" to refer to the output of fit(learner, ...) 
(see below), something which typically does store learned parameters.","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"For learner to be a valid LearnAPI.jl learner, LearnAPI.constructor(learner) must be defined and return a keyword constructor enabling recovery of learner from its properties:","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"properties = propertynames(learner)\nnamed_properties = NamedTuple{properties}(getproperty.(Ref(learner), properties))\n@assert learner == LearnAPI.constructor(learner)(; named_properties...)","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"which can be tested with @assert LearnAPI.clone(learner) == learner.","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"Note that if learner is an instance of a mutable struct, this requirement generally requires overloading Base.== for the struct.","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"important: Important\nNo LearnAPI.jl method is permitted to mutate a learner. In particular, one should make deep copies of RNG hyperparameters before using them in a new implementation of fit.","category":"page"},{"location":"reference/#Composite-learners-(wrappers)","page":"Overview","title":"Composite learners (wrappers)","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"A composite learner is one with at least one property that can take other learners as values; for such learners LearnAPI.is_composite(learner) must be true (fallback is false). Generally, the keyword constructor provided by LearnAPI.constructor must provide default values for all properties that are not learner-valued. Instead, these learner-valued properties can have a nothing default, with the constructor throwing an error if the constructor call does not explicitly specify a new value.","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"Any object learner for which LearnAPI.functions(learner) is non-empty is understood to have a valid implementation of the LearnAPI.jl interface.","category":"page"},{"location":"reference/#Example","page":"Overview","title":"Example","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"Below is an example of a learner type with a valid constructor:","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"struct GradientRidgeRegressor{T<:Real}\n learning_rate::T\n epochs::Int\n l2_regularization::T\nend\nGradientRidgeRegressor(; learning_rate=0.01, epochs=10, l2_regularization=0.01) =\n GradientRidgeRegressor(learning_rate, epochs, l2_regularization)\nLearnAPI.constructor(::GradientRidgeRegressor) = GradientRidgeRegressor","category":"page"},{"location":"reference/#Documentation","page":"Overview","title":"Documentation","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"Attach public LearnAPI.jl-related documentation for a learner to its constructor, rather than to the struct defining its type. 
In this way, a learner can implement multiple interfaces, in addition to the LearnAPI interface, with separate document strings for each.","category":"page"},{"location":"reference/#Methods","page":"Overview","title":"Methods","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"note: Compulsory methods\nAll new learner types must implement fit, LearnAPI.learner, LearnAPI.constructor and LearnAPI.functions.","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"Most learners will also implement predict and/or transform. For a minimal (but useless) implementation, see the implementation of SmallLearner here.","category":"page"},{"location":"reference/#List-of-methods","page":"Overview","title":"List of methods","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"fit: for (i) training or updating learners that generalize to new data; or (ii) wrapping a learner in an object that is possibly mutated by predict/transform, to record byproducts of those operations, in the special case of non-generalizing learners (called here static algorithms)\nupdate: for updating learning outcomes after hyperparameter changes, such as increasing an iteration parameter.\nupdate_observations, update_features: update learning outcomes by presenting additional training data.\npredict: for outputting targets or target proxies (such as probability density functions)\ntransform: similar to predict, but for arbitrary kinds of output, and which can be paired with an inverse_transform method\ninverse_transform: for inverting the output of transform (\"inverting\" broadly understood)\nLearnAPI.target, LearnAPI.weights, LearnAPI.features: for extracting relevant parts of training data, where defined.\nobs: method for exposing to the user learner-specific representations of data, which are additionally guaranteed to implement the observation access API specified by LearnAPI.data_interface(learner).\nAccessor functions: these include functions like LearnAPI.feature_importances and LearnAPI.training_losses, for extracting, from training outcomes, information common to many learners. This includes LearnAPI.strip(model) for replacing a learning outcome model with a serializable version that can still predict or transform.\nLearner traits: methods that promise specific learner behavior or record general information about the learner. Only LearnAPI.constructor and LearnAPI.functions are universally compulsory.","category":"page"},{"location":"reference/#Utilities","page":"Overview","title":"Utilities","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"LearnAPI.clone\nLearnAPI.@trait","category":"page"},{"location":"reference/#LearnAPI.clone","page":"Overview","title":"LearnAPI.clone","text":"LearnAPI.clone(learner; replacements...)\n\nReturn a shallow copy of learner with the specified hyperparameter replacements.\n\nclone(learner; epochs=100, learning_rate=0.01)\n\nA LearnAPI.jl contract ensures that LearnAPI.clone(learner) == learner.\n\n\n\n\n\n","category":"function"},{"location":"reference/#LearnAPI.@trait","page":"Overview","title":"LearnAPI.@trait","text":"@trait(LearnerType, trait1=value1, trait2=value2, ...)\n\nOverload a number of traits for learners of type LearnerType. 
For example, the code\n\n@trait(\n RidgeRegressor,\n tags = (\"regression\", ),\n doc_url = \"https://some.cool.documentation\",\n)\n\nis equivalent to\n\nLearnAPI.tags(::RidgeRegressor) = (\"regression\", )\nLearnAPI.doc_url(::RidgeRegressor) = \"https://some.cool.documentation\"\n\n\n\n\n\n","category":"macro"},{"location":"reference/","page":"Overview","title":"Overview","text":"","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"¹ We acknowledge users may not like this terminology, and may know \"learner\" by some other name, such as \"strategy\", \"options\", \"hyperparameter set\", \"configuration\", \"algorithm\", or \"model\". Consensus on this point is difficult; see, e.g., this Julia Discourse discussion.","category":"page"},{"location":"accessor_functions/#accessor_functions","page":"Accessor Functions","title":"Accessor Functions","text":"","category":"section"},{"location":"accessor_functions/","page":"Accessor Functions","title":"Accessor Functions","text":"The sole argument of an accessor function is the output, model, of fit. Learners are free to implement any number of these, or none of them. Only LearnAPI.strip has a fallback, namely the identity.","category":"page"},{"location":"accessor_functions/","page":"Accessor Functions","title":"Accessor Functions","text":"LearnAPI.learner(model)\nLearnAPI.extras(model)\nLearnAPI.strip(model)\nLearnAPI.coefficients(model)\nLearnAPI.intercept(model)\nLearnAPI.tree(model)\nLearnAPI.trees(model)\nLearnAPI.feature_importances(model)\nLearnAPI.training_labels(model)\nLearnAPI.training_losses(model)\nLearnAPI.training_predictions(model)\nLearnAPI.training_scores(model)\nLearnAPI.components(model)","category":"page"},{"location":"accessor_functions/","page":"Accessor Functions","title":"Accessor Functions","text":"Learner-specific accessor functions may also be implemented. The names of all accessor functions are included in the list returned by LearnAPI.functions(learner).","category":"page"},{"location":"accessor_functions/#Implementation-guide","page":"Accessor Functions","title":"Implementation guide","text":"","category":"section"},{"location":"accessor_functions/","page":"Accessor Functions","title":"Accessor Functions","text":"All new implementations must implement LearnAPI.learner. While all others are optional, any implemented accessor functions must be added to the list returned by LearnAPI.functions.","category":"page"},{"location":"accessor_functions/#Reference","page":"Accessor Functions","title":"Reference","text":"","category":"section"},{"location":"accessor_functions/","page":"Accessor Functions","title":"Accessor Functions","text":"LearnAPI.learner\nLearnAPI.extras\nLearnAPI.strip\nLearnAPI.coefficients\nLearnAPI.intercept\nLearnAPI.tree\nLearnAPI.trees\nLearnAPI.feature_importances\nLearnAPI.training_losses\nLearnAPI.training_predictions\nLearnAPI.training_scores\nLearnAPI.training_labels\nLearnAPI.components","category":"page"},{"location":"accessor_functions/#LearnAPI.learner","page":"Accessor Functions","title":"LearnAPI.learner","text":"LearnAPI.learner(model)\nLearnAPI.learner(stripped_model)\n\nRecover the learner used to train model or the output, stripped_model, of LearnAPI.strip(model).\n\nIn other words, if model = fit(learner, data...), for some learner and data, then\n\nLearnAPI.learner(model) == learner == LearnAPI.learner(LearnAPI.strip(model))\n\nis true.\n\nNew implementations\n\nImplementation is compulsory for new learner types. 
The behaviour described above is the only contract. You must include :(LearnAPI.learner) in the return value of LearnAPI.functions(learner).\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.extras","page":"Accessor Functions","title":"LearnAPI.extras","text":"LearnAPI.extras(model)\n\nReturn miscellaneous byproducts of a learning algorithm's execution, from the object model returned by a call of the form fit(learner, data).\n\nFor \"static\" learners (those without training data) it may be necessary to first call transform or predict on model.\n\nSee also fit.\n\nNew implementations\n\nImplementation is discouraged for byproducts already covered by other LearnAPI.jl accessor functions: LearnAPI.learner, LearnAPI.coefficients, LearnAPI.intercept, LearnAPI.tree, LearnAPI.trees, LearnAPI.feature_importances, LearnAPI.training_labels, LearnAPI.training_losses, LearnAPI.training_predictions, LearnAPI.training_scores and LearnAPI.components.\n\nIf implemented, you must include :(LearnAPI.extras) in the tuple returned by the LearnAPI.functions trait.\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#Base.strip","page":"Accessor Functions","title":"Base.strip","text":"LearnAPI.strip(model; options...)\n\nReturn a version of model that will generally have a smaller memory allocation than model, suitable for serialization. Here model is any object returned by fit. Accessor functions that can be called on model may not work on LearnAPI.strip(model), but predict, transform and inverse_transform will work, if implemented. Check LearnAPI.functions(LearnAPI.learner(model)) to see what the original model implements.\n\nImplementations may provide learner-specific keyword options to control how much of the original functionality is preserved by LearnAPI.strip.\n\nTypical workflow\n\nmodel = fit(learner, (X, y)) # or `fit(learner, X, y)`\nŷ = predict(model, Point(), Xnew)\n\nsmall_model = LearnAPI.strip(model)\nserialize(\"my_model.jls\", small_model)\n\nrecovered_model = deserialize(\"my_model.jls\")\n@assert predict(recovered_model, Point(), Xnew) == ŷ\n\nExtended help\n\nNew implementations\n\nOverloading LearnAPI.strip for new learners is optional. The fallback is the identity.\n\nNew implementations must enforce the following identities, whenever the right-hand side is defined:\n\npredict(LearnAPI.strip(model; options...), args...; kwargs...) ==\n predict(model, args...; kwargs...)\ntransform(LearnAPI.strip(model; options...), args...; kwargs...) ==\n transform(model, args...; kwargs...)\ninverse_transform(LearnAPI.strip(model; options...), args...; kwargs...) ==\n inverse_transform(model, args...; kwargs...)\n\nAdditionally:\n\nLearnAPI.strip(LearnAPI.strip(model)) == LearnAPI.strip(model)\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.coefficients","page":"Accessor Functions","title":"LearnAPI.coefficients","text":"LearnAPI.coefficients(model)\n\nFor a linear model, return the learned coefficients. 
The value returned has the form of an abstract vector of feature_or_class::Symbol => coefficient::Real pairs (e.g. [:gender => 0.23, :height => 0.7, :weight => 0.1]) or, in the case of multi-targets, feature::Symbol => coefficients::AbstractVector{<:Real} pairs.\n\nThe model reports coefficients if :(LearnAPI.coefficients) in LearnAPI.functions(LearnAPI.learner(model)).\n\nSee also LearnAPI.intercept.\n\nNew implementations\n\nImplementation is optional.\n\nIf implemented, you must include :(LearnAPI.coefficients) in the tuple returned by the LearnAPI.functions trait.\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.intercept","page":"Accessor Functions","title":"LearnAPI.intercept","text":"LearnAPI.intercept(model)\n\nFor a linear model, return the learned intercept. The value returned is Real (single target) or an AbstractVector{<:Real} (multi-target).\n\nThe model reports intercept if :(LearnAPI.intercept) in LearnAPI.functions(LearnAPI.learner(model)).\n\nSee also LearnAPI.coefficients.\n\nNew implementations\n\nImplementation is optional.\n\nIf implemented, you must include :(LearnAPI.intercept) in the tuple returned by the LearnAPI.functions trait.\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.tree","page":"Accessor Functions","title":"LearnAPI.tree","text":"LearnAPI.tree(model)\n\nReturn a user-friendly tree, in the form of a root object implementing the following interface defined in AbstractTrees.jl:\n\nsubtypes AbstractTrees.AbstractNode{T}\nimplements AbstractTrees.children()\nimplements AbstractTrees.printnode()\n\nSuch a tree can be visualized using the TreeRecipe.jl package, for example.\n\nSee also LearnAPI.trees.\n\nNew implementations\n\nImplementation is optional.\n\nIf implemented, you must include :(LearnAPI.tree) in the tuple returned by the LearnAPI.functions trait.\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.trees","page":"Accessor Functions","title":"LearnAPI.trees","text":"LearnAPI.trees(model)\n\nFor some ensemble model, return a vector of trees. See LearnAPI.tree for the form of such trees.\n\nSee also LearnAPI.tree.\n\nNew implementations\n\nImplementation is optional.\n\nIf implemented, you must include :(LearnAPI.trees) in the tuple returned by the LearnAPI.functions trait.\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.feature_importances","page":"Accessor Functions","title":"LearnAPI.feature_importances","text":"LearnAPI.feature_importances(model)\n\nReturn the learner-specific feature importances of a model output by fit(learner, ...) for some learner. The value returned has the form of an abstract vector of feature::Symbol => importance::Real pairs (e.g. [:gender => 0.23, :height => 0.7, :weight => 0.1]).\n\nThe learner supports feature importances if :(LearnAPI.feature_importances) in LearnAPI.functions(learner).\n\nIf a learner is sometimes unable to report feature importances then LearnAPI.feature_importances will return all importances as 0.0, as in [:gender => 0.0, :height => 0.0, :weight => 0.0].\n\nNew implementations\n\nImplementation is optional.\n\nIf implemented, you must include :(LearnAPI.feature_importances) in the tuple returned by the LearnAPI.functions trait.
\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.training_losses","page":"Accessor Functions","title":"LearnAPI.training_losses","text":"LearnAPI.training_losses(model)\n\nReturn the training losses obtained when running model = fit(learner, ...) for some learner.\n\nSee also fit.\n\nNew implementations\n\nImplement for iterative algorithms that compute and record training losses as part of training (e.g. neural networks).\n\nIf implemented, you must include :(LearnAPI.training_losses) in the tuple returned by the LearnAPI.functions trait.\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.training_predictions","page":"Accessor Functions","title":"LearnAPI.training_predictions","text":"LearnAPI.training_predictions(model)\n\nReturn internally computed training predictions when running model = fit(learner, ...) for some learner.\n\nSee also fit.\n\nNew implementations\n\nImplement for iterative algorithms that compute and record training predictions as part of training (e.g. neural networks).\n\nIf implemented, you must include :(LearnAPI.training_predictions) in the tuple returned by the LearnAPI.functions trait.\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.training_scores","page":"Accessor Functions","title":"LearnAPI.training_scores","text":"LearnAPI.training_scores(model)\n\nReturn the training scores obtained when running model = fit(learner, ...) for some learner.\n\nSee also fit.\n\nNew implementations\n\nImplement for learners, such as outlier detection algorithms, which associate a score with each observation during training, where these scores are of interest in later processes (e.g., in defining normalized scores for new data).\n\nIf implemented, you must include :(LearnAPI.training_scores) in the tuple returned by the LearnAPI.functions trait.\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.training_labels","page":"Accessor Functions","title":"LearnAPI.training_labels","text":"LearnAPI.training_labels(model)\n\nReturn the training labels obtained when running model = fit(learner, ...) for some learner.\n\nSee also fit.\n\nNew implementations\n\nIf implemented, you must include :(LearnAPI.training_labels) in the tuple returned by the LearnAPI.functions trait.\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.components","page":"Accessor Functions","title":"LearnAPI.components","text":"LearnAPI.components(model)\n\nFor a composite model, return the component models (fit outputs). These will be in the form of a vector of named pairs, property_name::Symbol => component_model. Here property_name is the name of some learner-valued property (hyper-parameter) of learner = LearnAPI.learner(model).\n\nA composite model is one for which the corresponding learner includes one or more learner-valued properties, and for which LearnAPI.is_composite(learner) is true.\n\nSee also is_composite.\n\nNew implementations\n\nImplement if and only if model is a composite model.\n\nIf implemented, you must include :(LearnAPI.components) in the tuple returned by the LearnAPI.functions trait.
\n\n\n\n\n\n","category":"function"},{"location":"patterns/dimension_reduction/#Dimension-Reduction","page":"Dimension Reduction","title":"Dimension Reduction","text":"","category":"section"},{"location":"patterns/dimension_reduction/","page":"Dimension Reduction","title":"Dimension Reduction","text":"See these examples from the JuliaTestAPI.jl test suite:","category":"page"},{"location":"patterns/dimension_reduction/","page":"Dimension Reduction","title":"Dimension Reduction","text":"Truncated SVD","category":"page"},{"location":"patterns/time_series_forecasting/#Time-Series-Forecasting","page":"Time Series Forecasting","title":"Time Series Forecasting","text":"","category":"section"},{"location":"obs/#data_interface","page":"obs","title":"obs and Data Interfaces","text":"","category":"section"},{"location":"obs/","page":"obs","title":"obs","text":"The obs method takes data intended as input to fit, predict or transform, and transforms it to a learner-specific form guaranteed to implement a form of observation access designated by the learner. The transformed data can then be passed on to the relevant method in place of the original input (after first resampling it, if the learner supports this). Using obs may provide performance advantages over naive workflows in some cases (e.g., cross-validation).","category":"page"},{"location":"obs/","page":"obs","title":"obs","text":"obs(learner, data) # can be passed to `fit` instead of `data`\nobs(model, data) # can be passed to `predict` or `transform` instead of `data`","category":"page"},{"location":"obs/#obs_workflows","page":"obs","title":"Typical workflows","text":"","category":"section"},{"location":"obs/","page":"obs","title":"obs","text":"LearnAPI.jl makes no universal assumptions about the form of data in a call like fit(learner, data). However, if we define","category":"page"},{"location":"obs/","page":"obs","title":"obs","text":"observations = obs(learner, data)","category":"page"},{"location":"obs/","page":"obs","title":"obs","text":"then, assuming the typical case that LearnAPI.data_interface(learner) == LearnAPI.RandomAccess(), observations implements the MLUtils.jl getobs/numobs interface, for grabbing and counting observations. Moreover, we can pass observations to fit in place of the original data, or first resample it using MLUtils.getobs:","category":"page"},{"location":"obs/","page":"obs","title":"obs","text":"# equivalent to `model = fit(learner, data)`\nmodel = fit(learner, observations)\n\n# with resampling:\nresampled_observations = MLUtils.getobs(observations, 1:10)\nmodel = fit(learner, resampled_observations)","category":"page"},{"location":"obs/","page":"obs","title":"obs","text":"In some implementations, the alternative pattern above can be used to avoid repeating unnecessary internal data preprocessing, or inefficient resampling. 
For example, here's how a user might call obs and MLUtils.getobs to perform efficient cross-validation:","category":"page"},{"location":"obs/","page":"obs","title":"obs","text":"using LearnAPI\nimport MLUtils\n\nlearner = \n\ndata = \nX = LearnAPI.features(learner, data)\ny = LearnAPI.target(learner, data)\n\ntrain_test_folds = map([1:10, 11:20, 21:30]) do test\n (setdiff(1:30, test), test)\nend\n\nfitobs = obs(learner, data)\nnever_trained = true\n\nscores = map(train_test_folds) do (train, test)\n\n # train using model-specific representation of data:\n fitobs_subset = MLUtils.getobs(fitobs, train)\n model = fit(learner, fitobs_subset)\n\n # predict on the fold complement:\n if never_trained\n global predictobs = obs(model, X)\n global never_trained = false\n end\n predictobs_subset = MLUtils.getobs(predictobs, test)\n ŷ = predict(model, Point(), predictobs_subset)\n\n return \n\nend","category":"page"},{"location":"obs/#Implementation-guide","page":"obs","title":"Implementation guide","text":"","category":"section"},{"location":"obs/","page":"obs","title":"obs","text":"method comment compulsory? fallback\nobs(learner, data) here data is fit-consumable not typically returns data\nobs(model, data) here data is predict-consumable not typically returns data","category":"page"},{"location":"obs/","page":"obs","title":"obs","text":"A sample implementation is given in Providing a separate data front end. ","category":"page"},{"location":"obs/#Reference","page":"obs","title":"Reference","text":"","category":"section"},{"location":"obs/","page":"obs","title":"obs","text":"obs","category":"page"},{"location":"obs/#LearnAPI.obs","page":"obs","title":"LearnAPI.obs","text":"obs(learner, data)\nobs(model, data)\n\nReturn a learner-specific representation of data, suitable for passing to fit (first signature) or to predict and transform (second signature), in place of data. Here model is the return value of fit(learner, ...) for some LearnAPI.jl learner, learner.\n\nThe returned object is guaranteed to implement observation access as indicated by LearnAPI.data_interface(learner), typically LearnAPI.RandomAccess().\n\nCalling fit/predict/transform on the returned objects may have performance advantages over calling directly on data in some contexts.\n\nExample\n\nUsual workflow, using data-specific resampling methods:\n\ndata = (X, y) # a DataFrame and a vector\ndata_train = (Tables.subset(X, 1:100), y[1:100])\nmodel = fit(learner, data_train)\nŷ = predict(model, Point(), Tables.subset(X, 101:150))\n\nAlternative, data agnostic, workflow using obs and the MLUtils.jl method getobs (assumes LearnAPI.data_interface(learner) == RandomAccess()):\n\nimport MLUtils\n\nfit_observations = obs(learner, data)\nmodel = fit(learner, MLUtils.getobs(fit_observations, 1:100))\n\npredict_observations = obs(model, X)\nẑ = predict(model, Point(), MLUtils.getobs(predict_observations, 101:150))\n@assert ẑ == ŷ\n\nSee also LearnAPI.data_interface.\n\nExtended help\n\nNew implementations\n\nImplementation is typically optional.\n\nFor each supported form of data in fit(learner, data), it must be true that model = fit(learner, observations) is equivalent to model = fit(learner, data), whenever observations = obs(learner, data). 
For each supported form of data in calls predict(model, ..., data) and transform(model, data), where implemented, the calls predict(model, ..., observations) and transform(model, observations) must be supported alternatives with the same output, whenever observations = obs(model, data).\n\nIf LearnAPI.data_interface(learner) == RandomAccess() (the default), then fit, predict and transform must additionally accept obs output that has been subsampled using MLUtils.getobs, with the obvious interpretation applying to the outcomes of such calls (e.g., if all observations are subsampled, then outcomes should be the same as if using the original data).\n\nImplicit in preceding requirements is that obs(learner, _) and obs(model, _) are involutive, meaning both the following hold:\n\nobs(learner, obs(learner, data)) == obs(learner, data)\nobs(model, obs(model, data)) == obs(model, data)\n\nIf one overloads obs, one typically needs additional overloadings to guarantee involutivity.\n\nThe fallback for obs is obs(model_or_learner, data) = data, and the fallback for LearnAPI.data_interface(learner) is LearnAPI.RandomAccess(). For details refer to the LearnAPI.data_interface document string.\n\nIn particular, if the data to be consumed by fit, predict or transform consists only of suitable tables and arrays, then obs and LearnAPI.data_interface do not need to be overloaded. However, the user will get no performance benefits by using obs in that case.\n\nIf overloading obs(learner, data) to output new model-specific representations of data, it may be necessary to also overload LearnAPI.features(learner, observations), LearnAPI.target(learner, observations) (supervised learners), and/or LearnAPI.weights(learner, observations) (if weights are supported), for each kind of output observations of obs(learner, data). Moreover, the outputs of these methods, applied to observations, must also implement the interface specified by LearnAPI.data_interface(learner).\n\nSample implementation\n\nRefer to the \"Anatomy of an Implementation\" section of the LearnAPI.jl manual.\n\n\n\n\n\n","category":"function"},{"location":"obs/#data_interfaces","page":"obs","title":"Data interfaces","text":"","category":"section"},{"location":"obs/","page":"obs","title":"obs","text":"New implementations must overload LearnAPI.data_interface(learner) if the output of obs does not implement LearnAPI.RandomAccess. (Arrays, most tables, and all tuples thereof, implement RandomAccess.)","category":"page"},{"location":"obs/","page":"obs","title":"obs","text":"LearnAPI.RandomAccess (default)\nLearnAPI.FiniteIterable\nLearnAPI.Iterable","category":"page"},{"location":"obs/","page":"obs","title":"obs","text":"LearnAPI.RandomAccess\nLearnAPI.FiniteIterable\nLearnAPI.Iterable","category":"page"},{"location":"obs/#LearnAPI.RandomAccess","page":"obs","title":"LearnAPI.RandomAccess","text":"LearnAPI.RandomAccess\n\nA data interface type. We say that data implements the RandomAccess interface if data implements the methods getobs and numobs from MLUtils.jl. 
The first method allows one to grab observations specified by an arbitrary index set, as in MLUtils.getobs(data, [2, 3, 5]), while the second method returns the total number of available observations, which is assumed to be known and finite.\n\nAll arrays implement RandomAccess, with the last index being the observation index (observations-as-columns in matrices).\n\nA Tables.jl-compatible table, data, implements RandomAccess if Tables.istable(data) is true and if data implements DataAPI.nrow. This includes many tables, and in particular, DataFrames. Tables that are also tuples are explicitly excluded.\n\nAny tuple of objects implementing RandomAccess also implements RandomAccess.\n\nIf LearnAPI.data_interface(learner) takes the value RandomAccess(), then obs(learner, ...) is guaranteed to return objects implementing the RandomAccess interface, and the same holds for obs(model, ...), whenever LearnAPI.learner(model) == learner.\n\nImplementing RandomAccess for new data types\n\nTypically, implementing RandomAccess for a new data type requires only implementing Base.getindex and Base.length, which are the fallbacks for MLUtils.getobs and MLUtils.numobs, and this avoids making MLUtils.jl a package dependency.\n\nSee also LearnAPI.FiniteIterable, LearnAPI.Iterable.\n\n\n\n\n\n","category":"type"},{"location":"obs/#LearnAPI.FiniteIterable","page":"obs","title":"LearnAPI.FiniteIterable","text":"LearnAPI.FiniteIterable\n\nA data interface type. We say that data implements the FiniteIterable interface if it implements Julia's iterate interface, including Base.length, and if Base.IteratorSize(typeof(data)) == Base.HasLength(). For example, this is true if:\n\ndata implements the LearnAPI.RandomAccess interface (arrays and most tables)\ndata isa MLUtils.DataLoader, which includes output from MLUtils.eachobs.\n\nIf LearnAPI.data_interface(learner) takes the value FiniteIterable(), then obs(learner, ...) is guaranteed to return objects implementing the FiniteIterable interface, and the same holds for obs(model, ...), whenever LearnAPI.learner(model) == learner.\n\nSee also LearnAPI.RandomAccess, LearnAPI.Iterable.\n\n\n\n\n\n","category":"type"},{"location":"obs/#LearnAPI.Iterable","page":"obs","title":"LearnAPI.Iterable","text":"LearnAPI.Iterable\n\nA data interface type. We say that data implements the Iterable interface if it implements Julia's basic iterate interface. (Such objects may not implement MLUtils.numobs or Base.length.)\n\nIf LearnAPI.data_interface(learner) takes the value Iterable(), then obs(learner, ...) is guaranteed to return objects implementing Iterable, and the same holds for obs(model, ...), whenever LearnAPI.learner(model) == learner.\n\nSee also LearnAPI.FiniteIterable, LearnAPI.RandomAccess.\n\n\n\n\n\n","category":"type"},{"location":"","page":"Home","title":"Home","text":"\n\nLearnAPI.jl\n
\nA base Julia interface for machine learning and statistics\n
","category":"page"},{"location":"","page":"Home","title":"Home","text":"LearnAPI.jl is a lightweight, functional-style interface, providing a collection of methods, such as fit and predict, to be implemented by algorithms from machine learning and statistics, some examples of which are listed here. A careful design ensures algorithms implementing LearnAPI.jl can buy into functionality, such as external performance estimates, hyperparameter optimization and model composition, provided by ML/statistics toolboxes and other packages. LearnAPI.jl includes a number of Julia traits for promising specific behavior.","category":"page"},{"location":"","page":"Home","title":"Home","text":"LearnAPI.jl's has no package dependencies.","category":"page"},{"location":"","page":"Home","title":"Home","text":"🚧","category":"page"},{"location":"","page":"Home","title":"Home","text":"warning: Warning\nThe API described here is under active development and not ready for adoption. Join an ongoing design discussion at this Julia Discourse thread.","category":"page"},{"location":"#Sample-workflow","page":"Home","title":"Sample workflow","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Suppose forest is some object encapsulating the hyperparameters of the random forest algorithm (the number of trees, etc.). Then, a LearnAPI.jl interface can be implemented, for objects with the type of forest, to enable the basic workflow below. In this case data is presented following the \"scikit-learn\" X, y pattern, although LearnAPI.jl supports other patterns as well.","category":"page"},{"location":"","page":"Home","title":"Home","text":"X = \ny = \nXnew = \n\n# List LearnaAPI functions implemented for `forest`:\nLearnAPI.functions(forest)\n\n# Train:\nmodel = fit(forest, X, y)\n\n# Generate point predictions:\nŷ = predict(model, Xnew) # or `predict(model, Point(), Xnew)`\n\n# Predict probability distributions:\npredict(model, Distribution(), Xnew)\n\n# Apply an \"accessor function\" to inspect byproducts of training:\nLearnAPI.feature_importances(model)\n\n# Slim down and otherwise prepare model for serialization:\nsmall_model = LearnAPI.strip(model)\nserialize(\"my_random_forest.jls\", small_model)\n\n# Recover saved model and algorithm configuration (\"learner\"):\nrecovered_model = deserialize(\"my_random_forest.jls\")\n@assert LearnAPI.learner(recovered_model) == forest\n@assert predict(recovered_model, Point(), Xnew) == ŷ","category":"page"},{"location":"","page":"Home","title":"Home","text":"Distribution and Point are singleton types owned by LearnAPI.jl. They allow dispatch based on the kind of target proxy, a key LearnAPI.jl concept. LearnAPI.jl places more emphasis on the notion of target variables and target proxies than on the usual supervised/unsupervised learning dichotomy. From this point of view, a supervised learner is simply one in which a target variable exists, and happens to appear as an input to training but not to prediction.","category":"page"},{"location":"#Data-interfaces","page":"Home","title":"Data interfaces","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Algorithms are free to consume data in any format. However, a method called obs (read as \"observations\") gives users and meta-algorithms access to an algorithm-specific representation of input data, which is also guaranteed to implement a standard interface for accessing individual observations, unless the algorithm explicitly opts out. 
Moreover, the fit and predict methods will also be able to consume these alternative data representations, for performance benefits in some situations.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The fallback data interface is the MLUtils.jl getobs/numobs interface (here tagged as LearnAPI.RandomAccess()), and if the input consumed by the algorithm already implements that interface (tables, arrays, etc.), then overloading obs is completely optional. Plain iteration interfaces, with or without knowledge of the number of observations, can also be specified (to support, e.g., data loaders reading images from disk).","category":"page"},{"location":"#Learning-more","page":"Home","title":"Learning more","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Anatomy of an Implementation: informal introduction to the main actors in a new LearnAPI.jl implementation\nReference: official specification\nCommon Implementation Patterns: implementation suggestions for common, informally defined, algorithm types\nTesting an Implementation","category":"page"},{"location":"patterns/outlier_detection/#Outlier-Detection","page":"Outlier Detection","title":"Outlier Detection","text":"","category":"section"},{"location":"patterns/incremental_algorithms/#Incremental-Algorithms","page":"Incremental Algorithms","title":"Incremental Algorithms","text":"","category":"section"},{"location":"patterns/incremental_algorithms/","page":"Incremental Algorithms","title":"Incremental Algorithms","text":"See these examples from the LearnTestAPI.jl test suite:","category":"page"},{"location":"patterns/incremental_algorithms/","page":"Incremental Algorithms","title":"Incremental Algorithms","text":"normal distribution estimator","category":"page"}]
+[{"location":"patterns/regression/#Regression","page":"Regression","title":"Regression","text":"","category":"section"},{"location":"patterns/regression/","page":"Regression","title":"Regression","text":"See these examples from the LearnTestAPI.jl test suite:","category":"page"},{"location":"patterns/regression/","page":"Regression","title":"Regression","text":"ridge regression","category":"page"},{"location":"patterns/missing_value_imputation/#Missing-Value-Imputation","page":"Missing Value Imputation","title":"Missing Value Imputation","text":"","category":"section"},{"location":"patterns/iterative_algorithms/#Iterative-Algorithms","page":"Iterative Algorithms","title":"Iterative Algorithms","text":"","category":"section"},{"location":"patterns/iterative_algorithms/","page":"Iterative Algorithms","title":"Iterative Algorithms","text":"See these examples from the LearnTestAPI.jl test suite:","category":"page"},{"location":"patterns/iterative_algorithms/","page":"Iterative Algorithms","title":"Iterative Algorithms","text":"bagged ensembling\nperceptron classifier","category":"page"},{"location":"patterns/survival_analysis/#Survival-Analysis","page":"Survival Analysis","title":"Survival Analysis","text":"","category":"section"},{"location":"predict_transform/#operations","page":"predict/transform","title":"predict, transform and inverse_transform","text":"","category":"section"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"predict(model, kind_of_proxy, data)\ntransform(model, data)\ninverse_transform(model, data)","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"Versions without the data argument may apply, for 
example in Density estimation.","category":"page"},{"location":"predict_transform/#predict_workflow","page":"predict/transform","title":"Typical workflows","text":"","category":"section"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"Train some supervised learner:","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"model = fit(learner, (X, y))","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"Predict probability distributions:","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"ŷ = predict(model, Distribution(), Xnew)","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"Generate point predictions:","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"ŷ = predict(model, Point(), Xnew)","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"Train a dimension-reducing learner:","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"model = fit(learner, X)\nXnew_reduced = transform(model, Xnew)","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"Apply an approximate right inverse:","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"inverse_transform(model, Xnew_reduced)","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"Fit and transform in one line:","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"transform(learner, data) # `fit` implied","category":"page"},{"location":"predict_transform/#An-advanced-workflow","page":"predict/transform","title":"An advanced workflow","text":"","category":"section"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"fitobs = obs(learner, (X, y)) # learner-specific repr. of data\nmodel = fit(learner, MLUtils.getobs(fitobs, 1:100))\npredictobs = obs(model, MLUtils.getobs(X, 101:150))\nŷ = predict(model, Point(), predictobs)","category":"page"},{"location":"predict_transform/#predict_guide","page":"predict/transform","title":"Implementation guide","text":"","category":"section"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"method compulsory? 
fallback\npredict no none\ntransform no none\ninverse_transform no none","category":"page"},{"location":"predict_transform/#Predict-or-transform?","page":"predict/transform","title":"Predict or transform?","text":"","category":"section"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"If the learner has a notion of target variable, then use predict to output each supported kind of target proxy (Point(), Distribution(), etc.).","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"For output not associated with a target variable, implement transform instead, which does not dispatch on LearnAPI.KindOfProxy, but can be optionally paired with an implementation of inverse_transform, for returning (approximate) right or left inverses to transform.","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"Of course, one learner can implement both a predict and a transform method. For example, a K-means clustering algorithm can predict labels and transform to reduce dimension using distances from the cluster centres.","category":"page"},{"location":"predict_transform/#one_liners","page":"predict/transform","title":"One-liners combining fit and transform/predict","text":"","category":"section"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"Learners may additionally overload transform to apply fit first, using the supplied data if required, and then immediately transform the same data. In that case the first argument of transform is a learner instead of the output of fit:","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"transform(learner, data) # `fit` implied","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"This will be shorthand for ","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"model = fit(learner, data) # or `fit(learner)` in the static case\ntransform(model, data)","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"The same remarks apply to predict, as in ","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"predict(learner, kind_of_proxy, data) # `fit` implied","category":"page"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"LearnAPI.jl does not, however, guarantee the provision of these one-liners.","category":"page"},{"location":"predict_transform/#predict_ref","page":"predict/transform","title":"Reference","text":"","category":"section"},{"location":"predict_transform/","page":"predict/transform","title":"predict/transform","text":"predict\ntransform\ninverse_transform","category":"page"},{"location":"predict_transform/#LearnAPI.predict","page":"predict/transform","title":"LearnAPI.predict","text":"predict(model, kind_of_proxy::LearnAPI.KindOfProxy, data)\npredict(model, data)\n\nThe first signature returns target predictions, or proxies for target predictions, for input features data, according to some model returned by fit. Where supported, these are literally target predictions if kind_of_proxy = Point(), and probability density/mass functions if kind_of_proxy = Distribution(). 
List all options with LearnAPI.kinds_of_proxy(learner), where learner = LearnAPI.learner(model).\n\nmodel = fit(learner, (X, y))\npredict(model, Point(), Xnew)\n\nThe shortcut predict(model, data) calls the first method with the learner-specific kind_of_proxy, namely the first element of LearnAPI.kinds_of_proxy(learner), which lists all supported target proxies.\n\nThe argument model is anything returned by a call of the form fit(learner, ...).\n\nIf LearnAPI.features(LearnAPI.learner(model), data) == nothing, then the argument data is omitted in both signatures. An example is density estimators.\n\nSee also fit, transform, inverse_transform.\n\nExtended help\n\nNote predict must not mutate any argument, except in the special case LearnAPI.is_static(learner) == true.\n\nNew implementations\n\nIf there is no notion of a \"target\" variable in the LearnAPI.jl sense, or you need an operation with an inverse, implement transform instead.\n\nImplementation is optional. Only the first signature (with or without the data argument) is to be implemented; LearnAPI.jl provides a fallback for the second. Each kind_of_proxy::KindOfProxy that gets an implementation must be added to the list returned by LearnAPI.kinds_of_proxy(learner). List all available kinds of proxy by doing LearnAPI.kinds_of_proxy().\n\nIf data is not present in the implemented signature (e.g., for density estimators), then LearnAPI.features(learner, data) must return nothing.\n\nIf implemented, you must include :(LearnAPI.predict) in the tuple returned by the LearnAPI.functions trait. \n\nIf, additionally, LearnAPI.strip(model) is overloaded, then the following identity must hold:\n\npredict(LearnAPI.strip(model), args...) == predict(model, args...)\n\nIf LearnAPI.is_static(learner) is true, then predict may mutate its first argument, but not in a way that alters the result of a subsequent call to predict, transform or inverse_transform. See more at fit.\n\nAssumptions about data\n\nBy default, it is assumed that data supports the LearnAPI.RandomAccess interface; this includes all matrices, with observations-as-columns, most tables, and tuples thereof. See LearnAPI.RandomAccess for details. If this is not the case, then an implementation must either: (i) overload obs to articulate how provided data can be transformed into a form that does support LearnAPI.RandomAccess; or (ii) overload the trait LearnAPI.data_interface to specify a more relaxed data API. Refer to the document strings for details.\n\n\n\n\n\n","category":"function"},{"location":"predict_transform/#LearnAPI.transform","page":"predict/transform","title":"LearnAPI.transform","text":"transform(model, data)\n\nReturn a transformation of some data, using some model, as returned by fit.\n\nExample\n\nBelow, X and Xnew are data of the same form.\n\nFor a learner that generalizes to new data (\"learns\"):\n\nmodel = fit(learner, X; verbosity=0)\ntransform(model, Xnew)\n\nor, in one step (where supported):\n\nW = transform(learner, X) # `fit` implied\n\nFor a static (non-generalizing) transformer:\n\nmodel = fit(learner)\nW = transform(model, X)\n\nor, in one step (where supported):\n\nW = transform(learner, X) # `fit` implied\n\nNote transform does not mutate any argument, except in the special case LearnAPI.is_static(learner) == true.\n\nSee also fit, predict, inverse_transform.\n\nExtended help\n\nNew implementations\n\nImplementation for new LearnAPI.jl learners is optional. If implemented, you must include :(LearnAPI.transform) in the tuple returned by the LearnAPI.functions trait. 
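By way of illustration, here is a minimal sketch for a hypothetical one-dimensional standardizer, pairing transform with an exact inverse_transform (all names invented; trait declarations and error checks omitted):

```julia
using LearnAPI
using Statistics

struct Standardizer end    # hypothetical learner (no hyperparameters)

struct StandardizerModel
    learner::Standardizer
    mu::Float64
    sigma::Float64
end

# learn the centering and scaling parameters:
LearnAPI.fit(learner::Standardizer, x::AbstractVector{<:Real}; verbosity=1) =
    StandardizerModel(learner, mean(x), std(x))

LearnAPI.transform(model::StandardizerModel, x) = (x .- model.mu) ./ model.sigma

# an exact left inverse, in this case:
LearnAPI.inverse_transform(model::StandardizerModel, z) = z .* model.sigma .+ model.mu
```

For such an implementation, inverse_transform(model, transform(model, x)) ≈ x for any vector x.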
\n\nAn implementation is free to implement transform signatures with additional positional arguments (e.g., data-slurping signatures), but LearnAPI.jl is silent about their interpretation or existence.\n\nIf, additionally, LearnAPI.strip(model) is overloaded, then the following identity must hold:\n\ntransform(LearnAPI.strip(model), args...) == transform(model, args...)\n\nIf LearnAPI.is_static(learner) is true, then transform may mutate its first argument, but not in a way that alters the result of a subsequent call to predict, transform or inverse_transform. See more at fit.\n\nAssumptions about data\n\nBy default, it is assumed that data supports the LearnAPI.RandomAccess interface; this includes all matrices, with observations-as-columns, most tables, and tuples thereof. See LearnAPI.RandomAccess for details. If this is not the case, then an implementation must either: (i) overload obs to articulate how provided data can be transformed into a form that does support LearnAPI.RandomAccess; or (ii) overload the trait LearnAPI.data_interface to specify a more relaxed data API. Refer to the document strings for details.\n\n\n\n\n\n","category":"function"},{"location":"predict_transform/#LearnAPI.inverse_transform","page":"predict/transform","title":"LearnAPI.inverse_transform","text":"inverse_transform(model, data)\n\nInverse transform data according to some model returned by fit. Here \"inverse\" is to be understood broadly, e.g., an approximate right or left inverse for transform.\n\nExample\n\nIn the following, learner is some dimension-reducing algorithm that generalizes to new data (such as PCA); Xtrain is the training input and Xnew the input to be reduced:\n\nmodel = fit(learner, Xtrain)\nW = transform(model, Xnew) # reduced version of `Xnew`\nŴ = inverse_transform(model, W) # embedding of `W` in original space\n\nSee also fit, transform, predict.\n\nExtended help\n\nNew implementations\n\nImplementation is optional. If implemented, you must include :(LearnAPI.inverse_transform) in the tuple returned by the LearnAPI.functions trait. \n\nIf, additionally, LearnAPI.strip(model) is overloaded, then the following identity must hold:\n\ninverse_transform(LearnAPI.strip(model), args...) 
== inverse_transform(model, args...)\n\n\n\n\n\n","category":"function"},{"location":"patterns/ensembling/#Ensembling","page":"Ensembling","title":"Ensembling","text":"","category":"section"},{"location":"patterns/ensembling/","page":"Ensembling","title":"Ensembling","text":"See these examples from the LearnTestAPI.jl test suite:","category":"page"},{"location":"patterns/ensembling/","page":"Ensembling","title":"Ensembling","text":"bagged ensembling of a regression model","category":"page"},{"location":"patterns/supervised_bayesian_algorithms/#Supervised-Bayesian-Models","page":"Supervised Bayesian Models","title":"Supervised Bayesian Models","text":"","category":"section"},{"location":"patterns/classification/#Classification","page":"Classification","title":"Classification","text":"","category":"section"},{"location":"patterns/classification/","page":"Classification","title":"Classification","text":"See these examples from the LearnTestAPI.jl test suite:","category":"page"},{"location":"patterns/classification/","page":"Classification","title":"Classification","text":"perceptron classifier","category":"page"},{"location":"patterns/density_estimation/#Density-Estimation","page":"Density Estimation","title":"Density Estimation","text":"","category":"section"},{"location":"patterns/density_estimation/","page":"Density Estimation","title":"Density Estimation","text":"See these examples from the LearnTestAPI.jl test suite:","category":"page"},{"location":"patterns/density_estimation/","page":"Density Estimation","title":"Density Estimation","text":"normal distribution estimator","category":"page"},{"location":"patterns/gradient_descent/#Gradient-Descent","page":"Gradient Descent","title":"Gradient Descent","text":"","category":"section"},{"location":"patterns/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"See these examples from the LearnTestAPI.jl test suite:","category":"page"},{"location":"patterns/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"perceptron classifier","category":"page"},{"location":"patterns/transformers/#transformers","page":"Transformers","title":"Transformers","text":"","category":"section"},{"location":"patterns/transformers/","page":"Transformers","title":"Transformers","text":"Check out the following examples:","category":"page"},{"location":"patterns/transformers/","page":"Transformers","title":"Transformers","text":"[Truncated SVD](https://github.com/JuliaAI/LearnTestAPI.jl/blob/dev/test/patterns/dimension_reduction.jl) (from the LearnTestAPI.jl test suite)","category":"page"},{"location":"common_implementation_patterns/#patterns","page":"Common Implementation Patterns","title":"Common Implementation Patterns","text":"","category":"section"},{"location":"common_implementation_patterns/","page":"Common Implementation Patterns","title":"Common Implementation Patterns","text":"important: Important\nThis section is only an implementation guide. 
The definitive specification of LearnAPI.jl is given in Reference.","category":"page"},{"location":"common_implementation_patterns/","page":"Common Implementation Patterns","title":"Common Implementation Patterns","text":"This guide is intended to be consulted after reading Anatomy of an Implementation, which introduces the main interface objects and terminology.","category":"page"},{"location":"common_implementation_patterns/","page":"Common Implementation Patterns","title":"Common Implementation Patterns","text":"Although an implementation is defined purely by the methods and traits it implements, many implementations fall into one (or more) of the following informally understood patterns or \"tasks\":","category":"page"},{"location":"common_implementation_patterns/","page":"Common Implementation Patterns","title":"Common Implementation Patterns","text":"Regression: Supervised learners for continuous targets\nClassification: Supervised learners for categorical targets \nClustering: Algorithms that group data into clusters for classification and possibly dimension reduction. May be true learners (generalize to new data) or static.\nGradient Descent: Including neural networks.\nIterative Algorithms\nIncremental Algorithms: Algorithms that can be updated with new observations.\nFeature Engineering: Algorithms for selecting or combining features\nDimension Reduction: Transformers that learn to reduce feature space dimension\nMissing Value Imputation\nTransformers: Other transformers, such as standardizers, and categorical encoders.\nStatic Algorithms: Algorithms that do not learn, in the sense that they must be re-executed for each new data set (do not generalize), but which have hyperparameters and/or deliver ancillary information about the computation.\nEnsembling: Algorithms that blend predictions of multiple algorithms\nTime Series Forecasting\nTime Series Classification\nSurvival Analysis\nDensity Estimation: Algorithms that learn a probability distribution\nBayesian Algorithms\nOutlier Detection: Supervised, unsupervised, or semi-supervised learners for anomaly detection.\nText Analysis\nAudio Analysis\nNatural Language Processing\nImage Processing\nMeta-algorithms","category":"page"},{"location":"traits/#traits","page":"Learner Traits","title":"Learner Traits","text":"","category":"section"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"Learner traits are simply functions whose sole argument is a learner.","category":"page"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"Traits promise specific learner behavior, such as: This learner can make point or probabilistic predictions or This learner is supervised (sees a target in training). 
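In code, a meta-algorithm might query those two promises along these lines (learner being any LearnAPI.jl-compliant learner):

```julia
using LearnAPI

# can `learner` make point or probabilistic predictions?
supports_point = Point() in LearnAPI.kinds_of_proxy(learner)
supports_dist  = Distribution() in LearnAPI.kinds_of_proxy(learner)

# is `learner` supervised, i.e., does `fit` see a target variable?
is_supervised = :(LearnAPI.target) in LearnAPI.functions(learner)
```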
They may also record more mundane information, such as a package license.","category":"page"},{"location":"traits/#trait_summary","page":"Learner Traits","title":"Trait summary","text":"","category":"section"},{"location":"traits/#traits_list","page":"Learner Traits","title":"Overloadable traits","text":"","category":"section"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"In the examples column of the table below, Continuous is a name owned by the package ScientificTypesBase.jl.","category":"page"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"trait return value fallback value example\nLearnAPI.constructor(learner) constructor for generating new or modified versions of learner (no fallback) RidgeRegressor\nLearnAPI.functions(learner) functions you can apply to learner or associated model (traits excluded) () (:(LearnAPI.fit), :(LearnAPI.predict), :(LearnAPI.strip), :(LearnAPI.learner), :(LearnAPI.obs))\nLearnAPI.kinds_of_proxy(learner) the instances, kind, of KindOfProxy for which an implementation of LearnAPI.predict(learner, kind, ...) is guaranteed. () (Distribution(), Interval())\nLearnAPI.tags(learner) lists one or more suggestive learner tags from LearnAPI.tags() () (\"regression\", \"probabilistic\")\nLearnAPI.is_pure_julia(learner) true if implementation is 100% Julia code false true\nLearnAPI.pkg_name(learner) name of package providing core code (may be different from package providing LearnAPI.jl implementation) \"unknown\" \"DecisionTree\"\nLearnAPI.pkg_license(learner) name of license of package providing core code \"unknown\" \"MIT\"\nLearnAPI.doc_url(learner) URL providing documentation of the core code \"unknown\" \"https://en.wikipedia.org/wiki/Decision_tree_learning\"\nLearnAPI.load_path(learner) string locating name returned by LearnAPI.constructor(learner), beginning with a package name \"unknown\" FastTrees.LearnAPI.DecisionTreeClassifier\nLearnAPI.is_composite(learner) true if one or more properties of learner may be a learner false true\nLearnAPI.human_name(learner) human name for the learner; should be a noun type name with spaces \"elastic net regressor\"\nLearnAPI.iteration_parameter(learner) symbolic name of an iteration parameter nothing :epochs\nLearnAPI.data_interface(learner) Interface implemented by objects returned by obs LearnAPI.RandomAccess() (supports MLUtils.getobs/numobs) LearnAPI.Iterable() (supports iterate)\nLearnAPI.fit_observation_scitype(learner) upper bound on scitype(observation) for observation in data ensuring fit(learner, data) works Union{} Tuple{AbstractVector{Continuous}, Continuous}\nLearnAPI.target_observation_scitype(learner) upper bound on the scitype of each observation of the target Any Continuous\nLearnAPI.is_static(learner) true if fit consumes no data false true","category":"page"},{"location":"traits/#Derived-Traits","page":"Learner Traits","title":"Derived Traits","text":"","category":"section"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"The following are provided for convenience but should not be overloaded by new learners:","category":"page"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"trait return value example\nLearnAPI.name(learner) learner type name as string \"PCA\"\nLearnAPI.is_learner(learner) true if learner is LearnAPI.jl-compliant true\nLearnAPI.target(learner) true if fit sees a target variable; see LearnAPI.target false\nLearnAPI.weights(learner) true if fit supports per-observation weights; see LearnAPI.weights 
false","category":"page"},{"location":"traits/#Implementation-guide","page":"Learner Traits","title":"Implementation guide","text":"","category":"section"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"A single-argument trait is declared following this pattern:","category":"page"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"LearnAPI.is_pure_julia(learner::MyLearnerType) = true","category":"page"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"A shorthand for single-argument traits is available:","category":"page"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"@trait MyLearnerType is_pure_julia=true","category":"page"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"Multiple traits can be declared like this:","category":"page"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"@trait(\n MyLearnerType,\n is_pure_julia = true,\n pkg_name = \"MyPackage\",\n)","category":"page"},{"location":"traits/#trait_contract","page":"Learner Traits","title":"The global trait contract","text":"","category":"section"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"To ensure that trait metadata can be stored in an external learner registry, LearnAPI.jl requires:","category":"page"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"Finiteness: The value of a trait is the same for all learners with same value of LearnAPI.constructor(learner). This typically means trait values do not depend on type parameters! If is_composite(learner) = true, this requirement is dropped.\nLow level deserializability: It should be possible to evaluate the trait value when LearnAPI is the only imported module. ","category":"page"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"Because of 1, combining a lot of functionality into one learner (e.g. 
the learner can perform both classification and regression) can mean traits are necessarily less informative (as in LearnAPI.target_observation_scitype(learner) = Any).","category":"page"},{"location":"traits/#Reference","page":"Learner Traits","title":"Reference","text":"","category":"section"},{"location":"traits/","page":"Learner Traits","title":"Learner Traits","text":"LearnAPI.constructor\nLearnAPI.functions\nLearnAPI.kinds_of_proxy\nLearnAPI.tags\nLearnAPI.is_pure_julia\nLearnAPI.pkg_name\nLearnAPI.pkg_license\nLearnAPI.doc_url\nLearnAPI.load_path\nLearnAPI.is_composite\nLearnAPI.human_name\nLearnAPI.data_interface\nLearnAPI.iteration_parameter\nLearnAPI.fit_observation_scitype\nLearnAPI.target_observation_scitype\nLearnAPI.is_static","category":"page"},{"location":"traits/#LearnAPI.constructor","page":"Learner Traits","title":"LearnAPI.constructor","text":"LearnAPI.constructor(learner)\n\nReturn a keyword constructor that can be used to clone learner:\n\njulia> learner.lambda\n0.1\njulia> C = LearnAPI.constructor(learner)\njulia> learner2 = C(lambda=0.2)\njulia> learner2.lambda\n0.2\n\nNew implementations\n\nAll new implementations must overload this trait.\n\nAttach public LearnAPI.jl-related documentation for learner to the constructor, not the learner struct.\n\nIt must be possible to recover learner from the constructor returned as follows:\n\nproperties = propertynames(learner)\nnamed_properties = NamedTuple{properties}(getproperty.(Ref(learner), properties))\n@assert learner == LearnAPI.constructor(learner)(; named_properties...)\n\nwhich can be tested with @assert LearnAPI.clone(learner) == learner.\n\nThe keyword constructor provided by LearnAPI.constructor must provide default values for all properties, with the exception of those that can take other LearnAPI.jl learners as values. These can be provided with the default nothing, with the constructor throwing an error if the default value persists.\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.functions","page":"Learner Traits","title":"LearnAPI.functions","text":"LearnAPI.functions(learner)\n\nReturn a tuple of expressions representing functions that can be meaningfully applied with learner, or an associated model (object returned by fit(learner, ...)), as the first argument. Learner traits (methods for which learner is the only argument) are excluded.\n\nThe returned tuple may include expressions like :(DecisionTree.print_tree), which reference functions not owned by LearnAPI.jl.\n\nThe understanding is that learner is a LearnAPI-compliant object whenever the return value is non-empty.\n\nExtended help\n\nNew implementations\n\nAll new implementations must implement this trait. Here's a checklist for elements in the return value:\n\nexpression implementation compulsory? include in returned tuple?\n:(LearnAPI.fit) yes yes\n:(LearnAPI.learner) yes yes\n:(LearnAPI.strip) no yes\n:(LearnAPI.obs) no yes\n:(LearnAPI.features) no yes, unless fit consumes no data\n:(LearnAPI.target) no only if implemented\n:(LearnAPI.weights) no only if implemented\n:(LearnAPI.update) no only if implemented\n:(LearnAPI.update_observations) no only if implemented\n:(LearnAPI.update_features) no only if implemented\n:(LearnAPI.predict) no only if implemented\n:(LearnAPI.transform) no only if implemented\n:(LearnAPI.inverse_transform) no only if implemented\n<accessor functions> no only if implemented\n\nAlso include any implemented accessor functions, both those owned by LearnAPI.jl, and any learner-specific ones. 
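A typical declaration, for a hypothetical supervised regressor implementing the compulsory methods plus prediction and one accessor function, might read:

```julia
@trait(
    MyRegressor,
    functions = (
        :(LearnAPI.fit),
        :(LearnAPI.learner),
        :(LearnAPI.strip),
        :(LearnAPI.obs),
        :(LearnAPI.features),
        :(LearnAPI.target),
        :(LearnAPI.predict),
        :(LearnAPI.feature_importances),  # an accessor function
    ),
)
```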
The LearnAPI.jl accessor functions are: LearnAPI.extras, LearnAPI.learner, LearnAPI.coefficients, LearnAPI.intercept, LearnAPI.tree, LearnAPI.trees, LearnAPI.feature_names, LearnAPI.feature_importances, LearnAPI.training_labels, LearnAPI.training_losses, LearnAPI.training_predictions, LearnAPI.training_scores and LearnAPI.components (LearnAPI.strip is always included).\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.kinds_of_proxy","page":"Learner Traits","title":"LearnAPI.kinds_of_proxy","text":"LearnAPI.kinds_of_proxy(learner)\n\nReturns a tuple of all instances, kind, for which predict(learner, kind, data...) has a guaranteed implementation. Each such kind subtypes LearnAPI.KindOfProxy. Examples are Point() (for predicting actual target values) and Distribution() (for predicting probability mass/density functions).\n\nThe call predict(model, data) always returns predict(model, kind, data), where kind is the first element of the trait's return value.\n\nSee also LearnAPI.predict, LearnAPI.KindOfProxy.\n\nExtended help\n\nNew implementations\n\nMust be overloaded whenever predict is implemented.\n\nElements of the returned tuple must be instances of LearnAPI.KindOfProxy. List all possibilities by running LearnAPI.kinds_of_proxy().\n\nSuppose, for example, we have the following implementation of a supervised learner returning only probabilistic predictions:\n\nLearnAPI.predict(learner::MyNewLearnerType, ::LearnAPI.Distribution, Xnew) = ...\n\nThen we can declare\n\n@trait MyNewLearnerType kinds_of_proxy = (LearnAPI.Distribution(),)\n\nLearnAPI.jl provides the fallback for predict(model, data).\n\nFor more on target variables and target proxies, refer to the LearnAPI documentation.\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.tags","page":"Learner Traits","title":"LearnAPI.tags","text":"LearnAPI.tags(learner)\n\nLists one or more suggestive learner tags. Run LearnAPI.tags() to list all possibilities.\n\nwarning: Warning\nThe value of this trait guarantees no particular behavior. The trait is intended for informal classification purposes only.\n\nNew implementations\n\nThis trait should return a tuple of strings, as in (\"classifier\", \"text analysis\").\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.is_pure_julia","page":"Learner Traits","title":"LearnAPI.is_pure_julia","text":"LearnAPI.is_pure_julia(learner)\n\nReturns true if training learner requires evaluation of pure Julia code only.\n\nNew implementations\n\nThe fallback is false.\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.pkg_name","page":"Learner Traits","title":"LearnAPI.pkg_name","text":"LearnAPI.pkg_name(learner)\n\nReturn the name of the package module which supplies the core training algorithm for learner. This is not necessarily the package providing the LearnAPI interface.\n\nReturns \"unknown\" if the learner implementation has not overloaded the trait. 
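A typical overloading, for a hypothetical learner type wrapping DecisionTree.jl code, might read:

```julia
@trait(
    MyLearnerType,
    pkg_name = "DecisionTree",
    pkg_license = "MIT",
)
```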
\n\nNew implementations\n\nMust return a string, as in \"DecisionTree\".\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.pkg_license","page":"Learner Traits","title":"LearnAPI.pkg_license","text":"LearnAPI.pkg_license(learner)\n\nReturn the name of the software license, such as \"MIT\", applying to the package where the core algorithm for learner is implemented.\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.doc_url","page":"Learner Traits","title":"LearnAPI.doc_url","text":"LearnAPI.doc_url(learner)\n\nReturn a URL where the core algorithm for learner is documented.\n\nReturns \"unknown\" if the learner implementation has not overloaded the trait. \n\nNew implementations\n\nMust return a string, such as \"https://en.wikipedia.org/wiki/Decision_tree_learning\".\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.load_path","page":"Learner Traits","title":"LearnAPI.load_path","text":"LearnAPI.load_path(learner)\n\nReturn a string indicating where in code the definition of the learner's constructor can be found, beginning with the name of the package module defining it. By \"constructor\" we mean the return value of LearnAPI.constructor(learner).\n\nImplementation\n\nFor example, a return value of \"FastTrees.LearnAPI.DecisionTreeClassifier\" means the following Julia code will not error:\n\nimport FastTrees\nimport LearnAPI\n@assert FastTrees.LearnAPI.DecisionTreeClassifier == LearnAPI.constructor(learner)\n\nReturns \"unknown\" if the learner implementation has not overloaded the trait. \n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.is_composite","page":"Learner Traits","title":"LearnAPI.is_composite","text":"LearnAPI.is_composite(learner)\n\nReturns true if one or more properties (fields) of learner may themselves be learners, and false otherwise.\n\nSee also LearnAPI.components.\n\nNew implementations\n\nThis trait should be overloaded if one or more properties (fields) of learner may take learner values. Fallback return value is false. The keyword constructor for such a learner need not prescribe defaults for learner-valued properties. Implementation of the accessor function LearnAPI.components is recommended.\n\nThe value of the trait must depend only on the type of learner. \n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.human_name","page":"Learner Traits","title":"LearnAPI.human_name","text":"LearnAPI.human_name(learner)\n\nReturn a human-readable string representation of typeof(learner). Primarily intended for auto-generation of documentation.\n\nNew implementations\n\nOptional. A fallback takes the type name, inserts spaces and removes capitalization. For example, KNNRegressor becomes \"knn regressor\". Better would be to overload the trait to return \"K-nearest neighbors regressor\". Ideally, this is a \"concrete\" noun like \"ridge regressor\" rather than an \"abstract\" noun like \"ridge regression\".\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.data_interface","page":"Learner Traits","title":"LearnAPI.data_interface","text":"LearnAPI.data_interface(learner)\n\nReturn the data interface supported by learner for accessing individual observations in representations of input data returned by obs(learner, data) or obs(model, data), whenever learner == LearnAPI.learner(model). 
Here data is fit, predict, or transform-consumable data.\n\nPossible return values are LearnAPI.RandomAccess, LearnAPI.FiniteIterable, and LearnAPI.Iterable.\n\nSee also obs.\n\nNew implementations\n\nThe fallback returns LearnAPI.RandomAccess, which applies to arrays, most tables, and tuples of these. See the doc-string for details.\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.iteration_parameter","page":"Learner Traits","title":"LearnAPI.iteration_parameter","text":"LearnAPI.iteration_parameter(learner)\n\nThe name of the iteration parameter of learner, or nothing if the algorithm is not iterative.\n\nNew implementations\n\nImplement if the algorithm is iterative. Returns a symbol or nothing.\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.fit_observation_scitype","page":"Learner Traits","title":"LearnAPI.fit_observation_scitype","text":"LearnAPI.fit_observation_scitype(learner)\n\nReturn an upper bound S on the scitype of individual observations guaranteed to work when calling fit: if observations = obs(learner, data) and ScientificTypes.scitype(o) <: S for each o in observations, then the call fit(learner, data) is supported.\n\nHere, \"for each o in observations\" is understood in the sense of LearnAPI.data_interface(learner). For example, if LearnAPI.data_interface(learner) == LearnAPI.RandomAccess(), then this means \"for o in MLUtils.eachobs(observations)\".\n\nSee also LearnAPI.target_observation_scitype.\n\nNew implementations\n\nOptional. The fallback return value is Union{}. \n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.target_observation_scitype","page":"Learner Traits","title":"LearnAPI.target_observation_scitype","text":"LearnAPI.target_observation_scitype(learner)\n\nReturn an upper bound S on the scitype of each observation of an applicable target variable. Specifically:\n\nIf :(LearnAPI.target) in LearnAPI.functions(learner) (i.e., fit consumes target variables), then \"target\" means anything returned by LearnAPI.target(learner, data), where data is an admissible argument in the call fit(learner, data).\nS will always be an upper bound on the scitype of (point) observations that could be conceivably extracted from the output of predict.\n\nTo illustrate the second case, suppose we have\n\nmodel = fit(learner, data)\nŷ = predict(model, Sampleable(), data_new)\n\nThen each individual sample generated by each \"observation\" of ŷ (a vector of sampleable objects, say) will be bound in scitype by S.\n\nSee also LearnAPI.fit_observation_scitype.\n\nNew implementations\n\nOptional. The fallback return value is Any.\n\n\n\n\n\n","category":"function"},{"location":"traits/#LearnAPI.is_static","page":"Learner Traits","title":"LearnAPI.is_static","text":"LearnAPI.is_static(learner)\n\nReturns true if fit is called with no data arguments, as in fit(learner). That is, learner does not generalize to new data, and data is only provided at the predict or transform step.\n\nFor example, some clustering algorithms are applied with this workflow, to assign labels to the observations in X:\n\nmodel = fit(learner) # no training data\nlabels = predict(model, X) # may mutate `model`!\n\n# extract some byproducts of the clustering algorithm (e.g., outliers):\nLearnAPI.extras(model)\n\nNew implementations\n\nThis trait, falling back to false, may only be overloaded when fit has no data arguments. 
See more at fit.\n\n\n\n\n\n","category":"function"},{"location":"target_weights_features/#input","page":"target/weights/features","title":"target, weights, and features","text":"","category":"section"},{"location":"target_weights_features/","page":"target/weights/features","title":"target/weights/features","text":"Methods for extracting parts of training data:","category":"page"},{"location":"target_weights_features/","page":"target/weights/features","title":"target/weights/features","text":"LearnAPI.target(learner, data) -> \nLearnAPI.weights(learner, data) -> \nLearnAPI.features(learner, data) -> ","category":"page"},{"location":"target_weights_features/","page":"target/weights/features","title":"target/weights/features","text":"Here data is something supported in a call of the form fit(learner, data). ","category":"page"},{"location":"target_weights_features/#Typical-workflow","page":"target/weights/features","title":"Typical workflow","text":"","category":"section"},{"location":"target_weights_features/","page":"target/weights/features","title":"target/weights/features","text":"Not typically appearing in a general user's workflow but useful in meta-algorithms, such as cross-validation (see the example in obs and Data Interfaces).","category":"page"},{"location":"target_weights_features/","page":"target/weights/features","title":"target/weights/features","text":"Supposing learner is a supervised classifier predicting a one-dimensional vector target:","category":"page"},{"location":"target_weights_features/","page":"target/weights/features","title":"target/weights/features","text":"model = fit(learner, data)\nX = LearnAPI.features(learner, data)\ny = LearnAPI.target(learner, data)\nŷ = predict(model, Point(), X)\ntraining_loss = sum(ŷ .!= y)","category":"page"},{"location":"target_weights_features/#Implementation-guide","page":"target/weights/features","title":"Implementation guide","text":"","category":"section"},{"location":"target_weights_features/","page":"target/weights/features","title":"target/weights/features","text":"method fallback compulsory?\nLearnAPI.target returns nothing no\nLearnAPI.weights returns nothing no\nLearnAPI.features see docstring if fallback insufficient","category":"page"},{"location":"target_weights_features/#Reference","page":"target/weights/features","title":"Reference","text":"","category":"section"},{"location":"target_weights_features/","page":"target/weights/features","title":"target/weights/features","text":"LearnAPI.target\nLearnAPI.weights\nLearnAPI.features","category":"page"},{"location":"target_weights_features/#LearnAPI.target","page":"target/weights/features","title":"LearnAPI.target","text":"LearnAPI.target(learner, data) -> target\n\nReturn, for each form of data supported in a call of the form fit(learner, data), the target variable part of data. If nothing is returned, the learner does not see a target variable in training (is unsupervised).\n\nThe returned object y has the same number of observations as data. If data is the output of an obs call, then y is additionally guaranteed to implement the data interface specified by LearnAPI.data_interface(learner).\n\nExtended help\n\nWhat is a target variable?\n\nExamples of target variables are house prices in real estate pricing estimates, the \"spam\"/\"not spam\" labels in an email spam filtering task, \"outlier\"/\"inlier\" labels in outlier detection, cluster labels in clustering problems, and censored survival times in survival analysis. 
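For a hypothetical learner trained on (X, y) tuples, the overloading described under "New implementations" below is typically a one-liner:

```julia
# return the target part of `(X, y)` training data (illustrative sketch):
LearnAPI.target(::MyClassifier, data::Tuple) = last(data)
```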
For more on targets and target proxies, see the \"Reference\" section of the LearnAPI.jl documentation.\n\nNew implementations\n\nA fallback returns nothing. The method must be overloaded if fit consumes data including a target variable.\n\nIf overloading obs, ensure that the return value, unless nothing, implements the data interface specified by LearnAPI.data_interface(learner), in the special case that data is the output of an obs call.\n\nIf overloaded, you must include :(LearnAPI.target) in the tuple returned by the LearnAPI.functions trait. \n\n\n\n\n\n","category":"function"},{"location":"target_weights_features/#LearnAPI.weights","page":"target/weights/features","title":"LearnAPI.weights","text":"LearnAPI.weights(learner, data) -> weights\n\nReturn, for each form of data supported in a call of the form fit(learner, data), the per-observation weights part of data. Where nothing is returned, no weights are part of data, which is to be interpreted as uniform weighting.\n\nThe returned object w has the same number of observations as data. If data is the output of an obs call, then w is additionally guaranteed to implement the data interface specified by LearnAPI.data_interface(learner).\n\nExtended help\n\nNew implementations\n\nOverloading is optional. A fallback returns nothing.\n\nIf overloading obs, ensure that the return value, unless nothing, implements the data interface specified by LearnAPI.data_interface(learner), in the special case that data is the output of an obs call.\n\nIf overloaded, you must include :(LearnAPI.weights) in the tuple returned by the LearnAPI.functions trait. \n\n\n\n\n\n","category":"function"},{"location":"target_weights_features/#LearnAPI.features","page":"target/weights/features","title":"LearnAPI.features","text":"LearnAPI.features(learner, data)\n\nReturn, for each form of data supported in a call of the form fit(learner, data), the \"features\" part of data (as opposed to the target variable, for example).\n\nThe returned object X may always be passed to predict or transform, where implemented, as in the following sample workflow:\n\nmodel = fit(learner, data)\nX = LearnAPI.features(learner, data)\nŷ = predict(model, kind_of_proxy, X) # e.g., `kind_of_proxy = Point()`\n\nFor supervised models (i.e., where :(LearnAPI.target) in LearnAPI.functions(learner)), ŷ above is generally intended to be an approximate proxy for LearnAPI.target(learner, data), the training target.\n\nThe object X returned by LearnAPI.features has the same number of observations as data. If data is the output of an obs call, then X is additionally guaranteed to implement the data interface specified by LearnAPI.data_interface(learner).\n\nExtended help\n\nNew implementations\n\nFor density estimators, whose fit typically consumes only a target variable, you should overload this method to return nothing.\n\nIt must otherwise be possible to pass the return value X to predict and/or transform, and X must have the same number of observations as data. A fallback returns first(data) if data is a tuple, and otherwise returns data.\n\nFurther overloadings may be necessary to handle the case that data is the output of obs(learner, data), if obs is being overloaded. 
In this case, be sure that X, unless nothing, implements the data interface specified by LearnAPI.data_interface(learner).\n\n\n\n\n\n","category":"function"},{"location":"patterns/feature_engineering/#Feature-Engineering","page":"Feature Engineering","title":"Feature Engineering","text":"","category":"section"},{"location":"patterns/feature_engineering/","page":"Feature Engineering","title":"Feature Engineering","text":"See these examples from the LearnTestAPI.jl test suite:","category":"page"},{"location":"patterns/feature_engineering/","page":"Feature Engineering","title":"Feature Engineering","text":"feature selectors","category":"page"},{"location":"fit_update/#fit_docs","page":"fit/update","title":"fit, update, update_observations, and update_features","text":"","category":"section"},{"location":"fit_update/#Training","page":"fit/update","title":"Training","text":"","category":"section"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"fit(learner, data; verbosity=LearnAPI.default_verbosity()) -> model\nfit(learner; verbosity=LearnAPI.default_verbosity()) -> static_model ","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"A \"static\" algorithm is one that does not generalize to new observations (e.g., some clustering algorithms); there is no training data and the algorithm is executed by predict or transform, which receive the data. See example below.","category":"page"},{"location":"fit_update/#Updating","page":"fit/update","title":"Updating","text":"","category":"section"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"update(model, data; verbosity=..., param1=new_value1, param2=new_value2, ...) -> updated_model\nupdate_observations(model, new_data; verbosity=..., param1=new_value1, ...) -> updated_model\nupdate_features(model, new_data; verbosity=..., param1=new_value1, ...) 
-> updated_model","category":"page"},{"location":"fit_update/#Typical-workflows","page":"fit/update","title":"Typical workflows","text":"","category":"section"},{"location":"fit_update/#Supervised-models","page":"fit/update","title":"Supervised models","text":"","category":"section"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"Supposing Learner is some supervised classifier type, with an iteration parameter n:","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"learner = Learner(n=100)\nmodel = fit(learner, (X, y))\n\n# Predict probability distributions:\nŷ = predict(model, Distribution(), Xnew) \n\n# Inspect some byproducts of training:\nLearnAPI.feature_importances(model)\n\n# Add 50 iterations and predict again:\nmodel = update(model; n=150)\npredict(model, Distribution(), X)","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"See also Classification and Regression.","category":"page"},{"location":"fit_update/#Transformers","page":"fit/update","title":"Transformers","text":"","category":"section"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"A dimension-reducing transformer, learner, might be used in this way:","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"model = fit(learner, X)\ntransform(model, X) # or `transform(model, Xnew)`","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"or, if implemented, using a single call:","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"transform(learner, X) # `fit` implied","category":"page"},{"location":"fit_update/#static_algorithms","page":"fit/update","title":"Static algorithms (no \"learning\")","text":"","category":"section"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"Suppose learner is some clustering algorithm that cannot be generalized to new data (e.g. 
DBSCAN):","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"model = fit(learner) # no training data\nlabels = predict(model, X) # may mutate `model`\n\n# Or, in one line:\nlabels = predict(learner, X)\n\n# But the two-line version exposes byproducts of the clustering algorithm (e.g., outliers):\nLearnAPI.extras(model)","category":"page"},
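{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"If implementing such a static learner oneself, one would also declare it static via a trait (a sketch; MyClusterer is a hypothetical learner type):\n\nLearnAPI.is_static(::MyClusterer) = true\n\nThis tells LearnAPI.jl that fit takes no data argument for this learner.","category":"page"},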
{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"See also Static Algorithms.","category":"page"},{"location":"fit_update/#Density-estimation","page":"fit/update","title":"Density estimation","text":"","category":"section"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"In density estimation, fit consumes no features, only a target variable; predict, which consumes no data, returns the learned density:","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"model = fit(learner, y) # no features\npredict(model) # shortcut for `predict(model, SingleDistribution())`, or similar","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"A one-liner will typically be implemented as well:","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"predict(learner, y)","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"See also Density Estimation.","category":"page"},{"location":"fit_update/#Implementation-guide","page":"fit/update","title":"Implementation guide","text":"","category":"section"},{"location":"fit_update/#Training-2","page":"fit/update","title":"Training","text":"","category":"section"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"Exactly one of the following must be implemented:","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"method fallback\nfit(learner, data; verbosity=LearnAPI.default_verbosity()) none\nfit(learner; verbosity=LearnAPI.default_verbosity()) none","category":"page"},{"location":"fit_update/#Updating-2","page":"fit/update","title":"Updating","text":"","category":"section"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"method fallback compulsory?\nupdate(model, data; verbosity=..., hyperparameter_updates...) none no\nupdate_observations(model, data; verbosity=..., hyperparameter_updates...) none no\nupdate_features(model, data; verbosity=..., hyperparameter_updates...) none no","category":"page"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"There are some contracts governing the behaviour of the update methods, as they relate to a previous fit call. Consult the document strings for details.","category":"page"},{"location":"fit_update/#Reference","page":"fit/update","title":"Reference","text":"","category":"section"},{"location":"fit_update/","page":"fit/update","title":"fit/update","text":"fit\nupdate\nupdate_observations\nupdate_features\nLearnAPI.default_verbosity","category":"page"},{"location":"fit_update/#LearnAPI.fit","page":"fit/update","title":"LearnAPI.fit","text":"fit(learner, data; verbosity=LearnAPI.default_verbosity())\nfit(learner; verbosity=LearnAPI.default_verbosity())\n\nExecute the machine learning or statistical algorithm with configuration learner using the provided training data, returning an object, model, on which other methods, such as predict or transform, can be dispatched. LearnAPI.functions(learner) returns a list of methods that can be applied to either learner or model.\n\nFor example, a supervised classifier might have a workflow like this:\n\nmodel = fit(learner, (X, y))\nŷ = predict(model, Xnew)\n\nThe signature fit(learner; verbosity=...) (no data) is provided by learners that do not generalize to new observations (called static algorithms). In that case, transform(model, data) or predict(model, ..., data) carries out the actual algorithm execution, writing any byproducts of that operation to the mutable object model returned by fit.\n\nUse verbosity=0 for warnings only, and -1 for silent training.\n\nSee also LearnAPI.default_verbosity, predict, transform, inverse_transform, LearnAPI.functions, obs.\n\nExtended help\n\nNew implementations\n\nImplementation of exactly one of the signatures is compulsory. If fit(learner; verbosity=...) is implemented, then the trait LearnAPI.is_static must be overloaded to return true.\n\nThe signature must include verbosity with LearnAPI.default_verbosity() as default.\n\nIf data encapsulates a target variable, as defined in LearnAPI.jl documentation, then LearnAPI.target(learner, data) must be overloaded to return it. If predict or transform are implemented and consume data, then LearnAPI.features(learner, data) must return something that can be passed as data to these methods. A fallback returns first(data) if data is a tuple, and data otherwise.\n\nThe LearnAPI.jl specification has nothing to say regarding fit signatures with more than two arguments. For convenience, for example, an implementation is free to implement a slurping signature, such as fit(learner, X, y, extras...) = fit(learner, (X, y, extras...)), but LearnAPI.jl does not guarantee such signatures are actually implemented.\n\nAssumptions about data\n\nBy default, it is assumed that data supports the LearnAPI.RandomAccess interface; this includes all matrices, with observations-as-columns, most tables, and tuples thereof. See LearnAPI.RandomAccess for details. If this is not the case, then an implementation must either: (i) overload obs to articulate how provided data can be transformed into a form that does support LearnAPI.RandomAccess; or (ii) overload the trait LearnAPI.data_interface to specify a more relaxed data API. Refer to document strings for details.\n\n\n\n\n\n","category":"function"},{"location":"fit_update/#LearnAPI.update","page":"fit/update","title":"LearnAPI.update","text":"update(model, data; verbosity=LearnAPI.default_verbosity(), hyperparam_replacements...)\n\nReturn an updated version of the model object returned by a previous fit or update call, but with the specified hyperparameter replacements, in the form p1=value1, p2=value2, ....\n\nlearner = MyForest(ntrees=100)\n\n# train with 100 trees:\nmodel = fit(learner, data)\n\n# add 50 more trees:\nmodel = update(model, data; ntrees=150)\n\nProvided that data is identical with the data presented in a preceding fit call and there is at most one hyperparameter replacement, as in the above example, execution is semantically equivalent to the call fit(learner, data), where learner is LearnAPI.learner(model) with the specified replacements. 
In some cases (typically, when changing an iteration parameter) there may be a performance benefit to using update instead of retraining ab initio.\n\nIf data differs from that in the preceding fit or update call, or there is more than one hyperparameter replacement, then behaviour is learner-specific.\n\nSee also fit, update_observations, update_features.\n\nNew implementations\n\nImplementation is optional. The signature must include verbosity. If implemented, you must include :(LearnAPI.update) in the tuple returned by the LearnAPI.functions trait. \n\nSee also LearnAPI.clone.\n\n\n\n\n\n","category":"function"},{"location":"fit_update/#LearnAPI.update_observations","page":"fit/update","title":"LearnAPI.update_observations","text":"update_observations(\n model,\n new_data;\n parameter_replacements...,\n verbosity=LearnAPI.default_verbosity(),\n)\n\nReturn an updated version of the model object returned by a previous fit or update call given the new observations present in new_data. One may additionally specify hyperparameter replacements in the form p1=value1, p2=value2, ....\n\nlearner = MyNeuralNetwork(epochs=10, learning_rate=0.01)\n\n# train for ten epochs:\nmodel = fit(learner, data)\n\n# train for two more epochs using new data and new learning rate:\nmodel = update_observations(model, new_data; epochs=2, learning_rate=0.1)\n\nWhen following the call fit(learner, data), the update call is semantically equivalent to retraining ab initio using a concatenation of data and new_data, provided there are no hyperparameter replacements (which rules out the example above). Behaviour is otherwise learner-specific.\n\nSee also fit, update, update_features.\n\nExtended help\n\nNew implementations\n\nImplementation is optional. The signature must include verbosity. If implemented, you must include :(LearnAPI.update_observations) in the tuple returned by the LearnAPI.functions trait. \n\nSee also LearnAPI.clone.\n\n\n\n\n\n","category":"function"},{"location":"fit_update/#LearnAPI.update_features","page":"fit/update","title":"LearnAPI.update_features","text":"update_features(\n model,\n new_data;\n parameter_replacements...,\n verbosity=LearnAPI.default_verbosity(),\n)\n\nReturn an updated version of the model object returned by a previous fit or update call given the new features encapsulated in new_data. One may additionally specify hyperparameter replacements in the form p1=value1, p2=value2, ....\n\nWhen following the call fit(learner, data), the update call is semantically equivalent to retraining ab initio using a concatenation of data and new_data, provided there are no hyperparameter replacements. Behaviour is otherwise learner-specific.\n\nSee also fit, update, update_observations.\n\nExtended help\n\nNew implementations\n\nImplementation is optional. The signature must include verbosity. If implemented, you must include :(LearnAPI.update_features) in the tuple returned by the LearnAPI.functions trait. \n\nSee also LearnAPI.clone.\n\n\n\n\n\n","category":"function"},{"location":"fit_update/#LearnAPI.default_verbosity","page":"fit/update","title":"LearnAPI.default_verbosity","text":"LearnAPI.default_verbosity()\nLearnAPI.default_verbosity(level::Int)\n\nRespectively return, or set, the default verbosity level for LearnAPI.jl methods that support it, which includes fit, update, update_observations, and update_features. 
The effect in a top-level call is generally:\n\nlevel behaviour\n1 informational\n0 warnings only\n-1 silent\n\nMethods consuming verbosity generally call other verbosity-supporting methods at one level lower, so increasing verbosity beyond 1 may be useful.\n\n\n\n\n\n","category":"function"},{"location":"kinds_of_target_proxy/#proxy_types","page":"Kinds of Target Proxy","title":"Kinds of Target Proxy","text":"","category":"section"},{"location":"kinds_of_target_proxy/","page":"Kinds of Target Proxy","title":"Kinds of Target Proxy","text":"The available kinds of target proxy (used for predict dispatch) are classified by subtypes of LearnAPI.KindOfProxy. These types are intended for dispatch only and have no fields.","category":"page"},{"location":"kinds_of_target_proxy/","page":"Kinds of Target Proxy","title":"Kinds of Target Proxy","text":"LearnAPI.KindOfProxy","category":"page"},{"location":"kinds_of_target_proxy/#LearnAPI.KindOfProxy","page":"Kinds of Target Proxy","title":"LearnAPI.KindOfProxy","text":"LearnAPI.KindOfProxy\n\nAbstract type whose concrete subtypes T each represent a different kind of proxy for some target variable, associated with some learner. Instances T() are used to request the form of target predictions in predict calls.\n\nSee LearnAPI.jl documentation for an explanation of \"targets\" and \"target proxies\".\n\nFor example, Distribution is a concrete subtype of IID <: LearnAPI.KindOfProxy and a call like predict(model, Distribution(), Xnew) returns a data object whose observations are probability density/mass functions, assuming learner = LearnAPI.learner(model) supports predictions of that form, which is true if Distribution() in LearnAPI.kinds_of_proxy(learner).\n\nProxy types are grouped under three abstract subtypes:\n\nLearnAPI.IID: The main type, for proxies consisting of uncorrelated individual components, one for each input observation\nLearnAPI.Joint: For learners that predict a single probabilistic structure encapsulating correlations between target predictions for different input observations\nLearnAPI.Single: For learners, such as density estimators, that are trained on a target variable only (no features); predict consumes no data and the returned target proxy is a single probabilistic structure.\n\nFor lists of all concrete instances, refer to documentation for the relevant subtype.\n\n\n\n\n\n","category":"type"},{"location":"kinds_of_target_proxy/#Simple-target-proxies","page":"Kinds of Target Proxy","title":"Simple target proxies","text":"","category":"section"},{"location":"kinds_of_target_proxy/","page":"Kinds of Target Proxy","title":"Kinds of Target Proxy","text":"LearnAPI.IID","category":"page"},{"location":"kinds_of_target_proxy/#LearnAPI.IID","page":"Kinds of Target Proxy","title":"LearnAPI.IID","text":"LearnAPI.IID <: LearnAPI.KindOfProxy\n\nAbstract subtype of LearnAPI.KindOfProxy. 
If kind_of_proxy is an instance of LearnAPI.IID then, given data consisting of n observations, the following must hold:\n\nŷ = LearnAPI.predict(model, kind_of_proxy, data) is data, also consisting of n observations.\nThe jth observation of ŷ, for any j, depends only on the jth observation of the provided data (no correlation between observations).\n\nSee also LearnAPI.KindOfProxy.\n\nExtended help\n\ntype form of an observation\nPoint same as target observations; may have the interpretation of a 50% quantile, 50% expectile or mode\nSampleable object that can be sampled to obtain object of the same form as target observation\nDistribution explicit probability density/mass function whose sample space is all possible target observations\nLogDistribution explicit log-probability density/mass function whose sample space is all possible target observations\nProbability¹ numerical probability or probability vector\nLogProbability¹ log-probability or log-probability vector\nParametric¹ a list of parameters (e.g., mean and variance) describing some distribution\nLabelAmbiguous collections of labels (in the case of a multi-class target) but without a known correspondence to the original target labels (and of possibly different number) as in, e.g., clustering\nLabelAmbiguousSampleable sampleable version of LabelAmbiguous; see Sampleable above\nLabelAmbiguousDistribution pdf/pmf version of LabelAmbiguous; see Distribution above\nLabelAmbiguousFuzzy same as LabelAmbiguous but with multiple values of indeterminate number\nQuantile² same as target but with quantile interpretation\nExpectile² same as target but with expectile interpretation\nConfidenceInterval² confidence interval\nFuzzy finite but possibly varying number of target observations\nProbabilisticFuzzy as for Fuzzy but labeled with probabilities (not necessarily summing to one)\nSurvivalFunction survival function\nSurvivalDistribution probability distribution for survival time\nSurvivalHazardFunction hazard function for survival time\nOutlierScore numerical score reflecting degree of outlierness (not necessarily normalized)\nContinuous real-valued approximation/interpolation of a discrete-valued target, such as a count (e.g., number of phone calls)\n\n¹Provided for completeness but discouraged to avoid ambiguities in representation.\n\n²The level will be controlled by a hyper-parameter; models providing only quantiles or expectiles at 50% will provide Point instead.\n\n\n\n\n\n","category":"type"},{"location":"kinds_of_target_proxy/#Proxies-for-density-estimation-algorithms","page":"Kinds of Target Proxy","title":"Proxies for density estimation algorithms","text":"","category":"section"},{"location":"kinds_of_target_proxy/","page":"Kinds of Target Proxy","title":"Kinds of Target Proxy","text":"LearnAPI.Single","category":"page"},{"location":"kinds_of_target_proxy/#LearnAPI.Single","page":"Kinds of Target Proxy","title":"LearnAPI.Single","text":"Single <: KindOfProxy\n\nAbstract subtype of LearnAPI.KindOfProxy. It applies only to learners for which predict has no data argument, i.e., is of the form predict(model, kind_of_proxy). An example is an algorithm learning a probability distribution from samples, and we regard the samples as drawn from the \"target\" variable. 
If, in this case, kind_of_proxy is an instance of LearnAPI.Single, then predict(model) returns a single object representing a probability distribution.\n\ntype T form of output of predict(model, ::T)\nSingleSampleable object that can be sampled to obtain a single target observation\nSingleDistribution explicit probability density/mass function for sampling the target\nSingleLogDistribution explicit log-probability density/mass function for sampling the target\n\n\n\n\n\n","category":"type"},{"location":"kinds_of_target_proxy/#Joint-probability-distributions","page":"Kinds of Target Proxy","title":"Joint probability distributions","text":"","category":"section"},{"location":"kinds_of_target_proxy/","page":"Kinds of Target Proxy","title":"Kinds of Target Proxy","text":"LearnAPI.Joint","category":"page"},{"location":"kinds_of_target_proxy/#LearnAPI.Joint","page":"Kinds of Target Proxy","title":"LearnAPI.Joint","text":"Joint <: KindOfProxy\n\nAbstract subtype of LearnAPI.KindOfProxy. If kind_of_proxy is an instance of LearnAPI.Joint then, given data consisting of n observations, predict(model, kind_of_proxy, data) represents a single probability distribution for the sample space Y^n, where Y is the space from which the target variable takes its values.\n\ntype T form of output of predict(model, ::T, data)\nJointSampleable object that can be sampled to obtain a vector whose elements have the form of target observations; the vector length matches the number of observations in data.\nJointDistribution explicit probability density/mass function whose sample space is vectors of target observations; the vector length matches the number of observations in data\nJointLogDistribution explicit log-probability density/mass function whose sample space is vectors of target observations; the vector length matches the number of observations in data\n\n\n\n\n\n","category":"type"},{"location":"patterns/supervised_bayesian_models/#Supervised-Bayesian-Algorithms","page":"Supervised Bayesian Algorithms","title":"Supervised Bayesian Algorithms","text":"","category":"section"},{"location":"testing_an_implementation/#Testing-an-Implementation","page":"Testing an Implementation","title":"Testing an Implementation","text":"","category":"section"},{"location":"testing_an_implementation/","page":"Testing an Implementation","title":"Testing an Implementation","text":"🚧","category":"page"},{"location":"testing_an_implementation/","page":"Testing an Implementation","title":"Testing an Implementation","text":"warning: Warning\nUnder construction","category":"page"},{"location":"patterns/time_series_classification/#Time-Series-Classification","page":"Time Series Classification","title":"Time Series Classification","text":"","category":"section"},{"location":"anatomy_of_an_implementation/#Anatomy-of-an-Implementation","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"This section explains a detailed implementation of LearnAPI.jl for naive ridge regression with no intercept. The kind of workflow we want to enable has been previewed in Sample workflow. 
Readers can also refer to the demonstration of the implementation given later.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"The core LearnAPI.jl pattern looks like this:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"model = fit(learner, data)\npredict(model, newdata)","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Here learner specifies hyperparameters, while model stores learned parameters and any byproducts of algorithm execution.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"A transformer ordinarily implements transform instead of predict. For more on predict versus transform, see Predict or transform?","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"note: Note\nNew implementations of fit, predict, etc., always have a single data argument as above. For convenience, a signature such as fit(learner, X, y), calling fit(learner, (X, y)), can be added, but the LearnAPI.jl specification is silent on the meaning or existence of signatures with extra arguments.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"note: Note\nIf the data object consumed by fit, predict, or transform is not a suitable table¹, array³, tuple of tables and arrays, or some other object implementing the MLUtils.jl getobs/numobs interface, then an implementation must: (i) overload obs to articulate how provided data can be transformed into a form that does support this interface, as illustrated below under Providing a separate data front end, and which may additionally enable certain performance benefits; or (ii) overload the trait LearnAPI.data_interface to specify a more relaxed data API.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"The first line below imports the lightweight package LearnAPI.jl whose methods we will be extending. 
The second imports libraries needed for the core algorithm.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"using LearnAPI\nusing LinearAlgebra, Tables\nnothing # hide","category":"page"},{"location":"anatomy_of_an_implementation/#Defining-learners","page":"Anatomy of an Implementation","title":"Defining learners","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Here's a new type whose instances specify ridge regression hyperparameters:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"struct Ridge{T<:Real}\n lambda::T\nend\nnothing # hide","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Instances of Ridge are learners, in LearnAPI.jl parlance.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Associated with each new type of LearnAPI.jl learner will be a keyword argument constructor, providing default values for all properties (typically, struct fields) that are not other learners, and we must implement LearnAPI.constructor(learner), for recovering the constructor from an instance:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"\"\"\"\n Ridge(; lambda=0.1)\n\nInstantiate a ridge regression learner, with regularization of `lambda`.\n\"\"\"\nRidge(; lambda=0.1) = Ridge(lambda)\nLearnAPI.constructor(::Ridge) = Ridge\nnothing # hide","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"For example, in this case, if learner = Ridge(0.2), then LearnAPI.constructor(learner)(lambda=0.2) == learner is true. 
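This contract can be checked directly (a quick sanity test, not part of the implementation):\n\nlearner = Ridge(lambda=0.2)\n@assert LearnAPI.constructor(learner)(lambda=0.2) == learner\n@assert LearnAPI.clone(learner) == learner # equivalent check, using the LearnAPI utility\n\n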
Note that we attach the docstring to the constructor, not the struct.","category":"page"},{"location":"anatomy_of_an_implementation/#Implementing-fit","page":"Anatomy of an Implementation","title":"Implementing fit","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"A ridge regressor requires two types of data for training: input features X, which here we suppose are tabular¹, and a target y, which we suppose is a vector.⁴","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"It is convenient to define a new type for the fit output, which will include coefficients labelled by feature name for inspection after training:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"struct RidgeFitted{T,F}\n learner::Ridge\n coefficients::Vector{T}\n named_coefficients::F\nend\nnothing # hide","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Note that we also include learner in the struct, for it must be possible to recover learner from the output of fit; see Accessor functions below.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"The core implementation of fit looks like this:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"function LearnAPI.fit(learner::Ridge, data; verbosity=LearnAPI.default_verbosity())\n\n X, y = data\n\n # data preprocessing:\n table = Tables.columntable(X)\n names = Tables.columnnames(table) |> collect\n A = Tables.matrix(table, transpose=true)\n\n lambda = learner.lambda\n\n # apply core algorithm:\n coefficients = (A*A' + lambda*I)\\(A*y) # vector\n\n # determine named coefficients:\n named_coefficients = [names[j] => coefficients[j] for j in eachindex(names)]\n\n # make some noise, if allowed:\n verbosity > 0 && @info \"Coefficients: $named_coefficients\"\n\n return RidgeFitted(learner, coefficients, named_coefficients)\nend","category":"page"},{"location":"anatomy_of_an_implementation/#Implementing-predict","page":"Anatomy of an Implementation","title":"Implementing predict","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Users will be able to call predict like this:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"predict(model, Point(), Xnew)","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"where Xnew is a table (of the same form as X above). The argument Point() signals that literal predictions of the target variable are sought, as opposed to some proxy for the target, such as probability density functions. Point is an example of a LearnAPI.KindOfProxy type. 
Targets and target proxies are discussed here.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"We provide this implementation for our ridge regressor:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.predict(model::RidgeFitted, ::Point, Xnew) =\n Tables.matrix(Xnew)*model.coefficients","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"If the kind of proxy is omitted, as in predict(model, Xnew), then a fallback grabs the first element of the tuple returned by LearnAPI.kinds_of_proxy(learner), which we overload appropriately below.","category":"page"},{"location":"anatomy_of_an_implementation/#Extracting-the-target-from-training-data","page":"Anatomy of an Implementation","title":"Extracting the target from training data","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"The fit method consumes data which includes a target variable, i.e., the learner is a supervised learner. We must therefore declare how the target variable can be extracted from training data, by implementing LearnAPI.target:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.target(learner::Ridge, data) = last(data)","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"There is a similar method, LearnAPI.features, for declaring how training features can be extracted (something that can be passed to predict), but this method has a fallback which suffices here: it returns first(data) if data is a tuple, and data otherwise.","category":"page"},{"location":"anatomy_of_an_implementation/#Accessor-functions","page":"Anatomy of an Implementation","title":"Accessor functions","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"An accessor function has the output of fit as its sole argument. 
Every new implementation must implement the accessor function LearnAPI.learner for recovering a learner from a fitted object:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.learner(model::RidgeFitted) = model.learner","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Other accessor functions extract learned parameters or some standard byproducts of training, such as feature importances or training losses.² Here we implement an accessor function to extract the linear coefficients:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.coefficients(model::RidgeFitted) = model.named_coefficients\nnothing #hide","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"The LearnAPI.strip(model) accessor function is for returning a version of model suitable for serialization (typically smaller, with data anonymized). It has a fallback that just returns model, but for the sake of illustration we overload it to dump the named version of the coefficients:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.strip(model::RidgeFitted) =\n RidgeFitted(model.learner, model.coefficients, nothing)","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Crucially, we can still use LearnAPI.strip(model) in place of model to make new predictions.","category":"page"},{"location":"anatomy_of_an_implementation/#Learner-traits","page":"Anatomy of an Implementation","title":"Learner traits","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Learner traits record extra generic information about a learner, or make specific promises of behavior. They are methods that have a learner as the sole argument, and so we regard LearnAPI.constructor defined above as a trait.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Because we have implemented predict, we are required to overload the LearnAPI.kinds_of_proxy trait. 
Because we can only make point predictions of the target, we make this definition:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.kinds_of_proxy(::Ridge) = (Point(),)","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"A macro provides a shortcut, convenient when multiple traits are to be defined:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"@trait(\n Ridge,\n constructor = Ridge,\n kinds_of_proxy=(Point(),),\n tags = (:regression,),\n functions = (\n :(LearnAPI.fit),\n :(LearnAPI.learner),\n :(LearnAPI.strip),\n :(LearnAPI.obs),\n :(LearnAPI.features),\n :(LearnAPI.target),\n :(LearnAPI.predict),\n :(LearnAPI.coefficients),\n )\n)\nnothing # hide","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"The last trait, functions, returns a list of all LearnAPI.jl methods that can be meaningfully applied to the learner or associated model. See LearnAPI.functions for a checklist. LearnAPI.functions and LearnAPI.constructor are the only universally compulsory traits. However, it is worthwhile studying the list of all traits to see which might apply to a new implementation, to enable maximum buy-in to functionality provided by third party packages, and to assist third party algorithms that match machine learning algorithms to user-defined tasks.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Note that we know Ridge instances are supervised learners because :(LearnAPI.target) in LearnAPI.functions(learner) holds for every instance learner. With some exceptions, the value of a trait should depend only on the type of the argument.","category":"page"},{"location":"anatomy_of_an_implementation/#Signatures-added-for-convenience","page":"Anatomy of an Implementation","title":"Signatures added for convenience","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"We add one fit signature for user-convenience only. The LearnAPI.jl specification has nothing to say about fit signatures with more than two positional arguments.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.fit(learner::Ridge, X, y; kwargs...) 
= fit(learner, (X, y); kwargs...)","category":"page"},{"location":"anatomy_of_an_implementation/#workflow","page":"Anatomy of an Implementation","title":"Demonstration","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"We now illustrate how to interact directly with Ridge instances using the methods just implemented.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"# synthesize some data:\nn = 10 # number of observations\ntrain = 1:6\ntest = 7:10\na, b, c = rand(n), rand(n), rand(n)\nX = (; a, b, c)\ny = 2a - b + 3c + 0.05*rand(n)\nnothing # hide","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"learner = Ridge(lambda=0.5)\nforeach(println, LearnAPI.functions(learner))","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Training and predicting:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Xtrain = Tables.subset(X, train)\nytrain = y[train]\nmodel = fit(learner, (Xtrain, ytrain)) # `fit(learner, Xtrain, ytrain)` will also work\nŷ = predict(model, Tables.subset(X, test))","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Extracting coefficients:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.coefficients(model)","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Serialization/deserialization:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"using Serialization\nsmall_model = LearnAPI.strip(model)\nfilename = tempname()\nserialize(filename, small_model)","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"recovered_model = deserialize(filename)\n@assert LearnAPI.learner(recovered_model) == learner\n@assert predict(recovered_model, X) == predict(model, X)","category":"page"},{"location":"anatomy_of_an_implementation/#Providing-a-separate-data-front-end","page":"Anatomy of an Implementation","title":"Providing a separate data front end","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"using LearnAPI\nusing LinearAlgebra, Tables\n\nstruct Ridge{T<:Real}\n lambda::T\nend\n\nRidge(; lambda=0.1) = Ridge(lambda)\n\nstruct RidgeFitted{T,F}\n learner::Ridge\n coefficients::Vector{T}\n named_coefficients::F\nend\n\nLearnAPI.learner(model::RidgeFitted) = model.learner\nLearnAPI.coefficients(model::RidgeFitted) = model.named_coefficients\nLearnAPI.strip(model::RidgeFitted) =\n RidgeFitted(model.learner, model.coefficients, nothing)\n\n@trait(\n Ridge,\n constructor = Ridge,\n kinds_of_proxy=(Point(),),\n tags = (:regression,),\n functions = (\n :(LearnAPI.fit),\n :(LearnAPI.learner),\n 
:(LearnAPI.strip),\n :(LearnAPI.obs),\n :(LearnAPI.features),\n :(LearnAPI.target),\n :(LearnAPI.predict),\n :(LearnAPI.coefficients),\n )\n)\n\nn = 10 # number of observations\ntrain = 1:6\ntest = 7:10\na, b, c = rand(n), rand(n), rand(n)\nX = (; a, b, c)\ny = 2a - b + 3c + 0.05*rand(n)","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"An implementation may optionally implement obs, to expose to the user (or some meta-algorithm like cross-validation) the representation of input data internal to fit or predict, such as the matrix version A of X in the ridge example. That is, we may factor out of fit (and also predict) the data pre-processing step, obs, to expose its outcomes. These outcomes become alternative user inputs to fit. To see the use of obs in action, see below.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Here we specifically wrap all the pre-processed data into a single object, for which we introduce a new type:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"struct RidgeFitObs{T,M<:AbstractMatrix{T}}\n A::M # `p` x `n` matrix\n names::Vector{Symbol} # features\n y::Vector{T} # target\nend","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Now we overload obs to carry out the data pre-processing previously in fit, like this:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"function LearnAPI.obs(::Ridge, data)\n X, y = data\n table = Tables.columntable(X)\n names = Tables.columnnames(table) |> collect\n return RidgeFitObs(Tables.matrix(table)', names, y)\nend","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"We informally refer to the output of obs as \"observations\" (see The obs contract below). The previous core fit signature is now replaced with two methods: one to handle \"regular\" input, and one to handle the pre-processed data (observations), which appears first below:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"function LearnAPI.fit(learner::Ridge, observations::RidgeFitObs; verbosity=LearnAPI.default_verbosity())\n\n lambda = learner.lambda\n\n A = observations.A\n names = observations.names\n y = observations.y\n\n # apply core algorithm:\n coefficients = (A*A' + lambda*I)\\(A*y) # vector\n\n # determine named coefficients:\n named_coefficients = [names[j] => coefficients[j] for j in eachindex(names)]\n\n # make some noise, if allowed:\n verbosity > 0 && @info \"Coefficients: $named_coefficients\"\n\n return RidgeFitted(learner, coefficients, named_coefficients)\n\nend\n\nLearnAPI.fit(learner::Ridge, data; kwargs...) =\n fit(learner, obs(learner, data); kwargs...)","category":"page"},
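{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"With this front end in place, and given learner = Ridge(), the following two calls are equivalent (a usage sketch, assuming the X and y synthesized above):\n\nmodel = fit(learner, (X, y))\nmodel = fit(learner, obs(learner, (X, y))) # pre-processing made explicit\n\n","category":"page"},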
{"location":"anatomy_of_an_implementation/#The-obs-contract","page":"Anatomy of an Implementation","title":"The obs contract","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Providing fit signatures matching the output of obs is the first part of the obs contract. Since obs(learner, data) should evidently support all data that fit(learner, data) supports, we must be able to apply obs(learner, _) to its own output (observations below). This leads to the additional \"no-op\" declaration","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.obs(::Ridge, observations::RidgeFitObs) = observations","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"In other words, we ensure that obs(learner, _) is idempotent.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"The second part of the obs contract is this: The output of obs must implement the interface specified by the trait LearnAPI.data_interface(learner). Assuming this is LearnAPI.RandomAccess() (the default), it usually suffices to overload Base.getindex and Base.length:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Base.getindex(data::RidgeFitObs, I) =\n RidgeFitObs(data.A[:,I], data.names, data.y[I])\nBase.length(data::RidgeFitObs) = length(data.y)","category":"page"},
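{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"These two definitions are enough for MLUtils.jl resampling to work on RidgeFitObs objects, since MLUtils.getobs and MLUtils.numobs fall back to Base.getindex and Base.length. A quick check (a sketch, assuming the data synthesized earlier):\n\nimport MLUtils\nobservations = obs(Ridge(), (X, y))\n@assert MLUtils.numobs(observations) == length(y)\nMLUtils.getobs(observations, 1:3) # a RidgeFitObs holding three observations\n\n","category":"page"},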
{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"We do something similar for predict, but there's no need for a new type in this case:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.obs(::RidgeFitted, Xnew) = Tables.matrix(Xnew)'\nLearnAPI.obs(::RidgeFitted, observations::AbstractArray) = observations # idempotency\n\nLearnAPI.predict(model::RidgeFitted, ::Point, observations::AbstractMatrix) =\n observations'*model.coefficients\n\nLearnAPI.predict(model::RidgeFitted, ::Point, Xnew) =\n predict(model, Point(), obs(model, Xnew))","category":"page"},{"location":"anatomy_of_an_implementation/#target-and-features-methods","page":"Anatomy of an Implementation","title":"target and features methods","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"We provide an additional overloading of LearnAPI.target to handle the additional supported data argument of fit:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.target(::Ridge, observations::RidgeFitObs) = observations.y","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Similarly, we must overload LearnAPI.features, which extracts features from training data (objects that can be passed to predict), like this:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.features(::Ridge, observations::RidgeFitObs) = observations.A","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"as the fallback mentioned above is no longer adequate.","category":"page"},{"location":"anatomy_of_an_implementation/#Important-notes:","page":"Anatomy of an Implementation","title":"Important notes:","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"The observations to be consumed by fit are returned by obs(learner::Ridge, ...), while those consumed by predict are returned by obs(model::RidgeFitted, ...). We need the different signatures because the form of data consumed by fit and predict is generally different.\nWe need the adjoint operator, ', because the last dimension in arrays is the observation dimension, according to the MLUtils.jl convention. Remember, Xnew is a table here.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"Since LearnAPI.jl provides fallbacks for obs that simply return the unadulterated data argument, overloading obs is optional. This is the case provided the data appearing in publicized fit/predict signatures consists only of objects implementing the LearnAPI.RandomAccess interface (most tables¹, arrays³, and tuples thereof).","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"To opt out of supporting the MLUtils.jl interface altogether, an implementation must overload the trait LearnAPI.data_interface(learner). See Data interfaces for details.","category":"page"},{"location":"anatomy_of_an_implementation/#Addition-of-signatures-for-user-convenience","page":"Anatomy of an Implementation","title":"Addition of signatures for user convenience","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"As above, we add a signature which plays no role vis-à-vis LearnAPI.jl.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"LearnAPI.fit(learner::Ridge, X, y; kwargs...) 
= fit(learner, (X, y); kwargs...)","category":"page"},{"location":"anatomy_of_an_implementation/#advanced_demo","page":"Anatomy of an Implementation","title":"Demonstration of an advanced obs workflow","text":"","category":"section"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"We can now train and predict using internal data representations, resampled using the generic MLUtils.jl interface:","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"import MLUtils\nlearner = Ridge()\nobservations_for_fit = obs(learner, (X, y))\nmodel = fit(learner, MLUtils.getobs(observations_for_fit, train))\nobservations_for_predict = obs(model, X)\nẑ = predict(model, MLUtils.getobs(observations_for_predict, test))","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"@assert ẑ == ŷ","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"For an application of obs to efficient cross-validation, see here.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"¹ In LearnAPI.jl a table is any object X implementing the Tables.jl interface, additionally satisfying Tables.istable(X) == true and implementing DataAPI.nrow (and whence MLUtils.numobs). Tables that are also (unnamed) tuples are disallowed.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"² An implementation can provide further accessor functions, if necessary, but like the native ones, they must be included in the LearnAPI.functions declaration.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"³ The last index must be the observation index.","category":"page"},{"location":"anatomy_of_an_implementation/","page":"Anatomy of an Implementation","title":"Anatomy of an Implementation","text":"⁴ The data = (X, y) pattern implemented here is not the only supported pattern. For example, data might be a single table containing both features and the target variable. In this case, it will be necessary to overload LearnAPI.features in addition to LearnAPI.target; the name of the target column would need to be a hyperparameter. 
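A hypothetical sketch (TableRidge is not part of the implementation above):\n\nstruct TableRidge\n lambda::Float64\n target::Symbol # name of the target column\nend\nLearnAPI.target(learner::TableRidge, data) =\n Tables.getcolumn(Tables.columns(data), learner.target)\n# LearnAPI.features would similarly be overloaded to return the remaining columns\n\n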
","category":"page"},{"location":"patterns/static_algorithms/#Static-Algorithms","page":"Static Algorithms","title":"Static Algorithms","text":"","category":"section"},{"location":"patterns/static_algorithms/","page":"Static Algorithms","title":"Static Algorithms","text":"See these examples from the JuliaTestAI.jl test suite:","category":"page"},{"location":"patterns/static_algorithms/","page":"Static Algorithms","title":"Static Algorithms","text":"feature selection","category":"page"},{"location":"patterns/meta_algorithms/#Meta-algorithms","page":"Meta-algorithms","title":"Meta-algorithms","text":"","category":"section"},{"location":"patterns/meta_algorithms/","page":"Meta-algorithms","title":"Meta-algorithms","text":"Many meta-algorithms can be implemented as wrappers. An example is this bagged ensemble algorithm from tests.","category":"page"},{"location":"patterns/clusterering/#Clusterering","page":"Clusterering","title":"Clusterering","text":"","category":"section"},{"location":"reference/#reference","page":"Overview","title":"Reference","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"Here we give the definitive specification of the LearnAPI.jl interface. For informal guides see Anatomy of an Implementation and Common Implementation Patterns.","category":"page"},{"location":"reference/#scope","page":"Overview","title":"Important terms and concepts","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"The LearnAPI.jl specification is predicated on a few basic, informally defined notions:","category":"page"},{"location":"reference/#Data-and-observations","page":"Overview","title":"Data and observations","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"ML/statistical algorithms are typically applied in conjunction with resampling of observations, as in cross-validation. In this document, data will always refer to objects encapsulating an ordered sequence of individual observations.","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"A DataFrame instance, from DataFrames.jl, is an example of data, the observations being the rows. Typically, data provided to LearnAPI.jl algorithms will implement the MLUtils.jl getobs/numobs interface for accessing individual observations, but implementations can opt out of this requirement; see obs and LearnAPI.data_interface for details.","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"note: Note\nIn the MLUtils.jl convention, observations in tables are the rows but observations in a matrix are the columns.","category":"page"},{"location":"reference/#hyperparameters","page":"Overview","title":"Hyperparameters","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"Besides the data it consumes, a machine learning algorithm's behavior is governed by a number of user-specified hyperparameters, such as the number of trees in a random forest. In LearnAPI.jl, one is allowed to have hyperparameters that are not data-generic. For example, a class weight dictionary, which will only make sense for a target taking values in the set of dictionary keys, can be specified as a hyperparameter. 
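For instance (a hypothetical sketch):\n\nstruct WeightedClassifier\n class_weights::Dict{Symbol,Float64} # e.g. Dict(:cat => 2.0, :dog => 1.0)\nend\n\n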
","category":"page"},{"location":"reference/#proxy","page":"Overview","title":"Targets and target proxies","text":"","category":"section"},{"location":"reference/#Context","page":"Overview","title":"Context","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"After training, a supervised classifier predicts labels on some input, which are then compared with ground truth labels using some accuracy measure, to assess the performance of the classifier. Alternatively, the classifier predicts class probabilities, which are instead paired with ground truth labels using a proper scoring rule, say. In outlier detection, \"outlier\"/\"inlier\" predictions, or probability-like scores, are similarly compared with ground truth labels. In clustering, integer labels assigned to observations by the clustering algorithm can be paired with human labels using, say, the Rand index. In survival analysis, predicted survival functions or probability distributions are compared with censored ground truth survival times. And so on ...","category":"page"},{"location":"reference/#Definitions","page":"Overview","title":"Definitions","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"More generally, whenever we have a variable (e.g., a class label) that can, at least in principle, be paired with a predicted value, or some predicted \"proxy\" for that variable (such as a class probability), then we call the variable a target variable, and the predicted output a target proxy. In this definition, it is immaterial whether or not the target appears in training (the algorithm is supervised) or whether or not predictions generalize to new input observations (the algorithm \"learns\").","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"LearnAPI.jl provides singleton target proxy types for prediction dispatch. These are also used to distinguish performance metrics provided by the package StatisticalMeasures.jl.","category":"page"},{"location":"reference/#learners","page":"Overview","title":"Learners","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"An object implementing the LearnAPI.jl interface is called a learner, although it is more accurately \"the configuration of some machine learning or statistical algorithm\".¹ A learner encapsulates a particular set of user-specified hyperparameters as the object's properties (which conceivably differ from its fields). It does not store learned parameters.","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"Informally, we will sometimes use the word \"model\" to refer to the output of fit(learner, ...) 
(see below), something which typically does store learned parameters.","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"For learner to be a valid LearnAPI.jl learner, LearnAPI.constructor(learner) must be defined and return a keyword constructor enabling recovery of learner from its properties:","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"properties = propertynames(learner)\nnamed_properties = NamedTuple{properties}(getproperty.(Ref(learner), properties))\n@assert learner == LearnAPI.constructor(learner)(; named_properties...)","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"which can be tested with @assert LearnAPI.clone(learner) == learner.","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"Note that if learner is an instance of a mutable struct, this generally requires overloading Base.== for the struct.","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"important: Important\nNo LearnAPI.jl method is permitted to mutate a learner. In particular, one should make deep copies of RNG hyperparameters before using them in a new implementation of fit.","category":"page"},{"location":"reference/#Composite-learners-(wrappers)","page":"Overview","title":"Composite learners (wrappers)","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"A composite learner is one with at least one property that can take other learners as values; for such learners LearnAPI.is_composite(learner) must be true (fallback is false). Generally, the keyword constructor provided by LearnAPI.constructor must provide default values for all properties that are not learner-valued. Learner-valued properties can instead have a nothing default, with the constructor throwing an error if a call does not explicitly specify a new value.","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"Any object learner for which LearnAPI.functions(learner) is non-empty is understood to have a valid implementation of the LearnAPI.jl interface.","category":"page"},{"location":"reference/#Example","page":"Overview","title":"Example","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"Below is an example of a learner type with a valid constructor:","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"struct GradientRidgeRegressor{T<:Real}\n learning_rate::T\n epochs::Int\n l2_regularization::T\nend\nGradientRidgeRegressor(; learning_rate=0.01, epochs=10, l2_regularization=0.01) =\n GradientRidgeRegressor(learning_rate, epochs, l2_regularization)\nLearnAPI.constructor(::GradientRidgeRegressor) = GradientRidgeRegressor","category":"page"},{"location":"reference/#Documentation","page":"Overview","title":"Documentation","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"Attach public LearnAPI.jl-related documentation for a learner to its constructor, rather than to the struct defining its type. 
In this way, a learner can implement multiple interfaces, in addition to the LearnAPI interface, with separate document strings for each.","category":"page"},{"location":"reference/#Methods","page":"Overview","title":"Methods","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"note: Compulsory methods\nAll new learner types must implement fit, LearnAPI.learner, LearnAPI.constructor and LearnAPI.functions.","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"Most learners will also implement predict and/or transform. For a minimal (but useless) implementation, see the implementation of SmallLearner here.","category":"page"},{"location":"reference/#List-of-methods","page":"Overview","title":"List of methods","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"fit: for (i) training learners that generalize to new data; or (ii) wrapping learner in an object that is possibly mutated by predict/transform, to record byproducts of those operations, in the special case of non-generalizing learners (called here static algorithms)\nupdate: for updating learning outcomes after hyperparameter changes, such as increasing an iteration parameter.\nupdate_observations, update_features: update learning outcomes by presenting additional training data.\npredict: for outputting targets or target proxies (such as probability density functions)\ntransform: similar to predict, but for arbitrary kinds of output, and which can be paired with an inverse_transform method\ninverse_transform: for inverting the output of transform (\"inverting\" broadly understood)\nLearnAPI.target, LearnAPI.weights, LearnAPI.features: for extracting relevant parts of training data, where defined.\nobs: method for exposing to the user learner-specific representations of data, which are additionally guaranteed to implement the observation access API specified by LearnAPI.data_interface(learner).\nAccessor functions: these include functions like LearnAPI.feature_importances and LearnAPI.training_losses, for extracting, from training outcomes, information common to many learners. This includes LearnAPI.strip(model) for replacing a learning outcome model with a serializable version that can still predict or transform.\nLearner traits: methods that promise specific learner behavior or record general information about the learner. Only LearnAPI.constructor and LearnAPI.functions are universally compulsory.","category":"page"},{"location":"reference/#Utilities","page":"Overview","title":"Utilities","text":"","category":"section"},{"location":"reference/","page":"Overview","title":"Overview","text":"LearnAPI.clone\nLearnAPI.@trait","category":"page"},{"location":"reference/#LearnAPI.clone","page":"Overview","title":"LearnAPI.clone","text":"LearnAPI.clone(learner; replacements...)\n\nReturn a shallow copy of learner with the specified hyperparameter replacements.\n\nclone(learner; epochs=100, learning_rate=0.01)\n\nA LearnAPI.jl contract ensures that LearnAPI.clone(learner) == learner.\n\n\n\n\n\n","category":"function"},{"location":"reference/#LearnAPI.@trait","page":"Overview","title":"LearnAPI.@trait","text":"@trait(LearnerType, trait1=value1, trait2=value2, ...)\n\nOverload a number of traits for learners of type LearnerType. 
For example, the code\n\n@trait(\n RidgeRegressor,\n tags = (\"regression\", ),\n doc_url = \"https://some.cool.documentation\",\n)\n\nis equivalent to\n\nLearnAPI.tags(::RidgeRegressor) = (\"regression\", )\nLearnAPI.doc_url(::RidgeRegressor) = \"https://some.cool.documentation\"\n\n\n\n\n\n","category":"macro"},{"location":"reference/","page":"Overview","title":"Overview","text":"","category":"page"},{"location":"reference/","page":"Overview","title":"Overview","text":"¹ We acknowledge users may not like this terminology, and may know \"learner\" by some other name, such as \"strategy\", \"options\", \"hyperparameter set\", \"configuration\", \"algorithm\", or \"model\". Consensus on this point is difficult; see, e.g., this Julia Discourse discussion.","category":"page"},{"location":"accessor_functions/#accessor_functions","page":"Accessor Functions","title":"Accessor Functions","text":"","category":"section"},{"location":"accessor_functions/","page":"Accessor Functions","title":"Accessor Functions","text":"The sole argument of an accessor function is the output, model, of fit. Learners are free to implement any number of these, or none of them. Only LearnAPI.strip has a fallback, namely the identity.","category":"page"},{"location":"accessor_functions/","page":"Accessor Functions","title":"Accessor Functions","text":"LearnAPI.learner(model)\nLearnAPI.extras(model)\nLearnAPI.strip(model)\nLearnAPI.coefficients(model)\nLearnAPI.intercept(model)\nLearnAPI.tree(model)\nLearnAPI.trees(model)\nLearnAPI.feature_names(model)\nLearnAPI.feature_importances(model)\nLearnAPI.training_labels(model)\nLearnAPI.training_losses(model)\nLearnAPI.training_predictions(model)\nLearnAPI.training_scores(model)\nLearnAPI.components(model)","category":"page"},{"location":"accessor_functions/","page":"Accessor Functions","title":"Accessor Functions","text":"Learner-specific accessor functions may also be implemented. The names of all accessor functions are included in the list returned by LearnAPI.functions(learner).","category":"page"},{"location":"accessor_functions/#Implementation-guide","page":"Accessor Functions","title":"Implementation guide","text":"","category":"section"},{"location":"accessor_functions/","page":"Accessor Functions","title":"Accessor Functions","text":"All new implementations must implement LearnAPI.learner. 
While all others are optional, any implemented accessor functions must be added to the list returned by LearnAPI.functions.","category":"page"},{"location":"accessor_functions/#Reference","page":"Accessor Functions","title":"Reference","text":"","category":"section"},{"location":"accessor_functions/","page":"Accessor Functions","title":"Accessor Functions","text":"LearnAPI.learner\nLearnAPI.extras\nLearnAPI.strip\nLearnAPI.coefficients\nLearnAPI.intercept\nLearnAPI.tree\nLearnAPI.trees\nLearnAPI.feature_names\nLearnAPI.feature_importances\nLearnAPI.training_losses\nLearnAPI.training_predictions\nLearnAPI.training_scores\nLearnAPI.training_labels\nLearnAPI.components","category":"page"},{"location":"accessor_functions/#LearnAPI.learner","page":"Accessor Functions","title":"LearnAPI.learner","text":"LearnAPI.learner(model)\nLearnAPI.learner(stripped_model)\n\nRecover the learner used to train model or the output, stripped_model, of LearnAPI.strip(model).\n\nIn other words, if model = fit(learner, data...), for some learner and data, then\n\nLearnAPI.learner(model) == learner == LearnAPI.learner(LearnAPI.strip(model))\n\nis true.\n\nNew implementations\n\nImplementation is compulsory for new learner types. The behaviour described above is the only contract. You must include :(LearnAPI.learner) in the return value of LearnAPI.functions(learner).\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.extras","page":"Accessor Functions","title":"LearnAPI.extras","text":"LearnAPI.extras(model)\n\nReturn miscellaneous byproducts of a learning algorithm's execution, from the object model returned by a call of the form fit(learner, data).\n\nFor \"static\" learners (those without training data) it may be necessary to first call transform or predict on model.\n\nSee also fit.\n\nNew implementations\n\nImplementation is discouraged for byproducts already covered by other LearnAPI.jl accessor functions: LearnAPI.learner, LearnAPI.coefficients, LearnAPI.intercept, LearnAPI.tree, LearnAPI.trees, LearnAPI.feature_names, LearnAPI.feature_importances, LearnAPI.training_labels, LearnAPI.training_losses, LearnAPI.training_predictions, LearnAPI.training_scores and LearnAPI.components.\n\nIf implemented, you must include :(LearnAPI.extras) in the tuple returned by the LearnAPI.functions trait.\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#Base.strip","page":"Accessor Functions","title":"Base.strip","text":"LearnAPI.strip(model; options...)\n\nReturn a version of model that will generally have a smaller memory allocation than model, suitable for serialization. Here model is any object returned by fit. Accessor functions that can be called on model may not work on LearnAPI.strip(model), but predict, transform and inverse_transform will work, if implemented. Check LearnAPI.functions(LearnAPI.learner(model)) to see what the original model implements.\n\nImplementations may provide learner-specific keyword options to control how much of the original functionality is preserved by LearnAPI.strip.\n\nTypical workflow\n\nmodel = fit(learner, (X, y)) # or `fit(learner, X, y)`\nŷ = predict(model, Point(), Xnew)\n\nsmall_model = LearnAPI.strip(model)\nserialize(\"my_model.jls\", small_model)\n\nrecovered_model = deserialize(\"my_model.jls\")\n@assert predict(recovered_model, Point(), Xnew) == ŷ\n\nExtended help\n\nNew implementations\n\nOverloading LearnAPI.strip for new learners is optional. 
The fallback is the identity.\n\nNew implementations must enforce the following identities, whenever the right-hand side is defined:\n\npredict(LearnAPI.strip(model; options...), args...; kwargs...) ==\n predict(model, args...; kwargs...)\ntransform(LearnAPI.strip(model; options...), args...; kwargs...) ==\n transform(model, args...; kwargs...)\ninverse_transform(LearnAPI.strip(model; options...), args...; kwargs...) ==\n inverse_transform(model, args...; kwargs...)\n\nAdditionally:\n\nLearnAPI.strip(LearnAPI.strip(model)) == LearnAPI.strip(model)\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.coefficients","page":"Accessor Functions","title":"LearnAPI.coefficients","text":"LearnAPI.coefficients(model)\n\nFor a linear model, return the learned coefficients. The value returned has the form of an abstract vector of feature_or_class::Symbol => coefficient::Real pairs (e.g., [:gender => 0.23, :height => 0.7, :weight => 0.1]) or, in the case of multi-targets, feature::Symbol => coefficients::AbstractVector{<:Real} pairs.\n\nThe model reports coefficients if :(LearnAPI.coefficients) in LearnAPI.functions(LearnAPI.learner(model)).\n\nSee also LearnAPI.intercept.\n\nNew implementations\n\nImplementation is optional.\n\nIf implemented, you must include :(LearnAPI.coefficients) in the tuple returned by the LearnAPI.functions trait.\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.intercept","page":"Accessor Functions","title":"LearnAPI.intercept","text":"LearnAPI.intercept(model)\n\nFor a linear model, return the learned intercept. The value returned is Real (single target) or an AbstractVector{<:Real} (multi-target).\n\nThe model reports intercept if :(LearnAPI.intercept) in LearnAPI.functions(LearnAPI.learner(model)).\n\nSee also LearnAPI.coefficients.\n\nNew implementations\n\nImplementation is optional.\n\nIf implemented, you must include :(LearnAPI.intercept) in the tuple returned by the LearnAPI.functions trait.\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.tree","page":"Accessor Functions","title":"LearnAPI.tree","text":"LearnAPI.tree(model)\n\nReturn a user-friendly tree, in the form of a root object implementing the following interface defined in AbstractTrees.jl:\n\nsubtypes AbstractTrees.AbstractNode{T}\nimplements AbstractTrees.children()\nimplements AbstractTrees.printnode()\n\nSuch a tree can be visualized using the TreeRecipe.jl package, for example.\n\nSee also LearnAPI.trees.\n\nNew implementations\n\nImplementation is optional.\n\nIf implemented, you must include :(LearnAPI.tree) in the tuple returned by the LearnAPI.functions trait.\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.trees","page":"Accessor Functions","title":"LearnAPI.trees","text":"LearnAPI.trees(model)\n\nFor some ensemble model, return a vector of trees. See LearnAPI.tree for the form of such trees.\n\nSee also LearnAPI.tree.\n\nNew implementations\n\nImplementation is optional.\n\nIf implemented, you must include :(LearnAPI.trees) in the tuple returned by the LearnAPI.functions trait.
\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.feature_names","page":"Accessor Functions","title":"LearnAPI.feature_names","text":"LearnAPI.feature_names(model)\n\nReturn the names of features encountered when fitting or updating some learner to obtain model.\n\nThe value returned is a vector of symbols.\n\nThis method is implemented if :(LearnAPI.feature_names) in LearnAPI.functions(learner).\n\nSee also fit.\n\nNew implementations\n\nIf implemented, you must include :(LearnAPI.feature_names) in the tuple returned by the LearnAPI.functions trait.\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.feature_importances","page":"Accessor Functions","title":"LearnAPI.feature_importances","text":"LearnAPI.feature_importances(model)\n\nReturn the learner-specific feature importances of a model output by fit(learner, ...) for some learner. The value returned has the form of an abstract vector of feature::Symbol => importance::Real pairs (e.g., [:gender => 0.23, :height => 0.7, :weight => 0.1]).\n\nThe learner supports feature importances if :(LearnAPI.feature_importances) in LearnAPI.functions(learner).\n\nIf a learner is sometimes unable to report feature importances then LearnAPI.feature_importances will return all importances as 0.0, as in [:gender => 0.0, :height => 0.0, :weight => 0.0].\n\nNew implementations\n\nImplementation is optional.\n\nIf implemented, you must include :(LearnAPI.feature_importances) in the tuple returned by the LearnAPI.functions trait.\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.training_losses","page":"Accessor Functions","title":"LearnAPI.training_losses","text":"LearnAPI.training_losses(model)\n\nReturn the training losses obtained when running model = fit(learner, ...) for some learner.\n\nSee also fit.\n\nNew implementations\n\nImplement for iterative algorithms that compute and record training losses as part of training (e.g. neural networks).\n\nIf implemented, you must include :(LearnAPI.training_losses) in the tuple returned by the LearnAPI.functions trait.\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.training_predictions","page":"Accessor Functions","title":"LearnAPI.training_predictions","text":"LearnAPI.training_predictions(model)\n\nReturn internally computed training predictions when running model = fit(learner, ...) for some learner.\n\nSee also fit.\n\nNew implementations\n\nImplement for iterative algorithms that compute and record training predictions as part of training (e.g. neural networks).\n\nIf implemented, you must include :(LearnAPI.training_predictions) in the tuple returned by the LearnAPI.functions trait.\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.training_scores","page":"Accessor Functions","title":"LearnAPI.training_scores","text":"LearnAPI.training_scores(model)\n\nReturn the training scores obtained when running model = fit(learner, ...) for some learner.\n\nSee also fit.\n\nNew implementations\n\nImplement for learners, such as outlier detection algorithms, which associate a score with each observation during training, where these scores are of interest in later processes (e.g., in defining normalized scores for new data).\n\nIf implemented, you must include :(LearnAPI.training_scores) in the tuple returned by the LearnAPI.functions trait.
\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.training_labels","page":"Accessor Functions","title":"LearnAPI.training_labels","text":"LearnAPI.training_labels(model)\n\nReturn the training labels obtained when running model = fit(learner, ...) for some learner.\n\nSee also fit.\n\nNew implementations\n\nIf implemented, you must include :(LearnAPI.training_labels) in the tuple returned by the LearnAPI.functions trait.\n\n\n\n\n\n","category":"function"},{"location":"accessor_functions/#LearnAPI.components","page":"Accessor Functions","title":"LearnAPI.components","text":"LearnAPI.components(model)\n\nFor a composite model, return the component models (fit outputs). These will be in the form of a vector of named pairs, property_name::Symbol => component_model. Here property_name is the name of some learner-valued property (hyper-parameter) of learner = LearnAPI.learner(model).\n\nA composite model is one for which the corresponding learner includes one or more learner-valued properties, and for which LearnAPI.is_composite(learner) is true.\n\nSee also is_composite.\n\nNew implementations\n\nImplement if and only if model is a composite model.\n\nIf implemented, you must include :(LearnAPI.components) in the tuple returned by the LearnAPI.functions trait.\n\n\n\n\n\n","category":"function"},{"location":"patterns/dimension_reduction/#Dimension-Reduction","page":"Dimension Reduction","title":"Dimension Reduction","text":"","category":"section"},{"location":"patterns/dimension_reduction/","page":"Dimension Reduction","title":"Dimension Reduction","text":"See these examples from the JuliaTestAPI.jl test suite:","category":"page"},{"location":"patterns/dimension_reduction/","page":"Dimension Reduction","title":"Dimension Reduction","text":"Truncated SVD","category":"page"},{"location":"patterns/time_series_forecasting/#Time-Series-Forecasting","page":"Time Series Forecasting","title":"Time Series Forecasting","text":"","category":"section"},{"location":"obs/#data_interface","page":"obs","title":"obs and Data Interfaces","text":"","category":"section"},{"location":"obs/","page":"obs","title":"obs","text":"The obs method takes data intended as input to fit, predict or transform, and transforms it to a learner-specific form guaranteed to implement a form of observation access designated by the learner. The transformed data can then be passed on to the relevant method in place of the original input (after first resampling it, if the learner supports this). Using obs may provide performance advantages over naive workflows in some cases (e.g., cross-validation).","category":"page"},{"location":"obs/","page":"obs","title":"obs","text":"obs(learner, data) # can be passed to `fit` instead of `data`\nobs(model, data) # can be passed to `predict` or `transform` instead of `data`","category":"page"},{"location":"obs/#obs_workflows","page":"obs","title":"Typical workflows","text":"","category":"section"},{"location":"obs/","page":"obs","title":"obs","text":"LearnAPI.jl makes no universal assumptions about the form of data in a call like fit(learner, data). However, if we define","category":"page"},{"location":"obs/","page":"obs","title":"obs","text":"observations = obs(learner, data)","category":"page"},{"location":"obs/","page":"obs","title":"obs","text":"then, assuming the typical case that LearnAPI.data_interface(learner) == LearnAPI.RandomAccess(), observations implements the MLUtils.jl getobs/numobs interface, for grabbing and counting observations. 
Moreover, we can pass observations to fit in place of the original data, or first resample it using MLUtils.getobs:","category":"page"},{"location":"obs/","page":"obs","title":"obs","text":"# equivalent to `model = fit(learner, data)`\nmodel = fit(learner, observations)\n\n# with resampling:\nresampled_observations = MLUtils.getobs(observations, 1:10)\nmodel = fit(learner, resampled_observations)","category":"page"},{"location":"obs/","page":"obs","title":"obs","text":"In some implementations, the alternative pattern above can be used to avoid repeating unnecessary internal data preprocessing, or inefficient resampling. For example, here's how a user might call obs and MLUtils.getobs to perform efficient cross-validation:","category":"page"},{"location":"obs/","page":"obs","title":"obs","text":"using LearnAPI\nimport MLUtils\n\nlearner = \n\ndata = \nX = LearnAPI.features(learner, data)\ny = LearnAPI.target(learner, data)\n\ntrain_test_folds = map([1:10, 11:20, 21:30]) do test\n (setdiff(1:30, test), test)\nend\n\nfitobs = obs(learner, data)\nnever_trained = true\n\nscores = map(train_test_folds) do (train, test)\n\n # train using model-specific representation of data:\n fitobs_subset = MLUtils.getobs(fitobs, train)\n model = fit(learner, fitobs_subset)\n\n # predict on the fold complement:\n if never_trained\n global predictobs = obs(model, X)\n global never_trained = false\n end\n predictobs_subset = MLUtils.getobs(predictobs, test)\n ŷ = predict(model, Point(), predictobs_subset)\n\n return \n\nend","category":"page"},{"location":"obs/#Implementation-guide","page":"obs","title":"Implementation guide","text":"","category":"section"},{"location":"obs/","page":"obs","title":"obs","text":"method comment compulsory? fallback\nobs(learner, data) here data is fit-consumable not typically returns data\nobs(model, data) here data is predict-consumable not typically returns data","category":"page"},{"location":"obs/","page":"obs","title":"obs","text":"A sample implementation is given in Providing a separate data front end. ","category":"page"},{"location":"obs/#Reference","page":"obs","title":"Reference","text":"","category":"section"},{"location":"obs/","page":"obs","title":"obs","text":"obs","category":"page"},{"location":"obs/#LearnAPI.obs","page":"obs","title":"LearnAPI.obs","text":"obs(learner, data)\nobs(model, data)\n\nReturn learner-specific representation of data, suitable for passing to fit (first signature) or to predict and transform (second signature), in place of data. Here model is the return value of fit(learner, ...) 
for some LearnAPI.jl learner, learner.\n\nThe returned object is guaranteed to implement observation access as indicated by LearnAPI.data_interface(learner), typically LearnAPI.RandomAccess().\n\nCalling fit/predict/transform on the returned objects may have performance advantages over calling directly on data in some contexts.\n\nExample\n\nUsual workflow, using data-specific resampling methods:\n\ndata = (X, y) # a DataFrame and a vector\ndata_train = (X[1:100, :], y[1:100])\nmodel = fit(learner, data_train)\nŷ = predict(model, Point(), X[101:150, :])\n\nAlternative, data-agnostic workflow using obs and the MLUtils.jl method getobs (assumes LearnAPI.data_interface(learner) == RandomAccess()):\n\nimport MLUtils\n\nfit_observations = obs(learner, data)\nmodel = fit(learner, MLUtils.getobs(fit_observations, 1:100))\n\npredict_observations = obs(model, X)\nẑ = predict(model, Point(), MLUtils.getobs(predict_observations, 101:150))\n@assert ẑ == ŷ\n\nSee also LearnAPI.data_interface.\n\nExtended help\n\nNew implementations\n\nImplementation is typically optional.\n\nFor each supported form of data in fit(learner, data), it must be true that model = fit(learner, observations) is equivalent to model = fit(learner, data), whenever observations = obs(learner, data). For each supported form of data in calls predict(model, ..., data) and transform(model, data), where implemented, the calls predict(model, ..., observations) and transform(model, observations) must be supported alternatives with the same output, whenever observations = obs(model, data).\n\nIf LearnAPI.data_interface(learner) == RandomAccess() (the default), then fit, predict and transform must additionally accept obs output that has been subsampled using MLUtils.getobs, with the obvious interpretation applying to the outcomes of such calls (e.g., if all observations are subsampled, then outcomes should be the same as if using the original data).\n\nImplicit in the preceding requirements is that obs(learner, _) and obs(model, _) are idempotent, meaning both the following hold:\n\nobs(learner, obs(learner, data)) == obs(learner, data)\nobs(model, obs(model, data)) == obs(model, data)\n\nIf one overloads obs, one typically needs additional overloadings to guarantee idempotency.\n\nThe fallback for obs is obs(model_or_learner, data) = data, and the fallback for LearnAPI.data_interface(learner) is LearnAPI.RandomAccess(). For details refer to the LearnAPI.data_interface document string.\n\nIn particular, if the data to be consumed by fit, predict or transform consists only of suitable tables and arrays, then obs and LearnAPI.data_interface do not need to be overloaded. However, the user will get no performance benefits by using obs in that case.\n\nIf overloading obs(learner, data) to output new model-specific representations of data, it may be necessary to also overload LearnAPI.features(learner, observations), LearnAPI.target(learner, observations) (supervised learners), and/or LearnAPI.weights(learner, observations) (if weights are supported), for each kind of output observations of obs(learner, data). 
Moreover, the outputs of these methods, applied to observations, must also implement the interface specified by LearnAPI.data_interface(learner).\n\nSample implementation\n\nRefer to the \"Anatomy of an Implementation\" section of the LearnAPI.jl manual.\n\n\n\n\n\n","category":"function"},{"location":"obs/#data_interfaces","page":"obs","title":"Data interfaces","text":"","category":"section"},{"location":"obs/","page":"obs","title":"obs","text":"New implementations must overload LearnAPI.data_interface(learner) if the output of obs does not implement LearnAPI.RandomAccess. (Arrays, most tables, and all tuples thereof, implement RandomAccess.)","category":"page"},{"location":"obs/","page":"obs","title":"obs","text":"LearnAPI.RandomAccess (default)\nLearnAPI.FiniteIterable\nLearnAPI.Iterable","category":"page"},{"location":"obs/","page":"obs","title":"obs","text":"LearnAPI.RandomAccess\nLearnAPI.FiniteIterable\nLearnAPI.Iterable","category":"page"},{"location":"obs/#LearnAPI.RandomAccess","page":"obs","title":"LearnAPI.RandomAccess","text":"LearnAPI.RandomAccess\n\nA data interface type. We say that data implements the RandomAccess interface if data implements the methods getobs and numobs from MLUtils.jl. The first method allows one to grab observations specified by an arbitrary index set, as in MLUtils.getobs(data, [2, 3, 5]), while the second method returns the total number of available observations, which is assumed to be known and finite.\n\nAll arrays implement RandomAccess, with the last index being the observation index (observations-as-columns in matrices).\n\nA Tables.jl compatible table data implements RandomAccess if Tables.istable(data) is true and if data implements DataAPI.nrow. This includes many tables, and in particular, DataFrames. Tables that are also tuples are explicitly excluded.\n\nAny tuple of objects implementing RandomAccess also implements RandomAccess.\n\nIf LearnAPI.data_interface(learner) takes the value RandomAccess(), then obs(learner, ...) is guaranteed to return objects implementing the RandomAccess interface, and the same holds for obs(model, ...), whenever LearnAPI.learner(model) == learner.\n\nImplementing RandomAccess for new data types\n\nTypically, to implement RandomAccess for a new data type requires only implementing Base.getindex and Base.length, which are the fallbacks for MLUtils.getobs and MLUtils.numobs, and this avoids making MLUtils.jl a package dependency.\n\nSee also LearnAPI.FiniteIterable, LearnAPI.Iterable.\n\n\n\n\n\n","category":"type"},{"location":"obs/#LearnAPI.FiniteIterable","page":"obs","title":"LearnAPI.FiniteIterable","text":"LearnAPI.FiniteIterable\n\nA data interface type. We say that data implements the FiniteIterable interface if it implements Julia's iterate interface, including Base.length, and if Base.IteratorSize(typeof(data)) == Base.HasLength(). For example, this is true if:\n\ndata implements the LearnAPI.RandomAccess interface (arrays and most tables)\ndata isa MLUtils.DataLoader, which includes output from MLUtils.eachobs.\n\nIf LearnAPI.data_interface(learner) takes the value FiniteIterable(), then obs(learner, ...) is guaranteed to return objects implementing the FiniteIterable interface, and the same holds for obs(model, ...), whenever LearnAPI.learner(model) == learner.\n\nSee also LearnAPI.RandomAccess, LearnAPI.Iterable.\n\n\n\n\n\n","category":"type"},{"location":"obs/#LearnAPI.Iterable","page":"obs","title":"LearnAPI.Iterable","text":"LearnAPI.Iterable\n\nA data interface type. 
We say that data implements the Iterable interface if it implements Julia's basic iterate interface. (Such objects may not implement MLUtils.numobs or Base.length.)\n\nIf LearnAPI.data_interface(learner) takes the value Iterable(), then obs(learner, ...) is guaranteed to return objects implementing Iterable, and the same holds for obs(model, ...), whenever LearnAPI.learner(model) == learner.\n\nSee also LearnAPI.FiniteIterable, LearnAPI.RandomAccess.\n\n\n\n\n\n","category":"type"},{"location":"","page":"Home","title":"Home","text":"\n\nLearnAPI.jl\n
\n\nA base Julia interface for machine learning and statistics \n
\n
","category":"page"},{"location":"","page":"Home","title":"Home","text":"LearnAPI.jl is a lightweight, functional-style interface, providing a collection of methods, such as fit and predict, to be implemented by algorithms from machine learning and statistics, some examples of which are listed here. A careful design ensures algorithms implementing LearnAPI.jl can buy into functionality, such as external performance estimates, hyperparameter optimization and model composition, provided by ML/statistics toolboxes and other packages. LearnAPI.jl includes a number of Julia traits for promising specific behavior.","category":"page"},{"location":"","page":"Home","title":"Home","text":"LearnAPI.jl has no package dependencies.","category":"page"},{"location":"","page":"Home","title":"Home","text":"🚧","category":"page"},{"location":"","page":"Home","title":"Home","text":"warning: Warning\nThe API described here is under active development and not ready for adoption. Join an ongoing design discussion at this Julia Discourse thread.","category":"page"},{"location":"#Sample-workflow","page":"Home","title":"Sample workflow","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Suppose forest is some object encapsulating the hyperparameters of the random forest algorithm (the number of trees, etc.). Then, a LearnAPI.jl interface can be implemented, for objects with the type of forest, to enable the basic workflow below. In this case data is presented following the \"scikit-learn\" X, y pattern, although LearnAPI.jl supports other patterns as well.","category":"page"},{"location":"","page":"Home","title":"Home","text":"X = \ny = \nXnew = \n\n# List LearnAPI functions implemented for `forest`:\nLearnAPI.functions(forest)\n\n# Train:\nmodel = fit(forest, X, y)\n\n# Generate point predictions:\nŷ = predict(model, Xnew) # or `predict(model, Point(), Xnew)`\n\n# Predict probability distributions:\npredict(model, Distribution(), Xnew)\n\n# Apply an \"accessor function\" to inspect byproducts of training:\nLearnAPI.feature_importances(model)\n\n# Slim down and otherwise prepare model for serialization:\nsmall_model = LearnAPI.strip(model)\nserialize(\"my_random_forest.jls\", small_model)\n\n# Recover saved model and algorithm configuration (\"learner\"):\nrecovered_model = deserialize(\"my_random_forest.jls\")\n@assert LearnAPI.learner(recovered_model) == forest\n@assert predict(recovered_model, Point(), Xnew) == ŷ","category":"page"},{"location":"","page":"Home","title":"Home","text":"Distribution and Point are singleton types owned by LearnAPI.jl. They allow dispatch based on the kind of target proxy, a key LearnAPI.jl concept. LearnAPI.jl places more emphasis on the notion of target variables and target proxies than on the usual supervised/unsupervised learning dichotomy. From this point of view, a supervised learner is simply one in which a target variable exists, and happens to appear as an input to training but not to prediction.","category":"page"},{"location":"#Data-interfaces","page":"Home","title":"Data interfaces","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Algorithms are free to consume data in any format. However, a method called obs (read as \"observations\") gives users and meta-algorithms access to an algorithm-specific representation of input data, which is also guaranteed to implement a standard interface for accessing individual observations, unless the algorithm explicitly opts out. 
Moreover, the fit and predict methods will also be able to consume these alternative data representations, for performance benefits in some situations.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The fallback data interface is the MLUtils.jl getobs/numobs interface (here tagged as LearnAPI.RandomAccess()) and if the input consumed by the algorithm already implements that interface (tables, arrays, etc.) then overloading obs is completely optional. Plain iteration interfaces, with or without knowledge of the number of observations, can also be specified (to support, e.g., data loaders reading images from disk).","category":"page"},{"location":"#Learning-more","page":"Home","title":"Learning more","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Anatomy of an Implementation: informal introduction to the main actors in a new LearnAPI.jl implementation\nReference: official specification\nCommon Implementation Patterns: implementation suggestions for common, informally defined, algorithm types\nTesting an Implementation","category":"page"},{"location":"patterns/outlier_detection/#Outlier-Detection","page":"Outlier Detection","title":"Outlier Detection","text":"","category":"section"},{"location":"patterns/incremental_algorithms/#Incremental-Algorithms","page":"Incremental Algorithms","title":"Incremental Algorithms","text":"","category":"section"},{"location":"patterns/incremental_algorithms/","page":"Incremental Algorithms","title":"Incremental Algorithms","text":"See these examples from the JuliaTestAI.jl test suite:","category":"page"},{"location":"patterns/incremental_algorithms/","page":"Incremental Algorithms","title":"Incremental Algorithms","text":"normal distribution estimator","category":"page"}] } diff --git a/dev/target_weights_features/index.html b/dev/target_weights_features/index.html index 7f02f97..11981c1 100644 --- a/dev/target_weights_features/index.html +++ b/dev/target_weights_features/index.html @@ -5,6 +5,6 @@ X = LearnAPI.features(learner, data) y = LearnAPI.target(learner, data) ŷ = predict(model, Point(), X) -training_loss = sum(ŷ .!= y)

Implementation guide

methodfallbackcompulsory?
LearnAPI.targetreturns nothingno
LearnAPI.weightsreturns nothingno
LearnAPI.featuressee docstringif fallback insufficient

Reference

LearnAPI.targetFunction
LearnAPI.target(learner, data) -> target

Return, for each form of data supported in a call of the form fit(learner, data), the target variable part of data. If nothing is returned, the learner does not see a target variable in training (is unsupervised).

The returned object y has the same number of observations as data. If data is the output of an obs call, then y is additionally guaranteed to implement the data interface specified by LearnAPI.data_interface(learner).

Extended help

What is a target variable?

Examples of target variables are house prices in real estate pricing estimates, the "spam"/"not spam" labels in an email spam filtering task, "outlier"/"inlier" labels in outlier detection, cluster labels in clustering problems, and censored survival times in survival analysis. For more on targets and target proxies, see the "Reference" section of the LearnAPI.jl documentation.

New implementations

A fallback returns nothing. The method must be overloaded if fit consumes data including a target variable.

If overloading obs, ensure that the return value, unless nothing, implements the data interface specified by LearnAPI.data_interface(learner), in the special case that data is the output of an obs call.

If overloaded, you must include :(LearnAPI.target) in the tuple returned by the LearnAPI.functions trait.

source
LearnAPI.weightsFunction
LearnAPI.weights(learner, data) -> weights

Return, for each form of data supported in a call of the form fit(learner, data), the per-observation weights part of data. Where nothing is returned, no weights are part of data, which is to be interpreted as uniform weighting.

The returned object w has the same number of observations as data. If data is the output of an obs call, then w is additionally guaranteed to implement the data interface specified by LearnAPI.data_interface(learner).

Extended help

New implementations

Overloading is optional. A fallback returns nothing.

If overloading obs, ensure that the return value, unless nothing, implements the data interface specified by LearnAPI.data_interface(learner), in the special case that data is the output of an obs call.

If overloaded, you must include :(LearnAPI.weights) in the tuple returned by the LearnAPI.functions trait.

source
LearnAPI.featuresFunction
LearnAPI.features(learner, data)

Return, for each form of data supported in a call of the form fit(learner, data), the "features" part of data (as opposed to the target variable, for example).

The returned object X may always be passed to predict or transform, where implemented, as in the following sample workflow:

model = fit(learner, data)
+training_loss = sum(ŷ .!= y)

Implementation guide

method | fallback | compulsory?
LearnAPI.target | returns nothing | no
LearnAPI.weights | returns nothing | no
LearnAPI.features | see docstring | if fallback insufficient
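For example, for a learner whose fit consumes data of the form (X, y), the fallbacks give the behavior sketched below; only LearnAPI.target needs explicit overloading here (a sketch, not part of the specification):

data = (X, y)
LearnAPI.features(learner, data) == X        # fallback returns first(data)
LearnAPI.target(learner, data) == y          # requires an explicit overloading
LearnAPI.weights(learner, data) === nothing  # fallback: uniform weighting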

Reference

LearnAPI.targetFunction
LearnAPI.target(learner, data) -> target

Return, for each form of data supported in a call of the form fit(learner, data), the target variable part of data. If nothing is returned, the learner does not see a target variable in training (is unsupervised).

The returned object y has the same number of observations as data. If data is the output of an obs call, then y is additionally guaranteed to implement the data interface specified by LearnAPI.data_interface(learner).

Extended help

What is a target variable?

Examples of target variables are house prices in real estate pricing estimates, the "spam"/"not spam" labels in an email spam filtering task, "outlier"/"inlier" labels in outlier detection, cluster labels in clustering problems, and censored survival times in survival analysis. For more on targets and target proxies, see the "Reference" section of the LearnAPI.jl documentation.

New implementations

A fallback returns nothing. The method must be overloaded if fit consumes data including a target variable.

If overloading obs, ensure that the return value, unless nothing, implements the data interface specified by LearnAPI.data_interface(learner), in the special case that data is the output of an obs call.

If overloaded, you must include :(LearnAPI.target) in the tuple returned by the LearnAPI.functions trait.
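For instance, a supervised learner whose fit consumes data of the form (X, y) might declare the following (a minimal sketch; MyLearner is a hypothetical type):

LearnAPI.target(::MyLearner, data) = last(data)  # extract y from (X, y)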

source
LearnAPI.weightsFunction
LearnAPI.weights(learner, data) -> weights

Return, for each form of data supported in a call of the form fit(learner, data), the per-observation weights part of data. Where nothing is returned, no weights are part of data, which is to be interpreted as uniform weighting.

The returned object w has the same number of observations as data. If data is the output of an obs call, then w is additionally guaranteed to implement the data interface specified by LearnAPI.data_interface(learner).

Extended help

New implementations

Overloading is optional. A fallback returns nothing.

If overloading obs, ensure that the return value, unless nothing, implements the data interface specified by LearnAPI.data_interface(learner), in the special case that data is the output of an obs call.

If overloaded, you must include :(LearnAPI.weights) in the tuple returned by the LearnAPI.functions trait.
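A minimal sketch, assuming a hypothetical MyLearner whose fit consumes (X, y, w) tuples:

LearnAPI.weights(::MyLearner, data) = data[3]  # extract the per-observation weights w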

source
LearnAPI.featuresFunction
LearnAPI.features(learner, data)

Return, for each form of data supported in a call of the form fit(learner, data), the "features" part of data (as opposed to the target variable, for example).

The returned object X may always be passed to predict or transform, where implemented, as in the following sample workflow:

model = fit(learner, data)
 X = LearnAPI.features(learner, data)
-ŷ = predict(model, kind_of_proxy, X) # eg, `kind_of_proxy = Point()`

For supervised models (i.e., where :(LearnAPI.target) in LearnAPI.functions(learner)) above is generally intended to be an approximate proxy for LearnAPI.target(learner, data), the training target.

The object X returned by LearnAPI.target has the same number of observations as data. If data is the output of an obs call, then X is additionally guaranteed to implement the data interface specified by LearnAPI.data_interface(learner).

Extended help

New implementations

For density estimators, whose fit typically consumes only a target variable, you should overload this method to return nothing.

It must otherwise be possible to pass the return value X to predict and/or transform, and X must have same number of observations as data. A fallback returns first(data) if data is a tuple, and otherwise returns data.

Further overloadings may be necessary to handle the case that data is the output of obs(learner, data), if obs is being overloaded. In this case, be sure that X, unless nothing, implements the data interface specified by LearnAPI.data_interface(learner).

source
+ŷ = predict(model, kind_of_proxy, X) # eg, `kind_of_proxy = Point()`

For supervised models (i.e., where :(LearnAPI.target) in LearnAPI.functions(learner)), ŷ above is generally intended to be an approximate proxy for LearnAPI.target(learner, data), the training target.

The object X returned by LearnAPI.features has the same number of observations as data. If data is the output of an obs call, then X is additionally guaranteed to implement the data interface specified by LearnAPI.data_interface(learner).

Extended help

New implementations

For density estimators, whose fit typically consumes only a target variable, you should overload this method to return nothing.

It must otherwise be possible to pass the return value X to predict and/or transform, and X must have the same number of observations as data. A fallback returns first(data) if data is a tuple, and otherwise returns data.
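For example, the tuple fallback already returns X for (X, y) data, so no overloading is needed in that case; a density estimator, by contrast, might declare the following (a sketch; MyDensityEstimator is a hypothetical type):

LearnAPI.features(::MyDensityEstimator, data) = nothing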

Further overloadings may be necessary to handle the case that data is the output of obs(learner, data), if obs is being overloaded. In this case, be sure that X, unless nothing, implements the data interface specified by LearnAPI.data_interface(learner).

source diff --git a/dev/testing_an_implementation/index.html b/dev/testing_an_implementation/index.html index 756267d..f97837f 100644 --- a/dev/testing_an_implementation/index.html +++ b/dev/testing_an_implementation/index.html @@ -1,2 +1,2 @@ -Testing an Implementation · LearnAPI.jl
+Testing an Implementation · LearnAPI.jl
diff --git a/dev/traits/index.html b/dev/traits/index.html index 6af99f4..9feac74 100644 --- a/dev/traits/index.html +++ b/dev/traits/index.html @@ -10,11 +10,11 @@ julia> learner2.lambda 0.2

New implementations

All new implementations must overload this trait.

Attach public LearnAPI.jl-related documentation for learner to the constructor, not the learner struct.

It must be possible to recover learner from the constructor returned as follows:

properties = propertynames(learner)
 named_properties = NamedTuple{properties}(getproperty.(Ref(learner), properties))
-@assert learner == LearnAPI.constructor(learner)(; named_properties...)

which can be tested with @assert LearnAPI.clone(learner) == learner.

The keyword constructor provided by LearnAPI.constructor must provide default values for all properties, with the exception of those that can take other LearnAPI.jl learners as values. These can be provided with the default nothing, with the constructor throwing an error if the default value persists.

source
LearnAPI.functionsFunction
LearnAPI.functions(learner)

Return a tuple of expressions representing functions that can be meaningfully applied with learner, or an associated model (object returned by fit(learner, ...), as the first argument. Learner traits (methods for which learner is the only argument) are excluded.

The returned tuple may include expressions like :(DecisionTree.print_tree), which reference functions not owned by LearnAPI.jl.

The understanding is that learner is a LearnAPI-compliant object whenever the return value is non-empty.

Extended help

New implementations

All new implementations must implement this trait. Here's a checklist for elements in the return value:

expressionimplementation compulsory?include in returned tuple?
:(LearnAPI.fit)yesyes
:(LearnAPI.learner)yesyes
:(LearnAPI.strip)noyes
:(LearnAPI.obs)noyes
:(LearnAPI.features)noyes, unless fit consumes no data
:(LearnAPI.target)noonly if implemented
:(LearnAPI.weights)noonly if implemented
:(LearnAPI.update)noonly if implemented
:(LearnAPI.update_observations)noonly if implemented
:(LearnAPI.update_features)noonly if implemented
:(LearnAPI.predict)noonly if implemented
:(LearnAPI.transform)noonly if implemented
:(LearnAPI.inverse_transform)noonly if implemented
< accessor functions>noonly if implemented

Also include any implemented accessor functions, both those owned by LearnaAPI.jl, and any learner-specific ones. The LearnAPI.jl accessor functions are: LearnAPI.extras, LearnAPI.learner, LearnAPI.coefficients, LearnAPI.intercept, LearnAPI.tree, LearnAPI.trees, LearnAPI.feature_importances, LearnAPI.training_labels, LearnAPI.training_losses, LearnAPI.training_predictions, LearnAPI.training_scores and LearnAPI.components (LearnAPI.strip is always included).

source
LearnAPI.kinds_of_proxyFunction
LearnAPI.kinds_of_proxy(learner)

Returns a tuple of all instances, kind, for which for which predict(learner, kind, data...) has a guaranteed implementation. Each such kind subtypes LearnAPI.KindOfProxy. Examples are Point() (for predicting actual target values) and Distributions() (for predicting probability mass/density functions).

The call predict(model, data) always returns predict(model, kind, data), where kind is the first element of the trait's return value.

See also LearnAPI.predict, LearnAPI.KindOfProxy.

Extended help

New implementations

Must be overloaded whenever predict is implemented.

Elements of the returned tuple must be instances of LearnAPI.KindOfProxy. List all possibilities by running LearnAPI.kinds_of_proxy().

Suppose, for example, we have the following implementation of a supervised learner returning only probabilistic predictions:

LearnAPI.predict(learner::MyNewLearnerType, LearnAPI.Distribution(), Xnew) = ...

Then we can declare

@trait MyNewLearnerType kinds_of_proxy = (LearnaAPI.Distribution(),)

LearnAPI.jl provides the fallback for predict(model, data).

For more on target variables and target proxies, refer to the LearnAPI documentation.

source
LearnAPI.tagsFunction
LearnAPI.tags(learner)

Lists one or more suggestive learner tags. Do LearnAPI.tags() to list all possible.

Warning

The value of this trait guarantees no particular behavior. The trait is intended for informal classification purposes only.

New implementations

This trait should return a tuple of strings, as in ("classifier", "text analysis").

source
LearnAPI.is_pure_juliaFunction
LearnAPI.is_pure_julia(learner)

Returns true if training learner requires evaluation of pure Julia code only.

New implementations

The fallback is false.

source
LearnAPI.pkg_nameFunction
LearnAPI.pkg_name(learner)

Return the name of the package module which supplies the core training algorithm for learner. This is not necessarily the package providing the LearnAPI interface.

Returns "unknown" if the learner implementation has not overloaded the trait.

New implementations

Must return a string, as in "DecisionTree".

source
LearnAPI.pkg_licenseFunction
LearnAPI.pkg_license(learner)

Return the name of the software license, such as "MIT", applying to the package where the core algorithm for learner is implemented.

source
LearnAPI.doc_urlFunction
LearnAPI.doc_url(learner)

Return a url where the core algorithm for learner is documented.

Returns "unknown" if the learner implementation has not overloaded the trait.

New implementations

Must return a string, such as "https://en.wikipedia.org/wiki/Decision_tree_learning".

source
LearnAPI.load_pathFunction
LearnAPI.load_path(learner)

Return a string indicating where in code the definition of the learner's constructor can be found, beginning with the name of the package module defining it. By "constructor" we mean the return value of LearnAPI.constructor(learner).

Implementation

For example, a return value of "FastTrees.LearnAPI.DecisionTreeClassifier" means the following julia code will not error:

import FastTrees
+@assert learner == LearnAPI.constructor(learner)(; named_properties...)

which can be tested with @assert LearnAPI.clone(learner) == learner.

The keyword constructor provided by LearnAPI.constructor must provide default values for all properties, with the exception of those that can take other LearnAPI.jl learners as values. These can be provided with the default nothing, with the constructor throwing an error if the nothing default is not explicitly replaced in the constructor call.
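For instance, a wrapper with one learner-valued property might define its keyword constructor as below (a sketch; MyEnsemble is a hypothetical type):

struct MyEnsemble
    atom        # learner-valued property: no meaningful default
    n::Int      # ordinary hyperparameter: has a default
end
function MyEnsemble(; atom=nothing, n=100)
    isnothing(atom) && error("You must specify `atom=...`.")
    MyEnsemble(atom, n)
end
LearnAPI.constructor(::MyEnsemble) = MyEnsemble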

source
LearnAPI.functionsFunction
LearnAPI.functions(learner)

Return a tuple of expressions representing functions that can be meaningfully applied with learner, or an associated model (an object returned by fit(learner, ...)), as the first argument. Learner traits (methods for which learner is the only argument) are excluded.

The returned tuple may include expressions like :(DecisionTree.print_tree), which reference functions not owned by LearnAPI.jl.

The understanding is that learner is a LearnAPI-compliant object whenever the return value is non-empty.

Extended help

New implementations

All new implementations must implement this trait. Here's a checklist for elements in the return value:

expression | implementation compulsory? | include in returned tuple?
:(LearnAPI.fit) | yes | yes
:(LearnAPI.learner) | yes | yes
:(LearnAPI.strip) | no | yes
:(LearnAPI.obs) | no | yes
:(LearnAPI.features) | no | yes, unless fit consumes no data
:(LearnAPI.target) | no | only if implemented
:(LearnAPI.weights) | no | only if implemented
:(LearnAPI.update) | no | only if implemented
:(LearnAPI.update_observations) | no | only if implemented
:(LearnAPI.update_features) | no | only if implemented
:(LearnAPI.predict) | no | only if implemented
:(LearnAPI.transform) | no | only if implemented
:(LearnAPI.inverse_transform) | no | only if implemented
<accessor functions> | no | only if implemented

Also include any implemented accessor functions, both those owned by LearnAPI.jl, and any learner-specific ones. The LearnAPI.jl accessor functions are: LearnAPI.extras, LearnAPI.learner, LearnAPI.coefficients, LearnAPI.intercept, LearnAPI.tree, LearnAPI.trees, LearnAPI.feature_names, LearnAPI.feature_importances, LearnAPI.training_labels, LearnAPI.training_losses, LearnAPI.training_predictions, LearnAPI.training_scores and LearnAPI.components (LearnAPI.strip is always included).
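For example, a basic supervised learner implementing fit and predict might make the declaration below, using the @trait macro documented above (a sketch; MyLearner is a hypothetical type):

@trait(
    MyLearner,
    functions = (
        :(LearnAPI.fit),
        :(LearnAPI.learner),
        :(LearnAPI.strip),
        :(LearnAPI.obs),
        :(LearnAPI.features),
        :(LearnAPI.target),
        :(LearnAPI.predict),
    ),
)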

source
LearnAPI.kinds_of_proxyFunction
LearnAPI.kinds_of_proxy(learner)

Returns a tuple of all instances, kind, for which predict(learner, kind, data...) has a guaranteed implementation. Each such kind subtypes LearnAPI.KindOfProxy. Examples are Point() (for predicting actual target values) and Distribution() (for predicting probability mass/density functions).

The call predict(model, data) always returns predict(model, kind, data), where kind is the first element of the trait's return value.

See also LearnAPI.predict, LearnAPI.KindOfProxy.

Extended help

New implementations

Must be overloaded whenever predict is implemented.

Elements of the returned tuple must be instances of LearnAPI.KindOfProxy. List all possibilities by running LearnAPI.kinds_of_proxy().

Suppose, for example, we have the following implementation of a supervised learner returning only probabilistic predictions:

LearnAPI.predict(learner::MyNewLearnerType, ::LearnAPI.Distribution, Xnew) = ...

Then we can declare

@trait MyNewLearnerType kinds_of_proxy = (LearnAPI.Distribution(),)

LearnAPI.jl provides the fallback for predict(model, data).

For more on target variables and target proxies, refer to the LearnAPI documentation.

source
LearnAPI.tagsFunction
LearnAPI.tags(learner)

Lists one or more suggestive learner tags. Run LearnAPI.tags() to list all possibilities.

Warning

The value of this trait guarantees no particular behavior. The trait is intended for informal classification purposes only.

New implementations

This trait should return a tuple of strings, as in ("classifier", "text analysis").

source
LearnAPI.is_pure_juliaFunction
LearnAPI.is_pure_julia(learner)

Returns true if training the learner requires evaluation of pure Julia code only.

New implementations

The fallback is false.

source
LearnAPI.pkg_nameFunction
LearnAPI.pkg_name(learner)

Return the name of the package module which supplies the core training algorithm for learner. This is not necessarily the package providing the LearnAPI interface.

Returns "unknown" if the learner implementation has not overloaded the trait.

New implementations

Must return a string, as in "DecisionTree".

source
LearnAPI.pkg_licenseFunction
LearnAPI.pkg_license(learner)

Return the name of the software license, such as "MIT", applying to the package where the core algorithm for learner is implemented.

source
LearnAPI.doc_urlFunction
LearnAPI.doc_url(learner)

Return a url where the core algorithm for learner is documented.

Returns "unknown" if the learner implementation has not overloaded the trait.

New implementations

Must return a string, such as "https://en.wikipedia.org/wiki/Decision_tree_learning".
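The package-metadata traits (pkg_name, pkg_license, doc_url) are naturally declared together. A hypothetical sketch, for a learner whose core algorithm is supplied by DecisionTree.jl:

@trait(
    MyNewLearnerType,
    pkg_name = "DecisionTree",
    pkg_license = "MIT",
    doc_url = "https://github.com/JuliaAI/DecisionTree.jl",
)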

source
LearnAPI.load_pathFunction
LearnAPI.load_path(learner)

Return a string indicating where in code the definition of the learner's constructor can be found, beginning with the name of the package module defining it. By "constructor" we mean the return value of LearnAPI.constructor(learner).

Implementation

For example, a return value of "FastTrees.LearnAPI.DecisionTreeClassifier" means the following Julia code will not error:

import FastTrees
import LearnAPI
@assert FastTrees.LearnAPI.DecisionTreeClassifier == LearnAPI.constructor(learner)

Returns "unknown" if the learner implementation has not overloaded the trait.

source
LearnAPI.is_compositeFunction
LearnAPI.is_composite(learner)

Returns true if one or more properties (fields) of learner may themselves be learners, and false otherwise.

See also LearnAPI.components.

New implementations

This trait should be overloaded if one or more properties (fields) of learner may take learner values. Fallback return value is false. The keyword constructor for such a learner need not prescribe defaults for learner-valued properties. Implementation of the accessor function LearnAPI.components is recommended.

The value of the trait must depend only on the type of learner.
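For example, a bagging-style wrapper whose atom property is itself a learner might look like this (a hypothetical sketch; the struct and field names are illustrative):

# hypothetical composite learner: the `atom` property is itself a learner
struct MyEnsemble
    atom       # base learner to be ensembled; no default required
    n::Int     # ensemble size
end

@trait MyEnsemble is_composite = true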

source
LearnAPI.human_nameFunction
LearnAPI.human_name(learner)

Return a human-readable string representation of typeof(learner). Primarily intended for auto-generation of documentation.

New implementations

Optional. A fallback takes the type name, inserts spaces and removes capitalization. For example, KNNRegressor becomes "knn regressor". Better would be to overload the trait to return "K-nearest neighbors regressor". Ideally, this is a "concrete" noun like "ridge regressor" rather than an "abstract" noun like "ridge regression".
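Continuing the example in the paragraph above, the overloading would read:

@trait KNNRegressor human_name = "K-nearest neighbors regressor"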

source
LearnAPI.data_interfaceFunction
LearnAPI.data_interface(learner)

Return the data interface supported by learner for accessing individual observations in representations of input data returned by obs(learner, data) or obs(model, data), whenever learner == LearnAPI.learner(model). Here data is fit, predict, or transform-consumable data.

Possible return values are LearnAPI.RandomAccess, LearnAPI.FiniteIterable, and LearnAPI.Iterable.

See also obs.

New implementations

The fallback returns LearnAPI.RandomAccess, which applies to arrays, most tables, and tuples of these. See the doc-string for details.
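A learner whose obs output supports only finite iteration might declare (hypothetical learner type):

# `obs` output is a finite iterable, not random-access:
@trait MyNewLearnerType data_interface = LearnAPI.FiniteIterable()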

source
LearnAPI.iteration_parameterFunction
LearnAPI.iteration_parameter(learner)

The name of the iteration parameter of learner, or nothing if the algorithm is not iterative.

New implementations

Implement if the algorithm is iterative. Returns a symbol or nothing.
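For example (both the learner type and the field name here are hypothetical):

@trait MyGradientBooster iteration_parameter = :n_iterations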

source
LearnAPI.fit_observation_scitypeFunction
LearnAPI.fit_observation_scitype(learner)

Return an upper bound S on the scitype of individual observations guaranteed to work when calling fit: if observations = obs(learner, data) and ScientificTypes.scitype(o) <: S for each o in observations, then the call fit(learner, data) is supported.

Here, "for each o in observations" is understood in the sense of LearnAPI.data_interface(learner). For example, if LearnAPI.data_interface(learner) == Base.HasLength(), then this means "for o in MLUtils.eachobs(observations)".

See also LearnAPI.target_observation_scitype.

New implementations

Optional. The fallback return value is Union{}.
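For example, a regressor consuming (x, y) observation pairs, with continuous features and a continuous target, might declare something like the following. This is a sketch only: the learner type is hypothetical, and the scitypes are from ScientificTypesBase.jl:

using ScientificTypesBase # for Continuous

@trait MyNewRegressor fit_observation_scitype =
    Tuple{AbstractVector{Continuous}, Continuous}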

source
LearnAPI.target_observation_scitypeFunction
LearnAPI.target_observation_scitype(learner)

Return an upper bound S on the scitype of each observation of an applicable target variable. Specifically:

  • If :(LearnAPI.target) in LearnAPI.functions(learner) (i.e., fit consumes target variables) then "target" means anything returned by LearnAPI.target(learner, data), where data is an admissible argument in the call fit(learner, data).

  • S will always be an upper bound on the scitype of (point) observations that could be conceivably extracted from the output of predict.

To illustrate the second case, suppose we have

model = fit(learner, data)
ŷ = predict(model, Sampleable(), data_new)

Then each individual sample generated by each "observation" of ŷ (a vector of sampleable objects, say) will be bound in scitype by S.

See also LearnAPI.fit_observation_scitype.

New implementations

Optional. The fallback return value is Any.
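For example, a regressor with a continuous target might declare (hypothetical learner type; Continuous is from ScientificTypesBase.jl):

@trait MyNewRegressor target_observation_scitype = Continuous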

source
LearnAPI.is_staticFunction
LearnAPI.is_static(learner)

Returns true if fit is called with no data arguments, as in fit(learner). That is, learner does not generalize to new data, and data is only provided at the predict or transform step.

For example, some clustering algorithms are applied with this workflow, to assign labels to the observations in X:

model = fit(learner) # no training data
labels = predict(model, X) # may mutate `model`!

# extract some byproducts of the clustering algorithm (e.g., outliers):
LearnAPI.extras(model)

New implementations

This trait, falling back to false, may only be overloaded when fit has no data arguments. See more at fit.
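A clusterer following the workflow above would declare (hypothetical learner type):

@trait MyClusterer is_static = true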

source