crandas.crlearn

The module crandas.crlearn provides the following functionality:

Machine learning models:

General:

Model interface

crandas.crlearn.model.CModel

Alias for Model (use of this alias is deprecated)

class crandas.crlearn.model.Model(**params_and_query_args)

Bases: object

Base class for machine learning models stored at the server

The API for crandas machine learning models is similar to that of scikit-learn estimators. Functions such as .fit() and set_params() are applied in-place to the Model. Internally, the Model has a field instance that refers to a stateobject.StateObject at the server and that is updated when such functions are applied.

Similarly to scikit-learn, models have parameters (set by the user) and attributes (set based on fitting the model on data). Typically, at least some of the attributes are encrypted (e.g., fitted model parameters). The encrypted attributes can be retrieved by opening the model using open().

Models typically have a .fit() function to fit the model, and functions .predict(), .predict_proba(), and .transform() to apply the fitted model. When using the model, it is typically required to use the same feature names as used during fitting. If the dataset has different column names or a different column order, the columns need to be adjusted, e.g.:

import crandas as cd

# Columns during fit: "a", "b"
cdf = cd.DataFrame({"a": [1], "b": [2]}, auto_bounds=True)
mms = cd.crlearn.utils.MinMaxScaler().fit(cdf)

# Wrong column order can be fixed using indexing
cdf2 = cd.DataFrame({"b": [2], "a": [1]}, auto_bounds=True)
mms.transform(cdf2[mms.feature_names_in_])

# Wrong column names can be fixed using .rename or .set_axis; this assumes that the
# columns are already in the correct order
cdf3 = cd.DataFrame({"a": [2], "B": [1]}, auto_bounds=True)
mms.transform(cdf3.rename(columns={"B": "b"}))
mms.transform(cdf3.set_axis(mms.feature_names_in_, axis=1))

property attributes

Retrieve attributes for the estimator.

Only available for fitted estimators. Encrypted attributes are set to None. To retrieve their values, use open().

classmethod from_opened(params_attributes, **query_args)

Upload model to the server

Should be called on the final model class, e.g., linear_model.LinearRegression.

Parameters:

params_attributes (dict) – Parameters and attributes as returned by open()

property handle

Return the handle of the current instance of the model

instance = None

Current model instance (updated by .fit(), set_params(), etc)

open()

Download the model

The model is returned as a dictionary of parameters and attributes.
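
As an illustration of the round trip between open() and from_opened(), here is a minimal sketch using the MinMaxScaler described further below (the data values are only illustrative):

import crandas as cd
from crandas.crlearn.utils import MinMaxScaler

cdf = cd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]}, auto_bounds=True)
scaler = MinMaxScaler().fit(cdf)

# Download the model as a dictionary of parameters and attributes
opened = scaler.open()

# Re-upload the opened model via the final model class
scaler2 = MinMaxScaler.from_opened(opened)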

property params

Retrieve parameters for the estimator.

Note that the parameters can be set using set_params().

save(name=None, **query_args)

Save the (current instance of the) model. Returns the saved object, i.e., self. See stateobject.StateObject.save().

set_params(**params_and_query_args)

Set parameters of the estimator

Parameters:

params_and_query_args (dict) – Model parameters (see documentation for specific model) or query arguments (see Query Arguments)

Metrics

crandas.crlearn.metrics.classification_accuracy(y, y_pred, n_classes=2, **query_args)

Compute the classification accuracy on class predictions

Parameters:
  • y (DataFrame) – column with the actual class values (in the range 0 to n_classes - 1)

  • y_pred (DataFrame) – column with the predicted class values (in the range 0 to n_classes - 1)

  • n_classes (int) – number of classes (default = 2)

  • query_args – See Query Arguments

Returns:

fixed point number between 0 and 1

Return type:

DataFrame

crandas.crlearn.metrics.confusion_matrix(y, y_pred, n_classes=2, **query_args)

Compute the confusion matrix on class predictions

The y-axis of the result represents the true class. The x-axis the predicted class.

Parameters:
  • y (DataFrame) – column with the actual class values (in the range 0 to n_classes - 1)

  • y_pred (DataFrame) – column with the predicted class values (in the range 0 to n_classes - 1)

  • n_classes (int) – number of classes (default = 2)

  • query_args – See Query Arguments

Returns:

matrix of size n_classes * n_classes

Return type:

DataFrame

crandas.crlearn.metrics.mcfadden_r2(model, X, y, **query_args)

Compute the McFadden R^2 metric

Parameters:
Returns:

fixed point number between 0 and 1

Return type:

DataFrame

crandas.crlearn.metrics.model_deviance(model, X, y, **query_args)

Compute the model deviance

Parameters:
Returns:

fixed point number between 0 and 1

Return type:

DataFrame

crandas.crlearn.metrics.null_deviance(y, **query_args)

Compute the null deviance

Parameters:
  • y (DataFrame) – binary response variable (should have only 1 column)

  • query_args – See Query Arguments

  • Note – both classes need to be present in y; otherwise the computation is internally undefined (logarithm of 0)

Returns:

fixed point number between 0 and 1

Return type:

DataFrame
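
The deviance-based metrics above take a fitted model (see the logistic regression section below). A minimal sketch, with purely illustrative data, assuming a binomial LogisticRegression:

import crandas as cd
from crandas.crlearn.logistic_regression import LogisticRegression
from crandas.crlearn.metrics import mcfadden_r2, model_deviance, null_deviance

X = cd.DataFrame({"x1": [1, 2, 3, 4], "x2": [0, 1, 0, 1]}, auto_bounds=True)
y = cd.DataFrame({"y": [0, 0, 1, 1]}, auto_bounds=True)  # both classes present

model = LogisticRegression(max_iter=5).fit(X, y)

# Deviance of the fitted model and of the intercept-only (null) model
dev = model_deviance(model, X, y)
null_dev = null_deviance(y)

# McFadden pseudo-R^2 of the fitted model
print(mcfadden_r2(model, X, y).open())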

crandas.crlearn.metrics.precision_recall(y, y_pred, **query_args)

Compute the precision and recall on predictions

Parameters:
  • y (DataFrame) – column with the actual values (binary)

  • y_pred (DataFrame) – column with the predictions (binary)

  • query_args – See Query Arguments

Returns:

two fixed point numbers between 0 and 1 (precision and recall)

Return type:

DataFrame
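
A short sketch combining the classification metrics above on illustrative binary labels:

import crandas as cd
from crandas.crlearn.metrics import (
    classification_accuracy,
    confusion_matrix,
    precision_recall,
)

# Actual and predicted class labels as single-column tables
y = cd.DataFrame({"y": [0, 1, 1, 0, 1]}, auto_bounds=True)
y_pred = cd.DataFrame({"y_pred": [0, 1, 0, 0, 1]}, auto_bounds=True)

acc = classification_accuracy(y, y_pred, n_classes=2)  # fraction of correct predictions
cm = confusion_matrix(y, y_pred, n_classes=2)          # 2x2 matrix, rows = true class
pr = precision_recall(y, y_pred)                       # precision and recall

print(acc.open())
print(cm.open())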

crandas.crlearn.metrics.score_r2(y, y_pred, **query_args)

Compute the R^2 metric on predictions

Parameters:
Returns:

fixed point number less than or equal to 1

Return type:

DataFrame

crandas.crlearn.metrics.tjur_r2(y, y_pred, **query_args)

Compute the Tjur R^2 metric on predictions

Parameters:
Returns:

fixed point number between -1 and 1

Return type:

DataFrame
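
A minimal sketch of score_r2 on illustrative predictions (tjur_r2 takes the same y, y_pred arguments, for a binary response):

import crandas as cd
from crandas.crlearn.metrics import score_r2

# Actual and predicted target values as single-column tables
y = cd.DataFrame({"y": [1.0, 2.0, 3.0, 4.0]}, auto_bounds=True)
y_pred = cd.DataFrame({"y_pred": [1.1, 1.9, 3.2, 3.8]}, auto_bounds=True)

print(score_r2(y, y_pred).open())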

Utility functions

crandas.crlearn.utils.CMinMaxScaler

Alias for MinMaxScaler (use of this alias is deprecated)

class crandas.crlearn.utils.MinMaxScaler(feature_range=(0, 1), *, copy=True, clip=False, **query_args)

Bases: Model

MinMaxScaler transform

Transform features by scaling each feature to the interval [0,1], similarly to scikit-learn’s MinMaxScaler, see https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html.

The constructor parameters are provided for compatibility with scikit-learn and must not be modified.

For example:

>>> import crandas as cd
>>> from crandas.crlearn.utils import MinMaxScaler
>>> cdf = cd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]}, auto_bounds=True)
>>> scaler = MinMaxScaler()

>>> # Fit scaler such that columns of `cdf` are scaled to [0,1]
>>> scaler.fit(cdf)

>>> # Alternatively, use fit_transform to get scaled dataset
>>> cdf_scaled = scaler.fit_transform(cdf)
>>> print(cdf_scaled.open())
   a    b
0  0.0  0.0
1  0.5  0.5
2  1.0  1.0

>>> # Scale `cdf2` according to scaling determined from `cdf`
>>> cdf2 = cd.DataFrame({"a": [2, 4], "b": [5, 9]}, auto_bounds=True)
>>> cdf2_scaled = scaler.transform(cdf2)
>>> print(cdf2_scaled.open())
    a    b
0  0.5  0.5
1  1.5  2.5

Attributes of fitted models:

  • n_features_in: number of input features

  • feature_names_in: input feature names

  • min: (encrypted) per-feature adjustment for minimum

  • scale: (encrypted) per-feature relative scaling
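
The min and scale attributes are encrypted; continuing the doctest above, a short sketch of revealing them via open():

# Download the fitted scaler; the result is a dictionary of parameters and
# attributes, including the otherwise encrypted min and scale values
opened = scaler.open()
print(opened)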

fit(X, y=None, *, features='all', **query_args)

Compute the minimum and maximum to be used for later scaling

Parameters:
  • X (DataFrame) – Training data

  • y (any) – Ignored

  • features ("all" (default) or list of feature names) – Feature (column) names to normalize

Returns:

Fitted model

Return type:

MinMaxScaler

fit_transform(X, y=None, *, features='all', **query_args)

Fit to data, then transform it

Parameters:
  • X (DataFrame) – Training data

  • y (any) – Ignored

  • features ("all" (default) or list of feature names) – Feature (column) names to normalize

Returns:

Normalized data. All columns (also non-normalized ones) are converted to fixed point.

Return type:

DataFrame

transform(X, **query_args)

Scale features of X according to computed scaling

Parameters:

X (DataFrame) – Data to normalize

Returns:

Normalized data. All columns (also non-normalized ones) are converted to fixed point.

Return type:

DataFrame

crandas.crlearn.utils.min_max_normalize(table, columns=None, **query_args)

Apply min-max normalization on columns of a table, to get values in [0, 1]

NOTE: this function is deprecated. Please use MinMaxScaler() instead.

Parameters:
  • table (DataFrame) – table to normalize

  • columns (list of strings, optional) – columns that should be normalized. If None (the default), all columns are normalized; columns not in this list remain untouched

  • query_args – See Query Arguments

Returns:

new table with normalized columns

Return type:

DataFrame

Linear regression

crandas.crlearn.linear_model.CLinearRegression

Alias for LinearRegression (use of this alias is deprecated)

class crandas.crlearn.linear_model.LinearRegression(alpha=0.0, *, fit_intercept=True, copy_X=True, n_jobs=None, positive=False, **query_args)

Bases: Model

Linear ridge regression model corresponding to the scikit-learn Ridge class (see https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html).

Parameters:

  • alpha: regularization strength (see scikit-learn documentation); defaults to 0.0

Other constructor parameters are for compatibility with scikit-learn and cannot be overridden.

Attributes:

  • n_features_in_: number of input features

  • feature_names_in_: input feature names

  • beta_: (encrypted) fitted parameters (intercept and respective feature coefficients)

  • standard_error_: (encrypted) standard error of each fitted parameter

  • singular_: boolean indicating whether singularity of the model was detected

fit(X, y, **query_args)

Fit a Linear Regression model on the data

Parameters:
Return type:

self

get_beta(**query_args)

Get the fitted parameters (i.e. intercept and feature coefficients) as a table

This function is deprecated; instead, use Model.open() to open the model, and use the returned beta_ attribute.

predict(X, **query_args)

Make predictions on a dataset using a linear regression model

Note: this returns predictions on the target, not probabilities!

Parameters:
Returns:

table containing the column consisting of the predicted target values

Return type:

DataFrame

score(X, y, **query_args)

Scores the linear regression model using the R^2 metric

Parameters:
Return type:

self
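
A minimal usage sketch with illustrative data:

import crandas as cd
from crandas.crlearn.linear_model import LinearRegression

X = cd.DataFrame({"x1": [1, 2, 3, 4], "x2": [2, 1, 4, 3]}, auto_bounds=True)
y = cd.DataFrame({"y": [3, 3, 7, 7]}, auto_bounds=True)

reg = LinearRegression(alpha=0.1)
reg.fit(X, y)

# Predict on new data using the same feature names as during fitting
X_new = cd.DataFrame({"x1": [5, 6], "x2": [6, 5]}, auto_bounds=True)
print(reg.predict(X_new).open())

# The fitted coefficients (beta_) are encrypted; open() reveals them
print(reg.open())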

crandas.crlearn.linear_model.Ridge(alpha=1.0, *, fit_intercept=True, copy_X=True, max_iter=None, tol=None, solver='cholesky', positive=False, random_state=None, **params_and_query_args)

Create a new ridge regression model (LinearRegression) with given alpha (1.0 by default)

Other parameters are for compatibility with scikit-learn and cannot be overridden.

Logistic regression

crandas.crlearn.logistic_regression.CLogisticRegression

Alias for LogisticRegression (use of this alias is deprecated)

class crandas.crlearn.logistic_regression.LogisticRegression(penalty='l2', *, optimizer='lbfgs', type='binomial', dual=False, tol=0.0001, C=1.0, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None, solver=None, max_iter=10, verbose=0, warm_start=False, n_jobs=None, l1_ratio=None, **query_args)

Bases: Model

Logistic regression classifier with the same parameters as the scikit-learn LogisticRegression class.

See https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html for its parameters.

Parameters:
  • type (string) – model type: "binomial", "multinomial", or "ordinal"

  • optimizer (Optimizer) – optimizer used to fit the model (see crandas.crlearn.optimizer.OptimizerParams)

  • max_iter (int) – number of iterations to perform

  • warm_start (bool) – whether to continue fitting from the previous optimizer state

Other constructor parameters have the same meaning as in scikit-learn but cannot be changed from their defaults.

feature_names_in_

input feature names

n_classes_

number of output classes

Type:

int

feature_name_out_

output feature name

optimizer_

attributes of the optimizer used to fit the model (see crandas.crlearn.optimizer.OptimizerAttributes)

beta_

(encrypted) fitted parameters (intercept(s) and coefficients)

fit(X, y, *, n_classes=None, sample_weight=None, **query_args)

Fit a Logistic Regression model on the data

Parameters:
  • X (DataFrame) – Training data

  • y (DataFrame) – Target data (should have only 1 column)

  • n_classes (int or None) – Number of output classes (categories). For binomial models, if not given, n_classes is assumed to be equal to two. For other models, if not given, the number of classes is derived from the metadata of y.

  • sample_weight (None) – Not supported

  • query_args – See Query Arguments

Returns:

self

Return type:

LogisticRegression

from_beta(beta, *, type='binomial', n_classes=2, feature_names_in, feature_name_out='out')

Upload a pre-fitted logistic regression model

Parameters:
  • beta (list[float]) – Fitted parameters

  • type (str, default "binomial") – Type of model ("binomial"/"multinomial"/"ordinal")

  • n_classes (int, default 2) – Number of classes

  • feature_names_in (list[str]) – Input feature names

  • feature_name_out (str, default "out") – Output feature name

Returns:

Logistic regression model with given parameters

Return type:

LogisticRegression
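
A sketch of uploading a pre-fitted binomial model, called on the class like from_opened(); the coefficient values and feature names below are purely illustrative, and beta is assumed to contain the intercept followed by one coefficient per input feature (matching the beta_ attribute):

from crandas.crlearn.logistic_regression import LogisticRegression

model = LogisticRegression.from_beta(
    beta=[-1.2, 0.8, 0.4],          # intercept, then per-feature coefficients
    type="binomial",
    n_classes=2,
    feature_names_in=["age", "income"],
    feature_name_out="default",
)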

predict(X, decision_boundary=0.5, **query_args)

Make (binary) predictions on a dataset using a logistic regression model

Note: this returns binary predictions, not probabilities!

Parameters:
  • X (DataFrame) – predictor variables

  • decision_boundary (float) – number between 0 and 1; records with a predicted probability below this value are classified as 0, and records with a probability greater than or equal to it as 1

  • query_args – See Query Arguments

Returns:

table containing the column consisting of the predicted target values

Return type:

DataFrame

predict_proba(X, **query_args)

Make (probability) predictions on a dataset using a logistic regression model

Note: this returns probabilities, not binary predictions

Parameters:
Returns:

table with columns representing predicted class probabilities per input record

Return type:

DataFrame
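
A minimal usage sketch with illustrative data:

import crandas as cd
from crandas.crlearn.logistic_regression import LogisticRegression

X = cd.DataFrame({"x1": [1, 2, 3, 4, 5, 6], "x2": [1, 0, 1, 0, 1, 0]}, auto_bounds=True)
y = cd.DataFrame({"y": [0, 0, 0, 1, 1, 1]}, auto_bounds=True)

clf = LogisticRegression(max_iter=10)
clf.fit(X, y, n_classes=2)

# Class predictions (0/1) and class probabilities for new data
X_new = cd.DataFrame({"x1": [2, 5], "x2": [1, 0]}, auto_bounds=True)
print(clf.predict(X_new, decision_boundary=0.5).open())
print(clf.predict_proba(X_new).open())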

Random forest classifier

crandas.crlearn.ensemble.CRandomForestClassifier

Alias for RandomForestClassifier (use of this alias is deprecated)

class crandas.crlearn.ensemble.RandomForestClassifier(n_estimators=10, *, max_depth=4, bootstrap=True, max_features='sqrt', max_samples=0.3, criterion='gini', random_state=None, warm_start=False, class_weight=None, ccp_alpha=0.0, monotonic_cst=0, **query_args)

Bases: Model

Random forest classifier

Random forest classifier with an interface similar to scikit-learn's RandomForestClassifier. See https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html.

Features can be ordinal (node “val <= T”) or categorical (node “val == T”), as specified when fitting. Output labels are categorical values 0, 1, …, M, where the number M is derived from the column metadata, e.g., use labels.astype({"labels": "int[min=0,max=2]"}) to set M=2.

Configurable parameters:

  • n_estimators: number of trees (integer; default: 10)

  • max_depth: depth (number of layers of internal nodes) per tree (integer; default: 4)

  • bootstrap: whether to use bootstrapping, i.e., training respective trees on respective samples (drawn with replacement) from the input data (boolean; default: True)

  • max_features: number of features to consider per split (integer number / float fraction / "sqrt" / "log2" / "all"; default: "sqrt")

  • max_samples: number of samples per tree if bootstrapping is used (integer number/float fraction; default: 0.3)

The constructor parameters criterion, random_state, warm_start, class_weight, ccp_alpha, monotonic_cst have the same meaning as in sklearn but cannot be changed from their defaults.

Other sklearn parameters are not applicable to the present implementation. Specifically: options min_samples_split, min_samples_leaf, min_weight_fraction_leaf, max_leaf_nodes, min_impurity_decrease are not applicable since the current implementation trains all trees up to their maximum depth max_depth. Generalization scores are not provided, so oob_score is not supported. Also parameters n_jobs and verbose for controlling the fitting process are not available.

Attributes of fitted models:

  • n_features_in: number of input features

  • feature_names_in: input feature names

  • feature_types_in: column types of input feature columns

  • n_classes: number of classes

  • feature_name_out: name of output feature

  • depths: depths of respective trees

  • nodes_featureids: (encrypted) one-hot encoded features selected per internal node

  • nodes_values: (encrypted) threshold values per internal node

  • nodes_modes: (encrypted) mode per internal node: discrete or continuous

  • class_weights: (encrypted) class probabilities per leaf node

Implementation notes:

The implemented random forest classifier uses the same training technique as sklearn, except that trees are always trained up to their maximum depth max_depth (for this reason, options min_samples_split, min_samples_leaf, min_weight_fraction_leaf, max_leaf_nodes, min_impurity_decrease are not applicable).

fit(X, y, *, n_classes=None, categorical_features=None, max_categories=None, **query_args)

Build a forest of trees from the training set

The dataset X can contain any mix of ordinal features (nodes val <= T) and categorical features (nodes val == T). Features are considered ordinal unless specified in the list of categorical features. Fitting categorical features is considerably more efficient but uses more memory, so the number of distinct categories is limited by the server. This limit can be overridden using the max_categories parameter.

Parameters:
  • X (DataFrame) – Training data

  • y (DataFrame) – Target data (should have only 1 column)

  • n_classes (int or None) – Number of output classes (categories). If None, the number of classes is derived from the metadata of y.

  • categorical_features (None or list of str) – Column names of columns that are considered to contain categorical features

  • max_categories (None or int) – If given, override the engine’s maximum number of categories per categorical feature

  • query_args – See Query Arguments

Return type:

self

open_to_graphs(**query_args)

Open the random forest into a list of pydot.Dot instances

To plot a single tree, you can select a particular index in the returned list, for example:

# Open the forest and select the first tree as a pydot.Dot graph
graph = model.open_to_graphs()[0]
# Write it to a PNG file and display it (e.g., in a Jupyter notebook)
graph.write_png("out.png")
from IPython.display import Image
Image("out.png")

predict(X, **query_args)

Predict class for X

Parameters:
Returns:

table with predicted class per input record

Return type:

DataFrame

predict_proba(X, **query_args)

Predict class probabilities for X

Parameters:
Returns:

table with columns representing predicted class probabilities per input record

Return type:

DataFrame
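
A minimal usage sketch with illustrative data; region is treated as a categorical feature and age as an ordinal one:

import crandas as cd
from crandas.crlearn.ensemble import RandomForestClassifier

X = cd.DataFrame(
    {"age": [25, 32, 47, 51, 62, 23], "region": [0, 1, 2, 1, 0, 2]},
    auto_bounds=True,
)
# Set the label metadata so that the number of classes (3) can be derived from it
y = cd.DataFrame({"labels": [0, 1, 2, 1, 0, 2]}, auto_bounds=True)
y = y.astype({"labels": "int[min=0,max=2]"})

clf = RandomForestClassifier(n_estimators=10, max_depth=4)
clf.fit(X, y, categorical_features=["region"])

print(clf.predict(X).open())        # predicted class per record
print(clf.predict_proba(X).open())  # per-class probabilities per record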

k-nearest neighbours regressor

class crandas.crlearn.neighbors.KNeighborsRegressor(n_neighbors=5, *, weights='uniform', algorithm='auto', p=2, metric='minkowski', metric_weights=None)

Bases: object

Regression based on k-nearest neighbors, with usage similar to scikit-learn's KNeighborsRegressor class.

The target is predicted by local interpolation of the targets associated with the nearest neighbors in the training set.

https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm

Parameters:
  • n_neighbors (int, default=5) – Number of neighbors to use.

  • p (int, default=2) – Power parameter for the Minkowski metric. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used. Currently, integer values between 1 and 5 are supported.

  • metric_weights (DataFrame, default=None) –

    Weights given to the different columns for the metric. The per-column differences between records are multiplied by the corresponding factors given in metric_weights. This is equivalent to multiplying each column by its corresponding weight.

    None means no extra factors, equivalent to all weights being 1.

Notes

Warning

Regarding the nearest neighbors algorithm: if two neighbors, neighbor k+1 and neighbor k, have identical distances but different labels, the result will depend on the ordering of the training data.

fit(X, y)

Fit the k-nearest neighbors regressor from the training dataset.

Parameters:
  • X (DataFrame) – Predictor variables.

  • y (DataFrame) – Response variable (should have only 1 column).

Returns:

self

Return type:

KNeighborsRegressor

predict_value(X, **query_args)

Predict the target value for the provided data.

Parameters:
Returns:

y – Predicted value.

Return type:

ReturnValue
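
A minimal usage sketch with illustrative data:

import crandas as cd
from crandas.crlearn.neighbors import KNeighborsRegressor

X = cd.DataFrame({"x1": [1, 2, 3, 4, 5], "x2": [5, 4, 3, 2, 1]}, auto_bounds=True)
y = cd.DataFrame({"y": [10, 20, 30, 40, 50]}, auto_bounds=True)

knn = KNeighborsRegressor(n_neighbors=3, p=2)
knn.fit(X, y)

# Predict the target for new records from their 3 nearest training neighbors
X_new = cd.DataFrame({"x1": [2, 4], "x2": [4, 2]}, auto_bounds=True)
y_pred = knn.predict_value(X_new)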

Optimizer settings

Settings of optimizers used, e.g., in logistic regression models. Whenever a value for a parameter is not given, the engine uses a default value if one is available.
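
As described under OptimizerParams below, an optimizer can be given either by name or as a parameter object. A sketch of both forms for a logistic regression model (the stepsize value is only illustrative):

from crandas.crlearn.logistic_regression import LogisticRegression
from crandas.crlearn.optimizer import Adam, Constant

# Optimizer given by name; the engine uses its default parameters
clf1 = LogisticRegression(optimizer="lbfgs", max_iter=10)

# Optimizer given as a parameter object, with an explicit constant stepsize
clf2 = LogisticRegression(
    optimizer=Adam(stepsize_schedule=Constant(alpha=0.01)),
    max_iter=20,
)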

crandas.crlearn.optimizer.Adam

alias of AdamOptimizerParams

crandas.crlearn.optimizer.AdamDecaying

alias of AdamDecayingStepsizeParams

pydantic model crandas.crlearn.optimizer.AdamDecayingStepsizeAttributes

Bases: ModelWithMasking

Adam decaying stepsize attributes

field alpha: Fxp128Masked | Fxp128Unmasked [Required]

Decaying stepsize attribute

field iteration: int [Required]

Current iteration

pydantic model crandas.crlearn.optimizer.AdamDecayingStepsizeParams

Bases: BaseModel

Adam decaying stepsize parameters

field alpha: MaybePlaceholder[float] | None = None

Decaying step size parameter

pydantic model crandas.crlearn.optimizer.AdamOptimizerAttributes

Bases: ModelWithMasking

Adam optimizer attributes

field beta1: Fxp128Masked | Fxp128Unmasked [Required]

beta1 attribute

field beta2: Fxp128Masked | Fxp128Unmasked [Required]

beta2 attribute

field m: Fxp128Masked | Fxp128Unmasked [Required]

Internal state variable m

field stepsize_schedule: Annotated[LineSearchStepsizeAttributes | CurvatureAdaptiveStepsizeAttributes | ConstantStepsizeAttributes | AdamDecayingStepsizeAttributes, FieldInfo(annotation=NoneType, required=True, discriminator='stepsizeattributes_type')] [Required]

Stepsize attributes

field v: Fxp128Masked | Fxp128Unmasked [Required]

Internal state variable v

field vhat: Fxp128Masked | Fxp128Unmasked [Required]

Internal state variable vhat

pydantic model crandas.crlearn.optimizer.AdamOptimizerParams

Bases: BaseModel

Adam optimizer parameters

field amsgrad: MaybePlaceholder[bool] | None = None

Whether to use AMSGrad

field beta1: MaybePlaceholder[float] | None = None

beta1 parameter

field beta2: MaybePlaceholder[float] | None = None

beta2 parameter

field epsilon: MaybePlaceholder[float] | None = None

epsilon parameter

field stepsize_schedule: MaybePlaceholder[Annotated[LineSearchStepsizeParams | CurvatureAdaptiveStepsizeParams | ConstantStepsizeParams | AdamDecayingStepsizeParams, FieldInfo(annotation=NoneType, required=True, discriminator='stepsizeparams_type')]] | None = None

Stepsize to use

crandas.crlearn.optimizer.Adamax

alias of AdamaxOptimizerParams

pydantic model crandas.crlearn.optimizer.AdamaxOptimizerAttributes

Bases: ModelWithMasking

Adamax optimizer attributes

field beta1: Fxp128Masked | Fxp128Unmasked [Required]

beta1 attribute

field beta2: Fxp128Masked | Fxp128Unmasked [Required]

beta2 attribute

field m: Fxp128Masked | Fxp128Unmasked [Required]

Internal state m

field stepsize_schedule: Annotated[LineSearchStepsizeAttributes | CurvatureAdaptiveStepsizeAttributes | ConstantStepsizeAttributes | AdamDecayingStepsizeAttributes, FieldInfo(annotation=NoneType, required=True, discriminator='stepsizeattributes_type')] [Required]

Stepsize attributes

field u: Fxp128Masked | Fxp128Unmasked [Required]

Internal state u

pydantic model crandas.crlearn.optimizer.AdamaxOptimizerParams

Bases: BaseModel

Adamax optimizer parameters

field beta1: MaybePlaceholder[float] | None = None

beta1 parameter

field beta2: MaybePlaceholder[float] | None = None

beta2 parameter

field epsilon: MaybePlaceholder[float] | None = None

epsilon parameter

field stepsize_schedule: MaybePlaceholder[Annotated[LineSearchStepsizeParams | CurvatureAdaptiveStepsizeParams | ConstantStepsizeParams | AdamDecayingStepsizeParams, FieldInfo(annotation=NoneType, required=True, discriminator='stepsizeparams_type')]] | None = None

Stepsize parameters

crandas.crlearn.optimizer.Constant

alias of ConstantStepsizeParams

pydantic model crandas.crlearn.optimizer.ConstantStepsizeAttributes

Bases: ModelWithMasking

Constant stepsize attributes

field alpha: Fxp128Masked | Fxp128Unmasked [Required]

Step size attribute

field iteration: int [Required]

Current iteration

pydantic model crandas.crlearn.optimizer.ConstantStepsizeParams

Bases: BaseModel

Constant stepsize parameters

field alpha: MaybePlaceholder[float] | None = None

Step size parameter

crandas.crlearn.optimizer.CurvatureAdaptive

alias of CurvatureAdaptiveStepsizeParams

pydantic model crandas.crlearn.optimizer.CurvatureAdaptiveStepsizeAttributes

Bases: BaseModel

Curvature adaptive stepsize attributes

field iteration: int [Required]

Current iteration

pydantic model crandas.crlearn.optimizer.CurvatureAdaptiveStepsizeParams

Bases: BaseModel

Curvature adaptive stepsize parameters

crandas.crlearn.optimizer.GD

alias of GradientDescentOptimizerParams

pydantic model crandas.crlearn.optimizer.GradientDescentOptimizerAttributes

Bases: ModelWithMasking

Gradient descent optimizer attributes

field momentum: Fxp128Masked | Fxp128Unmasked [Required]

Momentum attribute

field stepsize_schedule: Annotated[LineSearchStepsizeAttributes | CurvatureAdaptiveStepsizeAttributes | ConstantStepsizeAttributes | AdamDecayingStepsizeAttributes, FieldInfo(annotation=NoneType, required=True, discriminator='stepsizeattributes_type')] [Required]

Stepsize attributes

field v: Fxp128Masked | Fxp128Unmasked [Required]

Internal state v

pydantic model crandas.crlearn.optimizer.GradientDescentOptimizerParams

Bases: BaseModel

Gradient descent optimizer parameters

field momentum: MaybePlaceholder[float] | None = None

Momentum parameter

field stepsize_schedule: MaybePlaceholder[Annotated[LineSearchStepsizeParams | CurvatureAdaptiveStepsizeParams | ConstantStepsizeParams | AdamDecayingStepsizeParams, FieldInfo(annotation=NoneType, required=True, discriminator='stepsizeparams_type')]] | None = None

Stepsize parameters

crandas.crlearn.optimizer.Lbfgs

alias of LbfgsOptimizerParams

pydantic model crandas.crlearn.optimizer.LbfgsOptimizerAttributes

Bases: ModelWithMasking

LBFGS optimizer attributes

field last_gradient: Fxp128Masked | Fxp128Unmasked [Required]

Internal state last_gradient

field rho: Fxp128Masked | Fxp128Unmasked [Required]

Internal state rho

field s: Fxp128Masked | Fxp128Unmasked [Required]

Internal state s

field stepsize_schedule: Annotated[LineSearchStepsizeAttributes | CurvatureAdaptiveStepsizeAttributes | ConstantStepsizeAttributes | AdamDecayingStepsizeAttributes, FieldInfo(annotation=NoneType, required=True, discriminator='stepsizeattributes_type')] [Required]

Stepsize attributes

field y: Fxp128Masked | Fxp128Unmasked [Required]

Internal state y

pydantic model crandas.crlearn.optimizer.LbfgsOptimizerParams

Bases: BaseModel

LBFGS optimizer parameters

field history: MaybePlaceholder[int] | None = None

History

field stepsize_schedule: MaybePlaceholder[Annotated[LineSearchStepsizeParams | CurvatureAdaptiveStepsizeParams | ConstantStepsizeParams | AdamDecayingStepsizeParams, FieldInfo(annotation=NoneType, required=True, discriminator='stepsizeparams_type')]] | None = None

Stepsize schedule

crandas.crlearn.optimizer.LineSearch

alias of LineSearchStepsizeParams

pydantic model crandas.crlearn.optimizer.LineSearchStepsizeAttributes

Bases: BaseModel

Line search step size attributes

field iteration: int [Required]

Current iteration

pydantic model crandas.crlearn.optimizer.LineSearchStepsizeParams

Bases: BaseModel

Line search step size parameters

field end: MaybePlaceholder[int] | None = None

End parameter

field start: MaybePlaceholder[int] | None = None

Start parameter

crandas.crlearn.optimizer.OptimizerAttributes

Optimizer attributes

alias of Annotated[AdamOptimizerAttributes | AdamaxOptimizerAttributes | GradientDescentOptimizerAttributes | LbfgsOptimizerAttributes, FieldInfo(annotation=NoneType, required=True, discriminator='optimizerattributes_type')]

pydantic model crandas.crlearn.optimizer.OptimizerParams

Bases: RootModel

Optimizer parameters

Optimizer parameters can be specified as a name of a supported optimizer (adam, adamax, gd, or lbfgs) or as an instance of an optimizer parameter type, e.g., AdamaxOptimizerParams.

crandas.crlearn.optimizer.StepsizeScheduleAttributes

Stepsize attributes

alias of Annotated[LineSearchStepsizeAttributes | CurvatureAdaptiveStepsizeAttributes | ConstantStepsizeAttributes | AdamDecayingStepsizeAttributes, FieldInfo(annotation=NoneType, required=True, discriminator='stepsizeattributes_type')]

crandas.crlearn.optimizer.StepsizeScheduleParams

Stepsize parameters

alias of Annotated[LineSearchStepsizeParams | CurvatureAdaptiveStepsizeParams | ConstantStepsizeParams | AdamDecayingStepsizeParams, FieldInfo(annotation=NoneType, required=True, discriminator='stepsizeparams_type')]