VertaModelBase

class verta.registry.VertaModelBase(artifacts)

Abstract base class for Verta Standard Models.

Note

__init__() and predict() must be implemented by subclasses.

Parameters:

artifacts (dict of str to str) – A mapping of artifact keys to filepaths. This will be provided to the deployed model based on artifact keys specified through RegisteredModel.create_standard_model().

Examples

import pickle
import numpy as np
from verta.registry import VertaModelBase, verify_io
from verta.environment import Python

class Model(VertaModelBase):
    def __init__(self, artifacts):
        with open(artifacts["np_matrix"], "rb") as f:
            self._transform = pickle.load(f)

    @verify_io
    def predict(self, input):
        input = np.array(input)

        # cast back to a list so the output is JSON-serializable
        return np.matmul(input, self._transform).tolist()

model_ver = reg_model.create_standard_model(
    Model,
    environment=Python(["numpy"]),
    artifacts={"np_matrix": arr},
)

batch_predict(df, headers: Dict[str, str])

Produce an output from df.

New in version 0.24.0: headers parameter.

This method is called when batch predictions are made against a Verta endpoint.

See our product documentation on batch prediction for more information about how this method is used.

Note

The headers parameter is optional: overriding batch_predict methods do not need to include this parameter. For usage details, see https://docs.verta.ai/verta/deployment/guides/accessing-headers-from-predict.

Note

batch_predict() must be written to both receive and return pandas.DataFrames [1].

At this time, your subclass must still implement predict(), even if that implementation does nothing meaningful.

Parameters:
  • df (pandas.DataFrame) – Input data for the batch prediction.

  • headers (dict of str to str, optional) – Headers provided on the prediction request.

Returns:

pandas.DataFrame
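
Examples

A minimal sketch of a batch_predict() override, reusing the matrix-transform model from the class example above. The output column handling (reusing the input's index) and the headers default are illustrative assumptions; only the DataFrame-in, DataFrame-out contract and the need to still implement predict() come from this page.

import pickle

import numpy as np
import pandas as pd
from verta.registry import VertaModelBase

class BatchModel(VertaModelBase):
    def __init__(self, artifacts):
        with open(artifacts["np_matrix"], "rb") as f:
            self._transform = pickle.load(f)

    def predict(self, input):
        # predict() must still be implemented, even when batch_predict()
        # does the real work
        return np.matmul(np.array(input), self._transform).tolist()

    def batch_predict(self, df, headers=None):
        # receives a pandas.DataFrame and must return a pandas.DataFrame;
        # headers is optional and may be omitted from the signature
        result = np.matmul(df.to_numpy(), self._transform)
        return pd.DataFrame(result, index=df.index)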

model_test()

Test a model’s behavior for correctness.

Implement this method on your model—with any desired calls and assertions—to validate behavior and state.

See our product documentation on model verification for more information about how this method is used.

Note

If using model data logging (e.g. verta.runtime.log()), any calls here to the model’s predict() method must be wrapped in a verta.runtime.context. This also allows testing of expected logs.

Returns:

None – Returned values will be unused and discarded.

Raises:

Any – Raised exceptions will be propagated.

Examples

import pickle

import verta.runtime
from verta.registry import VertaModelBase, verify_io

class MyModel(VertaModelBase):
    def __init__(self, artifacts):
        with open(artifacts["sklearn_logreg"], "rb") as f:
            self.logreg = pickle.load(f)

    @verify_io
    def predict(self, input):
        verta.runtime.log("num_rows", len(input))
        return self.logreg.predict(input).tolist()

    def example(self):
        return [
            [71.67822567370767, 0.0, 0.0, 99.0, 0.0, 0.0, 0.0, 1.0, 0.0],
            [6.901547652701675, 0.0, 1887.0, 50.0, 0.0, 0.0, 0.0, 1.0, 0.0],
            [72.84132724180968, 0.0, 0.0, 40.0, 0.0, 0.0, 0.0, 0.0, 1.0],
        ]

    def model_test(self):
        # call predict(), capturing model data logs
        input = self.example()
        with verta.runtime.context() as ctx:
            output = self.predict(input)
        logs = ctx.logs()

        # check predict() output
        expected_output = [0, 1, 0]
        if output != expected_output:
            raise ValueError(f"expected output {expected_output}, got {output}")

        # check model data logs
        expected_logs = {"num_rows": len(input)}
        if logs != expected_logs:
            raise ValueError(f"expected logs {expected_logs}, got {logs}")

abstract predict(input, headers: Dict[str, str])

Produce an output from input.

New in version 0.24.0: headers parameter.

This method is called when requests are made against a Verta endpoint.

Note

The headers parameter is optional: overriding predict methods do not need to include this parameter. For usage details, see https://docs.verta.ai/verta/deployment/guides/accessing-headers-from-predict.

Note

It is recommended to use the verify_io() decorator to help ensure that your model’s input and output types will be fully compatible with the Verta platform as you iterate locally.

predict() must be written to both receive [2] and return [3] JSON-serializable objects (i.e. mostly basic Python types).

For example, to work with NumPy arrays, the input argument should be a Python list that is then passed to np.array() inside this function; the result must be cast back to a list (e.g. with tolist()) before returning. See the sketch at the end of this section.

Parameters:
  • input (any JSON-compatible Python type) – Model input.

  • headers (dict of str to str, optional) – Headers provided on the prediction request.

Returns:

any JSON-compatible Python type – Model output.

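Examples

A minimal sketch of a predict() override showing the list-to-array round trip and the optional headers parameter. The "x-experiment-group" header name and the fixed identity transform are illustrative assumptions, not part of the Verta API.

import numpy as np
from verta.registry import VertaModelBase, verify_io

class TransformModel(VertaModelBase):
    def __init__(self, artifacts):
        # no artifacts needed for this sketch
        self._transform = np.eye(3)

    @verify_io
    def predict(self, input, headers=None):
        # "x-experiment-group" is a made-up header name for illustration
        if headers and headers.get("x-experiment-group") == "debug":
            pass  # e.g. branch behavior or add extra logging here

        # JSON-serializable list in -> NumPy array for the math -> list back out
        arr = np.array(input)
        return np.matmul(arr, self._transform).tolist()

# local sanity check
model = TransformModel(artifacts={})
print(model.predict([[1.0, 2.0, 3.0]], headers={"x-experiment-group": "debug"}))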