Native support of LightGBM models in Vespa

Training and deploying a LightGBM model with Vespa features from python

Thiago G. Martins
4 min read · Jul 15, 2022

The main goal of this tutorial is to deploy a trained LightGBM model in a Vespa application. You can replicate this tutorial by following this Jupyter notebook.


The following tasks will be accomplished throughout this tutorial:

  1. Train a LightGBM classification model with variable names supported by Vespa.
  2. Create Vespa application package files and export them to an application folder.
  3. Export the trained LightGBM model to the Vespa application folder.
  4. Deploy the Vespa application using the application folder.
  5. Feed data to the Vespa application.
  6. Assert that the LightGBM predictions from the deployed model are correct.

Setup

Install and load required packages.

Create data

Generate a toy dataset to follow along. Note that we set the column names in a format that Vespa understands. query(value) means that the user will send a parameter named value along with the query. attribute(field) means that field is a document attribute defined in a schema.

In the example below, we have a query parameter named value and two document attributes, numeric and categorical. If we want LightGBM to handle categorical variables, we should use dtype="category" when creating the data frame.
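A sketch of such a data frame (the column names follow the Vespa conventions described above; the values are arbitrary toy data):

```python
import numpy as np
import pandas as pd

np.random.seed(42)  # reproducible toy data

df = pd.DataFrame({
    "query(value)": np.random.uniform(size=100),        # query parameter
    "attribute(numeric)": np.random.uniform(size=100),  # numeric document attribute
    # dtype="category" so that LightGBM treats this column as categorical
    "attribute(categorical)": pd.Series(
        np.random.choice(["a", "b", "c"], size=100), dtype="category"
    ),
})
```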


We generate the target variable as a function of the three features defined above:
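The notebook's exact formula is not shown here, so the step below is an illustrative stand-in: a noisy linear combination of the three features, thresholded into a binary target.

```python
import numpy as np
import pandas as pd

np.random.seed(42)

# Features as created in the previous step.
value = np.random.uniform(size=100)
numeric = np.random.uniform(size=100)
categorical = pd.Series(np.random.choice(["a", "b", "c"], size=100), dtype="category")

# Noisy linear combination of the features, thresholded into 0/1 labels.
score = (
    value
    + numeric
    + (categorical == "a").astype(float)
    + np.random.normal(scale=0.1, size=100)
)
targets = (score > score.mean()).astype(float)
```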

0     1.0
1     0.0
2     1.0
3     0.0
4     1.0
     ...
95    0.0
96    0.0
97    0.0
98    0.0
99    1.0
Length: 100, dtype: float64

Fit a LightGBM model

Train a LightGBM model with a binary loss function. The focus of this tutorial is model deployment, not finding optimal training parameters for the LightGBM model.

[LightGBM] [Info] Number of positive: 47, number of negative: 53
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000943 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 74
[LightGBM] [Info] Number of data points in the train set: 100, number of used features: 3
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.470000 -> initscore=-0.120144
[LightGBM] [Info] Start training from score -0.120144

Vespa application package

Create a Vespa application package. The model expects two document attributes, numeric and categorical. We can use the model in the first-phase ranking with the lightgbm rank feature.

The code below uses the Python library pyvespa, which implements a Vespa Python API designed for data scientists.

The output below shows what the search definition file of our application looks like:

schema lightgbm {
    document lightgbm {
        field id type string {
            indexing: summary | attribute
        }
        field numeric type double {
            indexing: summary | attribute
        }
        field categorical type string {
            indexing: summary | attribute
        }
    }
    rank-profile classify {
        first-phase {
            expression: lightgbm('lightgbm_model.json')
        }
    }
}

We can export the application package files to disk:

Note that we don’t yet have any models under the models folder. We need to export the LightGBM model that we trained earlier to models/lightgbm_model.json.

!tree lightgbm
lightgbm
├── files
├── models
├── schemas
│   └── lightgbm.sd
├── search
│   └── query-profiles
│       ├── default.xml
│       └── types
│           └── root.xml
└── services.xml

6 directories, 4 files

Export the model

Now we can see that the model is where Vespa expects it to be:

!tree lightgbm
lightgbm
├── files
├── models
│   └── lightgbm_model.json
├── schemas
│   └── lightgbm.sd
├── search
│   └── query-profiles
│       ├── default.xml
│       └── types
│           └── root.xml
└── services.xml

6 directories, 5 files

Deploy the application

Deploy the application package from disk with Docker:

Waiting for configuration server, 0/300 seconds...
Waiting for configuration server, 5/300 seconds...
Waiting for configuration server, 10/300 seconds...
Waiting for application status, 0/300 seconds...
Waiting for application status, 5/300 seconds...
Waiting for application status, 10/300 seconds...
Waiting for application status, 15/300 seconds...
Waiting for application status, 20/300 seconds...
Waiting for application status, 25/300 seconds...
Waiting for application status, 30/300 seconds...
Finished deployment.

It is also possible (and highly recommended) to deploy directly to Vespa Cloud.

Feed the data

Feed the simulated data. To feed the data in batches, we need to create a list of dictionaries containing id and fields keys:
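Building that batch from the toy data frame might look like this (the data frame is recreated inline so the snippet runs on its own; feeding itself is done with pyvespa, e.g. app.feed_batch):

```python
import numpy as np
import pandas as pd

np.random.seed(42)

# Toy data as created earlier.
df = pd.DataFrame({
    "query(value)": np.random.uniform(size=100),
    "attribute(numeric)": np.random.uniform(size=100),
    "attribute(categorical)": pd.Series(
        np.random.choice(["a", "b", "c"], size=100), dtype="category"
    ),
})

# One dict per document: an `id` plus the schema fields under `fields`.
feed_batch = [
    {
        "id": str(idx),
        "fields": {
            "id": str(idx),
            "numeric": float(row["attribute(numeric)"]),
            "categorical": str(row["attribute(categorical)"]),
        },
    }
    for idx, row in df.iterrows()
]
# Then feed, e.g.: app.feed_batch(batch=feed_batch)
```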

Successful documents fed: 100/100.
Batch progress: 1/1.

Model predictions

Predict with the trained LightGBM model so that we can later compare them with the predictions returned by Vespa.
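The offline predictions come straight from the booster; a sketch, assuming `model` and `df` from the earlier steps:

```python
def offline_predictions(model, df):
    """Offline scores from the trained LightGBM booster (sketch).

    Under the binary objective, predict() returns the positive-class
    probability -- the same quantity Vespa's lightgbm feature computes.
    """
    return model.predict(df)
```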


Query

Create a compute_vespa_relevance function that takes a document id and a query value and returns the prediction of the deployed LightGBM model.
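A sketch of such a function, assuming `app` is the pyvespa connection returned by deployment (the YQL filter and parameter names are illustrative):

```python
def compute_vespa_relevance(app, document_id, query_value):
    """Query Vespa for one document and return its LightGBM score (sketch)."""
    result = app.query(body={
        "yql": "select * from sources * where id contains '{}'".format(document_id),
        "ranking": "classify",                       # the rank profile defined above
        "ranking.features.query(value)": query_value,  # feeds query(value) to the model
        "hits": 1,
    })
    return result.hits[0]["relevance"]
```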

0.4555830422953402

We can now loop through the features to compute a Vespa prediction for every data point and compare them with the predictions made by the model outside Vespa.


Compare model and Vespa predictions

Predictions from the model should be equal to predictions from Vespa, showing that the model was correctly deployed.
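The comparison itself reduces to an element-wise check within floating-point tolerance, e.g.:

```python
import numpy as np

def predictions_match(model_predictions, vespa_predictions, atol=1e-6):
    """True when the offline and Vespa scores agree within tolerance (sketch)."""
    return np.allclose(model_predictions, vespa_predictions, atol=atol)
```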

Clean environment

Delete the application folder and remove the Docker container running the Vespa app.
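The cleanup can be sketched as follows (the container calls assume `vespa_docker` is the VespaDocker instance from the deployment step, so they are shown as comments):

```python
import os
import shutil

# Delete the exported application folder.
shutil.rmtree("lightgbm", ignore_errors=True)

# Stop and remove the Docker container running the Vespa app, e.g.:
#   vespa_docker.container.stop()
#   vespa_docker.container.remove()
```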
