## Things Get Serious

The model has been trained and we are ready to deploy it to Solr, but first it would be useful to have a better understanding of what we just created.
A LambdaMART model in a real-world scenario is a massive ensemble of regression trees, not the most readable structure for a human.
The more we understand the model, the easier it will be to find anomalies and to fix or improve it.
But the most important benefit of having a clearer picture of the training set and the model is that it can dramatically improve the communication with the business layer:

• What are the most important features in our domain?
• What kind of document should score high according to the model?
• Why is this document (feature vector) scoring that high?
These are only examples, but a lot of similar questions can arise, and we need the tools to answer them.

## Ltr Tools

This is how the Ltr tools project [1] was born.
The goal of the project is to use the power of Solr to visualise and understand the model.
It is a set of simple tools specifically designed for LambdaMART models, represented in the JSON format supported by the Bloomberg LTR Solr plugin.
Of course it is open source, so feel free to extend it by introducing additional models.
All the tools provided are meant to work with a Solr backend, in order to index data that we can later search easily.
The tools currently available provide support to:
- index the model in a Solr collection
- index the training set in a Solr collection
- print the top scoring leaves from a LambdaMART model

### Preparation

To use the Ltr tools you must proceed with these simple steps:
• set up the Solr backend - this will be a fresh Solr instance with 2 collections: models and trainingSet; the simple configuration is available in: ltr-tools/configuration
• gradle build - this will package the executable jar in: ltr-tools/ltr-tools/build/libs

### Usage

Let's briefly take a look at the parameters of the executable command line interface:

| Parameter | Description |
|---|---|
| -help | Print the help message |
| -tool <name> | The tool to execute. Possible values: modelIndexer, trainingSetIndexer, topScoringLeavesViewer |
| -solrURL <URL> | The Solr base URL to use for the search backend |
| -model <file> | The path to the model.json file |
| -topK <int> | The number of top scoring leaves to return (sorted by score, descending) |
| -trainingSet <file> | The path to the training set file |
| -features <file> | The path to the feature-mapping.json, a file containing a mapping between the feature id and the feature name |
| -categoricalFeatures <file> | The path to a file containing the list of categorical feature names |

N.B. all the following examples will assume the input model is a LambdaMART model, in the JSON format the Bloomberg Solr plugin expects.

## Model Indexer

Requirement: the backend Solr collection <models> must be UP & RUNNING

The Model Indexer is a tool that indexes a LambdaMART model in Solr to better visualise the structure of the tree ensemble.
In particular the tool will index each branch split of the trees belonging to the LambdaMART ensemble as a Solr document.
Let's take a look at the Solr schema:

configuration/solr/models/conf
 <field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false"/>
<field name="modelName" type="string" indexed="true" stored="true"/>
<field name="feature" type="string" indexed="true" stored="true" docValues="true"/>
<field name="threshold" type="double" indexed="true" stored="true" docValues="true"/>
...


So, given in input a LambdaMART model:

e.g. lambdaMARTModel1.json
{
  "class":"org.apache.solr.ltr.ranking.LambdaMARTModel",
  "name":"lambdaMARTModel1",
  "features":[
    {
      "name":"feature1"
    },
    {
      "name":"feature2"
    }
  ],
  "params":{
    "trees":[
      {
        "weight":1,
        "root":{
          "feature":"feature1",
          "threshold":0.5,
          "left":{
            "value":80
          },
          "right":{
            "feature":"feature2",
            "threshold":10.0,
            "left":{
              "value":50
            },
            "right":{
              "value":75
            }
          }
        }
      }
    ]
  }
}


N.B. a branch split is where the tree splits in two branches:

 "feature":"feature2",
"threshold":10.0,
"left":{
"value":50
},
"right":{
"value":75
}


A split happens on a threshold of the feature value.
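To make this concrete, here is a minimal sketch in Java (using SolrJ) of how each split could be turned into a Solr document with the fields of the schema above. It assumes the model JSON has already been parsed into nested Maps (e.g. with Jackson); class and method names are illustrative and do not belong to the actual ltr-tools code.

import java.util.Map;
import java.util.UUID;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.common.SolrInputDocument;

// Illustrative sketch: index one Solr document per branch split of a LambdaMART tree
public class SplitIndexerSketch {

    @SuppressWarnings("unchecked")
    public static void indexSplits(String modelName, Map<String, Object> node, SolrClient solr) throws Exception {
        if (node == null || !node.containsKey("feature")) {
            return; // a leaf only carries a "value", there is nothing to index
        }
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", UUID.randomUUID().toString());
        doc.addField("modelName", modelName);
        doc.addField("feature", node.get("feature"));
        doc.addField("threshold", node.get("threshold"));
        solr.add(doc);

        // recurse into both branches looking for further splits
        indexSplits(modelName, (Map<String, Object>) node.get("left"), solr);
        indexSplits(modelName, (Map<String, Object>) node.get("right"), solr);
    }
}

Calling indexSplits on the root of every tree in params/trees and then committing makes the splits searchable, which is what enables the facet queries shown below.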
We can use the tool to start the indexing process:

 java -jar ltr-tools-1.0.jar -tool modelIndexer -model /models/lambdaMARTModel1.json  -solrURL http://localhost:8983/solr/models


After the indexing process has finished we can access Solr and start searching!
e.g.
This query will return in the response, for each feature:

• the number of times the feature appears at a branch split
• the top 10 occurring thresholds for that feature
• the number of unique thresholds that appear in the model for that feature

 http://localhost:8983/solr/models/select?indent=on&q=*:*&wt=json&facet=true&json.facet={
Features: {
type: terms,
field: feature,
limit: -1,
facet: {
Popular_Thresholds: {
type: terms,
field: threshold,
limit: 10
},
uniques: "unique(threshold)"
}
}
}&rows=0&fq=modelName:lambdaMARTModel1


Let's see how it is possible to interpret the Solr response:

 facets": {
"count": 3479, //number of branch splits in the entire model 
"Features": {
"buckets": [
{
"val": "product_price",
"count": 317, //the feature "product_price" is occurring in the model in 317 splits  
"uniques": 28, //the feature "product_price" is occurring in the splits with 28 unique threshold values
"Popular_Thresholds": {
"buckets": [
{
"val": "250.0", //threshold value 
"count": 45 //the feature "product_price" is occurring in the splits 45 times with threshold "250.0"
},
{
"val": "350.0",
"count": 45
},
...


## TrainingSet Indexer

Requirement: the backend Solr collection <trainingSet> must be UP & RUNNING

The Training Set Indexer is a tool that indexes a Learning To Rank training set (in RankLib format) in Solr to better visualise the data.
In particular the tool will index each training sample of the training set as a Solr document.
Let's see the Solr schema:

configuration/solr/trainingSet/conf
<field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false"/>
<field name="relevancy" type="tdouble" indexed="true" stored="true" docValues="true"/>
<dynamicField name="cat_*" type="string" indexed="true" stored="true" docValues="true"/>
<dynamicField name="*" type="tdouble" indexed="true" stored="true" docValues="true"/>


As you can notice, the main point here is the definition of dynamic fields.
Indeed we don't know beforehand the names of the features, but we can distinguish between categorical features (which we can index as strings) and ordinal features (which we can index as doubles).

We now require 3 inputs:

1) the training set in the RankLib format:

e.g. training1.txt
1 qid:419267 1:300 2:4.0 3:1 6:1
4 qid:419267 1:250 2:4.5 4:1 7:1
5 qid:419267 1:450 2:5.0 5:1 6:1
2 qid:419267 1:200 2:3.5 3:1 8:1


2) the feature mapping to translate the feature id to a human-readable feature name

e.g. features-mapping1.json
 {"1":"product_price","2":"product_rating","3":"product_colour_red","4":"product_colour_green","5":"product_colour_blue","6":"product_size_S","7":"product_size_M","8":"product_size_L"}

N.B. the mapping must be a JSON object on a single line

This input file is optional; it is possible to index the feature ids directly as names.

3) the list of categorical features

e.g. categoricalFeatures1.txt
product_colour
product_size 

This list (one feature per line) tells the tool which features are categorical, so that the category can be indexed as a string value for the feature.
This input file is optional; it is possible to index the categorical features as binary one-hot encoded features.
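To make the transformation concrete, here is a minimal sketch in Java of how a single RankLib line could be mapped to the fields of a Solr document for the trainingSet collection. The field names follow the schema above (relevancy and the cat_* dynamic fields); the queryId field name and the helper itself are illustrative and do not belong to the actual ltr-tools code.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: map a RankLib training line to Solr fields for the trainingSet collection
public class TrainingSampleSketch {

    public static Map<String, Object> toSolrFields(String line,
                                                   Map<String, String> featureMapping,
                                                   List<String> categoricalFeatures) {
        Map<String, Object> fields = new HashMap<>();
        String[] tokens = line.trim().split("\\s+");
        fields.put("relevancy", Double.parseDouble(tokens[0]));      // e.g. 1
        fields.put("queryId", tokens[1].substring("qid:".length())); // e.g. 419267 (field name assumed)

        for (int i = 2; i < tokens.length; i++) {
            String[] pair = tokens[i].split(":");                    // e.g. "1:300"
            String featureName = featureMapping.getOrDefault(pair[0], pair[0]);
            double value = Double.parseDouble(pair[1]);

            boolean categorical = false;
            for (String categoricalFeature : categoricalFeatures) {
                // one hot encoded feature, e.g. product_colour_red -> cat_product_colour = red
                if (featureName.startsWith(categoricalFeature + "_")) {
                    if (value == 1.0) {
                        fields.put("cat_" + categoricalFeature, featureName.substring(categoricalFeature.length() + 1));
                    }
                    categorical = true;
                    break;
                }
            }
            if (!categorical) {
                fields.put(featureName, value);                      // ordinal feature, indexed as a double
            }
        }
        return fields;
    }
}

With the first sample of training1.txt and the mapping above, this would produce relevancy=1.0, product_price=300.0, product_rating=4.0, cat_product_colour=red and cat_product_size=S.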

To start the indexing process:

 java -jar ltr-tools-1.0.jar -tool trainingSetIndexer -trainingSet /trainingSets/training1.txt -features /featureMappings/feature-mapping1.json -categoricalFeatures /feature/categoricalFeatures1.txt -solrURL http://localhost:8983/solr/trainingSet


After the indexing process has finished we can access Solr and start searching!
e.g.
This query will return all the training samples, filtered by a categorical feature value and then faceted on the relevancy field.

This can give an indication of the distribution of the relevancy score in specific subsets of the training set.

 http://localhost:8983/solr/trainingSet/select?indent=on&q=*:*&wt=json&fq=cat_product_colour:red&rows=0&facet=true&facet.field=relevancy


N.B. this is a quick and dirty way to explore the training set. I strongly suggest using it only as a quick resource. Advanced data plotting is more suitable to visualise big data and identify patterns.

## Top Scoring Leaves Viewer

The top scoring leaves viewer is a tool to print the path of the top scoring leaves in the model.
Thanks to this tool it will be easier to answer questions like:
"What should a document (feature vector) look like to get a high score?"
The tool will simply visit the ensemble of trees in the model and keep track of the score of each leaf.

So, given in input a LambdaMART model:

e.g. lambdaMARTModel1.json
{
  "class":"org.apache.solr.ltr.ranking.LambdaMARTModel",
  "name":"lambdaMARTModel1",
  "features":[
    {
      "name":"feature1"
    },
    {
      "name":"feature2"
    }
  ],
  "params":{
    "trees":[
      {
        "weight":1,
        "root":{
          "feature":"feature1",
          "threshold":0.5,
          "left":{
            "value":80
          },
          "right":{
            "feature":"feature2",
            "threshold":10.0,
            "left":{
              "value":50
            },
            "right":{
              "value":75
            }
          }
        }
      },
      ...
    ]
  }
}

To start the process:

 java -jar ltr-tools-1.0.jar -tool topScoringLeavesViewer -model /models/lambdaMARTModel1.json -topK 10  

This will print the 10 top scoring leaves (with the related path in the tree):

1000.0 -> feature2 > 0.8, feature1 <= 100.0
200.0 -> feature2 <= 0.8,
80.0 -> feature1 <= 0.5,
75.0 -> feature1 > 0.5, feature2 > 10.0,
60.0 -> feature2 > 0.8, feature1 > 100.0,
50.0 -> feature1 > 0.5, feature2 <= 10.0,  
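Under the hood the traversal is a simple depth-first visit that accumulates, for each leaf, its value and the chain of conditions leading to it. Here is a minimal sketch in Java (illustrative, not the actual ltr-tools code), assuming the tree nodes have been parsed into nested Maps:

import java.util.List;
import java.util.Map;

// Illustrative sketch: collect every leaf of a tree together with the path of conditions reaching it
public class TopScoringLeavesSketch {

    // one leaf: its value and the conditions on the path from the root
    public record Leaf(double value, String path) { }

    @SuppressWarnings("unchecked")
    public static void collectLeaves(Map<String, Object> node, String path, List<Leaf> leaves) {
        if (!node.containsKey("feature")) { // a leaf only carries a "value"
            leaves.add(new Leaf(((Number) node.get("value")).doubleValue(), path));
            return;
        }
        String feature = (String) node.get("feature");
        Object threshold = node.get("threshold");
        collectLeaves((Map<String, Object>) node.get("left"), path + feature + " <= " + threshold + ", ", leaves);
        collectLeaves((Map<String, Object>) node.get("right"), path + feature + " > " + threshold + ", ", leaves);
    }
}

Collecting the leaves of every tree in the ensemble, sorting them by value in descending order and printing the first topK entries gives exactly the kind of output shown above.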

## Conclusion

The Learning To Rank tools are quick and dirty solutions to help people understand and work better with Learning To Rank models.
They are far from being optimal, but I hope they will be helpful for people working on similar problems.
Any contribution, improvement or bugfix is welcome!

[1] Learning To Rank tools

## Pick Up Where We Left Off

We modelled our dataset, we collected the data and refined it in Part 1.
We now have a shiny, many-row training set, ready to be used to train the mathematical model that will re-rank the documents returned by our queries.
The very first activity is to decide which model best fits our requirements.
In this blog post I will focus on two types of models, the ones currently supported by the Bloomberg Solr LTR plugin [1].

## Ranking SVM

Ranking SVM is a linear model based on Support Vector Machines.
The Ranking SVM algorithm is a pairwise ranking method that adaptively sorts results based on how 'relevant' they are for a specific query (we saw examples of relevancy ratings for document-query pairs in Part 1).

The Ranking SVM function maps each search query to the features of each sample.
This mapping function projects each data pair onto a feature space.
Each sample (query-document) of the training data will be used by the Ranking SVM algorithm to refine the mapping.

Generally, Ranking SVM includes three steps at training time:
• It maps the similarities between queries and the clicked results onto a certain feature space.
• It calculates the distances between any two of the vectors obtained in step 1.
• It forms an optimisation problem which is similar to a standard SVM classification and solves this problem with the regular SVM solver.
Given the list of features that describe our problem, an SVM model will assign a numerical weight to each of them; the resulting function will assign a score to a document, given in input the feature vector that describes it.
Let's see an example SVM model in the JSON format expected by the Solr plugin:
e.g.
{
  "class":"org.apache.solr.ltr.ranking.RankSVMModel",
  "name":"myModelName",
  "features":[
    { "name": "userTextTitleMatch"},
    { "name": "originalScore"},
    { "name": "isBook"}
  ],
  "params":{
    "weights": {
      "userTextTitleMatch": 1.0,
      "originalScore": 0.5,
      "isBook": 0.1
    }
  }
}

Given 3 example features:
userTextTitleMatch - (query dependent feature) - (binary)
originalScore - (query dependent feature) - (ordinal)
isBook - (document dependent feature) - (binary)

The Ranking SVM model in the example builds a linear function of the variables, assigning a different weight to each of them.

Given the documents (expressed with their feature vector):
D1 [1.0, 100, 1]
D2 [0.0, 80, 1]

Applying the model we get the scores:

Score(D1) = 1.0 * userTextTitleMatch(1.0) + 0.5 * originalScore(100) + 0.1 * isBook(1.0) = 51.1
Score(D2) = 1.0 * userTextTitleMatch(0.0) + 0.5 * originalScore(80) + 0.1 * isBook(1.0) = 40.1
The D1 document is more relevant according to the model.
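In code, the scoring function is just a weighted sum over the feature vector. Here is a minimal sketch in Java (illustrative, not the Solr plugin implementation) that reproduces the computation above:

import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of Ranking SVM scoring: a weighted sum of the feature values
public class RankSvmScoringSketch {

    public static double score(Map<String, Double> weights, Map<String, Double> featureVector) {
        double score = 0.0;
        for (Map.Entry<String, Double> weight : weights.entrySet()) {
            // a missing feature contributes 0 to the score
            score += weight.getValue() * featureVector.getOrDefault(weight.getKey(), 0.0);
        }
        return score;
    }

    public static void main(String[] args) {
        Map<String, Double> weights = new LinkedHashMap<>();
        weights.put("userTextTitleMatch", 1.0);
        weights.put("originalScore", 0.5);
        weights.put("isBook", 0.1);

        Map<String, Double> d1 = Map.of("userTextTitleMatch", 1.0, "originalScore", 100.0, "isBook", 1.0);
        Map<String, Double> d2 = Map.of("userTextTitleMatch", 0.0, "originalScore", 80.0, "isBook", 1.0);

        System.out.println("Score(D1) = " + score(weights, d1)); // 51.1
        System.out.println("Score(D2) = " + score(weights, d2)); // 40.1
    }
}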

Key points:

• easy to debug and understand
• linear function

Here you can find some good Open Source libraries to train SVM models [2].

## LambdaMART

LambdaMART is a tree ensemble based model.
Each tree of the ensemble is a weighted regression tree and the final predicted score is the weighted sum of the predictions of each regression tree.
A regression tree is a decision tree that takes in input a feature vector and returns a scalar numerical value in output.
At a high level, LambdaMART is an algorithm that uses gradient boosting to directly optimize Learning to Rank specific cost functions such as NDCG.

To understand LambdaMART let's explore its two main aspects: Lambda and MART.

### MART

LambdaMART is a specific instance of Gradient Boosted Regression Trees, also referred to as Multiple Additive Regression Trees (MART).
Gradient Boosting is a technique for forming a model that is a weighted combination of an ensemble of “weak learners”. In our case, each “weak learner” is a decision tree.

### Lambda

At each training point we need to provide a gradient that will allow us to minimise the cost function (whatever we selected, NDCG for example).
To solve the problem LambdaMART uses an idea coming from LambdaRank:
at each training point we calculate a value (the lambda) that acts as the gradient we require; this component effectively modifies the ranking, pushing groups of documents up or down, and in turn shapes the leaf values of the trees that cover that point.
For additional details these resources have been useful [3], [4].
LambdaMART is currently considered one of the most effective models, as proven by a number of contest wins.

Let's see what a LambdaMART model looks like:
{
  "class":"org.apache.solr.ltr.ranking.LambdaMARTModel",
  "name":"lambdamartmodel",
  "features":[
    { "name": "userTextTitleMatch"},
    { "name": "originalScore"}
  ],
  "params":{
    "trees": [
      {
        "weight" : 1,
        "root": {
          "feature": "userTextTitleMatch",
          "threshold": 0.5,
          "left" : {
            "value" : -100
          },
          "right": {
            "feature" : "originalScore",
            "threshold": 10.0,
            "left" : {
              "value" : 50
            },
            "right" : {
              "value" : 75
            }
          }
        }
      },
      {
        "weight" : 2,
        "root": {
          "value" : -10
        }
      }
    ]
  }
}

This example model is composed of two trees; each branch split evaluates a conditional expression on a feature value. If the feature value is:
- <= threshold, we visit the left branch
- > threshold, we visit the right branch

Reaching a leaf of a tree produces a score; this is weighted according to the tree weight and then summed with the scores produced by the other trees (the model is an ensemble of trees).
Given the documents:

D1 [1, 9]
D2 [0, 10]

Applying the model we get the scores:

Score(D1) = 1 * (userTextTitleMatch (1) > 0.5, go right; originalScore (9) <= 10, go left = 50) +
2 * (-10) = 30

Score(D2) = 1 * (userTextTitleMatch (0) <= 0.5, go left = -100) +
2 * (-10) = -120

The D1 document is more relevant according to the model.
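The same evaluation expressed in code: each tree is walked by comparing feature values against thresholds (<= goes left, > goes right) and the weighted leaf values are summed. Here is a minimal sketch in Java (illustrative, not the Solr plugin implementation), assuming the trees have been parsed into nested Maps:

import java.util.List;
import java.util.Map;

// Illustrative sketch of LambdaMART scoring: a weighted sum of one leaf value per tree
public class LambdaMartScoringSketch {

    @SuppressWarnings("unchecked")
    public static double score(List<Map<String, Object>> trees, Map<String, Double> featureVector) {
        double score = 0.0;
        for (Map<String, Object> tree : trees) {
            double weight = ((Number) tree.get("weight")).doubleValue();
            score += weight * walk((Map<String, Object>) tree.get("root"), featureVector);
        }
        return score;
    }

    @SuppressWarnings("unchecked")
    private static double walk(Map<String, Object> node, Map<String, Double> featureVector) {
        if (!node.containsKey("feature")) {
            return ((Number) node.get("value")).doubleValue(); // a leaf: return its value
        }
        double featureValue = featureVector.getOrDefault((String) node.get("feature"), 0.0);
        double threshold = ((Number) node.get("threshold")).doubleValue();
        Map<String, Object> next = featureValue <= threshold
                ? (Map<String, Object>) node.get("left")   // <= threshold: left branch
                : (Map<String, Object>) node.get("right"); // > threshold: right branch
        return walk(next, featureVector);
    }
}

Feeding it the two-tree model above with D1 = [1, 9] and D2 = [0, 10] reproduces the scores 30 and -120.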

Key points:
• ensembles of trees are effective
• difficult to debug
• non-linear function
There are good open source libraries to train LambdaMART models; the one that we are going to use in this case study is RankLib [5].

## Evaluation Metric

An important phase of the journey is to validate the model:
How good is our model at re-ranking our dataset?
How good is the model at re-ranking unknown data?
It is really important to carefully select the evaluation metric that best suits your algorithm and your needs.
Evaluating the model will be vital for the validation phase during training, for testing the model against new data, and for consistently assessing the predictive quality of the trained model.

N.B. this is particularly important: having a good understanding of the evaluation metric for the selected algorithm can help a lot in tuning and improving the model and in finding anomalies in the training data.

### NDCG@K

Normalised Discounted Cumulative Gain @k is a natural fit when choosing LambdaMART.
This metric evaluates the performance of a ranking model; it varies from 0.0 to 1.0, with 1.0 representing the ideal ranking model.
It takes K as a parameter:
K: the maximum number of documents that will be returned.

K specifies, for each query, the top K results to evaluate after the re-ranking.
N.B. set K to be the number of top documents you want to optimise.
Generally it is the number of documents you show in the first page of results.

Given in input the test set, this is grouped by queryId and for each query the ideal ranking (obtained by sorting the documents by descending relevancy) is compared with the ranking generated by the ranking model.
An average over all the queryIds is calculated.
A detailed explanation can be found on Wikipedia [6].
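For reference, here is a minimal sketch in Java of the NDCG@k computation for a single query (illustrative; RankLib ships its own implementation): DCG sums the gain of each result discounted by its position, and NDCG normalises it by the DCG of the ideal ordering.

import java.util.Arrays;

// Illustrative sketch of NDCG@k for a single query (gain = 2^relevancy - 1, log2 position discount)
public class NdcgSketch {

    // relevancies: the graded relevance labels of the results, in the order produced by the ranking model
    public static double ndcgAtK(double[] relevancies, int k) {
        double[] ideal = relevancies.clone();
        Arrays.sort(ideal); // ascending...
        reverse(ideal);     // ...then descending: the ideal ranking
        double idealDcg = dcgAtK(ideal, k);
        return idealDcg == 0.0 ? 0.0 : dcgAtK(relevancies, k) / idealDcg;
    }

    private static double dcgAtK(double[] relevancies, int k) {
        double dcg = 0.0;
        for (int i = 0; i < Math.min(k, relevancies.length); i++) {
            dcg += (Math.pow(2, relevancies[i]) - 1) / (Math.log(i + 2) / Math.log(2));
        }
        return dcg;
    }

    private static void reverse(double[] values) {
        for (int i = 0; i < values.length / 2; i++) {
            double tmp = values[i];
            values[i] = values[values.length - 1 - i];
            values[values.length - 1 - i] = tmp;
        }
    }
}

The overall metric is then the average of ndcgAtK over all the queryIds of the test set.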

Knowing how it works, 2 factors immediately stand out as important when reasoning about NDCG:
1) distribution of relevancy scores in the test set, per queryId
2) number of samples per queryId

War Story 1: too many highly relevant samples -> high NDCG
Given the test set A :
5 qid:1 1:1 ...
5 qid:1 1:1 ...
4 qid:1 1:1 ...
5 qid:2 1:1 ...
4 qid:2 1:1 ...
4 qid:2 1:1 ...

Even if the ranking model does not do a good job, the high proportion of relevant samples increases the probability of hitting a high score in the metric.

War Story 2: small RankLists -> high NDCG
Having few samples (<K) per queryId means that we will not evaluate the performance of our ranking model with accuracy.
In the edge case of 1 sample per queryId, our NDCG@10 will be perfect, but actually this simply reflects a really poor training/test set.

N.B. be careful about how you generate your queryIds, as you can end up in this edge case and get a wrong perception of the quality of your ranking model.

## Model Training

The focus of this section will be how to train a LambdaMART model using RankLib [5].
Let's see an example and analyse all the parameters:

java  -jar RankLib-2.7.jar
-train /var/trainingSets/trainingSet_2016-07-20.txt
-feature /var/ltr/trainingSets/trainingSet_2016-07-20_features.txt
-ranker 6
-leaf 75
-mls 5000
-metric2t NDCG@20
-kcv 5
-tvs 0.8
-kcvmd /var/models
-kcvmn model-name.xml
-sparse

| Parameter | Description |
|---|---|
| train <file> | Path to the training set file. |
| feature <file> | Feature description file: lists the feature ids to be considered by the learner, each on a separate line. If not specified, all features will be used. |
| ranker <type> | Specify which ranking algorithm to use, e.g. 6: LambdaMART. |
| metric2t <metric> | Metric to optimise on the training data. Supported: MAP, NDCG@k, DCG@k, P@k, RR@k, ERR@k (default=ERR@10). |
| tvs <x in [0..1]> | Set the train-validation split to be (x)(1.0-x): x * (size of the training set) will be used for training, (1.0-x) * (size of the training set) will be used for validation. |
| kcv <k> | Stands for k-fold cross validation. Specify if you want to perform k-fold cross validation using ONLY the specified training data (default=NoCV). This means we split the training data in k subsets and run k training executions; in each execution 1 subset is used as the test set and k-1 subsets are used for training. N.B. when tvs and kcv are both defined, first we split for the cross validation, then the training set produced is split into training/validation. e.g. with an initial training set of 100 rows and -kcv 5 -tvs 0.8 we get: test set = 20 rows, training set = 64 rows (0.8 * 80), validation set = 16 rows (0.2 * 80). |
| kcvmd <dir> | Directory where to save the models trained via cross-validation (default=not-save). |
| kcvmn <model> | Name for the model learned in each fold. It will be prefixed with the fold number (default=empty). |
| sparse | Allow sparse data processing for the training set (which means that you don't need to specify a feature for a sample where the feature has no value). |
| tree <t> | Number of trees in the ensemble (default=1000). The more complex the learning problem, the more trees we need, but it's important to be careful and not overfit the training data. |
| leaf <l> | Number of leaves for each tree (default=100). As with the number of trees, it is important to tune this value carefully to avoid overfitting. |
| shrinkage <factor> | The learning rate of the algorithm (default=0.1). If this is too aggressive the ranking model will quickly fit the training set, but will not react properly to the validation set evaluation, which means an overall poorer model. |
| tc <k> | Number of threshold candidates for tree splitting; -1 to use all feature values (default=256). Increasing this value increases the complexity and possible overfitting of the model. It is suggested to start with a low value and then increase it according to the general cardinality of your features. |
| mls <n> | Min leaf support: the minimum number of samples each leaf has to contain (default=1). This is quite an important parameter if we want to take care of outliers; we can tune it to include only leaves with a high number of samples, discarding patterns validated by weak support. |

Tuning the model training parameters is not an easy task.
Apart from the general suggestions, trial & error with a careful comparison of the evaluation metric scores is the way to go.
Comparing different models on a common test set can help when we have produced a number of models with different tuning configurations.

## Converting The Model To JSON

We finally produced a model which is performing quite well on our test set.
Cool, it's time to push it to Solr and see the magic in action.
But before we can do that, we need to convert the model generated by the machine learning library into the JSON format Solr expects (remember the section about the supported models?).

Taking a LambdaMART model as an example, RankLib will produce an XML model.
So you will need to parse it and convert it to the JSON format; a rough sketch of such a conversion is shown below.
It would be interesting to implement directly in RankLib the possibility of producing in output the JSON format expected by Solr.
Any contribution is welcome [7]!
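As an illustration, here is a rough sketch in Java of such a conversion. It assumes the RankLib LambdaMART output is made of <ensemble>/<tree>/<split> elements with <feature>, <threshold> and <output> children, and that any leading comment lines (starting with ##) have been stripped before parsing; check the exact format produced by your RankLib version, and reuse the same feature mapping adopted for the training set to translate feature ids into names.

import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

// Rough sketch: convert a RankLib LambdaMART XML model into the "trees" part of the Solr JSON model
public class RankLibToJsonSketch {

    public static String convertTrees(File rankLibModel, Map<String, String> featureNames) throws Exception {
        Element ensemble = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(rankLibModel).getDocumentElement(); // the <ensemble> root element (assumed)
        StringBuilder trees = new StringBuilder();
        for (Element tree : children(ensemble, "tree")) {
            if (trees.length() > 0) trees.append(",");
            trees.append("{\"weight\":").append(tree.getAttribute("weight")).append(",")
                 .append("\"root\":").append(convertSplit(child(tree, "split"), featureNames)).append("}");
        }
        return "{\"trees\":[" + trees + "]}"; // to be embedded in the "params" section
    }

    private static String convertSplit(Element split, Map<String, String> featureNames) {
        Element output = child(split, "output");
        if (output != null) { // a leaf: only the predicted value
            return "{\"value\":" + output.getTextContent().trim() + "}";
        }
        String featureId = child(split, "feature").getTextContent().trim();
        String threshold = child(split, "threshold").getTextContent().trim();
        Element left = null;
        Element right = null;
        for (Element candidate : children(split, "split")) {
            if ("left".equals(candidate.getAttribute("pos"))) left = candidate;
            if ("right".equals(candidate.getAttribute("pos"))) right = candidate;
        }
        return "{\"feature\":\"" + featureNames.getOrDefault(featureId, featureId) + "\","
                + "\"threshold\":" + threshold + ","
                + "\"left\":" + convertSplit(left, featureNames) + ","
                + "\"right\":" + convertSplit(right, featureNames) + "}";
    }

    // direct children only: getElementsByTagName would also return nested descendants
    private static List<Element> children(Element parent, String tag) {
        List<Element> result = new ArrayList<>();
        NodeList nodes = parent.getChildNodes();
        for (int i = 0; i < nodes.getLength(); i++) {
            Node node = nodes.item(i);
            if (node instanceof Element && tag.equals(node.getNodeName())) {
                result.add((Element) node);
            }
        }
        return result;
    }

    private static Element child(Element parent, String tag) {
        List<Element> matches = children(parent, tag);
        return matches.isEmpty() ? null : matches.get(0);
    }
}

The resulting "trees" array can then be embedded in the "params" section of the model JSON, together with the "class", "name" and "features" elements shown earlier.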
In the next part we'll see how to visualise and better understand the generated model.
This activity can be vital to debug the model, see the most popular features, find anomalies in the training set and actually assess the validity of the model itself.