
09 Aug 2022

Creating representative test sets

Thinking out loud, lab-notebook style, to help me solve a problem - in this installment, creating representative train/test splits.

Problem

Goal: create a test set that looks like the train set, having about the same distribution of labels.

In my case - classic NER - my training instances are documents whose tokens can have a number of different non-overlapping labels, and I need to create a test split that's similar to the train one. Again, splitting happens per-document.

Added complexity - in no case do I want tags of a type ending up only in train or only in test. Say I have 100 docs and 2 ORGANIZATIONs inside them - my 20% test split should have at least one ORGANIZATION.

Which is why random selection doesn’t cut it - I’d end up doing Bogosort more often than not, because I have A LOT of such types.

Simply ignoring them and adding them manually might be a way. Or, intuitively - starting with them first, as they are the hardest and most likely to fail.

Implementation details

My training instance is a document that can have say 1 PEOPLE, 3 ORGANIZATIONS, 0 PLACES.

For each dataset/split/document, I have a dictionary counting how many instances of each entity it has, which I then changed to a ratio “out of the total number of labels”:

{
    "O": 0.75,
    "B-ORGANIZATION": 0.125,
    "I-ORGANIZATION": 0,
    "B-NAME": 0,
    "I-NAME": 0,
}

I need to create a test dataset where the distribution of these labels is as close as possible to the train dataset. In both, say, 3 out of 4 labels should be "O".
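For concreteness, a minimal sketch of how such a dictionary can be computed per document or per split (function and label names are just for illustration, not my actual pipeline):

from collections import Counter

LABELS = ["O", "B-ORGANIZATION", "I-ORGANIZATION", "B-NAME", "I-NAME"]

def label_ratios(token_labels):
    """Ratio of each label among all token labels of a document (or a whole split)."""
    counts = Counter(token_labels)
    total = sum(counts.values())
    return {label: counts[label] / total for label in LABELS}

# a toy 8-token document with one two-token ORGANIZATION
doc = ["O"] * 6 + ["B-ORGANIZATION", "I-ORGANIZATION"]
print(label_ratios(doc))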

So - “which documents do I pick so that, when their labels are summed up, I get a specific distribution”, or something close to it. In other words, “pick the numbers from this list that sum up close to X”, except multidimensional.

Initial algo was “iterate over each training instance and put it in the pile it’ll improve the most”.
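A rough sketch of that greedy idea, mostly to pin it down - the "improvement" measure here is just the L1 distance between a pile's label distribution and the target one, which is an assumption on my part, not something settled:

from collections import Counter

def distance(pile, target_ratios):
    """L1 distance between a pile's label distribution and the target distribution."""
    total = sum(pile.values()) or 1
    return sum(abs(pile[label] / total - ratio) for label, ratio in target_ratios.items())

def greedy_split(doc_label_counts, target_ratios, test_frac=0.2):
    """doc_label_counts: one Counter of label -> count per document."""
    train, test = Counter(), Counter()
    train_idx, test_idx = [], []
    for i, doc in enumerate(doc_label_counts):
        # how much closer each pile gets to the target if this doc is added to it
        gain_test = distance(test, target_ratios) - distance(test + doc, target_ratios)
        gain_train = distance(train, target_ratios) - distance(train + doc, target_ratios)
        # crude size control so roughly test_frac of the documents land in test
        test_has_room = len(test_idx) < test_frac * len(doc_label_counts)
        if test_has_room and gain_test >= gain_train:
            test += doc
            test_idx.append(i)
        else:
            train += doc
            train_idx.append(i)
    return train_idx, test_idx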

Started implementing something to do this in HuggingFace Datasets, and quickly realized that “add this one training instance to this HF Dataset” is not trivial to do, and iterating through examples and adding them to separate datasets is harder than expected.
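The workaround that seems most natural: don't append examples one by one at all - collect indices and carve both splits out at the end with Dataset.select(). A toy sketch:

from datasets import Dataset

ds = Dataset.from_dict({"text": ["doc one", "doc two", "doc three", "doc four"]})

# instead of appending examples to a Dataset one by one,
# decide on the indices first...
test_indices = [1, 3]
train_indices = [i for i in range(len(ds)) if i not in test_indices]

# ...and materialize both splits at the end
train_ds = ds.select(train_indices)
test_ds = ds.select(test_indices)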

“Reading the literature”

Generally we’re in the area of concepts like Subset sum problem / Optimization problem / Combinatorial optimization

Scikit-learn

More usefully, specifically RE datasets, How to Create a Representative Test Set | by Dimitris Poulopoulos | Towards Data Science mentioned sklearn.model_selection.StratifiedKFold.

Which led me to sklearn’s “model selection” module, which has a lot of functions doing what I need! Or almost.

API Reference — scikit-learn 1.1.2 documentation

And the User Guide specifically deals with them: 3.1. Cross-validation: evaluating estimator performance — scikit-learn 1.1.2 documentation

Anyway - StratifiedKFold as implemented is “one training instance has one label”, which doesn’t work in my case.

My training instance is a document that has 1 PEOPLE, 3 ORGANIZATIONS, 0 PLACES.
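For reference, the vanilla usage - note that y is exactly one label per instance, which is the part that doesn't map onto my documents:

import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.arange(8).reshape(-1, 1)  # 8 "documents"
y = np.array(["ORG", "NAME", "ORG", "NAME", "ORG", "NAME", "ORG", "NAME"])  # one label each

skf = StratifiedKFold(n_splits=2)
for train_idx, test_idx in skf.split(X, y):
    print(train_idx, test_idx)  # each fold keeps the ORG/NAME proportions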

Other places

Dataset Splitting Best Practices in Python - KDnuggets

Brainstorming

Main problem: I have multiple labels/ys to optimize for and can’t directly use anything that splits based on a single Y.

Can I hack something like sklearn.model_selection.StratifiedGroupKFold for this?

Can I read about how they do it and see if I can generalize it? (Open source FTW!) scikit-learn/_split.py at 17df37aee774720212c27dbc34e6f1feef0e2482 · scikit-learn/scikit-learn

Can I look at the functions they use to hack something together?

… why can’t I use the initial approach of adding and then measuring?

Where can I do this in the pipeline? In the beginning on document level, or maybe I can drop the requirement of doing it per-document and do it at the very end on split tokenized training instances? Which is easier?

Can I do a random sample and then add what’s missing?

Will going back to numbers and “in this train set I need 2 ORGANIZATIONS” help me reason about it differently than the current “20% of labels should be ORGANIZATION”?

Looking at vanilla StratifiedKFold

scikit-learn/_split.py at 17df37aee774720212c27dbc34e6f1feef0e2482 · scikit-learn/scikit-learn

They sort the labels and that way get +/- the number of items needed. Neat but quite hard for me to adapt to my use case.
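My reading of that trick, as a toy - not a copy of sklearn's code: sort the labels, then deal them out round-robin across folds, so every fold gets each class's count rounded up or down:

import numpy as np

y = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1])  # 7 of class 0, 3 of class 1
n_splits, n_classes = 3, 2

y_sorted = np.sort(y)
allocation = np.array(
    [np.bincount(y_sorted[i::n_splits], minlength=n_classes) for i in range(n_splits)]
)
print(allocation)  # per-fold, per-class counts, e.g. [[3 1], [2 1], [2 1]]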

OK, NEXT

Can I think of this as something like a sort with multiple keys?..

Can I use the rarity of a type as something like a class weight? Ha, that might work. Assign weights in such a way that each type is 100 and…

This feels relevant. Stratified sampling - Wikipedia

Can I chunk them into small pieces and accumulate based on those pieces? Might be faster than going example by example.

THIS looked like something REALLY close to what I need, multiple category names for each example, but ended up being the usual stratified option I think:

python - Split data into train/ test files such that at least one sample is picked for both the files - Stack Overflow

This suggests multiplying the criteria together to get a lot of bins - not what I need, but I keep moving.
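(In code, that would be something like concatenating the criteria into one composite key and stratifying on that - a toy illustration; with many rare label types most bins would end up with a single member, which is exactly why it doesn't fit here:)

from sklearn.model_selection import train_test_split

docs = ["d1", "d2", "d3", "d4", "d5", "d6", "d7", "d8"]
has_org = [1, 1, 0, 0, 1, 1, 0, 0]
has_name = [1, 0, 1, 0, 1, 0, 1, 0]

# "multiply the criteria": every combination of labels becomes its own bin
bins = [f"{o}_{n}" for o, n in zip(has_org, has_name)]

train_docs, test_docs = train_test_split(docs, test_size=0.5, stratify=bins, random_state=0)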

Can I stratify by multiple characteristics at once?

I think “stratification of multilabel data” is close to what I need

Found some papers, yes this is the correct term I think

scikit-multilearn

YES! scikit-multilearn: Multi-Label Classification in Python — Multi-Label Classification for Python


scikit-multilearn: Multi-Label Classification in Python — Multi-Label Classification for Python:

In multi-label classification one can assign more than one label/class out of the available n_labels to a given object.

This is really interesting, still not EXACTLY what I need but a whole new avenue of stuff to look at
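Usage seems to look roughly like this (a sketch based on my reading of the docs; y is a binary indicator matrix with one column per label, so it only encodes presence/absence, not counts):

import numpy as np
from skmultilearn.model_selection import iterative_train_test_split

X = np.arange(8).reshape(-1, 1)  # 8 documents, represented just by an id here
# columns: ORGANIZATION, NAME - 1 if the document contains that label at all
y = np.array([
    [1, 0],
    [1, 1],
    [0, 1],
    [1, 0],
    [0, 1],
    [1, 1],
    [1, 0],
    [0, 1],
])

X_train, y_train, X_test, y_test = iterative_train_test_split(X, y, test_size=0.25)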

scikit-multilearn: Multi-Label Classification in Python — Multi-Label Classification for Python

The idea behind this stratification method is to assign label combinations to folds based on how much a given combination is desired by a given fold, as more and more assignments are made, some folds are filled and positive evidence is directed into other folds, in the end negative evidence is distributed based on a folds desirability of size.

Yep back to the first method!

They link this lecture explaining the algo: On the Stratification of Multi-Label Data - VideoLectures.NET

That video was basically what I needed

Or rather, less the video than the slides - I didn't watch the video and hope I won't have to, the slides make it clear enough.

Yes, reframing that as “number of instances of this class that are still needed by this fold” was a better option. And here binary matrices nicely expand to weighted stratification if I have multiple examples of a class in a document. And my initial intuition of starting with the least-represented class first was correct.

Basic algorithm (a code sketch follows the list):

  • Get class with smallest number of instances in the dataset
  • Get all training examples with that class and distribute them first
  • Go to next class
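A sketch of how I currently understand it, adapted to my counts-instead-of-binary-indicators case - this is my interpretation of the slides, not the scikit-multilearn implementation, and the names and the tie-breaking are assumptions:

import random
from collections import Counter

def iterative_stratified_split(doc_counts, fold_fracs=(0.8, 0.2), seed=0):
    """doc_counts: one Counter of label -> number of instances per document.
    Returns one list of document indices per fold."""
    rng = random.Random(seed)
    n_folds = len(fold_fracs)

    totals = Counter()
    for c in doc_counts:
        totals += c

    # how many instances of each label every fold still "wants"
    desired = [Counter({label: totals[label] * frac for label in totals}) for frac in fold_fracs]
    # how many documents every fold still has room for
    capacity = [len(doc_counts) * frac for frac in fold_fracs]

    folds = [[] for _ in range(n_folds)]
    unassigned = set(range(len(doc_counts)))

    # rarest label first - those are the hardest to distribute and most likely to fail
    for label in sorted(totals, key=lambda l: totals[l]):
        docs_with_label = sorted(
            (i for i in unassigned if doc_counts[i][label] > 0),
            key=lambda i: -doc_counts[i][label],
        )
        for i in docs_with_label:
            # fold that still wants this label the most; ties broken by remaining capacity
            best = max(
                range(n_folds),
                key=lambda f: (desired[f][label], capacity[f], rng.random()),
            )
            folds[best].append(i)
            unassigned.discard(i)
            capacity[best] -= 1
            for other_label, n in doc_counts[i].items():
                desired[best][other_label] -= n

    # anything left (documents with none of the labels) goes wherever there is room
    for i in unassigned:
        best = max(range(n_folds), key=lambda f: (capacity[f], rng.random()))
        folds[best].append(i)
        capacity[best] -= 1

    return folds

The output is just per-fold lists of indices, so plugging it back into a HF Dataset with .select() (as in the sketch above) should be painless.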

Not sure if I can use the source of the implementation: scikit-multilearn: Multi-Label Classification in Python — Multi-Label Classification for Python

I don’t have a good intuition of what they mean by “order” - for now, “try to keep labels that hang out together in the same fold”? Can I hack it to…

I still have the issue I tried to avoid (needing to add examples to a fold/Dataset), but that's not the problem here.

Generally - is this better than my initial approach?

What happens if I don’t modify my initial approach, just the order in which I give it the training examples?

Can I find any other source code for these things? Ones easier to adapt?

Anyway

I’ll implement the algo myself based on the presentation and video according to my understanding.

The main result of this session was finding more related terminology and a good explanation of the algo I’ll be implementing, with my changes.

I’m surprised I haven’t found anything NER-specific about creating representative test sets based on the distribution of multiple labels in the test instances. Might become a blog post or something sometime.
