My First Foray into Data Science: Semi-Supervised Topic Modeling

I recently started my first position as a data scientist. On my first day on the job, I headed to the client site, armed with my repertoire of pre-processing modules, classification algorithms, regression methods, deep learning approaches, and evaluation techniques. I was ready for whatever this organization threw at me – I expected that I could solve their problems with some simpler models and a few data cleansing steps, much more straightforward than what I faced in my master’s program.

Boy, was I wrong.

The first problem handed to me was one they had been wrestling with for a few years. They have a set of documents written by hundreds of different authors. These documents need to be tagged with specific metadata before they are stored, to make them searchable and accessible. Currently, this task is carried out manually, with teams of people reading each document and applying the tags. This process costs approximately 55,000 labor hours across the sub-organization. Other teams are performing the same task on other document types in at least three different sub-organizations. That’s a lot of hours.

So, the task: create an algorithm that can perform this tagging automatically. Great, I think to myself as I load up spaCy and NLTK. Easy-peasy. As I start digging into the business logic and data behind the problem, I learn that one of the top-priority metadata categories is the topic. There are 26 highly industry-specific topics, and a document can be tagged with many of them. Okay, switch modes from entity extraction and tagging to text classification with NLP. I am still making progress. Now to find some training data and see the spread of these topics.

As I am requesting access to data, I realize the reason this problem has not yet been solved. Of the hundreds of thousands of documents at my disposal, at least 50% of the tagging is incorrect. There is a small subset of manually curated, correctly tagged documents. And by small subset, I mean 87 documents.

There goes any chance of supervised learning. I struggle to wrap my mind around the fact that tons of people are spending tons of hours to manually tag documents with incorrect tags. At this point, I also realize that unsupervised modeling is not an option because these topics are not necessarily intuitive or generic. They are very industry-specific, and there are more apparent features of the text to cluster on, such as country or region.

Determined not to be stumped by the first real-world problem thrown my way, I turned to Google. I knew this was a topic modeling problem – I needed to sort the documents into 26 different topics based on their textual content. One of the most common topic modeling algorithms is Latent Dirichlet Allocation (LDA), which maps documents to topics, where each topic is represented by a to-be-determined set of words. For a great intro to LDA and topic modeling, see the references below. The only LDA applications I had ever worked with, however, were completely unsupervised, and for the reasons stated above, that was not going to work for my specific use case.
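For concreteness, here is a minimal, purely unsupervised LDA sketch using scikit-learn; the toy documents and the small topic count are placeholders rather than my actual data (the real problem has 26 topics).

```python
# Minimal unsupervised LDA sketch (scikit-learn); toy documents are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "new tariff and customs duty rules for imported goods",
    "the vessel arrived at the port with damaged cargo",
    "customs inspection delayed the cargo at the port",
    "duty rates on imported cargo were increased",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)                 # document-term count matrix

lda = LatentDirichletAllocation(n_components=2, random_state=42)  # 26 in the real problem
doc_topics = lda.fit_transform(X)                  # per-document topic weights

# LDA picks the words that define each topic on its own -- exactly the issue
# when the business topics are fixed in advance.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    print(k, [terms[i] for i in weights.argsort()[::-1][:5]])
```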

A coworker suggested seeded LDA as a semi-supervised approach to the problem, and thus began my search. I came across a blog post written by the creators of the GuidedLDA Python library that explained how they seeded LDA topics with key terms to encourage the model to converge around their specific topics (rather than letting the model choose the words for each topic on its own). This approach can be useful when you have very precise topics, as was the case with my problem.
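As a rough sketch of what that seeding looks like in practice (the seed terms, topic count, and toy documents below are hypothetical; only the guidedlda calls follow the pattern described in the post):

```python
# Sketch of seeded topics with the guidedlda package; seed terms are hypothetical.
import guidedlda
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "new tariff and customs duty rules for imported goods",
    "the vessel arrived at the port with damaged cargo",
    "customs inspection delayed the cargo at the port",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs).toarray()       # GuidedLDA takes a document-term count matrix
word2id = vectorizer.vocabulary_                   # term -> column index

# Hypothetical seed terms for two of the topics.
seed_topic_list = [
    ["tariff", "customs", "duty"],   # topic 0
    ["vessel", "port", "cargo"],     # topic 1
]
seed_topics = {}
for topic_id, terms in enumerate(seed_topic_list):
    for term in terms:
        if term in word2id:
            seed_topics[word2id[term]] = topic_id

model = guidedlda.GuidedLDA(n_topics=2, n_iter=100, random_state=7)  # n_topics=26 in my case
# seed_confidence controls how strongly seeded terms are nudged toward their topic.
model.fit(X, seed_topics=seed_topics, seed_confidence=0.15)
```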

I excitedly loaded up the library. Applying this method to my data, I reached a whopping 16% accuracy against my test set. Needless to say, I was a bit discouraged. Back to Google. I found a similar approach to this semi-supervised topic modeling problem in CorEx (Correlation Explanation). This library lets you supply anchor words to the algorithm, encouraging the model to converge around my enumerated topics much as GuidedLDA does. I won’t get into the dirty details of the differences between the two models (see the references below) because, in the beginning, they didn’t matter for me: the initial accuracy on the test data with CorEx was 14%.
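The anchored CorEx call looks roughly like the sketch below; again the anchor words and toy documents are illustrative, and anchor_strength plays a role similar to GuidedLDA’s seed confidence.

```python
# Sketch of anchored CorEx topics; anchor words here are illustrative only.
import scipy.sparse as ss
from sklearn.feature_extraction.text import CountVectorizer
from corextopic import corextopic as ct

docs = [
    "new tariff and customs duty rules for imported goods",
    "the vessel arrived at the port with damaged cargo",
    "customs inspection delayed the cargo at the port",
]

# CorEx expects a sparse binary document-word matrix.
vectorizer = CountVectorizer(stop_words="english", binary=True)
X = ss.csr_matrix(vectorizer.fit_transform(docs))
words = list(vectorizer.get_feature_names_out())

anchors = [["tariff", "customs", "duty"], ["vessel", "port", "cargo"]]

topic_model = ct.Corex(n_hidden=2, seed=42)        # n_hidden=26 in the real problem
# anchor_strength controls how hard the anchors pull their topic.
topic_model.fit(X, words=words, anchors=anchors, anchor_strength=3)

for k in range(2):
    print(k, [t[0] for t in topic_model.get_topics(topic=k)])
```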

After a few days of continuous searching and exploration, I realized that, for semi-supervised topic modeling, this was pretty much it. My two options were GuidedLDA and CorEx. I was able to raise my accuracy in a few, perhaps obvious, ways. Stratified sampling was huge for me: taking 100 records from each topic to create an evenly distributed training set increased my accuracy by at least 10%. Keep in mind, this stratified sample is drawn from the incorrectly tagged data repository, but it was the best I could do. Additionally, I eventually gained access to the definitions of each topic, allowing me to build term-frequency matrices and extract key terms per topic that I could then feed into these topic modeling algorithms as seeds. Together, these changes bumped my accuracy another 10-15%.
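A rough sketch of both steps, assuming a DataFrame with hypothetical text and topic columns and illustrative definition text:

```python
# Stratified sampling from the noisy tags, plus seed terms mined from topic definitions.
# Column names, topics, and definition text are all hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

df = pd.DataFrame({
    "text":  ["doc about tariffs", "doc about shipping", "another tariff doc", "another shipping doc"],
    "topic": ["tariffs", "shipping", "tariffs", "shipping"],
})

# Up to 100 documents per topic, drawn from the (noisily) tagged repository.
sample = (
    df.groupby("topic", group_keys=False)
      .apply(lambda g: g.sample(n=min(len(g), 100), random_state=42))
)

# Mine seed terms from the official topic definitions with a term-frequency matrix.
definitions = {
    "tariffs":  "duties, tariffs, and customs fees applied to imported goods",
    "shipping": "movement of cargo by vessel between ports of origin and destination",
}
vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(definitions.values()).toarray()
terms = vec.get_feature_names_out()

seed_terms = {}
for topic, row in zip(definitions, tfidf):
    top_idx = row.argsort()[::-1][:5]              # 5 highest-weighted terms per definition
    seed_terms[topic] = [terms[i] for i in top_idx if row[i] > 0]
print(seed_terms)
```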

Other tuning tasks, such as filtering industry-specific stop words out of my processed text, selecting the token length, limiting minimum and maximum document frequencies, and finding the ideal threshold for seed-word confidence, landed me at 53% accuracy on the test data set. Congratulations, I thought to myself, I am now slightly better than the paid human taggers.
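Most of those knobs live in the vectorizer; here is a sketch with assumed values (the industry stop words and the thresholds are placeholders I chose for illustration, not the ones I actually landed on):

```python
# Preprocessing knobs behind the later accuracy gains; all values are illustrative.
from sklearn.feature_extraction.text import CountVectorizer, ENGLISH_STOP_WORDS

industry_stop_words = {"pursuant", "hereby", "memorandum"}   # hypothetical domain terms

vectorizer = CountVectorizer(
    stop_words=list(ENGLISH_STOP_WORDS | industry_stop_words),
    token_pattern=r"(?u)\b[a-zA-Z]{3,}\b",   # keep only alphabetic tokens of 3+ characters
    min_df=5,                                # drop terms appearing in fewer than 5 documents
    max_df=0.8,                              # drop terms appearing in more than 80% of documents
)

# The remaining knob lives in the model itself: seed_confidence for GuidedLDA
# (or anchor_strength for CorEx), swept against the 87 curated documents.
```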

So, what’s next? How else can I increase the accuracy of this algorithm with the data I have available? There are a few other hyperparameters to tune (alpha, beta, etc.), and I can always gather a larger quasi-stratified sample to throw at the model (keeping in mind that I can’t be sure how evenly distributed it really is). Some other ideas that have cropped up are leveraging a concept ontology (or word embeddings) to deepen my seed word lists, synthetically augmenting the curated documents to build a set large enough to train a supervised model, or applying transfer learning from a large external corpus and hoping its topics align with the internal business topics. And, of course, there’s the world of deep learning.
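For the embedding idea, a minimal sketch using pre-trained GloVe vectors through gensim (in practice an embedding trained on the internal corpus would be preferable, and the seed terms below are hypothetical):

```python
# Expanding hypothetical seed terms with nearest neighbors from pre-trained word vectors.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")    # stand-in for an internally trained embedding

seed_terms = {"tariffs": ["tariff", "customs", "duty"]}
expanded = {
    topic: terms + [w for w, _ in vectors.most_similar(positive=terms, topn=5)]
    for topic, terms in seed_terms.items()
}
print(expanded)
```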

There is, of course, the obvious choice: I could always ask for real, usable data. But, as I’m starting to learn, that isn’t always an option. So, while I plan to show up to the weekly meeting and lobby for better data for a sixth time, I will continue to work with the data that I have. The group is very excited that my model can outperform the hundreds of people they pay to do this job, but a model that is correct only half of the time is also a model that is wrong almost half of the time. Well, I guess it is time to get back to the drawing board.

References:

LDA:

https://towardsdatascience.com/light-on-math-machine-learning-intuitive-guide-to-latent-dirichlet-allocation-437c81220158

GuidedLDA:

https://www.freecodecamp.org/news/how-we-changed-unsupervised-lda-to-semi-supervised-guidedlda-e36a95f3a164/

https://medium.com/analytics-vidhya/how-i-tackled-a-real-world-problem-with-guidedlda-55ee803a6f0d

CorEx topic modeling:

https://github.com/gregversteeg/corex_topic

https://github.com/gregversteeg/corex_topic/blob/master/corextopic/example/corex_topic_example.ipynb
