My First Foray into Data Science: Semi-Supervised Topic Modeling

Samantha Hamilton
January 28, 2020

I recently started my first position as a data scientist. On my first day on the job, I headed to the client site armed with my repertoire of pre-processing modules, classification algorithms, regression methods, deep learning approaches, and evaluation techniques. I was ready for whatever this organization threw at me; I expected to solve their problems with some simple models and a few data-cleansing steps, far more straightforward than anything I had faced in my master’s program.

Boy, was I wrong.

The first problem handed to me was one they had been wrestling with for a few years. They have a set of documents written by hundreds of different authors. Before these documents are stored, they need to be tagged with specific metadata to make them searchable and accessible. Currently, this task is carried out manually, with a different team of people reading each document and applying the tags. The process costs approximately 55,000 labor hours across the sub-organization, and other teams are performing the same task on other document types in at least three different sub-organizations. That’s a lot of hours.

So, the task: create an algorithm that can perform this tagging automatically. Great, I think to myself as I load up spaCy and NLTK. Easy-peasy. As I start digging into the business logic and data behind the problem, I learn that one of the top-priority metadata categories is the topic. There are 26 highly industry-specific topics, and a document can have many of them. Okay, switch modes from entity extraction and tagging to text classification with NLP. I am still making progress. Now to find some training data and see the spread of these topics.
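For reference, the kind of preprocessing pipeline I had queued up is simple enough. Here is a minimal sketch with spaCy, assuming its small English model is installed (the sample sentence is made up):

```python
import spacy

# Assumes the small English model: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def preprocess(text):
    """Lowercase, lemmatize, and drop stop words, punctuation, and numbers."""
    doc = nlp(text)
    return [
        token.lemma_.lower()
        for token in doc
        if token.is_alpha and not token.is_stop
    ]

print(preprocess("The shipment was rerouted through the northern region."))
# e.g. ['shipment', 'reroute', 'northern', 'region']
```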

As I am requesting access to data, I realize why this problem has not yet been solved. Of the hundreds of thousands of documents at my disposal, at least 50% are tagged incorrectly. There is a small subset of manually curated, correctly tagged documents. And by small subset, I mean 87 documents.

There goes any chance of supervised learning. I struggle to wrap my mind around the fact that so many people are spending so many hours manually tagging documents with incorrect tags. At this point, I also realize that unsupervised modeling is not an option, because these topics are not necessarily intuitive or generic. They are very industry-specific, and there are more apparent features of the text to cluster on, such as country or region.

Determined not to be stumped by the first real-world problem thrown my way, I turned to Google. I knew this was a topic modeling problem: I needed to sort the documents into 26 topics based on their textual content. One of the most common topic modeling algorithms is Latent Dirichlet Allocation (LDA), which maps documents to topics, each represented by a to-be-determined set of words. For a great introduction to LDA and topic modeling, see the LDA reference below. The only LDA applications I had ever worked with, however, were completely unsupervised, and for the reasons stated above, that was not going to work for my use case.
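To make the contrast concrete, here is what the fully unsupervised variant looks like with scikit-learn’s LatentDirichletAllocation on a toy corpus (the documents and the two-topic setup are stand-ins for my real data): the model, not the analyst, chooses each topic’s words.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus standing in for the real documents.
docs = [
    "new tariff rules raise import duties on steel",
    "port congestion delayed container shipments this week",
    "customs officials revised the tariff schedule",
    "rail freight capacity expanded at the northern port",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)
vocab = vectorizer.get_feature_names_out()

# Fully unsupervised: LDA infers each topic's word distribution on its own.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # rows: documents, columns: topic weights

# Inspect the top words the model chose for each topic.
for k, component in enumerate(lda.components_):
    top = [vocab[i] for i in component.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```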

A coworker suggested seeded LDA as a semi-supervised approach to this problem, and thus began my search. I came across a blog post by the creators of the GuidedLDA Python library explaining how they took LDA and seeded the topics with key terms, encouraging the model to converge around their specific topics rather than letting it choose the words for each topic on its own. This approach can be useful when you have very precise topics, as was the case with my problem.
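Roughly, the seeding mechanism looks like this. This is a sketch following the GuidedLDA README, reusing the toy X and vocab from the scikit-learn example above; the seed words are made-up stand-ins, not my real industry terms.

```python
import guidedlda  # pip install guidedlda

word2id = {w: i for i, w in enumerate(vocab)}

# Hypothetical seed terms; mine came from the 26 topic definitions.
seed_topic_list = [
    ["tariff", "import", "customs"],   # topic 0: trade
    ["port", "freight", "shipments"],  # topic 1: logistics
]
seed_topics = {
    word2id[word]: topic_id
    for topic_id, seeds in enumerate(seed_topic_list)
    for word in seeds
    if word in word2id
}

model = guidedlda.GuidedLDA(n_topics=2, n_iter=100, random_state=7, refresh=20)
# seed_confidence controls how strongly seeded words are nudged toward
# their assigned topic during initialization.
model.fit(X.toarray(), seed_topics=seed_topics, seed_confidence=0.15)
```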

I excitedly loaded up the library. Applying this method to my data, I reached a whopping 16% accuracy against my test set. Needless to say, I was a bit discouraged. Back to Google. I found a similar semi-supervised approach in CorEx (correlation explanation). This library offers the option of supplying anchor words to the algorithm, encouraging the model to converge around my enumerated topics much as GuidedLDA does. I won’t get into the gritty details of the differences between the two models (see the references below), because in the beginning it didn’t matter for me: the initial accuracy on the testing data with CorEx was 14%.
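The anchored version with CorEx looks much the same, again sketched on the toy matrix from above; anchor_strength is the knob that pulls each topic toward its anchor words.

```python
import scipy.sparse as ss
from corextopic import corextopic as ct  # pip install corextopic

# CorEx expects a sparse document-word matrix, ideally binarized.
doc_word = ss.csr_matrix((X > 0).astype(int))

# Same hypothetical anchors as the GuidedLDA seeds above.
anchors = [
    ["tariff", "import", "customs"],
    ["port", "freight", "shipments"],
]

topic_model = ct.Corex(n_hidden=2, seed=1)
topic_model.fit(doc_word, words=list(vocab), anchors=anchors, anchor_strength=3)

# Print the words CorEx associated with each topic.
for n, topic in enumerate(topic_model.get_topics()):
    print(n, [w for w, *_ in topic])
```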

After a few days of continuous searching and exploration, I realized that this was pretty much it for semi-supervised topic modeling: my two options were GuidedLDA and CorEx. I was able to raise my accuracy in a few, perhaps obvious, ways. Stratified sampling was huge for me. Taking 100 records from each topic to create an evenly distributed training data set increased my accuracy by at least 10%. Keep in mind, this stratified sample is drawn from the incorrectly tagged data repository, but it was the best I could do. Additionally, I eventually gained access to the definitions of each topic, which let me use term-frequency matrices to extract key terms per topic that I could then feed into these topic modeling algorithms as seeds. These changes bumped my accuracy another 10-15%.
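Both tricks are straightforward to sketch in pandas and scikit-learn. The DataFrame, its text and topic columns, and the definition strings below are all hypothetical stand-ins for the real repository.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

# Tiny stand-in for the noisily tagged repository.
df = pd.DataFrame({
    "text": ["tariff news", "port delays", "duty changes", "rail update"],
    "topic": ["trade", "logistics", "trade", "logistics"],
})

# Stratified sample: up to 100 records per (noisy) topic label.
sample = df.groupby("topic", group_keys=False).apply(
    lambda g: g.sample(min(len(g), 100), random_state=0)
)

# Key-term extraction: the highest-TF-IDF terms in each topic's definition
# become that topic's seed/anchor words.
definitions = {
    "trade": "tariffs, import duties, customs schedules, and trade agreements",
    "logistics": "freight, shipments, ports, rail capacity, and warehousing",
}
vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(definitions.values())
terms = vec.get_feature_names_out()
seed_words = {
    topic: [terms[i] for i in row.toarray()[0].argsort()[-3:][::-1]]
    for topic, row in zip(definitions, tfidf)
}
print(seed_words)
```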

Other hyperparameter tuning tasks, such as adjusting my processed text for industry stop words, selecting the token length, limiting minimum and maximum document frequencies, and finding the ideal threshold for seed-word confidence, landed me at 53% accuracy on the test data set. Congratulations, I thought to myself, I am now slightly better than the paid human taggers.
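Most of those text-side knobs live in the vectorizer; here is a sketch with illustrative values rather than the ones I settled on (the seed-word confidence threshold lives on the model side, as in the GuidedLDA and CorEx examples above).

```python
from sklearn.feature_extraction import text
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical industry stop words: terms that appear in nearly every
# document in this corpus and carry no topical signal.
industry_stops = {"report", "document", "section", "page"}
stop_words = list(text.ENGLISH_STOP_WORDS.union(industry_stops))

vectorizer = CountVectorizer(
    stop_words=stop_words,
    token_pattern=r"(?u)\b[a-zA-Z]{3,}\b",  # keep only tokens of 3+ letters
    min_df=5,    # drop terms appearing in fewer than 5 documents
    max_df=0.8,  # drop terms appearing in more than 80% of documents
)
```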

So, what’s next? How else can I increase the accuracy of this algorithm with the data I have available? There are a few other hyperparameters to tune (alpha, beta, etc.), and I can always gather a larger quasi-stratified sample to throw at the model (keeping in mind that I can’t be sure exactly how evenly distributed it is). Some other ideas that have cropped up are leveraging a concept ontology (or word embeddings) to enhance the depth of my seed words, synthetically duplicating the curated documents until they are numerous enough to serve as a training set for supervised learning, and applying transfer learning from a large, external corpus in the hope that its topics align with the internal business topics. And, of course, there’s the world of deep learning.
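The embedding idea might look something like this, sketched with gensim’s pretrained GloVe vectors. Using a general-purpose embedding is an assumption on my part; much of this corpus’s jargon is probably missing from its vocabulary, which is exactly the risk with transfer from an external corpus.

```python
import gensim.downloader as api  # pip install gensim

# General-purpose pretrained vectors; domain-specific vectors trained on
# the actual corpus would likely handle the jargon better.
vectors = api.load("glove-wiki-gigaword-100")

def expand_seeds(seed_words, topn=5):
    """Grow a seed list with each word's nearest embedding-space neighbors."""
    expanded = set(seed_words)
    for word in seed_words:
        if word in vectors:
            expanded.update(w for w, _ in vectors.most_similar(word, topn=topn))
    return sorted(expanded)

print(expand_seeds(["tariff", "freight"]))
```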

There is, of course, the obvious choice: I could always ask for real, usable data. But, as I’m starting to learn, that isn’t always an option. So, while I plan to show up to the weekly meeting and lobby for better data for a sixth time, I will continue to work with the data that I have. The group is very excited that my model can outperform the hundreds of people they pay to do this job, but a model that is correct only half of the time is also a model that is wrong almost half of the time. Well, I guess it is time for me to get back to the drawing board.

References:

LDA:

https://towardsdatascience.com/light-on-math-machine-learning-intuitive-guide-to-latent-dirichlet-allocation-437c81220158

GuidedLDA:

https://www.freecodecamp.org/news/how-we-changed-unsupervised-lda-to-semi-supervised-guidedlda-e36a95f3a164/

https://medium.com/analytics-vidhya/how-i-tackled-a-real-world-problem-with-guidedlda-55ee803a6f0d

CorEx Topic Modeling:

https://github.com/gregversteeg/corex_topic

https://github.com/gregversteeg/corex_topic/blob/master/corextopic/example/corex_topic_example.ipynb
