Tuning KNIME for String Heavy Workflows

Paul Wisneskey
May 28, 2020

In our advanced predictive analytics platform, we rely on KNIME for virtually all of our ETL needs and large portions of our modeling. A single scenario run requires training more than 27,000 individual models and involves processing millions of records that represent over 20 years of data for 600 measures from 180+ countries. We do this surprisingly quickly by distributing the processing over many AWS Fargate containers, each of which processes a portion of the source data set.
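
As a rough sketch of the fan-out idea (hypothetical code, not our actual orchestration): the source data set can be split into roughly equal chunks, with each chunk handed to one Fargate task that runs the KNIME workflow in batch mode against it.

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative only: split a list of country codes into N roughly equal
    // chunks so that each Fargate task can process one chunk of the source data.
    public class Partitioner {
        public static <T> List<List<T>> partition(List<T> items, int chunks) {
            List<List<T>> result = new ArrayList<>();
            int size = (int) Math.ceil(items.size() / (double) chunks);
            for (int start = 0; start < items.size(); start += size) {
                result.add(items.subList(start, Math.min(start + size, items.size())));
            }
            return result;
        }

        public static void main(String[] args) {
            List<String> countries = List.of("USA", "CAN", "MEX", "FRA", "DEU", "JPN");
            // Each sub-list would become the input for one container/task.
            partition(countries, 3).forEach(System.out::println);
        }
    }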

Because we were processing so much data, we quickly found ourselves hitting Fargate's maximum allowed task memory limit of 30 GB. We tuned KNIME's memory configuration and adjusted how aggressively it caches data to storage between nodes, but we had to be careful because Fargate also limits the storage a task can use.
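
For reference, the JVM heap ceiling lives in knime.ini under the -vmargs section. The value below is illustrative only; it has to stay comfortably below whatever memory Fargate grants the task, because the JVM also needs non-heap memory for metaspace, thread stacks, and so on.

    -vmargs
    -Xmx26g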

To better understand our scaling issues, I profiled the KNIME workbench running in batch mode to study its memory usage patterns. I quickly discovered that KNIME does a very good job of not duplicating unchanged data as it passes through nodes. A row of data in KNIME consists of cells (StringCell, IntegerCell, etc.), and these cells are immutable: if a node does not transform them, they are passed on unchanged, assuming they have not been cached to storage.

While profiling, I did discover that the repetitive nature of our data, with only 600 measure names and 180 country names, was producing a very large number of duplicated strings in the JVM's heap. Each time one of the millions of rows of data was read, two more of these strings were allocated in the heap. This was the source of the memory pressure that was causing many of our processing workflows to have trouble running as Fargate tasks.
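
A quick way to see the problem outside of KNIME (a toy sketch, not our production code): every "read" of the same label produces a distinct String object with its own copy of the characters, so each row pays the full memory cost of the label again.

    // Toy illustration: two rows carrying the same country label end up as
    // two separate String objects, each holding its own character data.
    public class DuplicateStrings {
        public static void main(String[] args) {
            String row1Country = new String("United States");
            String row2Country = new String("United States");

            System.out.println(row1Country.equals(row2Country)); // true  - identical contents
            System.out.println(row1Country == row2Country);      // false - two separate heap objects
        }
    }

With G1's string deduplication (discussed below) the two objects remain distinct, but once they survive a few collections the garbage collector can make them share a single copy of the underlying character data.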

One possible solution would be to encode the country and measure names as numbers using the Category to Number node, so that each label's representation requires much less memory. But doing so would mean refactoring over one hundred workflows, distributing a single mapping model for the Category to Number nodes across all of them, and handling the many cases where country and measure names are eventually dropped before modeling.

Fortunately, while tuning KNIME's memory allocation I noticed that KNIME 4.0 onwards uses the Garbage-First garbage collector (known as G1GC). One of this collector's optimizations is that the heap is divided into regions and live objects can be copied into new regions to reduce heap size. As part of this copying, the garbage collector supports deduplication of strings. This option is not enabled by default, but we can enable it in KNIME by adding the line “-XX:+UseStringDeduplication” to knime.ini right after the garbage collector selection line (“-XX:+UseG1GC”).
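
In other words, the relevant lines in knime.ini (under the -vmargs section) end up as:

    -XX:+UseG1GC
    -XX:+UseStringDeduplication

String deduplication is a G1-specific feature on the JVMs KNIME ships with, so the G1 selection flag must be present for the deduplication flag to take effect.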

The string deduplication is not aggressive. Because it has a cost, the garbage collector avoids deduplicating short-lived strings: by default, only string objects that have survived three garbage collection sweeps are deduplicated. This works great with KNIME, since it is so good about not duplicating the data cells as they pass through the workflow nodes.
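
For completeness, that survival threshold is itself tunable through a JVM flag. The line below simply restates the default of three collections; it could be added to knime.ini alongside the other flags if you want to experiment, though we left it at the default.

    -XX:StringDeduplicationAgeThreshold=3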

This one simple tweak to the knime.ini deployed in our Fargate container resulted in a dramatic reduction in the memory footprint of each running workflow, with no apparent decrease in processing speed. It may even have slightly improved the speed of some of the more memory-intensive workflows, since they are less likely to begin caching records to storage under memory pressure.

We have suggested to the KNIME engineering team that this garbage collector tuning be made the default in future releases. They are evaluating the proposal, and we are optimistic that it will happen. In the meantime, if you find yourself struggling with memory issues in string-heavy workflows, consider enabling string deduplication for the garbage collector.

Posted in KNIME.