Tuning KNIME for String-Heavy Workflows

In our advanced predictive analytics platform, we rely on KNIME for virtually all of our ETL needs and large portions of our modeling. A single scenario run requires training more than 27,000 individual models and involves processing millions of records representing over 20 years of data for 600 measures from 180+ countries. We do this surprisingly quickly by distributing the processing over many AWS Fargate containers, each of which handles a portion of the source data set.

Because we were processing so much data, we quickly found ourselves hitting Fargate's maximum task memory limit of 30 GB. We tuned KNIME's memory configuration and adjusted how aggressively it caches data to storage between nodes, but we had to be careful because Fargate also limits the storage a task can use.
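For context, both of these knobs live in the knime.ini file that ships with KNIME. A minimal sketch of the kind of adjustments involved (the values here are illustrative, not our production numbers, and org.knime.container.cellsinmemory is, to my understanding, the KNIME property that controls how many cells a table holds in memory before it swaps to disk):

```
-Xmx24g
-Dorg.knime.container.cellsinmemory=5000000
```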

To better understand our scaling issues, I started profiling the KNIME workbench while it ran in batch mode to understand its memory usage patterns. I quickly discovered that KNIME does a very good job of not duplicating unchanged data as it passes through nodes. A row of data in KNIME consists of cells (StringCell, IntCell, etc.), and these cells are immutable. If a node does not transform them, they are passed on unchanged, assuming they are not cached to storage.
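To illustrate why immutability makes this safe, here is a simplified sketch; the StringCell below and the hypothetical PassThroughDemo are my own illustration, not KNIME's actual classes (those live in org.knime.core.data and are far more elaborate):

```java
// Simplified sketch of an immutable data cell. Because the value can never
// change, any number of tables can safely share a reference to one cell.
final class StringCell {
    private final String value;
    StringCell(String value) { this.value = value; }
    String getValue() { return value; }
}

public class PassThroughDemo {
    public static void main(String[] args) {
        StringCell input = new StringCell("Germany");
        // A node that does not transform the value simply reuses the same
        // object in its output row: no copy, no extra heap.
        StringCell output = input;
        System.out.println(input == output); // true: one shared object
    }
}
```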

While profiling, I did discover that the repetitive nature of our data, with only 600 measure names and 180 countries, was producing a very large number of duplicated strings in the JVM's heap. Each time one of the millions of rows was read, two more of these strings were allocated. This was the source of the memory pressure that was making many of our processing workflows struggle to run as Fargate tasks.
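The underlying Java behavior is easy to reproduce; this small demo (the class name and label are hypothetical, but the mechanics are standard Java) shows that strings built at read time are distinct heap objects even when their contents are identical:

```java
public class DuplicateStringsDemo {
    public static void main(String[] args) {
        // Simulate reading the same label from two different rows.
        // Strings constructed at runtime are separate heap objects,
        // even though their character data is identical.
        String fromRow1 = new String("United States".toCharArray());
        String fromRow2 = new String("United States".toCharArray());

        System.out.println(fromRow1.equals(fromRow2)); // true:  same contents
        System.out.println(fromRow1 == fromRow2);      // false: two heap objects
        // Multiply by millions of rows and two label columns, and the
        // duplicated character data comes to dominate the heap.
    }
}
```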

One possible solution would be to encode the country and measure names as numbers using the Category to Number node, so that each label's representation would require much less memory. But doing so would involve refactoring over one hundred workflows, require us to distribute a single mapping model for the Category to Number nodes across all of those workflows, and force us to handle the many cases where country and measure names eventually get dropped before modeling. A sketch of the idea follows.
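The encoding idea itself is simple; the CategoryEncoder below is a hypothetical illustration of label-to-integer mapping, not KNIME's actual Category to Number implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of label-to-integer encoding: each distinct label
// is assigned a small, stable integer the first time it is seen.
public class CategoryEncoder {
    private final Map<String, Integer> codes = new HashMap<>();

    int encode(String label) {
        return codes.computeIfAbsent(label, l -> codes.size());
    }

    public static void main(String[] args) {
        CategoryEncoder countries = new CategoryEncoder();
        System.out.println(countries.encode("Germany")); // 0
        System.out.println(countries.encode("France"));  // 1
        System.out.println(countries.encode("Germany")); // 0 again: reused
    }
}
```

The catch, as noted above, is that every workflow must share exactly the same mapping, which is what made this approach unattractive for us.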

Fortunately, while tuning KNIME's memory allocation I noticed that from version 4.0 onwards KNIME uses the Garbage-First garbage collector (G1GC). One of this collector's optimizations is that allocated objects are divided into regions, and live objects can be copied to new regions to reduce heap size. As part of this copying, the garbage collector supports deduplication of strings. This option is not enabled by default, but we can enable it in KNIME by adding the line "-XX:+UseStringDeduplication" to knime.ini right after the garbage collector selection line ("-XX:+UseG1GC").
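After the change, the relevant lines of knime.ini read:

```
-XX:+UseG1GC
-XX:+UseStringDeduplication
```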

String deduplication is not aggressive. Because deduplication has a cost, the garbage collector avoids performing it on short-lived strings: only string objects that have survived three garbage collection sweeps are deduplicated. This works great with KNIME, since it is so good about not duplicating data cells as they pass through workflow nodes.
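If that survival threshold ever needs adjusting, HotSpot exposes it as a flag; a sketch of an explicit setting (3 is, to my knowledge, the JVM default, so this line only matters if you change the value):

```
-XX:+UseStringDeduplication
-XX:StringDeduplicationAgeThreshold=3
```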

This one simple tweak to the knime.ini deployed in our Fargate container resulted in a dramatic reduction in the memory footprint of each of our running workflows, with no apparent decrease in processing speed. It may even have slightly improved the speed of some of the more memory-intensive workflows, since they are less likely to begin caching records to storage due to memory pressure.

We have suggested to the KNIME engineering team that this garbage collector tuning be made the default in future releases. They are evaluating our proposal, and we are optimistic that it will happen. In the meantime, if you find yourself struggling with memory issues in string-heavy workflows, consider enabling string deduplication for the garbage collector.
