How we use Kafka

Humio is a log analytics system built to run both On-Prem and as a Hosted offering. It is designed "On-Prem first" because many logging use cases require the privacy and security of managing your own logging solution, and because data volume limits can often be a problem in Hosted scenarios.

From a software provider's point of view, fixing issues in an On-Prem solution is inherently problematic, so we have strived to keep the solution simple. To realize this goal, a Humio installation consists of only a single process per node running Humio itself, which depends only on Kafka running nearby. (We recommend deploying one Humio node per physical CPU, so a dual-socket machine typically runs two Humio nodes.)

We use Kafka for two things: as a buffer for ingested events and as a sequencer of events among the nodes of a Humio cluster.
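To make these two roles concrete, here is a minimal sketch using the standard Apache Kafka Java client. The topic names ("humio-ingest", "humio-global-events"), the group id, and the single-partition sequencing pattern are illustrative assumptions, not Humio's actual implementation.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaUsageSketch {
    public static void main(String[] args) {
        // --- Role 1: buffer ingested events until they are safely stored ---
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");
        p.put("key.serializer", StringSerializer.class.getName());
        p.put("value.serializer", StringSerializer.class.getName());
        p.put("acks", "all"); // don't acknowledge the sender until Kafka has the event

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
            // "humio-ingest" is a hypothetical topic name used for illustration
            producer.send(new ProducerRecord<>("humio-ingest", "datasource-1",
                    "{\"ts\": 1690000000, \"msg\": \"service started\"}"));
        }

        // --- Role 2: sequence cluster-wide events among nodes ---
        Properties c = new Properties();
        c.put("bootstrap.servers", "localhost:9092");
        c.put("group.id", "humio-node-1"); // hypothetical group id
        c.put("key.deserializer", StringDeserializer.class.getName());
        c.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c)) {
            consumer.subscribe(List.of("humio-global-events")); // hypothetical topic name
            // A single-partition topic yields one total order: every node replays
            // the same events in the same sequence, so nodes agree on cluster state.
            for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofSeconds(1))) {
                System.out.printf("offset %d: %s%n", r.offset(), r.value());
            }
        }
    }
}

The key property used in the second role is that a Kafka partition is an ordered, durable log: if all coordination events go through one partition, its offsets give the cluster a shared, unambiguous sequence.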
