How to use automatic batching with HTTP Event Collector stream for Bunyan

You can batch events to send to HTTP Event Collector on Splunk Enterprise or Splunk Cloud. Batching events, rather than sending each event individually, can increase throughput by reducing the net amount of data transmitted and the number of HTTP requests made.

This section of the documentation describes the all_batching.js example included in the examples directory of the Splunk logging for JavaScript package. The example is also reproduced in full below.

Note: The examples are not installed when using the npm installation method. To obtain copies of the examples, download the HTTP Event Collector stream for Bunyan package.

Example walkthrough

This example includes logic to automatically batch events before sending them to HTTP Event Collector (HEC) on Splunk Enterprise or Splunk Cloud. Batching is enabled by specifying one or more batching-specific properties, and then queueing events to be sent according to those properties. This example demonstrates setting the batchInterval, maxBatchCount, and maxBatchSize settings.

First, we add require statements for Bunyan and the HEC stream for Bunyan.
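In all_batching.js, these statements look like the following. (The relative path is used because the example lives inside the package itself; in your own app, require "splunk-bunyan-logger" instead.)

```javascript
// In your own app, change this to require("splunk-bunyan-logger");
var splunkBunyan = require("../index");
var bunyan = require("bunyan");
```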

Then, we declare a config variable to store the configuration information for the Splunk Enterprise instance or Splunk Cloud server. Only the token property is required, but in this example, we've set the batchInterval, maxBatchCount, and maxBatchSize properties. The full list of values specified in the example follows:

  • token: The HTTP Event Collector token to use. You created this in Requirements and Installation.
  • url: The protocol, hostname, and HEC port (8088 by default) of either your Splunk Enterprise instance or your Splunk Cloud server.
  • batchInterval: The interval, in milliseconds, at which to send batched events.
  • maxBatchCount: The maximum number of events to send per batch.
  • maxBatchSize: The maximum size, in bytes, of each batch of events.
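The config object from the example looks like this:

```javascript
var config = {
    token: "your-token-here",
    url: "https://localhost:8088",
    batchInterval: 1000, // flush every 1,000 ms...
    maxBatchCount: 10,   // ...or when 10 events are queued...
    maxBatchSize: 1024   // ...or when queued events exceed 1 KB
};
```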

Next, we create a Bunyan stream (splunkStream), plus an error handler.
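The corresponding lines from the example:

```javascript
// Create the HEC stream for Bunyan from the config above
var splunkStream = splunkBunyan.createStream(config);

// Attach an error handler so send failures are surfaced
splunkStream.on("error", function(err, context) {
    // Handle errors here
    console.log("Error", err, "Context", context);
});
```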

Then, we create a logger (Logger) using the bunyan.createLogger() function, including a streams array as one of its inputs. Inside the streams array, we include splunkStream.
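From the example:

```javascript
// Setup Bunyan, adding splunkStream to the array of streams
var Logger = bunyan.createLogger({
    name: "my logger",
    streams: [
        splunkStream
    ]
});
```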

Next, we define the event payload in the payload variable. We've added fields for the event data itself (temperature and chickenCount in this case), plus several special keys that specify metadata to assign to the event data when HTTP Event Collector receives it. If any of these values (source, sourcetype, and so on) differ from the default values on the server, the values specified here override the defaults. Of course, your JavaScript app determines what goes into the actual payload contents.
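The first payload from the example:

```javascript
var payload = {
    // Our important fields
    temperature: "70F",
    chickenCount: 500,

    // Special keys to specify metadata for HTTP Event Collector
    source: "chicken coop",
    sourcetype: "httpevent",
    index: "main",
    host: "farm.local"
};
```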

The payload is then queued for transmittal by calling Logger.info().

In our example, we include two event payloads in order to simulate batched events.
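Each payload is queued with a call to Logger.info(), as in the example (payload2 is defined the same way as payload, with different field values):

```javascript
console.log("Queuing payload", payload);
Logger.info(payload, "Chicken coop looks stable.");

// ...and later, the second payload
console.log("Queuing second payload", payload2);
Logger.info(payload2, "New chickens have arrived");
```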

Note that there is no explicit "send" function here. The accumulated payloads are sent according to the settings we specified in the config object.

We have, however, added a timeout (setTimeout(...)) that exits the process after two seconds, by which time the queued events have been flushed according to the batching settings.
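From the end of the example:

```javascript
// Exit after 2 seconds; by then, the queued events
// have been flushed per the batching settings above
setTimeout(function() {
    console.log("Events should be in Splunk! Exiting...");
    process.exit();
}, 2000);
```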

all_batching.js

/*
 * Copyright 2015 Splunk, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License"): you may
 * not use this file except in compliance with the License. You may obtain
 * a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations
 * under the License.
 */

/**
 * This example shows how to batch events with the
 * Splunk Bunyan logger with all available settings:
 * batchInterval, maxBatchCount, & maxBatchSize.
 */

// Change to require("splunk-bunyan-logger");
var splunkBunyan = require("../index");
var bunyan = require("bunyan");

/**
 * Only the token property is required.
 * 
 * Here, the logger is configured to flush whichever happens
 * first: every 1 second (batchInterval), when 10 events
 * are queued (maxBatchCount), or when the total size of
 * queued events exceeds 1 KB (maxBatchSize).
 */
var config = {
    token: "your-token-here",
    url: "https://localhost:8088",
    batchInterval: 1000,
    maxBatchCount: 10,
    maxBatchSize: 1024 // 1 KB
};
var splunkStream = splunkBunyan.createStream(config);

splunkStream.on("error", function(err, context) {
    // Handle errors here
    console.log("Error", err, "Context", context);
});

// Setup Bunyan, adding splunkStream to the array of streams
var Logger = bunyan.createLogger({
    name: "my logger",
    streams: [
        splunkStream
    ]
});

// Define the payload to send to Splunk's Event Collector
var payload = {
    // Our important fields
    temperature: "70F",
    chickenCount: 500,

    // Special keys to specify metadata for Splunk's Event Collector
    source: "chicken coop",
    sourcetype: "httpevent",
    index: "main",
    host: "farm.local"
};

// Send the payload
console.log("Queuing payload", payload);
Logger.info(payload, "Chicken coop looks stable.");

var payload2 = {
    // Our important fields
    temperature: "75F",
    chickenCount: 600,

    // Special keys to specify metadata for Splunk's Event Collector
    source: "chicken coop",
    sourcetype: "httpevent",
    index: "main",
    host: "farm.local"
};

// Send the payload
console.log("Queuing second payload", payload2);
Logger.info(payload2, "New chickens have arrived");

/**
 * Since we've configured batching, we don't need
 * to do anything at this point. Events will
 * be sent to Splunk automatically based
 * on the batching settings above.
 */

// Kill the process
setTimeout(function() {
    console.log("Events should be in Splunk! Exiting...");
    process.exit();
}, 2000);