How to use automatic batching with Splunk logging for JavaScript

You can batch events to send to HTTP Event Collector on Splunk Enterprise or Splunk Cloud. Batching events, as opposed to sending each event individually, can increase throughput by reducing the number of HTTP requests that must be made.

This section of the documentation contains information about the all_batching.js example included in the examples directory of the Splunk logging for JavaScript package. The all_batching.js example has also been pasted below.

Note: The examples are not installed when using the npm installation method. To obtain copies of the examples, download the Splunk logging for JavaScript package.

Example walkthrough

This example includes logic to automatically batch events before sending them to HTTP Event Collector (HEC) on Splunk Enterprise or Splunk Cloud. By default, events are sent every time the send function is called; automatic batching is enabled by setting one or more of the batchInterval, maxBatchCount, and maxBatchSize settings, all of which this example demonstrates.

First, we declare a SplunkLogger variable and use require to assign it the library's Logger object.

Then, we declare a config variable to store the configuration information for the Splunk Enterprise instance or Splunk Cloud server. Only the token property is required, but in this example, we've set the batchInterval, maxBatchCount, and maxBatchSize properties. The full list of values specified in the example follows:

  • token: The HTTP Event Collector token to use. You created this in Requirements and Installation.
  • url: The protocol, hostname, and HEC port (8088 by default) of either your Splunk Enterprise instance or your Splunk Cloud server.
  • batchInterval: The interval, in milliseconds, at which to send batched events.
  • maxBatchCount: The maximum number of events to send per batch.
  • maxBatchSize: The maximum size, in bytes, of each batch of events.
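The three batching thresholds work together: whichever limit is reached first triggers a flush. To illustrate that interaction, here is a simplified plain-JavaScript sketch of a batcher (this is only an illustration of the semantics, not the library's actual implementation; the Batcher name is ours):

```javascript
// Simplified sketch of how batchInterval, maxBatchCount, and
// maxBatchSize interact. NOT the splunk-logging implementation.
function Batcher(opts, onFlush) {
    this.maxBatchCount = opts.maxBatchCount || 0; // 0 = no count limit
    this.maxBatchSize = opts.maxBatchSize || 0;   // bytes; 0 = no size limit
    this.onFlush = onFlush;
    this.queue = [];
    this.queuedBytes = 0;
    // batchInterval: flush on a timer regardless of queue size
    if (opts.batchInterval > 0) {
        var self = this;
        this.timer = setInterval(function() { self.flush(); }, opts.batchInterval);
    }
}

Batcher.prototype.send = function(event) {
    var serialized = JSON.stringify(event);
    this.queue.push(serialized);
    this.queuedBytes += Buffer.byteLength(serialized);
    // Flush as soon as either the count or the size threshold is crossed
    if ((this.maxBatchCount > 0 && this.queue.length >= this.maxBatchCount) ||
        (this.maxBatchSize > 0 && this.queuedBytes >= this.maxBatchSize)) {
        this.flush();
    }
};

Batcher.prototype.flush = function() {
    if (this.queue.length === 0) { return; }
    var batch = this.queue.splice(0); // drain the queue
    this.queuedBytes = 0;
    this.onFlush(batch);              // in the real library: one HTTP request
};

Batcher.prototype.stop = function() {
    if (this.timer) { clearInterval(this.timer); }
};

var flushed = [];
var batcher = new Batcher(
    { batchInterval: 1000, maxBatchCount: 2, maxBatchSize: 1024 },
    function(batch) { flushed.push(batch); }
);
batcher.send({ temperature: "70F" }); // queued; no threshold reached yet
batcher.send({ temperature: "75F" }); // maxBatchCount reached: flush fires
batcher.stop(); // clear the interval timer so the process can exit
```

Note that the size check here uses the serialized byte length of each event; the count and size limits flush immediately when crossed, while the interval acts as an upper bound on how long a partial batch can wait.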

Next, we create a new logger (Logger) and add an error handler (Logger.error).

Then, we define the event payload in the payload variable. At minimum, we need some sort of message to send. The other keys, metadata and severity, are optional. In this case, we've added two key-value pairs, but the contents of the message key can be anything at all. The contents of metadata will be assigned to this event when Splunk Enterprise or Splunk Cloud indexes the event. If any of these values (source, sourcetype, and so on) differ from the default values on the server, the values specified here will override the default values. Of course, your JavaScript app will determine what goes into the actual payload contents.

Next, we "send" the payload to the event queue by calling Logger.send.

After queuing the first event, we define another payload variable (payload2) and load it with some more sample data. We then queue this payload as well, by calling Logger.send.

Because we've configured the batching-specific settings, we don't need to do anything further to send the batched events. If we hadn't, we would need to call Logger.flush manually to flush the queue and send the batch.
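For completeness, a manual-batching sketch follows. It assumes the package is installed via npm as splunk-logging, that a HEC endpoint is reachable at the given url, and that "your-token-here" is replaced with a real token; the flush callback signature shown here matches that of send:

```javascript
// Manual batching sketch: with maxBatchCount set to 0, events queue
// until Logger.flush is called. Assumes splunk-logging is installed
// and a reachable HEC endpoint; "your-token-here" is a placeholder.
var SplunkLogger = require("splunk-logging").Logger;

var Logger = new SplunkLogger({
    token: "your-token-here",
    url: "https://localhost:8088",
    maxBatchCount: 0 // queue events until flush() is called
});

Logger.send({ message: { temperature: "70F" } });
Logger.send({ message: { temperature: "75F" } });

// Send everything queued so far in a single HTTP request
Logger.flush(function(err, resp, body) {
    console.log("Flush response", body);
});
```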

Finally, we set a two-second timeout, after which we log a message prompting you to check Splunk Enterprise or Splunk Cloud for the events, and the process exits.


/*
 * Copyright 2015 Splunk, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License"): you may
 * not use this file except in compliance with the License. You may obtain
 * a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations
 * under the License.
 */

/**
 * This example shows how to batch events with the
 * SplunkLogger with all available settings:
 * batchInterval, maxBatchCount, & maxBatchSize.
 */

// Change to require("splunk-logging").Logger;
var SplunkLogger = require("../index").Logger;

/**
 * Only the token property is required.
 *
 * Here, batchInterval is set to flush every 1 second, when
 * 10 events are queued, or when the size of queued events totals
 * more than 1kb.
 */
var config = {
    token: "your-token-here",
    url: "https://localhost:8088",
    batchInterval: 1000,
    maxBatchCount: 10,
    maxBatchSize: 1024 // 1kb
};

// Create a new logger
var Logger = new SplunkLogger(config);

Logger.error = function(err, context) {
    // Handle errors here
    console.log("error", err, "context", context);
};

// Define the payload to send to Splunk's Event Collector
var payload = {
    // Message can be anything, doesn't have to be an object
    message: {
        temperature: "70F",
        chickenCount: 500
    },
    // Metadata is optional
    metadata: {
        source: "chicken coop",
        sourcetype: "httpevent",
        index: "main",
        host: "farm.local"
    },
    // Severity is also optional
    severity: "info"
};

console.log("Queuing payload", payload);
// Don't need a callback here
Logger.send(payload);

var payload2 = {
    message: {
        temperature: "75F",
        chickenCount: 600,
        note: "New chickens have arrived"
    },
    metadata: payload.metadata
};

console.log("Queuing second payload", payload2);
// Don't need a callback here
Logger.send(payload2);

/**
 * Since we've configured batching, we don't need
 * to do anything at this point. Events will
 * be sent to Splunk automatically based
 * on the batching settings above.
 */

// Kill the process
setTimeout(function() {
    console.log("Events should be in Splunk! Exiting...");
}, 2000);