 Create an input configuration specification for Splunk Cloud Platform or Splunk Enterprise

The Splunk platform uses .conf configuration files to define knowledge objects. Corresponding .conf.spec specification files define the scheme of the objects. For example, the scheme of a data input is defined in the inputs.conf.spec specification file, and the data inputs created from this scheme are defined in an inputs.conf configuration file.

The structure of a configuration specification file requires the following elements:

  • One or more stanza headers
  • One or more parameter values for each stanza

The following example shows a minimal inputs.conf.spec file, where myscript is the name of the scheme, and param1 is a parameter. The <name> and <value> variables are ignored.

[myscript://<name>]
param1 = <value>
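
When a user creates a data input from this scheme, the Splunk platform stores it as a stanza in an inputs.conf configuration file. For illustration only, a stanza created from the minimal scheme above might look like the following, where mynewinput and somevalue are hypothetical:

[myscript://mynewinput]
param1 = somevalue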

To register the scheme for your modular input, your app must include a specification for the inputs.conf configuration file. When the Splunk platform reads configuration specification files, it merges settings from all versions of the file, including the one for your modular input.

See How configuration settings are stored and used.

 Tips for writing valid configuration specification files

Here are some things to keep in mind when writing configuration specification files:

  • The input configuration specification file must be named inputs.conf.spec, and must be located in $SPLUNK_HOME/etc/apps/appname/README/.

  • The following regular expression defines valid identifiers for the scheme name (the name before the ://) and for parameters, as illustrated in the examples after this list:

    [0-9a-zA-Z][0-9a-zA-Z_-]*
    
  • To avoid name collisions with built-in scheme names, do not use any of the following as scheme names for your modular inputs:

    • batch
    • fifo
    • monitor
    • script
    • splunktcp
    • tcp
    • udp
  • Some parameters are always implicitly defined. Specifying any of the following parameters for your modular inputs has no effect. However, you can specify these parameters to help clarify their usage:

    • source
    • sourcetype
    • host
    • index
    • disabled
    • interval
    • persistentQueue
    • persistentQueueSize
    • queueSize
  • A modular input scheme can be defined only once. If the same scheme is defined again in another stanza, the subsequent definition and its parameters are ignored.

  • A scheme must define at least one parameter. Duplicate parameters are ignored.

  • The stanza definition and its parameters must start at the beginning of the line.
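
The following hypothetical names illustrate the identifier rule for scheme and parameter names:

# Valid names, matching [0-9a-zA-Z][0-9a-zA-Z_-]*
s3
my_input
3rdparty-feed

# Invalid names
_hidden         # cannot start with an underscore or hyphen
my.input        # the dot character is not allowed
my input        # spaces are not allowed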

The following example shows a configuration specification for an Amazon S3 modular input.

# S3 inputs.conf.spec file

[s3://<name>]
# Amazon key ID
key_id = <value>

# Secret key
secret_key = <value>

 Configure layering for modular inputs

A Splunk platform deployment can have many versions of the same configuration file that are usually layered in directories that affect the user, an app, or the system as a whole. However, unlike with typical configurations that inherit from a global default configuration, each modular input scheme gets a separate default stanza in the inputs.conf configuration file.

After the Splunk platform layers the configurations, the configuration stanza for a modular input, myScheme://myInput, inherits values from the scheme default configuration. A modular input can inherit the values for index and host from the default stanza, but the scheme default configuration can override these values.

For example, consider the following inputs.conf files in a system:

# Global default
# .../etc/system/local/inputs.conf
[default]
. . .
index = default
host = myHost
...
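
A scheme default stanza and an input stanza that are consistent with the layered result shown later in this section might look like the following. The app name myApp and the file locations are illustrative:

# Scheme default
# .../etc/apps/myApp/default/inputs.conf
[myScheme]
host = myOtherHost
param1 = p1

# Configuration stanza
# .../etc/apps/myApp/local/inputs.conf
[myScheme://myInput]
param2 = p2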

To build a layered configuration:

  1. Apply the values for index and host from the global default configuration.

    In a typical installation, the values for index and host from the global default configuration apply to all data inputs. Other values in the global default configuration do not apply to modular inputs.

  2. Apply values from the scheme default configuration, overriding any values that were previously set.

  3. Apply values from the configuration stanza, overriding any values that were previously set.

Here is the layered configuration of the previous example:

# Layered configuration example

[myScheme://myInput]
index = default          #from Global default

host = myHost            #from Global default, overridden by Scheme default
host = myOtherHost       #from Scheme default
param1 = p1              #from Scheme default
param2 = p2              #from Configuration stanza
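
Because later layers override earlier ones, the settings that effectively apply to myScheme://myInput in this example are the following:

[myScheme://myInput]
index = default
host = myOtherHost
param1 = p1
param2 = p2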

 Schedule and monitor scripts using the interval parameter

Use the interval parameter to schedule and monitor scripts. The interval parameter specifies how long the Splunk platform waits before restarting a script after it exits.

The interval parameter is useful for a script that performs a task periodically: the script performs its task and exits, and the interval determines when the script runs again to repeat the task.

The interval parameter also helps ensure that a script restarts, even if a previous instance of the script exits unexpectedly.

Entering an empty value for interval causes the script to run only at startup or when the input is modified.

The script instance mode, as defined by the use_single_instance parameter in the introspection scheme, affects how the Splunk platform applies the interval setting in the following ways (see the sketch after this list):

  • When allowing multiple instances of the script (use_single_instance is "false"), each stanza can specify its own interval parameter.

  • When allowing only single instances of the script (use_single_instance is "true"), the Splunk platform reads the interval setting from the scheme default stanza only. If interval is set under a specific input stanza, that value is ignored.

    With single instance mode, the interval cannot be a scheme endpoint argument, even if it is specified in inputs.conf.spec. You cannot modify the interval value using the scheme endpoint.
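
The following inputs.conf sketches illustrate both instance modes. The foobar scheme, the input names, and the interval values are hypothetical:

# use_single_instance is "false":
# each input stanza can set its own interval.
[foobar://aaa]
interval = 60            # run this input every 60 seconds

[foobar://bbb]
interval = 300           # run this input every 5 minutes

# use_single_instance is "true":
# set interval under the scheme default stanza instead.
[foobar]
interval = 120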

 Persistent queues

You can use persistent queues with modular inputs much as you do with TCP, UDP, FIFO, and scripted inputs. See Use persistent queues to help prevent data loss in the Splunk Enterprise Getting Data In manual.

You configure persistent queues for modular inputs much as you do with other inputs, but there are differences depending on the script instance mode of the modular input:

  • When allowing multiple instances of the script (use_single_instance is "false"), one script is run per input stanza. Because each script produces its own stream, each script can have its own persistent queue. To configure a persistent queue, place the persistent queue parameters under each input stanza as follows:

    [foobar://aaa]
    param1 = 1234
    param2 = qwerty
    queueSize = 50KB
    persistentQueueSize = 100MB
    

    Another way to configure a persistent queue is to put the queueSize and persistentQueueSize parameters under the scheme default stanza. In this example, that stanza is [foobar]. All input stanzas inherit these parameters, resulting in a separate persistent queue for each input stanza, as shown in the sketch after this list.

  • When allowing only single instances of the script (use_single_instance is "true"), only one stream of data exists for all input stanzas for the modular input. To configure the persistent queue, place the settings under the scheme default stanza as follows:

    [foobar]
    queueSize = 50KB
    persistentQueueSize = 100MB
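
For the multiple-instance case, the alternative of setting the queue parameters under the scheme default stanza might look like the following sketch. The input names and parameter values are hypothetical:

# use_single_instance is "false"
[foobar]
queueSize = 50KB
persistentQueueSize = 100MB

[foobar://aaa]
param1 = 1234            # inherits the queue settings and gets its own persistent queue

[foobar://bbb]
param1 = 5678            # inherits the queue settings and gets its own persistent queue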
    

Persistent queue files are located in $SPLUNK_HOME/var/run/splunk/exec/encodedpath, where encodedpath depends on the script mode as follows:

  • When allowing multiple instances of the script (use_single_instance is "false"), encodedpath derives from the input stanza.
  • When allowing only single instances of the script (use_single_instance is "true"), encodedpath derives from the scheme name.

 See also