Platform and tools: a kitbag for our journey

This chapter focuses on the preparations for our journey and describes some of the tools, techniques, and approaches we use for developing and testing the apps.

Our development and test environments

Splunk® apps are typically client-side web applications that may also include server-side components; it's also possible for a Splunk app to consist purely of server-side components. You can build them with a wide variety of web technologies, such as XML, JavaScript, HTML, CSS, Python, C#, Java, and Ruby. Therefore, you can choose the development tools you are most comfortable with: a good programmer's text editor is essential, and you'll also need to know how to debug your scripts and access any log data your app generates. If you are using JavaScript, you should learn how to use the developer tools in your browser: most browsers enable you to debug JavaScript, view the resources a page uses (such as stylesheets, images, and JavaScript files), and examine the effect of any CSS files on your formatting and layout. The following screenshot shows the Chrome developer tools open on the Summary page of the Pluggable Auditing System (PAS) app, with the JavaScript code paused at a breakpoint:


Splunk Enterprise itself generates log data that you can query and analyze using Splunk Enterprise. You can do so by specifying index=_internal in the default Splunk search app. For more information about Splunk instrumentation, introspection and various troubleshooting techniques, see "What Splunk logs about itself." For Splunk crash log analysis, see "Community:Troubleshooting Crashes."
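For example, a search along these lines (a sketch; narrow the sourcetype and fields to whatever you are investigating) summarizes recent errors that splunkd has logged about itself:

```
index=_internal sourcetype=splunkd log_level=ERROR
| stats count by component
| sort - count
```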

The approach we are taking to develop the PAS app is to:

  • Begin by understanding what the app must display and (along with our UX designer and business user) sketch out a design of the dashboard. This includes the panels, visualizations, drilldowns, and page navigation.
  • For each panel, understand the data requirements: where does the data come from, how should we interpret it, and do we need to implement features such as field extractions and lookups? Then we can define the searches.
  • Optimize the solution. For example, we might decide to build a data model to abstract the underlying data and then refactor the searches.

When the team started developing the PAS app they used sample data and had some control over the format and content of the data they were using, enabling them to focus more on the UI and the visualizations. In a real project with real data, you would analyze the available data to determine what types of questions you could ask which, in turn, would inform you of the types of visualizations you could use. Therefore in a real project, you would consider both the underlying data and the visualization requirements in parallel.

Overall, we will take a modular approach to the development of the apps, focusing on one particular dashboard or even just one particular panel at a time. Each panel is typically self-contained, but Splunk Enterprise does include capabilities that enable panels to share data if necessary.


The team found that it doesn't make sense to build the data models before getting a good idea of what the app looks like and what searches it requires. Your dashboard and search requirements feed into your data model design.

The focus of much of the testing will be on the UI, so we are using Selenium as a browser automation tool. It includes a selection of drivers for automating the most commonly used web browsers to enable cross-browser testing and there are bindings for a variety of languages such as C#, Java, JavaScript, Ruby, and Python. To run these automated user acceptance tests we've chosen a Python solution, and the familiarity of the test team and other Splunk Enterprise developers with this platform is one of the main reasons for adopting it. A test sends commands to the browser to simulate a user interaction, such as clicking on a button, and then captures the Document Object Model (DOM) content in the browser to enable us to verify the expected output. One challenge we face is how to test the graphical output in the Splunk app: the visualization libraries we plan to use generate Scalable Vector Graphics (SVG) code for the graphics, so we need to learn how to parse and extract the information we need from the SVG code embedded in the generated app web page.
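As a minimal sketch of the kind of SVG parsing involved (this is not the team's actual test code; the function name and sample markup are our own), the standard library is enough to pull text labels out of an SVG fragment captured from the page:

```python
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def svg_text_labels(svg_markup):
    """Return the text labels embedded in an SVG fragment (for example, axis or legend labels)."""
    root = ET.fromstring(svg_markup)
    return [node.text for node in root.iter(SVG_NS + "text")]

# A trimmed-down stand-in for the SVG markup a charting library might emit:
sample = ('<svg xmlns="http://www.w3.org/2000/svg">'
          '<g><text>admin</text><text>17</text></g></svg>')
print(svg_text_labels(sample))  # -> ['admin', '17']
```

In a real test, the SVG markup would come from Selenium (for example, an element's innerHTML), and an XPath query against the live DOM is an equally valid route.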


If you are using Selenium, be sure to check which versions of the browsers you are using it supports. Selenium is not always updated immediately to support the very latest versions.

We also initially planned to use the Splunk Python SDK to bypass the UI and access data directly in some test scenarios, but decided not to do so because we rely on the correctness of search execution by the core product.

In the methodology adopted for this project, the test team defines a set of acceptance test scenarios for each story we implement, and the developers then implement the code with those test scenarios in mind. The test team then completes writing a collection of automated user acceptance tests to verify that the code complies with those test scenarios. Note that for this project, we are not following a full Test-Driven Development methodology, although this is definitely something we will consider in the future.

Walkthrough: How we worked with a UX designer to mock up the PAS app

When we set out to build the reference app, we started with a backlog of questions we wanted our guidance to address. The resulting backlog was rather comprehensive and included 60+ questions such as:

  • What does a typical Splunk app architecture look like?
  • How should I set up my dev environment to be productive with Splunk Enterprise?
  • What are the different ways to integrate a Splunk app with existing systems?
  • How do I generate sample data to test my app?
  • What are the distributed deployment considerations?
  • How do I package an app? How do I deal with app versioning and updates?

In parallel, we worked with our partner to identify high-priority use cases for the app.


By building a real-world app that delivers real value to its users, we would achieve the high-level objective of learning and documenting the various architectural and technological aspects of building solutions on the Splunk platform.

To build a real app that delivered real value, the team needed to reconcile the questions backlog with the proposed business use cases. This was done iteratively for each sprint when we demoed progress made to the business owner and prioritized new development stories for the next sprint. The developers would then take the verified designs and approved stories and start making them a reality.

We had engaged with a UX designer early in the process to iteratively build mockups for the application. In the figures below, you can see the progress of our UX mockups for the Summary dashboard with the increasing fidelity. Many valuable insights originated from team brainstorming and whiteboarding sessions with the designer. We were able to have a fast turnaround because the designer was available to join the team on site. Our partner (business owner) was able to provide feedback on the designs early and frequently. These discussions brought many usability issues to light early in the process.

Below you see how we mapped various learning objectives to the specific use cases and visual elements of the reference app.

Engaging with a UX designer early provided the following benefits:

  • Facilitating insightful discussions among project stakeholders.
  • Making use cases more concrete and detailed, and showing how a potential solution would fit into the target user's collection of tools, techniques, and activities.
  • Validating UI designs using low-fidelity prototypes.
  • Identifying additional strategic opportunities for business.
  • Delighting end users with usable dashboards.

UX design played a vital role in empathizing with our users and understanding our users' needs, iteratively designing and testing solutions, and communicating optimal solutions to the development team.

Walkthrough: How we initially created the PAS app

These are the steps we followed to create a new, empty app when we first began development on both the PAS and Auth0 apps. These steps will create a barebones app that you can use as the starting point for your own Splunk apps. Note that you can also create a barebones app using the Splunk Enterprise command-line interface (CLI).
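For reference, the CLI route looks roughly like this (the app name is a placeholder, not one we used):

```
# Run from $SPLUNK_HOME/bin:
./splunk create app my_new_app -template barebones
```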

  1. In Splunk Enterprise, click the Apps link, and then choose Manage Apps.

  2. On the Apps page, click Create app and complete the required information about your app. You should choose a name for your app that makes it easy to recognize, and you should adopt a naming convention for the folders you use to make it easy to locate the source files: we've chosen to use the project name, the organization name, and the individual app name.

    Notice how we've chosen the barebones template.

  3. When we click Save, Splunk Enterprise creates a folder in the Splunk etc/apps folder using the name we chose in the previous step. This folder contains the following standard subfolders to organize the app resources: bin, default, local, and metadata. The app does not initially contain any views or dashboards.

  4. You should add your app to your source control system. We are using GitHub for our project. We first moved the folder that Splunk Enterprise created (in this example, _conducive_pasapp) from etc/apps to a more convenient location: in our case this was to a subfolder in our home directory in Linux, but if you are using Windows it could be to a subfolder in your Documents folder.

  5. From the new location we performed the initial check-in to our GitHub repository.

  6. Finally, we added a symbolic link from the etc/apps folder to our new location for the app source files. For more information about how and why we use symbolic links, see the section "Multiple projects in a single Git repository."


The name warum was changed to pas at a later point in the journey.

Workflows: Developing a Simple XML dashboard

Both apps use Simple XML dashboards to create the UI. This section describes the workflows we use when we create and edit a new Simple XML dashboard. The Splunk Simple XML Form Cheat Sheet is a useful quick reference if you are just starting out with Simple XML.

Creating a new Simple XML dashboard

We create new dashboards through the Splunk Enterprise UI. First we click Dashboards to access a list of dashboards in our app.


If you customize your app navigation, you may want to leave default links such as the Dashboards link in place until you have added all your dashboards. Otherwise, you must remember the URL of your list of dashboards or find the list of views from the Settings menu.

Next we click Create New Dashboard:

It's important to set the permissions to Shared in App to ensure that Splunk Enterprise saves the XML file within the app's folder in the Splunk etc/apps folder, where it is available to check into your source control system along with your other files. We can now see the document_activity.xml file if we look in the app's local/data/ui/views folder. For more information about managing the permissions of dashboards and other objects, see "Manage knowledge object permissions."


Permissions are the most common problem encountered when copying apps to a different location. If the permissions are not set to Shared in App or Global, the files will not be located in the app directory.

You can also create a new dashboard by creating a new XML file containing Simple XML markup in the /default/data/ui/views folder.

Editing the dashboard

After creating a new dashboard through the Splunk Enterprise UI, we continue to use the UI to develop it. When we click Edit on a Simple XML dashboard, we have the choice of editing the panels or editing the XML source directly:

If we choose Edit Panels we can use a graphical editing environment to develop the dashboard, while Edit Source lets us edit the XML source code directly.

When we start working with the JavaScript extensions and use an external editor for our JavaScript files, our developers find it convenient to use an external editor for the Simple XML files as well and to move away from the Splunk Enterprise visual editing environment. Typically, by this stage the dashboard UI is mostly complete, so any edits to the XML are minor tweaks rather than major changes to content.

File locations and packaging

When we use the Splunk Enterprise UI to create and edit our dashboards, it follows the standard Splunk Enterprise convention and saves all the changes we make in the local folder within the app. For example, the dashboard we just created is saved in this location: local/data/ui/views/document_activity.xml. When we package the app for distribution, we should not have any files in the local folder, so we must copy any resources from the local folder to the default folder. Therefore, we should move our document_activity.xml file to default/data/ui/views/document_activity.xml.

It's possible that during development you accidentally end up with two copies of a dashboard XML file, one in local and one in default. In this case, Splunk Enterprise renders the version in the local folder. It's important that everyone knows which is the active file and, if they are using an external text editor, that they edit the correct version. You may want to choose a specific point in the lifecycle of developing a new dashboard to move the XML file into the default folder and at the same time switch from using the Splunk Enterprise visual editing environment to only using an external text editor.


It's easy to forget during development that you've made a change that updates a file in the local folder that must be copied to the default folder. Be especially careful if you have configured your source code control system to ignore the local folder as this can cause you to forget to check in your updates. You should consider creating a script to perform the copy operation, especially if your solution consists of multiple apps.


You can also inadvertently save artifacts to another app context's local directory when an object's permission is set to Global. Be sure to watch out for these.

Test and sample data

Our test team needs predictable sample data in the index to ensure that tests are repeatable; for example, a test of a UI element might expect it to contain a specific value. To achieve this, we generate a large set of sample data to use in our automated tests. However, this sample data spans a fixed time range, making it awkward for developers to use in simple tests such as searching for recent events (for example, events in the last day). To meet this requirement for recent, dynamic data, we also include a dynamic event generator in the app that produces a pseudo live stream of event data.

Sample data for automated UI testing

To perform automated UI testing, our business user has provided some sample log files containing almost one million records for us to work with. To facilitate using this test data, we have created a simple Splunk app that contains these sample log files and indexes them into a separate index. This app contains two folders: the data folder contains the sample log files, and the default folder contains the .conf files that define the Splunk app. The app.conf file contains a configuration setting to make this app invisible, and the indexes.conf file defines an index named pas. The following sample shows the definition of one of the three inputs in the inputs.conf file:

# linux configuration
disabled = false
followTail = 0
sourcetype = ri:pas:application

# windows configuration
disabled = false
followTail = 0
sourcetype = ri:pas:application

Notice how we define the same input twice. The first definition works correctly if the app runs in a Linux environment; the second works correctly if the app runs in a Windows environment. This makes it easy for us to deploy this simple test app in different environments during development and test.
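The full stanzas take a shape along the following lines. The monitor paths here are hypothetical placeholders, not the app's actual ones; only the key/value settings above come from the real file:

```
# Linux-style path (illustrative):
[monitor://$SPLUNK_HOME/etc/apps/pas_sample_data/data/app_sample.log]
disabled = false
followTail = 0
sourcetype = ri:pas:application

# Windows-style path (illustrative):
[monitor://%SPLUNK_HOME%\etc\apps\pas_sample_data\data\app_sample.log]
disabled = false
followTail = 0
sourcetype = ri:pas:application
```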


Don't forget to define the SPLUNK_HOME environment variable on your machine.

On *nix you can do so by using the "export SPLUNK_HOME=<actual path>" command, adding that export statement to your .profile or .bashrc file, or by running the ./setSplunkEnv script. On Windows, adjusting the environment settings through the UI will do the trick. Alternatively, create a batch file using splunk envvars and then run that batch file as follows:

splunk envvars > setSplunkEnv.bat


While this data is useful during development and test, we will need larger data sets when we performance test our app. It's not always possible to work with real data during development and test. For example, real log files might contain sensitive information, or it may be difficult to find real log files that are a manageable size and that contain a full set of representative event data. We use the Splunk Event Generator utility described below to generate static log files for repeatable tests as well as to generate a pseudo live stream of data.


It's important that any fake test data is properly representative of the real-world log data your app will process. For example, the content of the fields, and the sequencing and pattern of the events should all be realistic.

Pseudo live stream of data

To generate a pseudo live stream of data for use in exploratory testing and demos, we use the open source Splunk Event Generator utility. You can find this utility on GitHub at


There are several versions of this utility, and we decided to use the most recent one we could find. This was the develop branch in the GitHub repository (


The latest version of the Splunk Reference App - PAS packages the version of the eventgen that was current at the time of this writing inside appserver/addons. The install scripts automatically create a symbolic link to it.

Installing this version of the Splunk Event Generator is easy:

  • We clone the code from GitHub, and add the contents of the develop branch (not the default master branch) to a folder named eventgen (the name of this folder is important) in the Splunk etc/apps folder.
  • We then add configuration files to the PAS app that specify how to generate suitable sample data. These configuration files consist of an eventgen.conf file and a set of .csv and .sample files in a new samples folder in the root of the PAS app folder. The eventgen.conf file defines how the Splunk Event Generator generates the sample data using template events in the .csv files, and replaces tokens in the template events with values from the .sample files.
  • We add a metadata/default.meta file to the PAS app that has the content shown below.
export = system

Typically, you exclude the content of the metadata folder when you check your code into your source code control system, because the content is generated or modified whenever you install the app. We modified our .gitignore file to include this specific default.meta file in the check-in.
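The modification uses Git's standard negation pattern. A fragment along these lines (the exact patterns in our file may differ) ignores generated metadata while keeping the default.meta that eventgen needs:

```
metadata/*
!metadata/default.meta
```

Note that the negation only works because the ignore rule is metadata/* rather than metadata/; Git cannot re-include a file whose parent directory is itself ignored.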

For more information about how to use the eventgen.conf file, you should start by reading the tutorial included in the repository (

Multiple projects in a single Git repository

At the start of our journey, our GitHub repository contains three projects: the source for the PAS Splunk app, the source for the PAS Splunk sample data app, and a C# test project (that we later replaced with a Python script). To run the two Splunk apps when we check out the code, we need to remember to copy the source to the Splunk etc/apps folder. To run the Python tests, we can run the script from the check-out location.

To avoid having to remember to copy the Splunk app source to the correct location, we use symbolic links from the etc/apps folder to the location of the Git repository. In Windows, you can create symbolic folder links by using the mklink /J command at a command prompt. The following screen shot shows the symbolic links we created for our two Splunk apps:

In a Linux or OS X environment, we can also create symbolic links. The following screenshot shows a bash shell with similar symbolic links that we created using the ln -s command:
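In either environment the commands take roughly this shape; the paths shown are illustrative, not the ones from our repository:

```
:: Windows (from an elevated command prompt):
mklink /J "%SPLUNK_HOME%\etc\apps\pas_ref_app" "C:\dev\pas_ref_app"

# Linux or OS X:
ln -s ~/dev/pas_ref_app "$SPLUNK_HOME/etc/apps/pas_ref_app"
```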

Now we can work directly with the files in our local Git repository and any changes we make are automatically reflected in the Splunk etc/apps folder.


We are not using a full IDE on this project. Instead, the development team is using a programmer's text editor that can do syntax highlighting for HTML and JavaScript.

Using .gitignore

In a local Git repository, you can use the .gitignore file to ignore changes to individual files or folders so that certain items (such as temporary, generated files) do not get checked in to the repository. The following shows the .gitignore file we use for the PAS app:

# OS X

# Windows

# Python

# UI test automation generated files 

# Splunk app local files 
# (Normally we'd exclude the entire "metadata" directory. 
#  However we must preserve metadata/default.meta so that eventgen works.) 

This file tells Git to ignore some files generated by the operating system, compiled Python files, the files Visual Studio generates when we compile the test project, and the files that Splunk generates at runtime in the local and metadata folders. There is no need to include any of these files in the repository because they are all generated when you run the app or the test project.

Restarting and reloading Splunk Enterprise

We have added some shortcuts to the list of favorites in our browser that make it easier to work with Splunk Enterprise while we are developing the apps. These shortcuts enable us to refresh our Splunk Enterprise environment when we make changes to our source code and configuration without the need to completely restart Splunk Enterprise, which can take time if you have lots of Splunk apps installed. The following is a list of the most useful URLs (you may need to change the port number and locale to reflect your local configuration):
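On a default installation, these typically include endpoints such as the following two; verify them against your own instance and adjust the port and locale as noted above:

```
http://localhost:8000/en-US/debug/refresh   # reload configuration for most endpoints
http://localhost:8000/en-US/_bump           # increment the cache-buster for static assets
```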


To avoid having to remember to refresh the cache to see the change when you edit a static resource, you can simply disable cache when you are using the developer tools in the browser. In Chrome you check Disable cache (while DevTools is open) in General Settings in Developer Tools. In IE 11, click Always refresh from server on the Network panel in Developer tools.


You can find lots of useful information about your Splunk environment (including the URLs in the previous list) if you visit the Development services page at http://localhost:8000/en-US/info.

If you do need to restart Splunk Enterprise after you make a change, you can use the splunk command in the bin folder. For example, to get help about how to stop and start Splunk Enterprise services, execute the following at a command prompt or shell:

bin\splunk help control

For example, you can request that Splunk Enterprise reload everything from the static folder (such as CSS or JavaScript files) by executing the following command:

bin\splunk restartss

If you want to minimize effort and automate the restart of Splunk Enterprise services, you can also use the Splunk Enterprise development command-line interface (CLI) to help during development. You can install it through npm install -g splunkdev-cli. In addition to letting you stop, start, and restart Splunk Enterprise services, this tool can watch your local file system and automatically restart Splunk Enterprise services or reload configuration files when you make changes to your apps. Note that this tool depends on Node.js (a version of Node.js is included in a standard Splunk Enterprise installation).


We have seen customers efficiently automate their builds, Splunk Enterprise instance restarts, test runs, and other tasks triggered by code changes, using tools such as Grunt and Chef.

The development environment for the Auth0 app

For the simpler Auth0 app, we have adopted a streamlined development methodology. In a standard Splunk Enterprise installation, the Splunk etc/apps folder contains a subfolder for each app that contains all the code, configuration, and other resources for that app. We have made this folder into a Git repository, so that we can directly check in to GitHub any changes we make to the app while we are developing it. The Auth0 app includes some server-side components implemented using Node.js. The following Windows screenshot shows the top-level of the Auth0 app including the hidden Git resources.

The .gitignore file includes a line to ignore the node_modules folder that contains the two installable node packages that the app uses (an auth0 package and a splunk-sdk package).


This approach makes it very easy to edit the app and then immediately check in the code you changed. Don't forget to restart Splunk Enterprise using the appropriate URL (see the previous section) when you make a change. Doing a full restart of Splunk Enterprise is slow.

Choosing our platform: Simple XML

Both apps are created using Simple XML, but use different parts of that framework. The Essentials guide that accompanies this description of our journey, together with the online reference documentation, includes more details about the different options. We have chosen to use the Simple XML approach for building the Auth0 app, and a hybrid of Simple XML and Simple XML with SplunkJS Extensions for building the PAS app.


The different technologies for building Splunk Enterprise apps are not necessarily mutually exclusive. It's possible to create an app that uses a mixture of approaches.

The following table summarizes the differences between the UI technologies we are using and some of their pros and cons. It also describes two options for extending the Simple XML model.

Simple XML

You should start with this approach. It's easy to add the extensions later if you need them.

Lets you build simple dashboards. Splunk Enterprise includes interactive tools for creating and modifying dashboards and an XML editor that you can use to edit dashboard definition markup.

This approach is ideal for simple apps that require only basic customization and that do not require complex navigation between different dashboards.

Simple XML with JS and/or CSS Extensions

Once you hit the limit of what you can achieve with Simple XML on its own, you can add JavaScript and CSS to the collection of tools.

Extends the capabilities of Simple XML by adding more layout options, visualizations, and custom behaviors.

This is useful for adding additional capabilities to a Simple XML dashboard, but it adds to the complexity of the app because your app now includes custom JavaScript and CSS resources. The JavaScript libraries can include the SplunkJS Stack, third-party libraries, or your own JavaScript code.

Simple XML converted to HTML with custom JavaScript

We don't recommend this approach because it will introduce maintenance issues in the future.

Converts a simple XML dashboard to HTML with a lot of automated JavaScript code generation. This gives you full control of the rendering of the page.

Caution: Maintainability concern: the generated dashboards end up being specific to the Splunk Enterprise version on which they were generated and might not be future-compatible.

Also, this is a one-way conversion (you can't go back to simple XML).

For the Auth0 app, its relatively simple requirements meant that we could use just Simple XML to build the UI and searches. The PAS app had more complex UI requirements so we used the JavaScript and CSS extensions on some of the dashboards.

Although on this project we chose not to use an IDE, if you choose to use one, there are RelaxNG schemas available for Simple XML. You can use these schemas to validate your Simple XML documents and, if your IDE supports it, enable autocomplete functionality in the editor to speed up the development process. You can download the RelaxNG schemas and find out more about them if you navigate to http://localhost:8000/info in Splunk Enterprise.

In addition to the discussion of the Splunk Web Framework in the Essentials guide, the following resources on the Splunk developer and docs web sites provide detailed information about the framework:

We also considered using the SplunkJS Stack to integrate Splunk Enterprise functionality into a standard web app hosted on a standard web server such as Apache or IIS. Although this option offers a very flexible approach, we can meet our requirements by creating Splunk apps to run in Splunk Enterprise. To learn more about reusing the SplunkJS libraries for features such as views and search managers, see Use SplunkJS Stack in your own web apps.

Splunk SDK for JavaScript

The Splunk SDK for JavaScript includes features that let you conveniently call much of the Splunk Enterprise REST API. You can use this SDK to enable your JavaScript code to interact programmatically with the Splunk engine. You can write both client-side and server-side JavaScript to interact with Splunk Enterprise through the SDK. The following resources provide more information about the JavaScript SDK and instructions on how to install it:

Initially, only the Auth0 app uses the server-side components of the JavaScript SDK, in its implementation of a Modular input. See the section "Creating a Modular input" in the chapter "Working with data: where it comes from and how we manage it."

Testing techniques and approaches

We have a selection of automated user acceptance tests in place for the dashboards and the features the development team has created. In this section, we highlight some of the techniques we have found useful in implementing our tests.


Some testing tools (such as the Selenium IDE) let you record a user session with a web application as a script that you can later refactor into a collection of tests. Because our app is relatively small, we've chosen to implement the user interactions with the app manually, writing code for each acceptance test scenario.

Identifying page elements

Originally, we used a C# project for our automated user acceptance tests, but in the final version of the project we use a Python script. In both cases, we use Selenium to help us automate the browser interactions. We chose Python rather than C# to make it easier for users to download and run the tests: in many cases, a developer or test workstation will have Python preinstalled.

Testing a web app typically requires us to be able to identify elements on a page (such as text boxes and buttons) and then simulate user interactions with those elements to drive a test case. The web pages rendered by Splunk Enterprise can be complex, especially those displaying charts that are rendered using SVG, so we have used a number of different techniques to programmatically locate elements on the page. Typically, we use the Find functions provided by Selenium, which can locate elements in a number of different ways such as by Id, by ClassName, by LinkText, and by XPath (XPath is particularly useful for parsing SVG data). Which particular technique we use depends on a careful analysis of the source of the rendered page (by using the view source feature in our web browser) to determine how to uniquely identify the specific element we need for our test. Due to the complexity of the web pages in our app, not all elements on the page are loaded immediately, and we find it necessary to use repeated Find calls to locate an element reliably, as shown in the following code snippet that uses both the find_element_by_tag_name and find_element functions:

def WaitElementAppear(parentElement, byMethod, str, logMsg):
    # (time and the global driver are defined at the top of the test script)
    if parentElement is None:
        parentElement = driver              # fall back to the top-level WebDriver
    loaded = False
    result = None
    startTime = time.time()
    stopTime = startTime + 30               # timeout, in seconds
    while not loaded and time.time() < stopTime:
        try:
            result = parentElement.find_element(byMethod, str)
            assert result != None
            assert result.tag_name != None
            if logMsg:
                logstr = "{0}, {1}, {2}".format(time.time(), time.time() - startTime, logMsg)
                print logstr
            loaded = True
        except:
            time.sleep(0.5)                 # element not there yet; retry shortly
    return result

This example comes from a utility function named StartWaitElementAppearTask in our Python test script that we use to create an asynchronous wait for a specific element to appear on the page. The wait enables the script to run without failing if the website is slow to respond. The following snippet shows an example of how we use this function:

def VerifySummaryPageElements():
    print "call VerifySummaryPageElements"
    # wait for the donut chart to show up
    ActionTask(VerifyDonutChart, "Verify DonutChart")
    # wait for the top-users table to show up
    userPanel = StartWaitElementAppearTask(driver, By.ID, "panel3").get()
    topUsersTask = StartWaitElementAppearTask(userPanel, By.CLASS_NAME,
        "shared-resultstable-resultstablerow",
        "load summary page top-user table").get()

    # wait for the top-documents table to show up
    documentPanel = StartWaitElementAppearTask(driver, By.ID, "panel4").get()
    topDocumentsTask = StartWaitElementAppearTask(documentPanel, By.CLASS_NAME,
        "shared-resultstable-resultstablerow",
        "load summary page top-documents table").get()

This snippet also makes use of the ActionTask function to handle the asynchronous behavior. The following snippet shows the ActionTask function:

# Reconstructed sketch: WriteLog is an assumed logging helper.
def ActionTask(action, logMsg=None):
    print "start to ActionTask({0})".format(action.func_name)
    starttime = time.time()
    action()  # invoke the wrapped action
    stoptime = time.time()
    if logMsg:
        logstr = "{0}, {1}, {2}".format(time.time(), action.func_name, stoptime - starttime)
        WriteLog(logstr)
    print "finish to ActionTask({0})".format(action.func_name)
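The StartWaitElementAppearTask and ActionTask functions follow a simple task pattern: start a polling operation, then call get() to block for its result. The following is a minimal, hypothetical sketch of that pattern (the WaitTask class and its polling predicate are our own illustration, not the app's actual code), with a plain Python callable standing in for the Selenium Find calls:

```python
import threading
import time

class WaitTask(object):
    """Runs a polling wait on a background thread; get() blocks for the result."""
    def __init__(self, predicate, timeout=10.0, interval=0.1):
        self._result = None
        self._done = threading.Event()
        self._thread = threading.Thread(
            target=self._poll, args=(predicate, timeout, interval))
        self._thread.start()

    def _poll(self, predicate, timeout, interval):
        stop = time.time() + timeout
        while time.time() < stop:
            value = predicate()  # stands in for a Selenium Find call
            if value is not None:
                self._result = value
                break
            time.sleep(interval)
        self._done.set()

    def get(self):
        self._done.wait()
        assert self._result is not None  # fail the test if the element never appeared
        return self._result

# Example: the "element" appears only after a short delay.
appearAt = time.time() + 0.3
task = WaitTask(lambda: "element" if time.time() >= appearAt else None)
print(task.get())  # prints: element
```

Starting the poll on a background thread means several waits can be kicked off in parallel and joined later with get(), which is how the test script tolerates slow page loads without failing.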

Many of our tests require us to simulate a user interaction with the Splunk time range picker control, so we have created a set of wrapper functions such as ChangeTimeRange for this control that we can reuse from multiple tests. This is particularly useful where the outcome of the test depends on selecting a specific set of events from a sample log file.

Automating advanced controls

Some controls require special techniques to automate them in a test. For example, on the User Details dashboard you can zoom in on the time range by dragging the sliders. The following screenshot shows the sliders after a user has moved them to zoom in on the period Sat Aug 9 to Sun Aug 10:

The following code snippet shows how we automate this zooming behavior in a test:

def VerifyUserPageZoomChart():
    zoomChart = StartWaitElementAppearTask(driver, By.ID, "zoom_chart").get()
    zooms = StartWaitElementsAppearTask(zoomChart, By.CLASS_NAME,
        "highcharts-axis-labels", "load userdetailspage zoomchart").get()

    # month and day labels, e.g. "Aug 9" (reconstructed)
    startdate = "{0} {1}".format(searchRangeStartTime.strftime("%b"), searchRangeStartTime.day)
    enddate = "{0} {1}".format(searchRangeEndTime.strftime("%b"), searchRangeEndTime.day)

    assert startdate in zooms[0].text
    assert enddate in zooms[0].text

    # zoom in on part of the zoomchart series by dragging along the x-axis
    highchartsGroup = StartWaitElementAppearTask(zoomChart, By.CLASS_NAME, "highcharts-series-group").get()
    highcharts = StartWaitElementAppearTask(highchartsGroup, By.CLASS_NAME, "highcharts-series").get()
    path = StartWaitElementAppearTask(highcharts, By.TAG_NAME, "path").get()
    d = path.get_attribute("d")
    moveFrom = GetSvgLineCoordinates(d)[1]
    moveTo = GetSvgLineCoordinates(d)[3]
    ActionChains(driver).move_to_element_with_offset(path, moveFrom[0], moveFrom[1]).perform()
    ActionChains(driver).drag_and_drop_by_offset(path, moveTo[0] - moveFrom[0], 0).perform()

    # verify the "reset" link shows up when the zoom window is selected

This code first uses a series of nested searches to locate the SVG elements within the chart contained in the HTML div element with an Id of zoom_chart. It then uses the Selenium ActionChains class to drag the mouse between two coordinates on the chart's path element to set the zoom.
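The GetSvgLineCoordinates helper is not shown above; a minimal sketch of what such a function might do is to tokenize the SVG path's d attribute into (x, y) pairs. The implementation below is our own assumption (it handles only simple paths built from M and L commands), not the app's actual code:

```python
import re

def GetSvgLineCoordinates(d):
    """Parse a simple SVG path string such as 'M 10 250 L 120 80'
    into a list of (x, y) coordinate pairs."""
    numbers = [float(n) for n in re.findall(r"-?\d+(?:\.\d+)?", d)]
    # Pair up consecutive numbers as (x, y) points.
    return list(zip(numbers[0::2], numbers[1::2]))

# Example: the second and fourth points pick out a drag range on the chart.
d = "M 10 250 L 120 80 L 240 200 L 360 120"
points = GetSvgLineCoordinates(d)
print(points[1], points[3])  # prints: (120.0, 80.0) (360.0, 120.0)
```

Working from the raw path data like this is often the only practical way to compute drag offsets, because the SVG elements Highcharts renders carry no convenient Ids.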


The Selenium ActionChains class is very useful for simulating mouse-based user interactions such as clicks and drag-and-drop.

Instrumenting our tests

Our test project writes timing information to a log file as the basis for collecting performance data from the app. The following shows some sample data from the log file, collected over the course of several test runs. The timings show how long, in seconds, a given test item takes to run:

09/16/14 16:35:04, ============================== New Test Run Start ============================== 
09/16/14 16:35:44, 1. Load App page = 20.1652754  
09/16/14 16:36:09, 2. Load Summary page Center Chart = 3.854182 
09/16/14 16:36:28, 3. Load Document Details page = 18.8270169 
09/16/14 16:37:01, 2. Load Summary page Center Chart = 3.0892958 
09/16/14 16:37:20, 4. Load User Details page = 18.4391709 
09/16/14 16:37:42, 2. Load Summary page Center Chart = 4.3207587 
09/16/14 16:38:11, 2. Load Summary page Center Chart = 2.8340855 
09/16/14 16:38:16, ============================== End Test Run ============================== 
09/16/14 20:27:54, ============================== New Test Run Start ============================== 
09/16/14 20:28:27, 2. Load Summary page Center Chart = 4.5750334 
09/16/14 20:28:54, 2. Load Summary page Center Chart = 4.0135587 
09/16/14 20:29:16, 2. Load Summary page Center Chart = 3.8203631 
09/16/14 20:29:35, 3. Load Document Details page = 19.3844841 
09/16/14 20:29:55, 2. Load Summary page Center Chart = 2.9488817 
09/16/14 20:30:15, 4. Load User Details page = 18.4408812 
09/16/14 20:30:47, 1. Load App page = 15.1469081  
09/16/14 20:30:49, ============================== End Test Run ============================== 
09/16/14 20:37:24, ============================== New Test Run Start ============================== 
09/16/14 20:38:08, 1. Load App page = 18.3666046  
09/16/14 20:38:31, 2. Load Summary page Center Chart = 4.0516422 
09/16/14 20:38:54, 2. Load Summary page Center Chart = 2.8054817 
09/16/14 20:39:19, 2. Load Summary page Center Chart = 4.1741535 
09/16/14 20:39:37, 3. Load Document Details page = 18.4968156 
09/16/14 20:39:55, 2. Load Summary page Center Chart = 2.8528058 
09/16/14 20:40:15, 4. Load User Details page = 19.352799 
09/16/14 20:40:17, ============================== End Test Run ==============================

We can use Splunk Enterprise to visualize this information and plot the results over time. First, we define a new data input for the performance log file, and then we define field extractions for the item under test (measured_item) and the time taken (perf_value). We then create a search to extract the data from the log file and plot it, as shown in the following screenshot:

This lets us track how the performance of the items under test changes over time, and will allow us to measure the impact of any performance tuning we undertake in the future.
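As a quick sanity check outside Splunk Enterprise, the two extractions can be prototyped as a single regular expression over the log line format shown above. The regex below is our own sketch, not the app's actual props.conf content:

```python
import re

# Matches performance log lines such as:
# 09/16/14 16:35:44, 1. Load App page = 20.1652754
PERF_LINE = re.compile(
    r"^(?P<timestamp>\d{2}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}), "
    r"\d+\. (?P<measured_item>[^=]+?) = (?P<perf_value>[\d.]+)\s*$")

line = "09/16/14 16:35:44, 1. Load App page = 20.1652754"
m = PERF_LINE.match(line)
print(m.group("measured_item"), m.group("perf_value"))  # prints: Load App page 20.1652754
```

Validating the pattern against sample lines like this before pasting it into a field extraction saves a round trip through the Splunk Enterprise UI.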

Monitoring search performance in Splunk Enterprise

At one point in our journey, our tests showed that it was taking some time for the Summary dashboard to finish loading, with the message "Waiting for data" displaying in the Top Users and Top Documents panels. It's possible to see how long individual searches take to complete by navigating to the list of completed Jobs from the Activity menu in Splunk Enterprise. The following screenshot shows that summary_base_search was taking almost four minutes to complete even though the dashboard eventually reported that no data was found:

When we reviewed the code, we found that we had not set the time range for the query correctly so it was fetching all the data, before a filter on the table caused it not to display any of the returned data. After fixing the code by adding a time range filter to the original query, the Summary dashboard loaded significantly faster. The following screenshot shows the new search timings:


The Search Job Inspector tool (click the Inspect action) lets you drill down into a particular search and troubleshoot its performance, as well as helping you understand the behavior of the underlying knowledge objects such as tags, event types, lookups, and so on. You can evaluate the time spent parsing and executing the search (and any subsearches) and setting up the data structures needed for it to run. This is broken down further for each command used, so you can identify particularly expensive commands and begin your search optimizations.


Additional tips for fine-tuning your searches can be found in the Splunk Enterprise documentation.

What did we learn?

This section summarizes some of the key lessons learned from our preparations for the journey.

  • It's quick and easy to set up a development environment for building Splunk apps.
  • It's quick and easy to build a basic Splunk app using Simple XML.
  • For an efficient development workflow, consider making your app folder in the Splunk etc/apps folder a Git repository so you can quickly check in changes when you update and test your code.
  • Although we are not following a formal test-driven development (TDD) methodology, it's still important for our test team and developers to identify the key test cases as early as possible (ideally, before development starts). In this way, the developers will have the test cases that their implementation must pass in mind as they develop the code, and the test team can begin writing the tests in parallel with the developers writing the code.
  • We must ensure that we test the apps in both Windows and Linux environments to ensure that the configuration works correctly. For example, the application must work with the different path separators and environment variables on the two platforms.
  • Creating user acceptance tests that automate UI interactions requires a number of different techniques to reliably locate and interact with elements on the web page.
  • Performance testing helps identify bottlenecks in the searches and devise optimization strategies. Budget sufficient time for this.