Implementing BIG-IP WAF logging and visibility with ELK


This technical article is useful for BIG-IP users familiar with web application security and with the implementation and use of the Elastic Stack. This includes application security professionals, infrastructure management operators, and SecDevOps/DevSecOps practitioners.

The focus is exclusively on WAF logs. Firewall, bot, or DoS mitigation logging into the Elastic Stack will be the subject of a future article.


This article focuses on the configuration required to send Web Application Firewall (WAF) logs from the BIG-IP Advanced WAF (or BIG-IP ASM) module to an Elastic Stack (a.k.a. Elasticsearch-Logstash-Kibana or ELK).

First, the article goes over the configuration of the BIG-IP: a security policy and a logging profile are attached to the virtual server being protected. This can be configured via the BIG-IP user interface (TMUI) or through the BIG-IP declarative interface (AS3).

The configuration of the Elastic Stack is discussed next, including the configuration of filters adapted to processing BIG-IP WAF logs.

Finally, the article provides some initial guidance on the metrics that can be taken into consideration for visibility. It discusses the use of dashboards and provides some recommendations regarding potentially useful visualizations.

Pre-requisites and Initial Premise

For the purposes of this article and to follow the steps outlined below, you will need at least one BIG-IP Advanced WAF running TMOS version 15.1 or above (note that this may work with previous versions but has not been tested). The target BIG-IP is already configured with:

  • A virtual server
  • A WAF policy

An operational Elastic Stack is also required. 

The administrator will need to have configuration and administrative privileges on both the BIG-IP and the Elastic Stack infrastructure. They will also need to be familiar with the network topology linking the BIG-IP to the Elasticsearch cluster/infrastructure.

It is assumed that you want to use your Elasticsearch (ELK) logging infrastructure to gain visibility into BIG-IP WAF events.

Logging Profile Configuration

An essential part of getting WAF logs to the proper destination(s) is the Logging Profile. The following will go over the configuration of the Logging Profile that sends data to the Elastic Stack.

Overview of the steps:

  1. Create Logging Profile
  2. Associate Logging Profile with the Virtual Server

After the procedure below is followed, log lines sent from the BIG-IP on the wire are comma-separated key="value" pairs that look something like the sample below:

Aug 25 03:07:19 localhost.localdomainASM:unit_hostname="bigip1",management_ip_address="",management_ip_address_2="N/A",http_class_name="/Common/log_to_elk_policy",web_application_name="/Common/log_to_elk_policy",policy_name="/Common/log_to_elk_policy",policy_apply_date="2020-08-10 06:50:39",violations="HTTP protocol compliance failed",support_id="5666478231990524056",request_status="blocked",response_code="0",ip_client="",route_domain="0",method="GET",protocol="HTTP",query_string="name='",x_forwarded_for_header_value="N/A",sig_ids="N/A",sig_names="N/A",date_time="2020-08-25 03:07:19",severity="Error",attack_type="Non-browser Client,HTTP Parser Attack",geo_location="N/A",ip_address_intelligence="N/A",username="N/A",session_id="0",src_port="39348",dest_port="80",dest_ip="",sub_violations="HTTP protocol compliance failed:Bad HTTP version",virus_name="N/A",violation_rating="5",websocket_direction="N/A",websocket_message_type="N/A",device_id="N/A",staged_sig_ids="",staged_sig_names="",threat_campaign_names="N/A",staged_threat_campaign_names="N/A",blocking_exception_reason="N/A",captcha_result="not_received",microservice="N/A",tap_event_id="N/A",tap_vid="N/A",vs_name="/Common/adv_waf_vs",sig_cves="N/A",staged_sig_cves="N/A",uri="/random",fragment="",request="GET /random?name=' or 1 = 1' HTTP/1.1\r\n",response="Response logging disabled"
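Since each line is a flat series of key="value" pairs, it is straightforward to parse outside of Logstash as well. The following is a minimal Python sketch — a naive, hypothetical parser that assumes values never contain an embedded double quote, which holds for most fields but not necessarily for raw request payloads:

```python
import re

def parse_waf_log(line: str) -> dict:
    # Extract every key="value" pair into a dict.
    # Naive assumption: values contain no embedded double quotes.
    return dict(re.findall(r'(\w+)="([^"]*)"', line))

# Shortened sample taken from the log line above
sample = ('unit_hostname="bigip1",request_status="blocked",'
          'attack_type="Non-browser Client,HTTP Parser Attack",violation_rating="5"')
event = parse_waf_log(sample)
print(event["request_status"])          # blocked
print(event["attack_type"].split(","))  # ['Non-browser Client', 'HTTP Parser Attack']
```

Note that multi-valued fields such as attack_type arrive comma-joined inside a single quoted value; the Logstash configuration shown later splits them into lists.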

Please choose one of the methods below. The configuration can be done through the web-based user interface (TMUI), the command line interface (TMSH), directly with a declarative AS3 REST API call, or with the BIG-IP native REST API. This last option is not discussed herein.



Create Profile

  1. Connect to the BIG-IP web UI and log in with administrative rights
  2. Navigate to Security >> Event Logs >> Logging Profiles
  3. Select “Create”
  4. Fill out the configuration fields as follows:
       • Profile Name (mandatory)
       • Enable Application Security
       • Set Storage Destination to Remote Storage
       • Set Logging Format to Key-Value Pairs (Splunk)
       • In the Server Addresses field, enter an IP address and port, then click on Add
  5. Click on Create

Add Logging Profile to virtual server with the policy

  1. Select the target virtual server and click on the Security tab (Local Traffic >> Virtual Servers : Virtual Server List >> [target virtual server])
  2. Highlight the logging profile in the Available column and move it to the Selected column (in this example, the profile is “log_all_to_elk”)
  3. Click on Update

At this point, the BIG-IP will forward logs to the Elastic Stack.



Create profile

  1. SSH into the BIG-IP command-line interface (CLI)
  2. From the tmsh prompt, enter the following:
create security log profile [name_of_profile] application add { [name_of_profile] { logger-type remote remote-storage splunk servers add { [IP_address_for_ELK]:[TCP_Port_for_ELK] { } } } }

For example:

create security log profile dc_show_creation_elk application add { dc_show_creation_elk { logger-type remote remote-storage splunk servers add { [IP_address_for_ELK]:[TCP_Port_for_ELK] { } } } }

3. Ensure that the changes are saved:

 save sys config partitions all

Add Logging Profile to virtual server with the policy

1.    From the tmsh prompt (assuming you are still logged in) enter the following:

modify ltm virtual [VS_name] security-log-profiles add { [name_of_profile] }

For example:

modify ltm virtual adv_waf_vs security-log-profiles add { dc_show_creation_elk }

2.    Ensure that the changes are saved:

save sys config partitions all

At this time the BIG-IP sends logs to the Elastic Stack.


Application Services 3 (AS3) is a BIG-IP configuration API endpoint that allows the user to create an application from the ground up. For more information on F5’s AS3, refer to this link.

In order to attach a security policy to a virtual server, the AS3 declaration can either refer to a policy present on the BIG-IP or refer to a policy stored in XML format and available via HTTP to the BIG-IP (ref. link).

The logging profile can be created and associated to the virtual server directly as part of the AS3 declaration. For more information on the creation of a WAF logging profile, refer to the documentation found here.

The following is an example of part of an AS3 declaration that will create a security log profile that can be used to log to the Elastic Stack:

     "secLogRemote": {
       "class": "Security_Log_Profile",
       "application": {
         "localStorage": false,
         "maxEntryLength": "10k",
         "protocol": "tcp",
         "remoteStorage": "splunk",
         "reportAnomaliesEnabled": true,
         "servers": [
             "address": "",
             "port": "5244"

In the sample above, the Elastic Stack server listens on TCP port 5244 for BIG-IP WAF logs (the server address is left blank here and should be set to the address of your Logstash listener). Note that the log format used in this instance is “Splunk”. There are no declared filters, and thus only the illegal requests will get logged to the Elastic Stack. A sample AS3 declaration can be found here.
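Since AS3 declarations are plain JSON, this fragment can also be generated programmatically before being posted to the BIG-IP. A minimal Python sketch follows; `elk_address` and `elk_port` are hypothetical placeholder values (192.0.2.0/24 is a documentation-only address range):

```python
import json

# Hypothetical placeholders -- substitute the address and port
# of your own Logstash listener.
elk_address, elk_port = "192.0.2.10", "5244"

# Build the Security_Log_Profile fragment shown above as a Python dict.
sec_log_profile = {
    "secLogRemote": {
        "class": "Security_Log_Profile",
        "application": {
            "localStorage": False,
            "maxEntryLength": "10k",
            "protocol": "tcp",
            "remoteStorage": "splunk",
            "reportAnomaliesEnabled": True,
            "servers": [{"address": elk_address, "port": elk_port}],
        },
    }
}
print(json.dumps(sec_log_profile, indent=2))
```

This pattern is useful when the declaration is assembled by automation tooling rather than edited by hand.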

ELK Configuration

The Elastic Stack configuration consists of creating a new input on Logstash. This is achieved by adding an input/filter/output configuration to the Logstash configuration file. Optionally, the Logstash administrator might want to create a separate pipeline – for more information, refer to this link.

The following is a Logstash configuration known to work with WAF logs coming from BIG-IP:

input {
  syslog {
    port => 5244
  }
}
filter {
  grok {
    match => {
      "message" => [
        # grok patterns for the key="value" WAF log fields go here
      ]
    }
    break_on_match => false
  }
  mutate {
    split => { "attack_type" => "," }
    split => { "sig_ids" => "," }
    split => { "sig_names" => "," }
    split => { "sig_cves" => "," }
    split => { "staged_sig_ids" => "," }
    split => { "staged_sig_names" => "," }
    split => { "staged_sig_cves" => "," }
    split => { "sig_set_names" => "," }
    split => { "threat_campaign_names" => "," }
    split => { "staged_threat_campaign_names" => "," }
    split => { "violations" => "," }
    split => { "sub_violations" => "," }
  }
  if [x_forwarded_for_header_value] != "N/A" {
    mutate { add_field => { "source_host" => "%{x_forwarded_for_header_value}" } }
  } else {
    mutate { add_field => { "source_host" => "%{ip_client}" } }
  }
  geoip {
    source => "source_host"
  }
}
output {
  elasticsearch {
    hosts => ['localhost:9200']
    index => "big_ip-waf-logs-%{+YYYY.MM.dd}"
  }
}

After adding the configuration above to the Logstash parameters, you will need to restart the Logstash instance for the new configuration to take effect. The sample above is also available here.
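To make the intent of the filter section concrete, the following Python sketch emulates what the mutate/conditional logic does to a single parsed event: comma-joined fields become lists, and source_host falls back to ip_client when no X-Forwarded-For value was logged. Field names follow the WAF log sample shown earlier; the addresses are documentation-only values.

```python
# Fields that the Logstash mutate/split filters turn into lists
LIST_FIELDS = ["attack_type", "violations", "sub_violations", "sig_ids", "sig_names"]

def enrich(event: dict) -> dict:
    # Split comma-joined fields into lists
    for field in LIST_FIELDS:
        if field in event:
            event[field] = event[field].split(",")
    # Prefer the X-Forwarded-For value; fall back to ip_client otherwise
    xff = event.get("x_forwarded_for_header_value", "N/A")
    event["source_host"] = xff if xff != "N/A" else event.get("ip_client", "")
    return event

e = enrich({
    "attack_type": "Non-browser Client,HTTP Parser Attack",
    "x_forwarded_for_header_value": "N/A",
    "ip_client": "203.0.113.7",   # documentation-only address
})
print(e["source_host"])  # 203.0.113.7
print(e["attack_type"])  # ['Non-browser Client', 'HTTP Parser Attack']
```

The geoip enrichment then resolves source_host to a location, which is what makes map visualizations possible in Kibana.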

The Elastic Stack is now ready to process the incoming logs. You can start sending traffic to your policy and start seeing logs populating the Elastic Stack.

If you are looking for a test tool to generate traffic to your Virtual Server, F5 provides a simple WAF tester tool that can be found here.

At this point, you can start creating dashboards on the Elastic Stack that will satisfy your operational needs with the following overall steps:

·     Ensure that the log index is being created (Stack Management >> Index Management)

·     Create a Kibana index pattern (Stack Management >> Index Patterns)

·     Peruse the logs from the Kibana Discover menu (Discover)

·     Start creating the visualizations that will be included in your dashboards (Dashboards >> Editing Simple WAF Dashboard)

A complete Elastic Stack configuration can be found here – note that this can be used with both BIG-IP WAF and NGINX App Protect.


You can now leverage the widely available Elastic Stack to log and visualize BIG-IP WAF logs. From a dashboard perspective, it may be useful to track the following metrics:

- Request rate

- Response codes

- Distribution of requests by status (clean, blocked, or alerted)

- Top talkers making requests

- Top URLs being accessed

- Top violator source IPs
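As a sketch of how such metrics derive from the parsed log fields, the following Python example computes a few of them with `collections.Counter`. The events are made up for illustration (field names follow the WAF log sample shown earlier; 203.0.113.0/24 and 198.51.100.0/24 are documentation-only address ranges):

```python
from collections import Counter

# Made-up events for illustration
events = [
    {"ip_client": "203.0.113.7", "uri": "/login", "request_status": "blocked"},
    {"ip_client": "203.0.113.7", "uri": "/random", "request_status": "alerted"},
    {"ip_client": "198.51.100.2", "uri": "/login", "request_status": "passed"},
]

# Distribution of requests by status, top talkers, and top URLs
status_distribution = Counter(e["request_status"] for e in events)
top_talkers = Counter(e["ip_client"] for e in events).most_common(3)
top_urls = Counter(e["uri"] for e in events).most_common(3)

print(status_distribution["blocked"])  # 1
print(top_talkers[0])                  # ('203.0.113.7', 2)
print(top_urls[0])                     # ('/login', 2)
```

In practice you would let Kibana compute these aggregations over the Elasticsearch index, but the logic of each visualization is the same.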

An example dashboard might look like the following:

Published Sep 21, 2020
Version 1.0



  • It's working ok from F5 ASM to ELK 7.9.0. This guide is very useful. Thanks!


  • deferring to the author  - do you know if LTM implementation docs exist per  's request?

  • For LTM - the best bet is to use F5's Telemetry Streaming (TS); Elasticsearch can then ingest the formatted JSON and you can get things going for your dashboard.

  • Is there anything for Custom APM logging to ELK?

    We integrated APM with ELK & its working, can see User-ID, Session-ID, App being accessed by User etc..
    Now having requirement to send customized logs to ELK that 'should include/append "User-id" along with endpoint/posture check result be it Successful/Fail.

    My VPE is: PrivacyAcceptancePage > SAML Auth > EndpointChecks > Logon > SSO-Credential-Mapping (NTLM-SSO) > Adv.Resource.Assign

    Can extract user-id from SAML.
    Below Log is after SAML-Auth & Before Endpoint check:
    <141>1 2022-01-27T17:30:14.149174+05:30 apmd 14296 01490265:5: [F5@12276 hostname="" errdefs_msgno="01490265:5:" partition_name="Common" session_id="06df096f" Access_Profile="/Common/My_Access_Policy_NTLM_SSO" Partition="Common" Session_Id="06df096f" SPname="/Common/Access_Policy_v1__sp" IdpName="/Common/Access_Policy_v1_ProductionPilot" SubjType="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress" SubjVal=""] /Common/My_Access_Policy_NTLM_SSO:Common:06df096f: BIG-IP as SP (/Common/Access_Policy_v1__sp) have received SAML Assertion from IdP (/Common/Access_Policy_v1_ProductionPilot) for subject type (urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress) value (

    Endpoint checks status be Pass/fail, it should append Username/User-id in logs, so that ELK can use that info & publish on ELK Custom Dashboard along with User-ID:
    Below log is about Endpoint Check status currently without User-id info in it:
    <142>1 2022-01-13T15:01:36.195716+05:30 apmd 14265 01490006:6: [F5@12276 hostname="" errdefs_msgno="01490006:6:" partition_name="Common" session_id="8242cf14" Access_Profile="/Common/My_Access_Policy_NTLM_SSO" Partition="Common" Session_Id="8242cf14" Rule_Caption="Successful" Current_Node="Firewall" Next_Node="Antivirus - Windows"] /Common/My_Access_Policy_NTLM_SSO:Common:8242cf14: Following rule 'Successful' from item 'Firewall' to item 'Antivirus - Windows'

  • Marvin:

    Is it possible to use telemetry streaming and send web request logs in JSON format to Elasticsearch from the virtual servers with an http profile? And is there any integration of ASM with telemetry streaming (our preference)?