Forum Discussion

Nikoolayy1
Mar 31, 2022

F5 iRule table command: rate limit or block HTTP requests in two different ways



I have seen two ways to use the table command to limit HTTP requests. The first is to create a single table entry whose key is the client IP address and whose value is incremented each time the client connects to the VIP. This approach blocks the source IP address for some time, but if we want rate limiting, the second approach is better.
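A minimal sketch of this first method (the threshold, hold time, and variable names here are my own assumptions, not taken from a specific DevCentral example):

```tcl
# Method 1 sketch: one table entry per client IP, incremented on every
# request. Thresholds and names below are illustrative assumptions.
when HTTP_REQUEST {
    set maxReqs  20   ;# requests allowed before the IP is quarantined
    set holdTime 60   ;# seconds the entry (and the block) persists
    set key "count:[IP::client_addr]"

    # "table incr" creates the entry if it does not exist and returns
    # the new value. Refreshing the timeout on every hit means a busy
    # client stays blocked until it is quiet for $holdTime seconds.
    set reqCount [table incr $key]
    table timeout $key $holdTime

    if { $reqCount > $maxReqs } {
        HTTP::respond 429 content "Rate limit exceeded"
    }
}
```

Because there is only one entry (and one timeout) per IP, once the counter trips, the whole IP is effectively quarantined rather than throttled.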


The other way is to create a subtable for each connecting client IP address and count the number of keys in that subtable. This is more RAM-intensive, but each entry has its own timeout. For example, if the client has connected 20 times and the limit is 21, after some time some of the entries will have expired but not all; when the client tries again there may be 15 entries left, so if the client then connects 6 times in quick succession the source IP will be blocked. This behaves more like true rate limiting against DDoS, whereas the first method completely blocks the client IP address for a configured time (quarantines the source IP).
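A rough sketch of this second, sliding-window style (the window size, limit, and the way I generate a unique per-request key are my own assumptions):

```tcl
# Method 2 sketch: a subtable per client IP, one key per request.
# Each entry expires independently, giving sliding-window behaviour.
# Limits and the key-generation scheme are illustrative assumptions.
when HTTP_REQUEST {
    set maxReqs    20   ;# requests allowed inside the window
    set windowSecs 10   ;# lifetime of each individual entry
    set tbl "rate:[IP::client_addr]"

    # Count the still-live entries for this client.
    if { [table keys -subtable $tbl -count] >= $maxReqs } {
        HTTP::respond 429 content "Too Many Requests"
        return
    }
    # Add one entry per request with its own timeout; the key just has
    # to be unique, here TMM unit plus a clock value as an assumption.
    table set -subtable $tbl "[TMM::cmp_unit][clock clicks]" 1 $windowSecs
}
```

The per-entry timeouts are what make this rate limiting rather than quarantine: old requests age out of the count on their own.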


From what I have seen, we can only count all the keys in a subtable; we cannot filter the count on a repeating key. That is why you need a separate subtable per client IP rather than a single shared subtable.

3 Replies

  • There's no question above to answer but I can share a suggestion.

    In the past I've done this with global tables but prefixing the key with the virtual server name ("[virtual name][IP::client_addr]") and when someone was blocked we sent a JSON document to Splunk.

    Reason being:

    • We did not really care about the state/content of the table until an action was taken.
    • By sending the blocked IPs to Splunk it made the troubleshooting easier.
    • Prefixing the table key with the VIP name prevented key overlapping in case one IP accessed multiple VIPs.
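A sketch of the prefixed-key idea described above (the block threshold, the HSL pool name `splunk_hsl_pool`, and the JSON shape are my own assumptions, not the poster's actual setup):

```tcl
# Sketch: table key is "[virtual name][IP::client_addr]", so the same
# IP hitting two virtual servers gets two independent counters.
# Threshold, pool name, and JSON fields are illustrative assumptions.
when HTTP_REQUEST {
    set key "[virtual name][IP::client_addr]"
    if { [table incr $key] > 100 } {
        # On block, ship a JSON event to Splunk via high-speed logging.
        set hsl [HSL::open -proto UDP -pool splunk_hsl_pool]
        HSL::send $hsl "{\"event\":\"blocked\",\"ip\":\"[IP::client_addr]\",\"virtual\":\"[virtual name]\"}"
        HTTP::respond 429 content "Blocked"
    }
}
```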

    I'd also like to add that I'd never attempt DDoS protection on-premises, as I believe it's a lost cause. Better to leave that to the big guys in the cloud such as Silverline, Cloudflare or Akamai. Throttling individual clients to prevent abuse, however, is alright in my book. 🙂

    • Nikoolayy1 (MVP):

      If we are talking about Layer 3/4 DDoS, I agree 100% that it's better to block this at a scrubbing center before your data center. For Layer 7 web DDoS, however, many still do this not only at the CDN provider but on-prem with a WAF such as F5 Advanced WAF (ASM), because Layer 7 web DDoS in most cases does not involve many packets but instead targets a specific part of the web application.


      It is worth mentioning that the scrubbing center sometimes blocks only the big volumetric DDoS attacks. At many places I have worked, behind the scrubbing center there is F5 AFM on-prem, which uses machine learning to build a dynamic DDoS baseline threshold to catch the Layer 3/4 attacks the scrubbing center missed, or to protect until the scrubbing center activates its DDoS protections. This can be combined with Silverline so that AFM redirects traffic to Silverline only when there is a DDoS attack, so you don't pay for a scrubbing center non-stop.



      Still, the future seems to be the F5 Distributed Cloud (Volterra); maybe F5 will eventually merge Silverline and Volterra, but we will have to see. If you have not reviewed it, I recommend doing so, as you did not mention it as an option and there is a free simulator for it.


  • I think all of the ones mentioned can handle L7 protection too (like Swagger/OpenAPI imports to maintain a ruleset, etc.). I've never used them extensively myself, though...