Forum Discussion

Bijan_141511
Jan 28, 2014

iRule RAM CACHE odd behaviour

Hi, I am trying to cache and serve some of the static content of my .NET app. I have enabled caching on the HTTP profile and applied the following rule to my Virtual Server:-

when HTTP_REQUEST {

set hostname [string tolower [HTTP::host]]
set uriadd [string tolower [HTTP::uri]]

if { [TCP::local_port clientside] == "443" } {

    if { $hostname == "someURL.test.co.uk" } {
        if { !([class match [IP::remote_addr] equals CUST_IP_Addresses])
          and !([class match [IP::remote_addr] equals MIC_IP_Addresses])
          and !([class match [IP::remote_addr] equals private_net]) } {
            drop  
        } else {
            CACHE::disable
            pool CUST-someURL.test.co.uk_443_POOL
        }

    } else {
        if { ( $uriadd ends_with ".js" ) ||
          ( $uriadd ends_with ".pdf" ) ||
          ( $uriadd ends_with ".css" ) ||
          ( $uriadd ends_with ".jpg" ) ||
          ( $uriadd ends_with ".png" ) ||
          ( $uriadd ends_with ".gif" ) } {
            # enable the cache
            CACHE::enable
        } else {
            # disable the cache
            CACHE::disable
        }

        # mobile site
        if { $hostname == "someAURL.test.co.uk" } {
            pool CUST-someAURL.test.co.uk_443_POOL
        } elseif  { ($hostname ends_with "test.co.uk") } {
            pool CUST-www.test.co.uk_443_POOL
        } elseif  { ($hostname ends_with "test1.co.uk") } {
            pool CUST-test1.co.uk_443_POOL
        } else {
            pool CUST-test2.co.uk_443_POOL
        }

    }


# -----------------------------------------------------
# Handle HTTP requests
} else {

    if { $hostname == "someURL.test.co.uk" } {
        if { !([class match [IP::remote_addr] equals CUST_IP_Addresses])
          and !([class match [IP::remote_addr] equals MIC_IP_Addresses])
          and !([class match [IP::remote_addr] equals private_net]) } {
            drop  
        } else {
            CACHE::disable
            pool CUST-someURL.test.co.uk_80_POOL
        }

    } else {
        if { ( $uriadd ends_with ".js" ) ||
          ( $uriadd ends_with ".pdf" ) ||
          ( $uriadd ends_with ".css" ) ||
          ( $uriadd ends_with ".jpg" ) ||
          ( $uriadd ends_with ".png" ) ||
          ( $uriadd ends_with ".gif" ) } {
            # enable the cache
            CACHE::enable

        } else {
            # disable the cache
            CACHE::disable
        }

        # mobile site
        if { $hostname == "someAURL.test.co.uk" } {
            pool CUST-someAURL.test.co.uk_80_POOL
        } elseif  { ($hostname ends_with "test.co.uk")  } {
            pool CUST-www.test.co.uk_80_POOL
        } elseif  { ($hostname ends_with "test1.co.uk") } {
            pool CUST-test1.co.uk_80_POOL
        } else {
            pool CUST-test2.co.uk_80_POOL
        }

    }
}


# -----------------------------------------------------
# Tidy up variables
unset hostname uriadd 

}

So I expect all assets that end in .js, .pdf, .css, .jpg, .png, or .gif to be cached and then served from the cache. However, when I turn on logging of cache hits to the local log, it lists the assets but reports 0 cache hits for the identical assets over and over again:

: 0 cache hits for document at www.bla.co.uk/Assets/bla/Images-css/btn-close.png
: 0 cache hits for document at www.bla.co.uk/Assets/bla/products/H/O/N/HONIPUPB_AV2_st.jpg
: 0 cache hits for document at www.bla.co.uk/Assets/bla/Images-css/right-tab-selected.png
: 0 cache hits for document at www.bla.co.uk/Assets/bla/Images-css/left-tab-selected.png
: 0 cache hits for document at www.bla.co.uk/Assets/bla/Images-css/bg-resultsmenu.png
: 0 cache hits for document at www.bla.co.uk/Assets/bla/Images-css/bg-add-to-basket.png
: 0 cache hits for document at www.bla.co.uk/Assets/bla/Images/btn-go.png

We have pretty much exactly the same rule running for the site serving the CMS, and that one clearly shows cache hits of more than 0. If anyone can explain what is happening, that would be great. We are on 10.2.3 and don't have WebAccelerator. Many thanks

  • Hi Bijan,

    What does your HTTP profile look like? How do you view cache hits?

    Have you tried adding the following? It will not fix your problem, but it may help shed some light on why it's not working as expected;-
    when CACHE_REQUEST {
        log local0. "Request for cached object $uriadd"
    }
    when CACHE_RESPONSE {
        log local0. "About to send cached response for $uriadd"
    }
    
  • I've just added the CACHE_RESPONSE logging and there are no hits returned in the log.

     

    Worth mentioning perhaps that this site has multiple white labels that run from the same IIS site and thus the same assets folder, so you could hit the same assets folder from three separate [HTTP::host] values but be sent to the same place. Not sure what difference that will make?

     

  • See if this makes a difference: explicitly enabling/disabling CACHE from HTTP_RESPONSE. Apologies in advance, I have taken some liberties with the format of your original iRule (although I hope I have retained the logic);-

    when HTTP_REQUEST {
        set fCache 0
        switch -glob [string tolower [HTTP::host]] {
            "someURL.test.co.uk"  {
                if { !([class match [IP::remote_addr] equals CUST_IP_Addresses])
                and !([class match [IP::remote_addr] equals MIC_IP_Addresses])
                and !([class match [IP::remote_addr] equals private_net]) } {
                    drop 
                    return
                } else {
                    CACHE::disable
                    pool "CUST-someURL.test.co.uk_[TCP::local_port]_POOL"
                    return
                }
            }
            "someAURL.test.co.uk" {
                pool "CUST-someAURL.test.co.uk_[TCP::local_port]_POOL"
            }
            "*test.co.uk" {
                pool "CUST-www.test.co.uk_[TCP::local_port]_POOL"
            }
            "*test1.co.uk" {
                pool "CUST-test1.co.uk_[TCP::local_port]_POOL"
            } 
            default {
                pool "CUST-test2.co.uk_[TCP::local_port]_POOL"
            }
        }
    
        switch [getfield [string tolower [URI::basename [HTTP::path]]] "." 2] {
            "js" -
            "pdf" -
            "css" -
            "jpg" -
            "png" {
                 enable the cache
                CACHE::enable
                set fCache 1
            }       
            default {
                # disable the cache
                CACHE::disable
            }
        }
    }
    when HTTP_RESPONSE {
        if {$fCache} {
            CACHE::enable
        } else {
            CACHE::disable
        }
    }
    
    • Bijan_141511
      No difference :( I suspect the RAM CACHE is at fault in some way; either it has run out of space or it has some limitation that I am unaware of. I have added more sites to the RAM CACHE and they simply aren't doing anything now. When I log CACHE_REQUEST or CACHE_RESPONSE, not a sausage.
  • Thanks for that, I will attempt this on a staging site to see if it works. Worth mentioning though that the original code is in use and working for other sites; the logic is slightly different, but the cache enable and disable rules are the same. It is just this one customer site that refuses to utilise the cache properly. Really appreciate your time. I will let you know how it goes. Thanks

     

  • Quick thought... you're using HTTP::uri, not HTTP::path. The URI could have other data (query strings, etc...) at the end, which could be interfering with the ends_with comparison. We usually do something more like the following (where 'static_content' is a data group of extensions we want cached):

    set VsRAMCacheState [PROFILE::http ramcache]
    set content_extension [string range [HTTP::path] [string last . [HTTP::path]] end]
    if {  $VsRAMCacheState == 1 } {
        CACHE::disable
        if { [class match $content_extension equals static_content] } {
            CACHE::enable
        }
    }
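
    To illustrate the point, here is a minimal sketch (the request URI is hypothetical, not taken from the config above) showing how a query string defeats ends_with on HTTP::uri but not on HTTP::path:

    when HTTP_REQUEST {
        # hypothetical request: GET /Assets/bla/Images/btn-go.png?v=3
        set uri  [string tolower [HTTP::uri]]    ;# "/assets/bla/images/btn-go.png?v=3"
        set path [string tolower [HTTP::path]]   ;# "/assets/bla/images/btn-go.png"
        if { $uri ends_with ".png" } {
            log local0. "uri matched .png"       ;# never logged while a query string is present
        }
        if { $path ends_with ".png" } {
            log local0. "path matched .png"      ;# logged regardless of any query string
        }
    }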
    
  • Bijan,

     

    Could it be that the HTTP profile applied for the VIP for which caching IS working is different?

     

    Have you checked the profile?

     

    tmsh ltm show profile http

     

    Then look for "Cache Size" to tell if it has been allocated as you requested.

     

  • Sorry;

    Have you checked the profile?

    tmsh 
    ltm 
    show profile http 
    

    Then look for "Cache Size" to tell if it has been allocated as you requested.

    • Bijan_141511
      I will see if I can get command-line access and try this out. In terms of the GUI, the profiles that are assigned have the same RAM CACHE details applied.
    • Bijan_141511
      You are right, it hasn't been assigned!

      RAM Cache
        Cache Size (in Bytes)         0
        Total Cached Items            0
        Total Evicted Items           0
        Inter-Stripe Size (in Bytes)  0
        Inter-Stripe Cached Items     0
        Inter-Stripe Evicted Items    0
      RAM Cache Hits/Misses           Count   Bytes
        Hits                          11      5.4K
        Misses (Cacheable)            48      11.1K
        Misses (Total)                49      56.9G
        Inter-Stripe Hits             3       1.8K
        Inter-Stripe Misses           1.8M    -
        Remote Hits                   3       1.8K
        Remote Misses                 1.8M    -
    • Bijan_141511
      This is one that is working:

      RAM Cache
        Cache Size (in Bytes)         17.9M
        Total Cached Items            1.9K
        Total Evicted Items           12.5M
        Inter-Stripe Size (in Bytes)  3.0M
        Inter-Stripe Cached Items     393
        Inter-Stripe Evicted Items    14.3M
      RAM Cache Hits/Misses           Count    Bytes
        Hits                          142.5M   968.4G
        Misses (Cacheable)            13.8M    387.7G
        Misses (Total)                15.3M    1.8T
        Inter-Stripe Hits             92.0M    540.7G
        Inter-Stripe Misses           85.2M    -
        Remote Hits                   14.9M    186.0G
        Remote Misses                 67.1M    -
  • I think I have cracked it. We are out of available RAM in the units, so it cannot assign any more RAM CACHE. The errors I would expect based on the F5 documentation are not appearing in the logs, but this is my best guess. I'm waiting for our support contracts to be sorted out so that I can get an F5 engineer to confirm my suspicions, and then request more RAM and a software update.

     

  • Go and look at all the profiles you have assigned and add up the "Maximum Cache Size" values from the config. That's the figure you are actually using per TMM (there is a rough command sketch at the end of this post).

     

    I assume you have seen this http://support.f5.com/kb/en-us/solutions/public/12000/200/sol12225.html?sr=11064657

     

    Check your log for the error message. If you can't find it because you have lost the logs, try updating the profile that didn't allocate (just change one thing) and save; you should see the error message then.

     

    Go and look at what you are actually using compared to your Maximum Cache Size; you'll probably find a few profiles on which you can reduce it.
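
    As a rough way to tally those up from the shell (exact field names vary between versions, so treat this as a sketch rather than verified 10.2.3 syntax):

    # list every HTTP profile with all of its settings, then pull out the cache-size lines
    tmsh list ltm profile http all-properties | grep -iE "profile http|ramcache"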

     

    • Bijan_141511
      So yes, as per the previous post, you can see that the utilised RAM is not yet at the 50% value. The red line isn't halfway to the blue line yet :) so in fact it isn't that we are out of RAM. I have raised a ticket with our suppliers to see if they can figure this one out. The free RAM stats from the machine show the overall used RAM, so I was really clutching at straws there.
    • Bijan_141511
      Just to be sure, I amended an HTTP profile that wasn't working and looked for the errors in the log. No such luck sadly :( Thanks for all your advice iHeartF5
  • So after lots of different advice, it turns out that the reporting on the F5 is at fault. Even though the RAM CACHE reports a cache size of 0 and 0 cached items, it has actually cached the content. This behaviour was checked by F5 and WestCon support, and Wireshark captures showed that the data is in fact in the cache and being served from the RAM CACHE. This could be a quirk unique to 10.2.3; F5 and WestCon are yet to prescribe a fix or resolution to this problem. In the meantime we have scheduled downtime for an upgrade to the latest version 11 release. Many thanks to all who helped.