Forum Discussion

Michael_Lang_61
Nimbostratus
Aug 19, 2013

Idle Timeout and Keepalive Interval

Maybe someone can shed some light on our confusion regarding the Idle Timeout and Keepalive settings.

 

The default "tcp" profile specifies these values: Idle Timeout: 300 seconds; Keep Alive Interval: 1800 seconds.

 

Reading the help for those options, the Keep Alive Interval description says: "how frequently the system sends data over an idle TCP connection".

 

So assume a point in time at which the client connects and sends data. After 2 minutes the communication is finished and the connection goes idle; from that moment the 300-second idle timeout starts counting.

 

The keepalive would only kick in after 1800 seconds, meaning in reality it will never be used at all, since the idle timeout would tear the connection down at 300 seconds first? Or is this "seconds" a typo that should read "milliseconds"?
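
(For reference, the shipped defaults can be confirmed from the BIG-IP command line; a minimal sketch, assuming an 11.x-style tmsh shell:)

    # Show just the two timers on the stock "tcp" profile
    tmsh list ltm profile tcp tcp idle-timeout keep-alive-interval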

 

Are we missing something here? Thanks for any hints...

 

Kind regards, Michael Lang

 

  • Richard__Harlan
    Historic F5 Account

    The Keep-Alive Interval is in seconds. The main reason the keepalive is a profile setting is that the LTM, as a full proxy, does not pass client keepalives through to the server; you have to create a custom TCP profile for this. Also, you do not want the LTM to send keepalives on most connections: since connections eat memory, it is best for the LTM to create and remove connections when it can. Keepalives are best for long-term connections, such as to database servers, where the application expects the connection to always be open and will behave unexpectedly if it is not.
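
    A minimal sketch of such a custom profile (hypothetical name, tmsh syntax assumed; note the keepalive interval must be shorter than the idle timeout for the probes to ever fire):

        # Hypothetical custom TCP profile: probe idle flows every 30s,
        # and only reap them after 600s of silence
        tmsh create ltm profile tcp tcp-keepalive-custom \
            defaults-from tcp \
            keep-alive-interval 30 \
            idle-timeout 600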

     

  • Richard,

     

    So this keepalive interval applies between the backend server and the LTM? The protocol used between client and server is LDAP, so there is no application-level proxying at all, and I would assume the keepalive is used (depending on the profile) either between the LTM and the client or between the LTM and the backend server.

     

    Kind regards, Michael Lang

     

  • Here are some results I found when testing keepalives on TCP and FastL4 virtual servers on 9.4.7 and 10.2.1. I would guess the 10.2.1 results would be similar to what you'd see in 11.x, but haven't tested to confirm.

    The customer was considering using keepalives on TMM to alleviate issues with apps that cannot generate keepalives themselves and are slow to generate and start sending the response content.

    Based on the results, if the app needs keepalives it makes sense to enable them on both the clientside and serverside profiles. This ensures that if the client intentionally or unintentionally drops the connection without closing it, TMM will detect this after 3x the TCP keepalive interval and reset the connection.

    You'd want to extend the idle timeout so it's (much) longer than the keepalive interval.
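
    For illustration, a sketch of how a keepalive-enabled profile might be applied to each side of a virtual server (names, address, and exact tmsh syntax are assumptions, not taken from the tests):

        # Hypothetical client- and server-side TCP profiles on one virtual server
        tmsh create ltm virtual vs_example \
            destination 10.0.0.10:389 \
            profiles add { tcp-ka-client { context clientside } \
                           tcp-ka-server { context serverside } }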

    See below for the test results.

    Aaron

    Keepalive test settings: 30-second keepalive interval, 120-second idle timeout

    BIG-IP 10.2.1

    VS type | clientside | serverside | result of client disconnecting | description
    fastL4 | enabled | n/a | serverside connection is reset after 3x keepalive interval passes | TMM sends keepalive ACKs alternating from client then to server with one keepalive sent per interval
    fastL4 with loose open/close enabled | enabled | n/a | connection is preserved as TMM does not send keepalives with loose init/close enabled |
    TCP | not enabled | enabled | connection is preserved as server responds to keepalive probes | ACKs from server are not proxied through to client so TMM never sees the clientside connection as down.
    TCP | enabled | enabled | serverside connection is reset after 3x keepalive intervals pass |

    Keepalive test settings: 30-second keepalive interval, 120-second idle timeout

    BIG-IP 9.4.7

    VS type | clientside | serverside | result of client disconnecting | description
    fastL4 | n/a | n/a | keepalives not supported on 9.4.x FastL4 profile |
    fastL4 with loose open/close enabled | n/a | n/a | keepalives not supported on 9.4.x FastL4 profile |
    TCP | not enabled | enabled | connection is preserved as server responds to keepalive probes | ACKs from server are not proxied through to client so TMM never sees the clientside connection as down.
    TCP | enabled | enabled | serverside connection is reset after 3x keepalive intervals pass |

  • Aaron,

     

    Thanks. I think the last sentence of your first post confirms that we are seeing things the right way, and that the default values are just "untested" by F5. So, considering that, connections inheriting the default TCP profile will not send any keepalives, but will be disconnected after idling for 5 minutes.

     

    Kind regards, Michael Lang

     

  • Yes, the default settings need to be customized if you want to use TCP keepalives.

     

    "untested"? No... I'm not speaking officially for F5. I am referring to unofficial testing I did for a customer. I don't know what testing F5 has done for this feature.

     

    Aaron

     

    Hey Aaron, thanks for your test results; I actually need these :) Out of curiosity, what did you use to monitor both sides of the connection, client and server?

     

    Thanks, Austin
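
    (Not necessarily what was used in these tests, but one common approach is to capture on the BIG-IP itself, where both legs of the proxied connection are visible; a sketch with a hypothetical client address, using the tcpdump build that ships on BIG-IP:)

        # 0.0 is the BIG-IP pseudo-interface spanning all VLANs, so a single
        # capture sees both the clientside and serverside keepalive probes
        tcpdump -ni 0.0 -s0 host 192.0.2.10 and port 389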