Converting a BIG-IP Maintenance Page iRule to Distributed Cloud using App Stack

If you are familiar with BIG-IP, you are probably also familiar with its flexible and robust iRules functionality. In fact, I would argue that iRules make BIG-IP the Swiss Army knife that it is: if there is ever a need for advanced traffic manipulation, you can usually come up with an iRule to solve the problem.

 

F5 Distributed Cloud (XC) has its own suite of tools to help in this regard. If you need to do some sort of traffic manipulation or routing, you can usually handle that with Service Policies or simply with Routes. Even with these features, however, there are going to be some cases where iRule functionality from the BIG-IP cannot be reproduced directly in XC. When this happens, we switch to using App Stack, which is XC's version of a Swiss Army knife.

 

In this article, I wanted to walk through an example of how you can leverage XC's App Stack for a specific iRule conversion use case: Displaying a Custom Maintenance Page when all pool members are down.

 

For reference, here is the iRule:

when LB_FAILED {
    if { [active_members [LB::server pool]] == 0 } {
        if { [string tolower [HTTP::host]] contains "example.com" } {
            if { [HTTP::uri] ends_with "SystemMaintenance.jpg" } {
                # Serve the image referenced by the maintenance page from an iFile
                HTTP::respond 200 content [ifile get "SystemMaintenance.jpg"] "Content-Type" "image/jpeg"
            } else {
                # Note: double quotes inside the Tcl string must be escaped
                HTTP::respond 200 content "<!DOCTYPE html>
                <html lang=\"en\">
                <head>
                    <title>System Maintenance</title>
                    <style type=\"text/css\">
                    .base {
                        font-family: 'Tahoma';
                        font-size: large;
                    }
                    </style>
                </head>
                <body>
                    <br>
                    <center><img alt=\"sad\" height=\"200\" src=\"SystemMaintenance.jpg\" width=\"200\" /></center><br>
                    <center><span class=\"base\">This application is currently under system maintenance.</span></center>
                    <br>
                    <center><span class=\"base\">All services will be back online in a few minutes.</span></center>
                </body>
                </html>"
            }
        }
    }
}

 

When dissecting this iRule, you can see we have to solve for the following:

  1. Trigger the maintenance page when all pool members are down
  2. Serve local files (images, css, etc.)
  3. Display the static HTML page

 

So, how do we do this? Well, App Stack allows us to deploy and host a container in Distributed Cloud. We can easily create a simple container (using NGINX, for bonus points!) that holds all of these images, stylesheets, HTML files, etc., and then configure our pools so that traffic falls back to this container when required!

 

Let’s dive into the step-by-step process.

 

Step-by-Step Walkthrough

 

Container Creation

First, we have to create our container. I'm not going to go too deep into how to create a container in this article, but I will highlight the main steps I took.

 

To start, I simply extracted the HTML from the iRule above and saved all the required files (images, stylesheets, etc.) in one directory.
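For reference, the resulting directory might look something like the hypothetical layout below (the nginx.conf and Dockerfile are covered in the next steps):

maintenance-page/
├── Dockerfile
├── nginx.conf
├── index.html              # the HTML extracted from the iRule above
└── SystemMaintenance.jpg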

 

Since I am adding NGINX to the container, I must also create and include an nginx.conf file in this directory. Below is my configuration:

worker_processes  1;
error_log  /var/log/nginx/error.log warn;
pid        /tmp/nginx.pid;

events {
    worker_connections  1024;
}

http {
    client_body_temp_path /tmp/client_temp;
    proxy_temp_path       /tmp/proxy_temp_path;
    fastcgi_temp_path     /tmp/fastcgi_temp;
    uwsgi_temp_path       /tmp/uwsgi_temp;
    scgi_temp_path        /tmp/scgi_temp;
    include       /etc/nginx/mime.types;

    sendfile           on;
    keepalive_timeout  65;

    server {
        listen 8080;

        location / {
            root   /usr/share/nginx/html/;
            index  index.html;
        }

        location ~* \.(js|jpg|png|css)$ {
            root /usr/share/nginx/html/;
        }
    }
}

 

There really isn’t much to the NGINX configuration for this example, but keep in mind that you can expand on it and make it much more robust for other use cases. (One note about the configuration above: you will see several /tmp paths. These are required since our container will run as a non-root user. For more information, see the NGINX Docker documentation here: https://hub.docker.com/_/nginx)

 

Finally, I included a Dockerfile with my requirements for NGINX, exposing port 8080. Once that was all set, I built my container and pushed it to Docker Hub as a private repository.
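As a point of reference, here is a minimal sketch of what such a Dockerfile might look like, assuming the file names from the hypothetical layout above; your base image and paths may differ:

# Minimal Dockerfile sketch; file names are assumptions from the layout above
FROM nginx:stable
COPY nginx.conf /etc/nginx/nginx.conf
COPY index.html SystemMaintenance.jpg /usr/share/nginx/html/
# Run as the non-root nginx user, which is why nginx.conf points at /tmp paths
USER nginx
EXPOSE 8080

Building, sanity-checking locally, and pushing might then look like this (the Docker Hub username and repository name are placeholders):

docker build -t <dockerhub-user>/maintenance-page:latest .
docker run --rm -p 8080:8080 <dockerhub-user>/maintenance-page:latest   # browse http://localhost:8080 to verify
docker push <dockerhub-user>/maintenance-page:latest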

 

App Stack Deployment

Now that we have the container created and uploaded to Docker Hub, we are ready to bring it to XC. Start by opening the F5 XC Console and navigating to the Distributed Apps tile. Then go to Applications -> Container Registries and click Add Container Registry.

 

Here we just have to add a name for the Container Registry, our Docker Hub username, “docker.io” for the Server FQDN, and then Blindfold our Docker Hub password (XC's secret-encryption feature).

 

After saving, we are now ready to configure our workload.

To do so, navigate over to Applications -> Virtual K8s. I already had a Virtual Site and Virtual K8s cluster created, but you'll need to create those if you don't already have them.

 

Select your Virtual K8s cluster:

 

After selecting your cluster, navigate to the Workloads tab. Under Workloads, click on Add VK8s Workload.

 

Give your workload a name and then change the Type of Workload to Service instead of Simple Service. Your configuration should look something like below:

 

You'll notice we now have to configure the Service. Click Configure. The first step is to tell XC which container we want to deploy for this service. Under Containers, select Add Item:

 

Give the container a name, and then input your Image Name. The format for the image name is "registry/image:tagname" (for example, a hypothetical "<dockerhub-user>/maintenance-page:latest"). If you leave the tag name blank, it defaults to “latest”.

 

Under the Select Container Registry drop down, select Private Registry. This will bring up another drop-down where we will select the container registry we created earlier. Your configuration should end up looking similar to below:

 

For this simple use case, we can skip the Configuration Parameters and move to our Deploy Options. Here, we have some flexibility on where we want to deploy our workload. You can choose All Regional Edges (F5 PoPs), specific REs, or even custom CEs and Virtual Sites. In my basic example, I chose Regional Edge Sites and picked the ny8-nyc RE for now:

 

Next, we have to configure where we want to advertise this workload. We have the option to keep it internal and only advertise in the vK8s Cluster or we could advertise this workload directly on the Internet. Since we only want this maintenance page to be seen when the pool members are all down, we are going to keep this to Advertise In Cluster.

 

After selecting the advertisement, we have to configure our Port Information. Click Configure.

 

Under the advertisement configuration, you’ll see we are simply choosing our ports. If you toggle “Show Advanced fields” you can see we have some flexibility on the port we want to advertise and the actual target port for the container. In my case, I am going to use 8080 for both, but you may want to have a different combination (e.g., 80:8080). Click Apply once finished.

 

Now that we have the ports defined, we can simply hit Apply on the Service configuration and Save and Exit the workload to kick off the deployment.

 

We should now see our new maintenance-page workload in the list. You’ll notice that after refreshing a couple of times, the Running/Completed Pods and Total Pods fields will be populated based on the number of REs/CEs you chose to deploy the workload to. After a few minutes, the number of Running/Completed Pods should match your Total Pods, which indicates that the workload is ready to be used by our application. (Note: you can click on the pod numbers in this list to see a more detailed status of the pods, which helps when troubleshooting.)
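As an alternative to clicking through the console, vK8s also exposes a standard Kubernetes API. Assuming you have downloaded the cluster's kubeconfig from the console (the file name below is hypothetical), a quick status check with kubectl might look like this:

export KUBECONFIG=~/Downloads/ves_bohanson_bohanson-test.yaml   # hypothetical file name
kubectl get pods -o wide            # one pod per RE/CE the workload was deployed to
kubectl describe pod <pod-name>     # detailed status, useful when troubleshooting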

 

Pool Creation

With our workload live and advertised in the cluster, it is time to create our pool. In the top left of the platform, we’ll need to click Select Service and change to Multi-Cloud App Connect:

 

 

Under Multi-Cloud App Connect, navigate to Manage -> Load Balancers -> Origin Pools and select Add Origin Pool.

 

Here, we’ll give our origin pool a name and then go directly to Origin Servers. Under Origin Servers, click Add Item.

 

Change the Type of the Origin Server to K8s Service Name of Origin Server on given Sites. Under Service Name, we have to use the format "servicename.namespace:cluster-id" to point to our workload. In my case, it was "maintenance-page.bohanson:bohanson-test" since I had the following:

  • Service Name: maintenance-page
  • Namespace: bohanson
  • VK8s Cluster: bohanson-test

 

Under Site or Virtual Site, I chose the Virtual Site I had already created. The last step is to change the network to vK8s Networks on Site and click Apply. The result should look like the following:

 

We now need to change our Origin Server port to the port we defined in the workload advertisement configuration. In my case, that was port 8080. The rest of the origin server configuration is up to you, but I chose to include a simple HTTP health check to monitor the service. Once the configuration is finished, click Save and Exit. The final pool configuration should look like this:

 

Application Deployment

With our maintenance container up and running and our pool all set, it is time to finally deploy our solution. In this case, we can select any existing Load Balancer configuration where we want to add the maintenance page. You could also create a new Load Balancer from scratch, of course, but for this example I am deploying to an existing configuration.

 

Under Manage -> Load Balancers, find the load balancer of your choosing and then select Manage Configuration. Once in the Load Balancer view, select Edit Configuration in the top right.

 

To deploy the solution, we just need to navigate to our Origins section and add our new maintenance pool. Select Add Item.

 

At this point, you may be thinking, “Well that is great, but how am I going to get the pool to only show when all other pool members are down?”

 

That is the beauty of the F5 Distributed Cloud pool configuration. We have two options that we can set when adding a pool: Weight and Priority. Both of those options are pretty self-explanatory if you have used a load balancer before, but what is interesting here is when you give these options a value of zero.

 

Giving a pool a Weight of zero disables the pool. For a maintenance-pool use case, that could be helpful: we could manually go into the Load Balancer configuration during a maintenance window, disable the main pool, and bring up the maintenance pool, then reverse the weights to bring the main pool back online once our change window closes. That ALMOST solves our iRule use case, but it would be manual.

 

Alternatively, we can give a pool a Priority of zero. Doing so means every other pool takes priority and will be used as long as it is healthy. Only when the main pool goes down does traffic fall back to the lowest-priority pool (zero).

 

Now that is more like it! This means we can set our maintenance pool to a Priority of zero, and it will automatically be used when all of our other pool members go down, which completely fulfills the original iRule requirement.

 

So in our configuration, let's add our new maintenance pool and set:

  • Weight: 1
  • Priority: 0

 

After clicking save, the final pool configuration should look something like this:

 

Testing

To test, we can simply switch our health check on the main pool to something that would fail. In my case, I just changed the expected status code on the health check to something arbitrary that I knew would fail, but this could be different in your case.

 

After changing the health check, we can navigate to our application in a browser, and see our maintenance page dynamically appear!
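From the command line, the same check might look like this (the hostname is a placeholder for whatever domain your load balancer serves):

# Hypothetical hostname; substitute your load balancer's domain
curl -s https://app.example.com/ | grep -i "system maintenance"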

 

Changing the health check on the main pool back to a working one should dynamically turn off the maintenance page as well:

 

Summary

This is just one example of how you can use App Stack to convert some more advanced/dynamic iRules over to F5 Distributed Cloud. I only used a basic NGINX configuration in this example, but you can start to see how leveraging NGINX in App Stack can give us even more flexibility.

 

Hopefully this helps!

Published May 06, 2024
Version 1.0
