Securing the LLM User Experience with an AI Firewall
As artificial intelligence (AI) seeps into the core day-to-day operations of enterprises, a need exists to exert control at the intersection of AI-infused applications and the large language models (LLMs) that answer their prompts. This control point should impose security rules that automatically prevent issues such as personally identifiable information (PII) being inadvertently exposed to LLMs. The solution must also counteract motivated, intentional misuse such as jailbreak attempts, where an LLM is manipulated into providing often ridiculous answers, with the ensuing screenshots used to discredit the service. Beyond security, and the overriding concerns of regulated industries, other drivers include basic fiscal prudence: ensuring the token consumption of each offered LLM model does not get out of hand.

This need for observability and policy enforcement around LLM consumption has given rise to a class of solutions most frequently referred to as AI Firewalls or AI Gateways (AI FW/GW). An AI firewall might be invoked from a browser plugin, or by applying a software development kit (SDK) while coding AI applications. Arguably, the most scalable and most easily deployed approach to inserting AI firewall functionality into live LLM traffic is a reverse proxy. A modern approach couples the F5 Distributed Cloud service with an AI FW/GW service, cloud-based or self-hosted, that can inspect traffic intended for LLMs such as those of OpenAI, Azure OpenAI, or privately operated models like those downloaded from Hugging Face.

A key value of this topology, a reverse proxy handing off LLM traffic to an AI firewall, which in turn allows traffic to reach target LLMs, stems from the fact that traffic is seen, and thus controllable, in both directions. Should an issue be present in a user's submitted prompt, also known as an "inference", it can be flagged; PII leakage is a frequent concern at this point. In addition, LLM responses to prompts are also seen on the reverse path: consider a corrupted LLM providing toxicity in its generated replies. Not good.

To achieve a highly performant reverse proxy approach to secured LLM access, a solution that can span a global set of users, F5 worked with Prompt Security to deploy an end-to-end AI security layer. This article will explore the efficacy and performance of the live solution.
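Before diving into the deployment, the following minimal Python sketch illustrates the bidirectional inspection concept described above. It is purely conceptual: the regex patterns and toxicity threshold are hypothetical placeholders, and a real AI firewall like Prompt Security relies on ML models rather than simple pattern matching.

import re

# Hypothetical PII patterns for illustration; real solutions use ML, not just regex
PII_PATTERNS = {
    "mailing_address": re.compile(r"\d+\s+\w+\s+(Street|St|Avenue|Ave|Road|Rd)\b", re.I),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def inspect_prompt(prompt: str):
    """Request direction: flag PII before the prompt ever reaches the LLM."""
    for entity, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            return ("block", entity)
    return ("allow", None)

def inspect_response(text: str, toxicity_score: float) -> str:
    """Response direction: screen what the LLM generated before the user sees it."""
    # toxicity_score would come from a classifier in a real deployment
    return "block" if toxicity_score > 0.8 else "allow"

print(inspect_prompt("Ship it to 123 Example Street, New York"))  # ('block', 'mailing_address')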
Impose LLM Guardrails with the AI Firewall and Distributed Cloud

An AI firewall such as the Prompt Security offering can get in-line with LLM flows through multiple means. API calls from curl or Postman can be modified to transmit to Prompt Security when trying to reach targets such as OpenAI or Azure OpenAI Service, and simple firewall rules can prevent employees from directly accessing these well-known API endpoints, making the Prompt Security route the sanctioned method of engaging with LLMs.

A number of other methods could be considered, but each has concerns. Browser plug-ins have the advantage of working outside the encryption of the TLS layer, much as a browser's developer tools can clearly show the targets and HTTP headers of HTTPS transactions that are encrypted on the wire, and Prompt Security supports plugins. The downside of browser plug-ins, however, is manageability: enforcing and maintaining across-the-board usage is difficult; simply consider the headache of non-corporate assets used in the work environment. Another approach, interesting for non-browser, thick desktop applications, think of an IDE like VS Code, is an agent, whereby outbound traffic is handled by an on-board local proxy. Prompt can fit this model too; however, enforcing the agent, like the browser approach, may not always be easy or aligned with complete A-to-Z security of all endpoints.

One of the simplest approaches is to ingest LLM traffic through a network-centric approach. An F5 Distributed Cloud HTTPS load balancer, for instance, can ingest LLM-bound traffic and thoroughly secure it at the API layer, with WAF policy and DDoS mitigations as examples. HTTP-based control-plane security is the focus here, as opposed to the encapsulated requests a user is sending to an LLM. The HTTPS load balancer can in turn hand off traffic intended for the likes of OpenAI to the AI gateway for prompt-aware inspections.

F5 Distributed Cloud (XC) is a good architectural fit for inserting a third-party AI firewall service in-line with an organization's inferencing requests. Simply project an FQDN for the consumption of AI services into the global DNS; in this article we used the domain name "llmsec.busdevF5.net", advertising a single IP address mapped to the name. This DNS advertisement can be done with XC. The IP address, through BGP-4 support for anycast, will direct any traffic sent to it to the closest of 27 international points of presence on the XC global fabric. Traffic from a user in Asia may be attracted to the Singapore or Mumbai F5 sites, whereas a user in Western Europe might enter the F5 network in Paris or Frankfurt.

As depicted, a distributed HTTPS load balancer can be configured; "distributed" reflects the fact that traffic ingressing at any of the global sites can be intercepted by the load balancer. Normally, the Server Name Indication (SNI) value in the TLS Client Hello is used to pick the correct load balancer to process the traffic. The first step in AI security is traditional reverse-proxy core security, all imposed by the XC load balancer. These features, to name just a few, might include geo-IP service policies to preclude traffic from certain regions, automatic malicious user detection, and API rate limiting; many capabilities are bundled together. Clean traffic can then be selected for forwarding to an origin pool member, the standard operation of any load balancer. In this case, the Prompt Security service is the exclusive member of our origin pool. For this article it is a cloud-instantiated service; options also exist to forward to Prompt implemented on a Kubernetes cluster or running on a Distributed Cloud AppStack Customer Edge (CE) node.

Block Sensitive Data with Prompt Security In-Line

AI inferences, upon reaching Prompt's security service, are subjected to a wide breadth of security inspections.
Some of the more important categories include:

Sensitive data leakage: although potentially contained in LLM responses, intuitively the larger proportion of risk is within the requesting prompt, with users perhaps inadvertently disclosing data that should never reach an LLM.

Source code fragments within submissions to LLMs: various programming languages may be scanned for and blocked, as the code may be enterprise intellectual property.

OWASP LLM Top 10 high-risk violations, such as LLM jailbreaking, where the intent is to make the LLM behave and generate content that is not aligned with the service intentions; the goal may be embarrassing "screenshots", such as having a chatbot for automobile vendor A actually recommend a vehicle from vendor B.

OWASP prompt injection detection, considered one of the most dangerous threats, as the intention is for rogue users to exfiltrate valuable data from sources the LLM may have privileged access to, such as backend databases.

Token-layer attacks, such as unauthorized and excessive use of tokens for LLM tasks, the so-called "Denial of Wallet" threat.

Content moderation, ensuring a safe interaction with LLMs devoid of toxicity and racial or gender discriminatory language, an overall curated AI experience aligned with the productivity gains that LLMs promise.

To demonstrate sensitive data leakage protection, a Prompt Security policy was active which blocked LLM requests containing, among many PII fields, an exposed mailing address. To reach OpenAI GPT-3.5 Turbo, one of the most popular and cost-effective models in the OpenAI model lineup, prompts were sent to an F5 XC HTTPS load balancer at llmsec.busdevf5.net. Traffic not violating the comprehensive F5 WAF security rules was proxied to the Prompt Security SaaS offering. The prompt below clearly includes a mailing address in the data portion, and the ensuing prompt is intercepted by both the F5 and Prompt Security solutions.

The first interception, the distributed HTTPS load balancer offered by F5, provides rich details on the transaction, and since no WAF rules or other security policies were violated, the transaction was forwarded to Prompt Security. As highlighted in the transaction details, the exchange was successful at the HTTP layer, producing a 200 OK outcome. The traffic originated in the municipality of Ashton, in Canada, and was received into Distributed Cloud at F5's Toronto (tr2-tor) RE site. Full details on the targeted URL path, such as the OpenAI /v1/chat/completions target, and the user agent involved, vscode-restclient, are both provided.

Although the HTTP transaction was successful, the actual AI prompt was rejected, as hoped for, by Prompt Security. Drilling into the Activity Monitor in the Prompt UI, one can get a detailed verdict on the transaction. Per the highlights, the prompt was blocked, and the violation is "Sensitive Data"; the specific offending content, the New York City street address, is flagged as a precluded entity type of "mailing address". Other fields that can be blocking candidates with Prompt's solution include various international passport or driver's license formats, credit card numbers, emails, and IP addresses, to name but a few.
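As a concrete illustration, a request of the kind blocked above could be reproduced with a short Python script. The request shape mirrors the curl examples shown later in this article, while the street address and the placeholder API key are invented for demonstration purposes.

import requests

# Hypothetical prompt containing a mailing address; the AI firewall should block it
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{
        "role": "user",
        "content": "Draft a delivery notice for John Doe, 123 Example Street, New York, NY 10001",
    }],
}

resp = requests.post(
    "https://llmsec.busdevf5.net/v1/chat/completions",
    headers={
        "authorization": "Bearer sk-<your-api-key>",  # placeholder, not a real key
        "content-type": "application/json",
        "user-agent": "vscode-restclient",
    },
    json=payload,
    timeout=30,
)
# Expect HTTP 200 at the transport layer, with the firewall's block verdict in the body
print(resp.status_code, resp.text)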
A nice, time-saving feature offered by the Prompt Security user interface is to simply choose an individual security framework of interest, such as GDPR or PCI, and the solution will automatically invoke the related sensitive data types to detect. An important idea to grasp: the solution from Prompt is much more nuanced and advanced than simple regex; it invokes the power of AI itself to secure customer journeys into safe AI usage. Machine learning models, often transformer-based, have been fine-tuned and orchestrated to interpret the overall tone and tenor of prompts, gaining a real semantic understanding of what is being conveyed in order to counteract simple obfuscation attempts. For instance, using spelled-out numbers, such as one, two, three, to circumvent regex rules predicated on numerals being present will not succeed.

This AI-infused ability to interpret context and intent allows for preset industry guidelines for safe LLM enforcement. For instance, simply indicating that the business sector is financial will allow the Prompt Security solution to pass judgement on, and block if desired, financial reports, investment strategy documents, and revenue audits, to name just a few. Similar awareness for sectors such as healthcare or insurance is simply a pull-down menu item away with the policy builder.

Source Code Detection

A common use case for LLM security solutions is identification and, potentially, blocking of submissions of enterprise source code to LLM services. In this scenario, this small snippet of Python (which assumes random.choices has been imported) is delivered to the Prompt service:

def trial():
    return 2_500 <= sorted(choices(range(10_000), k=5))[2] < 7_500

sum(trial() for i in range(10_000)) / 10_000

A policy is in place for Python and JavaScript detection and was invoked as hoped for:

curl --request POST \
  --url https://llmsec.busdevf5.net/v1/chat/completions \
  --header 'authorization: Bearer sk-oZU66yhyN7qhUjEHfmR5T3BlbkFJ5RFOI***********' \
  --header 'content-type: application/json' \
  --header 'user-agent: vscode-restclient' \
  --data '{"model": "gpt-3.5-turbo","messages": [{"role": "user","content": "def trial():\n return 2_500 <= sorted(choices(range(10_000), k=5))[2] < 7_500\n\nsum(trial() for i in range(10_000)) / 10_000"}]}'

Content Moderation for Interactions with LLMs

One common manner of preventing LLM responses from veering into undesirable territory is for the service provider to implement a detailed system prompt, a set of guidelines the LLM should be governed by when responding to user prompts. For instance, the system prompt might instruct the LLM to serve as a polite, helpful, and succinct assistant for customers purchasing shoes in an online e-commerce portal. A request for help involving the trafficking of narcotics should, intuitively, be denied.

Defense in depth has traditionally meant no single point of failure. In the above scenario, screening both the user prompt and the ensuing LLM response for a wide range of topics leads to a more ironclad security outcome. The following demonstrates some of the topics Prompt Security can intelligently seek out; in this simple example, the topic of "News & Politics" has been singled out to block as a demonstration.
Testing can be performed with this easy curl command, asking for a prediction on a possible election result in Canadian politics:

curl --request POST \
  --url https://llmsec.busdevf5.net/v1/chat/completions \
  --header 'authorization: Bearer sk-oZU66yhyN7qhUjEHfmR5T3Blbk*************' \
  --header 'content-type: application/json' \
  --header 'user-agent: vscode-restclient' \
  --data '{"model": "gpt-3.5-turbo","messages": [{"role": "user","content": "Who will win the upcoming Canadian federal election expected in 2025"}],"max_tokens": 250,"temperature": 0.7}'

The response, available in the Prompt Security console, is also presented to the user, in this case a curl user working in the VS Code IDE. The response has been largely truncated for brevity; fields of interest include an HTTP X-header indicating the transaction used the F5 site in Toronto, and counts of the tokens consumed in the request and response.

Advanced LLM Security Features

Many AI security concerns are given prominence by the OWASP Top Ten for LLMs, an evolving and curated list of potential concerns around LLM usage from subject matter experts. Among these are prompt injection attacks and malicious instructions often perceived as benign by the LLM. Prompt Security uses a layered approach to thwart prompt injection. For instance, during the uptick in interest in ChatGPT, DAN (Do Anything Now) prompt injection was widespread and a very disruptive force, as discussed here. User prompts are closely analyzed for the presence of the various DAN templates that have evolved over the past 18 months. More significantly, the use of AI itself allows the Prompt solution to recognize zero-day bespoke prompts attempting to conduct mischief. The interpretative powers of fine-tuned, purpose-built security inspection models are likely the only way to stay one step ahead of bad actors.

Another chief concern is protection of the system prompt, the guidelines that rein in unwanted behavior of the offered LLM service, what instructed our LLM earlier in its role as a shoe-sales assistant. The system prompt, if somehow manipulated, would be a significant breach of AI security; havoc could be created with an LLM directed astray. As such, Prompt Security offers a policy to compare the user-provided prompt, the configured system prompt in the API call, and the response generated by the LLM. In the event that a similarity threshold with the system prompt is exceeded in the other fields, the transaction can be immediately blocked. An interesting advanced safeguard is the support for a "canary" word: a specific value that a well-behaved LLM should never present in any response, ever. Detection of the canary word by the Prompt solution raises an immediate alert.

One particularly broad and powerful feature of the AI firewall is the ability to find secrets, meaning tokens or passwords, frequently for cloud-hosted services, that are revealed within user prompts. Prompt Security offers the ability to scour LLM traffic for in excess of 200 meaningful values. As a small representative sample of the industry's breadth of secrets, these can all be detected and acted upon:

Azure Storage Keys Detector
Artifactory Detector
Databricks API tokens
GitLab credentials
NYTimes Access Tokens
Atlassian API Tokens

Besides simple blocking, a useful redaction option can be chosen: rather than risk compromise of credentials, an obfuscated value will instead be seen by the LLM.
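The canary-word and secrets-redaction ideas can be sketched in a few lines of Python. This is an illustration of the concepts only, with a made-up canary value and a deliberately simplified token pattern; it is not Prompt Security's detection logic.

import re

CANARY = "zx9-canary-7Q"  # hypothetical value planted in the system prompt

# Simplified stand-in for one of the 200+ secret detectors (a generic token shape)
SECRET_PATTERN = re.compile(r"\b(sk|glpat|dapi)-[A-Za-z0-9_\-]{16,}\b")

def check_response(llm_response: str) -> str:
    # A well-behaved LLM must never emit the canary; seeing it implies the
    # system prompt leaked, so raise an immediate alert
    if CANARY in llm_response:
        raise RuntimeError("Canary word detected in LLM response - possible system prompt leak")
    return llm_response

def redact_secrets(prompt: str) -> str:
    # Instead of blocking outright, replace the credential with an obfuscated value
    return SECRET_PATTERN.sub("[REDACTED-SECRET]", prompt)

print(redact_secrets("use token dapi-0123456789abcdef01 for the job"))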
F5 Positive Security Models for AI Endpoints

The AI traffic delivered to and received from Prompt Security's AI firewall is both discovered and subjected to API-layer policies by the F5 load balancer. Consider the token-awareness features of the AI firewall: excessive token consumption can trigger an alert and even transaction blocking. This behavior, a boon when LLMs like the premium OpenAI GPT-4 models may carry substantial costs, allows organizations to automatically shut down a malicious actor who illegitimately got hold of an OPENAI_API_KEY value and bombarded the LLM with prompts. This is often referred to as a "Denial of Wallet" situation.

F5 Distributed Cloud, with its focus on the API layer, has congruent safeguards. Each unique user of an API service is tracked to monitor transactional consumption. By setting API rate-limiting safeguards, an excessive load placed upon the API endpoint will result in HTTP 429 "Too Many Requests" responses to the abusive behavior. A key feature of F5 API security is that it is actionable in both directions and is an in-line offering, unlike some API solutions that reside out of band and consume proxy logs for reporting and threat detection.

With the automatic discovery of API endpoints, as seen in the following screenshot, the F5 administrator can see the full URL path, which in this case exercises the familiar OpenAI /v1/chat/completions endpoint. As highlighted by the arrow, the schema of traffic to API endpoints is fully downloadable as an OpenAPI Specification (OAS), formerly known as a Swagger file. This layer of security means fields in API headers and bodies can be validated for syntax, such that a field whose schema expects a floating-point number will see any different encoding, such as a string, blocked in real time in either direction.

A possible and valuable use case: allow initially unfettered access to a service such as OpenAI, by means of Prompt Security's AI firewall service, for perhaps 48 hours. After a baseline of API endpoints has been observed, the API definition can be loaded from the saved Swagger files at the end of this "observation" period. The loaded version can be fully pruned of undesirable or disallowed endpoints, and all future traffic must conform or be dropped. This is an example of a "positive security model", considered a gold standard by many risk-averse organizations. Simply put, a positive security model allows what has been agreed upon and rejects everything else. This ability to learn and review your own traffic, and then present Prompt Security with only the LLM endpoints an organization wants exposed, is an interesting example of complementing an AI security solution with rich API-layer features.
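The following is a minimal Python sketch of the positive security model idea, assuming a hypothetical two-entry allowlist and a toy schema. Real enforcement happens in the XC load balancer against the full OAS document, not in application code.

ALLOWED = {
    # endpoint -> expected body field types (a toy stand-in for an OAS schema)
    ("POST", "/v1/chat/completions"): {"model": str, "temperature": float},
}

def enforce(method: str, path: str, body: dict) -> bool:
    schema = ALLOWED.get((method, path))
    if schema is None:
        return False  # unknown endpoint: a positive model rejects by default
    for field, expected_type in schema.items():
        if field in body and not isinstance(body[field], expected_type):
            return False  # e.g., a string arriving where a float is expected
    return True

print(enforce("POST", "/v1/chat/completions", {"model": "gpt-3.5-turbo", "temperature": 0.7}))  # True
print(enforce("POST", "/v1/chat/completions", {"temperature": "0.7"}))  # False: string, not float
print(enforce("GET", "/v1/models", {}))  # False: endpoint not in the allowlist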
Summary

The world of AI and LLMs is rapidly seeing investment, in time and money, from virtually all economic sectors; the promise of rapid dividends in the knowledge economy is hard to resist. As with any rapid deployment of new technology, safe consumption is not guaranteed, and it is not built in. Although LLMs often suggest guardrails are baked into offerings, a 30-second search of the Internet will surface firsthand accounts showing that unexpected outcomes when invoking AI are real. Brand reputation is at stake, and false information can be hallucinated or coerced out of LLMs by determined parties.

By combining the ability to ingest traffic from globally dispersed users at high speed and apply a first level of security protections, F5 Distributed Cloud can be leveraged as an onboarding point for LLM workloads. As depicted in this article, Prompt Security can in turn handle traffic egressing F5's distributed HTTPS load balancers and provide state-of-the-art AI safeguards, including sensitive data detection, content moderation, and other OWASP-aligned mechanisms such as jailbreak and prompt injection mitigation. Other deployment models are supported as well, including running Prompt Security's solution on premises, self-hosted in cloud tenants, or on Distributed Cloud CE nodes themselves.

F5 Hybrid Security Architectures: Part 3 F5 XC API Protection and NGINX Ingress
Here in this example solution, we will be using DevSecOps practices to deploy an AWS Elastic Kubernetes Service (EKS) cluster running the Brewz test web application, serviced by F5 NGINX Ingress Controller for Kubernetes. For protection, we will provide API discovery and security with F5 Distributed Cloud's Web App and API Protection service.

Introduction:

For those of you following along with the F5 Hybrid Security Architectures series, welcome back! If this is your first foray into the series and you would like some background, have a look at the intro article. This series uses the F5 Hybrid Security Architectures GitHub repo and CI/CD platform to deploy F5-based hybrid security solutions founded on DevSecOps principles. The repo is a community-supported effort to provide not only a demo and workshop, but also a stepping stone for utilizing these practices in your own F5 deployments. If you find any bugs or have any enhancement requests, open an issue, or better yet, contribute!

API Security:

APIs are an integral part of our daily routine, facilitating everything from critical to mundane tasks. From banking and ride-sharing apps to the weather updates we check before stepping out, APIs enable these functionalities. Given the sensitive nature of the data that can be exposed by unprotected APIs, the need for effective security cannot be stressed enough. With F5 Distributed Cloud Web App and API Protection, security teams can discover, inventory, and secure these critical APIs.

Here in this example solution, we will be using DevSecOps practices to deploy an AWS Elastic Kubernetes Service (EKS) cluster running the Brewz test web application serviced by F5 NGINX Ingress Controller. To secure our application and APIs, we will deploy F5 Distributed Cloud's Web App and API Protection service. This will provide us API discovery and security as well as a traditional web application firewall and malicious user detection.

Distributed Cloud WAAP: Available for SaaS-based deployments, it provides comprehensive security solutions designed to safeguard web applications and APIs from a wide range of cyber threats. The solution utilizes a distributed cloud architecture, which enables it to provide real-time protection and scale to meet the needs of large enterprises.

NGINX Ingress Controller for Kubernetes: A lightweight software solution that helps manage app connectivity at the edge of a Kubernetes cluster by directing requests to the appropriate services and pods. It provides advanced load balancing, routing, identity, and security, as well as monitoring and observability features.

XC WAAP + NGINX Ingress Controller Workflow

GitHub Repo: F5 Hybrid Security Architectures

Prerequisites:
F5 Distributed Cloud Account (F5 XC)
Create an F5 XC API certificate
NGINX Ingress Controller license
AWS Account - due to the assets being created, free tier will not work
Terraform Cloud Account
GitHub Account

Assets:
xc: F5 Distributed Cloud WAAP
nic: NGINX Ingress Controller
infra: AWS Infrastructure (VPC, IGW, etc.)
eks: AWS Elastic Kubernetes Service
brewz: Brewz SPA test web application

Tools:
Cloud Provider: AWS
Infrastructure as Code: Terraform
Infrastructure as Code State: Terraform Cloud
CI/CD: GitHub Actions

Terraform Cloud Workspaces: Create a workspace for each asset in the chosen workflow. Workflow: xc-nic; Workspaces: infra, eks, nic, brewz, xc. Your Terraform Cloud console should resemble the following:

Variable Set: Create a Variable Set with the following values.
IMPORTANT: Ensure sensitive values are appropriately marked.

AWS_ACCESS_KEY_ID: Your AWS Access Key ID - Environment Variable
AWS_SECRET_ACCESS_KEY: Your AWS Secret Access Key - Environment Variable
AWS_SESSION_TOKEN: Your AWS Session Token - Environment Variable
VOLT_API_P12_FILE: Your F5 XC API certificate; set this to api.p12 - Environment Variable
VES_P12_PASSWORD: Set this to the password you supplied when creating your F5 XC API key - Environment Variable
nginx_jwt: Your NGINX JSON Web Token associated with your NGINX license - Terraform Variable
ssh_key: Your SSH key for access to created compute assets - Terraform Variable
tf_cloud_organization: Your Terraform Cloud Organization name - Terraform Variable

Your Variable Set should resemble the following:

GitHub: Fork and clone the repo F5 Hybrid Security Architectures.

GitHub Actions Secrets: Create the following GitHub Actions secrets in your forked repo:
XC_P12: The base64-encoded F5 XC API certificate
TF_API_TOKEN: Your Terraform Cloud API token
TF_CLOUD_ORGANIZATION: Your Terraform Cloud Organization
TF_CLOUD_WORKSPACE_<workspace>: Create one for each workspace used in your workflow, e.g. TF_CLOUD_WORKSPACE_XC would be created with the value xc

Your GitHub Actions secrets should resemble the following:

Setup Deployment Branch and Terraform Local Variables:

Step 1: Check out a branch for the deploy workflow using the following naming convention for the xc-nic deployment branch: deploy-xcapi-nic

Step 2: Rename infra/terraform.tfvars.examples to infra/terraform.tfvars and add the following data:

#Global
project_prefix = "Your project identifier"
resource_owner = "You"

#AWS
aws_region = "Your AWS region"        # ex: us-west-1
azs = "Your AWS availability zones"   # ex: ["us-west-1a", "us-west-1b"]

#Assets
nic = true
nap = false
bigip = false
bigip-cis = false

Step 3: Rename xc/terraform.tfvars.examples to xc/terraform.tfvars and add the following data:

#XC Global
api_url = "https://<your tenant>.console.ves.volterra.io/api"
xc_tenant = "Your XC Tenant Name"
xc_namespace = "Your XC namespace"

#XC LB
app_domain = "Your App Domain"

#XC WAF
xc_waf_blocking = true

#XC AI/ML Settings for MUD, APIP - NOTE: Only set if using AI/ML settings from the shared namespace
xc_app_type = []
xc_multi_lb = false

#XC API Protection and Discovery
xc_api_disc = true
xc_api_pro = true
xc_api_spec = ["Path to uploaded API spec"]   # see below for how to obtain this value

#XC Bot Defense
xc_bot_def = false

#XC DDoS
xc_ddos = false

#XC Malicious User Detection
xc_mud = true

For the path to the API spec, navigate to Manage -> Files -> Swagger Files, click the three dots next to your OAS, and choose "Copy Latest Version's URL". Paste this value into xc_api_spec in xc/terraform.tfvars.

Step 4: Modify line 16 in the .gitignore: comment out the *.tfvars line with # and save the file.

Step 5: Commit your changes.

Deployment:

Step 1: Push your deploy branch to the forked repo.
Step 2: Back in GitHub, navigate to the Actions tab of your forked repo and monitor your build.
Step 3: Once the pipeline completes, verify your assets were deployed to AWS and F5 XC.
Step 4: Check your Terraform outputs for XC and verify your app is available by navigating to the FQDN.

API Discovery and Security Dashboards:

After leaving the Brewz test app deployed for a while, we can start to see the API graph form. The F5 XC WAAP platform learns the schema structure of the API by analyzing sampled request data, then reverse-engineers the schema to generate an OpenAPI spec.
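For illustration, a discovered endpoint might be rendered as an OpenAPI fragment along these lines. The exact structure F5 XC emits will differ, and the title, path, and fields shown here are hypothetical, expressed as a Python dict for readability.

# A hypothetical, minimal OpenAPI fragment of the kind API discovery can generate
discovered_spec = {
    "openapi": "3.0.0",
    "info": {"title": "Discovered Brewz API", "version": "1.0"},  # hypothetical title
    "paths": {
        "/api/products": {  # hypothetical endpoint observed in sampled traffic
            "get": {
                "parameters": [
                    {"name": "id", "in": "query", "schema": {"type": "integer"}}
                ],
                "responses": {"200": {"description": "OK"}},
            }
        }
    },
}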
The platform validates what is deployed versus what is discovered and tags any Shadow APIs that are found. We can also check the dashboards for any attacks that may have occurred while we were waiting for discovery to finish. The internet being what it is, it didn't take long for the platform to protect us against some attacks.

Deployment Teardown:

Step 1: From your deployment branch, check out a branch for the destroy workflow using the following naming convention for the xc-nic destroy branch: destroy-xcapi-nic
Step 2: Push your destroy branch to the forked repo.
Step 3: Back in GitHub, navigate to the Actions tab of your forked repo and monitor your build.
Step 4: Once the pipeline completes, verify your assets were destroyed.

Conclusion:

In this article we have shown how to utilize the F5 Hybrid Security Architectures GitHub repo and CI/CD pipeline to deploy a tiered security architecture utilizing F5 XC WAAP and NGINX Ingress Controller to protect a test API running in AWS EKS. While the code and security policies deployed are generic and not inclusive of all use cases, they can be used as a steppingstone for deploying F5-based hybrid architectures in your own environments.

Workloads are increasingly deployed across multiple diverse environments and application architectures. Organizations need the ability to protect their essential applications regardless of deployment or architecture circumstances. Equally important is the need to deploy these protections with the same flexibility and speed as the apps they protect. With the F5 WAF portfolio, coupled with DevSecOps principles, organizations can deploy and maintain industry-leading security without sacrificing the time to value of their applications. Not only can Edge and Shift Left principles exist together, they can also work in harmony to provide a more effective security solution.

Article Series:
F5 Hybrid Security Architectures (Intro - One WAF Engine, Total Flexibility)
F5 Hybrid Security Architectures (Part 1 - F5's Distributed Cloud WAF and BIG-IP Advanced WAF)
F5 Hybrid Security Architectures (Part 2 - F5's Distributed Cloud WAF and NGINX App Protect WAF)
F5 Hybrid Security Architectures (Part 3 - F5 XC API Protection and NGINX Ingress Controller)
F5 Hybrid Security Architectures (Part 4 - F5 XC BOT and DDoS Defense and BIG-IP Advanced WAF)
F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller)

For further information or to get started:
F5 Distributed Cloud Platform (Link)
F5 Distributed Cloud WAAP Services (Link)
F5 Distributed Cloud WAAP YouTube series (Link)
F5 Distributed Cloud WAAP Get Started (Link)

F5 Distributed Cloud (XC) Global Applications Load Balancing in Cisco ACI
Introduction

F5 Distributed Cloud (XC) simplifies cloud-based DNS management with global server load balancing (GSLB) and disaster recovery (DR). F5 XC efficiently directs application traffic across environments globally, performs health checks, and automates responses to activities and events to maintain high application performance with high availability and robustness.

In this article, we will discuss how we can ensure high application performance, high availability, and robustness by using XC to load balance global applications across public clouds and Cisco Application Centric Infrastructure (ACI) sites that are geographically apart. We will look at two different XC in ACI use cases. Each uses a different approach for global application delivery and leverages a different XC feature to load balance the applications globally and provide disaster recovery.

XC DNS Load Balancer

Our first XC in ACI use case is very commonly seen: a traditional network-centric approach to global application delivery and disaster recovery. We use our existing network infrastructure to provide global application connectivity, and we deploy GSLB to load balance the applications across sites globally and for disaster recovery. In our example, we will show you how to use the XC DNS Load Balancer to load balance a global application across geographically dispersed ACI sites. One of the many advantages of using the XC DNS Load Balancer is that we no longer need to manage GSLB appliances. We can also expect high DNS performance thanks to XC's global infrastructure. In addition, we have a single pane of glass, the XC console, to manage all of our services, such as multi-cloud networking, application delivery, DNS services, WAAP, and so on.

Example Topology

Here in our example, we use the Distributed Cloud (XC) DNS Load Balancer to load balance our global application hello.bd.f5.com, which is deployed in a hybrid multi-cloud environment across two ACI sites located in San Jose and New York. Here are some highlights at each ACI site:

New York location:
- XC CE is deployed in ACI using layer three attached with BGP
- XC advertises custom VIP 10.10.215.215 to ACI via BGP
- XC custom VIP 10.10.215.215 has an origin server 10.131.111.88 on AWS
- BIG-IP is integrated into ACI
- BIG-IP has a public VIP 12.202.13.149 with two pool members: the on-premises origin server 10.131.111.161 and the XC custom VIP 10.10.215.215

San Jose location:
- XC CE is deployed in ACI using layer three attached with BGP
- XC advertises custom VIP 10.10.135.135 to ACI via BGP
- XC custom VIP 10.10.135.135 has an origin server 10.131.111.88 on Azure
- BIG-IP is integrated into Cisco ACI
- BIG-IP has a public VIP 12.202.13.147 with two pool members: the on-premises origin server 10.131.111.55 and the XC custom VIP 10.10.135.135

*Note: Click here to review how to deploy XC CE in ACI using layer three attached with BGP.

DNS Load Balancing Rules

A DNS Load Balancer is an ingress controller for the DNS queries made to your DNS servers. The DNS Load Balancer receives the requests and answers with an IP address from a pool of members based on the configured load balancing rules. On the XC console, go to "DNS Management" -> "DNS Load Balancer Management" to create a DNS Load Balancer and then define the load balancing rules.
Here in our example, we created a DNS Load Balancer and defined the load balancing rules for our global application hello.bd.f5.com (note: as a prerequisite, F5 XC must be providing primary DNS for the domain):

Rule #1: If the DNS request to hello.bd.f5.com comes from the United States or United Kingdom, respond with BIG-IP VIP 12.203.13.149 in the DNS response, so that the application traffic will be directed to the New York ACI site and forwarded to an origin server located in AWS or on premises.

Rule #2: If the DNS request to hello.bd.f5.com comes from the United States or United Kingdom and the New York ACI site becomes unavailable, respond with BIG-IP VIP 12.203.13.147 in the DNS response, so that the application traffic will be redirected to the San Jose ACI site and forwarded to an origin server located on premises or in Azure.

Rule #3: If the DNS request to hello.bd.f5.com comes from somewhere outside of the United States or United Kingdom, respond with BIG-IP VIP 12.203.13.147 in the DNS response, so that the application traffic will be directed to the San Jose ACI site and forwarded to an origin server located on premises or in Azure.

Validation

Now, let's see what happens. When a machine located in the United States tries to reach hello.bd.f5.com and both ACI sites are up, the traffic is directed to the New York ACI site and forwarded to an origin server located on premises or in AWS, as expected. When a machine located in the United States tries to reach hello.bd.f5.com and the New York ACI site is down or becomes unavailable, the traffic is redirected to the San Jose ACI site and forwarded to an origin server located on premises or in Azure, as expected. When a machine tries to access hello.bd.f5.com from outside of the United States or United Kingdom, it is directed to the San Jose ACI site and forwarded to an origin server located on premises or in Azure, as expected.

On the XC console, go to "DNS Management" and select the appropriate DNS zone to view the Dashboard, for information such as DNS traffic distribution across the globe and query types, and Requests, for DNS request details.
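One simple way to spot-check the geolocation-based answers from clients in different regions is a scripted DNS query. The sketch below uses the dnspython package and assumes F5 XC is authoritative for the zone; it prints whichever VIP the DNS Load Balancer hands back to the querying location.

import dns.resolver  # pip install dnspython

# Query the global application's A record; the answer should differ by the
# geographic source of the query, per the DNS Load Balancer rules above
answers = dns.resolver.resolve("hello.bd.f5.com", "A")
for record in answers:
    print("Resolved VIP:", record.address)
    # From the US/UK this is expected to be the New York BIG-IP VIP;
    # from elsewhere, the San Jose BIG-IP VIP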
XC HTTP Load Balancer

Our second XC in ACI use case takes a different approach to global application delivery and disaster recovery. Instead of using the existing network infrastructure for global application connectivity and the XC DNS Load Balancer for global load balancing, we simplify network-layer management by securely deploying XC to connect our applications globally and leveraging the XC HTTP Load Balancer to load balance our global applications and provide disaster recovery.

Example Topology

Here in our example, we use the XC HTTP Load Balancer to load balance our global application global.f5-demo.com, which is deployed across a hybrid multi-cloud environment. Here are some highlights:
- XC CE is deployed in each ACI site using layer three attached with BGP
- New York location: ACI advertises on-premises origin server 10.131.111.161 to XC CE via BGP
- San Jose location: ACI advertises on-premises origin server 10.131.111.55 to XC CE via BGP
- An origin server 10.131.111.88 is located in AWS
- An origin server 10.131.111.88 is located in Azure

*Note: Click here to review how to deploy XC CE in ACI using layer three attached with BGP.

XC HTTP Load Balancer

On the XC console, go to "Multi-Cloud App Connect" -> "Manage" -> "Load Balancers" -> "HTTP Load Balancers" to "Add HTTP Load Balancer". In our example, we created an HTTPS load balancer named global with domain name global.f5-demo.com. Instead of bringing our own certificate, we took advantage of the automatic TLS certificate generation and renewal supported by XC.

Go to the "Origins" section to specify the origin servers for the global application. In our example, we included all origin servers across the public clouds and ACI sites for our global application global.f5-demo.com. Next, go to "Other Settings" -> "VIP Advertisement". Here, select either "Internet" or "Internet (Specified VIP)" to advertise the HTTP Load Balancer to the Internet. In our example, we selected "Internet" to advertise global.f5-demo.com globally, because we decided not to manage or acquire a public IP.

In our first use case, we defined a set of DNS load balancing rules on the XC DNS Load Balancer to direct the application traffic based on our requirements:
- If the request to global.f5-demo.com comes from the United States or United Kingdom, application traffic should be directed to an origin server located on premises in the New York ACI site or in AWS.
- If the request to global.f5-demo.com comes from the United States or United Kingdom and the origin servers in the New York ACI site and AWS become unavailable, application traffic should be redirected to an origin server located on premises in the San Jose ACI site or in Azure.
- If the request to global.f5-demo.com comes from somewhere outside of the United States or United Kingdom, application traffic should be directed to an origin server located on premises in the San Jose ACI site or in Azure.

We can accomplish the same with the XC HTTP Load Balancer by configuring Origin Server Subset Rules. Origin Server Subset Rules allow users to create match conditions on incoming source traffic to the XC HTTP Load Balancer and direct the matched traffic to the desired origin server(s). The match condition can be based on country, ASN, regional edge (RE), IP address, or client label selector. As a prerequisite, we create and assign a label (key-value pair) to each origin server so that we can specify where to direct the matched traffic in reference to the label in the Origin Server Subset Rules.

Go to "Shared Configuration" -> "Manage" -> "Labels" -> "Known Keys" and "Add Known Key" to create labels. In our example, we created a key named jy-key with two labels: us-uk and other. Now, go to "Origin Pools" under "Multi-Cloud App Connect" and apply the labels to the origin servers. In our example, origin servers in the New York ACI site and AWS are labeled us-uk, while origin servers in the San Jose ACI site and Azure are labeled other.

Then, go to "Other Settings" to enable subset load balancing. In our example, jy-key is our origin server subsets class, and we configured the default subset to use the origin pool labeled other as our fallback policy choice, per our requirement that if the origin servers in the New York ACI site and AWS become unavailable, traffic should be directed to an origin server in the San Jose ACI site or Azure.

Next, on the HTTP Load Balancer, configure the Origin Server Subset Rules by enabling "Show Advanced Fields" in the "Origins" section. In our example, we created the following Origin Server Subset Rules based on our requirements:

us-uk-rule: If the request to global.f5-demo.com comes from the United States or United Kingdom, direct the application traffic to an origin server labeled us-uk, in either the New York ACI site or AWS.
other-rule: If the request to global.f5-demo.com does not come from the United States or United Kingdom, direct the application traffic to an origin server labeled other, in either the San Jose ACI site or Azure.

Validation

As a reminder, we use the XC automatic TLS certificate generation and renewal feature for our HTTPS load balancer in this example, so first let's confirm the certificate status: we can see the certificate is valid, with an auto-renew date. Now, let's run some tests and see what happens.

First, let's try to access global.f5-demo.com from the United Kingdom: the traffic is directed to an origin server located in the New York ACI site or AWS, as expected. Next, let's see what happens if the origin servers in both of those sites become unavailable: the traffic is redirected to an origin server located in the San Jose ACI site or Azure, as expected. Lastly, let's try to access global.f5-demo.com from somewhere outside of the United States or United Kingdom: the traffic is directed to an origin server located in the San Jose ACI site or Azure, as expected.

To check the requests on the XC console, go to "Multi-Cloud App Connect" -> "Performance" -> "Requests" from the selected HTTP Load Balancer. In one screenshot from our example, a request to global.f5-demo.com that came from Australia was directed to the origin server 10.131.111.55 located in the San Jose ACI site, per the configured Origin Server Subset Rule other-rule. In another example, a request that came from the United States was sent to the origin server 10.131.111.88 located in AWS, per the configured Origin Server Subset Rule us-uk-rule.

Summary

F5 XC simplifies cloud-based DNS management with global server load balancing (GSLB) and disaster recovery (DR). By deploying F5 XC in Cisco ACI, we can securely deploy and efficiently load balance our global applications across ACI sites (and public clouds) while maintaining high application performance, high availability, and robustness at all times.

Related Resources
*On-Demand Webinar* Deploying F5 Distributed Cloud Services in Cisco ACI
Deploying F5 Distributed Cloud (XC) Services in Cisco ACI - Layer Three Attached Deployment
Deploying F5 Distributed Cloud (XC) Services in Cisco ACI - Layer Two Attached Deployment

Deploying F5 Distributed Cloud (XC) Services in Cisco ACI - Layer Two Attached Deployment
Introduction

F5 Distributed Cloud (XC) Services are SaaS-based security, networking, and application management services that can be deployed across multi-cloud, on-premises, and edge locations. This article will show you how to deploy an F5 Distributed Cloud Customer Edge (CE) site in Cisco Application Centric Infrastructure (ACI) so that you can securely connect your applications and distribute application workloads in a hybrid multi-cloud environment.

F5 XC Layer Two Attached CE in Cisco ACI

Besides the Layer Three Attached deployment option, which we discussed in another article, an F5 Distributed Cloud Customer Edge (CE) site can also be deployed Layer Two Attached in a Cisco ACI environment, using an ACI endpoint of an endpoint group (EPG). As a reminder, Layer Two Attached is one of the deployment models for getting traffic to and from an F5 Distributed Cloud CE site, where the CE can be a single node or a three-node cluster. F5 Distributed Cloud supports the Virtual Router Redundancy Protocol (VRRP) for virtual IP (VIP) advertisement. When VRRP is enabled for VIP advertisement, each VIP has a VRRP Master, and mastership of the VIPs may be distributed across the CE nodes within the cluster. In this article, we will look at how to deploy a Layer Two Attached CE site in Cisco ACI.

F5 XC VRRP Support for VIPs Advertisement

F5 XC Secure Mesh Sites are specifically engineered for non-cloud CE deployments and support additional configuration that is not available using Fleet or regular site management functionality, such as VRRP for VIP advertisement. We recommend Secure Mesh Sites for non-cloud CE deployments, and specifically in the Layer Two Attached CE deployment model, we recommend deploying the CE site as a Secure Mesh Site to take advantage of the VRRP support for VIP advertisement.

With VRRP enabled for VIP advertisement, one of the CE nodes within the cluster becomes the VRRP Master for a VIP and starts sending gratuitous ARPs (GARPs), while the rest of the CE nodes become VRRP Backups. Please note that the CE software does not use a VRRP virtual MAC for the VIP; instead, the CE node that is the VRRP Master for the VIP uses its physical MAC address in ARP responses for the VIP. When a failover happens, a VRRP Backup CE becomes the new VRRP Master for the VIP and starts sending GARPs to update the ARP tables of the devices in the broadcast domain. As of today, there is no way to configure the VRRP priority, and VRRP Master assignment is random. Thus, if there are multiple VIPs, a CE node within the cluster may be the VRRP Master for one or more VIPs, or none.
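To visualize what the elected VRRP Master actually sends on the wire, the following is a small, illustrative scapy sketch of a gratuitous ARP for a VIP. The interface name and MAC address are placeholders; this is only a lab aid for understanding the packets, not anything the CE requires.

from scapy.all import ARP, Ether, sendp  # pip install scapy

VIP = "172.18.188.201"
NODE_MAC = "00:11:22:33:44:55"  # placeholder for the master node's physical MAC

# Gratuitous ARP: sender and target IP are both the VIP, broadcast to the L2
# domain so switches and neighbors update their ARP/endpoint tables to this MAC
garp = Ether(src=NODE_MAC, dst="ff:ff:ff:ff:ff:ff") / ARP(
    op=2,            # ARP reply form of GARP
    psrc=VIP, pdst=VIP,
    hwsrc=NODE_MAC, hwdst="ff:ff:ff:ff:ff:ff",
)
sendp(garp, iface="eth0")  # interface name is environment-specific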
F5 XC Layer Two Attached CE in ACI Example

In this section, we will use an example to show you how to successfully deploy a Layer Two Attached CE site in the Cisco ACI fabric so that you can securely connect your application and distribute application workloads in a hybrid multi-cloud environment.

Topology

In our example, the CE is a three-node cluster (Master-0, Master-1, and Master-2) that connects to the ACI fabric using endpoints of an EPG named external-epg. (Example reference: the ACI EPG external-epg endpoints table.) The HTTP load balancer site2-secure-mesh-cluster-app has a custom VIP of 172.18.188.201/32 for domain epg-xc.f5-demo.com, with workloads 10.131.111.66 and 10.131.111.77 in the cloud (Azure), and it advertises the VIP to the CE site.

F5 XC Configuration of VRRP for VIPs Advertisement

To enable VRRP for VIP advertisement, go to "Multi-Cloud Network Connect" -> "Manage" -> "Site Management" -> "Secure Mesh Sites" -> "Manage Configuration" from the selected Secure Mesh Site. Next, go to "Network Configuration" and select "Custom Network Configuration" to get to "Advanced Configuration", and make sure "Enable VRRP for VIP(s)" is selected for VIP Advertisement Mode.

Validation

We can now securely connect to our application. Note that after F5 XC is deployed in Cisco ACI, we also use F5 XC DNS as our primary nameserver. To check the requests on the F5 XC console, go to "Multi-Cloud App Connect" -> "Overview: Applications" to bring up our HTTP load balancer, then go to "Performance Monitoring" -> "Requests". (*Note: Make sure you are in the right namespace.)

As a reminder, VRRP for VIP advertisement is enabled in our example. From the request shown above, we can see that CE node Master-2 is currently the VRRP Master for VIP 172.18.188.201, and if we go to the APIC, we can see the VIP is learned in the ACI endpoint table for EPG external-epg too. (Example reference: a sniffer capture of a GARP from CE node Master-2 for VIP 172.18.188.201.)

Summary

An F5 Distributed Cloud Customer Edge (CE) site can be deployed with the Layer Two Attached deployment model in a Cisco ACI environment using an ACI endpoint of an endpoint group (EPG). The Layer Two Attached deployment model can be more desirable and easier for CE deployment than Layer Three Attached, because it does not require layer three routing, which means one less layer to take care of, and it also brings the applications closer to the edge. With an F5 Distributed Cloud Customer Edge (CE) site deployment, you can securely connect your on-premises environment to the cloud quickly and efficiently.

Next

Check out this video for some examples of Layer Two Attached CE use cases in Cisco ACI.

Related Resources
*On-Demand Webinar* Deploying F5 Distributed Cloud Services in Cisco ACI
F5 Distributed Cloud (XC) Global Applications Load Balancing in Cisco ACI
Deploying F5 Distributed Cloud (XC) Services in Cisco ACI - Layer Three Attached Deployment
Customer Edge Site - Deployment & Routing Options
Cisco ACI Endpoint Learning White Paper

Deploying F5 Distributed Cloud (XC) Services in Cisco ACI - Layer Three Attached Deployment
Introduction

F5 Distributed Cloud (XC) Services are SaaS-based security, networking, and application management services that can be deployed across multi-cloud, on-premises, and edge locations. This article will show you how to deploy an F5 Distributed Cloud Customer Edge (CE) site in Cisco Application Centric Infrastructure (ACI) so that you can securely connect your applications in a hybrid multi-cloud environment.

XC Layer Three Attached CE in Cisco ACI

An F5 Distributed Cloud Customer Edge (CE) site can be deployed Layer Three Attached in a Cisco ACI environment using Cisco ACI L3Out. As a reminder, Layer Three Attached is one of the deployment models for getting traffic to and from an F5 Distributed Cloud CE site, where the CE can be a single node or a three-node cluster. Static routing and BGP are both supported in the Layer Three Attached deployment model. When a Layer Three Attached CE site is deployed in a Cisco ACI environment using Cisco ACI L3Out, routes can be exchanged between them via static routing or BGP. In this article, we will focus on BGP peering between the Layer Three Attached CE site and the Cisco ACI fabric.

XC BGP Configuration

BGP configuration on XC is simple and takes only a couple of steps to complete:

1) Go to "Multi-Cloud Network Connect" -> "Networking" -> "BGPs". (*Note: The XC homepage is role based, and the "Advanced User" role is required to configure BGP.)

2) "Add BGP" to fill out the site-specific info, such as which CE site runs BGP and its BGP AS number, and "Add Peers" to include its BGP peers' info. (*Note: XC supports direct connection for BGP peering IP reachability only.)

XC Layer Three Attached CE in ACI Example

In this section, we will use an example to show you how to successfully bring up BGP peering between an F5 XC Layer Three Attached CE site and a Cisco ACI fabric so that you can securely connect your application in a hybrid multi-cloud environment.

Topology

In our example, the CE is a three-node cluster (Master-0, Master-1, and Master-2) that has a VIP 10.10.122.122/32 with workloads, 10.131.111.66 and 10.131.111.77, in the cloud (AWS):
- The CE connects to the ACI fabric via a virtual port channel (vPC) that spans two ACI border leaf switches.
- CE and ACI fabric are eBGP peers via an ACI L3Out SVI for route exchange.
- The CE is eBGP-peered to both ACI border leaf switches, so that if one of them goes down (expectedly or unexpectedly), the CE can continue to exchange routes with the border leaf switch that remains up, and VIP reachability is not affected.

XC BGP Configuration

First, let us look at the XC BGP configuration ("Multi-Cloud Network Connect" -> "Networking" -> "BGPs"). We "Add BGP" for "jy-site2-cluster" with its site-specific BGP info, along with a total of six eBGP peers (each CE node has two eBGP peers, one to each ACI border leaf switch), and we "Add Item" to specify each of the six eBGP peers' info. (Example reference: the ACI BGP configuration.)

XC BGP Peering Status

There are a couple of ways to check the BGP peering status on the F5 Distributed Cloud console:

Option 1: Go to "Multi-Cloud Network Connect" -> "Networking" -> "BGPs" -> "Show Status" from the selected CE site to bring up the "Status Objects" page. The "Status Objects" page provides a summary of the BGP status from each of the CE nodes. In our example, all three CE nodes from "jy-site2-cluster" are cleared with "0 Failed Conditions" (green). We can simply click on a CE node UID to further look into the BGP status of the selected CE node with all of its BGP peers.
Here, we clicked on the UID of CE node Master-2 (172.18.128.14), and we can see it has two eBGP peers, 172.18.128.11 (ACI border leaf switch 1) and 172.18.128.12 (ACI border leaf switch 2), and both of them are up. The BGP status from the other two CE nodes, Master-0 (172.18.128.6) and Master-1 (172.18.128.10), looks the same. For reference, a CE node would show "Failed Conditions" (red) if one of its BGP peers were down.

Option 2: Go to "Multi-Cloud Network Connect" -> "Overview" -> "Sites" -> "Tools" -> "Show BGP peers" to bring up the BGP peer status info from all CE nodes of the selected site. Here, we can see the same BGP status for CE node Master-2 (172.18.128.14), which has two eBGP peers, 172.18.128.11 (ACI border leaf switch 1) and 172.18.128.12 (ACI border leaf switch 2), both up, along with the output from the other two CE nodes, Master-0 (172.18.128.6) and Master-1 (172.18.128.10). (Example reference: the ACI BGP peering status.)

XC BGP Routes Status

To check the BGP routes, both received and advertised, go to "Multi-Cloud Network Connect" -> "Overview" -> "Sites" -> "Tools" -> "Show BGP routes" from the selected CE site. In our example, all three CE nodes (Master-0, Master-1, and Master-2) advertised (exported) 10.10.122.122/32 to both of their BGP peers, 172.18.128.11 (ACI border leaf switch 1) and 172.18.128.12 (ACI border leaf switch 2), while receiving (importing) 172.18.188.0/24 from them. On the ACI fabric side, both 172.18.128.11 (ACI border leaf switch 1) and 172.18.128.12 (ACI border leaf switch 2) advertised 172.18.188.0/24 to all three CE nodes, while receiving 10.10.122.122/32 from all three of them (note "|" for multipath in the output).

XC Routes Status

To view the routing table of a CE node (or all CE nodes at once), we can simply select "Show routes". Based on the BGP routing table in our example (shown earlier), each CE node should have two equal-cost multipath (ECMP) routes installed in its routing table for 172.18.188.0/24, one with 172.18.128.11 (ACI border leaf switch 1) and one with 172.18.128.12 (ACI border leaf switch 2) as the next hop, and it does (note "ECMP" for multipath in the output). On the ACI fabric, each border leaf switch should have three ECMP routes installed in its routing table for 10.10.122.122, one with each CE node (172.18.128.6, 172.18.128.10, and 172.18.128.14) as the next hop, and it does.

Validation

We can now securely connect our application in a hybrid multi-cloud environment. (*Note: After F5 XC is deployed, we also use F5 XC DNS as our primary nameserver.) To check the requests on the F5 Distributed Cloud console, go to "Multi-Cloud Network Connect" -> "Sites" -> "Requests" from the selected CE site.

Summary

An F5 Distributed Cloud Customer Edge (CE) site can be deployed with the Layer Three Attached deployment model in a Cisco ACI environment. Both static routing and BGP are supported in the Layer Three Attached deployment model and can be easily configured on the F5 Distributed Cloud console with just a few clicks. With an F5 Distributed Cloud Customer Edge (CE) site deployment, you can securely connect your applications in a hybrid multi-cloud environment quickly and efficiently.
Next

Check out this video for some examples of Layer Three Attached CE use cases in Cisco ACI.

Related Resources
*On-Demand Webinar* Deploying F5 Distributed Cloud Services in Cisco ACI
F5 Distributed Cloud (XC) Global Applications Load Balancing in Cisco ACI
Deploying F5 Distributed Cloud (XC) Services in Cisco ACI - Layer Two Attached Deployment
Customer Edge Site - Deployment & Routing Options
Cisco ACI L3Out White Paper

Easily Protect Your Applications from DDoS with F5 Distributed Cloud DDoS Auto-Mitigation
Introduction:

F5 Distributed Cloud DDoS Mitigation leverages a globally secured network with Regional Edges (REs), hosted in scrubbing data centers and interconnected across a dedicated, multi-terabit, redundant, private backbone operated by Tier 1 carriers. Several articles have already been written about F5 Distributed Cloud (XC) DDoS and how to quickly protect your applications using this service; they covered Fast ACLs, to quickly protect yourself from Layer 3 and Layer 4 network attacks, and Layer 7 protection of your applications, and I will link to those below. Those solutions required manual configuration and monitoring, and they are still necessary for a complete, layered approach to protecting your applications.

This article will show how we continue to make DDoS protection and its configuration easier by introducing the F5 XC DDoS Auto-Mitigation feature. The platform itself offers immediate mitigation through auto-mitigation edge protection (Layer 3/4). This architectural enhancement sits at the edge of our network, and the underlying strength of the solution stems from its incredibly fast time to mitigate (TTM) the most common attack vectors. To make the solution even more robust, I will then show you how to add the same feature protecting your application at Layer 7.

Configuration:

This article assumes you have an F5 Distributed Cloud console account and have already configured an HTTP load balancer that is protecting an application.

Log in, select Load Balancers, and verify the correct namespace. I currently have three load balancers to choose from; your environment may be different. Click the three ellipses on the right side of the page for the load balancer you wish to add DoS protection to, and when the fly-out appears, select Manage Configuration. Notice along the left side are all the configuration items available for your load balancer; to edit, click Edit Configuration on the right. Once you click Edit Configuration, you can either scroll down to DoS Protection or click the DoS Protection label on the left side. Initially, DoS Protection is disabled. Enable DoS protection, then enable Auto Mitigation, scroll to the bottom, and select Save and Exit.

Conclusion

I was able to demonstrate how simply, and with very little effort, you can protect yourself from DoS attacks using F5's leading DoS solution delivered as a SaaS service. We have simplified the solution so that customers who have struggled with manual configuration in the past can easily add DoS protection.

For further information or to get started:
F5 Distributed Cloud WAAP YouTube series (Link)
F5 Distributed Cloud WAAP Services (Link)
F5 Distributed Cloud WAAP Get Started (Link)
Fast ACL (Link)
YouTube Video (Link)

F5 Hybrid Security Architectures (Part 2 - F5's Distributed Cloud WAF and NGINX App Protect WAF)
Here in this example solution, we will be using Terraform to deploy an AWS Elastic Kubernetes Service cluster running the Arcadia Finance test web application, serviced by F5 NGINX Ingress Controller for Kubernetes and protected by NGINX App Protect WAF. We will supplement this with F5 Distributed Cloud Web App and API Protection to provide complementary security at the edge. Everything will be tied together using GitHub Actions for CI/CD and Terraform Cloud to maintain state.

F5 Hybrid Security Architectures: One WAF Engine, Total Flexibility (Intro)
Layered security: we have been told for years that the most effective security strategy is composed of multiple, loosely coupled or independent layers of security controls. A WAF fits snugly into the technical security controls area and has long been known as an essential piece of application security. What if we take this further and apply the layered approach directly to our WAF deployment? The F5 Hybrid Security Architectures series explores this approach utilizing F5's best-in-class WAF products.

How to use F5 Distributed Cloud for API Discovery
In today's digital landscape, APIs are crucial for integrating diverse applications and services, enabling seamless communication and data sharing between systems. API discovery involves finding, exploring, and assessing APIs for their suitability in applications, considering their varied sources and functionalities. F5 Distributed Cloud provides multiple architectures that make it ridiculously easy to discover APIs.