Load Balancers – Rancher

Rancher provides the ability to use different load balancer controllers within Rancher. A load balancer can be used to distribute network and application traffic to individual containers by adding rules to target services. Any target service will have all of its underlying containers automatically registered as load balancer targets by Rancher. With Rancher, it’s easy to add a load balancer to your stack.

By default, Rancher provides a load balancer managed by HAProxy that can be manually scaled to multiple hosts. The rest of the examples in this document cover the different options for load balancers, but specifically reference our HAProxy load balancer service. We are planning to add additional load balancer providers, and the options for all load balancers will be the same regardless of the load balancer provider.

We use a round robin algorithm to distribute traffic to target services. The algorithm can be customized using HAProxy’s custom settings. Alternatively, you can configure the load balancer to route traffic to target containers that are on the same host as the load balancer container. By adding a specific label to the load balancer, you configure it to target only containers on the same host as the load balancer (io.rancher.lb_service.target=only-local) or to prefer those containers over containers on other hosts (io.rancher.lb_service.target=prefer-local).
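As a sketch, a load balancer launched through compose might set this label as follows (the service name is illustrative, and the <version> tag is a placeholder as elsewhere in this document):

```yaml
# Hypothetical docker-compose.yml sketch: a load balancer that only targets
# containers on its own host via the io.rancher.lb_service.target label.
version: '2'
services:
  lb:
    image: rancher/lb-service-haproxy:<version>  # substitute a supported image tag
    ports:
      - 80:80
    labels:
      io.rancher.lb_service.target: only-local   # or prefer-local
```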

We’ll review the load balancer options and show examples in both the UI and Rancher Compose.

Draining target connections for a load balancer

Available as of v1.6.11

By default, if a target service of a load balancer stops while a request is being made through the load balancer, existing connections to the service are terminated immediately. Users may see errors such as HTTP Bad Gateway (502) when trying to access the load balancer because the connection to the target service has been dropped. Dropped connections are typically seen when the target service is upgraded.

To avoid these dropped connections, services can be launched with a drain timeout so that when the load balancer directs requests to these services, existing connections are completely drained before the container is stopped.

How to Enable Connection Draining on Target Services
  • When launching a target service, specify a non-zero drain timeout. In the UI, this timeout can be set on the service’s Command tab. In compose files, add the drain_timeout_ms field.
  • The drain timeout value is the maximum time in milliseconds during which Rancher will attempt to drain existing connections to a stopping service container. After this amount of time, the container is stopped by Rancher. During this window, no new connections are made to the container and the load balancer removes the container from its list of backends.
  • A non-zero drain timeout allows draining each time a container goes into a stopped state, which typically occurs during service upgrade, service reconciliation, or a direct container stop.

NOTE: By default, the drain timeout is 0 for a service, and connection draining will not occur.

Known limitations
  • There is no drain support for sidekick containers, containers using the host network, or standalone containers.
  • Draining is supported only on rancher/lb-service-haproxy:v0.7.15 or later load balancers.
  • Reverting to earlier load balancer images will fail unless you add the label io.rancher.container.agent.role: environmentAdmin to the load balancer.

Example docker-compose.yml

Example rancher-compose.yml
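A minimal sketch of these two files, assuming the drain timeout is expressed as the drain_timeout_ms field on the target service in rancher-compose.yml (service names and values are illustrative):

```yaml
# docker-compose.yml (sketch)
version: '2'
services:
  web:
    image: nginx
  lb:
    image: rancher/lb-service-haproxy:v0.7.15  # drain support requires v0.7.15+
    ports:
      - 80:80
```

```yaml
# rancher-compose.yml (sketch)
version: '2'
services:
  web:
    scale: 2
    drain_timeout_ms: 10000   # drain existing connections for up to 10s on stop
  lb:
    scale: 1
    lb_config:
      port_rules:
        - source_port: 80
          target_port: 80
          service: web
```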

Add a load balancer in the UI

We will see how to configure a load balancer for our application “letschat” created earlier in the Adding Services section. First, create a load balancer by clicking the drop-down icon next to “Add Service” and clicking Add Load Balancer. By default, the scale will be 1 container. Provide a name such as “LetsChatLB”.

For the port rules, use the default public access, the default http protocol, and a source port of 80; select the “letschat” service and use a destination port of 8080. Click Create.

Now, let’s see the load balancer in action. In the stack view, there is a link to port 80, which you used as the source port for the load balancer. If you click on it, a new tab will open in your browser pointing to one of the hosts that has the load balancer started. The request is redirected to one of the “LetsChat” containers. If you refresh, the load balancer will redirect the new request to the other container in the “letschat” service.

Load balancer options in the UI

Rancher provides a load balancer that runs HAProxy software inside the container to route traffic to target services.

Note: Load balancers will only work for services that use the managed network. If you select any other network option for the target services, it will not work with the load balancer.

To add a load balancer, click the drop-down icon next to the Add Service button and select Add Load Balancer.

You can use the slider to select the scale, that is, how many containers in the load balancer. Alternatively, you can select Always run an instance of this container on each host. With this option, the load balancer will scale for any additional hosts that are added to your environment. If you have scheduling rules in the Scheduling section, Rancher will only start containers on hosts that meet the scheduling rules. If you add a host to the environment that does not comply with the scheduling rules, a container will not start on the host.

Note: The scale of the load balancer cannot exceed the number of hosts in the environment; otherwise there will be a port conflict and the load balancer service will be stuck in an activating state. Rancher will keep trying to find an available host to open the port on until you edit the scale of the load balancer or add additional hosts.

You must provide a Name and, if desired, a Description of the load balancer.

Next, you’ll define the port rules for a load balancer. There are two types of port rules that can be created. There are service rules that target existing services and selector rules that will target services that match the selector criteria.

When you create service and selector rules, the host name and path rules match from top to bottom in the order shown in the user interface.

Service rules

Service rules are port rules that target existing services in Rancher.

In the Access section, you will decide whether this load balancer port will be publicly accessible (that is, accessible outside the host) or only internally in the environment. By default, Rancher has assumed that you want the port to be public, but you can select Internal if you want only services within the same environment to access the port.

Select the protocol. Read more about our protocol options. If you choose to select a protocol that requires SSL termination (that is, https or tls), you will add your certificates on the SSL Termination tab.

Next, you’ll provide the request host, source port, and request path that the traffic will come in on.

Note: Port 42 cannot be used as the source port for load balancers because Rancher uses this port for health checks.

Request host/path

The request host can be a specific HTTP host header for each service. The request path can be a specific path. The request host and request path can be used independently or together to create a specific request.

Example: a request host of app.example.com combined with a request path of /api would match only requests to app.example.com/api (illustrative values).
Wildcards

Rancher supports wildcards when adding host-based routing. The following wildcard syntax is supported.
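As a sketch of the supported syntax (the exact matching semantics follow HAProxy host-header matching; treat the descriptions as an assumption):

```
*.domain.com   -> matches any hostname ending in .domain.com
domain.com.*   -> matches any hostname beginning with domain.com.
```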

Destination service and port

For each service rule, select the specific target service to direct traffic to. The list of services is based on all services within the environment. Along with the service, select which port on the service to direct traffic to. This private port of the service is usually the port exposed in the image.

Selector rules

For a selector rule, instead of targeting a specific service, you provide a selector value. The selector is used to choose target services based on the labels on a service. When the load balancer is created, the selector rules are evaluated against existing services in the environment to find any existing target services. Any additional services, or changes to a service’s labels, are compared against the selector values to determine whether the service should be a target service.

For each source port, you can add the request host and/or path. The selector value is provided as the target, and you can provide a specific port to route traffic to on the service. This private port of the service is usually the port exposed in the image.

Example: 2 selector rules
  1. Source port: 100; Selector: foo=bar; Port: 80
  2. Source port: 200; Selector: foo1=bar1; Port: 80
  • Service A has a foo=bar label and would match the first selector rule. Any traffic on source port 100 would be directed to Service A.
  • Service B has a foo1=bar1 label and would match the second selector rule. Any traffic on source port 200 would be directed to Service B.
  • Service C has both the foo=bar and foo1=bar1 labels and matches both selector rules. Traffic from either source port would be routed to Service C.

Note: Currently, if you want to use a selector source port rule with multiple hostnames/paths, you must use Rancher Compose to set the hostname/path values on the target services.

SSL Termination

The SSL Termination tab provides the ability to add certificates for use with the https and tls protocols. From the Certificate drop-down menu, you can select the primary certificate for the load balancer.

To add a certificate to Rancher, read how to add certificates on the Infrastructure tab.

You can provide multiple certificates for the load balancer so that the appropriate certificate is presented to the client based on the requested hostname (see Server Name Indication). This may not work with older clients that don’t support SNI; those will be served the primary certificate. Modern clients will be served the matching certificate from the list, or the primary certificate if there is no match.

Stickiness Policy for Load Balancers

You can select the stickiness policy of the load balancer. Stickiness is the cookie policy you wish to use for website cookies.

The two options supported in Rancher are:
  • None: this option means that there is no cookie policy.
  • Create new cookie: this option means that the cookie will be defined outside your application. This cookie is what the load balancer sets on requests and responses, and it determines the stickiness policy.

Custom HAProxy settings

Because Rancher uses HAProxy for its load balancer, you can customize the load balancer’s HAProxy settings. Anything you define in this section is appended to the configuration generated by Rancher.

Example of a custom HAProxy configuration
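A sketch of what such a customization might look like, assuming the custom settings are supplied under the lb_config config key in rancher-compose.yml (values are illustrative):

```yaml
# rancher-compose.yml (sketch): append custom HAProxy settings
version: '2'
services:
  lb:
    scale: 1
    lb_config:
      config: |-
        defaults
          balance leastconn
        global
          maxconn 4096
```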

Labels/Scheduling Load Balancers

We provide the ability to add labels to load balancers and to schedule where the load balancer will start. Read more details about labels and scheduling here.

Add a load balancer with Rancher Compose

We will see how to configure a load balancer for our “letschat” application created earlier in the Adding Services section.

Read more about setting up Rancher Compose.

Note: In our examples, we will use <version> as the image tag for our load balancers. Each version of Rancher has a specific version of lb-service-haproxy that is supported for load balancers.

We’ll set up the same example we used earlier in the UI example. To get started, you’ll need to create a docker-compose.yml file and a rancher-compose.yml file. With Rancher Compose, we can launch the load balancer.

Example docker-compose.yml

Example rancher-compose.yml
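A sketch of the two files for the “letschat” example above (the <version> tag is a placeholder, and the load balancer service name is illustrative):

```yaml
# docker-compose.yml (sketch)
version: '2'
services:
  letschatlb:
    image: rancher/lb-service-haproxy:<version>
    ports:
      - 80:80
```

```yaml
# rancher-compose.yml (sketch)
version: '2'
services:
  letschatlb:
    scale: 1
    lb_config:
      port_rules:
        - source_port: 80
          target_port: 8080
          service: letschat
```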

Load balancer options in Rancher Compose

Rancher provides a load balancer that runs HAProxy software inside the container to route traffic to target services.

Note: Load balancers will only work for services that use the managed network. If you select any other network option for the target services, it will not work with the load balancer.

A load balancer can be scheduled like any other service. Learn more about scheduling load balancers with Rancher Compose.

Load balancing is configured with a combination of ports exposed on a host and a load balancer configuration, which can include specific port rules for each target service, custom configuration, and stickiness policies.

When working with services that have sidekicks, you must use the primary service as the target service, that is, the service that contains the sidekick label.

Source ports

When you create a load balancer, you can add any ports that you want to expose on the host. Any of these ports can be used as source ports in the load balancer’s port rules. If you want an internal load balancer, do not expose any ports on the load balancer and only add port rules in the load balancer configuration.

Note: Port 42 cannot be used as a source port for load balancers because it is used internally for health checks.

Example load balancer docker-compose.yml

All load balancer configuration options are defined in rancher-compose.yml under the lb_config key.

Port rules

Port rules are defined in rancher-compose.yml. Because port rules are defined individually, there can be multiple port rules defined for the same service. By default, Rancher prioritizes these port rules in a specific order. If you want to change the prioritization, you can also set a specific priority on each rule.

Default priority order
  1. Hostname and URL
  2. Hostname
  3. Wildcard hostname and URL
  4. Wildcard hostname
  5. URL
  6. Default (no hostname, no URL)

Source port

The source port is one of the exposed ports on the host (that is, a port that is in docker-compose.yml). If you want to create an internal load balancer, the source port does not need to match any of the ports in the docker-compose.yml file.

Destination port

The destination port is the private port of the service. This port maps to the port exposed in the image used to start the service.

Protocol

There are several protocol types supported by Rancher load balancer controllers.
  • http: by default, if no protocol is set, the load balancer uses http. HAProxy does not decrypt traffic and passes it straight through.
  • tcp: HAProxy does not decrypt traffic and passes it straight through.
  • https: SSL termination is required. HAProxy decrypts traffic using the provided certificates, which must be added to Rancher before being used in a load balancer. Traffic from the load balancer to the target service is not encrypted.
  • tls: SSL termination is required. HAProxy decrypts traffic using the provided certificates, which must be added to Rancher before being used in a load balancer. Traffic from the load balancer to the target service is not encrypted.
  • sni: traffic remains encrypted from the load balancer to the services. Multiple certificates can be provided for the load balancer so that the client is presented with the appropriate certificate based on the requested hostname (see Server Name Indication for details).
  • udp: not supported by Rancher’s HAProxy provider.

Any additional load balancer provider may support only a subset of these protocols.

Hostname routing

Hostname routing only supports http, https, and sni. Only http and https also support path-based routing.

Service

The name of the service that you want the load balancer to direct traffic to. If the service is in the same stack, use the service name. If the service is in a different stack, use <stack_name>/<service_name>.

Example rancher-compose.yml
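A sketch of port rules targeting a service in the same stack and one in a different stack (stack and service names are hypothetical):

```yaml
# rancher-compose.yml (sketch)
version: '2'
services:
  lb:
    lb_config:
      port_rules:
        - source_port: 80
          target_port: 8080
          service: web              # service in the same stack
        - source_port: 81
          target_port: 8080
          service: otherstack/api   # service in a different stack
```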

Hostname and path

The Rancher HAProxy load balancer supports L7 load balancing by allowing you to specify the host header and path in port rules.
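A sketch, assuming hostname and path keys on a port rule (all names are illustrative):

```yaml
# rancher-compose.yml (sketch): route app.example.com/api to the "web" service
version: '2'
services:
  lb:
    lb_config:
      port_rules:
        - source_port: 80
          target_port: 8080
          service: web
          hostname: app.example.com
          path: /api
```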

Example rancher-compose.yml

Wildcards

Rancher supports wildcards when adding host-based routing. The following wildcard syntax is supported.
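A sketch of a wildcard hostname rule (hostnames are illustrative):

```yaml
# rancher-compose.yml (sketch): any subdomain of example.com routes to "web"
version: '2'
services:
  lb:
    lb_config:
      port_rules:
        - source_port: 80
          target_port: 8080
          service: web
          hostname: '*.example.com'
```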

Priority

By default, Rancher prioritizes port rules that target the same service, but if you want, you can customize your own prioritization of port rules (a lower number is a higher priority).
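A sketch of explicit priorities, assuming a priority key on each port rule (service names are illustrative; a lower number wins):

```yaml
# rancher-compose.yml (sketch): evaluate the hostname rule before the fallback
version: '2'
services:
  lb:
    lb_config:
      port_rules:
        - source_port: 80
          target_port: 8080
          service: web
          hostname: app.example.com
          priority: 1
        - source_port: 80
          target_port: 8080
          service: fallback
          priority: 2
```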

Selectors

Instead of targeting a specific service, you can configure a selector. By using selectors, you can define service links and hostname routing rules on the target service instead of on the load balancer. Services with labels that match the selector become targets of the load balancer.

When you use a selector on a load balancer, lb_config can be set on both the load balancer and any target services that match the selector. On the load balancer, the selector value is set in the lb_config’s selector field. A port rule on the load balancer’s lb_config cannot have a service and would typically not have a destination port. Instead, the destination port is set in the port rules on the target service. If you choose to use hostname routing, the hostname and path are also set on the target service.

Note: For any load balancer that uses v1 load balancer yaml fields with selector labels, the load balancer will not be converted to a v2 load balancer because the port rules of the services would not be updated.

docker-compose.yml example

rancher-compose.yml example
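A sketch of a selector setup, with the selector on the load balancer and the destination port set on the matching target service (labels and names are illustrative; the <version> tag is a placeholder):

```yaml
# docker-compose.yml (sketch)
version: '2'
services:
  lb:
    image: rancher/lb-service-haproxy:<version>
    ports:
      - 80:80
  web:
    image: nginx
    labels:
      foo: bar
```

```yaml
# rancher-compose.yml (sketch)
version: '2'
services:
  lb:
    lb_config:
      port_rules:
        - source_port: 80
          selector: foo=bar   # no service/target_port on the load balancer rule
  web:
    lb_config:
      port_rules:
        - target_port: 80     # destination port set on the target service
```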
Backend name

If you want to explicitly name a backend in the load balancer configuration, use backend_name. This option can be useful when configuring custom configuration parameters for a particular backend.
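A sketch, assuming a backend_name key on the port rule so custom HAProxy config can reference that backend (names are illustrative):

```yaml
# rancher-compose.yml (sketch)
version: '2'
services:
  lb:
    lb_config:
      port_rules:
        - source_port: 80
          target_port: 8080
          service: web
          backend_name: web-backend   # referenced from custom HAProxy config
```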

Certificates

If you use the https or tls protocols, you can use certificates that are added directly to Rancher, or certificates from a directory mounted into the load balancer container.

Referencing certificates added to Rancher

Certificates are referenced in the lb_config section of the load balancer.

Mounting certificates into the load balancer container

Supported in compose files only

Certificates can be mounted directly into a load balancer container as a volume. The load balancer container expects certificates to be in a specific directory structure. If you are using a LetsEncrypt client to generate your certificates, your directory structure is already in the format that Rancher expects. If you are not using LetsEncrypt, the directory and certificate names will need to be structured in a specific way.

The Rancher load balancer will poll the certificate directories for updates. Any addition/removal of the certificates will be synchronized by polling every 30 seconds.

All certificates will be located in a single base certificate directory. This directory name will be used in a load balancer service tag to inform the load balancer where the certificates are.

In this base directory, each certificate generated for a specific domain must be placed in its own subdirectory. The folder name must be the domain name of the certificate, and each folder must contain the private key (privkey.pem) and the certificate chain (fullchain.pem). The default certificate can be placed in a subdirectory of any name, but its files must follow the same naming conventions (privkey.pem and fullchain.pem).

When you start a load balancer, you must specify the location of the certificates and the location of the default certificate by using labels. If these labels are on the load balancer, the load balancer will ignore any certificates in the load balancer’s lb_config key.
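As a sketch, the directory layout and load balancer labels might look like the following (the directory, domain, and label names are assumptions based on the description above, not confirmed values):

```
certs/                   # base certificate directory
  foo.example.com/
    privkey.pem
    fullchain.pem
  default/               # default certificate (any subdirectory name)
    privkey.pem
    fullchain.pem
```

```yaml
# docker-compose.yml (sketch): mount the directory and point the labels at it
version: '2'
services:
  lb:
    image: rancher/lb-service-haproxy:<version>
    ports:
      - 443:443
    volumes:
      - /certs:/certs
    labels:
      io.rancher.lb_service.cert_dir: /certs                  # assumed label name
      io.rancher.lb_service.default_cert_dir: /certs/default  # assumed label name
```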

Note: You cannot use certificates added in Rancher together with mounting certificates in the container through a volume.

Certificates can be mounted into the load balancer container using host bind mounts or using a named volume with one of our storage drivers as the volume driver.

Example docker-compose.yml

Example rancher-compose.yml

Custom settings

For advanced users, you can specify custom settings for the load balancer in rancher-compose.yml. See the HAProxy documentation for details on the available options you can add for the Rancher HAProxy load balancer.

Stickiness policy

If you want to specify a stickiness policy, you can update the policies in rancher-compose.yml.

Example rancher-compose.yml
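A sketch of a stickiness policy in rancher-compose.yml (field names follow HAProxy’s cookie options; treat them as assumptions):

```yaml
# rancher-compose.yml (sketch): create a new cookie for stickiness
version: '2'
services:
  lb:
    lb_config:
      stickiness_policy:
        name: sticky-cookie
        mode: insert
        indirect: false
        nocache: false
        postonly: false
```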

Rancher Compose examples

Example of a load balancer (L7)

Example docker-compose.yml

Example rancher-compose.yml
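A sketch of an L7 setup combining hostname and path routing (all names are illustrative; the <version> tag is a placeholder):

```yaml
# docker-compose.yml (sketch)
version: '2'
services:
  lb:
    image: rancher/lb-service-haproxy:<version>
    ports:
      - 80:80
  web:
    image: nginx
  api:
    image: nginx
```

```yaml
# rancher-compose.yml (sketch)
version: '2'
services:
  lb:
    lb_config:
      port_rules:
        - source_port: 80
          target_port: 80
          service: api
          hostname: www.example.com
          path: /api
        - source_port: 80
          target_port: 80
          service: web
          hostname: www.example.com
```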

Example of an internal load balancer

To configure an internal load balancer, do not expose any ports, but you can still configure port rules to route traffic to services.
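A sketch of an internal load balancer: no ports are published in docker-compose.yml, and the source port exists only in the port rules (names are illustrative):

```yaml
# docker-compose.yml (sketch): note there is no ports section
version: '2'
services:
  internal-lb:
    image: rancher/lb-service-haproxy:<version>
```

```yaml
# rancher-compose.yml (sketch)
version: '2'
services:
  internal-lb:
    lb_config:
      port_rules:
        - source_port: 80
          target_port: 8080
          service: web
```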

Example docker-compose.yml

Example rancher-compose.yml

Example of SSL termination

Certificates must be added to Rancher and are defined in rancher-compose.yml.

Example docker-compose.yml

Example rancher-compose.yml
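A sketch of SSL termination with a certificate already added to Rancher (certificate and service names are hypothetical; the certs key is an assumption):

```yaml
# docker-compose.yml (sketch)
version: '2'
services:
  lb:
    image: rancher/lb-service-haproxy:<version>
    ports:
      - 443:443
```

```yaml
# rancher-compose.yml (sketch)
version: '2'
services:
  lb:
    lb_config:
      port_rules:
        - source_port: 443
          target_port: 8080
          service: web
          protocol: https
      certs:
        - my-cert          # certificate added to Rancher (hypothetical name)
```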
