ADC Layer 2


LBL ADC Layer 2 is the Layer 2 balancing functionality that balances connections by performing DNAT while preserving the client source IP address.

To use the LBL ADC Layer 2 functionality, LBL release 9.9.0 or higher is required. If upgrading from a release lower than 9.9.0, update to the new release with the utilities provided on the oplon site (see the LBL_GDG_InstallUpdate manual) and update the operating system through an Internet connection.

Layer 2 packet management

LBL ADC Layer 2 uses the kernel packet-routing queuing system, configured through iptables commands, so that the queued packets can then be processed by LBL.

For each stream, two queues are used: the first receives the packets from the client through one network interface, the second receives the reply packets coming from the endpoint through another network interface.

In the example below, two proxies receive traffic from inside the datacenter, balanced at Layer 2, and return the packets through their respective interfaces toward the ADC system.

Requests arriving from the client on interface enp0s10 leave from interface enp0s11 with the same client IP, modified by the ADC. Once processed by the proxy, the response packets are routed back to the ADC, which rewrites them to deliver the response to the client that made the request.

The client and server ports of a request can differ; these too are rewritten by the ADC during the exchange.

To create the Layer 2 queues, two levels must be configured:

1) Creation of the input and output queues at the IP stack level

2) Configuration of the ADC to use those queues for packet processing

Creating the IP stack queues

To manage packets at Layer 2 and their routing, LBL ADC uses the queue manager configured with iptables.

For each logical flow LBL ADC uses two queues: the first is associated with the interface communicating with the client, the second with the interface toward the services. The queues are numbered so that they can then be referenced by LBL ADC.

LBL ADC can manage a large number of streams, each associated with its own queues, simultaneously.

The diagram below takes as an example the use of the Layer 2 component to balance two proxies and place them in high availability. Queue 0 is associated with the adapter toward the client; queue 1 is associated with the adapter toward the proxies.

To set up the queues, run the following commands as root.

# iptables -t mangle -A PREROUTING -i enp0s10 -p tcp -s 192.168.43.0/24 -j NFQUEUE --queue-num 0

# iptables -t mangle -A FORWARD -i enp0s17 -p tcp -s 192.168.31.0/24 -j NFQUEUE --queue-num 1
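
For convenience, the two rules can also be applied from a single script. The following is only a minimal sketch, assuming the interface names and subnets of this example; adapt them to the actual installation:

#!/bin/bash
# Minimal sketch: create the two NFQUEUE queues used by LBL ADC Layer 2.
# Interface names and subnets are example values and must be adapted.

CLIENT_IF=enp0s10            # interface receiving client traffic
CLIENT_NET=192.168.43.0/24   # client subnet
SERVICE_IF=enp0s17           # interface receiving replies from the services
SERVICE_NET=192.168.31.0/24  # services subnet

# Queue 0: packets arriving from the clients
iptables -t mangle -A PREROUTING -i "$CLIENT_IF" -p tcp -s "$CLIENT_NET" -j NFQUEUE --queue-num 0

# Queue 1: reply packets coming back from the services
iptables -t mangle -A FORWARD -i "$SERVICE_IF" -p tcp -s "$SERVICE_NET" -j NFQUEUE --queue-num 1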

 

To verify the packet flow at a low level, use the following commands in two terminals, one for each queue. The commands display the packet flows with their direction:

Data flow on the interface toward the client

# tcpdump -i enp0s10 port 80 or port 8080 or port 3128

 

Data flow on the interface toward the services

# tcpdump -i enp0s17 port 8080 or port 80 or port 3128

 

Another required operation is to set the IP stack to forward packets.

# sysctl net.ipv4.ip_forward=1
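
To confirm that packet forwarding is enabled, the value can be read back:

# sysctl net.ipv4.ip_forward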

 

To verify the list of assigned queues, run:

# iptables -t mangle -L

 

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
NFQUEUE    tcp  --  192.168.43.0/24      anywhere             NFQUEUE num 0

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
NFQUEUE    tcp  --  192.168.31.0/24      anywhere             NFQUEUE num 1

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination

To check the activity on the individual queues, run:

# cat /proc/net/netfilter/nfnetlink_queue

 

0 25214     0 65531 2 0 0  97 1

1 2691274501 0 65531 2 0 0  120 1

A- queue number

B- portid: peer port id; in practice this is usually the process ID of the software listening on the queue

C- queue total: current number of packets waiting in the queue

D- copy mode: 0 and 1 mean the message only provides metadata; 2 means the message provides part of the packet, up to copy range bytes

E- copy range: length of packet data to put in the message

F- queue dropped: number of packets dropped because the queue was full

G- user dropped: number of packets dropped because the netlink message could not be sent to userspace. If this counter is not zero, try to increase the netlink buffer size (see the sketch after this list). On the application side, a gap in the packet ids will be visible if netlink messages are lost.

H- id sequence: packet id of the last packet

I- always 1
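
If the user dropped counter grows, one commonly used mitigation (shown here only as a sketch, since the effective buffer also depends on what the application listening on the queue requests) is to raise the kernel limits for socket receive buffers:

# Sketch only: raise the maximum and default socket receive buffer sizes.
# The values are examples; the application reading the queue must also be
# allowed or configured to use a larger netlink buffer.
sysctl -w net.core.rmem_max=8388608
sysctl -w net.core.rmem_default=8388608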

Associating the packet queues with the ADC routing

LBL ADC forwards packets in DNAT while maintaining the session. To associate the queues with the DNAT processing performed by the ADC, it is sufficient to use the templates already present in the deployments.

After logging in, go to: ADC Settings>ADCs>Select the listener templates

In the list of listener templates, search for DNAT.

The DNAT search highlights the listener template for the Layer 2 queue association.

Copy the listener template onto the module, in this case A10_LBLGoPlatform:

Returning to ADC Settings>ADCs, the result is the following:

Repeat the operation for the endpoint grouping:

In the search field, type DNAT:

Once you have typed DNAT, copy the endpoint grouping template onto the module, in this case A10_LBLGoPlatform:

Copying the endpoint grouping onto the module:

Returning to the ADC Settings>ADCs view, the situation is as follows:

Save the settings made up to now:

Run the proposed Save Settings:

Indicate the reason for saving:

Once the Layer 2 template has been copied onto the module, associate the queues with the listener:

ADC Settings>ADCs>Edit the module:

Expand the Listeners panel and press the EEA details button:

The listener will have the following characteristics: osiLayer set to 2, layer2QueueIn set to 0 and layer2QueueOut set to 1. In this case there is no need to change the queues, since they were previously created as 0 and 1 respectively.

It should also be noted that the listener address is 0.0.0.0 and the port is 80. These values are not taken into consideration, because it is the queues, created on a given NIC with certain filters, that determine which traffic is handled. LBL ADC will nevertheless transparently perform any port rewriting in case the received packets and the packets forwarded toward the services use different ports.

Routing to the endpoints is handled in exactly the same way as at Layer 4. Indicate the addresses and ports on which the services respond:

ADC Settings>ADCs>Edit the affected module>Expand the Endpoints Grouping panel>EEA details

Enter the detail of the domain (which in this case is empty):

Modify, add or remove the endpoints. In this case there are two endpoints that respond on port 8080.

It is very important to assign associative names to the endpoints, because these will be needed to perform the health check and to disable routing if one or more services become unreachable.

If the module is in the running state, run the reinit.

Setting up the service health checks

The Layer 2 system is able to redirect requests to the surviving services on the basis of their health. To check service health, set the health check through the module:

Reliability Tools>Services Check>Edit

After selecting the module, set the service health checks by assigning the associative names:

Save and run the indicated reinit:

Routing setup on the services side

To ensure that the services route the packets coming from the ADC correctly, the routing must be set up in the respective systems that host the services.

Using the tools provided by the individual operating system or product vendors, packets must be forced to return through the NIC they arrived on. The following example is not exhaustive, but it is useful as a basis to interpret and adapt to individual installations:

Set up two mnemonic tables, "1 rt_dnatnet" and "2 rt_internet":

vi /etc/iproute2/rt_tables

#
# reserved values
#
255 local
254 main
253 default
0 unspec
#
# local
#
#1 inr.ruhep
1 rt_dnatnet
2 rt_internet

As the root user, set the routing of the packets coming from the ADC, using the ADC itself as the gateway:

# ip route add 192.168.31.0/24 dev enp0s17 src 192.168.31.11 table rt_dnatnet

# ip route add default via 192.168.31.1 dev enp0s17 table rt_dnatnet

# ip rule add table rt_dnatnet from 192.168.31.11
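
To verify that the policy routing is in place, the rules and the contents of the rt_dnatnet table can be listed:

# ip rule show
# ip route show table rt_dnatnet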

 

To verify the traffic at a low level and the direction of the packets, run the following command:

# tcpdump -i enp0s17 port 80 or port 8080 or port 3128

 

Support material and templates

As templates, a series of Linux scripts for setting up the routing, both on the ADC side and on the service side, can be found in:

(LBL_HOME)/legacyBin/Linux/ProxyDNAT

+---CentOSNetConfProxy
|   +---System000
|   |       CentOS_symmetrically.sh
|   |       CentOS_SimmetricInternet.sh
|   |       CentOS_SimmetricView.sh
|   |       ifcfg-backend
|   |       ifcfg-DNATNET
|   |       ifcfg-INTERNET_DHCP
|   |       ifcfg-WIRELESS
|   |       Readme.txt
|   |
|   \---System001
|           CentOS_symmetrically.sh
|           CentOS_SimmetricInternet.sh
|           CentOS_SimmetricView.sh
|           ifcfg-backend
|           ifcfg-DNATNET
|           ifcfg-INTERNET_DHCP
|           ifcfg-WIRELESS
|           Readme.txt
|
\---NetConfVAPP
        CentOS_NFQUEUE.sh
        CentOS_NFQUEUEView0.sh
        CentOS_NFQUEUEView1.sh

 

Making routing rules persistent

Attention:

Routing rules and queue settings must be made persistent, and the individual vApps or the individual operating systems delivering the services should be set up and verified automatically.
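
As an illustrative sketch only (the exact mechanism depends on the distribution in use), the settings shown in this chapter could be persisted by saving the sysctl value in a drop-in file and by re-applying the queue and routing commands from a systemd unit at boot. The file names, paths and the setup script referenced below are hypothetical examples:

# Persist IP forwarding (hypothetical drop-in file name)
cat > /etc/sysctl.d/99-lbl-forward.conf <<'EOF'
net.ipv4.ip_forward = 1
EOF

# Hypothetical systemd unit that re-applies the NFQUEUE and routing rules at boot;
# /usr/local/sbin/lbl-layer2-setup.sh is a placeholder for a script containing the
# iptables, ip route and ip rule commands shown in this chapter.
cat > /etc/systemd/system/lbl-layer2-net.service <<'EOF'
[Unit]
Description=Apply LBL ADC Layer 2 queue and routing rules
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/lbl-layer2-setup.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable lbl-layer2-net.service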