Cluster setup


LBL®Global Distributed Gateway Virtual Appliance (hereafter VAPP) is a tool for balancing and routing traffic at OSI Layer 4 and OSI Layer 7 (HTTP/S, DNS).

LBL®Global Distributed Gateway is a product intended for mission-critical environments; therefore only staff who have attended the course and passed the examination are authorized to certify the installation and maintenance of the products in operation. All certified personnel hold a certificate of course attendance and examination issued by TCOGROUP SRL.

Prerequisites

Before reading and following the instructions in this manual, you must read and carry out the instruction manual “LBL_VAPP_Installation.pdf” / “LBL_VAPP_Installation_eng.pdf”, which describes the steps needed to import the Virtual Appliances and perform their first setup.

To perform the operations in this manual you must hold a “Catalog” license and at least a Standard “HA” license.

Preparing to Use

This manual shows how to create a cluster of the LBL®Global Distributed Gateway module. The module provides ADC, full-reverse-proxy and load-balancing features in high availability through the use of a Virtual IP (VIP). Here we will examine a setup of two nodes in fail-over with each other.

As an example we will use the following IP addresses and networks to simulate a real situation:

  • Heart-Beat: the network used to verify the activity state of the nodes
  • Backend: the network where the services are attached; it is usually reachable through the gateway and in some environments may reside in a separate network/VLAN
  • Public: the network from which service requests arrive
  • VIP: the address shared between the two nodes

Setting static addresses on the LBL®VAPPs

After importing the VAPPs into the virtual environment it is necessary to assign the virtual NICs to the two nodes. In this case each node will be assigned 2 vNICs:

The VIP will be assigned alternately to vNIC A1 and vNIC B1. The node that owns the VIP is called the master.

We will now assign static addresses to the CentOS-based VAPPs. To match the vNICs of the virtual environment with the internal interfaces of the VAPP, it is good practice to record the MAC address of each interface, as this makes the allocation of static IP addresses easier.
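For example, you can list the interfaces and their MAC addresses directly from the VAPP console. The output below is only a sketch: the interface names and MAC values are illustrative and will differ on your system.

# ip -brief link
lo      UNKNOWN  00:00:00:00:00:00  <LOOPBACK,UP,LOWER_UP>
ens192  UP       00:0c:29:aa:bb:01  <BROADCAST,MULTICAST,UP,LOWER_UP>
ens224  UP       00:0c:29:aa:bb:02  <BROADCAST,MULTICAST,UP,LOWER_UP>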

WARNING: by default the network interfaces are assigned as E1000. If you want to use specific drivers, such as VMware VMXNET 3, you need to install the virtualization platform drivers for the required release.

In the CentOS environment the static IP addresses are assigned through the “nmtui” interface; the dynamic address (VIP) will instead be assigned directly by LBL®GDG.

Below are the steps to set up the static addresses of one VAPP; at the end, perform the same operations on the other VAPP:


 

Using the arrow keys, position the cursor on the first element.

Check the MAC address and relate it to its function (Public, Heart-Beat, etc.). It is recommended to change the description of the interface so that it can be quickly identified later. Move the cursor to <Automatic> and change it to <Manual>:

Move the cursor to <Show>; you will be shown how to perform the manual setup of the addresses on the interface. With <Add>, enter the address of the interface, in this case the address designated for the Heart-Beat (NB: remember to also indicate the netmask with /XX at the end of the address):
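For example, assuming 192.168.44.0/24 as the Heart-Beat subnet (an illustrative value; adapt it to your addressing scheme), the entry for node A would be:

192.168.44.1/24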

Once you have finished typing the address, position on <OK> with the arrow keys and confirm with [ENTER].

The interface will return to the list of interfaces to set; with the arrow keys, position on the second interface…

In this case too, change the description of the interface on the basis of its function, identifying it through the MAC address. Then go to the address type, <Automatic>, and change it to <Manual> as shown below.

Add the address/addresses related to the function of the interface, which in this case are the public and the backend. (NB: usually the backend cannot be addressed directly on the interface but is reached through a gateway; in that case adapt the setup to your specific operating environment by adding the gateway.)

With the arrow keys go to the setup confirmation <OK> and then confirm with [ENTER].

After confirming, the result will look like this:

Using the arrow keys, position on <Back> and confirm with the [ENTER] key.

At this point we are going to activate the interfaces with the new addresses. (NB: if you are performing this operation over SSH using the initial DHCP dynamic addresses, the same interfaces will be assigned the new addresses and you will be logged off during the assignment.)

To activate the new addresses, you must first deactivate the previous assignments and then activate the new ones. The symbol “*” indicates that the interface is active. With the [ENTER] key you can deactivate the selected interface, and with another [ENTER] you can reactivate it.

In this case both interfaces were first deactivated and then, with the [ENTER] key and the arrow keys, reactivated…

Once the interfaces are reactivated, the “*” symbol appears again on their left.

With the arrow keys position on <Back> and then confirm with [ENTER].

With the arrow keys position on Quit and then confirm with [ENTER].
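As an alternative to toggling the interfaces in nmtui, the same deactivation/reactivation can be performed with nmcli. This is only a sketch: the connection name “ens192” is illustrative and must match one of the names returned by “nmcli connection show”.

# nmcli connection down ens192
# nmcli connection up ens192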

To check that the addresses are set correctly, run the command “# ip addr”.

If the setting was successful, the interfaces must return static addresses corresponding to the initial network scheme. Note the names of the interfaces for later convenience.

Node A:
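A result similar to the following sketch is expected (the addresses and interface names are illustrative and follow the example scheme of this manual, with 192.168.45.200 used later as the management address):

# ip -brief addr
lo      UNKNOWN  127.0.0.1/8
ens192  UP       192.168.44.1/24
ens224  UP       192.168.43.1/24 192.168.45.200/24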

Repeat the same operations on the other VAPP; the final result will be similar to the following, and in any case consistent with your network addressing scheme.

Node B:

Setting the management address of the LBL®VAPPs

From the command line of the node's VAPP run “# lblsetup”.

With the [ENTER] key go to <Choose> and choose the management address; in this case we will use the 192.168.45.xx network as management…

Then go to the “root” user password field to set the password. In this example we will use “adminadmin”. For security reasons, do not set “adminadmin” in production.

Setting the delegation password allows the VAPPs to be used in hybrid environments, enabling administration hierarchies. This feature allows you to assign groups of VAPPs full autonomy while maintaining global control of the infrastructure. On this subject see the manual “Autonomous Delegated Authentication”.

Once you have set the login, password and administrative delegation, go to <Save & Exit> to save the settings.

For the settings to take effect it is necessary to restart the services with the command “# lblrestart”.

Running the command “# lblrestart”…

The command takes approximately 1 minute, at the end of which the VAPP will be configured with the management addresses on which the web services and the Management Console will respond.

You can check the actual management listeners with the commands:

# ss -ln|grep 4444

And

# ss -ln|grep 54443
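If the management listeners are active, each command should return a LISTEN entry similar to the following sketch (the column layout may vary with the ss version):

tcp   LISTEN   0   128   *:4444    *:*
tcp   LISTEN   0   128   *:54443   *:*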

Note: carry out the same operation on VAPP node B; the result should be similar to the previous command.

Set hostname on LBL GDG powered by CentOS

Run as root

# hostnamectl set-hostname LBLR9GDG001 

# hostnamectl 

# vi /etc/hosts

  Change the name associated with the address 127.0.1.1 to the new name of the VAPP.
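For example, with the new name LBLR9GDG001 used above, the relevant line of /etc/hosts becomes:

127.0.1.1   LBLR9GDG001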

# hostname

# reboot

A reboot is necessary so that applications and services also run with the new name.

Set hostname on LBL GDG powered by Ubuntu

Run as root

# hostnamectl set-hostname LBLR9GDG001

# vi /etc/hosts

Change the name associated with the address 127.0.1.1 to the new name of the VAPP.

# hostname

# reboot

A reboot is necessary so that applications and services also run with the new name.

Disabling the LBL®Platform demo/test services

Before proceeding with the setup of the cluster it is necessary to set the licenses on both nodes and to disable the unneeded modules that start automatically to provide a platform ready for demo/test/prototype use. For this purpose we will shut down the LBL®Platform modules on both nodes and then install the licenses needed for the operation of the cluster.

To disable the LBL®Platform module it is sufficient to type in a browser:

https://192.168.45.200:4444/

(NB: the service has a self-signed digital certificate, so it will be necessary to instruct the browser to continue. It is in any case possible to install a new digital certificate that identifies the service.)
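You can also verify from a shell that the service is answering; with curl, the -k option is needed precisely because the certificate is self-signed:

# curl -k https://192.168.45.200:4444/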

When prompted for a login and password, enter the credentials set earlier in the console, in this case: Login root, Password adminadmin.

Once you have confirmed your login, the Global Distributed Gateway global control will appear, from which we will disable the demo/test modules and then set the licenses, in this order, before the setup of the cluster.

To disable the LBL®Platform module expand the “modules” menu, select “ADC & GLB” and then choose “Edit”:

Expand the “General start parameters” section and change the “Process start” parameter from “automatic” to “manual”.

Once the parameter is changed, an indicator “1 to save” will appear at the top right, to save the configuration.

Following the “1 to save” link you will be able to apply your changes. To carry out the operation press the Save button. (NB: in this case the change and its application will be immediate and will lead to the shutdown of the LBL®Platform module.)

As soon as you press the “Save” button you will be able to describe the operation: all changes are recorded, both to allow roll-back and for audit purposes.

If we go back to the selection “modules” -> “ADC & GLB”, a rotating gear symbol will be shown…

…until the complete shutdown of the module.

NB: carry out the same operation on node B.

Setting the licenses

To install the licenses needed for the operation of the modules, perform the following steps:

  1. Obtain the licenses
  2. Install the licenses

Once you have obtained the licenses you can install them according to their function. First of all we are going to install the “Catalog” license, which serves to populate the local and/or global inventory depending on the license. In this case we need Catalog licenses for the overall control and Standard HA licenses, which can also have the DoS/DDoS Attack Prevention & Mitigation extension, as reported below. The licenses must be present in a directory of your system.

Example:

In the web interface select “nodes” and then press “Actions”, from which you can access the “Install License” menu.

Once you have selected the “Catalog” license, press “Confirm”.

The operation confirmation message is displayed:

At this point we need to set the license of the “LBL®ADC Standard HA” module, which in this example also contains the DoS/DDoS Attack Prevention & Mitigation, DNS Global Load Balancing and Web Application Firewall (WAF) extensions. In order to set the license of the “LBL®ADC Standard HA” module, select “modules” -> “ADC & GLB” and then go to the details of the module.

Also in this case select “Actions” and “Install license”, as before.

 

Select the license “STANDARDHA_DOSDDOS_WAF_DNSGLB” and then confirm.

The operation confirmation message is displayed:

NB: repeat the license-setting operations on node B as well.

To apply the “Catalog” license it is necessary to restart the main LBL®Monitor module through the console or SSH.

Node A:

Node B:

During these restart operations it is possible that the web console shows errors similar to this, due to the restarting services:

As soon as the service is restored, the browser will prompt for the login again.

Populating the catalog with the inventory of the nodes (Inventory)

To add nodes you must have “Catalog” licenses and “root” rights. Before creating a cluster it is necessary to indicate to LBL®Global Distributed Gateway the nodes to administer. For this purpose use the settings menu in the System bar -> Nodes.

To add or remove nodes you must have “root” rights. Other users, even if declared administrators, will not be able to add or delete the nodes to administer.

 


From the Nodes menu you can delete, add and parameterize the nodes.

 

Then proceed with the addition of the second node:

Once you have added the two nodes, save the configuration.

 

 

Indicate the reason for the change to the configuration.

You can now select the Nodes menu to view the status of the nodes.




  

Creating the LBL®Workspace Cluster

To create the cluster you must have “Catalog” licenses and “root” rights. Other users, even if declared administrators, will not be able to create or destroy clusters.

To ensure that all general parameterizations are redundant, the first cluster that we are going to create is the LBL®Workspace cluster. This will allow us to propagate the general parameters to at least one other node; in case of need, by accessing the second node, you will have the same configuration automatically replicated by the cluster.

You access the creation/destruction of a cluster through the menus in the system bar, which appears at the top right of the browser.

It is now possible to note that the synthetic display of Global Distributed Gateway shows the status of the two nodes for the fundamental values: CPU, disk space, ADC tunnels and their high-water mark, memory and swap usage.

 


From the Clusters menu you can delete, create and parameterize the clusters…

Choosing Add Cluster will prompt you to enter a name and a description for the cluster that you want to create.

 

Once confirmed with OK, you can configure the cluster through the [Edit] button.

 

Once you have selected [Edit] and expanded the “Processes” parameters, it is possible to associate the processes of this “Cluster”.

 

With the [+] button we will associate the second process that constitutes the “Cluster”.

For convenience, the [+] button copies the characteristics of the entry on which it is pressed. It is then sufficient to change the necessary values, in this case only the “Node Address”.

 

It is sufficient to save the configuration to enable the cluster…

 

 

Selecting Clusters, the composition of the newly created cluster will appear…

 

Returning to the LBL®Global Distributed Gateway view, we see that the system detects a misalignment of the configuration.

This is due to the fact that the node of the LBL®Global Distributed Gateway cluster on which we made the configurations is not aligned with the configurations of the node that has only now been clustered, and must therefore be realigned. LBL®Global Distributed Gateway is equipped with a powerful engine that verifies the consistency of the configurations of the nodes in the cluster; if they are misaligned it raises an error message and then offers the possibility to reconcile the configurations. Select the configuration that you want to reconcile and press Edit.

The system will show the misaligned configurations with their characteristics. The operator is asked to select the configuration from which to align the other nodes of the cluster. In this case we will start from Oklahoma, which has the most recent modification date and a configuration newer than Oregon's.

 

After confirming, the system repeats the consistency check.

Returning to the Global Distributed Gateway, the inconsistency reports will have disappeared.

Creating the LBL®ADC Standard HA Cluster

As for the creation of the LBL®Global Distributed Gateway cluster, select Settings and Clusters.

Select Add Cluster.

Set the name of the cluster and the description.

Go to [Edit] to complete the cluster configuration.

Set the address of the node on which to create the cluster and then the process (module) to put in the cluster.

Press Add new item, change the address to that of the second node, and then go to the “1 to save” link…

[Save] the configuration.

If you select Clusters you will find 2 clusters, one relating to the Workspace configurations and one relating to the LBL®ADC Standard HA cluster.

Cluster Management Overview

If you select “ADC Settings” -> “ADCs”, a synthetic view of all the ADC modules managed by the Global Distributed Gateway is shown. In this case the “ADC” named STDHA_EDU_cloud is in evidence: it appears as a single “Cluster” object formed by multiple processes that must be managed simultaneously. For this reason the system exposes a single element.


 

If we go to Edit and then select the cluster…

The system will highlight the nodes that make up the cluster in a synthetic manner, giving the possibility to navigate to the specific sections to verify their characteristics. If the cluster nodes are in the running state it is possible to fully explore all the details.


When performing settings on a cluster, all operations are replicated to all the modules that make up the cluster.

Setting the networks of LBL®ADC Standard HA

The configurations are thus kept constantly aligned in a manner transparent to the operator. To set the “LBL®ADC Standard HA” cluster go to [Edit].

 

In order to use the same configurations across multiple nodes with node-specific parameters, such as local IP addresses, you can use variables that associate a name with a value local to a node or a process and that are always available during setup.

The variables can be of two types: associated to the node or associated to the process/module. In this case, variables associated with the process/module were prepared that describe the public network, the private network and the backend network. They also report the value of a virtual address and its netmask.

Variables can be used in the setup, thus keeping values that differ for each process or node but identical configurations. To use a variable it is sufficient to indicate in the configuration the name of the variable between two #.

E.g.: #LBL_ADDRESS_IPV4_PRIVATE#
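A minimal sketch of such process/module variables is shown below. Only LBL_ADDRESS_IPV4_PRIVATE and lbl_ADDRESS_IPV4_PRIVATE_peer appear in this manual; the other names and all the values are illustrative and must match your network layout:

LBL_ADDRESS_IPV4_PUBLIC       = 192.168.43.1
LBL_ADDRESS_IPV4_PRIVATE      = 192.168.44.1
lbl_ADDRESS_IPV4_PRIVATE_peer = 192.168.44.2
LBL_ADDRESS_IPV4_BACKEND      = 192.168.45.200
LBL_ADDRESS_IPV4_VIP          = 192.168.43.10
LBL_NETMASK_IPV4_VIP          = 255.255.255.0

In a configuration field you would then write, for example, #LBL_ADDRESS_IPV4_VIP# instead of the literal address, and each process resolves the variable to its own local value.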

Setting the heart-beat of LBL®ADC Standard HA

To keep the application cluster consistent at run time, a constant verification dialogue is necessary between the nodes that compose it, in order to determine the “Master” state. The Heart-Beat network serves this purpose, and LBL®ADC Standard HA uses it to exchange information regarding the activity status of the individual nodes.

The Heart-Beat network can be set in two modes, Multicast or UDP, and both can be used in encrypted mode.

The Multicast mode is very convenient where this protocol is allowed, since it allows the lookup-discovery of the nodes belonging to the cluster. To use this mode it is necessary to check during the installation whether the protocol is enabled in the datacenter. Otherwise, use the UDP protocol, which is always enabled.

The UDP mode is indispensable on geographic networks or on local or Cloud installations where the Multicast protocol is not enabled. The strength of this protocol is that it is always enabled in all circumstances.
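As a quick preliminary check that Multicast actually works between the two nodes, a multicast test utility can be used if one is available on your platform; for example omping (an assumption: it is not part of the VAPP and may need to be installed separately), run simultaneously on both nodes with their Heart-Beat addresses (illustrative values below):

# omping 192.168.44.1 192.168.44.2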

Setting the UDP heart-beat of LBL®ADC Standard HA

Setting the Heart-Beat through UDP is an alternative to setting the Heart-Beat through Multicast. If you have already set the Heart-Beat through Multicast, skip this paragraph.

Setting the Heart-Beat through the UDP protocol is required in cases of installation in geographic environments or where there is no Multicast protocol.

 

 

Change the values according to the diagram of the network…

Modified values as shown in the diagram

Preset values



The Heart-Beat protocol in the Lookup panel is set to UDP by default. All the other parameters have already been set in the variables, therefore there are no other operations to carry out.

 

Unlike Multicast, which is able to perform an automatic lookup-discovery, with UDP it is necessary to indicate the peer nodes that constitute the cluster. Since these parameters differ from node to node, a variable previously set among the process/module variables already parameterizes the address of the peer node in the cluster: “lbl_ADDRESS_IPV4_PRIVATE_peer”.

In the event of a cluster of more than two nodes, add as many variables as the additional nodes of the cluster and then add them in the “UDP nodes lookup” panel.

Save the configuration…


 


 


 

Setting the Multicast heart-beat of LBL®ADC Standard HA

Setting the Heart-Beat through Multicast is an alternative to setting the Heart-Beat through UDP. If you have already set the Heart-Beat through UDP, skip this paragraph.

To set the Heart-Beat through Multicast it is sufficient to modify the values of the process variables with the respective values of the network layout and change the protocol from UDP to Multicast.

Change the values according to the diagram of the network…

Preset values

Modified values as shown in the diagram


…and perform the save…

Describe the reason for the modification of parameters…

Setting the VIPs (Virtual IPs) of LBL®ADC Standard HA

To set the VIPs (Virtual IPs) of the ADC select Settings -> Virtual IPs.

 

The processes and clusters that can manage virtual addresses will be displayed. In this case we will choose our cluster.

A VIP is basically formed of three panels:

  • Basic Parameters
  • Health check of the public network
  • Health check of the backend network

The basic parameters panel:

Enable= : default value "true"

Enables or disables the virtual address.

Description= : default value ""

Describes the virtual address.

Address= : default value ""

It is the virtual address in numeric form (e.g. 192.168.43.10).

For IPv6 the address must be written between square brackets, e.g. [fdd4:3C3F:yyyy::99].

Netmask= : default value "255.255.255.0"

It is the netmask of the virtual address in numeric form (e.g. 255.255.255.0).

If the address is IPv6, the value is determined by the prefix length you want to obtain; e.g. for a /64 prefix the address setting will be fdd4:3C3F:yyyy::99/64.

HealthCheckPort= : default value ""

It is the port on which to run the health check test. If left as "" the health check is not performed. This value is very important because it determines the activity status not only of the IP address but also of the balancing and routing system. It must be set, usually to 80 or 443; if the port is in SSL, set HealthCheckSSL to true.

HealthCheckSSL= : default value "false"

If set to true, the HTTP health check is performed by establishing an encrypted connection.

HealthCheckUriPath= : default value "/LBLHealthCheck"

It is the path of the activity health check of the balancing system. This value is normally never changed, unless it is already in use by other applications. If this value is changed it is necessary to also modify it in "systemsmonitor_m.xml", "iproxy.xml" and "healthcheck.xml".
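Once the cluster is running, the health check endpoint can also be probed manually. A sketch assuming the default path, the example VIP 192.168.43.10 and HealthCheckPort 80 (use https and -k if HealthCheckSSL is true):

# curl -v http://192.168.43.10:80/LBLHealthCheck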

The minimum values required are:

  1. HealthCheckPort
  2. Adapter device and deviceName
  3. At least 3 public addresses (for certification)
  4. At least 3 backend addresses (for certification)

For our VIP cluster we will parameterize the basic parameters panel, identifying the network cards simply by exploring the nodes of the cluster.

To explore the network cards it is sufficient to go to the link that displays the nodes of the cluster, from which to obtain the names of the network interfaces.


If the device names were different, as in the case of hybrid installations, you could set up a variable local to the process or to the node with the value corresponding to the local interface, and indicate the name of the variable in “device” and “deviceName”.

The “Public network health checks” and “Backend network health checks” panels are used to verify the actual operation of the networks. In fact it is not sufficient to verify whether the link is up, because the connection to a switch will usually always be up even if the services are unreachable.

For this purpose it is necessary to identify at least 3 services on the public network and at least 3 services on the backend network whose reachability is checked. It is possible to use both ICMP-based services (ping) and connectivity services (TCP connect) on which no operation is carried out.

It is possible to run the tests beforehand through the Network Utility checks reachable from the Navigation bar.

In this case the public addresses 192.168.43.131, 192.168.43.115 and 192.168.43.118 are reachable via ICMP.

The backend is reachable via ICMP at address 192.168.45.131, while addresses 192.168.45.115 and 192.168.45.118 are reachable via connect to port 22 (ssh).
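The same tests can be reproduced from the VAPP shell before entering them in the panels; ping is normally available, while the TCP connect test is shown here with nc (an assumption: the utility and its -z option may not be present on every VAPP; any TCP connect test is equivalent):

# ping -c 3 192.168.43.131
# ping -c 3 192.168.45.131
# nc -zv 192.168.45.115 22
# nc -zv 192.168.45.118 22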

The parameters will then be:

Once the parameterisation is completed, save the configuration…

Describe the reason for the change of parameters and confirm…

Starting the nodes of the LBL®ADC Standard HA Cluster

Once the heart-beat parameters and the first VIP have been set, it is possible to start the cluster processes. We carry out this operation from the Clusters panel…

Select one of the nodes of the cluster, then press Actions and Start Process…

You will be prompted to confirm the operation…

Once confirmed, the symbol will change to running and you will begin to see CPU activity…

After a few moments you can check in the Services panel that the node, being the only one running, has been given the Master status…

In the Networks panel it is possible to check the assignment of the VIP's IP address to the designated interface…

Now proceed to start the second node of the cluster…

As soon as the start is run, you will notice the change of state and the CPU activity of the node…

…with the corresponding assignment of the VIP.

The virtual address has migrated to the node just started.

It is possible to arbitrarily move the virtual address (VIP) from one node to the other, simply by performing “Promote Master” from the panel of the node that you want to promote to master…

After a few moments the virtual address will migrate to the selected node.

Enabling automatic start of the processes at VAPP startup

To make the processes start automatically at Virtual Appliance startup, position on Clusters -> [Edit].


 

In the “General start parameters” panel change the “Process start” parameter from “manual” to “automatic”.

Run the configuration [Save], which will automatically redistribute the new configuration to the processes.

Describe the change operation.

Both nodes of the cluster will perform a restart, returning to the running state.

Hierarchy of assignment of the master node

With LBL®ADC Standard it is possible to indicate a hierarchy for assigning the master state to the individual nodes. This functionality is very useful where there are sites in which one node is preferred as master over another during normal operation. A typical example of the use of this feature is a Business Continuity configuration, or geographic networks with addresses positioned in different regions (e.g. Elastic IPs).

To determine a priori which node must be assigned the master status at start, it is sufficient to modify the “weight” of the node, which by default is set to 100. In this case we will adjust the weight of the OKLAHOMA node from 100 to 110.

We save the change…

We describe the change….

We then re-initialize the services associated with the determination of the hierarchy.

Reinit of the services associated with the determination of the hierarchy

Confirmation of the operation

Reinit of the services associated with the determination of the hierarchy

Confirmation of the operation

At the end of the “Reinit” of the service, the Virtual IP will be assigned to the node with the higher “weight”.

You can always manually reassign the VIP to another node, until the next restart of the instances, through the “Promote master” command.

The result of “Promote master” is the assignment of the VIP to the selected node within a few moments.