LBL DNS Global Load Balancer

LBL®Global Distributed Gateway Global Load Balancer (GLB) is a tool that lets you manage DNS dynamically, exploiting its characteristics to the fullest and applying them to addressing and global load balancing with highly reliable, distributed functionality.

This document describes a reference architecture for installing and configuring LBL®GLB in an authoritative DNS environment distributed across two sites.

Reference Environment

The reference environment consists of two sites that host the primary and secondary DNS servers.

The DNS servers ns1.foo.com, ns2.foo.com, ns3.foo.com, and ns4.foo.com were registered with the following addresses:

ns1.foo.com 10.41.11.22

ns2.foo.com 10.41.11.23

ns3.foo.com 10.43.12.24

ns4.foo.com 10.43.12.25


It is important to note that DNS registrars require the primary and secondary name servers to be reachable at addresses in two different address classes (subnets), to guarantee a minimum of operational continuity.
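As an illustration, the corresponding delegation and glue records at the registrar would look like the following sketch (standard record syntax; the TTL values are illustrative):

    foo.com.       86400  IN  NS  ns1.foo.com.
    foo.com.       86400  IN  NS  ns2.foo.com.
    foo.com.       86400  IN  NS  ns3.foo.com.
    foo.com.       86400  IN  NS  ns4.foo.com.
    ns1.foo.com.   86400  IN  A   10.41.11.22
    ns2.foo.com.   86400  IN  A   10.41.11.23
    ns3.foo.com.   86400  IN  A   10.43.12.24
    ns4.foo.com.   86400  IN  A   10.43.12.25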

Implementation


A possible implementation of a dynamic name-resolution architecture introduces LBL®GLB systems that filter requests by interposing themselves between the clients and the DNS servers.

This solution simplifies the various scenarios by exposing two highly available addresses on different subnets and by balancing requests across the DNS servers.

Up to this point no changes have been made to the DNS configuration. The only additions are services that expose two new addresses and use the existing DNS servers to resolve requests.

The resulting situation, without changes to the DNS registrars, is therefore the following:

ns11.foo.com 10.41.11.30

ns1.foo.com 10.41.11.22

ns2.foo.com 10.41.11.23

ns33.foo.com 10.43.12.30

ns3.foo.com 10.43.12.24

ns4.foo.com 10.43.12.25

No change has been made to the existing infrastructure so far, yet you can already try all the features by setting ns11.foo.com and ns33.foo.com as the DNS servers on the test clients.
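For example, a minimal sketch of /etc/resolv.conf on a Linux test client pointing at the two GLB addresses (other client platforms are configured analogously):

    # /etc/resolv.conf on a test client (sketch)
    # ns11.foo.com (GLB, site 1) and ns33.foo.com (GLB, site 2)
    nameserver 10.41.11.30
    nameserver 10.43.12.30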

Once the tests have been performed, you can change the assignments of the two reference DNS servers at the registrar, moving their names and addresses onto the GLB (a sketch of the resulting delegation follows the map below):

ns11.foo.com 10.41.11.30

ns1.foo.com 10.41.11.22

ns2.foo.com 10.41.11.23

ns33.foo.com 10.43.12.30

ns3.foo.com 10.43.12.24

ns4.foo.com 10.43.12.25
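One plausible reading of this registrar change, sketched below, is that the published name servers for the zone become the two GLB front ends (TTL values are illustrative):

    foo.com.        86400  IN  NS  ns11.foo.com.
    foo.com.        86400  IN  NS  ns33.foo.com.
    ns11.foo.com.   86400  IN  A   10.41.11.30
    ns33.foo.com.   86400  IN  A   10.43.12.30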

Disaster Recovery

In Disaster Recovery architectures it is good practice to split the subject into two specific themes: high availability of the DNS service, and manageability of the DNS.

The high availability of the DNS service is certainly guaranteed by the double resolution path through the two DNS addresses, in this case the VIPs ns11 and ns33. Specifically, if the main site becomes totally unavailable, DNS services remain reachable through the secondary DNS, now represented by the VIP ns33.

This situation, however, cannot be sustained over time, because the secondary site would be unable to change address resolutions, which limits manageability.

Disaster Recovery with Multimaster DNS

A first way to solve the site-manageability problem is to use a multimaster DNS system.

In this case the alignment of the zones of the two masters is guaranteed by redundancy mechanisms that depend on the type of DNS server in use. If you are using the LBL®GLB platform, BIND zones can be kept aligned through the LBL®GLB clustering system.
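As a minimal sketch, each site's BIND server might declare the zone as a master, with the zone files kept aligned by the LBL®GLB clustering layer (whose configuration is product-specific and not shown); the file paths and transfer ACLs below are illustrative assumptions:

    // named.conf fragment on each site's BIND server (sketch).
    // Zone-file alignment between the two masters is assumed to be
    // handled by the LBL®GLB clustering layer, not by BIND itself.
    zone "foo.com" {
        type master;
        file "zones/db.foo.com";
        // allow transfers to the secondaries on both sites' subnets
        allow-transfer { 10.41.11.0/24; 10.43.12.0/24; };
    };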

A possible unavailability of the main site does not affect overall operation, and the management of the secondary site remains completely autonomous even when it is the sole survivor. Specifically, if the main site becomes totally unavailable, you can still reach DNS services through the secondary DNS, now represented by the VIP ns33, and make changes on the surviving master ns3.foo.com.

Disaster Recovery with Primary DNS Replication in the Secondary Site

Another possible Disaster Recovery scenario can be implemented by replicating the primary DNS to the secondary site, which in this case can host a single secondary DNS.

The architecture map becomes the following. The primary DNS ns1.foo.com is replicated through storage replication tools, and its copy in the secondary site is kept powered off.

ns11.foo.com 10.41.11.30

ns1.foo.com 10.41.11.22

ns2.foo.com 10.41.11.23

ns33.foo.com 10.43.12.30

ns4.foo.com 10.43.12.25

In the event of a failure of the primary site, the secondary site will be immediately available, and once the replicated primary DNS is started in the secondary site, long-term management will also be possible.
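As a sketch, the single secondary at the DR site can be a standard BIND secondary of ns1.foo.com; the options and paths below are illustrative assumptions:

    // named.conf fragment on ns4.foo.com (sketch): a standard BIND
    // secondary transferring the zone from the primary ns1.foo.com,
    // whose storage-replicated copy stays powered off until a failover.
    zone "foo.com" {
        type slave;
        file "zones/db.foo.com";
        masters { 10.41.11.22; };    // ns1.foo.com
    };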

In this case, placing LBL®GLB in front makes it possible to keep an external addressing scheme that differs from the internal one while reusing the addresses of the primary DNS. Without it, with different geographic zones, there would be countless implementation problems.

The situation with the primary site in failure would then be the following, with the advantage that the address in the 10.41.11 subnet would not cause global routing problems even if positioned in a different area, because it is placed in the backend:

ns33.foo.com 10.43.12.30

ns1.foo.com 10.41.11.22

ns4.foo.com 10.43.12.25

Disaster Recovery with GLB DNS in the Cloud

To offer very high availability of DNS services, it is possible today to deploy LBL®GLB components directly in the cloud, across differentiated geographical areas.

This solution ensures the availability of the two DNS servers regardless of the availability of the sites, guaranteeing very high reliability (a delegation sketch follows the map below).

ns111.foo.com 12.61.9.30

ns1.foo.com 10.41.11.22

ns2.foo.com 10.41.11.23

ns333.foo.com 21.33.8.30

ns3.foo.com 10.43.12.24

ns4.foo.com 10.43.12.25
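In this scenario the published name servers for the zone would plausibly be the two cloud GLB front ends, as in the following sketch (TTL values are illustrative):

    foo.com.        86400  IN  NS  ns111.foo.com.
    foo.com.        86400  IN  NS  ns333.foo.com.
    ns111.foo.com.  86400  IN  A   12.61.9.30
    ns333.foo.com.  86400  IN  A   21.33.8.30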

In the event of a failure of the main site, resolution requests would still be served by the components in the cloud.

Disaster Recovery with Layer 2 Extension

If the two datacenters belong to the same backbone, allowing addresses to migrate from one datacenter to the other, you can handle both the multimaster DNS scenario and the replicated master DNS scenario by migrating addresses between datacenters through the allocation of the LBL®GLB VIPs in mutual failover.

In this case we analyze the mutual failover configurations of the LBL®GLB components, which make it possible to assign the VIP addresses to either site, in case of failure or scheduled maintenance, using the surviving DNS resources. For the sake of simplicity we will not dwell on the "far" and "near" resource-designation characteristics of the LBL®GLB component, taking this capability for granted and leaving its details to the implementation stage, since it is very simple to implement.


In this case the LBL®GLB systems will be able to fail the addresses over mutually, from site 1 to site 2, either automatically or manually, presenting both addresses in the event of a failure.
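LBL®GLB's own mutual-failover configuration is product-specific and not shown here. As a stand-in illustration of the same idea under the Layer 2 assumption, the following keepalived (VRRP) sketch shows site 1 normally owning the ns11 VIP while standing by for the ns33 VIP; interfaces, router IDs, and priorities are illustrative, and site 2 would mirror the roles:

    # keepalived.conf sketch for site 1 (illustration only; this is not
    # the LBL®GLB mechanism). Site 2 mirrors the MASTER/BACKUP roles.
    vrrp_instance VIP_NS11 {
        state MASTER             # site 1 normally owns ns11.foo.com
        interface eth0
        virtual_router_id 51
        priority 150
        virtual_ipaddress {
            10.41.11.30/24
        }
    }
    vrrp_instance VIP_NS33 {
        state BACKUP             # taken over if site 2 fails
        interface eth0
        virtual_router_id 52
        priority 100
        virtual_ipaddress {
            10.43.12.30/24
        }
    }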

In the event of a failure, the LBL®GLB pair at the DR site exposes the two IP addresses, possibly on different physical interfaces.

Considerations

The architectures presented in the previous paragraphs serve as examples to orient architectural choices for high availability, without being the only possible solutions. Specific architectural studies should be carried out case by case against the existing infrastructure. In any case, the proposed scenarios can be considered a reference for geographically distributed high-availability situations and used as a starting point for designing the optimal infrastructure in each specific analysis.

Considerations regarding filtering, firewalls, DoS/DDoS mitigation, and DNS traffic analysis are features of the tools mentioned in the preceding chapters and have not been covered in this document.