In the data center, new demands for highly distributed reliability, even across multi-site architectures, require tools that are dynamic, flexible, easy to use and, above all, simple to maintain. The data center is an ever-changing ecosystem: in such complex and dynamic environments, any handwritten operational documentation of the real environment becomes obsolete almost immediately.
Oplon Commander arises from the need for high-reliability tools that facilitate the automatic or semi-automatic management of operational processes: tools whose correct operation can be certified at any time, so that they are ready for unplanned contingencies. Keeping an environment that is in constant evolution "certified" at the same time means performing continuous tests with methods that always document, simply and immediately, the actual relationship between processes and operational reality.
As part of Business Continuity and Disaster Recovery processes, Oplon introduces a new concept of high reliability, taking on the role of task coordinator in a mission-critical or business-critical data center through its ability to orchestrate and execute services in the desired sequence, coordinating and driving the actions on the technological components of the infrastructure.
The ability to describe processes, test them and document them is fundamental in mission-critical activities, where some procedures are used only in cases of real need and at moments that cannot be chosen in advance. In these cases it is essential to have precise reference points, with the fewest possible actions to carry out, in a context that allows constant verification of progress.
This result can be achieved with different techniques, normally summarized in a manual containing a set of procedures, often scripts, that must be executed in pre-established sequences.
One major limitation of these techniques is keeping the procedures up to date and the lack of homogeneity among the operations. Another is that these scripts often grow to considerable size and complexity, since they must incorporate both logic and action, and they become difficult to maintain.
It is precisely in the orchestration and automation of the activities required in a complex environment that Oplon Commander expresses its full potential.
Oplon Commander Work Flow & Decision Engine
Oplon Commander is the Oplon module with integrated geographic clustering that responds to the new high-reliability requirements for services, introducing a new concept of high reliability at the application level and taking on the role of activity orchestrator in a mission-critical data center.
Oplon Commander consists of two main modules: Oplon Commander Work Flow (a list of actions to be performed), the workflow executor, and Oplon Commander Decision Engine (an event detector), a decision-making engine able to trigger workflows. The two modules are designed to work in cooperation with each other or, if no automatic operation is required, Oplon Commander Work Flow can be driven manually from any mobile device.
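The cooperation between the two modules can be pictured as a decision engine that, when a monitored condition fires, launches a workflow of ordered, logged steps. The following is an illustrative sketch only: Oplon's actual workflow format and APIs are not shown in this document, and the `Step`, `Workflow` and `DecisionEngine` names below are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Step:
    name: str
    action: Callable[[], bool]  # returns True on success

@dataclass
class Workflow:
    name: str
    steps: List[Step]
    log: List[str] = field(default_factory=list)

    def run(self) -> bool:
        """Execute the steps in the desired sequence, stopping at the first failure."""
        for step in self.steps:
            ok = step.action()
            self.log.append(f"{step.name}: {'OK' if ok else 'FAILED'}")
            if not ok:
                return False
        return True

class DecisionEngine:
    """Event detector: triggers a workflow when a monitored condition fires."""
    def __init__(self, condition: Callable[[], bool], workflow: Workflow):
        self.condition = condition
        self.workflow = workflow

    def evaluate(self) -> bool:
        if self.condition():
            return self.workflow.run()
        return True  # nothing to do

# Example: a failover workflow triggered when the primary site is considered down.
failover = Workflow("site-failover", [
    Step("stop-primary-services", lambda: True),
    Step("promote-secondary-db", lambda: True),
    Step("switch-dns", lambda: True),
])
engine = DecisionEngine(condition=lambda: True, workflow=failover)
engine.evaluate()
```

The separation mirrors the document's decomposition into logic (the condition) and action (the step list): either side can be tested and documented independently.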
Thanks to its advanced ability to constantly perform health checks on all components, it can orchestrate failover, in both physical and virtual environments, in a simple, effective and traceable way that allows for self-documentation.
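A health check of this kind can be as simple as verifying that every monitored component still answers on its service port. This is a generic stand-in, not Oplon's actual probe mechanism; the hosts and ports below are placeholders.

```python
import socket

def check_tcp(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def site_healthy(components: dict) -> bool:
    """A site counts as healthy only if every monitored component answers."""
    return all(check_tcp(h, p) for h, p in components.values())

# Example component map for a primary site (addresses are illustrative).
primary = {
    "db":      ("10.0.0.10", 5432),
    "app":     ("10.0.0.11", 8443),
    "storage": ("10.0.0.12", 3260),
}
# A decision engine would poll site_healthy(primary) and, on repeated
# failure, trigger the failover workflow.
```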
The solution is not invasive on the infrastructure and is not meant to replace solutions such as VMware Site Recovery Manager or DRS; rather, it makes the best use of all the components (systems, virtualizers, storage, databases, etc.), allowing critical operations, and above all maintenance, to be managed quickly, safely and easily.
No additional hardware is required. No compatibility matrix is required.
Oplon Commander simplified logic in distributed environments
Oplon Commander has been designed to activate, monitor and schedule procedures, and to follow their progress, verifying that they have been performed correctly.
Oplon Commander offers a scheduling service that can also be accessed through secure remote invocations, Remote Workflow Command (RWC), to manage the life cycle of the entire service in all its complexity and articulation across multiple sites and in hybrid cloud architectures.
A Work Flow can be started from the web interface, automatically by the Oplon Commander Decision Engine, or from a command-line procedure for integration with third-party tools. A Work Flow can be executed either fully automatically or, where criticality requires it, step by step under operator control. It is therefore possible to structure Business Continuity procedures (typically with automatic decisions), Disaster Recovery procedures (typically with human decisions) and application watchdogs, while maintaining visual and contextual documentation of the procedures.
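The two execution modes described above, fully automatic versus step by step, can be sketched with a single runner that optionally asks for confirmation before each step. This is a stand-alone illustration, not Oplon's interface: the `confirm` callback stands in for the operator's approval click.

```python
from typing import Callable, List, Optional, Tuple

def run_workflow(steps: List[Tuple[str, Callable[[], None]]],
                 confirm: Optional[Callable[[str], bool]] = None) -> List[str]:
    """Run named steps in order.

    confirm=None     -> fully automatic execution (Business Continuity style)
    confirm=callback -> each step waits for operator approval (Disaster
                        Recovery style); a refused step is skipped and logged.
    """
    executed = []
    for name, action in steps:
        if confirm is not None and not confirm(name):
            executed.append(f"{name}: skipped by operator")
            continue
        action()
        executed.append(f"{name}: done")
    return executed

steps = [("quiesce-apps", lambda: None),
         ("sync-storage", lambda: None),
         ("restart-services", lambda: None)]

auto_log = run_workflow(steps)                                       # automatic
manual_log = run_workflow(steps, confirm=lambda n: n != "restart-services")
```

Either way, the returned log gives the visual and contextual documentation of what was actually executed.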
Oplon Commander provides a web interface for starting operations without resorting to command-line launches with administrator or root permissions. The available operations are listed so that the operator can start them easily, without intervention by specialized operators who may not be immediately available in times of crisis.
Decomposing high-reliability processes into logic (Decision Engine) and action (Work Flow), together with the simple description of procedures as workflows, self-documents the actions that provide high reliability across the entire data center.
The ability to navigate through the actions that would be taken in case of failure, in critical moments, or during simple routine procedures makes the processes developed within the data center more trustworthy, because they are controllable and self-documented.
The HTML5 graphical interface allows complete drill-down navigation of the procedures which, once started, can be followed step by step as they evolve and progress.
Any changes made to configurations or scripts are recorded and can be retrieved at any time, both for regression to a previous state and for security compliance investigations. For each change, the activity is recorded both centrally and at node level, to trace who performed it.
All operations are performed through operator login, with complete tracking in the event-logging database, both at the individual execution points and at the general level.
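The change-recording idea can be sketched as an append-only history in which every configuration change is stored with its node, author and a content hash, so that earlier versions remain retrievable for regression or compliance review. This is a hypothetical illustration; Oplon's actual storage format is not described here.

```python
import hashlib
from datetime import datetime, timezone
from typing import Optional

class ChangeLog:
    def __init__(self):
        self.entries = []  # append-only history, never rewritten

    def record(self, node: str, path: str, content: str, operator: str):
        """Store one change with its author, timestamp and content hash."""
        self.entries.append({
            "node": node,
            "path": path,
            "operator": operator,
            "sha256": hashlib.sha256(content.encode()).hexdigest(),
            "content": content,
            "when": datetime.now(timezone.utc).isoformat(),
        })

    def previous_version(self, path: str) -> Optional[str]:
        """Return the next-to-last recorded content for a path, if any."""
        versions = [e["content"] for e in self.entries if e["path"] == path]
        return versions[-2] if len(versions) >= 2 else None

# Example: two successive edits to the same (illustrative) file.
log = ChangeLog()
log.record("node-a", "/etc/oplon/failover.conf", "timeout=30", "alice")
log.record("node-a", "/etc/oplon/failover.conf", "timeout=60", "bob")
```

Keeping the history append-only is what makes both regression and auditing straightforward: the earlier state is never overwritten, only superseded.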