In the data center sphere, new demands for highly distributed reliability, even in multi-site architectures, call for tools that are dynamic, flexible, easy to use and, above all, simple to maintain. The data center is an ever-changing ecosystem: in such complex and dynamic environments, any handwritten operational documentation of the real environment becomes obsolete almost immediately.

Oplon Commander arises from the need for high-reliability tools that facilitate the automatic or semi-automatic take-over of operational processes: tools whose correct operation can be certified at any time, so that they are ready for unplanned contingencies. Keeping an environment that is constantly evolving and at the same time "certified" means performing continuous tests, with methods that document the actual relationship between processes and operational reality simply and immediately, at all times.
In the context of Business Continuity and Disaster Recovery processes, Oplon introduces a new concept of high reliability, filling the role of coordinator of activities in a mission-critical or business-critical data center: it orchestrates and executes the start-up of services in the desired sequence, coordinating and piloting actions on the technological components of the infrastructure.
The possibility of describing, testing and documenting processes is essential in mission-critical activities, where certain procedures are used only in cases of actual necessity and at times that cannot be chosen a priori. In these cases it is essential to have precise reference points, with as few actions as possible to carry out, in a context that allows progress to be checked constantly.
This result can be achieved with different techniques that are normally summarized in a manual with a set of procedures, often “scripts”, that must be used in pre-established sequences.
Major limitations of such techniques are keeping the procedures up to date and the lack of homogeneity among operations. A further limitation is that these scripts often grow very large and complex, since they must incorporate both logic and action, and so become difficult to maintain.
It is precisely in the orchestration and automation of the tasks required in a complex environment that Oplon Commander expresses its full potential.
Oplon Commander is Oplon's integrated geographic module that responds to this need by introducing a new concept of high reliability in the application domain, filling the role of task orchestrator in a mission-critical data center.
Oplon Commander consists of two main modules: Oplon Commander Work Flow (action list), the workflow executor, and Oplon Commander Decision Engine (event detector), a decision engine capable of triggering workflows. The two modules are designed to cooperate; alternatively, if no automatic operation is required, Oplon Commander Work Flow can be used manually from any device, including mobile devices.
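The cooperation between the two modules can be illustrated with a minimal sketch. This is a hypothetical illustration of the concept only, not the Oplon Commander API: a decision engine maps detected events to workflows, and a workflow is an ordered action list executed in sequence, stopping at the first failed step.

```python
# Hypothetical sketch of the two-module concept (illustrative names,
# not the real product API): a Workflow is an ordered action list;
# a DecisionEngine maps detected events to workflows and triggers them.

from typing import Callable, Dict, List, Tuple

class Workflow:
    """An ordered action list; each step is a named callable returning success."""
    def __init__(self, name: str, steps: List[Tuple[str, Callable[[], bool]]]):
        self.name = name
        self.steps = steps

    def run(self) -> bool:
        for step_name, action in self.steps:
            ok = action()
            print(f"[{self.name}] {step_name}: {'ok' if ok else 'FAILED'}")
            if not ok:
                return False  # stop the sequence at the first failed step
        return True

class DecisionEngine:
    """Maps event names to workflows and triggers the matching one."""
    def __init__(self) -> None:
        self.rules: Dict[str, Workflow] = {}

    def on_event(self, event: str, workflow: Workflow) -> None:
        self.rules[event] = workflow

    def notify(self, event: str) -> bool:
        wf = self.rules.get(event)
        return wf.run() if wf else False

# Example: a failover workflow triggered by a "db-primary-down" event.
failover = Workflow("failover", [
    ("detach primary storage", lambda: True),
    ("promote replica",        lambda: True),
    ("redirect traffic",       lambda: True),
])
engine = DecisionEngine()
engine.on_event("db-primary-down", failover)
engine.notify("db-primary-down")
```

Separating detection (the engine's rules) from action (the workflow's steps) is what keeps each part small and readable, as the text above argues.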
Thanks to its ability to constantly run health checks on all components, Oplon Commander can orchestrate failover, in both physical and virtual environments, in a simple, effective and traceable way, with self-documenting procedures.
The solution is non-invasive on the infrastructure and does not replace solutions such as VMware Site Recovery Manager or DRS; rather, it makes the best use of all the infrastructure's components (systems, virtualizers, storage, databases, etc.), enabling not only criticality management but also, and above all, maintenance management in a fast, secure and easy way. No additional hardware is required.
No compatibility matrix is required.
Oplon Commander is designed to activate, monitor and schedule procedures, and to follow their progress, checking that each has been executed correctly.
Oplon Commander offers a scheduling service that can also be driven through secure remote invocations, Remote Workflow Command (RWC), to manage the life cycle of the entire service in all its complexity, across multiple sites and in Hybrid Cloud architectures.

A workflow can be started from the web interface, automatically by Oplon Commander Decision Engine, or from the command line for integration with third-party tools. Workflows can run fully automatically or, where the criticality requires it, be followed by an operator step by step. It is therefore possible to structure Business Continuity (typically by automatic decision), Disaster Recovery (typically by human decision) and application-watchdog procedures, while maintaining visual and contextual documentation of each procedure.
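The two execution modes described above, fully automatic and operator-followed, can be sketched as follows. This is a hypothetical illustration (the function and step names are invented for the example, not part of the RWC interface): the same step list runs unattended, or pauses before each step for an operator's confirmation.

```python
# Hypothetical sketch (illustrative only, not the real RWC interface):
# one workflow definition, two execution modes. With no `confirm`
# callback the steps run fully automatically; with one, each step
# waits for operator approval before executing.

from typing import Callable, List, Optional, Tuple

def run_workflow(steps: List[Tuple[str, Callable[[], bool]]],
                 confirm: Optional[Callable[[str], bool]] = None) -> bool:
    """Execute steps in order; if `confirm` is given, ask before each one."""
    for name, action in steps:
        if confirm is not None and not confirm(name):
            print(f"operator stopped the workflow before: {name}")
            return False
        if not action():
            print(f"step failed: {name}")
            return False
        print(f"step done: {name}")
    return True

# Invented example steps for a site-switch procedure.
steps = [
    ("stop application tier on primary site",  lambda: True),
    ("switch storage replication direction",   lambda: True),
    ("start application tier on secondary",    lambda: True),
]

run_workflow(steps)                            # automatic mode
run_workflow(steps, confirm=lambda s: True)    # operator approves each step
```

In a real step-by-step run the `confirm` callback would be a prompt to the operator; here it is stubbed so the sketch is runnable.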
With Oplon Commander, a web interface is available to start operations without resorting to command-line launches with administrator or root permissions. The possible operations are presented to the operator, who can start them easily, without interventions by specialized staff who may not be immediately available in times of crisis.
The decomposition of high-reliability processes into logic (Decision Engine) and action (Work Flow), and the consequent simple description of procedures as workflows, make the actions that provide high reliability across the entire data center self-documenting.
The ability to navigate through the actions that will be taken in case of failure, in critical moments, or for simple routine procedures makes the processes that take place within the data center more trustworthy, because they are controllable and self-documented.
The HTML5 graphical interface allows complete drill-down navigation of procedures which, once started, can be followed step by step in their evolution and progress.
Any change made to configurations or scripts is recorded and can be retrieved at any time, both for regression to a previous state and for security-compliance investigations. Each change is recorded both centrally and at node level, so that its author can be traced.
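The change-recording behaviour described above can be sketched in a few lines. This is a hypothetical illustration (class and method names are invented, not the product's API): every change is stored with its timestamp and author, the trail can be inspected for compliance, and the previous state can be restored.

```python
# Hypothetical sketch (invented names, not the product API): record each
# configuration change with author and timestamp, expose the audit trail,
# and allow regression to the previous recorded state.

import datetime

class ConfigHistory:
    def __init__(self) -> None:
        self.versions = []  # list of (timestamp, operator, config snapshot)

    def record(self, operator: str, config: dict) -> None:
        stamp = datetime.datetime.now(datetime.timezone.utc)
        self.versions.append((stamp, operator, dict(config)))  # copy snapshot

    def current(self) -> dict:
        return dict(self.versions[-1][2])

    def rollback(self) -> dict:
        """Regress to the previous recorded state."""
        self.versions.pop()
        return self.current()

    def audit_trail(self):
        """Who changed what, and when: for compliance investigations."""
        return [(stamp.isoformat(), operator) for stamp, operator, _ in self.versions]

h = ConfigHistory()
h.record("alice", {"timeout": 30})
h.record("bob",   {"timeout": 60})
assert h.current()["timeout"] == 60
assert h.rollback()["timeout"] == 30  # regression to the previous state
```

Storing a copy of each snapshot (rather than a reference) is what makes rollback and auditing reliable even if the live configuration object keeps changing.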
All operations are performed through operator login, with full tracking in the event-logging database, both at the individual execution points and at the general level.