Informa
Commentary

Continuity Software Releases Latest Version of RecoverGuard: High Availability As Vital As Data Recovery

In terms of business continuity, high availability of resources is as important as the ability to recover resources in the event of a disaster. Availability monitoring -- searching for gaps and inconsistencies in networks -- is at the heart of Continuity Software's latest release of RecoverGuard.
Continuity Software, a Disaster Recovery (DR) and High Availability (HA) software company, today released the latest iteration of its RecoverGuard testing and monitoring program.

The new release, RecoverGuard v. 4.0, includes both DR and HA monitoring tools, identifying gaps in DR preparation, vulnerabilities, likely failure points and, critically, the effect of configuration changes on the overall availability and recovery environment.

Aimed primarily at midsized and larger businesses, Continuity's approach bears lessons for companies of all sizes, as a conversation I enjoyed recently with Gil Hecht, the company's founder and CEO, made clear.

One of the goals of the latest RecoverGuard version, Hecht explained, was to provide companies with an ongoing picture of their network and system configuration in order to guard against the effects of system changes that can affect recovery and availability.

"Moving to a constantly monitored environment reduces the costs of testing recovery and availability, but more importantly, makes clear whether or not your replications are properly synchronized, and by providing tools for both monitoring and managing HA, fills in a gap that DR alone doesn't address."

In other words, how available are your critical business and operational systems and procedures? When was your most recent period of unavailability? How long did the downtime last? What caused it, and what steps have been taken to guard against it happening again?
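Questions like these reduce to a simple calculation that any shop can run, with or without a commercial tool. The sketch below (my own illustration, not anything from RecoverGuard) shows how a period's availability percentage falls out of its outage log:

```python
from datetime import timedelta

def availability(period: timedelta, outages: list[timedelta]) -> float:
    """Fraction of a reporting period the service was actually up."""
    downtime = sum(outages, timedelta())
    return 1.0 - downtime / period

# A 30-day month with two outages: 45 minutes and 2 hours.
month = timedelta(days=30)
uptime = availability(month, [timedelta(minutes=45), timedelta(hours=2)])
print(f"{uptime:.4%}")  # a hair under 99.62% -- short of "three nines"
```

Even this back-of-the-envelope figure makes the follow-up questions concrete: which outage dominates the downtime total, and what would eliminating it buy you?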

But there are large availability/replication/recovery questions raised before an outage, chief among them: Are changes made in one part of your network replicated on other clusters throughout the business? Do you replicate alterations and modifications to your primary systems in all secondary and backup/recovery systems?
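The replication question is, at bottom, a configuration-comparison problem. As a minimal sketch of the idea (not RecoverGuard's actual mechanism; the parameter names are invented for illustration), you can fingerprint each system's configuration and diff any pair that should match:

```python
import hashlib
import json

def config_fingerprint(config: dict) -> str:
    """Canonical hash of a system's configuration, ignoring key order."""
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def find_drift(primary: dict, replica: dict) -> list[str]:
    """Report settings whose values differ between primary and replica."""
    keys = set(primary) | set(replica)
    return sorted(k for k in keys if primary.get(k) != replica.get(k))

# Hypothetical storage settings for a primary and its recovery copy.
primary = {"lun_size_gb": 500, "raid_level": "RAID-10", "async_replication": False}
replica = {"lun_size_gb": 500, "raid_level": "RAID-5", "async_replication": False}

if config_fingerprint(primary) != config_fingerprint(replica):
    print("drift detected:", find_drift(primary, replica))
```

Running a comparison like this on a schedule, rather than once a year at DR-test time, is exactly the shift from point-in-time testing to continuous monitoring that Hecht describes.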

These are questions too often overlooked in DR and business continuity (BC) planning, and ones that you should look at closely whether with deployment of a monitoring/analysis program such as RecoverGuard or with implementation of availability monitoring and tracking procedures of your own.

Particularly for smaller businesses, this can be a challenge: With IT resources already tight, the urgency of simply recovering from downtime too often militates against analyzing what caused the outage and applying resources to reduce or eliminate the chance of it recurring.

Hecht's point, as I understand it, is the need to move to a sort of real-time recovery objective: a laudable goal made likelier only by constantly monitoring availability as well as point-in-time data replication.

Continuity Software offers a downloadable DR Webinar here (registration required).
