On that Saturday, I logged into the company’s systems only to realise that the back-up power system had kicked in, meaning everything would shut down completely in around two hours. The power still hadn’t returned by Monday morning, so I managed to tell a handful of staff to work from home rather than journey in to the office. We just about got through the day, but our disaster recovery plan wasn’t effective enough and we made a catalogue of errors rather than doing things by the book. Despite all the chaos, it has given me time to think about how to act in future and what lessons can be taken from the storm.
1. Ensure that there is a good internal communication structure (This did not go to plan!)
Realistically, every firm needs a way to contact its employees in a crisis, both to tell them how to work that day and to keep them updated on the latest developments. Some firms prefer to focus on logistics and make that the priority, while others take a more IT-based approach and put all their efforts into a full systems recovery process. I can recall one major firm whose plan involves backing up and restoring critical data at a different facility within a week, backing up telecommunications via a phone network, and recovering the local network before setting up at a temporary location until everything is back online.
When working through the DR plan, it is also essential to establish what type of disaster is causing the problems; after all, damage caused by a fire is very different from the problems caused by a long-term power cut. Once the problems have been identified, that is the time to work out how to contact employees, how work can continue and how staff should go about it. With those aspects under control, other considerations can follow, such as organising meeting places and agreeing how operations continue during a crisis.
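The triage described above can be sketched as a simple lookup. This is purely illustrative (the disaster types and priority actions are hypothetical examples, not an actual DR plan), but it shows the idea of deciding the first steps based on the type of disaster before anything else:

```python
# Illustrative only: a toy mapping from disaster type to the first
# recovery priorities. The entries here are hypothetical examples,
# not a real firm's plan.
FIRST_STEPS = {
    "fire": ["relocate to temporary site", "restore data from off-site backup"],
    "power cut": ["tell staff to work from home", "fail over to backup site"],
    "flood": ["assess equipment damage", "relocate to temporary site"],
}

def first_steps(disaster: str) -> list[str]:
    """Return the priority actions for a given disaster type."""
    # Unknown disaster types fall back to a generic assessment step.
    return FIRST_STEPS.get(disaster, ["assess the situation", "alert staff"])

print(first_steps("power cut"))
```

The point of keeping this decision explicit is that the contact plan and working arrangements that follow differ depending on which branch applies.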
2. Always have a way to communicate with customers (Not one of our strong points)
Crisis or not, there always needs to be a way to reach out and communicate with customers, so any emergency guidelines need to emphasise being proactive and clear throughout the day. After all, shareholders and customers expect the same thing when a company has been hit by a problem: that everything is being done to get the service back to normal as quickly as humanly possible. Customers should be given advice on what to do during the crisis and regular updates on any changes to the communication methods. I was impressed with how Toronto Hydro acted during the storm, taking to Twitter to keep everyone informed of the latest changes.
3. Cloud networks are great for continual business planning (We had some luck with this)
All of Digitcom’s corporate e-mail servers had recently been moved from an Exchange server onto the business Gmail platform, and switching to a cloud service worked in our favour: it meant we could keep communicating throughout the day. It was a benefit that some of our clients didn’t have; they told us they couldn’t communicate because their e-mail servers were down. If you don’t have e-mail on a cloud platform, backing it up via an external data centre can keep things running, provided the other centre isn’t having problems of its own.
An extensive infrastructure can improve operational efficiency during a crisis, but everything still needs to be managed as a single package. For us, e-mail remained active, yet our main CRM software was down and we had no back-up of it in our data centre. That is a mistake we will correct within the next few weeks.
4. The cloud helps with voice redundancy (This is where we excelled)
Normally, we run a standard Avaya IP Office system throughout the office via a PRI circuit, but there is a back-up system at hand should things go awry. When the PRI circuit fails, calls move to our SureConnect product via SIP trunks, and these in turn fall back to our PBX solutions if they shut down. With several of Digitcom’s employees having PBX solutions at home, we were able to re-direct calls throughout the day and didn’t miss a thing. Several of our clients did the same to keep handling calls. Done this way, you can always answer calls when the power is out, even if the phone lines, the PRI circuit or the SIP network fail, and it works with any phone, landline or cell. It served us well on Monday when one client asked us to re-direct their calls to home-based lines, and we were able to do it in just five minutes.
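The failover chain above can be summed up as a priority list: use the PRI circuit while it is up, fall through to the SIP trunks, then to the PBX re-direct. A minimal sketch of that decision, with the caveat that real failover happens inside the carrier and PBX equipment rather than in application code, and the route names are just labels for this post:

```python
def choose_route(pri_up: bool, sip_up: bool, pbx_up: bool) -> str:
    """Return the first available call path, in priority order.

    Mirrors the chain described in the post: PRI circuit first,
    then SIP trunks (SureConnect), then the PBX re-direct to
    home-based lines, and finally voicemail if nothing is up.
    """
    if pri_up:
        return "PRI circuit"
    if sip_up:
        return "SIP trunk"
    if pbx_up:
        return "PBX re-direct to home lines"
    return "voicemail"

# During the outage the PRI was down, so calls fell through to SIP:
print(choose_route(pri_up=False, sip_up=True, pbx_up=True))
```

The value of writing the chain down, even informally like this, is that everyone knows in advance which path a call takes at each stage of an outage.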
5. Test your plan
This doesn’t call for a complete shutdown, but testing your disaster recovery plan once in a while is necessary to check how effective it is. That means verifying that the communication networks still work and that the IT systems can keep the business running. During the test, run things from a back-up site for a short period just to get a feel for how everything works; doing so will reveal what needs improving before a real disaster happens.
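A drill like this boils down to running a fixed checklist and recording what failed. Here is a toy sketch of that shape; the individual checks are hypothetical stand-ins (a real drill would ping the back-up site, send a test e-mail through the secondary path, and so on):

```python
# Hypothetical DR drill checklist. Each check returns True on success;
# the stub bodies below are placeholders for real verification steps.

def backup_site_reachable() -> bool:
    # In a real drill: confirm the back-up data centre responds.
    return True

def staff_contact_list_current() -> bool:
    # In a real drill: confirm the emergency contact list is up to date.
    return True

def failover_email_works() -> bool:
    # In a real drill: send a test message via the secondary mail path.
    return True

def run_drill() -> list[str]:
    """Run every check and return the names of any that failed."""
    checks = {
        "backup site reachable": backup_site_reachable,
        "staff contact list current": staff_contact_list_current,
        "failover e-mail works": failover_email_works,
    }
    return [name for name, check in checks.items() if not check()]

failures = run_drill()
print("All clear" if not failures else f"Fix these first: {failures}")
```

However it is recorded, the output that matters is the list of failures: that list is the set of alterations to make before the next real outage.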
During the storm we managed to answer every call, handle technical support and communicate with staff throughout the day. However, many of our internal practices failed, and now we need to evaluate where we went wrong as well as what we did right. Although Monday’s power outage only lasted a short while, it forced us to test our DR plan and start making alterations for the next time disaster hits the office.