
IX Outage – Post Issue Technical Report


As many of you know, last week we experienced a major outage, affecting multiple email and database servers for just under 5 days.

It was, without a doubt, the worst technical issue we’ve ever experienced as a company.

Thankfully, all systems were brought back up without any loss of data.  Although we’re happy that we could restore all your data, we know the duration of the incident, as well as the incident itself, is unacceptable.

To make certain that something like this never happens again, we launched a full investigation to determine the root cause of the issue, outline the steps we took to handle it, and help us take preventative measures against this type of problem in the future.

I want to share this report with you, both to satisfy the curiosities of our more tech-savvy customers, and to illustrate the amount of time, work, and research it took to resolve this difficult issue.  I also want to share the steps we’ll be taking to prevent this issue in the future, which are included at the end of the report.

So, here is the post-issue follow-up my system administrators delivered to me this morning. I think you’ll find it informative, and I hope it sheds some light on the mess that was last week:

Incident Name: Storage Outage – sas3

Incident Date: 2014-03-02
Report Date: 2014-03-14

Services Impacted:

Storage sas3 on dbmail01
93 shared VMs (mail and MySQL for cp9-11); resources of 30,366 accounts were affected.

Incident Root Cause:

The affected SAN is an array of drives in a RAID 50 configuration consisting of two RAID 5 parity groups (one made up of the even-numbered drives and one of the odd-numbered drives). The array can survive two simultaneous disk failures as long as they are not part of the same parity group. In the case of our outage, drive 6 failed and spare drive 10 was added to the RAID to rebuild the group. During the rebuild process, drive 0 failed, costing us the even-numbered parity group. This occurred just before 4 AM EST on 3/2/2014 and put the RAID into an unrecoverable state. Because there was a large potential for data loss, we contacted our hardware vendor's support line before acting and were escalated to their engineering team. Total call time was 10 hours.

[Image: diagram of the RAID failure]
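To make the failure mode above concrete, here is a minimal sketch (Python, not our vendor's tooling) of the rule that decides whether a RAID 50 array built from two RAID 5 parity groups survives a given pair of drive failures. The 16-slot even/odd layout is an assumption for illustration only.

    # Minimal sketch: models which two-drive failures a RAID 50 array of two
    # RAID 5 parity groups can survive. Assumes a hypothetical 16-slot shelf
    # split into even-numbered and odd-numbered parity groups.
    from itertools import combinations

    DRIVES = range(16)
    PARITY_GROUPS = [
        {d for d in DRIVES if d % 2 == 0},   # even-numbered group
        {d for d in DRIVES if d % 2 == 1},   # odd-numbered group
    ]

    def survives(failed):
        """RAID 5 tolerates one failure per group; two in one group is fatal."""
        return all(len(group & set(failed)) <= 1 for group in PARITY_GROUPS)

    # Drive 6 failed first, then drive 0 failed during the rebuild; both sit
    # in the even-numbered group, so the array cannot survive:
    print(survives({6, 0}))   # False -> unrecoverable, as in this incident
    print(survives({6, 1}))   # True  -> different groups, rebuild could proceed

    # How many of the possible two-drive failures are fatal for this layout?
    pairs = list(combinations(DRIVES, 2))
    fatal = sum(not survives(pair) for pair in pairs)
    print(f"{fatal} of {len(pairs)} two-drive combinations are fatal")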

Response:

In order to regain access to the data, we had to manually disable slots 10 and 15 (the spare drives) so that the RAID would not attempt to rebuild. Next, we reseated drive 6, which brought it back online, though not as part of the RAID. This allowed the entire RAID to come back online in a degraded state with drive 0 active. Because drive 0 was still failing, we knew the RAID was in a very fragile state and that we had to move forward with great care or we would risk losing data.

Our hardware vendor walked us through a binding procedure that allowed us to move the affected volumes off the storage system. We learned that if we triggered another failure in drive 0 at any point during this process, the RAID would go offline and we could lose access to the data. With this in mind, we began migrating the volumes one at a time to reduce the stress on the failing drive and lower the chance of another failure. We were methodical and deliberate in our approach, and thankfully we migrated all data off the storage without triggering another failure in drive 0. The process completed, and all customers were back online as of 3/6/2014 just after 6 PM EST. The whole process took almost 5 days.

Timeline:

You can find a timeline of events on our status blog, from the initial outage to the final server's reactivation.

What We’re Doing To Prevent This:

Improve Monitoring

Currently, our automated hardware checks notify us whenever a storage system has an issue of any type. While comprehensive, these alerts are not specific enough to tell us what the actual problem is. For instance, if a drive fails, we get a general notification rather than a 'drive X has failed' message. We are looking into more specific, granular notifications for individual disks.

Proactive Hardware Replacement

It may be possible to check, via SNMP, for things like disk errors on specific disks before they actually fail out of the RAID and trigger a rebuild. This should result in fewer drive failures and fewer rebuilds.
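As a rough illustration of what such a check might look like, here is a minimal sketch that polls a per-disk error counter with the standard net-snmp snmpget tool. The host, community string, OID, and threshold are placeholders; real disk-error counters are vendor-specific and come from the array's MIB.

    # Sketch of a proactive per-disk check, assuming the net-snmp CLI tools
    # are installed. The OID below is a placeholder; real per-disk error
    # counters depend on the storage vendor's MIB.
    import subprocess

    SAN_HOST = "san01.example.internal"       # hypothetical SAN management address
    COMMUNITY = "public"                      # placeholder SNMP community string
    DISK_ERROR_OID = "1.3.6.1.4.1.99999.1.2"  # placeholder vendor OID
    ERROR_THRESHOLD = 5                       # alert well before the drive drops out

    def read_counter(oid):
        """Run snmpget and return the counter as an int, or None on failure."""
        result = subprocess.run(
            ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", SAN_HOST, oid],
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            return None
        try:
            return int(result.stdout.strip())
        except ValueError:
            return None

    def check_disk(slot):
        """Flag a slot whose error counter has crossed the threshold."""
        value = read_counter(f"{DISK_ERROR_OID}.{slot}")
        if value is None:
            print(f"slot {slot}: no data (check SNMP access)")
        elif value >= ERROR_THRESHOLD:
            print(f"slot {slot}: {value} errors -- schedule a replacement")
        else:
            print(f"slot {slot}: {value} errors -- OK")

    for slot in range(16):                    # hypothetical 16-slot shelf
        check_disk(slot)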

Switch All Arrays to More Stable RAID

Our storage arrays currently use RAID 50. Although this is a standard configuration, a rebuild can take more than eight hours to complete, and during that eight-hour window we risk losing a second drive from the same parity group. We can reduce this risk by moving to RAID 10, which rebuilds significantly faster, in about three hours.
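For a rough sense of the difference, here is a back-of-the-envelope sketch. The per-drive failure rate is purely illustrative, not a figure measured on our arrays; the key point is that during a RAID 50 rebuild every remaining drive in the degraded parity group is at risk, while during a RAID 10 rebuild only the failed drive's mirror partner is.

    # Back-of-the-envelope comparison of rebuild-window risk. The annual
    # failure rate is an illustrative assumption, not a measured value.
    import math

    AFR = 0.03                          # illustrative 3% annual failure rate per drive
    RATE_PER_HOUR = AFR / (24 * 365)

    def p_second_failure(drives_at_risk, rebuild_hours):
        """Chance that at least one at-risk drive fails before the rebuild
        finishes, using a simple exponential (constant-rate) model."""
        return 1 - math.exp(-drives_at_risk * RATE_PER_HOUR * rebuild_hours)

    # RAID 50: any of the 7 other drives in the parity group ends the array.
    # RAID 10: only the failed drive's mirror partner matters.
    print(f"RAID 50, 8h rebuild, 7 drives at risk: {p_second_failure(7, 8):.5%}")
    print(f"RAID 10, 3h rebuild, 1 drive at risk:  {p_second_failure(1, 3):.5%}")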

Thanks for reading and again, we’re so sorry about this inconvenience. If you have questions, feel free to ask them in the comments.

Conclusion

Along with the steps we’ll be taking to improve our hardware, we also need to work on improving our speed and accuracy when it comes to identifying affected services and affected customers. This information should then make it to the blogs quickly so the customers know we are aware they are affected, and that we are working to fix it. The faster we can do this, the easier it is to pinpoint problems and implement fixes.

During this outage, we took too long to communicate detailed technical information (especially ETAs) to you. We think it's better to get some good information out, even if we have to admit it's a rough estimate and the best we have at the moment, than to delay providing any real information. Many of you agreed.

Now that we understand the details of this issue, we can sleep a little better knowing that it is unlikely to happen again and that we're all on the same page about what went wrong. We're incredibly grateful for your trust, and we will continue to work tirelessly to regain and keep it.
