Srinivasan Sundara Rajan

Lessons from the Amazon Cloud Outage

Best Practices for Resilient Cloud Applications

As reported in SYS-CON and elsewhere, Amazon's cloud crashed, taking down sites such as Reddit, Foursquare, Quora, Hootsuite, Indaba, GroupMe, Scvngr, Motherboard.tv and a few more with it.

Several components of the Amazon cloud portfolio, including EC2, Elastic Block Store (EBS), Relational Database Service (RDS), Elastic Beanstalk, CloudFormation and, later, Elastic MapReduce, were reportedly impacted.

At this time, Amazon has given the following explanation for the crash:

"A networking event triggered a large amount of re-mirroring of EBS [Extended Block Store] volumes ... This re-mirroring created a shortage of capacity ... which impacted new EBS volume creation as well as the pace with which we could re-mirror and recover affected EBS volumes."

While this particular issue will be resolved, it has had a huge impact on cloud adoption by large enterprises. However, traditional high-availability best practices still hold good for the cloud, and this incident should be seen less as a failure of the cloud than as a failure of implementation. The following best practices will guard cloud applications over and above the out-of-the-box high-availability options provided by a cloud provider such as Amazon.

Ensure Application Controlled Scalability
Amazon provides components such as Auto Scaling, Elastic Load Balancing and CloudWatch. These help with scalability by monitoring resource usage and automatically allocating new instances.

However, scalability works best when the application itself is aware of its usage and scales accordingly.

One such implementation pattern is a Routing Server, where an application characteristic such as the type of user, the geography or the kind of transaction determines the target destination that will process the request, and the load is balanced accordingly.

Making these data-aware scaling rules configurable without restarting the servers goes a long way toward redirecting the routing mechanism to specific servers when regions or Availability Zones are down for unknown reasons. It also ensures that scalability rules can be altered dynamically in catastrophic situations, so that high-priority transactions continue to be served while low-priority transactions are put on hold.
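As an illustration only, here is a minimal Python sketch of such a routing server with externally configurable rules that can be reloaded without a restart. The rule-file format, attribute names and server-pool names are assumptions made for the example, not part of any Amazon service.

```python
import json

# A minimal sketch, assuming a hypothetical JSON rule file and server-pool
# names. Rules map a request attribute (user type, geography, transaction
# kind) to a pool of target servers, and can be reloaded while the router
# keeps running, so no restart is needed to reroute around a failed zone.

RULES_FILE = "routing_rules.json"

# Example rule set; in practice this file is edited by operators at runtime.
with open(RULES_FILE, "w") as f:
    json.dump({
        "default_pool": ["app-zone-a"],
        "rules": [
            {"attribute": "geography", "value": "EU",   "target_pool": ["app-zone-b"]},
            {"attribute": "priority",  "value": "high", "target_pool": ["app-zone-a", "app-zone-c"]},
        ],
    }, f)

class RoutingServer:
    def __init__(self, rules_path):
        self.rules_path = rules_path
        self.reload_rules()

    def reload_rules(self):
        """Re-read routing rules from disk without restarting the server."""
        with open(self.rules_path) as f:
            self.rules = json.load(f)

    def route(self, request):
        """Choose a target pool from request characteristics, else a default."""
        for rule in self.rules["rules"]:
            if request.get(rule["attribute"]) == rule["value"]:
                return rule["target_pool"]
        return self.rules["default_pool"]

router = RoutingServer(RULES_FILE)
print(router.route({"geography": "EU"}))     # -> ['app-zone-b']
print(router.route({"priority": "high"}))    # -> ['app-zone-a', 'app-zone-c']
print(router.route({"geography": "US"}))     # -> ['app-zone-a'] (default)
```

Because the rules live outside the code, an operator can repoint traffic to healthy zones or shed low-priority work simply by editing the file and triggering a reload.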

Stay Disconnected
Even though a typical application consists of multiple logical and physical components, it is best to decouple these components so that each layer interacts with the next in an asynchronous manner.

While some applications, such as banking, stock trading and online reservations, require a real-time, stay-connected nature, most applications today can still take advantage of a disconnected architecture.

Use a reliable messaging and request/response framework so that end users are never aware that their request is queued; instead, they get the feeling that their request has been taken care of and has received a satisfactory response. This ensures that even if some physical servers or logical components are down, the end user is not impacted.
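To make the idea concrete, here is a minimal Python sketch of the disconnected pattern, using an in-memory queue and a background worker as stand-ins for a reliable messaging framework; all names are illustrative assumptions, and a real deployment would use a durable broker outside the failing component.

```python
import queue
import threading
import uuid

# Front end acknowledges the request immediately; a background worker
# processes it from the queue whenever capacity is available.

work_queue = queue.Queue()
results = {}

def accept_request(payload):
    """Front end: enqueue the work and return a ticket right away."""
    ticket = str(uuid.uuid4())
    work_queue.put((ticket, payload))
    return {"ticket": ticket, "status": "accepted"}  # user sees an immediate response

def worker():
    """Back end: drain the queue independently of the front end."""
    while True:
        ticket, payload = work_queue.get()
        results[ticket] = f"processed {payload}"     # actual business logic goes here
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

print(accept_request({"order": 42}))   # returns instantly, even if workers lag behind
work_queue.join()                      # all queued work eventually completes
```

The user-facing acknowledgement is decoupled from the actual processing, so a slow or temporarily unavailable back end delays the work rather than failing the user's request.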

Keep Transactions Smaller
The best path to transparent application failover and recoverability is to keep transactions as small as possible, with each one forming a logically meaningful step within the overall process from an end-user perspective.

Remember legacy applications of a previous era, which accepted transaction data across several fields and pages and had a single SAVE button; if anything went wrong, the end user lost all the data and had to re-enter it. This needs to be avoided at all costs, and systems should be designed as a combination of logically smaller steps tied together in a loosely coupled manner.
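As a rough illustration, the sketch below persists each logical step of a transaction as soon as it completes, so a failure costs at most the current step; the dict-based store and step names are assumptions made for the example.

```python
# Instead of one big SAVE at the end, each logical step of a multi-page
# transaction is saved as soon as it completes. The dict stands in for any
# durable store.

checkpoints = {}

def save_step(transaction_id, step_name, data):
    """Persist one small, logically meaningful step independently."""
    checkpoints.setdefault(transaction_id, {})[step_name] = data

def resume(transaction_id, all_steps):
    """After a failure, resume from the first step that was never saved."""
    done = checkpoints.get(transaction_id, {})
    return [step for step in all_steps if step not in done]

save_step("txn-1", "customer_details", {"name": "A. User"})
save_step("txn-1", "shipping_address", {"zip": "60601"})
print(resume("txn-1", ["customer_details", "shipping_address", "payment", "confirm"]))
# -> ['payment', 'confirm']  (earlier steps survive the failure)
```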

VEET: The User-Entered Data
In a disconnected environment, end users are not around to fix data-entry errors or provide additional information, so the most fault-tolerant systems are designed when the user is asked to enter minimal data and the VEET pattern (Validate, Extract, Enrich, Transform) is applied to that data.

Validate: Once the transaction inputs are entered and accepted, they remain meaningful across the system components, and no data needs to be corrected downstream.

Extract: Never accept information that can be derived; this ensures errors are avoided on data that is already known.

Enrich: Build up information from the information already available, so that it need not be entered by the user. For example, if the user enters the zip code, the city, state and other details can be retrieved automatically.

Transform: Convert the data from one form to another as meaningful for the system flow.

The above steps ensure that we can recover gracefully from failures in a way that is transparent to the user.
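To illustrate, here is a small, hypothetical Python sketch of a VEET pipeline for a single form submission; the zip-code lookup table and field names are assumptions, not a prescribed implementation.

```python
# Minimal VEET pipeline sketch. ZIP_LOOKUP stands in for whatever reference
# data or service a real system would use to derive fields.

ZIP_LOOKUP = {"60601": {"city": "Chicago", "state": "IL"}}  # assumed sample data

def validate(raw):
    """Validate: reject bad input up front so it never travels downstream."""
    if not raw.get("zip", "").isdigit():
        raise ValueError("zip code must be numeric")
    return raw

def extract(record):
    """Extract: drop fields that can be derived, keeping only authoritative input."""
    return {k: v for k, v in record.items() if k not in ("city", "state")}

def enrich(record):
    """Enrich: derive city/state from the zip code instead of asking the user."""
    return {**record, **ZIP_LOOKUP.get(record["zip"], {})}

def transform(record):
    """Transform: reshape into the form the downstream system flow expects."""
    return {"customer": record.get("name", "unknown"), "location": record}

user_input = {"name": "A. User", "zip": "60601", "city": "typo-city"}
print(transform(enrich(extract(validate(user_input)))))
```

Note how the user's mistyped city is simply discarded and re-derived, so there is nothing for the user to come back and correct once the request has been accepted.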

Keep Backup Data at the Lowest Granularity for Recovery
We have seen that storage mechanisms like Amazon EBS (Elastic Block Store) have built-in fault tolerance, so that volumes are replicated automatically. This is a very good feature. But since most data is backed up as raw volumes, we should also think about the ability to recover quickly and get going again in case of a disaster.

Database instances typically take some time to recover pending transactions or to roll back unfinished ones; proper backup mechanisms can help recover from this scenario quickly.

The following options can be considered in order to recover quickly from a disaster scenario.

Alternative Write Mechanism: Log shipping, a standby database, or simply mirroring the data to other Availability Zones is one of the best ways to keep databases in sync and recover quickly when one zone is not available (see the sketch after this list).

Implicit Raw Volume Backups: This is provided out of the box on most cloud platforms; however, the intelligence to recover the raw volumes quickly with automated scripts should be in place.
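As a rough sketch of the alternative-write idea, the following Python example mirrors every write to a standby copy in another zone and falls back to it on reads; the dict-based stores and zone names are assumptions made for illustration, standing in for real database connections and replication machinery.

```python
# Simplified "alternative write" wrapper: every change goes to the primary
# and is shipped to a standby in another Availability Zone, so either copy
# can serve reads if the other zone is unavailable.

primary_zone = {}    # e.g., the database in zone A
standby_zone = {}    # e.g., the mirror in zone B

def write(key, value):
    """Write to the primary first, then ship the same change to the standby."""
    primary_zone[key] = value
    try:
        standby_zone[key] = value          # in practice: log shipping / async replication
    except Exception:
        pass                               # queue the change for later replay instead of failing the user

def read(key):
    """Fall back to the standby copy when the primary zone is down."""
    if key in primary_zone:
        return primary_zone[key]
    return standby_zone.get(key)

write("order:42", {"status": "confirmed"})
primary_zone.clear()                       # simulate losing the primary zone
print(read("order:42"))                    # still served from the standby copy
```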

Share Nothing
From the Amazon experience it is clear that, in spite of the best availability mechanisms adopted by the cloud provider, on rare occasions a few Availability Zones may still be struck by disaster.

In these scenarios we want to ensure that not all of our users are affected, but only a minimal number. This can be achieved by adopting the ‘Shared Nothing' pattern, so that tenants are logically and physically separated within the cloud ecosystem.

This will ensure that the failure of part of the infrastructure will not affect everyone in the system.
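As an illustration of the pattern, the following Python sketch pins each tenant to a single partition (its own stack of compute and storage), so that a failed partition affects only its own tenants; the partition names and hashing scheme are assumptions made for the example.

```python
import hashlib

# Each tenant is deterministically mapped to exactly one partition, so
# partitions share nothing and a failure stays contained.

PARTITIONS = ["stack-zone-a", "stack-zone-b", "stack-zone-c"]

def partition_for(tenant_id):
    """Deterministically map a tenant to a single, dedicated partition."""
    digest = hashlib.sha256(tenant_id.encode()).hexdigest()
    return PARTITIONS[int(digest, 16) % len(PARTITIONS)]

def affected_tenants(failed_partition, tenants):
    """Only tenants pinned to the failed partition are impacted."""
    return [t for t in tenants if partition_for(t) == failed_partition]

tenants = ["acme", "globex", "initech", "umbrella"]
print({t: partition_for(t) for t in tenants})
print(affected_tenants("stack-zone-b", tenants))   # everyone else keeps working
```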

Summary
The Amazon cloud outage is a wake-up call about how the cloud should be utilized. There is no automatic switch that provides all the fault tolerance a system needs. However, the event has reinforced the strong fundamental principles with which applications need to be built in order to be resilient. This incident cannot be seen as a failure of the cloud platform itself, and there is plenty of room for improvement to avoid such situations in the future.

More Stories By Srinivasan Sundara Rajan

Highly passionate about utilizing Digital Technologies to enable next generation enterprise. Believes in enterprise transformation through the Natives (Cloud Native & Mobile Native).