/How to scale


Arches National Park, Utah, Mar 2017

Let's say our current application is hosted on two 2XL servers (8 vCPU, 32 GB memory) to handle a peak capacity of 100k users. In the traditional world, since there is no way to increase server size on the fly, the servers would be built with around 30–40% of headroom. That means you are paying for that extra headroom on top of paying for unused capacity during non-peak hours (~20k users).
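To put rough numbers on this, here is a back-of-the-envelope sketch; the 35% headroom figure and the linear user-to-capacity assumption are illustrative, not from any pricing sheet:

```python
# Rough waste estimate for static provisioning (illustrative assumptions:
# capacity scales linearly with users, ~35% headroom is built in).

PEAK_USERS = 100_000
OFFPEAK_USERS = 20_000
HEADROOM = 0.35

provisioned = PEAK_USERS * (1 + HEADROOM)    # capacity paid for, 24x7

peak_waste = provisioned - PEAK_USERS        # idle even at peak
offpeak_waste = provisioned - OFFPEAK_USERS  # idle during non-peak hours

print(f"Provisioned for: {provisioned:,.0f} users")
print(f"Idle at peak:    {peak_waste / provisioned:.0%}")
print(f"Idle off-peak:   {offpeak_waste / provisioned:.0%}")
```

Even at peak, roughly a quarter of what you pay for sits idle; off-peak, it is the vast majority.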

To take advantage of auto-scaling, the application needs to be prepared for it.

Identify the non-peak traffic and the capacity needed to handle it. Then split that capacity across two servers to ensure availability: one in one AZ/Region and the other in another. To handle peak capacity, you might then run eight L servers (2 vCPU, 8 GB memory) instead of only two 2XLs (8 vCPU, 32 GB memory).
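The sizing above can be sketched in a few lines; the AZ names are hypothetical, and capacity is assumed proportional to vCPUs:

```python
# Sketch: replace 2 x 2XL (8 vCPU each) with L instances (2 vCPU each)
# so auto-scaling can add/remove capacity in finer steps.
import math

PEAK_VCPUS = 2 * 8   # two 2XL servers
L_VCPUS = 2          # one L server

peak_count = PEAK_VCPUS // L_VCPUS   # 8 x L at peak

# Non-peak (~20k of 100k users) needs ~1/5 of peak capacity; round up
# and keep at least one instance per AZ for availability.
offpeak_count = max(2, math.ceil(peak_count / 5))

azs = ["us-east-1a", "us-east-1b"]   # hypothetical AZ names
placement = [azs[i % len(azs)] for i in range(offpeak_count)]

print(peak_count, offpeak_count, placement)
```

Auto-scaling then moves the fleet between `offpeak_count` and `peak_count` instances as load changes.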

Move Rocks into Cloud

In the extreme case that the application's hard-coded nodes cannot be changed (let's say A, B, C, D), you can still run only A and B during non-peak hours and set up auto-scaling as follows:

  1. Launch all the nodes (A, B, C, D) in appropriate EC2 instances (E1, E2, E3, E4)
  2. Take an independent image of each of the nodes (Ai, Bi, Ci, Di)
  3. Store the images in S3
  4. Terminate E3, E4 instances that host C, D nodes
  5. Configure auto-scaling to bring up additional EC2 instances when needed.
  6. Configure the launch order so that Ci is brought up first, then Di.
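On AWS, steps 2–6 map onto `create_image`, `terminate_instances` and an Auto Scaling group. The sketch below only builds a plain scaling plan, so it can be checked without an AWS account; the helper and image names are hypothetical, and the real boto3 calls are noted in comments:

```python
# Real AWS calls (not executed here) would be roughly:
#   ec2.create_image(InstanceId=..., Name=...)    # step 2: image each node
#   ec2.terminate_instances(InstanceIds=[...])    # step 4: stop E3, E4
#   autoscaling.create_auto_scaling_group(...)    # step 5: scaling group

def scale_plan(always_on, on_demand):
    """A, B stay running; C, D come back in a fixed order under load."""
    return {
        "baseline_images": list(always_on),   # run during non-peak hours
        "scale_out_order": list(on_demand),   # launch Ci first, then Di
        "min_size": len(always_on),
        "max_size": len(always_on) + len(on_demand),
    }

plan = scale_plan(["Ai", "Bi"], ["Ci", "Di"])
print(plan)
```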

Of course, this takes care of only half the picture (ramping down when capacity is not needed) and does not address unplanned peaks (ramping up beyond the four nodes). Moving rocks into the cloud comes with its own limitations, so it is always better to make the application cloud-ready from the ground up.

Readings on auto-scaling, in case you have not looked at it already:

http://docs.aws.amazon.com/autoscaling/latest/userguide/auto-scaling-benefits.html


/The flight plan


Rocket Garden, Kennedy Space Center, Florida, Jul 2017

Application migration can become very painful for organizations. Any medium-sized organization has anywhere between 500–1000 applications deployed, spanning multiple departments. Assessing the applications per our /Will it fly? guidelines is just one step of the process. Applications that were cutting edge at some point might no longer be maintainable due to a lack of the right resources. There might be very small applications that are used by a handful of people. And there might be applications, like the company directory, that have hooks into each and every other application.

In fact, the bigger, widely used applications might have a cleaner migration path compared to those pesky ones lying under the desk. Creating a recommended application migration path becomes very important.

The standard application migration paths can be categorized as below. What we need to identify is the ‘best fit’, as any migration path inherently has its own advantages, costs and pitfalls.

  1. Re-host: Is the application already in a container (e.g., Docker)? Or is it a standard Java application running on a standard Linux box? Applications like these can be migrated with minimal changes. The anticipated changes should be limited to simple configuration updates such as interface, connector and URL changes in the configuration files. Such applications might be hard to find in an organization that has not kept up with changing technologies.
  2. Re-factor: This is the path most applications will end up taking. The effort here is only to make the application ‘cloud compatible’ by changing hard bindings to loosely coupled links. This needs code changes and rewriting of some of the interface components. The application may still not reap all the advantages of the cloud.
  3. Re-architect: Cloud is different. It provides clear advantages to applications designed as cloud-native from the start: components are loosely coupled, scale up/down effortlessly, and are fault-tolerant. Two types of applications take this path: those that clearly need the capabilities of the cloud, and those that need significant changes to be migrated at all.
  4. Remove: Every organization goes through changes. Mergers, spin-offs, shifts in company direction, a proposed consolidation of applications; all of these lead to applications losing relevance or getting duplicated. The migration event should be used to retire such applications.
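The four paths above can be sketched as a toy decision helper. The rules, flags and application names below are illustrative assumptions, not a complete rubric:

```python
# Toy classifier for the four migration paths (illustrative rules only).

def migration_path(app):
    if not app.get("still_relevant", True):
        return "remove"
    if app.get("containerized") or app.get("standard_stack"):
        return "re-host"                       # minimal changes needed
    if app.get("needs_cloud_native") or app.get("major_changes_needed"):
        return "re-architect"
    return "re-factor"   # the default most applications end up taking

apps = [
    {"name": "directory", "standard_stack": True},
    {"name": "old-reports", "still_relevant": False},
    {"name": "billing", "major_changes_needed": True},
    {"name": "hr-portal"},
]
print({a["name"]: migration_path(a) for a in apps})
```

In practice the inputs would come from the assessment described in /Will it fly?, and borderline cases still need human review.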

The effort is to identify the best fit. Review all the parameters before an application is marked for any of the four paths, and be flexible about switching to a different approach that fits your organization better.

Drive to: /What goes well with your Cloud?

/What goes well with your Cloud?


Siesta Key Rum Distillery, Saratoga, Florida, Jun 2017

The advent of cloud brought together tools, products and concepts that work very well with each other and enhance the overall experience. These tools and concepts might not be new, and are not necessarily limited to the cloud; many companies might already be using them as part of their regular process:
  1. Separate the compute layer and the storage layer. (The storage layer is not restricted to user data; we are also talking about runtime files and shared network files in traditional models.)
  2. Micro-services: Split application requests into small chunks, each of which can scale up/down independently. (e.g., if only the name is needed, don't request the complete person profile.)
  3. Containerize: Extract the application into a container by itself, so that it can run independently and be spun up along with its configurations, libraries and dependencies. (Docker, Mesos, Kubernetes, etc.)
  4. DevOps + CI/CD processes: DevOps is big. The merging of Development and Operations has brought together tools that streamline activities in a better way. Continuous Integration / Continuous Deployment has redefined the entire release process.

Tools like Maven, Jenkins, Chef, Ansible, Puppet help in automating the entire release cycle.
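The micro-services point above (return only what was asked for, not the full profile) can be sketched as follows; the profile fields and the function are hypothetical:

```python
# Serve only the requested fields instead of the whole person profile.

PROFILE = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "address": "123 Main St",
    "history": ["...large payload..."],   # expensive to fetch and ship
}

def get_person(fields=None):
    """Return only the requested fields (the whole profile if none given)."""
    if not fields:
        return dict(PROFILE)
    return {f: PROFILE[f] for f in fields if f in PROFILE}

print(get_person(["name"]))   # a small response instead of everything
```

The same idea lets the ‘name’ service scale independently of the heavier profile service.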

Next stop: /A new home

/Will it fly?


Saguaro National Park, Arizona, May 2017

Well, it has wings, an engine and a set of wheels. But is it ready? How many different types of planes are there? Which engines do they use? Are the right pilots available for the plane? The question is – Will it fly?

To arrive at a conclusion, all the applications in the organization have to be analyzed by collecting the relevant parameters. Start from the company CMDB (Configuration Management Database) to collect basic information:

  1. Application Profile: It is important to understand the application profile holistically. What does the application do? What is its business value; is it a Platinum, Gold or Silver application? What are the business dependencies of the application? Who are the users, owners and relevant stakeholders? Are there multiple environments? All such parameters will help us plan the actual move.
  2. Technical Profile: Which technologies have been used in the development of the application? Which operating system? Which products have been used, if any? Were there customizations done? Is it already containerized? Are there dependencies on other systems in the organization, and what are they? What are the interfaces (data to/from)? How do the licenses work for the products? What kind of support is already available? Do we have resources knowledgeable enough to provide all the details of the application?
  3. Infrastructure Profile: There are some fundamental differences in the way traditional data centers are structured versus the cloud. For example, SAN and NAS form the core of the traditional data center, while in the cloud, storage is accomplished with block and file storage services. Are the applications hosted on physical machines or in VMs? Are the VM images available? What are the network interfaces? It is important to gather information regarding the servers that are used, and the networks, storage and ‘hooks’ into various systems.
  4. Data Profile: How are the licenses structured for the application? What about data storage? Does the data need to be encrypted in transit and at rest? Does the application store PII (Personally Identifiable Information)? Are there any special requirements for that?

Based on this information, derive a ‘cloudability’ factor. This should give us, on a scale of 1–10, the ease with which an application can be moved to the cloud, 10 being the easiest to move. This will also help in case a phased approach is being planned.
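A toy version of such a ‘cloudability’ score might look like the sketch below; the weights and criteria are illustrative assumptions drawn from the four profiles above, not a prescribed formula:

```python
# Toy 'cloudability' score on a 1-10 scale (illustrative weights only).

def cloudability(app):
    score = 10
    if app.get("hard_dependencies", 0):          # hooks into other systems
        score -= min(3, app["hard_dependencies"])
    if not app.get("vm_images_available", True): # infrastructure profile
        score -= 2
    if app.get("stores_pii"):                    # data profile / compliance
        score -= 2
    if not app.get("docs_and_owners_known", True):  # application profile
        score -= 2
    return max(1, score)

print(cloudability({"hard_dependencies": 2, "stores_pii": True}))
```

Sorting applications by this score gives a natural ordering for a phased migration.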


Merge into: /The flight plan

/A new home


Algonquin Peak, Adirondacks, New York, Sep 2013

There are many players in the industry today offering cloud services – Amazon Web Services, Microsoft Azure and Google Cloud are the major players by far. IBM, Oracle, HP and others are still playing catch-up. Then there are some players who provide a mix of both (their own cloud services plus a smattering of others).

A careful evaluation needs to be done before the right cloud is chosen. As each of the cloud companies ‘grows up’, they have started showing some very specific characteristics and traits. Startups are more likely to choose AWS, as it provides perfect ‘Lego-like’ DIY options compared to the other services. Microsoft is moving very aggressively on enterprises, as it already has hooks through its widely used products like Office, Outlook and Active Directory, and it provides more ‘packaged’ options than AWS. Google has limited its focus to Big Data.

  1. Capabilities: What are the various capabilities offered by the provider? Can they be consumed on a per-need basis? What are the additional features?
  2. Pricing: Even though the pricing of all the cloud providers is granular and defined per block (of time, data, or transactions), the TCO (Total Cost of Ownership) might turn out to be very different based on the application's needs. It is important to evaluate and understand the application's functionality and requirements to optimize the cost. Also, Microsoft provides some additional discounts and benefits based on negotiations, while AWS might stick to pre-defined discounts.
  3. Long Term Strategy: The long term strategy needs to be evaluated before the applications are moved to any of the companies. Some companies might make it very easy to move to their platform, but make it impossible to move away in the future.
  4. Locations: Global companies need to evaluate their geographic needs against each provider's regional capabilities.
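On the pricing point, here is a minimal sketch of why the TCO can flip between providers for the same workload; all prices and provider names are made-up placeholders:

```python
# Toy TCO comparison: different per-block prices can make either
# provider cheaper depending on the workload's compute/storage mix.

def tco(price_per_cpu_hour, price_per_gb_month, cpu_hours, gb_months):
    return price_per_cpu_hour * cpu_hours + price_per_gb_month * gb_months

workload = {"cpu_hours": 10_000, "gb_months": 50_000}

providers = {
    "A": tco(0.05, 0.020, **workload),   # cheaper compute, pricier storage
    "B": tco(0.06, 0.015, **workload),   # pricier compute, cheaper storage
}
print(min(providers, key=providers.get))
```

A storage-heavy workload favors provider B here; shift the mix toward compute and the answer reverses.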


Circle back: /Cloud

/Cloud


San Jacinto Peak, Palm Springs, California, May 2017

Cloud is not for everybody.
There are still some people who might say that, and I might actually agree, but only at an application level and with a short-term view. There are a lot of parameters that need to be reviewed before a decision is made to move to the cloud.
But at an enterprise level, my response would be – “you have got to be kidding me!”

As the internet has become more and more ‘utilitarian’, compute and storage facilities are going to become so as well. Very soon, running your own data center will be as archaic as a company generating its own electricity.

Next turn: /Will it fly?