Thoughts on the RightScale Annual State of the Cloud Report

May 2, 2015

In January 2015, cloud portfolio management company RightScale Inc. surveyed 930 users of cloud services for its annual State of the Cloud report.  The findings are both interesting and insightful.  Several key findings are highlighted here.

  1. Cloud is a given and hybrid cloud is the preferred strategy. According to the survey, 93% of respondents are using the cloud in one way or another.  Further, more than half (55%) of enterprises are using hybrid clouds – either private clouds or an integration with on-premise solutions.
  • Savvy start-ups realize that public clouds can be expensive relative to self-hosting in an economy co-lo facility. Until traffic ramps to the point where the ability to scale immediately justifies it, there is no urgency to host in AWS or Azure.
  • Public clouds are ideal for a variety of scenarios – unknown, unpredictable, or spiking traffic, the need to host in a remote geography, or where an organization has priorities other than hosting. Conversely, self-hosting can be more economical.  For example, an Amazon c3.2xlarge – 8 vCPU and 15 GB RAM (as of May 2015) – is $213 / month, or approximately $2,500 / year per server.  Organizations that already have an investment in a data center or have on-premise capacity often find it cost-effective to self-host internal applications.
  • Not surprisingly, many enterprises are reluctant to walk away from significant capital investments in their own equipment. Hybrid clouds allow organizations to continue to extract value from these investments for tasks that may be difficult or costly to implement in a public cloud – for example, high-security applications, solutions which must interact with behind-the-firewall systems, or processing- / resource-intensive programs.

93% of Respondents Are Using the Cloud

  2. DevOps rises; Docker soars. DevOps is the new agile.  It is the hip buzzword floating around every organization.  According to Gene Kim, author of The Phoenix Project, DevOps is the fast flow of code from idea to customer hands.  The manifestation of DevOps is the ability to release code as frequently as several times a day.  To achieve this level of flexibility organizations need to eliminate bottlenecks and achieve what Kim calls flow.  Tools like Puppet, Chef, and Docker are enablers for DevOps.  In forthcoming surveys it can be expected that Microsoft’s InRelease (part of Visual Studio Online) and Hyper-V Containers will have prominent roles in organizations that use the Microsoft stack.

DevOps Adoption Up in 2015

  3. Amazon Web Services (AWS) continues to dominate in public cloud, but Azure makes inroads among enterprises. AWS adoption is 57 percent, while Azure IaaS is second at 12 percent.  (Among enterprise respondents, Azure IaaS narrows the gap with 19 percent adoption as compared to AWS with 50 percent.)  This is consistent with other market surveys – see the Synergy Research Group study from October 2014.
  • At this point the market has effectively narrowed to only two major cloud IaaS providers: Amazon and Microsoft. While there are other offerings from Rackspace, IBM, HP, and non-traditional sources (e.g., Verizon), these seem to be solutions for organizations that already have a relationship with that vendor or that have a specific reason for steering away from the market leaders.
  • There are certainly many other PaaS solutions, including Google and Heroku (owned by SFDC). Similarly there are many SaaS solutions, including Google Apps, NetSuite, Taleo, and many other vertical-specific solutions.
  • This respondent base is heavily represented by small business – 624 SMB vs. 306 Enterprise. Although Microsoft is working hard to attract start-ups, the reality is that today most entrepreneurs choose open source technologies over Windows.  Conversely, Microsoft technologies are disproportionately represented in larger enterprises.  While today AWS is the undisputed market leader, Azure is growing quickly and can be expected to close the gap.  Microsoft is investing heavily in its technology, is actively reaching out to the open source community, and is making it apparent that it is not satisfied with being an also-ran.
  4. Public cloud leads in breadth of enterprise adoption, while private cloud leads in workloads.
  5. Private cloud stalls in 2015 with only small changes in adoption.
  • Private clouds are being used for functional and load testing as well as for hosting internal applications (e.g., intranets) where the costs and risks associated with a public footprint do not exist. It makes sense that applications which in the past would have run on “farms” of low-end desktop PCs and blade servers in server closets have been moved to private clouds hosted on virtualized servers that can be centrally managed, monitored, and delivered to users more cost-effectively.
  • It is interesting that the data suggests the market for virtualization infrastructure has matured and is not growing. The market leader in this space continues to be VMware, with Microsoft gaining traction in enterprises.
  6. Significant headroom remains for more enterprise workloads to move to the cloud. An interesting data point – 68% of enterprise respondents say that less than 20% of their applications are currently running in the cloud.
  • It will be interesting to see how this number changes over time. Reversing the statistic – 80% of enterprise applications are still run on premise.  This could be due to IT organizations’ heavy investment in capitalized equipment and data centers.  It could be that the economics of a public cloud are still too expensive to justify the move.  There could be technical limitations, such as security, which are holding back cloud adoption.  Finally, there could be organizational prejudices against taking what is perceived as a risk in embracing the public cloud.  Very likely it is all of the above.
  • The role of a visionary CTO is to move their organization forward to embrace new technologies, break down prejudices, and find new and better ways to serve customers. Cloud vendors are working to make it easier for organizations of all sizes to adopt the cloud by lowering cost, increasing security, and providing new features which make management more seamless.
  • While this study does not provide any data on the breakdown of PaaS vs. IaaS, it is a reasonable assumption that most enterprise adoption of the cloud is IaaS, as this is by and large simply re-hosting an application as-is. PaaS applications, on the other hand, typically need more integration, which in many cases involves software development.  Once done, however, PaaS applications are often more secure, scalable, and extensible, as they take advantage of the hosting platform infrastructure.

Cloud Challenges 2015 vs. 2014

Finally, RightScale has a proprietary maturity model which ranks organizations’ comfort level with cloud-related technologies.  Interestingly, the data suggests that nearly 50% of organizations have yet to do any significant work with the cloud.  This can certainly be expected to change over the next 2-3 years.

Cloud Maturity of Respondents


Enterprise Adoption of Cloud Technology

July 29, 2014

Forrester recently published a research note on enterprise adoption of cloud technology.  The full report can be downloaded here (after registration).  As the report was commissioned by Akamai, which is by no means a neutral third party, the results need to be considered with caution.  That said, there are some interesting conclusions.

  • Public cloud use is increasing across a number of business-critical use cases.

This is not a surprise.  Public clouds have become mainstream.  Amazon’s case study page is a who’s who of well-known traditional brand names including Hess, Suncorp, Dole, and Pfizer, as well as newer technology-oriented companies such as Netflix, Shazam, Airbnb, and Expedia.

  • Cloud success comes from mastering “The Uneven Handshake.”

The gist of this point is that organizations have specific requirements (e.g., security, access to behind-the-firewall data, etc.) which may be incompletely fulfilled by a particular cloud offering.  In order to use a cloud solution it may be necessary to stitch together multiple provider solutions with custom “glue” code.

  • It’s a hybrid world

Most organizations that have been around for a while have an investment in on-premise systems.  In addition to providing valuable services that work (think: don’t fix what isn’t broken), these systems are known commodities and are typically capitalized pieces of equipment/software.  In a perfect world it would often be cleaner to create a homogeneous configuration entirely on a cloud platform.  Unfortunately we do not live in a perfect world, and many times cloud systems have to be made to co-exist with legacy systems for technical, cost, or other reasons.

One particularly interesting finding is that most enterprises are quite satisfied with their investment in the cloud.  This conclusion is illustrated in the following figure.

How well did your chosen service actually meet key metrics?

Enterprise Considerations

As organizations begin the journey to the cloud, or expand their operations in it, there are a number of important considerations.  Each of these topics stands on its own, and literally thousands of pages of documentation exist on each.  Here are some brief overview thoughts.

  • Platform as a Service (PaaS) or Infrastructure as a Service (IaaS)

In a PaaS configuration the provider manages the infrastructure, scalability, and everything other than the application software.  In an IaaS configuration the enterprise that licenses the software has total control of the platform.  There are pros and cons to both.  PaaS can be very appropriate for small organizations that wish to off-load as much of the hosting burden as possible, but PaaS platforms offer less control and less flexibility.  IaaS provides organizations as much control as they would have in a self-hosted model; the trade-off is that the organization is responsible for provisioning and maintaining all aspects of the infrastructure.  Enterprises new to the cloud may find that their IT group is most comfortable with IaaS, as it is much more familiar territory.  As the IT group is the one who answers the panicked call at 2:00 AM, its conservative nature can be understood.

  • Picking the right provider

Google AppEngine, Heroku, and Amazon Elastic Beanstalk are some of the most well-known PaaS platforms.  Amazon’s EC2 platform and Microsoft Azure Virtual Machines are the two dominant platforms in the IaaS space.  (Azure also has a rich PaaS offering called Web Sites.)  Rackspace has strong offerings as well – particularly in the IaaS space.

  • Platform lock in

With an IaaS model careful consideration should be given to the selection of technology components.  To the point made in the Forrester report, interfaces between existing components need to be considered and configured to work together.  Further consideration should be given to whether platform-specific technologies should be used.  For example, Amazon offers a proprietary queuing solution (SQS – Simple Queue Service), while RabbitMQ is a well-respected open source queuing platform.  The choice of SQS would lock an organization into Amazon, whereas the choice of RabbitMQ allows more flexibility to shift to another platform.  Again, these are trade-offs to be considered.
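One common way to keep the flexibility described above is to code against a thin queue abstraction rather than a specific broker’s API.  The Python sketch below uses hypothetical class names; the in-memory backend stands in for a real SQS (boto3) or RabbitMQ (pika) adapter:

```python
from abc import ABC, abstractmethod

class MessageQueue(ABC):
    """Minimal queue interface the application codes against, so the
    broker behind it (SQS, RabbitMQ, ...) can be swapped later."""
    @abstractmethod
    def send(self, body): ...
    @abstractmethod
    def receive(self): ...

class InMemoryQueue(MessageQueue):
    """Stand-in backend for local testing; an SQS adapter or a
    RabbitMQ adapter would implement the same two methods."""
    def __init__(self):
        self._items = []
    def send(self, body):
        self._items.append(body)
    def receive(self):
        # FIFO delivery; returns None when the queue is empty.
        return self._items.pop(0) if self._items else None

# Application code depends only on MessageQueue, so moving off one
# broker means writing a new adapter, not rewriting every caller.
queue = InMemoryQueue()
queue.send("order-123")
print(queue.receive())  # order-123
```

The trade-off here is a small indirection layer in exchange for being able to change providers without touching business logic.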

  • Security

With enough time and effort public cloud technology can theoretically be made as secure as an on premise solution.  This topic is considered by the Forrester report.  They note “The most common breaches that have occurred in the cloud have not been the fault of the cloud vendors but errors made by the customer.”  Should an organization make the decision to hold sensitive business-critical information in the cloud a best practice would be to retain a subject matter expert in cloud security and conduct regular third-party penetration testing.

  • Global Footprint and Responsiveness

One of the advantages of working with a public cloud provider is that an organization can cost-effectively host its applications around the world.  For example, Amazon offers three hosting options in the Asia Pacific region alone and nine regions world-wide.  Hosting in another geography is on the surface attractive for improving response times for customers as well as for complying with country-specific privacy regulations.  For most organizations hosting in a shared public cloud is much cheaper than self-hosting in a remote geography.  Organizations should be aware that hosting in a given region may or may not improve response times depending on how their customers access the service; your mileage may vary depending on customer network routing.  Performance testing using a service like Compuware can help identify how your customers access your content.  Similarly, care needs to be taken to ensure compliance with privacy laws.  For example, it is a well-known requirement that PII data from EU citizens should not leave Europe without the user’s consent.  A public cloud can be used to comply with this directive; however, should administrators from the US have the ability to extract data from that machine, the organization may not be meeting the requirements of the law.

  • Uptime and monitoring

Finally, enterprises need to be concerned with up-time.  It is a law of nature that all systems go down; even the biggest, most well-maintained systems have unplanned outages.  Nearly every cloud system has a distributed architecture such that rarely does the entire network go down at the same time.  Organizations should carefully consider (and test) how they monitor their cloud-hosted systems and how they fail over should an outage occur, just as they do with on-premise solutions.  Should an organization embrace a hybrid hosting strategy, the cloud could fail over to the self-hosted platform and vice versa.
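As a rough illustration of the monitor-and-fail-over idea, the sketch below walks an ordered endpoint list and returns the first one that passes a health probe.  Everything here (URLs, probe mechanics) is an assumption for illustration, not a prescribed design:

```python
def first_healthy(endpoints, probe):
    """Return the first endpoint whose health probe succeeds.
    `probe` is any callable that checks an endpoint (e.g., an HTTP
    GET against a /health route) and returns True or False."""
    for url in endpoints:
        try:
            if probe(url):
                return url
        except OSError:
            continue  # endpoint unreachable: try the next one
    raise RuntimeError("no healthy endpoint available")

# A hybrid deployment would list the cloud endpoint first and the
# self-hosted one as the fallback (placeholder URLs).
endpoints = ["https://cloud.example.com", "https://onprem.example.com"]
# Fake probe for demonstration: pretend only the on-premise host is up.
print(first_healthy(endpoints, probe=lambda url: "onprem" in url))
```

The same routine works in either direction, which is the point of the hybrid strategy: either side can serve as the other’s backup.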

Using the right database tool

April 27, 2014

Robert Haas, a major contributor and committer on the PostgreSQL project, recently wrote a provocative post entitled “Why the Clock is Ticking for MongoDB.”  He was actually responding to a post by the CEO of MongoDB, “Why the clock’s ticking for relational databases.”  I am no database expert; however, it occurs to me that relational databases are not going anywhere AND NoSQL databases absolutely have a place in the modern world.  (I do not believe Haas was implying this was not the case.)  It is a matter of using the right tool to solve the business problem.

As Haas indicates, RDBMS solutions are great for many problems, such as query and analysis, where the ACID properties (Atomic, Consistent, Isolated, and Durable) are important considerations.  When the size of the data, the need for global scale, and transaction volume grow (think Twitter, Gmail, Flickr), NoSQL (read: not-only-SQL) solutions make a ton of sense.

Kristof Kovacs has the most complete comparison of the various NoSQL solutions.  Mongo seems to be the most popular document database, Cassandra for row/column data, and Couchbase for caching.  Quoting Kovacs: “That being said, relational databases will always be the best for the stuff that has relations.”  To that end there is no shortage of RDBMS solutions from the world’s largest software vendors (Oracle – 12c, Microsoft – SQL Server, IBM – DB2) as well as many open source solutions such as SQLite, MySQL, and PostgreSQL.

In the spirit of being complete, Hadoop is not a database per se – though HBase is a database built on top of Hadoop.  Hadoop is a technology meant for crunching large amounts of data in a distributed manner, typically using batch jobs and the map-reduce design pattern.  It can be used with many NoSQL databases, such as Cassandra.
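The map-reduce pattern mentioned above can be sketched in a few lines of single-process Python.  Real Hadoop jobs distribute the same two phases across machines, but the shape is the same (the shard data here is illustrative):

```python
from collections import Counter
from functools import reduce

def map_shard(docs):
    # Map phase: each "node" turns its own shard of documents
    # into local word counts.
    return Counter(word for doc in docs for word in doc.split())

def reduce_counts(a, b):
    # Reduce phase: merge per-shard counts into one global count.
    a.update(b)
    return a

shards = [["big data big"], ["data pipelines"]]
totals = reduce(reduce_counts, (map_shard(s) for s in shards), Counter())
print(totals["big"], totals["data"])  # 2 2
```

The key property is that the map step needs no coordination between shards and the reduce step is associative, which is what lets the work spread across a cluster.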

2013 Pacific Crest SaaS Survey

October 20, 2013

In September, David Stock of Pacific Crest (an investment bank focusing on SaaS companies) published the results of their 2013 Private SaaS Company Survey.   The survey covers a variety of financial and operating metrics pertinent to the management and performance of SaaS companies, ranging from revenues, growth, and cost structure to distribution strategy, customer acquisition costs, renewal rates, and churn.  Here are some of my top-line observations:

  • 155 companies surveyed with median: $5M revenue, 50 employees, 78 customers, $20K ACV, and a global reach
  • Excluding very small companies, median revenue growth is projected to be 36% in 2013
  • Excluding very small companies, the most effective distribution mode (as measured by growth rate) is mixed (40%), followed by inside sales (37%), field sales (27%), and internet sales (23%)
  • Excluding very small companies $0.92 was spent for each dollar of new ACV from a new customer, where it was only $0.17 to upsell
  • The median company gets 13% of new ACV from upsells, however, the top growers upsell more than slower ones
  • Companies which are focused mainly on enterprise sales have the highest levels of PS revenue (21%) with a median GM of 29%
  • Median subscription gross margins are 76% for the group
  • Approximately 25% of companies make use of freemium in some way, although very little new revenue is derived here.  Try Before You Buy is much more commonly used: two-thirds of the companies use it and many of those derive significant revenues from it
  • The average contract length is 1.5 years with quarterly billing terms
  • Annual gross dollar churn (without the benefit of upsells) is 9%
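The acquisition-cost figures above ($0.92 of sales and marketing spend per $1.00 of new-customer ACV vs. $0.17 per $1.00 of upsell ACV) imply a large efficiency gap, which a quick back-of-the-envelope calculation makes concrete (the deal size is hypothetical):

```python
# Illustrative arithmetic only; the $0.92 and $0.17 ratios come from
# the survey bullets above, and the $100K deal size is made up.
new_acv = 100_000
cost_new_customer = round(0.92 * new_acv)  # spend to win it from a new customer
cost_upsell = round(0.17 * new_acv)        # spend to win it as an upsell
print(cost_new_customer, cost_upsell)      # 92000 17000
print(round(cost_new_customer / cost_upsell, 1))  # 5.4 (x cheaper to upsell)
```

This is consistent with the survey’s observation that the top growers upsell more than the slower ones: expansion revenue is several times cheaper to acquire.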

The full survey can be accessed here.

Microsoft’s Azure Stores

August 25, 2013

Microsoft Azure has historically lagged far behind Amazon’s EC2 in the market and in the hearts and minds of most developers.  Azure started out life as a Platform as a Service (PaaS) offering, which pretty much no one wanted.  Indeed most developers wanted Infrastructure as a Service (IaaS) – like EC2 has had since day one.  The difference between PaaS and IaaS is that with IaaS you can deploy and manage your own application in the cloud vs. being constrained to compiled / packaged offerings.  Further, Amazon has been innovating at such a rapid pace that at pretty much every turn Azure has looked like an inferior offering by comparison.

In mid-2011 Microsoft moved their best development manager, Scott Guthrie, onto Azure.  Also working on the Azure project since 2010 is Mark Russinovich, arguably Microsoft’s best engineer.  At this point Microsoft truly has their “A” team on Azure, and they are actively using it with Outlook.com (the replacement for Hotmail) and SkyDrive (deeply integrated into Windows 8).  Amazon EC2 is still the gold standard in cloud computing, but Azure is increasingly competitive.  The Azure Store is a step toward building parity.  The Store was announced in the fall of 2012 at the Build Conference and has come a long way in a short period of time.  By way of reference, Amazon has something similar called the AWS Marketplace.

There are actually two different entities.  The Azure Store is meant for developers and the Azure Marketplace is meant for analysts and information workers.  My sense is that the Marketplace has been around longer than the Store, as it has a much richer set of offerings.  Some of the offerings overlap between the Store and the Marketplace. For example, the Worldwide Historical Weather Data can be accessed from both places.


Similarities:

  • Both have data and applications.
  • Both operate in the Azure cloud.


Differences:

  • Windows Azure Store: the integration point is always via API.
  • Marketplace: applications are accessed via a web page or another packaged application, such as on a mobile device; data can be accessed via Excel, (sometimes) an Azure dataset viewer, or integrated into your application via web services.

What is confusing to me is why there are so many more data applications in the Marketplace than in the Store.  For example, none of the extensive Stats Inc data is in the Store.  It may be that the Store is just newer and has yet to be fully populated.  See this Microsoft blog entry for further details.

I went and kicked the tires of the Azure Store and came away very impressed with what I saw: approximately 30 different applications (all in English).  There are two different types of apps in the store – App Services and Data.  Although I did not write a test application, I am fairly confident that both types are accessed via web services.  App Services provide functionality, while Data apps provide information.  In both cases these apps can be thought of as buy vs. build.

  • App Services: You can think of a service as a feature you would want integrated into your application.  For example, one of the App Services (Staq Analytics) provides real-time analytics for service-based games.  In this case a game developer would code Staq Analytics into their games, which in turn would provide insight on customer usage.  Another application, MongoLab, provides a NoSQL database.  The beauty of integrating an app from the Azure marketplace is that you as the customer never need to worry about scalability; Microsoft takes care of that for you.
  • Data: Data applications provide on-demand information.  For example, Dun and Bradstreet’s offering provides credit report information, Bing provides web search results, and StrikeIron validates phone numbers.  As with App Services, Azure takes care of the scalability under load.  Additionally, with a marketplace offering the data is theoretically as fresh as possible.

Further detail on the Store can be found here.

All in all the interface is very clean and straightforward to use.  There is a store and a portal.  Everything in the store appears to be in English, though based on the URL it looks like it might be set up for localization.  The portal is localized into 11 languages.  The apps do not appear to be localized – though the Azure framework is.  As a .Net developer I feel very comfortable using this environment and am impressed with how rich the interface has become – increasingly competitive with EC2 on a usability basis.

Applications are built using the Windows Azure Store SDK.  There is a generic mailto address for developers to get in contact with Microsoft.  There is also an Accelerator Program which gives applications further visibility in the Azure Store.

It is probably worth highlighting that Microsoft actually has a third “store” of sorts called VM Depot (presently in preview mode), which focuses more on the IaaS approach and on bridging “on premise” and “off premise” clouds with Hyper-V and Azure portability.

Finally, identification technologies are also gaining a lot of focus, striving to unify the experience for hybrid deployments of on-premise or hosted IaaS when combined with Azure PaaS. The ALM model is also starting to be unified so that both Azure and Windows Hyper-V deployments will be delivered by development teams as defined packages – databases as DACs, applications as CABs / MSDeploy, sites as WebDeploy / Git, etc. – with many Azure features, such as the Service Bus, being ported back to Windows Server. Additionally, monitoring services are starting to unify under this model to define a transparent, unified distributed service.

Azure pulls closer to AWS

May 11, 2013

It is no secret that Windows Azure has historically lagged far behind Amazon EC2 in the market and in the hearts and minds of most developers.  Azure started out life (in 2008) as a somewhat clunky 1.0 offering.  The initial Platform as a Service (PaaS) model was really only useful for running .Net applications in the cloud.  While the process was straightforward and the applications worked well in the cloud, publishing the project to Azure was slow.  Further, once you had the project in the cloud it was not very intuitive how you could scale the application.  Finally, there was not much support for languages other than C#, VB.Net, or C++, or for common frameworks like Drupal, WordPress, etc.

Things have really changed in Redmond, and I am now thinking that Azure may have pulled even with, or in some areas even eclipsed, Amazon.  In mid-2011 Microsoft moved their best development manager, Scott Guthrie, onto Azure.  Also working on the Azure project since 2010 is Mark Russinovich, arguably Microsoft’s best engineer.  At this point Microsoft truly has their “A” team on Azure.

I used to say that what most developers want is Infrastructure as a Service (IaaS) – like Amazon EC2 has had since day one.  The difference between PaaS and IaaS is that with IaaS you can deploy and manage your own application in the cloud vs. being constrained to compiled / packaged offerings.  I’ve not been a fan of PaaS based on my early experience with platforms such as Azure and Google AppEngine.  The common thread in all of these platforms was that you were by design limited in your ability to manage the underlying platform.  If a particular technology was not installed on the platform you could not use it.  For example, AppEngine only supports Java, Python, and Go.

I’ve since changed (or possibly evolved) my thinking about PaaS.  As a developer I want to focus on delivering features to customers.  I would prefer not to have to worry about hosting, as long as the platform gives me enough control to get my job done.  Hosting should provide easy deployment and the ability to tune the application, debug it, and easily scale it.  Azure’s initial PaaS solution offered none of these.  In the middle of 2012 Azure began supporting a PaaS technology called Web Sites.  Web Sites allow you to quickly and easily create a site, deploy pretty much any language to it, and scale it with only a few clicks.  It is so slick that it has caused me to re-think whether I really need IaaS on Azure.  If I don’t have to, why would I mess with my own VMs?  At this point Microsoft seems to have the best of both worlds (PaaS and IaaS) – a more mature version of web roles, web sites, and VMs.  Here is a nice comparison of the applications for each model and an analysis of the pros and cons of sites vs. roles.

Amazon has been innovating at such a rapid pace that at pretty much every turn Azure has historically looked like an inferior offering by comparison.  Again, it would seem that Microsoft is not satisfied with also-ran status.

  • One of the big things missing from Azure was the ability to connect to corporate networks. Microsoft recently announced that they would be adding additional networking capabilities to Azure.
  • Amazon has for a while had a marketplace for products and services that are add-on offerings to AWS. Microsoft announced a similar offering in the fall of 2012 at the Build Conference, and it has come a long way in a short period of time.  Microsoft actually has two offerings: the Azure Store for developers and the Azure Marketplace for analysts and information workers.  My sense is that the Marketplace has been around longer than the Store, as it has a much richer set of offerings.  Some of the offerings overlap between the Store and the Marketplace. For example, the Worldwide Historical Weather Data can be accessed from both places.
  • Finally, and perhaps most significantly, as part of their General Availability announcement for IaaS, Microsoft committed to matching Amazon’s pricing on commodity services – compute, storage, and bandwidth.

Best practices for adding scalability

November 11, 2011

My thesis is that you can’t have a good SaaS application that doesn’t scale.  By definition the need for scalability is driven by customer demand, but there is demand and there is DEMAND. A handful of lucky organizations (Google, Twitter, Facebook) are faced with industrial-strength volume every minute of every day. Organizations with this type of DEMAND can afford to have entire divisions dedicated to managing scalability. Most people are dealing with optimizing their resources for linear growth, or the happy situation where their application (Instagram) catches fire (in some cases overnight). A scalable architecture makes it possible to expand to cloud services such as EC2 and Azure, or even to locally hosted capacity. Absent a scalable architecture, an organization is faced with curating a collection of tightly coupled servers and overseeing a maintenance nightmare.

Scalability is the ability to handle additional load by adding more computational resources.  Performance is not scalability; however, improving system performance mitigates to some degree the need for scalability.  Performance is the number of operations per unit of time that a system can handle (e.g., words / second, pages served / day, etc.).  There are two types of scalability – vertical and horizontal.

Vertical scalability is achieved by adding more power (more RAM, a faster CPU) to a single machine, and typically results in incremental improvements.  Horizontal scalability accommodates more load by distributing processing across multiple computers.  Where vertical scalability is relatively trivial to implement, horizontal scalability is much more complex.  Conversely, horizontal scalability offers theoretically unlimited capacity.  Google is the classic example of near-infinite horizontal scalability using thousands of low-cost commodity servers.
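A minimal way to picture horizontal scaling is request routing across a pool of interchangeable nodes.  The Python sketch below uses a naive hash-modulo scheme with made-up node names; it illustrates the concept rather than a production router:

```python
import hashlib

# Hypothetical pool of worker nodes; scaling horizontally means adding
# entries to this list rather than upgrading a single machine.
nodes = ["node-a", "node-b", "node-c"]

def route(key):
    """Deterministically map a request key to one node so load spreads
    across the pool.  This naive modulo scheme reshuffles many keys
    when the pool changes; real systems often use consistent hashing
    to limit that reshuffling."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

# The same key always lands on the same node, so per-key state
# (sessions, shards) stays local to that node.
print(route("customer-42") == route("customer-42"))  # True
```

Adding a fourth node increases total capacity, which is exactly the property vertical scaling cannot deliver past the largest available machine.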

If you have the luxury of working off a blank sheet of paper, or have the flexibility to implement a major new technology stack, some of the better solutions for implementing scalability include ActiveMQ and Hadoop. Microsoft’s AppFabric Service Bus promises capability in this area for Azure-hosted applications. Many times scalability was considered when an application was first created but has proven to be inadequate for current demand.  The following are suggestions for improving an existing application’s scalability.

Microsoft’s Five Commandments of Designing for Scalability

  • Do Not Wait – A process should never wait longer than necessary.
  • Do Not Fight for Resources – Acquire resources as late as possible and then release them as soon as possible.
  • Design for Commutability – Two or more operations are said to be commutative if they can be applied in any order and still obtain the same result.
  • Design for Interchangeability – Manage resources such that they are interchangeable (e.g., database connections).  Keep server-side components as stateless as possible.
  • Partition Resources and Activities – Minimize relationships between resources and between activities.

Microsoft’s Best Practices for Scalability

  • Use clustering technologies such as load balancers, message brokers, and other solutions that implement a decoupled architecture.
  • Consider logical vs. physical tiers, such as the model-view-controller (MVC) architecture.
  • Isolate transactional methods such that components implementing transactions are distinct from those that do not.
  • Eliminate business-layer state such that, wherever possible, server-side objects are stateless.

Shahzad Bhatti’s Ten Commandments for Scalable Architecture

  1. Divide and conquer – Design a loosely coupled and shared nothing architecture.
  2. Use messaging oriented middleware (ESB) to communicate with the services.
  3. Resource management – Manage HTTP sessions (and remove them for static content); close all resources, such as database connections, after use.
  4. Replicate data – For write intensive systems use master-master scheme to replicate database and for read intensive systems use master-slave configuration.
  5. Partition data (Sharding) – Use multiple databases to partition the data.
  6. Avoid single points of failure – Identify any single points of failure in hardware, software, network, and power supply.
  7. Bring processing closer to the data – Instead of transmitting large amounts of data over the network, bring the processing closer to the data.
  8. Design for service failures and crashes – Write your services as idempotent so that retries can be done safely.
  9. Dynamic Resources – Design service frameworks so that resources can be removed or added automatically and clients can automatically discover them.
  10. Smart Caching – Cache expensive operations and contents as much as possible.