What is Cloud Native?
“Cloud Native” is the name of a particular approach to designing, building and running applications: it combines infrastructure-as-a-service with new operational tools and practices such as continuous integration, container engines and orchestrators. The overall objective is to improve speed, scalability and, ultimately, margin.
Who is using it?
Internet giants like Uber, Twitter, LinkedIn, Netflix and Square pioneered a cloud-native operational model, which helped them hyper-scale existing products, reduce operating costs and improve their profit margins. Many other companies now hope to adopt the same approach.
What is the Purpose of Cloud Native?
According to the Cloud Native Computing Foundation (CNCF), a Cloud Native strategy is about scale and resilience: “distributed systems capable of scaling to tens of thousands of self-healing multi-tenant nodes”.
Speed: Companies of all sizes now see a strategic advantage in being able to move quickly and get ideas to market fast. By this, we mean reducing the time it takes to get an idea into production from months to days or even hours. Part of achieving this is a cultural shift within the business: transitioning from big-bang projects to more incremental improvements while still managing risk. At its best, a Cloud Native approach de-risks change as well as accelerating it, allowing companies to delegate more aggressively and so become more responsive.
Scale: As businesses grow, it becomes strategically necessary to support more users, in more locations, with a broader range of devices, while maintaining responsiveness, managing costs, and not falling over.
Margin: In the new world of infrastructure-as-a-service, a strategic goal may be to pay for additional resources only as they’re needed – as new customers come online. Spending moves from up-front CAPEX (buying new machines in anticipation of success) to OPEX (paying for additional servers on demand). But that is not all: just because machines can be bought just in time does not mean they are being used efficiently. A later stage of Cloud Native adoption is usually to spend less on hosting.
How do you build a cloud-ready application architecture?
The twelve-factor methodology (12factor.net) is a widely used blueprint; its factors are summarised below.
I. Codebase
A cloud-native application consists of a single codebase tracked in a version control system. A codebase is a source code repository, or a set of repositories sharing a common root. The single codebase is used to produce any number of immutable releases destined for different environments.
II. Dependencies
Create a manifest file that explicitly declares all dependent library packages. It is also a good idea to use a dependency-isolation tool during execution, to ensure no implicit dependencies leak in from the surrounding system.
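In a Python app, for instance, the manifest might be a requirements.txt with pinned versions, while a virtual environment (venv) provides the isolation at execution time. The packages listed here are only examples:

```
# requirements.txt — every dependency declared explicitly, versions pinned
requests==2.31.0
redis==5.0.1
```

With the versions pinned, every build of the codebase resolves the same dependencies, regardless of what happens to be installed on the host.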
III. Config
Configuration includes database resource handles as well as credentials for accessing various external services. These can be stored in a config file, but it is prudent to use environment variables instead, mainly because they are a language- and OS-agnostic standard.
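A minimal sketch of reading config from the environment in Python; the variable names (DATABASE_URL, DEBUG) and the local-development defaults are illustrative, not prescribed:

```python
import os

def load_config(env=os.environ):
    """Read settings from the environment rather than from code or
    checked-in files; defaults keep local development convenient."""
    return {
        "database_url": env.get("DATABASE_URL", "sqlite:///local.db"),
        "debug": env.get("DEBUG", "false").lower() == "true",
    }

config = load_config()
```

Because the same binary reads its settings at startup, deploying to a new environment means setting different variables, not rebuilding the app.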
IV. Backing services
Backing services are the services an app consumes over a network connection, such as a local database or any third-party service. Treat them as attached resources that can be swapped through configuration alone.
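One way to make backing services swappable is to resolve them entirely from config, so a local database and a managed cloud one are interchangeable. A sketch, where the environment-variable name and URLs are illustrative:

```python
import os
from urllib.parse import urlparse

def attach_backing_service(env_var, default_url):
    """Resolve a backing service from config alone; swapping the URL in
    the environment swaps the service, with no code change."""
    url = urlparse(os.environ.get(env_var, default_url))
    return {"scheme": url.scheme, "host": url.hostname, "port": url.port}

# Local default for development; production sets APP_DATABASE_URL instead.
db = attach_backing_service("APP_DATABASE_URL", "postgres://localhost:5432/appdb")
```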
V. Build, release and run
The build, release and run stages should be treated separately. Use automation and tooling to generate tagged build and release packages; then run the app in the execution environment, with release-management tooling in place to enable timely rollback.
VI. Processes
Apps should execute as self-contained, stateless, share-nothing processes, and should not depend on any runtime injection to create web-facing services. All they need to do is bind to a port on the underlying execution environment and export their services on that port.
VII. Port binding
Port binding makes the app completely self-contained: running as a stateless process, it exports its services solely by binding to a port and listening on it.
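A minimal sketch using Python's standard library: the app binds a port taken from the environment (the PORT variable and the default 8000 are assumptions) instead of relying on a web server injected at runtime:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello\n")

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

# The app itself binds the port and exports HTTP on it.
port = int(os.environ.get("PORT", "8000"))
server = HTTPServer(("0.0.0.0", port), Handler)
# server.serve_forever()  # left commented so the sketch does not block
```

In production, a routing layer forwards public traffic to whatever port each process instance has bound.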
VIII. Concurrency
Scale out via the process model, treating each process in the app as a first-class citizen; this means each process can be managed independently. Design a process formation for the app that details the process types (e.g. web, workers) and the number of processes of each type needed to handle different kinds of workload.
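One common way to declare such a formation, assuming a Heroku-style platform, is a Procfile listing each process type and the command that runs it (the names and scripts below are illustrative):

```
web: python app.py
worker: python worker.py --queue emails
```

The platform can then scale each type independently – say, three web processes and one worker – without the app knowing or caring.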
IX. Disposability
Processes should shut down gracefully and also remain robust against sudden death in the case of hardware failure. A robust queueing backend (Beanstalkd, RabbitMQ, etc.) can help by returning unfinished jobs to the queue in the case of a failure.
X. Dev/prod parity
This is all about keeping the development, staging and production setups as similar as possible. Implement a continuous deployment strategy and deploy code on demand instead of sticking to a schedule. This helps catch issues more easily and at an earlier stage of development.
XI. Logs
Logs are critical for debugging, so they should be handled properly. Instead of the app managing a datastore for log files, it should output log events as a continuous stream to stdout; separate services then pick up this stream for archiving and analysis.
XII. Admin processes
Create one-off admin processes, for example to collect data from the running application. These processes should ship as part of every deploy to avoid synchronization issues.
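A sketch of such a task; the function name and the DATABASE_URL variable are hypothetical, and the point is that the script lives in the app's codebase and reads the same config as the long-running processes:

```python
import os

def collect_stats(database_url):
    """Hypothetical one-off admin task. It ships with the app code and
    reads the same config, so it always matches the running release."""
    print(f"collecting stats from {database_url}")
    return 0

# Invoked as a one-off process (e.g. `python collect_stats.py`) in the
# same environment and release as the app's regular processes.
exit_code = collect_stats(os.environ.get("DATABASE_URL", "sqlite:///local.db"))
```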
Credits: 12factor.net, contino.io