Cloud Native

Doing Cloud-Native the Right Way. Key Components.

Rohith Reddy Gopu
Principal Architect @ TYNYBAY
Mar 24th, 2022
12 min read

The pandemic has made "Cloud-Native" a prerequisite to thrive in the new normal, but the right path to "Cloud Native Transformation" is still a critical challenge that organizations face.

This document can serve as your starting point for the Cloud Native Transformation roadmap.

What is Cloud Native?

Let us try to get an overview of what it means to build a "Cloud Native" application.

A decade ago, deploying and monitoring an application in a production environment needed data centre network engineers and application developers working in parallel to get the software to production. In most cases, the task involved handling a long list of bugs and production servers breaking during deployment. This, in turn, led to hundreds of engineers working extra hours to fix bugs on the fly.

Today, we're in a different world; the emergence of cloud has transformed how technology is built and deployed! And with cloud-native, it has become easier to develop and deploy production-ready apps.

Cloud-Native is often defined as "running your applications on the Kubernetes platform," but cloud-native is much more than just Kubernetes.

Cloud-Native is much more than Kubernetes

The Cloud-Native way of building applications involves a Microservices Architecture for developing services. Each service is packaged into an image, making it easier to deploy; without images, we would install all of a project's dependencies manually. A running instance of an image is called a container. Containers share the same base operating system, but each one runs as an isolated process. When the number of images or containers grows, Kubernetes comes in to orchestrate them. We define the specifications each image requires in a configuration file, and YAML is the most widely used format for writing these files.
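As a minimal sketch of what such a configuration file looks like, here is a Kubernetes Deployment for a hypothetical "orders" service; the image name, registry, and port are illustrative, not taken from any real project:

```yaml
# A minimal Kubernetes Deployment for a hypothetical "orders" service.
# The image name, registry, and port below are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
  labels:
    app: orders
spec:
  replicas: 3                # keep three containers of this image running
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: example.registry.io/orders:1.0.0   # hypothetical image
          ports:
            - containerPort: 8080
```

Applying this file with kubectl apply -f tells Kubernetes to keep three running containers of that image at all times, restarting them if they fail.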

What's driving the Cloud Native Transformation? - The need for speed

Cloud Native in this new normal is driven by "the need for speed". Getting your products and digital experiences to market has to be faster, and release cycles have to be shortened as much as possible to cater to your customers' needs.

According to Gartner, though 50% of Enterprises have adopted Agile & DevOps, it is limited to merely 15% of the Applications they build or manage.

Let's now dive deeper and understand the key components of cloud-native -

The Core Components of Cloud Native Computing:

  1. Agile & DevOps
  2. Microservices
  3. Service Mesh
  4. Containers

1. Agile & DevOps

Agile & DevOps have proven to be efficient for teams of all sizes. The cultural shift takes time, especially for large teams, and it is vital for starting your Cloud-Native journey.

Amazon releases new software or changes to production every 11 seconds by automating its entire deployment and release process using DevOps.

Netflix launched "Chaos Monkey", a project that randomly terminates instances in production to ensure that engineers build services resilient to failure.

Projects like these could bring a substantial cultural change inside the organizations and promote developer autonomy.

Getting Started with Agile & DevOps adoption
Agile & Continuous Planning

Agile is more about cultural change than a process that everyone adapts to. You can use different methodologies, such as Scrum, to streamline the Agile process; Scrum is the most popular and focuses on improving developer productivity and efficiency.

Considerations to stress when thinking about Agile -

  1. Big vs. Small - While the big picture needs to stay intact, it is crucial to segregate the software into small modules that can stand independently.
  2. Breaking it down - Breaking larger initiatives into small chunks, such as epics, gives a detailed view of what needs to be done to complete the development.
  3. Estimating - Once you can break the entire project into smaller chunks, estimating becomes more straightforward and gives you an understanding of what lies ahead.
  4. Release cycles - Proper estimates help you plan better release cycles, improve time to market, and build more efficient software with seamless experiences.
  5. Keep improving - Incremental improvements to release cycles, driven by continuous feedback, drive innovation.

DevOps

DevOps involves Continuous Integration, Testing & Deployment; it is about automating the code and deployment pipeline. Code integration and testing issues are common in large projects where hundreds of developers work on a common code base, and there were times when developers spent hours merging their pieces of code before a release. DevOps solves this problem with CI/CD. Continuous Integration, as the name suggests, integrates the frequent changes made by different developers on different branches and puts them together seamlessly. The most important part of it is automated testing, which ensures that everything works when put together.

While the CI/CD process remains mostly standardized, Cloud Native has changed a few things about how it is configured and implemented, and we will be discussing the Cloud Native way of doing CI/CD here. Cloud Native CI also involves building those pieces of code on open specs and standards: the output should be reusable and deployable anywhere. Delivering those changes continuously to Kubernetes clusters is Continuous Delivery.
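As a hedged sketch of such a pipeline - using GitHub Actions syntax here purely as one possible CI system, with a hypothetical registry and test script - the flow of build, automated test, and delivery might look like this:

```yaml
# A hypothetical cloud-native CI/CD pipeline (GitHub Actions syntax assumed).
# Registry, service, and script names are illustrative.
name: ci
on:
  push:
    branches: [main]
jobs:
  build-test-deliver:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the service image
        run: docker build -t example.registry.io/orders:${{ github.sha }} .
      - name: Run automated tests inside the image
        run: docker run --rm example.registry.io/orders:${{ github.sha }} ./run-tests.sh
      - name: Push the tested image to the registry
        run: docker push example.registry.io/orders:${{ github.sha }}
      - name: Roll the new image out to the Kubernetes cluster
        # assumes cluster credentials were configured in an earlier step
        run: kubectl set image deployment/orders orders=example.registry.io/orders:${{ github.sha }}
```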

Design - Microservices (loosely coupled, independent services)

Deploy - CI/CD (high-quality code, release often)

Manage - Shared responsibility across the application development cycle


How will DevOps add value to your software development & delivery?

  1. Avoids rework
  2. Increases the speed of deployments
  3. Improves the quality of the code
  4. Improves collaboration between remote teams
  5. Enables early defect detection and better security practices

GitOps for Cloud Native - GitOps is a subset of DevOps and is more an approach than a technology. A GitOps workflow manages infrastructure code and Kubernetes cluster deployments purely through Git pull requests: whenever changes land in Git, the cluster is updated to match. GitOps also lets you separate the continuous integration and continuous deployment process from the application source (a sketch follows at the end of this section).

Weaveworks, a product company, was the first to bring out this approach. In a recently concluded CNCF virtual conference, the panel defined GitOps as these two things:

  • An operating model specifically for Kubernetes and other cloud-native technologies, offering a set of best practices to unify deployment, management, and the monitoring of containerized clusters and applications.
  • A route towards a developer experience for managing applications, where end-to-end CI/CD pipelines and git workflows are applied to operations and development.
GitOps = Continuous Delivery + Continuous Operation

GitOps gives the power of deployment to every developer in the project.
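As a minimal sketch of this model - using Flux, the GitOps toolkit that originated at Weaveworks, as one possible implementation, with a hypothetical config repository - two resources are enough to keep a cluster synchronized with Git:

```yaml
# A GitOps sketch using Flux; the repository URL and path are hypothetical.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-config
  namespace: flux-system
spec:
  interval: 1m                          # poll Git for new commits every minute
  url: https://github.com/example/app-config
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: app-config
  path: ./deploy/production             # manifests in the repo to apply
  prune: true                           # remove cluster objects deleted from Git
```

With this in place, merging a pull request to the config repository's main branch is the deployment: Flux notices the new commit and reconciles the cluster toward it.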

2. Microservices

Microservices can be defined as loosely coupled services that can be developed, deployed, and monitored independently. The style is part of the SOA paradigm for organizing distributed services under the control of different ownership domains.

Uncle Bob puts this more simply:

Gather together those things that change for the same reason, and separate them for different reasons.

The principle, known as the Single Responsibility Principle (SRP), was coined by Robert C. Martin, often referred to as Uncle Bob. The SRP says that a subsystem, a class, a module, or even a function should not have more than one reason to change.

When services are small and independent, work becomes easier for remote teams and independent developers because the service boundaries are clearly defined. This improves the team's productivity and helps ship software faster and at regular intervals. Keeping the services independent also allows them to be built in different programming languages; choosing the right technology stack becomes easier with the freedom to explore multiple languages for different services. After development, each service can be independently deployed, monitored, and scaled using continuous delivery (CD).

Monolith vs Microservices

Monolithic applications are built and deployed as a single unit, with no isolation between the different modules inside the application. Deploying new changes from a single module can therefore have unintended effects on all the other modules.

Monolithic Apps are -

  1. Tough to maintain as codebases grow large.
  2. Hard to scale; generally, the entire application is scaled horizontally.
  3. Limited to a single technology stack.
  4. Messy to deploy.
  5. Forced to redeploy the entire application for any change.
  6. Stuck with longer release cycles.
  7. Left with very little reusability.

Microservice-based Apps are -

  1. Easy to maintain, thanks to smaller codebases.
  2. Able to scale independently and efficiently with containers.
  3. Free to use multiple programming languages / technology stacks for independent services.
  4. Easy to deploy through automation.
  5. Built from independent services that can be changed and deployed without affecting other components.
  6. Released in short cycles.
  7. Highly reusable, since independent services follow the Single Responsibility Principle.

Why Microservices for Cloud Native?

What good will a monolithic application do if you deploy the same app in multiple containers? The power of containerization can only be unleashed with microservices: by running services in separate, independent containers, they can be developed in different languages and deployed independently.

Containers remove dependencies on languages and frameworks by isolating the environment for each service. Hot services can be scaled individually, which improves the efficiency of infrastructure utilization. As the number of services in an application grows, so does the number of containers, and we need a framework to manage them for effective resource utilization. While Docker is the most widely used container platform, Kubernetes has become the most widely used orchestrator for Docker containers.

Is it all "Pros with Microservices"?

No - every methodology, culture, and approach comes with its challenges. Implementing microservices can be difficult if you are not following Agile or DevOps; the microservices approach suits best when you can implement Agile and, more importantly, have the right set of DevOps resources in place.

Operational complexity also increases as microservices grow in number; managing many instances and containers is challenging and requires a high level of automation. The initial investment in microservices can be higher, but it reduces costs over time, and the ROI is high once you reach fully cloud-native-ready applications. As the services grow, managing service communication, service discovery, and load balancing becomes complex - but that isn't the end of the road. Service Mesh architecture addresses these concerns around the scalability of microservices.

3. Service Mesh

Service Mesh architecture becomes essential at scale; without it, a microservices implementation cannot be utilized effectively.

What is Service Mesh?

In simple terms, a service mesh is a configurable infrastructure layer inside the application that handles high-volume service-to-service communication. A service mesh provides critical capabilities such as service discovery, load balancing, observability, and traceability. It uses "sidecar" proxies, deployed alongside each service, to monitor and facilitate inter-service communication.

The Infrastructure layer for Service to Service communication.
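As a small illustration of the sidecar model - assuming an Istio installation (covered below) with automatic injection enabled - labeling a namespace is all it takes for every new pod in it to receive a proxy:

```yaml
# Assumed: an Istio installation with automatic sidecar injection.
# Labeling the namespace makes Istio inject a proxy into every new pod.
apiVersion: v1
kind: Namespace
metadata:
  name: orders        # hypothetical namespace
  labels:
    istio-injection: enabled
```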

Advantages of a Service Mesh

  1. Simplified inter-service communication - a network for services.
  2. Improved resiliency and efficiency.
  3. Load balancing with automated failover.
  4. Improved security for services.
  5. Circuit breakers and traffic control.
  6. Policy enforcement.

Service Mesh vs. API Gateway

While a service mesh helps you use a services architecture effectively, it is not straightforward if the use case is not fully understood, and it is often confused with an API gateway. Hence, it is essential to understand the differences between an API Gateway and a Service Mesh.

Getting started with Istio

The most popular framework for Service Mesh is Istio. Istio plans to remain open and platform-independent: while it initially supports Kubernetes-based deployments, it will eventually be adapted to other environments. Istio doesn't require changes to the underlying services; it provides resilience, service-level metrics, traffic management, encryption, distributed tracing, and advanced routing functionality. With Istio, developers can focus more on the business logic, as Istio handles the other tasks related to microservices. An Istio service mesh is logically split into two layers - the data plane and the control plane.

The data plane contains the proxies deployed as sidecars, which control communication between microservices and report metrics on all mesh traffic. The control plane manages and configures those proxies to route the traffic.

[Istio architecture diagram - Source: Istio]
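To make the traffic-management capability concrete, here is a sketch of Istio routing for a hypothetical "orders" service, sending 90% of requests to the stable version and 10% to a canary; the service and version names are illustrative:

```yaml
# A sketch of Istio traffic management for a hypothetical "orders" service.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
    - orders                    # the in-mesh service name
  http:
    - route:
        - destination:
            host: orders
            subset: stable
          weight: 90            # 90% of traffic to the stable version
        - destination:
            host: orders
            subset: canary
          weight: 10            # 10% to the canary
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: orders
spec:
  host: orders
  subsets:                      # subsets map to pod labels
    - name: stable
      labels:
        version: v1
    - name: canary
      labels:
        version: v2
```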

4. Containers

Many companies and teams have picked up containerization very early in the game - while overlooking the importance of Microservices & Agile for getting the maximum out of the cloud-native ecosystem.

Now, what are containers?

In simple terms, a container is a package of code/software and all of its dependencies. Containers take advantage of the base OS and operate like a virtual OS, with control over the amount of CPU, memory, and disk they can access. Containers are lightweight, portable, and isolated. Kubernetes is the platform that orchestrates containers, automating the deployment, scaling, and management of containerized applications.
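As a minimal sketch of that resource control, this is how a container declares the CPU and memory it may request and consume in Kubernetes; the names and numbers are illustrative:

```yaml
# A sketch of per-container resource control; names and values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: orders
spec:
  containers:
    - name: orders
      image: example.registry.io/orders:1.0.0   # hypothetical image
      resources:
        requests:                # guaranteed share, used for scheduling
          cpu: 250m              # a quarter of a CPU core
          memory: 256Mi
        limits:                  # hard caps enforced at runtime
          cpu: 500m              # throttled beyond half a core
          memory: 512Mi          # the container is killed above this
```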

Advantages of Kubernetes

Fast & Easy deployments - Containerization can speed up the process of building, testing, and releasing software.

Runs at Scale - Based on CPU usage or other application metrics, you can modify the number of containers that are running (see the autoscaler sketch below).

High Availability - Kubernetes provides functionalities such as self-healing and auto replacement, in case a pod crashes due to an error.

Open Source - Kubernetes is an open-source system for managing clusters of containers.

Cloud Agnostic - The same configuration can work on any cloud platform. All we have to do is declare our requirements to Kubernetes - how it should manage our containers.
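Picking up the "Runs at Scale" point above, a HorizontalPodAutoscaler is how Kubernetes adjusts the number of running containers from CPU usage; the Deployment name and thresholds here are illustrative:

```yaml
# A sketch of scaling on CPU usage; the target name and numbers are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders              # the Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70    # add pods when average CPU passes 70%
```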

The key to doing Cloud-Native right is to bring all these key components together while implementing each of them independently.

About TYNYBAY

TYNYBAY is a Cloud Native consulting company founded in 2020. We enable and empower teams to get the most out of the Cloud Native ecosystem. Our team of experts, known as TYNYpreneurs, are all certified Kubernetes and cloud architects.
