Configuration Management and the Cloud
https://www.coursera.org/learn/configuration-management-cloud/home/module/1
Module 1
Cloud Services Overview Video #3
Services running in the cloud means the services are running somewhere else:
1. In a data center
2. On remote servers accessed over the internet
When setting up a cloud deployment, you need to consider the hierarchy:
REGIONS ---> ZONES ---> PHYSICAL DATA CENTERS
If one point fails, the service is automatically migrated to another point without end users noticing any disturbance.
That means the service skips from the faulty server to a good server located at some other point.
Question
Module 1
Scaling in the Cloud
In VERTICAL scaling we upgrade each machine's resources, like HD capacity, RAM, or CPU capacity, or increase the number of CPUs in one machine. In HORIZONTAL scaling we add more machines to the pool.
If the number of logins at one time increases, the cloud service provider can automatically increase capacity through vertical and horizontal scaling: automatic scaling according to the resources being used.
Question
Module 1
Evaluating the Cloud
Question
Module 1
Migrating to the Cloud
Question
Congratulations! You passed!
1.
When we use cloud services provided to the general consumer, such as Google Suite or Gmail, what cloud deployment model are we using?
Keep it up! A public cloud offers services to the general public, often as SaaS (Software as a Service) offerings.
2.
What is a container?
You got it! A container is an OS- and hardware-independent environment that allows for easy migration and compatibility.
3.
Select the examples of Managed Web Application Platforms. (Check all that apply)
Nice work! Google App Engine is a Platform as a Service (PaaS) product that offers access to Google's flexible hosting and Tier 1 Internet service for Web app developers and enterprises.
Great job! AWS Elastic Beanstalk is an easy-to-use PaaS service for deploying and scaling web applications.
Woohoo! Microsoft Azure App Service enables you to build and host web apps, mobile back ends, and RESTful APIs in the programming language of your choice without having to manage infrastructure.
4.
When a company solely owns and manages its own cloud infrastructure, what type of cloud deployment model are they using?
Way to go! A private cloud deployment is one that is fully owned and operated by a single company or entity.
5.
Which "direction" are we scaling when we add RAM or CPU resources to individual nodes?
Awesome! Vertical scaling is a form of upscaling, but upscaling can also be horizontal.
1. Create a project. Project name: First cloud step
Question
$ sudo cp hello_cloud.py /usr/local/bin
Press the <CREATE> button at the bottom
$ gcloud compute instances create --source-instance-template webserver-template ws1 ws2 ws3 ws4 ws5
Question
Practice Quiz: Managing Instances in the Cloud
Congratulations! You passed!
1.
What is templating?
Way to go! Effective templating software allows you to capture an entire virtual machine configuration and use it to create new ones.
2.
Why is it important to consider the region and zone for your cloud service?
Right on! Generally, you're going to want to choose a region that is close to your users so that you can deliver better performance.
3.
What option is used to determine which OS will run on the VM?
Woohoo! The boot disk from which the VM boots will determine what operating system runs on the VM.
4.
When setting up a new series of VMs using a reference image, what are some possible options for upgrading services running on our VM at scale?
Nice job! One way of updating VM services at scale is to simply spin them up again with an updated reference image.
Awesome! Puppet or other configuration management systems provide a streamlined way to deploy service updates at scale.
5.
When using gcloud to manage VMs, what two parameters tell gcloud that a) we want to manage our VM resources and b) that we want to deal with individual VMs? (Check two)
For the different virtual instances to correctly interact with each other, we need ORCHESTRATION.
1.
In order to detect and correct errors before end users are affected, what technique(s) should we set up?
You got it! Monitoring and alerting allows us to monitor and correct incidents or failures before they reach the end user.
2.
When accessing a website, your web browser retrieves the IP address of a specific node in order to load the site. What is this node called?
Awesome! When you connect to a website via the Internet, the web browser first receives an IP address. This IP address identifies a particular computer: the entry point of the website.
3.
What simple load-balancing technique just assigns to each node one request at a time?
Right on! Round-robin load balancing is a basic way of spreading client requests across a server group. In turn, a client request will be forwarded to each server. The load balancer is directed by the algorithm to go back to the top of the list and repeat again.
4.
Which cloud automation technique spins up more VMs into instance groups when demand increases, and shuts down VMs when demand decreases?
Way to go! Autoscaling helps us save costs by matching resources with demand automatically.
5.
Which of the following are examples of orchestration tools used to manage cloud resources as code? (Check all that apply)
Woohoo! Like Puppet, Terraform uses its own domain specific language (DSL), and manages configuration resources as code.
Nice job! CloudFormation is a service provided by Amazon to assist in modeling and managing AWS resources.
Excellent! Azure Resource Manager is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources.
Grade received 90% Latest Submission Grade 90%
This graded quiz assesses your understanding of the concepts and procedures covered in the lab you just completed. Please answer the questions based on the activities you performed in the lab.
Note:
You can refer to your completed lab for help with the quiz.
In order to complete this quiz, you must have completed the lab before it.
1.
When creating a template for a virtual machine (VM) in Google Cloud Platform (GCP), what is the first step in the process?
2.
What is the primary purpose of the Google Cloud command-line interface (gcloud) in cloud computing?
3.
When creating an image based on the vm1 disk in Google Cloud Platform, what field should be set to 'vm-image'?
4.
When creating an instance template in Google Cloud Platform, as described in the process, what is the purpose of setting the firewall to 'allow HTTP and HTTPS traffic'?
5.
What is the primary purpose of the gcloud command in the Google Cloud Platform (GCP) ecosystem?
6.
In the process of creating VMs in Google Cloud Platform, why is the image named 'vm-image' created from the disk of the VM instance 'vm1'?
7.
How does the creation of the 'vm-image' from the vm1 disk and the subsequent 'vm1-template' relate to the concept of cloning in VM management?
8.
how would you typically apply updates or changes to an existing VM instance template (like 'vm1-template')?
9.
Which machine series and machine type are used when creating the instance template named "vm1-template" in the lab instructions?
10.
After creating the "vm1-template" instance template, you notice that the boot disk for the template is set to a "standard persistent disk." What type of storage is this and what are its characteristics?
NoSQL is designed to store data across tons of machines and is super fast at retrieving it. Instead of SQL queries, data is retrieved using specific APIs provided by the database.
Question
Round robin works like giving one cookie to each person in turn, again and again, until the cookies are finished.
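A tiny Python sketch of the round-robin idea (the server names are made up, not from the course):

from itertools import cycle

# Hypothetical pool of backend servers, handed out in turn.
servers = cycle(['ws1', 'ws2', 'ws3'])

for request_id in range(7):
    # After ws3 the cycle wraps back around to ws1.
    print('request', request_id, '->', next(servers))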
Question
https://www.simplilearn.com/tutorials/devops-tutorial/continuous-delivery-and-continuous-deployment
Question
More About Cloud Providers
Here are some links to common quotas you'll find in various cloud providers:
https://cloud.google.com/compute/quotas#understanding_vm_cpu_and_ip_address_quotas
https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html
https://docs.microsoft.com/en-us/azure/azure-subscription-service-limits#service-specific-limits
Practice Quiz: Building Software for the Cloud
Congratulations! You passed!
1.
What is latency in terms of Cloud storage?
Nice job! Latency is the amount of time it takes to complete a read or write operation.
2.
Which of the following statements about sticky sessions are true? (Select all that apply.)
Great work! Sticky sessions route requests for a particular session to the same machine that first served the request for that session.
Woohoo! Because sticky sessions can cause uneven load distribution as well as migration problems, they should only be used when absolutely necessary.
Right on! Sticky sessions can cause unexpected results during migration, updating, or upgrading, so it's best to use them only when absolutely necessary.
3.
If you run into limitations such as rate limits or utilization limits, you should contact the Cloud provider and ask for a _____.
Great work! Our cloud provider can increase our limits that we have set, though it will cost more money.
4.
What is the term referring to everything needed to run a service?
Way to go! Everything used to run the service is referred to as the environment. This includes the machines and networks used for running the service, the deployed code, the configuration management, the application configurations, and the customer data.
5.
What is the term referring to a network of hosts spread in different geographical locations, allowing ISPs to be as close as possible to content?
Excellent! CDNs allow an ISP to select the closest server for the content it is requesting.
Module 1 review
How continuous integration (CI) can build and test code every time there is a change.
Glossary terms from course 5, module 1
Terms and definitions from Course 5, Module 1
A/B testing: A way to compare two versions of something to find out which version performs better
Automatic scaling: This service uses metrics to automatically increase or decrease the capacity of the system
Autoscaling: Allows the service to increase or reduce capacity as needed, while the service owner only pays for the cost of the machines that are in use at any given time
Capacity: How much the service can deliver
Cold data: Accessed infrequently and stored in cold storage
Containers: Applications that are packaged together with their configuration and dependencies
Content Delivery Networks (CDN): A network of physical hosts that are geographically located as close to the end users as possible
Disk image: A snapshot of a virtual machine’s disk at a given point in time
Ephemeral storage: Storage used for instances that are temporary and only need to keep local data while they’re running
Hot data: Accessed frequently and stored in hot storage
Hybrid cloud: A mixture of both public and private clouds
Input/Output Operations Per Second (IOPS): Measures how many reads or writes you can do in one second, no matter how much data you're accessing
Infrastructure as a Service (or IaaS): When a Cloud provider supplies only the bare-bones computing experience
Load balancer: Ensures that each node receives a balanced number of requests
Manual scaling: Changes are controlled by humans instead of software
Multi-cloud: A mixture of public and/or private clouds across vendors
Object storage: Storage where objects are placed and retrieved into a storage bucket
Orchestration: The automated configuration and coordination of complex IT systems and services
Persistent storage: Storage used for instances that are long lived and need to keep data across reboots and upgrades
Platform as a Service (or PaaS): When a Cloud provider offers a preconfigured platform to the customer
Private cloud: When your company owns the services and the rest of your infrastructure
Public cloud: The cloud services provided to you by a third party
Rate limits: Prevent one service from overloading the whole system
Reference images: Store the contents of a machine in a reusable format
Software as a Service (or SaaS): When a Cloud provider delivers an entire application or program to the customer
Sticky sessions: All requests from the same client always go to the same backend server
Templating: The process of capturing all of the system configuration to let us create VMs in a repeatable way
Throughput: The amount of data that you can read and write in a given amount of time
Utilization limits: Cap the total amount of a certain resource that you can provision
Graded assessment for module 1
You finished this assignment
1.
Say you work for a company that wants the IT department to focus on deploying and managing applications and spend as little time as possible managing cloud services. Which service might be the right choice?
Correct.
2.
Which word best describes the direction you are scaling when increasing the capacity of a specific service by making the nodes bigger? Select all that apply.
Correct.
3.
Your company is moving its servers from one office to another. At the same time, the organization will be migrating some of its computing needs to a cloud service. In this “lift and shift” strategy, which is the “lift”?
Correct.
4.
If any part of your workload is running on servers owned by your company, what type of cloud might this be part of? Select all that apply.
Correct.
Correct.
Correct.
5.
What are the locations from where you can create a VM to run in the cloud? Select all that apply.
Correct.
Correct.
Not quite. Please refer to the Spinning up VMs in the Cloud video for more information.
Not quite. Please refer to the Spinning up VMs in the Cloud video for more information.
6.
You’ve set up a VM, modified its configuration settings, and made sure that it's working correctly. Now you want to reproduce this exactly on multiple other machines. How might a template help?
Correct.
7.
Why are there usually multiple entry points for a single website? Select all that apply.
Correct.
Correct.
8.
What is the best method for a batch action like creating ten VMs at once?
Correct.
9.
What type of storage refers to storing files with unique names in a storage bucket? Select all that apply.
Not quite. Please refer to the Storing Data in the Cloud video for more information.
Correct.
Correct.
10.
What is the advantage of round robin DNS?
Correct. But it has some limitations.
11.
You are planning some improvements in your cloud services, but you want to make the changes in a controlled way. This approach is commonly called “change management”. In change management, how does a continuous integration system, or CI, help to catch problems before they're merged into the main branch?
Correct.
Example
Module 2
Set up Docker
A developer wrote some code that works perfectly on their local machine but does not work on others' machines. Docker helps solve this common (and annoying) problem by providing a consistent runtime across different environments.
Docker is an easy way to package and run applications in containers.
A container is a lightweight, portable, and isolated environment that facilitates the testing and deployment of new software. Within the container, the application is isolated from all other processes on the host machine.
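As a quick illustration, here's a minimal sketch using the Docker SDK for Python (it assumes Docker is running locally and the docker package is installed; the image and command are just examples):

import docker

# Connect to the local Docker daemon.
client = docker.from_env()

# Run a throwaway container; the same image behaves the same way on any
# machine, which is exactly the consistency Docker provides.
output = client.containers.run(
    'python:3.12-slim',
    ['python', '-c', "print('hello from a container')"],
    remove=True,
)
print(output.decode())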
Module 2
Docker web apps
Module 2
Docker images
Docker images are the building blocks of Docker containers.
A Docker image contains the application code, data files, configuration files, libraries, and other dependencies needed to run an application.
Module 2
Container and artifact registry
The container registries above provide features like:
1. Authentication
2. Access control
3. Image geo-replication
4. Registries for artifact storage
Hierarchy: a registry holds repositories, which hold containers or artifacts.
Docker and Google Cloud Platform (GCP) are two types of technologies that complement each other, allowing programmers to build, deploy, and manage containerized applications in the cloud.
Google Cloud Platform
GCP is a composition of all the cloud services provided by Google. These include:
- Virtual machines
- Containers
- Computing
- Hosting
- Storage
- Databases
- Tools
- Identity management
How to run Docker containers in GCP
You can run containers two ways in the cloud using GCP.
The first way is to start a virtual machine with Docker installed on it. Use the docker run command to create a container and start it. This is the same process for running Docker on any other host machine.
The second way is to use a service called Cloud Run. This serverless platform is managed by Google and allows you to launch containers without worrying about managing the underlying infrastructure. Cloud Run is simple and automated, and it’s designed to allow programmers to be more productive and move quickly.
An advantage of Cloud Run is that it allows you to deploy code written in any programming language if you can put the code into a container.
Use Cloud Run to deploy containers in GCP
Before you begin, sign into your Google account, or if you do not have one, create an account.
1. Open Cloud Run.
2. Click Create service to display the form. In the form:
   - Select Deploy one revision from an existing container image.
   - Below the Container image URL text box, select Test with a sample container.
   - From the Region drop-down menu, select the region in which you want the service located.
   - Below Authentication, select Allow unauthenticated invocations.
   - Click Create to deploy the sample container image to Cloud Run and wait for the deployment to finish.
3. Select the displayed URL link to run the container.
My created Link https://hello-tgfaln26ga-el.a.run.app
Pro tip: Cloud Run helps keep costs down by only charging you for central processing unit (CPU) time while the container is running. It’s unlike running Docker on a virtual machine, for which you must keep the virtual machine on at all times—running up your bill.
Key takeaways
GCP supports Docker containers and provides services to support containerized applications. Integrating GCP and Docker allows developers and programmers to build, deploy, and run containers easily while being able to focus on the application logic.
When running https://hello-tgfaln26ga-el.a.run.app
This created the revision hello-00001-6fl of the Cloud Run service hello in asia-south1 in the GCP project first-cloud-step-407010.
You can deploy any container to Cloud Run that listens for HTTP requests on the port defined by the PORT environment variable. Cloud Run will scale automatically based on requests and you never have to worry about infrastructure.
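For example, here is a minimal sketch (my own, not from the course) of a Python web server that honors the PORT environment variable, which is all Cloud Run requires of a container:

import os
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.end_headers()
        self.wfile.write(b'Hello from Cloud Run!\n')

# Cloud Run tells the container which port to listen on via PORT.
port = int(os.environ.get('PORT', '8080'))
HTTPServer(('0.0.0.0', port), Handler).serve_forever()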
What's next?
Follow the Quickstart tutorial to build a “Hello World” application in your favorite language into a container image, and then deploy it to Cloud Run.
Module 2 Build artifact testing
You will learn more about different types of build artifacts, how to test a Docker container, and how to troubleshoot any issues along the way.
Build artifacts
Build artifacts are items that you create during the build process. Your main artifact is your Docker container.
All other items that you generate during the Docker image build process are also considered build artifacts. Some examples include:
- Libraries
- Documentation
- Static files
- Configuration files
- Scripts
Build artifacts in Docker
Build artifacts in Docker play a crucial role in the software development and deployment lifecycle.
No matter what you create with code, you need to test it. You must test your code before deployment to ensure that you catch and correct all issues, defects, and errors.
This is true whether your code is built as a Docker container or built the more “classic” way.
The process to execute the testing varies based on the application and the programming language it’s written in.
Pro tip: It’s important to check that Docker built the container itself correctly if you are testing your code with a containerized application.
There are several types of software testing that you can execute with Docker containers:
Unit tests:
These are small, granular tests written by the developer to test individual functions in the code.
In Docker, unit tests are run directly on your codebase before the Docker image is built, ensuring the code is working as expected before being packaged.
Integration tests:
These refer to testing an application or microservice in conjunction with the other services on which it relies.
In a Dockerized environment, integration tests are run after the docker image is built and the container is running, testing how different components operate together inside the Docker container.
End-to-end (E2E) tests:
This type of testing simulates the behavior of a real user (e.g., by opening the browser and navigating through several pages).
E2E tests are run against the fully deployed docker container, checking that the entire application stack with its various components and services functions correctly as a whole.
Performance tests:
This type of testing identifies bottlenecks.
Performance tests are run against the fully deployed Docker container and apply various stresses and loads to ensure the application performs to expectations.
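A hedged sketch of a container-level integration test using the Docker SDK for Python plus the standard library (the nginx image and the port mapping are illustrative assumptions):

import time
import urllib.request
import docker

client = docker.from_env()

# Start the container under test, mapping container port 80 to host port 8080.
container = client.containers.run('nginx', detach=True, ports={'80/tcp': 8080})
try:
    time.sleep(2)  # crude wait for the server inside the container to come up
    response = urllib.request.urlopen('http://localhost:8080/')
    assert response.status == 200, 'container did not serve a successful response'
    print('integration test passed')
finally:
    container.remove(force=True)  # always clean up the test container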
Practice quiz: Docker
1.
You have created your first application and would like to test it before showing it to stakeholders. A colleague suggests using Docker to execute this task. What is Docker an example of?
Correct. Some would consider Docker the most popular containerized technology to test new software on your machine.
2.
You have been talking to a colleague about how beneficial Docker has been to you for packaging and running applications in containers over the past several weeks. Your colleague has finally decided to install Docker on their local machine and reaches out to you for help with the installation process. Which method can your colleague execute to get Docker up and running on their machine?
Correct. Your colleague can install Docker, based on their operating system, from the Docker website.
3.
A colleague is discussing the combination of application code, data files, configuration, and libraries that are needed to run an application. What Docker term are they referring to?
Correct. An image contains all of the dependencies needed to run an application.
4.
A new programmer with your company has run into the issue of how to test multiple independent components together, all of which must work properly for the application to run smoothly. What advice would you give the programmer to make their development process more efficient?
Correct. Using multiple containers to test the entirety of the application can be beneficial because the microservices are independent from one another.
5.
You share a new idea for an application with your team to get their feedback and any advice to make the application better. Some members of your team provide feedback on the build artifacts. Which of the following are examples of build artifacts? Select all that apply.
Correct. Build artifacts are items created during the build process, including containers, documentation, libraries, and scripts.
Correct. Build artifacts are items created during the build process, including containers, documentation, libraries, and scripts.
Correct. Build artifacts are items created during the build process, including containers, documentation, libraries, and scripts.
==================================
Module 2
Kubernetes on GCP (Google Cloud Platform)
What is the purpose of using Kubernetes?
Kubernetes can help organizations better manage their workloads and reduce risks. Kubernetes is able to automate container management operations and optimize the use of IT resources. It can even restart orphaned containers, shut down the ones that are not being used, and recreate them.
The containers in a Pod use the same IP address, namespace, and resources, so they can communicate with each other.
Relationship of Docker, containers, and Kubernetes:
Imagine Docker is a shipping container system: it lets you package each application and its dependencies in a separate crate (the container). In this analogy, Kubernetes is the port, orchestrating (automating many processes) how the crates and containers are handled and directing them to the right place.
Google Kubernetes Engine (GKE)
Google Compute Engine (GCE)
Kubernetes principles
Declarative configuration
In this approach, developers specify only the desired state.
They do not need to specify how to reach that state; Kubernetes figures that out automatically.
The control plane
Components of the control plane include:
etcd: a distributed database used as the Kubernetes backing store for all cluster data.
The scheduler: assigns pods to run on particular nodes in the cluster.
The controller manager: hosts and monitors multiple Kubernetes controllers.
The cloud controller manager embeds cloud-specific control logic. It acts as the interface between Kubernetes and a specific cloud provider, managing the cloud’s resources.
Key takeaways
Kubernetes core principles and key components support developers with starting, stopping, storing, building, and managing containers.
Module 2
Installing Kubernetes
Kubernetes is not something you download.
Enable Kubernetes
After Docker is installed on your machine, follow the instructions below to run Kubernetes in Docker Desktop.
From the Docker Dashboard, select Settings.
Select Kubernetes from the left sidebar.
Select the checkbox next to Enable Kubernetes.
Select Apply & Restart to save the settings.
Select Install to complete the installation process.
The Kubernetes server runs as containers and installs the /usr/local/bin/kubectl command on your machine.
Key takeaways
Kubernetes is not a replacement for Docker, but rather a tool that developers use while working in Docker. It can run and manage Docker containers, allowing developers to deploy, scale, and manage containerized applications across clusters.
Module 2
Pods
What is a CONTAINER in Kubernetes?
In Kubernetes, a container is a lightweight, standalone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and system tools.
Containers are isolated from each other and bundle their own software, libraries, and configuration files, but they share the operating system kernel with other containers.
They are designed to be easily portable across different environments, which makes them ideal for consistent deployment across different platforms.
What are PODs in Kubernetes?
Containers are encapsulated within Pods, which are the fundamental deployment units in a Kubernetes cluster.
A Pod can contain one or more containers that need to run together on the same host and share the same network and storage resources, allowing them to communicate with each other using localhost.
When a deployment requires multiple containers to work together on the same node, a Pod is created to ensure they are co-located and can communicate efficiently.
Pods serve together as a logical host that encapsulates one or more tightly coupled containers within a shared network and storage context.
This provides a way to group containers that need to work closely together, allowing them to share the same resources and interact with each other as if they were running on the same physical or virtual machine.
Pods as logical host
The key points to understand about a Pod as a logical host are:
Tightly coupled containers:
When multiple containers within a Pod are considered tightly coupled, it means they have a strong interdependency and need to communicate with each other over localhost. This allows them to exchange data and information efficiently without the need for complex networking configurations.
Shared network namespace:
Containers within the same Pod share the same network namespace. This implies that they have the same IP address and port space, making it easier for them to communicate using standard inter-process communication mechanisms.
Shared storage context:
Pods also share the same storage context, which means they can access the same volumes or storage resources. This facilitates data sharing among the containers within the Pod, further enhancing their collaboration.
Co-location and co-scheduling:
Kubernetes ensures that all containers within a Pod are scheduled and co-located on the same node. This co-scheduling ensures that the containers can efficiently communicate with each other within the same network and storage context.
Ephemeral (short-lived) nature:
Like individual containers, Pods are considered to be ephemeral and can be easily created, terminated, or replaced based on scaling requirements or resource constraints. However, all containers within the Pod are treated as a single unit in terms of scheduling and lifecycle management.
Pods in action
Use a Kubernetes Pod to encapsulate both
1. the web server container, and
2. the log processor container.
Since both containers exist within the same Pod, they share the same network namespace (they can communicate via localhost) and they can share the same storage volumes. This allows the web server to generate logs and the log processor to access and process these logs efficiently.
Both containers run simultaneously and are stopped together: the Pod is scheduled and managed as a single unit, so the containers share its lifecycle.
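A sketch of this web server + log processor Pod using the official Kubernetes Python client (the names, images, and shared path are illustrative assumptions; a running cluster and a local kubeconfig are assumed too):

from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig

# A shared emptyDir volume lets the web server write logs and the
# log processor read them, all inside one Pod.
mount = client.V1VolumeMount(name='logs', mount_path='/var/log/app')

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name='web-with-logger'),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(name='web', image='nginx', volume_mounts=[mount]),
            client.V1Container(
                name='log-processor', image='busybox',
                command=['sh', '-c', 'sleep 3600'],  # placeholder for a real log shipper
                volume_mounts=[mount]),
        ],
        volumes=[client.V1Volume(name='logs',
                                 empty_dir=client.V1EmptyDirVolumeSource())],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace='default', body=pod)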
Advantages of Pods
1. Facilitating co-location:
2. Enabling Data Sharing:
3. Simplifying Inter-communication of the container:
Single container vs. multiple containers in a Pod:
Advantages of multiple containers in a Pod:
Sidecar pattern:
In this pattern, the main container represents the primary application, while additional sidecar containers provide supporting features like
logging,
monitoring, or
authentication.
The sidecar containers enhance and extend the capabilities of the main application without modifying its code.
Proxy pattern:
Multi-container Pods can use a proxy container that acts as an intermediary between the main application container and the external world. The proxy container handles tasks like
load balancing,
caching, or
SSL termination,
offloading these responsibilities from the main application container
Adapter pattern:
performs data format conversions or protocol translations.
Shared data and dependencies:
Key terms
Here are some key terms to be familiar with as you’re working with Kubernetes.
Pod lifecycle:
Pending --> Running --> Succeeded or Failed
1. starting from "Pending" when they are being scheduled,
2. "Running" when all containers are up and running,
3. "Succeeded" when all containers successfully terminate,
4. "Failed" if any container within the Pod fails to run.
5. Pods can also be in a "ContainerCreating" state if one or more containers are being created.
Pod templates:
define the specification for creating new Pods.
Pod affinity and anti-affinity:
rules define the scheduling preferences and restrictions for Pods.
Pod autoscaling:
Kubernetes provides Horizontal Pod Autoscaler (HPA) functionality that automatically scales the number of replicas (Pods) based on resource usage or custom metrics.
Pod security policies:
used to control the security-related aspects of Pods, such as their access to certain host resources, usage of privileged containers, and more.
Init container:
run and complete before the main application containers start. They are useful for performing initialization tasks, such as database schema setup or preloading data.
Pod eviction (removal) and disruption (disturbance):
Taints and tolerations:
Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes.
Pod DNS:
Pods are assigned a unique hostname and IP address.
Pod annotations and labels:
to provide metadata or facilitate Pod selection for various purposes like monitoring, logging, or routing.
Pods and Python
To manage Kubernetes Pods using Python, you can use the kubernetes library. Below is a sketch of how to create, read, update, and delete a Pod using Python (a reconstruction, not the course's original snippet; the Pod name and image are illustrative).
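from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Create: a single-container Pod.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name='demo-pod', labels={'app': 'demo'}),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name='demo', image='nginx')]),
)
v1.create_namespaced_pod(namespace='default', body=pod)

# Read: fetch the Pod and inspect its lifecycle phase (Pending, Running, ...).
print(v1.read_namespaced_pod(name='demo-pod', namespace='default').status.phase)

# Update: patch the Pod's metadata, e.g. add a label.
v1.patch_namespaced_pod(name='demo-pod', namespace='default',
                        body={'metadata': {'labels': {'tier': 'web'}}})

# Delete: remove the Pod.
v1.delete_namespaced_pod(name='demo-pod', namespace='default')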
- Pods are the fundamental deployment units in a Kubernetes cluster.
- A Pod can contain one or more containers that need to run together on the same host and share the same network and storage resources, allowing them to communicate with each other using localhost.
- Pods serve as an abstraction layer, allowing Kubernetes to schedule and orchestrate containers effectively.
- Use a single-container Pod when you have a simple application that does not require additional containers, or when you want to isolate different applications or services for easier management and scaling.
- Use multi-container Pods when you have closely related components that need to work together, such as those following the sidecar pattern.
Key Takeaways
Pods
https://kubernetes.io/docs/concepts/workloads/pods/
- Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
- A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers.
Resources for more information
Kubernetes documentation: Pods
Official Python client library for Kubernetes
The challenge
Imagine you're developing a Python-based web application deployed in a Kubernetes cluster.
This application is composed of multiple components, such as:
1. a web server,
2. a caching layer, and
3. a database,
each running in separate Pods (the containers within each Pod share the same IP address).
Then there's the issue of service discovery: keeping an updated list of all the active Pods and their IP addresses for a service is a difficult and dynamic challenge.
This topic seems very hard, so for the time being I left it and proceeded further.
Module 2
Deployment
A Kubernetes Deployment also manages a ReplicaSet, and Deployments support rolling updates and rollbacks.
A Kubernetes Deployment consists of several key components:
- Pod template: includes details such as container images, container ports, environment variables, labels, and other configurations.
- Replicas: the desired number of replicas is maintained, automatically scaling up or down as needed.
- Update strategy: defines how changes are rolled out (e.g., rolling updates).
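A minimal sketch of such a Deployment using the Kubernetes Python client (all names, the image, and the replica count are illustrative assumptions):

from kubernetes import client, config

config.load_kube_config()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name='web'),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the managed ReplicaSet keeps 3 Pods running
        selector=client.V1LabelSelector(match_labels={'app': 'web'}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={'app': 'web'}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name='web', image='nginx:1.25')]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace='default', body=deployment)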
Resources for more information
Kubernetes Deployments: Comprehensive documentation on Kubernetes Deployments, their use cases, and operations.
Managing Resources: A guide on managing Deployments in Kubernetes including rolling updates, scaling and rollback.
Kubernetes ReplicaSets: Detailed explanation on ReplicaSets in Kubernetes, their role in maintaining the desired number of Pods.
Declarative Application Management in Kubernetes: Understanding declarative approach in Kubernetes with configuration files.
Configure Liveness, Readiness and Startup Probes: An in-depth guide on liveness and readiness probes, which enhance the health management of Pods.
Rolling Back a Deployment: Documentation on how to perform rollbacks on a Deployment.
Congratulations! You passed!
1.
What are some of the advantages of Kubernetes? Select all that apply.
Correct. And Kubernetes has a lot of industry “buzz”.
Correct. This is true even in different regions.
2.
What is the easiest tool for local developers using Windows or macOS to learn Kubernetes?
That’s right! Docker Desktop is easiest for non-production-grade environments, with built-in support for Kubernetes.
3.
In Kubernetes, what is a Pod? Select all that apply.
Correct. This accurately describes a Pod. These containers share the same resources and network stack.
Correct. This highlights the role of a Pod as a Kubernetes resource used to define the desired state of containers, and is managed by higher-level controllers like ReplicaSets or Deployments.
4.
What is the purpose of a Kubernetes Service?
Not quite. Kubernetes uses other resources to store and manage configuration data for applications, but this is not the purpose of a Kubernetes Service.
Module 2
Kubernetes on GCP
Every programmer has to answer whether to go Cloud or NOT Cloud before using Kubernetes.
Kubernetes is a powerful tool to manage, organise, and share containers.
Kubernetes allows programmers:
1. To scale
2. To update
3. To push updates
4. To duplicate
5. To roll back
6. Version control
7. More...
Using GKE (Google Kubernetes Engine) allows you to use your Docker files as-is.
The cluster's worker nodes are virtual machine (VM) instances. They can be in a single zone or spread out all over the world.
Kubernetes Engine API: builds and manages container-based applications, powered by the open-source Kubernetes technology.
There are three main options for cloud-based deployments:
1. Docker containers on Google Cloud Run
2. Docker containers on Google Kubernetes Engine (GKE)
3. Docker containers on Google Compute Engine (GCE)
Cloud Run doesn't consume unnecessary resources if there are no requests. The platform is based on containers, so you can write your code in any language and then deploy it through Docker images.
Key takeaways
Kubernetes YAML files play a crucial role in defining and managing Kubernetes resources, enabling Python developers to manage their applications' infrastructure in a consistent, version-controlled, and automated manner. By understanding the structure of these files and their key components, developers can leverage Kubernetes to its full potential and focus more on writing the application logic rather than managing infrastructure.
Resources for more information
Module 2
Scaling containers on GCP
For a tutorial on how to use the command line to scale containers, see the Autoscaling Deployments section in this tutorial on Scaling an application.
Module 2
GCP networking and load balancing
Google's network infrastructure consists of three main types of networks:
- A data center network, which connects all the machines in the network together.
- A software-based private wide area network (WAN) that connects all data centers together.
- A software-defined public WAN.
Key takeaways
- IP ranges:
- Routes:
- Peering: connects two VPC networks, potentially across different projects, as if they were one.
- Firewall rules:
GCP offers several other networking services that are very useful for Python developers using Kubernetes:
1. Global and regional load balancing: directs user traffic to the nearest instance of your application (globally), or to instances within a specific region.
2. HTTP(S), TCP, and UDP load balancing.
3. Managed Instance Groups: maintain a pool of instances that can automatically scale up or down based on demand, and distribute traffic across these instances.
4. Integration with Kubernetes: distributes traffic across the Pods in your Kubernetes cluster.
Resources for more information
Module 2
Protect containers on GCP
Security challenges and considerations
One approach to addressing the security challenges is the Zero Trust model, which involves assuming no trust by default and only granting permissions as necessary.
Using Virtual Private Clouds (VPCs) and properly firewalled subnets means you can make guarantees at the network level, not the software level.
Security on GCP is a shared responsibility.
GCP is responsible for:
1. infrastructure security,
2. operational security, and
3. providing tools for software supply chain security.
Developers are responsible for:
1. workload security,
2. network security,
3. identity and access management, and
4. effective use of software supply chain security tools.
GCP provides several security features and best practices for protecting containers, including:
1. using minimal base images,
2. regularly updating and patching containers,
3. implementing vulnerability scanning,
4. using runtime security tools like gVisor,
5. implementing access controls with IAM,
6. encrypting sensitive data with KMS,
7. monitoring and logging activity with Cloud Audit Logs, and
8. using Binary Authorization to ensure only trusted images are deployed.
Resources for more information
Google Cloud Security Command Center
Module 2
Qwiklabs assessment: Work with containers on GCP
Docker → Kubernetes (Engine) → Pods → Container(s) → Application(s)
Docker containers can be directly used in Kubernetes, which allows them to be run in the Kubernetes Engine with ease.
Work with containers on GCP
Graded Quiz
In order to complete this quiz, you must have completed the lab before it.
1.
What is the purpose of a Dockerfile when building Docker images in containers?
2.
Which of the following options correctly demonstrates the usage of the docker run command to start a Docker container with specific configurations?
3.
True or false: When running a docker logs command, you don’t have to write the entire container ID, as long as the initial characters uniquely identify the container.
4.
What is the purpose of the docker inspect command?
5.
What is a common technique for debugging issues in Docker containers when troubleshooting runtime problems?
6.
What is the purpose of the docker pull command in Docker containerization?
7.
Which authentication method is commonly used when pushing Docker images to Google Artifact Registry?
8.
Which Google Cloud Platform (GCP) service allows you to run Docker containers in a managed environment, handling tasks such as cluster management, scaling, and load balancing?
9.
What role does Google Container Registry (GCR) play in Docker container management on Google Cloud Platform?
10.
What is Google Kubernetes Engine (GKE) used for in the context of scaling containers on GCP?
Docker → Kubernetes → Cluster → Pods → Containers → Applications
Kubernetes acts as your application's manager and is responsible for keeping your application up and running smoothly.
Different types of clusters include:
1. on-premises,
2. public cloud managed,
3. private cloud managed,
4. local clusters, and
5. hybrid clusters.
Kubernetes clusters are robust, flexible, and reliable units of multiple nodes that work together.
Module 2
Glossary terms from course 5, module 2
Terms and definitions from Course 5, Module 2
Artifact: A byproduct of the software development process that can be accessed and used; an item produced during programming
Container registry: A storage location for container images, organized for efficient access
Container repository: A container registry that manages container images
Docker: An open-source tool used to build, deploy, run, update, and manage containers
Pod: A group of one or more containers that are scheduled and run together
Registry: A place where containers or artifacts are stored and organized
Kubernetes: An open-source platform that gives programmers the power to orchestrate containers
1.
A developer reached out to you to better understand Docker. The developer knows it is used to package and run applications but could not remember what the environment was called. In what environment is Docker run?
2.
You explain to another programmer that it is typical for a Docker image to be composed of up to a dozen layers. What is the purpose of having multiple layers?
Please refer to Docker images for more information.
3.
You are ready to run Docker containers on a virtual machine. Which command should you use to create and start a Docker container?
4.
You are developing a Python-based data processing application. One component of the application processes raw data, while another component analyzes the processed data. You want these components to easily exchange data. You also want to ensure that the processed data persists even if one of the containers restarts. Why are Pods in Kubernetes a good fit for this task? Select all that apply.
Please review the reading Pods for more information.
5.
You are a DevOps engineer working for a rapidly growing e-commerce company. With the upcoming Black Friday sale, you anticipate a surge in traffic and want to ensure that your Python-based web application can handle the increased load without any downtime. Which Kubernetes resource would you primarily use to maintain the desired number of web server instances?
6.
You’re setting up a Kubernetes cluster and want to use autoscaling. What might you consider as you decide on the maximum number of nodes allowed for your application? Select all that apply.
* A: The needs of your application
7.
Kubernetes clusters use what is called the “declarative approach.” What does this mean?
8.
You’ve decided to run your docker containers on Google Cloud Platform, and you’re about to choose which service to use. What are some advantages of Google Kubernetes Engine (GKE)?
9.
Containers are not just for packaging. What else are they used for? Select all that apply.
10.
Rebecca is working on a Python application that needs to integrate with an external logging service. She wants to create an alias for this external service, allowing her to reference it using a Kubernetes DNS name. Which Kubernetes service types should Rebecca consider for this process?
Please review the reading Services for more information.
1.
Another developer asked where the central repository is for downloading containers. What should you tell them?
2.
You and a colleague are collaborating on a project where you will use Docker images. You mentioned the benefits of Docker images and how they are composed of multiple files. Your colleague asked what Docker images do. What can you tell them?
Please refer to Docker images for more information.
3.
You informed another programmer that Cloud Run can help them launch containers. They asked what the benefit is of using Cloud Run. What should you tell them?
Please refer to Docker and GCP for more information.
4.
Maria is working on a distributed Python application where multiple components need to communicate with each other frequently. Why does she decide to use Pods in Kubernetes for inter-container communication?
Please review the reading Pods for more information.
5.
You are a DevOps engineer working for a rapidly growing e-commerce company. With the upcoming Black Friday sale, you anticipate a surge in traffic and want to ensure that your Python-based web application can handle the increased load without any downtime. Which Kubernetes resource would you primarily use to maintain the desired number of web server instances?
Please review the reading Deployment for more information.
6.
You’re setting up your first Kubernetes cluster. What is the absolute minimum number and type of virtual machines you must have to function as a cluster?
7.
You just got a new job in the IT department of a medical practice. Considering the fact that the organization’s data includes confidential patient records, what sorts of clusters might you choose to work with? Select all that apply.
Please review the reading Types of clusters for more information.
8.
You’ve decided to run your docker containers on Google Cloud Platform, and you’re about to choose which service to use. What are some advantages of Google Kubernetes Engine (GKE)?
9.
Which of the following is the best phrase to complete this sentence? Containers allow users to _____________________.
10.
Carlos is deploying a Python application in a cloud environment. The application has a user interface that he expects will experience heavy traffic from users around the world. Additionally, he's integrating a third-party payment gateway. Which Kubernetes service types should Carlos consider for these components? Select all that apply.
9 best configuration management tools:
- Best for CI/CD: Bitbucket.
- Best version control system: Git.
- Best for application deployment: Ansible.
- Best for infrastructure automation: Chef.
- Best for large-scale configurations: Puppet.
- Best for automating workflows: Bitbucket Pipelines.
- Best for high-speed and scalability: SaltStack.
- Best for container orchestration: Kubernetes
- CFEngine
In CM, we write rules and configuration information into files. CM tools process these files. These files are managed and controlled by a version control system (VCS) like Git, which keeps track of all changes to them. This process of keeping, tracking, and managing all of the system's configuration files through a VCS is known as Infrastructure as Code (IaC).
For each (config, rules) file, the VCS tracks:
- who changed the file,
- when the file was changed, and
- why the file was changed.
In other words, the paradigm of storing all configuration files for managing nodes in a VCS is called Infrastructure as Code (IaC).
Module 3 IaC options
1. Puppet
2. Terraform
3. Ansible
4. Google Cloud Platform offerings
Puppet is the industry-standard, robust, and well-established solution. At the end we will compare the other solutions with Puppet.
Terraform
Terraform manages infrastructure resources across various cloud providers. You define your desired infrastructure state, and Terraform can manage a wide spectrum of resources, from virtual machines to databases, across multiple cloud environments. This makes it an excellent choice for orchestrating (automating) cloud resources and building scalable, modern applications.
Ansible
Ansible uses an agentless architecture; this lightweight approach simplifies deployment and reduces the overhead of maintaining agents on target nodes. It uses a simple and human-readable YAML syntax to define playbooks. It is not a catalog-based system. Ansible excels in its simplicity, ease of adoption, and suitability for rapid deployment scenarios.
Google Cloud Platform alternatives
GCP lets you leverage native tools, using YAML or Python templates, offering a declarative approach similar to Terraform. These tools are well integrated with GCP services and resources, including components like GKE clusters, Cloud Storage buckets, and load balancers, allowing you to focus more on application development and less on provisioning and configuration.
Key takeaways
Each tool brings its own strengths: Terraform's cloud provisioning prowess, Ansible's lightweight automation, and GCP's native integration. The choice between these options depends on your specific needs, preferences, and the ecosystem you are operating within.
Practice Quiz: Automation at Scale
Congratulations! You passed!
Grade received: 100%. To pass: 80% or higher.
1.
What is IaC (Infrastructure as Code)?
Great job! IaC goes hand in hand with continuous delivery.
2.
What is the principle called when you think of networked machines as interchangeable resources instead of individual machines?
Nice work! This means no node is irreplaceable and configuration is automated.
3.
What benefits can we gain by using automation to manage our configuration? (Check all that apply)
Way to go! When a configuration or process doesn't depend on a human remembering to follow all the necessary steps, the result will always be the same.
Right on! Because automation breeds consistency, when we know a particular process that has been automated works, we can count on it working every time as long as everything remains the same.
Woohoo! A scalable system is a flexible system that can handle extra tasks or integrate extra resources easily.
4.
Puppet is a commonly used configuration management system. Which of the following applications are also used for configuration management?
Excellent! Chef is a configuration management system that treats configuration as code.
Awesome! Ansible is an open source IT Configuration Management, Deployment & Orchestration tool which aims to provide a wide variety of automation challenges with huge productivity gains.
Nice job! CFEngine is an open-source configuration management program that offers automated configuration and maintenance of large-scale computing networks, including centralized cloud, desktop, consumer and industrial application control, embedded networked applications, handheld smartphones, and tablet computers.
5.
A network administrator is accustomed to manually configuring the 5 nodes on the network he manages. However, the company he works for is quickly growing, and there are plans to expand the network to 200 nodes. What is the most reasonable course of action for the network administrator?
Yes! We can write automation scripts ourselves or we can use some sort of configuration management software to make our network scalable by pushing changes from a control server.
Module 3
Review: What is Puppet?
class sudo {
  package { 'sudo':
    ensure => present,
  }
}
-----------------
About this code
This block of code is saying that the package 'sudo' should be present on every computer where the rule gets applied. If this rule is applied on 100 computers, it would automatically install the package in all of them. This is a small and simple block but can already give us a basic impression of how rules are written in Puppet.
Module 3
What is Puppet?
Puppet is deployed in a client-server architecture.
There are many package management systems that provide installation tools. In Linux, those tools include:
APT
YUM
DNF
Puppet can do more than install packages: it can also add, remove, or modify configuration files stored in the system, or maintain the system's registry entries.
Module 3
Review: Puppet resources
Module 3
Puppet Resources
A resource (e.g., a file) has several attributes (ensure, content, replace, ...), such as:
1. File permissions
2. File owner
3. File modification time
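For illustration, a small Puppet file resource showing some of these attributes (the path, owner, mode, and contents here are made-up examples, not from the course):

file { '/etc/motd':
  ensure  => file,
  owner   => 'root',
  group   => 'root',
  mode    => '0644',
  content => "Welcome to this machine!\n",
}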
Question
The Building Blocks of Configuration Management
Module 3
What are domain-specific languages?
Question
Question
Module 3
More Information About
Configuration Management
Check out the following links for more information:
Practice Quiz: The Building Blocks of Configuration Management
1.
How is a declarative language different from a procedural language?
Right on! In a declarative language, it's important to correctly define the end state we want to be in, without explicitly programming steps for how to achieve that state.
2.
Puppet facts are stored in hashes. If we wanted to use a conditional statement to perform a specific action based on a fact value, what symbol must precede the facts variable for the Puppet DSL to recognize it?
Nice job! All variable names are preceded by a dollar sign in Puppet's DSL.
3.
What does it mean when we say that Puppet is stateless?
Not quite. The 'test and repair' paradigm is a philosophy which states that actions should be taken only when necessary to achieve our goal.
4.
What does the "test and repair" paradigm mean in practice?
Great work! By checking to see if a resource requires modification first, we can avoid wasting precious time.
5.
Where, in Puppet syntax, are the attributes of a resource found?
Woohoo! We specify the package contents inside the curly braces, placed after the package title.
Module 3
Wrap Up: Automating with Configuration Management
Question
Code used in the video is as follows:
$ vi ntp.pp
class ntp {
  package { 'ntp':
    ensure => latest,
  }
  file { '/etc/ntp.conf':
    source  => '/home/user/ntp.conf',
    replace => true,
    require => Package['ntp'],
    notify  => Service['ntp'],
  }
  service { 'ntp':
    enable  => true,
    ensure  => running,
    require => File['/etc/ntp.conf'],
  }
}
include ntp
-------- End of DSL coding (.pp file) ---------
ntp.pp is a manifest file. This file contains resources related to the NTP configuration:
1. the ntp package,
2. the ntp configuration file, and
3. the ntp service
$ sudo puppet apply -v ntp.pp
Module 3
Managing Resource Relationships (Video)
Question
Question
Practice Quiz: Deploying Puppet Locally
1.
Puppet evaluates all functions, conditionals, and variables for each individual system, and generates a list of rules for that specific system. What are these individual lists of rules called?
Right on! The catalog is the list of rules for each individual system generated once the server has evaluated all variables, conditionals, and functionals in the manifest and then compared them with facts for each system.
2.
After we install new modules that were made and shared by others, which folder in the module's directory will contain the new functions and facts?
Not quite. The files folder in a module will contain files that won’t need to be changed like configuration files.
3.
What file extension do manifest files use?
Excellent! Manifest files for Puppet will end in the extension .pp.
4.
What is contained in the metadata.json file of a Puppet module?
Awesome! Metadata is data about data, and in this case, often takes the form of installation and compatibility information.
5.
What does Puppet syntax dictate we do when referring to another resource attribute?
Great work! When defining resource types, we write them in lowercase, then capitalize them when referring to them from another resource attribute.
4. What is contained in the metadata.json file of a Puppet module?
Not quite. Manifests are stored in their own files.
The node definition node default { ... } (where the node name identifies which machines the rules apply to) installs the sudo and ntp classes on all machines that don't match a more specific node definition.
---------Starting of another program---------
node webserver.example.com {
  class { 'sudo': }
  class { 'ntp':
    servers => ['ntp1.example.com', 'ntp2.example.com'],
  }
  class { 'apache': }
}
------The End of Program---------------------------------
Module 3 Puppet Nodes (Video)
Through Puppet we may apply basic rules to all computers or nodes, or apply specific rules to a specific computer/node. For example, on web servers we want to install Apache, while email rules will only be applied to email servers.
In Puppet terms we use NODE definitions, e.g., node default {} applies basic rules to every physical machine, VM, and network router.
Question
When a client comes onto the network, it first sends information about itself to the server.
BUT
The question is: how does the server know the client is who it claims to be?
This is a question of security.
There are different types of Public Key Infrastructure (PKI). The one Puppet uses is based on Secure Sockets Layer (SSL), the same technology used by the HTTPS protocol on the internet.
The server and client check each other's identity over an encrypted channel using SSL.
Question
Module 3
$ sudo puppet config --section master set autosign true
$ ssh webserver
$ sudo apt install puppet
$ sudo puppet config set server ubuntu.example.com
$ sudo puppet agent -v --test
This command tests the connection between the Puppet agent on the machine and the Puppet master.
$ vim /etc/puppet/code/environments/production/manifests/site.pp
node webserver {
class { 'apache': }
}
node default {}
To enable the puppet service at Linux startup:
$ sudo systemctl enable puppet
To start the puppet service right away:
$ sudo systemctl start puppet
To check the status of the puppet service:
$ sudo systemctl status puppet
Module 3
Setting up Puppet Clients and Servers (Video)
puppet config means we are giving a configuration command to Puppet.
--section master means the command applies to the Puppet server ("puppet master").
set autosign true means clients will automatically be signed into the Puppet server/master without stricter checks. This is fine for a test server; otherwise, for security reasons, we sign clients in manually.
Question
Module 3
Practice Quiz: Deploying Puppet to Clients
Congratulations! You passed!
1.
When defining nodes, how do we identify a specific node that we want to set rules for?
Right on! A FQDN is a complete domain name for a specific machine that contains both the hostname and the domain name.
2.
When a Puppet agent evaluates the state of each component in the manifest, it uses gathered facts about the system to decide which rules to apply. What tool can these facts be "plugged into" in order to simplify management of the content of our Puppet configuration files?
Nice job! Templates are documents that combine code, system facts, and text to render a configuration output fitting predefined rules.
3.
What is the first thing that happens after a node connects to the Puppet master for the first time?
Awesome! After receiving a certificate, the node will reuse it for subsequent logins.
4.
What does FQDN stand for, and what is it?
Awesome! A fully qualified domain name (FQDN) is the unabbreviated name for a particular computer, or server. There are two elements of the FQDN: the hostname and the domain name.
5.
What type of cryptographic security framework does Puppet use to authenticate individual nodes?
Way to go! Puppet uses a Secure Sockets Layer (SSL) Public Key Infrastructure to authenticate both nodes and masters.
(Readings)
describe 'gksu', :type => :class do
  let(:facts) { { 'is_virtual' => 'false' } }
  it { should contain_package('gksu').with_ensure('latest') }
end
About this code
This code runs an rspec
test to determine whether the gksu package has the intended
behavior when the fact is_virtual is set to false. When this is the case,
the gksu package
should have the ensure parameter set to latest: ensure('latest').
Module 3
Modifying and Testing Manifests
Manifests live on the server side. When they are updated on the server, the changes should be applied to the whole client fleet.
Before applying an updated manifest rule to all clients, we first apply the rule locally to test it.
E.g., suppose we want to change the permissions of some files on the nodes, and after applying the new rules on clients we find a bug or a client hangs.
First check the syntax of the manifest with "puppet parser validate".
You can also check/validate a rule using the --noop (No Operation) parameter, which only checks the rules and their syntax without applying them. This verifies that the catalog is written correctly.
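As a minimal sketch (assuming the manifest is the ntp.pp file from above), a small Python wrapper could run both checks and stop before anything is actually applied:
# Illustrative sketch: validate the syntax first, then run a --noop simulation.
import subprocess
import sys

checks = [
    ["puppet", "parser", "validate", "ntp.pp"],       # syntax check only
    ["sudo", "puppet", "apply", "--noop", "ntp.pp"],  # simulate, don't apply
]
for cmd in checks:
    if subprocess.run(cmd).returncode != 0:
        sys.exit("Check failed: " + " ".join(cmd))
print("Manifest validated; safe to test more widely.")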
Question
Safely Rolling out Changes and Validating Them
Not quite. While a "Canary" environment is often used to minimize the disruption of unforeseen problems in production, the canaries are a subset of all production machines and are thus being used by some real customers.
Question
More Information About Updating Deployments
Module 3
Congratulations! You passed!
1.
What is a production environment in Puppet?
Awesome! Environments in Puppet are used to isolate software in development from software being served to end users.
2.
What is the --noop parameter used for?
Nice job! No Operations mode makes Puppet simulate what it would do without actually doing it.
3.
What do rspec tests do?
Right on! We can test our manifests automatically by using rspec tests. In these tests, we can verify resources exist and have attributes set to specific values.
4.
How are canary environments used in testing?
Woohoo! If we can identify a problem before it reaches all the machines in the production environment, we’ll be able to keep the problem isolated.
5.
What are efficient ways to check the syntax of the manifest? (Check all that apply)
Great work! In order to perform No Operations simulations, we must use the --noop parameter when running the rules.
Groovy! To test automatically, we need to run rspec tests, and fix any errors in the manifest until the RSpec tests pass.
Excellent! Using the puppet parser validate command is the simplest way to check that the syntax of the manifest is correct.
Monitoring and Alerting (Main Heading)
Module 3
Getting Started with Monitoring
When a service is running in the cloud, we make sure that the service is:
1. Behaving as expected
2. Returning the right results
3. Responding quickly and reliably
To achieve these aims we need:
1. Good monitoring, and
2. Alerting
Response codes:
When a web server receives an HTTP request from a client, it generates a response code showing whether the request was served correctly or resulted in an error. Responses are grouped into five classes:
Informational responses (100–199)
Successful responses (200–299)
Redirects (300–399)
Client errors (400–499)
Server errors (500–599)
In general, codes like 500, 501, and 503 mean there is an error on the server side, and codes like 400, 401, and 402 mean there is an error on the client/user side.
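As a minimal sketch of checking and classifying a response code (assuming the requests library is installed; the URL is just a placeholder for the service being monitored):
# Fetch a URL and classify its HTTP response code.
import requests

code = requests.get("http://www.example.com").status_code
if 200 <= code < 300:
    print("Success:", code)
elif 400 <= code < 500:
    print("Client-side error:", code)  # e.g. 400, 401, 402
elif 500 <= code < 600:
    print("Server-side error:", code)  # e.g. 500, 501, 503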
Monitoring and Alerting System:
response codes → error codes like 401, 500 → figure out metrics (number of emails sent, successful/failed purchases) → store the metrics in a monitoring system, like:
1. AWS CloudWatch
2. Google Stackdriver
3. Azure Metrics
4. Prometheus
5. Datadog
6. Nagios
There are two ways to get metrics into a monitoring system:
1. Pull (the monitoring system periodically queries the service for its metrics)
2. Push (the service pushes its metrics to the monitoring system)
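As a hedged sketch of the push model (the endpoint URL and the metric are made up for illustration), a service could push a metric to the monitoring system like this:
# Push-model sketch: the monitored service sends a metric to the
# monitoring system. Endpoint and metric name are hypothetical.
import json
import urllib.request

metric = {"name": "successful_purchases", "value": 42}
request = urllib.request.Request(
    "http://monitoring.example.com/metrics",
    data=json.dumps(metric).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(request)  # in the pull model, the server would query us instead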
Question
Module 3
Getting Alerts When Things Go Wrong
We need to run our systems 24 hours a day, but as human system administrators we cannot sit in front of the system 24 hours a day. So, we want the service to run unattended.
We need automation to handle the bad/worst situations.
One way is an automatic program that checks the health of the system periodically; if the checking program finds any error or inconsistency, it sends an email or SMS to the system administrator.
For example, you might set alerts like these (see the sketch below):
1. If any application uses more than 10 GB of RAM
2. If an application raises too many 500 errors
3. If a request takes too long to respond
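A minimal sketch of check 1, assuming the psutil library is installed; the threshold and the notify() helper are illustrative stand-ins:
# Periodic health check: alert if any process uses more than 10 GB of RAM.
import time
import psutil

RAM_LIMIT_BYTES = 10 * 1024 ** 3  # 10 GB

def notify(message):
    print("ALERT:", message)  # stand-in for sending an email or SMS

while True:
    for proc in psutil.process_iter(["name", "memory_info"]):
        if proc.info["memory_info"].rss > RAM_LIMIT_BYTES:
            notify(proc.info["name"] + " is using more than 10 GB of RAM")
    time.sleep(60)  # check once per minute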
We divide the alerts into two categories:
1. Those that need immediate action
2. Those that need action in the near future
If a problem does not need any action, it is called NOISE.
Non-urgent bugs are configured to create a TICKET for IT support to solve during office hours.
Question
Module 3
Service-Level Objectives
Question
Module 3
Basic Monitoring in GCP
We use a monitoring tool called Stackdriver.
GCP → Stackdriver → Monitoring
In the new GCP console there is no Stackdriver entry; instead go to Operations → Monitoring to create a new alerting policy.
To check CPU utilization we run an infinite loop to drive the CPU to 100%:
$ while true; do true; done &
The output of the $ top command shows the bash command given in the above script utilizing 100% of the CPU.
Question
Module 3
More Information on Monitoring and Alerting
(Readings)
Check out the following links for more information:
https://www.datadoghq.com/blog/monitoring-101-collecting-data/
https://www.digitalocean.com/community/tutorials/an-introduction-to-metrics-monitoring-and-alerting
Practice Quiz: Monitoring & Alerting
Congratulations! You passed!
1.
What is a Service Level Agreement?
Awesome! A service-level agreement is an arrangement between two or more parties, one being the client and the other being service providers.
2.
What is the most important aspect of an alert?
Right on! If an alert notification is not actionable, it should not be an alert at all.
3.
Which part of an HTTP message from a web server is useful for tracking the overall status of the response and can be monitored and logged?
Nice job! We can log and monitor these response codes, and even use them to set alert conditions.
4.
To set up a new alert, we have to configure the _____ that triggers the alert.
Excellent! We must define what occurrence or metric threshold will serve as a conditional trigger for our alert.
5.
When we collect metrics from inside a system, this is known as ______ monitoring.
Great work! A white-box monitoring system is one that collects metrics internally, from within the system being monitored.
Module 3
What to Do When You Can't Be Physically There
Question
Well done, you! Part of the beauty of running services in the cloud is that you aren't responsible for everything! Most cloud providers are happy to provide various levels of support.
Module 3
Identifying Where the Failure Is Coming From
Shift the service to another machine (physical or VM).
If the problem still exists, it means it is our own fault/problem.
If the problem goes away after shifting location, the fault is in the service provider's infrastructure, so contact the service provider and complain.
If a request is taking too long to serve, one type of solution is to shift the service to a more powerful server.
Question
A full container can be shifted from a server to your workstation, or to and from the (cloud) infrastructure. In this way a problem can be debugged, and you can find where it is coming from.
Module 3
Recovering from Failure
For complex system failures, we must take two precautionary measures:
1. A good backup system
2. Documentation of the steps to take in case of failure
Good backup system:
Backup does not mean backing up the data only. We also have to back up services, instances, and network configuration automatically, so that if one datacenter fails, end users get services from the other datacenters seamlessly.
Question
Module 3
Reading: Debugging Problems on the Cloud
Check out the following links for more information:
https://cloud.google.com/compute/docs/troubleshooting/troubleshooting-instances
https://docs.microsoft.com/en-us/azure/virtual-machines/troubleshooting/
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-troubleshoot.htm
Practice Quiz: Troubleshooting & Debugging
Congratulations! You passed!
1.
Which of the following are valid strategies for recovery after encountering service failure? (Select all that apply.)
Awesome! A quick way to recover is to have a secondary instance of the VM running your service that you can quickly switch to.
Nice job! As long as you've been keeping frequent backups, restoring a previous VM image will often get you where you need to be.
Woohoo! If the problem is related to recent changes or updates, rolling back to a previous working version of the service or supporting software will give the time to investigate further.
2.
Which of the following concepts provide redundancy? (Select all that apply.)
Right on! If your primary VM instance running your service fails, having a secondary instance running in the background ready to take over can provide instant failover.
You nailed it! Having a secondary Cloud service provider on hand with your data in case of the first provider having large-scale outages can provide redundancy for a worst-case scenario.
3.
If you operate a service that stores any kind of data, what are some critical steps to ensure disaster recovery? (Select all that apply)
Nice work! As long as we have viable backup images, we can restore the VM running our service.
Excellent! It's important to know that our backup process is working correctly. It would not do to be in a recovery situation and not have backups.
4.
What is the correct term for packaged applications that are shipped with all needed libraries and dependencies, and allows the application to run in isolation?
Great job! Containerization ensures that our software runs the same way every time.
5.
Using a large variety of containerized applications can get complicated and messy. What are some important tips for solving problems when using containers? (Select all that apply)
Great work! As long as we have the right logs in the right places, we can tell where our problems are.
Nice job! We should take every opportunity to test and retest that our configuration is working properly.
Glossary terms from course 5, module 3
Terms and definitions from Course 5, Module 3
Configuration management: Automation technique that manages the configuration of computers at scale
Domain-Specific Language (DSL): A programming language that's more limited in scope
Facts: Variables that represent the characteristics of the system (the Puppet client sends its facts to the Puppet server)
Puppet: The current industry standard for configuration management; the client side is known as the Puppet agent (calling it simply "the client" is wrong)
Puppet master: Known as the Puppet server
Graded assessment for module 3
You finished this assignment
1.
When defining nodes, how do we identify a specific node that we want to set rules for?
2.
Puppet evaluates all functions, conditionals, and variables for each individual system, and generates a list of rules for that specific system. What are these individual lists of rules called?
3.
Which of the following are valid strategies for recovery after encountering service failure? (Select all that apply.)
4.
Which part of an HTTP message from a web server is useful for tracking the overall status of the response and can be monitored and logged?
5.
What is a production environment in Puppet?
6.
Puppet facts are stored in hashes. If we wanted to use a conditional statement to perform a specific action based on a fact value, what symbol must precede the facts variable for the Puppet DSL to recognize it?
7.
A Puppet agent inspects /etc/conf.d, determines the OS to be Gentoo Linux, then activates the Portage package manager. What is the provider in this scenario?
8.
What benefits can we gain by using automation to manage our configuration? (Check all that apply)
9.
What is the correct term for packaged applications that are shipped with all needed libraries and dependencies, and allows the application to run in isolation?
10.
What are efficient ways to check the syntax of the manifest? (Check all that apply)
Module 4
What is DevOps?
DevOps combines:
1. software development (Dev) and
2. IT operations (Ops)
to
1. shorten the system's development lifecycle and
2. provide continuous delivery with
3. high software quality.
How do you define CI?
Continuous integration (CI) is a software development practice in
which frequent and isolated changes are immediately tested and
reported on when they're added to a larger codebase
CI can be considered as the first stage in producing and
delivering code, and CD as the second. CI focuses on preparing
code for release (build/test), whereas CD involves the actual
release of code (release/deploy).
Module 4
Continuous integration, delivery, and deployment (Readings)
Continuous integration (CI) automatically:
1. builds,
2. tests, and
3. integrates code changes within a shared repository.
Continuous delivery (CD) automatically delivers code changes to production-ready environments for approval, and
continuous deployment (CD) automatically deploys those code changes directly to production.
Module 4
Example pipeline (Readings)
Pipelines
Pipelines are:
1. automated processes and
2. sets of tools
that developers use during the software development lifecycle.
The steps of the process are carried out in sequential order; this ordering is mandatory.
A pipeline for a Python
application is triggered
when a pull request is ready to be merged.
That pipeline can perform the following steps:
1. Check out the
complete branch represented by the pull request.
2. Attempt to build
the project by running python setup.py
build.
3. Run unit tests with
pytest.
4. Run integration tests against other parts of the application with a framework like Playwright.
5. Build the documentation and upload it to an internal wiki.
6. Upload the build
artifacts to a container registry.
7. Message your team
in Slack to let them know the build was successful.
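A hedged sketch of the fail-fast behavior a CI server applies to steps like these (the step commands are borrowed from the list above; everything else is illustrative):
# Run pipeline steps in sequential order; stop at the first failure.
import subprocess
import sys

steps = [
    ["python", "setup.py", "build"],
    ["pytest"],
    # ...integration tests, docs build, artifact upload, Slack message...
]
for step in steps:
    print("Running:", " ".join(step))
    if subprocess.run(step).returncode != 0:
        sys.exit("Step failed; marking the job as failed.")
print("Pipeline succeeded.")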
Without a CI/CD pipeline, in order to deploy an application successfully, your organization has to:
1. Choose a “release
day” when all the code will be merged together.
2. Restrict new
code commits until the release is complete, to avoid conflicts.
3. Run integration
tests (and maybe performance tests).
4. Prepare the
deployment.
5. Notify customers of
an upcoming maintenance window.
6. Manually deploy the
application and any other updates.
With a CI/CD pipeline in place, the process looks like this instead:
1. Developers commit code to the repository as soon as they're done.
2. The CI server
observes the commit and automatically triggers a build pipeline.
3. If the build
completes successfully, the CI server runs all the unit tests. If any tests
fail, the build stops.
4. The CI server runs
integration tests and/or smoke tests, if any.
5. Assuming the
previous steps all complete successfully, the CI server signals success. Then,
the application is ready to be deployed.
6. If the CD process
has also been automated, the code is deployed to production servers.
Module 4
DevOps tools
There are many DevOps software tools.
1. Source repositories
a. GitHub
b. Bitbucket
2. CI/CD tools
a. GitHub Actions
b. Jenkins
c. Google Cloud Deploy
3. Infrastructure as Code (IaC) tools
a. Terraform
b. Ansible
4. Container management tools
a. Docker
b. Kubernetes
5. Security scanning tools
a. Snyk
b. SonarQube
6. Production monitoring tools
a. Datadog
b. Application
Stages of DevOps
1. Discover
2. Plan
3. Build
4. Test
5. Monitor
6. Operate
7. Continuous feedback
1. Discover
This allows everyone on your team to
share and comment on anything and will be important throughout
the DevOps lifecycle. Examples of tools you can use include
a. Jira Product Discovery,
b. Miro, and
c. Mural.
2. Plan
Includes
a. sprint (to break project in actionable blocks)
b. planning
and
c. issue tracking, as well as
d. continued
collaboration.
Examples of tools you can use include
a. Jira Software,
b. Confluence,
and
c. Slack.
3. Build
a. to create individual development
environments,
b. monitor versions with version
control
c. continuously integrate and test
d. have source control of your code
4. Test
tools
that can automate testing like
a. Veracode
and
b. SmartBear
c. Zephyr Squad or
d. Zephyr
Scale.
5. Monitor
Look for tools that can integrate with your group chat clients and send
you notifications or alerts when you’ve automated monitoring your servers and
application performance. An example tool you can use is Jira Software.
6. Operate
Once your software has deployed, look for tools that can track incidents,
changes, problems, and software projects on a single platform.
An example tool you can use is Jira Software
7. Continuous feedback
Look for applications that can
integrate your chat clients with a survey platform or social media platform.
Examples of tools you can use include
a. Slack and
b. GetFeedback.
Popular tools for CI/CD
a.
Jenkins,
b.
GitLab,
c.
Travis CI, and
d.
CircleCI are all tools
which can automate the different stages of the software development lifecycle,
including
1. building,
2. testing, and
3. deploying.
They are often used in
DevOps to continuously build and test software, which allows you to continuously integrate
your changes into your build.
Tools like
a. Spinnaker,
b. Argo CD, and
c. Harness can be used to automate continuous delivery and
deployment and to simplify your DevOps processes.
Module 4
From coding to
the cloud
Coding ---> ….. ---> ….. ---> Cloud
Coding ---> DevOps ---> Cloud
Development team ---> DevOps ---> operations team
DevOps is the collaboration between the development team and the operations team; DevOps works from coding to the cloud.
The programmer loads the program into a container (Docker | Kubernetes).
Containers
with Docker and Kubernetes
Containers
are
applications that are packaged together with their
configuration
and dependencies.
Docker
is the most common way to package and run
applications in containers. It can build container images, run containers, and
manage container data
Kubernetes
is a
portable and extensible platform to assist developers with containerized
applications.
It’s a tool
that developers use while working in Docker to run and manage Docker
containers, allowing you to deploy, scale, and manage containerized
applications across clusters.
Containers in the CI/CD pipeline
Continuous
integration and continuous delivery/deployment (CI/CD) is the automation of an
entire pipeline of tools that build, test, package, and deploy an application
whenever developers commit
a code change to the source control repository.
Feedback can be provided to
developers at any given stage of the process.
A
pipeline is an automated process and set of tools that
developers use during the software development lifecycle.
In
a pipeline, the steps of a process are carried out in sequential order. The
reason behind this is that if any step fails, the pipeline can stop without
deploying the changes. The pipeline stops executing the steps and marks the job
as failed.
Using
containers in the CI/CD pipeline can bring
developers additional flexibility, consistency, and benefits to building,
testing, packaging, and deploying an application. Because containers are
lightweight, they allow for a faster deployment of the application. Containers
help eliminate the common “works on my machine” syndrome.
Docker images contain the application code,
data files, configuration files, libraries, and other dependencies needed to
run an application. Typically, these consist of multiple layers in order to
keep the images as small as possible. Container images allow developers to run
tests, conduct quality performance checks, and ensure each code change is
tested and works as expected before being deployed.
Kubernetes is a tool for organizing,
sharing, and managing containers. This powerful tool gives programmers and
developers the ability to scale, duplicate, push updates, roll back updates and
versions, and operate under version control.
Another
advantage of using containers in a CI/CD pipeline
is that developers are able to deploy multiple versions of an application at
the same time without interfering with one another.
It
can reduce the number of errors from configuration issues and allow delivery
teams to quickly move these containers between different environments, like
from build to staging and staging to production.
And
lastly, using containers in a CI/CD pipeline supports automated
scaling, load balancing, and high availability of applications creating robust
deployments.
Module 4
Continuous testing and continuous improvement
Continuous testing
Continuous testing means running automated test
suites every time a change is committed to the source code repository.
There are three types of testing that you’ll typically
see in the CI/CD pipeline. These include:
·
Unit testing
·
Integration testing
·
System testing
Unit testing
Unit testing tests an individual unit within your code—a unit can be a function, a module, or a set of processes. Unit testing checks that each individual unit behaves as expected.
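A minimal pytest sketch (the add() function is a hypothetical unit under test; save as test_add.py and run with pytest):
# One unit, one test: pytest discovers functions named test_*.
def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0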
System testing
System testing simulates active users and runs on the entire system to test for performance. Performance testing can include testing how your program, software, or application handles:
1. high loads or stress,
2. changes in the configuration, and
3. changes in system security.
Testing frameworks and tools
JUnit: for the Java programming language.
PyUnit: for Python.
NUnit: for C#.
Selenium: for web application developers.
Cypress: a JavaScript-based framework, often used for front-end testing of web-based applications.
Postman: to automate
1. unit tests,
2. function tests,
3. integration tests,
4. end-to-end tests,
5. regression tests, and
6. more in your CI/CD pipeline.
Continuous improvement
Continuous improvement is a crucial part of the DevOps mindset. The team is always engaged in checking for product efficiency improvements and for ways to reduce errors and bottlenecks.
Key
benefits of continuous improvement include:
·
Increased productivity and efficiency
·
Improved quality of products and services
·
Reduced waste
·
Competitive products and services
·
Increased innovation
·
Increased employee engagement
·
Reduced employee turnover
Key performance indicators (KPIs)
KPIs are metrics used to measure improvements in software or application quality and performance.
Popular metrics in DevOps that you can use to
measure performance include:
·
Lead time for changes:
This is the length of
time it takes for a code change to be committed to a branch (Git/Github) and be
in a deployable state.
·
Change failure rate:
This is the percentage
of code changes that lead to failures and require fixing after they reach
production or are released to end-users.
·
Deployment frequency:
This measures the
frequency of how often new code is deployed into production.
·
Mean time to recovery:
This measures how long
it takes to recover from a partial service interruption or total failure of
your product or system.
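As a purely illustrative example of the first metric, lead time for changes is just the difference between two timestamps (the values below are made up):
# Lead time = time deployable - time committed.
from datetime import datetime

committed = datetime(2024, 5, 1, 9, 30)    # code change committed to branch
deployable = datetime(2024, 5, 1, 14, 45)  # change reaches a deployable state
print("Lead time for this change:", deployable - committed)  # 5:15:00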
Key takeaways
Making high-quality tests part of your CI/CD
pipeline is critical to your DevOps success.
Practice quiz: CI/CD pipelines
Congratulations! You passed!
1.
Which types of tests are automated and run by a CI/CD pipeline?
Correct. Unit, integration, and system are the types of tests commonly used to perform continuous testing in a CI/CD pipeline.
2.
Why is automated testing important? Select all that apply.
That’s right! Automated testing ensures that all of your code changes are tested for errors or bugs, allowing you to create fixes as issues arise. It also reduces the risk of human error, especially when performing larger tests that would take time if conducted manually.
That’s right! Automated testing ensures that all of your code changes are tested for errors or bugs, allowing you to create fixes as issues arise. It also reduces the risk of human error, especially when performing larger tests that would take time if conducted manually.
3.
Which actions typically trigger a CI/CD pipeline to start? Select all that apply.
That’s right! A change in code, a scheduled or user-initiated workflow, and another pipeline are all actions that could trigger a CI/CD pipeline to start.
4.
What are some advantages of implementing DevOps?
That’s right! The advantages of implementing DevOps include an automated software development lifecycle, collaborative environments for the development and operations teams, and continuous, iterative improvements to your software or applications.
5.
Which of the following are benefits of using containers in your CI/CD pipeline? Select all that apply.
That’s right! The benefits of using containers in your CI/CD pipeline include deploying applications easily to multiple operating systems and hardware platforms, deploying multiple versions of an application at the same time without interfering with one another, and creating a more reliable way to work with applications at any stage in the pipeline process.
Continuous Integration (main heading)
Module 4
Automation through CI/CD
The essential parts of a CI automation setup are:
1. A version control system (VCS)
2. A build server
3. An automated testing framework
AUTOMATION is a programming function that:
1. Enables continuous routines to be scaled
2. Catches errors automatically
3. Reduces the need for human intervention
The automation of manual tasks is key to CI.
The CI/CD pipeline is a vital component of software development.
Automation makes it easier for DevOps teams and programming teams to work together.
Module 4
Integration with GitHub
How to integrate CI with GitHub.com.
For CI, CircleCI is used (circleci.com).
An account is created on circleci.com using the Google account …ni1@gmail.com.
Module 4
Cloud Build on GCP
Cloud
Build is a fully managed continuous
integration and continuous delivery (CI/CD) service
provided by GCP
It allows developers to automate the process of
1.
building,
2.
testing, and
3.
deploying applications or
4.
code changes
to various environments.
The core components of Cloud Build include:
·
Build triggers
·
Build configurations
·
Build steps
1. Build triggers: define when and under what conditions a build should be triggered. Cloud Build supports various types of build triggers, including:
· Push trigger: initiates a build when code changes are pushed to a specific branch of a version control repository like GitHub.
· Tag trigger: triggers a build when a new tag is applied to the repository.
· Pull request trigger: allows you to run tests and checks before merging code changes.
· Scheduled trigger:
2. Build configurations: YAML files that define the build steps, environment variables, and other settings for a build. The build configuration file is typically named cloudbuild.yaml and is placed in the root of the repository.
3. Build steps
are individual actions that Cloud Build executes in
sequence according to the build configuration. Each step can run commands or
scripts and the steps are executed in the order they are listed. Let’s look at
an example.
A typical build configuration might include the
following build steps:
i.
Fetching dependencies: The first step pulls in the
required libraries and dependencies for the application.
ii.
Building the application: This step compiles the
code and creates the application binaries.
iii.
Running tests:
iv.
Deploying: The last step deploys the
application to a specified environment like staging or production.
Benefits
Using Cloud Build for CI/CD workflows offers a number of benefits,
including:
1. speed,
2. scalability,
and
3. seamless
integrations with other GCP services.
Cloud Build is a fully managed service, meaning you do not need to worry
about infrastructure setup and maintenance.
It automatically scales resources based on your build requirements,
allowing you to run multiple builds in parallel, reducing build times and
increasing overall development velocity, speed, and efficiency.
Cloud Build's ability to scale automatically means it can handle builds
of any size, from small projects to large-scale applications. As your
development needs grow, Cloud Build can accommodate the increased workload
without manual intervention, ensuring that your CI/CD process remains smooth
and efficient.
Cloud Build seamlessly integrates with other GCP services, making it
easy to incorporate different stages of the CI/CD workflow into your projects.
Integration capabilities
Cloud Build offers integration capabilities with both:
1. GitHub and
2. Google Cloud Source Repositories.
Module 4
CI best practices
Continuous integration (CI) is a software development practice in which code changes are integrated into a shared repository automatically, frequently, and safely.
Key principles of CI include:
- Integration
- Builds
- Tests
- Feedback (from clients)
- Version control
Core practices
CI is composed of three core practices, which include:
·
Automated building (Auto compilations, artifacts)
·
Automated testing
·
Version control system integration
Benefits
Continuous integration
enables faster feedback, higher quality software, and a lower risk of bugs and
conflicts in your code.
CI is a way for developers
to ensure that their code is always up to date and ready to deploy.
CI ensures that reliable
software is getting into the hands of users.
Module 4
CI testing
Continuous testing means running automated test suites whenever code changes occur. In other words, it is running tests as part of the CI/CD pipeline, between Build and Deploy.
Integration testing
Continuous integration is when developers change code and deposit it into the shared repository frequently.
The benefits of continuous integration are:
1. Revision control
2. Build automation
3. Continuous testing
An integration test checks how different parts, software modules, or routines work together. It runs in the CI pipeline.
Whenever developers make a change, integration tests verify that everything is working together as expected.
There are different types of CI tests in the CI pipeline:
1. Code quality tests: check that the code is not overly complicated.
2. Unit tests: test a function, a module, or a set of processes.
3. Integration tests: test that different parts of the application, or modules, are working together as expected.
4. Security or license tests: test that the application is free from
a. threats,
b. vulnerabilities, and
c. risks.
Tools used in integration testing:
1. Pytest: in Python, to test the integration among web services.
2. Selenium framework: to test browser-based applications or sites, to load web pages, and to check functionality.
3. Playwright framework: same as above.
There are also some "code coverage" testing tools.
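A hedged integration-test sketch using pytest and the requests library; the URL is a placeholder for a service your application depends on:
# Verify that two parts of the system actually talk to each other.
import requests

def test_service_responds():
    response = requests.get("http://localhost:8000/")
    assert response.status_code == 200  # the parts work together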
End-to-end testing
End-to-end testing is used to test the functionality and performance of your entire application from start to finish by simulating a real user scenario.
Practice quiz: Continuous integration
Practice Quiz30 min
Congratulations! You passed!
1.
What is the role of a webhook in GitHub?
Correct! A webhook is a URL provided to GitHub by the CI system, and it allows GitHub to notify CI tools about code changes.
2.
Which of the following are the core components of Cloud Build? Select all that apply.
Correct! Build triggers, configurations, and steps all make up the core components of Cloud Build. Build triggers are events that begin the Cloud Build process. Build configurations are YAML files that define the steps and settings for your build. Build steps are the actions that Cloud Build executes in a specific order depending on the build configuration.
3.
What is the purpose of utilizing version control in continuous integration with your code?
Correct! Version control allows developers to view code history and manage the code changes as needed.
4.
Why should you run integration tests? Select all that apply.
Correct! You conduct integration tests to make sure the different parts of your application work together and catch errors earlier on in the CI pipeline, which can save you time, money, and a lot of headaches.
5.
You have developed new code for an application you are creating for a client. You are using the Cloud Build service supported by Google Cloud Platform, or GCP. Which step in the build steps process refers to moving the application to an environment when it is ready for production?
Correct! The last step in the build step process is to deploy the application to a specific environment for production.
Module 4 Continuous Delivery and Continuous Deployment
With continuous delivery, deployment is not to production but to a test (staging) server, as in the case above.
Module 4
Value stream mapping
We draw a flowchart of all the steps taken during CI/CD.
Value stream mapping (VSM) is a technique used to analyze, design, and manage the flow of materials and information required to bring a final product to a customer.
VSM is also known as:
1. Material flow
2. Information flow
mapping, done through flowcharts with software like Lucidchart.
Benefits of VSM
1. To identify bottlenecks in your value stream,
2. To identify inefficiencies in your process, and
3. To identify current areas of improvement.
4. It helps to reduce the number of steps in your process and
5. helps you visualize where handoffs occur.
6. To identify where wait time is preventing work from moving through your
system.
The goals of VSM:
1. To reduce the waste of time and resources.
2. To increase the efficiency of processes.
To do this, create a detailed map of all the necessary steps involved in your business process with a diagram or a flowchart.
This diagram outlines these steps:
1. Define the problem. What are you trying to solve or achieve?
2. List the steps in your current process. For each step, make sure to note:
a. the amount of time needed,
b. any inputs and any outputs, and
c. the resources—both people and materials—necessary to complete each step.
3. Create and organize the map using the above data. Your goal is to illustrate the flow of your process, so begin with the start and finish with the end of your process. If you need help organizing the flow, think back to the steps in the software development lifecycle and use that as a guide to organize your steps.
4. Find areas that can be improved. Gather information about your current process by answering questions like:
a. Can some tasks be done in parallel?
b. Can tasks be reordered to improve efficiency?
c. Can tasks be automated to reduce the amount of manual labor?
5. Update the map with your findings.
6. Implement the new process. But don’t stop here! If this new process works well for your project—great! Keep in mind that coding, software, programs, apps—everything digital—are constantly updating to meet client or business needs. It can be helpful to implement an iterative process—either manual or automated—to make sure that any new hiccups in your process can be identified and addressed before they become a larger issue.
For more information and an explanation of how value maps benefit DevOps, see the article How to Use Value Stream Mapping in DevOps on the Lucidchart website.
Other common components of a VSM
include: lead times, wait times,
handoffs, and waste.
- Lead time is the length of time between when a code change is
committed to the repository and when it is in a deployable state.
- Wait time indicates the length of time a product has to wait
between teams.
- Handoffs are the transfer of information or responsibilities
from one party to another.
- Waste refers to any time you are not creating value. In
software development, there are seven types of waste production.
- Partially completed work refers to when software is released in an
incomplete state. This leads to more waste because additional work is
needed to make updates.
- Extra features refers to creating waste by doing more work than is required. This
may be well-intentioned but can signal a disconnect between what the
customer wants and what’s being created.
- Relearning refers to waste generated from a lack of internal documentation.
This can be a result of not investigating software errors, failures, or
outages when they occur and having to relearn what to do if they happen
again. It also includes having to learn new or unfamiliar technologies,
which can create delays or wait times in workflows.
- Handoff waste can occur in a few places—when project owners change, when
roles change, when there is employee turnover, and when there is a
breakdown in the communication pipeline between teams.
- Delays refer to when there are dependencies on coupled parts of the
project. A delay in one stage or decision may create a delay in another,
which can create a surge in waste.
- Task switching refers to the waste that is generated when an individual has to
jump between tasks, which involves mental context switching. This may
result in the individual working more slowly and/or less efficiently.
- Defects refers to waste that is generated when bugs are released with
software. Similar to partially completed work, defects can result in
extra time and money down the line, as well as delays and interruptions
in workflow due to task switching.
Module 4
GitHub and delivery
GitHub can facilitate your
efforts in CI/CD.
How GitHub
supports CI/CD
GitHub (via Actions, webhooks, and APIs) supports external CI/CD tools and can restrict merging of a pull request until required steps are complete.
For example, GitHub can refuse to merge a pull request before the completion of steps such as:
1. The pull request has been reviewed and signed off by one or more code reviewers
2. The CI process has completed
3. The CI tests have passed
4. The pull requester has acknowledged the project license, code of conduct, and coding standards
GitHub
Actions
GitHub Actions is a
feature of GitHub that allows you to run tasks whenever certain events occur in
your code repository.
With GitHub Actions, you are able to
trigger any part of a CI/CD pipeline off any webhook on GitHub.
Resources for more information
GitHub Actions documentation - GitHub Docs
A beginner’s guide to CI/CD and
automation on GitHub - The GitHub Blog
GitHub Protips: Tips, tricks, hacks,
and secrets from Jason Etcovitch - The GitHub Blog
Module 4
Configuration management
Consistency and stability
CM ensures that each component of your code is automatically
and properly
1. built,
2. monitored, and
3. updated as needed
Configuration files
Configuration files are commonly referred to as a manifest or a playbook.
You can think of the statements in configuration files as describing how you want the system to look and perform.
A playbook (configuration file) might say, "I need a server with:
1. 32 GB of RAM (allocated virtually from the main portion of memory),
2. running Debian Linux,
3. with Python 3.9 and
4. Nginx installed."
Create a configuration file like this as the input to your configuration management tool (like Puppet), describing the desired state as described above.
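As a loose illustration (plain Python data, not a real manifest or playbook format), the desired state above could be written as data, and a test-and-repair style check would then report only what differs from it:
# Desired state expressed as data (values from the playbook example above).
desired_state = {
    "ram_gb": 32,
    "os": "Debian Linux",
    "packages": ["python3.9", "nginx"],
}

def needs_change(current, desired):
    # Report only the keys whose current value differs from the desired one.
    return {key: value for key, value in desired.items() if current.get(key) != value}

print(needs_change({"os": "Debian Linux", "ram_gb": 16}, desired_state))
# {'ram_gb': 32, 'packages': ['python3.9', 'nginx']}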
Pro tip:
Store configuration
management files alongside the application code in your revision control system.
Continuous deployment (Main Heading)
Module 4
From staging to production
Like from coding to the cloud: staging is from coding to the cloud, and from the cloud to the end user is production.
Staging is:
1. Coding
2. Testing (all types: unit tests, integration tests, etc.)
3. More testing (alpha, beta tests)
4. After removing sensitive information, containerizing the application
5. The container is delivered to DevOps
6. DevOps puts the application on the cloud server
Then production: DevOps delivers the application from the cloud server to the end user; this is called production.
In other words, production means the software is in REAL LIFE (finally).
Module 4
Postmortem
Module 4
Qwiklabs assessment: Set up CICD
Graded assessment for module 4
Graded Quiz50 min
Congratulations! You passed!
1.
GitHub is very helpful in continuous integration (CI) because it automatically notifies the CI tools about code changes and whether your commits meet the conditions you have set. Which of the following is the key element that allows communication between your CI system and GitHub?
2.
You log on to a virtual call to meet with another software developer to discuss build steps in Cloud Build. What are the build steps that are typically included in the build configuration? Select all that apply.
3.
You are excited to implement a new practice in your coding development. What software development practice describes code changes that occur automatically, frequently, and safely when integrating them into a shared repository?
4.
When speaking to a new Python programmer, how might you describe the workflows available in GitHub Actions? Select all that apply.
5.
Your team has just launched a mobile application that translates English into American Sign Language. Upon the release, your team discovers that the app doesn't integrate well with the Android system. Your team fixes the problem urgently and after a few quick rounds of testing, your team pushes out another release. What type of release is this?
6.
A software developer pushes out some poorly written code to production. This resulted in a system failure and multiple outages. Which process allows teams to understand and learn from system failures and incidents?
7.
What is DevOps? Select the best answer.
8.
As part of a development or operations team, what are the benefits of using DevOps tools in the software development lifecycle?
9.
Which of the steps of the “from coding to the cloud” process listed below are done by the DevOps team? Select all that apply.
10.
There are seven key concepts of automation for continuous integration. Which of the following are included in those concepts? Select all that apply.
Not quite. Refer to Automation for more information.
11.
Which metric would you employ to measure the length of time it takes for a code change to be committed to a branch and be in a deployable state?
12.
Value stream mapping (VSM) can help you identify bottlenecks in your value stream, inefficiencies in your process, and current areas of improvement. Which of the following are common components of VSM? Select all that apply.
You didn’t select all the correct answers