Configuration Management and the Cloud
https://www.coursera.org/learn/configuration-management-cloud/home/module/1

Module 1

Cloud Services Overview Video #3

 "Services running in the cloud" means:

    The services are running somewhere else:

        1. in a data center, or

        2. on remote servers accessed over the Internet.

Examples: Gmail (email), Dropbox (file storage), MS Office 365.
You can even get an entire OS environment as a service from the cloud.





When setting up a cloud deployment you need to consider REGIONS:

        REGIONS --->

                            ZONES ---->

                                                PHYSICAL DATA CENTERS

A region contains zones, and each zone maps to one or more physical data centers.

If one point fails, the service is automatically migrated to another point without end users noticing any disruption.

In other words, the service skips from the faulty server to a healthy server located somewhere else.

Question

If a bare-bones cloud computing experience is needed as well as a high level of control over the software being run, what Cloud service would one choose?

Correct

Great job! Infrastructure as a Service (IaaS) provides users with the bare minimum needed to utilize a server’s computational resources, such as a virtual machine. It is the user's responsibility to configure everything else.


Module 1

Scaling in the Cloud








In VERTICAL scaling we upgrade each machine's resources: disk capacity, RAM, CPU capacity, or the number of CPUs in one machine. In HORIZONTAL scaling we add more machines to the pool instead.

If the number of simultaneous logins increases, the cloud service provider can automatically scale vertically and horizontally. This is automatic scaling according to the resources actually being used.


Question

Which "direction" are we scaling when we add capacity to our network in order to meet demand?

Correct

Nice work! Adding capacity to our network to meet demand—whether vertically or horizontally—is considered to be upscaling.


Module 1

Evaluating the Cloud

Question

What are some advantages to using cloud services? (Select all that apply)

Correct

Great work! Cloud services provide many advantages, such as outsourcing support and maintenance, simplifying configuration management, and letting the provider take care of security.

Module 1

Migrating to the Cloud


Question

What does the phrase lift and shift refer to?

Correct

Nailed it! When we migrate from traditional server configurations to the Cloud, we lift the current configuration and shift it to a virtual machine.






Module 1
Practice Quiz: Cloud Computing

Congratulations! You passed!

Grade received 100%
To pass 80% or higher
Question 1

When we use cloud services provided to the general consumer, such as Google Suite or Gmail, what cloud deployment model are we using?

1 / 1 point
Correct

Keep it up! A public cloud offers services to the general public, often as SaaS (Software as a Service) offerings.

Question 2

What is a container?

1 / 1 point
Correct

You got it! A container is an OS- and hardware-independent environment that allows for easy migration and compatibility.

Question 3

Select the examples of Managed Web Application Platforms. (Check all that apply)

1 / 1 point
Correct

Nice work! Google App Engine is a Platform as a Service (PaaS) product that offers access to Google's flexible hosting and Tier 1 Internet service for Web app developers and enterprises.

Correct

Great job! AWS Elastic Beanstalk is an easy-to-use PaaS service for deploying and scaling web applications.

Correct

Woohoo! Microsoft Azure App Service enables you to build and host web apps, mobile back ends, and RESTful APIs in the programming language of your choice without having to manage infrastructure.

Question 4

When a company solely owns and manages its own cloud infrastructure, what type of cloud deployment model are they using?

1 / 1 point
Correct

Way to go! A private cloud deployment is one that is fully owned and operated by a single company or entity.

Question 5

Which "direction" are we scaling when we add RAM or CPU resources to individual nodes?

1 / 1 point
Correct

Awesome! Vertical scaling is a form of upscaling, but upscaling can also be horizontal.


Module 1
Spinning up VMs in the Cloud





Question

If we want to reuse an exact copy of a virtual machine, we might save a snapshot to use as a reference image later. What is this snapshot called?

Correct

Excellent! A disk image is a snapshot of a virtual machine’s disk, and is an exact copy of the virtual machine at the time of the snapshot.

Module 1
Creating a New VM Using the GCP Web UI

GCP = Google Cloud Platform

console.cloud.google.com

    1. Create a project. Project Name: First cloud step
    2. Open the project. To create a VM:
    3. In the menu, go to <Compute Engine>
    4. Select <VM instances>
    5. Click <Create Instance>
    6. Name the instance, e.g. <Linux Instance>




Question

Using the web interface, what is an easy way to create a virtual machine identical to the one we've just configured?

Correct

Nicely done! By clicking the link labeled “Command line”, we can see the exact command used to create the virtual machine.


Module 1
Review: Customizing VMs in GCP





$ sudo cp hello_cloud.py /usr/local/bin
$ sudo cp hello_cloud.service /etc/systemd/system
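The unit file copied above tells systemd how to run hello_cloud.py at boot. The lab's actual unit contents aren't reproduced in these notes, so the sketch below is an assumption of what hello_cloud.service might look like, written as a small Python install script in keeping with the course's automation theme (run as root; paths match the cp commands above):

#!/usr/bin/env python3
"""Sketch: install and enable a systemd unit for hello_cloud.py.
The unit contents are an ASSUMPTION of what hello_cloud.service might
contain; adjust them to match the actual lab file."""
import subprocess

UNIT = """\
[Unit]
Description=Hello Cloud web service (assumed example unit)
After=network.target

[Service]
ExecStart=/usr/local/bin/hello_cloud.py
Restart=always

[Install]
WantedBy=multi-user.target
"""

# Write the unit where the cp command above put it
with open("/etc/systemd/system/hello_cloud.service", "w") as f:
    f.write(UNIT)

# Reload systemd and start the service now and at every boot
subprocess.run(["systemctl", "daemon-reload"], check=True)
subprocess.run(["systemctl", "enable", "--now", "hello_cloud.service"], check=True)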

Module 1
Review: Templating a customized VM

$ gcloud init

user@ubuntu:~$ gcloud compute instances create --source-instance-template webserver-template ws1 ws2 ws3 ws4 ws5

Module 1
Templating a Customized VM

In the last video we saw how to keep a VM updated with PUPPET.
Puppet is a software configuration management tool which includes its own declarative language to describe system configuration.

1. Create an instance template from the VM we made.
    Then we use the created template to create a bunch of instances,
    like ws1 ws2 ws3 ws4 ws5,
    through the $ gcloud command shown above.

        1. STOP the existing instance.

        2. When the instance is STOPPED, click on the instance name.

        3. Click on BOOT DISK (screenshots omitted from these notes).

        4. Press the <CREATE> button at the bottom.



$ gcloud compute instances create --source-instance-template webserver-template ws1 ws2 ws3 ws4 ws5

In the above command, the template name "webserver-template" is the one we created ourselves through the <Create Template> option.
Through the above $ gcloud command we created five VMs with the same configuration, using the template we already created.

  

Question

What does the gcloud init command do?

Correct

Awesome! The gcloud init command sets up the authentication procedure between our virtual machine and Google Cloud.


Module 1
Managing VMs in GCP
 Over the last few videos we learned how to create and use virtual machines running on GCP. We then explored how we can use one VM as a template for creating many more VMs with the same setup. You can find a lot more information about this in the following tutorials:
 

Practice Quiz: Managing Instances in the Cloud


Congratulations! You passed!

Grade received 100%
To pass 80% or higher
Question 1

What is templating?

1 / 1 point
Correct

Way to go! Effective templating software allows you to capture an entire virtual machine configuration and use it to create new ones.

Question 2

Why is it important to consider the region and zone for your cloud service?

1 / 1 point
Correct

Right on! Generally, you're going to want to choose a region that is close to your users so that you can deliver better performance.

Question 3

What option is used to determine which OS will run on the VM?

1 / 1 point
Correct

Woohoo! The boot disk from which the VM boots will determine what operating system runs on the VM.

Question 4

When setting up a new series of VMs using a reference image, what are some possible options for upgrading services running on our VM at scale?

1 / 1 point
Correct

Nice job! One way of updating VM services at scale is to simply spin them up again with an updated reference image.

Correct

Awesome! Puppet or other configuration management systems provide a streamlined way to deploy service updates at scale.

Question 5

When using gcloud to manage VMs, what two parameters tell gcloud that a) we want to manage our VM resources and b) that we want to deal with individual VMs? (Check two)

1 / 1 point

Module 1
Cloud Scale Deployments

The main advantage of the cloud is the ability to scale services UP and DOWN according to need, and pay only for what you use.
You can easily add or remove NODES from the system.
A node may be an instance, a container, an application, etc.


In the PLATFORM AS A SERVICE (PaaS) model, the provider controls the database servers and their services.

If any one entry point fails, the web service does not stop.






Question

What does a load balancer do?

Correct

Great job! Load balancers reroute requests in order to balance and reduce network load.


Module 1
What is orchestration?


 For the different virtual instances to correctly interact with each other we need ORCHESTRATION.

Orchestration is the automated configuration and coordination of complex IT systems in a virtual environment.

In other words, orchestration is the automatic configuration of a lot of things that talk to each other.

All virtual service providers give APIs to add, modify, and delete VMs or any other services.

 The orchestration service sits between these two layers, letting them communicate with each other.

To check and ensure the service is running smoothly we set up MONITORING and ALERTING services.
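As a small illustration of driving a provider's tooling from code (a sketch, not the course's exact solution), the snippet below shells out from Python to the real gcloud CLI to create several instances from the template used earlier in these notes:

#!/usr/bin/env python3
"""Sketch: orchestrate VM creation by driving the gcloud CLI from Python."""
import subprocess

TEMPLATE = "webserver-template"   # the template created earlier in these notes
INSTANCES = ["ws1", "ws2", "ws3", "ws4", "ws5"]

# Equivalent to the manual command:
#   gcloud compute instances create --source-instance-template webserver-template ws1 ... ws5
subprocess.run(
    ["gcloud", "compute", "instances", "create",
     "--source-instance-template", TEMPLATE, *INSTANCES],
    check=True,
)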


Question

What is the difference between automation and orchestration?

Correct

Nice work! Automation is when we set up a single step in a process to require no oversight, while orchestration refers to automating the entire process (many tasks/services).

 
 Module 1
Cloud Infrastructure as Code
 We already studied Infrastructure as Code.
Now we will study Cloud Infrastructure as Code.


We can quickly get an idea of how the infrastructure works by looking at the configuration files.
We can keep track of the configuration files using a version control system like Git (and GitHub.com).
Each cloud service provider has its own tool to manage infrastructure (resources)
as code:

    1. Amazon    has  CloudFormation
    2. Google    has  Cloud Deployment Manager
    3. MS        has  Azure Resource Manager
    4. OpenStack has  Heat Orchestration Templates





Practice Quiz: Automating Cloud Deployments

Question 1

In order to detect and correct errors before end users are affected, what technique(s) should we set up?

1 / 1 point
Correct

You got it! Monitoring and alerting allows us to monitor and correct incidents or failures before they reach the end user.

Question 2

When accessing a website, your web browser retrieves the IP address of a specific node in order to load the site. What is this node called?

1 / 1 point
Correct

Awesome! When you connect to a website via the Internet, the web browser first receives an IP address. This IP address identifies a particular computer: the entry point of the website.

Question 3

What simple load-balancing technique just assigns to each node one request at a time?

1 / 1 point
Correct

Right on! Round-robin load balancing is a basic way of spreading client requests across a server group. In turn, a client request will be forwarded to each server. The load balancer is directed by the algorithm to go back to the top of the list and repeat again.

Question 4

Which cloud automation technique spins up more VMs into instance groups when demand increases, and shuts down VMs when demand decreases?

1 / 1 point
Correct

Way to go! Autoscaling helps us save costs by matching resources with demand automatically.

Question 5

Which of the following are examples of orchestration tools used to manage cloud resources as code? (Check all that apply)

1 / 1 point
Correct

Woohoo! Like Puppet, Terraform uses its own domain specific language (DSL), and manages configuration resources as code.

Correct

Nice job! CloudFormation is a service provided by Amazon to assist in modeling and managing AWS resources.

Correct

Excellent! Azure Resource Manager is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources.

Create VM template and automate deployment
Graded Quiz





Grade received 90%
Latest Submission Grade 90%


This graded quiz assesses your understanding of the concepts and procedures covered in the lab you just completed. Please answer the questions based on the activities you performed in the lab.

Note:

  • You can refer to your completed lab for help with the quiz.

  • In order to complete this quiz, you must have completed the lab before it.

Question 1

When creating a template for a virtual machine (VM) in Google Cloud Platform (GCP), what is the first step in the process?

1 / 1 point
Correct
Question 2

What is the primary purpose of the Google Cloud command-line interface (gcloud) in cloud computing?

1 / 1 point
Correct
Question 3

When creating an image based on the vm1 disk in Google Cloud Platform, what field should be set to 'vm-image'?

1 / 1 point
Correct
Question 4

When creating an instance template in Google Cloud Platform, as described in the process, what is the purpose of setting

the firewall to 'allow HTTP and HTTPS traffic'?

1 / 1 point
Correct
Question 5

What is the primary purpose of the gcloud command in the Google Cloud Platform (GCP) ecosystem?

1 / 1 point
Correct
Question 6

In the process of creating VMs in Google Cloud Platform, why is the image named 'vm-image' created from the disk of the VM instance 'vm1'?

0 / 1 point
Question 7

How does the creation of the 'vm-image' from the vm1 disk and the subsequent 'vm1-template' relate to the concept of cloning in VM management?

1 / 1 point
Correct
Question 8

how would you typically apply updates or changes to an existing VM instance template (like 'vm1-template')?

1 / 1 point
Correct
Question 9

Which machine series and machine type are used when creating the instance template named "vm1-template" in the lab instructions?

1 / 1 point
Correct
Question 10

After creating the "vm1-template" instance template, you notice that the boot disk for the template is set to a "standard persistent disk." What type of storage is this and what are its characteristics?

1 / 1 point
Correct
  
Module 1
Managing Cloud instances at scale

Module 1
Storing Data in the Cloud




If we want to share data storage across instances, we use SHARED FILE SYSTEM SOLUTIONS.
Cloud providers offer these as a managed service.
Using these solutions, data can be accessed through network file system protocols like NFS and CIFS. This lets you connect many different instances and containers to the same file system without any programming.


These objects are things like
photos and
cat videos,
encoded, stored, and retrieved as binary data.

BLOB = Binary Large OBject

The BLOBs are stored in locations known as BUCKETS.
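A small sketch of storing and fetching a blob with Google's google-cloud-storage Python client (assumes `pip install google-cloud-storage`, default credentials configured, and an existing bucket; "my-bucket" and the object names are illustrative):

# Sketch: upload and download a blob with the google-cloud-storage client.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")   # illustrative bucket name

# Store a photo as a blob in the bucket
blob = bucket.blob("photos/cat_video_thumbnail.png")
blob.upload_from_filename("cat_video_thumbnail.png")

# Retrieve it again as binary data
data = blob.download_as_bytes()
print(f"Downloaded {len(data)} bytes")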

Most cloud providers offer databases as a service. These come in two basic flavours:
    SQL
    NoSQL


With SQL databases we can retrieve data by running SQL queries (e.g., SELECT statements).


NoSQL databases are designed to be distributed across tons of machines and are super fast at retrieving data. Instead of SQL queries, data is retrieved through specific APIs provided by the database.

Question

Which form of cloud data storage is based on objects instead of traditional file system hierarchies?

Correct

Right on! Blobs are pieces of data that are stored as independent objects, and require no file system.


The performance of storage is affected by a number of factors:
        Throughput
        IOPS
and     Latency



Your current data that is frequently used should be placed in HOT storage, and
your old backups (e.g., data from five years ago) should be placed in COLD storage.
Hot storage typically uses SSDs.

Module 1
Load Balancing


Round robin is like giving one cookie to each person, turn by turn, again and again, until the cookies are finished.


Make sure that all the IP addresses in the pool are configured in the DNS server.




We can add new servers or instances to the pool very easily, to route some traffic to the new server. A sketch of the round-robin idea follows.
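To make the idea concrete, here is a minimal Python sketch of round-robin selection over a pool of backends (the server addresses are illustrative placeholders):

# Sketch: round-robin load balancing over a pool of backends.
import itertools

pool = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
next_backend = itertools.cycle(pool)   # endlessly cycles through the pool

for request_id in range(7):
    backend = next(next_backend)
    print(f"request {request_id} -> {backend}")
# request 0 -> 10.0.0.1, request 1 -> 10.0.0.2, request 2 -> 10.0.0.3,
# request 3 -> 10.0.0.1, ... one request per server, in turn.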

Question

Which description fits the Round-robin DNS load balancing method?

Correct

Nice job! The Round-robin approach serves clients one at a time, starting with the first, and making rounds until it reaches the beginning again.


To route each client to the server machine geographically closest to it, we use
        GeoIP
and         GeoDNS
GeoDNS is configured so that clients are redirected to the closest geographical load balancer.

Module 1
Change Management




    
This is called change management: making changes so that the services keep running without any downtime.

Before making any change to production we must TEST the changes in a testing environment.
We have
        Unit tests
        Integration tests
We also perform continuous integration (see the small unit-test sketch below).
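For instance, a unit test exercises one function in isolation. A minimal, illustrative pytest-style sketch (the function under test is invented purely for illustration):

# Sketch: tiny unit tests, runnable with pytest.
import pytest

def add_node_capacity(current_nodes: int, extra: int) -> int:
    """Toy function: new node count after scaling horizontally."""
    if extra < 0:
        raise ValueError("extra must be non-negative")
    return current_nodes + extra

def test_add_node_capacity():
    assert add_node_capacity(3, 2) == 5

def test_add_node_capacity_rejects_negative():
    with pytest.raises(ValueError):
        add_node_capacity(3, -1)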



https://www.simplilearn.com/tutorials/devops-tutorial/continuous-delivery-and-continuous-deployment 

CONTINUOUS DEPLOYMENT

There are two variants of CD pipelines (figures omitted from these notes):

1. CD servers ---> make changes ---> test on test servers ---> manually push to prod servers

2. CD servers ---> make changes ---> test on test servers ---> PRE-PROD SERVERS ---> manually push to prod servers

Question

Automation tools are used to manage the software development phase's build and test functions. Which of the following is the set of development practices focusing on these aspects?

Correct

Great work! Continuous Integration means the software is built, uploaded, and tested constantly.



A = production config
B = testing config
First you send only 1% of traffic to B, with 99% going to A.
Then you gradually shift traffic from A to B.
You must have a MONITORING system which checks and compares the performance of A and B.
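A toy Python sketch of that weighted split (the 1%/99% shares and the config names A/B are the ones from these notes):

# Sketch: route ~1% of requests to config B, ~99% to config A.
import random

def pick_config(b_share: float = 0.01) -> str:
    """Return 'B' with probability b_share, else 'A'."""
    return "B" if random.random() < b_share else "A"

counts = {"A": 0, "B": 0}
for _ in range(10_000):
    counts[pick_config()] += 1
print(counts)   # roughly {'A': 9900, 'B': 100}; ramp b_share up gradually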

Module 1
Understanding Limitations

We always have some
            LIMITS
            QUOTAS
E.g.
For blobs we may have a limit of only 1000 writes per second.

Limits and quotas keep things under control if the system unintentionally uses far more resources than our budget allows.
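A minimal token-bucket sketch in Python showing how a "1000 writes per second" style limit can be enforced (the numbers are illustrative):

# Sketch: a token-bucket rate limiter (e.g., ~1000 operations per second).
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate           # tokens added per second
        self.capacity = capacity   # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1000, capacity=1000)
allowed = sum(bucket.allow() for _ in range(1500))
print(f"{allowed} of 1500 immediate requests allowed")   # ~1000 pass, rest limited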

Question

What is the purpose of a rate limit?

Correct

Right on! Cloud providers will often enforce rate limits on resource-hungry service calls to prevent one service from overloading the entire system



More About Cloud Providers

Here are some links to some common Quotas you’ll find in various cloud providers


Congratulations! You passed!

Grade received 100%
To pass 80% or higher
Question 1

What is latency in terms of Cloud storage?

1 / 1 point
Correct

Nice job! Latency is the amount of time it takes to complete a read or write operation.

Question 2

Which of the following statements about sticky sessions are true? (Select all that apply.)

1 / 1 point
Correct

Great work! Sticky sessions route requests for a particular session to the same machine that first served the request for that session.

Correct

Woohoo! Because sticky sessions can cause uneven load distribution as well as migration problems, they should only be used when absolutely necessary.

Correct

Right on! Sticky sessions can cause unexpected results during migration, updating, or upgrading, so it's best to use them only when absolutely necessary.

Question 3

If you run into limitations such as rate limits or utilization limits, you should contact the Cloud provider and ask for a _____.

1 / 1 point
Correct

Great work! Our cloud provider can increase our limits that we have set, though it will cost more money.

Question 4

What is the term referring to everything needed to run a service?

1 / 1 point
Correct

Way to go! Everything used to run the service is referred to as the environment. This includes the machines and networks used for running the service, the deployed code, the configuration management, the application configurations, and the customer data.

Question 5

What is the term referring to a network of hosts spread in different geographical locations, allowing ISPs to be as close as possible to content?

1 / 1 point
Correct

Excellent! CDNs allow an ISP to select the closest server for the content it is requesting.


Module 1 review


1. We started by learning about clouds: computer services that run in a data center or on remote servers that we access over the Internet.

2. Next we learned how to deploy a virtual machine (VM) and keep it updated via Puppet.

3. It's easy to scale in the cloud as per need.

4. VM templates: both the web interface and the command line tool can be used to create VMs in the cloud, modify their configuration, and control other things, using tools which are very effective at a small or medium scale.

5. At a large scale, you'll need to automate cloud deployments even further using orchestration.
Orchestration lets us combine the power of infrastructure as code with the flexibility of cloud resources.

6. Terraform allows us to define our cloud infrastructure as code.

7. We looked into the different types of storage available from cloud providers.

8. We covered the different methods load balancers use to distribute the workload, and how they can monitor server health to avoid sending requests to unhealthy servers.

9. We saw how change management allows us to make changes in a safe and controlled way:

        how continuous integration (CI) can build and test code every time there is a change, and

    how continuous deployment (CD) can automatically deploy new code according to a specified set of rules.

During CI and CD we learned about the
            development environment,
            test environment,
            pre-production environment,
            and lastly the production environment.

Glossary terms from course 5, module 1

Terms and definitions from Course 5, Module 1


A/B testing: A way to compare two versions of something to find out which version performs better


Automatic scaling: This service uses metrics to automatically increase or decrease the capacity of the system


Autoscaling: Allows the service to increase or reduce capacity as needed, while the service owner only pays for the cost of the machines that are in use at any given time


Capacity: How much the service can deliver


Cold data: Accessed infrequently and stored in cold storage


Containers: Applications that are packaged together with their configuration and dependencies


Content Delivery Networks (CDN): A network of physical hosts that are geographically located as close to the end users as possible


Disk image: A snapshot of a virtual machine’s disk at a given point in time


Ephemeral storage: Storage used for instances that are temporary and only need to keep local data while they’re running


Hot data: Accessed frequently and stored in hot storage


Hybrid cloud: A mixture of both public and private clouds


Input/Output Operations Per Second (IOPS): Measures how many reads or writes you can do in one second, no matter how much data you're accessing


Infrastructure as a Service (or IaaS): When a Cloud provider supplies only the bare-bones computing experience


Load balancer: Ensures that each node receives a balanced number of requests


Manual scaling: Changes are controlled by humans instead of software


Multi-cloud: A mixture of public and/or private clouds across vendors


Object storage: Storage where objects are placed and retrieved into a storage bucket


Orchestration: The automated configuration and coordination of complex IT systems and services


Persistent storage: Storage used for instances that are long lived and need to keep data across reboots and upgrades


Platform as a Service (or PaaS): When a Cloud provider offers a preconfigured platform to the customer


Private cloud: When your company owns the services and the rest of your infrastructure


Public cloud: The cloud services provided to you by a third party


Rate limits: Prevent one service from overloading the whole system


Reference images: Store the contents of a machine in a reusable format


Software as a Service (or SaaS): When a Cloud provider delivers an entire application or program to the customer


Sticky sessions: All requests from the same client always go to the same backend server


Templating: The process of capturing all of the system configuration to let us create VMs in a repeatable way


Throughput: The amount of data that you can read and write in a given amount of time


Utilization limits: Cap the total amount of a certain resource that you can provision


Graded assessment for module 1

You finished this assignment

Grade received 93.18%
Latest Submission Grade 93.18%

Question 1

Say you work for a company that wants the IT department to focus on deploying and managing applications and spend as little time as possible managing cloud services. Which service might be the right choice?

1 / 1 point
Correct

Correct.

Question 2

Which word best describes the direction you are scaling when increasing the capacity of a specific service by making the nodes bigger? Select all that apply.

1 / 1 point
Correct

Correct.

Question 3

Your company is moving its servers from one office to another. At the same time, the organization will be migrating some of its computing needs to a cloud service. In this “lift and shift” strategy, which is the “lift”?

1 / 1 point
Correct

Correct.

Question 4

If any part of your workload is running on servers owned by your company, what type of cloud might this be part of? Select all that apply.

1 / 1 point
Correct

Correct.

Correct

Correct.

Correct

Correct.

Question 5

What are the locations from where you can create a VM to run in the cloud? Select all that apply.

0.5 / 1 point
Correct

Correct.

Correct

Correct.

This should not be selected

Not quite. Please refer to the "Spinning up VMs in the Cloud" video for more information.

This should not be selected

Not quite. Please refer to the "Spinning up VMs in the Cloud" video for more information.

Question 6

You’ve set up a VM, modified its configuration settings, and made sure that it's working correctly. Now you want to reproduce this exactly on multiple other machines. How might a template help?

1 / 1 point
Correct

Correct.

Question 7

Why are there usually multiple entry points for a single website? Select all that apply.

1 / 1 point
Correct

Correct.

Correct

Correct.

Question 8

What is the best method for a batch action like creating ten VMs at once?

1 / 1 point
Correct

Correct.

Question 9

What type of storage refers to storing files with unique names in a storage bucket? Select all that apply.

0.75 / 1 point
This should not be selected

Not quite. Please refer to the "Storing Data in the Cloud" video for more information.

Correct

Correct.

Correct

Correct.

Question 10

What is the advantage of round robin DNS?

1 / 1 point
Correct

Correct. But it has some limitations.

Question 11

You are planning some improvements in your cloud services, but you want to make the changes in a controlled way. This approach is commonly called “change management”. In change management, how does a continuous integration system, or CI, help to catch problems before they're merged into the main branch?

1 / 1 point
Correct

Correct.

Module 2
What are containers?










 Module 2

Set up Docker

A developer writes some code that works perfectly on their local machine but does not work on others’ machines. Docker helps solve this common—and annoying—problem by providing a consistent runtime across different environments.

Docker is an easy way to package and run applications in containers.

  A container is a lightweight, portable, and isolated environment that facilitates the testing and deployment of new software.

Within the container, the application is isolated from all other processes on the host machine.



               

           


Module 2

Docker web apps





Module 2

Docker images

Docker images are the building blocks of Docker containers.

A Docker image contains the application code, data files, configuration files, libraries, and other dependencies needed to run an application.
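As a hedged sketch using the real Docker SDK for Python (`pip install docker`; assumes Docker is running locally and a Dockerfile exists in the current directory; the "myapp:latest" tag and port mapping are illustrative), building an image and starting a container from it might look like:

# Sketch: build an image and run a container with the Docker SDK for Python.
import docker

client = docker.from_env()

# Build an image from ./Dockerfile and tag it
image, build_logs = client.images.build(path=".", tag="myapp:latest")

# Run a container from the image, detached, mapping container port 80 -> host 8080
container = client.containers.run(
    "myapp:latest", detach=True, ports={"80/tcp": 8080}
)
print(container.status, container.short_id)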



Module 2

Container and artifact registry



The three container registries above (diagram omitted from these notes) provide features like:

            1. Authentication

            2. Access control

            3. Image geo-replication

            4. Registries for artifact storage

Repositories --> registries --> containers or artifacts.






 
Module 2
Docker and GCP


Docker and Google Cloud Platform (GCP) are two types of technologies that complement each other, allowing programmers to build, deploy, and manage containerized applications in the cloud.

Google Cloud Platform

GCP is a composition of all the cloud services provided by Google. These include:

  • Virtual machines  
  • Containers 
  • Computing 
  • Hosting 
  • Storage 
  • Databases 
  • Tools 
  • Identity management

How to run Docker containers in GCP


You can run containers two ways in the cloud using GCP.

            The first way is to start a virtual machine with Docker installed on it. Use the docker run command to create a container and start it. This is the same process for running Docker on any other host machine.


            The second way is to use a service called Cloud Run. This serverless platform is managed by Google and allows you to launch containers without worrying about managing the underlying infrastructure. Cloud Run is simple and automated, and it’s designed to allow programmers to be more productive and move quickly.

An advantage of Cloud Run is that it allows you to deploy code written in any programming language if you can put the code into a container.
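The same deployment can also be done from code. A hedged sketch that drives the real gcloud CLI from Python (the service name "hello" and the region match the run recorded later in these notes; the image URL is Google's public sample container):

# Sketch: deploy a container image to Cloud Run via the gcloud CLI.
import subprocess

subprocess.run(
    ["gcloud", "run", "deploy", "hello",
     "--image", "us-docker.pkg.dev/cloudrun/container/hello",
     "--region", "asia-south1",
     "--allow-unauthenticated"],
    check=True,
)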


Use Cloud Run to deploy containers in GCP


        Before you begin, sign in to your Google account, or if you do not have one, create an account.


  1. Open Cloud Run.

  2. Click Create service to display the form.

  3. In the form, select Deploy one revision from an existing container image.

  4. Below the Container image URL text box, select Test with a sample container.

  5. From the Region drop-down menu, select the region in which you want the service located.

  6. Below Authentication, select Allow unauthenticated invocations.

  7. Click Create to deploy the sample container image to Cloud Run and wait for the deployment to finish.

  8. Select the displayed URL link to run the container.

My created Link https://hello-tgfaln26ga-el.a.run.app


Pro tip: Cloud Run helps keep costs down by only charging you for central processing unit (CPU) time while the container is running. It’s unlike running Docker on a virtual machine, for which you must keep the virtual machine on at all times—running up your bill.


Key takeaways

GCP supports Docker containers and provides services to support containerized applications. Integrating GCP and Docker allows developers and programmers to build, deploy, and run containers easily while being able to focus on the application logic.


When running https://hello-tgfaln26ga-el.a.run.app


It's running!

Congratulations, you successfully deployed a container image to Cloud Run

This created the revision hello-00001-6fl of the Cloud Run service hello in asia-south1 in the GCP project first-cloud-step-407010.


You can deploy any container to Cloud Run that listens for HTTP requests on the port defined by the PORT environment variable. Cloud Run will scale automatically based on requests and you never have to worry about infrastructure.

What's next?

Follow the Quickstart tutorial to build a “Hello World” application in your favorite language into a container image, and then deploy it to Cloud Run.



Module 2 Build artifact testing

you will learn more about different types of build artifacts,

how to test a Docker container,

and how to troubleshoot any issues along the way.

Build artifacts  

Build artifacts are items that you create during the build process. Your main artifact is your Docker container itself.

All other items that you generate during the Docker image build process are also considered build artifacts. Some examples include:

  1. Libraries
  2. Documentation
  3. Static files
  4. Configuration files
  5. Scripts

Build artifacts in Docker 

Build artifacts in Docker play a crucial role in the software development and deployment lifecycle.

        No matter what you create with code, you need to test it. You must test your code before deployment to ensure that you catch and correct all issues, defects, and errors.

        This is true whether your code is built as a Docker container or built the more “classic” way.

        The process to execute the testing varies based on the application and the programming language it’s written in.

Pro tip: It’s important to check that Docker built the container itself correctly if you are testing your code with a containerized application.

There are several types of software testing that you can execute with Docker containers:

Unit tests:

            These are small, granular tests written by the developer to test individual functions in the code.

            In Docker, unit tests are run directly on your codebase before the Docker image is built, ensuring the code is working as expected before being packaged.

Integration tests:

            These refer to testing an application or microservice in conjunction with the other services on which it relies.

            In a Dockerized environment, integration tests are run after the docker image is built and the container is running, testing how different components operate together inside the Docker container. 

End-to-end (E2E) tests:

            This type of testing simulates the behavior of a real user (e.g., by opening the browser and navigating through several pages).

            E2E tests are run against the fully deployed docker container, checking that the entire application stack with its various components and services functions correctly as a whole.

Performance tests:

            This type of testing identifies bottlenecks.

            Performance tests are run against the fully deployed Docker container and test various stresses and loads to ensure the application performs at expectations. 



Practice quiz: Docker

Question 1

You have created your first application and would like to test it before showing it to stakeholders. A colleague suggests using Docker to execute this task. What is Docker an example of?

1 / 1 point
Correct

Correct. Some would consider Docker the most popular containerized technology to test new software on your machine.

Question 2

You have been talking to a colleague about how beneficial Docker has been to you for packaging and running applications in containers over the past several weeks. Your colleague has finally decided to install Docker on their local machine and reaches out to you for help with the installation process. Which method can your colleague execute to get Docker up and running on their machine?

1 / 1 point
Correct

Correct. Your colleague can install Docker, based on their operating system, from the Docker website.

Question 3

A colleague is discussing the combination of application code, data files, configuration, and libraries that are needed to run an application. What Docker term are they referring to?

1 / 1 point
Correct

Correct. An image contains all of the dependencies needed to run an application.

Question 4

A new programmer with your company has run into the issue of how to test multiple independent components together, which components must work properly in order for the application to run smoothly. What advice would you give the programmer to make their development process more efficient?

1 / 1 point
Correct

Correct. Using multiple containers to test the entirety of the application can be beneficial because the microservices are independent from one another.

Question 5

You share a new idea for an application with your team to get their feedback and any advice to make the application better. Some members of your team provide feedback on the build artifacts. Which of the following are examples of build artifacts? Select all that apply.

1 / 1 point
Correct

Correct. Build artifacts are items created during the build process, including containers, documentation, libraries, and scripts.

Correct

Correct. Build artifacts are items created during the build process, including containers, documentation, libraries, and scripts.

Correct

Correct. Build artifacts are items created during the build process, including containers, documentation, libraries, and scripts.




    

==================================

Module 2

Kubernetes on GCP Google Cloud Platform

What is the purpose of using Kubernetes?

Kubernetes can help organizations better manage their workloads and reduce risks. Kubernetes is able to automate container management operations and optimize the use of IT resources. It can even restart orphaned containers, shut down the ones that are not being used, and recreate them.



The containers in a Pod use the same IP address, namespace, and resources, so they can communicate with each other.





Relationship of Docker, containers, and Kubernetes:


            Imagine Docker is the shipping container. It lets you package each application and its dependencies in a separate crate (container). In this analogy, Kubernetes is the port, orchestrating (automating many processes) how the packages and containers are handled, and directing them to the right place.




Google Kubernetes Engine (GKE)

Google Compute Engine  GCE



Module 2
Kubernetes principles (Readings)

Kubernetes provides developers with a framework to easily run distributed systems. Kubernetes also provides developers choice and flexibility when building platforms.

Kubernetes principles
            
            Kubernetes—a cloud-native application—follows principles to ensure the containerized application runs properly, relying only on the Linux kernel. Public APIs are used.

Declarative configuration

            In this approach, developers specify the desired state only.

They do not need to specify how to reach that state; Kubernetes does it automatically.

The control plane

                     The control plane determines how to direct nodes in the cluster to achieve the desired state.

Components of the control plane include:

etcd: used as the Kubernetes backing store for all cluster data; a distributed database.

The scheduler: assigns Pods to run on particular nodes in the cluster.

The controller manager: hosts and monitors multiple Kubernetes controllers.

The cloud controller manager: embeds cloud-specific control logic. It acts as the interface between Kubernetes and a specific cloud provider, managing the cloud’s resources.

Key takeaways 

        Kubernetes core principles and key components support developers with starting, stopping, storing, building, and managing containers. 

 Module 2

Installing Kubernetes  

 Kubernetes is not something you download.

Enable Kubernetes


After Docker is installed on your machine, follow the instructions below to run Kubernetes in Docker Desktop.

 

From the Docker Dashboard, select Settings.
Select Kubernetes from the left sidebar.
Select the checkbox next to Enable Kubernetes.
Select Apply & Restart to save the settings.
Select Install to complete the installation process.

The Kubernetes server runs as containers and installs the /usr/local/bin/kubectl command on your machine.

 

Key takeaways


        Kubernetes is not a replacement for Docker, but rather a tool that developers use while working in Docker. It can run and manage Docker containers, allowing developers to deploy, scale, and manage containerized applications across clusters.

Module 2

Pods


What is a CONTAINER in Kubernetes?

            In Kubernetes, a container is a lightweight, standalone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and system tools.

            Containers are isolated from each other and bundle their own software, libraries, and configuration files, but they share the operating system kernel with other containers.

            They are designed to be easily portable across different environments, which makes them ideal for consistent deployment across different platforms.


What are PODs in Kubernetes?

            Containers are encapsulated within Pods, which are the fundamental deployment units in a Kubernetes cluster. 

            A Pod can contain one or more containers that need to run together on the same host and share the same network and storage resources, allowing them to communicate with each other using localhost.

            When a deployment requires multiple containers to work together on the same node, a Pod is created to ensure they are co-located and can communicate efficiently. 

            Pods serve together as a logical host that encapsulates one or more tightly coupled containers within a shared network and storage context. 

This provides a way to group containers that need to work closely together, allowing them to share the same resources and interact with each other as if they were running on the same physical or virtual machine.

Pods as logical host

The key points to understand about a Pod as a logical host are:


  • Tightly coupled containers:

  • When multiple containers within a Pod are considered tightly coupled, it means they have a strong interdependency and need to communicate with each other over localhost. This allows them to exchange data and information efficiently without the need for complex networking configurations. 


  • Shared network namespace:

  • Containers within the same Pod share the same network namespace. This implies that they have the same IP address and port space, making it easier for them to communicate using standard inter-process communication mechanisms. 


  • Shared storage context:

  • Pods also share the same storage context, which means they can access the same volumes or storage resources. This facilitates data sharing among the containers within the Pod, further enhancing their collaboration.


  • Co-location and co-scheduling:

  • Kubernetes ensures that all containers within a Pod are scheduled and co-located on the same node. This co-scheduling ensures that the containers can efficiently communicate with each other within the same network and storage context.


  • Ephemeral(for short time) nature:

  • Like individual containers, Pods are considered to be ephemeral and can be easily created, terminated, or replaced based on scaling requirements or resource constraints. However, all containers within the Pod are treated as a single unit in terms of scheduling and lifecycle management.


Pods in action

         use a Kubernetes Pod to encapsulate both the

                1. web server container and the

                2. log processor container.

        Since both containers exist within the same Pod, they share the same network namespace (they can communicate via localhost) and they can share the same storage volumes. This allows the web server to generate logs and the log processor to access and process these logs efficiently.

Both containers run simultaneously and are stopped together.

Even if one fails, the other will be automatically stopped.

Advantages of Pods

        1. Facilitating co-location:

        2. Enabling data sharing:

3. Simplifying inter-container communication:

Single container vs. multiple containers in a Pod:

Advantages of multiple containers in a Pod:

  • Sidecar pattern:

  • In this pattern, the main container represents the primary application, while additional sidecar containers provide supporting features like 
     
    logging,  
    monitoring, or  
    authentication.  

     The sidecar containers enhance and extend the capabilities of the main application without modifying its code.

Proxy pattern:

        Multi-container Pods can use a proxy container that acts as an intermediary between the main application container and the external world. The proxy container handles tasks like

            load balancing,

            caching, or

            SSL termination,

offloading these responsibilities from the main application container

  • Adapter pattern:

  • performs data format conversions or protocol translations.

    Shared data and dependencies: 

Key terms


    Here are some key terms to be familiar with as you’re working with Kubernetes.


  • Pod lifecycle:

  • Pending --> Running --> Succeeded or Failed

  • 1. starting from "Pending" when they are being scheduled,  
    2. "Running" when all containers are up and running,  
    3. "Succeeded" when all containers successfully terminate,  
    4. "Failed" if any container within the Pod fails to run.  
    5. Pods can also be in a "ContainerCreating" state if one or more containers are being created.

Pod templates:

        define the specification for creating new Pods.

Pod affinity and anti-affinity:

         rules define the scheduling preferences and restrictions for Pods.

  • Pod autoscaling:

  • Kubernetes provides Horizontal Pod Autoscaler (HPA) functionality that automatically scales the number of replicas (Pods) based on resource usage or custom metrics.

  • Pod security policies:

  • used to control the security-related aspects of Pods, such as their access to certain host resources, usage of privileged containers, and more.

  • Init container:

  • run and complete before the main application containers start. They are useful for performing initialization tasks, such as database schema setup or preloading data.

  • Pod eviction (removal) and disruption (disturbance):

Taints and tolerations:

Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes.

Pod DNS:

Pods are assigned a unique hostname and IP address.

Pod annotations and labels:

to provide metadata or facilitate Pod selection for various purposes like monitoring, logging, or routing.

Pods and Python

To manage Kubernetes pods using Python, you can use the kubernetes library. Here is some example code of how to create, read, update, and delete a Pod using Python.

from kubernetes import client, config

# Load the Kubernetes configuration from the default location
config.load_kube_config()

# Alternatively, you can load configuration from a specific file
# config.load_kube_config(config_file="path/to/config")

# Initialize the Kubernetes client
v1 = client.CoreV1Api()

# Define the Pod details
pod_name = "example-pod"
container_name = "example-container"
image_name = "nginx:latest"
port = 80

# Create a Pod
def create_pod(namespace, name, container_name, image, port):
    container = client.V1Container(
        name=container_name,
        image=image,
        ports=[client.V1ContainerPort(container_port=port)],
    )

    pod_spec = client.V1PodSpec(containers=[container])

    pod = client.V1Pod(
        api_version="v1",
        kind="Pod",
        metadata=client.V1ObjectMeta(name=name),
        spec=pod_spec,
    )

    try:
        response = v1.create_namespaced_pod(namespace, pod)
        print("Pod created successfully.")
        return response
    except Exception as e:
        print("Error creating Pod:", e)


# Read a Pod
def get_pod(namespace, name):
    try:
        response = v1.read_namespaced_pod(name, namespace)
        print("Pod details:", response)
    except Exception as e:
        print("Error getting Pod:", e)


# Update a Pod (e.g., change the container image)
def update_pod(namespace, name, image):
    try:
        response = v1.read_namespaced_pod(name, namespace)
        response.spec.containers[0].image = image

        updated_pod = v1.replace_namespaced_pod(name, namespace, response)
        print("Pod updated successfully.")
        return updated_pod
    except Exception as e:
        print("Error updating Pod:", e)


# Delete a Pod
def delete_pod(namespace, name):
    try:
        response = v1.delete_namespaced_pod(name, namespace)
        print("Pod deleted successfully.")
    except Exception as e:
        print("Error deleting Pod:", e)


if __name__ == "__main__":
    namespace = "default"

    # Create a Pod
    create_pod(namespace, pod_name, container_name, image_name, port)

    # Read a Pod
    get_pod(namespace, pod_name)

    # Update a Pod
    new_image_name = "nginx:1.19"
    update_pod(namespace, pod_name, new_image_name)

    # Read the updated Pod
    get_pod(namespace, pod_name)

    # Delete the Pod
    delete_pod(namespace, pod_name)
===========The End of Python Program==============

        

    Key Takeaways

    • Pods are the fundamental deployment units in a Kubernetes cluster. 
    • A Pod can contain one or more containers that need to run together on the same host and share the same network and storage resources, allowing them to communicate with each other using localhost.
    • Pods serve as an abstraction layer, allowing Kubernetes to schedule and orchestrate containers effectively.
    • Use a single-container Pod when you have a simple application that does not require additional containers, or when you want to isolate different applications or services for easier management and scaling. 
    • Use multi-container Pods when you have closely related components that need to work together, such as those following the sidecar pattern. 

Pods

https://kubernetes.io/docs/concepts/workloads/pods/

  • Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
  • Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers.

 

Resources for more information


Kubernetes documentation: Pods

Official Python client library for Kubernetes


Module 2 Services (Readings)


The challenge

        Imagine you're developing a Python-based web application deployed in a Kubernetes cluster.

This application is composed of multiple components, such as

        1. a web server,

        2. a caching layer, and

        3. a database,

each running in separate Pods.

Then there's the issue of service discovery: keeping an updated list of all the active Pods and their IP addresses for a service is a difficult and dynamic challenge, since each Pod gets its own IP address and Pods come and go.


This topic seems very hard, so for the time being I skipped it and will come back to it later.


Module 2

Deployment



A Kubernetes Deployment also manages a ReplicaSet

Deployments support rolling updates and rollbacks.

Kubernetes Deployment consists of several key components:

Desired Pod template:

           It includes details such as container images, container ports, environment variables, labels, and other configurations.

Replicas:

        number of replicas is maintained, automatically scaling up or down as needed.

Update strategy:

      performs updates by gradually replacing Pods, keeping the application available throughout the process  

Powerful features

Declarative updates:
            If there are any differences between the current and desired state, Kubernetes automatically reconciles them. 


Scaling:
    
        scale up during peak traffic times and scale down during off-peak hours.

History and revision control:
        
      This can be useful for debugging, auditing, and rolling back to specific versions.
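Since these notes already use the official Kubernetes Python client for Pods, here is a matching sketch that creates a Deployment with three replicas using the same library (the name, labels, and image are illustrative, not from the course):

# Sketch: create a Deployment with the official Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

container = client.V1Container(
    name="web",
    image="nginx:1.19",
    ports=[client.V1ContainerPort(container_port=80)],
)
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "web"}),
    spec=client.V1PodSpec(containers=[container]),
)
spec = client.V1DeploymentSpec(
    replicas=3,   # the Deployment keeps three Pods running at all times
    selector=client.V1LabelSelector(match_labels={"app": "web"}),
    template=template,
)
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web-deployment"),
    spec=spec,
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
print("Deployment created.")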


Resources for more information


Kubernetes Deployments: Comprehensive documentation on Kubernetes Deployments, their use cases, and operations.

Managing Resources: A guide on managing Deployments in Kubernetes including rolling updates, scaling and rollback.

Kubernetes ReplicaSets: Detailed explanation on ReplicaSets in Kubernetes, their role in maintaining the desired number of Pods.

Declarative Application Management in Kubernetes: Understanding declarative approach in Kubernetes with configuration files.

Configure Liveness, Readiness and Startup Probes: An in-depth guide on liveness and readiness probes, which enhance the health management of Pods.

Rolling Back a Deployment: Documentation on how to perform rollbacks on a Deployment.


Question 1: What are some of the advantages of Kubernetes? Select all that apply.
    1. Kubernetes has become a de facto industry standard. (Correct. And Kubernetes has a lot of industry “buzz”.)
    2. Kubernetes adds self-healing features (like fault tolerance and load balancing) across multiple servers. (Correct. This is true even in different regions.)
    (Not: "Kubernetes debugging and troubleshooting is easy.")

Question 3: In Kubernetes, what is a Pod? Select all that apply.
    Two answers; both describe a Pod as containing a single container or multiple containers.

Question 4: What is the purpose of a Kubernetes Service?
    Note: "To store and manage configuration data for applications running in a Kubernetes cluster" is NOT the purpose of a Service (this choice was marked incorrect in the practice quiz below).

Question 5

Correct. The primary purpose of a Kubernetes Deployment is to provide declarative updates and automate the management of replica sets of Pods, ensuring the desired state is consistently maintained.

Practice quiz: Kubernetes

Congratulations! You passed!

Grade received 80%
To pass 80% or higher
Question 1

What are some of the advantages of Kubernetes? Select all that apply.

1 / 1 point
Correct

Correct. And Kubernetes has a lot of industry “buzz”.

Correct

Correct. This is true even in different regions.

Question 2

What is the easiest tool for local developers using Windows or macOS to learn Kubernetes?

1 / 1 point
Correct

That’s right! Docker Desktop is easiest for non-production-grade environments, with built-in support for Kubernetes.

Question 3

In Kubernetes, what is a Pod? Select all that apply.

1 / 1 point
Correct

Correct. This accurately describes a Pod. These containers share the same resources and network stack.

Correct

Correct. This highlights the role of a Pod as a Kubernetes resource used to define the desired state of containers, and is managed by higher-level controllers like ReplicaSets or Deployments.

Question 4

What is the purpose of a Kubernetes Service?

0 / 1 point
Incorrect

Not quite. Kubernetes uses other resources to store and manage configuration data for applications, but this is not the purpose of a Kubernetes Service.




Module 2

Kubernetes on GCP   


Every programmer has to answer whether

Cloud

OR

NOT cloud

before using Kubernetes.


Kubernetes is a powerful tool to

Manage

Organise

Share

the containers.


Kubernetes allows programmers

1. To Scale

2. To Update

3. To Push Updates

4. To Duplicate

5. To Roll Back

6. Version Control

7. More.....
Using GKE (Google Kubernetes Engine) allows you to copy/paste Docker files.

GKE can act as a replacement for a DevOps solution.

Python programmers can get extra benefits from GKE.


Kubernetes is a tool for optimizing the management of deploying Docker

1. containers and

2. projects.

Module 2 Create a Kubernetes cluster on GCP


        A cluster is a group of machines grouped to work together, 
but not necessarily all doing the same tasks.

        In a Kubernetes cluster, virtual machines (VMs) are coordinated 
to execute all of the functions needed to process requests, such as 
            1. serving a web application, 
            2. running a database, or 
            3. solving big-data problems 


 Each cluster consists of at least one cluster control plane machine, 
a server that manages multiple nodes
            You submit all of your work to the control plane, and the
control plane distributes the work to the node or nodes
where it will run.
        
        These worker nodes are virtual machine (VM) instances
running the Kubernetes processes necessary to make them part of
the cluster
        
    They can be in a single zone or spread out all over the world.

one node might be used for
        1. data processing and
                another for
        2. hosting a web server
        


   
Creating a GKE Cluster using Google Cloud Console

1.  Log in to Google Cloud Console: Go to https://console.cloud.google.com
2.  Open Google Kubernetes Engine (GKE). In the left-hand navigation menu,
        select Kubernetes Engine, and then Clusters.
3.  Click Create Cluster to create a new Kubernetes cluster. By default, this will take you to
        Autopilot cluster. For these instructions, we are setting up a standard cluster, so click Switch to Standard Cluster.
4.  Configure cluster basics. Enter a unique cluster name for your GKE cluster.
5.  Choose a Location type: Zonal or Regional (Regional is for multi-zone deployments).
6.  Configure the node pool. In the Node pool section, specify the desired node count for
    the initial number of nodes in the default node pool depending on the needs of your
    application on your Kubernetes cluster. For production clusters,
    the recommended minimum is usually three nodes. 
7.  Enable the cluster autoscaler.
8.  Choose the machine type for your nodes. There are four machine families.
9.  Click the Create button to start creating the GKE cluster.
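
The same setup can also be scripted. Below is a minimal sketch using the gcloud CLI; the cluster name, zone, and machine type are placeholder values, not from the course:

$ gcloud container clusters create example-cluster \
      --zone us-central1-a \
      --num-nodes 3 \
      --machine-type e2-standard-4 \
      --enable-autoscaling --min-nodes 1 --max-nodes 5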


Kubernetes Engine API

Google Enterprise API

Builds and manages container-based applications, powered by the open source Kubernetes technology.


Module 2 
Types of clusters


  What is a Kubernetes cluster?
A Kubernetes cluster comprises servers (nodes) that work together as a group.
Nodes are physical or virtual machines.
Nodes can easily be added or removed as the workload changes, seamlessly.
Nodes are interconnected and communicate with each other through the Kubernetes control plane, known as the brain of Kubernetes.

   A Kubernetes cluster consists of several components:
    1. An API server
    2. A controller manager
    3. A scheduler
    4. etcd: reliable data storage, accessible by the cluster of machines
   
       The standard unit for deployment to a Kubernetes cluster is a 
container.
    
Containerized applications are software applications packaged along with 
        their dependencies, 
        libraries, and 
        configurations 
into isolated containers  
  
Kubernetes also manages resources across the cluster by optimally 
        allocating CPU, 
        memory, and 
        storage 
based on application requirements  

Kubernetes also maintains the health of the cluster by employing features that automatically replace failed or unhealthy containers. 

        The cluster handles the actual execution and maintenance of the
applications to match the desired state; this is called the "declarative approach."
It simplifies management and reduces the need for manual
intervention once the initial parameters are set.


Different types of Kubernetes clusters 

        1. On-premises cluster

        2. Public cloud managed cluster

        3. Private cloud managed cluster

        4. Local development clusters

        5. Hybrid cluster

        6. Edge cluster

        7. High-performance computing (HPC) cluster

        8. Multi-cluster federation


    1. On-premises cluster

         is deployed within an organization's own data center 
or on a private infrastructure

    2. Public cloud managed cluster

             runs on a cloud provider's infrastructure; the most common

            kind of cloud-based deployment.

           Can be spread over zones or even regions.

    3. Private cloud managed cluster


    4. Local development clusters
        
            for individual developers.
             used for application development and testing on a 
             developer's local machine

   5. Hybrid cluster
           
            coordinates on-premises and cloud environments, allowing 
            workloads to run seamlessly across both locations.

    6. Edge cluster

            deployed at the edge of the network, closer to the locations of end-users
            or Internet of Things (IoT) devices     

    7. High-performance computing (HPC) cluster

             are tailored for running computationally intensive workloads, such as scientific simulations or large data processing tasks.


   8. Multi-cluster federation

            managing multiple Kubernetes clusters as a single logical cluster. 



Module 2
Deploying Docker containers on GCP          

        1.   Docker containers on Google Cloud Run

        2.   Docker containers on Google Kubernetes Engine (GKE)

        3.   Docker containers on Google Compute Engine (GCE)


1.   Docker containers on Google Cloud Run

       It can scale down to zero, which means it will not use any

 unnecessary resources if there are no requests.

        The platform is based on containers, so you can write your code in 

any language and then deploy it through Docker images. 


Cloud Run is commonly used for stateless applications.
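
As a rough illustration (the service name, image path, and region below are placeholders, not from the course), deploying a container image to Cloud Run is a single command:

$ gcloud run deploy example-service \
      --image gcr.io/example-project/example-app:1.0 \
      --region us-central1 \
      --allow-unauthenticated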

  2.   Docker containers on Google Kubernetes Engine (GKE)

features self-healing clusters which  automatically detect and 
replace unhealthy nodes or containers, maintaining the desired state 
and application availability

GKE is also convenient for integration
  If you find yourself needing to run a stateful application, GKE is a great option

   3.   Docker containers on Google Compute Engine (GCE)

Google Compute Engine is a virtual machine (VM) service that 
allows you to run your containerized applications

grants you more control over the underlying infrastructure of your 
VM instances than 1. GKE, and far more than 2. Cloud Run

when using Google Compute Engine, you are responsible for 
managing the VM instances and scaling.

The platform itself is not as simplified as GKE or Cloud Run, and you cannot use all programming languages


Deploying Docker containers on Google Compute Engine. 


Key takeaways

GCP offers several choices for deploying Docker containers, all of 
which allow you to 
integrate with other Google services

Cloud Run 
                is the simplest to use, offering a fully managed platform, 
but with little customization

GKE 
            is a powerful platform that offers more flexibility in configuration coupled with plenty of options for automation.


Google Compute Engine 
            lets you control your environments and applications while they run on Google’s infrastructure, but requires significantly more technical knowledge than Cloud Run or GKE.

            The best option for you will be based on your needs.  


Module 2 Kubernetes YAML files

        To protect against sudden surges of customer traffic,
Kubernetes can manage and scale your containerized application automatically…if you can tell it what to do! This is where Kubernetes YAML files come in. The configuration information for this automation is saved in YAML files, using particular parameters.

        Kubernetes YAML files define and configure Kubernetes 
resources.

 describing
 
        1. what resources should be created, 
        2. what images to use, 
        3. how many replicas of your service should be running, and
        4.  more.
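
A minimal sketch of such a file, applied straight from the shell (all names and the image are hypothetical examples, not values given in the course):

$ cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment             # what resource should be created
metadata:
  name: hello-web            # hypothetical name
spec:
  replicas: 3                # how many replicas of the service should run
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello-web
        image: gcr.io/example-project/hello-web:1.0   # what image to use
        ports:
        - containerPort: 8080
EOF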


Key takeaways


        Kubernetes YAML files play a crucial role in defining and managing Kubernetes resources, enabling Python developers to manage their applications' infrastructure in a consistent, version-controlled, and automated manner. By understanding the structure of these files and their key components, developers can leverage Kubernetes to its full potential and focus more on writing the application logic rather than managing infrastructure.

Resources for more information

Objects in Kubernetes

Get Started with Kubernetes (using Python)


Note to self (Saeed): the site above is essential further study.


Module 2

Scaling containers on GCP


Horizontal and vertical scaling

Multidimensional scaling

         is a combination of horizontal and vertical scaling;
also called diagonal scaling.

Elastic scaling

        automatically increases or decreases the number of servers
(horizontal), or the resources allocated to existing servers, activated when required
(vertical), or both
(multidimensional), based on the current demand.

For a tutorial on how to use the command line to scale 

containers, see the Autoscaling Deployments section 

in this tutorial on Scaling an application
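
For a taste of what elastic horizontal scaling looks like in practice, this sketch enables a Horizontal Pod Autoscaler for a hypothetical deployment (the name and thresholds are examples):

$ kubectl autoscale deployment hello-web --min=2 --max=10 --cpu-percent=75
$ kubectl get hpa    # inspect the resulting autoscaler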





Module 2

GCP networking and load balancing


GCP runs on a high-quality, high-speed, and highly reliable global
network that facilitates communication between various resources
such as

        1. virtual machines (VMs), 

        2. Kubernetes clusters, and 

        3. managed databases, 
regardless of their geographical location. 


Google network infrastructure consists of three main types of networks:

·         A data center network,

which connects all the machines in the network together.

·         A software-based private wide area network (WAN)

that connects all data centers together.

·         A software defined public WAN



Pods consist of one or more containers and run inside a Kubernetes cluster on GCP.

VPC network
 
A VPC is a global, private network.
To shield your network from the public internet, you must use a VPC.


Key components of a GCP VPC network

Each VPC is divided into subnets, which are regional resources. 
Each subnet has a specific IP range
        

Key takeaways

            A VPC provides the network infrastructure that allows for secure,
efficient communication between Pods within the cluster and between
 the cluster and other GCP resources.

            Leverage GCP's load balancing solutions to provide a 
consistent and responsive user experience, especially during periods of high traffic.

A GCP VPC network has several key components:

IP ranges:

Routes:

Peering:  to connect two VPC networks, potentially across different projects, as if they were one.

Firewall rules:

GCP offers several other networking services

that are very useful for Python developers using Kubernetes.


1.  Global and regional load balancing:

Global load balancing directs user traffic to the nearest instance of your application; regional load balancing distributes traffic within a specific region.


2.  HTTP(S), TCP, and UDP load balancing


3.  Managed Instance Groups:


They maintain a pool of instances that can automatically scale up or down based on demand, and distribute traffic across these instances.

        4. Integration with Kubernetes:

to distribute traffic across the Pods in your Kubernetes cluster.
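
As a quick sketch (hypothetical deployment name), exposing a deployment through a load balancer is one command; on GKE this provisions a GCP load balancer that spreads traffic across the Pods:

$ kubectl expose deployment hello-web --type=LoadBalancer --port=80 --target-port=8080
$ kubectl get service hello-web    # wait for an external IP to appear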

Resources for more information

Cloud Networking Overview

Subnets

Cloud Firewalls

Cloud Load Balancing

Google Kubernetes Engine


Module 2

Protect containers on GCP


Security challenges and considerations

 One key to addressing security challenges is the Zero Trust model, which involves assuming no trust by default and only granting permissions as necessary.

Using Virtual Private Clouds (VPCs) and properly firewalled subnets

means you can enforce guarantees at

        the network level,

        not just the software level.




Key takeaways

Containers pose some unique security challenges, including
 securing the container runtime, protecting the host system, and 
managing application dependencies.

Adopting a Zero Trust model can help mitigate these challenges. 
This approach involves assuming no trust by default and only 
granting permissions as necessary, reducing the potential attack 
surface.

Security on GCP is a shared responsibility.

GCP is responsible for:

1.   infrastructure security,

2.   operational security, and

3.   providing tools for software supply chain security.

Developers are responsible for:

1.   workload security,

2.   network security,

3.   identity and access management, and

4.   effective use of software supply chain security tools.


GCP provides several security features and best practices for protecting containers, including

1.   using minimal base images,

2.   regularly updating and patching containers,

3.   implementing vulnerability scanning,

4.   using runtime security tools like gVisor,

5.   implementing access controls with IAM,

6.   encrypting sensitive data with KMS,

7.   monitoring and logging activity with Cloud Audit Logs, and

8.   using Binary Authorization to ensure only trusted images 

    are deployed.




Module 2

Qwiklabs assessment: Work with containers on GCP



Docker → Kubernetes (Engine) → Pods → Container(s) → Application(s)

Docker containers can be directly used in Kubernetes, which allows 

them to be run in the Kubernetes Engine with ease.






Work with containers on GCP


Graded Quiz.


  • In order to complete this quiz, you must have completed the lab before it.

Question 1

What is the purpose of a Dockerfile when building Docker images in containers?

0 / 1 point
Question 2

Which of the following options correctly demonstrates the usage of the docker run command to start a Docker container with specific configurations?

1 / 1 point
Correct
Question 3

True or false: When running a docker logs command, you don’t have to write the entire container ID, as long as the initial characters uniquely identify the container.

1 / 1 point
Correct
Question 4

What is the purpose of the docker inspect command?

1 / 1 point
Correct
Question 5

What is a common technique for debugging issues in Docker containers when troubleshooting runtime problems?

1 / 1 point
Correct
Question 6

What is the purpose of the docker pull command in Docker containerization?

1 / 1 point
Correct
Question 7

Which authentication method is commonly used when pushing Docker images to Google Artifact Registry?

0 / 1 point
Question 8

Which Google Cloud Platform (GCP) service allows you to run Docker containers in a managed environment, handling tasks such as cluster management, scaling, and load balancing?

1 / 1 point
Correct
Question 9

What role does Google Container Registry (GCR) play in Docker container management on Google Cloud Platform?

1 / 1 point
Correct
Question 10

What is Google Kubernetes Engine (GKE) used for in the context of scaling containers on GCP?

1 / 1 point
Correct
Module 2

Module 2 review

GCP supports working with Docker containers, as it provides services to support
containerized applications.


Kubernetes is designed to support developers with
        starting, 
        stopping, 
        storing, 
        building, and 
        managing containers.

Docker →

    Kubernetes →

        Cluster →

            Pods →

                Containers →

                    Applications

         Kubernetes acts as your application’s manager and is 

responsible for keeping your application up and running smoothly.

How to deploy 
            1. Docker containers and 
            2. Kubernetes clusters 
to GCP.

Different types of clusters include:

1.   on-premises,

2.   public cloud managed,

3.   private cloud managed,

4.   local clusters, and

5.   hybrid clusters.

Kubernetes clusters are robust, flexible, and reliable units of multiple nodes that work together.

How Kubernetes YAML files define and 
configure Kubernetes resources.

 YAML files allow developers to manage their applications’ 
infrastructure in a consistent and automated manner


Module 2

Glossary terms from course 5, module 2

Terms and definitions from Course 5, Module 2

Artifact: A byproduct of the software development process that can be accessed and used, an item produced during programming

Container registry: A storage location for container images, organized for efficient access

Container repository: A container registry that manages container images

Docker: An open-source tool used to build, deploy, run, update, and

manage containers

Pod: A group of one or more containers that are scheduled and run together

Registry: A place where containers or artifacts are stored and

organized

Kubernetes: An open-source platform that gives programmers the power to

orchestrate, or manage, containers


Graded assessment for module 2

Graded Quiz


Question 1

A developer reached out to you to better understand Docker. The developer knows it is used to package and run applications but could not remember what the environment was called. In what environment is Docker run?

1 / 1 point
Correct
Question 2

You explain to another programmer that it is typical for a Docker image to be composed of up to a dozen layers. What is the purpose of having multiple layers?

0 / 1 point
Incorrect

Please refer to Docker images for more information.

Question 3

You are ready to run Docker containers on a virtual machine. Which command should you use to create and start a Docker container?

1 / 1 point
Correct
Question 4

You are developing a Python-based data processing application. One component of the application processes raw data, while another component analyzes the processed data. You want these components to easily exchange data. You also want to ensure that the processed data persists even if one of the containers restarts. Why are Pods in Kubernetes a good fit for this task? Select all that apply.

0.5 / 1 point
This should not be selected

Please review the reading Pods for more information.

Correct
Question 5

You are a DevOps engineer working for a rapidly growing e-commerce company. With the upcoming Black Friday sale, you anticipate a surge in traffic and want to ensure that your Python-based web application can handle the increased load without any downtime. Which Kubernetes resource would you primarily use to maintain the desired number of web server instances?

1 / 1 point
Correct
Question 6

You’re setting up a Kubernetes cluster and want to use autoscaling. What might you consider as you decide on the maximum number of nodes allowed for your application? Select all that apply.

* A: The needs of your application

1 / 1 point
Correct
Correct
Question 7

Kubernetes clusters use what is called the “declarative approach.” What does this mean?

1 / 1 point
Correct
Question 8

You’ve decided to run your docker containers on Google Cloud Platform, and you’re about to choose which service to use. What are some advantages of Google Kubernetes Engine (GKE)?

1 / 1 point
Correct
Correct
Correct
Question 9

Containers are not just for packaging. What else are they used for? Select all that apply.

1 / 1 point
Correct
Correct
Question 10

Rebecca is working on a Python application that needs to integrate with an external logging service. She wants to create an alias for this external service, allowing her to reference it using a Kubernetes DNS name. Which Kubernetes service types should Rebecca consider for this process?

0 / 1 point
Incorrect

Please review the reading Services for more information.


Grade received 75%
Latest Submission Grade 54.17%
Question 1

Another developer asked where the central repository is for downloading containers. What should you tell them?

1 / 1 point
Correct
Question 2

You and a colleague are collaborating on a project where you will use Docker images. You mentioned the benefits of Docker images and how they are composed of multiple files. Your colleague asked what Docker images do. What can you tell them?

0 / 1 point
Incorrect

Please refer to Docker images for more information.

Question 3

You informed another programmer that Cloud Run can help them launch containers. They asked what the benefit is of using Cloud Run. What should you tell them?

0 / 1 point
Incorrect

Please refer to Docker and GCP for more information.

Question 4

Maria is working on a distributed Python application where multiple components need to communicate with each other frequently. Why does she decide to use Pods in Kubernetes for inter-container communication?

0 / 1 point
Incorrect

Please review the reading Pods for more information.

Question 5

You are a DevOps engineer working for a rapidly growing e-commerce company. With the upcoming Black Friday sale, you anticipate a surge in traffic and want to ensure that your Python-based web application can handle the increased load without any downtime. Which Kubernetes resource would you primarily use to maintain the desired number of web server instances?

0 / 1 point
Incorrect

Please review the reading Deployment for more information.

Question 6

You’re setting up your first Kubernetes cluster. What is the absolute minimum number and type of virtual machines you must have to function as a cluster?

1 / 1 point
Correct
Question 7

You just got a new job in the IT department of a medical practice. Considering the fact that the organization’s data includes confidential patient records, what sorts of clusters might you choose to work with? Select all that apply.

0.6666666666666666 / 1 point
This should not be selected

Please review the reading Types of clusters for more information.

Correct
Question 8

You’ve decided to run your docker containers on Google Cloud Platform, and you’re about to choose which service to use. What are some advantages of Google Kubernetes Engine (GKE)?

0.75 / 1 point
Correct
Correct
You didn’t select all the correct answers
Question 9

Which of the following is the best phrase to complete this sentence? Containers allow users to _____________________.

1 / 1 point
Correct
Question 10

Carlos is deploying a Python application in a cloud environment. The application has a user interface that he expects will experience heavy traffic from users around the world. Additionally, he's integrating a third-party payment gateway. Which Kubernetes service types should Carlos consider for these components? Select all that apply.

1 / 1 point
Correct
Correct
=================================================
Module 3

Intro to Module 3: Automating with configuration management



Module 3

What is scale?
















        Adding more computers to an existing pool of servers may be very simple,
or it may be very hard; it depends on how the infrastructure is set up.
To figure out how scalable an existing system is, ask yourself the
following questions:

            1. Will adding more servers increase the capacity of the service?
            2. How are new servers prepared, installed, and configured?
            3. How quickly can you set up a new computer so it is ready to be
                used?
            4. Could you deploy 100 new servers today, with existing staff,
                without hiring anyone else?
            5. Would you need to hire more staff to get the work done faster?
            6. Would all the servers be configured in exactly the same way?


If your company is hiring new employees day by day, you must have a
scalable system that lets you add more and more computers to your
fleet without any extra effort.


Through automation, hundreds or even thousands of computers can be
handled and controlled by a small IT team.

Configuration management is the tooling used for scaling and automation.



Module 3 What is configuration management?

Manual configuration means unmanaged configuration.

In a managed configuration you define a set of rules for the services and
devices (also known as nodes), which you put in configuration files.

 

9 best configuration management tools:

  • Best for CI/CD: Bitbucket.
  • Best version control system: Git.
  • Best for application deployment: Ansible.
  • Best for infrastructure automation: Chef.
  • Best for large-scale configurations: Puppet.
  • Best for automating workflows: Bitbucket Pipelines.
  • Best for high-speed and scalability: SaltStack.
  • Best for container orchestration: Kubernetes.
  • CFEngine.

These integrate with cloud environments like:

Amazon EC2
Microsoft Azure
Google Cloud Platform (GCP)

There are also some platform-specific tools, like:
SCCM
Group Policy for Windows

From these tools we choose Puppet because it is the current industry
standard for configuration management.

We select the configuration management tool with the Infrastructure as Code
paradigm in mind.


Module 3

What is infrastructure as code?


In CM, we write rules and configuration information into files.

CM tools process these files. These files are managed and

controlled by a version control system (VCS) like Git. The VCS keeps track

of all changes to these files. This process of keeping, tracking, and

managing all of the system's configuration files through a VCS is

known as Infrastructure as Code (IaC).

The VCS tracks, for each (config, rules) file:

        who changed the file,

        when the file was changed, and

        why the file was changed.

In other words, the paradigm of storing all configuration files for managing nodes

in a VCS is called Infrastructure as Code (IaC).

 Module 3  IaC options

            1.   Puppet

            2.   Terraform

            3.   Ansible

            4.   Google Cloud Platform Offerings

Puppet is the industry standard: a robust and well-established solution. At the end we will compare the other solutions with Puppet.

 

Terraform

manages infrastructure resources across various cloud providers.

You define your desired infrastructure state, and Terraform

manages a wide spectrum of resources, from virtual machines to databases, across multiple cloud environments.

This makes it an excellent choice for orchestrating (automating) cloud resources and building scalable, modern applications.

Ansible

uses an agentless architecture.

This lightweight approach simplifies deployment and reduces the

overhead of maintaining agents on target nodes.

It uses a simple and human-readable YAML syntax to define playbooks.

It is not a catalog-based system.

Ansible excels in its simplicity, ease of adoption, and suitability for rapid

deployment scenarios.

Google Cloud Platform alternatives

leverage native tools,

using YAML or Python templates,

offering a declarative approach similar to Terraform.

They are well integrated with GCP services and resources, including components like

GKE clusters, Cloud Storage buckets, and load balancers,

allowing you to focus more on application development and less on

provisioning and configuration.


Key takeaways

Each tool brings its own strengths,

Terraform's cloud provisioning prowess  

Ansible's lightweight automation

GCP's native integration.

choice between these options depends on your specific needs, preferences, and the ecosystem you are operating within

 

 Practice Quiz: Automation at Scale

 

Congratulations! You passed!

Grade received 100%

To pass 80% or higher

 

 

Question 1

What is IaC (Infrastructure as Code)?

1 / 1 point
Correct

Great job! IaC goes hand in hand with continuous delivery.

Question 2

What is the principle called when you think of networked machines as interchangeable resources instead of individual machines?

1 / 1 point
Correct

Nice work! This means no node is irreplaceable and configuration is automated.

Question 3

What benefits can we gain by using automation to manage our configuration? (Check all that apply)

1 / 1 point
Correct

Way to go! When a configuration or process doesn't depend on a human remembering to follow all the necessary steps, the result will always be the same.

Correct

Right on! Because automation breeds consistency, when we know a particular process that has been automated works, we can count on it working every time as long as everything remains the same.

Correct

Woohoo! A scalable system is a flexible system that can handle extra tasks or integrate extra resources easily.

Question 4

Puppet is a commonly used configuration management system. Which of the following applications are also used for configuration management?

1 / 1 point
Correct

Excellent! Chef is a configuration management system that treats configuration as code.

Correct

Awesome! Ansible is an open source IT Configuration Management, Deployment & Orchestration tool which aims to provide a wide variety of automation challenges with huge productivity gains.

Correct

Nice job! CFEngine is an open-source configuration management program that offers automated configuration and maintenance of large-scale computing networks, including centralized cloud, desktop, consumer and industrial application control, embedded networked applications, handheld smartphones, and tablet computers.

Question 5

A network administrator is accustomed to manually configuring the 5 nodes on the network he manages. However, the company he works for is quickly growing, and there are plans to expand the network to 200 nodes. What is the most reasonable course of action for the network administrator?

1 / 1 point
Correct

Yes! We can write automation scripts ourselves or we can use some sort of configuration management software to make our network scalable by pushing changes from a control server.

 Module 3

Review: What is Puppet?


class sudo {

  package { 'sudo':

    ensure => present,

  }

}

-----------------

About this code

This block of code is saying that the package 'sudo' should be

present on every computer where the rule gets applied. If this rule

is applied on 100 computers, it would automatically install the

package in all of them. This is a small and simple block but can

already give us a basic impression of how rules are written in Puppet.
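
To try a class like this on a single machine, here is a quick sketch (the class must also be declared with include so its rules actually get applied):

$ cat > sudo.pp <<'EOF'
# define the class, then declare it so the rule is applied
class sudo {
  package { 'sudo':
    ensure => present,
  }
}
include sudo
EOF
$ sudo puppet apply -v sudo.pp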


Module 3

What is Puppet?


Puppet is typically deployed in a client-server architecture: the server (the Puppet master) stores the rules, and the clients (Puppet agents) apply them.

There are many package management systems, each with its own installation tools.

On Linux, common tools are:

APT

YUM

DNF

Puppet can do more than install packages: it can also add, remove, or modify configuration files stored in the system, or maintain registry entries on Windows systems.

What are cron jobs used for?

A cron job is a Linux command used for scheduling tasks to be 
executed sometime in the future.


Module 3

Review: Puppet resources


class timezone {
      file { '/etc/timezone':
        ensure  => file,
        content => "UTC\n",
        replace => true,
      }
}

About this code
In this code block, we are configuring the contents of /etc/timezone.
 This will be a file, and the contents of the file will be set to the 
UTC timezone. We also set the replace attribute to true, which 
means that even if the contents of the file already exist, they will 
 be replaced.  



Module 3

Puppet Resources

A resource (such as file) has several attributes (ensure, content, replace, …). For a file these include:

1.   file permissions

2.   file owner

3.   file modification time
 

Question

What is the most basic unit for modeling in Puppet?

Correct

Nailed it! The most basic unit in Puppet is a resource, such as a

1.    user,

2.    group,

3.    file,

4.    service, or

5.    package.

 The Building Blocks of Configuration Management

 Module 3

What are domain-specific languages?

 


 

 

 

 


There are two types of languages used in configuration management systems:

1. Domain-specific languages (class → resource → attributes,
                     as learned above, plus facts variables)

2. General-purpose languages (Python, Java)

    A DSL is easier, but limited in its scope.

Class → Resource → Attributes (as learned above).

In the DSL we can use basic syntax elements (as in a GPL), like:

1. variables (attributes)

2. conditional statements

3. functions

We can apply them in resources (small blocks under a class,
like file, service, package, user, group); see the sketch below.

Variables act as attributes under a resource.
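
A short sketch of a DSL variable and conditional together in one manifest (the package-name logic is an illustrative assumption, not course code):

$ cat > conditional.pp <<'EOF'
# pick a package name based on a fact about the OS
if $facts['os']['family'] == 'Debian' {
  $apache_pkg = 'apache2'
} else {
  $apache_pkg = 'httpd'
}

package { $apache_pkg:
  ensure => present,
}
EOF
$ sudo puppet apply -v conditional.pp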

When the Puppet agent runs, it calls a program that analyzes the current system and gathers the
information into facts.

The Puppet agent runs on the client -->

--> it gathers the client's information into FACTS -->

--> the agent sends the facts to the server -->

--> according to the facts (info) of the client, the server makes the rules -->

--> the Puppet master (server) sends the rules to the Puppet agent (client) -->

--> the rules are executed on the Puppet agent (client)

The Puppet agent gathers information like:

OS

memory

whether it is a VM or a physical machine

IP address
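
You can inspect these facts yourself with the facter command line tool (output varies by machine):

$ facter os.family      # e.g. "Debian"
$ facter is_virtual     # whether this is a VM
$ facter networking.ip  # primary IP address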


Question

What is a fact in Puppet?

Correct

Nicely done! A fact is a hash that stores information about the details of a particular system.

     
Module 3
The Driving Principles of Configuration Management

The attribute onlyif => 'test -e example.txt', used above, makes the action conditional: the command runs only if example.txt exists, which keeps the rule idempotent.
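
A sketch of that pattern in a full resource (the paths are illustrative):

$ cat > idempotent.pp <<'EOF'
# the chmod runs only when the file exists, so repeated runs are safe
exec { 'standardize permissions on example.txt':
  command => 'chmod 0644 /home/user/example.txt',
  path    => '/bin:/usr/bin',
  onlyif  => 'test -e /home/user/example.txt',
}
EOF
$ sudo puppet apply -v idempotent.pp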






















Question

What does idempotent mean?

Correct

Way to go! We can use an attribute like onlyif to make sure a file is changed only if it exists.



Module 3

More Information About Configuration Management




Practice Quiz: 

The Building Blocks of Configuration Management


Question 1

How is a declarative language different from a procedural language?

1 / 1 point
Correct

Right on! In a declarative language, it's important to correctly define the end state we want to be in, without explicitly programming steps for how to achieve that state.

Question 2

Puppet facts are stored in hashes. If we wanted to use a conditional statement to perform a specific action based on a fact value, what symbol must precede the facts variable for the Puppet DSL to recognize it?

1 / 1 point
Correct

Nice job! All variable names are preceded by a dollar sign in Puppet's DSL.

Question 3

What does it mean when we say that Puppet is stateless?

0 / 1 point
Incorrect

Not quite. The 'test and repair' paradigm is a philosophy which states that actions should be taken only when necessary to achieve our goal.

Question 4

What does the "test and repair" paradigm mean in practice?

1 / 1 point
Correct

Great work! By checking to see if a resource requires modification first, we can avoid wasting precious time.

Question 5

Where, in Puppet syntax, are the attributes of a resource found?

1 / 1 point
Correct

Woohoo! We specify the package contents inside the curly braces, placed after the package title.

Module 3

Wrap Up: Automating with Configuration Management



Module 3 Deploying puppet

        Up till now we have learned the theoretical concepts of configuration
management and the CM tool Puppet.

Now we will practically deploy the CM tool Puppet.

We will install Puppet locally on your computer.

Puppet has a client-server architecture.

On one computer we install Puppet as the server, and on the remaining
computers we install Puppet as clients. A Puppet client sends facts
to the Puppet server; the server makes rules on the basis of those facts,
then sends the rules to each and every client, where they are applied.


Module 3 Review: Applying rules locally


$ sudo apt install puppet-master

(output shown in the video)

$ vim tools.pp

Add to the file:

package { 'htop':
  ensure => present,
}


$ sudo puppet apply -v tools.pp










$ htop

(output shown in the video: htop runs, confirming the package was installed)

If we run the command a second time:
$ sudo puppet apply -v tools.pp

Puppet sees that htop is already installed and makes no changes; the rules are idempotent.

Question

Which of the following file extensions does the manifest file need to end with in Puppet?

Correct

Awesome! Manifest files are where we store the rules to be applied.

   
    
      
Module 3
Review: Managing resource relationships
Reading

Codes used in video as follows.

$ vi ntp.pp

class ntp {

  package { 'ntp':

    ensure => latest,

  }

  file { '/etc/ntp.conf':

    source  => '/home/user/ntp.conf',

    replace => true,

    require => Package['ntp'],

    notify  => Service['ntp'],

  }

  service { 'ntp':

    enable  => true,

    ensure  => running,

    require => File['/etc/ntp.conf'],

  }

}

include ntp

-------- End of DSL coding .pp file---------     

 ntp.pp is the manifest file.

This file contains resources related to the NTP configuration:

1. the ntp package,

2. the ntp configuration file, and

3. the ntp service

   $ sudo puppet apply -v ntp.pp     

 Module 3

Managing Resource Relationships  (Video)

The following are the contents of the ntp.conf file, seen by using
$ vi ntp.conf

(file contents shown in the video)

Currently it uses a bunch of servers from ntp.org:
server 0.pool.ntp.org
server 1.pool.ntp.org
server 2.pool.ntp.org
server 3.pool.ntp.org

We change these servers to Google's time servers, editing the file with vi:
server time1.google.com
server time2.google.com
server time3.google.com
server time4.google.com

Then we rerun the Puppet rules with the new configuration file:

$ sudo puppet apply -v ntp.pp


Question

When we declare a resource type, how do we differentiate between the original resource type and the name of a resource relationship being referenced in another resource?

Correct

Nice job! When declaring resources initially, we type the resource type in lowercase. When we reference a resource relationship from another file, we capitalize the resource name being referenced.

Module 3 Review: Organizing your Puppet modules  
Reading



Module 3
Organizing Your Puppet Modules     (Video)

When deploying a configuration management system, there are many things to manage:
1. install some packages
2. copy some configuration files
3. start some services
4. schedule some periodic tasks
5. create some users and groups and give them permission to use specific devices
6. or execute some specific commands that do not exist in current packages

We can put any resource into a module, but to keep our configuration management organized, we group everything under a sensible topic.
For example, we might have one module for

1. everything related to monitoring computer health,
2. another for the network stack, and
3. another for configuring web serving applications.

    A module ships with manifests and associated data, organized in a standard layout (sketched below).
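
A typical module directory layout looks roughly like this (the module name is an example):

modules/
└── apache/
    ├── manifests/
    │   └── init.pp       # the module's classes
    ├── files/            # static files used as-is
    ├── templates/        # pre-processed files with values to be filled in
    └── metadata.json     # installation and compatibility information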


Step 1

In the manifest file (webserver.pp), write:

include ::apache

then save and quit (:x).

Step 2


$ sudo puppet apply -v webserver.pp



   
   
   
   
 
    
    
     

Question

What do we call a collection of manifests, and folders containing associated data?

Correct
Great work! A module is an easy way to organize our configuration management tools.    
   
Module 3.

More Information About Deploying Puppet Locally
  
  Check out the following links for more information:
 
   


Practice Quiz: Deploying Puppet Locally

   
 
   
   

Incorrect
Not quite. A module is a collection of manifests and associated data, before processing.


Incorrect
Not quite. The templates directory contains pre-processed files that can include values to be replaced after calculating the manifests, or sections that are only present if certain conditions hold.
    
Question 1

Puppet evaluates all functions, conditionals, and variables for each individual system, and generates a list of rules for that specific system. What are these individual lists of rules called?

1 / 1 point
Correct

Right on! The catalog is the list of rules for each individual system generated once the server has evaluated all variables, conditionals, and functionals in the manifest and then compared them with facts for each system.

Question 2

After we install new modules that were made and shared by others, which folder in the module's directory will contain the new functions and facts?

0 / 1 point
Incorrect

Not quite. The files folder in a module will contain files that won’t need to be changed like configuration files.

Question 3

What file extension do manifest files use?

1 / 1 point
Correct

Excellent! Manifest files for Puppet will end in the extension .pp.

Question 4

What is contained in the metadata.json file of a Puppet module?

1 / 1 point
Correct

Awesome! Metadata is data about data, and in this case, often takes the form of installation and compatibility information.

Question 5

What does Puppet syntax dictate we do when referring to another resource attribute?

1 / 1 point
Correct

Great work! When defining resource types, we write them in lowercase, then capitalize them when referring to them from another resource attribute.

Question 4

What is contained in the metadata.json file of a Puppet module?

0 / 1 point
Incorrect

Not quite. Manifests are stored in their own files.

   ------------ This is End of Deploying puppet locally-----------------

   
    
 
  Deploying Puppet to Clients (Main Heading)

 

Module 3 Review: Puppet Nodes (Reading)  
    
     
 ----------- starting of program---------
node default {
  class { 'sudo': }
  class { 'ntp':
        servers => ['ntp1.example.com', 'ntp2.example.com'],
  }
}
   
   

The node default definition applies the sudo and ntp classes to all nodes that do not match a more specific definition.


---------Starting of another program---------

node webserver.example.com {

  class { 'sudo': }

  class { 'ntp':

    servers => ['ntp1.example.com', 'ntp2.example.com'],

  }

  class { 'apache': }

}

  ------The End of Program---------------------------------

  

Module 3 Puppet Nodes (Video)      

 

  
   
 
   
   
 


   
    

Through Puppet we may apply basic rules to all computers or nodes,

or we may apply specific rules to specific computers/nodes.

E.g., on web servers we want to install Apache, while email rules will only be applied to email servers.

So in Puppet terms we use NODE definitions. E.g.:

node default {}  applies basic rules to every physical machine, VM, and network router

node webserver.example.com {}  applies only to the computer/node with that FQDN
   
   
   
   
 





Question

In Puppet, what can we use to categorize in order to apply different rules to different systems?

Correct

Nice job! Different kinds of nodes are defined, allowing different sets of rule catalogs to apply to different types of machines.

    
    
     
 Module 3
Puppet's Certificate Infrastructure  
   
  
 

 
   
   
 
   
   
 


   
    
     
   
   
   
   
 
    
    
     
    
   

When a client comes onto the network, it first sends information about itself to the server,

BUT

the question is: how does the server know the client is who it claims to be?

This is a question of security.



    
  
   
   
   
 
   
   
 


 Puppet uses a PKI to establish secure connections between servers and

clients.

There are different types of public key infrastructure. The one that Puppet uses is based on Secure Sockets Layer (SSL), the same technology used by the HTTPS protocol on the internet.

The server and the client each check the other's identity over an encrypted channel established through SSL.


Question

What is the purpose of the Certificate Authority (CA)?

Correct

Awesome! The CA either queues a certificate request for manual validation, or uses pre-shared data to verify before sending the certificate to the agent.



  

 Module 3

Review: Setting up Puppet Clients and Servers    
   
 (Readings) 
 

 $ sudo puppet config --section master set autosign true

   

$ ssh webserver

$ sudo apt install puppet

$ sudo puppet config set server ubuntu.example.com

 

$ sudo puppet agent -v --test

 

This code tests the connection between the Puppet agent on the machine and

the Puppet master. 

   

vim /etc/puppet/code/environments/production/manifests/site.pp

node webserver {

  class { 'apache': }

} 

node default {}

 

To enable the puppet service at boot:

$ sudo systemctl enable puppet

To start the puppet service:

$ sudo systemctl start puppet

To check the status of the puppet service after startup/reboot:

$ sudo systemctl status puppet

    

Module 3

Setting up Puppet Clients and Servers

(Video)

$ sudo puppet config --section master set autosign true

puppet config means we are giving a configuration command to

Puppet.

--section master means this setting applies to the Puppet server (the "puppet master").

set autosign true means clients will be signed into the Puppet server automatically, without manual validation. That is acceptable here because it is a test server; otherwise, for security, we approve each client's sign-in manually.

Now we will prepare this client (webserver) to talk to the Puppet server,

which is running on another machine, ubuntu.example.com:

$ sudo puppet config set server ubuntu.example.com


Question

What kind of security encryption is used when the Puppet Certificate Authority validates the identity of a node?

Correct
Great work! The Certificate Authority creates an SSL key for the
agent machine and creates a certificate request.  
   
   

Module 3

More Information about Deploying Puppet to Clients
(Readings)   
 
   
   Check out the following link for more information:
 


  Practice Quiz: Deploying Puppet to Clients   


   Question 2

When a Puppet agent evaluates the state of each component in the manifest, it uses gathered facts about the system to decide which rules to apply. What tool can these facts be "plugged into" in order to simplify management of the content of our Puppet configuration files? 

Node definitions  Incorrect

Modules 

Incorrect

Not quite. A module is a collection of resources, and associated data used to expand the functionality of Puppet.  


  

Congratulations! You passed!

Grade received 100%
To pass 80% or higher
Question 1

When defining nodes, how do we identify a specific node that we want to set rules for?

1 / 1 point
Correct

Right on! A FQDN is a complete domain name for a specific machine that contains both the hostname and the domain name.

Question 2

When a Puppet agent evaluates the state of each component in the manifest, it uses gathered facts about the system to decide which rules to apply. What tool can these facts be "plugged into" in order to simplify management of the content of our Puppet configuration files?

1 / 1 point
Correct

Nice job! Templates are documents that combine code, system facts, and text to render a configuration output fitting predefined rules.

Question 3

What is the first thing that happens after a node connects to the Puppet master for the first time?

1 / 1 point
Correct

Awesome! After receiving a certificate, the node will reuse it for subsequent logins.

Question 4

What does FQDN stand for, and what is it?

1 / 1 point
Correct

Awesome! A fully qualified domain name (FQDN) is the unabbreviated name for a particular computer, or server. There are two elements of the FQDN: the hostname and the domain name.

Question 5

What type of cryptographic security framework does Puppet use to authenticate individual nodes?

1 / 1 point
Correct

Way to go! Puppet uses a Secure Sockets Layer (SSL) Public Key Infrastructure to authenticate both nodes and masters.

 
  Review: Modifying and Testing Manifests

(Readings)

 describe 'gksu', :type => :class do

        let(:facts) { { 'is_virtual' => 'false' } }

        it { should contain_package('gksu').with_ensure('latest') }

end

About this code

This code runs an rspec test to determine whether the gksu package has the intended behavior when the fact is_virtual is set to false. When this is the case, the gksu package should have the ensure parameter set to latest: ensure('latest').   

Module 3

Modifying and Testing Manifests   

  

Manifests live on the server side. When they are updated on the server, the updates should be applied to the whole client fleet.

Before applying an updated manifest rule to all the clients, we first apply the rule locally to test it.

E.g., suppose we want to change the permissions of some files on the nodes.

After applying new rules on clients, you might find a bug, or a client might hang.

First, check the syntax of the manifest with "puppet parser validate".

You can also check a rule using the --noop parameter (No Operations):

it only checks the rules and their syntax, simulating the run without applying anything.
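
Putting both checks together on the manifest from earlier (file name as used above):

$ puppet parser validate ntp.pp       # syntax check only
$ sudo puppet apply --noop -v ntp.pp  # simulate the run without changing anything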

 


 

 

  

   
 
    
    
     
 

  
   
  
   
   
   
 
The rspec testing code shown in the video checks a manifest/rule before it is applied.

It checks that the catalog is written correctly.

   


  
   
   
   
 
   
   
 

Question

What does the puppet parser validate command do?

Correct

Great work! The puppet parser validate command checks the syntax of the

manifest to make sure it's correct.

   
    
     
   
 Module 3

Safely Rolling out Changes and Validating Them 

   
 

 
 
    
    
     
    
   
  
   
   
   
 
   What is the purpose of using multiple environments?
Incorrect

Not quite. While a "Canary" environment is often used to minimize the disruption of unforeseen problems in production, the canaries are a subset of all production machines and are thus being used by some real customers.

   

Question

What is the purpose of using multiple environments?

Correct
Right on! By creating separate directories for different purposes,
such as testing and production, we can ensure changes don't
affect end users. 

  
 Module 3

More Information About Updating Deployments

(Reading)  
   
   
 Check out the following links for more information:
   

Module 3

Practice Quiz: Updating Deployments   
 

Congratulations! You passed!

Grade received 100%
To pass 80% or higher
Question 1

What is a production environment in Puppet?

1 / 1 point
Correct

Awesome! Environments in Puppet are used to isolate software in development from software being served to end users.

Question 2

What is the --noop parameter used for?

1 / 1 point
Correct

Nice job! No Operations mode makes Puppet simulate what it would do without actually doing it.

Question 3

What do rspec tests do?

1 / 1 point
Correct

Right on! We can test our manifests automatically by using rspec tests. In these tests, we can verify resources exist and have attributes set to specific values.

Question 4

How are canary environments used in testing?

1 / 1 point
Correct

Woohoo! If we can identify a problem before it reaches all the machines in the production  environment, we’ll be able to keep the problem isolated.

Question 5

What are efficient ways to check the syntax of the manifest? (Check all that apply)

1 / 1 point
Correct

Great work! In order to perform No Operations simulations, we must use the --noop parameter when running the rules.

Correct

Groovy! To test automatically, we need to run rspec tests, and fix any errors in the manifest until the RSpec tests pass.

Correct

Excellent! Using the puppet parser validate command is the simplest way to check that the syntax of the manifest is correct.

   
    
     

Monitoring and Alerting (Main Heading)

 

Module 3

Getting Started with Monitoring   

   
    

 When a service is running in the cloud, we make sure that the service is:

1.   behaving as expected,

2.   returning the right results,

3.   quickly and reliably.

To achieve these aims we need:

1.   good monitoring, and

                2.   alerting rules.
 

    
    
     
    
   
  



 







Response code:

                When a web server receives an HTTP request from a client,

it generates a response code.

Responses are grouped into five classes:

Informational responses (100–199)

Successful responses (200–299)

Redirects (300–399)

Client errors (400–499)

Server errors (500–599)


The code shows whether the request was served

correctly, or gives an error code, as shown below.
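
A quick way to see a response code from the shell (the URL is an example):

$ curl -s -o /dev/null -w "%{http_code}\n" https://www.example.com    # prints e.g. 200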


  

  

  

 

   

   

    


   

  

 

 In general, error codes like 500, 501, 503 mean there is an error on the

server side,

and codes like 400, 401, 402 mean there is an error on the

client/user side.

Monitoring and Alerting System


→ response code →

   → error codes like 401, 500 →

        → figure out METRICS (number of emails sent, successful/failed purchases) →

         → store the metrics in a monitoring system,

                        like:

1.   AWS CloudWatch

2.   Google Stackdriver

3.   Azure Metrics

4.   Prometheus

5.   Datadog

6.   Nagios

There are two ways to get metrics into a monitoring system:

1.   Pull  (the monitoring system periodically queries the service for its metrics)

2.   Push (the service actively sends its metrics to the monitoring system)

Question

Which of the following monitoring models is being used if our monitoring system requires our service to actively send metrics?

Correct

Awesome! When push monitoring is used, the service being monitored actively sends metrics to the monitoring system.




  


  



  


 Module 3

Getting Alerts When Things Go Wrong


We need our systems to run 24 hours a day, but as human system administrators we cannot sit in front of a system 24 hours a day. So we want the service to run unattended; we need automation to handle bad or worsening situations.

One way is an automatic program that checks the health of the system periodically; if the checking program finds any error or inconsistency in the system, it sends an email or SMS to the system administrator.

For example, you might set an alert:

1.   if any application uses more than 10 GB of RAM,

2.   if an application raises too many 500 errors, or

3.   if a request takes too long to respond.

We divide alerts into two categories:

1.   those which need immediate action, and

2.    those which need action in the near future.

 





If a problem or incident does not need any action, it is called NOISE.






  





Non-urgent bugs are configured to create a TICKET for IT support to solve the problem during office hours.

Question

What do we call an alert that requires immediate attention?

Correct

Nice job! Pages are alerts that need immediate human attention, and are often in the form of SMS or email.

Module 3

Service-Level Objectives

Question

If our service has a Service Level Objective (SLO) of four-nines, what is our error budget measured in downtime percentage?

Correct

Nice job! If we have an SLO of 99.99%, that gives us an error budget of .01%.

Module 3

Basic Monitoring in GCP  

We use a monitoring tool called Stackdriver.

GCP →

        Stackdriver → Monitoring

     In the newer GCP console there is no "Stackdriver" entry; instead,

go to Operations → Monitoring.


 

  

  

 

 

 

  

  

  Under the Resources dashboard, click on <Instances>.


 

  

  

 

 

 

  

   

  


  

  

  

  

  

 

   

   

  To create a new alerting policy:


   

  

 

 

 

 

To test CPU utilization, we create an infinite loop that drives the CPU to 100%:

$ while true; do true; done &

    

     

    

   

  

   

 

 The output of the top command shows the bash loop from the

script above utilizing 100% of the CPU.

 

    

   

 

   

   

 

Question

What type of policy requires us to set up a condition which notifies us when it’s triggered?

Correct

Great work! An Alerting Policy specifies the conditions that trigger alerts, and the actions to be taken when these alerts are triggered, like sending an email address notification.

 Module 3

More Information on Monitoring and Alerting

(Readings)


Check out the following links for more information:

 Practice Quiz: Monitoring & Alerting

 

Congratulations! You passed!


Grade received 100%
To pass 80% or higher
Question 1

What is a Service Level Agreement?

1 / 1 point
Correct

Awesome! A service-level agreement is an arrangement between two or more parties, one being the client and the other being service providers.

Question 2

What is the most important aspect of an alert?

1 / 1 point
Correct

Right on! If an alert notification is not actionable, it should not be an alert at all.

Question 3

Which part of an HTTP message from a web server is useful for tracking the overall status of the response and can be  monitored and logged?

1 / 1 point
Correct

Nice job! We can log and monitor these response codes, and even use them to set alert conditions.

Question 4

To set up a new alert, we have to configure the _____ that triggers the alert.

1 / 1 point
Correct

Excellent! We must define what occurrence or metric threshold will serve as a conditional trigger for our alert.

Question 5

When we collect metrics from inside a system, this is known as ______ monitoring.

1 / 1 point
Correct

Great work! A white-box monitoring system is one that collects metrics internally, from within the system being monitored.

 Module 3

What to Do When You Can't Be Physically There


  Question

Which of the following is a valid method of troubleshooting a cloud service? (Select all that apply)

Correct

Nice job! Testing through software is always our best bet in the cloud.

Correct

Well done, you! Part of the beauty of running services in the Cloud

is that you aren't responsible for everything! Most Cloud providers

are happy to provide various levels of support  

   

Module 3

Identifying Where the Failure Is Coming From


If the problem still exists after the move, it is OUR/your own fault/problem.

If the problem does not exist after shifting the location, it means

it is the service provider's fault in their infrastructure; contact

the service provider and report it.

 Similarly, if you see that the problem is in the performance of the service,

shift the service to another machine (physical or VM).


If a request is taking too long to be served, one solution is to shift the service to a more powerful server.


Question

When troubleshooting, what is it called when an error or failure occurs, and the service is downgraded to a previous working version?

Correct

Great work! Rollback is the process of restoring a database or program to a previously defined state, usually to recover from an error.



A full container can be shifted from the server to your workstation, or to and from the cloud infrastructure. In this way a problem can be debugged, and you can find out where it is coming from.


 Module 3

Recovering from Failure  

For complex system failures, we must take two precautionary measures:

1.   A good backup system

2.   Documentation of the steps to be taken in case of failure

Good backup system:

Backup does not mean backing up data only; we also have to back up services, instances, and network configurations, automatically.


At large scale, services run in data centers in different geographical locations. So, if one data center fails, end users get service from the other data centers seamlessly.

 

Question

Which of the following are important aspects of disaster recovery? (Select all that apply)

Correct

Nice job! Having several forms of redundancy, and failover reduces the impact when failure happens.

Correct

Awesome! In order to get things up and running as quickly as possible, we need to have a detailed plan.

Correct

Great work! Having automatic backups makes it easier to restore and recover.

 Module 3

Reading: Debugging Problems on the Cloud


Check out the following links for more information:

Practice Quiz: Troubleshooting & Debugging    


Congratulations! You passed!

Grade received 100%
To pass 80% or higher
Question 1

Which of the following are valid strategies for recovery after encountering service failure? (Select all that apply.)

1 / 1 point
Correct

Awesome! A quick way to recover is to have a secondary instance of the VM running your service that you can quickly switch to.

Correct

Nice job! As long as you've been keeping frequent backups, restoring a previous VM image will often get you where you need to be.

Correct

Woohoo! If the problem is related to recent changes or updates, rolling back to a previous working version of the service or supporting software will give the time to investigate further.

Question 2

Which of the following concepts provide redundancy? (Select all that apply.)

1 / 1 point
Correct

Right on! If your primary VM instance running your service fails, having a secondary instance running in the background ready to take over can provide instant failover.

Correct

You nailed it! Having a secondary Cloud service provider on hand with your data in case of the first provider having large-scale outages can provide redundancy for a worst-case scenario.

Question 3

If you operate a service that stores any kind of data, what are some critical steps to ensure disaster recovery? (Select all that apply)

1 / 1 point
Correct

Nice work! As long as we have viable backup images, we can restore the VM running our service.

Correct

Excellent! It's important to know that our backup process is working correctly. It would not do to be in a recovery situation and not have backups.

Question 4

What is the correct term for packaged applications that are shipped with all needed libraries and dependencies, and allows the application to run in isolation?

1 / 1 point
Correct

Great job! Containerization ensures that our software runs the same way every time.

Question 5

Using a large variety of containerized applications can get complicated and messy. What are some important tips for solving problems when using containers? (Select all that apply)

1 / 1 point
Correct

Great work! As long as we have the right logs in the right places, we can tell where our problems are.

Correct

Nice job! We should take every opportunity to test and retest that our configuration is working properly.    

  

Glossary terms from course 5, module 3

Terms and definitions from Course 5, Module 3

Configuration management: Automation technique that manages the configuration of computers at scale

Domain-Specific Language (DSL): A programming language that's more limited in scope

Facts: Variables that represent the characteristics of the system (the Puppet client sends its facts to the Puppet server)

Puppet: The current industry standard for configuration management; the client side is also known as the Puppet agent

Puppet master: Also known as the Puppet server

Graded assessment for module 3

You finished this assignment

Grade received 87.50%
Latest Submission Grade 87.50%
Question 1

When defining nodes, how do we identify a specific node that we want to set rules for?

1 / 1 point
Correct
Question 2

Puppet evaluates all functions, conditionals, and variables for each individual system, and generates a list of rules for that specific system. What are these individual lists of rules called?

1 / 1 point
Correct
Question 3

Which of the following are valid strategies for recovery after encountering service failure? (Select all that apply.)

0.75 / 1 point
Correct
Correct
You didn’t select all the correct answers
Question 4

Which part of an HTTP message from a web server is useful for tracking the overall status of the response and can be  monitored and logged?

1 / 1 point
Correct
Question 5

What is a production environment in Puppet?

1 / 1 point
Correct
Question 6

Puppet facts are stored in hashes. If we wanted to use a conditional statement to perform a specific action based on a fact value, what symbol must precede the facts variable for the Puppet DSL to recognize it?

1 / 1 point
Correct
Question 7

A Puppet agent inspects /etc/conf.d, determines the OS to be Gentoo Linux, then activates the Portage package manager. What is the provider in this scenario?

0 / 1 point
Incorrect
Question 8

What benefits can we gain by using automation to manage our configuration? (Check all that apply)

1 / 1 point
Correct
Correct
Correct
Question 9

What is the correct term for packaged applications that are shipped with all needed libraries and dependencies, and allows the application to run in isolation?

1 / 1 point
Correct
Question 10

What are efficient ways to check the syntax of the manifest? (Check all that apply)

1 / 1 point
Correct
Correct
Correct

Test manually     

 Module 4

What is DevOps?

  DevOps is a way of working that combines

1.   software development, Dev, and

2.   IT operations, Ops,

to

1.   shorten the system's development lifecycle and

2.   provide continuous delivery with

3.   high software quality. 


How do you define CI?

Continuous integration (CI) is a software development practice in which frequent and isolated changes are immediately tested and reported on when they're added to a larger codebase.

What is CI vs CD?

CI can be considered the first stage in producing and delivering code, and CD the second. CI focuses on preparing code for release (build/test), whereas CD involves the actual release of code (release/deploy).

Module 4

Continuous integration, delivery, and deployment (Readings)    


Continuous integration (CI) automatically:

1.   builds,

2.   tests, and

3.   integrates code changes within a shared repository.

Continuous delivery (CD) automatically delivers code changes to production-ready environments for approval, and continuous deployment (CD) automatically deploys those code changes directly to production.

    

 Module 4

Example pipeline (Readings)     

Pipelines

Pipelines are:

1.   automated processes and

2.   sets of tools

that developers use during the software development lifecycle. The steps of a pipeline are carried out in sequential order; this ordering is mandatory.

 

A pipeline for a Python application is triggered when a pull request is ready to be merged. That pipeline can perform the following steps (a sketch of such a pipeline runner appears after this list):

1.   Check out the complete branch represented by the pull request.

2.   Attempt to build the project by running python setup.py build.

3.   Run unit tests with pytest.

4.   Run integration tests against other parts of the application with a framework like Playwright.

5.   Build the documentation and upload it to an internal wiki.

6.   Upload the build artifacts to a container registry.

7.   Message your team in Slack to let them know the build was successful.
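To make the stop-on-failure behavior of such a pipeline concrete, here is a minimal Python sketch of a sequential runner; the two step commands are illustrative placeholders taken from the list above, not a prescribed setup:

# pipeline.py: run each step in order, stop at the first failure.
import subprocess
import sys

STEPS = [
    ["python", "setup.py", "build"],  # build the project
    ["pytest"],                       # run the unit tests
]

for step in STEPS:
    print("Running:", " ".join(step))
    result = subprocess.run(step)
    if result.returncode != 0:        # any failing step stops the pipeline
        print("Step failed; marking the job as failed.")
        sys.exit(result.returncode)

print("All steps succeeded; the build is ready to deploy.")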

Example pipeline

Without a CI pipeline, in order to deploy an application successfully, your organization has to:

1.   Choose a “release day” when all the code will be merged together.

2.   Restrict new code commits until the release is complete, to avoid conflicts.

3.   Run integration tests (and maybe performance tests).

4.   Prepare the deployment.

5.   Notify customers of an upcoming maintenance window.

6.   Manually deploy the application and any other updates.

 By creating a CI pipeline, the process looks very different:

1.  Developers commit code to the repository as soon as they’re done.

2.  The CI server observes the commit and automatically triggers a build pipeline.

3.  If the build completes successfully, the CI server runs all the unit tests. If any tests fail, the build stops.

4.  The CI server runs integration tests and/or smoke tests, if any.

5.  Assuming the previous steps all complete successfully, the CI server signals success. Then, the application is ready to be deployed.

6.  If the CD process has also been automated, the code is deployed to production servers. 

 

 Module 4

DevOps tools (Readings)

There are many DevOps tools.

1. Source repositories

a. GitHub

b. Bitbucket

2. CI/CD tools

a. GitHub Actions

b. Jenkins

c. Google Cloud Deploy

3. Infrastructure as Code (IaC) tools

a. Terraform

b. Ansible

4. Container management tools

a. Docker

b. Kubernetes

5. Security scanning tools

a. Snyk

b. SonarQube

6. Production monitoring tools

a. Datadog

b. Application

    

Stages of DevOps

1.     Discover

2.     Plan 

3.     Build

4.     Test 

5.     Monitor

6.     Operate 

7.     Continuous feedback

   

1. Discover

This allows everyone on your team to share and comment on anything and will be important throughout the DevOps lifecycle. Examples of tools you can use include

a.    Jira Product Discovery,

b.   Miro, and

c.    Mural.

2. Plan

Includes:

a.  sprints (to break the project into actionable blocks),

b.  planning and

c.  issue tracking, as well as

d.  continued collaboration.

Examples of tools you can use include:

a.  Jira Software,

b.  Confluence, and

c.  Slack.

3. Build

a.  to create individual development environments,

b.  monitor versions with version control,

c.  continuously integrate and test, and

d.  have source control of your code.

4. Test

Tools that can automate testing include:

a.  Veracode and

b.  SmartBear,

c.  Zephyr Squad or

d.  Zephyr Scale.

5. Monitor

Look for tools that can integrate with your group chat clients and send you notifications or alerts when you’ve automated monitoring your servers and application performance. An example tool you can use is Jira Software. 

6. Operate

 Once your software has deployed, look for tools that can track incidents, changes, problems, and software projects on a single platform.

An example tool you can use is Jira Software

7. Continuous feedback

   Look for applications that can integrate your chat clients with a survey platform or social media platform. Examples of tools you can use include

a.     Slack and

b.    GetFeedback.

Popular tools for CI/CD

a.     Jenkins,

b.    GitLab,

c.      Travis CI, and

d.    CircleCI are all tools which can automate the different stages of the software development lifecycle, including

1.   building,

2.   testing, and

3.   deploying.

They are often used in DevOps to continuously build and test software, which allows you to continuously integrate your changes into your build.

Tools like

a.    Spinnaker,

b.   Argo CD, and

c. Harness can be used to automate continuous delivery and deployment and to simplify your DevOps processes.

 Module 4

From coding to the cloud

Coding ---> ….. ---> ….. ---> Cloud

Coding ---> DevOps ---> Cloud

Development team ---> DevOps ---> Operations team

DevOps enables collaboration between the development team and the operations team; DevOps works all the way from coding to the cloud.

The programmer packages the program into a container (e.g., with Docker); the containers can then be managed with Kubernetes.


   Module 4

Containers with Docker and Kubernetes  

Containers

are applications that are packaged together with their configuration and dependencies.

Docker

is the most common way to package and run applications in containers. It can build container images, run containers, and manage container data.

Kubernetes

is a portable and extensible platform to assist developers with containerized applications. It’s a tool that developers use alongside Docker to run and manage Docker containers, allowing you to deploy, scale, and manage containerized applications across clusters.

Containers in the CI/CD pipeline

Continuous integration and continuous delivery/deployment (CI/CD) is the automation of an entire pipeline of tools that build, test, package, and deploy an application whenever developers commit a code change to the source control repository.

Feedback can be provided to developers at any given stage of the process.

A pipeline is an automated process and set of tools that developers use during the software development lifecycle.

In a pipeline, the steps of a process are carried out in sequential order. The reason behind this is that if any step fails, the pipeline can stop without deploying the changes. The pipeline stops executing the steps and marks the job as failed.

Using containers in the CI/CD pipeline can bring developers additional flexibility, consistency, and benefits to building, testing, packaging, and deploying an application. Because containers are lightweight, they allow for a faster deployment of the application. Containers help eliminate the common “works on my machine” syndrome.

Docker images contain the application code, data files, configuration files, libraries, and other dependencies needed to run an application. Typically, these consist of multiple layers in order to keep the images as small as possible. Container images allow developers to run tests, conduct quality performance checks, and ensure each code change is tested and works as expected before being deployed.

Kubernetes is a tool for organizing, sharing, and managing containers. This powerful tool gives programmers and developers the ability to scale, duplicate, push updates, roll back updates and versions, and operate under version control.

Another advantage of using containers in a CI/CD pipeline is that developers are able to deploy multiple versions of an application at the same time without interfering with one another.

It can reduce the number of errors from configuration issues and allow delivery teams to quickly move these containers between different environments, like from build to staging and staging to production.

And lastly, using containers in a CI/CD pipeline supports automated scaling, load balancing, and high availability of applications creating robust deployments.

Module 4

Continuous testing and continuous improvement   

Continuous testing

Continuous testing means running automated test suites every time a change is committed to the source code repository.

 

There are three types of testing that you’ll typically see in the CI/CD pipeline. These include:

·         Unit testing

·         Integration testing

·         System testing 

Unit testing

Unit testing tests an individual unit within your code—a unit can be:

·         a function,

·         a module, or

·         a set of processes.

Unit testing checks that each individual unit behaves as expected in isolation. (Integration testing, covered later in this module, checks how different units work together.)
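A minimal pytest-style example, assuming a hypothetical add function as the unit under test:

# calculator.py: the unit under test (hypothetical example).
def add(a, b):
    return a + b

# pytest collects and runs functions whose names start with test_.
def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0

Running pytest in the project directory discovers and runs the test automatically.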

System testing

System testing simulates active users and runs on the entire system to test for performance. Testing for performance can include testing how your program, software, or application handles:

1.   high loads or stress,

2.   changes in the configuration, and

3.   changes in system security.

 

Testing frameworks and tools

 

JUnit

for the Java programming language.

PyUnit

for Python, and

NUnit

for C#.

Selenium

for web application developers.

Cypress

is JavaScript-based, often used for front-end testing of web-based applications.

Postman

to automate:

1.   unit tests,

2.   function tests,

3.   integration tests,

4.   end-to-end tests,

5.   regression tests, and

6.   more in your CI/CD pipeline.

 

 

Continuous improvement

is a crucial part of the DevOps mindset. The team is always engaged in checking for product efficiency improvements and in reducing errors and bottlenecks.

Key benefits of continuous improvement include:

·         Increased productivity and efficiency

·         Improved quality of products and services

·         Reduced waste

·         Competitive products and services

·         Increased innovation

·         Increased employee engagement 

·         Reduced employee turnover

Key performance indicators (KPIs)

are metrics used to measure progress toward improved software or application quality and performance.

Popular metrics in DevOps that you can use to measure performance include:

·         Lead time for changes:

This is the length of time it takes for a code change to be committed to a branch (Git/GitHub) and be in a deployable state (see the small calculation sketch after this list).

·         Change failure rate:

This is the percentage of code changes that lead to failures and require fixing after they reach production or are released to end-users.

·         Deployment frequency:

This measures the frequency of how often new code is deployed into production.

 

·         Mean time to recovery:

This measures how long it takes to recover from a partial service interruption or total failure of your product or system. 
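To make the lead time metric concrete, a minimal Python sketch with made-up timestamps:

# Lead time for changes: time from commit to "deployable" state.
from datetime import datetime

committed = datetime(2024, 1, 10, 9, 30)    # hypothetical commit time
deployable = datetime(2024, 1, 10, 14, 45)  # hypothetical deployable time

lead_time = deployable - committed
print(f"Lead time for this change: {lead_time}")  # prints 5:15:00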

Key takeaways

Making high-quality tests part of your CI/CD pipeline is critical to your DevOps success.

       

  Practice quiz: CI/CD pipelines 

 

Congratulations! You passed!

Grade received 90%
To pass 80% or higher
Question 1

Which types of tests are automated and run by a CI/CD pipeline?

1 / 1 point
Correct

Correct. Unit, integration, and system are the types of tests commonly used to perform continuous testing in a CI/CD pipeline.

Question 2

Why is automated testing important? Select all that apply.

0.75 / 1 point
Correct

That’s right! Automated testing ensures that all of your code changes are tested for errors or bugs, allowing you to create fixes as issues arise. It also reduces the risk of human error, especially when performing larger tests that would take time if conducted manually.

This should not be selected

Not quite. Automated testing ensures that all of your code changes are tested for errors or bugs, allowing you to create fixes as issues arise. It also reduces the risk of human error, especially when performing larger tests that would take time if conducted manually.

Correct

That’s right! Automated testing ensures that all of your code changes are tested for errors or bugs, allowing you to create fixes as issues arise. It also reduces the risk of human error, especially when performing larger tests that would take time if conducted manually.

Question 3

Which actions typically trigger a CI/CD pipeline to start? Select all that apply.

1 / 1 point
Correct

That’s right! A change in code, a scheduled or user-initiated workflow, and another pipeline are all actions that could trigger a CI/CD pipeline to start.

Correct

That’s right! A change in code, a scheduled or user-initiated workflow, and another pipeline are all actions that could trigger a CI/CD pipeline to start.

Correct

That’s right! A change in code, a scheduled or user-initiated workflow, and another pipeline are all actions that could trigger a CI/CD pipeline to start.

Question 4

What are some advantages of implementing DevOps?

1 / 1 point
Correct

That’s right! The advantages of implementing DevOps include an automated software development lifecycle, collaborative environments for the development and operations teams, and continuous, iterative improvements to your software or applications.

Question 5

Which of the following are benefits of using containers in your CI/CD pipeline? Select all that apply.

0.75 / 1 point
Correct

That’s right! The benefits of using containers in your CI/CD pipeline include deploying applications easily to multiple operating systems and hardware platforms, deploying multiple versions of an application at the same time without interfering with one another, and creating a more reliable way to work with applications at any stage in the pipeline process.

Correct

That’s right! The benefits of using containers in your CI/CD pipeline include deploying applications easily to multiple operating systems and hardware platforms, deploying multiple versions of an application at the same time without interfering with one another, and creating a more reliable way to work with applications at any stage in the pipeline process.

You didn’t select all the correct answers

Continuous Integration (main heading)

 

Module 4

Automation

Automation is achieved through CI/CD.


The essential parts of a CI automation setup are:

1.   a version control system (VCS),

2.   a build server, and

3.   an automated testing framework.


AUTOMATION is a programming function that:

1.   enables continuous routines to be scaled,

2.   catches errors automatically, and

3.   reduces the need for human intervention.

The automation of manual tasks is key to CI.

The CI/CD pipeline is a vital component of software development. Automation makes it easier for DevOps teams and programming teams to work together.

Module 4

Integration with Github    

How to integrate CI with GitHub.com: CircleCI is used for CI. An account is created on circleci.com using a Google account.

 

Module 4

Cloud Build on GCP

Cloud Build is a fully managed continuous integration and continuous delivery (CI/CD) service provided by GCP

It allows developers to automate the process of

1.   building,

2.   testing, and

3.   deploying applications or

4.   code changes

to various environments.

 

The core components of Cloud Build include:

·         Build triggers

·         Build configurations

·         Build steps

1. Build triggers

They define when and under what conditions a build should be triggered. Cloud Build supports various types of build triggers, including:

·         Push trigger: This initiates a build when code changes are pushed to a specific branch of a version control repository like GitHub.

·         Tag trigger: This triggers a build when a new tag is applied to the repository.

·         Pull request trigger: This starts a build when a pull request is opened or updated, allowing you to run tests and checks before merging code changes.

·         Scheduled trigger: This runs builds on a defined schedule.

2. Build configurations

are YAML files that define the build steps, environment variables, and other settings for a build. The build configuration file is typically named cloudbuild.yaml and is placed in the root of the repository.

 

3.  Build steps

are individual actions that Cloud Build executes in sequence according to the build configuration. Each step can run commands or scripts and the steps are executed in the order they are listed. Let’s look at an example.

 

A typical build configuration might include the following build steps:

i.      Fetching dependencies: The first step pulls in the required libraries and dependencies for the application.

ii.      Building the application: This step compiles the code and creates the application binaries.

iii.      Running tests: This step runs the automated test suite against the build.

iv.      Deploying: The last step deploys the application to a specified environment like staging or production.

 

Benefits

Using Cloud Build for CI/CD workflows offers a number of benefits, including:

1.   speed,

2.   scalability, and

3.   seamless integrations with other GCP services. 

Cloud Build is a fully managed service, meaning you do not need to worry about infrastructure setup and maintenance.

It automatically scales resources based on your build requirements, allowing you to run multiple builds in parallel, reducing build times and increasing overall development velocity, speed, and efficiency.

Cloud Build's ability to scale automatically means it can handle builds of any size, from small projects to large-scale applications. As your development needs grow, Cloud Build can accommodate the increased workload without manual intervention, ensuring that your CI/CD process remains smooth and efficient.

Cloud Build seamlessly integrates with other GCP services, making it easy to incorporate different stages of the CI/CD workflow into your projects.

Integrations include:

 

Integration capabilities

Cloud Build offers integration capabilities with both:

1.   GitHub and

2.   Google Cloud Source Repositories.

 

 

Module 4

CI best practices

Continuous integration (CI) is a software development practice in which code changes are integrated into a shared repository automatically, frequently, and safely.

Key principles of CI include:

  • Integration
  • Builds
  • Tests
  • Feedback
  • Version control

Core practices

 

CI is composed of three core practices, which include:

·         Automated building (automated compilation and artifact creation)

·         Automated testing

·         Version control system integration

Benefits

Continuous integration enables faster feedback, higher quality software, and a lower risk of bugs and conflicts in your code.

CI is a way for developers to ensure that their code is always up to date and ready to deploy.

CI ensures that reliable software is getting into the hands of users.

 

Module 4

CI testing

 

Continuous testing means running automated test suites whenever a code change occurs. In other words, it is running tests as part of the CI/CD pipeline, between the Build and Deploy stages.

Integration testing

Continuous integration is when developers change code and deposit it into a shared repository frequently.

The benefits of continuous integration are:

1.   revision control,

2.   build automation, and

3.   continuous testing.

An integration test checks how different parts, software modules, or routines work together. It runs in the CI pipeline. Whenever developers make a change, integration tests verify that everything still works together as expected.

There are different types of CI tests in the CI pipeline:

1.   Code quality tests:

check that the code is not overly complicated.

2.   Unit tests:

test a function, module, or set of processes.

3.   Integration tests:

test that different parts of the application, or modules, work together as expected.

4.   Security or license tests:

test that the application is free from:

a.    threats,

b.   vulnerabilities, and

c.    risks.

Tools used in integration testing

1.   pytest:

In Python, to test the integration among web services (see the sketch after this list).

2.   Selenium framework:

To test browser-based applications or sites, load web pages, and check functionality.

3.   Playwright framework:

Same as above.
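A minimal sketch of a pytest integration test against a web service, assuming the requests library and a hypothetical local service exposing a /health endpoint:

# test_integration.py: exercises the running service over real HTTP.
import requests

BASE_URL = "http://localhost:8000"  # hypothetical service under test

def test_health_endpoint_returns_ok():
    response = requests.get(BASE_URL + "/health")
    assert response.status_code == 200            # service is up
    assert response.json().get("status") == "ok"  # and reports healthy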

There are also some “Code coverage” testing tools.

End-to-end testing

is used to test the functionality and performance of your entire application from start to finish by simulating a real user scenario.
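A minimal end-to-end sketch using Playwright's Python API; the URL and the title assertion are placeholders:

# e2e_test.py: simulates a real user loading a page in a browser.
from playwright.sync_api import sync_playwright

def test_homepage_loads():
    with sync_playwright() as p:
        browser = p.chromium.launch()     # headless browser
        page = browser.new_page()
        page.goto("https://example.com")  # placeholder URL
        assert "Example" in page.title()  # placeholder assertion
        browser.close()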


 

Practice quiz: Continuous integration

Practice Quiz.30 min.5 total points available.

Congratulations! You passed!

Grade received 100%
To pass 80% or higher
Question 1

What is the role of a webhook in GitHub?

1 / 1 point
Correct

Correct! A webhook is a URL provided to GitHub by the CI system, and it allows GitHub to notify CI tools about code changes.

Question 2

Which of the following are the core components of Cloud Build? Select all that apply.

1 / 1 point
Correct

Correct! Build triggers, configurations, and steps all make up the core components of Cloud Build. Build triggers are events that begin the Cloud Build process. Build configurations are YAML files that define the steps and settings for your build. Build steps are the actions that Cloud Build executes in a specific order depending on the build configuration.

Correct

Correct! Build triggers, configurations, and steps all make up the core components of Cloud Build. Build triggers are events that begin the Cloud Build process. Build configurations are YAML files that define the steps and settings for your build. Build steps are the actions that Cloud Build executes in a specific order depending on the build configuration.

Correct

Correct! Build triggers, configurations, and steps all make up the core components of Cloud Build. Build triggers are events that begin the Cloud Build process. Build configurations are YAML files that define the steps and settings for your build. Build steps are the actions that Cloud Build executes in a specific order depending on the build configuration.

Question 3

What is the purpose of utilizing version control in continuous integration with your code?

1 / 1 point
Correct

Correct! Version control allows developers to view code history and manage the code changes as needed.

Question 4

Why should you run integration tests? Select all that apply.

1 / 1 point
Correct

Correct! You conduct integration tests to make sure the different parts of your application work together and catch errors earlier on in the CI pipeline, which can save you time, money, and a lot of headaches.

Correct

Correct! You conduct integration tests to make sure the different parts of your application work together and catch errors earlier on in the CI pipeline, which can save you time, money, and a lot of headaches.

Question 5

You have developed new code for an application you are creating for a client. You are using the Cloud Build service supported by Google Cloud Platform, or GCP. Which step in the build steps process refers to moving the application to an environment when it is ready for production?

1 / 1 point
Correct

Correct! The last step in the build step process is to deploy the application to a specific environment for production.

 

Module 4

Continuous Delivery and Continuous Deployment

In continuous delivery, deployment is not to production but to a test (staging) server, as in the case above.


Module 4

Value stream mapping

Value stream mapping means drawing a flowchart of all the steps taken during CI/CD.

Value stream mapping (VSM) is a technique used to analyze, design, and manage the flow of materials and information required to bring a final product to a customer.

VSM is also known as:

1.   Material flow

2.   Information flow

mapping, done through flowcharts with software like Lucidchart (lucid.co).

Benefits of VSM

1.   To identify bottlenecks in your value stream,

2.   To identify inefficiencies in your process, and

3.   To identify current areas of improvement.

4.   It helps to reduce the number of steps in your process and

5.   helps you visualize where handoffs occur.

6.   To identify where wait time is preventing work from moving through your system.

The goals of VSM

1.   To reduce the waste of time and resources.

2.   To increase the efficiency of processes.

To do this, create a detailed map of all the necessary steps involved in your business process with a diagram or a flowchart.


Creating a value stream map involves these steps:

1.   Define the problem. What are you trying to solve or achieve?

2.   List the steps in your current process. For each step, make sure to note:

a.    the amount of time needed,

b.   any inputs and outputs, and

c.    the resources—both people and materials—necessary to complete each step.

3.   Create and organize the map using the above data. Your goal is to illustrate the flow of your process, so begin with the start and finish with the end of your process. If you need help organizing the flow, think back to the steps in the software development lifecycle and use that as a guide to organize your steps. 

4.   Find areas that can be improved. Gather information about your current process by answering questions like:

a.    Can some tasks be done in parallel?

b.   Can tasks be reordered to improve efficiency?

c.    Can tasks be automated to reduce the amount of manual labor?

5.   Update the map with your findings.

 

6.   Implement the new process. But don’t stop here! If this new process works well for your project—great! Keep in mind that coding, software, programs, apps—everything digital—are constantly updating to meet client or business needs. It can be helpful to implement an iterative process—either manual or automated—to make sure that any new hiccups in your process can be identified and addressed before they become a larger issue. 

For more information and an explanation of how value stream maps benefit DevOps, see the article How to Use Value Stream Mapping in DevOps on the Lucidchart website.

 Other common components of a VSM

include: lead times, wait times, handoffs, and waste. 

  • Lead time is the length of time between when a code change is committed to the repository and when it is in a deployable state. 
  • Wait time indicates the length of time a product has to wait between teams.  
  • Handoffs are the transfer of information or responsibilities from one party to another. 
  • Waste refers to any time you are not creating value. In software development, there are seven types of waste production. 
    • Partially completed work refers to when software is released in an incomplete state. This leads to more waste because additional work is needed to make updates.
    • Extra features refers to creating waste by doing more work than is required. This may be well-intentioned but can signal a disconnect between what the customer wants and what’s being created.
    • Relearning refers to waste generated from a lack of internal documentation. This can be a result of not investigating software errors, failures, or outages when they occur and having to relearn what to do if they happen again. It also includes having to learn new or unfamiliar technologies, which can create delays or wait times in workflows.
    • Handoff waste can occur in a few places—when project owners change, when roles change, when there is employee turnover, and when there is a breakdown in the communication pipeline between teams.
    • Delays refer to when there are dependencies on coupled parts of the project. A delay in one stage or decision may create a delay in another, which can create a surge in waste.
    • Task switching refers to the waste that is generated when an individual has to jump between tasks, which involves mental context switching. This may result in the individual working more slowly and/or less efficiently.
    • Defects refers to waste that is generated when bugs are released with software. Similar to partially completed work, defects can result in extra time and money down the line, as well as delays and interruptions in workflow due to task switching.

 

Module 4

GitHub and delivery

https://www.youtube.com/watch?v=N-Iv4KIOvKY

GitHub can facilitate your efforts in CI/CD.

How GitHub supports CI/CD

GitHub (via Actions, webhooks, and APIs) supports external CI/CD tools and can restrict merging of a pull request until required steps are completed.

For example, GitHub can refuse to merge a pull request before certain steps are completed, e.g.:

1.   The pull request is reviewed and signed off by one or more code reviewers.

2.   The CI process is completed.

3.   The CI tests have passed.

4.   The pull requester has acknowledged the project license, code of conduct, and coding standards.

 

GitHub Actions

GitHub Actions is a feature of GitHub that allows you to run tasks whenever certain events occur in your code repository.

With GitHub Actions, you are able to trigger any part of a CI/CD pipeline off any webhook on GitHub. 
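To make the webhook mechanism concrete, here is a minimal sketch of a CI-side webhook receiver using Flask (my own assumed choice; GitHub Actions itself needs no such server). GitHub would be configured to POST event payloads to this URL:

# webhook_listener.py: a toy CI endpoint that GitHub can notify about events.
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def handle_event():
    event = request.headers.get("X-GitHub-Event", "unknown")
    payload = request.get_json(silent=True) or {}
    if event == "push":
        # A real CI system would trigger a build pipeline here.
        print(f"Push to {payload.get('ref')}; kicking off a build...")
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)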

 

Resources for more information

GitHub Actions documentation - GitHub Docs

A beginner’s guide to CI/CD and automation on GitHub - The GitHub Blog

GitHub Protips: Tips, tricks, hacks, and secrets from Jason Etcovitch - The GitHub Blog

 

Module 4

Configuration management

Consistency and stability

CM ensures that each component of your code is automatically and properly

1.   built,

2.   monitored, and

3.   updated as needed


Configuration files

Configuration files are commonly referred to as a manifest or playbook

You can think of the statements in configuration files as describing how you want the system to look and perform.

A playbook (configuration file) might say, “I need a server with:

1.   32 GB of RAM (allocated virtually from the host's memory),

2.   running Debian Linux,

3.   with Python 3.9 and

4.   Nginx installed.”

Create a configuration file as the input to your configuration management tool (like Puppet) describing the desired state, as described above.

Pro tip:

Store configuration management files alongside the application code in your revision control system.

 

Continuous deployment (Main Heading)  

Module 4

From staging to production

Like going from coding to the cloud: staging is the path from coding to the cloud, and from the cloud to the end user is production.

Staging is:

1.   Coding

2.   Testing (all types: unit tests, integration tests, etc.)

3.   Testing again (alpha and beta tests)

4.   After removing sensitive information, containerizing the application

5.   Delivering the container to DevOps

6.   DevOps putting the application on a cloud server.

Then production:

DevOps delivers the application from the cloud server to the end user; this is called production. In other words, production means the software is finally in REAL LIFE.


Module 4

Postmortem 

 


Module 4

Qwiklabs assessment: Set up CICD

Module 4

Module 4 review  

 

Graded assessment for module 4

Graded Quiz.50 min

Due Feb 2, 12:59 PM PKT

Congratulations! You passed!

Grade received 85.41%
Latest Submission Grade 85.42%
To pass 80% or higher
Question 1

GitHub is very helpful in continuous integration (CI) because it automatically notifies the CI tools about code changes and whether your commits meet the conditions you have set. Which of the following is the key element that allows communication between your CI system and GitHub?

1 / 1 point
Correct
Question 2

You log on to a virtual call to meet with another software developer to discuss build steps in Cloud Build. What are the build steps that are typically included in the build configuration? Select all that apply.

0.75 / 1 point
Correct
Correct
You didn’t select all the correct answers
Question 3

You are excited to implement a new practice in your coding development. What software development practice describes code changes that occur automatically, frequently, and safely when integrating them into a shared repository?

1 / 1 point
Correct
Question 4

When speaking to a new Python programmer, how might you describe the workflows available in GitHub Actions? Select all that apply.

1 / 1 point
Correct
Correct
Question 5

Your team has just launched a mobile application that translates English into American Sign Language. Upon the release, your team discovers that the app doesn't integrate well with the Android system. Your team fixes the problem urgently and after a few quick rounds of testing, your team pushes out another release. What type of release is this?

1 / 1 point
Correct
Question 6

A software developer pushes out some poorly written code to production. This resulted in a system failure and multiple outages. Which process allows teams to understand and learn from system failures and incidents?

1 / 1 point
Correct
Question 7

What is DevOps? Select the best answer.

1 / 1 point
Correct
Question 8

As part of a development or operations team, what are the benefits of using DevOps tools in the software development lifecycle?

1 / 1 point
Correct
Question 9

Which of the steps of the “from coding to the cloud” process listed below are done by the DevOps team? Select all that apply.

1 / 1 point
Correct
Correct
Question 10

There are seven key concepts of automation for continuous integration. Which of the following are included in those concepts? Select all that apply.

0.5 / 1 point
Correct
Correct
This should not be selected

Not quite. Refer to Automation for more information.

Question 11

Which metric would you employ to measure the length of time it takes for a code change to be committed to a branch and be in a deployable state?

1 / 1 point
Correct
Question 12

Value stream mapping (VSM) can help you identify bottlenecks in your value stream, inefficiencies in your process, and current areas of improvement. Which of the following are common components of VSM? Select all that apply.

0 / 1 point
Correct
Correct

You didn’t select all the correct answers   
