Docker Basics – Part 3

1. To stop and remove ALL containers
docker stop $(docker ps -aq)
docker rm $(docker ps -aq)

2. To stop ALL containers
docker stop $(docker ps -a -q)

3. To remove ALL containers
docker rm -f $(docker ps -a -q)

4. Remove Images
docker rmi $(docker images -qa)
docker images
docker rmi -f b00ea124ed62 529165268aa2 0c45f7936948
docker images

5. Docker Volume Example usage:

docker run -v c:\ContainerData:c:\data:ro for read-only access
docker run -v c:\ContainerData:c:\data:rw for read-write access
docker run -v c:\ContainerData:c:\data for read-write access (the default)

docker run -itd -p 8030:80 -m 1GB --name nginx1 -v c:/html:/usr/share/nginx/html nginx

docker run -itd -p 8040:80 -m 1GB --name nginx2 -v c:/html:/usr/share/nginx/html:ro nginx:v2

6. Docker run Examples

--privileged
$ docker run -t -i --rm ubuntu bash
root@bc338942ef20:/# mount -t tmpfs none /mnt
mount: permission denied

$ docker run -t -i --privileged ubuntu bash
root@50e3f57e16e6:/# mount -t tmpfs none /mnt
root@50e3f57e16e6:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
none            1.9G     0  1.9G   0% /mnt

-w
$ docker run -w /path/to/dir/ -i -t ubuntu pwd
The -w flag runs the command inside the given working directory, here /path/to/dir/.
Note: If the path does not exist, it is created inside the container.

docker run -itd -p 8050:80 -m 1GB --name nginx3 -w //usr//share//nginx//html -v c:/html:/usr/share/nginx/html nginx

-e, --env, --env-file
$ docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash

$ docker run --env VAR1=value1 --env VAR2=value2 ubuntu env | grep VAR
VAR1=value1
VAR2=value2
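
The env.list referenced above is a plain text file with one VARIABLE=value pair per line. A minimal sketch of its format (the variable values are made up, and the docker step only runs if a daemon is reachable):

```shell
# Hypothetical env.list: one variable per line, no quoting needed.
printf 'MYVAR1=from-file\nMYVAR2=foo\n' > env.list
cat env.list

# If a Docker daemon is reachable, the file is consumed like this:
if docker info >/dev/null 2>&1; then
  docker run --rm --env-file ./env.list ubuntu env | grep MYVAR
fi
```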

7. Limiting Memory Examples
$ docker run -d -p 8081:80 --memory=20m --memory-swap=20m nginx
$ docker container run -d --memory-reservation=250m --name mymem1 alpine:3.8 sleep 3600

8. Limiting CPU Examples:
--cpus
Docker 1.13 and higher:
$ docker run -it --cpus=".5" ubuntu /bin/bash

Docker 1.12 and lower:
$ docker run -it --cpu-period=100000 --cpu-quota=50000 ubuntu /bin/bash
$ docker run -it --cpu-shares="512" ubuntu /bin/bash

9. To check docker stats :
$ docker stop $(docker ps -aq); docker rm $(docker ps -aq)
$ docker run -itd -p 8030:80 --name nginx7 -v c:/html:/usr/share/nginx/html:ro nginx:v2
$ docker stats
CONTAINER ID   NAME     CPU %   MEM USAGE / LIMIT    MEM %   NET I/O     BLOCK I/O    PIDS
779eb8148aa7   nginx7   0.00%   1.914MiB / 8.75GiB   0.02%   906B / 0B   0B / 4.1kB   2

10. Create and start a container
$ docker create -t -i fedora bash
6d8af538ec541dd581ebc2a24153a28329acb5268abe5ef868c1f1a261221752

$ docker start -a -i 6d8af538ec5
bash-4.2#

11. Docker Copy Examples
Copy a file from host to container:
docker cp Dockerfile 779eb8148aa7:/tmp/Dockerfile
docker exec -it 779eb8148aa7 //bin/bash
docker cp Dockerfile 779eb8148aa7:/tmp/Dockerfile123
docker exec -it 779eb8148aa7 //bin/bash

Copy a file from Docker container to host:
docker cp 779eb8148aa7:/tmp/Dockerfile123 Dockerfile_Delete

Copy a Folder from host to container:
docker cp /home/captain/my_dir ubu_container:/home
docker cp ubu_container:/home/my_dir /home/captain

12. Docker Logs Examples
$ docker logs 779eb8148aa7 --follow

Docker Basics – Part 2

Let's see some basics of Docker on Windows here.

Pre-requisites:
Understanding of Docker technology
A good configuration (with nested virtualization)
Either Windows 10 or Windows Server 2016
NOTE:
A Windows 10 host does not share the host kernel with the container kernel, because Windows 10 is a client version of the OS and does not have a Windows Server Core image.
So Windows 10 makes use of Hyper-V containerization technology to support Windows containers.

Or, you can proceed with installing Docker on Windows Server 2016.
Here the Windows host kernel is shared with the container kernel.

NOTE:
A. If you install Docker on Windows 10, docker info will show something like:

Kernel version: 4.8.5-moby
Operating system: Alpine Linux v3.4
OSType: Linux

B. If you install Docker on Windows Server 2016:

Kernel Version: 10.0 14393 (14393.3204.amd64fre.rs1_release.190830-1500)
Operating System: Windows Server 2016 Datacenter Version 1607 (OS Build 14393.3204)
OSType: windows

C. If you install Docker on Windows 10 and try to pull a Microsoft image from Docker Hub, you may receive errors, something like:
"Unknown blob"

Details:
By default, the installed mode is Linux containers, which will not pull Microsoft images.
To resolve this, you need to "switch to Windows containers".
Once Docker is switched, the Windows containers feature is enabled and Microsoft images are supported.
You can verify the details by checking the Docker information:
Default isolation: hyperv
Kernel version: switched from Linux
Operating system: Windows 10 Enterprise
OSType: Windows

Hyper-V Containers:
In Hyper-V container technology, multiple instances can run concurrently on a host; however, each container runs inside a special virtual machine.
This provides kernel-level isolation between each Hyper-V container and the container host.

Windows Server Containers:
A. Windows Server containers provide application isolation through namespace isolation technology.
The server container shares a kernel with the container host and all containers running on the host.

B. Microsoft currently ships two types of images:
nanoserver
windowsservercore

C. Once your container is running, by default the Windows Docker daemon allocates a 20GB drive, and you can see the drives in Computer Management (Disk Management) in blue. New drives appear whenever a container is being run.
If not, the kernel won't allocate any drives and you won't see any.

NOTE:
Basically, it's an isolated file system of its own, created for each and every container and mounted on the host system.
The isolation will NOT affect the host file system in any manner.
You can then copy a file from the host file system to the container file system, just like sharing a folder between the two.

Namespace isolation:
Namespaces enable the host to give each container a virtualized namespace that includes only the resources it should see.
With this restricted view, a container can't access files not included in its virtualized namespace regardless of permissions, because it simply can't see them; nor can it list or interact with applications that are not part of the container.
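
A quick way to see this restricted view in practice: a host file is invisible inside a container until it is explicitly added to the container's namespace with a mount. A sketch (the file name is illustrative; the docker steps only run if a daemon is reachable):

```shell
# A host file is invisible inside a container until explicitly mounted
# into its namespace (file name illustrative; docker steps need a daemon).
echo "host-only data" > /tmp/host-secret.txt

if docker info >/dev/null 2>&1; then
  # Not mounted: the container's namespace simply does not contain the file.
  docker run --rm alpine ls /tmp
  # Mounted: the file is now part of the container's virtualized namespace.
  docker run --rm -v /tmp/host-secret.txt:/tmp/host-secret.txt:ro alpine cat /tmp/host-secret.txt
fi
```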

Why we need Namespace isolation?
On a single-user computer, a single system environment is fine.

But on a server, where you want to run multiple services, it is essential for security and stability that the services are as isolated from each other as possible.
Imagine a server running multiple services, one of which gets compromised by an intruder; chances are the intruder can then exploit the other services, and perhaps the entire server. Namespace isolation provides a secure environment that eliminates this risk.

Docker on windows provides isolation for:
Process
Network
storage
environment variable
registries
users and groups

Process Isolation:
Every OS kernel has a process tree; both Linux and Windows maintain their own.
The Linux kernel maintains a single process tree from the time it first boots, and every time a new process begins there is a parent-child relationship.
Using process isolation, it becomes possible to have multiple nested process trees. This ensures that processes belonging to one process tree cannot inspect, kill, or even know of the existence of processes in another tree.

Every container has its own process tree. We can use the 'Object ID' parameter to understand the process tree when a new container is spun up.
Even if you kill a process inside that isolated tree, it will not affect the host.
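
A minimal sketch of this, assuming a reachable Docker daemon and the alpine image: the container's process tree contains only its own processes, and removing the container kills only that tree:

```shell
# Sketch: the process tree inside a container contains only its own
# processes (assumes a reachable Docker daemon and the alpine image).
if docker info >/dev/null 2>&1; then
  cid=$(docker run -d alpine sleep 300)
  docker exec "$cid" ps           # inside: just sleep (PID 1) and ps itself
  docker rm -f "$cid" >/dev/null  # removing the container kills only its tree
fi
echo "the host's own process tree is unaffected"
```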

Network isolation:
Each container has its own network address.
Network isolation allows each of these processes to see an entirely different set of network interfaces.
Even the loopback interface is different for each network namespace.
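
This can be observed by inspecting the addresses Docker assigns; a sketch assuming a reachable daemon and the alpine image:

```shell
# Sketch: two containers receive independent network stacks and addresses
# (assumes a reachable Docker daemon and the alpine image).
if docker info >/dev/null 2>&1; then
  c1=$(docker run -d alpine sleep 300)
  c2=$(docker run -d alpine sleep 300)
  docker inspect -f '{{.NetworkSettings.IPAddress}}' "$c1"
  docker inspect -f '{{.NetworkSettings.IPAddress}}' "$c2"  # a different address
  docker rm -f "$c1" "$c2" >/dev/null
fi
```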

Scaling out with Docker:
When we scale out, each and every container does a specific job, and since every container does a specific job, we can even spin up multiple containers at the same time.

This is more like getting one job done from different machines.

Microservices:
Microservices is an approach to application development where every part of the application is deployed as a fully self-contained component, called a microservice, that can be individually scaled and updated.

When an application is constructed using microservices, each subsystem is a microservice.
In a production environment, each can scale out to a different number of instances across a cluster of servers, depending on its resource demands as customer request levels rise and fall.

Dockerfile:

Docker can build images automatically by reading the instructions from a Dockerfile, a text file that contains all commands, in order, needed to build a given image.

Docker compose file has .yml extension
Dockerfile has NO EXTENSION

Sample Dockerfile for windows:
FROM microsoft/iis

SHELL ["powershell"]

RUN install-windowsfeature net-framework-45-aspnet; \
    install-windowsfeature web-asp-net45

COPY empapp c:/empapp

RUN remove-website -name 'Default Web Site'

RUN new-website -Name 'guidegenerator' -port 80 \
    -physicalpath 'c:\empapp' -applicationpool '.net v4.5'

EXPOSE 80

CMD ["ping", "-t", "localhost"]

Docker Basics – Part 1

1. Installation and Remove Commands

Here are a few commands to get started: installing Docker and Docker Swarm, removing Docker from your machine, etc.

1. curl -fsSL get.docker.com -o get-docker.sh
2. sh get-docker.sh
3. sudo usermod -aG docker riyas
4. logout
5. systemctl start docker.service
6. docker swarm join --token SWMTKN-1-251oekzthxdu2s9oq5w7xxxxxxxxxxxxxxxxxxxxlh3g62yjfevwv4-6xprhxcuvrpri3nozs0h84o1
7. docker swarm join --token SWMTKN-1-251oekzthxdu2s9oq5w7gxxxxxxxxxxxxxxxxxxxxx2yjfevwv4-6xprhxcuvrpri3nozs0h84o15 10.2.1.4:2377
8. docker swarm leave
9. yum remove docker docker-engine docker.io docker-ce

2. To Check/ADD/Update the repo for Docker:

To check existing:
cat /etc/yum.repos.d/dockerrepo.repo

To Edit:
sudo vim /etc/yum.repos.d/dockerrepo.repo
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/$releasever/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg

3. Getting started with Docker container.

a. To list all running containers
docker ps
docker container ls

b. To list all running and stopped containers
docker ps -a
docker container ls -a

c. To list all docker images
docker images
docker images -a

d. To pull a docker image
docker pull nginx

NOTE:
By default, the tag is 'latest'. If we need a specific version or tag, we must specify it.
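
For example, pinning a specific tag instead of relying on the implicit latest (the version shown is illustrative; the pull only runs if a daemon is reachable):

```shell
# Pin a specific tag instead of relying on the implicit :latest.
image="nginx:1.25-alpine"   # illustrative version tag
if docker info >/dev/null 2>&1; then
  docker pull "$image"
  docker images nginx       # each tag is a separate local image
fi
```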

e. To run a docker container
docker run nginx
docker run -it nginx
docker run -it –name testnginx nginx
docker run -itd –name testnginx2 nginx
docker run -it –name testnginx3 nginx
docker run -i -t alpine /bin/sh

NOTE:
In interactive mode, we can press Ctrl+P, Ctrl+Q to detach from the container without exiting.
We can specify the shell to interact with.
When running a container, Docker searches for the image locally and falls back to the registry if it is not found.

f. To check logs of container
docker logs <container>

g. To run a command inside container
docker exec <container> ls
docker exec -it <container> /bin/sh

h. To stop a container
docker stop <container>

i. To delete a container
docker rm <container>

j. To delete a docker image
docker rmi nginx

k. To look for help
docker --help

4. Docker Images

Docker images are the read-only templates from which a container is created.
Docker images can be built using docker commit. Docker provides a very simple way to create images. The workflow goes like this:

• Create a container from a Docker image
• Make required changes, like installing a webserver or adding a user
• On command line execute the following

docker commit <container> <new-image-name>

This is the simplest method to create Docker images by ourselves.
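
The three steps above can be sketched end to end, assuming a reachable Docker daemon; the image name myimage:v1 and the change made inside the container are illustrative:

```shell
# Sketch of the commit workflow (assumes a reachable Docker daemon;
# the image name myimage:v1 and the change made are illustrative).
if docker info >/dev/null 2>&1; then
  # 1. Create a container from an image and change something inside it.
  cid=$(docker run -d alpine sh -c 'echo customized > /etc/motd && sleep 300')
  # 2. Snapshot the container's filesystem into a new image.
  docker commit "$cid" myimage:v1
  # 3. Containers started from the new image carry the change.
  docker run --rm myimage:v1 cat /etc/motd
  docker rm -f "$cid" >/dev/null
fi
```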

5. Build an image using a Dockerfile:

Docker provides an easy way to build images from a description file.
This file is known as Dockerfile. A simple Dockerfile will look like this

Sample Dockerfile

——
FROM centos
RUN yum -y update && yum clean all
RUN yum -y install httpd
EXPOSE 80
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
——

Dockerfile supports various commands. A few of them are described below:
FROM: This defines the base image for the image that would be created
MAINTAINER: This defines the name and contact of the person that has created the image.
RUN: This will run the command inside the container.
EXPOSE: This informs Docker that the container listens on the specified port.
CMD: Command to be executed when the container starts.

Let us create a file with the above lines and build an image from this.

docker build -t myimage .

Images created this way document the steps involved clearly and hence using Dockerfile is a good way to build reproducible images.

It is also worth noting that each line in a Dockerfile creates a new layer in the Docker image.
So often, we will club statements using the AND operator (&&) like this:

RUN yum -y update && yum clean all
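
The effect of clubbing statements can be checked with docker history, which lists one row per layer. A sketch (the file name Dockerfile.layers and the tag layerdemo are illustrative; the build only runs if a daemon is reachable):

```shell
# Clubbing two commands into one RUN yields a single layer;
# docker history lists one row per layer (file and tag are illustrative).
cat > Dockerfile.layers <<'EOF'
FROM centos
RUN yum -y update && yum clean all
EOF

if docker info >/dev/null 2>&1; then
  docker build -f Dockerfile.layers -t layerdemo . && docker history layerdemo
fi
```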

Terraform Basics

1. How to find the Terraform state of any configuration?

Ans:
terraform state list
terraform state show 'attribute'

Example:
1.a LoginName@Azure:~/directorysampleone$ terraform state list
azurerm_resource_group.rgterra

1.b LoginName@Azure:~/directorysampletwo$ terraform state list
azurerm_network_interface.nicterrademo
azurerm_network_security_group.nsgterrademo
azurerm_public_ip.publicipterrademo
azurerm_resource_group.rgterrademo
azurerm_storage_account.mystorageaccount
azurerm_subnet.subnetterrademo
azurerm_virtual_network.vnetterrademo
random_id.randomId

1.c LoginName@Azure:~/directorysampleone$ terraform state show 'azurerm_resource_group.rgterra'
# azurerm_resource_group.rgterra:
resource "azurerm_resource_group" "rgterra" {
    id       = "/subscriptions/xxxxx-xxxxx-xxxxx-xxxxx/resourceGroups/terratestResourceGroupa"
    location = "westus"
    name     = "terratestResourceGroupa"
    tags     = {
        "billingcode" = "az103-9009"
    }
}

2. Write configuration for creating a resource group.

Syntax:
provider "azurerm" {
    version = "~>1.32.0"
}

# Create a new resource group
resource "azurerm_resource_group" "rg" {
    name     = "myTFResourceGroup"
    location = "eastus"
}

3. Find the provider version in any given directory of Terraform configuration.

Ans:
LoginName@Azure:~/directorysampleone$ terraform --version
Terraform v0.12.21
+ provider.azurerm v1.44.0

LoginName@Azure:~$ terraform --version
Terraform v0.12.21

LoginName@Azure:~/directorysampleone$ cat test.tf
provider "azurerm" {
}
resource "azurerm_resource_group" "rgterra" {
        name = "terratestResourceGroupa"
        location = "westus"
}

NOTE:
In the configuration in 'directorysampleone', no provider version is mentioned, so terraform init uses the latest available provider version to initialize.

4. What are the steps to build an infrastructure?
terraform init
terraform plan
terraform apply

5. How to Destroy an Infrastructure?
terraform plan -destroy
terraform destroy
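
Putting questions 2, 4 and 5 together, a minimal end-to-end sketch (resource names follow the earlier example; apply and destroy are left commented out so the sketch is safe to run, and the terraform commands are only attempted if the CLI is installed):

```shell
# Minimal configuration, reusing the resource group from question 2.
cat > main.tf <<'EOF'
provider "azurerm" {
}

resource "azurerm_resource_group" "rg" {
  name     = "myTFResourceGroup"
  location = "eastus"
}
EOF

# Lifecycle commands, attempted only if the Terraform CLI is installed.
# apply and destroy are commented out so the sketch is safe to run.
if command -v terraform >/dev/null 2>&1; then
  terraform init || true   # downloads the azurerm provider
  # terraform plan         # preview the changes
  # terraform apply        # build the infrastructure
  # terraform destroy      # tear everything down again
fi
```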

Azure Storage – Part 1

Azure Storage

Creating a Storage Account

Login to your Azure account.
1. In the dashboard, search for Storage
2. Click on Storage accounts
3. Fill in the mandatory fields to create a storage account.
4. Four fields are important while creating a storage account
Performance (Premium or Standard)
Account Kind (Storage V2, Storage V1, Blob storage)
Replication (LRS, GRS, ZRS, RA-GRS)
Access tier (Cool or Hot)

NOTE:
The storage account name must be unique across all storage accounts in Azure.

Standard storage accounts are backed by magnetic drives and provide the lowest cost per GB. They’re best for applications that require bulk storage or where data is accessed infrequently.

Premium storage accounts are backed by solid-state drives and offer consistent, low-latency performance. They can only be used with Azure virtual machine disks, and are best for I/O-intensive applications, like databases.

Additionally, virtual machines that use Premium storage for all disks qualify for a 99.9% SLA, even when running outside of an availability set.

General-purpose storage accounts provide storage for blobs, files, tables, and queues.
Blob storage is specialized for storing blob data.
Access tier specifies how frequently your storage is going to be used.

5. You can fine-tune the storage account by clicking on advanced, tags, networking options
6. Review and Create.
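
The same account can also be created from the Azure CLI; the flags map directly to the fields in step 4. A sketch (the account and resource group names are placeholders, and the command only runs if the az CLI is installed):

```shell
# The key choices from step 4 map directly to CLI flags (values illustrative).
account="mystorageacct12345"   # placeholder; must be globally unique
rg="myResourceGroup"           # placeholder resource group

if command -v az >/dev/null 2>&1; then
  az storage account create \
    --name "$account" \
    --resource-group "$rg" \
    --location eastus \
    --sku Standard_LRS \
    --kind StorageV2 \
    --access-tier Hot
fi
```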

Working with Storage account
1. Go to the storage account created.
2. Go to Settings – Access Keys – Note down the storage name and Access Keys.
3. Let’s Explore the storage using ‘Azure Storage Explorer’
4. Download and install the Azure storage explorer If not present.
5. Once installed, open the explorer and connect to your storage using the name and key copied earlier.
6. Once Connected, we can see the storage types:
Blob Containers
File Shares
Queues
Tables

Azure Blob:
1. In Storage Explorer, right-click on Blob Containers, select Create Blob Container and name your container.
2. Once the container is created, upload a sample file.
3. We can right-click the uploaded file and see the properties.
4. Copy the URL; by default it cannot be accessed publicly.
5. Go to the container – Right Click – Manage ‘set Public access level’ – Choose ‘Public read access for container and blobs’ – Apply.
6. Now the URL can be accessed over the internet.

NOTE:
You can do the procedure on creating containers/changing permission from your portal as well.

Azure Tables:
1. In storage explorer, right-click on tables
2. Create a new table.
3. Click on Add to add a new column to your table.
4. Set the column's properties (name, data type, value) and add the column.

Azure Files:
1. In storage explorer, right-click on ‘File Shares’
2. Create a new Share
3. Once created, note down the URL.
4. Connect to the file share from your local desktop using ‘Map Network Drive’
5. The ‘storage account name’ and ‘Key’ can be used for authentication.
6. Or, you can simply click on the 'Fileshare' in the Azure portal and click on the 'Connect' button.
7. Copy the Powershell script and run it in your local machine.

Azure Queues:
1. In storage explorer, right-click on ‘Queues’
2. Create a queue
3. Go to queue – Click on ‘Add Message’ – Set the time limit for the message.
4. After the expiration, the message will no longer be in the queue.