Let's look at some basics of Docker on Windows here. What you need:
An understanding of Docker technology
A good hardware configuration (with nested virtualization if the host is itself a VM)
Either Windows 10 or Windows Server 2016
A Windows 10 host does not share its kernel with the container, because Windows 10 is a client version of the OS and there is no Windows 10 server core image.
So Windows 10 uses Hyper-V containerisation technology to support Windows containers.
Alternatively, you can install Docker on Windows Server 2016.
There the Windows host kernel is shared with the containers.
A. If you install Docker on Windows 10, docker info will show something like:
Kernel version: 4.8.5-moby
Operating system: Alpine Linux v3.4
B. If you install Docker on Windows Server 2016:
Kernel Version: 10.0 14393 (14393.3204.amd64fre.rs1_release.190830-1500)
Operating System: Windows Server 2016 Datacenter Version 1607 (OS Build 14393.3204)
C. If you install Docker on Windows 10 and try to pull a Microsoft image from Docker Hub, you may receive an error, something like:
By default, the installed daemon runs in Linux mode, which cannot pull Microsoft (Windows) images.
To resolve this, you need to “Switch to Windows containers”.
Once Docker is switched, the Windows containers feature is enabled and Microsoft images are supported.
You can verify this by checking docker info again:
Kernel version: switched to linux
Operating system: Windows 10 Enterprise
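The check-and-switch step described above, as an added PowerShell example (not from the original notes). It reads the daemon's OS type from docker info, toggles Docker Desktop to Windows containers with DockerCli.exe when it is still in Linux mode, and only then pulls a Microsoft image. The DockerCli.exe install path and the image tag are assumptions; pick a tag that matches your host build.

```powershell
# Added example: make sure Docker Desktop is serving the Windows daemon before pulling a Microsoft image.
# Assumes Docker Desktop in its default install location (assumption, adjust if needed).
$dockerCli = Join-Path $Env:ProgramFiles 'Docker\Docker\DockerCli.exe'

function Get-DockerDaemonMode {
    # Returns 'linux' or 'windows', the same OSType value docker info prints
    $osType = (docker info --format '{{.OSType}}')
    if ($LASTEXITCODE -ne 0) { throw 'docker info failed: is the daemon running?' }
    return $osType.Trim()
}

function Show-DockerKernelInfo {
    # The same fields the notes compare between Windows 10 and Windows Server 2016
    docker info --format 'OSType: {{.OSType}}  Kernel: {{.KernelVersion}}  OS: {{.OperatingSystem}}  Default isolation: {{.Isolation}}'
}

$mode = Get-DockerDaemonMode
if ($mode -ne 'windows') {
    Write-Host "Daemon is in '$mode' mode; switching to Windows containers..."
    & $dockerCli -SwitchDaemon
    # The switch takes a few seconds and docker info errors out meanwhile, so poll until it answers in Windows mode
    do {
        Start-Sleep -Seconds 2
        try { $mode = Get-DockerDaemonMode } catch { $mode = 'unknown' }
    } until ($mode -eq 'windows')
}

Show-DockerKernelInfo
# Assumed image/tag: on Windows 10 (Hyper-V isolation) the image must not be newer than the host build
docker pull mcr.microsoft.com/windows/servercore:ltsc2016
```

The Kernel and OS fields printed at the end are what the notes above show for each host; on Windows 10 the default isolation reported is hyperv, on Windows Server 2016 it is process.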
In Hyper-V container technology, multiple instances can run concurrently on a host; however, each container runs inside a special virtual machine.
This provides kernel-level isolation between each Hyper-V container and the container host.
Windows Server Container:
A. Windows Server containers provide application isolation and namespace isolation technology.
A server container shares its kernel with the container host and with all other containers running on the host.
B. Microsoft is currently shipping two types of images:
C. Once your container is running, by default the Windows Docker daemon allocates a 20GB drive for it, and you can see these drives in Computer Management (Disk Management) in blue colour. A new drive appears whenever a container is run.
If no container is running, the kernel won't allocate any drives and you won't see any.
Basically, it is an isolated file system of its own, created for each and every container and mounted on the host system.
The isolation will NOT affect the host file system in any manner.
You can then copy a file from the host file system to the container file system, just like sharing a folder between the two.
Namespaces enable the host to give each container a virtualized namespace that includes only the resources it should see.
With this restricted view, a container can't access files not included in its virtualized namespace, regardless of permissions, because it simply can't see them; nor can it list or interact with applications that are not part of the container.
Why do we need namespace isolation?
On a single-user computer, a single system environment is fine.
But on a server, where you want to run multiple services, it is essential for security and stability that the services are isolated from each other as much as possible.
Imagine a server running multiple services, one of which gets compromised by an intruder: chances are that the intruder may be able to exploit the other services, and maybe the entire server. Namespace isolation can provide a secure environment that eliminates this risk.
Docker on Windows provides isolation for:
users and groups
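The per-container file system sandbox and the user isolation described above, as an added PowerShell example (not from the original notes). A constructed file is copied into a container's own file system with docker cp, modified there, and the host copy is shown to be untouched; the same image is then run as the default ContainerAdministrator account and as the unprivileged ContainerUser account. It assumes Docker is in Windows-container mode and that the servercore image tag (an assumption) matches the host build.

```powershell
# Added example: file system sandbox and user isolation of a Windows container.
$image = 'mcr.microsoft.com/windows/servercore:ltsc2016'   # assumed image/tag; match your host build
$name  = 'fs-demo'

# Constructed sample file on the host
$hostDir  = Join-Path $env:TEMP 'fs-demo'
$hostFile = Join-Path $hostDir 'hello.txt'
New-Item -ItemType Directory -Force -Path $hostDir | Out-Null
Set-Content -Path $hostFile -Value 'created on the host'

function Test-ContainerFileSandbox {
    docker rm -f $name 2>$null | Out-Null
    # Create (not run) first: docker cp into a stopped container works in both isolation modes
    docker create --name $name $image ping -t localhost | Out-Null
    if ($LASTEXITCODE -ne 0) { throw 'could not create the container' }

    # Copy the host file into the container's own 20GB sandbox drive
    docker cp $hostFile "${name}:C:\hello.txt"
    docker start $name | Out-Null

    # Overwrite the copy inside the container, then compare both sides
    docker exec $name cmd /c "echo changed inside the container > C:\hello.txt"
    $inside = (docker exec $name cmd /c "type C:\hello.txt").Trim()
    $onHost = (Get-Content $hostFile).Trim()

    docker rm -f $name | Out-Null
    [pscustomobject]@{ InsideContainer = $inside; OnHost = $onHost; HostUnaffected = ($onHost -eq 'created on the host') }
}

Test-ContainerFileSandbox | Format-List

# User isolation: the container's own accounts, not the host's.
# Default account is ContainerAdministrator; --user ContainerUser drops to an unprivileged account,
# which is the setting to prefer for a service that does not need to change the container.
docker run --rm $image whoami
docker run --rm --user ContainerUser $image whoami
```

To change the default 20GB sandbox size for a container, Docker on Windows accepts a storage option at run time, for example docker run --storage-opt size=50G; the sandbox is still a separate virtual disk mounted on the host, as the notes describe.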
Every OS kernel has a process tree; Linux and Windows each have their own.
The Linux kernel maintains a single process tree from the moment it boots, and every time a new process begins there is a parent-child relationship.
Using process isolation, it becomes possible to have multiple nested process trees. This ensures process isolation: processes belonging to one process tree cannot inspect or kill, or even know of the existence of, processes in another tree.
Every container has its own processes. We can use the ‘Object ID’ parameter to understand the process tree when a new container is spun up.
Even if you kill a process in that isolated process tree, it will not affect the host.
Each container has its own network address.
Network isolation allows each of these processes to see an entirely different set of network interfaces.
Even the loopback interface is different for each network namespace.
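The three isolation points above (Hyper-V versus process isolation, the per-container process tree, and the per-container network namespace), as one added PowerShell example that is not from the original notes. It starts one container in each isolation mode and reports for each the isolation the daemon actually applied, whether the container's ping.exe is visible in the host's process tree, and the address the container got in its own network namespace. It assumes a Windows Server 2016 host with the Hyper-V role installed, Docker in Windows-container mode, and a servercore image tag (an assumption) matching the host build.

```powershell
# Added example: compare process isolation and Hyper-V isolation on a Windows Server 2016 host.
$image = 'mcr.microsoft.com/windows/servercore:ltsc2016'   # assumed image/tag; match your host build

function Start-TestContainer([string]$Name, [string]$Isolation) {
    docker rm -f $Name 2>$null | Out-Null
    docker run -d --name $Name --isolation $Isolation $image ping -t localhost | Out-Null
    if ($LASTEXITCODE -ne 0) { throw "could not start $Name with --isolation=$Isolation" }
    Start-Sleep -Seconds 2   # give ping.exe a moment to start inside the container
}

function Get-ContainerFacts([string]$Name) {
    # PID of ping.exe as seen from inside the container
    $pidInside = [int](docker exec $Name powershell -NoProfile -Command '(Get-Process -Name ping).Id')
    # Process isolation shares the host kernel, so the container PID is a host PID and the host can
    # inspect that process. Under Hyper-V isolation the PID belongs to the utility VM: the host only
    # sees vmwp.exe, so the lookup fails or lands on an unrelated host process with the same number.
    $onHost = Get-Process -Id $pidInside -ErrorAction SilentlyContinue
    [pscustomobject]@{
        Name          = $Name
        Isolation     = docker inspect --format '{{.HostConfig.Isolation}}' $Name
        ContainerPid  = $pidInside
        VisibleOnHost = ($null -ne $onHost -and $onHost.ProcessName -eq 'PING')
        # Address inside the container's own network namespace (the nat network by default)
        IPAddress     = docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $Name
    }
}

Start-TestContainer -Name 'proc-iso' -Isolation 'process'
Start-TestContainer -Name 'hv-iso'   -Isolation 'hyperv'
'proc-iso', 'hv-iso' | ForEach-Object { Get-ContainerFacts $_ } | Format-Table -AutoSize

# Host view of its own interfaces, for comparison with the container addresses above
Get-NetIPAddress -AddressFamily IPv4 | Select-Object InterfaceAlias, IPAddress | Format-Table -AutoSize

docker rm -f proc-iso hv-iso | Out-Null
```

The VisibleOnHost column is the practical meaning of "shares the kernel": a process-isolated container's processes sit in the host's process tree (which is why the notes say Windows 10 cannot offer it and falls back to Hyper-V), while a Hyper-V isolated container's processes do not, at the cost of a small virtual machine per container.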
Scaling out with Docker:
When we scale out, each and every container does a specific job, and because every container does a specific job we can even spin up multiple containers at the same time.
This is more like getting one job done by different machines.
Microservices is an approach to application development where every part of the application is deployed as a fully self-contained component, called a microservice, that can be individually scaled and updated.
When an application is constructed using microservices, each subsystem is a microservice.
In a production environment, each can scale out to a different number of instances across a cluster of servers depending on its resource demands as customer request levels rise and fall.
Docker can build images automatically by reading the instructions from a Dockerfile, a text file that contains all the commands, in order, needed to build a given image.
A Docker Compose file has the .yml extension.
A Dockerfile has NO EXTENSION.
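Scaling one container out to several identical instances, as described in the scaling section above, as an added PowerShell example (not from the original notes). It writes a docker-compose.yml that builds the Windows Dockerfile in the current directory, brings up three instances of the service with docker-compose, and lists the address each instance got. The service name empapp is an assumption, and it assumes Docker is in Windows-container mode with the Dockerfile and the empapp folder in the current directory.

```powershell
# Added example: scale a Windows container out with Docker Compose.
# The compose file gets the .yml extension; the Dockerfile beside it has none.
$compose = @'
version: '3'
services:
  empapp:
    build: .
    image: empapp:latest
'@
Set-Content -Path 'docker-compose.yml' -Value $compose -Encoding ascii

# Build the image once, then run three instances of the same container
docker-compose up -d --build --scale empapp=3
if ($LASTEXITCODE -ne 0) { throw 'docker-compose failed' }

function Get-EmpappInstances {
    # One entry per running instance, with the address each got in its own network namespace
    docker ps --filter 'name=empapp' --format '{{.Names}}' | ForEach-Object {
        [pscustomobject]@{
            Container = $_
            IPAddress = docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $_
        }
    }
}
Get-EmpappInstances | Format-Table -AutoSize

# Scale back down and remove everything compose created
docker-compose down
```

Each instance is a separate container with its own process tree, file system sandbox and network address, which is what lets the same job run on several "machines" at once as the notes put it; a load balancer or reverse proxy in front of the instances is what production setups add on top.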
Sample Dockerfile for Windows (the escape directive, base image, SHELL line and the Web-Asp-Net45 feature are added so the PowerShell cmdlets below run as written; the image tag is an assumption, pick one matching your host build):
# escape=`
FROM mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2016
SHELL ["powershell", "-Command"]
# ASP.NET 4.5 support; Web-Asp-Net45 (added) creates the '.NET v4.5' application pool used below
RUN Install-WindowsFeature NET-Framework-45-ASPNET
RUN Install-WindowsFeature Web-Asp-Net45
# copy the application folder beside this Dockerfile to the path the site serves
COPY empapp c:\empapp
RUN Remove-Website -Name 'Default Web Site'
RUN New-Website -Name 'guidegenerator' -Port 80 -PhysicalPath 'c:\empapp' -ApplicationPool '.NET v4.5'
# keeps the container running; IIS itself runs as a service in the background
CMD ["ping", "-t", "localhost"]