Key Concepts
- Docker Containers: Docker containers are lightweight, portable, and isolated environments that include everything needed to run a piece of software, including the code, runtime, system tools, libraries, and settings.
- Docker Images: An image is a blueprint for a container, defining the application and its dependencies. Images are often based on other images (like an Ubuntu or Python image) and can be built using Dockerfiles.
- Dockerfile: A text file containing instructions on how to build a Docker image. It defines the base image, application code, environment variables, and commands to be run inside the container.
- Docker Hub: A public registry where Docker users can share and store images. Docker Hub provides a collection of pre-built images, which can be pulled and used to create containers.
- Volumes: These are used to persist data generated or used by Docker containers. While containers are ephemeral, volumes allow data to outlive the lifecycle of a container.
- Docker Compose: A tool for defining and managing multi-container Docker applications. Using a YAML file (docker-compose.yml), you define the services, networks, and volumes your application requires (a minimal example follows this list).
- Portability: Containers run the same way regardless of the environment.
- Efficiency: Containers share the OS kernel, making them more lightweight than traditional virtual machines.
- Speed: Containers are fast to start and run compared to traditional virtual machines.
- Scalability: Easily scale applications by running multiple containers in parallel.
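As a minimal illustration of the docker-compose.yml format mentioned above (a sketch; the nginx service, port mapping, and my_data_volume names reuse examples from later in these notes):
services:
  web:
    image: nginx
    ports:
      - "8080:80"
    volumes:
      - my_data_volume:/usr/share/nginx/html
volumes:
  my_data_volume:
Start everything defined in the file with "docker compose up -d" and tear it down with "docker compose down".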
- Get system-wide information.
docker info
It provides detailed information about your Docker installation, including the Docker daemon, containers, images, volumes, networks, and system-wide details such as memory, CPUs, and the storage driver.
- List all docker commands
docker --help
You can get help for a specific command with the "docker <COMMAND> --help" syntax. For example, the following command lists all options for 'docker run':
docker run --help
- Docker root directory: /var/lib/docker/ (on Linux)
- List all images
docker images
- List all running containers
docker ps
It lists only active (running) containers.
- List all containers
docker ps -a
It lists both active (running) and inactive (stopped, exited) containers.
- List all stopped containers
docker ps --filter "status=exited"
Here "status=exited" indicates stopped/exited/etc containers which are not active(not running)
- List last n created containers
docker ps -n 2
- List latest created container
docker ps -l
- Start stopped container
docker start <CONTAINER_ID_OR_NAME>
You can find the container ID and name with the "docker ps -a" command.
- Restart container
docker restart <CONTAINER_ID_OR_NAME>
- Stop running container
docker stop <CONTAINER_ID_OR_NAME>
You can check all running containers with the "docker ps" command. To force-stop a container immediately (SIGKILL rather than a graceful SIGTERM), use:
docker kill <CONTAINER_ID_OR_NAME>
- Pause/unpause a container (suspends/resumes all processes inside it)
docker pause <CONTAINER_ID_OR_NAME>
docker unpause <CONTAINER_ID_OR_NAME>
- Attach to interact with a running container
docker attach <CONTAINER_ID_OR_NAME>
- Rename container
docker rename <CONTAINER_ID_OR_NAME> <NEW_NAME>
- Remove (delete) container
docker rm <CONTAINER_ID_OR_NAME>
You can forcefully delete a running container with the "-f" flag
docker rm -f <CONTAINER_ID_OR_NAME>
- Remove (delete) images
docker rmi <IMAGE_ID>
- Create and run a new container from an image
docker run hello-world
It first downloads the image from the registry if it is not found on the local system
The following command creates and runs a new container from the 'nginx' image. Here we publish container port 80 (right side) on host machine port 80 (left side), i.e. the container's port 80 can be accessed on host port 80. The -d flag runs the container in the background (detached).
docker run -d -p 80:80 nginx
The following command publishes nginx container port 80 on host machine port 8080 (you can now reach the nginx service on the host at http://localhost:8080)
docker run -d -p 8080:80 nginx
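You can verify the mapping from the host (assuming nothing else is listening on port 8080):
curl http://localhost:8080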
👉 In the following command, -i (interactive) keeps stdin open so you can send input to the container, -d detaches it to run in the background, -t allocates a terminal interface (TTY), and -p publishes a port. Find more with the "docker run --help" command.
👉 You will need the "-dit" or "-it" flags with many commands, e.g. "docker run -dit -p 80:80 nginx" or "docker exec -it <CONTAINER_ID> /bin/bash", so remember them.
docker run -d -i -t -p 8080:80 nginx
You can combine the short flags (the flag that takes a value must come last):
docker run -ditp 8080:80 nginx
- Create a new container from an image
👉 Difference between "docker run" and "docker create" commands
The "docker create" command is used to create a new container from an image, but it does not start the container. It returns the container ID, which you can use for further actions. Once the container is created, you can start it using the "docker start" command
docker create nginx
docker start <CONTAINER_ID_OR_NAME>
The "docker create" is useful when you want to set up a container with specific configurations before running it. For example, you might want to create the container, inspect it, configure networking, or set up volumes before it starts executing its tasks. This approach gives you more control over the container's initial setup.
- Download an image from registry
docker pull mysql
Download an image with a specific version (tag) from the registry
docker pull mysql:5.7
- Execute a command inside a running container
docker exec <CONTAINER_ID_OR_NAME> <COMMAND>
E.g. docker exec my_container ls /
The following command opens a bash shell inside the container
docker exec -it my_container /bin/bash
- Create a new custom image from an existing container's changes
docker commit <CONTAINER_ID> <MY_CUSTOM_IMAGE>
E.g. docker commit -m "Added custom software" -a "Siraj Chaudhary" my_container my_custom_image:v1
In the following example we will run an ubuntu container, install software inside it via its bash shell, create a new custom image from the updated container, and finally create and run a new container from that custom image.
#Pull and run a new ubuntu container.
docker pull ubuntu
docker run -dit ubuntu
#Access the bash shell of container
docker exec -it <CONTAINER_ID> /bin/bash
#Install some software (e.g. apache2) inside the container, create a folder and files, etc.
apt-get update
apt-get install -y apache2
service apache2 start
service apache2 status
mkdir myfolder
cd myfolder
touch myfile1.txt
exit
#Create a new image out of that container's changes
docker commit -m "Added custom software" -a "Siraj Chaudhary" <CONTAINER_ID> my_custom_image:v1
#Create and run a container from the created new image
docker run -dit my_custom_image:v1
#Access the bash shell of container and check apache2 service is running
docker exec -it <CONTAINER_ID> /bin/bash
service apache2 status
Note: The better alternative to "docker commit" is creating and using a Dockerfile
- Build your own image using a Dockerfile
mkdir my_workspace
cd my_workspace
touch app.py
touch Dockerfile
Step1: Create app.py
print("Hello, Docker!")
Step2: Create a Dockerfile
# Use an official Python runtime as a parent image
FROM python:3.8-slim
# Set the working directory in the container
WORKDIR /my_workspace
# Copy the current directory contents into the container at /my_workspace
COPY . /my_workspace
# Run the application
CMD ["python", "app.py"]
Step3: Build the docker image
docker build -t my-python-app .
Step4: Create and run docker container
docker run my-python-app
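If everything worked, you should see the script's output (the container exits once app.py finishes):
Hello, Docker!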
- Login and push image to docker hub
Step1: Log in to Docker Hub and create a repository under your username, e.g. sirajchaudhary/my-python-app
Step2: Log in to Docker Hub from the terminal
docker login
Step3: Tag the image
docker tag my-python-app sirajchaudhary/my-python-app:v1
Step4: Push the image
docker push sirajchaudhary/my-python-app:v1
Step5: Logout from docker hub
docker logout
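On any other machine, the pushed image can now be pulled and run directly (after docker login if the repository is private):
docker pull sirajchaudhary/my-python-app:v1
docker run sirajchaudhary/my-python-app:v1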
- Launch MySQL container
Create and run a new mysql container, setting the database and root password with environment variables (--env)
docker run -d -p 3306:3306 --env="MYSQL_ROOT_PASSWORD=siraj123" --env="MYSQL_DATABASE=mydb" mysql
#Access the bash shell of mysql container
docker exec -it <CONTAINER_ID> /bin/bash
#Log in to the mysql service with the root credentials (enter siraj123 when prompted)
mysql -u root -p
#Run SQL queries
SHOW DATABASES;
SHOW TABLES;
We can add tables and records to this mydb database and then build and push a custom image to hub.docker.com
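Because port 3306 was published to the host, you can also connect from the host machine, assuming a mysql client is installed locally:
mysql -h 127.0.0.1 -P 3306 -u root -p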
- Create a volume to persist data
A Docker volume is used to persist data generated by and used by Docker containers. A volume can also be shared between containers.
Example: we will create a volume on the host machine and map a container directory to it. Even after the container is removed, the data persists and is available to a new container.
Step1: Create a volume that will be used to store data
docker volume create my_data_volume
docker volume ls
Step2: Create a simple HTML file
mkdir my-html-site
cd my-html-site
echo '<html><body>Hello, Docker Volume!</body></html>' > index.html
Step3: Create and run a new nginx container using the volume. The -v flag mounts my_data_volume at /usr/share/nginx/html inside the container, which is where Nginx serves files from.
docker run -d -p 8080:80 -v my_data_volume:/usr/share/nginx/html nginx
Step4: Copy the HTML File to the Volume
docker cp ./index.html <CONTAINER_ID>:/usr/share/nginx/html/
Step5: Verify. Even if you stop and remove the container, the data in the volume will persist. You can remove the container and start a new one with the same volume, and the index.html file will still be there
docker stop <CONTAINER_ID>
docker rm <CONTAINER_ID>
docker run -d -p 8080:80 -v my_data_volume:/usr/share/nginx/html nginx
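You can verify that the file survived the container swap:
curl http://localhost:8080
The response should be the original page: Hello, Docker Volume!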
- List, inspect, create, connect, disconnect, remove networks
Lists all networks available on the host
docker network ls
Create a new custom bridge network
docker network create my_custom_network
Inspect a network
docker network inspect my_custom_network
Provides detailed information about a specific network, such as the connected containers, IP address range, and configuration details.
Connect/Disconnect a container to a network
docker network connect my_custom_network <CONTAINER_ID_OR_NAME>
docker network disconnect my_custom_network <CONTAINER_ID_OR_NAME>
You can check a container's networks with
docker inspect <CONTAINER_ID_OR_NAME>
Remove a network
docker network rm my_custom_network
Run containers on a custom network
docker run -d --name container1 --network my_custom_network nginx
docker run -d --name container2 --network my_custom_network nginx
Both container1 and container2 are connected to the same network my_custom_network, so they can communicate with each other using their container names as hostnames.
You can test the connectivity between the containers. Make sure ping is installed in container1 (apt-get update && apt-get install -y iputils-ping)
docker exec container1 ping container2
- Fetch the logs of a container
docker logs <CONTAINER_ID_OR_NAME>
Fetch only the last n lines of logs, with timestamps
docker logs -t -n 5 <CONTAINER_ID_OR_NAME>
Follow (tail) the live logs of a container
docker logs -f <CONTAINER_ID_OR_NAME>
Fetch logs from the past 60 minutes, up to 5 minutes ago
docker logs --since 60m --until 5m <CONTAINER_ID_OR_NAME>
- Copy files or directories between a docker container and the local filesystem of the host machine.
docker cp /path/on/host/file.txt <CONTAINER_ID_OR_NAME>:/path/inside/container/file.txt
docker cp file.txt my_container:/file.txt
- Shows the differences between the container's current filesystem and the filesystem of the image it was created from
docker diff <CONTAINER_ID_OR_NAME>
The "docker diff" command is particularly useful for debugging purposes, as it allows you to track changes in the container's filesystem after running certain commands.
- Backup container's filesystem
The "docker export" command is used to export the filesystem of a Docker container as a tar archive. This can be useful for saving the state of a container's filesystem.
docker export -o my_container_backup.tar <CONTAINER_ID_OR_NAME>
"docker export" exports only the filesystem of the container. It does not include metadata, environment variables, or any Docker-specific configuration. If you want to create an image that includes the entire state of a container (including metadata), use "docker commit" instead of "docker export".
You can import the tar archive back into Docker as an image using the "docker import" command, which creates a Docker image from a tar archive containing a filesystem.
Import a tar into a Docker image (creating a docker image from a tar)
docker import my_container_backup.tar my_image:latest
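One caveat: because the imported image carries no metadata, it has no default command (CMD), so you typically have to supply one when running it (or set one at import time with the -c option):
docker run -it my_image:latest /bin/bash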
- Save image to a tar archive
docker save -o my-app.tar my-app
The "docker save" command is used to create a tar (archive file) of a docker image, which you can then transfer to another system or use for backup purposes.
The "docker load" command is used to load a docker image from a tar (archive file). This is typically done when you have an image that was saved using "docker save" and you want to load it onto a different docker host or system.
docker load -i my-app.tar
The "docker load" command is a straightforward way to handle Docker images when working in scenarios where you need to move or deploy images across different systems.
- Create a new tag for an existing docker image
Tagging images allows you to manage different versions of your application. This is especially useful when you want to version your images.
docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
docker tag myapp:latest myapp:v1
This command doesn’t create a new image; it simply assigns another tag to the existing image.
This might be useful if you want to push the image to your personal repository on Docker Hub.
docker tag myapp:latest sirajchaudhary/myapp:v1
If you delete the original tag (or "source image") using docker rmi, Docker only removes the reference to the image associated with that tag. However, as long as there is another tag (like the one you created with docker tag), the image layers themselves are not deleted.
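A quick illustration (the image ID shown by "docker images" stays identical across both tags):
docker tag myapp:latest myapp:v1
#Removes only the 'latest' tag reference
docker rmi myapp:latest
#myapp:v1 is still listed; the image layers were not deleted
docker images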
- Display statistics for all running containers
docker stats
The output shows various metrics for each container, including CPU usage percentage, memory usage, network I/O, and the number of processes. This can be helpful for monitoring container performance and diagnosing issues.
You can see statistics for all containers (both active and inactive)
docker stats -a
You can see statistics for a specific container
docker stats <CONTAINER_ID_OR_NAME>
- Dynamically update resources (e.g. CPU, memory) of a running container.
The following command updates a container named my-container to use 2 CPUs and 1GB of memory
docker update --cpus 2 --memory 1g my-container
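To confirm the new limits took effect, you can read them back with docker inspect's Go-template output (a sketch; NanoCpus is CPUs x 10^9 and Memory is reported in bytes):
docker inspect -f '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' my-container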
- Clean up various unused resources from your docker environment
Remove all dangling images (Images which are not tagged and are not referenced by any containers).
docker image prune
To remove all unused images, not just the dangling ones, use the -a option
docker image prune -a
Remove all unused images, not just the dangling ones, without confirmation (forcefully)
docker image prune -a -f
Remove stopped or unused containers. It removes all containers that are in the "stopped" state. Running containers are unaffected.
docker container prune
Removes all volumes that are not currently referenced by any containers.
docker volume prune
Remove unused Docker networks. This command deletes all networks that are not currently in use by any containers.
docker network prune
All-in-one cleanup command that removes unused containers, networks, images, and optionally, volumes.
docker system prune
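To include all unused images and volumes in a single sweep (use with care, as this deletes data in unreferenced volumes):
docker system prune -a --volumes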
- Install Docker from the official website (https://docs.docker.com/get-docker/)
- The Docker extension (by Microsoft) for the VS Code IDE simplifies working with Docker containers, images, volumes, networks, Compose YAML files, and more, directly from the IDE. It makes container development more efficient and is especially useful for creating YAML files and their contents.
There are several alternatives to Docker for containerization, each with its own unique features and advantages. Here are some prominent alternatives:
1. Podman
- Description: Podman is a daemonless, open-source container engine that provides a Docker-compatible command-line interface (CLI). It allows you to manage containers without needing a central daemon like Docker, improving security.
- Key Features:
- Rootless containers for enhanced security.
- Docker-compatible commands.
- No daemon required, which means fewer potential vulnerabilities.
- Can run Kubernetes pods directly.
- Use Case: Ideal for users who require better security and want to avoid running a container engine as the root user.
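As a quick illustration of the Docker-compatible CLI (assuming Podman is installed; many users simply alias docker=podman):
podman run -d -p 8080:80 nginx
podman ps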
2. CRI-O
- Description: CRI-O is an open-source container runtime specifically designed to comply with Kubernetes Container Runtime Interface (CRI) standards. It’s lightweight and optimized for Kubernetes environments.
- Key Features:
- Focuses on Kubernetes integration.
- Lightweight and secure.
- Uses Open Container Initiative (OCI)-compliant images and runtimes.
- Use Case: Best for users running Kubernetes who need a streamlined, optimized container runtime.
3. LXC (Linux Containers)
- Description: LXC is a low-level container technology that provides a lightweight virtualization system to run multiple isolated Linux systems on a single host. It predates Docker and provides more direct control over containerized environments.
- Key Features:
- Lightweight and low overhead.
- Supports full Linux system environments.
- Directly integrates with the Linux kernel.
- Use Case: Ideal for users who need to manage lightweight Linux containers with more control over their environments than Docker allows.
4. rkt (Rocket)
- Description: rkt is an alternative to Docker designed by CoreOS (now part of Red Hat). It emphasizes security and composability by separating the container image from the runtime.
- Key Features:
- Doesn’t require a central daemon like Docker.
- Focuses on security (pod-based approach similar to Kubernetes).
- Compatible with Kubernetes.
- Use Case: Suitable for environments where security and isolation are paramount.
5. Singularity
- Description: Singularity is designed for use in high-performance computing (HPC) environments, focusing on scientific applications. It allows users to encapsulate complex software stacks into portable containers.
- Key Features:
- No root privileges required for container execution.
- Optimized for HPC and research workloads.
- Focus on reproducibility and portability.
- Use Case: Ideal for research institutions and HPC environments where users don’t have root access but need reproducible environments.
6. Buildah
- Description: Buildah is a tool that focuses on building OCI-compliant container images. It doesn’t require a daemon like Docker and integrates with Podman for running containers.
- Key Features:
- No daemon, improving security and reducing resource usage.
- Supports building images directly from the command line.
- Works seamlessly with Podman.
- Use Case: Suitable for users focused primarily on building container images without the need for managing container runtime environments.
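A minimal sketch of a Buildah workflow (assuming a Dockerfile in the current directory; older releases use "buildah bud" instead of "buildah build"):
buildah build -t my-python-app .
podman run my-python-app
Buildah and Podman share the same local image store, so an image built with one is immediately visible to the other.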
7. Containerd
- Description: Containerd is an industry-standard container runtime used in production environments, and it’s the core component behind Docker’s container engine. It’s lightweight and is often used as the underlying runtime for Kubernetes.
- Key Features:
- Industry-standard runtime used by Docker and Kubernetes.
- Simple and efficient design.
- Integrated with Kubernetes as part of CRI-O and Docker.
- Use Case: Best for users looking for a lightweight container runtime, often integrated with Kubernetes.
8. Kata Containers
- Description: Kata Containers is a secure container runtime that integrates lightweight virtual machines with container workloads, providing an extra layer of isolation.
- Key Features:
- Combines the security of VMs with the speed and simplicity of containers.
- Supports multiple hypervisors (KVM, QEMU).
- Strong security and isolation features.
- Use Case: Ideal for users who need enhanced isolation and security for sensitive workloads.
9. Firecracker
- Description: Firecracker is a lightweight virtualization technology designed for microVMs, optimized for serverless workloads and function-based compute services.
- Key Features:
- Designed for microVMs with very low overhead.
- Built by AWS, it powers services like AWS Lambda and AWS Fargate.
- Strong security and isolation with a minimalist design.
- Use Case: Best for users running serverless environments or who need to manage isolated, high-density workloads.
Each of these platforms provides unique benefits, making them well-suited for different containerization use cases.