DevOps Automation

Docker Demystified: A Beginner’s Guide

By Suraj Kaushik posted Tue August 06, 2024 01:22 AM

  

Imagine you have a magical toy box. This toy box can hold all your toys, no matter how many you have, and it keeps each toy in its own special space so they don’t get mixed up or lost. Whenever you want to play with a toy, you just open the toy box, and the toy is ready to use right away.

Docker is like that magical toy box, but for computer programs. It helps people keep their programs safe and separate, so they can run them whenever they need without any problems. Docker makes sure each program has everything it needs to work, just like your toy box keeps all the parts of your toys together.

In today’s fast-paced software development world, containerization has emerged as a popular technique for deploying and managing applications. Among the various containerization platforms, Docker stands out for its user-friendly approach and robust functionality. In this blog post, we will delve into the world of Docker, exploring its key concepts, benefits, and providing practical examples to demonstrate its capabilities.

Understanding Docker

What is Docker? A containerization platform

Docker is like a virtual computer inside our computer. It lets us run different programs or apps (like websites, databases, or games) without installing them directly on our computer’s main system. In technical terms, Docker is an open-source platform that automates the deployment of applications within containers. Containers are portable, lightweight, and self-sufficient environments that encapsulate an application along with its dependencies, libraries, and configuration files.

Why do we need Docker? It packs everything together in a box

Developers often say, “It was working fine on my machine, but I don’t know why it isn’t running on yours.” Docker is a solution to that problem. It makes apps easy to run because everything they need is packed together in a box (called a container). This box contains everything the app needs to work: files, settings, configurations, and special tools. It also keeps each app separate, so apps don’t interfere with each other.

How? Using Containers

Docker uses containers to wrap up apps and all their stuff into one package. This package runs the same everywhere (Mac, Windows or Linux), so you can move it between different computers or servers and it should work the same way each time. It’s kind of like shipping a fully stocked kitchen in a shipping container — everything you need to cook is inside, and it doesn’t change no matter where you unpack it.

So, Docker helps make apps easier to run, share, and move around, which can save time and make things more reliable for people who build and use software.

Key Concepts of Docker

Images
Imagine you have a recipe book with instructions on how to bake different types of cookies. Each recipe is like a Docker image. It includes everything you need to know: the ingredients, the steps, and the tools required to make those cookies. Docker images are lightweight, self-contained packages that include everything needed to run a specific software application: the code, runtime, system tools, and libraries. These images act as templates for creating containers and are built using a file called a Dockerfile, which outlines the software environment and dependencies needed.
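
Once a few images exist locally, they can be listed and examined with a couple of commands (nginx here is just an example image name):

```shell
# List the images stored locally
docker images

# Show an image's metadata and the layers it was built from
docker image inspect nginx
docker history nginx
```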

Containers

Now, let’s say you decide to bake a batch of chocolate chip cookies using one of the recipes. The cookies you bake are like Docker containers. Just as cookies are made using a recipe, containers are made from images. A container is a running instance of an image — it’s like the final product that you can actually use and enjoy. Each container is separate from the others, just like each batch of cookies is separate, even if you use the same recipe multiple times. Containers can be easily created, started, stopped, moved, and deleted, offering a stable and consistent runtime environment no matter where they are deployed.

Registries

Docker registries are storage locations for Docker images, allowing users to save and share these images. The Docker Hub is a popular public registry that offers a wide range of pre-built images. Additionally, organisations can set up private registries to securely manage and distribute their custom images.
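As a sketch, pulling a public image from Docker Hub and re-tagging it for a private registry might look like this (registry.example.com and my-team are placeholder names):

```shell
# Pull a public image from Docker Hub
docker pull nginx:latest

# Re-tag it for a hypothetical private registry
docker tag nginx:latest registry.example.com/my-team/nginx:latest

# Log in and push the image to that registry
docker login registry.example.com
docker push registry.example.com/my-team/nginx:latest
```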

Basic Docker Commands and Descriptions

1. docker run: This command creates and starts a new container from a specified image. It is the most common way to launch a Docker container.

docker run <image-name>

For example, “docker run hello-world” will start a container using the hello-world image.

2. docker build: This command builds a Docker image from a Dockerfile and a context. A context is a set of files located in a specified PATH or URL.

docker build -t my-image .

For example, “docker build -t my-image .” will build an image named my-image from the Dockerfile in the current directory.

3. docker pull: This command is used to download a Docker image from a registry, such as Docker Hub.

docker pull <image-name>
For example, “docker pull postgres” will download the latest PostgreSQL image from Docker Hub to your local machine.

4. docker push: This command is used to upload a Docker image to a registry.

docker push my-repo/my-image

For example, “docker push my-repo/my-image” will push the image my-image to the my-repo repository on a Docker registry.

5. docker ps: This command lists all running containers. Adding the -a flag (docker ps -a) will show all containers, including those that are stopped.

docker ps 
OR
docker ps -a

6. docker stop: This command stops a running container.

docker stop <container-name-or-id>
For example, “docker stop my-container” will stop the container named my-container.

7. docker rm: This command removes a stopped container.

docker rm <container-name-or-id>
For example, “docker rm my-container” will delete the container named my-container.

8. docker rmi: This command removes an image from local storage.

docker rmi <image-name>

For example, “docker rmi my-image” will delete the image named my-image.

9. docker exec: This command runs a command inside a running container.

docker exec -it <container-name> <command>

For example, “docker exec -it my-container /bin/bash” will start a bash shell inside the my-container container, allowing interactive access.

10. docker logs: This command retrieves the logs of a container.

docker logs <container-name>
For example, “docker logs my-container” will show the logs generated by the my-container container.
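
Putting these commands together, a typical lifecycle for a throwaway NGINX container might look like the following (the name demo-nginx is illustrative):

```shell
# Download the image and start a detached container
docker pull nginx
docker run -d --name demo-nginx nginx

# Check what is running and read the container's logs
docker ps
docker logs demo-nginx

# Stop and remove the container, then remove the image
docker stop demo-nginx
docker rm demo-nginx
docker rmi nginx
```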

 

Volumes in Docker

Imagine you have a toy box where you keep all your favourite toys. If you move your toys to a new house, you want to make sure you take your toy box with you so you don’t lose any toys. In Docker, volumes are like that toy box.
Docker volumes are used to manage and persist data generated by and used by Docker containers. They allow data to be stored outside of the container’s filesystem, making it easier to share data between containers and maintain it across container lifecycles. There are two main types of mounts used in Docker:
Volume Mounts and Bind Mounts.

1. Volume Mount

Volume mounts are a Docker-managed way to store data. Volumes are stored in a part of the host filesystem which is managed by Docker (/var/lib/docker/volumes/ on Linux). Volumes are easier to back up or migrate than bind mounts. You can use named volumes or anonymous volumes.

Key Characteristics:

  • Managed by Docker.
  • Stored in a Docker-specific location on the host.
  • Can be used to share data between containers.
  • Preferred way to persist data in production.
docker run -d --name my-container -v my-volume:/path/in/container my-image

In this example, my-volume is a Docker volume that will be mounted to /path/in/container inside the container my-container.
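
Docker also provides commands for managing volumes directly; a brief sketch (my-volume is just an example name):

```shell
# Create a named volume
docker volume create my-volume

# List volumes and inspect where Docker stores this one on the host
docker volume ls
docker volume inspect my-volume

# Remove the volume once no container is using it
docker volume rm my-volume
```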

2. Bind Mount

Bind mounts are mounted directly from the host’s filesystem into the container. You define the exact path on the host system to be mounted. This allows for more control and flexibility, but it is less portable and can be harder to manage compared to volumes.

Key Characteristics:

  • Directly maps a host directory or file to the container.
  • Path on the host must be specified.
  • Useful for sharing configuration files or source code between the host and container.
  • Provides greater flexibility and control, but is less isolated.
docker run -d --name my-container -v /path/on/host:/path/in/container my-image

In this example, the directory /path/on/host on the host machine is mounted to /path/in/container inside the container my-container.

When to Use Each

  • Volume Mounts: Use when you need Docker to manage the storage, especially for data that needs to persist across container restarts and deployments, such as databases.
  • Bind Mounts: Use when you need to share host directories and files with containers, such as for development purposes, where you want the container to have access to source code or configuration files on the host.
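
For instance, a common development pattern is to bind-mount a local directory of static files into an NGINX container (the paths and container name here are illustrative):

```shell
# Serve the HTML files in the current directory through NGINX.
# The :ro suffix mounts the directory read-only inside the container.
docker run -d --name dev-site \
  -p 8080:80 \
  -v "$(pwd)":/usr/share/nginx/html:ro \
  nginx
```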

Ports in Docker

Think of ports like doors to a house. If you want to visit your friend’s house, you need to know which door to knock on. In Docker, managing ports is like deciding which door your app uses to communicate with the outside world. You map a door (port) on your computer to a door (port) on your app so people can access it.

Port Mapping

Port mapping allows you to specify how ports on the host machine are connected to ports on the Docker container. This is done using the -p or --publish option when running a container.

Syntax:

docker run -p <host_port>:<container_port> <image>

Example:

docker run -d -p 8080:80 nginx

Port Mapping Explanation

8080:80: This is the port mapping configuration.

  • 8080 (Host Port): This is the port on your host machine (your computer or server). When you access your host machine on port 8080 (e.g., by visiting http://localhost:8080 in your web browser), Docker will forward this traffic to port 80 inside the container.
  • 80 (Container Port): This is the port inside the Docker container. The NGINX server inside the container listens for web traffic on port 80.
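
Assuming the NGINX container above is running, the mapping can be checked from the host:

```shell
# Traffic to host port 8080 is forwarded to port 80 inside the container,
# so this should return the default NGINX welcome page
curl http://localhost:8080
```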

Docker Compose

Imagine you’re building a LEGO city, and each building is a different LEGO set. Docker Compose is like the instruction book that tells you how to put all the LEGO sets together to create the city. It helps you organize and run multiple apps (LEGO sets) together in a coordinated way.

When using Docker Compose to manage multi-container applications, you can specify port mappings in the docker-compose.yml file.

Example

version: "3"
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8081:8081"
    volumes:
      - .:/usr/src/app
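
With that docker-compose.yml in place, the whole stack can be started and stopped with a couple of commands (shown here with the docker compose plugin syntax; older installs use docker-compose instead):

```shell
# Build (if needed) and start all services in the background
docker compose up -d

# Check the status of the services and follow the app's logs
docker compose ps
docker compose logs -f app

# Stop and remove the containers and networks Compose created
docker compose down
```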

Working Example

Building a custom image

Suppose you have a simple Node.js application and you want to containerize it. You can create a Dockerfile that defines the environment and dependencies required to run the application, then build a custom Docker image.

Here’s a basic example of a Dockerfile for a Node.js application:

# Use the official Node.js image as the base image
FROM node:14

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy package.json and package-lock.json to the working directory
COPY package*.json ./

# Install npm dependencies
RUN npm install

# Copy the application code to the working directory
COPY . .

# Expose port 3000
EXPOSE 3000

# Define the command to run the application
CMD ["node", "index.js"]

File Structure
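
The original post shows the project layout as a screenshot; based on what the Dockerfile copies and runs, the structure would plausibly be:

```text
dockerDemystified/
├── Dockerfile
├── package.json
├── package-lock.json
└── index.js
```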

Code Execution
1) Build the image: “docker build -t blog-app .”
2) Check that the image was generated: “docker images”
3) Run the image with its port published: “docker run -p 3000:3000 blog-app”
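
If the container started successfully, it can be verified from the host (assuming the Node.js app listens on port 3000 and the container was run with -p 3000:3000):

```shell
# Confirm the container is running
docker ps

# The app should answer on the published port
curl http://localhost:3000
```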

GIT LINK : https://github.com/kaushiksuraj1102/dockerDemystified

 
