Tech Notes.

By Danish

Introduction to Docker

Before diving into Docker, let's cover some key concepts.

What is Docker?

It's the solution to the classic problem: "It works on my machine!!" The usual culprit is a mismatch in environment and dependencies. Docker solves that problem for us. If your app works fine in your local Docker setup, it's bound to run fine anywhere else, be it staging or production.

But how?

Docker packages our app into containers: units that include the app along with the environment and dependencies it needs to run smoothly.

I know the paragraphs below seem lengthy, but bear with me; it'll be totally worth it.

Let's consider an example. Imagine we own a tiny food outlet in our basement. Our food is delicious and word spreads really fast. Soon people want us at different events. But since our entire setup, e.g. wash basin, water supply, gas supply, etc., is borrowed from home, going somewhere else, recreating the whole setup, and then cooking becomes quite a task. And more often than not we're likely to miss one thing or another, leading to issues.
This keeps ruining our name and hampering our growth.
We decided to fix the issue and came up with an idea: we upgraded our entire setup and converted it into a food truck. Now whenever we get an event, we don't have to worry about the setup anymore. The chef can focus on the food alone and turn out consistently delicious dishes. Word starts to spread again, and now we're in demand more than ever. So we created multiple identical food trucks, successfully catered all those events, and built tons of wealth.

This is exactly what Docker does for us. The same code that works fine on the dev machine breaks in production, the reason being a change in environment: missing dependencies and the like. Maybe the dev was using Windows while the server runs Linux, so the file path structure changed; the dev had Node installed but the server didn't, and so on. All the things our code needs to run, be it the OS, additional packages, or other software, are called dependencies. Docker creates a wrapper (called a container) around our app and provides all the dependencies the app needs. Since the container already has everything the app needs, the app can run in isolation without being impacted by the environment it's in. It behaves consistently everywhere, ensuring that when an app runs locally, it runs the same way in production.

Key terms

  • container: the actual functional unit where our app resides along with all its required dependencies. It's what we interact with to access our app. It is bound to the host machine and consumes its hardware resources, and it ensures our app runs in isolation without being impacted by the environment.
  • image: consider this the blueprint for our containers. It contains all the metadata required to generate a container. Containers are built from images.
  • volume: the data persistence unit. Data that needs to survive beyond the container lifecycle is stored here. It can also be used to share data among multiple containers.
  • dockerfile: a text file containing the set of instructions for building our image and starting our app. It usually resides at the root of the app directory.
  • dockerignore: similar to a .gitignore file. It lists the parts of our app that Docker should ignore, like log files or the node_modules folder.
  • tag: a unique identifier for our images, used for versioning them. Consider tags like commits in Git.
  • registry: similar to the repo we store our codebase in, except that it stores images and doesn't support versioning implicitly; we achieve versioning through effective use of tags. Consider it like GitHub for images. The default is DockerHub.
  • layer: each image is made up of layers; consider them the different levels of setup metadata and utility. Layers make images efficient by reusing unchanged parts across different builds.
  • build context: the files in the directory where the Dockerfile resides, basically the entire codebase.
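
To tie several of these terms together, here is a minimal sketch of a Dockerfile and a .dockerignore for a hypothetical Node app (the file names, base image, and port are illustrative assumptions, not a prescribed setup):

```dockerfile
# Base image, pulled from the registry (DockerHub by default).
# Every instruction below adds a layer to the image.
FROM node:20-alpine
WORKDIR /app
# Copy the dependency manifest first, so this layer (and npm install)
# is reused across builds as long as package.json is unchanged
COPY package*.json ./
RUN npm install
# Copy the rest of the build context; paths listed in .dockerignore are skipped
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
```

And a matching .dockerignore, so logs and locally installed packages never enter the build context:

```
node_modules
*.log
.git
```

The image would then be built and versioned with a tag, e.g. docker build -t my-app:1.0 . (where . is the build context and my-app:1.0 is the image name plus tag).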

Initiate Setup

Docker is not recommended for native use on Windows.
Docker is primarily designed to run on Linux, as it utilizes a lot of Linux-specific functionalities under the hood.

It's better to run it in a Linux environment for a smoother experience, which is what we'll be using in production as well.

Create a Linux Environment

We don't need a separate machine to run Linux. We can create a virtual machine (VM) and run Linux inside our Windows environment.

WSL - Windows Subsystem for Linux

WSL is a native Windows solution that provides a Linux-like environment inside Windows.

How WSL Works

Hyper-V: A virtualization technology in Windows that creates and manages virtual machines. It can host full-fledged VMs, but the VM it runs for WSL 2 is far more lightweight than a traditional one. Consider it a lightweight, Windows-native alternative to Oracle VirtualBox.

  • Hyper-V creates and hosts the VM.
  • WSL provides the interface that allows us to interact with these VMs, making it feel like we're running Linux natively on Windows.

Now that we have this basic understanding, here's how it works:

us <------ WSL ------> Linux VM (inside Hyper-V)

Setup WSL

  1. Open PowerShell as administrator
  2. Execute wsl --list --online
  3. This lists the Linux distributions available for installation
  4. If a specific distribution is required, use wsl --install -d <distro_name>
  5. Otherwise, wsl --install will install the latest Ubuntu distribution
  6. It'll take some time and set up WSL along with the Linux distribution
  7. Restart the system
  8. Press the Win 🪟 key and search for Ubuntu
  9. We'll get a message like Installing, this may take a few minutes...
  10. It'll ask for a username and password to set up the Linux system
  11. Set the credentials as per your choice

Let's update the installed Linux.
Use the command sudo apt update. This refreshes the package index so our system knows about the latest versions of the packages available, ensuring we don't end up installing older versions of software.

And done 🎉🥳, we are now inside our new linux machine💻

Install Docker

Let's start with Docker's documentation, available here: Docker documentation

The documentation uses apt-get, the older interface to the package manager; we'll be using apt, the modern one

Setup DockerHub Account

DockerHub is a cloud-based registry service for storing our images. Consider it GitHub for images. Here are the steps:

  • Log in to DockerHub here - DockerHub
  • Click on the profile in the top left corner and select Account Settings
  • Under Security, find Personal Access Token
  • Generate a token by entering a token name, e.g. "Docker CLI"
  • Copy the value now, as it won't ever be displayed again
  • Back to our Linux terminal

Let's complete the setup now

  1. sudo apt install ca-certificates curl
    This installs ca-certificates, used to validate the authenticity of SSL certificates (in short, it enables us to make secure HTTPS connections), and curl, used to download files over those connections.
  2. sudo install -m 0755 -d /etc/apt/keyrings
    This creates the directory and gives the owner read/write/execute permissions, while group members and others get read and execute but not write access.
  3. This will download Docker's GPG key and save it to the path specified.
    sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc

    Docker's GPG key is used to verify the authenticity of the packages we install from Docker's repository

  4. sudo chmod a+r /etc/apt/keyrings/docker.asc
    This updates the file's permissions so that it's readable by everyone, allowing the system and apt to use it to verify the authenticity of packages
  5. Now let's add Docker's repository to apt's sources
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
        $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
        sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    
  6. sudo apt update This refreshes the package index again, this time including our new Docker repository
  7. Let's get the docker engine now
    sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
    
    This installs the Docker engine (docker-ce), the Docker CLI, the containerd runtime, and the Buildx and Compose plugins, all sourced from the repository we just added and verified with its GPG key.
  8. Login to DockerHub Account
    • execute docker login
    • go to https://login.docker.com/activate
    • enter the login code and submit
    • once done, back in terminal, we'll get 'Login Succeeded' message
  9. Verify the setup: sudo docker run hello-world
    The output should be similar to this:
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c1ec31eb5944: Pull complete
Digest: sha256:91fb4b041da273d5a3273b6d587d62d518300a6ad268b28628f74997b93171b2
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
      (amd64)
3. The Docker daemon created a new container from that image which runs the
      executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
      to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/get-started/
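
If the permission flags in steps 2 and 4 above felt opaque, we can demystify them with a quick experiment that needs neither sudo nor Docker (the /tmp paths here are throwaway examples, not part of the real setup):

```shell
# Recreate step 2 in /tmp: create a directory with mode 0755,
# i.e. owner read/write/execute, group and others read/execute
install -m 0755 -d /tmp/demo_keyrings
stat -c '%a' /tmp/demo_keyrings        # prints 755

# Recreate step 4: make a file readable by everyone (a+r)
touch /tmp/demo_keyrings/docker.asc
chmod a+r /tmp/demo_keyrings/docker.asc
stat -c '%a' /tmp/demo_keyrings/docker.asc
```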

Congrats 🥳🎉, our Docker setup is done. We're ready to create our own images
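
One last look at step 5: that long echo pipeline just writes a single line into /etc/apt/sources.list.d/docker.list, filling in two values dynamically. We can inspect both substitutions on their own (outputs vary by machine; the dpkg call is guarded since it exists only on Debian-family systems):

```shell
# The CPU architecture apt should fetch packages for (e.g. amd64 or arm64)
if command -v dpkg >/dev/null 2>&1; then
    dpkg --print-architecture
fi

# The release codename (e.g. noble, jammy), read from /etc/os-release;
# falls back to the distro ID on systems that don't set VERSION_CODENAME
. /etc/os-release && echo "${VERSION_CODENAME:-$ID}"
```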

One more thing before we proceed: right now, every time we execute a docker command we have to run it as the sudo user.
Let's add our user to the docker group

  1. Check if group exists
    • getent group docker : output like docker:x:999: means the group exists
    • if there is no output, the group doesn't exist; we'll create it with sudo groupadd docker
  2. Add user to group sudo usermod -aG docker $USER
  3. Verify that the user was added with groups $USER.

    danish : danish adm dialout cdrom floppy sudo audio dip video plugdev netdev docker

  4. Restart ubuntu
    • sudo reboot : wait for a few minutes and press Enter, as mentioned in the log, to restart
  5. Then try docker ps without sudo, and it should work fine
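
If getent is new to you, here's what it does, demonstrated with a group that exists on every Linux system (root is used only as a safe stand-in for docker, which may not exist yet on your machine):

```shell
# Query the system group database; output format is name:password:gid:members
getent group root
# List the groups the current user belongs to
groups
```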
We are now good to proceed 🥳🥳