Docker is a useful tool, but for new users, useful information on Docker is hard to find. When I searched the web for introductions to Docker, I found many documents that answer the wrong questions. Some of them answer the question "why is Docker so wonderful". They are full of marketing speak, like "fast, consistent delivery of your applications", "responsive deployment", etc. Other documents answer the question "what is a complete list of all Docker commands and options", often by listing the commands in alphabetical order. Other documents give step-by-step instructions, but they are basically lists of "here is another thing you can do", without explaining any of the concepts behind Docker. Finally, there are some Docker tutorials that require you to install and run Docker before you can see the tutorial!
Here is my own attempt at explaining how to use Docker.
The Docker software is organized as a client-server model. The server, which in Docker-speak is called the daemon, is basically always running, whereas the client consists of the commands you use to interact with Docker.
There seem to be two main reasons for using a daemon. The first is that it allows Docker to be doing things (like running tasks) in the background, i.e., even when you are not interacting with it. The second is that the daemon can in principle run on some remote computer on the network; it doesn't have to run on the same computer from which you are interacting with it. For example, you could have a single daemon running on a server, and access it using multiple clients on different laptops.
The operation of Docker is basically the same, no matter whether the daemon is running remotely or locally. For the purpose of this document, we will mostly assume that the daemon is running locally.
The service that Docker provides is similar to a virtual machine. But rather than simulating an actual machine (i.e., a processor, memory, a BIOS, a graphics card, a keyboard, a network adapter, and other hardware devices), Docker simulates an operating system. It provides an environment in which processes can run and access operating system services such as a file system, networking, interprocess communication, etc. This environment is entirely virtual, i.e., separated from the host operating system. It is also reproducible, meaning that your virtual environment works exactly the same no matter which computer you run Docker on. Thus, you might be running Docker on a Macintosh but simulating Ubuntu 20.04.
One nice thing about virtual machines is that you can run several of them on a single host, and even at the same time. For example, on my Ubuntu 20.04 host, I can have one virtual machine running Windows 10, and another running macOS 10.13. I can shut down and later reboot the Windows virtual machine, while keeping the Mac one running.
Docker's name for a runnable virtual machine is a container. At any given time, a container is either running or stopped. When a container is running, there might be one or more processes running in it, and these processes will be able to read and write the container's files, access the container's network, communicate with each other, or do any of the other things that processes normally do. When a container is stopped, it is like when an operating system has been shut down. In the stopped state, the container does not consume any resources except some storage. Going from the stopped to the running state is like booting the container's operating system, and going from the running to the stopped state is like shutting it down.
Multiple containers can be running at the same time. These containers are isolated from each other and from the host operating system. For example, each container has its own file system. We can think of this as if each container had its own virtual hard drive. Thus, if two processes are running in the same container, and one of them writes information to a file, the other process can read that information. But if two processes are running in different containers, they cannot read or write each other's files. There are ways of copying files in and out of containers, which we will discuss later. There are even ways in which a container can be permitted to access some of the host operating system's files. But apart from this, each container acts like a separate, autonomous machine.
A nice feature of virtual machines is that, unlike hardware machines, they can be copied. In Docker, a snapshot of a stopped container is called an image. Thus, an image contains the exact contents of the file system at the moment the snapshot was taken. It also contains certain other information (however, it does not contain any processes). If you make an image of a stopped container, and then create a new container from the image, it is initially an identical copy of the original container. However, any changes you subsequently make to one of these containers will not affect the other.
I will keep these installation instructions brief, as the information is easily found on the internet. I personally have only installed Docker on Linux.
If you are using a Linux distribution that supports the APT package manager, such as Debian or Ubuntu, installing Docker is as simple as:
$ sudo apt update
$ sudo apt install docker.io
This will install Docker and start the Docker daemon. Each time you reboot, the daemon will be started automatically.
To run most Docker commands, you need to be a member of the docker group. My username is selinger, so I used the following command to add myself to the docker group:
$ sudo usermod -aG docker selinger
Note: for the group change to take effect, you must either log out and log back in, or start a new process with su, e.g.:
$ su selinger
Password: ...
$
There is a very large number of Docker commands, and each command has a large number of options. Moreover, some of the commands have multiple names, so there is often more than one way of doing exactly the same thing.
Here, I will only describe the commands and options that everybody should know. After you master these, it should be easy to learn additional commands and options on a "need to know" basis.
I will list the commands in roughly the order in which a novice user would use them. Thus, you should be able to follow the rest of this section like a step-by-step tutorial.
We will assume that you have already installed Docker, and that your Docker daemon is running on your local machine. Before you can do anything useful with Docker, you must create a container, and to create a container, you first need an image. Fortunately, there are a number of pre-configured images already available. We just have to download one of them from a place called a registry. There can be any number of such registries, and you can even run your own one, but for now, we will just download an image from Docker's default registry:
$ docker image pull ubuntu:20.04
Assuming there are no errors, this command downloads an image, but where does the image end up being stored? It is stored internally by the Docker daemon, not as a file you can see in your filesystem. To see the list of all images currently stored in the daemon, use the command
$ docker images
It should output something like:
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu 20.04 20fffa419e3a 5 weeks ago 72.8MB
This shows the Ubuntu 20.04 image we just pulled from the registry. Before we continue, we note a few things: the image's name consists of a repository (here ubuntu) and a tag (here 20.04), which are written together as ubuntu:20.04; and the image also has a unique ID (here 20fffa419e3a).
We will discuss more methods for creating images later. We will also discuss how to name and rename images, delete images, and more. For now, let's move on to more exciting things.
Creating a new container from an image is called running the image in Docker. The simplest command to do so is the following:
$ docker run -id ubuntu:20.04
Here, docker run is the command to run an image, -id are necessary options that we'll discuss later, and ubuntu:20.04 is the name of the image to run. The command outputs something like
8d90cdca45d81478fc857d0aa92352e35559efdcd12307207c92e695cdc9d5c5
and then quits. The output is the ID of the new container; in subsequent use, this container ID is usually abbreviated to its first 12 characters, i.e., 8d90cdca45d8.
Although the command docker run appears to be finished, the new container is actually running in the background. To see a list of all containers that are currently running, use
$ docker ps
The output should be something like
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8d90cdca45d8 ubuntu:20.04 "bash" About an hour ago Up About an hour tender_cray
Note that besides the container ID and the name of the image the container was created from, the container has also been given a name, which is tender_cray. Such names are randomly generated. It is also possible to rename a container, or to specify a desired name when the container is first created. We will come back to this later.
When we created a container, the container automatically started running. We can now log into the container. Remember that our container is called tender_cray, and you can see the name with docker ps. To log into the container:
$ docker exec -it tender_cray bash
This command should give you a prompt, such as
root@8d90cdca45d8:/#
This prompt is a bash shell running inside the container. You can now execute normal Linux shell commands, such as
root@8d90cdca45d8:/# echo Hello
Hello
root@8d90cdca45d8:/# date
Thu Jul 14 23:39:43 UTC 2022
root@8d90cdca45d8:/# uname -a
Linux 8d90cdca45d8 5.4.0-121-generic #137-Ubuntu SMP Wed Jun 15 13:33:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
root@8d90cdca45d8:/# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.4 LTS"
root@8d90cdca45d8:/# ls
bin   dev  home  lib32  libx32  mnt  proc  run   srv  tmp  var
boot  etc  lib   lib64  media   opt  root  sbin  sys  usr
root@8d90cdca45d8:/# cd /tmp
root@8d90cdca45d8:/tmp# echo "This is a test file" > testfile.txt
root@8d90cdca45d8:/tmp# cat testfile.txt
This is a test file
root@8d90cdca45d8:/tmp# ll testfile.txt
-rw-r--r-- 1 root root 20 Jul 14 23:42 testfile.txt
root@8d90cdca45d8:/tmp# whoami
root
root@8d90cdca45d8:/tmp# ps -ef
UID        PID  PPID  C STIME TTY      TIME     CMD
root         1     0  0 22:17 ?        00:00:00 bash
root        27     0  0 23:53 pts/0    00:00:00 bash
root        36    27  0 23:53 pts/0    00:00:00 ps -ef
Note that this container is running Ubuntu 20.04, as we requested when we pulled our container's image. Presumably, if your container were running Windows, you would use DOS commands here. Also note that we are logged into the container as the root user. In fact, there are no other users inside the container, unless you create them. Also note that unlike in a real operating system, there are very few processes running in the container.
To log out of the container, simply exit the shell with exit or Ctrl-D:
root@8d90cdca45d8:/tmp# exit
exit
This will log out of the container and bring you back to the host operating system. Note, however, that it does not stop the container. We can check that the container is still running:
$ docker ps
CONTAINER ID   IMAGE          COMMAND   CREATED       STATUS       PORTS     NAMES
8d90cdca45d8   ubuntu:20.04   "bash"    2 hours ago   Up 2 hours             tender_cray
You can see that it is taking me much longer to write these instructions than it is taking you to read them: my container has already been running for 2 hours!
You can log into the container again and check that the file we created in the /tmp directory is still there:
$ docker exec -it tender_cray bash
root@8d90cdca45d8:/# cat /tmp/testfile.txt
This is a test file
root@8d90cdca45d8:/# exit
exit
In fact, you can log into the same running container as many times as you want, even simultaneously.
A final detail: You may have noticed that the command for logging into a container is called docker exec and not something like docker login. In fact, this command can do much more than simply logging in. It can execute an arbitrary command. We gave the command as bash, and this is why we were dropped into a bash shell. But we could execute some other command instead:
$ docker exec -it tender_cray echo Hello
Hello
To stop a running container, use this command:
$ docker stop tender_cray
tender_cray
This may take a few moments; in fact, stopping a container seems to take much longer than starting it. Any processes that are still running in the container are stopped, and any users that are logged in are logged out, just as if an operating system were shutting down. When the command is done, it prints the name of the container that was stopped.
Note that if we look at the containers with docker ps, we don't see the stopped container.
$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
This is because by default, the docker ps command only shows running containers. To see stopped containers, use the -a option:
$ docker ps -a
CONTAINER ID   IMAGE          COMMAND   CREATED       STATUS                        PORTS     NAMES
8d90cdca45d8   ubuntu:20.04   "bash"    2 hours ago   Exited (137) 45 seconds ago             tender_cray
The "Exited" status means that the container is stopped.
Logging into a stopped container is not possible, and we get an error if we try to do so. But we can easily restart (i.e., reboot) a stopped container like this:
$ docker restart tender_cray
If we do not like a container's name, we can rename it like this:
$ docker rename tender_cray my-container-name
$ docker ps
CONTAINER ID   IMAGE          COMMAND   CREATED       STATUS         PORTS     NAMES
8d90cdca45d8   ubuntu:20.04   "bash"    2 hours ago   Up 4 minutes             my-container-name
Note that the container ID has not changed; only the container's name has been updated.
Finally, when we no longer need a container, we can delete it. The command to delete a container is docker rm. In case the container is still running, we must stop it first:
$ docker stop my-container-name
my-container-name
$ docker rm my-container-name
my-container-name
$ docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
As you can see, the container is now completely gone; even docker ps -a does not show it anymore.
We already saw that images have names, such as the image named ubuntu:20.04 that we used above. More precisely, each image can have zero, one, or several names, and Docker calls them tags rather than names. We can give a name to an image with the docker image tag command. Let's start by listing all currently known images:
$ docker images
REPOSITORY   TAG     IMAGE ID       CREATED       SIZE
ubuntu       20.04   20fffa419e3a   5 weeks ago   72.8MB
At this time, we only have one image, and it is the Ubuntu image we used above. We can add another name to this image like this:
$ docker image tag 20fffa419e3a my-first-image
$ docker images
REPOSITORY       TAG      IMAGE ID       CREATED       SIZE
my-first-image   latest   20fffa419e3a   5 weeks ago   72.8MB
ubuntu           20.04    20fffa419e3a   5 weeks ago   72.8MB
Note that docker images now lists two images, but they are actually both the same image! We can see this because they have the same image ID. This image now has two different names or tags: ubuntu:20.04 and my-first-image:latest. Recall that if the part after the colon is missing, it defaults to "latest".
We can delete an image that we no longer need.
$ docker image rm ubuntu:20.04
$ docker images
REPOSITORY       TAG      IMAGE ID       CREATED       SIZE
my-first-image   latest   20fffa419e3a   5 weeks ago   72.8MB
Note that this did not really delete the image; it just deleted the tag. The same image is still present under the tag my-first-image. If we delete this too, the image is gone:
$ docker image rm my-first-image
$ docker images
REPOSITORY   TAG       IMAGE ID   CREATED   SIZE
This time, we have succeeded in completely deleting the image. If we need it again, we must pull it from the registry.
So far we haven't really done much with our containers, except run a few Linux commands in them. If we want to do any serious work inside a container, we probably need to move data into or out of the container. For example, we might want to copy source code into the container, compile a program there, run the program, and copy the output out of the container. Fortunately, there are simple commands for doing so. Let's start by spinning up a container running our favorite version of Ubuntu:
$ docker run -id ubuntu:20.04
Unable to find image 'ubuntu:20.04' locally
20.04: Pulling from library/ubuntu
d7bfe07ed847: Pull complete
Digest: sha256:fd92c36d3cb9b1d027c4d2a72c6bf0125da82425fc2ca37c414d4f010180dc19
Status: Downloaded newer image for ubuntu:20.04
8c3c47abd1e9f20b7923c82c14759c8f54c1400d23b5e2df59021404d3c7488d
Something interesting happened here. The image we were trying to run is the one that we deleted in the previous step. When Docker could not find the image in its local storage, it automatically pulled it from the default registry. You can check with docker images that the image is now present again, with the same ID 20fffa419e3a as before. The ID is actually a cryptographic hash of the image, so that two images have the same ID if and only if they are identical.
After the image was pulled, the docker run command created and started running a container as usual, as we can check with docker ps.
$ docker images
REPOSITORY   TAG     IMAGE ID       CREATED       SIZE
ubuntu       20.04   20fffa419e3a   5 weeks ago   72.8MB
$ docker ps
CONTAINER ID   IMAGE          COMMAND   CREATED         STATUS         PORTS     NAMES
8c3c47abd1e9   ubuntu:20.04   "bash"    3 minutes ago   Up 3 minutes             ecstatic_bhaskara
Note that this new container was randomly assigned the name ecstatic_bhaskara.
Now let's copy a file into the container. Suppose that in our host operating system, we have a file /tmp/my-important-file.txt. We can copy it into the /tmp directory of the container with the following command:
$ docker cp /tmp/my-important-file.txt ecstatic_bhaskara:/tmp
This works pretty much like the cp command in Linux, except that we specify the name of the container before the destination path. Let's log into the container, check that the file is there, and create another file.
$ docker exec -it ecstatic_bhaskara bash
root@8c3c47abd1e9:/# ll /tmp
total 12
drwxrwxrwt 1 root root 4096 Jul 15 01:15 ./
drwxr-xr-x 1 root root 4096 Jul 15 01:15 ../
-rw-rw---- 1 1000 1000   27 Jul 15 01:12 my-important-file.txt
root@8c3c47abd1e9:/# echo "I am creating this other file now." > /tmp/other-file.txt
root@8c3c47abd1e9:/# ll /tmp
total 16
drwxrwxrwt 1 root root 4096 Jul 15 01:23 ./
drwxr-xr-x 1 root root 4096 Jul 15 01:15 ../
-rw-rw---- 1 1000 1000   27 Jul 15 01:12 my-important-file.txt
-rw-r--r-- 1 root root   35 Jul 15 01:23 other-file.txt
We can now copy the new file out of the container to the host filesystem:
$ docker cp ecstatic_bhaskara:/tmp/other-file.txt /tmp
$ ll /tmp/other-file.txt
-rw-r--r-- 1 selinger selinger 35 Jul 14 22:23 /tmp/other-file.txt
Note that we can copy files in and out of containers even while we are logged into them. In fact, we can even copy files into and out of stopped containers.
Copying directories works in the same way as copying files.
Copying files into and out of containers is useful, but having to do this all the time can be awkward. It is easy to modify a file inside a container and then forget to copy it to the host, or vice versa. For this reason, it is possible to permit a container to access specific directories of the host filesystem. This way, any relevant files that are modified in the container are automatically modified in the host filesystem, and vice versa. This process is known as mounting a host directory in the container's filesystem.
For example, let's assume there is a directory /tmp/abc in the host filesystem, and we want this to be visible as /media/abc in the container filesystem. We can do this by creating the container with the following -v option:
$ docker run -di -v /tmp/abc:/media/abc ubuntu:20.04
8d402de260532515c82ff38e2fdd1cf9f01699dda65e99bd0550e815843b21c5
Now if we log into the container, we see /media/abc. Moreover, any file or subdirectory we create, delete, or modify in the container's /media/abc is automatically reflected in the host's /tmp/abc, and vice versa.
One small complication is file ownership. Currently, the container has no users except the root user, but in the host operating system, I am an ordinary user with username selinger and user id 1000. Thus, any files that I create on the host are owned by user 1000, which doesn't exist in the container, and any files I create in the container are only readable by root. We can solve this problem by creating a non-root user with user id 1000 in the container, and then logging in as that user instead of root.
root@8d402de26053:/# adduser --uid 1000 selinger
Adding user `selinger' ...
Adding new group `selinger' (1000) ...
Adding new user `selinger' (1000) with group `selinger' ...
Creating home directory `/home/selinger' ...
Copying files from `/etc/skel' ...
New password:
Retype new password:
passwd: password updated successfully
Changing the user information for selinger
Enter the new value, or press ENTER for the default
        Full Name []:
        Room Number []:
        Work Phone []:
        Home Phone []:
        Other []:
Is the information correct? [Y/n] y
root@8d402de26053:/# su selinger
selinger@8d402de26053:/$ cd /media/abc/
selinger@8d402de26053:/media/abc$ echo Test > test.txt
selinger@8d402de26053:/media/abc$ ll test.txt
-rw-rw-r-- 1 selinger selinger 5 Jul 15 11:35 test.txt
Now files created by the user selinger in the container are immediately accessible by the user selinger in the host.
So far, the only image we have worked with was the ubuntu:20.04 image. It is easy to create your own custom images. The easiest way to do this is by creating a snapshot of a (running or stopped) container. For example, the default ubuntu:20.04 image doesn't have many packages installed. Let's create a custom image of Ubuntu with gcc (the C compiler) preinstalled. For good measure, let's also add a file to the image. To do so, we first create a fresh ubuntu:20.04 container, run the necessary commands to install gcc, and then take a snapshot of the container with docker commit.
$ docker run -di --name ubuntu-container ubuntu:20.04
8a477107bee74b168e45f4e7d728dd9e4afcba2b0643f7a8bca2103529dc5b6d
$ docker exec -ti ubuntu-container bash
root@8a477107bee7:/# apt update
[output not shown]
root@8a477107bee7:/# apt install gcc
[output not shown]
root@8a477107bee7:/# exit
$ docker cp /tmp/myfile.txt ubuntu-container:/tmp
$ docker commit ubuntu-container
sha256:03dc5343285975bb292cc6bd5af3faf25d3ed6308b41c38b8d42c5f36d029ff2
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
<none>       <none>   03dc53432859   6 seconds ago   254MB
ubuntu       20.04    20fffa419e3a   5 weeks ago     72.8MB
Note that we gave the option --name ubuntu-container to the docker run command; this prevents the container from being assigned a random name. Also note that the docker commit command created a new image with ID 03dc53432859. (As usual, the actual ID is a much longer string, which is output by the docker commit command, but for most purposes, the first 12 characters suffice to identify the image). Finally, note that the new image does not have a name. We can name it as usual:
$ docker image tag 03dc53432859 ubuntu-gcc:20.04
$ docker images
REPOSITORY   TAG     IMAGE ID       CREATED          SIZE
ubuntu-gcc   20.04   03dc53432859   31 seconds ago   254MB
ubuntu       20.04   20fffa419e3a   5 weeks ago      72.8MB
Alternatively, we could have immediately specified a name for the new image by adding the desired name to the docker commit command, like this:
$ docker commit ubuntu-container ubuntu-gcc:20.04
Since you can run arbitrary commands before taking a container's snapshot, you can create a wide variety of customized images. For example, you can install software, create users, create files and directories, and more.
It is also possible to create an image with custom values for environment variables, such as PATH. When you create a container from an image, you can specify or modify the values of environment variables with the -e option to the docker run command. For example, to create a container with a modified PATH:
$ docker run -id -e PATH="/root/.cabal/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ubuntu:20.04
67f9894d059f0b8f58ce8a113cc6ebf5485690bb87648877860041a7b0c73d00
$ docker exec -ti 67f9894d059f bash
root@67f9894d059f:/# echo $PATH
/root/.cabal/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
When you take a snapshot of such a container with docker commit, the updated environment is stored in the image and will be the new default for containers created from the image.
We saw in the previous section that we can create an image by creating a container from an earlier image, setting environment variables, running commands, and/or adding files, and then saving a snapshot of the new container with docker commit. There is a convenient way to automate all of these steps by using a Dockerfile.
A Dockerfile is a list of instructions for creating an image. Here is an example Dockerfile:
# This is a Dockerfile
FROM ubuntu:20.04
RUN apt update
RUN apt install gcc -y
COPY myfile.txt /tmp
ENV PATH="/root/.cabal/bin:${PATH}"
ENTRYPOINT ["/bin/bash"]
To use this, create a directory, say /tmp/docker. In this directory, create a file named Dockerfile that contains the above seven lines. In the same directory, also create a file named myfile.txt containing anything you want (it is just an example file). Then run the command
$ docker build -t my-new-image /tmp/docker
This will create a new image starting from ubuntu:20.04. It will first run the commands apt update and apt install gcc -y. Note that we specified the -y option to prevent apt from asking interactive questions. It will then copy the file myfile.txt to the image, and update the PATH environment variable. Finally, the Dockerfile also specifies an entrypoint, which is basically a default command to run. The new image will be given the tag my-new-image, as specified by the -t option to docker build.
Any lines in the Dockerfile that start with '#' are comments and are ignored.
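As a further illustration, the non-root user that we earlier created by hand (in the section on mounting host directories) can also be set up in a Dockerfile, so that containers created from the image start out with the user already in place. The following is a sketch only; the username selinger and user id 1000 are examples and should be replaced by your own host username and user id:

```dockerfile
# Sketch of a Dockerfile that bakes in a non-root user.
# The username "selinger" and uid 1000 are examples only;
# substitute your own host username and uid.
FROM ubuntu:20.04
RUN apt update && apt install -y gcc
# --disabled-password and --gecos "" suppress the interactive
# questions that adduser would otherwise ask.
RUN adduser --uid 1000 --disabled-password --gecos "" selinger
# Commands in containers created from this image run as this
# user by default, starting in its home directory.
USER selinger
WORKDIR /home/selinger
ENTRYPOINT ["/bin/bash"]
```

After building such an image with docker build, any files that the container's default user creates in a mounted directory will be owned by user id 1000 on the host, as discussed above.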
Once you have created one or more Docker images, you might want to share these images with other users. There are several ways to do so.
If all users have access to the same Docker daemon, you do not need to do anything to share the images. The other users can simply use the images that are already stored by the daemon.
You can share an image by setting up a registry. We will not explain how to do so here; please look at the documentation.
You can share an image by saving the image to a file and sending the file to another user. This is done with the commands docker image save and docker image load. The saved file is a tar archive. For example, to save the image ubuntu-gcc to a file named ubuntu-gcc.tar:

$ docker image save ubuntu-gcc > /tmp/ubuntu-gcc.tar

You or another user can then recreate the image as follows:

$ docker image load < /tmp/ubuntu-gcc.tar
Finally, instead of directly sharing an image, you can share the Dockerfile you used to create that image. This is often the best method, because images can be very large, whereas Dockerfiles are typically quite small.
This section contains some additional tips and details that are not really needed on the first run-through. I am mostly using it as a place to write down some information that is not really all that useful.
So far, we have always run the docker run command with the options -id. Actually, this is an abbreviation for two different command line options -i and -d.
When running docker run without the -i option, it creates the container, then immediately shuts it down. Any attempt to restart the container will cause it to restart and then again shut down immediately (presumably because the container's main process, bash, exits as soon as it sees end-of-file on its standard input, which is closed when -i is not given). I have no idea what this is useful for or why it is the default behavior. As far as I can tell, the docker run command without the -i option is completely useless.
When running docker run without the -d option, it behaves the same as with the -d option, except that it does not put the process in the background. The docker run command will just sit there until you stop the container.
So far, we have always run the docker exec command with the options -it, which is just an abbreviation for two different command line options -i and -t.
The -i option causes the container process's standard input to be connected to the standard input of the shell from which you are calling docker exec. Without it, the shell or command running in the container will not be able to receive input from your keyboard.
The -t option causes a tty to be allocated to the process you are running in the container. Essentially, what this means is that if you run bash without the -t option, you will not get a prompt (but everything else still works the same). For example, compare this
$ docker exec -it tender_cray bash
root@8d90cdca45d8:/# echo "Hello"
Hello
root@8d90cdca45d8:/# exit
exit
$
to this:
$ docker exec -i tender_cray bash
echo "Hello"
Hello
exit
$
It is possible to create a container and log into it with a single command. This is done by giving the -t option to docker run, like this:
$ docker run -it ubuntu:20.04
root@4fb528ce7314:/# echo "Hello"
Hello
root@4fb528ce7314:/# exit
exit

As soon as the shell exits, the container will be stopped. However, it can still be restarted afterwards.
It is possible to create a container without running it, using this command:
$ docker create -i ubuntu:20.04
The container is created in the stopped state, and it is assigned a random name as usual. It can then be started using docker start with the name (or ID) that was assigned, which you can look up with docker ps -a:
$ docker start unruffled_nightingale
Processes running inside a docker container may use up some of the host operating system's resources. Specifically, if a process uses a lot of memory, it will of course be the host's memory that is actually filled up. Sometimes it is useful to limit the amount of memory a docker container can use. This is done with the -m option to docker run. For example, to limit the container's memory to 100 MB:
$ docker run -m 100m ubuntu:20.04
Some commands have abbreviations, or aliases, so that there is often more than one way to do the same thing. For example, docker images is an alias for docker image ls, docker ps is an alias for docker container ls, and docker run is an alias for docker container run.