Ionel Cristian Mărieș
- Concepts
- Building images
- Running apps inside a container
Docker:
- A high-level management tool for containers that hides the gnarly Linux APIs.
- Relatively new (since 2013).
- Lots of competition. But Docker is the most popular. More resources, better docs, more tooling.
Image: A snapshot (readonly).
Container: An execution environment.
Engine: It's just the docker daemon - a service that manages containers.
Running images:

docker run --rm -it ubuntu:xenial bash
# --rm      don't keep stuff around
# -it       short for --interactive and --tty
# ubuntu    image name
# :xenial   optional image tag, like a version
# bash      optional command to run; the image usually has a default

For building images there are two ways. The first:

docker build path
An example Dockerfile:

FROM ubuntu:xenial
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        ca-certificates curl \
        strace gdb lsof locate net-tools htop \
        python2.7-dbg python2.7 \
    && rm -rf /var/lib/apt/lists/*
RUN curl -fSL 'https://bootstrap.pypa.io/get-pip.py' | python2.7 -
Alternate path to Dockerfile:
docker build -f path/Dockerfile path
The Dockerfile must still be inside path (the build context).
No mounts at build time! Caching for apt or pip becomes tricky; the usual workaround is a local caching proxy.
Bunch of commands: RUN, ADD (don't use), COPY, EXPOSE, VOLUME, ENV, ARG, WORKDIR, USER, ENTRYPOINT, CMD etc.
In a Dockerfile:
ENTRYPOINT is rarely needed; avoid it.

How ENTRYPOINT and CMD combine (for instance, CMD=["/entrypoint.sh"] with no ENTRYPOINT just runs /entrypoint.sh):

| | No ENTRYPOINT | ENTRYPOINT exec_entry p1_entry | ENTRYPOINT ["exec_entry", "p1_entry"] |
|---|---|---|---|
| No CMD | error, not allowed | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry |
| CMD ["exec_cmd", "p1_cmd"] | exec_cmd p1_cmd | /bin/sh -c exec_entry p1_entry exec_cmd p1_cmd | exec_entry p1_entry exec_cmd p1_cmd |
| CMD ["p1_cmd", "p2_cmd"] | p1_cmd p2_cmd | /bin/sh -c exec_entry p1_entry p1_cmd p2_cmd | exec_entry p1_entry p1_cmd p2_cmd |
| CMD exec_cmd p1_cmd | /bin/sh -c exec_cmd p1_cmd | /bin/sh -c exec_entry p1_entry /bin/sh -c exec_cmd p1_cmd | exec_entry p1_entry /bin/sh -c exec_cmd p1_cmd |
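For example, the exec-form row and column from the table above (the command names are the table's placeholders, purely illustrative):

FROM ubuntu:xenial
ENTRYPOINT ["exec_entry", "p1_entry"]
CMD ["exec_cmd", "p1_cmd"]
# the container starts: exec_entry p1_entry exec_cmd p1_cmd
# arguments passed to `docker run` replace only the CMD part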
Containers run as root by default. Mounts become inconvenient, if not dangerous:

ionel@newbox:~$ docker run --rm -it -v $PWD:/stuff ubuntu
root@5b28926fb3a8:/# touch /stuff/foobar
root@5b28926fb3a8:/# exit
ionel@newbox:~$ ls -al foobar
-rw-r--r-- 1 root root 0 May 16 10:26 foobar
Two ways to deal with it: pass --user=$(id -u):$(id -g) to docker run, or bake a matching user into the image:
ARG USER
ARG UID
ARG GID
RUN echo "Creating user: $USER ($UID:$GID)" \
    && groupadd --system --gid=$GID $USER \
    && useradd --system --create-home --gid=$GID --uid=$UID $USER
WORKDIR /home/$USER
USER $USER
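To line the container user up with your host user, the build args above can be supplied at build time (the my-image tag is just a placeholder):

docker build \
    --build-arg USER=$USER \
    --build-arg UID=$(id -u) \
    --build-arg GID=$(id -g) \
    -t my-image .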
You can have this in your Dockerfile:
COPY stuff /foo/bar
VOLUME /foo

Files from the image become the initial data for the volume. Note the order: COPY must come before VOLUME, since build steps that change a volume path after it's been declared are discarded.
You can mount volumes from a different container:
docker run --volumes-from other-container my-image
You can make containers depend on each other. Example: a web container that depends on postgres:

docker run --name=pg postgres:9.5
docker run --link=pg web
Then we can connect to pg from inside the web container (Docker provides a DNS server).
A nicer way, using docker-compose:
version: '2'
services:
  web:
    build: 'docker/web'
    ports:
      - '8080:80'
    links:
      - 'pg'
  pg:
    image: 'postgres:9.5'
Services aren't ready right away. Solutions:
Let Docker restart the web container till it works 😒
Use an orchestration system that has healthchecks (docker-compose doesn't have them).
Or just wait for services to be ready. Several tools: dockerize, wait-for-it, holdup. Example entrypoint script:
#!/bin/sh
set -eux
holdup tcp://pg:5432 -- uwsgi ...
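If you'd rather not pull in an extra tool, the TCP case can be hand-rolled; a minimal Python sketch (the function name and timeout defaults are my own, not from the talk):

```python
import socket
import time


def wait_for_tcp(host, port, timeout=60.0, interval=0.5):
    """Retry a TCP connect until it succeeds or the deadline passes."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            # a successful connect means the service is accepting connections
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            if time.monotonic() >= deadline:
                return False  # gave up: the service never came up
            time.sleep(interval)
```

Tools like holdup also handle unix sockets and HTTP checks; this sketch only covers TCP.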
Ways to allow changing configuration without rebuilding images:
Environment variables. You have to change the app to take its settings from os.environ (aka the 12-factor way).
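For example, a settings module might read (the variable names and defaults here are made up for illustration):

```python
import os

# 12-factor style: configuration comes from the environment, with defaults
APP_DEBUG = os.environ.get("APP_DEBUG", "false").lower() in ("1", "true", "yes")
DATABASE_URL = os.environ.get("DATABASE_URL", "postgres://pg:5432/app")
```

Then docker run -e APP_DEBUG=true changes behavior without rebuilding the image.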
Volumes:
RUN mkdir /etc/app
# default data for the volume
COPY settings.conf /etc/app/
VOLUME /etc/app
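At run time the baked-in defaults can then be replaced by mounting over the volume path (paths and image name are illustrative):

docker run -v $PWD/settings.conf:/etc/app/settings.conf my-image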