This Docker command cheat sheet is organized by starting point: whether you are working from a Dockerfile, an image, or a running container.
docker login
List the images currently on the system.
docker image ls
Save an image as a tar file on your local machine.
docker image save -o /home/my_computer/Desktop/rust-yew-env.tar your_username/rust-dashboard-base
Remove unused data (stopped containers, dangling images, unused networks, build cache).
docker system prune
Remove all images.
docker rmi $(docker images -a -q)
Remove specific containers by ID.
docker container rm [ID] [ID] [ID]
Remove all containers (running containers must be stopped first).
docker rm $(docker ps -aq)
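The removal one-liners above also have `prune` equivalents. A dry-run sketch: the `run` helper and `DRY_RUN` flag are illustrative, not part of docker; clear `DRY_RUN` to actually execute (which assumes docker is installed).

```shell
# Dry-run cleanup helper: with DRY_RUN="echo" the run() wrapper prints each
# command instead of executing it.
DRY_RUN="echo"
run() { $DRY_RUN "$@"; }

CMD1=$(run docker container prune -f)  # remove all stopped containers
CMD2=$(run docker image prune -a -f)   # remove all images not used by a container
CMD3=$(run docker volume prune -f)     # remove all unused local volumes
printf '%s\n%s\n%s\n' "$CMD1" "$CMD2" "$CMD3"
```
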
These are commands that work on images.
docker run -it <imagename> bash
docker run -it <imagename> sh
docker build -t <your_dockerhub_id>/<what_you_want_to_call_your_image>:latest .
docker image tag your_username/nginx your_username/nginx:testing
docker image build -t [REPONAME] .
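The `-t` argument is just a string of the form `user/name:tag`. A sketch that assembles it from variables (the account and image names here are placeholders):

```shell
# Assemble the image reference used by `docker build -t` from its parts.
DOCKERHUB_ID="your_dockerhub_id"   # placeholder Docker Hub account
IMAGE_NAME="my-image"              # placeholder image name
TAG="latest"
IMAGE_REF="${DOCKERHUB_ID}/${IMAGE_NAME}:${TAG}"
echo "docker build -t ${IMAGE_REF} ."
```
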
These are commands for when you already have a built image or a running container.
docker start goofy_hopper
docker run --rm -v $(pwd):/usr/share/nginx/html -p 8080:80 nginx:latest
docker run --rm -d -v $(pwd):/usr/share/nginx/html -p 8080:80 nginx:latest
docker exec -it <name of container> /bin/bash
Examples:
docker exec -it goofy_hopper bash
#or
docker exec -it goofy_hopper sh
Tags are labels that point to an image ID.
docker tag local-image:tagname new-repo:tagname
docker push new-repo:tagname
docker push capcombravo/rust-dashboard-base:tagname
docker image tag nginx your_username/nginx
docker image push your_username/nginx
If the push is denied, you may need to log in again.
docker login
docker container run -d -p 80:80 --name nginx nginx
Note: -p 80:80 maps host port 80 to container port 80; without -p, the port is not published to the host.
docker container run -d -p 8080:80 --name apache httpd
docker container run -d -p 27017:27017 --name mongo mongo
docker container run -d -p 3306:3306 --name mysql --env MYSQL_ROOT_PASSWORD=123456 mysql
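The mysql one-liner above maps onto a compose service one-to-one. A sketch (same throwaway password as the example):

```yaml
version: "3.7"
services:
  mysql:
    image: mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: "123456"
```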
Common environment variables to set in a Dockerfile for Python:
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONUNBUFFERED=1 \
PIP_NO_CACHE_DIR=yes \
LANG=C.UTF-8 \
LC_ALL=C.UTF-8 \
C_FORCE_ROOT=1 \
APP_HOME=/app
Variable | Meaning
---|---
PYTHONDONTWRITEBYTECODE=1 | Prevents Python from writing .pyc (compiled bytecode) files to disk. Keeps the container filesystem cleaner.
PYTHONUNBUFFERED=1 | Forces Python output straight to the terminal without buffering. Important for real-time logs (especially with docker logs).
PIP_NO_CACHE_DIR=yes | Tells pip not to store downloaded packages in a cache. Reduces the size of your final Docker image.
APP_HOME=/app | Defines the path to your application inside the container. Useful later (e.g., for WORKDIR, COPY, etc.).
LANG=C.UTF-8 | Sets the default system locale to UTF-8 encoding. Important for handling Unicode correctly.
LC_ALL=C.UTF-8 | Ensures all locale settings fall back to the C.UTF-8 locale (overrides other locale environment variables).
C_FORCE_ROOT=1 | If you're using Celery and running as root (common inside containers), this forces Celery to allow it; normally Celery refuses to run as root without it.
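A minimal Dockerfile sketch combining the variables above (the base image tag, requirements.txt, and main.py are illustrative):

```dockerfile
FROM python:3.11-slim

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    PIP_NO_CACHE_DIR=yes \
    APP_HOME=/app

# APP_HOME defined above doubles as the working directory.
WORKDIR $APP_HOME
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]
```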
Example 1:
FROM python:3.7-slim-stretch
LABEL maintainer="Skystone Analytics <skystoneanalytics@gmail.com>"
RUN apt-get update \
&& apt-get install -y python3-dev gcc
RUN pip install tweepy pymongo
WORKDIR /steam_mongo_test
COPY . .
CMD ["python", "server-stream.py"]
Example 2:
Build and run a Dockerfile
https://hub.docker.com/_/rust?tab=description
If you have the below Dockerfile you would execute:
FROM rust:slim-buster
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y apt-transport-https git cmake jq build-essential
RUN git clone https://github.com/KomodoPlatform/atomicDEX-API --branch mm2.1 --single-branch
RUN cd atomicDEX-API && cargo build --features native -vv
Then build and run it:
docker build -t dex1:latest .
docker run -it --name dex1 dex1:latest bash
#format is
docker run -it --name <what you want to call built container> <tag of image you just built> bash
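The two-step format above, parameterized (the names are placeholders matching the example):

```shell
# Build an image, then start an interactive container from it.
IMAGE_TAG="dex1:latest"        # tag for the built image
CONTAINER_NAME="dex1"          # name for the container
BUILD_CMD="docker build -t ${IMAGE_TAG} ."
RUN_CMD="docker run -it --name ${CONTAINER_NAME} ${IMAGE_TAG} bash"
printf '%s\n%s\n' "$BUILD_CMD" "$RUN_CMD"
```
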
version: "3.7"
services:
db:
image: mongo:3.6.4
networks:
- backend
restart: unless-stopped
application:
build:
context: ./application
dockerfile: Dockerfile
restart: unless-stopped
networks:
- backend
depends_on:
- db
- server
- worker
worker:
build:
context: ./worker
dockerfile: Dockerfile
restart: unless-stopped
networks:
- backend
depends_on:
- db
- server
server:
build:
context: ./server
dockerfile: Dockerfile
restart: unless-stopped
environment:
- ACCESS_TOKEN=${ACCESS_TOKEN}
- ACCESS_SECRET=${ACCESS_SECRET}
- CONSUMER_KEY=${CONSUMER_KEY}
- CONSUMER_SECRET=${CONSUMER_SECRET}
networks:
- backend
depends_on:
- db
networks:
backend:
driver: bridge
Example .env file:
MYSQL_DATABASE=ninja
MYSQL_ROOT_PASSWORD=pwd
APP_DEBUG=0
APP_URL=http://localhost:8000
APP_KEY=SomeRandomStringSomeRandomString
APP_CIPHER=AES-256-CBC
DB_USERNAME=root
DB_PASSWORD=pwd
DB_HOST=db
DB_DATABASE=ninja
MAIL_HOST=mail.service.host
MAIL_USERNAME=username
MAIL_PASSWORD=password
MAIL_DRIVER=smtp
MAIL_FROM_NAME="My name"
MAIL_FROM_ADDRESS=user@mail.com
This allows you to use a .env file in the same directory as docker-compose.yml.
But you can pass in only specific values using the environment: array.
environment:
- ACCESS_TOKEN=${ACCESS_TOKEN}
- ACCESS_SECRET=${ACCESS_SECRET}
- CONSUMER_KEY=${CONSUMER_KEY}
- CONSUMER_SECRET=${CONSUMER_SECRET}
This uses the environment: array.
environment:
RACK_ENV: development
SHOW: 'true'
SESSION_SECRET:
environment:
- RACK_ENV=development
- SHOW=true
- SESSION_SECRET
This uses the env_file array.
A single .env file in the same directory as the docker-compose.yml file.
env_file:
- .env
Specify multiple .env files.
env_file:
- ./common.env
- ./apps/web.env
- /opt/secrets.env
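An env file is plain KEY=VALUE lines. For a quick local check outside compose you can source one in a shell. A sketch: the /tmp path and values are illustrative, and it only works for values without spaces or quoting.

```shell
# Write a throwaway env file, then export its contents into this shell.
cat > /tmp/example.env <<'EOF'
ACCESS_TOKEN=abc123
CONSUMER_KEY=xyz789
EOF

set -a               # auto-export every variable assigned from here on
. /tmp/example.env
set +a

echo "$ACCESS_TOKEN"
```
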
The bitcoin_data volume is configured to use the local driver, which means that it will be stored on the local host system outside of the container.
The exact location on the local host system where the bitcoin_data volume will be saved depends on the Docker installation and configuration.
By default, Docker stores volumes in the Docker data directory on the host system. The location of the Docker data directory may vary depending on the operating system:
On Linux: the default location is /var/lib/docker/volumes/.
On macOS and Windows with Docker Desktop: volumes are stored inside the Docker Desktop Linux VM, so /var/lib/docker/volumes/ is a path inside the VM, not directly visible on the host filesystem.
So, the bitcoin_data volume will be saved in the respective location on the local host system based on the Docker installation and configuration being used.
Delete a volume from the local host machine.
docker volume rm VOLUME_NAME
List current docker volumes
docker volume ls
Size of a docker volume (UsageData may be null in plain inspect output; docker system df -v also reports volume sizes).
docker volume inspect VOLUME_NAME | jq '.[0].UsageData.Size'
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-debian-10
#!/bin/bash
sudo apt install apt-transport-https ca-certificates curl gnupg2 software-properties-common
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
sudo apt update
apt-cache policy docker-ce
sudo apt install docker-ce docker-compose
sudo systemctl status docker
sudo usermod -aG docker ${USER}
su - ${USER}
id -nG
# Add a user to the group if you need it
#sudo usermod -aG docker username
# Verify the client is installed.
docker
# Notes:
# Run docker-compose with sudo if your user is not in the docker group.
#!/bin/bash
sudo pacman -Syu docker docker-compose
sudo systemctl enable docker
sudo systemctl restart docker
# docker-compose is a client-side binary, not a systemd service, so there is no unit to enable.
sudo usermod -aG docker ${USER}
su - ${USER}
id -nG
https://docs.docker.com/engine/install/ubuntu/
sudo apt-get update
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg \
lsb-release
curl -fsSL \
https://download.docker.com/linux/ubuntu/gpg \
| sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
"deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
| sudo tee /etc/apt/sources.list.d/docker.list \
> /dev/null
# Install docker engine
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
# list versions available in added repo
#apt-cache madison docker-ce
# add version string
#sudo apt-get install docker-ce=<VERSION_STRING> docker-ce-cli=<VERSION_STRING> containerd.io
sudo systemctl status docker
# Add your user to the docker group so you don't have to type sudo every time you run docker.
sudo usermod -aG docker ${USER}
su - ${USER}
id -nG
sudo docker run hello-world
sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
sudo apt update
apt-cache policy docker-ce
sudo apt install docker-ce
sudo systemctl status docker
# Add your user to the docker group so you don't have to type sudo every time you run docker.
sudo usermod -aG docker ${USER}
su - ${USER}
id -nG
sudo yum check-update
curl -fsSL https://get.docker.com/ | sh
sudo systemctl start docker
# Verify docker is running.
sudo systemctl status docker
sudo systemctl enable docker
# Add your user to the docker group.
sudo usermod -aG docker $(whoami)
redis_name=redis-instance
docker run -p 6379:6379 --name $redis_name -d redis
docker exec -it $redis_name redis-cli ping
version: '3.3'
services:
db:
image: mysql:5.7
volumes:
- db_data:/var/lib/mysql
restart: always
environment:
MYSQL_ROOT_PASSWORD: somewordpress
MYSQL_DATABASE: wordpress
MYSQL_USER: wordpress
MYSQL_PASSWORD: wordpress
wordpress:
depends_on:
- db
image: wordpress:latest
ports:
- "8000:80"
restart: always
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_USER: wordpress
WORDPRESS_DB_PASSWORD: wordpress
WORDPRESS_DB_NAME: wordpress
volumes:
db_data: {}
Wiki-js docker-compose from documentation
version: "3"
services:
db:
image: postgres:11-alpine
environment:
POSTGRES_DB: wiki
POSTGRES_PASSWORD: wikijsrocks
POSTGRES_USER: wikijs
logging:
driver: "none"
restart: unless-stopped
volumes:
- db-data:/var/lib/postgresql/data
wiki:
image: requarks/wiki:2
depends_on:
- db
environment:
DB_TYPE: postgres
DB_HOST: db
DB_PORT: 5432
DB_USER: wikijs
DB_PASS: wikijsrocks
DB_NAME: wiki
restart: unless-stopped
ports:
- "80:3000"
volumes:
db-data:
Wiki-js docker-compose latest
version: "3"
services:
db:
image: postgres:11-alpine
environment:
POSTGRES_DB: wiki
POSTGRES_PASSWORD: wikijsrocks
POSTGRES_USER: wikijs
logging:
driver: "none"
restart: unless-stopped
volumes:
- db-data:/var/lib/postgresql/data
wiki:
# Latest image available.
image: requarks/wiki:beta
#image: requarks/wiki:canary
depends_on:
- db
environment:
DB_TYPE: postgres
DB_HOST: db
DB_PORT: 5432
DB_USER: wikijs
DB_PASS: wikijsrocks
DB_NAME: wiki
restart: unless-stopped
ports:
- "80:3000"
volumes:
db-data:
ref:
- https://docs.gitlab.com/omnibus/docker/#install-gitlab-using-docker-compose
wget https://gitlab.com/gitlab-org/omnibus-gitlab/raw/master/docker/docker-compose.yml
web:
image: 'gitlab/gitlab-ce:latest'
restart: always
hostname: 'gitlab.example.com'
environment:
GITLAB_OMNIBUS_CONFIG: |
external_url 'https://gitlab.example.com'
# Add any other gitlab.rb configuration here, each on its own line
ports:
- '80:80'
- '443:443'
- '22:22'
volumes:
- '/srv/gitlab/config:/etc/gitlab'
- '/srv/gitlab/logs:/var/log/gitlab'
- '/srv/gitlab/data:/var/opt/gitlab'
Below is another docker-compose.yml example with GitLab running on a custom HTTP and SSH port.
Notice how the GITLAB_OMNIBUS_CONFIG variables match the ports section:
web:
image: 'gitlab/gitlab-ce:latest'
restart: always
hostname: 'gitlab.example.com'
environment:
GITLAB_OMNIBUS_CONFIG: |
external_url 'http://gitlab.example.com:8929'
gitlab_rails['gitlab_shell_ssh_port'] = 2224
ports:
- '8929:8929'
- '2224:22'
volumes:
- '/srv/gitlab/config:/etc/gitlab'
- '/srv/gitlab/logs:/var/log/gitlab'
- '/srv/gitlab/data:/var/opt/gitlab'
NOTE: Uses DigitalOcean's managed SQL instance.
While this shows configuration for a DigitalOcean managed database, any managed SQL instance should work.
version: '2'
services:
web:
image: odoo:12.0
depends_on:
- pg-db-1-do-user-4185874-0.b.db.ondigitalocean.com
ports:
- "8069:8069"
environment:
- HOST=pg-db-1-do-user-4185874-0.b.db.ondigitalocean.com
- USER=nonadmin
- PASSWORD=ulq1vdxnh3ave8rh
pg-db-1-do-user-4185874-0.b.db.ondigitalocean.com:
image: postgres:10
ports:
- 25060:5432
environment:
- POSTGRES_PORT=25060
- POSTGRES_DB=postgres
- POSTGRES_PASSWORD=ulq1vdxnh3ave8rh
- POSTGRES_USER=nonadmin
This script will download and run titra:
#!/bin/bash
curl -L https://github.com/faburem/titra/raw/master/docker-compose.yml | ROOT_URL=http://localhost:3000 docker-compose -f - up
Local postgres
#!/bin/bash
volume_dir=postgres_data
postgres_container_name=merc-postgres
password=alexj
# 1. Create a data folder in a known location
#sudo rm -rf ${HOME}/$volume_dir/
mkdir ${HOME}/$volume_dir/
# 2. Run the postgres image
docker run \
-d \
--name $postgres_container_name \
-e POSTGRES_PASSWORD=$password \
-v ${HOME}/$volume_dir/:/var/lib/postgresql/data \
-p 5432:5432 postgres
# 3. Check that the container is running
docker ps
# 4. Open a shell inside the container
docker exec -it $postgres_container_name bash
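The run command above implies these connection parameters (user and database default to postgres in the official image; the password is the script's throwaway value):

```shell
# Assemble the connection URI implied by the docker run flags above.
PGUSER="postgres"        # default superuser in the postgres image
PGPASSWORD="alexj"       # value of POSTGRES_PASSWORD in the script
PGHOST="localhost"
PGPORT="5432"            # host side of -p 5432:5432
PGURI="postgresql://${PGUSER}:${PGPASSWORD}@${PGHOST}:${PGPORT}/postgres"
echo "psql $PGURI"
```
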
Setup pgadmin
#!/bin/bash
docker_image=dpage/pgadmin4
pgadmin_container_name=ex-pgadmin
pg_email=email@gmail.com
pg_password=qwerty
docker pull $docker_image
docker run \
-p 80:80 \
-e PGADMIN_DEFAULT_EMAIL=$pg_email \
-e PGADMIN_DEFAULT_PASSWORD=$pg_password \
--name $pgadmin_container_name \
-d $docker_image
#check running
docker ps
# check ip addr of postgres container
docker inspect merc-postgres -f "{{json .NetworkSettings.Networks }}"
#### login creds:
# http://localhost:80
# username: email@gmail.com
# password: qwerty
## postgres ip addr from output
# IPAddress":"111.13.0.2"
Setup local MongoDB Database
docker-compose up
~/Dockerfile.mongodb
FROM mongo:latest
# install Python 3
RUN apt-get update && apt-get install -y python3 python3-pip
RUN apt-get -y install python3.8-dev
RUN pip3 install pymongo
EXPOSE 27017
~/docker-compose.yml
version: '3.7'
services:
mongo_db:
build:
context: ./
dockerfile: Dockerfile.mongodb
volumes:
- $PWD/data_mongo:/data/db
- $PWD/data_mongo:/var/www/html
ports:
- 27017:27017
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: 123
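The compose file above implies this connection string (root/123 are the example's throwaway credentials):

```shell
# Mongo connection URI matching the MONGO_INITDB_* values above.
MONGO_USER="root"
MONGO_PASS="123"
MONGO_URI="mongodb://${MONGO_USER}:${MONGO_PASS}@localhost:27017/"
echo "$MONGO_URI"
```
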
https://www.elastic.co/guide/en/kibana/current/docker.html
https://rigorousthemes.com/blog/best-kibana-dashboard-examples/
For local dev:
docker network create elastic
docker pull docker.elastic.co/elasticsearch/elasticsearch:8.3.3
docker run --name es-node01 --net elastic -p 9200:9200 -p 9300:9300 -t docker.elastic.co/elasticsearch/elasticsearch:8.3.3
version: '2'
services:
kibana:
image: docker.elastic.co/kibana/kibana:8.3.3
environment:
SERVER_NAME: kibana.example.org
ELASTICSEARCH_HOSTS: '["http://es01:9200","http://es02:9200","http://es03:9200"]'
Example 1:
https://jinnabalu.com/Elasticsearch-Kibana-using-Docker-Compose/
https://jinnabalu.com/Elasticsearch-Single-Node-using-Docker-Compose/
version: '3'
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.3.1
container_name: elasticsearch
environment:
- node.name=ws-es-node
- discovery.type=single-node
- cluster.name=ws-es-data-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
# - xpack.security.enabled='false'
# - xpack.monitoring.enabled='false'
# - xpack.watcher.enabled='false'
# - xpack.ml.enabled='false'
# - http.cors.enabled='true'
# - http.cors.allow-origin="*"
# - http.cors.allow-methods=OPTIONS, HEAD, GET, POST, PUT, DELETE
# - http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type, Content-Length
# - logger.level: debug
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- vibhuviesdata:/usr/share/elasticsearch/data
ports:
- 9200:9200
- 9300:9300
networks:
- esnet
kibana:
image: docker.elastic.co/kibana/kibana:7.3.1
container_name: kibana
environment:
SERVER_NAME: 127.0.0.1
ELASTICSEARCH_HOSTS: http://elasticsearch:9200
# XPACK_GRAPH_ENABLED: false
# XPACK_ML_ENABLED: false
# XPACK_REPORTING_ENABLED: false
# XPACK_SECURITY_ENABLED: false
# XPACK_WATCHER_ENABLED: false
ports:
- "5601:5601"
networks:
- esnet
depends_on:
- elasticsearch
restart: "unless-stopped"
volumes:
vibhuviesdata:
driver: local
networks:
esnet:
FastAPI with MongoDB
version: '3'
services:
mongodb:
image: mongo:latest
container_name: fastapi-db
hostname: mongodb
restart: always
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: example
ports:
- 27017-27019:27017-27019
volumes:
- ./data:/data/db
app:
container_name: fastapi-app
build: .
command: uvicorn main:app --host 0.0.0.0 --reload
env_file: .env.develop
volumes:
- .:/fastapi-proj
depends_on:
- mongodb
ports:
- "8000:8000"
DokuWiki
version: '2'
services:
dokuwiki:
image: 'bitnami/dokuwiki:0'
ports:
- '80:80'
- '443:443'
volumes:
- 'dokuwiki_data:/bitnami'
volumes:
dokuwiki_data:
driver: local
MediaWiki with MariaDB
# Access via "http://localhost:8080"
# (or "http://$(docker-machine ip):8080" if using docker-machine)
version: '3'
services:
mediawiki:
image: mediawiki
restart: always
ports:
- 8080:80
links:
- database
volumes:
- /var/www/html/images
# After initial setup, download LocalSettings.php to the same directory as
# this yaml and uncomment the following line and use compose to restart
# the mediawiki service
# - ./LocalSettings.php:/var/www/html/LocalSettings.php
database:
image: mariadb
restart: always
environment:
# @see https://phabricator.wikimedia.org/source/mediawiki/browse/master/includes/DefaultSettings.php
MYSQL_DATABASE: my_wiki
MYSQL_USER: wikiuser
MYSQL_PASSWORD: example
MYSQL_RANDOM_ROOT_PASSWORD: 'yes'
FROM grafana/grafana:latest
# or
#FROM grafana/grafana:master-ubuntu
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
Solution:
All Deps:
RUN apt-get update
RUN apt-get install ffmpeg libsm6 libxext6 -y
Specific Deps:
apt-get update && apt-get install -y libgl1
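The same fix expressed as a Dockerfile layer (a sketch: the base image is illustrative, and libglib2.0-0 is a commonly co-required package for OpenCV that you may not need):

```dockerfile
FROM python:3.11-slim

# libgl1 provides libGL.so.1; clean the apt lists to keep the layer small.
RUN apt-get update \
    && apt-get install -y --no-install-recommends libgl1 libglib2.0-0 \
    && rm -rf /var/lib/apt/lists/*
```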
failed to create endpoint optimistic_spence on network bridge: failed to add the host (veth9fc3a03) <=> sandbox (veth15abfd6) pair interfaces: operation not supported
Solution:
https://stackoverflow.com/questions/52059451/docker-build-error-failed-to-add-the-host
Workaround:
docker build . --network=host