Docker: from creating a container to deployment in Azure

Study Notes

Mounting
Volumes
Docker manages the mount point and the data.
Use a volume when you want Docker to manage the storage for you; volume drivers also make it possible to store a container's data on a remote host instead of locally.
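As a minimal sketch (the volume name my-data and the target path /data are just examples):

```shell
# Create a Docker-managed named volume
docker volume create my-data
# Mount it into a container; anything written under /data
# survives after this container is removed
docker run -it --mount type=volume,src=my-data,target=/data ubuntu bash
```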

Bind (need it here)
Use it if you want to keep your data locally on the host itself, or if you want to share data from the Docker host with the Docker container: configuration files, source code, etc.

# Works in PowerShell only; Command Prompt does not have pwd
docker run -it --mount type=bind,src="$(pwd)",target=/src ubuntu bash

tmpfs (not relevant here)
Best for sensitive data or information you do not want written to the host machine or to the container's writable layer; the data lives in memory only (storing secret keys, for example).
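A minimal sketch (the target path /secrets is just an example; tmpfs mounts require Linux containers):

```shell
# /secrets lives in host memory only; nothing is written to disk,
# and the data is gone when the container stops
docker run -it --mount type=tmpfs,target=/secrets ubuntu bash
```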

Data does not persist across container runs

Create container, create file and write a random number to it.
docker run -d ubuntu bash -c "shuf -i 1-10000 -n 1 -o /data.txt && tail -f /dev/null"
docker ps
Result:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
834744ade459 ubuntu "bash -c 'shuf -i 1-…" 3 minutes ago Up 3 minutes silly_clarke

Read the data.txt content
docker exec 834744ade459 cat /data.txt
# Note: "docker run -it ubuntu cat /data.txt" would NOT work here -
# it starts a brand-new container from the image, which has no data.txt
Result:
8072

Delete the container and create a new one from the same image
docker rm -f 834744ade459
docker run -it ubuntu ls
# (docker exec against 834744ade459 is no longer possible - that container was deleted)

Result:
bin dev home lib32 libx32 mnt proc run srv tmp var
boot etc lib lib64 media opt root sbin sys usr
No data.txt

Data (and code) persist in the container image

git clone https://github.com/docker/getting-started.git
cd getting-started/app

Create a Dockerfile
# syntax=docker/dockerfile:1
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
EXPOSE 3000

Create image getting-started
docker build -t getting-started .

Edit and change something (line #56, for example) in /app/src/static/js/app.js
Create new image
docker build -t getting-started1 .

Create and run container from getting-started
docker run -dp 3000:3000 getting-started
Open http://localhost:3000
You see the initial code

Create and run container from getting-started1
docker run -dp 3001:3000 getting-started1
Open http://localhost:3001
You see the changed code

  • Keep versions of code in container image.
  • Keep data in container volumes, see next.

Persist data between runs

Use the same git source as above and create a container.

git clone https://github.com/docker/getting-started.git
cd getting-started/app
Create a Dockerfile
# syntax=docker/dockerfile:1
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
EXPOSE 3000
Create image getting-started
docker build -t getting-started .

Using SQLite
By default, the todo app stores its data in a SQLite database at /etc/todos/todo.db in the container's filesystem.
With the database being a single file, if we can persist that file on the host and make it available to the next container, it should be able to pick up where the last one left off.
By creating a volume and attaching (mounting) it to the directory the data is stored in, we can persist the data.
As our container writes to the todo.db file, it will be persisted to the host in the volume.

docker volume create persistent-data-db
docker run -dp 3000:3000 --mount type=volume,src=persistent-data-db,target=/etc/todos getting-started

Load:
http://localhost:3000
Add some items
Stop & delete getting-started container

Run it again
docker run -dp 3000:3000 --mount type=volume,src=persistent-data-db,target=/etc/todos getting-started

Everything entered before is still there.
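If you want to see where Docker actually keeps this data on the host, docker volume inspect shows it (look at the Mountpoint field in the output):

```shell
# Inspect the named volume created above
docker volume inspect persistent-data-db
```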


MySQL volume and Compose
Compose is where it gets interesting.

Create the network and the MySQL volume

The run command below joins the network todo-app, so create it first:
docker network create todo-app

Multiline command in PowerShell (for Linux, replace ` with \)
docker run -d `
--network todo-app --network-alias mysql `
-v todo-mysql-data:/var/lib/mysql `
-e MYSQL_ROOT_PASSWORD=secret `
-e MYSQL_DATABASE=todos `
mysql:8.0

Connect to database:
docker exec -it 7f5291ecd67e mysql -u root -p
# or interactive
docker exec -it 7f5291ecd67e bash
#mysql -u root -p
..
..
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
| todos |
+--------------------+
5 rows in set (0.00 sec)
mysql> quit

Use Compose
Compose lets you create multiple containers plus their environment, and manage all of it from a single file.
You could call it infrastructure as code at a small scale.

Same git source like above.
In /app create a file docker-compose.yml.
When run, this will create the two services and the volume defined below.
services:
  app:
    image: node:18-alpine
    command: sh -c "yarn install && yarn run dev"
    ports:
      - 3000:3000
    working_dir: /app
    volumes:
      - ./:/app
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: secret
      MYSQL_DB: todos

  mysql:
    image: mysql:8.0
    volumes:
      - todo-mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: todos

volumes:
  todo-mysql-data:

Run
docker compose up -d

Everything you change in the source code locally is reflected in the app container immediately (on reload), because the project directory is bind-mounted into the container (./:/app).

To see database / tables:
docker exec -it 86230b97c184 bash
bash-4.4# mysql -u root -p
#Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 12
...
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
| todos |
+--------------------+
5 rows in set (0.00 sec)

mysql> use todos;
Database changed
mysql> show tables;
+-----------------+
| Tables_in_todos |
+-----------------+
| todo_items |
+-----------------+
1 row in set (0.01 sec)

mysql> select * from todo_items;
+--------------------------------------+------+-----------+
| id | name | completed |
+--------------------------------------+------+-----------+
| 68e9fc83-66b8-406e-b404-8899d5570c36 | sdfg | 0 |
| 1c441ab9-4d8d-45a8-865e-f2b49494e6cc | sdfg | 0 |
+--------------------------------------+------+-----------+
2 rows in set (0.00 sec)

mysql> delete from todo_items;
Query OK, 2 rows affected (0.01 sec)

mysql> select * from todo_items;
Empty set (0.01 sec)

mysql>quit


Shut it down:
docker compose down

The containers are stopped and deleted.
The images stay.
The volume with the data stays.

Create & start the containers again.
docker compose up -d
Everything is back - no data lost.

CI/CD pipeline

Terms

1. venv
Creation of virtual environments — Python 3.11.1 documentation
The venv module supports creating lightweight “virtual environments”, each with their own independent set of Python packages installed in their site directories. A virtual environment is created on top of an existing Python installation, known as the virtual environment’s “base” Python, and may optionally be isolated from the packages in the base environment, so only those explicitly installed in the virtual environment are available.
When used from within a virtual environment, common installation tools such as pip will install Python packages into a virtual environment without needing to be told to do so explicitly.
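The isolation is easy to see in a quick sketch (the directory name demo-venv is an arbitrary example; assumes python3 is on PATH):

```shell
# Create a virtual environment in ./demo-venv
python3 -m venv demo-venv
# The venv's interpreter reports the venv directory as its prefix,
# not the base installation (on Windows: demo-venv\Scripts\python.exe)
demo-venv/bin/python -c "import sys; print(sys.prefix)"
```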

2. PEP 405
Python Virtual Environments
This PEP proposes to add to Python a mechanism for lightweight “virtual environments” with their own site directories, optionally isolated from system site directories. Each virtual environment has its own Python binary (allowing creation of environments with various Python versions) and can have its own independent set of installed Python packages in its site directories but shares the standard library with the base installed Python.

Create a flask web server container
python3 -m venv .venv
# on Windows
.venv\Scripts\activate
python3 -m flask run
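The build step below assumes a Dockerfile exists in the project directory. A minimal sketch modeled on Docker's Python guide (the base image tag is an assumption; adjust to your Python version):

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.11-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
# Bind to 0.0.0.0 so the server is reachable from outside the container
CMD ["python3", "-m", "flask", "run", "--host=0.0.0.0"]
```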

Build the image, run it, and check
docker build --tag python-docker .
# the Flask server listens on 5000 inside the container
docker run -d -p 8000:5000 python-docker
curl localhost:8000
Result:
ok

Create data volumes and network
docker volume create mysql
docker volume create mysql_config
docker network create mysqlnet

Create and run containers

# Note: 3306 is taken by my local MySQL, so I replaced it with 3307
docker run --rm -d -v mysql:/var/lib/mysql -v mysql_config:/etc/mysql -p 3307:3306 --network mysqlnet --name mysqldb -e MYSQL_ROOT_PASSWORD=p@ssw0rd1 mysql
# Add mysql-connector-python to requirements.txt
docker build --tag python-docker-dev .

docker run --rm -d --network mysqlnet --name rest-server -p 8001:5000 python-docker-dev

That's all here

Push all on GitHub

1. Create a new GitHub repository using this template repository.
docker-python-learn

2. Open the repository
Settings, and go to Secrets > Actions.
Create a new secret named DOCKERHUB_USERNAME and your Docker ID (your username, see it top right) as value.

Copy Access Token
When logging in from your Docker CLI client, use this token as a password.

ACCESS TOKEN DESCRIPTION
clockboxci

ACCESS PERMISSIONS
Read, Write, Delete
To use the access token from your Docker CLI client:

1. Run docker login -u <YOUR_DOCKER_ID>

2. At the password prompt, enter the personal access token.
<GENERATED_PASSWORD>
WARNING: This access token will only be displayed once. It will not be stored and cannot be retrieved. Please be sure to save it now.

Commit.
When finished, you will have everything in Docker Hub (in the repository just created).
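As a sketch of what the pipeline could look like, here is a minimal GitHub Actions workflow using Docker's official actions. The secret name DOCKERHUB_TOKEN, the branch name, and the image tag are assumptions; the template repository ships its own workflow file, so treat this only as an illustration:

```yaml
name: ci

on:
  push:
    branches: [main]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Log in with the secrets created above
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      # Build the image and push it to Docker Hub
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ secrets.DOCKERHUB_USERNAME }}/docker-python-learn:latest
```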

Deploy a Docker container to Azure

In VS code.
docker login azure

Write down these details; you will need them:
1. Subscription you will use:
<AZURE_SUBSCRIPTION_ID>
2. Location:
<AZURE_LOCATION>
For East US you write eastus

Resource group. Create it before starting. It costs nothing if there is nothing in it.
<AZURE_RESOURCE_GROUP>

Context name: name it whatever you wish; here it is docker-context-eastus

In VS Code terminal:
docker context create aci docker-context-eastus --subscription-id <AZURE_SUBSCRIPTION_ID> --resource-group <AZURE_RESOURCE_GROUP> --location <AZURE_LOCATION>

Result:
Successfully created aci context "docker-context-eastus"

Switch to the new context, then run (create) a container in Azure:
docker context use docker-context-eastus
docker run -p 80:80 registry.hub.docker.com/library/nginx

List containers (in Azure)
docker ps

Result:
pensive-kowalevski - registry.hub.docker.com/library/nginx - Running 20.241.142.42:80->80/tcp

The IP is what I need:

Load in browser:
http://20.241.142.42

Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.

REMOVE IT FROM AZURE - it starts charging your credit card.
Delete the resource group if there is nothing in it except this container.

List contexts
docker context ls

Result:
https://kubernetes.docker.internal:6443 (default) - swarm
desktop-linux moby - npipe:////./pipe/dockerDesktopLinuxEngine
docker-context-eastus - aci - <AZURE_RESOURCE_GROUP>@<AZURE_LOCATION>
The last line is what I was looking for.

Set the default context back, otherwise I will not have access to any of my local docker containers.
docker context use default

How do you update the container (the code inside) from your local VS Code on commit?
You have everything you need so far. Let your imagination run wild, or learn Azure DevOps from Microsoft.
