Related post: The poor man’s DB proxy.
This guide assumes you already have a swarm set up with at least two nodes (two docker hosts). If not, follow the official documentation to set up the swarm and nodes. We’ll assume the swarm consists of two nodes, node1 and node2.
We will set up a docker swarm stack named db. This stack will have two services:
- mydb1 will be the primary database, deployed onto node1
- mydb2 will be the replica database, deployed onto node2
To set up MySQL replication in Docker Swarm, we need the following steps:
- Prepare Docker swarm and nodes
- Deploy MySQL services on the swarm
- Set up MySQL instances
1. Prepare Docker swarm nodes
We need a docker network that connects the docker nodes and docker services together. If you don’t already have one, create it with the command below. This command will create an overlay network named swarmnet.
IMPORTANT: run this command from the swarm manager node.
$ docker network create --driver overlay swarmnet
Add labels to each docker node
When deploying services on docker swarm, if one node goes offline, the container for that service might get rescheduled onto a different swarm node. Since the MySQL services are stateful (they require persistent storage), we want to pin each service to a particular node so that upon restart, the service containers will always be deployed on the same node. One way to do this is with label-based placement constraints. We’ll use these constraints in the second step.
From the swarm manager node, run the following commands:
$ docker node update --label-add mydb1=true node1
$ docker node update --label-add mydb2=true node2
Create storage volumes on each node
This step is optional. When deploying the containers in step 2, we also have the option to bind-mount directories directly instead of mounting docker volumes.
Run the following commands on each node (i.e. run the following commands on node1, then node2):
$ docker volume create dbdata
$ docker volume create dblog
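If you opt for bind mounts instead of the named volumes above, the data and log volume entries for each service in the compose file (step 2) could be written roughly like this. The host paths below are illustrative assumptions, not paths from this guide:

```yaml
# Bind-mount alternative to the dbdata/dblog named volumes.
# The directories must already exist on the node the service is pinned to.
volumes:
  - /srv/mysql/data:/var/lib/mysql   # replaces the dbdata named volume
  - /srv/mysql/log:/var/log/mysql    # replaces the dblog named volume
```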
2. Deploy MySQL services on the swarm
We are only setting up one replica, mydb2, in this guide. However, subsequent replicas can be set up by following the same steps as mydb2.
MySQL config files
To simplify the docker swarm deployment process, we’ll make sure each node has the MySQL config files at /var/container-config/mysql.
The best way to keep these files in sync across nodes is to use a shared drive, then symlink it into place on each node:

ln -s /mnt/shared-drive/... /var/container-config/mysql
We’ll need two config files, one for mydb1 and one for mydb2.

mydb1.cnf:

[mysqld]
server-id=1
binlog-format=ROW
log-bin=mysql-bin
binlog-ignore-db=mysql
binlog-ignore-db=information_schema
binlog-ignore-db=performance_schema
binlog-ignore-db=sys
binlog-do-db=mydb

mydb2.cnf:

[mysqld]
server-id=2
log-bin=mysql-bin
relay-log=mysql-relay-bin
relay-log-index=mysql-relay-bin.index
log-slave-updates=1
read-only=1
replicate-ignore-db=mysql
replicate-ignore-db=information_schema
replicate-ignore-db=performance_schema
replicate-ignore-db=sys
replicate-do-db=mydb
A few noteworthy config items in these files:

- server-id: it is important to have a unique server ID for each MySQL instance. This means if we were to create a third replica called mydb3, we’d want a different config file for it, with a different server-id.
- log-bin: this file name will be used later to start the replication process.
With the config files in place, we just need a docker-compose file for the db stack. A few things of note:
- The placement constraint pins each service to its node.
- Each service gets its own config file, mapped to /etc/mysql/conf.d/my.cnf in the container.
- The MySQL port on mydb2 is exposed on a different host port, 4306. This is necessary because docker swarm’s ingress routing mesh publishes a port on every node, so two services cannot both publish 3306. If another replica mydb3 is set up, it will need to be exposed on a different port as well.
- We are reusing the swarmnet network and the dbdata and dblog volumes created in step 1 above.
# mydb.docker-compose.yaml
version: "3"

services:
  mydb1:
    image: mysql:8
    deploy:
      placement:
        constraints:
          - node.labels.mydb1 == true
      restart_policy:
        condition: any
    networks:
      - swarmnet
    volumes:
      - /var/container-config/mysql/mydb1.cnf:/etc/mysql/conf.d/my.cnf
      - dbdata:/var/lib/mysql
      - dblog:/var/log/mysql
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=SOMETHING_SUPER_SECURE

  mydb2:
    image: mysql:8
    deploy:
      placement:
        constraints:
          - node.labels.mydb2 == true
      restart_policy:
        condition: any
    networks:
      - swarmnet
    volumes:
      - /var/container-config/mysql/mydb2.cnf:/etc/mysql/conf.d/my.cnf
      - dbdata:/var/lib/mysql
      - dblog:/var/log/mysql
    ports:
      - "4306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=SOMETHING_SUPER_SECURE

networks:
  swarmnet:
    name: swarmnet
    external: true

volumes:
  dbdata:
    external: true
  dblog:
    external: true
With that, we can deploy these DB containers. Running the following command on the swarm manager node will deploy the mydb1 and mydb2 services onto the db stack of the swarm:
$ docker stack deploy -c mydb.docker-compose.yaml db
3. Set up MySQL instances
Remember, if you’re accessing mydb2 from outside the swarm, you need to connect to port 4306.
Create a “replica” user on the Primary instance
Run the following queries on mydb1:
CREATE USER 'repli'@'%' IDENTIFIED WITH mysql_native_password BY 'SECURE_REPLI_PASSWORD';
GRANT REPLICATION SLAVE ON *.* TO 'repli'@'%';
Create the database on the Primary instance
CREATE DATABASE mydb;
Get the replication checkpoint from the Primary instance
SHOW MASTER STATUS;
The output should look something like below. We’ll need the file name and position to set up replication on the replica.
File: mysql-bin.000003
Position: 573
Binlog_Do_DB: mydb
Binlog_Ignore_DB: mysql,information_schema,performance_schema,sys
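The file name and position can also be pulled out mechanically. The helper below is illustrative, not part of the guide; it assumes the vertical (\G) output format shown above, with a trimmed sample inlined in place of live output:

```shell
# Extract File and Position from captured `SHOW MASTER STATUS\G` output.
# In real use, capture the output with: mysql -e 'SHOW MASTER STATUS\G'
status="$(cat <<'EOF'
File: mysql-bin.000003
Position: 573
Binlog_Do_DB: mydb
EOF
)"
log_file=$(printf '%s\n' "$status" | awk -F': ' '/File:/ {print $2}')
log_pos=$(printf '%s\n' "$status" | awk -F': ' '/Position:/ {print $2}')
echo "SOURCE_LOG_FILE='$log_file', SOURCE_LOG_POS=$log_pos"
```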
Create the database on the Replica instance
CREATE DATABASE mydb;
Setup replication on the Replica instance
IMPORTANT: The file name and the log position from SHOW MASTER STATUS point to the latest state of the database and replication log. This means that any data written to the DB before this point won’t be replicated over. Use mysqldump or a similar utility to copy the existing data to the replica before starting replication below.
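As a sketch of that seeding step, using the host names, port, and password assumed in this guide: from inside the swarmnet network the primary resolves as mydb1, while the replica is reachable on host port 4306 from outside. The snippet only prints the combined command so you can review and adapt it before running:

```shell
# Build the seed command: dump mydb from the primary and load it into
# the replica. Printed rather than executed, for review.
dump="mysqldump -h mydb1 -u root -p --single-transaction --databases mydb"
load="mysql -h 127.0.0.1 -P 4306 -u root -p"
echo "$dump | $load"
```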
STOP REPLICA;

CHANGE REPLICATION SOURCE TO
  SOURCE_HOST = 'mydb1',
  SOURCE_USER = 'repli',
  SOURCE_PASSWORD = 'SECURE_REPLI_PASSWORD',
  SOURCE_LOG_FILE = 'mysql-bin.000003', -- From the previous step
  SOURCE_LOG_POS = 573;                 -- From the previous step

-- Optionally, SOURCE_RETRY_COUNT and SOURCE_CONNECT_RETRY are also useful to set.

START REPLICA;
SHOW REPLICA STATUS;
The replica status should look similar to this. The file name and log position will change as replication happens:
Replica_IO_State: Waiting for source to send event
Source_Host: mydb1
Source_User: repli
Source_Log_File: mysql-bin.000003
Read_Source_Log_Pos: 573
...
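To check health mechanically, the two columns to watch in SHOW REPLICA STATUS are Replica_IO_Running and Replica_SQL_Running; both should read Yes. A small illustrative check, again with an inlined sample standing in for live output:

```shell
# Inspect captured `SHOW REPLICA STATUS\G` output; both replication
# threads must be running for the replica to be healthy.
status="$(cat <<'EOF'
Replica_IO_Running: Yes
Replica_SQL_Running: Yes
EOF
)"
io=$(printf '%s\n' "$status" | awk -F': ' '/Replica_IO_Running/ {print $2}')
sql=$(printf '%s\n' "$status" | awk -F': ' '/Replica_SQL_Running/ {print $2}')
if [ "$io" = "Yes" ] && [ "$sql" = "Yes" ]; then
  echo "replication healthy"
else
  echo "replication broken: IO=$io SQL=$sql"
fi
```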