This essentially means we will have an NFS volume; when the service is created on Docker Swarm, the cluster creates these volumes with path mapping. So whenever a container is spawned, restarted, or scaled onto another node, the container started on the new node will be aware of the volume and will find the data it expects.
It's also good to note that our NFS server will be a single point of failure, so it's worth looking at distributed storage options such as GlusterFS, XtreemFS, or Ceph.
NFS Server (10.8.133.83)
Rancher Convoy Plugin on Each Docker Node in the Swarm (10.8.133.83, 10.8.166.19, 10.8.142.195)
Setup NFS:
Set up the NFS server:
Update:
In order for the containers to be able to change permissions on the share, you need to set the following export options: (rw,async,no_subtree_check,no_wdelay,crossmnt,insecure,all_squash,insecure_locks,sec=sys,anonuid=0,anongid=0)
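A minimal sketch of the server and client setup, assuming a Debian/Ubuntu host, `/mnt/docker/volumes` as the shared path, and a `10.8.0.0/16` client subnet (the subnet is an assumption based on the node IPs above):

```shell
# On the NFS server (10.8.133.83): install the server and export the volume path.
sudo apt-get update && sudo apt-get install -y nfs-kernel-server
sudo mkdir -p /mnt/docker/volumes

# Add an /etc/exports entry with the options mentioned above, then re-export:
echo "/mnt/docker/volumes 10.8.0.0/16(rw,async,no_subtree_check,no_wdelay,crossmnt,insecure,all_squash,insecure_locks,sec=sys,anonuid=0,anongid=0)" | sudo tee -a /etc/exports
sudo exportfs -ra

# On each Docker node: install the NFS client and mount the export at the same path.
sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/docker/volumes
sudo mount -t nfs 10.8.133.83:/mnt/docker/volumes /mnt/docker/volumes
```

Add the mount to `/etc/fstab` as well if you want it to survive a reboot.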
#!/bin/sh
### BEGIN INIT INFO
# Provides:
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start daemon at boot time
# Description:       Enable service provided by daemon.
### END INIT INFO

dir="/usr/local/bin"
cmd="convoy daemon --drivers vfs --driver-opts vfs.path=/mnt/docker/volumes"
user="root"

name="convoy"
pid_file="/var/run/$name.pid"
stdout_log="/var/log/$name.log"
stderr_log="/var/log/$name.err"

get_pid() {
    cat "$pid_file"
}

is_running() {
    [ -f "$pid_file" ] && ps `get_pid` > /dev/null 2>&1
}

case "$1" in
    start)
        if is_running; then
            echo "Already started"
        else
            echo "Starting $name"
            cd "$dir"
            if [ -z "$user" ]; then
                sudo $cmd >> "$stdout_log" 2>> "$stderr_log" &
            else
                sudo -u "$user" $cmd >> "$stdout_log" 2>> "$stderr_log" &
            fi
            echo $! > "$pid_file"
            if ! is_running; then
                echo "Unable to start, see $stdout_log and $stderr_log"
                exit 1
            fi
        fi
    ;;
    stop)
        if is_running; then
            echo -n "Stopping $name.."
            kill `get_pid`
            for i in {1..10}
            do
                if ! is_running; then
                    break
                fi
                echo -n "."
                sleep 1
            done
            echo
            if is_running; then
                echo "Not stopped; may still be shutting down or shutdown may have failed"
                exit 1
            else
                echo "Stopped"
                if [ -f "$pid_file" ]; then
                    rm "$pid_file"
                fi
            fi
        else
            echo "Not running"
        fi
    ;;
    restart)
        $0 stop
        if is_running; then
            echo "Unable to stop, will not attempt to start"
            exit 1
        fi
        $0 start
    ;;
    status)
        if is_running; then
            echo "Running"
        else
            echo "Stopped"
            exit 1
        fi
    ;;
    *)
        echo "Usage: $0 {start|stop|restart|status}"
        exit 1
    ;;
esac

exit 0
Make the script executable:
$ chmod +x /etc/init.d/convoy
Enable the service on boot:
$ sudo systemctl enable convoy
Start the service:
$ sudo /etc/init.d/convoy start
This should be done on all the nodes.
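For Docker to discover Convoy as a volume plugin, each node also needs a plugin spec file pointing at Convoy's socket. A short sketch, assuming Convoy's default socket path of `/var/run/convoy/convoy.sock`:

```shell
# Register Convoy as a Docker volume plugin on each node.
# /var/run/convoy/convoy.sock is Convoy's default socket location.
sudo mkdir -p /etc/docker/plugins
echo "unix:///var/run/convoy/convoy.sock" | sudo tee /etc/docker/plugins/convoy.spec
```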
Externally Managed Convoy Volumes
One thing to note is that, after you delete a volume, you will still need to delete the directory from the path where it is hosted, as the application does not do that by itself.
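A minimal cleanup sketch, assuming a volume named `myvol` (a hypothetical name) and the vfs path used above:

```shell
# Delete the volume via Convoy, then remove its backing directory,
# since Convoy does not remove the data directory itself.
convoy delete myvol
sudo rm -rf /mnt/docker/volumes/myvol
```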
Creating the volume beforehand:
$ convoy create test1
test1
$ docker volume ls
DRIVER VOLUME NAME
convoy test1
$ cat /mnt/docker/volumes/config/vfs_volume_test1.json
{"Name":"test1","Size":0,"Path":"/mnt/docker/volumes/test1","MountPoint":"","PrepareForVM":false,"CreatedTime":"Mon Feb 05 13:07:05 +0000 2018","Snapshots":{}}
Viewing the volume from another node:
$ docker volume ls
DRIVER VOLUME NAME
convoy test1
Creating a Test Service:
Create a test service to test the data persistence, our docker-compose.yml:
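The compose file itself did not survive here; a minimal sketch consistent with the outputs below (the `apps` stack, an `apps_test` service on `alpine:edge`, and the `test1` Convoy volume created earlier) might look like this. The `command` writing `/data/file.txt` is an assumption, so there is something to persist:

```yaml
version: "3.3"

services:
  test:
    image: alpine:edge
    # Write a marker file, then stay alive so we can exec into the container.
    command: sh -c "echo ok > /data/file.txt && sleep 3600"
    volumes:
      - test1:/data

volumes:
  test1:
    external: true   # created beforehand with `convoy create test1`
```

Deployed with `docker stack deploy -c docker-compose.yml apps`, this yields the `apps_test` service scaled below.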
$ docker service scale apps_test=2
apps_test scaled to 2
Inspect to see if the new replica is on another node:
$ docker service ps apps_test
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
myrq2pc3z26z apps_test.1 alpine:edge scw-docker-1 Running Running 45 seconds ago
ny8t97l2q00c \_ apps_test.1 alpine:edge scw-docker-1 Shutdown Failed 51 seconds ago "task: non-zero exit (137)"
iojo7fpw8jir \_ apps_test.1 alpine:edge scw-docker-1 Shutdown Failed about a minute ago "task: non-zero exit (137)"
tt0nuusvgeki apps_test.2 alpine:edge scw-docker-2 Running Running 15 seconds ago
Log on to the new container and test whether the data is persisted:
$ docker exec -it apps_test.2.tt0nuusvgekirw1c5myu720ga sh
/ # cat /data/file.txt
ok
Delete the stack, redeploy, and have a look at the data we created earlier; you will notice that the data is persisted:
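A sketch of that check, with the `apps` stack name and `test1` volume path carried over as assumptions from the examples above:

```shell
# Remove the stack, then redeploy it from the same compose file.
docker stack rm apps
docker stack deploy -c docker-compose.yml apps

# The file written before the stack was removed is still on the NFS share:
cat /mnt/docker/volumes/test1/file.txt
```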