# Dokploy + Docker Swarm Homelab Setup Instructions

This guide walks through setting up a fresh, multi-node Docker Swarm cluster that uses Dokploy for quick web app deployment and easy hosting of infrastructure services (such as Pi-hole and MinIO), with shared storage served over NFS from your NAS node.
## 1. Prepare Environment

- Choose a primary node (any capable Linux server).
- Identify your NAS node (high-capacity storage).
- Gather all SSH credentials.
- Ensure all nodes have Docker installed:

  ```sh
  curl -fsSL https://get.docker.com | sh
  ```
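If you manage more than a couple of nodes, the install step can be scripted over SSH. A sketch that only *prints* the commands so you can review them before piping to a shell; the hostnames and the `root` user are placeholder assumptions:

```sh
#!/bin/sh
# Hypothetical hostnames -- replace with your own nodes.
NODES="nas-node-01 node-light-01 node-light-02"

# Build and print the per-node install commands for review
# before actually running them over SSH.
CMDS=""
for node in $NODES; do
  CMD="ssh root@$node 'curl -fsSL https://get.docker.com | sh'"
  echo "$CMD"
  CMDS="$CMDS$CMD
"
done
```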
## 2. Initialize Docker Swarm Cluster

On your primary node:

```sh
docker swarm init --advertise-addr <PRIMARY_NODE_IP>
```

On each additional node, run the join command printed by the previous step, e.g.:

```sh
docker swarm join --token <TOKEN> <PRIMARY_NODE_IP>:2377
```

If you misplace the token, reprint the join command at any time with `docker swarm join-token worker` on the manager.
## 3. Label Nodes for Placement Constraints

On your primary node, label the nodes:

```sh
docker node update --label-add role=storage nas-node-01
docker node update --label-add storage=high nas-node-01
docker node update --label-add infra=true nas-node-01
docker node update --label-add role=compute node-light-01
```

(Replace node names as appropriate.) Verify the labels with `docker node inspect --format '{{ .Spec.Labels }}' nas-node-01`.
## 4. Set Up Dokploy

On the primary node:

```sh
curl -sSL https://dokploy.com/install.sh | sh
```

- The Dokploy UI will be available on port 3000.
- Create the admin account on first visit and use a strong password.
## 5. Set Up Shared NFS Storage from Your NAS

On your NAS node:

- Install the NFS server (Debian/Ubuntu):

  ```sh
  sudo apt install nfs-kernel-server
  ```

- Export a directory by adding the following to `/etc/exports`:

  ```
  /mnt/storage/docker-data *(rw,sync,no_subtree_check)
  ```

- Re-export and restart NFS:

  ```sh
  sudo exportfs -ra
  sudo systemctl restart nfs-kernel-server
  ```
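The exports entry can also be added idempotently from a script. A sketch that targets a scratch file by default (a placeholder path, so it can be dry-run safely); point `EXPORTS_FILE` at `/etc/exports` and run as root for the real thing:

```sh
#!/bin/sh
# Target file: a scratch copy by default so this can be dry-run safely;
# set EXPORTS_FILE=/etc/exports (as root) for the real thing.
EXPORTS_FILE="${EXPORTS_FILE:-/tmp/exports.demo}"
LINE='/mnt/storage/docker-data *(rw,sync,no_subtree_check)'

touch "$EXPORTS_FILE"
# Append the export only if the exact line is not already present.
grep -qxF "$LINE" "$EXPORTS_FILE" || echo "$LINE" >> "$EXPORTS_FILE"
```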
## 6. Create Shared NFS Volume in Docker

On the manager node:

```sh
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=<NAS_IP>,rw,nolock,nfsvers=4 \
  --opt device=:/mnt/storage/docker-data \
  shared-data
```

(Replace `<NAS_IP>` with your NAS's address.) Note that a volume created this way exists only on the node where the command ran; if a service may be scheduled on several nodes, declare the volume in the stack file instead so each node creates it on demand.
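Equivalently, the volume can be declared at the top level of a stack file so every node that runs a replica creates it on demand. A sketch reusing the export path from step 5 (replace `<NAS_IP>` as before):

```yaml
volumes:
  shared-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=<NAS_IP>,rw,nolock,nfsvers=4"
      device: ":/mnt/storage/docker-data"
```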
## 7. Deploy Apps with Dokploy + Placement Constraints

Use the Dokploy UI to:

- Deploy your web apps (Node.js, PHP, static sites).
- Set replica counts for scaling.
- Pin infrastructure apps (such as Pi-hole or MinIO) to the NAS node via placement constraints.
- Use the shared NFS volume for persistent data.
Example Docker Compose snippet for pinning:

```yaml
services:
  pihole:
    image: pihole/pihole
    volumes:
      - shared-data:/etc/pihole
    deploy:
      placement:
        constraints:
          - node.labels.role==storage

volumes:
  shared-data:
    external: true
```
## 8. (Optional) Set Up MinIO (S3-Compatible Storage)

Deploy MinIO with Dokploy, pin it to your NAS node, and use the shared volume for data:

```yaml
services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: admin
      MINIO_ROOT_PASSWORD: changeme123   # change this before deploying
    volumes:
      - shared-data:/data
    ports:
      - "9000:9000"
      - "9001:9001"
    deploy:
      placement:
        constraints:
          - node.labels.role==storage

volumes:
  shared-data:
    external: true
```
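To have Swarm restart MinIO if it stops responding, a healthcheck can be added under the service. A sketch using MinIO's liveness endpoint `/minio/health/live`, assuming `curl` is available inside the image:

```yaml
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 5s
      retries: 3
```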
## 9. Add Web Apps and Experiment!

- Use Dokploy's UI to connect to your Gitea instance, auto-deploy repos, and experiment rapidly.
- Traefik integration and SSL setup are handled automatically by Dokploy.
## 10. Restore K3s (Optional, Later)

Your original K3s manifests are saved in git; to revert, start K3s and reapply them:

```sh
k3s server
kubectl apply -f <your-manifests>
```
## References

- Docker Swarm docs: https://docs.docker.com/engine/swarm/
- Dokploy docs: https://dokploy.com/docs/
- Docker volumes: https://docs.docker.com/engine/storage/volumes/
- NFS on Linux: https://help.ubuntu.com/community/NFS
This guide gives you a fast start for a declarative, multi-node homelab with web app simplicity and infrastructure reliability using Dokploy and Docker Swarm!