Kubernetes Deployment for Socktop WebTerm
This directory contains Kubernetes manifests for deploying Socktop WebTerm on your k3s cluster.
Overview
The deployment includes:
- 3 replicas for high availability
- Host networking to access Raspberry Pi nodes on port 8443
- Session affinity to maintain terminal connections
- Traefik Ingress for routing (default with k3s)
- WebSocket support for terminal connections
- External SSL termination via NGINX Proxy Manager
- ConfigMaps for configuration files
- Secrets for TLS certificates
Prerequisites
- k3s cluster running with at least 3 nodes
- Traefik Ingress Controller (comes default with k3s)
- External NGINX Proxy Manager for SSL termination
- DNS records pointing to your external IP:
  - socktop.io → your external IP
  - www.socktop.io → your external IP
  - origin.socktop.io → your external IP
- Docker registry access configured for 192.168.1.208:3002
- Proxy hosts configured in NGINX Proxy Manager to forward to k3s on port 8080
Installation
Step 1: Configure Docker Registry Access (if needed)
If your k3s nodes need authentication to pull from your Gitea registry:
# Create docker-registry secret
kubectl create secret docker-registry gitea-registry \
--docker-server=192.168.1.208:3002 \
--docker-username=YOUR_USERNAME \
--docker-password=YOUR_PASSWORD \
--docker-email=your-email@example.com
# Add to deployment (uncomment imagePullSecrets in 03-deployment.yaml)
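Once uncommented, the relevant part of 03-deployment.yaml should look roughly like the sketch below; only the secret name comes from the command above, and the surrounding structure is the standard pod spec layout:
spec:
  template:
    spec:
      imagePullSecrets:
        - name: gitea-registry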
Step 2: Configure Insecure Registry on k3s Nodes
Since your Gitea registry uses HTTP, configure k3s to allow insecure registries.
On each k3s node, create or edit /etc/rancher/k3s/registries.yaml:
mirrors:
  "192.168.1.208:3002":
    endpoint:
      - "http://192.168.1.208:3002"
configs:
  "192.168.1.208:3002":
    tls:
      insecure_skip_verify: true
Then restart k3s:
# On server node
sudo systemctl restart k3s
# On agent nodes
sudo systemctl restart k3s-agent
Step 3: Create TLS Certificates Secret
Replace the placeholder secret with your actual Raspberry Pi TLS certificates:
kubectl create secret generic socktop-webterm-certs \
--from-file=rpi-master.pem=/path/to/rpi-master.pem \
--from-file=rpi-worker-1.pem=/path/to/rpi-worker-1.pem \
--from-file=rpi-worker-2.pem=/path/to/rpi-worker-2.pem \
--from-file=rpi-worker-3.pem=/path/to/rpi-worker-3.pem \
--namespace=default
Or if you don't have certificates yet, the deployment will work without them (secret is optional).
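To confirm the secret exists and lists the expected certificate keys:
kubectl describe secret socktop-webterm-certs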
Step 4: Configure External NGINX Proxy Manager
In your NGINX Proxy Manager, create proxy hosts for:
For socktop.io:
- Domain: socktop.io
- Scheme: http
- Forward Hostname/IP: <k3s-node-ip>
- Forward Port: 8080
- Enable WebSocket Support: ✓
- SSL Certificate: Your SSL cert
- Force SSL: ✓
Repeat for www.socktop.io and origin.socktop.io.
Step 5: Update Configuration (Optional)
Edit 01-configmap.yaml to customize:
- profiles.json - Add/remove Raspberry Pi nodes
- alacritty.toml - Adjust terminal appearance
- catppuccin-frappe.toml - Change color scheme
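To review the current values before editing:
kubectl get configmap socktop-webterm-config -o yaml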
Step 6: Deploy to Kubernetes
Apply all manifests in order:
# From the kubernetes directory
kubectl apply -f 01-configmap.yaml
kubectl apply -f 02-secret.yaml
kubectl apply -f 03-deployment.yaml
kubectl apply -f 04-service.yaml
kubectl apply -f 05-ingress.yaml
Or apply all at once:
kubectl apply -f .
Step 7: Verify Deployment
Check pod status:
kubectl get pods -l app=socktop-webterm
Expected output:
NAME READY STATUS RESTARTS AGE
socktop-webterm-xxxxxxxxxx-xxxxx 1/1 Running 0 30s
socktop-webterm-xxxxxxxxxx-xxxxx 1/1 Running 0 30s
socktop-webterm-xxxxxxxxxx-xxxxx 1/1 Running 0 30s
Check service:
kubectl get svc socktop-webterm
Check ingress:
kubectl get ingress socktop-webterm
View logs:
kubectl logs -l app=socktop-webterm -f
Step 8: Access the Application
Once deployed and NGINX Proxy Manager is configured, access your terminal at:
- https://socktop.io (SSL terminated at NGINX Proxy Manager)
- https://www.socktop.io
- https://origin.socktop.io
Traffic flow: Internet → NGINX Proxy Manager (SSL) → k3s:8080 (HTTP) → Traefik → Service → Pods
Architecture
Host Networking
The deployment uses hostNetwork: true to allow containers to access your Raspberry Pi nodes on port 8443 directly. This means:
- Each pod binds to the host's network interface
- Pods can reach 192.168.1.101:8443, 192.168.1.102:8443, etc.
- The containerized socktop-agent runs on port 3001 (not 3000)
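In the pod spec of 03-deployment.yaml this corresponds to something like the following sketch (dnsPolicy: ClusterFirstWithHostNet is the usual companion setting for host networking and is assumed here, not confirmed from the manifest):
spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet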
Session Affinity
The Service uses sessionAffinity: ClientIP and the Ingress uses cookie-based affinity to ensure:
- Terminal sessions stay connected to the same pod
- WebSocket connections don't get routed to different pods
- Session timeout is set to 3 hours (10800 seconds)
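On the Service side, ClientIP affinity with the 3-hour timeout is expressed like this (a sketch of the relevant portion of 04-service.yaml):
spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800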
Replicas and Load Balancing
With 3 replicas and hostNetwork: true:
- k3s will spread pods across available nodes (if you have 3+ nodes)
- If you have fewer nodes, multiple pods may share nodes
- Each pod has its own socktop-agent on port 3001
- Traefik balances HTTP requests across all pods
- NGINX Proxy Manager forwards external traffic to Traefik on port 8080
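To see which node each replica landed on:
kubectl get pods -l app=socktop-webterm -o wide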
Configuration Updates
To update configuration without re-applying the manifests:
# Edit the ConfigMap
kubectl edit configmap socktop-webterm-config
# Force pods to reload (rolling restart)
kubectl rollout restart deployment socktop-webterm
Troubleshooting
Pods in ImagePullBackOff
Check if nodes can access the Gitea registry:
# On any k3s node
docker pull 192.168.1.208:3002/jason/socktop-webterm:0.2.0
If it fails, verify /etc/rancher/k3s/registries.yaml is configured correctly.
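Note that k3s uses containerd rather than Docker by default; if Docker is not installed on the node, you can test the pull with the bundled crictl instead:
# On any k3s node
sudo k3s crictl pull 192.168.1.208:3002/jason/socktop-webterm:0.2.0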
Pods in CrashLoopBackOff
Check pod logs:
kubectl logs -l app=socktop-webterm --tail=100
Common issues:
- Missing configuration files
- Port conflicts (if hostNetwork is used)
- Resource limits too low
Can't Connect to Raspberry Pi Nodes
Test from within a pod:
kubectl exec -it deployment/socktop-webterm -- curl -k https://192.168.1.101:8443/health
If this fails:
- Verify hostNetwork: true is set in the deployment
- Check if your k3s nodes can reach the Raspberry Pi IPs
- Verify TLS certificates are correct
Can't Access via HTTPS
SSL is terminated at your external NGINX Proxy Manager, not in the cluster.
Check external NGINX Proxy Manager:
- Verify proxy host configuration
- Check SSL certificate is valid
- Ensure WebSocket support is enabled
- Verify forwarding to correct k3s node IP on port 8080
- Check DNS points to external IP, not cluster IP
Check Traefik ingress:
kubectl get ingress socktop-webterm
kubectl describe ingress socktop-webterm
Test internal access:
# From a k3s node
curl http://localhost:8080
WebSocket Connections Failing
WebSocket support must be enabled in two places:
- External NGINX Proxy Manager - Enable WebSocket support in proxy host settings
- Traefik - Should handle WebSockets by default
Check Traefik logs:
kubectl logs -n kube-system deployment/traefik -f
Test WebSocket upgrade:
# Check headers are being passed correctly
curl -i -N -H "Connection: Upgrade" -H "Upgrade: websocket" http://<k3s-node>:8080/
Scaling
Scale up or down:
# Scale to 5 replicas
kubectl scale deployment socktop-webterm --replicas=5
# Scale down to 2 replicas
kubectl scale deployment socktop-webterm --replicas=2
Updating the Image
After publishing a new version to Gitea:
# Update to specific version
kubectl set image deployment/socktop-webterm webterm=192.168.1.208:3002/jason/socktop-webterm:0.3.0
# Or force pull latest
kubectl rollout restart deployment socktop-webterm
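Either way, watch the rollout complete with:
kubectl rollout status deployment/socktop-webterm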
Uninstalling
Remove all resources:
kubectl delete -f .
Or individually:
kubectl delete ingress socktop-webterm
kubectl delete service socktop-webterm
kubectl delete deployment socktop-webterm
kubectl delete configmap socktop-webterm-config
kubectl delete secret socktop-webterm-certs
Resource Usage
Each pod uses:
- CPU: 500m request, 2000m limit
- Memory: 256Mi request, 1Gi limit
With 3 replicas:
- Total CPU: 1500m request, 6000m limit
- Total Memory: 768Mi request, 3Gi limit
Adjust in 03-deployment.yaml based on your cluster capacity and workload.
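The corresponding resources block in 03-deployment.yaml should look roughly like this (values taken from the figures above):
resources:
  requests:
    cpu: 500m
    memory: 256Mi
  limits:
    cpu: 2000m
    memory: 1Gi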
Security Considerations
- Host Network: Using hostNetwork: true reduces isolation. Ensure your cluster network is trusted.
- TLS Certificates: Store Pi certificates as Kubernetes secrets, not in ConfigMaps.
- External SSL: SSL is terminated at NGINX Proxy Manager before reaching the cluster.
- Authentication: Consider adding an authentication layer in NGINX Proxy Manager or as a Traefik middleware in the cluster.
- Network Policies: Implement NetworkPolicies to restrict pod-to-pod communication (a sketch follows this list).
- Port Exposure: Only port 8080 needs to be accessible from NGINX Proxy Manager, not from public internet.
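A minimal NetworkPolicy sketch that admits ingress only from the kube-system namespace (where Traefik runs) is shown below. The label selector matches this deployment, but treat it as a starting point: with hostNetwork: true, NetworkPolicies may not apply to these pods at all, depending on your CNI.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: socktop-webterm-ingress
spec:
  podSelector:
    matchLabels:
      app: socktop-webterm
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system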
Support
For issues specific to:
- Kubernetes deployment: Check logs and events with kubectl describe
- Container build: Refer to main repository documentation
- k3s configuration: Consult k3s documentation at https://docs.k3s.io
- Traefik ingress: Check Traefik logs in kube-system namespace
- External proxy: Verify NGINX Proxy Manager configuration and SSL certificates