- Multi-architecture Docker image (ARM64 + AMD64)
- Kubernetes manifests for 3-replica deployment
- Traefik ingress configuration
- NGINX Proxy Manager integration
- ConfigMap-based configuration
- Automated build and deployment scripts
- Session monitoring tools
Socktop WebTerm - Kubernetes Quick Start
Get your terminal interface running on k3s in 5 minutes!
Prerequisites Checklist
- k3s cluster running
- kubectl configured and working
- DNS records for socktop.io pointing to your cluster
- Nginx Ingress Controller installed on k3s
- cert-manager installed (for automatic HTTPS)
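A quick sanity check before deploying (the ingress-nginx and cert-manager namespace names below are the usual defaults; adjust if yours differ):
# Cluster reachable and nodes Ready
kubectl get nodes
# Ingress controller and cert-manager running (default namespaces assumed)
kubectl get pods -n ingress-nginx
kubectl get pods -n cert-manager
# DNS resolves to your cluster
dig +short socktop.io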
Quick Deploy
Option 1: Automated Deploy Script
cd kubernetes
./deploy.sh
The script will:
- Check your cluster connection
- Optionally configure TLS certificates for Pi nodes
- Deploy all manifests
- Wait for pods to be ready
- Show you status and access URLs
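For orientation, a minimal sketch of what such a script boils down to (illustrative only, not the actual deploy.sh; the optional Pi TLS step is omitted):
#!/usr/bin/env bash
set -euo pipefail
# Check cluster connection
kubectl cluster-info > /dev/null
# Deploy all manifests in this directory
kubectl apply -f .
# Wait for pods to be ready
kubectl rollout status deployment/socktop-webterm --timeout=180s
# Show status and access details
kubectl get pods -l app=socktop-webterm
kubectl get ingress socktop-webterm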
Option 2: Manual Deploy
cd kubernetes
# Apply all manifests
kubectl apply -f .
# Watch deployment progress
kubectl get pods -l app=socktop-webterm -w
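To block until the rollout finishes instead of watching pod status:
kubectl rollout status deployment/socktop-webterm --timeout=180s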
Option 3: Using Kustomize
cd kubernetes
# Deploy with kustomize
kubectl apply -k .
# Or set the replica count first (requires the standalone kustomize CLI)
kustomize edit set replicas socktop-webterm=5
kubectl apply -k .
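The replica count can also be pinned in kustomization.yaml itself; the replicas field below is standard Kustomize, while the rest of the file comes from the repo:
# kustomization.yaml (excerpt)
replicas:
  - name: socktop-webterm
    count: 5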
Verify Deployment
# Check if pods are running
kubectl get pods -l app=socktop-webterm
# Expected output:
# NAME                               READY   STATUS    RESTARTS   AGE
# socktop-webterm-xxxxxxxxxx-xxxxx   1/1     Running   0          30s
# socktop-webterm-xxxxxxxxxx-xxxxx   1/1     Running   0          30s
# socktop-webterm-xxxxxxxxxx-xxxxx   1/1     Running   0          30s
Access Your Terminal
Open your browser to https://socktop.io (the hostname your DNS records and Ingress are configured for).
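If DNS or the external proxy isn't wired up yet, a port-forward lets you test a pod directly (assuming the container listens on 8080, consistent with the proxy target mentioned under Common Issues):
kubectl port-forward deployment/socktop-webterm 8080:8080
# then browse to http://localhost:8080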
Common Issues
1. ImagePullBackOff Error
Your k3s nodes can't pull from the Gitea registry.
Fix: Configure insecure registry on each k3s node:
# On each k3s node, create /etc/rancher/k3s/registries.yaml
sudo tee /etc/rancher/k3s/registries.yaml <<EOF
mirrors:
  "192.168.1.208:3002":
    endpoint:
      - "http://192.168.1.208:3002"
configs:
  "192.168.1.208:3002":
    tls:
      insecure_skip_verify: true
EOF
# Restart k3s
sudo systemctl restart k3s # on server
sudo systemctl restart k3s-agent # on agents
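After the restart, confirm a node can actually pull from the registry using the crictl bundled with k3s (tag taken from the update example below; use whatever tag you published):
sudo k3s crictl pull 192.168.1.208:3002/jason/socktop-webterm:0.3.0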
2. Can't Access via HTTPS
Check your external NGINX Proxy Manager configuration.
Verify:
- Proxy host is configured correctly
- Points to k3s node IP on port 8080
- SSL certificate is valid
- WebSocket support is enabled
- DNS records point to your external IP
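A quick check of the certificate and response status from outside the cluster:
curl -vkI https://socktop.io 2>&1 | grep -iE 'subject:|expire|HTTP/'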
3. Can't Connect to Raspberry Pi Nodes
Test from within a pod:
kubectl exec -it deployment/socktop-webterm -- curl -k https://192.168.1.101:8443/health
If this fails, your k3s nodes may not be able to reach the Pi network.
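If it does fail, narrow down where the path breaks (NetworkPolicies only matter if you have created any):
# From a k3s node directly, bypassing the pod network
curl -k https://192.168.1.101:8443/health
# Any policies restricting pod traffic?
kubectl get networkpolicy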
4. 502 Bad Gateway
Pods aren't ready yet or have crashed.
Check logs:
kubectl logs -l app=socktop-webterm --tail=100
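Also confirm the Service has ready backends; an empty ENDPOINTS column is the classic cause of a 502:
kubectl get endpoints socktop-webterm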
Configuration
Update Profiles (Add/Remove Pi Nodes)
# Edit the ConfigMap
kubectl edit configmap socktop-webterm-config
# Restart pods to pick up changes
kubectl rollout restart deployment socktop-webterm
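If you'd rather keep the profile config in a file than edit it live (the profile schema itself lives in the repo's ConfigMap manifest):
# Export, edit, re-apply
kubectl get configmap socktop-webterm-config -o yaml > socktop-webterm-config.yaml
# ...edit the file, then:
kubectl apply -f socktop-webterm-config.yaml
kubectl rollout restart deployment socktop-webterm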
Scale Up/Down
# Scale to 5 replicas
kubectl scale deployment socktop-webterm --replicas=5
# Scale to 1 replica
kubectl scale deployment socktop-webterm --replicas=1
Update to New Version
After publishing a new image version:
# Update image tag
kubectl set image deployment/socktop-webterm \
webterm=192.168.1.208:3002/jason/socktop-webterm:0.3.0
# Or force re-pull latest
kubectl rollout restart deployment socktop-webterm
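Either way, watch the rollout and keep the undo handy in case the new image misbehaves:
kubectl rollout status deployment/socktop-webterm
# Roll back to the previous revision if needed
kubectl rollout undo deployment/socktop-webterm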
Monitoring
View Logs
# All pods
kubectl logs -l app=socktop-webterm -f
# Specific pod
kubectl logs socktop-webterm-xxxxxxxxxx-xxxxx -f
# Previous crashed pod
kubectl logs socktop-webterm-xxxxxxxxxx-xxxxx --previous
Resource Usage
# CPU and memory usage
kubectl top pods -l app=socktop-webterm
# Detailed pod info
kubectl describe deployment socktop-webterm
Check Ingress
# View ingress details
kubectl describe ingress socktop-webterm
# Check if external IP is assigned
kubectl get ingress socktop-webterm
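Since cert-manager issues the HTTPS certificate, its resources are worth a look too (only present if the cert-manager CRDs are installed; names depend on your Ingress annotations):
kubectl get certificate,certificaterequest
kubectl describe certificate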
Cleanup
Remove Everything
cd kubernetes
kubectl delete -f .
Or individually:
kubectl delete ingress socktop-webterm
kubectl delete service socktop-webterm
kubectl delete deployment socktop-webterm
kubectl delete configmap socktop-webterm-config
kubectl delete secret socktop-webterm-certs
Performance Testing
With 3 replicas across your k3s cluster:
- Load Distribution: k3s will spread pods across nodes
- Session Affinity: Each user sticks to the same pod
- High Availability: If a pod crashes, others handle traffic
- Horizontal Scaling: Add more replicas for more capacity (see the autoscaler sketch below)
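Rather than scaling by hand, an autoscaler can add replicas under load; k3s ships metrics-server, which the HPA needs, and the thresholds below are only illustrative:
kubectl autoscale deployment socktop-webterm --min=3 --max=6 --cpu-percent=70
# Inspect the autoscaler
kubectl get hpa socktop-webterm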
Monitor performance:
# Watch resource usage
kubectl top pods -l app=socktop-webterm
# See which nodes pods are on
kubectl get pods -l app=socktop-webterm -o wide
Next Steps
- Set up monitoring with Prometheus/Grafana
- Configure backup for any stateful data
- Add authentication layer (OAuth2 Proxy)
- Set up log aggregation (Loki/ELK)
- Configure network policies for security (a starting sketch follows)
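For the network-policy item, a minimal starting sketch that limits pod ingress to the web port (port 8080 and the app label come from this guide; extend the selectors to match your setup):
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: socktop-webterm
spec:
  podSelector:
    matchLabels:
      app: socktop-webterm
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 8080
EOF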
Need Help?
Check deployment status:
kubectl get all -l app=socktop-webterm
Describe resources:
kubectl describe deployment socktop-webterm
kubectl describe pods -l app=socktop-webterm
View events:
kubectl get events --sort-by='.lastTimestamp' | grep socktop
Full README: See README.md for detailed documentation.