Deployment Strategies
Haloy supports two deployment strategies: rolling deployment (default) and replace deployment.
Rolling Deployment (Default)
Gradually replaces old containers with new ones, ensuring zero downtime.
How It Works
- Start new container(s)
- Wait for health checks to pass and any configured min_ready_seconds window
- Route traffic to new containers
- Stop old containers
- Repeat for all replicas
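The sequence above can be sketched as a simple loop. This is a minimal illustration of the ordering guarantees, not Haloy's actual implementation; all helper names (startNew, waitHealthyAndStable, routeTraffic, stopOld) are hypothetical stand-ins.

```go
package main

import "fmt"

// Hypothetical stand-ins for Haloy's internals; names are illustrative only.
func startNew(r int)     { fmt.Printf("replica %d: start new container\n", r) }
func routeTraffic(r int) { fmt.Printf("replica %d: route traffic\n", r) }
func stopOld(r int)      { fmt.Printf("replica %d: stop old container\n", r) }

// waitHealthyAndStable stands in for health checks plus the optional
// min_ready_seconds window; a non-nil error would abort the rollout.
func waitHealthyAndStable(r int) error { return nil }

func main() {
	const replicas = 3
	for r := 0; r < replicas; r++ {
		startNew(r)
		if err := waitHealthyAndStable(r); err != nil {
			// Old containers are still running, so the app stays up.
			fmt.Printf("replica %d: failed health checks, aborting\n", r)
			return
		}
		routeTraffic(r)
		stopOld(r)
	}
	fmt.Println("rolling deployment complete")
}
```

The key property is that each old container is stopped only after its replacement is healthy, stable, and receiving traffic, which is what keeps the rollout zero-downtime.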
Configuration
```yaml
name: "my-app"
deployment_strategy: "rolling" # Default, can be omitted
replicas: 3
domains:
  - domain: "my-app.com"
```
Benefits
- Zero downtime: Always have containers running
- Safe rollouts: Can catch issues before all containers update
- More confidence at cutover: min_ready_seconds can catch crash-after-startup issues
- Automatic rollback: If health checks fail, old containers remain
Use Cases
- Production applications requiring high availability
- Services with multiple replicas
- Applications where downtime is not acceptable
Example
```yaml
name: "high-availability-app"
deployment_strategy: "rolling"
replicas: 5
health_check_path: "/health"
min_ready_seconds: 10
image: "my-org/ha-app:v2.0.0"
domains:
  - domain: "ha-app.com"
```
Replace Deployment
Stops all old containers before starting new ones.
How It Works
- Stop old containers
- Wait for containers to exit and verify removal
- Start new container(s)
- Wait for health checks and any configured min_ready_seconds window
- Route traffic to new containers
This strategy is required when using naming_strategy: "static", as it ensures the old container name is freed before the new one starts.
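For example, a config that pins a static container name would pair it with replace. This is a sketch: the field layout follows the document's other examples, and only naming_strategy: "static" and deployment_strategy are taken from the text above.

```yaml
name: "my-app"
naming_strategy: "static"      # fixed container name
deployment_strategy: "replace" # required: frees the name before restart
domains:
  - domain: "my-app.com"
```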
Configuration
```yaml
name: "my-app"
deployment_strategy: "replace"
replicas: 3
domains:
  - domain: "my-app.com"
```
Benefits
- Clean state: No overlap between old and new versions
- Resource efficiency: Lower peak resource usage
- Simpler: Easier to reason about deployment state
Considerations
- Brief downtime: Service unavailable during container swap
- All-or-nothing: All containers update at once
Use Cases
- Development/staging environments
- Batch processing applications
- Services where brief downtime is acceptable
- Applications requiring clean state between versions
- Single-replica deployments
Example
```yaml
name: "batch-processor"
deployment_strategy: "replace"
replicas: 2
health_check_path: "/status"
image: "my-org/batch-app:v1.5.0"
env:
  - name: "BATCH_SIZE"
    value: "1000"
```
Comparison
| Aspect | Rolling | Replace |
|---|---|---|
| Downtime | Zero | Brief (during swap) |
| Resource usage | Higher (overlapping containers) | Lower |
| Risk | Lower (gradual rollout) | Higher (all at once) |
| Rollback | Automatic on failure | Manual |
| Complexity | Higher | Lower |
| Best for | Production HA services | Dev/staging, batch jobs |
Per-Target Strategies
Use different strategies for different environments:
```yaml
name: "my-app"
# Default for all targets
deployment_strategy: "rolling"
replicas: 3
targets:
  production:
    server: prod.haloy.com
    deployment_strategy: "rolling" # High availability
    replicas: 5
    domains:
      - domain: "my-app.com"
  staging:
    server: staging.haloy.com
    deployment_strategy: "replace" # Faster deployments
    replicas: 2
    domains:
      - domain: "staging.my-app.com"
  development:
    server: dev.haloy.com
    deployment_strategy: "replace" # Clean state
    replicas: 1
    domains:
      - domain: "dev.my-app.com"
```
Health Checks
Both strategies rely on health checks, optionally combined with a stabilization window, to determine when containers are ready:
```yaml
name: "my-app"
health_check_path: "/api/health"
min_ready_seconds: 10
port: "8080"

# Your app should respond with 200 OK when healthy.
# Example health endpoint response:
# {
#   "status": "healthy",
#   "uptime": 42,
#   "database": "connected"
# }
```
Readiness Stabilization
Use min_ready_seconds when a container often passes health checks and then fails shortly afterward:
```yaml
name: "my-app"
deployment_strategy: "rolling"
health_check_path: "/health"
min_ready_seconds: 10
```
Haloy starts the stabilization timer from the container StartedAt time. If the container has already been running for 10 seconds when it first becomes healthy, deployment continues immediately. Otherwise Haloy waits the remaining time, re-checks the container, and fails the deployment if the container died, restarted, or became unhealthy during the wait.
Health Check Requirements
Your application should:
- Respond to GET requests at the health check path
- Return HTTP 200 when healthy
- Return non-200 when unhealthy or not ready
- Check critical dependencies (database, cache, etc.)
min_ready_seconds complements health checks. It does not replace a proper readiness endpoint.
Example Health Check Implementation
Node.js/Express:
```javascript
app.get('/health', async (req, res) => {
  try {
    // Check database connection
    await db.ping();

    // Check other dependencies
    const cacheConnected = await cache.isConnected();
    if (cacheConnected) {
      res.status(200).json({ status: 'healthy' });
    } else {
      res.status(503).json({ status: 'unhealthy', reason: 'cache unavailable' });
    }
  } catch (error) {
    res.status(503).json({ status: 'unhealthy', reason: error.message });
  }
});
```
Go:
```go
func healthHandler(w http.ResponseWriter, r *http.Request) {
	// Check database
	if err := db.Ping(); err != nil {
		w.WriteHeader(http.StatusServiceUnavailable)
		json.NewEncoder(w).Encode(map[string]string{
			"status": "unhealthy",
			"reason": "database unavailable",
		})
		return
	}
	w.WriteHeader(http.StatusOK)
	json.NewEncoder(w).Encode(map[string]string{
		"status": "healthy",
	})
}
```
Deployment Process
Monitor your deployment:
```sh
# Deploy with logs
haloy deploy

# Deploy without logs (faster)
haloy deploy --no-logs

# Check status after deployment
haloy status

# View application logs
haloy logs
```
Troubleshooting
Rolling Deployment Stuck
If a rolling deployment gets stuck:
- Check the health check endpoint: curl https://my-app.com/health
- View application logs: haloy logs
- Verify the health check path in your config is correct
- If health checks pass but the deployment still waits, check min_ready_seconds
Replace Deployment Downtime Too Long
If replace deployment takes too long:
- Optimize container startup time
- Use health checks to detect readiness faster
- Keep
min_ready_secondsonly as high as needed - Pre-warm caches and connections on startup
- Consider switching to rolling deployment
Best Practices
- Use rolling for production: Ensures high availability
- Implement robust health checks: Critical for safe deployments
- Test in staging first: Verify deployment strategy works
- Monitor during deployment: Watch logs and health metrics
- Document health check endpoint: Ensure team knows the requirements
- Use min_ready_seconds for flaky startups: Useful when failures appear a few seconds after the first healthy response
- Set appropriate replica counts: More replicas = smoother rolling deployments