Cache Management¶
Guide to managing and optimizing the Redis cache in Champa Intelligence.
Overview¶
Champa Intelligence uses Redis as a high-performance caching layer to accelerate database queries, user sessions, and AI analysis results. The cache management interface allows administrators to monitor cache performance and perform maintenance operations.
Required Permission: manage_users (admin access)
Navigation: Admin → Cache Management
Cache Architecture¶
Cache Layers¶
```mermaid
graph TB
    A[Request] --> B{Redis Cache}
    B -->|Hit| C[Return Cached Data]
    B -->|Miss| D[Query Database]
    D --> E[Store in Cache]
    E --> F[Return Data]
    G[Deployment] --> H[Selective Invalidation]
    H --> I[Clear Process-Specific Cache]
```

Cache Types:
- Session Cache - User authentication sessions
- Query Cache - Database query results
- AI Cache - AI analysis components and results
- Static Data Cache - BPMN/DMN XML, process definitions
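The four cache types are easiest to manage when each gets its own key namespace. A minimal sketch of such a scheme (the prefix names here are illustrative, not the application's actual key layout):

```python
# Hypothetical key-naming sketch: one prefix per cache layer, so a whole
# layer can be targeted with a single pattern such as "queries:*".
PREFIXES = {
    'session': 'session',
    'query': 'queries',
    'ai': 'ai',
    'static': 'static',
}

def cache_key(cache_type: str, identifier: str) -> str:
    """Build a namespaced Redis key, e.g. cache_key('query', 'proc_123')."""
    return f"{PREFIXES[cache_type]}:{identifier}"
```

Namespaced keys also make selective invalidation (as in the diagram above) a matter of deleting a single prefix.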
Cache TTL Strategy¶
Time-To-Live by Data Type¶
| Cache Type | TTL | Use Case |
|---|---|---|
| Session (Normal) | 1 hour | User login sessions |
| Session (Remember Me) | 30 days | Persistent logins |
| Portfolio Overview | 5 minutes | High-level KPIs |
| Dashboard Sections | 15 minutes | Process analytics |
| Performance Metrics | 1 hour | Historical data |
| BPMN/DMN XML | 24 hours | Static model data |
| AI Analysis | 4 hours | Generated reports |
| Process Definitions | 24 hours | Deployment metadata |
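Expressed in seconds, as Redis expects for TTLs, the table above corresponds to values like these (the dictionary key names are illustrative):

```python
# TTLs from the table above, converted to seconds (key names are illustrative).
TTL_SECONDS = {
    'session_normal': 3600,          # 1 hour
    'session_remember_me': 2592000,  # 30 days
    'portfolio_overview': 300,       # 5 minutes
    'dashboard_sections': 900,       # 15 minutes
    'performance_metrics': 3600,     # 1 hour
    'bpmn_dmn_xml': 86400,           # 24 hours
    'ai_analysis': 14400,            # 4 hours
    'process_definitions': 86400,    # 24 hours
}
```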
Cache Warming¶
What is Cache Warming?¶
Cache warming pre-populates the cache with frequently accessed data before users request it, so even the first request after a restart or cache clear gets a fast response.
When to Warm Cache¶
- After clearing all caches
- After application restart
- Before peak usage periods
- After major deployments
Warming Strategies¶
Warm All Critical Data¶
Populates:
- All portfolio data
- Top 5 active processes
- Health monitoring data
Duration: ~2 minutes
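Conceptually, warm-all is a loop over loader functions. A sketch, where the loaders are hypothetical application helpers whose results land in the cache as a side effect:

```python
def warm_cache(loaders):
    """Run each loader once so its result lands in the cache.

    `loaders` maps a cache key to a zero-argument function (a hypothetical
    helper such as load_portfolio_overview) that caches as a side effect.
    Returns the list of warmed keys.
    """
    warmed = []
    for key, load in loaders.items():
        load()  # executing the query populates the cache
        warmed.append(key)
    return warmed
```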
Troubleshooting¶
High Miss Rate¶
Symptoms:
- Hit rate <80%
- Slow response times
- High database load
Diagnosis:
```bash
# Check cache statistics
curl http://localhost:8088/cache/api/stats \
  -H "Authorization: Bearer $ADMIN_TOKEN"
```
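If the statistics endpoint relays Redis's `keyspace_hits` and `keyspace_misses` counters (standard fields in Redis `INFO stats`), the hit rate can be computed directly. This helper is a sketch, not part of the application API:

```python
def hit_rate(stats):
    """Hit rate from Redis INFO-style counters; 0.0 when there is no traffic.

    `stats` is a dict containing 'keyspace_hits' and 'keyspace_misses'.
    """
    hits = stats['keyspace_hits']
    misses = stats['keyspace_misses']
    total = hits + misses
    return hits / total if total else 0.0
```

A value below 0.8 corresponds to the "hit rate <80%" symptom above.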
Possible Causes:
- TTL too short - cache entries expire before they are reused
  - Solution: Increase TTL for stable data
- Inconsistent cache keys - query variations cause misses
  - Solution: Normalize query parameters
- Memory pressure - evictions due to low memory
  - Solution: Increase the Redis memory limit
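For the inconsistent-key cause, normalizing means building the key from a canonical form of the query parameters. A sketch (the helper name and `queries:` prefix are illustrative):

```python
import hashlib
import json

def normalized_query_key(endpoint, params):
    """Deterministic cache key: sorting the parameters makes
    ?a=1&b=2 and ?b=2&a=1 share a single cache entry."""
    canonical = json.dumps(params, sort_keys=True, separators=(',', ':'))
    digest = hashlib.sha256(canonical.encode()).hexdigest()[:16]
    return f"queries:{endpoint}:{digest}"
```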
High Eviction Rate¶
Symptoms:
- Frequent evictions in Redis stats
- Inconsistent cache performance

Diagnosis: check the evicted_keys counter in the output of Redis INFO stats; a steadily climbing value indicates memory pressure.

Solutions:
- Increase Redis memory (raise the maxmemory limit)
- Reduce TTL for less critical data
- Enable an eviction policy suited to caching (e.g. allkeys-lru)
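With redis-py, the memory limit and eviction policy can be applied at runtime via `CONFIG SET` (`maxmemory` and `maxmemory-policy` are standard Redis settings; the `2gb` value is illustrative, and the change should also be persisted in redis.conf to survive restarts):

```python
import redis

r = redis.Redis(host='localhost', port=6379)

# Cap memory and evict least-recently-used keys instead of failing writes.
r.config_set('maxmemory', '2gb')
r.config_set('maxmemory-policy', 'allkeys-lru')
```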
Cache Connection Failures¶
Symptoms:
- "Redis connection failed" errors
- Fallback to database only

Diagnosis and Solutions:
- Check that Redis is running (e.g. redis-cli ping should return PONG)
- Verify connection settings (host, port, password, database number)
- Check the application logs for connection errors
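A connectivity probe can be written against any client object that exposes `ping()` (as `redis.Redis` does); injecting the client keeps the check testable without a live server. The helper name is illustrative:

```python
def redis_available(client):
    """Return True if the client answers PING, False on any error."""
    try:
        return bool(client.ping())
    except Exception:
        return False
```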
Stale Data in Cache¶
Symptoms:
- Users see outdated information
- Dashboard shows old metrics

Solutions:
- Clear the specific cache type holding the stale entries
- Reduce TTL for real-time data
- Implement cache versioning so that bumping a version number invalidates old keys
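Cache versioning can be as small as embedding a version number in every key: bumping the number makes all old entries unreachable, and they then age out via their TTLs. The constant and helper below are illustrative:

```python
CACHE_VERSION = 2  # bump to invalidate every key built with the old version

def versioned_key(name):
    """Embed the cache version in the key, e.g. 'v2:dashboard'."""
    return f"v{CACHE_VERSION}:{name}"
```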
Best Practices¶
1. Regular Cache Maintenance¶
Weekly:
- Review cache hit rates
- Check memory usage trends
- Analyze eviction patterns

Monthly:
- Clear stale sessions
- Optimize TTL values
- Review cache key patterns
2. Optimize TTL Values¶
Guidelines:
```python
TTL_CONFIG = {
    # Real-time data (5-15 min)
    'active_incidents': 300,
    'current_jobs': 600,

    # Semi-static data (1-4 hours)
    'performance_metrics': 3600,
    'historical_trends': 7200,

    # Static data (24+ hours)
    'bpmn_xml': 86400,
    'process_definitions': 86400,
}
```
3. Monitor Cache Impact¶
Track performance with/without cache:
```python
import time

def measure_cache_impact(query_func, cache_key):
    # Cold run (cache miss): invalidate first so the query goes to the database
    invalidate_cache(cache_key)
    start = time.time()
    result = query_func()
    cold_time = time.time() - start

    # Warm run (cache hit): the cold run populated the cache
    start = time.time()
    cached_result = query_func()
    warm_time = time.time() - start

    speedup = cold_time / warm_time
    print(f"Cache speedup: {speedup:.1f}x ({cold_time:.3f}s → {warm_time:.3f}s)")
    return speedup
```
4. Use Cache Warming Strategically¶
Good: schedule warming during off-peak hours, shortly before usage ramps up.
Bad:
```bash
# Warming during peak hours (wastes resources)
0 9 * * * curl -X POST http://localhost:8088/cache/api/warm-all
```
5. Implement Circuit Breaker¶
Gracefully handle Redis failures:
```python
import json
import logging

import redis

logger = logging.getLogger(__name__)

def get_cached_data(key, fallback_func):
    try:
        data = redis_client.get(key)
        if data:
            return json.loads(data)
    except redis.ConnectionError:
        logger.warning("Redis unavailable, using fallback")
    # Cache miss or Redis failure: fall back to the database
    return fallback_func()
```
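A fuller circuit breaker adds state: after repeated failures it "opens" and routes calls straight to the fallback for a cooldown period before retrying Redis. A minimal sketch (illustrative, not the application's actual implementation):

```python
import time

class CircuitBreaker:
    """After max_failures consecutive errors, skip the primary call and use
    the fallback for reset_after seconds before trying the primary again."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, primary, fallback):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after:
                return fallback()   # circuit open: skip Redis entirely
            self.opened_at = None   # cooldown elapsed: try Redis again
        try:
            result = primary()
            self.failures = 0       # success closes the circuit
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()
            return fallback()
```

Wrapping the cache lookup in `breaker.call(lambda: redis_client.get(key), fallback_func)` keeps a flapping Redis from adding latency to every request.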
API Reference¶
Get Cache Statistics¶
Endpoint: GET /cache/api/stats (requires an admin bearer token)
Clear All Caches¶
Clear Specific Cache¶
Types: sessions, queries, ai, static
Warm Cache¶
Scopes: portfolio, dashboard, all
Next Steps¶
- Session Management - Monitor user sessions
- Performance Monitoring - System performance
- Troubleshooting - Common issues
Support¶
For cache management questions:
- Email: info@champa-bpmn.com
- Documentation: https://champa-bpmn.com/docs