
Cache Management

Guide to managing and optimizing the Redis cache in Champa Intelligence.


Overview

Champa Intelligence uses Redis as a high-performance caching layer to accelerate database queries, user sessions, and AI analysis results. The cache management interface allows administrators to monitor cache performance and perform maintenance operations.

Required Permission: manage_users (admin access)

Navigation: Admin → Cache Management


Cache Architecture

Cache Layers

graph TB
    A[Request] --> B{Redis Cache}
    B -->|Hit| C[Return Cached Data]
    B -->|Miss| D[Query Database]
    D --> E[Store in Cache]
    E --> F[Return Data]

    G[Deployment] --> H[Selective Invalidation]
    H --> I[Clear Process-Specific Cache]

Cache Types:

  1. Session Cache - User authentication sessions
  2. Query Cache - Database query results
  3. AI Cache - AI analysis components and results
  4. Static Data Cache - BPMN/DMN XML, process definitions
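
Keeping the four cache types in separate key namespaces makes selective clearing (see the clear-{type} endpoints below) straightforward. A minimal sketch of one possible key convention; the `make_key` helper and the exact prefixes are illustrative, not Champa Intelligence's actual scheme:

```python
# Hypothetical key-building helper; the real key scheme may differ.
CACHE_TYPES = {"session", "query", "ai", "static"}

def make_key(cache_type: str, *parts: str) -> str:
    """Build a namespaced Redis key, e.g. 'session:user:42'."""
    if cache_type not in CACHE_TYPES:
        raise ValueError(f"unknown cache type: {cache_type}")
    return ":".join([cache_type, *parts])
```

With a convention like this, clearing one cache type is a matter of deleting keys matching a single prefix (e.g. `session:*`) rather than scanning everything.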

Cache TTL Strategy

Time-To-Live by Data Type

Cache Type               TTL          Use Case
Session (Normal)         1 hour       User login sessions
Session (Remember Me)    30 days      Persistent logins
Portfolio Overview       5 minutes    High-level KPIs
Dashboard Sections       15 minutes   Process analytics
Performance Metrics      1 hour       Historical data
BPMN/DMN XML             24 hours     Static model data
AI Analysis              4 hours      Generated reports
Process Definitions      24 hours     Deployment metadata

Cache Warming

What is Cache Warming?

Cache warming pre-populates the cache with frequently accessed data before users request it, ensuring fast response times even for the first visitor.

When to Warm Cache

  • After clearing all caches
  • After application restart
  • Before peak usage periods
  • After major deployments

Warming Strategies

Warm All Critical Data

curl -X POST http://localhost:8088/cache/api/warm-all \
  -H "Authorization: Bearer $ADMIN_TOKEN"

Populates:

  • All portfolio data
  • Top 5 active processes
  • Health monitoring data

Duration: ~2 minutes
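
Conceptually, warm-all is a loop over loaders: run each expensive query once and store the result so the first real request is a hit. A minimal sketch under stated assumptions; the loader names and the dict-backed store stand in for the real database queries and Redis client:

```python
# Illustrative only: the dict `store` plays the role of Redis, and the
# lambdas play the role of the real portfolio/health queries.
def warm_cache(store: dict, loaders: dict) -> list:
    """Run each loader and store its result; return the keys warmed."""
    warmed = []
    for key, load in loaders.items():
        store[key] = load()  # first visitor now gets a cache hit
        warmed.append(key)
    return warmed

store = {}
loaders = {
    "portfolio:overview": lambda: {"processes": 12},
    "health:status": lambda: "ok",
}
warm_cache(store, loaders)
```

Running this during off-hours (as recommended under Best Practices) shifts the cost of the cold queries away from peak traffic.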


Troubleshooting

High Miss Rate

Symptoms:

  • Hit rate <80%
  • Slow response times
  • High database load

Diagnosis:

# Check cache statistics
curl http://localhost:8088/cache/api/stats \
  -H "Authorization: Bearer $ADMIN_TOKEN"

Possible Causes:

  1. TTL too short - Cache expires before reuse
     Solution: Increase TTL for stable data

  2. Cache keys not consistent - Query variations cause misses
     Solution: Normalize query parameters

  3. Memory pressure - Evictions due to low memory
     Solution: Increase Redis memory limit
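
Inconsistent cache keys are usually fixed by canonicalizing query parameters before building the key, so that equivalent queries always map to the same entry. A minimal sketch; the `normalized_cache_key` helper is hypothetical, not the application's actual key builder:

```python
import hashlib
import json

def normalized_cache_key(prefix: str, params: dict) -> str:
    """Sort and serialize params so equivalent queries share one key."""
    canonical = json.dumps(params, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode()).hexdigest()[:16]
    return f"{prefix}:{digest}"

# The same parameters in a different order produce the same key:
a = normalized_cache_key("query", {"limit": 10, "status": "active"})
b = normalized_cache_key("query", {"status": "active", "limit": 10})
```

Without normalization, `{"limit": 10, "status": "active"}` and `{"status": "active", "limit": 10}` would generate two keys and two misses for one logical query.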

High Eviction Rate

Symptoms:

  • Frequent evictions in Redis stats
  • Inconsistent cache performance

Diagnosis:

# Check memory usage
redis-cli INFO memory

Solutions:

  1. Increase Redis memory:

    # docker-compose.yml
    redis:
      command: redis-server --maxmemory 1gb --maxmemory-policy allkeys-lru
    

  2. Reduce TTL for less critical data

  3. Enable eviction policy:

    # allkeys-lru: Evict any key using LRU
    # volatile-lru: Evict only keys with TTL
    redis-cli CONFIG SET maxmemory-policy allkeys-lru
    

Cache Connection Failures

Symptoms:

  • "Redis connection failed" errors
  • Fallback to database only

Diagnosis:

# Test Redis connectivity
redis-cli PING

Solutions:

  1. Check Redis is running:

    docker ps | grep redis
    

  2. Verify connection settings:

    # config
    REDIS_HOST = 'redis'  # Container name
    REDIS_PORT = 6379
    REDIS_PASSWORD = 'your_password'
    

  3. Check logs:

    docker logs champa-redis
    

Stale Data in Cache

Symptoms:

  • Users see outdated information
  • Dashboard shows old metrics

Solutions:

  1. Clear specific cache:

    curl -X POST http://localhost:8088/cache/api/clear-process \
      -H "Authorization: Bearer $ADMIN_TOKEN" \
      -d '{"process_key": "order-to-cash"}'
    

  2. Reduce TTL for real-time data

  3. Implement cache versioning:

    # Version the key and bucket it into 5-minute windows,
    # so stale entries roll off automatically
    cache_key = f"dashboard:{process_key}:{version}:{int(time.time() / 300)}"
    
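
The time-bucket trick above bounds staleness without any explicit clearing: every 300 seconds the key changes, so old entries are simply never read again and expire on their own. A sketch with the timestamp injected for determinism (the `bucketed_key` helper is illustrative):

```python
def bucketed_key(process_key: str, version: int, now: float,
                 window: int = 300) -> str:
    """Cache key that rolls over every `window` seconds, bounding staleness."""
    bucket = int(now // window)
    return f"dashboard:{process_key}:{version}:{bucket}"

# Two requests inside the same 5-minute window share a key...
k1 = bucketed_key("order-to-cash", 3, now=1000.0)
k2 = bucketed_key("order-to-cash", 3, now=1100.0)
# ...while a request in the next window gets a fresh key.
k3 = bucketed_key("order-to-cash", 3, now=1400.0)
```

Bumping `version` on deployment forces an immediate rollover for all windows at once.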


Best Practices

1. Regular Cache Maintenance

Weekly:

  • Review cache hit rates
  • Check memory usage trends
  • Analyze eviction patterns

Monthly:

  • Clear stale sessions
  • Optimize TTL values
  • Review cache key patterns

2. Optimize TTL Values

Guidelines:

TTL_CONFIG = {
    # Real-time data (5-15 min)
    'active_incidents': 300,
    'current_jobs': 600,

    # Semi-static data (1-4 hours)
    'performance_metrics': 3600,
    'historical_trends': 7200,

    # Static data (24+ hours)
    'bpmn_xml': 86400,
    'process_definitions': 86400
}

3. Monitor Cache Impact

Track performance with/without cache:

import time

def measure_cache_impact(query_func, cache_key):
    # Cold path: clear the key so query_func hits the database
    invalidate_cache(cache_key)
    start = time.time()
    query_func()
    cold_time = time.time() - start

    # Warm path: the same call should now be served from cache
    start = time.time()
    query_func()
    warm_time = time.time() - start

    speedup = cold_time / warm_time
    print(f"Cache speedup: {speedup:.1f}x ({cold_time:.3f}s → {warm_time:.3f}s)")
    return speedup

4. Use Cache Warming Strategically

Good:

# Warm cache during off-hours
0 2 * * * curl -X POST http://localhost:8088/cache/api/warm-all

Bad:

# Warming during peak hours (wastes resources)
0 9 * * * curl -X POST http://localhost:8088/cache/api/warm-all

5. Implement Circuit Breaker

Gracefully handle Redis failures:

import json
import logging

import redis

logger = logging.getLogger(__name__)
redis_client = redis.Redis()

def get_cached_data(key, fallback_func):
    try:
        data = redis_client.get(key)
        if data:
            return json.loads(data)
    except redis.ConnectionError:
        logger.warning("Redis unavailable, using fallback")

    # Cache miss or Redis down: fall back to the database
    return fallback_func()
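
A fuller circuit breaker also stops calling Redis for a cooldown period after repeated failures, instead of paying a connection timeout on every request while Redis is down. A minimal sketch with the clock and backend injected for testability; the class and parameter names are illustrative, not part of the application:

```python
class CircuitBreaker:
    """Open the circuit after `threshold` consecutive failures,
    skip the backend for `cooldown` seconds, then retry."""

    def __init__(self, threshold: int = 3, cooldown: float = 30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def call(self, func, fallback, now: float):
        if self.opened_at is not None:
            if now - self.opened_at < self.cooldown:
                return fallback()      # circuit open: skip the backend
            self.opened_at = None      # cooldown over: half-open retry
        try:
            result = func()
            self.failures = 0          # success closes the circuit
            return result
        except ConnectionError:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = now   # trip the breaker
            return fallback()

# Usage with a backend that fails twice, tripping a threshold-2 breaker:
breaker = CircuitBreaker(threshold=2, cooldown=30.0)

def failing():
    raise ConnectionError("redis down")

r1 = breaker.call(failing, lambda: "db", now=0.0)
r2 = breaker.call(failing, lambda: "db", now=1.0)                # trips open
r3 = breaker.call(lambda: "redis", lambda: "db", now=10.0)       # still open
r4 = breaker.call(lambda: "redis", lambda: "db", now=40.0)       # retries
```

In production the `now` argument would simply be `time.time()` and `func` a wrapper around the Redis call.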

API Reference

Get Cache Statistics

GET /cache/api/stats
Authorization: Bearer <admin_token>

Clear All Caches

POST /cache/api/clear-all
Authorization: Bearer <admin_token>

Clear Specific Cache

POST /cache/api/clear-{type}
Authorization: Bearer <admin_token>

Types: sessions, queries, ai, static

Warm Cache

POST /cache/api/warm-{scope}
Authorization: Bearer <admin_token>

Scopes: portfolio, dashboard, all



Support

For cache management questions: