Caching Strategy¶
Champa Intelligence employs an intelligent, multi-level caching strategy to deliver high performance and reduce load on the underlying Camunda database. The primary caching layer is powered by Redis, with a fallback to in-memory caching for resilience.
Caching Layers¶
1. Redis: The Primary Cache¶
Redis serves as the main, high-performance cache for frequently accessed and computationally expensive data.
- Session Cache: User session data is stored in Redis for fast authentication and authorization checks on every request. This allows the application to remain stateless and scale horizontally.
- Query Result Cache: The results of over 80 database queries are cached in Redis. This is the most critical performance feature, as it avoids re-running complex SQL aggregations on every page load.
- AI Analysis Cache: Components of AI analyses, such as aggregated data summaries and even full reports, are cached to speed up repeated requests and reduce costs associated with the Gemini API.
2. In-Memory Cache (Fallback)¶
A small LRU (Least Recently Used) in-memory cache is maintained within the Flask application as a fallback. If the Redis connection is temporarily lost, the application can continue serving some cached data from memory, enhancing resilience.
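The fallback read path can be pictured as a two-level lookup: try the primary (Redis) cache first, and on a connection failure serve from the bounded in-memory LRU instead. This is a minimal sketch of the pattern; the class names (`LRUCache`, `TwoLevelCache`) are assumptions, not the application's actual code.

```python
from collections import OrderedDict

class LRUCache:
    """Small in-memory fallback that evicts the least recently used entry."""
    def __init__(self, maxsize=128):
        self.maxsize = maxsize
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)          # mark as recently used
        return self._data[key]

    def set(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)   # evict the oldest entry

class TwoLevelCache:
    """Read from the primary (Redis) cache; fall back to memory when it is down."""
    def __init__(self, primary, fallback):
        self.primary = primary
        self.fallback = fallback

    def get(self, key):
        try:
            value = self.primary.get(key)
            if value is not None:
                self.fallback.set(key, value)   # keep the fallback warm on hits
            return value
        except ConnectionError:
            return self.fallback.get(key)       # Redis unreachable: serve from memory
```

Keeping the fallback deliberately small bounds each worker's memory footprint while still covering the hottest keys during a Redis outage.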
Intelligent Cache Management¶
Smart Time-To-Live (TTL)¶
Not all data is cached for the same duration. The system uses a smart TTL strategy based on the data's volatility:
- Short TTL (5-15 minutes): For highly dynamic data like real-time incident counts, active instance metrics, and portfolio overview stats.
- Medium TTL (1-4 hours): For semi-static data like process version lists, historical performance analytics, and AI analysis results.
- Long TTL (24+ hours): For static data like BPMN/DMN XML content, which only changes upon a new deployment.
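The three tiers above can be encoded as a simple lookup from data category to TTL. The category names and the exact per-tier values in this sketch are assumptions chosen to match the ranges listed, not the platform's actual cache-key taxonomy.

```python
# Illustrative TTL tiers in seconds, one value per tier from the ranges above.
SHORT_TTL = 5 * 60           # 5 minutes: highly dynamic data
MEDIUM_TTL = 60 * 60         # 1 hour: semi-static data
LONG_TTL = 24 * 60 * 60      # 24 hours: static data

# Hypothetical category names mapped onto the tiers.
TTL_BY_CATEGORY = {
    "incident_counts": SHORT_TTL,    # real-time incident counts
    "active_instances": SHORT_TTL,   # active instance metrics
    "portfolio_stats": SHORT_TTL,    # portfolio overview stats
    "process_versions": MEDIUM_TTL,  # process version lists
    "perf_analytics": MEDIUM_TTL,    # historical performance analytics
    "ai_analysis": MEDIUM_TTL,       # AI analysis results
    "bpmn_xml": LONG_TTL,            # BPMN/DMN XML, changes only on deploy
}

def ttl_for(category):
    """Return the TTL for a data category, defaulting to the short tier.

    Defaulting short is the safe choice: an unknown category is at worst
    re-queried a little more often, never served stale for hours.
    """
    return TTL_BY_CATEGORY.get(category, SHORT_TTL)
```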
Selective Cache Invalidation¶
The platform provides API endpoints for administrators to surgically clear the cache, avoiding a full cache flush.
- Clear All Caches: Flushes the entire Redis cache.
- Invalidate on Deployment: A dedicated function, `invalidate_on_deployment`, can be triggered after a new process version is deployed. It intelligently clears only the caches related to static data (like BPMN XML and definition lists) for that specific process, leaving dynamic performance data intact.
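The deployment-time invalidation can be sketched as a prefix match over the cache keys for one process. The key scheme below (`bpmn_xml:<key>`, `definitions:<key>`, `metrics:<key>`) is a hypothetical example, and the dict stands in for Redis, where this would be a SCAN over matching patterns followed by DELETE.

```python
def invalidate_on_deployment(store, process_key):
    """Drop only static-data cache entries for one process after a deployment.

    Dynamic entries (e.g. metrics) keep their natural TTLs and are left alone.
    `store` is a dict standing in for Redis; the key prefixes are assumptions.
    """
    static_prefixes = (
        f"bpmn_xml:{process_key}",
        f"dmn_xml:{process_key}",
        f"definitions:{process_key}",
    )
    # str.startswith accepts a tuple, so one pass covers all static prefixes.
    doomed = [key for key in store if key.startswith(static_prefixes)]
    for key in doomed:
        del store[key]
    return doomed
```

This is what makes the invalidation "surgical": a new deployment of one process never evicts another process's entries, nor any dynamic performance data.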
Cache Warming¶
For critical dashboards, a "cache warming" process can be initiated. It preemptively runs the most important queries and populates the cache before users arrive, so even the first visitor of the day gets a fast, fully cached page.
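Warming amounts to running a fixed list of dashboard queries up front and storing their results with the appropriate TTLs. A minimal sketch, assuming a dict-backed store and a hypothetical `(key, query_fn, ttl)` warmer list; the real platform would issue the equivalent Redis writes:

```python
def warm_cache(store, warmers):
    """Populate the cache before the first request.

    `warmers` is a list of (cache_key, query_fn, ttl_seconds) tuples; each
    query_fn runs the expensive aggregation that would otherwise hit the
    database on the first page load of the day.
    """
    for key, query_fn, ttl in warmers:
        store[key] = (ttl, query_fn())   # store the TTL alongside the result

# Example warmer list for a critical dashboard (illustrative values).
warmers = [
    ("portfolio_stats", lambda: {"active_instances": 42}, 300),
    ("process_versions", lambda: ["v1", "v2"], 3600),
]
```

A natural place to trigger this is a scheduled job shortly before business hours, or immediately after a deployment-time invalidation has cleared the static entries.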