Installation Guide

This guide covers the complete installation process for Champa Intelligence in both development and production environments.


System Requirements

Minimum Requirements

Component   Requirement
CPU         2 cores
RAM         4 GB
Storage     20 GB
OS          Linux (RHEL 8+, Ubuntu 20.04+), macOS, Windows with WSL2

Recommended Requirements

Component   Requirement
CPU         4+ cores
RAM         8+ GB
Storage     50+ GB SSD
OS          Ubuntu 22.04 LTS or RHEL 9+

Prerequisites

Required Software

# Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Install Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/download/v2.24.0/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Verify installation
docker --version
docker-compose --version

Python 3.12+

# Ubuntu/Debian
sudo apt update
sudo apt install python3.12 python3.12-venv python3-pip

# macOS
brew install python@3.12

Node.js 18+

# Ubuntu/Debian
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt install nodejs

# macOS
brew install node@18

PostgreSQL 15+

# Ubuntu/Debian (PostgreSQL 15 may require adding the PGDG apt repository first)
sudo apt install postgresql-15 postgresql-contrib-15

# macOS
brew install postgresql@15

Redis 7+

# Ubuntu/Debian
sudo apt install redis-server

# macOS
brew install redis

External Dependencies

  • Camunda 7 Database: Access to PostgreSQL database with Camunda schema
  • Google Gemini API Key: For AI analysis features (Get API Key)
  • Camunda REST API: Access credentials for health monitoring
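Before installing, it can help to confirm these external dependencies are reachable from the host. A minimal sketch, assuming placeholder hostnames and the default unauthenticated /version endpoint (substitute your real values):

```shell
# Hypothetical hostnames and endpoint -- substitute your real values.
CAMUNDA_DB_HOST="your_camunda_host"
CAMUNDA_REST="http://camunda-node-1:8080/engine-rest"

# Database port reachable? (pg_isready ships with the PostgreSQL client tools)
pg_isready -h "$CAMUNDA_DB_HOST" -p 5432 || echo "Camunda DB not reachable"

# REST API answering? (/version is typically unauthenticated on default installs;
# adjust if your engine requires credentials)
curl -fsS --max-time 5 "$CAMUNDA_REST/version" || echo "Camunda REST API not reachable"

# Gemini key present in the environment?
[ -n "$GOOGLE_API_KEY" ] || echo "GOOGLE_API_KEY is not set"
```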

Installation Methods

Method 1: Docker Deployment (Recommended)

This is the fastest and most reliable way to deploy Champa Intelligence.

Step 1: Clone Repository

git clone https://github.com/your-org/champa-intelligence.git
cd champa-intelligence

Step 2: Configure Environment

# Copy example environment file
cp .env.example .env

# Edit configuration
nano .env

Required environment variables:

# Camunda Database (Customer DB)
DB_NAME=camunda
DB_USER=camunda
DB_PASSWORD=your_camunda_password
DB_HOST=your_camunda_host
DB_PORT=5432

# System Database (Champa's own DB)
SYSTEM_DB_PASSWORD=strong_password_here
REDIS_PASSWORD=strong_redis_password

# Security (note: $(...) substitution is not expanded inside .env files
# by most loaders, so generate these values first and paste the results)
JWT_SECRET=your_generated_jwt_secret
APP_SECRET_KEY=your_generated_app_secret_key

# AI Configuration
GOOGLE_API_KEY=your_gemini_api_key

# Camunda API Access
CAMUNDA_API_USER=demo
CAMUNDA_API_PASSWORD=demo
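Shell substitution such as $(openssl rand -hex 32) is generally not expanded when a .env file is parsed by docker-compose, so generate the secrets first and paste or append them. A sketch:

```shell
# Generate two independent 256-bit secrets (64 hex characters each)
JWT_SECRET=$(openssl rand -hex 32)
APP_SECRET_KEY=$(openssl rand -hex 32)

# Append them to .env (or paste them in manually)
printf 'JWT_SECRET=%s\nAPP_SECRET_KEY=%s\n' "$JWT_SECRET" "$APP_SECRET_KEY" >> .env
```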

Step 3: Build Frontend Assets

# Install Node.js dependencies
npm install

# Build production assets
npm run build:prod

Step 4: Build Docker Image

docker build -t champa-intelligence:latest .

Step 5: Start Services

# Start all services (app + system DB + Redis)
docker-compose -f docker-compose.server.yml up -d

# Check status
docker-compose -f docker-compose.server.yml ps

# View logs
docker-compose -f docker-compose.server.yml logs -f champa-intelligence

Step 6: Initialize Database

# Initialize auth database schema
docker exec champa-intelligence python -c "
from db import init_auth_db
init_auth_db()
"

Step 7: Access Application

Open browser: http://localhost:8088

Default credentials:

  • Username: admin
  • Password: admin

Security

Change the default admin password immediately after first login!


Method 2: Manual Installation

For development or custom deployments.

Step 1: Clone and Setup Python Environment

# Clone repository
git clone https://github.com/your-org/champa-intelligence.git
cd champa-intelligence

# Create virtual environment
python3.12 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install Python dependencies
pip install -r requirements.txt

Step 2: Install Node.js Dependencies

npm install

Step 3: Configure Environment

# Copy example environment file
cp .env.example .env

# Edit with your settings
nano .env

Step 4: Setup Databases

System Database (PostgreSQL):

# Create database
createdb -U postgres champa_system

# Create user
psql -U postgres -c "CREATE USER champa_user WITH PASSWORD 'your_password';"
psql -U postgres -c "GRANT ALL PRIVILEGES ON DATABASE champa_system TO champa_user;"

# Create auth schema
psql -U champa_user -d champa_system -c "CREATE SCHEMA auth;"

Initialize Auth Tables:

python -c "from db import init_auth_db; init_auth_db()"
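To confirm the schema was populated, list its tables with psql (a sketch; the exact table names depend on what init_auth_db creates):

```shell
# List tables in the auth schema; a non-empty listing means init succeeded
psql -U champa_user -d champa_system -c '\dt auth.*' \
  || echo "auth schema check failed (is PostgreSQL running and reachable?)"
AUTH_CHECK_DONE=yes
```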

Step 5: Setup Redis

# Start Redis
redis-server --requirepass your_redis_password

Step 6: Build Frontend

# Development build
npm run build

# OR Production build
npm run build:prod

Step 7: Start Application

Development:

python app.py

Production:

gunicorn \
  --workers 4 \
  --threads 2 \
  --worker-class gthread \
  --bind 0.0.0.0:8088 \
  --timeout 300 \
  --access-logfile logs/access.log \
  --error-logfile logs/error.log \
  app:app
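The command above writes access and error logs under logs/, which must exist before gunicorn starts. A quick pre-flight and smoke check (the health endpoint is the one used in the post-installation steps):

```shell
# gunicorn's --access-logfile/--error-logfile paths must exist beforehand
mkdir -p logs

# Once gunicorn is up, confirm the app answers on the configured port
curl -fsS --max-time 5 http://localhost:8088/health/ping || echo "app not responding yet"
```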


Post-Installation Steps

1. Configure Camunda Nodes

Edit config.py:

CAMUNDA_NODES = {
    'node-1': 'http://camunda-node-1:8080/engine-rest',
    'node-2': 'http://camunda-node-2:8080/engine-rest',
}
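After editing config.py, each configured node can be probed from the host. A sketch using the example URLs above (engine-rest typically exposes an unauthenticated /version endpoint; adjust if your setup requires credentials):

```shell
# Probe each configured engine-rest base URL; adjust the list to match config.py
NODES_OK=0
for url in \
  "http://camunda-node-1:8080/engine-rest" \
  "http://camunda-node-2:8080/engine-rest"
do
  if curl -fsS --max-time 5 "$url/version" > /dev/null 2>&1; then
    echo "$url: OK"
    NODES_OK=$((NODES_OK + 1))
  else
    echo "$url: UNREACHABLE"
  fi
done
echo "$NODES_OK node(s) reachable"
```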

2. Configure JMX Exporters (Optional)

For advanced JVM monitoring:

JMX_EXPORTER_ENDPOINTS = {
    'node-1': 'http://camunda-node-1:9404/metrics',
    'node-2': 'http://camunda-node-2:9404/metrics',
}

JVM_METRICS_SOURCE = 'jmx'  # or 'micrometer'

3. Test Health Endpoint

curl http://localhost:8088/health/ping

Expected response:

{
  "status": "OK",
  "timestamp": "2025-01-15T10:30:00.123456",
  "version": "1.0.0"
}

4. Test Database Connectivity

curl http://localhost:8088/health/db

5. Verify Redis Connection

Check logs for:

INFO - Redis connection successful: host=localhost, port=6379


Configuration Files

Key Configuration Files

File                        Purpose
config.py                   Main application configuration
config_ai.py                AI analysis settings
security/env_config.py      Environment variable parsing
.env                        Environment-specific secrets
docker-compose.server.yml   Production Docker setup

Configuration Examples

config.py:

# Camunda Configuration
CAMUNDA_NODES = {
    'production-node-1': 'http://10.0.1.10:8080/engine-rest',
    'production-node-2': 'http://10.0.1.11:8080/engine-rest',
}

# Logging
LOG_CONFIG = {
    'default_log_level': 'INFO',
    'slow_request_ms': 5000,
    'slow_query_ms': 3000,
}

# Business Keys
BUSINESS_KEY_DELIMITERS = ['_', '-', ':']

# Health Checks
STUCK_INSTANCE_DAYS = 7

config_ai.py:

# AI Model
AI_MODEL_NAME = 'gemini-2.0-flash-exp'
AI_TEMPERATURE = 0.3
AI_TOP_P = 0.95
AI_TOP_K = 40

# Parallel Data Fetching
AI_PARALLEL_DATA_FETCH_ENABLED = True
AI_PARALLEL_DB_MAX_WORKERS = 5

# Response Limits
AI_WORD_LIMITS = {
    'executive': 500,
    'standard': 1500,
    'detailed': 3000
}


Troubleshooting

Common Issues

1. Database Connection Failed

Symptom:

ERROR - Database connection failed: could not connect to server

Solution:

# Check PostgreSQL is running
sudo systemctl status postgresql

# Verify connection parameters
psql -h DB_HOST -U DB_USER -d DB_NAME

# Check firewall rules
sudo ufw status
sudo ufw allow 5432/tcp

2. Redis Connection Failed

Symptom:

WARNING - Redis connection failed, using PostgreSQL fallback

Solution:

# Check Redis is running
redis-cli -h localhost -p 6379 -a your_password ping

# Restart Redis
sudo systemctl restart redis

3. Frontend Assets Not Found

Symptom:

404 Not Found: /static/js/dist/layout_bundle.js

Solution:

# Rebuild frontend
npm install
npm run build:prod

# Check files exist
ls -la static/js/dist/

4. AI Analysis Not Working

Symptom:

ERROR - AI features are not configured on the server

Solution:

# Check API key is set
echo $GOOGLE_API_KEY

# Test API key
curl -H "Content-Type: application/json" \
  -d '{"contents":[{"parts":[{"text":"Hello"}]}]}' \
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash-exp:generateContent?key=$GOOGLE_API_KEY"

5. Permission Denied Errors

Symptom:

PermissionError: [Errno 13] Permission denied: '/app/logs'

Solution:

# Fix permissions
sudo chown -R $USER:$USER logs/ tmp/
chmod 755 logs/ tmp/


Verification Checklist

After installation, verify:

  • [ ] Application starts without errors
  • [ ] Can access login page at http://localhost:8088
  • [ ] Can log in with admin credentials
  • [ ] Portfolio dashboard loads
  • [ ] Process list appears in dropdowns
  • [ ] Health monitoring page shows Camunda nodes
  • [ ] AI analysis feature is available
  • [ ] Prometheus metrics endpoint works (/health/light/metrics)
  • [ ] Logs are being written to logs/ directory
  • [ ] Cache is working (check Redis with redis-cli)
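Several items on the list can be checked from a shell in one pass. A sketch using the endpoints from this guide (it only reports PASS/FAIL, it never aborts):

```shell
# Lightweight post-install smoke test; prints PASS/FAIL per endpoint.
BASE="http://localhost:8088"

check() {
  # $1 = label, $2 = URL
  if curl -fsS --max-time 5 "$2" > /dev/null 2>&1; then
    echo "PASS  $1"
  else
    echo "FAIL  $1"
  fi
}

check "login page"         "$BASE"
check "health ping"        "$BASE/health/ping"
check "database health"    "$BASE/health/db"
check "prometheus metrics" "$BASE/health/light/metrics"

# Logs should be accumulating on disk
[ -s logs/application.log ] && echo "PASS  logs" || echo "FAIL  logs"
```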

Next Steps

Installation Complete!

Continue with:


Getting Help

If you encounter issues:

  1. Check Troubleshooting Guide
  2. Review application logs: tail -f logs/application.log
  3. Check Docker logs: docker-compose logs -f
  4. Contact support: info@champa-bpmn.com