# Deployment Overview
Condensa is designed to run flexibly across environments: cloud, private cloud, and on-prem deployments are all supported.
## Environment Roles & Tools
| Environment | Role | Tool / Stack |
|---|---|---|
| Web | Frontend app for users; handles uploads and light local OCR | Next.js (browser client) or static site for docs / portal |
| API | Backend service that receives files, orchestrates OCR and mapping | Flask (or FastAPI) for routes & business logic |
| Engine | Heavy visual compression & tokenization | Condensa Vision Processor (GPU-accelerated) |
| Build | Reproducible environment definitions | Nix + Docker for consistent builds |
| CI/CD | Automated tests, builds, and deployments | GitHub Actions (pipelines) |
## Deployment Recommendations
- Deploy the Vision Processor close to data (same region) to lower latency and egress costs.
- Use container orchestration (Kubernetes) for scalable worker pools handling OCR and mapping jobs.
- Isolate sensitive workloads to private subnets or on-prem clusters as required by data residency.
- Enable metrics and tracing to monitor pipeline performance end-to-end.
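For the last point, worker-side instrumentation can be sketched with the `prometheus_client` Python library; the metric names and the stand-in OCR step below are assumptions for illustration, not Condensa's real pipeline.

```python
import time

from prometheus_client import Counter, Histogram

# Job counter (labeled by outcome) and a latency histogram let dashboards
# track throughput, error rate, and duration across the OCR worker pool.
OCR_JOBS = Counter("condensa_ocr_jobs_total", "OCR jobs processed", ["status"])
OCR_LATENCY = Histogram("condensa_ocr_seconds", "OCR job duration in seconds")


def run_ocr_job(payload: bytes) -> str:
    start = time.monotonic()
    try:
        # Stand-in for the real OCR/mapping step.
        result = payload.decode("utf-8", errors="replace")
        OCR_JOBS.labels(status="ok").inc()
        return result
    except Exception:
        OCR_JOBS.labels(status="error").inc()
        raise
    finally:
        # Record duration whether the job succeeded or failed.
        OCR_LATENCY.observe(time.monotonic() - start)
```

In a real worker these metrics would be exposed on a scrape endpoint (e.g. via `prometheus_client.start_http_server`) so they can be collected alongside distributed traces for end-to-end visibility.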