From zero to running Fortem in your cluster. ~5 minutes.
Fortem is in early access. Helm chart: cybrixcc.github.io/fortem-helm-charts. Source: github.com/cybrixcc/fortem.
You'll need a running Kubernetes cluster (EKS, GKE, AKS, or self-managed, version 1.26 or newer), with Helm 3.x and kubectl installed locally. Verify both:
```shell
kubectl cluster-info
helm version
```
Add the Fortem chart repository and install the operator. It registers its CRDs and starts the AI engine; the whole step takes about 3 minutes.
```shell
helm repo add fortem https://cybrixcc.github.io/fortem-helm-charts
helm repo update
helm install fortem fortem/fortem \
  --namespace fortem-system \
  --create-namespace

# Watch it come up:
kubectl get pods -n fortem-system -w
```
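Before moving on, it's worth confirming the release is healthy and the CRDs registered. A quick sanity check; the `fortem.dev` API group is an assumption inferred from the Environment manifest shown in this guide:

```shell
# Release status for the chart install
helm status fortem -n fortem-system

# CRDs registered by the operator (the fortem.dev group name is an assumption)
kubectl get crds | grep fortem.dev

# All operator pods should reach Running before you continue
kubectl get pods -n fortem-system
```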
Port-forward the UI service to access the dashboard locally. It auto-discovers your clusters, namespaces, and workloads; there is nothing to configure.
```shell
kubectl port-forward svc/fortem-ui 8080:80 -n fortem-system
# Open: http://localhost:8080
```
In the AI Ops tab, describe what you need in plain English. Fortem generates a Kubernetes manifest, shows you a dry-run diff, and deploys on your approval.
```yaml
# Type this in the AI Ops input field:
#   "Create staging namespace for api-gateway with Postgres 15 and Redis"
#
# Fortem generates and applies:
apiVersion: fortem.dev/v1alpha1
kind: Environment
metadata:
  name: api-gateway-staging
  namespace: api-gateway-staging
spec:
  cluster: prod-eu-west
  autoShutdown: true
  workloads:
    - kind: Deployment
      name: api-gateway
      image: api-gateway:latest
    - kind: StatefulSet
      name: postgres
      image: postgres:15
    - kind: StatefulSet
      name: redis
      image: redis:7-alpine
```
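Because the generated Environment is an ordinary custom resource, you can also save the manifest from the dry-run diff, version it in Git, and manage it declaratively. A sketch, assuming the CRD accepts direct applies and that `environments.fortem.dev` is the resource name (both inferred from the manifest above, not from a published schema):

```shell
# Apply a saved Environment manifest yourself instead of clicking approve:
kubectl apply -f environment.yaml

# List Environments across all namespaces
# (the environments.fortem.dev resource name is an assumption)
kubectl get environments.fortem.dev -A
```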
No setup is needed. Fortem watches your cluster and surfaces actionable insights such as idle namespaces, OOM kills, and right-sizing opportunities.
```text
# Insights appear in the AI Ops tab automatically. Examples:

[CRITICAL] OOMKilled 3× in 2h → Deployment/worker · staging
           → Memory limit 256Mi, peak 310Mi → increase to 512Mi

[WARNING]  Idle namespace · dev-pr-448 · 9 days
           → No traffic or deploys → $180/mo savings if removed

[INFO]     Right-size opportunity · auth-service · production
           → CPU request 500m, p95 actual 48m → save $76/mo
```
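If you want to cross-check an OOMKill insight with plain kubectl, the standard commands below work on any cluster. The namespace and workload come from the example insight above, and the label selector and pod name are placeholders; substitute your own:

```shell
# List the worker Deployment's pods in staging
# (the app=worker label selector is an assumption about your labels)
kubectl get pods -n staging -l app=worker

# An OOM-killed container shows "Last State: Terminated, Reason: OOMKilled"
kubectl describe pod <worker-pod> -n staging | grep -A 3 "Last State"
```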
Book a 20-minute call — we'll walk through your cluster setup end-to-end.