Technology
Collective Learning
Collective Learning is Chkk’s always-on knowledge refinery. Its purpose is to capture insights from across the open-source ecosystem—hundreds of OSS projects, add-ons, applications, and services—and convert that raw change stream into source-grounded, machine-actionable knowledge. Every downstream function—classification, risk scanning, upgrade planning, automated actions—depends on the accuracy, freshness, and auditability of this layer.
Continuously running Source Feeds harvest upstream signals from release notes, official documentation, GitHub issues, container registries, cloud bulletins, and official blog posts. These incoming events are routed to highly specialized, Task-Specific AI Agents. Each agent is responsible for a distinct artifact type—such as breaking changes or OS compatibility—and executes a deterministic, AI-driven ETL pipeline: extract the relevant fragment, transform it into Chkk’s canonical schema, and load the candidate fact for validation.
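To make the extract-transform-load contract concrete, here is a minimal sketch of one task-specific agent, assuming a hypothetical event shape and fact schema (none of these names come from Chkk's actual codebase):

```python
from dataclasses import dataclass

@dataclass
class CandidateFact:
    """A normalized fact awaiting validation (hypothetical schema)."""
    source_url: str     # where the fragment was harvested
    artifact_type: str  # e.g. "breaking_change", "os_compatibility"
    project: str
    payload: dict       # fields mapped onto the canonical schema

class BreakingChangeAgent:
    """Illustrative agent for one artifact type, with the three ETL stages."""

    ARTIFACT_TYPE = "breaking_change"

    def extract(self, event: dict) -> str:
        # Pull only the fragment relevant to this artifact type,
        # e.g. the "Breaking Changes" section of a release note.
        return event["body"]

    def transform(self, fragment: str, event: dict) -> CandidateFact:
        # Map the raw fragment onto the canonical schema.
        return CandidateFact(
            source_url=event["url"],
            artifact_type=self.ARTIFACT_TYPE,
            project=event["project"],
            payload={"text": fragment},
        )

    def load(self, fact: CandidateFact) -> None:
        # Hand the candidate fact off for validation; stubbed here.
        print(f"queued {fact.artifact_type} fact from {fact.source_url}")
```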
All authoritative sources used by the Knowledge Engine are integrated into a Grounding Layer that is both AI-curated and subject to human oversight. This Grounding Layer models and organizes all required inputs to reliably identify clouds, open-source projects, add-ons, and application services in a customer’s environment. The Chkk Research Team actively reviews and validates every source incorporated into this layer, ensuring ongoing trustworthiness. Agents, workflows, and tools then rely on this curated corpus to prevent AI hallucinations and uphold the accuracy of knowledge attributes.
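As an illustration of how such a curated corpus might be represented, the sketch below models one grounding entry with an explicit human-review gate; the field names are assumptions for exposition, not Chkk's schema:

```python
from dataclasses import dataclass
from enum import Enum

class ReviewStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass(frozen=True)
class GroundedSource:
    """One entry in the curated corpus (illustrative fields only)."""
    url: str              # authoritative location, e.g. a release-notes page
    project: str          # the OSS project, add-on, or service it covers
    curated_by_ai: bool   # whether AI curation proposed this source
    review: ReviewStatus  # human-oversight gate before agents may cite it
```

Under a model like this, agents would resolve claims only against sources whose review status is APPROVED, which is what lets the Grounding Layer act as a backstop against hallucinated attributes.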
Curated facts are written to two data stores. The first is the Risk Signature Database (RSig DB), which houses every Risk Signature (RSig)—complete with severity, trigger conditions, and mitigations. The second is the Knowledge Graph, which encodes compatibility edges, version metadata, packaging information, component hierarchies, end-of-life schedules, and safety guardrails for Kubernetes, add-ons, and applications.
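The two stores hold differently shaped records. A rough sketch of what each might contain, with field names chosen purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class RiskSignature:
    """Sketch of an RSig DB record; field names are assumptions."""
    rsig_id: str
    severity: str                  # e.g. "critical", "high", "medium"
    trigger_conditions: list[str]  # predicates over inventory, e.g. version ranges
    mitigations: list[str]         # ordered remediation guidance

@dataclass
class CompatibilityEdge:
    """Sketch of one Knowledge Graph edge between versioned components."""
    source: str    # e.g. "kubernetes@1.29"
    target: str    # e.g. "ingress-nginx@1.10"
    relation: str  # e.g. "compatible_with", "requires", "end_of_life_after"
```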
Whenever Chkk onboards a new OSS project, add-on, or cloud distribution, Collective Learning automatically extends coverage across changelogs, compatibility timelines, image registries, package systems, and upstream GitHub activity. This ensures that the moment a community, cloud, or vendor discloses a breaking change, publishes a versioned artifact, or posts a changelog, that signal is ingested, verified, tagged, curated, and made available for downstream reasoning within minutes.
Artifact Collection
Artifact Collection retrieves raw customer configuration and metadata. Because this data is private to each customer, it is stored separately from any data refined through the Collective Learning systems.
Configuration and metadata are collected continuously, and each collection is captured as a snapshot. These snapshots form an auditable timeline that enables Chkk to detect changes to the customer’s infrastructure configuration and risk profile.
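A minimal sketch of how consecutive snapshots can be diffed into a change record, assuming each snapshot is a mapping from resource ID to content hash (the snapshot shape is an assumption, not Chkk's wire format):

```python
def diff_snapshots(previous: dict[str, str], current: dict[str, str]) -> dict:
    """Compare two configuration snapshots keyed by resource ID."""
    prev_keys, curr_keys = set(previous), set(current)
    return {
        "added":    sorted(curr_keys - prev_keys),
        "removed":  sorted(prev_keys - curr_keys),
        "modified": sorted(k for k in prev_keys & curr_keys
                           if previous[k] != current[k]),
    }
```

Running a diff like this over each adjacent pair in the timeline yields the kind of change events that can feed a risk profile.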
Classification
Classification connects collected customer configuration and metadata to Chkk’s Collective Learning corpus, enabling the platform to richly contextualize and reason about every inventory object. Upon ingest, each resource arrives unclassified—identified only by its raw coordinates: cluster ID, namespace, kind, name, and hash.
A multi-stage classification pipeline then resolves these opaque blobs into fully enriched inventory records. Classifiers systematically detect and assign properties, including Deployment System (e.g., Terraform, ArgoCD, FluxCD, Helm, kubectl), Project, Project Release, Project Component, Package System (e.g., Helm, Kustomize, Kube, Terraform), Package, Package Release, Package Component, OCI Registry, OCI Repository, OCI Tag, and OCI Artifact.
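The sketch below shows what the raw coordinates and the enriched record might look like, and how a multi-stage pipeline could fill the record in; the property names mirror the list above, while the function shape is an assumption:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RawResource:
    """The raw coordinates a resource arrives with, before classification."""
    cluster_id: str
    namespace: str
    kind: str
    name: str
    hash: str

@dataclass
class EnrichedRecord:
    """A subset of the properties classifiers assign (illustrative)."""
    resource: RawResource
    deployment_system: str | None = None  # e.g. "Helm", "ArgoCD"
    project: str | None = None
    project_release: str | None = None
    package_system: str | None = None     # e.g. "Helm", "Kustomize"
    package: str | None = None
    oci_repository: str | None = None
    oci_tag: str | None = None

def classify(resource: RawResource, stages) -> EnrichedRecord:
    """Run each stage in order; later stages may build on earlier output."""
    record = EnrichedRecord(resource=resource)
    for stage in stages:
        stage(record)  # each stage fills in the properties it can detect
    return record
```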
The pipeline supports custom overrides to accommodate customer-specific realities—such as private registries, internal charts, or hardened AMIs—ensuring that even bespoke infrastructure components are correctly identified and reasoned over.
Contextualization
Contextualization begins once classification has pinned every inventory object to its exact Project, Package, Release, and deployment metadata. Contextualizers layer situational intelligence onto those links—pruning changelogs to only the deltas that affect the customer, composing readiness probes that reflect the cluster’s actual topology, flagging pre-upgrade actions for client teams, and generating upgrade steps that align with the specific Deployment System, Package version, and OCI artifacts in play. By translating canonical release knowledge into environment-aware instructions, contextualization turns abstract matches into clear, executable guidance operators can trust.
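Changelog pruning is the easiest of these to sketch. Assuming entries carry simple "major.minor.patch" version strings (real implementations need semver-aware parsing and project-specific quirks), the idea is to keep only the deltas the customer will actually cross:

```python
def prune_changelog(entries: list[dict], installed: str, target: str) -> list[dict]:
    """Keep only entries strictly after `installed` and up to `target`."""
    def key(version: str) -> tuple[int, ...]:
        return tuple(int(part) for part in version.split("."))

    lo, hi = key(installed), key(target)
    return [e for e in entries if lo < key(e["version"]) <= hi]
```

For example, pruning against installed "1.8.2" and target "1.10.0" drops every entry outside that window, so operators only read changes that affect their upgrade path.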
Reasoning and Generation Engines
Reasoning and Generation Engines power Chkk’s core reasoning modules—Upgrade Copilot, Artifact Register, and Risk Ledger. Together they reconcile two parallel truths: the classified inventory of what is currently running, and Collective Learning’s knowledge of what is changing upstream. This is the platform’s decision cortex, converting source-grounded intelligence into production-safe change artifacts for the Action Engines to execute.
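Conceptually, that reconciliation can be pictured as matching classified inventory records against Risk Signature trigger conditions. The sketch below reduces triggers to field-equality predicates for clarity; the real engines reason over the Knowledge Graph and far richer conditions:

```python
def reconcile(inventory: list[dict], signatures: list[dict]) -> list[dict]:
    """Match inventory items against signatures (illustrative only)."""
    findings = []
    for item in inventory:
        for sig in signatures:
            # A signature fires when all of its trigger fields match the item.
            if all(item.get(k) == v for k, v in sig["trigger"].items()):
                findings.append({
                    "resource": item["name"],
                    "rsig": sig["id"],
                    "severity": sig["severity"],
                })
    return findings
```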
Action Engines
Action Engines convert high-level intent—“mitigate this risk,” “upgrade that add-on,” “snooze for 30 days”—into precise, auditable changes across code, infrastructure, and collaboration systems. Each engine is a purpose-built, domain-specific workflow.
These engines span multiple functional categories—from planning and preverification to remediation, validation, collaboration, monitoring, and reporting. They generate Upgrade Plans and readiness reports, Preverify upgrades in a Digital Twin, apply temporary safeguards or permanent fixes, verify change safety, manage workflow ownership, enforce SLAs, and produce governance-ready audit reports.
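One way to picture the shared contract behind these engines is an interface that first expands an intent into auditable steps, then applies them. This is a sketch of the idea, not Chkk's actual API; every name here is hypothetical:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

@dataclass
class Intent:
    """A high-level request, e.g. kind="snooze", params={"days": 30}."""
    kind: str
    params: dict = field(default_factory=dict)

class ActionEngine(ABC):
    """Hypothetical contract a domain-specific engine could satisfy."""

    @abstractmethod
    def plan(self, intent: Intent) -> list[str]:
        """Expand the intent into concrete, reviewable steps."""

    @abstractmethod
    def apply(self, steps: list[str]) -> None:
        """Execute the steps against code, infra, or collaboration systems."""
```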
Workflows
Actions are stitched into durable workflows that orchestrate every operational objective end-to-end. Chkk ships a library of workflow Blueprints—spanning planning, mitigation, remediation, preverification, and reporting—running on the Durable Workflow Fabric. For instance, a “Fix Misconfigured PDBs” workflow assigns the owner, defines the fix criteria (e.g., ensuring allowedDisruptions > 0), sets a need-by date with an SLA timer, schedules Slack or email reminders, opens a Jira ticket if one doesn’t exist, monitors continuously until the condition is fully remediated, auto-closes the ticket upon verification, and notifies stakeholders when the mitigation is complete.
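Expressed declaratively, that workflow might look like the sketch below; the keys, values, and step names are illustrative stand-ins, not Chkk's Blueprint syntax:

```python
# Hypothetical declarative form of the "Fix Misconfigured PDBs" workflow.
fix_misconfigured_pdbs = {
    "owner": "platform-team",                  # assigned workflow owner
    "fix_criteria": "allowedDisruptions > 0",  # condition that closes it out
    "need_by": "2025-07-01",                   # drives the SLA timer
    "reminders": {"channels": ["slack", "email"], "every_days": 3},
    "steps": [
        "open_jira_ticket_if_missing",
        "monitor_until_criteria_met",
        "auto_close_ticket_on_verification",
        "notify_stakeholders",
    ],
}
```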
Blueprints
A Blueprint is a reusable, parameter-driven recipe that compiles one or more canonical Actions into a fully wired workflow on Chkk’s Durable Workflow Fabric. Chkk ships Blueprints spanning hundreds of open-source project change lifecycles and lets enterprises author custom Blueprints to encode their own governance, approval, and remediation processes.
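A minimal sketch of that compile step, with hypothetical field names standing in for whatever Chkk's Blueprint format actually uses:

```python
from dataclasses import dataclass

@dataclass
class Blueprint:
    """A reusable, parameter-driven recipe (illustrative shape)."""
    name: str
    actions: list[str]          # canonical Action identifiers to chain
    parameters: dict[str, str]  # customer- or environment-specific values

    def compile(self) -> dict:
        """Expand into a workflow spec for the Durable Workflow Fabric."""
        return {
            "workflow": self.name,
            "steps": [{"action": a, "params": self.parameters}
                      for a in self.actions],
        }
```

Custom Blueprints would then be new instances of the same recipe shape, parameterized with an enterprise's own approval and remediation steps.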