
# Ceph Cluster Configuration

MiraCeph is a Mirantis-developed declarative object used to configure and manage a Ceph cluster within a Kubernetes environment (in this case, one deployed and managed with Mirantis k0rdent Enterprise). The MiraCeph object must be created in the ceph-lcm-mirantis namespace and is processed by the Ceph Controller, which ensures that any desired changes to the cluster are applied automatically.

MiraCeph's configuration is split into two main sections: spec and status. The spec section defines every aspect of the cluster, including:

- Client access with finely tuned authorization capabilities
- Data storage paths and external cluster connections
- Extra options for device management
- Health check settings for monitors, managers, and OSDs
- Resource configuration for hyperconvergence
- Ingress rules, manager modules, and network settings
- Detailed node definitions
- Object storage options, such as RADOS Gateway and multisite setups
- Pool configurations
- Rook-specific settings (rookConfig and rookNamespace)
- Shared filesystem (CephFS) setups

An illustrative specification is sketched below.
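A minimal MiraCeph manifest might look like the following sketch. It is illustrative only: the apiVersion and the exact field layout (network, nodes, pools, device settings) are assumptions based on the description above and should be verified against the MiraCeph CRD schema shipped with your release.

```yaml
# Illustrative MiraCeph manifest. Field names below are assumptions drawn from
# the description above; verify them against the installed CRD schema.
apiVersion: lcm.mirantis.com/v1alpha1    # assumed API group/version
kind: MiraCeph
metadata:
  name: cephcluster
  namespace: ceph-lcm-mirantis           # MiraCeph must be created here
spec:
  network:
    clusterNet: 10.0.0.0/24              # replication (cluster) network
    publicNet: 192.168.0.0/24            # client-facing (public) network
  nodes:                                 # per-node roles and storage devices
    - name: worker-0
      roles: [mon, mgr]
      devices:
        - name: sdb
          config:
            deviceClass: hdd
  pools:                                 # replicated pool for block storage
    - name: kubernetes
      deviceClass: hdd
      replicated:
        size: 3
  rookConfig:                            # extra Ceph options passed through Rook
    osd_pool_default_size: "3"
```

Once such an object is applied, the Ceph Controller reconciles the running cluster to match the declared specification.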

Beyond the primary configuration, the health of the Ceph cluster is monitored through the MiraCephHealth object, which aggregates status checks, timestamps, state information, and detailed cluster conditions. Similarly, the MiraCephSecret object manages and tracks secrets for Ceph clients and RGW users, ensuring secure credential handling.
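The shape of a MiraCephHealth object might resemble the sketch below. It is purely hypothetical: the status fields shown (state, lastCheck, cephDetails) are placeholders invented to illustrate the kind of aggregated information described above, not the actual schema.

```yaml
# Hypothetical MiraCephHealth layout; the real schema is defined by the
# Ceph Controller, and the field names below are placeholders.
apiVersion: lcm.mirantis.com/v1alpha1    # assumed API group/version
kind: MiraCephHealth
metadata:
  name: cephcluster
  namespace: ceph-lcm-mirantis
status:
  state: Ready                           # aggregated cluster state
  lastCheck: "2024-01-01T00:00:00Z"      # timestamp of the latest health check
  cephDetails:
    health: HEALTH_OK                    # summary of the Ceph health status
    conditions: []                       # detailed cluster conditions
```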

For lifecycle operations, dedicated objects replace more generic requests. The CephOsdRemoveRequest object handles the removal of Object Storage Daemons (OSDs), providing granular control over cleanup actions such as full node cleanups or targeted removal by device or OSD ID. The CephPerfTestRequest object facilitates periodic performance testing with FIO, with configurable test parameters, scheduling, and result storage on persistent volumes.
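A CephOsdRemoveRequest might be expressed as in the sketch below. This is an assumption-based illustration: the node names, device names, OSD IDs, and per-node cleanup keys (completeCleanUp, cleanupByDevice, cleanupByOsdId) are placeholders meant to show the kinds of granular cleanup the object supports, and the exact spelling must be taken from the CRD schema in your release.

```yaml
# Hypothetical CephOsdRemoveRequest; cleanup keys and their exact spelling are
# placeholders and must be checked against the CRD schema in your release.
apiVersion: lcm.mirantis.com/v1alpha1    # assumed API group/version
kind: CephOsdRemoveRequest
metadata:
  name: remove-failed-osds
  namespace: ceph-lcm-mirantis
spec:
  nodes:
    worker-1:
      completeCleanUp: true              # remove every OSD hosted on this node
    worker-2:
      cleanupByDevice:                   # remove only OSDs backed by a device
        - device: sdc
    worker-3:
      cleanupByOsdId:                    # remove a single OSD by its ID
        - 7
```

A CephPerfTestRequest follows a similar request-object pattern, with its spec describing the FIO parameters, the test schedule, and the persistent volume used to store results.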

Overall, these components work together to provide an integrated, Kubernetes-native way to deploy, manage, and monitor Ceph clusters while automating many of the complexities associated with traditional storage systems.