Install Ceph on Ubuntu
Ceph is a storage system designed for excellent performance, reliability, and scalability. However, the installation and management of Ceph can be challenging. The Ceph-on-Ubuntu solution takes the administration minutiae out of the equation through the use of snaps and Juju charms. With either approach, the deployment of a Ceph cluster becomes trivial as does the scaling of the cluster's storage capacity.
Choose the Ceph installation option for your deployment:
Single-node deployment
- Uses MicroCeph
- Works on a workstation or VM
- Suitable for testing and development
These installation instructions use MicroCeph - Ceph in a snap. MicroCeph is a pure upstream Ceph distribution designed for small scale and edge deployments, which can be installed and maintained with minimal knowledge and effort.
To get started, install the MicroCeph snap with the following command on each node to be used in the cluster:
sudo snap install microceph
Then bootstrap the cluster:
sudo microceph cluster bootstrap
Check the cluster status with the following command:
sudo microceph.ceph status
Here you should see that there is a single node in the cluster.
To use MicroCeph as a single node, the default CRUSH rules need to be modified:
sudo microceph.ceph osd crush rule rm replicated_rule
sudo microceph.ceph osd crush rule create-replicated single default osd
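To confirm the change took effect, the standard Ceph CLI can list and inspect the active CRUSH rules; after the commands above, only the new single rule should remain:

```shell
# List all CRUSH rules -- the default replicated_rule should be gone.
sudo microceph.ceph osd crush rule ls
# Inspect the rule that was just created.
sudo microceph.ceph osd crush rule dump single
```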
Next, add some disks that will be used as OSDs:
sudo microceph disk add /dev/sd[x] --wipe
Repeat for each disk you would like to use as an OSD. Cluster status can be verified using:
sudo microceph.ceph status
sudo microceph.ceph osd status
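If a node has several spare disks, the disk add step can be scripted. This is a sketch only; the device names below are placeholders, so substitute the disks you actually intend to give to Ceph:

```shell
# Hypothetical device names -- replace with your own spare disks.
# The --wipe flag destroys any existing data on the disk.
for disk in /dev/sdb /dev/sdc /dev/sdd; do
    sudo microceph disk add "$disk" --wipe
done
```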
Multi-node deployment
- Uses MicroCeph
- Minimum of 4 nodes for a full-HA Ceph cluster
- Suitable for small-scale production environments
To get started, install the MicroCeph snap with the following command on each node to be used in the cluster:
sudo snap install microceph
Then bootstrap the cluster from the first node:
sudo microceph cluster bootstrap
On the first node, add other nodes to the cluster:
sudo microceph cluster add node[x]
Copy the resulting output to be used on node[x]:
sudo microceph cluster join pasted-output-from-node1
Repeat these steps for each additional node you would like to add to the cluster.
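Put together, the add/join flow for a hypothetical cluster of extra nodes looks like this (node names and the token placeholder are illustrative):

```shell
# On node1 (the bootstrapped node): generate a join token for the new node.
sudo microceph cluster add node2   # prints a join token

# On node2: join the cluster using the token printed on node1.
sudo microceph cluster join <token-from-node1>

# Repeat the add/join pair on node1 and node3, node1 and node4, and so on.
```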
Check the cluster status with the following command:
sudo microceph.ceph status
Here you should see that all the nodes you added have joined the cluster, in the familiar ceph status output.
Next, add some disks to each node that will be used as OSDs:
sudo microceph disk add /dev/sd[x] --wipe
Repeat for each disk you would like to use as an OSD on that node, and additionally on the other nodes in the cluster. Cluster status can be verified using:
sudo microceph.ceph status
sudo microceph.ceph osd status
Containerized deployment
- Uses a Canonical-supplied and maintained rock (OCI image)
- Works with cephadm and rook
- Suitable for all types of containerized deployments
These installation instructions use the Canonical-produced and -supplied Ceph rock. This OCI-compliant image is a drop-in replacement for the upstream Ceph OCI image.
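As an illustration, cephadm can be pointed at an alternative image via its global --image flag. The registry path below is a placeholder, not the actual published location of the rock, and the monitor IP is an example value:

```shell
# <registry>/<path> is a placeholder -- substitute the published location
# of the Canonical Ceph rock. 10.0.0.1 is an example monitor IP.
sudo cephadm --image <registry>/<path>/ceph:latest bootstrap --mon-ip 10.0.0.1
```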
Large-scale deployment
- Uses Charmed Ceph
- Uses MAAS for bare metal orchestration
- Suitable for large-scale production environments
Charmed Ceph is Canonical's fully automated, model-driven approach to installing and managing Ceph. Charmed Ceph is generally deployed on bare-metal hardware that is managed by MAAS.
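As a sketch of what a minimal Charmed Ceph deployment looks like, the ceph-mon and ceph-osd charms can be deployed and related with Juju. Unit counts here are illustrative, not a production layout:

```shell
# Deploy three monitor units and three OSD units, then relate them.
juju deploy -n 3 ceph-mon
juju deploy -n 3 ceph-osd
juju integrate ceph-osd:mon ceph-mon:osd
```

On Juju 2.9, `juju add-relation` takes the place of `juju integrate`.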