[Linux-aus] Ceph FS vs Gluster FS

Andrew Ruthven andrew at etc.gen.nz
Wed Jun 7 08:15:28 AEST 2023


On Wed, 2023-06-07 at 07:53 +1000, Andrew Radke via linux-aus wrote:
> Also how well does Ceph work with nodes with only one storage device in them? For instance we have
> some that only support one NVMe and one SATA, so we could put the boot/OS on a SATA SSD and then
> use the NVMe for Ceph.

Yeah, this would work okay. I would think that a bigger chassis with more NVMe devices would be more
cost-effective though.
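For a node with a single spare NVMe device, adding it as an OSD is a one-liner. A rough sketch, assuming a cephadm-managed cluster; the host name and device path here are made up for illustration:

```shell
# Via the cephadm orchestrator (run from a node with admin access),
# consume the single NVMe on the node as an OSD:
ceph orch daemon add osd node1:/dev/nvme0n1

# Or, with ceph-volume directly on the node itself:
ceph-volume lvm create --data /dev/nvme0n1
```

With only one OSD per node you'd want enough nodes that the default 3x replication (or your erasure-coding profile) still has somewhere to place data.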

Incidentally, I've been thinking about getting 4x Turing Pi V2 boards with CM4s for home and running
a Ceph OSD per Turing board. A little k8s cluster with a shared filesystem.

There was even a time when people were looking at running the Ceph OSD (the bit that actually
manages the data on disk) in Ethernet-enabled hard drives. I'm pretty sure there was a proof of
concept running.

Cheers,
Andrew

-- 
Andrew Ruthven, Wellington, New Zealand
andrew at etc.gen.nz |
Catalyst Cloud: | This space intentionally left blank
https://catalystcloud.nz |

