[Linux-aus] Ceph FS vs Gluster FS

Andrew Radke andrew at deepport.net
Wed Jun 7 07:53:11 AEST 2023


Hi Miles,

I’ve been a ZFS guy from long before Linux had heard of it. The data integrity in ZFS, performance with the ARC and L2ARC on modern machines with excess RAM and fast storage, and snapshots that can be taken and shoved around the network or to external disks are just part of why I rarely think about other filesystems.
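For what it's worth, the snapshot-shipping workflow I mean looks roughly like this. This is only a sketch (the dataset, snapshot and host names are placeholders) and it just drives the stock zfs and ssh tools from Python:

    # Sketch: snapshot a ZFS dataset and replicate it to another host.
    # Dataset and host names are placeholders; this shells out to the
    # standard `zfs` and `ssh` command-line tools.
    import subprocess
    from datetime import datetime, timezone

    DATASET = "tank/data"            # hypothetical local dataset
    REMOTE = "backup.example.net"    # hypothetical target host
    SNAP = f"{DATASET}@{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"

    # Take an atomic, point-in-time snapshot.
    subprocess.run(["zfs", "snapshot", SNAP], check=True)

    # Stream the snapshot over SSH into a pool on the remote box.
    send = subprocess.Popen(["zfs", "send", SNAP], stdout=subprocess.PIPE)
    subprocess.run(["ssh", REMOTE, "zfs", "receive", "-F", "backup/data"],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()

(Incremental sends with zfs send -i work the same way, which is what makes regular off-box replication cheap.)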

But I’ve looked at Ceph a few times over the last few years since Proxmox has it largely built in. I’m loath to reduce the level of data integrity I have now with ZFS. How do you find Ceph compares?

Also, how well does Ceph work with nodes that have only one storage device? For instance, we have some machines that only support one NVMe and one SATA drive, so we could put the boot/OS on a SATA SSD and then use the NVMe for Ceph.

Cheers,
Andrew

> On 6 Jun 2023, at 10:45 am, Miles Goodhew via linux-aus <linux-aus at lists.linux.org.au> wrote:
> 
> Hi Anestis,
>   I used to be "The Ceph guy" at a large and annoying government department. I think the nutshell differences I see are:
>  
> Gluster:
> Smaller scale (5-ish nodes max, I think)
> Network filesystem only
> Integrated services (storage and control/mgmt on the same boxes)
> Limited redundancy and failure-domain options
> A little simpler to set up on its own
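> 
> As a taste of that setup simplicity, bringing up a replicated Gluster volume is roughly this (a sketch only; the host names and brick paths are made up, and it just drives the gluster CLI from Python):
> 
>     # Sketch: create and start a 3-way replicated Gluster volume.
>     # Host names and brick paths are invented.
>     import subprocess
> 
>     def gluster(*args):
>         subprocess.run(["gluster", *args], check=True)
> 
>     bricks = [f"node{n}.example.net:/data/brick1" for n in (1, 2, 3)]
>     gluster("volume", "create", "gv0", "replica", "3", *bricks)
>     gluster("volume", "start", "gv0")
> 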
> Ceph:
> Scales up to gigantic, multi-region clusters
> Block storage (RBD), File storage (CephFS) and Object storage (RGW) options available
> Control/mgmt can be on separate nodes (and should be, unless you have a really small cluster)
> Any speed, redundancy (replication or erasure coding) or failure-domain setup you can think of. You can have multiple setups for different storage pools within the cluster (rough sketch after this list).
> Takes a bit more planning and implementation to deploy
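> 
> On that redundancy point, pool setup looks roughly like this (a sketch only; the pool and profile names are made up, and it drives the ceph CLI from an admin node):
> 
>     # Sketch: one replicated pool and one erasure-coded pool, each
>     # with its own failure domain. Names are invented.
>     import subprocess
> 
>     def ceph(*args):
>         subprocess.run(["ceph", *args], check=True)
> 
>     # Replicated pool: 3 copies, spread across hosts.
>     ceph("osd", "pool", "create", "vm-images", "128", "replicated")
>     ceph("osd", "pool", "set", "vm-images", "size", "3")
> 
>     # Erasure-coded pool: 4 data + 2 coding chunks, spread across racks.
>     ceph("osd", "erasure-code-profile", "set", "ec42",
>          "k=4", "m=2", "crush-failure-domain=rack")
>     ceph("osd", "pool", "create", "archive", "128", "erasure", "ec42")
> 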
> Like Neill said: OpenStack uses the RBD application to present "disk-like" virtual storage devices to the compute nodes for the VMs to use. The old Red Hat Enterprise Virtualisation (oVirt) used to use Gluster as its network storage system (putting disk images as files on top of it), though I'm not sure that's still the case.
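> 
> To make the RBD part concrete, this is roughly what a hypervisor layer does under the hood (a sketch; it assumes the python3-rados/python3-rbd bindings and a reachable cluster, and the pool/image names are invented):
> 
>     # Sketch: create an RBD image and do block I/O on it.
>     import rados, rbd
> 
>     cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
>     cluster.connect()
>     ioctx = cluster.open_ioctx("vm-images")   # hypothetical pool
> 
>     # Create a 10 GiB virtual disk for a VM.
>     rbd.RBD().create(ioctx, "vm42-disk0", 10 * 1024**3)
> 
>     # Open it and write at an offset, just like a local block device.
>     with rbd.Image(ioctx, "vm42-disk0") as image:
>         image.write(b"bootloader goes here", 0)
> 
>     ioctx.close()
>     cluster.shutdown()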
> 
> CephFS works really well as an NFS replacement (it's just a lot more fiddly to set up). RGW can present itself as either the S3 or Swift protocol (or a "weird" NFS version too, but don't go there).
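> 
> For the S3 side, stock boto3 pointed at an RGW endpoint just works (a sketch; the endpoint, credentials and bucket name are placeholders, and 7480 is RGW's default port):
> 
>     # Sketch: plain S3 against RGW via boto3.
>     import boto3
> 
>     s3 = boto3.client(
>         "s3",
>         endpoint_url="http://rgw.example.net:7480",  # hypothetical endpoint
>         aws_access_key_id="ACCESS_KEY",
>         aws_secret_access_key="SECRET_KEY",
>     )
> 
>     s3.create_bucket(Bucket="test-bucket")
>     s3.put_object(Bucket="test-bucket", Key="hello.txt", Body=b"hello from RGW")
>     print(s3.get_object(Bucket="test-bucket", Key="hello.txt")["Body"].read())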
> 
> Hope that's enough, but not too much info,
> 
> M0les.
> 
> On Tue, 6 Jun 2023, at 04:55, Anestis Kozakis via linux-aus wrote:
>> I was wondering if people could summarize for me the differences, as well as the pros and cons, of GlusterFS vs CephFS in regard to the following uses:
>> 
>> File Server/System and creating Virtual Machines and Containers.
>> 
>> I will, of course, do my own research, but I am looking to get other people's experiences and opinions.
>> 
>> Anestis.
>> --
>> Anestis Kozakis | kenosti at gmail.com
>> 

