<html><head></head><body style="overflow-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;"><div>On Wed, 2023-06-07 at 07:53 +1000, Andrew Radke via linux-aus wrote:</div><blockquote type="cite" style="margin:0 0 0 .8ex; border-left:2px #729fcf solid;padding-left:1ex"><div>Also how well does Ceph work with nodes with only one storage device in them? For instance we have some that only support one NVMe and one SATA, so we could put the boot/OS on a SATA SSD and then use the NVMe for Ceph.</div></blockquote><div><span><pre><br></pre><pre>Yeah, this would work fine. Ceph runs one OSD per storage device, so a node like that simply contributes a single OSD. I would think that a bigger chassis with more NVMe devices would be more cost effective, though.</pre><pre><br></pre><pre>Incidentally, I've been thinking about getting 4x Turing Pi V2 boards with CM4s for home and running a Ceph OSD per board. A little k8s cluster with a shared filesystem.</pre><pre><br></pre><pre>There was even a time when people were looking at running the Ceph OSD (the daemon that actually manages the data on each storage device) on Ethernet-enabled hard drives. I'm pretty sure there was a proof of concept running.</pre><pre><br></pre><pre>Cheers,</pre><pre>Andrew</pre><pre><br></pre><pre>-- <br></pre><pre>Andrew Ruthven, Wellington, New Zealand
andrew@etc.gen.nz |
Catalyst Cloud: | This space intentionally left blank
https://catalystcloud.nz |
</pre></span></div></body></html>
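[Editor's addendum] For the one-NVMe-per-node layout asked about above, provisioning amounts to handing the spare device to Ceph. A minimal sketch using the stock Ceph tooling; the device path `/dev/nvme0n1` and host name `mynode` are assumptions for illustration, and the commands presume a cluster that is already bootstrapped:

```shell
# On each node: the OS lives on the SATA SSD, so give the whole NVMe to Ceph.
# This creates an LVM-backed BlueStore OSD on the assumed device /dev/nvme0n1.
ceph-volume lvm create --data /dev/nvme0n1

# Alternatively, on a cephadm-managed cluster, let the orchestrator claim it
# (hypothetical host name "mynode"):
ceph orch daemon add osd mynode:/dev/nvme0n1
```

Either way, each node ends up running exactly one OSD, and Ceph's replication across nodes provides the redundancy a single local device cannot.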