[Linux-aus] contest proposal
Hugh Blemings
hugh at blemings.org
Tue Jan 2 12:06:13 AEDT 2024
On 2/1/24 11:24, John Dalton via linux-aus wrote:
> Thanks, I love this story (and the history lesson!)
Hear hear! :)
>
> I spend a lot of time working with folks using database servers
> (almost exclusively PostgreSQL these days, for me) and often run into
> some variation of the same question, usually "we don't really have
> time to optimise it now, when we can fix it by just throwing more RAM
> at the problem". All the old wisdom still applies here, and it turns
> out that paying attention to stuff like what is indexed, which columns
> are stored next to each other, how they're written to disk and how
> it's all being cached still matters. Even if you're dealing with
> *hundreds* of gigabytes of RAM, what you're storing there can have a
> huge impact on performance.
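A minimal sketch of the kind of check John is describing, for anyone
who wants to poke at this themselves - the table and column names
here are made up, and it assumes the psycopg2 driver and a local
database called "test":

    # Sketch only: hypothetical table/column names throughout.
    # Ask PostgreSQL how it plans a query, then add an index so
    # the hot column can be served from far fewer pages.
    import psycopg2  # assumes: pip install psycopg2-binary

    conn = psycopg2.connect("dbname=test")  # hypothetical DSN
    cur = conn.cursor()

    # EXPLAIN shows whether the planner does a sequential scan
    # (touching every page) or can use an index; BUFFERS shows
    # how many pages came from cache versus from disk.
    cur.execute("EXPLAIN (ANALYZE, BUFFERS) "
                "SELECT * FROM orders WHERE customer_id = 42")
    for (line,) in cur.fetchall():
        print(line)

    # An index keeps the lookup data packed together, so the same
    # query touches far fewer pages - exactly the "what is stored
    # where, and is it cached" question above.
    cur.execute("CREATE INDEX IF NOT EXISTS orders_customer_idx "
                "ON orders (customer_id)")
    conn.commit()

Run the EXPLAIN again once the index exists and the plan (and the
page counts) should change visibly, with the RAM budget untouched.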
>
> I am way out of touch on desktop stuff but I love the idea of making
> this into a competition. As Russell says in another message, this work
> matters because we have existing, usable hardware held back from
> certain tasks mostly because of growing RAM requirements - and
> *everyone* benefits from performance improvements made to hit some
> specific RAM usage budget.
>
> While I won't make it to EO this year, in the open source spirit I put
> my hand up to help evaluate entries if someone decides to make this
> happen.
I'm sadly in this camp too - unable to make EO due to a completely
immovable clash, and bummed I won't get to see folks.
But I will be around at home and able to assist with evaluating entries
if that's helpful!
Cheers,
Hugh
>
> J.
>
>
> On Tue, Jan 2, 2024 at 10:20 AM jon.maddog.hall--- via linux-aus
> <linux-aus at lists.linux.org.au> wrote:
>
> Brendan,
>
> I appreciate your comment, but let me add some real history about
> this:
>
> [For those of you not interested in historical discussion of this,
> you can skip this section]
>
> In the early 1990s DEC had its OSF/1-based release for the
> Alpha. In its initial release it needed 64 MBytes of main
> memory in order to boot and run. This was declared to be too much
> main memory to require on a SERVER CLASS machine. Marketing
> wanted it reduced to 32 MBytes.
>
> Now you have to understand the underlying requirements of this.
> First of all, most customers did not use DEC semiconductor memory;
> they used National Semiconductor or other RAM. But they needed
> the first 32 MBytes of memory to be DEC memory in order to allow
> the DEC service people to truthfully install and qualify an "All
> DEC" system. If we had required 64 MBytes of DEC main memory to
> install and qualify it, it might have increased the cost of the
> "All DEC" system by thousands of dollars.
>
> By reducing the amount of DEC main memory to "only" 32 MBytes, the
> customer could then buy the much cheaper National Semiconductor
> memory to fill out their system.
>
> So the DEC engineers spent an entire YEAR squeezing libraries and
> the kernel, etc. down to the point where the system would boot and
> load in 32 MBytes. But we did not anticipate the next thing.
>
> [Here the people not interested in history can continue]
>
> The DEC OSF/1 Alpha system, reduced from 64 MBytes to 32 MBytes,
> actually benchmarked 7% FASTER than before all this work was done.
>
> It was faster because more of the runtime set lived in the cache
> of the system and processor. The program text coming off disk
> was more compact and took up fewer pages of the runtime set, so
> more of the instructions stayed in cache. The DEC OSF/1 kernel
> did not page, so it had to stay in main memory. But that did not
> mean it had to stay in the CACHE of that main memory or the
> processor, and whatever of the kernel DID have to occupy the
> levels of cache left less space for actively used programs,
> including the shell, the X server, etc.
>
> But it was not just the size of the code. There was a lot of
> work done on "locality of reference", trying to get the next
> instructions onto the same page, and into the same cache lines,
> as the last ones.
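The effect is still easy to demonstrate today. A tiny sketch in
Python - the absolute numbers will vary by machine and the
interpreter adds its own overhead, but the gap between the two
walks below is almost entirely cache behaviour, since both do
exactly the same amount of arithmetic:

    # Same data, same work, different access order. The sequential
    # walk stays cache- and prefetch-friendly; the shuffled walk
    # misses cache on almost every access.
    import array
    import random
    import time

    N = 5_000_000
    data = array.array('d', (float(i) for i in range(N)))

    sequential = list(range(N))
    shuffled = sequential[:]
    random.shuffle(shuffled)

    def walk(order):
        total = 0.0
        for i in order:
            total += data[i]
        return total

    for name, order in (("sequential", sequential),
                        ("shuffled", shuffled)):
        start = time.perf_counter()
        walk(order)
        print(f"{name}: {time.perf_counter() - start:.2f}s")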
>
> Not every processor has a lot of cache. Some have little or no
> cache, but their systems still do demand-paged virtual memory,
> and their programs still compete with other processes for the
> space the system has.
>
> I would also propose that we talk about "performance". This
> contest is so far about memory performance, but it should also be
> about processor performance. I hear the same talk of "why are we
> so interested in 'performance' when processors are so fast (and
> memory is so cheap)?" I respond that we care about performance
> because we do not want to charge our phones three times a day,
> and because Google might need only 9,000 servers instead of
> 10,000 servers, or use "only" 900 megawatts of electricity
> instead of a gigawatt (with less power needed for cooling).
>
> Often the reason I hear given for these inefficiencies is
> "virtualization". WRONG. Virtualization might actually help
> improve some of these issues, but it has to be virtualization
> done correctly. We may have to make tradeoffs, but we should
> make them in a reasonable way.
>
> Warmest regards,
>
> maddog
>> On 12/31/2023 11:18 PM EST Brendan Halley via linux-aus
>> <linux-aus at lists.linux.org.au> wrote:
>> Hi Russell,
>> I've seen the issue of memory bloat discussed many times by lots
>> of people, all with different priorities. The consensus at the
>> end of the conversation is always "why waste part of someone's
>> life fixing the problem when memory is so cheap?"
>
>> [snip]
>
>