<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p><br>
</p>
<div class="moz-cite-prefix">On 2/1/24 11:24, John Dalton via
linux-aus wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CABFxdF0Ok9E2yi1p09StnJj2BxSYQSk=hLLiu9BT3tNjTj+fdQ@mail.gmail.com">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="ltr">
<div>Thanks, I love this story (and the history lesson!)</div>
</div>
</blockquote>
<p>Hear hear! :)</p>
<blockquote type="cite"
cite="mid:CABFxdF0Ok9E2yi1p09StnJj2BxSYQSk=hLLiu9BT3tNjTj+fdQ@mail.gmail.com">
<div dir="ltr">
<div><br>
</div>
<div>I spend a lot of time working with folks using
database servers (almost exclusively PostgreSQL these days,
for me) and often run into some variation of the same
question, usually "we don't really have time to optimise it
now, when we can fix it by just throwing more RAM at the
problem". All the old wisdom still applies here, and it turns
out that paying attention to stuff like what is indexed, which
columns are stored next to each other, how they're written to
disk and how it's all being cached still matters. Even if
you're dealing with *hundreds* of gigabytes of RAM, what
you're storing there can have a huge impact on performance.</div>
<div><br>
</div>
<div>I am way out of touch on desktop stuff but I love the idea
of making this into a competition. As Russell says in another
message, this work matters because we have existing, usable
hardware held back from certain tasks mostly because of
growing RAM requirements - and *everyone* benefits from
performance improvements made to hit some specific RAM
usage budget.</div>
<div><br>
</div>
<div>While I won't make it to EO this year, in the open source
spirit I put my hand up to help evaluate entries if someone
decides to make this happen.</div>
</div>
</blockquote>
<p>I'm sadly in this camp too - unable to make EO due to a
completely immovable clash, and bummed I won't get to see folks.</p>
<p>But I will be around at home and able to assist with evaluating
entries if that's helpful!</p>
<p>Cheers,<br>
Hugh</p>
<p><br>
</p>
<blockquote type="cite"
cite="mid:CABFxdF0Ok9E2yi1p09StnJj2BxSYQSk=hLLiu9BT3tNjTj+fdQ@mail.gmail.com">
<div dir="ltr">
<div><br>
</div>
<div>J.</div>
<div><br>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Tue, Jan 2, 2024 at
10:20 AM jon.maddog.hall--- via linux-aus <<a
href="mailto:linux-aus@lists.linux.org.au"
moz-do-not-send="true" class="moz-txt-link-freetext">linux-aus@lists.linux.org.au</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote"
style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<div
style="font-size:12pt;font-family:helvetica,arial,sans-serif;color:rgb(51,51,51)">
<span style="font-family:helvetica;font-size:12pt">Brendan,<br>
<br>
I appreciate your comment, but let me add some real
history about this:<br>
<br>
[For those of you not interested in historical
discussion of this, you can skip this section]<br>
<br>
In the early 1990s DEC had its OSF/1-based release,
DEC OSF/1 for Alpha. In its initial release it needed 64
Megabytes of main memory in order to boot and run.
This was declared to be too much main memory to
require on a SERVER CLASS machine. Marketing wanted
it reduced to 32 Megabytes.<br>
<br>
Now you have to understand the underlying requirements
of this. First of all, most customers did not use
DEC semiconductor memory; they used National
Semiconductor or other RAM. But they needed to have
the first 32 Mbytes of memory in order to allow the
DEC service people to truthfully install and qualify
an "All DEC" system. If we required 64 Mbytes of DEC
main memory to install and qualify it, it might have
increased the cost of the "All DEC" system by
thousands of dollars.<br>
<br>
</span>By reducing the amount of DEC main memory to
"only" 32 MBytes, the customer could then buy the much
cheaper National Semiconductor memory to fill out their
system. <br>
<br>
So the DEC engineers spent an entire YEAR squeezing
libraries and the kernel, etc. down to the point where
the system would boot and load in 32 MBytes. But we
did not anticipate the next thing. <br>
<br>
[Here the people not interested in history can continue]
<br>
<br>
The DEC OSF/1 Alpha system reduced from 64 MBytes to 32
MBytes actually benchmarked as 7% FASTER than before all
this work was done. <br>
<br>
It was faster because more of the runtime set lived in
the cache of the system and processor. The program
text read off disk was more compact, and took
up fewer pages of the runtime set. More of the
instructions stayed in cache. The DEC OSF/1 kernel did
not page, so it had to stay in main memory. But that
did not mean it had to stay in the CACHE of that main
memory or the processor. And if the kernel HAD to stay
in levels of main memory cache, that left less space for
actively used programs, including the shell, the
X-server, etc. <br>
<br>
But it was not just the size of the code. There was a
lot of work done on "locality of reference", trying to
get the next instructions on the same page of cache as
the last ones. <br>
<br>
Not every processor has a lot of cache. Some have
little or no cache, but they still do demand-paged
virtual memory and compete with other processes for the
space the system has. <br>
<br>
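The locality effect described above can be sketched with a toy cache simulation. This is a hypothetical illustration only, using a fully-associative LRU model and made-up sizes, not any real processor or DEC's actual measurements:

```python
# Toy model of the locality effect described above: a hot code path
# whose working set fits in cache gets a far higher hit rate than one
# strided across four times as much memory. All sizes are hypothetical.

def hit_rate(accesses, cache_lines):
    """Fraction of hits in a fully-associative LRU cache of 64-byte lines."""
    cache = []  # least-recently-used line at the front
    hits = 0
    for addr in accesses:
        line = addr // 64
        if line in cache:
            hits += 1
            cache.remove(line)       # move to most-recent position
        elif len(cache) >= cache_lines:
            cache.pop(0)             # evict least-recently-used line
        cache.append(line)
    return hits / len(accesses)

# A loop over a compact 4 KiB region (fits in a 64-line, 4 KiB cache)
# versus the same number of accesses strided across a 16 KiB region.
compact = [i % 4096 for i in range(40960)]
sparse = [(i * 257) % 16384 for i in range(40960)]

print(hit_rate(compact, 64) > hit_rate(sparse, 64))  # True
```

The absolute numbers mean nothing; the point is that the compact working set stays resident after a single cold pass, while the sparse one keeps evicting itself.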
I would also propose that we talk about "performance".
This contest is so far about memory performance, but it
should also be about processor performance. I hear the
same question, "why are we so interested in 'performance'
when processors are so fast (and memory is so cheap)?" I
respond that we care about performance because we do not
want to charge our phones three times a day, and because
Google might then need only 9000 servers instead of
10,000 servers, using "only" 900 megawatts of
electricity instead of a gigawatt (and needing
less power for cooling). <br>
<br>
Often I hear these inefficiencies blamed on
"virtualization". WRONG. Virtualization might actually
help improve some of these issues, but it has to be
virtualization done correctly; we may have to make
tradeoffs, but we should make them in a
reasonable way. <br>
<br>
Warmest regards, <br>
<br>
maddog </div>
<blockquote type="cite">
<div> On 12/31/2023 11:18 PM EST Brendan Halley via
linux-aus <<a
href="mailto:linux-aus@lists.linux.org.au"
target="_blank" moz-do-not-send="true"
class="moz-txt-link-freetext">linux-aus@lists.linux.org.au</a>>
wrote: </div>
<div> </div>
<div> </div>
<div dir="ltr"> Hi Russell,
<div> </div>
<div> I've seen the issue of memory bloat discussed
many times by lots of people, all with different
priorities. The consensus at the end of the
conversation is always: why waste part of someone's
life fixing the problem when memory is so cheap? </div>
</div>
</blockquote>
</div>
</blockquote>
<blockquote class="gmail_quote"
style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<blockquote type="cite">
<div dir="ltr">
<div>[snip] </div>
</div>
</blockquote>
</div>
</blockquote>
</div>
</div>
<br>
<fieldset class="moz-mime-attachment-header"></fieldset>
<pre class="moz-quote-pre" wrap="">_______________________________________________
linux-aus mailing list
<a class="moz-txt-link-abbreviated" href="mailto:linux-aus@lists.linux.org.au">linux-aus@lists.linux.org.au</a>
<a class="moz-txt-link-freetext" href="http://lists.linux.org.au/mailman/listinfo/linux-aus">http://lists.linux.org.au/mailman/listinfo/linux-aus</a>
To unsubscribe from this list, send a blank email to
<a class="moz-txt-link-abbreviated" href="mailto:linux-aus-unsubscribe@lists.linux.org.au">linux-aus-unsubscribe@lists.linux.org.au</a></pre>
</blockquote>
</body>
</html>