[Linux-aus] Post in ZDnet re: Heartbleed

Brent Wallis brent.wallis at gmail.com
Thu Apr 17 14:11:42 EST 2014


Gidday,

On Thu, Apr 17, 2014 at 12:49 PM, Glen Turner <gdt at gdt.id.au> wrote:

>
> > You're absolutely right, the process followed by the OpenSSL team and the
> > various distributions in fixing this has been very well done and is a
> model
> > for how these things should be fixed.
>
> And here we part company. The advice for people with possibly-affected web
> servers should have been to shut that web server down. Then determine if
> the web server was vulnerable. Then patch it and reboot.
>
> Not getting the web server offline immediately simply allowed people to
> pull 64KB blocks from webservers and archive them to disk for future
> analysis.
>

Note that it was not proven in a wider sense that a server's private key
could be extracted until the CloudFlare challenge over the weekend of the
12th and 13th of April:
http://blog.cloudflare.com/answering-the-critical-question-can-you-get-private-ssl-keys-using-heartbleed

Up until that point, no exploit had been posted, nor had any proof been
published showing that it could be done.
But it had to be assumed that the probability of the bug having been
exploited at some point in the last 12 months was as close to 1 as you
could get.

As one who has had to deal with this for many over the last few days... I
want to politely point out that an immediate shutdown was, for a lot of
servers, impossible for various reasons and in a lot of respects a waste
of effort.
... the horse had already bolted ...

If the server had been up for a period and it was running a vulnerable
OpenSSL version, then the smart course of action was to assume that it had
already been compromised, and perhaps had been for some time.

Shutting it down immediately was not going to fix that.
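
For a quick first pass at working out whether a box was even running an
affected build, a rough Python sketch like the one below (my own
illustration, not anything from the OpenSSL team) just shells out to
"openssl version" and flags the release range that shipped the bug, 1.0.1
through 1.0.1f plus the 1.0.2 betas. Distros backport fixes without bumping
the version string, so a match means "go check the package changelog", not
"confirmed vulnerable".

# Rough check, my own sketch: flag an OpenSSL build whose version string
# falls in the Heartbleed-affected range (1.0.1 through 1.0.1f, plus the
# 1.0.2 betas). Distros backport fixes without bumping the version, so a
# match means "investigate further", not "confirmed vulnerable".
import re
import subprocess

AFFECTED = re.compile(r"OpenSSL 1\.0\.1[a-f]?(\s|$)|OpenSSL 1\.0\.2-beta")

def openssl_version():
    # "openssl version" prints e.g. "OpenSSL 1.0.1e 11 Feb 2013"
    return subprocess.check_output(["openssl", "version"]).decode().strip()

if __name__ == "__main__":
    version = openssl_version()
    if AFFECTED.search(version):
        print("Possibly affected build: %s -- check the package changelog" % version)
    else:
        print("Version string outside the affected range: %s" % version)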


> Instead we've had major websites stay up whilst determining if the
> vulnerability is present. The seriousness of the issue and ease of
> exploitation demanded a more rapid and abrupt response from systems
> administrators.
>

Determining whether you were vulnerable was easy for some and not so easy
for others...

e.g. the large number of orgs that use Akamai for content delivery were
informed late last week that all was well.
THEN Akamai came out last Monday to tell us all that they had to re-issue
every cert because they were shown to be vulnerable after all.

With respect, I do understand your idea around shutting down a server, but
in the end it would have fixed nothing and protected no-one; the damage had
already been done.

The best efforts I have seen and participated in simply required strict
adherence to a simple hierarchy of mitigations (a rough check for step 2 is
sketched below the list):

1. Identify and Patch
2. Replace Certs
3. Inform users and have them change passwords.

HTH
BW

