Wednesday, January 25. 2012
I wasn’t a keynoter. Or even a regular presenter. I was just doing a talk at a miniconf. It was still an unnerving enough experience that I went to see my doctor on the Thursday before I flew to Tullamarine to make sure the chest pains I was having weren’t the onset of a heart attack. It almost would have been a relief if they had been.
As you can see, I did make it, and I didn’t drop dead on stage.
Normally I’m pretty comfortable about speaking in front of people. To the point where, for example, last year I needed to double the time I’d been told I would be allocated, and spoke extemporaneously from the bullet-points I’d listed on a bit of paper, only looking at them once. Or, 4 years ago, spoke at a funeral after leaving my speech at home. Give me a run-up and I can usually stand up and talk about most anything I care about on short notice, and probably for longer than you envisaged when you asked.
So why so nervous? There’s a simple reason: the audience.
Continue reading "Speaking for the first time at linux.conf.au"
Friday, January 20. 2012
Rusty Russell & Matt Evans
Three Cool Projects
In The Beginning
Thursday, January 19. 2012
Why do Hackers Attack?
Developers Need to Adopt Better Development Strategies
Dataflow Analysis 101
Side-Channel Attacks and Protections
Tools for More Secure Software
Forcing Faults Reliably
Wednesday, January 18. 2012
Andrew Bartlett and Amitay Isaacs
It’s an interesting problem, and there’s no good solution for it yet, but there are some steps towards one: you can graph out seed-based dependencies, and you can use rdepends to extract information for driving regression testing.
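To make the rdepends idea a little more concrete, here is a minimal sketch (mine, not from the talk) of inverting an ordinary forward-dependency map into reverse dependencies and using the result to pick which regression suites to re-run after a change; the package names and the DEPENDS/TESTS data are purely hypothetical stand-ins for whatever the build system actually provides.

```python
from collections import defaultdict

# Hypothetical forward dependencies: package -> packages it depends on.
DEPENDS = {
    "samba": ["talloc", "tdb", "tevent"],
    "ctdb": ["talloc", "tdb"],
    "openchange": ["samba", "talloc"],
}

# Hypothetical mapping of packages to the regression suites that cover them.
TESTS = {
    "samba": ["samba3.smoke", "samba4.dns"],
    "ctdb": ["ctdb.cluster"],
    "openchange": ["openchange.rpc"],
}

def rdepends(depends):
    """Invert a forward-dependency map into a reverse-dependency map."""
    rdeps = defaultdict(set)
    for pkg, deps in depends.items():
        for dep in deps:
            rdeps[dep].add(pkg)
    return rdeps

def affected_by(changed, rdeps):
    """Everything that transitively depends on the changed package."""
    seen, stack = set(), [changed]
    while stack:
        pkg = stack.pop()
        for user in rdeps.get(pkg, ()):
            if user not in seen:
                seen.add(user)
                stack.append(user)
    return seen

def tests_to_run(changed, depends=DEPENDS, tests=TESTS):
    """Pick only the suites covering packages affected by the change."""
    affected = affected_by(changed, rdepends(depends)) | {changed}
    return sorted(t for pkg in affected for t in tests.get(pkg, ()))

if __name__ == "__main__":
    # A change to talloc should trigger everything built on top of it.
    print(tests_to_run("talloc"))
```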
Some key points (many of my notes were lost thanks to Firefox being fucking useless).
When Bad Things Happen to Good Data
Beeeellions of files
yum upgrade and snapshots
Tuesday, January 17. 2012
The origin of the talk was a customer ringing with a complaint that a site was wrong, but Sarah couldn’t find a problem, and this provoked her into thinking about where data can and should be cached.
We want to move data as close to the end user as possible, while retaining ACID-style guarantees. The abandonment rate after 7 seconds is huge. We need reliable speed.
A Short Diversion
Which Caches are Redundant
Why do we keep doing this? Because we want things to go faster! But there’s a conflict between the DB cache and the filesystem cache, too. You’re double-buffering. These caches aren’t particularly dangerous on modern filesystems, but it’s an inefficient use of memory and CPU to manage both sets of caches.
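As an illustration of how that double-buffering gets sidestepped (my sketch, not the speaker’s), here is a minimal Python example that opens a file with O_DIRECT on Linux so reads bypass the kernel page cache, the way databases that manage their own buffer pool do; the file name datafile.bin and the 4 KiB alignment are assumptions.

```python
import mmap
import os

BLOCK = 4096  # assumed filesystem block size; O_DIRECT needs aligned I/O

def read_direct(path, length=BLOCK):
    """Read the start of `path` while bypassing the kernel page cache (Linux only)."""
    # O_DIRECT requires the buffer, offset and length to be block-aligned,
    # so use an anonymous mmap, which is always page-aligned.
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, length)
    try:
        nread = os.readv(fd, [buf])   # data goes straight from disk into buf
        return bytes(buf[:nread])
    finally:
        buf.close()
        os.close(fd)

if __name__ == "__main__":
    # Hypothetical data file; in a real database this would be a table segment.
    data = read_direct("datafile.bin")
    print(len(data), "bytes read without populating the page cache")
```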
Which Caches are Risky
Slides at: Slideshare.
Failure to Imagine
Read the DailyWTF.
Editorial: This seems kind of dead in the water to me. Kind of NIH-ish.