Wednesday, July 18. 2012
One of the more incredulous spit-takes I’ve done recently came at the claim that developers use Agile because they want to avoid doing documentation.
I’m incredulous for a bunch of reasons, but the core one is this: I’ve never, in almost two decades, worked with any decent developer who flat-out wouldn’t do any documentation. Hell, I’ll go further than that and note that most of the good developers I’ve worked with want documentation and are keen on writing it (up to a point; more on that in a moment).
Continue reading "It's Not The Methodology, Stupid"
Sunday, June 17. 2012
In 2009 I had a real “Living in the Future” moment; at a conference in Brisbane, I ducked out of the last presentation a few minutes early, found a quiet spot in the lobby connecting the hotel’s conference rooms, popped open my laptop, and videoconferenced with my daughter in Wellington to say good night to her. It hit me afterwards that this felt like such a “golden sci-fi moment”.
That feeling was probably heightened by my experiences, 30-odd years before, when my father had been travelling on business. He would head off to such then-exotic locales as Kuala Lumpur or Tokyo for a week at a time, perhaps more, and from the time he stepped on the prop plane at New Plymouth airport until the time he came back with t-shirts, Air New Zealand hard-boiled lollies, or, on especially extraordinary occasions, Nintendo portable games (impossible to get in New Zealand), he would be out of contact. Timezones weren’t the problem; we couldn’t afford to call overseas, what with calls being in the ‘many dollars per minute’ bracket, and travelling didn’t allow for such fripperies as phoning home. Those were the bad old days, now alleviated by cheap, ubiquitous Internet access and webcams, which make the stuff of my childhood sci-fi (well, the bits about video calling, not so much the bits about interplanetary travel) a simple thing indeed.
Continue reading "The Bad Old Days"
Saturday, April 28. 2012
Over the last wee while I’ve been testing JBoss apps virtualised under RHEV, and this week I had a bizarre experience: my team-mates and I had been puzzling over the high standard deviations (and hence eccentric behaviour) of our web app, which wasn’t even using all the available JVM heap or virtual processors assigned to it. While I was off in meetings, the rest of the team doubled the number of vCPUs, and the SD improved significantly, but more importantly, the utilisation of each vCPU improved. This was odd, and, on the face of it, inexplicable. If you’re only half-to-three-quarters utilising 4 vCPUs, why would you get better utilisation when you doubled that number? And if you weren’t CPU-bound before, why would increasing the amount of virtual processors improve matters?
We threw around some hypotheses and worked up some lines of investigation, which boiled down to “more stats from the hypervisors, please”. Then I had a thought.
These symptoms tickled my memory banks: a few weeks ago I’d been reading about bizarre misbehaviour of large MySQL instances on modern x86 NUMA architectures, which set in once the processes grew larger than the bank of memory with affinity for a given processor. There are some write-ups here, but it boils down to this: if you don’t tell the kernel to ignore its normal best-guess behaviour about the penalties involved in the NUMA topology, you’ll see weird performance problems. So, for shits and giggles, I suggested we shrink the JVM and guest under the size of a single bank of memory: we almost halved the heap and guest sizes and, at the same time, took the number of vCPUs back to 4. Result? It ran faster, and with a substantially better standard deviation of results.
(Of course, the real test of this theory will be what happens when we use numactl hints to force the KVM process to behave as we want.)
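For reference, the sort of hinting I have in mind looks roughly like this (a sketch only; node 0 and the qemu command line are placeholders, and since RHEV launches the guest via libvirt, in practice it would be the equivalent tuning applied there):

numactl --hardware
numactl --cpunodebind=0 --membind=0 <qemu command line for the guest>

The first shows how much memory each node on the host actually has; the second confines the guest process’s threads and memory allocations to a single node, so the kernel can’t scatter them across nodes behind our backs.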
Less, it would appear, is more. And you need to understand what lies beneath your virtual layer.
Wednesday, January 25. 2012
I wasn’t a keynoter. Or even a regular presenter. I was just doing a talk at a miniconf. It was still an unnerving enough experience that I went to see my doctor on the Thursday before I flew to Tullamarine to make sure the chest pains I was having weren’t the onset of a heart attack. It almost would have been a relief if they had been.
As you can see, I did make it, and I didn’t drop dead on stage.
Normally I’m pretty comfortable about speaking in front of people. To the point where, for example, last year I needed to double the time I’d been told I would be allocated, and spoke extemporaneously from the bullet-points I’d listed on a bit of paper, only looking at them once. Or, 4 years ago, spoke at a funeral after leaving my speech at home. Give me a run-up and I can usually stand up and talk about most anything I care about on short notice, and probably for longer than you envisaged when you asked.
So why so nervous? There’s a simple reason: the audience.
Continue reading "Speaking for the first time at linux.conf.au"
Friday, January 20. 2012
Rusty Russell & Matt Evans
Three Cool Projects
In The Beginning
Thursday, January 19. 2012
Why do Hackers Attack?
Developers Need to Adopt Better Development Strategies
Dataflow Analysis 101
Side-Channel Attacks and Protections
Tools for More Secure Software
Forcing Faults Reliably
Wednesday, January 18. 2012
Andrew Bartlett and Amitay Isaacs
Some key points (having lost many notes due to Firefox being fucking useless).
When Bad Things Happen to Good Data
Beeeellions of files
yum upgrade and snapshots
Tuesday, January 17. 2012
The origin of the talk was a customer ringing with a complaint that a site was wrong; Sarah couldn’t find a problem, and this provoked her into thinking about where data can and should be cached.
We want to move data as close to the end user as possible, while retaining ACID-style guarantees. The abandonment rate after 7 seconds is huge. We need reliable speed.
A Short Diversion
Which Caches are Redundant
Why do we keep doing this? Because we want things to go faster! But there’s a conflict between the DB cache and the filesystem cache, too. You’re double-buffering. That isn’t particularly dangerous on modern filesystems, but it’s an inefficient use of memory and CPU to manage both sets of caches.
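(An aside of mine rather than a point from the talk: the usual way around the double-buffering is to have the database bypass the filesystem cache for its data files. With InnoDB, for example, that’s a single my.cnf setting, though whether it actually helps depends on the workload and filesystem:

innodb_flush_method = O_DIRECT

PostgreSQL goes the other way: it keeps shared_buffers relatively modest and leans on the OS page cache, which resolves the same conflict from the opposite direction.)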
Which Caches are Risky