As much as I hate to start the year off on a kind of negative foot, I can’t let this recent article, which proposes a causal link between lead exposure and crime, go unexamined. The author, Kevin Drum, first dismisses prior analyses of social phenomena as being “purely correlative”, then goes on to present a causal explanation using… pure correlation, primarily from the work of Rick Nevin.
Going by the number of tweets that appeared in my timeline, apparently there were many in my circles who didn’t immediately see the folly.
December 1, 2012 at 10:17am
Passwords: Just Use the Mnemonic
For many years I have used an application to generate and store reasonably long complex passwords for the various web services and applications that I use. The more common ones, such as iTunes, Google and Twitter, I end up committing to memory. The downside of course is that for everything else I actually don’t know what my password is, meaning if I ever need it and I don’t have a PC and my database (I use PasswordSafe) to hand, I’m screwed.
The only way for me to remember passwords like that is with a mnemonic - alas, random (for all intents and purposes) strings of double-digit length are not something my brain retains easily. But then I thought: why bother with the “complex” password at all, and just go for the mnemonic itself? You’d think I would have realised this the first time I saw http://xkcd.com/936/, but alas I didn’t.
Let’s try an example. Here’s the kind of password I might use:
Qu MEtH! kGE + j 4 Y M8 Rz dpL nG m
And the mnemonic I might have come up with:
"cue method man cagey and just for you mate RZA double platinum nextgen machine"
Which password is “stronger”? :) Now if only everyone would get rid of the ridiculous “password must be between 8 and 15 characters” type limitations!
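For the sceptical, here’s a rough back-of-the-envelope comparison of the two. A minimal sketch, assuming the complex password draws ~33 symbols from a ~72-character set and the passphrase draws ~12 words from a 10,000-word vocabulary (both assumptions are mine, purely for illustration):

```python
import math

def entropy_bits(pool_size, length):
    """Entropy in bits of `length` independent choices
    from a pool of `pool_size` equally likely symbols."""
    return length * math.log2(pool_size)

# Assumed: ~33 chars from a ~72-symbol set (upper, lower, digits, punctuation)
password_bits = entropy_bits(72, 33)
# Assumed: ~12 words from a 10,000-word vocabulary
passphrase_bits = entropy_bits(10_000, 12)

print(f"complex password: ~{password_bits:.0f} bits")  # ~204 bits
print(f"passphrase:       ~{passphrase_bits:.0f} bits")  # ~159 bits
```

Both are far beyond any practical brute-force attack - but only one of them fits in my head.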
November 16, 2012 at 7:31pm
Chinwag with Mike
A couple of weeks ago (pre-‘reverse goatee’) I had the pleasure of being on Mike Laverick's chinwag video / podcast. It's always good chatting with Mike, and every time I do I remember those times when I was a newbie back in Australia and Mike's site was pretty much the only (but certainly the best) source of VMware info on the web. How time flies.
I did make some slight errors during the chat however, the worst of which was somehow saying “dynamic DHCP” when of course I meant to say “dynamic DNS”! Mike’s brain heard the correct thing and just carried on the conversation as normal, but yeh… pretty stupid lol. Getting Geoffrey West’s surname wrong was slightly more forgivable.
Anyway, I refer to a few things you should watch, so here are the links for those:
Geoffrey West: The surprising math of cities and corporations
Jason Hoffman: WAR SIGNALS: INDUSTRIALIZATION, MOBILIZATION AND DISRUPTION
Towards the end I also mention something about innovation and organisational structure, which was with reference to this excellent post:
An Operations Mindset Is at Odds with Innovation
And of course, catch my interview here! Thanks Mike :)
Distributed Programming vs Distributed Programs
I had the pleasure of hanging out with VMware community legend John Troyer the other weekend, during his sojourn in London after VMworld Barcelona.
The conversation was mostly not about technology, but one of the things we did talk about was application architectures in 2012 and beyond. It really boiled down to this: distributed programming is hard, so can we really expect distributed applications to be pervasive in the Enterprise any time soon?
It only occurred to me later that I had rather poorly communicated my position, which amounts to a perhaps subtle distinction between distributed programming and distributed programs. WTF do I mean by that? Read on…
The Accidental Standing Desk
A long time ago, in a galax…
The original idea was to end up with this http://www.ikea.com/gb/en/catalog/products/S99843742/#/S59865632, but along the way something went horribly wrong… to blame, as with so many things, was a combination of impulse and too much choice.
I’m not exactly an Ikea regular. So when I went to my local “store”, I was overwhelmed by the choice! Expecting to find a nice little package with all the parts I needed to assemble the thing in the picture, I was instead presented with an array of table tops and legs that could all work together.
Before deciding to go cheap, I was very close to getting a custom desk built in the spare room. But at the last minute I decided it would be bad if we ever decided to move and rent our place out, so I opted for something less permanent. I had the dimensions of the custom desk mapped already, and had called for a “slimline” 600mm depth. So when I saw a 600mm deep option in Ikea (the original table I saw was 750mm deep) I went for it, naively thinking that the legs I had already picked up would be fine with my new slim table top.
My original idea of a custom desk was such that it would run along an entire length of the room, and so I ended up buying the parts to construct 2 tables. Little did I know that would matter months later.
When I got home, I eagerly unpacked everything and started putting it together… only the legs didn’t fit!!! The holes in the uprights were fine so I at least had a functional desk, but it didn’t have the aesthetics of the original image. I had 3 options. Option 1: trek back to Croydon to return the legs for some plain ones. The weather and the thought of going back to Croydon (not the nicest of London suburbs!) ruled that one out. Option 2: find someone to cut the steel tubing to a length that would work with my configuration. Being unlikely to find that kind of service in my area, I might as well have just gone back to Croydon. And I wasn’t doing that. Option 3: do nothing. Which is exactly what I did. For ~18 months.
September 19, 2012 at 8:56pm
All You Wanted to Know About ZFS
I had started to write a post about ZFS to follow up on my last SmartOS related post, when word finally started getting out about the inaugural ZFS Day on Tuesday, October 2nd. Somewhat unbelievably, the good companies of the Illumos-centric community are putting on the event for FREE, and live streaming the whole thing!!!
So if I’m perfectly honest, there’s no point in me writing something trivial compared to what is bound to be covered at the event. If you want to learn all there is to know about the only modern filesystem worth trusting (yes, I’m looking squarely at you, Btrfs), head over and register now, even if you’re just going to be watching the live stream (as I will be).
September 3, 2012 at 9:20pm
On the Software Defined Data Center
Yeh I know… it’s one of the US spellings I actually like.
I guess this all kicked off a few days ago, with this tweet:
Don’t take the following too literally… you can of course take the scenario to the nth degree; I’ve tried to strike a balance between too much detail and not enough.
It’s really just something to think about. Something I think about.
A server arrives at the datacenter. An hour later it’s in service and being used.
A server arrives at the datacenter. A facilities member removes the packaging, slaps on a QR code sticker then scans it into the inventory management system. A few milliseconds later a rack location is returned from the inventory management system. Less than an hour later, the server has ping and power. Shortly after that, it’s in service and being used.
A server arrives at the datacenter. A facilities member removes the packaging, slaps on a QR code sticker then scans it into the inventory management system along with the asset tag provided by the server manufacturer and the burn-in confirmation tag provided by the server assembly company. A few milliseconds later a rack location is returned from the inventory management system. Less than an hour later, the server has ping and power. After the power-on self-test, the BIOS initiates a PXE boot sequence. The server provisioning system delivers the relevant operating system installer to the server. The application configuration management system delivers the relevant application dependencies, binaries, code and configuration to the server. Shortly after that, the server is in production and being used.
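That last scenario can be sketched as an automated pipeline where a human touches the server exactly once. Everything below (InventoryService, the tag values, the image name) is a hypothetical stand-in of my own invention, not any real product or API:

```python
# Sketch of the fully automated flow: scan once, everything else is software.

class InventoryService:
    """Hypothetical inventory system: assigns rack locations on registration."""

    def __init__(self):
        self.next_rack = 0
        self.assets = {}

    def register(self, qr_code, asset_tag, burn_in_tag):
        # Returns a rack location "a few milliseconds later"
        self.next_rack += 1
        location = f"rack-{self.next_rack:03d}"
        self.assets[qr_code] = {"asset_tag": asset_tag,
                                "burn_in_tag": burn_in_tag,
                                "location": location}
        return location

def provision(qr_code, inventory):
    # Step 1: the only human step - scan the QR code, asset and burn-in tags
    location = inventory.register(qr_code, asset_tag="MFG-1234",
                                  burn_in_tag="BURN-OK")
    # Step 2: rack, cable, power on -> "ping and power"
    # Step 3: PXE boot; provisioning system serves the OS installer
    os_image = "base-os-latest"
    # Step 4: config management lays down dependencies, binaries, code, config
    app_payload = ["dependencies", "binaries", "code", "configuration"]
    return {"location": location, "os": os_image, "payload": app_payload}

server = provision("QR-0001", InventoryService())
print(server["location"])  # rack-001
```

The point isn’t the code - it’s that every step after the scan is an API call, which is what “software defined” actually means to me.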
Using SmartOS for Node.js Development
I was fortunate enough to attend Nodejitsu's latest workshop in London today, which was totally awesome - as an infrastructure guy, I'm a total Node.js lightweight and the workshop really opened my eyes to some of the power beneath the surface of node. The workshop was delivered by none other than Paolo and Nuno - people who need no further introduction, suffice to say they are seriously freaky when it comes to pretty much anything involving 1’s and 0’s.
During the course of the workshop Paolo took some time to espouse the virtues of SmartOS… he was in my case preaching to the converted, but aside from the Clock guys who were in the room I wasn’t sure how many others had tried SmartOS. So this post is for you guys :)
A Tale of Two Domains - Part 1
For a while now I’ve been trying to build something for the web - not easy when you’re as easily distracted as I am, and holding down a full time job! Typical excuses I know, I should JFDI.
Like many people I have worked with over the years, I have this not-so-uncommon block… I can’t start something without a name. And so when 2 ideas sprang up I started searching for that increasingly elusive quinella of an available domain and twitter ID. In this case (or rather these cases), I was more than half fortunate - the twitter names were available, but the domains were… in the redemption period!
To those with a similar level of knowledge as me a few days ago, here’s how the domain expiry process works.
1. Domain ‘expires’, and enters a 40 day grace period. I have read things that imply this can vary from registrar to registrar, but it seems pretty standard from what I have seen.
2. After the grace period, it enters a 30 day redemption period. Again, apparently this can vary, but I have yet to see it (admittedly I have only looked at a small number of domains).
3. Finally, when the redemption period expires, a 5 day ‘pending deletion’ period is entered.
At the end of those 5 days is where things get interesting.
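The three periods above are simple enough to turn into date arithmetic. A minimal sketch using the typical durations (the example expiry date is made up, and as noted, individual registrars may differ):

```python
from datetime import date, timedelta

def expiry_timeline(expiry, grace=40, redemption=30, pending_delete=5):
    """Key dates in the domain expiry process, per the typical durations."""
    grace_ends = expiry + timedelta(days=grace)            # step 1
    redemption_ends = grace_ends + timedelta(days=redemption)  # step 2
    drops = redemption_ends + timedelta(days=pending_delete)   # step 3
    return {"grace_ends": grace_ends,
            "redemption_ends": redemption_ends,
            "available": drops}

# Hypothetical example: a domain that expired on 1 June 2012
timeline = expiry_timeline(date(2012, 6, 1))
print(timeline["available"])  # 2012-08-15
```

So from expiry to the drop is ~75 days - a long time to be refreshing whois.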
The Fallacy of vHadoop
I really hate writing these kind of posts, especially when there is the remote possibility that it will look like I’m going after someone I have the utmost respect for, Richard McDougall. The guy is a legend and borders on deity status in my books. So keep that in mind when you’re reading this… I’m not attacking anyone, just providing an ‘other side of the coin’ kind of view to what VMware marketing would have you believe. Maybe I should re-adopt my old moniker of “cutting a swath of destruction through marketing bullshit” (those are my old vinternals.com novelty business cards :)
VMware recently announced “Project Serengeti”, a tool designed to rapidly deploy Hadoop clusters on top of vSphere 5, with an accompanying whitepaper “Virtualizing Apache Hadoop”. A whitepaper that is unfortunately not without some glaring omissions / understatements / contradictions.
Let’s take a look at the “Use Cases and advantages of virtualizing Hadoop” section of that whitepaper.