Stu Radnidge

Data, Infrastructure, Node.

April 1, 2014 at 6:22am

EMC Announces New Product - VESX

In an astonishing move, EMC Corporation today unveiled a new breakthrough technology under the marketing banner of VESX - that’s right, as the name suggests they have virtualised the hypervisor.

CEO Joe Tucci was quoted by unknown sources as saying “VMware may have a Virtual SAN, but now we have a Virtual Hypervisor… what are they gonna do? Gelsinger wants to virtualise me? I just damn well virtualised him and the whole goddamn horse he rode in on!”. He then (allegedly) mimed a ‘drop the mic’ action and walked out of the room.

Well known EMC evangelist Chad Sakac went on to explain the technology, in one of his customary lengthy blog posts. The pertinent information is around 40,000 words into the piece, where he presents the following image:

[image]

He goes on to elaborate that one layer of hypervisor is “sooooo early 2000’s”, and that, like many problems in computer science, this one is most easily solved by adding another layer of indirection. The lengthy article, which now seems to have been removed from his website, concluded with claims of bare-metal performance and several repetitions of ‘stupid NetApp’.

Gartner Analysts have been quick to applaud the release, but have also cautioned that many enterprises will be looking for cross platform support and warned that without strong management tooling, there was a risk of Virtual Hypervisor sprawl. An unnamed senior analyst remarked “They may be able to virtualise VMware’s hypervisor, but what about Hyper-V? If they’re not fast enough, people will just deploy hypervisors on to bare metal and they’ll lose the small window of opportunity that they have with this breakthrough release”.

Much like VMware’s launch of VSAN, no VESX pricing has been announced yet. It’s also unclear what the support implications will be for virtual machines running on top of a hypervisor that is itself virtualised, or what would happen if someone used VESX to virtualise VESX itself.

The reaction from VMware has, as expected, been somewhat muted - the company seems to be adopting a ‘head in the sand’ approach, not even acknowledging that the product exists.

Renegade VMware employee Lance Berc broke ranks with the corporate stance however, reportedly saying “Virtual Hypervisor… what the fuck are you talking about?”. Indeed Lance, indeed.

March 12, 2014 at 11:12pm

VSAN: A Giant Leap… Upwards?

And no, I’m not talking about the price… even if it was price neutral on retail cost per GB compared to pSAN©, it’s still a far better value proposition because of the other capabilities it has.

What I’m talking about are the assumptions it makes around deployment patterns and application architecture.

It’s certainly not a forward leap in those respects. And although VMware has arguably just created a WAFL / OneFS competitor (it has had a pluggable storage architecture for a very long time now, so it’s not hard to imagine slipping another protocol in there alongside VMFS and boom - you have a NetApp / Isilon / <insert NAS of choice here>), calling VSAN a backwards step (WAFL is 20 years old) would be idiotic. So I’m going for ‘upwards’ - not forwards, not backwards; you’re basically in the same place as before.

That’s a long way of explaining the title of a post. Now, on to what I really want to talk about.

Read More

May 30, 2013 at 7:47pm

Pirewall in Action!

After a few weeks of mulling it over, I finally put the pirewall into action - that is, I switched my ADSL modem into pure bridge mode and made the pirewall internet facing.

The main reason for doing this was that I wanted a proper DMZ, in order to host some small projects and a few parked domains. Thus I ended up with a ‘3 pronged’ Pi - in addition to the onboard ethernet, I added 2 x Apple USB-Ethernet adaptors. Why Apple adaptors? Simply because they work out of the box, don’t require their own power source (i.e. they don’t need to be run via a separately powered USB hub), and aren’t really that much more expensive than the other reliable options I could find (there were plenty of cheap options available, but I couldn’t determine what chipset was inside them, and thus wasn’t sure whether they would work).
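For reference, the interface layout ends up looking something like the sketch below - illustrative only, assuming the onboard NIC faces the bridged modem and the two USB adaptors come up as eth1 (internal LAN) and eth2 (DMZ), with example addressing throughout.

# /etc/network/interfaces - a sketch of the general shape, not the exact config

# onboard NIC -> the ADSL modem in bridge mode (swap in a PPPoE stanza if your ISP needs it)
auto eth0
iface eth0 inet dhcp

# first Apple USB-Ethernet adaptor -> internal LAN
auto eth1
iface eth1 inet static
    address 192.168.1.253
    netmask 255.255.255.0

# second Apple USB-Ethernet adaptor -> DMZ for the small projects and parked domains
auto eth2
iface eth2 inet static
    address 10.1.1.1
    netmask 255.255.255.0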

Read More

May 22, 2013 at 10:24pm

Minimal Raspbian

I’m getting a bit more serious with my Raspberry Pi usage, to the point where all the cruft in the default Raspbian build was getting to me. Yes I know, the point of the distro is education… but it’s not like I’m gonna use Arch.

I’m not overly familiar with Debian, so rolling my own install image seemed like a good place to start finding my way around. But as with many things Linux, most of the work had already been done for me… it was just a little out of date / incomplete.

Basically, after running the image build script below in an existing Debian environment (I used an x86_64 Wheezy install running in a VM), you need to copy the boot files from the official Raspbian image - the script will prompt you to do so (and tell you where to put them). It’s a bit hacky, I’m probably missing something, but it works.

So here’s the script - you’ll end up with a fully functional Debian that idles at around 14MB of memory used, and no extraneous services running. Just the thing for the foundation of a ‘pirewall’ ;)
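In outline, the approach is roughly the sketch below (a simplified illustration rather than the script itself - the sizes, mount points and mirror URL are just examples):

#!/bin/bash
# Simplified sketch of the build approach - not the full script
set -e

IMG=raspbian-minimal.img
MNT=$(mktemp -d)

# 1GB image: a small FAT boot partition (the Pi firmware needs it) plus an ext4 root
dd if=/dev/zero of="$IMG" bs=1M count=1024
parted -s "$IMG" mklabel msdos \
  mkpart primary fat32 1MiB 64MiB \
  mkpart primary ext4 64MiB 100%
LOOP=$(losetup --show -fP "$IMG")
mkfs.vfat "${LOOP}p1"
mkfs.ext4 "${LOOP}p2"
mount "${LOOP}p2" "$MNT"

# Bootstrap a minimal wheezy armhf rootfs straight from the Raspbian mirror
qemu-debootstrap --arch=armhf --variant=minbase wheezy "$MNT" http://archive.raspbian.org/raspbian

# ... set hostname, fstab, apt sources, root password and ssh here ...

umount "$MNT"
losetup -d "$LOOP"
echo "Now copy /boot (firmware, kernel, config.txt, cmdline.txt) from the"
echo "official Raspbian image onto the first (FAT) partition of $IMG."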

And don’t forget to roll your SSH keys on first boot! I should probably add something to do that…

10:04pm

Reblogged from troycummings
troycummings:

I hate when this happens.

Me too… me too.

April 29, 2013 at 9:27pm

Raspberry Router on a Stick


One of the great use cases for a Raspberry Pi is as a low cost, low power router for your home network - great for implementing pseudo network segregation, but obviously not something that can be made secure with a single interface.

Contrary to a lot of things you’ll find on the internets, you don’t need 2 physically separate interfaces (unless you also want security), and you don’t need a sub-interface - 2 IP addresses will happily live on the same interface.

Say for example your internal network is currently 192.168.1.0/24, and that’s where your inside modem interface lives. Your Raspberry Pi is at 192.168.1.253, and you want to add a new network 10.0.0.0/24.

1. Create a static route on your modem for the 10.0.0.0/24 network, with the Raspberry Pi’s IP address (in this case 192.168.1.253) as the gateway

2. On the Pi, add a new IP to eth0
ip addr add 10.0.0.254/24 dev eth0

- to make this persistent, open up /etc/network/interfaces and add the following line to the end of the eth0 stanza

up ip addr add 10.0.0.254/24 dev eth0

3. On the Pi, ensure IP forwarding is enabled
sysctl -w net.ipv4.ip_forward=1

- to make this persistent, add the following to /etc/sysctl.conf

net.ipv4.ip_forward = 1

4. On the Pi, disable the sending of ICMP redirects, to stop hosts from learning to skip the “router”
sysctl -w net.ipv4.conf.all.send_redirects=0

- to make this persistent, add the following to /etc/sysctl.conf

net.ipv4.conf.all.send_redirects = 0

And that’s it - you can now spin up machines on 10.0.0.0/24 (with 10.0.0.254 as their default gateway) and have everything talk to everything else!
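To sanity check it from a machine on the new network (example addresses, and assuming your modem lives at 192.168.1.1):

# on a test host in 10.0.0.0/24
ip addr add 10.0.0.10/24 dev eth0
ip route add default via 10.0.0.254

# the first hop should be the Pi (10.0.0.254), then you’re back on the old network
traceroute -n 192.168.1.1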

April 1, 2013 at 9:09am

Oracle Open Sources Solaris, HotSpot JVM

In a surprise move, Larry Ellison reportedly announced today that his company Oracle would be open-sourcing Solaris and the HotSpot Java Virtual Machine.

This is of course not the first time Solaris has been open sourced - Sun originally did this with the Solaris 10 release; however, Oracle closed the source again shortly after its acquisition of Sun in 2010.

The motive, like everything that Oracle does, is purely financial. “With all these startups wasting their much more limited resources on not only further developing features native to Solaris, but bringing new features such as KVM that can’t be incorporated into Solaris while it remains closed, I just thought ‘why am I wasting money on all these kernel engineers when I can get other people to do the work for FREE!’” the CEO apparently said, continuing with “I don’t know why I didn’t just keep it open to begin with - what was I thinking? I have my application and database cash cows, and everyone knows you can’t make money from a server OS - I mean look at OEL, we can’t even give that away. Our Wikipedia page doesn’t even mention Solaris as a software product we own, that’s how insignificant it is.”

As for open sourcing the HotSpot Java Virtual Machine, the same logic applied: “There is a significant number of people contributing to the Apache Foundation’s Java-based projects, as well as companies like Red Hat and EMC (via the Pivotal Initiative) dedicating plenty of resources to open source commercial Java-based products. Giving up HotSpot will just mean more people using Java instead of these other pissant non-Enterprise languages, like JavaScript.”

As a result of the move, there will be an estimated 7000 job losses at Oracle. But there is an upside to the layoffs - all the money previously paid to those engineers will now be donated to the Ellison Foundation charity.

The community that has formed around Illumos (the fork of the Open Solaris kernel), although small, is already fractured. There are at least 4 commercially backed distributions from Joyent, Delphix, OmniTI and Nexenta - each with their own package manager as well as a slew of other minor inconsistencies. The purely community driven OpenIndiana has failed to muster significant support in the midst of these alternatives. It is hoped that now the golden source has been re-opened, those fractures will disappear and the community will come together once more.

Although it may also lead to some heavy hitting departures from the Illumos community. I called Bryan Cantrill (former Sun kernel engineer / current VP of engineering at Joyent) this morning to tell him about the news, and after a tirade of abuse broken only by primal sounding screams, he concluded with “Fuck this, we’re moving to Linux.” before hanging up in a manner that suggested his phone had been smashed against a wall. And not one that was nearby.

February 2, 2013 at 4:27pm

I Know What You Did Last S… aturday

After a lengthy spell of rain, snow, and generally foul weather, when I awoke to a clear sunny day this morning I just had to get out. With the missus out for the day, I thought why not go on a little London walkabout - the kind of thing I used to do every weekend when I first arrived here nearly 6 years ago.

Not long ago I bought a 5th generation iPod Touch, with the intention of using the camera on it in place of the Canon point-and-shoot we’ve had for many many years. And so I thought I’d try a little experiment, by doing a kind of photo walking tour of some parts of London that I like, which are within walking distance of where I live.

Unfortunately this idea didn’t dawn on me until I had already reached the Barbican Centre (I was paying a visit to the library there), so there are a few little things I could’ve shot on the way there. But not to worry.

Read More

January 29, 2013 at 6:09pm

The Simplicity of Independence

No, this post is not Dr Stu giving relationship advice. Hopefully you’ll keep reading now ;)

I was reminded of an experience I went through a little while ago, whereby the oh-so-typical reductionist approach was taken by managers overseeing the development of a new system. That is, after identifying that there was significant overlap between a new proposed component and an existing component (albeit one that would apparently require “little modification” to provide equal functionality), in the name of DRY it was decided that a dependency would be introduced rather than a duplication.

The problem was, the overlap was more perceived than real.

Read More

January 24, 2013 at 7:59pm

Hadoop - to V, or not to V?

Today I delivered a cheekily titled session (alas, I cannot take credit for that title - it was a workmate, who I shall not name!) at the London VMware User Group, wherein I attempted to explain Hadoop for the uninitiated and then present some considerations for virtualising Hadoop nodes. All in 45 minutes. Which is potentially mind-melting stuff if you have never looked into Hadoop before - not because it is difficult material, it’s just dense.

I think I did a pretty good job of getting my points across; however, I might not have emphasised enough that everything in there is with reference to centralised, multi-tenant, SAN-based VMware deployments. That is, the kind of VMware infrastructure that 99% of people have deployed in production today.

That distinction is important, because a lot of the messaging that originally came out of VMware suggested it was reasonable to stand up Hadoop clusters on the homogeneous pool of compute that you use for non-distributed, IO-intensive applications.

So just be clear about that if you have a look through the deck - deploying virtual Hadoop slaves backed by local disk may or may not be much less efficient than bare metal; I don’t have the data or experience to say either way. All I can say is that deploying production Hadoop slaves onto SAN-based virtual infrastructure is not just a terrible idea - it’s also a stupid one.