HTTP FTW

I was reading “The Web Is Dead. Long Live the Internet” today, along with a good response on GigaOM.

To a number of people, “the web” = HTTP. And HTTP as a protocol on the Internets has clearly won.

The fun thing to notice is that “the web,” to Chris Anderson and Michael Wolff, is just content delivered on a web site. Video and peer-to-peer traffic are still, on average, carried over what protocol?

That’s right!

HTTP.

So while content is diversifying on his web, HTTP is clearly the winner.

The “machine” needs to die

I was reading this.

The “computer machine” as our base unit of work is a shitty unit.

What I typically want is

  1. Agility and flexibility
  2. Performance and scale
  3. Business continuity, with a resource-pricing point of view for dev, test, staging and DR
  4. Business and security best practices baked into infrastructure

You can do agility and flexibility with virtual machines. But that’s it.

Virtual “machines” suffer from the same fundamental problems as “physical machines”.

1) VMs still take up space just like PMs, and the space they take up is additive. A machine is a machine, whether logical or physical. You cannot do business continuity, dev and test for the cost of production. It’s normal to figure out what a piece of software needs for production and then buy 20x that, all at once, to account for everything one might need. Even for an application that will never need to scale beyond the initial deployment, it’s easy to see why one ends up at 5% utilization on average. VMs are not in line with the idea of having accessible software fully utilize a server.
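To put rough numbers on that, here’s a back-of-the-envelope sketch. The production server count below is hypothetical; the 20x multiplier and the resulting 5% utilization are the figures from the paragraph above:

    # Back-of-the-envelope sketch of the "buy 20x up front" pattern described
    # above. The production figure is hypothetical; the 20x multiplier and the
    # resulting 5% utilization are the ones cited in the text.

    production_need = 2    # servers the application actually needs in production
    multiplier = 20        # buy 20x that, all at once, to cover dev, test,
                           # staging, DR and "everything one might need"

    servers_bought = production_need * multiplier
    average_utilization = production_need / servers_bought

    print(f"servers bought: {servers_bought}")                 # 40
    print(f"average utilization: {average_utilization:.0%}")   # 5%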

2) Performance and scale cannot, will not and are not a unique feature of a pure VM approach. They can’t be, any more than a square peg can fit into a round hole. The same wall that you hit with physical machines, you will hit with virtual machines. Except. You. Will. Hit. It. Sooner. If you’re not hitting it, you’re not big enough, so maybe don’t worry about it: you’re likely just concerned about agility and flexibility.

You don’t buy CDN services by the “VM.” We need to move compute toward a utility priced around a transaction, around the usage of a specific resource. Everything else needs to be minimized.
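To make “a utility around a transaction” concrete, here’s a deliberately simplified sketch of billing by resources consumed rather than by machines provisioned. Every rate and quantity in it is invented for illustration and doesn’t reflect any real pricing model:

    # Illustrative only: compare paying for resources actually used versus
    # paying for machines kept running. All numbers below are made up.

    requests_served = 30_000_000        # requests handled this month
    cpu_hours_used = 110                # CPU time actually consumed
    price_per_million_requests = 0.50   # hypothetical utility rates
    price_per_cpu_hour = 0.04

    usage_bill = (
        (requests_served / 1_000_000) * price_per_million_requests
        + cpu_hours_used * price_per_cpu_hour
    )

    vm_count = 10                       # VMs provisioned "just in case"
    price_per_vm_month = 80.0           # hypothetical flat rate per VM per month
    vm_bill = vm_count * price_per_vm_month

    print(f"bill by usage:       ${usage_bill:,.2f}")   # pay for what was used
    print(f"bill by the machine: ${vm_bill:,.2f}")      # pay for what was provisioned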

To be clear about the problem, and to leave you with some food for thought: I can take two racks of servers, each with two 48-port non-blocking 10 Gbps switches at the top, and then write a piece of software that will saturate the lines between those racks.
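The arithmetic behind that claim, assuming every port on both top-of-rack switches can be driven toward the other rack, is simple:

    # Aggregate bandwidth available at the top of one rack in the example above,
    # assuming all switch ports can be driven toward the other rack.

    ports_per_switch = 48
    switches_per_rack = 2
    gbps_per_port = 10

    per_rack_gbps = ports_per_switch * switches_per_rack * gbps_per_port
    print(f"aggregate per rack: {per_rack_gbps} Gbps")   # 960 Gbps, close to 1 Tbps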

Can someone name a web property in the world that does more than a Tbps?

Can someone name one that gets close and only uses 20 servers for its application tier?

We have massive inefficiencies in our software stacks and hardware has really really really really outpaced what software can deliver. And the solution is what? More layers of software abstractions? More black boxes from an instrumentation point of view? Odd.

But familiar.

Oracle and OpenSolaris: A Kernel of Truth

@nevali on Twitter asked a question that we’ve heard from many customers, so I’m writing a response to everyone, though none of you need to worry. His question is, “As a long time Open Solaris stalwart, I do wonder what @Joyent’s perspective on the post-Oracle-takeover world is.”

In many ways, we’re happy to have seen Oracle and Sun combine. Sun was a great company for technologists and Oracle is tremendously good at operating a business. Oracle may prove to be the management team that can turn around Sun’s fortunes. And I think they’re completely committed to the Solaris kernel.

A lot of people think of OpenSolaris™ when they think of Joyent, and that’s reasonable, since it’s the most well-known open source distribution of the Solaris kernel. But in truth, Joyent has never used OpenSolaris™. OpenSolaris™ is a full operating system, a “distribution” containing both a kernel and a userland (along with packaging tools); the name is a marketing term for this full distribution. There are a number of features in there that we’ve simply never cared about: for instance, we have no need to allow laptops to sleep. Since 2005, Joyent has been taking the open source Solaris 11 kernel, plus a couple of binary bits, and combining it with a Solaris build (that we maintain) of NetBSD’s pkgsrc. Combining a BSD set of tools with the rock-solid Solaris kernel gave us a foundation that contained the best of both worlds: a functional userland along with access to DTrace, ZFS and Zones.

So given Oracle’s commitment to the Solaris kernel, and the way we’re using it in SmartOS, we’re actually very well aligned with Oracle. We’ve also been working, and will continue to work, to make our base operating system completely open, and we are aligned with and believe in the vision behind the Illumos project.

If you have any particular questions, comments or concerns in this area, please feel free to let me know directly at jason@joyent.com and I’ll make sure they get addressed.