Eisenhower

In “The Chance for Peace” (1953)

Every gun that is made, every warship launched, every rocket fired signifies, in the final sense, a theft from those who hunger and are not fed, those who are cold and are not clothed.

This world in arms is not spending money alone.

It is spending the sweat of its laborers, the genius of its scientists, the hopes of its children.

The cost of one modern heavy bomber is this: a modern brick school in more than 30 cities.

It is two electric power plants, each serving a town of 60,000 population.

It is two fine, fully equipped hospitals.

It is some 50 miles of concrete highway.

We pay for a single fighter with a half million bushels of wheat.

We pay for a single destroyer with new homes that could have housed more than 8,000 people.

This is, I repeat, the best way of life to be found on the road the world has been taking.

This is not a way of life at all, in any true sense. Under the cloud of threatening war, it is humanity hanging from a cross of iron.

These plain and cruel truths define the peril and point the hope that come with this spring of 1953.

This is one of those times in the affairs of nations when the gravest choices must be made, if there is to be a turning toward a just and lasting peace.

It is a moment that calls upon the governments of the world to speak their intentions with simplicity and with honesty.

Nostalgia and my nascent views about what is now called cloud computing

By 2004 I had finally edited everything I wanted from my infrastructure down to a single sentence, a phrase I then reused in a lot of proposals during the early years of Joyent.

I simply wanted to run applications well and keep data well. And for that I needed:

A highly available, redundant, and modular setup with a determinable QoS, easily administered, scalable, and able to be right-sized and geographically distributed without issue.

Simple. The “determinable QoS” is the hard, unknown part. And by unknown, I mean that no one in the world actually knew how to do it. Over the years, the redundancy requirement has shifted toward simply buying better parts.

The task, then, also came down to a single question.

How do I physically and logically design a scalable, cost-effective datacenter independent of knowing what I’m going to run on it and still manage to run these unknown applications well?

Cost-effective is synonymous with “don’t over-engineer it”.

Then it turned out that there wasn’t a vendor in existence that we could buy this from.

On Cascading Failures and Amazon’s Elastic Block Store

This post is one in a series discussing storage architectures in the cloud. Read “Network Storage in the Cloud: Delicious but Deadly” and “Magical Block Store: When Abstractions Fail Us” for more insight.

Resilient, adjective, /riˈzilyənt/: “Able to withstand or recover quickly from difficult conditions.”

You know what commonly causes patients with a cough to keep coughing? Coughing.

Nearly 4 years ago I wrote a post titled “Why EC2 isn’t yet a platform for ‘normal’ web applications” and said that “no block storage persistence” was a feature of EC2: fine for such things as batch compute on objects in S3, but likely difficult for people expecting to use then-state-of-the-art databases.

Their eventual solution was to provide what most people are familiar with: basically a LUN coming off of a centralized storage infrastructure. Thus the mount command comes back into use, and one can start booting root partitions from something other than S3. While there was an opportunity to kill centralized SAN-like storage, it was not taken.
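
What most people are familiar with, concretely, is that attach-and-mount workflow. Purely for illustration, here is a minimal sketch of it using today’s boto3 API (the region, size, instance ID, and device names are made-up placeholders, not anything from the original post):

```python
# Minimal sketch, not Amazon's implementation: carve a volume out of the
# centralized storage pool, attach it to an instance as a block device,
# then treat it like a local disk from inside the guest.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

# Create a 100 GiB EBS volume and wait until it is available.
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Present it to a running instance (placeholder instance ID).
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)

# From inside the guest it now behaves like any local disk:
#   mkfs.ext4 /dev/xvdf && mount /dev/xvdf /data
```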


Facebook’s Open Compute: The Data Center is the New Server and the Rise of the Taiwanese Tigers

Today Facebook took the great step of openly talking about their server and datacenter designs at a level of detail where they can actually be replicated by others. Another reason why I call it “great”? Well, it’s interesting that the sourcing and design of these were done by Facebook together with Taiwanese component makers. Nothing new for many of us working in the industry, but it’s something that’s often not discussed in the press when talking about US server companies.

If you take a look at the Facebook Open Compute server page and watch the video with Frank Frankovsky, you’ll hear a few company names mentioned. Many of them might not be familiar to you. Frank is the Director of Hardware Design and Supply Chain at Facebook, and he used to be at Dell DCS (the Data Center Solutions group), where he was the first technologist. One last piece of trivia: he was the technologist who covered Joyent too. We were lucky enough to buy servers from him and Steve six years ago, and we went out for sushi when he was down here interviewing.

So who made the boxes?


On Bruno’s Concern About the Current Coupling of node.js and V8

Bruno Fernandez-Ruiz (Yahoo! Fellow, VP and Platform Architect) wrote about his concerns around the current tight coupling between node.js and V8. Feel free to take a moment and read the original article: “NodeJS: To V8 or not to V8”.

A reply doesn’t fit into a Twitter response, and an update to his post mentioning my reply would be great.

Overall, it’s a valid concern, partially reinforced by the fact that we’ve left the GitHub page titled “Evented I/O for V8 javascript”. We have debated this internally and have intended to correct it. It is also currently true: Node.js is only implemented on V8, but that’s only because we’re focused on making node.js awesome first.

I haven’t had a chance to meet Bruno; we’ve only exchanged a couple of emails in the past. Bruno forgot, or doesn’t know, a couple of key things about node.js and Joyent. I’m always happy to talk about the development process at Joyent.

Because I actually have answers to his questions, comments, and concerns, I’m going to reply below. His points are trimmed down, and if I’ve missed one in particular, please let me know in the comments.

Update: Bruno took the time to follow up with responses to my queries, while former Yahoo Principal Engineer (and current “Desperado at Facebook”) Peter Greiss has waded into the debate as well.


Comparing Virtual Machines is Like Comparing Cars: It Doesn’t Get to their Actual Utility or Value

A BMW and a Yugo are both cars. In a Yugo, “carpet” was listed as a feature. Enough said.

McCrory recently blogged a public cloud hourly cost comparison of Microsoft, Amazon, Rackspace, and Joyent. I’m happy to see Joyent included in such great company, but the comparisons are between “VMs.” As stated by Alistair Croll, “the VM is a convenient, dangerous unit of measure dragging the physical world into the virtual.” And as I’ve said before, the “machine,” either physical or virtual, is a poor measure for capacity planning and actual costs, just as a “fiber” is a poor measure of bandwidth. (Remember when the internet was the “cloud” depicted in slides?)

As a criticism of ourselves: we still do a poor job of making this clear.


HTTP FTW

I was reading “The Web Is Dead. Long Live the Internet” today, along with a good response on GigaOM.

To a number of people, “the web” = HTTP. And HTTP as a protocol on the Internets has clearly won.

The fun thing to notice is that “the web” to Chris Anderson and Michael Wolff is just content delivered on a web site. Video and peer-to-peer traffic still, on average, travels over what protocol?

That’s right!

HTTP.

So while content is diversifying on their web, HTTP is clearly the winner.

The “machine” needs to die

I was reading this.

The “computer machine” as our base unit of work is a shitty unit.

What I typically want is:

  1. Agility and flexibility
  2. Performance and scale
  3. Business continuity, and a resource-pricing point of view for dev, test, staging, and DR
  4. Business and security best practices baked into infrastructure

You can do agility and flexibility with virtual machines. But that’s it.

Virtual “machines” suffer from the same fundamental problems as “physical machines”.

1) VMs still take up space just like PMs, and the space they take up is additive. A machine is a machine, whether logical or physical. You cannot do business continuity, dev, and test for the cost of production. It’s normal to figure out what a piece of software needs for production and then buy 20x that all at once to account for everything one might need. This is for an application that will never need to scale beyond the initial deployment, and it’s easy to see why one ends up at 5% utilization on average. VMs are not in line with the idea of having accessible software fully utilize a server.
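
As a back-of-the-envelope sketch of that arithmetic (the 20x figure is the one from the paragraph above; the unit of capacity is deliberately abstract):

```python
# Back-of-the-envelope only: buy 20x what production needs, never scale
# past the initial deployment, and see where average utilization lands.
production_need = 1.0        # capacity production actually uses
overprovision_factor = 20    # "buy 20x all at once"

purchased_capacity = production_need * overprovision_factor
average_utilization = production_need / purchased_capacity

print(f"Average utilization: {average_utilization:.0%}")  # -> 5%
```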

2) Performance and scale cannot be, will not be, and are not a unique feature of a pure VM approach. They can’t be, any more than a square peg can fit into a round hole. The same wall that you hit with physical machines, you will hit with virtual machines. Except. You. Will. Hit. It. Sooner. If you’re not hitting it, you’re not big enough, so maybe don’t worry about it: you’re likely just concerned with agility and flexibility.

You don’t buy CDN services by the “VM”. We need to move what we’re doing with compute to a utility around a transaction, around the usage of a specific resource. Everything else needs to be minimized.

To be clear about the problem, and to leave you with some food for thought: I can take two racks of servers, each with two 48-port non-blocking 10 Gbps switches at the top, and then write a piece of software that will saturate the links between these racks.
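
As a rough sketch of the arithmetic behind that claim (the port counts and speeds are the ones quoted above; it idealizes things by assuming every port on both top-of-rack switches could be driven at line rate between the racks):

```python
# Back-of-the-envelope cross-rack bandwidth for the two-rack setup above.
ports_per_switch = 48
switches_per_rack = 2
gbps_per_port = 10

per_rack_gbps = ports_per_switch * switches_per_rack * gbps_per_port
print(f"Potential cross-rack bandwidth: {per_rack_gbps} Gbps")  # 960 Gbps
print(f"Which is roughly {per_rack_gbps / 1000:.2f} Tbps")      # ~0.96 Tbps
```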

Can someone name a web property in the world that does more than a Tbps?

Can someone name one that gets close and uses only 20 servers for its application tier?

We have massive inefficiencies in our software stacks and hardware has really really really really outpaced what software can deliver. And the solution is what? More layers of software abstractions? More black boxes from an instrumentation point of view? Odd.

But familiar.