A BMW and a Yugo are both cars. In a Yugo, “carpet” was listed as a feature. Enough said.
McCrory recently blogged a Public Cloud hourly cost comparison of Microsoft, Amazon, Rackspace, and Joyent. I’m happy to see Joyent included in such great company, but the comparisons are between “VMs.” As Alistair Croll put it, “the VM is a convenient, dangerous unit of measure dragging the physical world into the virtual.” And as I’ve said before, the “machine,” whether physical or virtual, is a poor measure for capacity planning and actual costs, just as a “fiber” is a poor measure of bandwidth. (Remember when the Internet was the “cloud” depicted in slides?)
As a self-criticism: we still do a poor job of making this clear.
The economic good in computing is a bit. The economic service is the movement of bits. In computers a bit lives in only three places: RAM (volatile data), disc (non-volatile data), or CPU. Bits move in only three ways: network IO, memory IO, and disc IO. The Internet (cloud networking) was easier for many to understand because we were really making just one thing readily available: network IO. Cloud computing (the “Intercomp,” if we keep the same nomenclature rules) is more difficult to do and to grasp because we’re at the intersection of all six of these. We’re talking about the capacity planning around RAM and disc (both are data storage), and then providers are supposed to deliver CPU, network IO, memory IO, and disc IO in a way that mirrors what we did with the Internet. We used to lay fiber at 10-20x what we needed at peak; now we buy at 10-20% of what we need at peak and often get our peaks for free. Most people are still “laying servers” at 10-20x what they need at peak, when we should be moving to a system where they can buy at 10-20% of what they need at peak and often get their peaks for free.
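To make the 10-20x versus 10-20% economics concrete, here’s a toy calculation. Every number in it is an illustrative assumption, not a real provider price or workload:

```python
# Toy capacity-planning model. All numbers are illustrative
# assumptions, not actual prices or measured demand.

def fixed_cost(peak_units, overprovision_factor, unit_cost):
    """Cost of 'laying servers': buy a fixed multiple of peak capacity."""
    return peak_units * overprovision_factor * unit_cost

def burst_cost(peak_units, baseline_fraction, unit_cost, burst_unit_cost=0.0):
    """Cost of buying a baseline below peak and bursting up to peak.

    burst_unit_cost=0.0 models 'getting your peaks for free'.
    """
    baseline = peak_units * baseline_fraction
    burst = peak_units - baseline
    return baseline * unit_cost + burst * burst_unit_cost

peak = 100   # peak demand, in arbitrary capacity units (assumed)
unit = 1.0   # cost per provisioned unit (assumed)

servers = fixed_cost(peak, overprovision_factor=10, unit_cost=unit)  # 10x peak
cloud = burst_cost(peak, baseline_fraction=0.15, unit_cost=unit)     # 15% of peak

print(servers)  # 1000.0
print(cloud)    # 15.0
```

Under these assumed numbers the overprovisioned approach costs roughly 65x more; the exact ratio matters less than the shape of the argument, which is that you stop paying for idle peak capacity.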
Selling at the virtual server level does not enable a world where non-providers can stop “laying servers.” A server is a server, whether physical or virtual.
So how is Joyent different from the other providers on that list? In a nutshell, we’re the only one that lets you buy at 10-20% of what you need at peak and get your peaks for free, or for a clear fee. Specifically, we do things like wrap every SmartMachine in a storage object, and we keep memory utilization at 100% at all times so that we transparently cache everything. If you’re hitting the disc a lot, we push you into a performance state where it’s as if you’re running entirely in memory; once you’re largely CPU bound, we let you use 4-8x more CPU without your doing anything. And we do this in a way that provides complete memory IO access. In some comparative, third-party benchmarks that we’ll be posting soon, we’ll demonstrate how this memory IO work really makes a difference when compared to Amazon.
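A rough way to see why transparent caching makes disc-bound work feel “entirely in memory” is to model effective read latency as a blend of cache hits and misses. The latencies and hit ratios below are generic assumptions for illustration, not measured Joyent figures:

```python
# Toy model of effective read latency with a transparent in-memory cache.
# Latency figures are illustrative assumptions, not measurements.

def effective_latency_us(hit_ratio, mem_latency_us, disc_latency_us):
    """Average read latency when hit_ratio of reads are served from memory."""
    return hit_ratio * mem_latency_us + (1 - hit_ratio) * disc_latency_us

mem_us = 0.1       # ~100 ns for a RAM-served read (assumed)
disc_us = 5000.0   # ~5 ms for a spinning-disc read (assumed)

print(effective_latency_us(0.50, mem_us, disc_us))   # 2500.05
print(effective_latency_us(0.999, mem_us, disc_us))  # ~5.1
```

With a near-total hit ratio, average latency collapses to within a small factor of memory speed, which is why the remaining bottleneck shifts from disc IO to CPU.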
When you take performance and scale into account and stop thinking about the VM, I know we always come out at a lower overall cost. The only counterargument applies when you’re not actually doing anything, or when you’re developing an application that only a few people are hitting. This is also where we need to work harder to make it clear that we support developers with what is effectively free infrastructure and services to get started.