Part 3, On Joyent and Accelerators as Cloud Computing “Primitives”

In the last part of this series we ended by talking about six “simple” utilities that software uses on “servers”. They were:

1) CPU space
2) Memory space
3) Disc space
4) Memory bus IO
5) Disc IO
6) Network IO

Along with their natural minimums (zero) and maximums.

Providing compute units that deliver these utilities

What we’ve always wanted to do at Joyent was provide “scalable network appliances”: online servers that just worked for given functions, could handle bursts, and could serve as logical, Lego-like building blocks for new and legacy architectures. Sometimes these appliances might contain our own software, sometimes not. They would sit on a network where it would be difficult for any given piece of software to saturate the parts of it nearest to them.

For most workloads, a ratio of 1 CPU : 4 GB RAM : 2 spindles works pretty well. The faster the CPU (constrained by power) the better; memory is, well … memory; and the size and speed of those spindles can be varied depending on what a node is going to be doing (one end of the workload spectrum or the other). In other words, 1) CPU space, 2) memory space and 3) disc space aren’t terribly interesting or difficult to schedule and manage in most environments (in some ways, they’re a purchasing decision), and 4) memory bus IO is set in silicon stone by the chip makers.
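Scaling that ratio up to a whole box is just multiplication; a minimal sketch (the `node_spec` helper is hypothetical, not part of any Joyent tooling):

```python
def node_spec(cpus):
    """Scale the 1 CPU : 4 GB RAM : 2 spindles ratio to a node
    with the given CPU count (illustrative helper only)."""
    return {"cpus": cpus, "ram_gb": cpus * 4, "spindles": cpus * 2}

# A 16-core node under this heuristic wants 64 GB of RAM and 32 spindles.
print(node_spec(16))
```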

Which ones matter?

We’re left with disc IO and network IO as the key utilities, and the ability to move things on and off disc and in and out of the network comes from using more CPU. So when an application experiences a surge in activity, it typically uses more CPU, disc IO and network IO. We decided the best approach was to put CPU and disc IO on a fair-share system, and to standardize on an overbuilt network (10 Gbps interconnects, multiple gigabits out of the back of each physical node). Our physical servers exclusively use Intel NICs in PCIe form. This way we get multiple gigabits per node, physical segregation and failover as options, and we can standardize on a single network driver instead of coping with on-board NICs that change from server to server.
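On a Solaris-style platform (which Joyent’s containers were built on), CPU fair-sharing of this kind is expressed as scheduler shares per zone. A sketch of what that configuration looks like; the zone name and share value are hypothetical:

```shell
# Make the Fair Share Scheduler (FSS) the default scheduling class.
dispadmin -d FSS

# Give a running zone 20 CPU shares. Shares are a guaranteed *minimum*
# proportion of CPU under contention, not a cap ("myzone" is hypothetical).
prctl -n zone.cpu-shares -v 20 -r -i zone myzone

# Make the same setting persistent in the zone's configuration.
zonecfg -z myzone 'set cpu-shares=20'
```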

The fair-share system is meant to provide the short-term bursting (it could be just minutes) needed to handle spikes in demand: spikes that arrive too fast for hardware upgrades or even for spinning up new VMs on most systems, and that are often too fast to show up in 5-10 minute monitoring pings. This lets you stop thinking of the servers you pay for as constrained maximums, and start thinking of a “1 CPU, 4GB of RAM” server as a guaranteed minimum allotment.
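The guaranteed-minimum-plus-burst semantics can be sketched as proportional water-filling over shares: every tenant is entitled to its share-weighted slice, and whatever an idle neighbour doesn’t use flows to whoever is busy. This is a toy model for intuition, not Joyent’s actual scheduler, and all names are hypothetical:

```python
def fair_share(capacity, shares, demand):
    """Divide `capacity` among tenants in proportion to `shares`,
    redistributing any slice a tenant doesn't demand (toy model)."""
    alloc = dict.fromkeys(shares, 0.0)
    active = [t for t in shares if demand[t] > 0]
    remaining = capacity
    while active and remaining > 1e-9:
        total = sum(shares[t] for t in active)
        used = 0.0
        still_hungry = []
        for t in active:
            # Tentative proportional slice, capped at unmet demand.
            take = min(remaining * shares[t] / total, demand[t] - alloc[t])
            alloc[t] += take
            used += take
            if demand[t] - alloc[t] > 1e-9:
                still_hungry.append(t)
        remaining -= used
        active = still_hungry
    return alloc

# Four equal tenants on an 8-CPU box: each is *guaranteed* 2 CPUs, but
# with b and d mostly idle and c silent, a can burst well past its 2.
print(fair_share(8.0, {"a": 1, "b": 1, "c": 1, "d": 1},
                 {"a": 6.0, "b": 1.0, "c": 0.0, "d": 1.0}))
```

When every tenant is busy, the allocation collapses back to the guaranteed minimums, which is exactly the “minimum allotment, not maximum” framing above.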

That, at least, is why we stopped calling them “Grid Containers” and started calling them “Accelerators”.

Moving up the stack

The next installments will cover some kernel and userland experiences and choices, our desire to use ever more “dense” hardware, 32- and 64-bit environments, and rapid reboot and recovery times.
