The “machine” needs to die
I was reading this.
The “computer machine” as our base unit of work is a shitty unit.
What I typically want is
- Agility and flexibility
- Performance and scale
- Business continuity, with a resource-pricing view of dev, test, staging, and DR
- Business and security best practices baked into infrastructure
You can do agility and flexibility with virtual machines. But that’s it.
Virtual “machines” suffer from the same fundamental problems as “physical machines”.
1) VMs still take up space just like PMs, and the space they take up is additive. A machine is a machine, whether logical or physical. You cannot do business continuity, dev, and test for the cost of production. It's normal to figure out what a piece of software needs for production and then buy 20x that, all at once, to account for everything one might need. For an application that will never need to scale beyond the initial deployment, it's easy to see why one ends up at 5% utilization on average. VMs are not in line with the idea of having accessible software fully utilize a server.
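The utilization arithmetic above can be sketched in a few lines. The 20x multiplier is the figure from the text; the normalization to 1.0 is just for illustration:

```python
# Back-of-envelope math for the "buy 20x up front" sizing pattern.
# Only the 20x multiplier comes from the text; the rest is a
# normalized illustration.

production_need = 1.0                # capacity the app actually uses
purchased = 20.0 * production_need   # typical 20x overprovisioning

average_utilization = production_need / purchased
print(f"average utilization: {average_utilization:.0%}")  # → 5%
```

This is the entire argument in one division: a machine bought is a machine paid for, whether or not it does work.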
2) Performance and scale cannot, will not, and are not a unique feature of a pure VM approach. They can't be, any more than a square peg can fit into a round hole. The same wall that you hit with physical machines, you will hit with virtual machines. Except. You. Will. Hit. It. Sooner. If you're not hitting it, you're not big enough, so maybe don't worry about it: you're likely just concerned with agility and flexibility.
You don’t buy CDN services by the “VM”. We need to move what we’re doing with compute to a utility around a transaction, around the usage of a specific resource. Everything else needs to be minimized.
To be clear about the problem, and to leave you with some food for thought: I can take two racks of servers, put two 48-port non-blocking 10 Gbps switches at the top of each, and then write a piece of software that will saturate the links between these racks.
Can someone name a web property in the world that does more than a Tbps?
Can someone name one that gets close and only uses 20 servers for its application tier?
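To make the rack example concrete, here is the back-of-envelope arithmetic. The port counts and line rates are the ones from the text; the half-servers, half-uplinks port split is an assumption for illustration:

```python
# Two racks, each with two 48-port non-blocking 10 Gbps top-of-rack
# switches. Assume (not stated in the text) that half the ports on
# each switch face servers and half uplink to the other rack.

ports_per_switch = 48
switches_per_rack = 2
gbps_per_port = 10

total_ports = ports_per_switch * switches_per_rack    # 96 ports per rack
inter_rack_gbps = (total_ports // 2) * gbps_per_port  # 48 uplinks x 10 Gbps
print(f"inter-rack capacity: {inter_rack_gbps} Gbps")  # 480 Gbps

# Even wiring every port in one rack cross-rack stays under 1 Tbps:
full_rack_gbps = total_ports * gbps_per_port
print(f"all ports, one rack: {full_rack_gbps} Gbps")   # 960 Gbps
```

Which is the point: two racks of commodity hardware put you within sight of a Tbps, and almost no software stack in the world gets anywhere near driving that.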
We have massive inefficiencies in our software stacks; hardware has really, really outpaced what software can deliver. And the solution is what? More layers of software abstraction? More black boxes from an instrumentation point of view? Odd.