Oracle and OpenSolaris: A Kernel of Truth

@nevali on Twitter asked a question that we’ve heard from many customers, so I’m writing a response to everyone, though none of you need to worry. His question is, “As a long time Open Solaris stalwart, I do wonder what @Joyent’s perspective on the post-Oracle-takeover world is.”

In many ways, we’re happy to have seen Oracle and Sun combine. Sun was a great company for technologists and Oracle is tremendously good at operating a business. Oracle may prove to be the management team that can turn around Sun’s fortunes. And I think they’re completely committed to the Solaris kernel.

A lot of people think of OpenSolaris™ when they think of Joyent, and that’s reasonable, since it’s the most well-known open source distribution of the Solaris kernel. But in truth, Joyent has never used OpenSolaris™. OpenSolaris™ is a full operating system, a “distribution” containing both a kernel and a userland (along with packaging tools); the name is a marketing term for that full distribution. There are a number of features in there that we’ve simply never cared about: for instance, we have no need to let laptops sleep. Since 2005, Joyent has been using the open source Solaris kernel (the one that will become Solaris 11), plus a couple of binary bits, combining it with a Solaris build (that we maintain) of NetBSD’s pkgsrc. Combining a BSD set of tools with the rock-solid Solaris kernel gave us a foundation with the best of both worlds: a functional userland alongside access to DTrace, ZFS and Zones.

So given Oracle’s commitment to the Solaris kernel, and the way we’re using it in SmartOS, we’re actually very well aligned with Oracle. We’ve also been working, and will continue to work, to make our base operating system completely open, and we’re aligned with and believe in the vision behind the Illumos project.

If you have any particular questions, comments or concerns in this area, please feel free to let me know directly at jason@joyent.com and I’ll make sure they get addressed.

Yahoo Post: “Multi-Core HTTP Server with NodeJS”

The Yahoo! Developer Blog has a nice post about how they’re running node.js.

A good comment on Hacker News:

Node.js lets you write server applications in a server container that can handle tens of thousands of concurrent connections in a loosely typed language like Javascript which lets you code faster. It uses the same design as Nginx which is why it can handle so many connections without a huge amount of memory or CPU usage.

If you were to do this on Nginx you’d have to write the module in C.

You can’t do it on Apache because of Apache’s multi-process/thread model.

The fact that you can write a web server in a few lines of easy to understand and maintain Javascript that can handle over 10,000 concurrent connections without breaking a sweat is a breakthrough.

Node.js may do for server applications what Perl did for the Web in the 90’s.
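
For a sense of what that looks like, here’s a minimal sketch of an event-driven HTTP server in Node.js. It’s illustrative only (the multi-core setup the Yahoo! post covers isn’t shown here), and the port number is arbitrary:

    // One process, one event loop: each request is handled by a callback,
    // so many mostly-idle connections can be held open without dedicating
    // a thread or process to each client.
    var http = require('http');

    http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello from Node.js\n');
    }).listen(8000);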

EMC bought Greenplum

EMC said today that it will acquire private data warehousing company Greenplum in an all-cash transaction, though the terms of the deal were not released. It said that Greenplum will “form the foundation of a new data computing product division within EMC’s Information Infrastructure business.”

It’s no secret that digital data is on the rise, at both the business and consumer levels. EMC called Greenplum a visionary leader that utilizes a built-from-the-ground-up architecture for analytical processing. In a statement, Pat Gelsinger, President and Chief Operating Officer of EMC’s Information Infrastructure Products, said:

The data warehousing world is about to change. Greenplum’s massively-parallel, scale-out architecture, along with its self-service consumption model, has enabled it to separate itself from the incumbent players and emerge as the leader in this industry shift toward ‘big data’ analytics. Greenplum’s market-leading technology combined with EMC’s virtualized Private Cloud infrastructure provides customers, today, with a best-of-breed solution for tomorrow’s ‘big-data’ challenges.

The company said it expects the deal to be completed in the third quarter, following regulatory approval. It is not expected to have a material impact on EMC’s fiscal 2010 GAAP and non-GAAP earnings.

From this ZDNET article.

I actually think that in 7-10 years, this acquisition by EMC could be as important as their VMware acquisition. Remember: the past was “cloud networking”, the present is “cloud computing” and the future is “cloud data”. Virtualization is not the be-all and end-all of “cloud computing”, but it is a component. Think of these types of data stores as an important piece of the future of distributed, pervasive data.

On Solaris

Over the years that I’ve been developing on Unix platforms I’ve come into contact with quite a few: Linux (from 2.2 upwards), FreeBSD (version 4 upwards), Solaris (8, 9 and 10 on both SPARC and x86), OS X, AIX, HP-UX and even VMS. But other than Linux, FreeBSD and OS X, I’ve never really gotten to spend quality time with any of them.

Since starting at Joyent it was obvious that I was going to be spending quite some time with OpenSolaris (Nevada) on x86, and there are several things I’ve come to love:

1) SMF

SMF is the Solaris Service Management Facility, and it maps to the functionality of things like init, launchd and the like. SMF will do automatic service restarting, load services in dependency order, provide built-in logging and hook in with monitoring and notifications. You can add new services easily by creating your own XML manifest and have your own user daemons managed by a fantastic tool. To find out more about SMF, visit Solaris’ Managing Services Overview.
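
To give a sense of what a manifest looks like, here’s a minimal sketch. The service name, method paths and timeouts are invented for illustration, and a real manifest would usually also declare dependencies (on networking, local filesystems and so on):

    <?xml version="1.0"?>
    <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
    <service_bundle type='manifest' name='myapp'>
      <service name='application/myapp' type='service' version='1'>
        <create_default_instance enabled='true'/>
        <single_instance/>
        <!-- How SMF starts and stops the daemon; ':kill' tells SMF to
             signal the processes in the service's contract. -->
        <exec_method type='method' name='start'
            exec='/opt/myapp/bin/myapp start' timeout_seconds='60'/>
        <exec_method type='method' name='stop'
            exec=':kill' timeout_seconds='60'/>
        <stability value='Evolving'/>
      </service>
    </service_bundle>

Import it with svccfg import and the service shows up in svcs output, ready to be managed with svcadm enable, disable and restart.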

Read More

Triple Parity RAID

In an effort to catch up on links.

Adam talks about triple-parity RAID (raidz3) in an ACM Queue article.

When RAID systems were developed in the 1980s and 1990s, reconstruction times were measured in minutes. The trend for the past 10 years is quite clear regardless of the drive speed or its market segment: the time to perform a RAID reconstruction is increasing exponentially as capacity far outstrips throughput. At the extreme, rebuilding a fully populated 2-TB 7200-RPM SATA disk—today’s capacity champ—after a failure would take four hours operating at the theoretical optimal throughput. It is rare to achieve those data rates in practice; in the context of a heavily used system the full bandwidth can’t be dedicated exclusively to RAID repair without adversely affecting performance.

Fifteen years ago, RAID-5 reached a threshold at which it no longer provided adequate protection. The answer then was RAID-6. Today RAID-6 is quickly approaching that same threshold. In about 10 years, RAID-6 will provide only the level of protection that we get from RAID-5 today. It is again time to create a new RAID level to accommodate the realities of disk reliability, capacity, and throughput merely to maintain that same level of data protection.
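
As a rough back-of-the-envelope check on that four-hour figure (assuming a best-case sustained rate of about 140 MB/s for a 7200-RPM SATA drive of that era):

    t \approx \frac{2 \times 10^{12}\ \text{bytes}}{1.4 \times 10^{8}\ \text{bytes/s}} \approx 14{,}300\ \text{s} \approx 4\ \text{hours}

And that is the theoretical optimum, before the drive has to share any bandwidth with the system’s real workload.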

You’re Doing It Wrong by PHK

PHK’s You’re Doing It Wrong

Think you’ve mastered the art of server performance? Think again.
Would you believe me if I claimed that an algorithm that has been on the books as “optimal” for 46 years, which has been analyzed in excruciating detail by geniuses like Knuth and taught in all computer science courses in the world, can be optimized to run 10 times faster?

Later in the article, PHK has a key example in Varnish’s memory management. In short, Varnish doesn’t really do any: it takes advantage of the fact that it’s running on a great kernel whose virtual memory system already does that work for it. Far too often, we’re building userland software that behaves as if there is nothing underneath it and strives to do everything itself. The result? Massive inefficiencies and performance far below what you should be getting.

On Cloud Standards, Transparency and Data Mobility

I was on a panel last week talking about the role of infrastructure and “The Cloud” in online gaming (and I’m talking “fun” games, like Farmville, not online gambling).

One of the questions was “What do you think about cloud interoperability and standards?”.

To which I asked, “What do you mean?”

“Well, what do you think about API standards and the like?”

To which I replied, “Completely uninteresting.”

Now I know that at first read it sounds like I’m saying to forget “standards” and forget “interoperability”, but I’m not. It’s just that most of the current conversations about them are uninteresting: I’m not convinced there is even customer pain, and I’m not convinced that having to tool around different APIs that currently only accomplish provisioning is that difficult (remember, the great thing about these APIs is that it generally takes less than 30 minutes to understand how to do things and get going). In the case of virtualization, many people use libvirt, and that’s how interoperability generally arrives in programming: as a library or middleware produced by vendors and real users, not as design by committee. I expect to see more of these types of projects emerge.

Besides the fact that one’s application shouldn’t have to be aware that it’s no longer in your datacenter and is now “in the cloud”, I’m not even sure that most of the current standardization discussions (many seem focused on provisioning APIs or things like “trust” and “integrity”) would do much to enable start-ups, tool vendor adoption, ISV adoption and an “ecosystem” to emerge in the grand scheme of things. I don’t think these are the main problems limiting adoption.

And what are the real problems where interoperability and standardization are important? I think they’re data mobility and transparency.

Data mobility?

Let’s only talk about mobility at the VM level. If I create an AMI at Amazon Web Services and push it into S3, I can use that AMI to provision new systems on EC2, but for the life of me, I can’t find the ability to export that AMI as a complete bootable image so that I can run it on a local system capable of booting other Xen images. If you have a reference for this, please send it my way.

The same goes for Joyent Accelerators. We don’t make this easy to do. We should.

Transparency?

Now this is where I think things get good, and where standard data exchanges matter: data about what our “cloud” is doing and whether it has the capacity to accomplish what a customer needs it to. In a previous post, I said:

The hallmark of this “Cloud Computing” needs to be complete transparency, instrumentability and while making it certain that applications just work, the interesting aspects of future APIs aren’t provisioning and self-management of machine images, it’s about stating policies and being able to make decisions that matter to my business.

The power of this is that it would actually enable customers to get the best price at the best times and to know that they’re moving an application workload somewhere that will actually accomplish it, and it’s required for there to be a computing equivalent of the energy spot market.

I’d like to hear from our readers, too. What are the current “standardization” efforts that you think are going well and might be interesting? Any realistic ones? Which ones are boiling the ocean?

The “Cloud” is supposed to be better than the “Real”

In my weekly reading of posts around this mighty collection of tubes, pipes and cans connected by shoestrings, the thing most call The Internets™, I came across “Why we moved away from ‘the cloud’ to a ‘real’ server” from the fellows at Boxed Ice. They have a server metrics and monitoring service named Server Density. Their exodus from a small VPS to a collection of VPSs (what is the plural anyway? If Virtual Private Server is VPS, then Virtual Private Servers is VPSs?) is typical of a service that’s starting out and doesn’t yet have the in-house expertise and capital to go out and build everything itself (which is fine; that’s not a value judgment, I’m simply saying it’s a common path: shared hosting to VPS to managed hosting to entirely DIY).

What I don’t like is the title. They moved from “the cloud” to the “real”.

To be exact, it was easier for them to get a “box” from a managed hosting provider with an NFS or iSCSI mount than it was to take the additional effort of configuring and managing EC2 images and the Elastic Block Store (EBS). They “would have had to build our own infrastructure management system” in order to get Amazon Web Services to do exactly what they wanted.

That’s entirely correct. A precise title would therefore be “Why we moved from two VPSs to some servers at Rackspace instead of getting into the morass of managing EBS plus EC2 and probably spending more money while we’re at it”.

I admit, it’s a longer, more awkward title but it doesn’t use a generic term for what is a pretty specific complaint.

I want to make really clear at this point that I think they’re entirely right, and everyone in the “cloud industry” should think to themselves “Am I writing the correct software that would make such commentary a thing of the past?”.

What then is the cloud? And really why did the “cloud” fail them at this stage in the game?

In my opinion, “Cloud computing” is a software movement.

Software requires a hardware platform, and that hardware platform must be scalable, robust and able either to directly handle a workload or to be a valid part of partitioning that workload. At Joyent, we’re pushing our physical nodes to the point where they’re simply not normal for most people: base systems with 0.25 to 0.5 TB of RAM, more spindles than a textile factory and multiple low-latency 10 Gbps links out of the back (looking back 6 years, it’s amazing what you can get nowadays for about the same price). These then become the transistors on the Datacenter Motherboard, and just as many of the components of a current “real” server are abstracted away from us (we deal with a “server” and an “operating system”), those of us writing “Cloud Software” need to abstract away all the components of a datacenter and have everyone deal with telling the “cloud” what they want to accomplish. Taking desired performance and security into account, the ultimate API or customer interaction is “I want you to do X for me”.

What is X? Some examples:

“I want this site to be available 99.99% of the time to 99.99% of my end users. How much is that?”

“I have this type of data, it needs to be N+1 redundant, writes have to happen within 400 ms and reads within 100 ms 99.99% of the time. How much is that?”

I could go on with a series of functional use cases like this, where the user of a “cloud” is asking it to do exactly what we typically want an entire infrastructure to do (not defining the primitives, etc.), where the user is telling the cloud how important something is to them, and where the user is asking for an economic value/cost to be associated with it.
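
To make the first use case concrete, a request and answer might look something like the sketch below. This is purely hypothetical; no such API exists today, and every field name and value here is invented for illustration:

    // Hypothetical sketch: the customer states an outcome and asks for a
    // price, rather than provisioning primitives. Field names are made up.
    var request = {
      workload: 'web-site',
      availability: 0.9999,       // available 99.99% of the time...
      audience: 0.9999,           // ...to 99.99% of my end users
      question: 'How much is that?'
    };

    // The cloud answers with whether it can do it and what it costs.
    var quote = {
      canDo: true,
      pricePerMonth: 1200,        // in dollars; an invented number
      penaltyIfMissed: 'credit'   // what happens if the target is missed
    };

    console.log(request, quote);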

The “How much is that?” should always return a value that is cheaper than “real” and cheaper than DIY.

That’s something that’s different.

That’s something that’s an improvement on the current state of computing. It’s not a horizontal re-tooling of the current state of affairs.

That’s something that would take us closer and closer to providing products to growing companies like Boxed Ice, where they start out really small on the “cloud”, seamlessly grow to global scale when they need to, and the “cloud” becomes “real”.

100,000 Joyent Accelerators

We just delivered the 100,000th Joyent Accelerator to a customer. That’s a big milestone. Congratulations to the Joyent team. And congratulations to our customers who are doing such interesting things with Joyent Accelerators, everyone from Prince (the artist known as), to all the Facebook developers, to the many enterprise shops removing the barriers of IT from the innovations of smart developers. Onwards to 1,000,000.

Structure 09 in SF

I’m moderating the first panel of the day at Om’s Structure09 conference today.

If you’re at the conference please make sure you say “Hi”.