I work for a server manufacturer. We integrate other people's hardware (SuperMicro and Intel EPSD mostly) and sell it to customers. We sell everything from simple individual systems to highly-integrated multi-rack clusters to an incredibly diverse set of customers.
We were one of the initial Open Compute hardware partners thanks to some historical networking connections. We had almost zero interest in Open Compute from our customers. Maybe a dozen quotes total over six months, and we've never shipped a single Open Compute system despite all the initial hype, and despite the designs suiting a lot of our repeat customers. Nobody that would otherwise buy a commodity server wants these Open Compute designs, at least from us. We ended our Open Compute effort a few months ago.
Is anyone outside of the original designers actually using Open Compute hardware in a production environment?
Open Compute is really desirable if you meet one criterion: you are designing purpose-built infrastructure for an SOA app.
If you are building a singular app that requires a DC specifically built to meet your scaling requirements, then Open Compute can fit your needs.
If you rent a rack, cage, or area in a DC, then you'll fail.
The Open Compute initiative is similar to Zuck's H-1B visa actions; both are designed solely to meet his specific needs. To put it bluntly: don't be fooled by this effort by FB to turn their personal optimizations into PR...
Notice how GOOG never even promoted their internal server/system/DC designs and kept all their design vendors under NDA lock and key?
FB tried to "open source" a proprietary architecture approach (stolen from Google) in a PR push to look like they were doing the industry good...
The fact is that Open Compute CAN be great, so long as you're operating at a scale where you can really benefit, OR if a provider embraces the full scope of Open Compute deployments... so far this is only recognized by players such as Amazon and, to a lesser extent, Rackspace...
It is NOT a capital cost savings; "It takes money to make money, but once you have money, money makes itself."
No way a small ISP is going to be able to operate this way.
The benefit for open compute infra going forward will be in the modular space.
Google never published any information about its internal server designs because they suffer from NIH and many times come up with really crappy hardware.
Nobody stole anything from Google. FB's designs are different enough; otherwise Google would have sent David Drummond and his team of asshole lawyers after FB. Didn't happen.
Google's platforms team always takes this arrogant approach, bragging about their 5-year lead, and then comes up with broken and buggy products that have to be supported and suffered through by various SRE teams.
Their servers are just run-of-the-mill AMD- or Intel-based designs with a single 12V DC power input to make it easier to hook them up to Google's proprietary Ikea-style racks.
There are alternate designs, like Manifest Destiny, which uses POWER CPUs, but they're not in production yet.
Maybe Open Compute would gain more traction if you could buy these supposedly cost effective parts from the usual value-oriented channels like Newegg.
There are plenty of shops, big and small, who prefer to do their own integration from white box parts for good reason. Having to call for quotes from some margin-hungry middleman who probably doesn't even have the stuff in stock sort of defeats the purpose of the "Open" platform.
The target audience won't be deploying one or two machines, they'll be deploying multiple racks. Given that we regularly ship customers full racks of machines and entire preconfigured clusters, we kind of expected some of them to be interested in the alleged superiority of the platform...
Hardware goes through longer "sprints" than software. The equivalent of "merge and compile" is "set up a new manufacturing line," so it isn't surprising to me that, at less than 3 years old, adoption isn't widespread.
I believe the F# and TypeScript languages are both open source as well.
That said, most of their flagship stuff is closed, in many cases understandably so but in others, it doesn't make as much sense (to me), and seems not to benefit anyone.
I'm with you... I think they should have open sourced a lot of the supporting software around Windows Server. I don't really need the source to be available (it would be nice), but I personally think that System Center should be free to any customer with more than 100 servers.
This counts as a wanton act of hostility toward every server vendor in the x86 space.
Is this payback for all of the PC players adopting Chromebook and Android, or is this the tipping point where MS has decided not to care about selling Windows Server and instead cares only about driving down its own costs to deploy Azure and O365?
Sorry, but what? If you read the article, it's about open sourcing their rack and server designs. You know, like Facebook did when they built their own datacenter? (And no, they didn't do it because of the threat of Google+'s popularity.)
I think this documentation will be very helpful for startups and companies moving away from the cloud and building their own datacenters.
I remember Backblaze open sourcing something similar, their storage pod designs, which were not only immensely helpful for other startups and companies but also for individuals building home server racks.
Windows Server isn't like the PC OS, it's mostly sold via Select and Enterprise agreements.
Microsoft's pricing model is ingenious -- pay a couple of thousand bucks per socket for the OS + half that for the entire System Center suite. Hyper-V in 2012 R2 is pretty sweet... it will become increasingly difficult to justify additional investments in the VMware stack.
This is really something I think people overlook. For a 2-socket server, the total cost for as many VMs as you want is under $8K in licensing for the OS, virtualization platform, and management tools, with little to no limitation.
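The math here is easy to sanity-check. A minimal sketch, using only the ballpark figures from the comments above ("a couple of thousand bucks per socket for the OS + half that for the entire System Center suite"); the exact dollar amounts are hypothetical placeholders, not actual price quotes:

```python
# Rough per-host licensing math for a 2-socket Hyper-V host.
# Figures are hypothetical, taken from the comment's ballpark numbers.
OS_PER_SOCKET = 2500               # Windows Server (unlimited VMs per host)
SC_PER_SOCKET = OS_PER_SOCKET // 2 # System Center suite, "half that"
SOCKETS = 2

total = SOCKETS * (OS_PER_SOCKET + SC_PER_SOCKET)
print(total)  # 7500 -- under the ~$8K figure quoted, regardless of VM count
```

The key point is that the total is per socket, not per VM, so it stays flat however many guests you pack onto the host.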
I run a Hyper-V shop now, and I don't know that I would look back until I hit the performance limitations of Hyper-V. Paying more just doesn't make sense, especially if you're running Windows OSes on the hardware.
Xen+Xen-tools with Ubuntu 12.04 LTS, unlimited VMs, 2 sockets per server: $0 per server, baby.
Edit: Downvotes? I'm sharing my server setup, which is simple to set up (in conjunction with Puppet), works wonderfully, doesn't involve any bullshit RAM or core/socket limits, and doesn't cost you as much as the hardware in software licensing costs. Sorry if it was a bit flippant, but I think a better approach would be to point out the advantages that these extremely expensive bits of software bring over the free open source software that many of them are ultimately based on.
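For anyone curious what this setup looks like in practice, here is a minimal sketch of provisioning a guest with xen-tools on that era of Ubuntu. The hostname, sizes, and distribution suite are made-up example values; adjust to taste:

```shell
# Create a new Ubuntu 12.04 (precise) domU with xen-tools.
# All values below are illustrative examples, not recommendations.
xen-create-image \
  --hostname=web01.example.com \
  --dist=precise \
  --memory=2048M \
  --size=20G \
  --swap=2G \
  --dhcp

# Boot the new guest and attach to its console
# (xm was the default toolstack on Xen 4.1 / Ubuntu 12.04).
xm create /etc/xen/web01.example.com.cfg -c
```

From there a tool like Puppet can take over configuration inside the guest, which is presumably what the parent comment is doing.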
For Unix workloads, sure, although I'd probably go the RHEL/KVM route. For Windows workloads, I'm already buying Hyper-V, because it's part of Windows. So why wouldn't I use it?
A client of mine had several Linux VMs hosted with a company that relied on Hyper-V for their VPS product. Never in my life have I seen so many problems in such a small infrastructure. Volumes would seize and become unreadable, or would become unwritable for no apparent reason. Once, after an outage that lasted more than 24 hours, they had to bring in someone from Microsoft to solve the problem. After this, we moved our machines to EC2. We had some glitches, but none even close to that one.
I know I am not Microsoft's biggest fan, but it seems like even their software knows it.
To be honest, I don't understand why anyone uses Windows as a server OS, except for things like Exchange servers. I'd probably not use Hyper-V because I'd only be running Windows in a guest on a Xen machine with primarily Linux VMs.
Any particular reason for choosing RHEL? apt-get is so much better than yum in my experience, and if you go with the support contract, you're adding $1,000-$4,500 to the cost of the server over 3 years. That's a pretty substantial bump that I'd rather spend on more RAM...
There is a whole world of energy (oil) companies, hospitals, and finance where $4,500 is a rounding error in IT infrastructure costs. You pay for the support contracts and licenses because (a) it's what everyone else uses, and (b) when something breaks you can point a finger at the vendor and say "We're waiting on them." It's more about being able to CYA and defer liability than anything.
At these companies IT is a cost center, not a revenue generating department. That is to say, they are structured to where it doesn't matter. Go on Dice.com and search asp.net developers to get an idea of what I'm talking about.
I understand the issue. Even if IT is not considered a revenue generator, when properly managed, it can become a competitive advantage. And since so many companies treat IT as a cost center, subject it to all the "best practices" and use only "best-of-breed" solutions, turning IT into a competitive asset has never been so easy.
The history of corporations is full of examples of not-very-bright companies vanquishing their competition only because they were the least incompetent in that segment and all their competition blundered itself into oblivion.
If it were just one server, that would be insignificant, but it could add 50% to your server costs (if you have a bunch of $10k servers and have to add a $5k support contract to each), which ends up being a significant increase in expense.
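The point generalizes: a fixed per-host support cost scales linearly with fleet size, so the percentage overhead never shrinks. A quick sketch using the hypothetical $10k/$5k figures from the comment above:

```python
# Per-host support overhead on a fleet, using the comment's example figures.
HW_COST = 10_000   # hypothetical hardware cost per server
SUPPORT = 5_000    # hypothetical 3-year support contract per server

# The overhead ratio is independent of how many servers you buy.
increase = SUPPORT / HW_COST
print(f"{increase:.0%}")  # 50%
```

Whether that 50% is a rounding error or a dealbreaker depends entirely on whether you're the hospital or the small ISP from earlier in the thread.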
There are many applications delivered on the Windows platform. .Net platform, SQL Server, AD, ADFS, etc. I haven't cared about OS religious wars for a long time, dollars drive decisions for infrastructure and applications.
Why RHEL? Commercial apps are certified to run on it. It's easier to hire people who know it. There's a manual, and vendor guidance that recommends the "right" way to do things. I'm not going to stand up an application generating a few million dollars a year and discover during an outage that some change in Ubuntu breaks the app.
I love Ubuntu, and use it a lot on my personal projects. But in my professional experience, I've seen plenty of cases where some talented SA sets up a "special snowflake" Ubuntu/Debian environment, then leaves. The company is kind of fucked when nobody understands how things work.
My experience is from enterprise environments. Obviously in startups things are a bit different.
Thanks for the explanation. I'd probably not use any of those services for a greenfield project, regardless of how much money was riding on it.
RE: Ubuntu changes breaking, couldn't this be mitigated by running your own repo and introducing the changes to your repo once they'd been tested on some staging servers? It may be a moot point, though - I've never had an Ubuntu update break an app, and I've found LTS to be very stable. And if there was an issue, I've always got a backup image I can revert to.
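One way this could look in practice: point production hosts at an internal mirror that only receives packages after they've passed staging, instead of at the public archives. A minimal sketch; the mirror hostname is made up, and the suite names match the 12.04 LTS era discussed in the thread:

```shell
# Hypothetical sketch: production hosts pull only from an internal mirror
# (apt.internal.example.com is a made-up name), which is synced from the
# public archives only after updates pass on staging servers.
cat > /etc/apt/sources.list <<'EOF'
deb http://apt.internal.example.com/ubuntu precise main universe
deb http://apt.internal.example.com/ubuntu precise-updates main universe
deb http://apt.internal.example.com/ubuntu precise-security main universe
EOF

apt-get update
```

This gives you the same "vendor-gated updates" property the RHEL crowd is paying for, at the cost of operating the mirror and the staging pipeline yourself.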
Fair wrt the SA thing, it can be very janky in startups and SMBs, but in an enterprise, I'd expect most server deployment to be automated in a sane way, is this not the case?
Usually, a sign of a well run enterprise IT environment is infrastructure builds driven by vendor recommendations, but customized by a deep understanding of the vendor's bugs and other strategic factors. On the procurement side, they make deals to keep costs in line. No Fortune 500 is paying $1,300/socket per host for RHEL -- they're wheeling and dealing site licenses that probably end up being similar to the Microsoft model of $X/socket for unlimited VMs.
For example, Microsoft hands out lots of useless sizing guidance by assuming that you use physical hardware and JBOD disk. Good Windows admins/build engineers test the actual limits on their environment and size appropriately, and get the vendor to sign off.
Enterprises are all about risk mitigation. Usually that means being able to say "Product X broke because vendor Y told us to deploy component Z that way."
Yeah, Windows in itself doesn't seem like a very good reason to drop $X,000 more per server on a commercial virtualization solution vs. just going with Xen.
One key motivation is likely Google. Data centers represent one of Google's largest competitive advantages. Commoditization of your competitors' products and infrastructure is just good business: It's a strategy that works and usually benefits both consumers and upstart businesses.
Google doesn't offer its computing as a product (well, yes, there's App Engine, but it's pretty tiny). Amazon, OTOH, has a huge, dominant, and very profitable service dedicated to just this sort of thing, and it would be good if it got some competition.
> Google doesn't offer its computing as a product (well, yes, there's App Engine, but it's pretty tiny).
Well, if you ignore not only App Engine (their PaaS offering), but also the rest of the Google Cloud Platform products -- https://cloud.google.com/products/ -- sure, I guess you could say that.
But, really, you are completely wrong. Google clearly does offer its computing as a product.
Wait, can you explain what you mean? From my reading (especially from the source blog post[1]), they're releasing server designs and open sourcing some management software. If you were going to be running Windows Server (or someone convinces you to run it), cheaper commodity hardware is great, but it seems like you'll still be paying for Windows Server licenses.
Maybe MS is trying to make it easy for other people to run private Azure clouds? Even if they have to buy the software from MS? I mean, these cloud infrastructure services have terrible lock-in concerns, so MS is effectively saying "you can buy Azure-ish machines with all our software on them from companies X, Y, and Z, so if you get pissed off at us you can migrate away from Azure painlessly." That would help adoption of the Azure system for them, and MS gets money either way - service fees on Azure, or licenses for the Azure platform software being run on the servers sold by the third party.
It's just to save money. Microsoft has been designing their own servers for years and Open Compute designs have also been available for a while, so any damage to the server vendors is well underway. ODM server sales increased 45% last year (granted, from a small base).
Commoditize your complement[1]. Business strategy 101.
Not that x86 servers aren't already pretty much commoditized anyway. People pay for the support and for incremental features like who has the most free disk bays or does the best deals on SSD storage. Open Compute won't change that in any way.
IBM was struggling for traction anyway. The people who drink the IBM kool aid buy the power stuff, and the garbage BladeCenter hardware soured the Pure stuff.
In competitive procurements, IBM was buying business, taking massive losses to get some hardware installed.