Commonly heard fallacies and half-truths we've all encountered in networking.
Big Data and networking in general are all about numbers and statistics. We talk about our petabytes of data, the number of packets per second and the megabytes replicated in an hour. We want to know the uptime percentages of our services and the time to repair on our circuits. We're sold on loss ratios and round-trip times.
And like statistics and their "Lies, damned lies," as Mark Twain put it, networking has its own set of fallacies and half-truths we've all run up against. Here are the four that bug me the most. I'd be interested in hearing which ones are on your hit list:
“Network performance is guaranteed by the SLA”
I love this one. Every one of the hundreds of IT managers I've spoken with in my 20 years of reporting on IT looked at SLAs as a necessary evil. You need something to hold your carriers accountable, but few expect the SLA to reflect the true conditions of the network and even fewer expect to collect on their agreements. SLAs typically cover the carrier core, not the last mile, where most of the problems occur. Some will promise zero percent packet loss, but that's averaged over a month, not measured minute by minute. Others might talk about 99.99% uptime in a year, which sounds great, but neglect to mention that equates to nearly an hour of downtime over the year. It's for those reasons that finding a service provider with a straightforward SLA is so refreshing.
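To see why 99.99% sounds better than it is, here's a quick back-of-the-envelope sketch in Python (the function name is mine, just for illustration):

```python
def allowed_downtime_minutes(uptime_pct, period_hours):
    """Downtime budget implied by an uptime percentage over a given period."""
    return period_hours * 60 * (1 - uptime_pct / 100)

# 99.99% uptime still leaves a meaningful outage budget:
print(allowed_downtime_minutes(99.99, 365 * 24))  # ~52.6 minutes per year
print(allowed_downtime_minutes(99.99, 30 * 24))   # ~4.3 minutes per month
```

And note how the averaging window matters: 52 minutes of downtime spread across a year is invisible in a monthly average, but concentrated in one business morning it's a very bad day.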
“Line speed equates to line throughput”
The throughput of a line is a complex function of latency, bandwidth and packet loss. So a 10 Mbps line from New York to California (about 100ms round trip) with 0.5% packet loss might have 10 Mbps of bandwidth, but a single flow will only reach a maximum throughput of about 1.65 Mbps. Since most applications transfer data in a single session, this effectively determines the speed of the application.
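That 1.65 Mbps figure falls out of the well-known Mathis model for TCP throughput, which bounds a single flow at roughly MSS / (RTT × √loss). A minimal sketch, assuming a standard 1460-byte Ethernet MSS and taking the model constant as 1:

```python
import math

def mathis_throughput_mbps(mss_bytes, rtt_s, loss_rate):
    """Approximate upper bound on single-flow TCP throughput
    (Mathis et al. model): throughput <= MSS / (RTT * sqrt(p))."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate)) / 1e6

# The example above: 100 ms coast-to-coast RTT, 0.5% packet loss.
print(mathis_throughput_mbps(1460, 0.100, 0.005))  # ~1.65 Mbps
```

Notice that the line's 10 Mbps of bandwidth never enters the formula: once loss and latency are high enough, they alone cap what a single session can achieve.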
“To achieve sufficient compute performance you’ll need proprietary hardware.”
This truism was disproved in the early days of networking, and it'll be disproved again in this generation of networking. During the file server wars back in the 90s (for those who remember such things), I wrote extensively about Carl Amdahl's then-new venture, NetFrame Systems. NetFrame was supposed to revolutionize the file server market with its high degree of fault tolerance and file service-optimized backplane. The company itself had incredible pedigree, great funding and even better technical knowhow. Amdahl was a principal system architect and central processor design engineer for two generations of IBM System/370-compatible mainframe computers, and his father, Gene Amdahl, practically built the mainframe.
Flash forward: NetFrame is no more (acquired by Micron Electronics in 1997), and as for proprietary file servers, well, my good old Dell/HP/IBM boxes work fine, thank you very much. The same holds true with today's appliances. Sure, there may be markets that require custom hardware; then again, there were markets for the NetFrame. The overwhelming majority of customers and use cases, though, are fine with a general-purpose server architecture.
Consider a point that David Hughes, founder and chief technology officer here at Silver Peak Systems, made to me the other day: the number of cores in off-the-shelf two-socket servers has grown from 2 to 32 since Silver Peak was founded several years ago. "Not only are there 16x more cores, each core is approximately twice as fast," he said. "With proper multi-core software design, I would expect virtual appliance vendors to see a 20x increase in performance over those years. At least that's been our experience as a virtual appliance developer."
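Hughes' numbers are worth running: 16x the cores times 2x per core gives a 32x theoretical ceiling, so his observed 20x works out to roughly 60% multi-core scaling efficiency (the efficiency framing is mine, not his):

```python
cores_then, cores_now = 2, 32
core_count_gain = cores_now / cores_then        # 16x more cores
per_core_gain = 2.0                             # each core ~2x faster
theoretical = core_count_gain * per_core_gain   # 32x peak speedup
observed = 20.0                                 # Hughes' 20x figure
print(f"theoretical {theoretical:.0f}x, observed {observed:.0f}x "
      f"-> ~{observed / theoretical:.0%} scaling efficiency")
```

Getting even that close to linear scaling across 32 cores is the "proper multi-core software design" part, and it's exactly the work that proprietary hardware was supposed to make unnecessary.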
“Physical appliances are the best way to solve today’s networking problems”
In the age of virtualization, it always surprises me when a vendor lacks a credible virtual appliance. All too often, though, companies are so wedded to the upgrade chain and lock-in that come with proprietary appliances that they never develop a viable virtual alternative. Virtual appliances today can often match physical appliances in performance, can be deployed and installed in a fraction of the time and, depending on the architecture, cost dramatically less because they run on existing hardware. So why is it, again, that you need a physical appliance?