This week's sponsor is PGi.

New Editorial Event! Webinar: IT and Marketing: Extreme Collaboration -- Tuesday, August 26, 2pm ET / 11am PT
Media outlets love to focus on the tension between IT and marketing. But if it's a war, both sides lose. Instead, CIOs have to partner with CMOs to help deliver on aggressive business goals in an ever-changing landscape. Register Today!

Editor's Corner: What I mean by 'communications'
Also Noted: Spotlight On... The inevitable 'cloud crash' headline; The coming end to Windows 8; The coming end to Android 2.1 insecurity; and much more...

Want to stay on top of the latest Enterprise IT news? Sign up for our free FierceCIO daily briefing! FierceCIO gives CIOs, CTOs and senior IT managers must-know news, trends, insights, and best practices in executive IT management. Sign up today and get full access to our jobs board and whitepaper library.

News From the Fierce Network:
1. Windows Phone is dark horse in enterprise mobility
2. 10 steps to enterprise mobility mastery
3. Move over BYOD ... it's time for BYOID

What I mean by 'communications'

At least once a week, someone asks me this question: What's the difference between the "Telecom" in FierceTelecom and the "Communications" in FierceEnterpriseCommunications?

There's one obvious similarity, actually: There's no space between either of those words and "Fierce". I like to pretend the symbolism was intentional, but it's cool nonetheless. Yet the distinction is more than symbolic.

In 1988, in a piece published for, and read by, over three million people, I wrote that all computing processes are communications processes. While the tech press spent most of the last quarter-century observing the convergence among devices--or more accurately, predicting convergences that never happened, like TV + PC, TV + telephone, and TV + good TV shows--communications services and processes have snaked their way into our data centers and opened up a world of bandwidth.

Now the planet can be our data center. We call it "the cloud", but that moniker may fall into disuse as we grow more accustomed to it. Computing happens everywhere, and it's no longer individual devices that do it. Mobile apps are mainly clients for data center-driven processes. (What, did you really think your smartphone was translating your voice all by itself?)

Now, a great many tech press sources will tell you they have this subject "covered". Most cover the communications aspect of computing to the extent that bikinis cover folks walking along Muscle Beach--which is to say, they obscure precisely the parts an interested observer would most want to see.

There is no single strategy for covering a topic this big with one publication. Global digital communications involves not just technology but science, the law, sociology, history and economics. It has a multitude of angles, and only together do they produce a complete picture of the subject.

Here's one angle: Cloud dynamics, software-defined networking and network functions virtualization are making it possible for the complete communications infrastructure of an enterprise--from the wires in the walls to the routers in the basement--to be replaced with software that can travel through the cloud and scale to whatever size it needs to be at any given moment.
We're taking something that, for many enterprises, was laid in pipes in the 1800s and has lasted over a century, and replacing it with something whose entire substance can be simulated in the memory of one microprocessor and might last five minutes.

Which means what? Who walks onto your premises with the magic wand that leaps your communications infrastructure from the 19th century, right over the 20th, and into today? When the Internet replaces what you've come to know as your telephone (not your smartphone, but the wire that used to lead to its predecessor), what happens to your business model? What laws will change, which standards must adapt, what principles should be reconsidered? When the tools on your factory floor all gain identities and IP addresses, will you use fewer of them? Will their utilization improve? What's a "toll-free number" in an era when you want your customer to click to connect?

If the administrators of your infrastructure are all with one company, the providers of your applications are with other companies, the consultants for your business processes are with still other companies, your communications providers are yet another group of companies, and you're none of the above... where, exactly, is your IT department? And if we don't confront these questions head-on, and answer them responsibly, how long before the window of opportunity for change closes, and the life-support system for the 19th century gets turned back on?

That's my angle. FierceTelecom is the standard bearer for coverage of the global telecommunications industry, its many companies and its multitude of technologies. With FierceEnterpriseCommunications, I have the responsibility--and the honor, incidentally--of addressing the people that industry serves, with the same tenacity, asking the same questions those people are asking. Any economy is a many-headed beast. The head I'm dealing with is a beast in itself. - SF3

Read more about: IP convergence

Today's Top News

1. Successful proof-of-concept of migration orchestration across clouds

We talk quite a lot about "hybrid cloud", which typically refers to the notion that cloud platforms like OpenStack can leverage both on-premises resources and public cloud capacity, and can often shift workloads between the two on demand. That shifting process is part of the job of orchestration, which is finally coming to the forefront as a topic of discussion around cloud management. If a cloud truly does connect multiple systems, then who or what is responsible for ensuring those systems work together--for orchestrating them?

Orchestration is a communications process, and will involve telecommunications networks--which is why a topic that is ostensibly about the cloud, and the role of IT in managing it, turns out to be about AT&T. In a proof-of-concept whose success was announced Tuesday, a software-defined network designed by AT&T marshaled a fully automated migration of virtual machines between two IBM-built server clusters running OpenStack. Now, if you think this is the sort of thing that must be taking place between clouds all the time, the keyword here is automated. As AT&T describes it, one server cluster requests the bandwidth necessary for the SDN to transfer the active server load to the other server cluster.
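Neither AT&T nor IBM has published code for this exchange, so here is a minimal, hypothetical Python sketch of the sequence AT&T describes--every class, method and figure below is invented for illustration. The orchestrator sizes a bandwidth request, asks the SDN controller to build a circuit, migrates over it, and tears it down:

```python
# Hypothetical sketch of the orchestration handshake AT&T describes.
# None of these classes correspond to a published AT&T or IBM API.

import time
from dataclasses import dataclass


@dataclass
class BandwidthRequest:
    source_dc: str    # cloud data center initiating the migration
    dest_dc: str      # cloud data center receiving the workload
    gbps: float       # bandwidth the orchestrator estimates it needs
    duration_s: int   # how long the on-demand circuit should live


class NetworkController:
    """Stands in for the SDN controller that builds circuits on request."""

    def provision(self, req: BandwidthRequest) -> str:
        # A real controller would program switches and paths here.
        print(f"Provisioning {req.gbps} Gbps: {req.source_dc} -> {req.dest_dc}")
        return "circuit-001"   # handle to the on-demand network

    def teardown(self, circuit_id: str) -> None:
        print(f"Tearing down {circuit_id}")


def migrate_workload(controller: NetworkController, vms: list,
                     source: str, dest: str) -> None:
    """The orchestrator's job: size the request, get the circuit, move VMs."""
    req = BandwidthRequest(source, dest, gbps=10.0, duration_s=60)
    circuit = controller.provision(req)     # network built by request
    try:
        for vm in vms:                      # live-migrate the server load
            print(f"Migrating {vm} over {circuit}")
            time.sleep(0.1)                 # placeholder for the transfer
    finally:
        controller.teardown(circuit)        # circuit lives only as needed


migrate_workload(NetworkController(), ["vm-a", "vm-b"], "dc-east", "dc-west")
```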
So the network upon which the transfer takes place is literally built by request; then the receiving cloud network is provisioned; and the transfer is handled while load balancing remains active--all in a span of time AT&T cites as about 40 seconds.

In a blog post Tuesday, IBM senior technical staff member Douglas Freimuth supplied the key piece of the puzzle: "It works by the cloud data center sending a signal to a network controller that describes the bandwidth needs, and which cloud data centers need to connect. The key technology in the cloud IBM will provide is the intelligent orchestration capability that knows when and how much bandwidth to request and between which clouds.

"The cloud data center orchestrator will continue to get more intelligent in its utilization of the network," Freimuth continues. "Longer term, an application on your smartphone might be smart enough to request bandwidth from the network controller."

In the larger sense, orchestration is the synchronization of automated and managed processes across all server clusters worldwide that participate in a cloud operation. User applications, such as finding the nearest place to order a good pizza, might not require sophisticated orchestration; they can rely instead on asynchronous operations running under REST protocols to respond to requests whenever they get around to it. But if multiple clouds are to participate in, say, a collective data warehouse, they can't rely on a flimsy, Web-like approach to dispatching and responding to API calls. An orchestration engine is like a higher-order operating system, designed to make cooperative components in a cloud operation behave nicely with one another.

Consulting firm Deloitte cites survey numbers from Forrester saying that only about 9 percent of enterprises' new adoption of cloud services is intended to replace existing systems. That's because businesses fully intend to integrate their legacy systems with their cloud systems, for a finite but indeterminate length of time. Deloitte cites these numbers by way of making its case for cloud orchestration as a personal service, requiring consultation, planning and comprehensive strategy. Then along come AT&T and IBM, showing off how it can all be done with an egg timer and the flip of a switch.

Cloud dynamics is not just about pooling resources but about automating and orchestrating their implementation. If a cloud management process that had been done by people can be effectively orchestrated, then it seems inevitable that someone, someday, will orchestrate it. Which means the argument that proper cloud management requires a wealth of talented people has a flip side.

For more:
- Douglas Freimuth's IBM blog post explaining the proof-of-concept
- Deloitte's explanation of "Cloud Orchestration and System Integration"

Related Articles:
Cloud orchestration platform Cloudify rethinks its entire purpose
Oracle's new Orchestrator aims to help NFV do whatever it does
Alcatel-Lucent's Cassidy Shield: The need to overhaul OSS

Read more about: SDN, hybrid cloud

2. Preliminary GAO report reveals usage-based Internet pricing already prevalent

If the point of the Federal Communications Commission's improved net neutrality regulations is to preclude ISPs from creating "fast lanes" for certain types of traffic, it may want to update its facts.
Preliminary findings from a Government Accountability Office report (.pdf) due to be released in November, but publicly revealed Wednesday by the ranking member of the House Communications and Technology Subcommittee, show ISPs may already have accomplished this, at least from their customers' perspective. Their tool of choice: usage-based pricing (UBP), which can take many forms: overage charges, monthly allowances, even discount rewards for low usage.

The GAO's information came from the nation's top 13 wireline and four wireless ISPs, covering, by its estimate, 97 percent of the nation. Its preliminary findings state that all four wireless ISPs present customers with data allowance tiers. The report does not name the ISPs, but three of the four charge overage fees per additional gigabyte above those tiers, while the fourth responds by throttling speeds. Surprisingly, some seven of the 13 wireline ISPs studied apply similar plans. Two of those presently do not impose overage fees, while another offers a low-data discount. One unnamed wireline ISP is testing overage fees in select markets--and it's a safe bet that ISP is Comcast.

About 2 percent of wireline ISPs' customers exceed their data allowances in any given month, GAO determined. But this may be a very temporary state of affairs. Citing figures from Canada-based broadband network provider Sandvine, GAO says that when customers "cut the cord" with their cable TV provider, preferring the Internet instead, their average data consumption rises to 212 GB per month--which, GAO says, is close to existing data allowances (probably 250 GB per month).

In a letter to FCC Chairman Tom Wheeler yesterday, ranking member Rep. Anna Eshoo (D-Calif.) makes the case that high-bandwidth and low-bandwidth applications already pervade the Internet. But because consumers haven't been educated about which is which, they may be setting themselves up for penalties if they choose to sever their ties with cable companies. Writes Eshoo:

Ultimately, whether accessing the Internet through a mobile device or through a wired broadband connection at home, consumers have come to expect an experience that includes streaming high definition video, downloading music, and video conferencing with family and friends using the app or service of their choice. The GAO study sheds light on the effects of data caps, including the potential impact on "cord-cutters" and suggests that consumers may not be fully benefiting from lower-cost options under usage-based pricing.

Customers of so-called "business class" service (not covered by the preliminary GAO findings) may not be subject to the same usage caps, even though--as some customers report--they're paying premiums for the same nodes that consumers use. Some industry experts claim that the consumption models for everyday consumers and enterprises differ for reasons that are mainly psychological rather than economic. The most pointed example of this claim came in January 2013 at a broadband industry conference, when former FCC chairman Michael Powell compared consumer usage habits to a buffet line. "If you want to go to the Denny's buffet and fill up your bowl," said Powell, "you are going to pay more than the person who chooses broccoli spears." His implication was that businesses are conservative by nature, whereas consumers discover they have to go on diets only when it's too late. Powell is presently President & CEO of the National Cable & Telecommunications Association; current FCC chairman Wheeler held that title from 1979 until 1984.
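To see why that 212 GB figure matters, here's a back-of-the-envelope Python sketch. The GAO does not disclose any carrier's actual rates, so the block size and fee below are hypothetical placeholders; only the 250 GB cap echoes GAO's estimate above:

```python
# Hypothetical cap-and-overage arithmetic; no figures here other than
# the 250 GB cap come from the GAO report.
import math

CAP_GB = 250           # allowance GAO calls typical for wireline
OVERAGE_BLOCK_GB = 50  # hypothetical: overage sold in 50 GB blocks
BLOCK_FEE_USD = 10.00  # hypothetical: fee per block

def overage_charge(usage_gb: float) -> float:
    """Dollars owed above the flat rate for one month's usage."""
    excess = max(0.0, usage_gb - CAP_GB)
    return math.ceil(excess / OVERAGE_BLOCK_GB) * BLOCK_FEE_USD

for usage in (50, 212, 320):   # light user, cord-cutter average, heavy user
    print(f"{usage:>3} GB -> ${overage_charge(usage):.2f} overage")
```

Under those hypothetical numbers, the Sandvine-average cord-cutter pays nothing extra--but only barely, and one more streaming habit tips the household into overage territory. That thin margin is Eshoo's point.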
For more:
- Preliminary GAO report, released July 29 (.pdf)
- Rep. Anna Eshoo's letter to FCC Chairman Wheeler
- The B&C report on the broadband conference attended by Powell and other former FCC chairs

Related Articles:
When will Cisco's projected IP traffic growth collide with Comcast's caps?
AT&T, DirecTV merger will reignite 'regulated duopolies' issue
CenturyLink DSL usage caps cause subscriber confusion [FierceTelecom, March 15, 2013]

Read more about: Michael Powell, Government Accountability Office

3. How would the Senate's revised surveillance bill impact governance?

Recent changes to a proposed amendment to the USA Freedom Act, published Tuesday by the office of Sen. Patrick Leahy (D-Vt.), who chairs the Judiciary Committee, could compel the government to start using a "specific selection term" to refer to individuals (or entities) who may be the subjects of government inquiries. In response to criticism that the original language was too vague, the new language gets a lot more specific. Indeed, if passed, it could introduce into the law an abstractly permissible way for databases everywhere to collectively retain information on individuals for long periods of time. The technical means have not been spelled out, and perhaps don't yet exist, but it's clearer than ever just what a "specific selection term" is not.

The newly polished bill refers to such a term as follows:

(i) ...a term that specifically identifies a person, account, address, or personal device, or another specific identifier, that is used by the Government to narrowly limit the scope of tangible things sought to the greatest extent reasonably practicable, consistent with the purpose for seeking the tangible things; and (ii) does not include a term that does not narrowly limit the scope of the tangible things sought to the greatest extent reasonably practicable...

In other words, if an element of the term is so broad as to apply to more than one person--for instance, someone's zip code, or an identifier or handle used to refer to someone online--then that element is excluded from the selection term. Technically speaking, this means a database may be legal if 1) it's used only for specific purposes under court warrant, and 2) it seeks data that relates to someone (or something) specifically.
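For the database-minded, the two clauses reduce to a filter followed by a test. The Python sketch below is entirely hypothetical--no statute defines identifier categories this way, and real compliance is a legal judgment, not a function call:

```python
# Entirely hypothetical paraphrase of the bill's clauses (i) and (ii).
# The category sets are invented for illustration.

SPECIFIC_KINDS = {"person", "account", "address", "device"}  # clause (i)
BROAD_KINDS = {"zip_code", "online_handle", "city"}          # too broad

def narrowed(term: dict) -> dict:
    """Clause (ii): elements too broad to single anyone out are excluded."""
    return {k: v for k, v in term.items() if k not in BROAD_KINDS}

def is_specific_selection_term(term: dict) -> bool:
    """Whatever survives the filter must still identify one person,
    account, address or personal device -- clause (i)."""
    return bool(set(narrowed(term)) & SPECIFIC_KINDS)

print(is_specific_selection_term({"account": "alice@example.com"}))   # True
print(is_specific_selection_term({"zip_code": "20500"}))              # False
print(is_specific_selection_term({"zip_code": "20500",
                                  "device": "IMEI-004999010640000"})) # True
```

Notice that everything turns on how identifier categories get classified as "specific" or "broad"--which is exactly where the criticism has focused.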
The text in (ii) may have been added in response to a complaint raised by Sen. Dianne Feinstein (D-Calif.) this past June. During a Senate hearing, Sen. Feinstein stated that private sector companies may be confused by the concept--for instance, when attempting to adopt the government's standards as compliance frameworks of their own. It's hard to make the case that a privately held database should have fewer restrictions on its use of personally identifiable data (PID) than one used by a federal agency capable of conducting mass surveillance.

Last February, the National Institute of Standards and Technology (NIST) released version 1.0 of its Cybersecurity Framework (.pdf), which called upon all retainers of personal data--but especially the Government--to take steps to minimize the collection and retention of PID. The hope is, of course, for the Government to set a positive example, which would be the exact opposite of what the National Security Agency had been doing. But the expectation among businesspeople is that adoption of the NIST framework may become mandatory for certain classes of businesses, such as those that do business with the Government. And as Jim Guinn, II, a managing director at consulting firm PwC, recently shared on LinkedIn, the companies that do business with the companies that do business with the government may also find themselves having to adopt NIST best practices.

Last May, when "specific selection term" was first introduced, the Electronic Frontier Foundation warned that its use of the word "entity" as an alternate for "person" could be interpreted too broadly--"everyone in this building" could conceivably be an "entity". The warning was amplified by the Center for Democracy & Technology as a civil rights violation waiting to happen. The new language eliminates the word, but replaces it with "another specific identifier", opening the door for some unimpeachable authenticator yet to be devised. Think of it as a "primary key", to borrow the phrase from database technology, for the record of a person--a key whose use becomes feasible only once a kind of unlocking permit is granted. Put another way: bulk collection of stuff that doesn't become data until the FISA judge says it is.

For more:
- Sen. Feinstein's objection to the original language, from a C-SPAN tape
- NIST Cybersecurity Framework v. 1.0 (.pdf)
- Jim Guinn, II's LinkedIn comment on NIST mandates
- The EFF's warning on Sen. Leahy's originally proposed language
- CDT's warning on the originally proposed language

Related Articles:
Security risks lie just below the surface of data lakes [FierceITSecurity]
Along came the Data Transparency Coalition to clear things up [FierceBigData]
NIST group to NSA: Keep your hands off our encryption [FierceITSecurity]

Read more about: Patrick Leahy, USA Freedom Act

4. Ford's next car-charging network could bypass smart meters

For the past few decades, the argument for modernizing the nation's electric power grid has included some reference to the benefits customers would receive from upgrading their on-premises power meters to ones that communicate with the headend over a two-way line, or perhaps even wirelessly. Many of this nation's utility companies report smart meter installation rates of about 85 percent--even though customers may not be aware of them. But the nation's utility providers have chosen communications protocols that are not immediately interoperable with one another, making nationalization of the grid difficult, if not impossible.

So Ford Motor Company, leading a charge along with other global automakers including GM, Mercedes-Benz and Toyota, is announcing an initiative to build a centralized system that effectively federates communications between electric and hybrid vehicles and the utilities providing the power that charges them. The purpose: empowering utilities with the ability to at least suspend the charging of vehicles in their service areas when power demand overtakes supply.

In an interview Wednesday with FierceEnterpriseCommunications, Ford Motor Company hybrid and electric vehicle engineer David McCreadie tells us the communications protocols this system will use are not entirely new, but existing industry standards: specifically, the automated demand/response (ADR) protocol proffered by the OpenADR Alliance, and the ZigBee Alliance's Smart Energy Profile 2.0, currently used by energy-saving devices in the home.
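Ford hasn't published the central server's interface, and the sketch below doesn't attempt the OpenADR wire format; it only mirrors the routing idea McCreadie goes on to describe. In this hypothetical Python rendering, a utility posts one curtailment event to the gateway, which fans it out to each registered automaker's endpoint--every name here is invented for illustration:

```python
# Hypothetical sketch of the "central server" fan-out Ford describes.
# All classes, endpoints and figures are invented for illustration.

from dataclasses import dataclass
from typing import Callable


@dataclass
class CurtailmentEvent:
    utility: str    # who issued the event
    region: str     # service area affected
    action: str     # "throttle" or "stop_charging"
    level_pct: int  # e.g., throttle charging to 50% of maximum draw


class CentralServer:
    """One gateway between thousands of utilities and a dozen-plus OEMs."""

    def __init__(self) -> None:
        self.oems: dict = {}

    def register_oem(self, name: str,
                     handler: Callable[[CurtailmentEvent], None]) -> None:
        self.oems[name] = handler   # Ford, GM, Toyota, Honda, ...

    def publish(self, event: CurtailmentEvent) -> None:
        # The utility sends one message; the server relays it to every
        # OEM, which in turn forwards it to its vehicles in that region.
        for handler in self.oems.values():
            handler(event)


def ford_fleet(event: CurtailmentEvent) -> None:
    print(f"[Ford] {event.region}: {event.action} at {event.level_pct}%")


server = CentralServer()
server.register_oem("Ford", ford_fleet)
server.publish(CurtailmentEvent("Gotham Power", "region-7", "throttle", 50))
```

The design payoff is the many-to-many collapse: each utility and each automaker integrates once, against the gateway, instead of pairwise--which is the simplification McCreadie describes next.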
"Those would be the two ways in which a utility would typically send out a message to vehicles saying, 'Hey, throttle your load back,' or, 'We need you to stop charging,'" says McCreadie. "The central server will be able to accommodate both of those communications protocols." The server to which McCreadie refers is one that Ford envisions being used by all electric vehicle manufacturers. "Not only do the utilities in any certain region need to communicate with Ford cars, but they also want to communicate with GM cars, Toyota cars, Honda cars, etc.," he says. "So this central server, in essence, is a way to simplify things on both sides of the equation. The utilities would no longer have to communicate to 12 or 15 different auto-makers; they just need to communicate with the essential interface--this gateway point, called the 'central server'--and then, at the same time, from the OEM standpoint, because our vehicles are throughout the country, we definitely don't want to have to communicate with two or three thousand different utilities. So this simplifies things for us greatly." The most common use case for an "Internet of Things" that extends into the home is the monitoring and remote control of power consumption. SEP 2.0 is one of the key protocols for that use case. The path that Ford appears to be paving--ostensibly for electric cars, but certainly worth investigating for other appliances, as well as for HVAC systems--lets the power company communicate directly with a device through its own manufacturer, rather than to a proxy acting in lieu of a smart meter... or, maybe even more importantly, rather than to a smart meter. So remind me what smart meters were for again? When I posed the question of whether the central server could be a fully virtualized server managed by a cloud service provider, McCreadie said that while he was not an IT professional, he could accept this as a possibility. Then when I asked whether the installation of a smart meter would present any value-add to this vehicle communications process over and above simply connecting wirelessly to the central server, he responded, "I don't think it necessarily enhances the communication. "What the smart meter does, of course, is allow the utility to know when your electricity is being used," he continues, "so that you can have differential rates based on the time you use it." Part of the problem there, he explained, lies with power companies still having yet to decide on the pricing schemes and incentives for customers to deploy smart meters in the first place--for instance, how to use parameters such as demand/response (DR) and frequency regulation. "Some of the value here, frankly," he says, "is part of what the OEMs are still trying to figure out." For more: - J.D. Power: Utilities face smart meter adoption challenges [Electric Light & Power, October 24, 2012] - Energy Storage Association's explanation of Frequency Regulation Related Articles: Big data's impact on utilities: from smart grid to soft grid [FierceBigData] Big data analytics sparks reinvention of smart grid [FierceBigData] Read more about: OpenADR Alliance, demand/response back to top | Also Noted SPOTLIGHT ON... The inevitable 'cloud crash' headline For this week's spotlight, I'm featuring a link to a piece on Seeking Alpha by veteran stock analyst Dana Blankenhorn, not because I particularly agree with it but because I do feel it should be read and considered. 
In the old days, back when we got our news from sources that didn't also try to show us "upskirt" photos, we could read or listen to commentary we disagreed with (right now, I'm thinking George Will, pre-Fox) and appreciate the reasoning behind it. In "The Coming Cloud Crash," Blankenhorn argues that containerization (the concept of distributing applications with their dependencies built in, made popular by Docker) will increase efficiency in on-premises data centers to such an extent that it will compel enterprises to dis-invest in public cloud capacity. The reason I disagree is that Blankenhorn isn't considering the potential use case shift here: Businesses that operate apps on PaaS services like Heroku and the original Azure may move to a platform-less public cloud where Docker containers may be managed freely, or to platforms like ActiveState's Stackato that support Docker out of the box.

Read more: The Coming Cloud Crash [by Dana Blankenhorn, SeekingAlpha]

> Consumerization and the CIO - Now Available On-Demand
From devices to services to apps, end users have a lot of choices--and those choices are bleeding into enterprise IT faster than ever. How do these changes affect IT strategy, budget and infrastructure? Register to watch now!

> IT and Marketing: Extreme Collaboration - Sponsored by: PGi
Media outlets love to focus on the tension between IT and marketing. But if it's a war, both sides lose. Instead, CIOs have to partner with CMOs to help deliver on aggressive business goals in an ever-changing landscape. Register Today!

> eBook: 5 Key Strategies for Successful Mobile Engagement
Read this eBook to discover how you can deliver highly targeted, personalized content and services to your customers across all mobile channels--and the key strategies that are critical to a successful mobile approach. Download today!

> Whitepaper: Supporting VDIs and Thin Clients
Companies have already begun deploying VDIs and thin clients (like Google's Chromebook) on a massive scale. These low-cost, easily deployed workstations present significant cost savings for companies, but require unique tools to support them. This whitepaper, written by Proxy Networks, outlines the best way to do that. Download now.

> eBook: eBrief | Making BYOD Work: 4 Critical Strategies for Midmarket and SMB Companies
Bring-your-own-device (BYOD) can be a blessing for mid-size and small businesses. But getting the real payoff requires some attention to details that may differ from those at large enterprises. Download this eBrief to get more practical advice for making BYOD work.