FCC chief claims power over network management

Vows rescue of coders and consumers


Left Coast Comcast Hearing: US Federal Communications Commission Chairman Kevin Martin wants everyone to know his agency has all the power it needs to regulate the likes of Comcast.

"The commission will be vigilant in monitoring the broadband marketplace and protecting consumers' access to the internet content of their choice," Martin said this afternoon on the campus of Stanford University, where the FCC held another public hearing on broadband network management practices. "The commission's existing authority and its existing net neutrality principles give it the necessary tools to continue to do so."

In August 2005, the FCC laid down a policy statement (PDF) meant to encourage an open internet, and Martin is adamant that the US Telecommunications Act and the Supreme Court's 2005 "Brand X" decision give the commission free rein to enforce the statement's principles. "It is critically important for the commission to take seriously anyone who files a complaint that accuses broadband providers of violating these principles," the chairman said.

This, of course, is a veiled reference to a complaint recently filed against Comcast, the second largest internet service provider in the country. Back in May, tests by an independent researcher showed that Comcast was preventing users from uploading files over BitTorrent and other peer-to-peer networks, and after The Associated Press verified these tests, members of the SaveTheInternet.com Coalition asked the FCC to investigate.
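
The tests worked by watching the wire: transfers were being killed by TCP reset (RST) packets that, according to the researchers' packet captures, neither endpoint had actually sent. A crude version of that check is easy to run yourself. Here is a minimal sketch, assuming Python with the scapy library and root privileges; the port number and capture window are assumptions for illustration, not the researchers' actual setup.

    # Minimal sketch: count inbound TCP resets during a BitTorrent session.
    # A burst of RSTs that neither peer actually sent is the classic
    # signature of in-path connection tampering. Assumes the scapy library
    # and root privileges; port 6881 and the 60-second window are
    # illustrative choices only.
    from scapy.all import IP, TCP, sniff

    BT_PORT = 6881  # common BitTorrent listen port (assumption)
    rst_count = 0

    def on_packet(pkt):
        global rst_count
        if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].flags & 0x04:
            rst_count += 1  # 0x04 is the TCP RST bit
            print(f"RST #{rst_count}: {pkt[IP].src} -> {pkt[IP].dst}")

    # Passively capture traffic on the BitTorrent port for one minute.
    sniff(filter=f"tcp port {BT_PORT}", prn=on_packet, timeout=60)
    print(f"Observed {rst_count} TCP resets during the session.")

A flood of resets the far peer never generated isn't proof on its own - comparing simultaneous captures from both ends of the connection, as the published tests did, is what makes the injection unambiguous.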

So the commission is investigating. In February, Martin and his crew held an East Coast public hearing on the campus of Harvard University to discuss the network management practices of Comcast and other ISPs, and today, staged the West Coast equivalent at Stanford.

Shortly after the Harvard hearing, Comcast argued that the commission has no legal right to regulate its management practices. But Martin sees things differently.

Judging from Martin's statements this afternoon - which echo remarks he's made in the past - he intends to take action. Of some sort. Part of the problem, Martin seems to be saying, is that Comcast wasn't exactly open about its practices. When first accused of throttling peer-to-peer traffic, the company flatly denied it.

"A network operator must provide adequate disclosure of its network management practices," said Martin, one of the three Republican FCC commissioners. "Operators must disclose the particular network management tools they use - not only to consumers but also to the designers of various applications.

"Application designers must understand what will and what will not work on the network, and consumers must be adequately informed of the exact nature of the service they are purchase."

It should be said that Comcast is still less than open about its practices. But more on that later.
