Google prepping Web2.0rhea search?

But of course


Google has hinted it may launch a search tool dedicated to Twitter and other Web2.0rhea services.

The Google Operating System blog - a site unaffiliated with the Mountain View Chocolate Factory - recently noticed that the company's localization service was alluding to some sort of Google "MicroBlogsearch" tool, describing a Microblog as "a blog with very short entries. Twitter is the popular service associated with this format":

Google Twitter Hint

A slim hint, to be sure, but the Google Operating System blog has told the world that a Mountain View Web2.0rhea search engine is just around the corner. And if it is, that would hardly be a surprise.

Yes, Google CEO Eric Schmidt has called Twitter "a poor man's email." But he quickly moved to soften that statement, and last month company poster child Marissa Mayer said "We are interested in being able to offer, for example, micro-blogging and micro-messaging in our search."

Of course, Google already indexes Twitter content, and it's now offering a prominent search sort dedicated to "recent results," including Tweets. But it's unclear how often Google indexes what is ostensibly real-time user-generated spew. And its "recent results" include more than Web2.0rhea.

Many have speculated that Google will swallow Twitter whole, but it needn't acquire the company to index its service. Several companies already offer Twitter search tools. They include Twitter itself, but its search tool - based on tech it gobbled with the acquisition of Summize - is less than efficient, and it sorts results only by date and time.

Google Operating System has decided that Google's microblog search will operate much like Google's native Blog Search tool, sorting results according to how relevant they are - not just when they were posted. But again, this merely states the obvious.
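Google hasn't said how it might blend relevance with recency, but the general idea is straightforward. As a purely illustrative sketch - every name, function, and number here is invented, not anything Google has described - a ranker could weight a post's topical relevance by an exponential freshness decay:

```python
import time

def score(relevance, posted_at, now=None, half_life_hours=1.0):
    """Illustrative ranking score: topical relevance weighted by freshness.

    relevance: topical match score in [0, 1]
    posted_at: UNIX timestamp of the post
    half_life_hours: freshness halves every this many hours
    """
    now = time.time() if now is None else now
    age_hours = max(0.0, (now - posted_at) / 3600.0)
    freshness = 0.5 ** (age_hours / half_life_hours)
    return relevance * freshness

# With a one-hour half-life, a ten-minute-old post that's weakly on-topic
# can still outrank a two-hour-old post that's strongly on-topic:
now = 1_000_000_000
fresh_weak = score(0.3, now - 600, now=now)    # ~0.27
old_strong = score(0.9, now - 7200, now=now)   # ~0.22
```

Tune the half-life down and the ranking approaches Twitter's pure date sort; tune it up and it approaches ordinary relevance ranking.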

Whether Google is looking to purchase Twitter or not, it's undoubtedly looking to improve its Web2.0rhea indexing skills. Last month, at Google Zeitgeist in London, after lauding Twitter's ability to show info in ostensible real-time, Google co-founder Larry Page said he's been urging his search gurus to index web content every second. "They sort of laugh at me and go, 'It's OK if it's a few minutes old,'" he said. "And I'm like, 'No, no, it needs to be every second.'"

We've asked Google to comment on the latest Twitter claims, but it has yet to respond. ®

Update

Google has responded - as you'd expect them to respond: "At Google we strive to connect people to all the world's information, and this includes information that's frequently updated such as news sites, blogs and real-time sources," says a company spokesman. "While we don't have anything to announce today, real-time information is important, and we're looking at different ways to use this information to make Google more useful to our users."

Biting the hand that feeds IT © 1998–2021