Cloud says 'no'

Internal challenges for the public cloud


Workshop Apart from a few laggard evangelists and smaller vendors with nothing else to sell, most people with an interest in cloud computing have concluded that a wholesale move of everything IT into the cloud won’t happen any time soon.

The various reasons boil down to a couple of fundamentals - that some things will always need to happen in-house, and that it makes no financial sense to throw away past investments and create them all again somewhere else, however hosted.

Given this more realistic starting point, the challenges of the public cloud are less about managing some seismic transition, and more about making it work with what is already in place, in terms of internal IT environments, operational processes and user behaviours. In this article we look into some of the specific challenges, and consider what you can do to avoid being caught out down the line.

Procurement

The first complications arise before any data can be exchanged, during the buying process. We use the term “procurement” guardedly, given that many cloud-based services can be purchased using a credit card for a subscription or a one-off purchase of CPU time - but this very different payment model can cause difficulties if spend goes above certain levels.

The default procurement process for most companies involves gaining approval for a fixed-cost purchase, which is then budgeted within a certain quarter; ongoing costs are budgeted separately and require their own approvals.

Without dwelling on the subtleties of capital expenditure vs. operational expenditure, rental models vs. subscriptions and so on for cloud services, there is bound to be a certain amount of chin-rubbing if executives start paying for core IT facilities on company cards and reporting them through expense claims.

This creates other problems - not least that the procurement process, onerous as it may be, exists to enable traceability and visibility of costs. It just isn’t feasible in the long term to run an organisation’s IT systems using credit cards, particularly if forward costs are unpredictable.

Integration

Once an online resource has been selected, appropriate due diligence has taken place (you did check the reliability of the supplier, didn’t you?) and a service has been deployed, the next set of challenges is around integration - with existing systems and applications.

No one system can do everything, so however good cloud services may be, they will always require data exchange with other IT systems, online or otherwise. So, your online CRM may need to feed your offline BI, or virtual machines running protein-folding algorithms may need to transfer large data sets back and forth.

This is the same, age-old challenge that has been discussed and repeatedly solved through generations of applications - most recently in the shape of service-oriented architecture (SOA). The solution could fill volumes - but in a nutshell, ensure you have thought about your data exchange requirements up front, both in terms of interfaces and volumes to be transferred. A bottleneck is a bottleneck, however it is hosted.
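To make that concrete, here is a minimal sketch of the kind of extract-and-load bridge that tends to sit between an online service and an in-house store. The endpoint, token and field names are illustrative assumptions rather than any real vendor's API - the point is simply that interfaces, authentication and transfer volumes need thinking about before the first batch moves.

```python
# Minimal sketch: pull contact records from a hypothetical cloud CRM's REST API
# and load them into a local SQLite table for an offline BI tool to read.
# The endpoint, token and field names are assumptions for illustration only.
import sqlite3

import requests

CRM_API = "https://crm.example.com/api/v1/contacts"  # hypothetical endpoint
API_TOKEN = "..."                                    # issued by the provider

def fetch_contacts(page_size=500):
    """Page through the CRM API so a large extract arrives in manageable chunks."""
    page = 1
    while True:
        resp = requests.get(
            CRM_API,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            params={"page": page, "per_page": page_size},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json().get("contacts", [])
        if not batch:
            break
        yield from batch
        page += 1

def load_into_staging(contacts, db_path="bi_staging.db"):
    """Upsert the extracted records into a local staging table."""
    rows = [(c["id"], c["name"], c["updated_at"]) for c in contacts]
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS crm_contacts "
        "(id TEXT PRIMARY KEY, name TEXT, updated_at TEXT)"
    )
    con.executemany("INSERT OR REPLACE INTO crm_contacts VALUES (?, ?, ?)", rows)
    con.commit()
    con.close()

if __name__ == "__main__":
    load_into_staging(fetch_contacts())
```

Paging the extract rather than pulling everything in one go is exactly the sort of volume decision that is easy to skip at selection time and painful to retrofit later.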

Access all areas

OK, so you’ve managed to get procurement under control, and all the data you need is where you need it, when you need it. But can your users access the service? It’s all very well having all the processing in the world at your disposal, particularly if it means people can work in places that they couldn’t before.

All that clever stuff goes to pot as soon as a facility is not available. As many cloud service providers have learned, and are now resolving, services from email to sales automation can benefit from a level of offline access. Enough said - but this is an important factor to consider in the selection process.

The beady eye

The final set of challenges for the public cloud is around manageability, as for any core IT system. The promise of SaaS (software as a service) applications may be that they are self-updating, which helps.

With the caveats of accessibility and due diligence above, cloud providers also claim that they can offer services more reliably and more efficiently than internal systems.

This may well be the case for more commodity functions such as email. However, even smaller companies require an internal IT role to monitor service performance, and (where possible pre-emptively) resolve any issues that may impact service levels.

In some cases this can be a simple question of being aware when providers schedule downtime, for example for a network service upgrade. As the relationship between online and offline systems becomes more complex, however, it becomes harder to separate the two - if an online analytics task using data from in-house systems should fail, say, spotting the fault and diagnosing the cause requires visibility of both the online and in-house environments.
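As a rough illustration of that joined-up visibility, the sketch below checks both sides of the boundary: the provider's published status endpoint and the freshness of the in-house extract an online job depends on. The URL and file path are made up for the example; the idea is that a failed analytics run can at least be attributed to the right side.

```python
# Minimal sketch of a combined health check: is the hypothetical online analytics
# service reachable, and has the in-house extract it consumes been refreshed
# recently? Neither the URL nor the file path refers to a real system.
import time
from pathlib import Path

import requests

STATUS_URL = "https://analytics.example.com/status"  # hypothetical provider endpoint
EXTRACT_FILE = Path("/data/exports/daily_feed.csv")  # hypothetical in-house extract
MAX_EXTRACT_AGE = 24 * 3600                          # seconds

def check_provider() -> str:
    """Report whether the online service answers its status endpoint."""
    try:
        resp = requests.get(STATUS_URL, timeout=10)
        return "up" if resp.status_code == 200 else f"degraded ({resp.status_code})"
    except requests.RequestException as exc:
        return f"unreachable ({type(exc).__name__})"

def check_extract() -> str:
    """Report whether the local feed the online job depends on is recent enough."""
    if not EXTRACT_FILE.exists():
        return "missing"
    age = time.time() - EXTRACT_FILE.stat().st_mtime
    return "fresh" if age < MAX_EXTRACT_AGE else f"stale ({age / 3600:.1f}h old)"

if __name__ == "__main__":
    # Reporting both sides together is the point: a failed run can then be blamed
    # on the provider, the in-house feed, or the link between them.
    print(f"provider: {check_provider()}, extract: {check_extract()}")
```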

From a front-line support perspective as well, help desk staff and support teams will need to cover a range of environments and problems that were not previously in their remit. Irate callers will have zero interest in whether a service is cloud-based or internally managed. Sadly, cloud computing is not some alternate universe where everything just works. Whatever new opportunities are offered by online services, we’re also stuck with what we have.

On the upside, this means that all that best practice about systems and software architecture, IT management and indeed procurement is still valid. Indeed, given that the option to simply outsource responsibility to the cloud doesn’t exist, it becomes even more important to ensure the necessary measures are in place up front.

Later in this series we will get into the specifics of making cloud work, drilling into some of these topics as well as the financial, security and sustainability aspects of using cloud-based services. For now we’d be interested in knowing whether you have any experiences of your own to share, good or bad - and what lessons can be passed on to others thinking of incorporating the public cloud in their own journey towards better IT delivery. ®

