Azure Dev Spaces has hit public preview, so El Reg took it for a spin

Or, if you will, an arthritic stagger around the park


Azure Dev Spaces is one of those technologies that looks great in demonstrations, but can end up being infuriating when introduced to real life.

Shown off at this year's Build conference and subsequently released in private preview, the toys were released to public preview this week, and The Register was keen to get its talons on it to enjoy some real-time container code debugging.

Suffice to say, it has not gone well. Though this is a public preview, quite a bit of work is needed to make it stable enough for production purposes.

Dev Space Delights

Azure Dev Spaces is aimed fairly and squarely at persuading developers that they should be targeting Microsoft's Azure Kubernetes Service (AKS) in their container development. Redmond reckons that if developers take their first tentative steps in the familiar Visual Studio environment (although others can be used), with the tools they are used to, they won't look back.

[Screenshot] Intrepid Reg reporter does the breakpoint fandango in Azure Dev Spaces

The theory is that once Dev Spaces is enabled in your chosen environment (Visual Studio 2017 in the case of El Reg), code gets synced to the cloud. It is then built and deployed as a container into AKS. The neat trick is that you can still edit and debug code as though it was running locally, and don't need to spray your local workstation with all manner of Docker and Kubernetes components.
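Outside Visual Studio, the same sync-build-deploy loop is exposed through Microsoft's azds command-line client. As a rough sketch (project layout and output assumed – this requires a live, Dev Spaces-enabled AKS cluster), the core commands look something like this:

```shell
# From the root of an existing ASP.NET Core (or Node.js) project:
# generate the Dockerfile and Helm chart scaffolding Dev Spaces needs
azds prep --public

# Sync the source to the cloud, build the container image there,
# and deploy it into the cluster's dev space
azds up

# Inspect what is running and which dev space is selected
azds list-up
azds space list
```

The point being that the Docker build happens in Azure, not on the developer's workstation – hence no local Docker or Kubernetes installation is required.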

The concept is certainly appealing and deals with common workflow problems encountered by developers, particularly in the arena of end-to-end testing. The Register likes to do a bit of code-monkeying from time to time, and we decided to take the thing for a spin.

Putting the oh no in Visual Studio

From the outset you'll find a few limitations. You'll need Kubernetes 1.9.6 or later, and you'll need to make sure your cluster is running in one of a limited list of Azure regions. If the thought of setting up this stuff alarms you, probably best to stop reading now.
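For the curious, a minimal sketch of the cluster setup from the Azure CLI might run as follows. Resource names and the region are placeholders, and the commands assume an authenticated `az` session against a subscription with the relevant previews enabled:

```shell
# Create a resource group in one of the supported regions
# (East US, for example -- the supported list was short at preview)
az group create --name myResourceGroup --location eastus

# Create an AKS cluster on Kubernetes 1.9.6 or later, with the
# HTTP application routing add-on Dev Spaces leans on for ingress
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --kubernetes-version 1.9.6 \
    --enable-addons http_application_routing \
    --node-count 1 \
    --generate-ssh-keys

# Switch on Azure Dev Spaces for the cluster
az aks use-dev-spaces --resource-group myResourceGroup --name myAKSCluster
```

Get the region or Kubernetes version wrong and the final step fails, which is where much of the setup frustration described below comes in.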

Microsoft does provide a helpful guide, albeit one festooned with errors – possibly because it appears to have been written against an earlier version of AKS that no longer matches the current feature set, such is the rapid release cadence Redmond has adopted of late.

Having spent too many years in the '80s typing in BASIC listings from computer magazines and debugging the inevitable printing typos, your humble El Reg hack sees this as more an opportunity for learning than something to get too distressed about.

Once configured correctly, the integration is impressively seamless, in Visual Studio 2017 at least. Directing the build toward Azure Dev Spaces pops up a dialog from which the developer can select the previously created cluster and – hey presto – the project gets sprayed with the scaffolding needed to make it container-happy.

At least, that is the theory. The process is slow. Very, very slow. And if you try to interrupt it to get on with something else, bad things can happen and leave the project in a state where rolling back to the last-known decent checkpoint is easier than trying to unpick the work done by Azure Dev Spaces.

Patience is a virtue, possess it if you can

Patience is its own reward, and once Azure Dev Spaces has done its stuff, the project can be run pretty much as normal, with the code spinning up in the container.

But again it is slow. The initial build and deploy is coffee-break material, even for the "Hello World!"-like example we put together. And if that build and deploy gets interrupted... well, it isn't good news for the cluster. We saw our first cluster crash and burn (or rather "left in a failed state") to the point where we simply deleted and recreated it rather than continue trying to resurrect the poor thing.

Again, patience is the key, and if left to its own devices for far longer than one would expect if running locally, Azure Dev Spaces does indeed work as advertised. Code running in the container can be stepped into and debugged – invaluable for end-to-end testing. Performance, however, can best be described as glacial.

Balan Subramanian's Azure DevEx product team has been tasked with "creating delightful experiences on Azure for all kinds of developers". The experience of Dev Spaces has, alas, not been delightful for this developer. It is sluggish, a little fragile, and needs to be configured just so.

However, this is still preview technology with clearly a way to go before it hits the big time. For developers used to a visual way of debugging and happy in the world of Visual Studio, Azure Dev Spaces will indeed provide a familiar wrapper around the potentially alien environment of containerisation. Just not quite yet. ®


