Take some time to ease yourself into Monday with a round-up of some tales from Redmond in the last week that you may have missed.
The Seekers to make a comeback in Windows 10 19H1
Windows Insiders on the Slow Ring wondering if and when they might receive Build 18362 will be delighted to learn that the Windows team has enabled what it refers to as the "Seeker Experience".
This, sadly, does not entail being regaled with such smashing hits as Georgy Girl and I'll Never Find Another You.
Let's face it: after sending its music service, Groove, to the great soundproofed studio in the sky, Microsoft would be hard pressed to play much more than the Windows 10 Critical Stop sound.
In this instance, "Seekers" are those individuals who pop into Windows 10's settings and hit the update button, keen to experience whatever is headed down the pipe toward them - in this case build 18362, the Insider team's latest and greatest (at least as far as 19H1 is concerned).
Unfortunately, those still rocking build 18356.16 remain locked out while Microsoft continues to work on whatever got borked in that build to break the installer.
And then there were two - anti-cheat code issue resolved?
There was more good news for Insiders, at least for those who like playing games, as Microsoft announced that it would be lifting the roadblock it had flung up to stop the builds being installed on systems with certain games installed.
The games in question include anti-cheat code that does not play well with 19H1.
Gameplaying insiders rejoice! Makers of some of the affected titles have issued fixes which, as far as Microsoft is concerned, means testing can resume. Handy, because the release of 19H1 is, after all, just around the corner.
Ok - team got back to me. Many games issued their own individual fixes. Working now to get the block removed. Updated the blog post for Build 18362. pic.twitter.com/cLvU0ShyFd — Brandon LeBlanc (@brandonleblanc) March 28, 2019
This leaves just the Realtek SD card reader and Creative X-Fi sound card issues on the table.
For the latter, at least, Microsoft is pointing users in the direction of Creative for some updated drivers in the 18865 build of 2020's Windows 10, released on 27 March. While light on new features at this stage, the build's fixes will find their way into 19H1 before long, striking one more item from the Windows 10 to-do list.
Microsoft: Vulns, we've found a few. We did it Huawei...
Microsoft also shared an insight last week into the inner workings of its Microsoft Defender team, and the discovery and disclosure of some iffy behaviour in a device driver.
Drivers have the potential to shove all manner of sticky fingers into the Windows 10 kernel, since many must run at ring-0. A flaw in a driver therefore has the potential to allow miscreants to make mischief.
In this instance, the gang spotted something a bit off with a Huawei device driver. It would have to be Huawei, wouldn’t it?
Unlike its handling of some other notable vulnerabilities in its products, Huawei was quick to issue a fix after a talking-to from Microsoft.
The story of the flaw is a fascinating one, beginning with the sensors with which Microsoft has festooned the Windows 10 1809 kernel, designed to spot User APC code injection initiated by kernel code.
While monitoring alerts generated by Microsoft Defender Advanced Threat Protection, the gang noticed something whiffy: memory allocation and execution in the context of services.exe by kernel code. Having spotted an identical alert on another machine at the same time, the team dug deeper.
The culprit turned out to be a driver installed as part of PC Manager, Huawei's management software for MateBook laptops. Further forensics found some ugly code aimed at reviving the MateBookService.exe service after termination. Alas, the validation was shoddy: a simple whitelist check on what was being started (along the lines of "is it called MateBookService.exe? Fire it up then!") meant attacker-controlled instances of the process could do all manner of harm.
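To see why a name-only whitelist is so flimsy, here is a minimal sketch of the pattern described above. The paths and the "stricter" variant are hypothetical illustrations, not Huawei's actual code or patch; a real fix would go further and verify the binary's digital signature.

```python
import ntpath  # handles Windows-style paths regardless of host OS

# Hypothetical name-based whitelist in the spirit of the flaw described above.
ALLOWED_NAME = "MateBookService.exe"

def naive_is_allowed(path):
    """Flawed check: trusts any process whose file name matches."""
    return ntpath.basename(path) == ALLOWED_NAME

def stricter_is_allowed(path):
    """Safer sketch: pin the full expected install path as well.
    (A production fix would also verify the binary's signature.)"""
    expected = r"C:\Program Files\Huawei\PCManager\MateBookService.exe"
    return ntpath.normcase(path) == ntpath.normcase(expected)

# An attacker-controlled copy in a user-writable folder sails through
# the naive check, but not the stricter one.
evil = r"C:\Users\mallory\AppData\Local\Temp\MateBookService.exe"
print(naive_is_allowed(evil))     # True  - the flaw
print(stricter_is_allowed(evil))  # False - rejected
```

The point is that the watchdog service ran with high privilege, so anything that could pass the name check inherited that privilege.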
While now mitigated and fixed, the story is an interesting insight into the processes involved in tracking down the source of an alert, just how much telemetry is extracted from enrolled systems, and the naughtiness possible when drivers go rogue.
Azure ends the week with a good old-fashioned wobble
There was good news and bad news in the Azure world this week. As if to take users' minds off the ongoing Brexit shitshow, the West Europe region tottered about like a beer-infused protestor on a march to Parliament.
Things kicked off at around 15:20 UTC on 27 March and didn't get back to normal until more than a day later, at 17:30 UTC on 28 March. A "subset" of users found themselves unable to do such minor service management operations as, ooh, create, update, deploy, scale and delete resources hosted in the West Europe region.
@Azure Service outage in westeurope, some machines are unavailable, portal site for that machines is loading forever. — Silicium (@naturalgeek) March 28, 2019
As ever, Microsoft was impressively open with an admittedly worrying root cause explanation – automatic throttling had kicked in due to a larger than expected volume of requests (including retries, which added to the pain).
If only there was some sort of scalable cloud platform to deal with peaks in load. Oh, wait...
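The retry pile-on is a classic failure mode: clients that retry on a fixed schedule all come back at once and keep the service pinned. The usual client-side mitigation is exponential backoff with jitter, sketched below with illustrative numbers - these are not Azure's actual throttling or retry parameters.

```python
import random

def naive_retry_delays(attempts):
    """Fixed one-second retries: every failing client returns in lockstep,
    piling yet more requests onto an already-throttled service."""
    return [1.0] * attempts

def backoff_with_jitter(attempts, base=1.0, cap=32.0, rng=None):
    """'Full jitter' exponential backoff: each retry waits a random time up
    to an exponentially growing (and capped) ceiling, spreading load out."""
    rng = rng or random.Random()
    return [rng.uniform(0, min(cap, base * (2 ** i))) for i in range(attempts)]

print(naive_retry_delays(3))
# Seeded here only so the example is repeatable.
print([round(d, 2) for d in backoff_with_jitter(5, rng=random.Random(42))])
```

With jitter, a thundering herd of retries turns into a trickle, which is exactly what a throttled service needs to recover.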
To be fair to Microsoft, the team had spotted the problem rumbling toward them a few days earlier, but were still a day from deploying the fix when the incident happened.
Those over the pond pointing and laughing at the discomfiture of their European counterparts soon had their own problems to deal with.
First, Data Lake Storage and Analytics fell over for that unlucky "subset" of users, this time in East US 2 as well as West Europe. The Europeans were hit first, with a near two-hour wobble ending at 23:50 UTC on 28 March. East US 2 users saw a much longer period of tottering, from 22:40 UTC on 28 March through to 03:23 UTC on 29 March.
The cause was a borked deployment, which contained a configuration change "preventing requests from completing."
This, however, was as nothing compared to what was coming later on 29 March. Between 16:45 and 22:05 UTC, kind old Microsoft reckoned cloudy DBAs should enjoy a POETS day as Azure SQL Database collapsed for another "subset of users" across the UK and US.
[status] Identified: Azure SQL is failing. Our developers are researching the cause and determining whether we will wait out their downtime or swap to another SQL instance. We'll post again within 20 minutes. https://t.co/yN1kVVhtNJ — Rallybound Support (@TeamRallybound) March 29, 2019
Affected users found themselves unable to connect to SQL Database resources or perform service management operations for App Service resources among other failures.
Engineers eventually tracked the problem down to a misbehaving virtual network plugin that caused nodes to fall over. While a manual restart cured the problem, heads are still being scratched as to how the nodes failed and, more importantly, why it took a human to kick them back into life.
In an unfortunate bit of timing, the SQL slip-up occurred as Microsoft rolled out Read scale-out support for Azure SQL Database, a capability to redirect read-only connections to an automatically provisioned High-Availability replica.
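For those wondering how read scale-out is actually selected, it is driven from the client side via the ApplicationIntent keyword in the connection string, which routes the session to the read-only replica. A minimal sketch follows; the server and database names are made up for illustration.

```python
def build_conn_str(server, database, read_only=False):
    """Build an ODBC-style connection string for Azure SQL Database."""
    parts = [
        "Driver={ODBC Driver 17 for SQL Server}",
        "Server=tcp:%s,1433" % server,
        "Database=%s" % database,
    ]
    if read_only:
        # Only honoured when the database has read scale-out enabled;
        # otherwise the connection lands on the read-write primary.
        parts.append("ApplicationIntent=ReadOnly")
    return ";".join(parts) + ";"

# Hypothetical server/database names, for illustration only.
print(build_conn_str("example.database.windows.net", "reportdb", read_only=True))
```

Pointing reporting workloads at the replica this way keeps heavy read queries off the primary.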
Azure Good News: New toys all round!
Still, the Azure gang had reasons to celebrate as well, as Azure Premium Blob Storage became generally available. Aimed at workloads that need fast response times, Premium sits at the top of the pile, above the existing Hot, Cool and Archive tiers. With great performance comes, of course, higher pricing: while stashing data in Premium Blob Storage will be pricey, Microsoft reckons the transaction cost will be lower than that of the next tier down, Hot.
Also making an appearance last week was High-Throughput Block Blob (HTBB) storage, on which Microsoft coyly lifted the drapes at 2018's Ignite event. In a nutshell, the tech makes ingesting larger block blobs considerably quicker. Microsoft demoed HTBB hitting 12.5GB/s of single-blob throughput at Ignite but, as with all things, your mileage may vary.
There is no fee for the speedier blob action (indeed, it is automatically enabled), although operations need to be over a certain size to wake it up: over 4MB on ordinary blob storage, or over 256KB for the premium goodness described above.
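Those thresholds can be captured in a few lines. This is purely an illustrative helper encoding the cut-offs as reported above - the function name is made up and it is not part of any Azure SDK.

```python
# Size thresholds above which the faster block blob path kicks in,
# as described in the text (strictly greater than the threshold).
STANDARD_THRESHOLD = 4 * 1024 * 1024  # > 4MB on standard blob storage
PREMIUM_THRESHOLD = 256 * 1024        # > 256KB on premium blob storage

def uses_high_throughput(size_bytes, premium):
    """Hypothetical helper: does a write of this size get the HTBB path?"""
    threshold = PREMIUM_THRESHOLD if premium else STANDARD_THRESHOLD
    return size_bytes > threshold

print(uses_high_throughput(1024 * 1024, premium=False))  # False: 1MB on standard
print(uses_high_throughput(1024 * 1024, premium=True))   # True: 1MB on premium
```

In other words, the same 1MB write misses the fast path on a standard account but qualifies on Premium.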
And as for where to store all this speedier data? Microsoft has also bumped up Azure Managed Disk sizes, with up to 32 TiB on Premium SSD, Standard SSD and Standard HDD disks, and up to 64 TiB on Ultra Disks in preview.
Performance is also seeing an uptick, now reaching 20,000 IOPS and 900 MB/sec for the top-of-the-line Premium SSD type. ®