On November 10, Microsoft rolled out KB4589212. That support note is titled “Intel microcode updates for Windows 10, version 2004 and 20H2, and Windows Server, version 2004 and 20H2.” It is currently available only from the Microsoft Update Catalog, where a search on KB4589212 provides links to related downloads. As you can see from the following screencap, KB4589212 offers Intel microcode updates as downloads that apply to Windows Server and Windows 10 for x64 and x86 systems, versions 20H2 and 2004.
If you read the note, you’ll see this update applies to all Intel processors back to Ivy Bridge (circa 2011-2012).
[Click image for full-sized view.]
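Not sure whether your CPU falls inside that Ivy Bridge-or-newer window? A quick query tells you what you’ve got. Here’s a minimal Python sketch (it shells out to PowerShell’s standard Get-CimInstance cmdlet; compare the model it reports against the covered-processor list in the support note):

```python
import subprocess

# Ask Windows for the CPU model string via PowerShell's Get-CimInstance.
result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "(Get-CimInstance Win32_Processor).Name"],
    capture_output=True, text=True)

# Example output: Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz (an Ivy Bridge part)
print("CPU:", result.stdout.strip())
```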
If KB4589212 Offers Intel Microcode Updates, What’s Covered?
In addition to covering most Intel processors still in use back to Ivy Bridge (which is as old as anything I’ve got, from the 2012 mini-ITX box), this microcode update covers 8 different CVE items (3 from 2018, 2 from 2019, 3 from 2020). Here’s that table of items, plucked verbatim from the Microsoft Support note:
I’ve run this on half a dozen different 20H2 PCs of all vintages from 2012 to 2019 with no ill effects. This one’s definitely worth downloading and installing sooner rather than later. That said, note that microcode vulnerabilities do require physical access to PCs to foist. Once foisted, though, they’re mostly undetectable and difficult to remove, too. Take no chances: schedule this update for your next maintenance window. You can access the CVE links in the preceding table to learn more about the vulnerabilities involved. In fact, the most recent CVE is fascinating: it shows how attackers can decrypt data simply by carefully monitoring and plotting detailed CPU power usage (voltage consumption over time). Zounds!
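If you want to confirm the update actually landed on a given PC, a quick check of installed hotfixes can help. Here’s a minimal Python sketch that shells out to PowerShell’s Get-HotFix cmdlet; note that microcode packages don’t always show up under their KB number, so treat a miss as inconclusive rather than proof of absence:

```python
import subprocess

KB_ID = "KB4589212"

# Query installed hotfixes via PowerShell's Get-HotFix cmdlet.
result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     f"Get-HotFix -Id {KB_ID} | Format-List HotFixID, InstalledOn"],
    capture_output=True, text=True)

if KB_ID in result.stdout:
    print(f"{KB_ID} appears to be installed:\n{result.stdout.strip()}")
else:
    # Some servicing/microcode updates don't surface here, so this
    # result is inconclusive rather than definitive.
    print(f"{KB_ID} not found via Get-HotFix.")
```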
Suddenly, the usual login prompt from my Credit Union, where my wife and I both bank, has become inaccessible on my local network. No PC, no browser, no nothing will open the login URL. Errors proliferate like mushrooms after the rain instead. What gives?
I’ve been working in and around IP networks professionally since 1988, and with networks in general since 1979. I’ve seen many weird things, and now I have another to add to that list. From my LAN right now, no PC can log in to our credit union on the web. Nobody, that is, unless I go through a VPN link. Otherwise, when we (my wife and I bank together) try to access the login page, a raft of error messages appears. Only the VPN works around this weird credit union access issue, which otherwise throws up beaucoup HTTP error codes. (Explanatory text verbatim from Wikipedia):
400 Bad Request: The server cannot or will not process the request due to an apparent client error (e.g., malformed request syntax, size too large, invalid request message framing, or deceptive request routing).
401 Unauthorized: Similar to 403 Forbidden, but specifically for use when authentication is required and has failed or has not yet been provided.
403 Forbidden: The request contained valid data and was understood by the server, but the server is refusing action.
404 Not Found: The requested resource could not be found (aka “File not found” or “Page not found”).
501 Not Implemented: Server either does not recognize the request method, or it lacks the ability to fulfill the request.
502 Bad Gateway: The server was acting as a gateway or proxy and received an invalid response from the upstream server.
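To see which of those codes a given page actually throws, a quick probe from the command line helps. Here’s a minimal Python sketch using the requests library; the URL is a placeholder, not my credit union’s real login address:

```python
import requests

# Placeholder URL: substitute the login page you're testing.
URL = "https://www.example-creditunion.com/login"

try:
    # Skip redirects so we see the raw status code the server returns.
    resp = requests.get(URL, timeout=10, allow_redirects=False)
    print(f"{URL} -> HTTP {resp.status_code} ({resp.reason})")
except requests.exceptions.RequestException as err:
    # DNS failures, timeouts, and TLS errors all land here,
    # before any HTTP status code comes back.
    print(f"Request failed: {err}")
```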
How VPN Works Around Weird Credit Union Access Issue
I can only assume that address resolution for the specific login URL is somehow malformed or invalid. Changing DNS server assignments on the Windows 10 clients (in the TCP/IPv4 interface properties) does not help. When I switch to the VPN, though, it bypasses the local DNS infrastructure and uses the VPN provider’s DNS servers instead. Then we have no problems accessing the bank URL.
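That theory is easy to test: resolve the same hostname through the system’s configured resolver (the Spectrum router, in my case) and through a public DNS server, then compare answers. Here’s a minimal Python sketch along those lines; the hostname is a placeholder, and it assumes the dnspython package is installed:

```python
import socket
import dns.resolver  # pip install dnspython

HOST = "login.example-creditunion.com"  # placeholder hostname

# 1) Ask the system's configured resolver (the router, in my setup).
try:
    local = sorted({info[4][0] for info in socket.getaddrinfo(HOST, 443)})
    print(f"System resolver says: {local}")
except socket.gaierror as err:
    print(f"System resolver failed: {err}")

# 2) Ask a public DNS server directly, bypassing the router.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["8.8.8.8"]
try:
    answers = sorted(rr.address for rr in resolver.resolve(HOST, "A"))
    print(f"Google DNS (8.8.8.8) says: {answers}")
except Exception as err:
    print(f"Public resolver failed: {err}")

# If the two answers differ, or the first lookup fails outright,
# the local DNS path is the prime suspect.
```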
Now, here’s where things get interesting. I can’t remember the login credentials for the Spectrum device that acts as a Wi-Fi AP and router at the network boundary. Thus, I can’t check the DNS situation on that device, which is where DHCP tells all my Windows 10 machines to get their DNS information from. I’ve got a call into Spectrum to see if they can help me break into my router without having to do a factory reset. In the meantime, we’re using the VPN to access the credit union stuff, and plain-vanilla networking for everything else. It’s strange and unfathomable, but at least there’s a workaround.
For Want of a Nail…
Last night, I drove to the nearby Spectrum outlet and swapped my Technicolor cable modem/VoIP device for an identical replacement unit. The theory was that something about this device was behind the issue. It was sheer hell trying to get back online because Spectrum’s activation drill requires providing account, password, and other identity characteristics. I keep all that stuff in Norton Password Vault, and I couldn’t get access to that info through my iPhone, nor did I have another path onto the Internet to grab the necessary data. I eventually had to spend another 45 minutes on the phone with tech support before they FINALLY activated our Internet service, TV, and VoIP phone. It reminded me too much of Catch-22: “How can you see you’ve got flies in your eyes when you’ve got flies in your eyes?” Last night, I couldn’t see much of anything for far too long!
Because our son attends school online, doing without Internet is impossible. Thus, I ordered a 5G hotspot from Verizon last night, so we have a medium-performance fallback. They tell me the hotspot I ordered delivers about 200 Mbps downstream and 25 Mbps upstream in our neighborhood. I’ll be finding out (and making sure the fallback works) when it shows up via USPS early next week. Sigh.
Router Reset Solves Resolution Hiccup [Added 1 Day Later]
With a little more time to think about what could cause my problem, I formulated a hypothesis about the cause, and a likely fix, for my troubles. All nodes on my LAN had an issue with that one specific URL, but neither the site operator nor my ISP could replicate the problem. Thus, it had to live on the boundary between my LAN and the ISP’s aggregation network. That means only one possible culprit: the Spectrum router. It sits at my network boundary, provides DHCP to the nodes on the LAN, and acts as the DNS server for all internal nodes.
“Aha,” I thought, “I bet resetting the router will fix this issue because it flushes, then repopulates, the DNS cache.” I was right. After powering off the router, letting it sit for a minute or two, then powering it back on, our name resolution issue was gone. I’m glad to have it fixed, because it was deucedly inconvenient without credit union account access. Ultimately, it was the “VPN trick” that led me to the solution. Sigh again.
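In hindsight, a lighter-weight first step would have been flushing the clients’ own resolver caches, since Windows caches lookups locally too. It wouldn’t have touched the router’s cache (the real culprit here), but it’s cheap to try first. A minimal sketch:

```python
import subprocess

# Flush the local Windows DNS resolver cache. The router's cache is
# separate, but stale client-side entries can produce similar symptoms.
result = subprocess.run(["ipconfig", "/flushdns"],
                        capture_output=True, text=True)
print(result.stdout.strip() or result.stderr.strip())
```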
This morning, I noticed something different just after 9 AM. That’s when the usual scheduled backup job on my production desktop fires off, and about 2 minutes later the backup drive starts clunking away. Check the timestamps for the Macrium image (mrimg) files in the lead-in graphic in File Explorer. Except for today (November 10), all the jobs show a stamp in the range from 9:02 to 9:21 AM. What was different this morning? No drive clunking: the absence of the usual audible clues told me my 8TB backup drive had gone south. Sure enough, when I first checked Explorer, the drive was MIA. In fact, Disk Management showed a drive with neither a GPT nor an MBR disk layout.
After Audible Clues When 8TB Backup Drive Goes South, Time for Repairs
Luckily, I’ve got a commercial license for MiniTool Partition Wizard (MTPW), which includes both Data Recovery and Partition Recovery capabilities. So first, I let MTPW define the drive layout as GPT (as it must be for a drive bigger than 2TB). Next, I ran the program’s Partition Recovery capability. About 30 seconds later, the drive’s contents were visible in the MTPW Partition Explorer. But I still had to assign a drive letter before repairs were complete. Immediately thereafter, I ran a manual image backup using Macrium Reflect to make up for the backup I’d missed when the 8TB drive dropped out. As you can see from the most recent timestamp on the top file in the lead-in graphic, today’s belated backup is stored with all its predecessors.
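Since this whole episode began with a scheduled job aimed at a missing drive, a small pre-flight check ahead of the backup task could catch the problem sooner. Here’s a minimal sketch, using a hypothetical drive letter and folder; Task Scheduler could run it just before the image job and alert on a nonzero exit code:

```python
import shutil
import sys
from pathlib import Path

# Hypothetical backup target: adjust to your own drive letter and folder.
BACKUP_ROOT = Path("F:/Backups")

if not BACKUP_ROOT.exists():
    print(f"Backup target {BACKUP_ROOT} is missing: drive may be offline!")
    sys.exit(1)  # nonzero exit signals the scheduler to alert or skip

total, used, free = shutil.disk_usage(BACKUP_ROOT)
print(f"{BACKUP_ROOT}: {free / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")
```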
A Bit of Insurance Against Recurrence
I also finally switched in my brand-new Wavlink USB 3.0 docking station (Model: ML-ST3334U) for the old Inateck unit I’d been using. Turns out the Inateck couldn’t handle even a 4TB and an 8TB drive together. Given that I’ve had problems with this dock before, I’d been waiting for the “next fault” to force the swap. I think that’s what happened this morning. I also suspect the Inateck can’t really handle even ONE 8TB drive without power issues. The Wavlink, OTOH, is rated to handle two 8TB drives. That’s why I bought it, and why I hope this means I won’t see my big backup drive go bye-bye again soon.
But weirder things have happened on my production PC, and may happen again. As we all know, that’s just the way things sometimes go (or go south) in Windows World. Count on me to keep you posted as and when such weirdness happens.
OK then, I admit it: I just flat-out got tired of waiting. It’s been 20 days since 20H2 went GA, and my production PC still hadn’t gotten “the offer” from Windows Update. Having long ago downloaded the ISO for 20H2 using the Media Creation Tool, I mounted it and ran setup.exe. The process took almost 40 minutes from start to finish. That’s much longer than it took my PCs that did get “the offer” to finish the task: at least 4 times as long. Right now, I’m pausing for this blog post. Next, I’ll do my usual post-upgrade cleanup, now that this impatience-prompted, forced 20H2 upgrade of my production PC is done.
After Impatience Prompts Production PC Forced 20H2 Upgrade, Then What?
My usual post-upgrade cleanup routine, of course. It consists of:
Running TheBookIsClosed/Albacore’s Managed Disk Clean (mdiskclean.exe) utility to get rid of Windows.old and other stuff
Using Josh Cell’s nifty (but increasingly dated) UnCleaner tool to get rid of about 310 MB of junk files
Running Macrium Reflect to capture an image of this pristine OS update
Getting on with business as usual
Just for grins, I ran DriverStore Explorer to see if it would find any outmoded drivers. As you’d expect, everything was ship-shape. Ditto for DISM ... /analyzecomponentstore, which tells me no updates since the GA date of October 22 have left old, orphaned packages behind. And because this kind of upgrade really is like starting over, Reliability Monitor gets a clean slate (in fact, it’s “dead empty” right now):
Right after a feature upgrade (which is what happens when you install from setup.exe), Reliability Monitor is devoid of data, and runs only forward from there.
[Click image for full-sized view.]
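By the way, if you’d rather script that DISM component-store check than run it by hand, a thin wrapper does the trick. Here’s a minimal Python sketch (the DISM command itself is standard; it needs an elevated prompt, and the string match assumes English-language output):

```python
import subprocess

# Run DISM's component store analysis (requires an elevated prompt).
cmd = ["dism", "/online", "/cleanup-image", "/analyzecomponentstore"]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)

# On English-language builds, DISM reports whether cleanup is worthwhile.
if "Component Store Cleanup Recommended : Yes" in result.stdout:
    print("Cleanup recommended: consider a /startcomponentcleanup pass.")
```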
Status: 2004 to 20H2 Upgrades at Chez Tittel
This is the final machine here to transition from 2004 to 20H2, which means my upgrades are done. One profound impetus for this change came from the three new Dell PCs (two review units, and one new purchase) that showed up over the past two weeks. All of those new 11th-gen PCs got “the offer” as soon as they booted up for the first time. I know my production PC is solid and reliable, and I’ve long since worked out any driver kinks on this machine. Seeing the Dell units transition painlessly (and incredibly quickly), I bet the production PC would also get over the hump. While it worked, I can’t say it was fast. But all too often, that’s how things go here in Windows World. Stay tuned!