Tag Archives: featured

Hard Disks Remain Useful PC Storage Devices

Hmmm. I just read a disturbing story over at Gizmodo. Something of a rant from Sam Rutherford, it explains “Why I’m Finally Getting Rid of All My HDDs Forever.” I’ve been following his work for some time, and he usually has intelligent and useful things to say. This time, though, I’m opposed to his position. In fact, I still firmly believe that hard disks remain useful PC storage devices. Quick count: I have at least 10 of them here in my office, at capacities ranging from 1 TB to 8 TB.

Why Say: Hard Disks Remain Useful PC Storage Devices?

If I understand his complaint, Mr. Rutherford is giving up on HDDs (Hard Disk Drives) because several of them gave up on him recently. One failure cost him 2 TB of data, some of it precious. I say: Boo hoo!

The lead-in graphic for this story comes from my production PC running a freeware program named CrystalDiskInfo. (Note: grab the Standard Edition; the others have ads and bundleware.) Notice the top of that display lists Windows drives C:, J:, K:, G:, D:, I:, F:, and H:. In fact, all of them show blue dots and the word “Good” as well. These elements provide crude measures of disk health for both HDDs and SSDs. Of the 8 drives shown, 3 are SSDs, 4 are HDDs, and 1 is a so-called hybrid HDD; all are healthy.

Mr. Rutherford could have used this tool, or others like it, of which there are many (see these Carl Chao and WindowsReport survey pieces, for example). Then he would have seen the warning signs before his problem HDDs failed. Plus, he himself admits he erred in not backing up the drive whose failure caused data loss. I check all my drives monthly (both SSDs and HDDs), looking for signs of impending trouble, as part of routine maintenance.

Backup, Backup and More Backup

SSDs are not mechanical devices, so they don’t suffer mechanical failures. Over the 10 years or so I’ve owned SSDs (perhaps a couple of dozen by now) not one has ever failed on me. Over the 36 years I’ve owned HDDs, I’ve had half-a-dozen fail out of the hundreds I’ve used. But it’s inevitable that I will suffer an SSD failure sometime, even though I’ve yet to experience one personally. Why? Because all devices fail, given enough time and use.

Personally, I think HDDs still have a place in my storage hierarchy. I just bought two 8 TB drives earlier this year, for about $165 each. That's far cheaper storage than even the cheapest of SSDs on today's market, and much more capacity in a single device than I'd want to purchase in solid-state form. (Note: a 7.68 TB Samsung 870 QVO SSD costs $750 at Newegg right now. Thus it aims at those with more money than sense, or those with cash-generating workflows that can actually cover such costs.)

The real secret to protecting data is multiple backups. I bought those 8 TB drives to back up all my other drives, so they're my second local line of defense. I also pay for 5 TB of online storage at OneDrive and Dropbox, and have two extra copies of production OSes, key files and archives in the cloud as well. I back up my production PCs daily, my test PCs weekly, and key bits and pieces to the cloud weekly as well. Basta!


Interesting Single-Builder SSD Benefits

Just read an absolutely fascinating story at Tom’s Hardware by Sean Webster. Entitled Not-So-Solid State: SSD Makers Swap Parts Without Telling Us, it’s worth a read. The main point it makes is that many builders of SSDs — most notably Adata and its XPG brand — build SSDs using parts from multiple makers. Their products change over time as the availability of component parts such as controllers and flash memory chips shifts. In the case the story lays out, a highly recommended drive suffered performance losses when its original, faster parts were replaced with newer, slower ones. This leads me to understand there can be interesting single-builder SSD benefits.

Where Interesting Single-Builder SSD Benefits Come From

Samsung, chief among SSD makers, builds all of the parts that go onto its SSDs. Thus, it controls the mix of elements on those devices completely. When constituent parts change, the company always changes its model numbers so that buyers know there’s “something different” on board. Tom’s points to practices from WD, Kingston, Crucial and other makers to indicate that the majority do indeed change model numbers as constituent parts change, too. Thus, the most interesting single-builder SSD benefits clearly come from end-to-end supply chain control. Third-party builders don’t have that luxury, because they buy parts from multiple suppliers.

Where does all this leave me? In fact, I bought an Adata/XPG SSD for my Ventoy “Big Drive.” It’s a 256 GB SX8200 Pro model, the very item that Tom’s Hardware finds fault with in the afore-linked story. Good thing I only use this device for storing and occasionally loading Windows ISOs. It’s new enough that I’m sure it’s subject to the flaws that Tom’s uncovered. If I were using it as a boot or internal SSD, I’d be irate. As it is, running it over USB 3.1 means I’d never come close to the theoretical maximum read/write rates anyway.

The Moral of the Story

Ironically, this XPG device is one of two non-Samsung NVMe devices I currently own. The other such device is a Toshiba that came pre-installed in a cheap-o purchase of a year-old Lenovo X380 Yoga laptop. I wasn’t expecting top-of-the-line components because I paid under 50% of the unit’s original MSRP. But from now on, I’m sticking with Samsung NVMe drives, so I can avoid performance dings from covert or undisclosed parts changes in the SSDs I buy and use.

Who knew this kind of thing might happen? I certainly didn’t and I’m grateful to Tom’s for calling it to the world’s attention. It will certainly guide my future NVMe SSD buying habits…


DIY Desktops vs Prefab Still Favor DIY

I got started building PCs back in the mid-1990s when I hired a talented young man who worked in the PC parts department at Fry’s to come work with me. He showed me the ins and outs of system construction. Along the way I learned that careful parts selection could indeed deliver a faster, more capable system for less than the price of an OEM pre-fab desktop. That’s why, IMO, DIY desktops vs prefab still favor DIY, 25 years on.

Why Assert: DIY Desktops vs Prefab Still Favor DIY

As I write this item, it’s Cyber Monday, and we’re in the market for another desktop here at Chez Tittel. My son’s PC is getting older — i7-6700 and Z170 vintage, now 5 years old — so it’s time to start planning a replacement. My findings show DIY still gets you more than prefab, as I will illustrate.

Doing the DIY Thing

Given that major deals are available today, I decided to see what I could get for around $2K, either pre-fab or DIY. I’ve already got a case and plenty of HDD storage, so what I need is a PC with a capable CPU, 32 GB RAM, a 1 TB NVMe SSD for the boot/system drive, and a next-to-top-rung AMD or Nvidia graphics card. I found some motherboard/CPU bundles for about $550, memory for about $115, an Nvidia 2070 for $600, a Samsung 980 Pro 1 TB for $230, and a Seasonic 650 Platinum PSU for $130, for a total of $1,625. Even if I price in the case (Antec P8 for $90) and an 8 TB drive ($165), total pricing comes in at $1,880.

Looking at Prefab options

Looking around online at Newegg or Amazon with a $1,900 budget (I searched a range of $1,850 to $2,000), I mostly came up with 16 GB RAM configurations, 4- to 8-core CPUs, lower-end GPUs (e.g. Nvidia 1060 or 1070X), 512 GB to 1 TB NVMe SSDs (at least one generation back from the Samsung 980 Pro), and 1 TB HDD storage. That’s quite a bit less oomph than the same DIY budget buys, as you’d expect. I did see some pretty amazing refurbished deals on one- or two-generations-back (mostly Intel) CPUs and kit. It still looks like refurb is the way to go if you want to buy an OEM desktop, especially if it comes straight from the OEM with a like-new warranty (no system warranties on DIY builds; only component-level warranties apply).

As an example of a killer refurb deal, here’s an HP Z840 workstation with two Xeon 8-core CPUs, 256 GB DDR4 RAM (!), 1TB SSD + 1 TB HDD, and a Quadro K4000 professional graphics card for $1,750. Now that’s pretty tempting…

This bad boy comes with 16 cores and 256GB RAM. Zounds!

I’m still sold on DIY

When it’s all said and done, I guess I’m OCD enough that I like picking all my own parts, and putting my own systems together. I do think you get more for your money, but you also have to have the time, the patience and the knowledge to put things together and to troubleshoot and support them for yourself. I realize that puts me in a minority, but I can live with that.


Russinovich Showcases Monster Azure VMs

Trolling through Twitter yesterday, I found a tweet from Azure CTO Mark Russinovich. I’ll quote the text verbatim: “Like I mentioned, Notepad really screams on the Azure 24TB Mega Godzilla Beast VM.” Ultimately this thread leads to an Ignite presentation from October 2020. Therein, Russinovich showcases monster Azure VMs.

When Russinovich Showcases Monster Azure VMs, What’s the Point?

From left (older) to right (newer), the lead-in graphic shows a historical retrospective of what’s counted as a “monster” among memory-optimized servers over time. Itty-bitty boxes at far left started out with Intel and AMD Gen7 versions, with 512 GB and 768 GB of RAM respectively. Along came Godzilla after that, with 768 GB of RAM and more cores. Next came the Beast, with 4 TB RAM and 64 cores. After that: Beast V2, with 224 cores and 12 TB RAM. The current king of Azure monsters is Mega-Godzilla-Beast. It has a whopping 448 cores and 24 TB RAM. No wonder Notepad really screams. So does everything else, including the huge in-memory SAP HANA workloads for which this VM is intended.

I took Russinovich’s “really screams” Notepad remark as tongue-in-cheek when I saw it. Viewing his Ignite video proves that point in spades. What’s fascinating, though, is that some of the highest-end Azure users are already pushing Microsoft for an even bigger monster. They’re ready to tackle even bigger and more demanding workloads than Mega-Godzilla-Beast can handle.

Who Needs Mega-Monster VMs?

This rampant upscaling of resources is no mere idle fancy. Indeed, there are large companies and organizations that need huge aggregations of compute, memory, storage and networking to handle certain specialized workloads.

This also gives me an insight into the ongoing and increasing allure of the cloud. Most datacenters simply couldn’t put the technologies together to create such mega-monster VMs for themselves. The only place to find them is in the cloud. Further, the only way to afford them is to use them when you need them, and turn them off right away when the workload is done.

Amazing!


Busy Times for Windows 10 But…

Attentive readers will notice I haven’t posted much this week. This is deliberate. I’m taking most of the week off from blogging here at EdTittel.com. Consider this post fair warning: these are busy times for Windows 10 but yours truly is pausing for a few days to recharge his batteries and spend some time with the family.

Busy Times for Windows 10 But I’m Taking a Short Break

Over the past few weeks, I’ve worked more extra hours than normal. The Wiley Dummies custom publications group and ActualTech Media’s content machine have thrown a bunch of hurry-up projects my way. Frankly, I’ve been struggling to keep up with paying gigs. Not a bad problem to have in this time of pandemic and pandemonium. I guess I should be grateful! Good thing our US Thanksgiving holiday tomorrow will give me just the opportunity I need to voice my appreciation to our hunkered-down family crew here at Chez Tittel!

That’s not to say there hasn’t been plenty going on with Windows 10. Just this morning, I’ve seen juicy rumors about upcoming 10X features — including something fascinating called “Cloud PC” — at WinAero and WindowsLatest. We’ve also seen new releases in the Dev Channel (20262.1010, mostly just a servicing item) and the Beta/Insider Preview Channels (19042.662, with oodles and scads of fixes and tweaks). Of all the sites I follow, WindowsLatest seems to be the most on top of bugs and gotchas in 20H2, and has been reporting them in some volume lately.

As for me, I’ll be back on the beat on Friday, November 27. Lord knows, I plan to have a surfeit of calories to work off from epic consumption of turkey, all the trimmings, and pumpkin pie. In the meantime for those readers who will also be on holiday tomorrow, I hope you enjoy yours as much as I plan to enjoy mine. For the rest of you working schmoes, I hope you’ll take pleasure as and when it comes your way. Best wishes to one and all, regardless.

–Ed–


20H2 Alters Alt+Tab Experience

OK, then: I get it. When you run Windows 10 20H2, the OS handles Alt+Tab differently when Edge is running. Thus, when I say “20H2 alters Alt+Tab experience,” I mean that it cycles through all open Edge tabs as you keep repeating that key combination. This is a little disconcerting, but something I guess I can get used to.

Exactly How 20H2 Alters Alt+Tab Experience

Prior to 20H2, if you had three applications open, striking Alt+Tab once would take you from the current application to whichever was next in the Windows sequence of open apps. Strike it again to get the third app, and again to cycle back to the start.

In 20H2, if one of the open apps is Edge, and it has multiple tabs open, things change. When you get to Edge, you’ll transition from the first (or currently open) tab to the next tab in sequence. This continues until you cycle back to the first tab you visited. Whatever comes up next will be the next app in the Windows sequence, at which point things continue as always.

A Possible Alt+Tab Gotcha?

Mayank Parmar, of Windows Latest, reports that some 20H2 users may find the Alt+Tab sequence disarranged after they upgrade to this new version. He doesn’t say if it applies to upgrades only, or if clean installs qualify as well. Either way, the symptoms are that the order of apps (and tabs) is inconsistent. In addition, stopping the Alt+Tab sequence on App 2 in a 1-2-3-4 sequence may drop the user into App 3, instead of App 2 as users expect it to do.

I haven’t been able to replicate this error on any of my 20H2 machines. But if you visit Feedback Hub and search on “Alt+Tab 20H2” you’ll see the top three resulting problem reports all talk their way around this issue. MS claims this has been addressed in Beta and Release Preview channel versions already. It’s not yet clear when that fix will make it to Windows Update, but it should be “coming soon.” Stay tuned, and I’ll let you know when that happens.


Pluton Enacts Prego CPU Philosophy

Here’s a blast from the past. In 1984, jarred spaghetti sauce maker Prego immortalized the phrase “It’s in there!” for its products. (Note: the link is to a YouTube copy of that very same TV advertisement.) But the tag line lives on, and comes with occasionally interesting applications. It helped me understand that Microsoft’s introduction of Pluton enacts Prego CPU philosophy.

What in Heck Does “Pluton Enacts Prego CPU Philosophy” Mean?

It means that functions currently associated with a separate chip called the “Trusted Platform Module” (aka TPM) move onboard the CPU die. That’s why I’m stuck on the Prego tag line “It’s in there!” It succinctly sums up what Pluton is and does.

On November 17, MS Director of Enterprise and OS Security David Weston wrote a post to the Microsoft Security blog. It explains Pluton nicely. The post is entitled “Meet the Microsoft Pluton processor — the security chip designed for the future of Windows PCs.” Therein, Weston reveals the notion of a “Pluton Processor” as something of a misnomer — but a useful one. Here’s what he says to help explain Pluton, already “pioneered in Xbox and Azure Sphere”:

Our vision for the future of Windows PCs is security at the very core, built into the CPU, where hardware and software are tightly integrated in a unified approach designed to eliminate entire vectors of attack. This revolutionary security processor design will make it significantly more difficult for attackers to hide beneath the operating system, and improve our ability to guard against physical attacks, prevent the theft of credential and encryption keys, and provide the ability to recover from software bugs.

Thus, Pluton is not really a processor per se. It’s a set of circuitry included on the die and tightly integrated into the CPU itself. This prevents attacks on communications lanes between a physically disjoint TPM chip and the CPU.

There’s a Scare Factor There

Apparently, recent research shows that the bus interface between TPM and CPU “provides the ability to share information between the main CPU and security processor…” At the same time, “…it also provides an opportunity for attackers to steal or modify information in-transit using a physical attack.” (Note: the preceding link takes readers to a Pulse Security research paper. It explains how sniffing attacks against a TPM permit BitLocker key extraction, used to read an encrypted drive.)

The Pulse Security paper describes ways to boost security to foil such an attack. But MS apparently took the work very seriously. In fact, it introduced Pluton to make communications lanes between CPU and a security processor impervious to attack.

Can Pluton Boost Windows PC Security?

Sure it can. It will indeed make sniffing attacks like those Pulse Security describes nearly impossible. And it should usher in a new, more secure approach to computing. This applies directly to handling “credentials, user identities, encryption keys, and personal data” (Weston’s words).

The real key, however, is that MS has all of the Windows CPU makers on board with Pluton. That means AMD, Intel and Qualcomm. It will be interesting to see how long it takes for them to incorporate Pluton into their CPUs. We’ll wait awhile before the first Pluton-bearing chips hit the marketplace. I’m betting that Pluton will show up in chips for both Windows Server and client OSes as well (that’s not explicit in Weston’s post).

My best guess is that we’re probably two generations out. For all three makers of CPUs mentioned, it’s likely that their next-gen designs are too far along to incorporate the redesign and layout rework that incorporating a security facility on the die will require. That’s why it’s more likely two (or more) generations out, IMO. Stay tuned, and I’ll keep you posted.


20H2 RDP Mystery Remains Unsolved Until …

I’ve been raving about the SFF Dell Optiplex 7080 Micro a fair amount lately. I remain convinced it’s a good purchase and will be a great machine for long-term use. That said, there is the proverbial “one thing” that lets me know, for all its glories, it’s still a Windows PC. I’ve been dealing with an RDP mystery — as shown in the lead-in graphic for this story — that actually affects RDP traffic in both directions. This 20H2 RDP mystery remained unsolved while all my troubleshooting efforts failed.

Read on, though: I did eventually figure this out, and get RDP working. It turned out to be a basic and obvious oversight on my part. Sigh.

What Do You Mean: 20H2 RDP Mystery Remains Unsolved?

Despite chasing down a large laundry list of things to check and set, I get password-related errors when trying to RDP into or out of the 7080 Micro. The lead-in graphic shows what happens when I try to RDP into the box. When I try to RDP out of the box, I get an out-and-out invalid-password error (“may be expired”) instead.

Obviously, something funky is up with authentication on this Win10 install, because when I try to access the device through the File Explorer network connection, I get a request for network credentials, too. Again, presenting valid credentials doesn’t work; I see a “not accessible” error message instead.

Here’s the list of what I’ve tried so far:

  1. Double-checked Remote Access is enabled.
  2. Relaxed all relevant settings in Advanced Network Sharing for Private, Guest/Public, and All Networks categories.
  3. Enabled all Remote Access checkboxes in Defender Firewall settings.
  4. Ran the Network Troubleshooter.
  5. Ran the Microsoft Support and Recovery Assistant.

It’s the Account, Stupid!

After noodling about with this for a couple of hours, I realized that I had defined a local account as admin. Worse yet, I had not promoted my Microsoft Account on the Optiplex 7080 Micro from ordinary user to administrator.

Because I was using my MS account credentials to attempt network login and access, I didn’t have permission to do the password lookups in LSASS needed to make the process work. Once I promoted that account to admin level, everything started working.

Sheesh! Talk about an obvious mistake. As with many problems with Windows 10, this one turns out to be entirely self-inflicted. At least, I know who to blame!


Dell 7080 Micro Performance Amazes

Well, shut the front door, please! Just for grins I started running some of my desultory benchmarks and speed tests on the Dell Micro 7080 I just bought to replace the old mini-ITX box. When you see the numbers and screencaps I’ll be sharing in the following ‘graphs, you’ll understand why my title for this item is “Dell 7080 Micro Performance Amazes.”

Why say: Dell 7080 Micro Performance Amazes?

The numbers do not lie. They’re all pretty incredible, too. Here are some start/boot numbers, with the 7080 Micro on the left and the (much more expensive) P-5550 numbers on the right:

Table 1: Shutdown, Cold Boot, Restart Times
Description              Action     7080 Micro   P-5550
Desktop to machine off   Shutdown   7.92 sec     13.02 sec
Turned off to desktop    Cold boot  10.46 sec    16.01 sec
Desktop to desktop       Restart    21.26 sec    30.01 sec

Across the board, then, the $1,200 7080 Micro is significantly faster than the $4K-plus Precision 5550 Workstation. Of course, this takes no account of the more expensive unit’s discrete GPU. The 7080 Micro simply relies on its built-in Intel UHD Graphics 630 circuitry to render bits on its Dell 2717D UltraSharp monitor, and does so reasonably well. But this comparison is unfair to the P-5550, because UHD 630 is no match for a dedicated GPU, especially a professional-grade one like the P-5550’s Nvidia Quadro T2000.

But Wait, There’s More…

The CrystalDiskMark results are also mostly faster than those from the P-5550. The lead-in screenshot shows the 7080 Micro’s CDM results. Compare those to the P-5550’s and you get the following, where the better number in each row shows the 7080 Micro beating the P-5550 in 6 out of 8 categories.

Table 2: CrystalDiskMark Comparisons (MB/s)
CDM Label      Action   7080 Micro   P-5550
SEQ1M/Q8T1     Read     3364.80      3373.64
               Write    2790.49      2334.67
SEQ1M/Q1T1     Read     2147.04      1716.39
               Write    2800.90      2056.88
RND4K/Q32T16   Read     1972.38      630.64
               Write    2152.12      358.26
RND4K/Q1T1     Read     60.54        41.21
               Write    108.21       119.34

I’m particularly impressed with the 4K random numbers at a queue depth of 32 and a thread count of 16, where the 7080 Micro kills the P-5550 (read is more than 3 times faster; write is more than 6 times faster). With queue depth and thread count of 1 each, it’s a split decision: the 7080 Micro is almost 50% faster at reads, and the P-5550 is about 10% faster at writes. Even where the P-5550 comes out ahead, it leads by about 10% at most. To me, that puts the 7080 Micro way, way ahead of the P-5550, especially considering the price differential.

Am I happy with my 7080 Micro purchase? So far, heck yes! More to come as I have more time to do benchmarking. This week is jammed up, but maybe Thanksgiving week I’ll find more time. Stay tuned.


Astonishing Dell Precision 5550 Workstation Encounter

OK, then. Just yesterday, I noticed that Windows Update offered the 20H2 upgrade/enablement package to the Dell review unit I’ve got. What happened next surely qualifies as an astonishing Dell Precision 5550 Workstation encounter. Bottom line: it took less than TWO MINUTES to download, install and process the enablement package for 20H2. This is easily 3 times faster than on any other machine on which I’ve run that package, including my brand-new Dell 7080 Micro PC. I knew this machine was fast and capable, but this takes the cake. Really.

It’s odd to see 16 hyperthreads/8 cores show up on a laptop. Apparently, they’re all ready (if not actually thirsty) for work.
[Image is shown 2x actual size for readability. CPU Meter Gadget.]

After Astonishing Dell Precision 5550 Workstation Encounter, Then What?

Good question! I need to run a bunch of benchmarks on this system, then gather up those results for publication here. But in the meantime, this system has taken everything I’ve thrown at it, and simply KILLED it. As you can see from the preceding CPU Meter gadget screencap, this machine comes equipped with an i7-10875H CPU and 32 GB of RAM. So far, I haven’t been able to slow it down much, if at all, by throwing work at it. Desultory benchmarks show it’s far and away the fastest system in my house right now. CrystalDiskMark, for example, turns in some pretty impressive read/write numbers:

By comparison, CrystalDiskMark results from my production desktop, with its i7-6700, Asrock Z170 Extreme7+, and a Samsung 950 Pro 512GB SSD, are mostly lower (the percentages below express the desktop’s numbers as a fraction of the P-5550’s). The top line reads 1954 (read) and 1459 (write): 58% and 62%, respectively. The second line reads 1550 (read) and 855 (write): 90% and 41%, respectively. This changes in line 3, which reads 1230 (read) and 391 (write): 194% and 109%, respectively. The bottom line reads 42.49 (read) and 98.99 (write): 103% and 83%, respectively — nearly a dead heat on reads. There’s no question that newer-generation M.2 PCIe technology is faster on bulk reads and writes. And as you’d expect, random reads and writes being shorter and scattered about, those metrics don’t vary overmuch.

Performance Theory, As Usual, Beats Practice

According to its specifications, the P-5550’s SSD is an SK Hynix PC601A 1TB SSD. It’s a PCIe Gen3 x4 NVMe device, with a theoretical maximum of roughly 985 MB/sec per lane, or about 3,940 MB/sec across all four lanes. Actual performance is always slower, as the top-line numbers from the preceding CrystalDiskMark output show. But it’s not half-bad and is, in fact, the best-performing NVMe SSD currently at my disposal. At over US$4K for this laptop as configured, it’s pretty pricey: but you do get a lot for the money.

The Cold Boot/Restart Numbers

Here’s a set of average times, taken across three sets of measurements for typical PC on/off maneuvers:

+ From desktop to machine turned off (shutdown): 13.02 sec
+ From turned off to desktop prompt (cold boot): 16.01 sec
+ From desktop to desktop (restart): 30.01 sec

Across the rest of my stable of PCs, these times are at least 50% faster than anything else I’ve got. I still don’t have these measurements for the Dell 7080 Micro PCs, though. Given that they’re also brand-new and have similar CPUs and NVMe drives, I’m expecting numbers more like than unlike the preceding ones. Stay tuned! I’ll report on that soon in another post.

For the moment, suffice it to say that the “Workstation” in the Precision 5550 product name is not just wishful thinking. This system delivers speed, graphics and compute power, in a beautiful, compact package.