Buy AMD. Got it!
I’ve been buying AMD for – holy shit – 25 years now, and have never once regretted it. I don’t consider myself a fanboi; I just (a) prefer having the best performance-per-dollar rather than best performance outright, and (b) like rooting for the underdog.
But if Intel keeps fucking up like this, I might have to switch on grounds of (b)!
(Realistically I’d be more likely to switch to ARM or even RISC-V, though. Even if Intel became an underdog, my memory of their anti-competitive and anti-consumer behavior remains long.)
Same here. I hate Intel so much, I won’t even work there, despite it being my current industry and having been headhunted by their recruiter. It was so satisfying to tell them to go pound sand.
It’s good to feel proud of where you work. I’m not too sure on whether or not Intel treats their workers good though, do they?
I did not sign with them after I had some issues with the contract provided, and the resulting interactions with my future manager. I’d say at least for someone from Europe the company culture is less than ideal from that encounter.
…humblebrag…
I can see it might appear that way if you have no knowledge or experience with recruitment or recruiters. It’s especially common in my field as it can be hard to get qualified people.
…assssssholeeeee…
I’ve been on AMD and ATi since the Athlon 64 days on the desktop.
Laptops are always Intel, simply because that’s what I can find, even if every time I scour the market extensively.
Honestly, I was, and am, an AMD fan, but if you went back a few years you would not have wanted an AMD laptop. I had one and it was truly awful.
Battery issues. Low processing power. App crashes and video playback issues. And this was on a more expensive one with a dedicated GPU…
And then Ryzen came out. You can get AMD laptops now, and I mean that both in the sense that they exist and that they’re actually nice. (Have one.)
But in 2013 it was Intel or you were better off with nothing.
Indeed, the Ryzen laptops are very nice! I have one (the 4800H) and it lasts ~8 hours on battery, far more than what I expected from laptops of this performance level. My last laptop barely achieved 4 hours of battery life.
I had stability issues in the first year but after one of the BIOS updates it has been smooth as butter.
Yeah I never really considered them before Ryzen, but even afterwards, it’s been very difficult to find one with the specs I want.
Sorry, but after the amazing Athlon X2, the Core and Core 2 (then i-series) lines fuckin’ wrecked AMD for YEARS. Ryzen took the belt back, but AMD was absolutely wrecked through the Core and i-series era.
Source: computer building company and also history
tl;dr: AMD sucked ass for value and performance between Core 2 and Ryzen, then became amazing again after Ryzen was released.
AMD “bulldozer” architecture CPUs were indeed pretty bad compared to Intel Core 2, but they were also really cheap.
I ran an AMD Phenom II X4 955 Black Edition for ~5 years, then gave it to a friend who ran it for another 5 years. We overclocked the hell out of it, up to 4 GHz, and there is no way you were getting gaming performance that good from Intel dollar-for-dollar, so no, AMD did not suck from Core 2 on. You need to shift that timeframe up to Bulldozer, and even then, Bulldozer and the other FX CPUs ended up aging better than their Intel counterparts, and at their adjusted prices they were at least reasonable products.
Doesn’t change the fact AMD lied about Bulldozer, nor does it change Intel using its market leader position to release single-digit performance increases for a decade and strip everything i5 and lower down to artificially make i7 more valuable. Funny how easy it is to forget how shit it was to be a PC gamer then after two crypto booms.
I’ve had nothing but issues with some computers, laptops, etc… once I discovered the common factor was Intel, I haven’t had a single problem with any of my devices since. AMD all the way for CPUs.
(c) upgradability and not having motherboards be disposable on purpose
I hate the way Intel is going, but I’ve been using Intel chips for over 30 years and never had an issue.
So your statement is kind of pointless: it’s such a small data set that it’s irrelevant, nothing to draw any conclusion from.
I’m with you on all this. Fuck Intel.
Genuinely, I’ve also been an AMD buyer since I started building 12 years ago. I started out as a fanboy but mellowed out over the years. I know the old FX chips were garbage, but they’re what I started on, and I genuinely like the four generations of Intel since Ivy Bridge. But between the affordability and being able to upgrade without changing the motherboard every generation, I’ve just been using Ryzen all these years.
ARM looking pretty good too these days
ARM is well primed to take a lot of the server market from Intel. Amazon is already very committed to making its Graviton ARM CPU its main CPU, and Amazon alone owns the lion’s share of the server market.
For consumers, ARM adoption depends entirely on the respective operating systems and compatibility getting ironed out.
Yeah, I manage the infrastructure for almost 150 WordPress sites, and I moved them all to ARM servers a while ago, because they’re 10% or 20% cheaper on AWS.
Websites are rarely bottlenecked by the CPU, so that power efficiency is very significant.
I really think that most people who think that they want ARM machines are wrong, at least given the state of things in 2024. Like, maybe you use Linux…but do you want to run x86 Windows binary-only games? Even if you can get 'em running, you’ve lost the power efficiency. What’s hardware support like? Do you want to be able to buy other components? If you like stuff like that Framework laptop, which seems popular on here, an SoC is heading in the opposite direction of that – an all-in-one, non-expandable manufacturer-specified system.
But yours is a legit application. A non-CPU-constrained datacenter application running open-source software compiled against ARM, where someone else has validated that the hardware is all good for the OS.
I would not go ARM for a desktop or laptop as things stand, though.
If you didn’t want to game on your laptop, would an ARM device not be better for office work? Considering they’re quiet and their battery lasts forever.
ARM chips aren’t better at power efficiency than x86 above 10 or 15 W or so. Apple is getting a lot out of them because of TSMC’s 3 nm process; even the upcoming AMD 9000 series will only be on TSMC 4 nm.
ARM is great for having more than one competent company in the market, though.
Do you have a source for that? It seems a bit hard to believe.
If you look at pfSense/OPNsense hardware recommendations, it’s almost all chips like the Intel N5105 (10 W TDP, though admittedly “TDP” is itself a messy term) or J4125 (also 10 W TDP). Using ARM hardware comes up a lot in the community forums, and it’s one of those questions that will get you flamed for not checking Google first. The power usage benefits of switching to ARM just aren’t there.
There is the Netgate 1100, which runs ARM on a proprietary build of pfSense. The community has largely ignored it in favor of Intel chips: there isn’t much of a price advantage, and the performance is lackluster.
That said, there’s lots that you can do with a sub-10W chip, and x86 has nothing modern there.
Personally, I cobbled together an OPNsense firewall out of some old desktop parts I had on hand. Power usage is a bit higher, but not so much that I care. I would like a more viable high-end ARM option, though, just because I don’t want x86 to be the only option.
As long as the apps all work. So much stuff is browser-based now, but something always turns up that doesn’t work: mandatory timesheet software, a bespoke tool, etc.
But isn’t there x86 emulation for those edge cases?
Depends if you trust it to actually work.
Linux works great on ARM, I just want something similar to most mini-ITX boards (4x SATA, 2x mini-PCIe, and RAM slots), and I’ll convert my DIY NAS to ARM. But there just isn’t anything between RAM-limited SBCs and datacenter ARM boards.
ARM is a mixed bag. IIRC the GPU on the Snapdragon X Elite is currently disabled on Linux, and consumer support depends on how well the hardware manufacturer supports it if it’s a closed-source driver. In Qualcomm’s case, the history doesn’t look great.
Eh, if they give me a PCIe slot, I’m happy to use that in the meantime. My current NAS uses an old NVIDIA GPU, so I’d just move that over.
Apparently (from another comment on a thread about ARM a few weeks ago) consumer GPU BIOSes contain some x86 instructions that get run on the CPU, so getting full support on ARM isn’t as simple as swapping the cards over to a new motherboard. There are ways to hack around it (some people got AMD GPUs booting on a Raspberry Pi 5 using its PCIe lanes and a bunch of adapters), but it’s pretty unreliable.
Yeah, there are some software issues that need to be resolved, but the bigger issue AFAIK is having the hardware to handle it. The few ARM devices with a PCIe slot often don’t fully implement the spec, such as power delivery. Because of that, driver work just doesn’t happen, because nobody can realistically use it.
If they provide a proper PCIe slot (8-16 lanes, on-spec power delivery, etc), getting the drivers updated should be relatively easy (months, not years).
Datacenter CPUs are actually really good for NASes, considering the explosion of NVMe storage. Most consumer CPUs are limited to just 5 M.2 drives and a 10 Gbit NIC, but a server mobo will open up for 10+ drives. Something cheap like a first-gen Epyc motherboard gives you a ton of flexibility and speed if you’re OK with the idle power consumption.
I’m kind of not. I don’t need a ton of drives, and I certainly don’t need them to be NVMe. I just want 2-4 SATA drives for storage and 1-2 NVMe drives for boot, and enough RAM to run a bunch of services w/o having to worry about swapping. Right now my Ryzen 1700 is doing a fine job, but I’d be willing to sacrifice some performance for energy savings.
Arm servers are slow, and arm laptops are not compatible with Linux.
Eh, it looks like ARM laptops are coming along. I give it a year or so for the process to be smooth.
For servers, AWS Graviton seems to be pretty solid. I honestly don’t need top performance and could probably get away with a Quartz64 SBC; I just don’t want to worry about RAM and would really like 16 GB. I just need to serve a dozen or so Docker containers with really low load, and I want to do that with as little power as I can get away with, for minimum noise. It doesn’t need to transcode or anything.
ARM laptops don’t support ACPI, which makes them really hard for Linux to support. Having to go back two years to find a laptop with Wi-Fi and GPU support on Linux isn’t practical. If Qualcomm and Apple officially supported Linux like Intel and AMD do, it would be a different story. As it is right now, even Android phones are forced to use closed-source blobs just to boot.
Those numbers from Amazon are misleading. Linus Torvalds actually builds on an Ampere machine, but they don’t actually do that well in benchmarks.
https://www.phoronix.com/review/graviton4-96-core
AWS’ benchmark is about lambda functions, not compile workloads, which are quite different beasts. Lambdas are about running a lot of small (so task switching), independent scripts, whereas compiling is about running heavy CPU workloads (so feeding caches). Server workloads tend to be more of the former than the latter.
That said, I’m far less interested in raw performance and way more interested in power efficiency and idle and low utilization. I’m very rarely going to be pushing any kind of meaningful load on it, and when I do, I don’t mind if it takes a little longer, provided I’m saving a lot of electricity in the meantime.
Man so many SBCs come so close to what you’re looking for but no one has that level of I/O. I was just looking at the ZimaBlade / ZimaBoard and they don’t quite get there either: 2 x SATA and a PCIe 2.0 x4. ZimaBlade has Thunderbolt 4, maybe you can squeeze a few more drives in there with a separate power supply? Seems mildly annoying but on the other hand, their SBCs only draw like 10 watts.
Not sure what your application is but if you’re open to clustering them that could be an option.
Here’s my actual requirements:
4 NAS HDD drives - will be SATA, but could use PCIe expansion; currently have 2x 8 TB 3.5" HDDs, want flexibility to add 2 more
minimum CPU performance - was fine on my Phenom II X4, so not a high bar, but the Phenom II X4 has better single-core than the ZimaBlade
Services:
I/O heavy - Jellyfin (no live transcoding), Collabora (and Nextcloud/ownCloud), Samba, etc.
CPU heavy - CI/CD for Rust projects (relatively infrequent and not a hard req), gaming servers (Minecraft for now), speech processing (maybe? looking to build an Alexa alternative)
others - Actual Budget, Vaultwarden, Home Assistant
The ZimaBlade is probably good enough (I’d need to figure out SATA power); I’ll have to look at some performance numbers. I’m a little worried, since it seems to be worse than my old Phenom II X4, which was the previous CPU in this machine. I’m currently using my old Ryzen 1700, but I’d be fine downgrading a bit if it meant significantly lower power usage. I’d really like to put this under my bed, and it needs to be very quiet to do that.
Those are tough requirements to meet; I’m not sure there is a low-power CPU that can do it all. You would likely need to cluster some devices, but that means you need a separate NAS anyway, and that kind of defeats the purpose in your case.
Servers being slow is usually fine. They’re already at way lower clocks than consumer chips because almost all that matters is power efficiency.
RISC-V isn’t there yet, but it’s moving in the right direction. A completely open architecture is something many of us have wanted for ages. It’s worth keeping an eye on.
If there were decent homelab ARM CPUs, I’d be all over that. But everything is either memory-limited (e.g. max 8 GB) or datacenter-grade (so $$$$). I want something like a Snapdragon with 4x SATA, 2x M.2, 2+ USB-C, and support for 16 GB+ RAM in a mini-ITX form factor. Give it to me for $200-400, and I’ll buy it if it can beat my current NAS in power efficiency (not hard; it’s a Ryzen 1700).
It’s not quite there for desktop use yet, but it probably won’t be too much longer.
I hope so. I accidentally advised a client to snatch up a Snapdragon Surface (because they had to have a dog shit Surface), and I hadn’t realized that a lot of shit doesn’t quite work yet. Most of it does, which is awesome, but it needs to pick up the pace.
Depends on the desktop. I have a NanoPC-T4, originally as a set-top box (that’s what the RK3399 was designed for; it has a beast of a VPU), now on light server and WLAN AP duty, and it’s plenty fast enough for a browser and office work. Provided you give it an SSD, that is.
Speaking of desktops, though, the graphics driver situation is atrocious. There’s been movement since I last had a monitor hooked up to it, but let’s just say the Linux blob that came with it could do GLES2, while the Android driver does Vulkan. Presumably because ARM wants Rockchip to pay per fucking feature, per OS, for Mali drivers.
Oh, the VPU that I mentioned? As said, a beast: decodes 4K H.264 at 60 Hz, very good driver support, well-documented instruction set, and mpv supports it out of the box. But because the Mali drivers are shit, you only get an overlay, no window-system integration, because it can’t paint to GLES2 textures. Throwback to the 90s.
Sidenote: some madlads got a dedicated GPU running on the thing. M.2-to-PCIe adapter, and presumably a lot of duct-tape code.
GPU support is a real mess. Those ARM SoCs are intended for embedded systems, not PCs. None of the manufacturers want to release an open-source driver, and the blobs typically don’t work with a recent kernel.
For ARM on the desktop, I would want an ATX motherboard with a socketed 3+ GHz CPU with 8-16 cores, socketed RAM and a PCIe slot for a desktop GPU.
Almost all Linux software will run natively on ARM if you have a working GPU. Getting Windows games to run on ARM with decent performance would probably be difficult; it would probably need a CPU that’s been optimized for emulating x86, like what Apple did with theirs.
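If you want to check whether a particular binary is actually a native ARM build or an x86 one you’d need emulation for, a quick way is to read the ELF header directly. This is a minimal sketch, assuming a Linux system with coreutils; `/bin/ls` is just an example path, and the e_machine values come from the ELF spec:

```shell
# Read a Linux ELF binary's e_machine field (2 bytes at offset 18)
# to see which CPU architecture it was compiled for.
# 62 (0x3e) = x86-64, 183 (0xb7) = AArch64/ARM64.
machine=$(od -An -tu2 -j18 -N2 /bin/ls | tr -d ' ')
case "$machine" in
  62)  echo "/bin/ls is an x86-64 binary" ;;
  183) echo "/bin/ls is an ARM64 binary" ;;
  *)   echo "/bin/ls targets another architecture (e_machine=$machine)" ;;
esac
```

The `file` command reports the same thing more readably; this is just what it boils down to under the hood.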
Hmm, not really. It can’t beat AMD. In power consumption, sure, but not in real performance.
ARM is only more power efficient below 10 to 15 W or so. Above that, doesn’t matter much between ARM and x86.
The real benefit is somewhat abstract. Only two companies can make x86, and only one of them knows how to do it well. ARM (and RISC V) opens up the market to more players.
Technically three: VIA still has an x86 license, they just don’t do consumer parts.
For real?
Kinda? It really should be treated as a 1st generation product for Windows (because the previous versions were ignored by, well, everyone because they were utterly worthless), and should be avoided for quite a while if gaming is remotely your goal. It’s probably the future, but the future is later… assuming, of course, that the next gen x86 CPUs don’t both get faster and lower power (which they are) and thus eliminate the entire benefit of ARM.
And, if you DON’T use Windows, you’re looking at a couple of months to a year to get all the drivers into the Linux kernel, then the kernel with the drivers into mainstream distributions, assuming Qualcomm doesn’t do its usual thing of abandoning support six months in because it wants you to buy the next release of its chips instead.
Windows is dead to me. Arm Linux would be a wet dream
I’m having the same dream, but I don’t trust Qualcomm to not fuck everyone. I mean it’d be nice if they don’t but they’ve certainly got the history of being the scorpion and I’m going to let someone else be the frog until they’ve proven they’re not going to sting me mid-river.
No.
Why?
Because optimization is secondary, or even tertiary, in the average modern design philosophy. The extra power is, unfortunately, mandatory for a decent user experience.
Not really
Smells like a future class action lawsuit to me.
You mean the type where the lawyers get eight figure payouts and you get a ten dollar check?
uber eats voucher*
I understood that reference.
There are reports that the vouchers handed out were canceled before anyone could use them.
Yes. Problem is, this is the only way our system of justice allows for holding companies accountable. They still pay through the nose on their end.
However, in this case, there are a lot of big companies that would also be part of the class: some from OEM desktop systems in offices, and some from servers. The 13900K/14900K has a lot of cores, and there are quite a few server motherboards that accept it; it was often a good choice over going Xeon or EPYC.
Those companies are now looking over at the 7950X, noticing it’s faster, uses less power, and doesn’t crash.
They’re not going to be satisfied with a $10 check.
Yeah that’s pretty shitty to continue to sell a part that they know is defective.
Yet they do it all the time, when a higher-spec CPU is fabricated with physical defects and is then presented as a lower-spec variant.
Nobody objects to binning, because people know what they’re getting and the part functions within the specified parameters.
And so do these, under the updated parameters.