Apple M2 Ultra SoC isn’t faster than last year’s AMD and Intel desktop CPUs

240 points
a year ago
by jacooper

Comments


daviddever23box

I watched the first three Mad Max films last night, in one sitting... while there is certainly an argument for absolute performance over time, it's becoming a lot more difficult to justify the power consumption of the current Intel (and, to a lesser degree, AMD) offerings, especially for sustained personal productivity and burst compilation tasks.

We cannot speak of performance without the per-watt quotient. Battery life is a real concern, and current Wintel laptops just don't compete.

Energy concerns are real (as depicted in the George Miller-directed films mentioned above), and it may be the case that the premium paid for Apple Silicon-based devices is actually a bargain.

a year ago

aunty_helen

The Ultra isn’t in laptops. The Ultra in the Mac Pro starts at 7 grand; if you’re going to pay a 4 grand premium, is $200 of extra power over its lifetime really that important?

I have an M2 Max; the battery life is awe-inspiring. The M2 Ultra doesn’t best the competition for its use case.

a year ago

kmeisthax

Don't waste your money on the Pro, you can get an M2 Ultra in a Mac Studio for $4000. The only reason to buy the Pro is if you have a bunch of really weird PCIe cards[0].

As for the desktop use case... sure, you aren't going to care about the $200 of saved power draw, but not having a very loud and hot machine at your desk has to count for something, right?

[0] GPUs won't work, their memory access is locked out in hardware and Apple removed the MPX connectors that powered the Intel GPU modules.

a year ago

Dylan16807

A proper cooler won't get loud or hot.

And as far as heating the room, okay, that's an extra $50 of air conditioning to remove the $200 of heat.
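A rough sketch of that arithmetic (my numbers, not the commenter's beyond the $200 figure): assuming an air conditioner with a coefficient of performance of around 4, removing a given amount of heat costs roughly a quarter of the electricity that produced it.

    # Hedged example: a COP of 4 is an assumption, not a measured value.
    heat_cost = 200.0   # dollars of electricity the PC turns into heat
    cop = 4.0           # units of heat moved per unit of electricity the AC uses
    cooling_cost = heat_cost / cop
    print(f"extra cooling cost: ~${cooling_cost:.0f}")  # ~$50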

a year ago

torginus

GPUs won't work at all? Can't you even use them for ML/compute stuff?

a year ago

jsheard

They don't work at all, there are no discrete GPU drivers for ARM macOS. The cooling solution and 1250W(!) power supply they carried over from the 2019 model are kind of hilariously overkill now, I don't think there's anything you still can install into the PCIe slots which comes even remotely close to needing them.

a year ago

grecy

> The cooling solution and 1250W(!) power supply they carried over from the 2019 model are kind of hilariously overkill now

I wonder if that's a small hint or giveaway of something coming in the future that will make use of it... One can hope.

a year ago

jsheard

If they were going to make a dGPU or other big accelerator in the future then I think they would have maintained support for the MPX form factor cards. The new Mac Pro doesn't support MPX, despite being based on the same chassis that MPX debuted with.

a year ago

xattt

> The only reason to buy the Pro is if you have a bunch of really weird PCIe cards

They’re telegraphing the use case for future SOCs.

a year ago

kristianp

> They’re telegraphing the use case for future SOCs.

Can someone explain what that means?

a year ago

xattt

Telegraphing means to give away a message, inadvertently or not.

While the M2 doesn’t include external GPU support, I think the M3 Mac Pro will in order to continue justifying the astronomical price.

Otherwise, Apple would have gone to an awful lot of trouble to design a new motherboard with PCIe slots for an otherwise limited use case.

In releasing an M2 Mac Pro, Apple is trying to balance a consistent message regarding the ARM transition and customer expectations against the hiccups of its limited experience developing desktop CPUs. There may be some silicon bug or silicon-level design oversight in the M2 preventing use of an external GPU.

Discrete GPU support is supposed to be there, but not quite ready.

a year ago

kristianp

Thanks. So by "for future SOCs" they meant future iterations of Apple Silicon. And you think the use case the gp means is GPU support? I'm not sure I agree that's what they meant.

10 months ago

xattt

I was the GP. I don’t know what else to call the M-series chips :)

10 months ago

baggy_trough

I’ll take the under on that.

10 months ago

threeseed

GPUs can work provided the drivers exist.

And those really weird PCI cards are what underpin many of the industries the Mac Pro is designed for, e.g. video and audio production.

a year ago

blep-arsh

If I understand correctly, accesses to address ranges mapped to the GPU have to be aligned on Apple Silicon because the PCIe controller requires nGnRE mapping mode for external devices. There's a proof-of-concept AMD GPU driver patch for Linux that kinda makes this work, but it's an incredible hack: the performance isn't great and applications still need to be reviewed and patched.

10 months ago

smoldesu

> provided the drivers exist.

Provided Apple signs 'em :p

a year ago

threeseed

Microsoft has implemented driver signing with Windows, so I'm not sure what the issue is.

And users can still use unsigned drivers provided they disable signature enforcement.

a year ago

zarzavat

Apple has beef with the world's largest GPU maker and refuses to sign their drivers because they got screwed by that company over a laptop GPU a decade ago and still hold a grudge.

Microsoft on the other hand, is run by adults.

a year ago

astrange

Where can you get these drivers that exist but just aren't signed?

a year ago

zarzavat

You can’t because Nvidia gave up and stopped developing drivers for macOS. The OS won’t let you install unsigned drivers.

a year ago

thot_experiment

I know I'm a crazy person, but the idea of having a computer that actually won't let me run whatever I want on it is so nuts.

a year ago

astrange

If that was the only barrier you could sign them yourself. They stopped a long time before that though.

a year ago

[deleted]
a year ago

adastra22

The issue is that Apple explicitly doesn’t want to.

a year ago

ddingus

It is not the dollars per watt that matters to many users.

What matters more than people may realize is battery runtime. The Apple hardware just spanks x86 hardware, unless that hardware is fitted with a big battery and/or a secondary one.

I typically run Lenovo hardware, and on my faster machine, I got a very large battery and could get 6 solid hours. And that is at a nice 3 GHz speed too.

My current machine (i5) is far slower than the M1 and that hot-running Lenovo (i7). It has two batteries that can yield 5 to 7 hours.

Those machines are heavier, slower, bigger, and just feel crappy compared to the M1 Air I have been using.

And that thing is crazy good. Battery life is longer, sometimes by a considerable amount. It is great hardware, fast, easy to use, light, you name it.

a year ago

sneak

TFA is about the M2 Ultra chip - it is not available in any system that is battery powered. It's in the Mac Studio and Mac Pro only.

a year ago

GeekyBear

Also, consider how much a PC laptop has to throttle performance if you unplug it from the wall.

a year ago

ddingus

Yes! That is often significant. Sometimes worse than half performance, or it will suck the battery dry fast enough to seriously accelerate battery degradation.

a year ago

comte7092

And don’t forget cooling! All that extra power draw generates heat.

a year ago

andrekandre

> And don’t forget cooling! All that extra power draw generates heat.

Which in turn can cause more throttling in a cramped space like a laptop if it's not cooled (loudly) enough.

a year ago

pjmlp

If we forget about servers.

a year ago

antifa

An M1 mini would make a great home server; it would be nice to see someone do a comparison with similar products like the Intel NUC for a home server of that size. I'm using an old NUC as a home server and the only drawback is that it only has the capacity for 2 hard drives.

10 months ago

diffeomorphism

With 8 GB of RAM and 256 GB of storage? Otherwise it is like $1000. So, yeah, that buys you a decent home server, but at that price it is kinda underwhelming considering the limited OS compatibility, ports, size (much larger than a NUC), ...

10 months ago

ddingus

Sure. When talking about desktop, personal computing, servers are not really a discussion.

a year ago

pjmlp

It is when people overstate the extent of Apple's achievements in the industry.

Most people in 2nd and 3rd tier countries don't have the means to buy that equipment, and additionally Apple isn't a presence in the server and embedded markets.

So in those markets it doesn't matter if M2 Ultra smashes AMD and Intel, no one will ever use one.

a year ago

ddingus

It has never mattered. I do not understand your point.

10 months ago

threeseed

Which we should because no one is using a Mac as a server.

a year ago

iCodeSometime

If we only consider web servers.

Mac servers are pretty much required for any kind of CI/CD for Apple development.

Just about any company with an iOS or Mac app will have or rent a Mac server.

10 months ago

astrange

They should (or equivalent product). ARMv8 processors are much more secure than x86 and will continue to improve in ARMv9.

a year ago

1over137

People used to run Mac OS X servers, but Apple does not care for that market.

a year ago

hotstickyballs

Power isn’t just the stuff coming out of the wall, it’s also the heat you have to dissipate.

a year ago

AnthonyMouse

> it's becoming a lot more difficult to justify the power consumption of the current Intel (and, to a lesser degree, AMD) offerings, especially for sustained personal productivity and burst compilation tasks.

I don't get it.

The M2 is 15W. There are mobile Ryzen APUs with the same TDP. They're about the same speed; maybe the M2 is a little faster for single-thread and the PC is a little faster for multi-thread, but it's not that different. You can get a PC laptop which is under 3 lbs and has 14+ hours of battery life.

You can also get a PC laptop which is heavier and has a shorter battery life because they have no qualms about selling you what amounts to a desktop CPU which is then something like 30% faster for multi-thread. But nobody forces you to buy that one.

a year ago

pas

Can someone recommend a concrete laptop then, please? I have been putting off buying a new one for years now. (I have an old XPS 13 and a P50 with an M1000M, so ... yeah, old dogs!)

I want to put Linux on it. So far I have been looking at the Legion 5 Pro, because a friend got one recently and it seems amazing, though I haven't yet got my hands on it long enough to try it with a USB stick with a recent Ubuntu (with a recent kernel) to test the power management feature support.

a year ago

ChrisLTD

From what I’ve read and seen, Linux is the worst option if you’re concerned about battery life.

a year ago

AnthonyMouse

This is basically down to some hardware vendors writing crappy Linux drivers that don't include power saving features, while also not documenting the hardware so no one else can do it either. But that's only true of specific devices. For example, this is claimed to have 14 hours of battery life, on Linux:

https://system76.com/laptops/lemur

a year ago

arp242

My X270 could pull about 20 hours on Linux when it was new (it did degrade a bit over time, as batteries do). I don't recall if I fiddled with any settings, as this was five years ago and the laptop has since died in an accident, but I don't think I did other than setting it to "power saving" instead of "balanced".

Before that I had a Dell XPS, which lasted about 7 hours IIRC, which was pretty good for the time.

My Acer could pull about 15 hours when new, but it is a fairly underwhelming Celeron CPU so that probably helps.

Maybe other systems are worse; I don't know, but overall things always seem to work well for me.

a year ago

vGPU

Something without a second GPU. Linux never did quite get those working properly.

10 months ago

NayamAmarshe

Try the starlabs starfighter.

a year ago

jayd16

Your use case is watching a movie? Wouldn't this just hit the hardware decoder on any of these chips?

I guess I don't understand why you would want the ultra if you still prioritize TDP over performance.

a year ago

selimnairb

Yeah, my reaction the whole time to reading OP is, “now do performance per Watt…”

a year ago

Salgat

Intel and AMD sell laptop chips with comparable performance per watt to M2 laptops. If you're buying an M2 Ultra desktop computer, your primary concern is performance period, not performance per watt.

a year ago

threeseed

But many of us don't want that performance to come in a large tower that requires fans to be constantly running.

I own a Mac Studio and a PC with a 10980XE and a 4090, and the latter has to live in a separate room because of the noise.

a year ago

LeoNatan25

I don’t know what tower case you are using, or what cooling system you have set up, but in modern times, if you hear much noise from your cooling solution, you have chosen components poorly. From fans to pumps to dampened cases, you could have built a relatively silent system.

a year ago

astrange

How many hours of research and people on forums yelling at you do you have to go through before you do that?

I built a fairly stock PC; it is not silent and needs BIOS undervolting and custom fan curves to get there, and most recently I found that updating the BIOS erases all of that.

Also built a NAS (which is too loud so I'd like to rebuild it) and the one time I reported a bug to FreeNAS a graybeard yelled at me on Bugzilla for not building a 1U server with fans that sound like an airplane taking off.

a year ago

LeoNatan25

There really should be no yelling! If the place you are visiting has assholes yelling, just leave for another. But like any major purchase, it takes research. I do this research every 4-5 years or so, when I will be undergoing a desktop replacement. I’d choose the CPU and GPU first, then investigate cooling solutions. Liquid cooling is not always optimal with regard to noise, so another decision to make before committing to a solution is whether you’ll be overclocking or not, as there are air-based cooling solutions that could be silent and give you all the TDP cooling your CPU needs. After you’ve chosen your cooler (liquid or air-based), investigate low-noise, high-airflow fans at the size your cooler accepts. Going with the largest fans is not always the best choice for optimal airflow or noise, so factor that into the investigation. I have less experience with custom GPU coolers, as I’ve only used stock coolers—those are pretty silent and well optimized these days.

a year ago

threeseed

With a Mac Studio it isn't just relatively silent; in almost all cases it is silent.

With a PC you still have case fans running because even with modern motherboards they don't support switching them off at low temperature. You need to be running special Windows software.

And to cool the CPU you really need to be running an AIO or dedicated water cooler, which itself will have three fans almost always running.

a year ago

LeoNatan25

The Mac Studio in particular is a terrible example to use here. At least the first iteration of it. It has a fan that is constantly spinning, and it’s a cheap fan, so it creates a very annoying whine sound at low RPMs:

https://www.macrumors.com/2022/05/09/mysterious-noise-sparks...

Hopefully, Apple has solved this issue with the second iteration.

a year ago

Toutouxc

> even with modern motherboards they don't support switching them off at low temperature

I have an ASUS TUF GAMING B550M-PLUS WIFI II motherboard in my desktop set to do exactly that: under, I think, 48 °C it stops the case fans. No Windows software required, it’s all in the MB’s BIOS.

a year ago

Ygg2

> With a PC you still have case fans running because even with modern motherboards they don't support switching them off at low temperature

What do you mean? Most motherboards support sending 0W to fans at given temps. You just install the motherboard software.

a year ago

threeseed

The motherboard software that is Windows only.

a year ago

LeoNatan25

I am not familiar with any motherboard that does not expose all its capabilities through BIOS or UEFI. Yes, it might be less convenient to have to reboot to set a fan curve, but it’s certainly possible. And the Windows software is so terrible anyway, going through BIOS is always my first choice.

10 months ago

Ygg2

Or the BIOS, or whatever the board has, although non-Windows support is generally bad in hardware because it requires open-sourcing drivers.

10 months ago

tomnipotent

> cool the CPU you really need to be running AIO or dedicated water coolers

There is no shortage of silent air coolers, such as the NH-D15, that are 20-30 dBA and undetectable unless they're the only noise in the room. It's usually stock case fans that are the culprit.

a year ago

threeseed

If you're running a PC with a dedicated GPU then you will have a minimum of six extra fans.

3 for the GPU and 3 for the case. The latter are pretty noisy and constantly run.

a year ago

tomnipotent

Sure, loud fans are loud but anyone that cares about noise is not using OEM fans, and picking up something from Noctua/Arctic. It's also common to upgrade to 140mm fans which further reduces noise.

a year ago

LeoNatan25

You seem to live in the 90s or early 00s, sorry. Consumer fan technology has come a long way since those days. Case sound dampening has also come a long way, so the little noise that fans do make gets pretty well dampened inside the case.

a year ago

sarsaparilyptus

[dead]

a year ago

TylerE

The fans in a Mac Studio never actually turn off… they’re just really, really quiet, and I’ve never managed to actually make the fans come off idle, even maxing out all cores.

a year ago

washadjeffmad

It takes sustained usage for the heatsink to saturate, and then the fans ramp up quickly. They do stay next to silent otherwise.

10 months ago

cbsmith

The chips it is being compared to are a year older. If there wasn't an advantage somewhere, it'd be an absolute failure.

a year ago

thatfrenchguy

Lol, indeed, the Intel chip they use for comparison is 320W: https://www.tomshardware.com/reviews/intel-core-i9-13900ks-c...

a year ago

jay_kyburz

If the chips are designed to sacrifice performance for power usage, then they have no place in a computer without a battery.

Apple should assume that desktop machines have an unlimited power source and that users want maximum speed. That the user's time is more expensive than the electricity being fed into the computer.

Laptops, sure, sacrifice power to extend the battery. Apple does an excellent job here.

The only reason you would buy one of these desktop machines is because you want to be in the Apple ecosystem.

a year ago

daviddever23box

Sure they do, same as mobile chipsets end up in Intel NUCs. You're ignoring a whole category of devices.

a year ago

jay_kyburz

I always thought NUCs were designed to be cheap, quiet, and small, not power efficient. I don't think they run on battery power.

a year ago

bee_rider

I ran one on battery power, IIRC some models run on voltages in the 12V-20V range, so it is pretty easy. Fun project. I got a portable monitor and keyboard, and… realized I’d reinvented the laptop, except it was really hard to balance all that stuff on my lap. Next time I just got a laptop, but I can still use all the old NUC peripherals so that is nice.

a year ago

Arcuru

It's easier to make something small and quiet if it's power efficient. It will need to dissipate less heat.

a year ago

[deleted]
a year ago

[deleted]
a year ago

lost_tourist

99% of the time I'm hooked up to a power source so battery life isn't nearly as important to me as the $$/performance ratio of the laptop.

a year ago

veselin

I find the article quite informative. Yes, M2 and the other chips are completely different products with different goals. If one wants to say that something completely trumps the other, it will be wrong.

But here is what is visible:

The M2 core is probably in the same ballpark as a Zen 4 core, likely a tiny bit below. That gap may become very tiny if the Zen 4 core runs at a lower frequency to equalize the power. This doesn't account for the AVX-512 of Zen 4.

24 M2 cores manage to beat 16 Zen 4 cores, also at lower power, but these are different products. Zen 4 does scale to far more cores, 96 in an EPYC chip. AMD and Intel have far more investment in interconnects and multi-die chips to do these things.

The M2 GPU is in the same league as a $300 mid-range Nvidia card. It is not competitive at all - Apple produces the largest chip it can manufacture to go against a high-margin smaller chip that Nvidia orders.

Again all of this doesn't mean each product is not good on its own.

a year ago

jeroenhd

Apple's GPU performance is what makes me sceptical about their gaming related advertising. Sure, you can do 1080p gaming with the highest SKU, but you're paying through the nose if you bought an M2 to play games.

It seems strange to me for Apple to advertise something they haven't exactly mastered yet on stage.

Maybe they have some kind of optimization up their sleeves that will roll out later? I can imagine Apple coming out with their own answer to DLSS and FSR 2 based on their machine learning hardware, for example. On the other hand, I would've expected them to demonstrate that in the first place when they showed off their game porting toolkit.

a year ago

tstrimple

With CrossOver and Apple's latest release of the Game Porting Toolkit I'm able to maintain over 120 FPS on ultra settings at native resolution in Diablo 4 with my M2 Max MBP. It was fair to be skeptical before that release this week, but there's plenty of evidence out there now that Apple silicon can handle gaming just fine. Other users are reporting 50-60 FPS with ultra settings on their 6K Studio Displays.

a year ago

numpad0

Apple consistently advertised gaming performance without any substance to it.

a year ago

mcny

I thought the whole idea of M2 was “exceptional product given the power consumption”.

I don’t mind that it has nothing to show for all the talk once you throw out the need to basically sip power (like a notebook computer).

Is this something inherent with ARM though? Why can’t there be ARM based desktop and server computers that need a kilowatt of power at peak? Like how much more performance can you get for each additional watt of power? (I don’t know. I’m genuinely asking.)

a year ago

numpad0

I was once told that memory and bus bandwidth often create a disparity between benchmark and application performance in ARM CPUs. That was years ago and supposedly doesn't apply to custom designs like the M2, but maybe both Intel and AMD are still advantageous in that regard?

a year ago

astrange

No, M2's memory bandwidth is much better than theirs since it's a unified memory design. (Latency isn't any better though.)

a year ago

GeekyBear

> I thought the whole idea of M2 was “exceptional product given the power consumption”.

When running native code.

Look at the performance of Microsoft's ARM Surface Pro when running emulated code.

> My frustration with this computer wasn’t a workload thing. It didn’t start out fast and gradually slow down as I opened more things and started more processes. It was peppered with glitches and freezes from start to finish.

I’d have only Slack open, and switching between channels would still take almost three seconds (yes, I timed it on my phone). Spotify, also with nothing in the background, would take 11 seconds to open, then be frozen for another four seconds before I could finally press play. When I typed in Chrome, I often saw significant lag, which led to all kinds of typos (because my words weren’t coming out until well after I’d written them). I’d try to watch YouTube videos, and the video would freeze while the audio continued. I’d use the Surface Pen to annotate a PDF, and my strokes would either be frustratingly late or not show up at all. I’d try to open Lightroom, and it would freeze multiple times and then crash.

It quickly became clear that I should try to stick to apps that were running natively on Arm.

https://www.theverge.com/23421326/microsoft-surface-pro-9-ar...

a year ago

saagarjha

Why are you bringing up Microsoft’s translator here? I don’t see why it is relevant?

a year ago

GeekyBear

> Apple consistently advertised gaming performance without any substance to it.

> Why are you bringing up Microsoft’s translator here? I don’t see why it is relevant?

It should be pretty obvious that emulating x86 slows performance, regardless of whose ARM OS we're discussing.

a year ago

saagarjha

Ok, but their performance characteristics are pretty different.

10 months ago

[deleted]
a year ago

astrange

Rosetta is much more efficient than this translator, partially through hardware acceleration.

a year ago

achandlerwhite

They announced MetalFX for their upscaling tech last year and by all accounts it works well if developers take advantage of it.

a year ago

littlestymaar

Not familiar with DLSS at all, does it require developers to do something in order to take advantage of it too? I had imagined it was automatic, but then again I know nothing about it beyond the marketing pitch to consumers.

a year ago

jml78

Yes, games have to do work to support DLSS.

I am not knowledgeable enough to know how much work it is, but I have played games that didn't initially support it but eventually released an update that added support.

There are also multiple "levels" for DLSS in games that support it, e.g. Quality, Performance, etc.

a year ago

littlestymaar

Thanks

a year ago

GeekyBear

> Not familiar with DLSS at all

Here's a video comparing DLSS and MetalFX upscaling.

https://www.youtube.com/watch?v=6iXx9lfe62w

a year ago

GeekyBear

> Apple's GPU performance is what makes me sceptical about their gaming related advertising.

The issue is that people compare games running under emulated x86 and emulated graphics APIs, when making claims about what the SOC is capable of.

There's nothing wrong with knowing how well the SOC performs when emulating games, but if you claim to be talking about what the SOC can do, then include the performance of native games as well.

a year ago

pornel

Apple's x86 emulation is otherwise very impressive, and not many games are bottlenecked on the CPU, especially at high resolutions.

Bigger overhead for AAA games is likely due to emulation of DirectX or Vulkan on Metal, but that's just Apple's stubborn choice to have it that way.

In the end, none of that matters. I won't be playing Cyberpunk at 14fps, without RTX, and comforting myself that the SoC could do maybe 28fps without emulation. Lower-tier Nvidia cards perform better, even when paired with slower CPUs.

a year ago

GeekyBear

> Bigger overhead for AAA games is likely due to emulation of DirectX or Vulkan on Metal, but that's just Apple's stubborn choice to have it that way.

This is a weird take. None of he major gaming platforms use the same graphics API.

Microsoft has DirectX on Windows and XBox, Apple has Metal on iOS and Macs, Sony has Gnmx on Playstation.

It's like saying Android gaming is terrible because they didn't use DirectX.

a year ago

thraizz

The major platforms do use the same graphics API, Vulkan. It should be preferred due to more low-level access and wider platform support (Linux, Android, Nintendo, macOS, Windows).

On another note, problems that keep major AAA games from running on Linux (anti-cheat solutions, for example) will block many games from running on macOS, too.

a year ago

GeekyBear

> The major platforms do use the same graphics API, Vulkan.

By all means, share a list of XBox games that only use Vulkan.

a year ago

cmovq

The CPU is rarely a bottleneck for AAA games, so unless the x86 emulation is particularly terrible (Rosetta isn't) it shouldn't be the issue.

WINE on Linux is able to match the performance of games on Windows, so the DirectX translation layer shouldn't be a problem either.

So it's not unreasonable to assume that the M2 just doesn't have a GPU capable of running these games. And it's really not that surprising that an integrated GPU doesn't match the performance of a dedicated GPU.

a year ago

dijit

> The CPU is rarely a bottleneck for AAA games

I mean… No?

CPU bottlenecks are super common, especially on slightly older engine bases like Source or Unreal.

I think you are assuming big AAA games at 4k, which puts an especially big strain on the GPU.

Maybe I’ve been developing games too long, but we are constantly fighting CPU bottlenecks.

a year ago

astrange

PC game players tend to believe you can't play a game unless you bought the latest custom hardware for all of it and put all the settings on maximum.

Game developers are much more willing to run their work on lower end machines if they'll get paid for it, or at least they're more capable of tuning for it.

a year ago

GeekyBear

> So it's not unreasonable to assume that the M2 just doesn't have a GPU capable of running these games

Without including comparison data on native games? It's entirely unreasonable.

For instance, The native version of the DirectX 12 game "The Medium" was shown running side by side with the emulated version at WWDC, and the native version had double the frame rate.

a year ago

alpaca128

> the M2 just doesn't have a GPU capable of running these games.

As long as AAA games are published on the Xbox Series S and ship with graphics settings, they will have no problem running natively on an M2 chip.

a year ago

llm_nerd

>The M2 core is probably in the same ballpark as Zen 4 core, likely a tiny bit below.

The 7950X runs at 5.7 GHz when only a single thread is saturated. The M2 Ultra caps its cores at 3.5 GHz. A 62% higher clock speed, at a monster power profile, to barely beat it isn't evidence of a core advantage.

>24 M2 cores manage to beat 16 Zen 4 cores also at lower power

The M2 Ultra has 16 real cores, with 8 additional efficiency cores that are very low performance. And of course the M2 Ultra could pretty handily trounce the 7950X because the latter has to dramatically scale back the clock speed, as the power profile of all 16 cores at 5.7 GHz would melt the chip. And of course the 7950X has hyper-threading and hardware for mini-versions of 16 more cores, so in a way it has more cores than the Apple chip.

>This doesn't account for the AVX512 of Zen4.

AVX512 is used by a tiny, minuscule fraction of computers ever in their history of existence. It is the most absolute non-factor going.

I mean...in an ideal world Apple would get the GPU off the core. It limits their core and power profile, and takes up a huge amount of die space. They could then individually mega-size the GPU and the CPU. They could investigate mega interconnects like nvidia's latest instead of trying to jam everything together.

Was Apple correct to call it the most powerful chip? Certainly not. And there is a huge price penalty. But they're hugely, ridiculously powerful machines that will never leave the user wanting.

a year ago

veselin

It is true that nobody competes in the low-power, high-efficiency workstation market, or maybe such a market does not exist yet and Apple is creating it.

But also, as users, some were expecting the M series to be so good that it would take many markets by storm. And it seems that is not happening.

a year ago

411111111111111

> same league as a 300$ mid-range nVidia card.

$300 midrange Nvidia card? Did you get stuck in 2010?

That's way below entry-level at this point. You're likely comparing it with a 1660 card or something, which is based on a chip from 2012.

I wish Apple silicon was actually competitive on performance. Nvidia needs competition or they'll likely double prices again with the next generation.

a year ago

rbanffy

> The M2 GPU is in the same league as a 300$ mid-range nVidia card

It still has the advantage of a much larger memory pool.

I did a quick comparison exercise - I priced two workstations with similar configurations, one from Dell, the other from Apple. While there are x86 (and ARM) machines that'll blow the biggest M2 out of the water, the prices, as far as Apple can go, aren't much different.

https://twitter.com/0xDEADBEEFCAFE/status/166747612998729728...

a year ago

jeroenhd

If you buy anything labeled as "workstation", you're paying twice the price already.

The article describes the M2 being blown out of the water by a 4080 and a 13900KS. That's about $2000 + RAM, motherboard, and power supply. Plus you can use the built-in GPU in your CPU for accelerating things like transcodes.

You can get a pre-built gaming PC with a 4090 for about $4000, that'll crush the M2 in compute if you use any kind of GPU acceleration.

Of course the M2 has some other advantages (the unified memory and macOS) and some other disadvantages (you're stuck with the amount of RAM you pick at checkout, macOS, you have to sacrifice system RAM for GPU RAM) so it all depends on your use case.

I think the M2 still reigns supreme for mobile devices, though AMD is getting closer and closer with their mobile chips, but if you've got a machine hooked into the wall you'll have to pay some pretty excessive electricity rates for the M2 to become competitive.

a year ago

DaiPlusPlus

> If you buy anything labeled as "workstation", you're paying twice the price already.

The price of workstation-class machines also includes the cost of higher build quality and stability, things like same-day support and service, at least the option for a long-term (5-6 year) warranty, and FRUs. You don't get that with consumer-grade computers, and those things matter when a machine is something you depend on professionally.

a year ago

sounds

Your random acronym decrypter:

FRU: Field-Replaceable Unit. https://en.wikipedia.org/wiki/Field-replaceable_unit

What the poster means is that a "workstation" is designed with quickly swappable components, often not even needing to use any tools. Businesses may benefit from this.

While it doesn't necessarily mean the swappable components are standardized or easy to procure, they usually are. That's a separate item that "workstation" machines typically offer: longer availability of replacement parts.

10 months ago

Nullabillity

$4k will buy you a hell of a lot of troubleshooting time before "same-day service" actually wins out.

a year ago

xbar

I agree with your take. My plugged into the wall machine is a 128GB 13900k 4090 system. My mobile machine is an Apple Silicon Macbook Pro. There are some tasks that are still better on the unified memory of the Macbook, but only a handful. There are many tasks that are more pleasant on the Macbook because of the absurd power efficiency (DAW, Final Cut Pro).

Both machines have a quality that I appreciate: they are never, ever slow.

a year ago

selimnairb

You’re forgetting the benefit of everything just working and never having to think about effing with drivers ever. To me, it’s priceless. Anything truly performance-bound (CPU or GPU) is going to be done on HPC systems, not on a fake Windows “workstation”.

a year ago

rbanffy

> If you buy anything labeled as "workstation", you're paying twice the price already.

We are not comparing MacPros to low-end desktops.

> You can get a pre-built gaming PC with a 4090 for about $4000, that'll crush the M2 in compute if you use any kind of GPU acceleration.

Yes, but the gaming PC will not be as well built as the workstation-grade machine. And pretty much any GPU you can install on a gaming PC you can install on a Mac Pro - it's just that it won't be there out of the (Apple-branded) box.

> you're stuck with the amount of RAM you pick at checkout

Sadly, this has been Apple for some time now - you buy the machine as it will be used for its whole intended lifetime. With the Mac Pro you can at least add internal storage and one or more GPU cards.

a year ago

goosedragons

AFAIK the 2023 Mac Pro doesn't support PCIe GPUs for the same reason AS Macs don't support eGPUs. It has PCIe slots you can use for other things like capture cards or whatever but not GPUs.

RAM was something you could upgrade with the 2019 Mac Pro and something you could get a lot of. 1.5TB worth. The new Mac Pro caps out at 192GB which is barely better than consumer AMD/Intel systems at the moment.

a year ago

rbanffy

I agree some Mac Pro users will be forced to move to workstation- or server-grade PCs, but I am sure Apple knows that and they considered integrated memory inconsequential for the majority of their users.

Also, remember, terabytes of RAM cost A LOT of money. The Dell I priced for comparison can go way higher than 192 GB, but it'll also cost you a lot more than $7K.

10 months ago

kitsunesoba

> It still has the advantage of a much larger memory pool.

I wonder if given roughly equal power to the GPUs in current gen consoles (PS5/XBSX), it'd yield some advantage in porting console games since those consoles also have a large shared pool of memory (16GB), and neither AMD nor Nvidia want to give up using VRAM as an upsell.

a year ago

jeroenhd

With the M2 Ultra prices, it'd be cheaper to buy a 4090 than to go the Apple route. With the M2 pro you'll probably still be better off with a 4080 unless you really need more than 16GB of VRAM.

I don't know the M2's efficiency for things like machine learning, but the M1's machine learning performance seemed to have been beaten 4-5x by the 3060Ti so I'm pretty sure "more VRAM" is all it's got going for it in ML tasks.

a year ago

kitsunesoba

Well yeah, the market here would be people who already have a reasonably powerful Mac and would rather have that fill their gaming needs instead of having to build or buy a separate dedicated device for that.

But what I was really getting at is the trouble that game studios have been encountering lately when porting PS5 and Xbox titles to Windows, which is that these games are so reliant on those consoles' 16GB shared memory pool that they perform terribly on PCs. The impact is double, because not only are most GPUs in usage right now anemic when it comes to VRAM (even my last-gen high end 3080 Ti comes up short at only 12GB), traditional PCs also have to copy data between RAM and VRAM. Significant re-architecting for the Windows port is required to work around this.

M-series Macs are much more similar to current gen consoles with their shared memory pool, which in theory could make porting from console to Mac (at least when targeting Macs with 16GB+ of RAM) more straightforward than porting to Windows. While some work would need to be done to support Metal, the two most popular engines already do much of that legwork and the work that remains can be shared across multiple titles.

a year ago

rbanffy

I can’t imagine using my work computer for gaming, as maintaining the software install has so many different requirements, but, then, I’m no PC gamer and would rather have a console plugged into the big TV in the living room than on my desktop monitors. It’s also much less of a hassle maintaining a console than a gaming PC.

As a side-note, my living room TV is a rather small 43 inch one (limited in size by the surrounding overflowing book shelves) but, if I were a gamer, I’d probably have gone with a 60+ inch or wall projector.

If I lived alone, I’d get an Apple Vision Pro instead of the humongous TV, as it’d be cheaper.

10 months ago

selimnairb

Cheaper in terms of money, but in terms of time? I have a hard time justifying anything that requires configuration and dicking around. I’m a grown-up and don’t have “free time”. I need things that just work. For me, that’s not Intel and Windows or Intel and Linux. It’s macOS, which is the only true workstation platform left.

a year ago

TylerE

This is exactly why I went from a monster PC rig to a Mac Studio. Just got tired of random components dying every year or so.

a year ago

lostmsu

My previous rig is approaching 6 years, and the only dead component is a cheap external USB drive. The rig was mining 24/7 when it wasn't used for development or gaming. You must be doing something very wrong.

10 months ago

doctorpangloss

> It still has the advantage of a much larger memory pool.

Why do you think NVIDIA doesn't "just add" "more memory"? To its $40,000 H100s, which top out at "just" 80GB?

The answer isn't price segmentation.

a year ago

411111111111111

Yes, it's not price segmentation, it's planned obsolescence.

The 3080 series would likely be fine GPU-wise beyond the 50x0 series, but current games are already starting to stutter because of its limited VRAM unless you downgrade textures.

a year ago

doctorpangloss

The performance of the chip is matched to the memory size.

I think it’s a U shaped curve.

Beyond 80GB, today, the larger chip would maybe do all of these: yield less, scale worse, take too much power, etc.

Like this matching of compute resources to RAM is partly the difference between CPUs and GPUs.

Anyway, it’s just to say that it isn’t a business decision. The extra RAM in the M2 doesn’t help the GPU much for the same tasks the H100 excels at, because it isn’t performant enough to use that RAM anywhere near the same way an H100 would, and if it were, there would have to be less RAM. The H100 doesn’t even have a graphics engine. It’s complicated.

a year ago

wtallis

> The performance of the chip is matched to the memory size.

That may be approximately true if you only look at a single generation of consumer graphics cards at a time. If you compare across generations or include non-gaming workloads the correlation falls apart.

a year ago

redundantly

The comments here are strange.

So many people making claims that power utilization doesn't matter. Perhaps not to them, but it does for many.

Energy prices are getting higher and higher in many parts of the country, heck in the world.

Devices that consume more power generate more heat. So now one is likely using more electricity to keep their home or office cool and comfortable.

Noise matters for many people using workstations. Running a system at full tilt can be irritating and distracting because of the active cooling.

Some people are just simply conscious of how much electricity they use and want to have a lower environmental impact.

And then there's the large scale matter. One workstation might not be a big deal in terms of energy usage, but millions of them absolutely is.

Saying no one cares how much power a workstation uses is disingenuous.

a year ago

croes

How much electricity can you pay for the price of a computer with an M2 Ultra?

a year ago

dijit

Pedantically, sysadmins always counted the power consumption (or, compute density) when upgrading servers, and would even do early upgrades as a cost saving measure.

Power costs (in datacenters at least) were high enough that buying the €10,000 server that sucked 200 W more was worth less than the €15,000 machine that didn't suck that extra 200 W.

So electricity prices can be more than a negligible amount on the total.

Where that line is depends on your personal situation.

I live in southern Sweden and they hide the total price of power here, but my aggregate cost is 5 SEK/kWh (roughly €0.45).

So a worst case for me at 200 W with 24 hours of usage a day is about €800/year.
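For what it's worth, the worst case works out as stated with the numbers given above (200 W of extra draw, 24 h/day, roughly €0.45/kWh):

    watts = 200
    kwh_per_year = watts * 24 * 365 / 1000   # 1752 kWh
    eur_per_kwh = 0.45                       # ~5 SEK/kWh
    print(f"~€{kwh_per_year * eur_per_kwh:.0f}/year")  # ~€790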

a year ago

Affric

A big part of me building a decent gaming PC will be buying a house and installing solar panels (and maybe a battery, considering market conditions and government incentives).

At this point we are seeing the globalised fossil fuel market endgame in this country.

a year ago

alpaca128

Where I live the electricity prices are about to rise by 90%. It's not the first significant price increase in recent years.

a year ago

ksec

Because workstations, by a lot of people's definition, are supposed to be power-hungry in exchange for absolute performance?

a year ago

redundantly

You're doing it too. You're assuming how someone would define a workstation; you're assuming that definition includes it being power-hungry.

A workstation can be a single core, 256MB RAM, 1 watt SBC. It can be a 96 core, 2TB RAM, 1 kw beast. It can be anything in between or even outside of that range.

I'm not saying everyone cares about power consumption, but several people here seem to be saying that no one does, and that's simply not true.

a year ago

wtallis

"Workstation" in this context refers to a specific target market and product segment that by industry consensus (and in some cases codified by regulators) carries specific connotations about a machine's capabilities and intended use. Your attempts to use the word in the broadest possible meaning are not helpful; you're deliberately communicating badly.

a year ago

capableweb

> A workstation is a special computer designed for technical or scientific applications

https://en.wikipedia.org/wiki/Workstation

Generally, one low-performance core with 256 MB of RAM is not enough for technical or scientific applications. Not many professions where you could get away with those kinds of specs, really.

a year ago

golem14

Well, this wikipedia article mentions thin clients as lower end workstations.

I would not call a computer with a single low performance core and 256 MB RAM a workstation either, but for a 4GB/8GB RPi 4, that term seems applicable.

a year ago

NotYourLawyer

> A workstation can be a single core, 256MB RAM, 1 watt SBC.

No. Words have meanings. Workstation does not mean that.

a year ago

theshrike79

What does ”Workstation” mean then? What kind of ”work” is it meant for?

Accounting? Writing? Programming? Video editing?

a year ago

veidr

The term does have a generally recognized meaning, which is something like "computer that is designed for computationally expensive tasks".

That's a relative thing, so "workstation" has continued to mean "computer that is on the upper end of the computing power spectrum".

So not writing, probably not accounting. Programming, sure, if the builds being done are computationally expensive. Video editing definitely, if you are going to be doing a lot of it, you will want the most powerful workstation your budget allows for.

a year ago

[deleted]
a year ago

ksec

-Deleted-, Thx @wtallis

a year ago

pshc

When I hear "workstation" what comes to mind is a daily driver for business purposes, so rock-solid reliability is utmost. Redundant components, ECC ram, rated to run 24/7 with ample cooling. Performance per watt is also up there.

Whereas raw performance specs are typically high but ultimately dependent on use case.

a year ago

wildrhythms

>business purposes

I can show you a company of 10,000+ people using Chromebooks and Macbooks for 'business purposes' with little regard for performance per watt.

a year ago

unusualmonkey

If power utilization matters, then a more 'efficient' device that takes longer might actually use more energy overall, especially when you factor in the lighting, air con, etc. that might be needed by the human operator who is waiting for the workstation to finish the computation.

a year ago

markhahn

Task energy is a more reasonable argument than whinging about how some people care so much about power.

Everyone cares about power, at a fairly similar level - maybe 2-4x differences, but not 10x. And most of the ones who care underestimate how rarely their machine is actually fully busy.

Idle power is probably more interesting than even task power, for anything other than an unusually busy server or cloud hypervisor.

a year ago

adastra22

50% slower than the absolute best of class NVIDIA discrete GPU offering that consumes massively more wattage is hardly a damning statement.

a year ago

brucethemoose2

Take anything from wccf with a grain of salt.

And yeah, as others point out, this is Apples to oranges. x86 desktops are great at some things, M2 Ultras are great at others, and the overlap that really matters is pretty small... Like, you have to be crazy to buy an M SoC for gaming, or buy a Nvidia GPU for workloads that won't fit in VRAM.

a year ago

perrohunter

I tested the new game porting kit on an M2 Mac mini and was surprised I was able to run Cyberpunk 2077; it ran really well. I can imagine that more powerful M processors will be awesome, so the gaming argument might go away soon.

a year ago

brucethemoose2

So can my old 14" RTX 2060 laptop, on a 4K TV.

I have seen the demos, but I am skeptical of the actual practicality or value proposition until a 3rd party publishes some frametime benchmarks, and games out in the wild get battle tested.

a year ago

tzekid

Yep, but an avg. RTX 2060 laptop consumes 3x - 4x the wattage of an M2, no?

10 months ago

imbnwa

Curious about the settings, to get an idea of the ceiling; that game is insanely unoptimized.

a year ago

Grazester

What is "really well"? Can we have a metric? What resolution and what frame rate?

a year ago

tzekid

Something like: M1 MBA, around 12 FPS on Ultra settings, or high 30s on Medium at 1440p (max resolution, apparently).

10 months ago

katbyte

How does it work? You install a Windows game via Steam and then tell it to port?

a year ago

stouset

It’s basically Wine with a custom DirectX 12 translation layer to Metal calls. Right now it’s not really intended for end users, so the experience involves a lot of steps that aren’t consumer-friendly, but some folks are publishing wrappers that simplify the process.

I imagine that in the future, something like Steam will wrap this functionality to provide the ability to run the whole library under the toolkit. And individually-published games will do the same so they install and run with a more consumer-friendly experience.

a year ago

danieldk

Maybe, but most of these comparisons are based on GPU performance (i.e. games). For other workloads like machine learning, the Tensor cores on NVIDIA GPUs will blow Apple Silicon GPUs out of the water. The M2 Ultra is 27 TFLOPS. The 4080's Tensor Cores are 48.7 TFLOPS in FP32, 194.9 TFLOPS in FP16, 389.9 TFLOPS in FP8 with FP16 accumulate. (IIRC on Apple Silicon GPUs FP16 performance is roughly the same as FP32.)

(There is the Neural Engine which supports lower precision, but it limited in various ways.)

Regardless, the strides that Apple has been making are impressive.

a year ago

samwillis

I think the exciting thing with Apple silicon and "AI" is inference, not training. Due to their unified memory, you can potentially have enormous local model inference. That's potentially much more expensive on a PC with a GPU having its own memory.

Apple have an opportunity, if they 2-4x the memory on the entry level devices (not beyond the realms of possibility), to make local inference a thing available to all.

a year ago

danieldk

Sure, but for inference it’s much easier to work in lower precision. Training with very low precision is also possible, but is often more tricky due to numerical stability.

A lot of work is going on in 8-bit inference and even 4-bit inference. So, models that need 64 GB in FP32 can do with 16 GB of VRAM in FP8 or INT8, which is well within the realm of consumer NVIDIA cards. And the latest NVIDIA tensor cores will absolutely destroy Apple Silicon GPUs or the Neural Engine in 8-bit.
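To illustrate that scaling with a hypothetical 16-billion-parameter model (weights only; activations and KV cache are ignored, and the parameter count is chosen purely to match the 64 GB FP32 figure above):

    params = 16e9   # hypothetical model size
    for fmt, bytes_per_param in {"FP32": 4, "FP16": 2, "FP8/INT8": 1, "INT4": 0.5}.items():
        print(f"{fmt}: {params * bytes_per_param / 1e9:.0f} GB")
    # FP32: 64 GB, FP16: 32 GB, FP8/INT8: 16 GB, INT4: 8 GB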

So, I don’t think it’s really a strong argument. And as someone who is a Mac user and a ML practitioner, I’d be very happy if they started supporting eGPUs again.

Apple Silicon has many strengths and the GPU core are fine for many ends, from games to graphics apps.

But let’s not pretend that Apple is beating NVIDIA at their own game (yet). That day might come, but currently it only leads to disappointed users in ML forums who were hyped into thinking that their vanilla M2 MacBook Airs can almost compete with a 4090 in training a deep transformer model. (Yes, that happens.)

a year ago

incrudible

Multiple NVIDIA GPUs also have unified memory from an application perspective, and one GPU has up to 80 GB right now. They also have a proper software stack. If NVIDIA saw any credible threat there, they could ramp that up; DRAM is not beholden to Apple.

a year ago

sunflowerfly

One differentiator is the amount of memory available. Apple's shared pool of memory means the GPU potentially has up to 96 GB of RAM at its disposal, where discrete GPUs are often limited to 8, 12, 16, etc. They mentioned in the keynote that the M2 can handle many tasks discrete GPUs cannot simply because they run out of RAM.

a year ago

incrudible

$15,000 will buy you an A100 with 80GB of RAM at 2TB/s that will annihilate the biggest Apple chip.

a year ago

ElFitz

Sure, but then that’s the price of multiple entire laptops. Or 2 Mac Pros. Not sure what point you’re trying to make here.

a year ago

incrudible

It is also the performance of 4 Mac Pros. GP was talking about tasks that dedicated GPUs supposedly could not handle. Who is running models needing 60+ GB of VRAM on their laptops?

a year ago

ElFitz

Some have had fun running some of the larger Llama, Vicuna & Guanaco models on their MacBooks. Mostly to experiment with local inference.

Although quantisation has now lowered the required memory, I wouldn’t be surprised if it comes in handy again in the near future.

But today that’s quite a niche use case.

a year ago

CamperBob2

FLOPS are one thing, but 192 GB of unified memory that can be used as VRAM is something else. That could be a big win on the inference side of things, where even an RTX 4090 GPU is limited to only 24 GB.

Then there's the power consumption difference to consider. This seems like one of those cases where benchmarks reveal only a fraction of the larger picture.

a year ago

Demmme

On purpose, to keep the A100 and H100 at an additional premium.

a year ago

caycep

Granted, wccftech is a grain-of-salt kind of site. To wit, here's another article... from the same site!

https://wccftech.com/m2-ultra-only-10-percent-slower-than-rt...

a year ago

[deleted]
a year ago

MBCook

The complete lack of any power numbers is extremely telling.

a year ago

jacooper

As if that matters on a desktop?

a year ago

st3fan

I don't like to listen to the sound of fans constantly running ...

a year ago

throwaway5959

It does when you multiply it across millions of CPUs. Saving 100 watts across 3 million CPUs (300 megawatts) is enough to shut down the equivalent of a power plant.

a year ago

nightski

I'm all for efficiency but if we are to progress as a civilization we need to find a way to not have power hold us back. We can get to a point where we have nearly unlimited power with minimal impact on the environment around us. That needs to happen as soon as possible.

a year ago

chongli

There are an estimated 3 billion people in the world who still rely on solid fuel (wood, coal, peat moss) to cook food and generate heat [1]. We are extremely far away from being able to use 100% electricity to meet two basic needs for the whole population. From that perspective power-plant-scale computing is an incredible luxury.

[1] https://www.jstor.org/stable/pdf/resrep27829.23.pdf

a year ago

nightski

That demand will only increase, which further emphasizes my point. It can't be a luxury.

10 months ago

alpaca128

> That needs to happen as soon as possible.

But it won't happen before all M2/3 Macs will be obsolete. Having goals like that is great but until we reach that point efficiency is still important.

a year ago

mort96

It matters when comparing performance numbers in general?

a year ago

jacooper

It depends on the usage. It's why AMD's efficiency is impressive on laptops but not so important on the desktop.

a year ago

ianai

Performance per TDP is always relevant. Sure, individuals may decide they just want to maximize a single metric, but that's a different decision entirely. It's like comparing a motorcycle and a truck without mentioning all the things that differ between the two.

a year ago

sokoloff

TDP is the wrong measure here. The i9-13900K and i7-13700K both have 125 W TDP with 253 W PL2 specs. By that calculation, the 13900K is more performant per TDP, so is "better" for efficiency.

Computation per kWh (or rate of computation per kW) is the right efficiency metric, not TDP (thermal design power).
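A minimal sketch of the difference, with made-up chips and numbers ("work" standing in for any benchmark score): two parts with the same TDP can rank differently once you divide by the energy actually consumed.

    # Hypothetical parts: identical 125 W TDP, different real draw and throughput.
    chips = {
        "chip_a": {"work": 1000, "avg_watts": 253},
        "chip_b": {"work": 800,  "avg_watts": 150},
    }
    tdp = 125
    for name, c in chips.items():
        per_tdp = c["work"] / tdp                       # misleading: TDP is identical
        per_kwh = c["work"] / (c["avg_watts"] / 1000)   # work per kWh over a 1-hour run
        print(name, f"work/TDP={per_tdp:.1f}", f"work/kWh={per_kwh:.0f}")
    # chip_a looks better per TDP; chip_b gets more work done per kWh.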

a year ago

wtallis

If you have a workload that pushes the 13700K and 13900K to their turbo limits (and have ensured that those limits are actually configured to be the same), then you really will find the 13900K getting more work done for the same power and energy. That's how the extra cores help in power-limited scenarios.

a year ago

[deleted]
a year ago

smoldesu

The absolute "best of class" Nvidia card is currently the 4090, which can be up to 30% faster than the 4080 in scaling workloads.

What would be more interesting is to see how Nvidia's laptop cards fare here though - they're constrained to much lower wattage (80-120w) and would make for a much fairer fight against the ~200w M2 Ultra.

a year ago

nordsieck

> What would be more interesting is to see how Nvidia's laptop cards fare here though - they're constrained to much lower wattage (80-120w) and would make for a much fairer fight against the ~200w M2 Ultra.

Doesn't look like Apple offers Ultra in a laptop - just the Basic, Pro, and Max.

a year ago

arpanetus

It's interesting. I assumed their power consumption, at least, would be much better/lower than an average conventional desktop setup these days.

a year ago

smoldesu

It's not unreasonably high for a TDP, and the idle consumption of ARM is de facto lower than that of an x86 package.

That being said, it's pretty obvious that Apple's mobile-style solution isn't really working out on the desktop side of things. The new iMac feels starkly pedestrian compared to the old ones, and the Mac Mini/Studio are both neat but not unprecedented. The M2 Ultra represents a lot of engineering effort going into flipping that status quo, but it's still slipping behind by a considerable margin. Don't forget that a second "Ultra"-style SoC with 4x M1 Maxes was supposedly cancelled for drawing too much power and being too hot. It's just not effective or efficient to force that much silicon that close together.

a year ago

smcleod

A decent 4090 GPU alone is over $3000 in Australia - that’s half the price of a Mac Studio with Ultra - just for the GPU.

Then you'd be looking for a 24-core CPU, 64GB RAM, a 1TB PCIe 5 SSD, a mainboard with 6x Thunderbolt ports, a silent cooling setup, a high quality case that is both small and fits all the gear while running cool, and, if you're stuck with MS Windows, an operating system license.

a year ago

smoldesu

Yeah, you gotta pay a premium if you want the highest performance on the market. If you're just looking to match the GPU performance of the M2 Ultra though, there are several gaming laptops that will run circles around it in Blender. Many are cheaper than the base model Mac Studio.

a year ago

partiallypro

Most consumers don't care about wattage on a desktop, and when you compare prices they're similar and the Mac loses.

a year ago

acdha

Many people do care - many Americans have subsidized low power but that’s not globally true and even if your power is free a hot CPU means you’re listening to fan noise and heating up the room. In a cold winter that’s not bad but I knew people who had to do things like leave their office doors open for cooling.

a year ago

gruez

>many Americans have subsidized low power

Source?

a year ago

o1y32

You have confused HN population with general US population

a year ago

0zemp3c

the people you describe aren't buying $7k desktops

a year ago

acdha

They certainly are. Being able to pay for the extra power doesn’t mean you like fan noise, not to mention the people who are doing things like video production in a trailer where overheating is a concern. I used to support scientists who didn’t appreciate the heat 4 workstations made in their shared offices, either.

a year ago

incrudible

A dedicated GPU with more die area and its own large fan will run both cooler and quieter when its power limit is reduced to match the performance of the Apple chip.

a year ago

MBCook

More watts in = more heat out.

It doesn't matter if the cooling solution gets the chip's surface temperature lower if it's heating the room twice as fast.

Chip surface temperature is not a useful metric for this purpose.

a year ago

incrudible

The die area matters in terms of how much performance you can still get out of it when lowering the power to match up with the (not actually) more efficient integrated chip.

If Apple clocked their GPU to match the performance of a comparable dedicated chip, it would be just as inefficient, noisy and hot. Except they can not even do that. They turned a limitation of the design into a supposed feature.

a year ago

rbanffy

Kind of. I did a quick comparison this morning between a MacPro and a similar high-end Dell workstation and the prices are similar.

https://twitter.com/0xDEADBEEFCAFE/status/166747612998729728...

a year ago

readthenotes1

I wonder how long you have to run the Apple chip to save enough on electricity to make the price difference worthwhile? Does it only drop to one human lifetime in places where energy cost is insanely high?
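
Roughly, it's a back-of-the-envelope like this; every input (price premium, watts saved, hours, tariff) is an assumption you'd swap for your own numbers:

```python
# Years before a lower-power machine pays back its price premium.
# Every input is an assumption; plug in your own numbers.

def breakeven_years(price_premium_usd, watts_saved, hours_per_day, usd_per_kwh):
    kwh_saved_per_year = watts_saved / 1000 * hours_per_day * 365
    return price_premium_usd / (kwh_saved_per_year * usd_per_kwh)

for tariff in (0.15, 0.50):  # cheap vs expensive electricity, $/kWh
    years = breakeven_years(price_premium_usd=2000, watts_saved=150,
                            hours_per_day=8, usd_per_kwh=tariff)
    print(f"${tariff:.2f}/kWh: ~{years:.0f} years to break even")
```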

a year ago

MegaDeKay

The article reports that the M2 Ultra is basically even with a 4060 Ti in OpenCL compute, which they refer to as a mainstream card. It is really more like a notch above entry level. The "best of class" 4090 is actually 2.5x faster. But again, this is OpenCL, so who knows how the two compare in a more relevant benchmark.

Regardless, wccftech is far from reliable. IIRC, /r/amd blocks links to the site.

a year ago

the8472

> best of class NVIDIA discrete GPU offering

Those are nvidia's best consumer GPUs. I think the cheese grater falls into the pro segment. In that segment nvidia has the A6000s with 48GB VRAM and 91 SP TFlops compared to the 4090's 24GB and 73 SP TFlops. But that costs as much as the Mac Pro alone. And even bigger options (segmented for server/datacenter use) are available.

a year ago

peoplefromibiza

> that consumes massively more wattage

power consumption doesn't scale linearly with performance

the absolute best-of-class NVIDIA discrete GPU offering could quite possibly outperform the Apple GPU at the same power level

Or, to put it another way, to recover that remaining 50% of performance (a 2x gain), the increase in power consumption would be far more than proportional (a lot more than 2x, more like 10x)

a year ago

seanmcdirmid

Wait, a $7000 computer doesn’t come with a discrete GPU?

a year ago

StewardMcOy

It's not just that it doesn't come with a discrete GPU. Apple Silicon doesn't support discrete GPUs to begin with.

As far as I understand it—and this is just from watching Apple's presentations on the architecture—the lack of a discrete GPU is a big part of how the Apple Silicon machines achieve good performance per watt.

Instead of having discrete RAM or a discrete GPU with its own VRAM, all of the RAM is accessible to the CPU and the GPU in a unified memory architecture. On the M2 Ultra, this allows for 800 GB/s of memory bandwidth, and also eliminates a lot of the need to copy data from RAM to VRAM, as both the GPU and the CPU can access the same memory. In return, this allows the GPU to match the performance of discrete GPUs that have a lot more cores.

Of course, the big downside is that you can't expand the RAM or install a beefier GPU. It's all baked into the logic board.

a year ago

wtallis

> and also eliminates a lot of the need to copy data from RAM to VRAM, as both the GPU and the CPU can access the same memory. In return, this allows the GPU to match the performance of discrete GPUs that have a lot more cores.

Plenty of PC hardware reviewers have done sensitivity analysis experiments to see how discrete GPU performance is affected by running with a slower or narrower PCIe link. The consensus is usually that GPUs connected by PCIe have more than sufficient bandwidth, and cutting it in half only affects gaming framerates by a few percent. Tighter coupling between CPU and GPU can plausibly have a bigger impact for some GPU compute workloads, but for traditional 3D graphics it doesn't help performance much.

a year ago

astrange

The only reason it wouldn't matter is that software is designed for it to not matter.

It enables software to be designed differently, i.e. by never having to copy things to VRAM.

a year ago

wtallis

That's true to an extent, but when discussing the current performance of GPU hardware the context is necessarily software that actually exists rather than hypothetical software that could exist in the future. The high CPU-to-GPU bandwidth is not currently enabling Apple's GPUs to punch above their weight class (GFLOPS or whatever) except in scenarios where competing GPUs simply don't have enough VRAM. In a future where applications are written with Apple's architecture in mind, any advantage this provides would pretty much boil down to using more VRAM because VRAM capacity isn't as limited anymore.

a year ago

Retric

You can generally ignore FPS numbers as irrelevant outside of extremes; what people notice is stutter, and that's where PCIe link bandwidth matters.

a year ago

wtallis

I think it's more accurate to say that PCIe link bandwidth matters when your VRAM capacity starts to become insufficient, a problem that Apple sidesteps. The key advantage of Apple's unified memory strategy isn't fast communication between CPU and GPU, but in granting the GPU access to a high-capacity memory pool (with some side benefits of giving the CPU access to GPU-like memory bandwidth).

a year ago

dtx1

Measured as one-percent lows, and again, most GPUs under most conditions simply don't saturate PCIe enough for that to make a difference. Der8auer and GamersNexus on YouTube have plenty of examples.

a year ago

Retric

1% lows in a given benchmark aren't really measuring stutter in practice; what you want is the percentage of frames that don't finish in time.

If once a minute you miss 4 frames in a row, that's noticeable even if everything else is rock solid. The thing is, people adjust their resolution/settings to reach an acceptable FPS, so it's fairly GPU independent. What matters is rendering volatility as assets are loaded, etc.

a year ago

sudosysgen

That metric is irrelevant now that we have adaptive refresh rates. You don't really "miss" a frame anymore. 1% low is now the correct metric.

a year ago

Retric

If the pipeline stalls for 70ms it's a problem: even if in theory you don't "miss" a frame, you still aren't getting new frames until rendering finishes.

All adaptive refresh rates do is avoid unwanted delays after the frame finishes.

a year ago

sudosysgen

That's not how it works. A 70ms pipeline stall means a ~90ms frame which means that your 1% framerate low is going to be reduced.

Adaptive refresh rate means that only frame rate matters. You can't miss a frame, you can only have a frame take longer, and that's measured perfectly well by 1% or 0.1% low frame rates.

Also I don't see why an M series CPU is going to be any better at feeding the (asynchronous, multiple frames in flight) GPU rendering pipeline than a modern x86 CPU, when both are just as fast. macOS is also pretty bad at realtime scheduling for heavy workloads compared to modern Linux and Windows.

a year ago

Retric

Yes, an occasional 90ms stall reduces 1% frame rates, but by less than how annoying it is.

A 10 minute test at 120 FPS is 10 * 60 * 120 ~= 72,000 frames. Your 1% low is an average over the worst ~720 frames, but so what if 300 of them are at 60 FPS? You're going to have trouble noticing.

Whereas if you're almost rock solid at 120 FPS, a dozen 70ms stalls make it really obvious that something is wrong, yet the calculated 1% lows may actually look better. This is especially true as those stalls generally correlate with interesting things happening.

The M series CPU comparison people are talking about isn't average FPS or even 1% lows, where dedicated graphics cards have an advantage; the comparison is what happens when things go very wrong. And it's in that very specific case when they may have an advantage.
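
If you want to check that arithmetic, here's a quick sketch with synthetic frame times matching the two scenarios above:

```python
# Two synthetic 10-minute runs at a 120 FPS target (~72,000 frames):
# run A: 300 scattered frames at 60 FPS, everything else at 120 FPS
# run B: a dozen 70 ms stalls, everything else at 120 FPS
# The stutter-free-feeling run A actually scores a worse 1% low.

def one_percent_low_fps(frame_times_ms):
    worst = sorted(frame_times_ms, reverse=True)
    worst = worst[: max(1, len(worst) // 100)]      # worst 1% of frames
    return 1000 / (sum(worst) / len(worst))         # average of worst 1%, as FPS

total = 10 * 60 * 120
run_a = [16.7] * 300 + [8.33] * (total - 300)
run_b = [70.0] * 12 + [8.33] * (total - 12)

print(f"run A 1% low: {one_percent_low_fps(run_a):.0f} FPS")   # ~85
print(f"run B 1% low: {one_percent_low_fps(run_b):.0f} FPS")   # ~107
```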

a year ago

sudosysgen

This is a completely misguided analysis. First of all, there is nothing in the M series architecture that's going to avoid pipeline stalls, let alone huge 70ms graphics pipeline stalls. As I've explained above, games keep multiple frame pipelines in flight at once, so you don't have these kinds of issues unless there is a scheduling or I/O issue. There is no advantage here.

The only advantage is that the transfer from GPU to CPU is faster in terms of latency. This doesn't cause pipeline stalls, because as I've explained above, frames are kept in flight, so latency is not critical. At the same time, modern CPU-GPU interconnects have similar bandwidth to RAM.

Additionally, you're not the first person to have thought about frame pacing. Dozens of reviewers have full frame pacing graphs with the frame times of every frame, as well as 0.1% lows, and we simply don't see what you're describing. 100ms+ frames are not a problem on modern games and modern hardware, and when they are, it's because of some blocking read to storage or to some scheduling issue, not because of CPU/GPU speed.

The impact of link speed on 0.1% low framerates has already been investigated, and it's minimal, and there isn't even an advantage here.

a year ago

[deleted]
a year ago

Retric

> 100ms+ frames are not a problem on modern games and modern hardware, and when they are

There’s a multitude of such bugs which ship with modern titles. Benchmarking sites don’t use the release day build of 2077 for very good reasons.

a year ago

sudosysgen

That's a problem with release day video games being unfinished buggy messes, not something that's going to change with hardware.

10 months ago

wtallis

> While if you’re almost rock solid at 120FPS a dozen 70ms stalls it’s really obvious that something is wrong but the calculated 1% low’s may actually look better.

I think this is an instance of the coordinated omission problem: you cannot simply measure the latency of the frames that were completed, but instead have to consider the frames that should have been delivered during a stall. Looking at frame time percentiles means a stall only penalizes your metric with one bad frame, when it should be penalized for missing several frames and showing the user an increasingly stale image for several average-frametimes.
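
A small sketch of what correcting for that looks like, using synthetic frame times and an assumed 120 FPS target:

```python
# Coordinated omission in frame metrics: a stall shows up as one slow
# sample, but the user actually missed several frame intervals.
# Frame times below are synthetic.

TARGET_MS = 8.33                              # 120 FPS target (assumed)

frames = [TARGET_MS] * 7188 + [70.0] * 12     # ~1 min at 120 FPS with a dozen stalls

slow_samples = sum(t > 2 * TARGET_MS for t in frames)
missed_intervals = sum(max(0, round(t / TARGET_MS) - 1) for t in frames)
expected_intervals = sum(round(t / TARGET_MS) for t in frames)

print(f"slow recorded frames: {slow_samples / len(frames):.2%}")               # ~0.17%
print(f"missed frame intervals: {missed_intervals / expected_intervals:.2%}")  # ~1.15%
```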

a year ago

sudosysgen

This isn't actually true. Looking at averages will in fact correctly reflect average frame staleness over a sampling period. A single frame that's 5x too long or 4 frames that are 2x too long have the same impact on the total sum of frame staleness deltas.

Think of it like Little's law in queueing theory: it's better to simply look at different percentiles, like the 99th or 99.9th. The average framerate correctly indicates the average staleness.

a year ago

wtallis

> A single frame that's 5x too long or 4 frames that are 2x too long have the same impact on the average delta of frame staleness on the total sum.

It's not clear to me: are you saying those two scenarios should be quantified as equally bad? Because it seems pretty obvious to me that the 5x outlier is qualitatively much worse.

a year ago

sudosysgen

The fact that the 5x outlier may be qualitatively worse is something that you pin down using percentiles, which is what everyone does, either 1% or 0.1% low. The argument about time-weighted frame staleness is not mathematically valid.

a year ago

Retric

There's nothing inherently mathematically special about 1% or 0.1%; you could also use 0.0042% or 0.00069%, which highlight these kinds of extreme outliers more.

Benchmarking sites use 1% or 0.1% because those better highlight the commodity PC hardware rather than a game's bugs and architecture.

a year ago

sudosysgen

That's not true. You can't use 0.0042% because then you don't have enough frames to be statistically significant without running for a very long time. It's also true that a frame pacing issue that is so rare that it isn't even impacting the 0.1% frame time isn't going to be noticeable, really.

These benchmarks are controlled for game bugs and architecture by simply averaging over ~40-50 games, on repeatable loops.

10 months ago

[deleted]
a year ago

numpad0

Or memory slots, or dual/quad socket capability, or compatibility with the GPU cards from the previous-gen Mac Pro. You get one multicore ARM SoC with 192GB of on-package L4 cache and integrated graphics, and that's it.

While 192GB of RAM is more than I would need, people looking to use 1.5TB of RAM and a pair of NVIDIA GPUs they had from the previous model will have to go elsewhere.

Which leaves me wondering: how much of Apple's engineering is happening on Macs?

a year ago

cozzyd

Not much engineering software runs on MacOS.

a year ago

captainbland

I think a useful way of thinking about this is that it has something like $300-400 worth of discrete-GPU-equivalent performance, assuming the article's claims are correct (not exactly a given...), as it will get you something similar to a 3060 Ti or 4060 Ti.

a year ago

garbagecoder

My M1 Pro is about 33% as fast as a box with a 3060 in it on PyTorch and obviously it is way more efficient and doesn't require a special power supply, which is just taken for granted as something you have to do with a PC.

The best case this article can make is that if you need to play the latest game or do intense ML stuff you probably want NVIDIA, but that's the same as it ever was.
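
For what it's worth, a minimal sketch of the kind of comparison I mean; it assumes a reasonably recent PyTorch build with the MPS backend, and the matrix size and timing method are arbitrary choices, not a rigorous benchmark:

```python
import time
import torch

def time_matmul(device: str, n: int = 4096, repeats: int = 10) -> float:
    """Rough average seconds per n x n matmul on one device."""
    a = torch.rand(n, n, device=device)
    b = torch.rand(n, n, device=device)
    (a @ b).cpu()                       # warm-up; .cpu() forces synchronization
    start = time.perf_counter()
    for _ in range(repeats):
        c = a @ b
    c.cpu()                             # sync again before stopping the clock
    return (time.perf_counter() - start) / repeats

devices = ["cpu"]
if torch.backends.mps.is_available():   # Apple Silicon GPU backend
    devices.append("mps")
if torch.cuda.is_available():           # NVIDIA GPU backend
    devices.append("cuda")

for dev in devices:
    print(f"{dev}: {time_matmul(dev) * 1000:.1f} ms per 4096x4096 matmul")
```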

a year ago

incrudible

It also costs a lot more. For the money you can get a laptop with a 4080m and reduce its power limit and probably get the same or better efficiency, but with room to the top if you need it.

a year ago

garbagecoder

And I like it more, so I am willing to pay more. The implicit notion that I’m getting the same thing for less just isn’t true.

10 months ago

incrudible

You are getting better performance potential and equal or better efficiency at comparable wattage, for a better price. Of course you will not get it the shape of an Apple laptop. We were talking about how Apple chips supposedly have superior efficiency, but at least in terms of GPU that is not true at all.

10 months ago

BoorishBears

They said it's not faster than the desktop parts, which is almost a false statement since it actually is faster at multithreaded than both base desktop SKUs. And then they switch to providing a specific percentage when comparing to the 4080 (also based on a deceptively narrow comparison, one that their other articles counter) for dramatic effect.

I don't think it was ever meant to be the most informative article: it seems written to serve the contrarians because that's profitable from a readership perspective for a publication like this.

a year ago

whywhywhywhy

4090 is what it should be compared to, that's the equivalent card you'd be looking at for those workflows.

a year ago

incrudible

NVIDIA still beats Apple even in efficiency, or so I have read. If you care about wattage that much, you can just underclock and undervolt your GPU. It is free and the police can not stop you.

a year ago

olyjohn

[flagged]

a year ago

weego

My Audi S5 is slower than a GT3 Touring car.

It's just excuses.

a year ago

jacooper

Absolute best of class? What?

A 4080 is best of class?

And second of all, for the price it should be compared to the 4090, which absolutely demolishes it.

a year ago

2OEH8eoCRo0

For $7,000 there should be no caveats, it should be beating every measurable metric outright.

a year ago

danieldk

First, a Mac Studio with an M2 Ultra is $3999.

Second, people don't buy Macs only for performance. They also buy Macs for macOS, for integration between devices, for a system that is cool and quiet, for hardware acceleration of ProRes, for on-device privacy-preserving machine learning. Being a bit slower than competing AMD and Intel systems is acceptable, because you get so many other desirable properties in return.

I'd definitely consider a Mac Studio with an M2 Max or M2 Ultra if I didn't want something portable. I would never buy a competing machine with an Intel or AMD chip, because I don't want to deal with Windows or desktop Linux.

Other people have another set of priorities and that is fine.

a year ago

liuliu

I don't know. Nowadays a top-of-the-spec Linux workstation without any markup on top (i.e. bare components) is around ~$10k. The 5995WX in the graph wccftech provided costs ~$5k alone.

a year ago

TradingPlaces

The article does not say if it’s a Mac Studio or Mac Pro. I think that matters. My understanding is that there is no TDP on Apple Silicon, and that it has a lot of runway in a less constrained thermal environment like the new Mac Pro vs Mac Studio.

While I’m still surprised they didn’t put a second Ultra in the Mac Pro, I’m betting there’s a wider delta than people imagine between the two form factors.

a year ago

wmf

The Mac Studio doesn't throttle so there is no extra headroom for the Mac Pro.

a year ago

TradingPlaces

We’ll find out next week

a year ago

llm_nerd

The M2 has been a bit underwhelming in general through all of its iterations: Apple jumped into such a lead with the M1 [1], so it was disappointing when they slowly iterated while Intel and AMD have made enormous strides catching up. Everyone keeps citing TDP, but given that we're talking about desktops that just isn't a huge factor.

Having said that, the 7950X was released late February, and the 13900KS was released in mid January. Both of this year. Both are their premiere available chips right now in the segment. Referring to them like they're last year's junk is rather silly.

[1] Though fun fact with the M1, I remember super disappointing Geekbench results leaking before its release. People do know how low trust the site is, right? The computer identifiers on the claimed "M2 Ultra" devices claim to be Macbook Pro 14" devices....which aren't getting M2 Ultras for obvious reasons. In all likelihood someone is making guesses and posting nonsense.

a year ago

jerrygenser

> Having said that, the 7950X was released late February, and the 13900KS was released in mid January. Both of this year. Both are their premiere available chips right now in the segment. Referring to them like they're last year's junk is rather silly.

7950X was released September, 2022. It's quite literally last year's chip and given that AMD release cycle is typically about 2 years, we're roughly halfway between last release and 9000 series release.

You might be getting it confused with the non-X versions that were released earlier in 2023 -- those are basically the same chip but power limited and maybe slightly worse selections of silicon. Of those non-X versions there were the 7600, 7700, and 7900, but no non-X 7950 was released. [1]

[1] https://en.wikipedia.org/wiki/List_of_AMD_Ryzen_processors

a year ago

wtallis

I think he was getting the 7950X confused with the 7950X3D, the latter of which was released in late February.

a year ago

spiderfarmer

Desktop PCs requiring a ludicrous amount of power and cooling is absolutely problematic. The number of people willing to put up with big honking machines is dwindling.

a year ago

zamalek

> honking

May I assume that you haven't used a desktop from some year equivalent to whatever Mac you use? Because modern desktops are far from "honking."

I have a 7950X (the high-core-count desktop offering). AMD requires (save for one brand) water-cooling for it. That means you get big quiet fans, which are far less "honking" than the tiny loud ones that are in laptops by necessity. In fact, due to the nice large radiators that liquid coolers have, the fans don't spin at all the majority of the time.

When I do need power, it's on-tap.

My 6900 XT is a big honker during gaming, but you can get really quiet $300 GPUs. Or just use the integrated graphics and enjoy the quiet liquid cooling.

> big

https://en.wikipedia.org/wiki/Mini-ITX

a year ago

heelix

Doing the same thing. I ended up putting a 7950x into what was intended to be my next threadripper chassis and with stock settings the fans don't turn on. The only moving part would be the water pump, which is very quiet. Once in a while I get a gurgle from a bubble.

a year ago

pdpi

> Desktop pcs requiring a ludicrous amount of power and cooling is absolutely problematic.

High-end PSUs now overlap with smaller space heaters on power output. My living room is usually 3-4 degrees hotter around my desk than it is by the dining table. I'm looking to upgrade my gaming PC, and getting the power budget under control is surprisingly challenging.

a year ago

jerrygenser

For your gaming PC you likely don't even need a current-gen chip. You are probably fine with a 5600 for most games, unless you NEED high FPS in certain games and have a monitor with a 240Hz refresh rate (or you won't even get the frames). Gaming is mostly about the GPU (if you're a casual gamer and want to run 1440p), but if you do need CPU and a high refresh rate then you're likely playing hardcore FPS or RTS games, running at 1080p anyway, and in that situation you don't need a high-end GPU.

I have an AMD 7700X and I run it in Eco Mode, which is approximately a 65W TDP instead of the 105W TDP it wants to run at.

For my general use (including cpu intensive operations) this makes absolutely no perceivable difference. I ran some benchmarks out of curiosity and I take about a 5% haircut for a massive power and heat savings.

What Intel and AMD did with the chips was essentially sell consumers a default overclocked chip that will run to the max of the thermal headroom that your fan will allow and sit there. They did this to be competitive with each other in benchmarks for marketing.

Most consumers should run these chips in some sort of eco mode since the performance per watt has severe diminishing returns and they are actually quite power efficient as long as you don't run it on the default factory overclocked settings.
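
As a rough sketch of how lopsided that trade is (the 5% haircut is my own rough measurement; yours may differ, and the wattages are the configured TDP settings):

```python
# Perf per watt at the stock limit vs eco mode, assuming roughly a 5%
# multithreaded haircut (a rough measurement, not a guaranteed figure).

configs = {"stock (105W)": (105, 1.00), "eco (65W)": (65, 0.95)}

for name, (watts, rel_perf) in configs.items():
    print(f"{name}: {rel_perf / watts:.4f} perf units per watt")

# Eco mode here is ~38% less configured power for ~5% less throughput,
# i.e. roughly 1.5x the performance per watt.
```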

a year ago

nunodonato

except if you are playing Flight Simulator... in VR. Then even a current-gen build will struggle a bit :)

a year ago

eikenberry

> For your gaming PC you likely don't even need a current gen chip.

Many gamers, like me, probably also use their display for things aside from gaming. For non-gaming computer work I like 4K w/ a high (120hz ideally) refresh rate for well rendered text and smoother movement. But as I also then want to use this monitor for gaming, I need a gaming system that supports this.

TLDR; you might need a current gen chip for reasons other than gaming.

a year ago

jerrygenser

My 49-inch 144Hz monitor ran fine for this exact use case when I had an AMD 3600 and an Nvidia 1660. Is your work use case somehow more demanding than gaming?

a year ago

eikenberry

I've read that to run higher end games at 4k you need something like an Nvidia 3070 or higher. I currently have a 2060-Super and it struggles at times with 2k/1440p. Probably go AMD next upgrade as they are reviewed to be better at 4k and have much better heat/power reqs.

10 months ago

tiberious726

> High-end PSUs now overlap with smaller space heaters on power output

Technically true, but vacuously so. The practical limit of a standard US circuit is about 1500W, which is why you see all space heaters and the biggest PSUs hitting that figure.

a year ago

BaculumMeumEst

AMD and Intel are jacking up the default power settings to absurd levels because it gives marketing bigger numbers to throw around.

You can drastically reduce the power you supply to desktop chips with BIOS settings. You'll generate far less heat, can use a smaller power supply and form factor, while still getting great performance.

If Intel/AMD get to a level where their chips rival M1/M2 while power throttled, things get interesting.

a year ago

Lapha

Both companies are juicing up the default power limits which absolutely matters if you're concerned about total Wh for sustained loads, thermal considerations, PSU rating, noise, etc, as you mentioned, but to be fair to AMD here they've released an eco mode on the 7000 series which does lower the default power limit quite substantially. You could do this before with manual adjustments to voltage and clock speeds, but now it's a simple toggle.

Another point is that in terms of power efficiency AMD are absolutely mopping the floor with Intel, where even older high power R9s are consistently outperforming Intel's lower power i5 and i7s by a wide margin. This sort of discussion is often left out of reviews which only look at TDP or peak wall power. It's still not ARM levels of efficiency, but x86 vs ARM or even desktop vs laptop efficiency is an entirely separate conversation.

a year ago

ilyt

I have a 120W TDP chip on default settings and it only ramps up to the max in rare cases, like compiling shaders for RPCS3.

> If Intel/AMD get to a level where their chips rival M1/M2 while power throttled, things get interesting.

For laptop users, maybe. For desktops an extra 40W literally doesn't matter.

a year ago

jerrygenser

In general, both from my own benchmarks and from what I've read and seen in YouTube videos, it's about a 5% performance loss multithreaded with full core saturation, and a 0-1% performance haircut single threaded -- since single threaded you are generating low enough heat not to go near the high end.

For me this was going from 105W to 65W on my 7700X and my results were similar.

Not sure how it would look or work with Intel though.

a year ago

burmanm

Is it? Why?

My machine is entirely silent on normal operations (Zen 3 5600X and RX 6700). All the fans, on both GPU and CPU are stopped in desktop usage. And it doesn't obviously eat that much power (my monitor probably eats more).

The only moment I could hear them is if I play games. And then I have headset on, so I can't hear them.

I really couldn't care less if it ate twice the amount of power when playing games.

a year ago

pdpi

> Is it? Why?

Because, while gaming, the vents in my case blow air at over 40 degrees Celsius, and that heat has to go somewhere. Absent a setup that can put the case several meters away from me, that "somewhere" is on top of me.

a year ago

ilyt

Flip it 180 degrees? Cases are designed to blow the air out the back, not to have the back facing the user...

a year ago

waboremo

You should care because you're paying for that energy use lol

a year ago

burmanm

In the end, the amount of energy gaming takes is meaningless in money terms. I spend more on coffee while playing than on electricity.

a year ago

waboremo

Right, and you could have afforded better/more coffee with the savings from consuming less power. Which is the point: energy costs are not zero. These costs vary depending on which device you play on, regardless of the games. Someone playing the same games as you on a PS5 rather than a gaming PC is going to save enough money to buy another indie game per month, or more coffees in that month than you.

Note: this isn't about debating which platform is best, rather understanding power consumption impact and the cost to your budget.

Are these costs major? Life changing? Absolutely not. But you should care about them and make decisions incorporating them, not ignore them as if power consumption were irrelevant even when doubled!

a year ago

burmanm

In the end, living in a northern region (Finland), my PC power costs are in reality meaningless. My gaming probably eats what, a few hundred kWh per year? Let's say 1000 kWh to make it something meaningful. 1000 kWh is less than the yearly fluctuation in the house's heating-related electricity consumption. It's less than my Tesla's winter consumption fluctuation, depending on how much heating the battery/car requires before I can open the doors.

See the insanity of optimizing a couple of watts out of a PC? I don't play 12 hours a day; I play after work in the evenings, at best a few hours a day, and not every day.

My workstation's power usage does not have any impact on my life. Even if it doubled, tripled, or quadrupled, it would be lost in the margins of everything else I use electricity for.

A PS5 eats about the same amount of power, to be fair (as does my XSX), so there's not much to save there. Also, they're a bit slower than my desktop and don't provide the flexibility.

Proper gaming performance requires power, as no one (not AMD, not NVIDIA, not Apple) has invented a golden goose that reduces it at the same performance. I'd even wonder whether Apple's GPU usage in the M2 Ultra is that power efficient in real-world games, given how low the FPS counters are.

Theoretical performance is pointless in the end if it doesn't actually crunch anything that fast in reality. Maybe the glued-together architecture is only nice on paper, and that's why Apple, after years of ignoring specs, is now only advertising specs for their GPU parts rather than what they can actually do.

a year ago

tester756

Depends what the trade-off is.

Can additional performance allow me to do more while e.g. gaming?

Like doing some stuff in the background: having VMs, IDEs, and Docker containers running?

a year ago

tehnub

>ludicrous amount of power and cooling

>big honking machines

Buy an all-in-one water cooler for the CPU https://pcpartpicker.com/product/2PFKHx/arctic-liquid-freeze...

It's easy to install and you have a quiet, well-cooled CPU.

a year ago

kitsunesoba

AIOs aren't a great option for users who want to set it and forget it. Even the best ones lose fluid to sublimation and/or start leaking after a few years, which is why there's still a market for coolers like the NH-D15.

a year ago

sgtnoodle

I went with the NH-D15 when I built my 5950x desktop. I've never fiddled with liquid cooling, and having to locate a radiator seemed like a pain. I very much hope not to need to touch it again for a decade. After tweaking the fan profile, it's silent most of the time. Of course, it does make it kind of a pain to remove my GPU because it's so bulky; I need to use a shim to reach the PCI-E latch.

a year ago

kitsunesoba

> I need to use a shim to reach the PCI-E latch.

Similar here, there's not much room left when you've got a NH-D15 and EVGA 3080 Ti FTW3 installed.

This is more of an inadequacy on the part of motherboard manufacturers, though. PCI-E latches don't have to be terrible… look at the system Apple uses in their 2019 and 2023 Mac Pro towers for example which remain accessible regardless of what you have installed.

a year ago

ilyt

My 120W 7800X3D only managed to heat that monster to 70C. I should probably bump the thermal profile to spin to max only around 80C...

a year ago

wtallis

How much did you have to crank up the voltages on the 7800X3D to get it to 120W? I can't get mine much past 85W at stock voltage and clocks, even with the memory running at DDR5-6000.

a year ago

ilyt

Entirely stock, not even EXPO overclocking.

I saw that during shader compilation in RPCS3; it's pretty hard to see otherwise.

a year ago

Kirby64

Citation needed. "The best" AIO coolers shouldn't have issues with fluid loss in a meaningful way that impacts performance.

Likewise, I'm not aware of any widescale issues with AIOs leaking. This seems like just as much of a problem as regular CPU coolers that need to have dust removed (i.e., not a problem).

a year ago

tehnub

They won't last forever but Arctic does have a six year warranty https://www.arctic.de/us/blog/extended-warranty-period-for-a...

a year ago

marsven_422

[dead]

a year ago

kitsunesoba

IMO driving down power consumption should still be a goal even for desktop machines. It means that the overwhelming majority of users get cooler, quieter, less bulky machines, while the fringe who need raw power at all costs can overclock to the moon if they really want to.

This also pushes more "desktop like" performance to both ultraportable and reasonably portable laptops, allowing these machines to fully replace desktops for the overwhelming majority without all of the caveats that come with "desktop replacement" laptops (workstation laptops, heavy duty gaming laptops, etc). A lot of people who previously wouldn't have seen laptops as capable of being their primary machines are doing exactly that with M1/M2 Pro MBPs.

a year ago

automatic6131

>while the fringe who need raw power at all costs can overclock to the moon if they really want to.

This is actually the case! The 7950X and 13900K come in non-X and non-K variants, which have vastly reduced power footprints and overall consumption, and you can even take your X or K variant and... enforce that exact same power envelope in BIOS, for minimal loss in performance. But the purchasers of desktop X and K SKUs are the overclocking fringe (by and large). I will admit though, a lot of laptops are sold with i7/i9 and Ryzen 7/9 HX variants that shouldn't be purchased, because a bigger number means an easier upsell.

a year ago

ilyt

> IMO driving down power consumption should still be a goal even for desktop machines. It means that the overwhelming majority of users get cooler, quieter, less bulky machines, while the fringe who need raw power at all costs can overclock to the moon if they really want to.

It is the goal (after all more efficiency also means fitting more powerful cores into the thermal envelope) but given the choice most desktop users would be fine with "just a bit bigger box" rather than sacrifice performance for the price.

a year ago

birdyrooster

With power in city downtowns (eg San Jose) at $0.50+/kWh it’s definitely becoming a huge factor. My Mac uses almost an order of magnitude less power than the PC and I keep the PC powered off.

a year ago

theevilsharpie

> My Mac uses almost an order of magnitude less power than the PC and I keep the PC powered off.

Either you are not measuring power consumption correctly, or there is something very wrong with your PC.

a year ago

llm_nerd

I believe they're saying they keep it powered off because it uses an order of magnitude more power.

And it's an entirely believable statement. A Mac Mini uses between 5W and 20W. Many PCs idle at >50W, and under load hit 500W+.

a year ago

betaby

Perhaps, but those would be very different PCs. Also, while they exist, 500W+ desktops are rare, and their performance is not comparable to a Mac Mini's.

a year ago

yyyk

That's not a fair comparison. We should be comparing Mac Mini to NUCs if power consumption is the relevant factor. Plenty of NUCs are rated for 15w.

a year ago

alphabettsy

And those NUCs don’t match the performance.

10 months ago

wtallis

Or it's an AMD desktop. Their chiplet design really isn't good for idle power.

a year ago

ilyt

Seriously, even when doing nothing whatsoever (and cores themselves idling at 1.3W) the package is taking 20W.

a year ago

Kubuxu

Disable memory overclock or undervolt the memory and SoC. You will see the power drop down like crazy.

a year ago

yyyk

Very likely any difference was already paid for with the Apple premium.

a year ago

wmf

> The computer identifiers on the claimed "M2 Ultra" devices claim to be Macbook Pro 14" devices

I think you're confusing Mac14,14 which is the internal designation for the Mac Studio with a MacBook Pro 14". The leak if anyone's interested: https://browser.geekbench.com/v5/cpu/compare/21305974?baseli...

a year ago

BirAdam

From what I understand, the M2 was released due to TSMC struggling with yields on their N3 process. Apple had to do something with the N5 node to keep hardware available to consumers, and M2 was the result.

a year ago

Demmme

How does that not matter in a fair comparison?

It's a no-brainer anyway that you get more performance from a non-Apple desktop much cheaper, due to Apple's pricing.

Apple has a huge advantage price / performance wise with the cheap m based Mac book air.

The comparison with a MacBook Pro, which costs $2-3k, is slightly less so.

a year ago

zamalek

> Apple has a huge advantage price / performance wise

My 7950X machine cost $3000, excluding the enthusiast GPU. That's less than half the cost of the M2 Ultra.

a year ago

masswerk

Mind the second part of the quote,

> Apple has a huge advantage price / performance wise with the cheap m based Mac book air.

In the US, the MBA is $999 with the M1 and $1099 with the M2. (You can even get them for about $800 on sale.) This is an entirely different segment.

a year ago

brailsafe

You spent $3000 before gpu on a custom PC!?

a year ago

sgtnoodle

$350 motherboard, $800 CPU, $200 RAM, $150 NVMe, $100 cooler, $150 PSU, $200 case. That's $1950, plus CA tax, so about $2100.

You could easily add another $1K by going crazy with RAM and NVME storage.

a year ago

brailsafe

$2000 I could see, but I do think more people are overspending on CPU for gaming PCs by probably double. Maybe not overspending if you're getting a tangible productive value out of it, but I suppose they didn't specify gaming. For gaming specifically I'd probably try and balance the useful GPU power I need with the minimum necessary CPU to prevent bottlenecking.

Part of the reason I haven't upgraded my intel macbook pro, is just because I think the cost all-in seems outrageous, even for someone who exclusively works on mac. I can't rationalize $500/16gb of ram or w/e. I haven't upgraded my gaming PC much, because the cost of GPUs very quickly overwhelms the performance improvement I'd get value out of compared to my severely out of date gear that I found by the roadside.

10 months ago

zamalek

It was the height of the pandemic and shortages. I guess it could be a lot cheaper today.

a year ago

rched

This article strangely cites the cost of the Mac Pro with M2 Ultra. You can get the same M2 Ultra in a Mac Studio starting at $4000.

a year ago

mrcwinn

The results speak for themselves (Apple is slower than some) but the context does matter: Apple has been talking about performance per watt since Steve Jobs showed a keynote slide announcing the transition from PowerPC.

The cynical view is that Apple is intentionally misleading customers with their ambiguous graph axes. Another perspective is they’re simply demonstrating the metric they’ve optimized for in the first place.

a year ago

kayson

Do people really care about efficiency for plugged-in devices? There is the environmental factor, but there are people on 100% renewables too...

a year ago

IneffablePigeon

I bought an m series device to replace my windows desktop mostly for efficiency reasons. It saves me hundreds a year in electricity costs at the moment (quite literally, I calculated it beforehand and checked real usage afterwards). It also means my office is now silent and cool. It’s a big QoL improvement for me.
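
The shape of that calculation is simple; the wattages, hours, and tariff below are placeholders rather than my actual numbers:

```python
# Rough annual electricity cost difference between two machines.
# All inputs are placeholders; substitute measured wattage and your tariff.

def annual_cost(avg_watts, hours_per_day, price_per_kwh):
    return avg_watts / 1000 * hours_per_day * 365 * price_per_kwh

desktop  = annual_cost(avg_watts=300, hours_per_day=8, price_per_kwh=0.40)
m_series = annual_cost(avg_watts=40,  hours_per_day=8, price_per_kwh=0.40)

print(f"desktop: ~${desktop:.0f}/yr, M-series: ~${m_series:.0f}/yr, "
      f"saving: ~${desktop - m_series:.0f}/yr")
```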

a year ago

menus

Was cost of the device also taken into consideration in your overall cost calculation? What was the cost of your Windows desktop and what was the cost of your M series device you replaced it with?

Have you ever upgraded any components of your Windows desktop (RAM, GPU, CPU, motherboard) or did you discard the entire thing?

a year ago

THENATHE

Realistically, assume you were going to upgrade anyway.

$1200 PC vs $1500-$2000 MacBook is really only a $300-800 difference, and MacBooks hold their value on the second-hand market EXPONENTIALLY better than Windows machines.

So that's 3-8 years at $100 a year, but also it's worth 3x as much as the comparable PC in 5 years.

a year ago

menus

> Realistically, assume you were going to upgrade anyway.

Realistically, you don't need to replace the entire thing. Replace mobo+CPU, GPU, PSU, RAM, case as necessary.

Laptop is a different story. Replacing a desktop with a laptop and then comparing them is not a fair comparison.

10 months ago

tokamak-teapot

Similar story here. I replaced an AMD 3xxx series Hackintosh workstation with an M1 Air after running both together and finding they were exactly as fast as each other.

a year ago

acdha

Even if your power is free, heat means a toasty office and fan noise, not to mention being careful about air flow around your furniture. Not a deal breaker but 100% of the Apple Silicon users I know have mentioned not previously having appreciated just how much noise their old systems made.

a year ago

tokamak-teapot

I do. I don’t like noise, or having my office heated in the summer, or the cost of hundreds of watts.

a year ago

danaris

Slightly ironic, coming from someone who claims to use a tokamak to heat a teapot!

a year ago

willtemperley

Yes. Especially in warm countries when it’ll add to your air con bill too

a year ago

BoorishBears

Like I said in another comment, the results say Apple is faster than both base SKUs on multithreaded and within the margin of error on single threaded once you consider they only used one Geekbench result.

I don't think it's misleading anyone to say they're faster; when their on-SoC graphics are 4060-level, they definitely are overall.

a year ago

nateb2022

CPUs are intentionally engineered for a target market. When engineering the M2, Apple had a certain market in mind, and design and performance trade-offs were made accordingly. As others have noted, power efficiency seems to be a big priority.

The M2 Ultra has a TDP of 60W. The i9-13900K has a base consumption of 125W and draws up to 253W under stress. So do the math: Intel achieves 32% better single-core performance and 41% better multi-core performance (Cinebench) for 422% of Apple's power consumption. If there's something impressive here, it's that Apple is able to do so much with so little.

If Apple wanted to, they could probably conjoin two M2 Ultras and soundly beat the i9-13900K by a considerable margin while still using about half the power. The real question is why any consumer would need that much compute, and the target market for such a niche is probably very small, which is why Apple didn't do it.
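
Making that math explicit, taking the quoted figures at face value:

```python
# Taking the figures above at face value: ~4.2x the power for ~1.3-1.4x
# the performance is roughly 3x worse performance per watt.

apple = {"watts": 60,  "single": 1.00, "multi": 1.00}
intel = {"watts": 253, "single": 1.32, "multi": 1.41}

power_ratio = intel["watts"] / apple["watts"]
print(f"power ratio: {power_ratio:.2f}x")                 # ~4.22x, i.e. 422%
for metric in ("single", "multi"):
    perf_ratio = intel[metric] / apple[metric]
    print(f"{metric}-core: {perf_ratio:.2f}x perf at {power_ratio:.2f}x power"
          f" -> {power_ratio / perf_ratio:.1f}x worse perf/W")
```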

a year ago

pocketarc

The M2 Ultra comes on a $7000 workstation; it's not exactly consumer-level hardware. What you're saying is absolutely true - they could just double it up (like they do from the normal to the Pro, and from the Pro to the Ultra). And I hope they do, for an M3 Ultra. It'd be great if they pushed it even harder and showed Intel/AMD that they can beat them at both the low end and the high end, with plenty of power consumption to spare.

As you said though, it's a very niche thing, so even without beating Intel at the top end they're not going to be losing much. It's just a bit of a shame. The i9-13900K is only $500, it's not like it's some wildly out-of-reach thing.

a year ago

alphabettsy

It comes on a $4000 workstation, the Studio, but the point stands.

10 months ago

bee_rider

The Apple processors are definitely impressive efficiency-wise (and by many other metrics).

But of course, combining any number of M2's won't increase their single-core scores. Intel's desktop chips are there for people who want high clock speeds and are happy to pay for that in power (and the power cost is superlinearly related to clock frequency). The Intel chips are just designed for different use cases; it doesn't make sense to assume either company could just linearly scale things to reproduce each other's products.

The relevance of multi-core scores depends on how parallel the workload is. If we all had perfectly parallel workloads I guess Xeon Phis would have sold better.

a year ago

yunohn

> The real question is why any consumer would need that much compute, and the target market for such a niche is probably very small which is why Apple didn't do that

This is a lazy argument. Why did they make the M1 faster than other CPUs? I would argue that most people don't need even that performance. Why do they put M chips in iPads, which can't even use them optimally?

a year ago

nateb2022

The way I framed my argument, it's a matter of trade-offs. Regarding the iPad, they could have used an A15/A16 from their iPhone line rather than the M1 and still had good performance, but the M1 was probably better for marketing and consumer appeal, hence its inclusion.

10 months ago

Dylan16807

> If Apple wanted to, they could probably conjoin two M2 Ultra's and soundly beat the i9-13900K by a considerable margin and still use about half the same power to do so.

You can play that the other way too: use multiple 13700Ts at 35W each.

a year ago

gumby

Poorly written, resulting in what could look like a bias.

The article mentions that Apple has focused on single-core performance while the x86 processors in question are designed for multi-core use cases. This reflects two different markets being addressed, and the sad state (small amount) of multicore code today.

Also it’s silly to claim that the M2 Ultra is so expensive — you can get the same performance from a 3K “studio” desktop that you do from the >7K “Mac Pro”.

I use Apple for all my "terminals" (Macs, iPhones, etc.) but really want AMD and Intel to keep working on these multithreaded powerhouses, because I depend on them on the cloud side. I don't see that ever being one of Apple's markets. Articles that further that are needed, but this isn't one of them.

a year ago

KerrAvon

haven’t read the article, but there are no single threaded use cases on Apple platforms that Apple cares about, so I’d take any claim that Apple is optimizing for single threaded performance with an enormous bag of salt

a year ago

gumby

I'm pretty sure I remember discussion from Srouji and others on this specific topic at the "apple silicon" WWDC introduction. But I can't find anything useful from Apple in a web search, so it seems my statement is just a "some guy on the Internet" assertion.

It seems pretty clear and unsurprising that Apple optimises their design for their use case (e.g. major consideration of bandwidth to screen in handheld devices, reminiscent of one of the Alto's design criteria) but how that plays out doesn't support my claim either. But Apple's intended use cases aren't the same as the threadripper's.

a year ago

MBCook

> but there are no single threaded use cases on Apple platforms that Apple cares about

I don’t see how that could be true. A huge amount of software tasks are basically single threaded.

Remember since Apple does everything soup-to-nuts they have a ton of performance data from their computers to know what real user workloads look like so they can optimize the hardware + the software for them.

a year ago

gumby

I made this claim (single threaded performance matters) but in a parallel comment to yours noted that I was unable to find substantiating statements from apple that I believe led to my assertion.

So in that spirit I will point out that Apple's support code / frameworks etc. do a bunch of multithreaded UI and network stuff even when an app's code is putatively single threaded.

Now that stuff IMHO is pretty high latency (e.g. waiting on user action) so as a developer I still think my statement, and your impression, are correct. But I'd like to see something from Apple on the topic.

a year ago

astrange

UI latency, the entire point of a device, is a single threaded use case.

Multithreaded performance is only good when you don't care about power use, but that's never true on a battery powered phone. It's actually more often the case that you optimize software by removing accidental excess concurrency than by adding it. Junior engineers love them some unstructured concurrency.

a year ago

mark_l_watson

I don’t feel comfortable with these comparisons because I think Apple designs their SoCs specifically for the software they expect to be most used on their devices. I noticed, for example, that an older iPadPro with an A series SoC compiled Swift code in Playground twice as fast as my then current beefy Intel MacBook Pro. Similar design for running specific neural network architectures, handling their displays, etc.

A few days ago I was thinking of getting a beefy Mac Studio for deep learning since M1 and M2 are now fairly well supported. I didn’t because this felt like swimming up river, using something not as it was designed to be used. (I decided to use $10 or $50/month Google Colab instead).

a year ago

cjbprime

Reasons to still prefer an M2 for a HEDT situation, despite the article claims:

* The latest Intel/AMD desktops only have two channels of DDR5, and if you put as little as 64GB RAM in them they can drop to DDR4 speeds (2DPC, dual-rank), less than 10% of M2 memory bandwidth.

* You can (surprisingly cheaply) buy an M2 with 192GB GPU VRAM, but you can't buy a DDR5 PC with the same amount. If we all start wanting to run LLMs locally that'll be a pretty big deal.
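
To put rough numbers on the bandwidth point above (peak theoretical figures; the DIMM speeds are illustrative and depend on your exact DIMM population):

```python
# Peak theoretical bandwidth of a dual-channel desktop memory setup
# vs the M2 Ultra's quoted 800 GB/s. DIMM speeds are illustrative.

def dual_channel_gb_s(megatransfers_per_s: float) -> float:
    return 2 * 8 * megatransfers_per_s / 1000   # 2 channels x 8-byte bus x MT/s

for label, mts in (("DDR4-3600 fallback", 3600),
                   ("DDR5-4800", 4800),
                   ("DDR5-6000", 6000)):
    bw = dual_channel_gb_s(mts)
    print(f"{label}: ~{bw:.0f} GB/s ({bw / 800:.0%} of 800 GB/s)")
```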

a year ago

phonon

> can't buy a DDR5 PC with the same amount.

Should be available soon.

https://www.techpowerup.com/306005/asus-teases-192-gb-ddr5-m...

a year ago

gautamcgoel

Why is it that using 64GB of RAM causes bandwidth to drop?

a year ago

cjbprime

The bandwidth is fastest with one single-rank DIMM per channel, and there are four slots and two channels, so you ideally want two single-rank (chips on one side) DIMMs total. It'll get slower with 2DPC (DIMMs Per Channel), and slower still with dual-rank 2DPC. Some people are even seeing instability, with that setup at the slow DDR4 speeds.

There aren't any 32GB single-stick DDR5 DIMMs yet, let alone 32GB single-rank single-stick, so if you want 64GB then you're using four sticks of dual-rank 16GB, the worst scenario.

10 months ago

gautamcgoel

Thanks, this makes sense!

10 months ago

spacemadness

“Apple M2 Ultra Is Slow Than Last-Year CPUs.” This site is pretty low on attention to detail. Get an editor at least.

a year ago

trustingtrust

a year ago

anthonyryan1

Where are the performance per watt numbers?

Anyone can get the performance crown by having an unlimited energy budget. Performance per watt is much more valuable in data centers (TCO) and consumer devices (battery life).

a year ago

jacooper

Well, the article is speaking about the M2 Ultra, which is only going to be used in the Mac Pro and the Mac Studio, both desktop computers.

a year ago

kemayo

The Mac Pro does have a rack-mounted configuration for the non-desktop data centers case. (I have no idea whether people will actually use it that way, but it exists.)

https://www.apple.com/shop/buy-mac/mac-pro/rack

a year ago

charrondev

I can see those being bought up for datacenter and CI use. There have been companies hosting huge racks of Mac Minis for ages to do CI for macOS and iOS software.

a year ago

peoplefromibiza

> Anyone can get the performance crown by having an unlimited energy budget

Not really. No.

> Performance per watt is much more valuable in data centers

Assuming performance can be combined.

You can't get the same performance as an Nvidia RTX 4080 using 2 M2s

a year ago

solomatov

Not always; if you are connected to a power source and doing something limited by speed, performance is critical.

For me, everyday use is almost perfect on Apple due to battery life and perf, but professional use is much better on Intel/AMD + Nvidia. Also, you can get much more perf per dollar on such machines.

a year ago

sosborn

> professional use

I hate this terminology. How would anyone define "professional use"

a year ago

solomatov

Something done with the intention of making you money and directly related to a powerful machine, i.e. writing code, editing videos, images, etc. As opposed to casual use where you play simple games, browse the internet, etc.

a year ago

daedalusred

'play simple games' - so are you counting complex games as professional use in your example?

Let's say I'm a professional researcher for some oil company. My job will primarily consist of browsing the web and writing stuff up; does that make that job not fit into the 'professional' category? You're being paid to browse the web and just report on what you found....

a year ago

daedalusred

I'm a professional that gets paid to do things and a MacBook Pro with M1 Max works perfectly for me. This whole 'professional' thing is absurd and full of holes.

a year ago

olyjohn

Queue the "But power consumption..." as though it makes fuckall difference on a workstation.

a year ago

satysin

Perhaps not to you but to me it matters a lot.

I live in a country where it gets very hot in the summer. I prefer to keep my environment cool without needing AC. Unfortunately I have found that is not possible to do with my beefy Dell workstation that consumes in the 1kW range thanks to the Intel CPU and Nvidia GPU.

It isn't just a feeling but a hard fact my thermostat can confirm: running my Intel workstation puts my office up to uncomfortable temps. No such thing happens with my M1 Ultra Mac Studio running at a third to a quarter of the power consumption.

Perhaps I am alone in this but I dislike having to use AC to cool my office when I can just not make it so damn hot in the first place. It is all just a waste of energy and energy isn't free.

If it's not an issue for you then fair enough buy a 1kW+ Intel/Nvidia system but to say "it makes fuckall difference on a workstation" is disingenuous.

a year ago

unusualmonkey

Have you tuned your workstation for power efficiency?

Dropping cpu and GPU frequencies even a few % might have a significant impact.

a year ago

peepeepoopoo9

*cue

a year ago

jitl

I think the most interesting benchmark of the M2's potential advantage over conventional Intel/Nvidia machines would be a memory-heavy ML training or inference workload. Apple made some noises about having 192GB of unified memory as an advantage for that scenario, since the system wouldn't need to wait for copying from CPU RAM to GPU RAM, but I'd like to see that put to the test.

I’m also disappointed that Apple didn’t go for a 2-socket Mac Pro so they could offer a compute advantage over Mac Studio in addition to the PCI Express slots. Other than the IO, I can’t think of a reason to pick the Mac Pro.

a year ago

galad87

Comparing the OpenCL Geekbench score is unfair. "More popular"? I don't really think so. Anyway, of course it's going to be slower than a machine with more than two times its number of cores.

a year ago

sudosysgen

Why isn't it fair? NVidia GPUs are notoriously terrible at OpenCL.

a year ago

daedalusred

OpenCL has been deprecated on macOS since Mojave, well before the M1 was released, so the Apple Silicon drivers won't be optimised for OpenCL.

a year ago

pavelevst

I've used a Core i9 MBP 16 for almost 3 years, and this week I used an M1 Pro MBP. Seeing huge differences in performance, and knowing the actual difference between these two in benchmarks, makes me think that Apple intentionally slows down and heats up Intel Macs, like they did (and maybe still do) with iPhones.

I also remember my MacBook unibody from 2008 and MBP from 2012; with them I could do a lot with 4GB of RAM, even on an HDD. Nowadays 8GB is too small for most programming jobs. It looks like Macs keep getting better hardware every year and macOS keeps using it more and more aggressively.

a year ago

pram

I’ve had the i9 MBP with the 5500m from launch and it was always hot and noisy fwiw. For most people just having a second monitor plugged in made the fans always run because of a voltage issue. I sincerely doubt Apple needs to intentionally make it any worse. I hate that laptop!

a year ago

mrangle

Definitely have to adjust for TDP. The new-gen Intel processors match the M2's performance for $100-150. The TDP is triple, but still relatively low. There isn't really anything available from non-niche retailers that approaches the TDP of Apple's chips, and what does exist tends to be binned just to be sold as mobile. Better to buy the standard chip and limit the clock speed. What also shocks me is the more expensive chips that offer inconsequential gains in processing for double the TDP. I'm more likely to accept a slight performance hit for a much lower TDP.

a year ago

skippyboxedhero

Yes, there is. AMD has desktop chips with the same TDP as Mac laptops. The latest series of AMD chips are more powerful and more power-efficient.

I think power efficiency by itself is a laudable goal, but the idea that other chipmakers don't optimize this doesn't make sense...you have to if you are building a laptop. A lot of the desktop chips are ludicrously overtuned, even AMD's chips, but the laptop chips have to be power-efficient due to the limitations on cooling.

I would look in particular at the AMD APUs being used in handheld devices. Unlike Apple's chips, which promised much but delivered little, they are actually delivering desktop-GPU-tier performance (in the range of a 1050/1060) in packages of 20W and under.

a year ago

mrangle

I know that there are chips with TDPs in this range. My point was that they aren't readily available to consumers as a component. Obviously, I wasn't talking about purchasing pre-built devices. Anyone can buy a mobile device, laptop, or Alibaba router/PC with 10-20 watt TDP chips. The context was standalone chip purchases.

Would you point out where it is trivial to purchase a new AMD chip under 30 watts TDP? I'd sincerely be interested and the question isn't asked rhetorically.

10 months ago

Havoc

>Definitely have to adjust for TDP

Certainly worth considering; adjusting for it, less so.

If you want to adjust for something, price makes more sense than TDP.

a year ago

mrangle

I consider price and TDP the two important performance-related variables that aren't completely reconcilable. They should be considered separately, and then together, when making a buying decision.

To illustrate, acknowledging that this calculation is more practical than mathematically satisfying:

I wouldn't say that a chip that draws half the power, costs half as much, and performs three times better than a decade-old chip is twelve times better all around, even though the gains exceed those indicated by price and watts taken on their own. It performs roughly six times better per dollar spent and per watt consumed, respectively. These differences aren't reconcilable into a single number that most people would quickly understand, but they should be considered together. This was in fact the specific situation for the processor that I replaced, not accounting for improvements that don't show up in raw thread speed. Try playing 4K video on a decade-old processor while monitoring its utilization.
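A minimal sketch of the arithmetic above, using only the numbers given in the comment (half the power, half the price, three times the performance):

    # Relative figures vs. the decade-old chip being replaced
    power = 0.5        # new chip draws half the power
    price = 0.5        # and costs half as much
    performance = 3.0  # while performing three times better

    perf_per_watt_gain = performance / power      # 6x better per watt
    perf_per_dollar_gain = performance / price    # 6x better per dollar
    print(perf_per_watt_gain, perf_per_dollar_gain)
    # Two separate 6x figures, not a single "12x better all around" number.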

Not to be argumentative, as everyone is certainly entitled to their own weighting. I value TDP more than dollars spent, though both will always be unavoidably important.

a year ago

znpy

Just wondering, what is the CPU that you picked?

But I agree, TDP should really be taken into consideration.

a year ago

mrangle

Look at the i3, though it isn't much different from last gen, and that buying decision should be mostly determined by price (for the same price, go with the later gen), unless there are significant non-speed-related architecture differences that I'm not aware of and that are important to you (possible). Try to catch a sale. Going up the line will double the TDP for little gain, especially if dollars spent matter as well; people with a specific interest in more L2 and L3 cache may see the logic in it. The TDP- and price-equivalent Ryzen will get you about 10% better gaming performance and better multi-threaded performance, but worse single-threaded performance.

a year ago

wtallis

> Look at the i3. Though it isn't much different than last gen

IIRC, the desktop i3 and i5 processors are in many cases literally last-gen, since Intel is using a mix of Alder Lake and Raptor Lake dies on most of the "13th gen" desktop products that do not have more cores or cache than Alder Lake did.

a year ago

mrangle

Right. Price should determine what one buys. There is a marginal performance bump between them for the same advertised TDP, so I suppose if the price is the same then next gen it is. Otherwise buy whatever is cheapest.

a year ago

murmansk

Ahahaha, Intel's TDP is 2-4 times higher under full load (and I'd guess ~10-15x under moderate load) while having a negligible performance advantage over the M2. The author cannot be serious.

a year ago

diffeomorphism

Is that like 5 acre feet per fathom? TDP makes no sense here.

a year ago

peoplefromibiza

> Intel's TDP is 2-4 times bigger under full load

Never been a problem, even when Intel TDP was 20x worse than today.

EDIT: the author forgot that the tested system is an M2 Ultra, a desktop-class system with a TDP of 90 watts.

The i9-13900T and i7-13700T come in at a max turbo power of just 106W

which is just 1.2x

a year ago

MBCook

> The i9-13900T and i7-13700T come in at a max turbo power of just 106W

Now add in the GPU, because the Apple number combines both.

a year ago

markemer

Yep - the total TDP of the comparison setup is something like 600W. The Pro is also for people with unusual PCIe card needs; the Mac Studio delivers this performance in a tall Mac mini form factor with nearly silent operation. Not that there aren't things to complain about, but the Apple Silicon architecture is impressive for what it is. The GPU suffers more for its uncommon architecture, which is similar to a phone's but harder to optimize for. Adobe, Logic, and the like will make that effort; I doubt game engines will. But that's why I also have a power-hungry Windows laptop.

a year ago

MBCook

I think Epic could get Unreal to run just fine. Unfortunately they don’t have a great relationship with Apple (I can’t remember why but I think it was something Apple did).

The Mac Pro makes sense for only a few true professionals who need those PCI slots for non-GPU things. But those people have BIG budgets to spend and are good customers.

Maybe future Pros will improve; this one is a bit odd. But it has a purpose for those who truly need it.

a year ago

wtallis

> even when Intel TDP was 20x worse than today.

When has Intel's real, advertised, or specified CPU power consumption or TDP ever been 20x that of the i9-13900KS?

a year ago

peoplefromibiza

Core i9-13900KS consumes 150W with 24 cores (8P+16E)

so an average of 6.25 watts per core

which is less than a Pentium 90, a 1994 CPU running at 90MHz

In absolute terms that is not a 20x worse TDP, but it is relative to the gain in frequency and performance (more than 20x, actually)

a year ago

wtallis

> Core i9-13900KS consumes 150W with 24 cores (8P+16E)

No, the 13900KS has a nominal "TDP" of 150W. That's a marketing number, not a measurement and not even a control target parameter for the default boost management settings. Out of the box, a 13900KS in a typical desktop motherboard will happily draw more than twice that, indefinitely, if you can cool it and have a workload that can actually keep all of those cores busy: https://images.anandtech.com/doci/18728/13900KS%20Power%20Gr...

If you want to compare per-core power, you either have to use a power number for a workload that's actually loading all the cores, or divide the measured power by the number of cores actually in use.
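A small sketch contrasting the earlier nominal-TDP-per-core figure with the per-core number you get from an actual all-core load, as described above. The 150W figure is the nominal TDP quoted in the earlier comment; the 320W all-core figure is an assumed illustration of "more than twice that", not a precise measurement.

    cores = 24                    # 8P + 16E on the i9-13900KS

    nominal_tdp_w = 150           # marketing "TDP" number
    assumed_all_core_w = 320      # illustrative sustained draw under an all-core load

    print(nominal_tdp_w / cores)        # ~6.3 W/core from the marketing number
    print(assumed_all_core_w / cores)   # ~13.3 W/core from an all-core load
    # Dividing the nominal figure by all 24 cores understates what each core
    # actually draws when they are all fully loaded.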

a year ago

daneel_w

> "Never been a problem, even when Intel TDP was 20x worse than today."

Economically and environmentally it's absolutely a problem.

a year ago

peoplefromibiza

> Economically and environmentally it's absolutely a problem.

[citation needed]

Economically, the Intel CPUs of the 90s, which had very bad perf/watt by today's standards, were absolutely worth it.

Environmentally, CPUs have been getting better and better; a difference of a few tens of watts doesn't really matter, unless you have numbers to back up your very strong claim.

a year ago

daneel_w

Citation needed? Do you live in some alternate reality?

> "... the difference of a few 10s of watts doesn't really make any difference, unless you have numbers to back up your very strong claim."

We have a billion power-hungry PCs running on the planet. Power efficiency matters for the economy and for the environment, because power isn't free and only a tiny speck of the world runs on clean energy. It always mattered.

a year ago

peoplefromibiza

> We have a billion power-hungry PCs running

Let's do some math.

According to [1], human production of energy is an estimated 160,000 TWh for all of 2019.

Let's hypothesize that the difference is 10 watts on average (rather a lot for the average device); for a billion devices that's 10 GW, or 10 GWh for every hour of use.

Which is 1/16,000,000th of that annual total.

Assuming every Apple computer consumes 100 watts less than the equivalent non-Apple machine, assuming there are 20 million new low-power Apple computers (probably far fewer), and assuming the CPUs run in sustained mode 100% of the time (of course CPUs don't run at full power continuously, but let's assume they do for this example), that would mean 2 GW saved, or 2 GWh per hour, which corresponds to 1/80,000,000th of that annual total.

It would allow us, maybe, to shut down one average power plant (the largest one has a capacity of about 23 GW).

Unfortunately there are over 65,000 power plants in the world, 2,500 of which run on coal.

And the energy saved could come from renewables anyway, so the difference in emissions would be even less relevant than it already is.

Economically, those 2 GWh, even at Denmark prices ($0.50/kWh), would cost one million dollars (2 GWh = 2,000,000 kWh). AKA nothing.

As you can see, the savings must be quite large to make a real, measurable difference.

[1] https://en.wikipedia.org/wiki/Earth%27s_energy_budget
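A sketch that just reproduces the back-of-envelope figures above with the units made explicit; every input is the comment's own assumption. Note that the savings are expressed per hour of full-load use, while the 160,000 TWh figure is a full year of production.

    ANNUAL_PRODUCTION_GWH = 160_000 * 1_000   # ~160,000 TWh for 2019, per [1]

    def hourly_saving_gwh(devices: int, watts_saved: float) -> float:
        """Energy saved in one hour of full-load use, in GWh."""
        return devices * watts_saved / 1e9

    pcs = hourly_saving_gwh(1_000_000_000, 10)   # 10 GWh per hour
    macs = hourly_saving_gwh(20_000_000, 100)    # 2 GWh per hour

    print(ANNUAL_PRODUCTION_GWH / pcs)    # 16,000,000 -> the 1/16,000,000 figure
    print(ANNUAL_PRODUCTION_GWH / macs)   # 80,000,000 -> the 1/80,000,000 figure
    print(macs * 1_000_000 * 0.50)        # 2 GWh = 2,000,000 kWh -> $1M at $0.50/kWh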

10 months ago

[deleted]
a year ago

locustous

Most of those systems sit at idle, when they are on at all. Idle power is not the same as TDP. The idle difference is not so great that it's a big deal.

a year ago

MBCook

Do you have a source showing idle numbers? I’ve never seen a comparison. The ones I’ve seen are always about max power.

a year ago

locustous

It's not apples vs. apples, literally and figuratively. The I/O and board features on PC motherboards will eat power, so you end up with 15-30 W idle vs. probably 5-8 W for the Mac. I doubt the PCIe slots on the Pro add much to the power unless there's an extra I/O die in the system; you need active chips to eat power. That's vs. Intel.

AMD will add a bunch more watts to the idle number, as their multi-CCX CPUs just eat more power at idle; you'll be closer to 30-45 W. This is a guess; it's widely acknowledged but not really quantified anywhere I've found.

AMD's monolithic dies are more in line with Intel, i.e. the laptop/mini-PC line will be nice and low.

The difference is vastly diminished when you add a display that's using 80 W. So that's roughly 85 vs. 100 W total system power, a 15% difference that gets bigger when you peg your processor.

Apple's numbers are better in pretty much every way on power. But we're talking a handful of watts, and even at really high power rates it isn't going to add up to much.

Sources: I have a Mac mini and an Intel Raptor Lake desktop, and I take measurements. I have also read various sources. This is my best information.
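A quick sketch of what that ~15W idle difference works out to over a year, using the comment's 85W vs. 100W whole-system estimates and, as an assumed worst case, the roughly $0.50/kWh rate mentioned elsewhere in the thread:

    HOURS_PER_YEAR = 24 * 365

    mac_idle_w = 85        # Mac + 80 W display (comment's estimate)
    pc_idle_w = 100        # Intel desktop + the same display (comment's estimate)
    price_per_kwh = 0.50   # assumed high electricity rate from the thread

    extra_kwh = (pc_idle_w - mac_idle_w) * HOURS_PER_YEAR / 1000
    print(extra_kwh, extra_kwh * price_per_kwh)
    # ~131 kWh/year, or roughly $66/year, even idling 24/7 at that high rate.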

a year ago

daneel_w

Whataboutism.

a year ago

tester756

Yeah, sure, you can argue for the sake of arguing, but it does make a difference whether the gap is 0.001%, 0.01%, 0.1%, 1%, 10%, or 1000%.

a year ago

jacooper

Since when was TDP a deciding factor in desktop purchasing decisions? Sure, M-series SoCs are great in laptops, but there's no reason to get them on the desktop: slower, more expensive, and with less compatibility.

a year ago

mrangle

TDP reduction = heat management = reliability management but mostly noise management.

I've never tried to water cool a 125-watt processor, so I can't speak to that. But especially if one uses air cooling, or ideally passive cooling, one's ability to reduce noise is inversely related to the chip's TDP. Noise reduction is important to many.

a year ago

jlund-molfese

Lower power consumption means less heat, which means you might see more reliable performance (less likelihood of throttling in the middle of intensive workloads)

This doesn’t matter for everyone’s use case, but it is a factor some people might consider.

a year ago

zbrozek

Where and when electric energy is $0.51/kWh, one might care quite a bit.

a year ago

ianai

It’s such a strawman point. But this comparison is totally bonkers. If you’re doing a workload meant for a 4080/4090 then you really have no business pointing at the M2 Ultra as a competitor.

Then there's comparing like technology to like technology, something like AMD's 5700G line, and that's not even Nvidia. But that probably isn't as interesting to the authors.

a year ago

readthenotes1

Whoever is doing the caring should reconsider their life choices at that price. At 51 cents per kilowatt-hour, personal solar + batteries are a viable alternative. If you cared about the planet, you'd move.

a year ago

jacooper

I doubt the target customers of the M2 Ultra would have any trouble covering the extra energy costs of a cheaper and more performant device.

a year ago

zbrozek

Oh sure, but I was responding to the comment above saying that desktop users in general shouldn't care about power consumption. That's too broad.

a year ago

daneel_w

I considered it an important factor even 15 years ago, long before the reality of the environmental and energy crises really dawned on us. Surely I can't be alone.

a year ago

webaholic

AMD/Intel run at ~5 GHz at peak. Using that vs Apple's 3.7 GHz, you can see that the M2 is roughly ~20% better than the others at iso power.

a year ago

kalleboo

It's pretty clear from the rumors that Apple wanted a quad-die CPU for the Mac Pro (which would have had double this performance) but couldn't get it working, and had to compromise here to get anything out.

I wonder if they'll give up on going past 2 dies and leave their high-end eternally lacking or if there is something else coming down the pipe.

a year ago

[deleted]
a year ago

charlie0

One thing I haven't seen mentioned is thermal throttling. Performance per watt absolutely matters if you're working somewhere the temperature is 75°F or hotter. Those powerful Wintel laptops will certainly throttle down. The M architecture has a lot more headroom in this regard.

a year ago

Ambix

Intel and AMD might be fast on paper... but RAM bandwidth matters much more now (think of generative AI applications) than raw CPU power. Apple Silicon is really the only consumer-market CPU doing a crazy 200 / 400 / 800 GB/s.
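A back-of-envelope illustration of why bandwidth dominates generative-AI inference: generating one token at batch size 1 requires reading roughly all of the model weights once, so memory bandwidth sets a ceiling on tokens per second. The model sizes below are hypothetical examples, 800 GB/s is the M2 Ultra figure from the comment, and real-world numbers will be lower once compute, KV-cache traffic and software overhead are included.

    def max_tokens_per_second(bandwidth_gb_s: float, weights_gb: float) -> float:
        """Upper bound on decode speed if every token reads all weights once."""
        return bandwidth_gb_s / weights_gb

    for name, weights_gb in [("7B @ 4-bit", 3.5),
                             ("70B @ 4-bit", 35.0),
                             ("70B @ fp16", 140.0)]:
        ceiling = max_tokens_per_second(800, weights_gb)
        print(f"{name}: ~{ceiling:.1f} tok/s ceiling at 800 GB/s")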

a year ago

visionscaper

I'm wondering to what extent these benchmarks, especially the GPU benchmarks, are really optimised for the specific Apple M2 architecture. And to what extent is the high GPU-GPU bandwidth benchmarked? That also influences real-world performance.

a year ago

shaman1

As a side note, the difference in quality between the comments on HN and on the linked site is night and day. Thank you HN mods and users.

10 months ago

snird

The selling point is those chips' efficiency, not raw power. And in that regard, they are far better than their x86 competitors.

a year ago

kokonoko

Thanks for the correct, albeit clickbait, title. Now let's see the power usage, performance per watt, etc.

Hint: the M2 destroys the competition.

a year ago

partiallypro

Do general consumers care about wattage being used on a desktop? I don't think many consider it at all; they just want the performance. It matters on a mobile device like a laptop, tablet or phone, but that's about where consumers stop caring.

a year ago

MikeTheRocker

Depends on the use case. I personally undervolt my AMD 7950X in my gaming PC to use less power and thus generate less heat and fan noise. It makes a big difference.

a year ago

LegitShady

Getting quieter fans on a better cooler might make a bigger difference.

a year ago

[deleted]
a year ago

cubefox

I assume the high price makes the advantage in power consumption (lower energy cost) not that relevant.

a year ago

detrites

How is comparing a 16GB GPU to a 64GB - 192GB GPU on the same terms useful or informative?

a year ago

smoldesu

Good question; apparently it matters to Apple as well: https://youtu.be/CUwg_JoNHpo?t=1834

a year ago

scarface_74

This, and the introduction of the new slower, less expandable Mac Pro - at least with respect to GPUs and memory - indicate that it was a mistake for Apple to completely transition to ARM; they should have kept high-end options on Intel until they had a better GPU story to tell.

They already had the Mac Studio; they could have kept the Mac Pro on Intel.

a year ago

jitl

Third-party GPUs don't mesh with the Metal strategy. Apple dropped and deprecated OpenGL a long time ago, and they're the only ones building Metal GPUs.

a year ago

scarface_74

The Intel Mac Pro supported third-party GPUs, and there were officially supported Thunderbolt-based GPUs for later Intel Macs. Apple made a big deal about them. This was after their transition to Metal.

a year ago

lxgr

Doesn’t Metal support multiple GPU architectures just like OpenGL?

a year ago

jitl

You’re right, there is some support - https://support.apple.com/en-us/HT202239

a year ago

throwaway5959

The new M2 Ultra can process 24 4K streams at once. How much more processing power do people with the Mac use case in mind need?

a year ago

smoldesu

Many need things that aren't media encoders. The 24 4K streams are impressive, but less so when you realize it comes from the redundant video encoders you get from gluing two M2 Maxes together. For raster graphics performance, BVH traversal, ray tracing, and machine learning purposes, it's entirely understandable why someone might prefer an older Intel Mac for performance.

a year ago

jacooper

The Intel Mac Pro was too slow, unless you mean refreshing it, which I doubt is possible because Apple doesn't want to do anything related to x86 anymore.

a year ago

scarface_74

You could add GPUs to it at least. For many high end workloads, you need GPU horsepower.

How is not wanting to do x86 working out on the high end? The article says just as much.

a year ago

dehrmann

Are all the M2 variants just Apple going down the binning path because of yields?

a year ago

wmf

M2, M2 Pro, and M2 Max are different dies; they aren't binned versions of the same die. Apple showed "photos" of the dies and you can see they are different sizes with different stuff on them.

a year ago

dehrmann

Are their yields actually that good, then? Are some of these chips massively underclocked? Do they add extra cores and cache for redundancy? Or maybe they have no idea, and TSMC delivers only chips that pass QA. For as big as Apple is, TSMC is in the stronger negotiating position.

a year ago

tiffanyg

As usual, it's hard to evaluate claims of A being better/worse than B, with fanboyas and fanchicas massively boosting the noise:signal ratio.

https://youtu.be/u7ocWDNrfRs

Or, of course, there's the classic Twain / Disraeli quote ... take your pick.

a year ago

lopkeny12ko

One of my biggest gripes about Macs is that their thermal engineers refuse to get off their high horses and build real cooling solutions that don't artificially limit the capabilities of an objectively performance-competitive CPU.

I don't care if it consumes 200 W of power. I don't care if the chassis needs to be thicker to accommodate a larger heatsink. I don't care if you need to run the fans at 100%. Just let me use the full power of the CPU created by the chip designers god dammit!

a year ago

oneplane

> I don't care if

That's the thing: they aren't building it just for you. We can speculate about their internal requirements documents, but what's for sure is that they calculate a projected market adoption based on various factors like cost, market segmentation, power draw, heat transfer, noise etc. If it turns out that they project a bigger profit with a lower power limit than technically possible, that's what they'll do.

Personally, it would be nice if this were to apply to any CPU, GPU, SoC, and even VRMs and DC-DC regulators: let me worry about the dissipation, just pump out as many cycles as possible. But that's not really something that covers any significant market that Apple (or most multinationals) is targeting.

What I find much more surprising is that all of their M1/M2 deployments seem far less bin-limited; normally you'd get a crapton of SKUs for an SoC because they are so hard to manufacture, but Apple seems to get away with only 15 to 20. Perhaps this is also why there are much clearer power limits on most of their devices.

a year ago

etempleton

That is kind of the thing. The whole Mac Studio only consumes ~200 watts.

That is what I find disingenuous about this article. Sure, by all means compare the M2 Ultra to the Intel 13900K and the Nvidia 4080, but at least mention that it is consuming a third of the power.

a year ago

tiffanyh

I was expecting the Mac Pro (M-series) to be exactly that.

A high-wattage, highly cooled processor.

But nope, they did what you described.

a year ago

throw0101b

> I was expecting the MacPro (M-series) to be exactly that.

MKBHD put forward an interesting hypothesis on his last podcast: this generation of Mac Pro exists simply to get the refresh out the door. They took their existing designs (SoCs/chips, cases) and simply mashed them together, similar to how they first put the M-chips into existing laptop case designs and then optimized with Gen 2.

Now that they have something, they can iterate on a more harmonious solution that allows for the advantages of each.

a year ago

gumby

Yeah, I did too. The use case for the Mac Pro is even more limited than what you would expect (it's not like there are a lot of supported PCI boards). It's hard to think of many users who would choose it over a Studio. I doubt Apple will make any money on the Pro.

They could use this version as a stepping stone to a future faster device. But I suspect this form factor is a dead end for the Apple customer base. Maybe it exists for bragging (i.e. marketing) rights, the same reason Honda funds a Formula 1 team.

a year ago

florakel

It's a fair point. Especially for the new Mac Pro, they could have implemented a more robust thermal solution and clocked the whole thing higher. That's one advantage of a bigger chassis, isn't it?

a year ago

MBCook

They took the giant, overkill design meant to hold a hot Intel chip and possibly multiple big GPUs, and put an M2 Ultra in it.

It’s overcooled. There is no way it’s thermally limited. It’s got to have headroom for days.

I’m very curious where this idea I keep seeing in the comments that Apple refuses to run chips at full speed for cooling reasons comes from.

a year ago

florakel

The current Mac Pro is kind of pointless. Before, you got a powerful workstation (power consumption was irrelevant); now you get a Mac Studio in a new package. I feel they just launched it for "compliance", not because it is a great product. They hit a wall scaling the M-chip architecture: building an even larger chip than the Ultra would be insane, and going multi-socket is not possible.

a year ago

MBCook

They need a product for people who need expansion slots for non-GPU things, and it fits that.

I agree that it's likely they wanted to use the quad M2 chip (Jade 4C-Die, if I remember the term correctly), which would have given them far more PCIe lanes and made it more desirable.

But that doesn't seem to have worked out, and the lineup had a hole they needed to fill for some of their top-end customers. So here we are.

a year ago

wmf

AFAIK none of the desktop Macs are thermally limited.

a year ago

mnd999

They used to; the G5 Macs had impressive cooling.

a year ago

nickdothutton

I wasn't a G5 owner (I waited for the Intel switch), but I gather the G5 water cooling had a very high failure rate, and this was at least a large part of why Jobs dropped IBM/Motorola/PPC in the end. They promised him a high-clock, air-cooled G5 but couldn't deliver.

a year ago

etempleton

The bigger issue was that they couldn't get the G5 to work in a laptop, and this was killing Apple. Apple's laptops have always been a cut above their PC counterparts in terms of design, but in those days it felt like Apple laptops were from a different planet, except that they were unreasonably slow, in large part because they were stuck on the G4 processor for years. Switching to Intel was a necessity for survival. IBM just couldn't deliver or keep up with what Intel was doing.

a year ago

hulitu

> They promised him a high clock, air cooled G5 but couldn't deliver.

Then how were IBM's processors cooled? IBM also used PowerPC.

a year ago

pram

If you've never been around an AIX POWER server: they're extraordinarily loud. They're usually 3-4U rack servers that sound like a plane taking off.

a year ago

peppermint_gum

IBM's versions of the G5, the POWER4 and POWER5, were clocked a bit lower than the G5, and their servers and workstations had extremely loud fans.

a year ago

pram

The G5s had to have impressive cooling lol

a year ago