575: Talking Parity with John Siracusa


Excellent! I can’t wait to listen to this. Moving it to the top of my list!

1 Like

It was a lot of fun to record. Hope you enjoy!

1 Like

There is a use case for having small efficiency cores on machines like the Mac Pro: the OS always has background tasks running.

If you clock a performance core down to match the throughput of an efficiency core, the performance core still draws more power than the efficiency core does to complete the same task in the same amount of time. Efficiency cores also use far less die area, so if you expect there will always be some low-priority tasks, it is better to have a few efficiency cores to run them: that leaves more thermal headroom for the performance cores and more die area to fit them.
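A quick back-of-envelope illustrates the point: at matched throughput the task takes the same wall-clock time on either core, so energy reduces to power times time. The wattage figures below are invented for the sake of the arithmetic, not Apple's real numbers.

```python
# Energy for a fixed background task: same throughput, same wall-clock
# time, so energy = power * time and only the power draw differs.
# Both wattages are hypothetical, for illustration only.
task_seconds = 10.0
e_core_watts = 0.5               # assumed efficiency-core draw
p_core_downclocked_watts = 1.5   # assumed P-core draw at matched speed

e_core_joules = e_core_watts * task_seconds
p_core_joules = p_core_downclocked_watts * task_seconds
print(p_core_joules / e_core_joules)  # the down-clocked P-core burns 3x the energy
```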

In fact, in a large pro workstation it makes extra sense to have these cores, so that when the user is running a performance-critical task the OS's background operations do not jump onto the performance cores and evict the task's data from the L2 cache. By adding a few efficiency cores (at the cost of a small amount of die area) you can ensure the primary task runs uninterrupted.

Apple is not still working out how to make Apple silicon for the Mac Pro; they have already sorted that out. They would not have started the transition if they did not already have a fully working Mac Pro in the lab, ready to go.

What they are likely waiting for is capacity at TSMC. iPhone season tends to book out all of Apple's capacity for a few months; then the M1 has taken a load of capacity, the iPad update is likely taking some now, and after that come the laptops and iMacs, and then the Mac Pro. The Mac Pro will have the largest die, and as TSMC gets better at making 5 nm chips the yields will improve, so it makes sense to hold this largest die back until the end so that they do not need to throw too many away due to defects. I expect we will get Mac Pro updates by the end of 2021.

There is also a chance Apple will do what AMD did and use an MCM (multi-chip module) to build the Mac Pro chip. That way they can put multiple smaller chips on the same package, letting them reuse the same dies as the iMac (and maybe even the higher-end laptops). This avoids making custom silicon for the very low-volume Mac Pro while still producing an SoC with 64+ cores. It also improves yields: since yields fall non-linearly with die area, it is much better to have two small dies than one large one. And since Apple is on, and will want to stay on, TSMC's latest, smallest node, anything that keeps yields good is worth it; otherwise they might end up throwing away a lot of dies due to defects.
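The "yields fall non-linearly with die area" claim can be sketched with the standard Poisson yield model, where good-die rate is roughly exp(-defect density × die area). All numbers below are illustrative assumptions, not real TSMC figures.

```python
import math

# Poisson yield model: yield ~= exp(-defect_density * die_area).
# Defect density and die sizes are assumed, for illustration only.
defect_density = 0.1          # defects per cm^2 (assumed)
monolithic_area = 8.0         # cm^2, one big die (assumed)
chiplet_area = 4.0            # cm^2, each of two MCM chiplets (assumed)

monolithic_yield = math.exp(-defect_density * monolithic_area)
chiplet_yield = math.exp(-defect_density * chiplet_area)

print(f"monolithic good-die rate: {monolithic_yield:.1%}")   # ~44.9%
print(f"per-chiplet good-die rate: {chiplet_yield:.1%}")     # ~67.0%
```

Halving the die area more than proportionally improves the odds that any given die is defect-free, which is exactly why the MCM route is attractive on a fresh node.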


I think the top-end 16" MBP should have a high-TDP option for users who do not want a desktop. But the entry level should stay nice and low power, with the high-power option getting a larger GPU.

I expect cellular is waiting for Apple to have its own modems. Putting Qualcomm modems in is very unlikely due to Qualcomm's pricing model, where they take a percentage of the device's price. Apple does not want to hand Qualcomm $800 because a user bought a fully high-spec $8000 MBP.

Worth remembering that Apple does not innovate in public anymore. Apple likely has hundreds of such prototypes internally, but they will not ship them until they feel they have value, and that can take a long time.

Good episode! I’m interested in what John Siracusa would like to see in Finder and window management upgrades.

I am one of those people who runs Mac apps in full screen most of the time. I’m still using a 2010 Apple Cinema 27” display. The More Power Users discussion will be useful when I finally decide to upgrade.


  • nVidia would be great
  • I am skeptical about the Pro being a SoC
  • I am confused what “Pros” are for Apple … just YouTubers? What about HPC?
  • Please solve the monitor calibration issues!!!

My hot take on this is that HPC is moving to the cloud, or to on-premises data centers running dedicated Linux servers. It’s not a Mac thing.

Oh, dear, my day job is intruding on MPU.

Probably the core market for the Pro is, yeah, YouTubers, and other professional videographers and media professionals (do podcasters need Pro power?), along with maybe hardcore gamers and game developers–maybe Apple is making a play for the deep gaming market at last? And enthusiasts like our hosts and John Siracusa, who buy high-end Macs the way other people buy sportscars.

For certain applications, yes cloud. But there’s also “desktop HPC”. For data analysis, forecast models, etc. the Mac Pro would not be the first choice. And not having nVidia cards is an issue.

1 Like
  • nVidia would be great

would not give anything Apple can’t do themselves. The perf-per-watt of Apple’s GPU cores means they can produce GPUs as powerful as Nvidia’s, if not more, without issues. Apple will not get into bed with Nvidia given the poor driver support they provided in the past.

  • I am skeptical about the Pro being a SoC

Having an SoC does not mean there would not also be PCIe/MPX slots. It just means that users who don’t need a powerful GPU (like audio pros) do not need to buy one, and therefore have more PCIe slots free (very useful for the audio pro market).

One of the key markets for the Mac Pro is the audio market, not for CPU or GPU power but for PCIe expandability.

It’s about CUDA. If you can run stuff (data science) with OpenCL, you are fine with Apple. But a lot of stuff has been written for CUDA, so you essentially need a PC.

Expandability is what a “Mac Pro” should be about. SoC would limit available memory. What if you need 512GB or more?

1 Like

That John Siracusa is a sly one, with his Guns’n’Roses/Cool Hand Luke reference. :slight_smile:

I’m with John Siracusa: I still miss Dragthing.


Apple will likely do what Intel is doing on the upcoming Xeon HEDT platform: have some on-package memory that can act either directly as RAM or as a very fast L3/L4 cache in front of off-package memory.

In the server space people want this because it massively reduces system power draw. A round-trip memory access to socketed DDR DIMMs uses a lot of power, and when you have many, many CPU cores hitting them constantly, your memory can end up drawing more power than the CPU itself! By adding a large (128GB or even 512GB) L4 cache (which, if no memory DIMMs are installed, runs as main memory) you massively reduce the number of reads/writes that need to go all the way out to system memory. Over the lifespan of a server this can save $100K worth of electricity!
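A rough sketch of why the cache cuts memory power so sharply: only the misses pay the expensive trip out to the DIMMs. The per-access energies, traffic rate, and hit rate below are all assumed numbers, purely to show the shape of the savings.

```python
# Back-of-envelope memory power with and without an on-package L4 cache.
# All figures are hypothetical assumptions for illustration.
accesses_per_sec = 2e9        # assumed memory traffic (accesses/s)
dram_nj = 50.0                # assumed energy per DDR DIMM access, nJ
l4_nj = 5.0                   # assumed energy per on-package L4 hit, nJ
l4_hit_rate = 0.9             # assumed fraction of accesses served by L4

watts_no_l4 = accesses_per_sec * dram_nj * 1e-9
watts_with_l4 = accesses_per_sec * (
    l4_hit_rate * l4_nj + (1 - l4_hit_rate) * dram_nj
) * 1e-9
print(watts_no_l4, watts_with_l4)  # ~100 W of memory power drops to ~19 W
```

Under these assumptions a 90% hit rate cuts memory power by roughly 5x, which, multiplied across a rack and a server's lifespan, is where the electricity savings come from.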

CUDA support is not blocked by Apple: user-space PCIe drivers (which do not require Apple’s approval to run) could be used to support CUDA. Nvidia, however, has no interest in putting in the effort, as such drivers would not support rendering the system UI (i.e. the GPU would be compute-only, not display).

Late to the party, but I want to give a big shoutout to Mr. Siracusa for his comments about objectivity in the tech-commentary sphere and how you need to make sure your concerns mirror those of the rest of the world. I used to not be a fan of his work because I felt he went too far into critique, but this balanced and very fair take really impressed me and made me want to revisit all his work with fresh eyes. Thank you for those words and for sharing your thoughts on the Mac, sir.