M1 MacBooks are limited to 16GB for now?

Or you could purchase an inexpensive Windows PC and run it via Microsoft Remote Desktop. I abandoned VMware/Parallels, gave my users access to real PCs, and eliminated 90% of my Windows-related support calls.

1 Like

That also gives the useful benefit that the Windows PC is doing the processing as well. Unless it's something that requires graphics performance (in which case a VM is an odd choice as well) it's kind of like adding some dedicated Windows cores to one's Mac.

:slight_smile:

1 Like

That is certainly an option.

But if I do that, then I lose the ability to use Coherence Mode - which really is nice for integrating the two operating systems so they share all hardware and files effortlessly.

Plus my main use of Windows requires a good bit of computing power - I am using MadCap Flare to manage a large document database to be included in a writing project.

That's true. And you would miss all the fun troubleshooting your hypervisor when a macOS update drives a stake through its heart :wink:

There's no perfect solution. You gotta do what's best for you.

Ars Technica has a nice review of the M1:

"If Apple's M1 isn't the fastest single-thread—and quad-thread—consumer-available processor on the planet, it certainly isn't missing it by much."

1 Like

In this vein, after a few hours using the M1 MacBook Air with a suite of not particularly demanding apps (OmniFocus, Microsoft Word, Keyboard Maestro, Fantastical, 1Password, and Obsidian) @MacSparky posted:

Even though it's only been a few hours, I'm already using this Mac to do work and the word that just keeps jumping to my mind is "snappy". I've never had a Mac that jumped to my command like this. The way apps load and leap onto the screen are reminiscent, not surprisingly, of iPad OS more than traditional macOS.

2 Likes

I'm open-minded about this. I do know that when mainframes went from bipolar to CMOS in the mid-1990s we went through a series of huge speed-ups, and then it slowed down.

I think we know where we've been. I'm not sure we know where we're going - but that's fine.

I hope the 20-30% per-annum speed-ups continue.

Core count is an interesting question.

Fun fact: in the 1960s, when mainframes went from one processor to two, we got a 1.1X speed-up. :slight_smile:

Now, 50 years later, we scale nicely to 190 processors.

This is not my attempt to praise the mainframe but rather to impart what I've learnt from a very long timeline with hardware and software developers who are about as clever and determined as those at Apple.

The interesting thing is how you design huge multiprocessors to minimise what we call the "MP effect" - that is, to make the 190th processor yield as close to as much as the first.

Inevitably, beyond a certain point you do it by ganging together multiple chips, each with many cores. (The 190, for example, comes from a 12-core PU chip, with 20 of these chips working together via a sophisticated cache hierarchy, System Control chips, and a communication protocol. Yes, lots of those 240 cores either aren't used or are used as, e.g., I/O processors.)
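To make the MP effect concrete, here is a toy sketch (Python) using Amdahl's law. The `serial_fraction` value is an invented illustrative number, not a measured figure for z or for Apple Silicon, and real machines lose throughput to cache traffic, lock contention and the like rather than to one fixed serial fraction:

```python
# Toy model of the "MP effect" via Amdahl's law.
# serial_fraction is an assumed, purely illustrative parameter.

def mp_ratio(n_processors: int, serial_fraction: float) -> float:
    """Speed-up of n processors relative to one, per Amdahl's law."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

def marginal_gain(n: int, serial_fraction: float) -> float:
    """How much the n-th processor adds, versus the 1.0 the first one gives."""
    return mp_ratio(n, serial_fraction) - mp_ratio(n - 1, serial_fraction)

if __name__ == "__main__":
    s = 0.003  # assume 0.3% of the work is effectively serialised
    for n in (2, 16, 64, 190):
        print(f"{n:>3} processors: speed-up {mp_ratio(n, s):6.1f}, "
              f"marginal gain of processor {n}: {marginal_gain(n, s):.3f}")
```

Even with only 0.3% of the work serialised, this toy model gives roughly 120X at 190 processors rather than 190X (and the 190th processor adds well under half a processor's worth), which is why so much hardware and software effort goes into driving that residual serialisation down.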

Now back to Apple Silicon: I would hope the architecture has been designed in a similar way.

I could conceive of, e.g., the Mac Pro, iMac, and Mac mini being re-architected in a similar way to get us to, say, 64 or 128 cores - with good MP ratios. There is the physical space.

I could conceive of a (16ā€?) MacBook Pro with 2 PU chips, getting us to 32 cores - again with good MP ratios. Again, there probably is space and appetite for it.

I could imagine M2 being 16-core, or maybe just 12-core.

All of the above necessitate M2 supporting more than 16GB of memory and more I/O capability (bandwidth and ports).

What will be interesting is whether macOS can drive this efficiently and effectively. (Our operating system, z/OS, has had to do much work over the decades to enable the hardware to achieve excellent MP ratios.)

(Just a few thoughts from a highly experienced / old :slight_smile: Performance person.)

4 Likes

"Announcing the new Mac Pro! 64 cores, up to 512 GB of memory, and three high-speed Thunderbolt 4 ports! That's 50% more ports than our industry-leading MacBook Pro!"

:wink:

2 Likes

« What? Only 512 GB? That's a third of what the previous Mac Pro did! This is in no way a pro machine and it will perform horribly! Apple is catering to the mass market! It's dead and doomed! »

(All said in good humour, eh :wink:)

2 Likes

True. But if they take the cue from the M1, can you imagine putting 512 GB of RAM in the same package as a 64-core processor? I feel like that would be a very, very large chip. :slight_smile:

2 Likes

The RAM is in the same package as the SoC, but it's not on the same die.

For larger RAM configurations, there's no reason that the RAM can't be completely separate.

1 Like

The issue with that becomes latency.

To pick up also on the hypothetical 64-core chip/die, I would worry about low chip yields. Not to beat up on Apple, but the fact that binning/chip sorting at the 8-core level is enough to make a 7-core model viable suggests yields at higher core counts are going to be a worry.

If we scale to my hypothetical 12-core or 16-core chip we might well see chip-sorted 10- or 12-core models, respectively. Nothing wrong with chip sorting, of course. It seems to be a novel concept for some.
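To put rough numbers on the yield worry, here is a sketch using a simple Poisson defect model. The defect density, die areas and per-core areas below are invented purely for illustration (they are not TSMC or Apple figures), and the binning estimate is a deliberate simplification: it treats each core independently and requires the non-core logic to be defect-free.

```python
import math

def perfect_die_yield(area_cm2: float, defects_per_cm2: float) -> float:
    """Poisson model: probability a region of the die has zero defects."""
    return math.exp(-area_cm2 * defects_per_cm2)

def yield_with_binning(area_cm2: float, defects_per_cm2: float,
                       cores: int, core_area_cm2: float, spare_cores: int) -> float:
    """Crude estimate: a die still ships if at most `spare_cores` cores are bad
    and the non-core logic (caches, fabric, I/O) is clean."""
    non_core_area = area_cm2 - cores * core_area_cm2
    p_core_ok = perfect_die_yield(core_area_cm2, defects_per_cm2)
    p_non_core_ok = perfect_die_yield(non_core_area, defects_per_cm2)
    # Probability that no more than spare_cores of the cores carry a defect.
    p_enough_cores = sum(
        math.comb(cores, k) * p_core_ok ** (cores - k) * (1 - p_core_ok) ** k
        for k in range(spare_cores + 1)
    )
    return p_non_core_ok * p_enough_cores

if __name__ == "__main__":
    d = 0.1  # defects per cm^2 -- a made-up number for illustration only
    for cores, area in ((8, 1.2), (16, 2.2), (64, 8.0)):
        perfect = perfect_die_yield(area, d)
        binned = yield_with_binning(area, d, cores, 0.05, spare_cores=max(1, cores // 8))
        print(f"{cores:>2}-core die ({area} cm^2): all cores good {perfect:.0%}, "
              f"shippable with binning {binned:.0%}")
```

Under these made-up numbers the hypothetical 64-core die drops to roughly 45% "all cores good", but allowing a handful of disabled cores brings the shippable fraction back up to roughly 60% - exactly the recovery that chip sorting buys, and exactly why huge monolithic dies get painful.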

My point is it's often better to design machines with more than one core/PU chip than to make a stonking great single chip.

And then we get back to cache hierarchy and the original "where's the memory?" question this comment was supposed to be responding to… :slight_smile:

2 Likes

Yup, but it would have to be a calculated trade-off: the computational cost of a little extra latency, against the cost of not having enough RAM for a given workload, and both of those against the cost of producing SoC packages with enormous amounts of RAM. If Apple were to make that call, I suspect it would only be for their highest-end machines, and they'd use reasoning similar to when they trade single-core performance off for larger numbers of cores in their Intel machines.

Similar reasoning here: they'd have to trade off the increased cost of lower yields (assuming they're not abysmally lower), and whether those costs could be passed on to users of very high-end machines, against the costs (including possible performance hits) of more complex multi-chip systems. Based on the state of the art right now, I think there is more room to grow by adding CPU cores to a package than by cramming in RAM, but I am not a hardware/chip expert, at least not until I've had a few drinks :slight_smile:

These are interesting times and I'm genuinely curious to see how Apple moves forward with this transition, especially on the high end.

3 Likes

Yet another glowing review

This is getting interesting

2 Likes

Now there's an idea! Much like https://en.wikipedia.org/wiki/Z-80_SoftCard to run CP/M software on an Apple II back in 1980!

Hey, I remember seeing ads for those. :slight_smile: I mean… I was reading them in magazines that were 10 years old, but I was still actually programming the Apple II computers - so it was cool to see all the stuff that could be done.

I think my favorite thing about the Apple II computers was that you could pretty much connect a dozen floppy drives with no real fear of issue, as you just told it "Disk X, Slot X" when it was looking for the disk. I mean… probably not anything anybody ever did - except when they found an old abandoned Apple computer lab. :smiley:

1 Like

Final nail in the coffin.

The M1 is the revolution that's been promised. And it's only the beginning.

2 Likes