Or you could purchase an inexpensive Windows PC and run it via Microsoft Remote Desktop. I abandoned VMware/Parallels, gave my users access to real PCs, and eliminated 90% of my Windows-related support calls.
That also gives the useful benefit that the Windows PC is doing the processing as well. Unless it's something that requires graphics performance (in which case a VM is an odd choice as well) it's kind of like adding some dedicated Windows cores to one's Mac.
That is certainly an option.
But if I do that then I lose the ability to use Coherence Mode - which really is nice for integrating the two operating systems to share all hardware and files effortlessly.
Plus my main use of Windows requires a good bit of computing power - I am using MadCap Flare to manage a large document database to be included in a writing project.
That's true. And you would miss all the fun of troubleshooting your hypervisor when a macOS update drives a stake through its heart.
There's no perfect solution. You gotta do what's best for you.
Ars Technica has a nice review of the M1:
“If Apple's M1 isn't the fastest single-thread - and quad-thread - consumer-available processor on the planet, it certainly isn't missing it by much.”
In this vein, after a few hours using the M1 MacBook Air with a suite of not particularly demanding apps (OmniFocus, Microsoft Word, Keyboard Maestro, Fantastical, 1Password, and Obsidian) @MacSparky posted:
Even though it's only been a few hours, I'm already using this Mac to do work and the word that just keeps jumping to my mind is “snappy”. I've never had a Mac that jumped to my command like this. The way apps load and leap onto the screen is reminiscent, not surprisingly, of iPadOS more than traditional macOS.
I'm open minded about this. I do know that when mainframes went from bipolar to CMOS in the mid 1990s we went through a series of huge speed-ups, and then it slowed down.
I think we know where we've been. I'm not sure we know where we're going - but that's fine.
I hope the 20-30% PA speed ups continue.
Core count is an interesting question.
Fun fact: in the 1960s, when the mainframe went from 1 core to 2 cores, we got 1.1X the capacity.
Now, 50 years later, we scale nicely to 190 processors.
This is not my attempt to praise the mainframe but rather to impart what I've learnt from a very long timeline with hardware and software developers who are about as clever and determined as those in Apple.
The interesting thing is how you design huge multiprocessors to minimise what we call the “MP effect” - that is, to make the 190th processor yield as close to as much as the first.
Inevitably, beyond a certain point you do it by ganging together multiple chips, each with many cores. (190, for example, is 20 12-core PU chips working together via a sophisticated cache hierarchy, System Control chips, and a communication protocol. Yes, many of those 240 cores either aren't used or are used as, e.g., I/O processors.)
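To put a toy number on the MP effect (my own illustrative figures, not real z/OS or Apple measurements), you can model each added processor as contributing slightly less than the one before it:

```python
# Toy model of the "MP effect": each extra processor contributes a bit
# less than the previous one due to cache/coherence/serialisation
# overhead. The 0.998 efficiency factor is invented for illustration.

def effective_capacity(n_cpus, per_cpu_factor=0.998):
    """Capacity in single-CPU units when each added CPU is worth
    per_cpu_factor times the CPU added just before it."""
    total, worth = 0.0, 1.0
    for _ in range(n_cpus):
        total += worth
        worth *= per_cpu_factor
    return total

print(effective_capacity(2))    # 1.998: near-perfect 2-way scaling
print(effective_capacity(190))  # about 158: an MP ratio of roughly 0.83
```

With a per-processor factor that close to 1.0 the 190th processor still earns its keep; let it slip even slightly and the big configurations stop making sense, which is why the cache hierarchy and interconnect matter so much.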
Now back to Apple Silicon: I would hope the architecture has been designed in a similar way.
I could conceive of the Mac Pro, iMac and Mac mini being re-architected in a similar way to get us to, say, 64 or 128 cores - with good MP ratios. There is the physical space.
I could conceive of a (16"?) MacBook Pro with 2 PU chips, getting us to 32 cores - again with good MP ratios. Again, there probably is space and appetite for it.
I could imagine M2 being 16-core, or maybe just 12-core.
All of the above necessitate M2 supporting more than 16GB of memory and more I/O capability (bandwidth and ports).
What will be interesting is whether macOS can drive this efficiently and effectively. (Our operating system, z/OS, has had to do much work over the decades to enable the hardware to achieve excellent MP ratios.)
(Just a few thoughts from a highly experienced / old Performance person.)
“Announcing the new Mac Pro! 64 cores, up to 512 GB of memory, and three high-speed Thunderbolt 4 ports! That's 50% more ports than our industry-leading MacBook Pro!”
« What? Only 512 GB? That's a third of what the previous Mac Pro did! This is in no way a pro machine and it will perform horribly! Apple is catering to the mass market! It's dead and doomed! »
(All said in good humour, eh?)
True. But if they take the cue from the M1, can you imagine putting 512 GB of RAM in the same package as a 64-core processor? I feel like that would be a very, very large chip.
The RAM is in the same package as the SOC, but itās not on the same die.
For larger RAM configurations, thereās no reason that the RAM canāt be completely separate.
The issue with that becomes latency.
To pick up also on the hypothetical 64-core chip/die, I would worry about low chip yields. Not to beat up on Apple, but the fact that binning/chip sorting at the 8-core level recovers enough dies to make a 7-core model viable suggests yields at higher core counts are going to be a worry.
If we scale to my hypothetical 12-core or 16-core chip we might well see chip-sorted 10- or 12-core models, respectively. Nothing wrong with chip sorting, of course. It just seems to be a novel concept for some.
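To put rough numbers on why salvaging a 7-core part helps, here is a back-of-the-envelope binomial sketch; the 5% per-core defect rate is invented purely for illustration, not an actual TSMC or Apple figure:

```python
import math

def yield_fraction(cores, defect_rate, max_bad):
    """P(at most max_bad of `cores` cores are defective), with each
    core independently defective under a simple Poisson defect model."""
    p_bad = 1 - math.exp(-defect_rate)
    return sum(
        math.comb(cores, k) * p_bad**k * (1 - p_bad)**(cores - k)
        for k in range(max_bad + 1)
    )

# An 8-core die with an assumed 5% defect rate per core:
print(yield_fraction(8, 0.05, 0))  # ~0.67: fully working dies only
print(yield_fraction(8, 0.05, 1))  # ~0.95: also selling a 7-core bin
```

Under these made-up numbers, binning lifts usable dies from roughly two-thirds to about 95%. And since the perfect-die fraction falls roughly exponentially with core count, at some point you either bin more aggressively or gang together smaller chips.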
My point is it's often better to design machines with more than 1 core/PU chip than to make a stonking great single chip.
And then we get back to cache hierarchy and the original “where's the memory?” question this comment was supposed to be responding to…
Yup, but it would have to be a calculated trade-off, measuring the computational cost of a little extra latency against that of not having enough RAM for a given workload, and both of those against the cost of producing SoC packages with enormous amounts of RAM. If Apple were to make that call, I suspect it would only be for their highest-end machines, and they'd use reasoning similar to when they traded single-core performance off for larger numbers of cores in their Intel machines.
Similar reasoning here: they'd have to trade off the increased cost of lower yields (assuming they're not abysmally lower), and whether those costs could be passed on to buyers of very high-end machines, against the costs (including possible performance hits) of more complex multi-chip systems. Based on the state of the art right now, I think there is more room to grow with CPU cores in a package than by cramming in RAM - but I am not a hardware/chip expert, at least not until I've had a few drinks.
These are interesting times and I'm genuinely curious to see how Apple moves forward with this transition, especially on the high end.
Yet another glowing review
This is getting interesting
Now there's an idea! Much like the https://en.wikipedia.org/wiki/Z-80_SoftCard used to run CP/M software on an Apple II back in 1980!
Hey, I remember seeing ads for those. I mean… I was reading them in magazines that were 10 years old, but I was still actually programming Apple II computers - so it was cool to see all the stuff that could be done.
I think my favorite thing about the Apple II computers was that you could pretty much connect a dozen floppy drives with no real fear of issue, as you just told it “Disk X, Slot X” when it was looking for the disk. I mean… probably not anything anybody ever did - except when they found an old abandoned Apple computer lab.
Final nail in the coffin.
The M1 is the revolution that's been promised. And it's only the beginning.