Below is a short summary of conversations with two Apple developers (who are not CPU architects, so take it with a grain of salt).
What is Apple's custom LSI and its cores, and why does it look great in benchmarks and certain apps but poor in other workloads?
Apple moved a mobile-class, smartphone-optimized chip directly into laptops, and at first glance it looks impressive to an inexperienced reviewer.
Did Apple create an astonishingly good architecture? Some ARM breakthrough? No, none.
These are the best ARM cores around, but they are no miracle.
Apple managed to do only one thing exceptionally well: they take the most extreme approach to dynamically shutting down parts of the CPU and GPU cores.
At any given moment, almost 90% of the LSI blocks that are not currently in use are powered down. That did not matter much in 2001, or even in 2015.
But in 2020 it is critically important, because leakage at 7nm and 5nm is very high: the smaller the process, the greater the leakage during idle periods.
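The shape of this effect can be sketched with a toy power model. All numbers below are hypothetical, chosen only to illustrate why gating idle blocks saves proportionally more power as leakage becomes a larger share of the total:

```python
# Illustrative-only model: total power = dynamic + leakage, where the
# leakage of idle blocks can be partly eliminated by power gating.
# Every figure here is a made-up placeholder, not a measured value.

def total_power(dynamic_w, leakage_w, idle_fraction, gated_share):
    """Total power when `gated_share` of idle blocks' leakage is gated off."""
    active_leakage = leakage_w * (1 - idle_fraction)
    idle_leakage = leakage_w * idle_fraction * (1 - gated_share)
    return dynamic_w + active_leakage + idle_leakage

# Hypothetical chip: 5 W dynamic; leakage 0.5 W on an older node vs 2 W
# on a smaller node; 70% of blocks idle at any given moment.
old_node = total_power(5.0, 0.5, idle_fraction=0.7, gated_share=0.9)
new_node_ungated = total_power(5.0, 2.0, idle_fraction=0.7, gated_share=0.0)
new_node_gated = total_power(5.0, 2.0, idle_fraction=0.7, gated_share=0.9)

print(f"old node, 90% gated:  {old_node:.2f} W")
print(f"new node, no gating:  {new_node_ungated:.2f} W")
print(f"new node, 90% gated:  {new_node_gated:.2f} W")
```

The point of the sketch: on the older node, gating barely matters; on the leaky node, the same gating policy recovers a large chunk of the total budget.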
Another part is running at low frequencies on a completely different TSMC process than the one AMD plans to use. This approach is bad for any frequency scaling, but it reduces leakage further, since everything is optimized for low-power mobile chips. One developer said that a version of the LSI built on the modern 5nm high-performance process consumes 1.5-2x more power for only slightly better performance (that version is targeted at premium new models).
Apple did a lot of interesting, clever things, like adding excess execution units to the CPU and GPU and shutting parts of them down as soon as they heat up, moving all execution to execution units located farther away. Those are shut down as well once they heat up, and everything moves back to the cooled-down units. The funniest part is that most of the time Apple does not know the real local temperatures (there is a limit on the number of sensors, and thermal inertia is large), so they rely on special estimates instead.
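The described scheme can be sketched as a toy simulation (this is not Apple's actual algorithm; the heating and cooling rates and the threshold are invented for illustration): work migrates away from a cluster of execution units once its estimated temperature crosses a threshold, and the vacated cluster cools while idle.

```python
# Toy sketch of migrating work between duplicated execution-unit clusters
# based on *estimated* temperatures (real local sensors are assumed too
# few and too slow). All constants are hypothetical.

THRESHOLD_C = 85.0
HEAT_PER_TICK = 4.0   # hypothetical heating rate of the active cluster
COOL_PER_TICK = 2.0   # hypothetical cooling rate of idle clusters
AMBIENT_C = 40.0

def run(ticks, n_clusters=2):
    est_temp = [AMBIENT_C] * n_clusters  # model-based estimates, not readings
    active = 0
    migrations = 0
    for _ in range(ticks):
        est_temp[active] += HEAT_PER_TICK
        for i in range(n_clusters):
            if i != active:
                est_temp[i] = max(AMBIENT_C, est_temp[i] - COOL_PER_TICK)
        if est_temp[active] >= THRESHOLD_C:
            # estimate says the active cluster is too hot:
            # move all work to the coolest cluster
            active = min(range(n_clusters), key=lambda i: est_temp[i])
            migrations += 1
    return migrations

print(run(100))
```

With these rates the work ping-pongs between clusters, which is the behavior the developers described: no cluster ever stays hot for long, at the cost of extra silicon sitting mostly idle.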
Their software team working on compilers and libraries also plays a huge role, as they managed to offload a lot of the burden from LSI design into better-quality code (though this is a very small, low-level part).
A large number of people (up to 5,000 top-level coders) also worked on optimizing benchmarks and leading apps, adding at least 25% to the numbers we now see in many reviews.
Rumor has it that the Apple compiler sometimes cheats by offloading certain work to the GPU without disclosing it (including custom modifications for benchmarks).
Another rumor is that all (100%!) modern Apple LSIs cheat in single-core tests by actually using more than one core at the chip level. This may also mean shared execution units and other shared logic that is absent from a real core once you move to a complex multicore task.
Using on-chip DRAM, as well as certain GPU decisions, went against the advice of two important engineers who left during the last 14 months amid a loud internal conflict (they pointed out the long-term consequences).
Another small secret
The same is true for the speed of the 8GB modules used in the 16GB variants.
So in reality Apple is comparing against 2400-3200MHz DDR4 memory in competitors' notebooks (most thin notebooks actually ship with 2400-2666MHz memory) and claiming the difference comes from their magnificent CPU. It does not: around 20-25% of the margin comes from memory alone.
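The memory advantage can be sanity-checked with simple peak-bandwidth arithmetic. The comparison below assumes the M1's 128-bit LPDDR4X-4266 interface against a common thin-notebook configuration of dual-channel (also 128-bit) DDR4-2666:

```python
# Rough peak-bandwidth comparison. Transfer rate in MT/s, bus width in bits.
def peak_gbps(mt_per_s, bus_bits):
    return mt_per_s * (bus_bits / 8) / 1000  # GB/s

m1 = peak_gbps(4266, 128)       # M1: LPDDR4X-4266, 128-bit interface
thin_pc = peak_gbps(2666, 128)  # typical thin notebook: dual-channel DDR4-2666

print(f"M1:      {m1:.1f} GB/s")   # ~68.3 GB/s
print(f"Typical: {thin_pc:.1f} GB/s")  # ~42.7 GB/s
print(f"Ratio:   {m1 / thin_pc:.2f}x")
```

Raw peak bandwidth is roughly 1.6x higher, so a 20-25% contribution to overall scores from memory alone in memory-sensitive workloads is at least plausible.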
Why is the cover on the M1 cut in such a strange way?
Because LPDDR4X manufacturers couldn't supply dense, fast memory modules in volume as originally planned. The planned modules had 2x denser DRAM and fit perfectly under the cover.
The only modules available ran at 2166-2400MHz, and the performance drop was too big.
Apple held out hope until August, still manufacturing the old-design CPU PCB and cover, but in the end they had to fall back to Plan B and cut the existing covers.
Apple expects up to 4% of notebooks to have cooling issues and early CPU failures due to the inability to keep the main cover and the DRAM modules perfectly level at all times.
"The most exciting — or frightening, if you're a traditional PC chip company — part of Apple's new chips is that the M1 is just the starting point. It's Apple's first-generation processor, designed to replace the chips in Apple's weakest, cheapest laptops and desktops. Imagine what Apple's laptops might do if the company can replicate that success on its high-end laptops and desktops or after a few more years of maturation for the M-series lineup."
I love modern idiot journalists. Not a single thought in the empty box where a brain should reside.
Did someone prevent other companies from using 4500MHz DDR4 RAM? No? They simply didn't do it, because almost all design work is done by 3-4 generic Taiwanese design houses, and anything original is usually done outside those houses.
Does something prevent soldering 4-6 channel memory and using 4-6 memory controllers, besides false claims about how expensive such PCBs are (they are not)? The Taiwanese partners just never bring it up, since they don't have many good engineers, and designing a board the old way with mediocre engineers is simpler.
I have talked to many managers at camera gear manufacturers, and most actually do not care and have no idea how the hardware they make works, how it is designed, or what real limits exist in that design.
Apple's R&D expenses are still small in relative terms, but growing.