Discussion about this post

nicball:

Great article! Although I believe Ice Lake has up to 40 cores, not 28 :P

c3dtops:

For AI workloads in the cloud, say when paired with hardware like Google TPU pods, which form of interconnect best suits AI-related workloads?

Mesh (Granite Rapids) or "semi-mesh-like" (Turing)?

And do AI workloads tend to favour Xeon CPUs, because of the onboard accelerators and because all cores are connected via a mesh?
