One thing I missed was some ray tracing analysis. Maybe a later piece comparing Intel, AMD, and Nvidia? Anyway, thank you for the article.
Judging by the AMD vs. Nvidia ray tracing analyses I’ve seen, the topic is just too long and complex to live inside this article rather than being its own standalone piece. I don’t remember the details, but I know the two vendors traverse their BVHs very differently.
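For anyone who hasn’t looked at one before, here is a minimal sketch of what BVH traversal involves: a generic stack-based walk over a hypothetical node layout, not AMD’s or Nvidia’s actual hardware format. The node formats, child counts, and traversal order are exactly where the vendors diverge.

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical node layout for illustration only; real AMD/Nvidia hardware
// BVH formats (child counts, compression, traversal order) all differ.
struct BvhNode {
    float    bounds[6];   // AABB: min x/y/z, then max x/y/z
    uint32_t firstChild;  // interior: index of first child; leaf: primitive offset
    uint32_t primCount;   // 0 for interior nodes
};

// Standard slab test: does the ray hit the axis-aligned box?
static bool rayHitsBox(const float b[6], const float o[3], const float invD[3]) {
    float tmin = 0.0f, tmax = 1e30f;
    for (int a = 0; a < 3; a++) {
        float t0 = (b[a]     - o[a]) * invD[a];
        float t1 = (b[a + 3] - o[a]) * invD[a];
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

// Generic stack-based traversal: prune subtrees whose boxes the ray misses,
// and collect candidate primitives from surviving leaves. GPUs run variants
// of this loop in fixed-function hardware or driver-generated shader code.
std::vector<uint32_t> traverse(const std::vector<BvhNode>& nodes,
                               const float o[3], const float invD[3]) {
    std::vector<uint32_t> candidates;
    std::array<uint32_t, 64> stack;  // fixed depth is fine for a sketch
    int sp = 0;
    stack[sp++] = 0;                              // start at the root
    while (sp > 0) {
        const BvhNode& n = nodes[stack[--sp]];
        if (!rayHitsBox(n.bounds, o, invD))
            continue;                             // prune this subtree
        if (n.primCount > 0) {                    // leaf: emit candidates
            for (uint32_t i = 0; i < n.primCount; i++)
                candidates.push_back(n.firstChild + i);
        } else {                                  // binary interior node
            stack[sp++] = n.firstChild;
            stack[sp++] = n.firstChild + 1;
        }
    }
    return candidates;  // exact ray/primitive tests would follow
}
```

How much of that loop runs in dedicated hardware versus shader code, and how wide the nodes are, is roughly where the AMD/Nvidia comparison starts getting long.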
Thank you for the broadcast as well. It helped me understand some of the use cases for XMX (XeSS upscaling in Cyberpunk), as well as INT64 and memory access. You guys rock.
Thank you for all the work you put into this Battlemage profile. I really hope Intel succeeds in this space; competition is vital.
Competition is essential in the chip manufacturing sector, where costs are escalating. But Intel's entry into the dGPU market is inconsequential, as they are simply losing money in that space.
Thanks for the great analysis as usual. One comment: I don't think it makes much sense to compare the B580 to the A770 as you did at the beginning (comparing the microbenchmarks is a different matter, obviously). In terms of branding, the B580 is the successor to the A580, and in terms of MSRP and wattage it sits between the A580 and A750.
It's really interesting that Intel went with a "one thread with multiple SIMD registers" model instead of the warp/wavefront-based execution model of NVIDIA/AMD. I wonder what the implications are for different workloads.
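To make the contrast concrete, here's a rough CPU-side analogy I put together; neither snippet is actual GPU ISA, and the widths are just representative.

```cpp
#include <cstddef>

// Intel-style "explicit SIMD": one hardware thread owns wide vector
// registers, and compiled code operates on a whole vector at a time.
constexpr int kSimdWidth = 16;              // e.g. a SIMD16 compilation
struct Vec { float lane[kSimdWidth]; };

void saxpy_explicit_simd(Vec& y, const Vec& x, float a) {
    for (int i = 0; i < kSimdWidth; i++)    // conceptually ONE vector instruction
        y.lane[i] = a * x.lane[i] + y.lane[i];
}

// NVIDIA/AMD-style SIMT: the programmer writes scalar per-lane code, and the
// hardware runs 32 or 64 such lanes in lockstep as a warp/wavefront.
void saxpy_simt_lane(float* y, const float* x, float a, std::size_t laneId) {
    y[laneId] = a * x[laneId] + y[laneId];  // one lane's view of the world
}

void run_warp(float* y, const float* x, float a, std::size_t warpSize = 32) {
    for (std::size_t lane = 0; lane < warpSize; lane++)  // done in parallel by hardware
        saxpy_simt_lane(y, x, a, lane);
}
```

As I understand it, one practical difference is that Intel's compiler can choose a SIMD width per shader (SIMD8/16/32), while warp/wavefront width is fixed by the hardware, which changes how divergence and register pressure play out.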
It’s hilarious that, when testing the PCIe 4.0 x8 link, they went straight for the jugular with modded DCS. That’s so brutal.
Thanks Chester, great analysis! Regarding the points where some insider knowledge would have been helpful, here's a suggestion (see the disclaimer right after): Tom Petersen, one of Intel's graphics engineers, has appeared in a few YouTube videos from Gamers Nexus and explained, IMHO rather well, how they use very detailed, sub-millisecond analysis of single frames to optimize their drivers. Maybe he could give you some additional information that just can't be found anywhere else. Here's a link to one of those videos: https://gamersnexus.net/gpus-cpus-deep-dive/fps-benchmarks-are-flawed-introducing-animation-error-engineering-discussion
Disclaimer: I don't know Tom Petersen or anyone else at Intel's graphics division, nor do I have any other inside track there.
Was the testing done on Linux?
And was Intel's compiler the ICPX compiler, using SYCL (from oneAPI) to run the GPU code? Something like the sketch below?
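A minimal SYCL vector add as it would look with that flow, built via icpx -fsycl vadd.cpp — my own sketch of the oneAPI route, not the article's actual benchmark code:

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    sycl::queue q{sycl::gpu_selector_v};    // pick any available GPU
    std::vector<float> a(1024, 1.0f), b(1024, 2.0f);
    {
        // Buffers hand ownership of the host data to the SYCL runtime.
        sycl::buffer<float> bufA(a.data(), sycl::range<1>(a.size()));
        sycl::buffer<float> bufB(b.data(), sycl::range<1>(b.size()));
        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_write);
            sycl::accessor B(bufB, h, sycl::read_only);
            h.parallel_for(sycl::range<1>(1024), [=](sycl::id<1> i) {
                A[i] += B[i];               // one work-item per element
            });
        });
    }   // buffer destructors wait for the kernel and copy results back
    std::cout << "a[0] = " << a[0] << "\n"; // expect 3
}
```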
No, it was done on Windows 10.
Would you have a link to a good website that explains Intel's GPU assembly?