SC25: The Present and Future of HPC Networking with Cornelis Networks CEO Lisa Spelman
Hello you fine Internet folks,
Today we have an interview with the CEO of Cornelis Networks, Lisa Spelman, where we talk about what makes OmniPath different from other solutions on the market, along with what steps Cornelis has taken in support of Ultra Ethernet.
Hope y’all enjoy!
If you like the content, consider heading over to the Patreon or PayPal to toss a few bucks to Chips and Cheese. Also consider joining the Discord.
This transcript has been lightly edited for readability and conciseness.
George: Hello, you fine internet folks. We’re here at Supercomputing 2025 at the Cornelis Networks booth. So, with me I have Lisa Spelman, CEO of Cornelis. Would you like to tell us about what Cornelis is and what you guys do?
Lisa: I would love to! So, thanks for having me, it’s always good. The only thing we’re missing is some cheese. We’ve got lots of chips.
George: There was cheese last night!
Lisa: Oh, man! Okay, well, yeah, we were here at the kickoff last night. It was a fun opening. So, Cornelis Networks is a company that is laser-focused on delivering the highest-performance networking solutions for the highest-performance applications in your data center. So that’s your HPC workloads, your AI workloads, and everything that just has intense data demands and benefits a lot from a parallel processing type of use case.
So that’s where all of our architecture, all of our differentiation, all of our work goes into.
George: Awesome. So, Cornelis Networks has their own networking, called OmniPath.
Lisa: Yes.
George: Now, some of you may know OmniPath used to be an Intel technology. But Cornelis, I believe, bought the IP from Intel. So could you go into a little bit about what OmniPath is and the difference between OmniPath, Ethernet, and InfiniBand?
Lisa: Yes, we can do that. So you’re right, Cornelis spun out of Intel with the OmniPath architecture.
George: Okay.
Lisa: And so, this OmniPath architecture... I should maybe share, too, that we’re a full-stack, full-solution company. So we design a SuperNIC ASIC, we design a switch ASIC... Look at you- he’s so good! He’s ready to go!
George: I have the showcases!
Lisa: Okay, so we design our SuperNIC ASIC, we design our switch ASIC, we design the add-in card for the SuperNIC and the switch board, all the way up, you know, from top-of-rack switches to the big old director-class system that we have sitting here.
All of that is based on our OmniPath architecture, which was incubated and built at Intel and then, like you said, spun out and acquired by Cornelis. So the foundational element of the OmniPath architecture is this lossless and congestion-free design.
So it was... it was built, you know, in the last decade, focused on: how do you take all of the data movement happening in highly-parallel workloads and bring together a congestion-free environment that does not lose packets? It was specifically built to address these modern workloads, the growth of AI, the massive scale of data being put together, while learning from the past but letting go of legacy: other networks that may be designed more for storage systems or for, you know, just other use cases. That’s not what they were inherently designed for.
George: The Internet.
Lisa: Yeah.
George: Such as Ethernet.
Lisa: A more modern development. I mean, Ethernet, I mean, amazing, right? But it’s 50 years old now. So what we did was, in this architecture, build in some really advanced capabilities and features, like credit-based flow control and dynamic lane scaling. So it’s the performance, as well as adding reliability to the system. And so the network plays a huge role, not only in increasing the compute utilization of your GPU or your CPU, but also in increasing the uptime of your overall system, which has huge economic value.
George: Yeah.
Lisa: So that’s the OmniPath architecture, and the way that it comes to life, and the way that people experience it, is the lowest latency in the industry on all of these workloads. You know, we made sure of that on micro-benchmarks, like ping-pong latency, all the good micros. And then the highest message rates in the industry. We’re two and a half times higher than the closest competitor. So that works really great for those, you know, message-dependent, really fast-rate workloads.
And then on top of that, we’re all going to operate at the same bandwidth. I mean, bandwidth is not really a differentiator anymore. And so we measure ourselves on how fast we get to half-bandwidth, and how quickly we can launch and start the data movement, the packet movement. So fastest to half-bandwidth is one of our points of pride for the architecture as well.
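[Editor’s note: for readers unfamiliar with the micro-benchmarks Lisa mentions, ping-pong latency is usually measured with a small two-rank MPI program along the lines of the sketch below. This is a generic illustration rather than Cornelis’s own benchmark suite, and the message size and iteration count are arbitrary.]

```c
/* Minimal MPI ping-pong latency sketch (generic illustration, not Cornelis's
 * benchmark suite). Rank 0 sends a small message to rank 1, which echoes it
 * back; the average round-trip time divided by two approximates one-way
 * latency. Run with at least two ranks, e.g. mpirun -np 2 ./pingpong. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 10000;   /* arbitrary iteration count */
    char buf[8] = {0};         /* small message, latency-bound regime */

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("one-way latency: %.2f us\n",
               (t1 - t0) / iters / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}
```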
George: So, speaking of Ethernet: I know there’s the new Ultra Ethernet Consortium, formed in order to update Ethernet to the standard of today.
Lisa: Yeah.
George: What has Cornelis done to support that, especially with some of your technologies?
Lisa: So we think this move to Ultra Ethernet is really exciting for the industry. And it was obviously time; I mean, you know, it takes a big need and requirement to get this many companies to work together, kind of put aside some differences, and come together to come up with a consortium and a capability, a definition that actually does serve the workloads of today.
So we’re very excited and motivated towards it. And the reason we are is because we see so much of what we’ve already built in OmniPath being reflected in the requirements of Ultra Ethernet. We also have a little bit of a point of pride in that the software standard for Ultra Ethernet is built on top of libfabric, which is an open-source library, you know, that we actually developed and that we’re maintainers of. So we’re all in on Ultra Ethernet, and in fact, we’ve just announced our next generation of products.
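[Editor’s note: libfabric, the OpenFabrics Interfaces library, exposes fabric hardware through pluggable providers; OmniPath hardware is reached through its “opx” provider. The sketch below is a generic illustration of querying the available providers with the open-source API, not Cornelis-specific code, and the provider list will depend on what is installed on a given node.]

```c
/* Minimal libfabric sketch: enumerate the fabric providers (e.g. OmniPath's
 * "opx" provider, verbs, tcp) visible on this node. Generic illustration of
 * the open-source API, not Cornelis-specific code. Build with -lfabric. */
#include <rdma/fabric.h>
#include <stdio.h>

int main(void) {
    struct fi_info *hints = fi_allocinfo();
    if (!hints)
        return 1;
    hints->ep_attr->type = FI_EP_RDM;   /* reliable datagram endpoint */
    hints->caps = FI_MSG;               /* basic message passing */

    struct fi_info *info = NULL;
    int ret = fi_getinfo(FI_VERSION(1, 9), NULL, NULL, 0, hints, &info);
    if (ret) {
        fprintf(stderr, "fi_getinfo: %s\n", fi_strerror(-ret));
        fi_freeinfo(hints);
        return 1;
    }

    /* Walk the returned list and print each matching provider/fabric pair. */
    for (struct fi_info *cur = info; cur; cur = cur->next)
        printf("provider: %s, fabric: %s\n",
               cur->fabric_attr->prov_name, cur->fabric_attr->name);

    fi_freeinfo(info);
    fi_freeinfo(hints);
    return 0;
}
```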
George: Speaking of which: the CN6000, the successor to the CN5000. What exactly does it support in terms of networking protocols, and what do you sort of see in terms of industry uptake?
Lisa: Yes. So this is really cool. We think it’s going to be super valuable for our customers. So with our next generation, our CN6000, that’s our 800-gig product, that one is going to be a multi-protocol NIC. So our SuperNIC there, it will have OmniPath native in it, and we have customers that absolutely want the highest performance they can get through OmniPath, and it works great for them. But we’re also adding RoCE v2 into it, so Ethernet performance, as well as Ultra Ethernet, the 1.0 spec, you know, Ultra Ethernet compliance as well.
So you’re going to get this multi-modal NIC, and what we’re doing, what our differentiation is, is that you’re moving to that Ethernet transport layer, but behind the scenes you’re still getting the benefits of the OmniPath architecture.
George: OK.
Lisa: So it’s not like it’s two totally separate things. We’re actually going to take your packet, run it through the OmniPath pipes and that architecture benefit, but spit it out as Ethernet at the protocol or transport layer.
George: Cool. And for Ultra Ethernet, I know there are sort of two specs: what’s colloquially known as the “AI spec” and the “HPC spec”, which have different requirements. For the CN6000, will it be sort of the AI spec or the HPC spec?
Lisa: Yeah. So we’re focusing on making sure the Ultra Ethernet transport layer absolutely works. But we are absolutely intending to deliver both HPC performance and AI workload performance. And one of the things I like to point out is, it’s not that they’re so different; AI workloads and HPC workloads have a lot of similar demands on the network. They just pull on them in different ways. So take message rate, for example: message rate is hugely important in things like computational fluid dynamics.
George: Absolutely.
Lisa: But it also plays a role in inference. Now, it might be the top determiner of performance in a CFD application, and it might be the third or fourth in an AI application. So you need the same things, you know: the latency, the message rates, the bandwidth, the overlap of communication, all that type of stuff. Workloads just pull on them a little differently.
So we’ve built a well-rounded solution that addresses all of them. And then, by customer use case, you can pull on what you need.
George: Awesome. And as you can see here [points to network switch on table], you guys make your own switches.
Lisa: We do.
George: And you make your own NICs. But one of the questions I have is, can you use the CN6000 NIC with any switch?
Lisa: OK, so that’s a great point. And yes, you can. So one of our big focuses as we expand the company, the customer base, and serve more customers and workloads, is becoming much more broadly industry interoperable.
George: OK.
Lisa: So we think this is important for larger-scale customers that maybe want to run multi-vendor environments. So we’re already doing work on the 800-gig to ensure that it works across a variety of, you know, standard industry-available switches. And that gives customers a lot of flexibility.
Of course, they can still choose to use both the SuperNIC and the switch from us. And that’s great, we love that, but we know there are going to be times when there’s, like, a partner or a use case where having our NIC paired with someone else’s switch is the right move. And we fully support it.
George: So then I guess sort of the flip-side of that is if I have, say, another NIC, could I attach that to an OmniPath switch?
Lisa: You will be able to, not with the CN6000, but stay tuned, I’ll have more breaking news! That’s just a little sneak peek of the future.
George: Well, and sort of to round this off with the most important question. What’s your favorite type of cheese, Lisa?
Lisa: OK, I am from Portland, Oregon, so I have to go with a local one, local to the state of Oregon: our Rogue Creamery blues.
George: Oh, OK.
Lisa: I had a chance this summer to go down to Grants Pass, where they’re from, and headquartered, and we did the whole cheese factory tour- I thought of you. We literally got to meet the cows! So it was very nice, it was very cool. And so that’s what I have to go with.
George: One of my favorite cheeses is Tillamook.
Lisa: OK, yes! Yes, another local favorite.
George: Thank you so much, Lisa!
Lisa: Thank you for having me!
George: Yep, have a good one, folks.

