During the company's third-quarter 2024 earnings call, Intel confirmed that its future laptop chips will return to the traditional use of RAM sticks, reversing Lunar Lake's radical...
Panther Lake and Nova Lake laptops will return to traditional RAM sticks
I’ve commented many times that Arc isn’t competitive, at least not yet.
Although they were decent performers, they used twice the die size for similar performance compared to Nvidia and AMD, so Intel has probably sold them at very little profit.
Still, I expected them to try harder this time, because the technologies needed to develop a good GPU are strategically important in other areas too.
But maybe that’s the reason Intel recently admitted they couldn’t compete with Nvidia on high-end AI?
Arcs are OK, and the competition is good. Their video encode performance is absolutely otherworldly though, just incredible.
Mostly, they help bring the iGPU graphics stack and performance up to par, and keep games targeting them well. They’re needed for that alone if nothing else.
They were competitive for customers, but only because Intel sold them at no profit.
I mean, fine, but it’s a first gen; they can fix the features and yields over time.
First-gen chips are rarely blockbusters; my first-gen chips were happy just to make it through bringup and customer eval.
And because software is so much of their stack, they had huge headroom to grow.
Yeah, true. Plus I bought my A770 at pretty much half price during the whole driver-issues period, so I eventually got a 3070-performing card for like $250. That’s an insane deal for me, but there’s no way Intel made anything on it after all the R&D and production costs.
The main reason Intel can’t compete is that CUDA is both proprietary and the industry standard: if you want to use a CUDA library, you have to translate it yourself, which is inconvenient, and no datacentre is going to go for that.
AFAIK the AMD stack is open source; I’d hoped they’d collaborate on that.
I think Intel supports it (or at least a translation layer), but there’s no motivation for Nvidia to standardise on something open-source, as the status quo works pretty well for them.
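To make the "translate it yourself" point above concrete, here is a minimal sketch of the same trivial vector-add written against SYCL (the open standard Intel's oneAPI stack builds on), with the CUDA original shown in comments. The kernel name `vec_add` and the code itself are purely illustrative assumptions, not any particular library's real port; the point is only that every CUDA kernel and launch has to be re-expressed like this before it runs on non-Nvidia hardware.

```cpp
// Illustrative only: a vector add ported from CUDA to SYCL by hand.
//
// CUDA version, for comparison (hypothetical, shown as comments):
//   __global__ void vec_add(const float* a, const float* b, float* c, int n) {
//       int i = blockIdx.x * blockDim.x + threadIdx.x;
//       if (i < n) c[i] = a[i] + b[i];
//   }
//   ...
//   vec_add<<<blocks, threads>>>(d_a, d_b, d_c, n);

#include <sycl/sycl.hpp>
#include <vector>

int main() {
    const size_t n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    sycl::queue q;  // selects a default device (a GPU if one is available)
    {
        // Buffers wrap the host data; copies to/from the device are implicit.
        sycl::buffer<float> ba(a.data(), sycl::range<1>(n));
        sycl::buffer<float> bb(b.data(), sycl::range<1>(n));
        sycl::buffer<float> bc(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(ba, h, sycl::read_only);
            sycl::accessor B(bb, h, sycl::read_only);
            sycl::accessor C(bc, h, sycl::write_only);
            // One work-item per element, roughly the role of a CUDA thread.
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    }  // buffer destruction waits for the kernel and copies results back into c
    return 0;
}
```

Even in this toy case the launch syntax, memory management, and indexing all change, which is why "just translate the library" is a much bigger ask for a datacentre than it sounds.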