I don’t get the negativity.
The specs look impressive. It is always good to have competition.
They announced tapeout in October with planned dev boards next year. Vaporware is when things don’t appear, not when they are on their way (it takes some time for hardware).
It’s also strategically important for Europe to have its own supply. The current and previous US administrations have both threatened to limit the supply of AI chips to European countries, and China would do the same (as they have shown with Nexperia).
And of course you need the software stack with it. They will have thought of that.
https://vsora.com/vsora-announces-tape-out-of-game-changing-...
I'm guessing the negativity is caused by bad branding.
Impressive numbers on paper, but looking at their site, this feels dangerously close to vaporware.
The bottleneck for inference right now isn't just raw FLOPS or even memory bandwidth—it's the compiler stack. The graveyard of AI hardware startups is filled with chips that beat NVIDIA on specs but couldn't run a standard PyTorch graph without segfaulting or requiring six months of manual kernel tuning.
Until I see a dev board and a working graph compiler that accepts ONNX out of the box, this is just a very expensive CGI render.
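To make the op-coverage point concrete, here is a toy, stdlib-only sketch (not any vendor's actual stack; the op names and the supported set are illustrative assumptions). A backend that can only lower a subset of standard graph ops fails on otherwise-ordinary models, which is exactly how spec-sheet winners die in practice:

```python
# Toy illustration: why compiler op coverage, not raw FLOPS, kills
# new accelerators. The supported set below is a made-up example.
SUPPORTED_OPS = {"MatMul", "Add", "Relu", "Softmax"}

def compile_graph(ops):
    """Return the ops the backend cannot lower; empty means the graph runs."""
    return [op for op in ops if op not in SUPPORTED_OPS]

# A perfectly ordinary transformer-style block still trips the backend.
transformer_block = ["MatMul", "Add", "LayerNormalization", "Gelu", "MatMul"]
missing = compile_graph(transformer_block)
print(missing)  # ['LayerNormalization', 'Gelu'] -- fallback or failure
```

The real version of this problem is harder (fusion, layouts, dynamic shapes), but the failure mode is the same: one unsupported op and the "standard PyTorch graph" no longer runs.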
Six months of one developer tuning the kernel?
That doesn't seem like much compared to the hundreds of billions of dollars US companies currently invest in their AI stacks. OpenAI pays thousands of engineers and researchers full time.
more like 100 developers for 2 years
very good point leo_e
Indeed, no mention of PyTorch on their website... honestly it looks a bit scammy as well.
Even if it's not vapourware, the website makes it look like one. Just look at those two graphs titled "Jotunn 8 Outperforms the Market" and "More Speed For the Bucks" (!); WTH?
I can assure you it's not vaporware at all. Silicon is running in the fab, application boards have finished the design phase, the software stack is validated...
Having a new account promise us it's not vaporware is what I'd expect to see if it was vaporware.
It needs a "buy a card" link and a lot more architectural details. Tenstorrent is selling chips that are pretty weak, but will beat these guys if they don't get serious about sharing.
Edit: It kind of looks like there's no silicon anywhere near production yet. Probably vaporware.
Tapeout apparently completed last month, dev boards in early 2026: https://www.eetimes.eu/vsora-tapes-out-ai-inference-chip-for...
Nice wave they've been able to ride if it's vaporware, considering they've been at it for five years. Any guesses as to why no one else seemingly sees the obvious you see?
Look at the CGI graphics and the indications in their published material that all they have is a simulation. It's all there without disclosing an anticipated release date. Even their product pages and their news page don't seem to have indications of this.
Also, the 3D graphic of their chip on a circuit board is missing some obvious support pieces, so it's clearly not from a CAD model.
Lots of chip startups start as this kind of vaporware, but very few of them obfuscate their chip timelines and anticipated release dates this much. Five years to tapeout is a bit long, but not unreasonable.
> Even their product pages and their news page don't seem to have indications of this.
This seems indicative enough for me, give or take a quarter or two probably, from the latest news post on their website:
> VSORA is now preparing for full-scale deployment, with development boards, reference designs, and servers expected in early 2026.
https://vsora.com/vsora-announces-tape-out-of-game-changing-...
Seems they have partners too, who describe working together with a Taiwanese company as well.
You never know; I guess they could have gotten others to fall for their illusions too, it's not unheard of. But considering how long something like this takes to bring to market, the fact that dev boards are months rather than years away at least gives me reason to wait until then before judging them too harshly.
"the fact that dev boards are months rather than years away at least gives me reason to wait until then before judging them too harshly."
So far, they just talk about it.
I love that the JS loads so slow on first load that it just says "The magic number: 0 /tflops"
It loaded fine for me, but that slash before the unit was a bit smelly. :| Just a tiny edit, but it's a rather core part of their message so they should probably notice and format it correctly before publishing.
Always good to see more competition in the inference chip space, especially from Europe. The specs look solid, but the real test will be how mature the software stack is and whether teams can get models running without a lot of friction. If they can make that part smooth, it could become a practical option for workloads that want local control.
I'll believe it when I see it wishing them the best!
> To streamline development and shorten time-to-market, VSORA embraces industry standards: our toolchain is built on LLVM and supports common frameworks like ONNX and PyTorch, minimizing integration effort and customer cost.
288GB RAM on board, and RISC V processors to enable the option for offloading inference from the host machine entirely.
It sounds nice, but how much is it?
The next generation will include another processor to offload the inference from the RISC V processors used to offload inference from the host machine.
The next next generation will include memory to offload memory from the on chip memory to the memory on memory (also known as SRAM cache)
Esperanto tried to do the same but went out of business. https://www.esperanto.ai/products/
An FP8 performance of 3200 TFLOPS is impressive and could be used for training as well as inference. "Close to theory efficiency" is a bold statement, though. Most accelerators achieve 60-80% of theoretical peak; if they're genuinely hitting 90%+, that would be remarkable. Now let's see the price.
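For scale, here is a back-of-envelope sketch of what those efficiency percentages mean against the 3200 TFLOPS FP8 peak (the sustained numbers below are assumed for illustration, not vendor measurements):

```python
# Back-of-envelope: fraction of theoretical peak actually sustained.
# Peak is the quoted 3200 TFLOPS FP8; sustained values are hypothetical.
def sustained_fraction(achieved_tflops, peak_tflops=3200.0):
    """Achieved throughput as a fraction of the theoretical peak."""
    return achieved_tflops / peak_tflops

print(f"{sustained_fraction(2240):.0%}")  # 70% -- typical accelerator
print(f"{sustained_fraction(2880):.0%}")  # 90% -- the bold claim
```

The gap between those two lines is roughly an extra 640 TFLOPS of usable compute, which is why "close to theory" is worth scrutinizing before believing.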
reminds me of the famous tachyum prodigy vapourware https://www.tachyum.com/