> If Nvidia truly sees ARM as an opportunity, kill off Mali and expand GeForce's install base,
A perfect replacement: from one unsupported, hard-to-use GPU with a non-open-source Linux driver to another hard-to-use GPU with a non-open-source Linux driver.
> I do hope AMD can take the battle to Nvidia on ARM too, and that Qualcomm and PowerVR find ways to stay relevant.
Qualcomm's Adreno, amusingly enough, came from AMD as Imageon in 2009. I definitely hope AMD can get back in the mobile game though. Good luck to PowerVR & anyone else too. You're probably in for some hard competition soon!!
https://en.m.wikipedia.org/wiki/Imageon
The first version of Adreno had some fixed-function blocks from AMD (and Bitboys), but the programmable shader core came from Qualcomm's never-commercialized Qshader architecture. The result was a buggy mess that took a ton of effort in our drivers to debug and correct.
In hindsight, I'm amazed at how well it worked given the schedule and the magnitude of the work involved in fusing those two architectures.
I really appreciate this kind of mention, thank you. I want to know so much more, but this history feels like it evaporates so quickly; only a few people have any idea what happened.
I have seen this concern countless times, but never why it matters to them. I can understand why it matters from Linus's perspective as kernel maintainer, but from a user's perspective I can't really see the issue. Anyway, not all code that runs on your system is open source. Why not demand open source from your bootloader manufacturer with the same intensity? If, say, NVIDIA wants the driver to contain a malicious backdoor, open source is not going to stop them.
> from a user's perspective I can't really see the issue ... If, say, NVIDIA wants the driver to contain a malicious backdoor, open source is not going to stop them.
No, but if such a backdoor were discovered, it would be possible to do something about it. The quote from the article in the top comment here says it well: https://news.ycombinator.com/item?id=23944954
> Anyway, not all code that runs on your system is open source.
Not yet, but it is my goal. If/when that's achieved, I'd also like to run it exclusively on free/libre/open (FLO) hardware.
> Why not demand open source from your bootloader manufacturer with the same intensity?
My bootloader plays a much smaller role in my computing than my GPU does. And less importantly, as a practical matter, there are many more major motherboard vendors and few FLO alternatives among them; whereas both Nvidia alternatives (AMD, integrated Intel) do have FLO drivers.
It means that when people try to experiment, for example with how to make frame timing more useful so that specs like Vulkan can advance[1], we can't try things out & figure out what might work, because closed proprietary software doesn't allow anyone to explore & make progress.
We basically have to keep going back to Nvidia, relying on them to be the authorities on their own system & to act in everyone's interest when we try to develop extensions like VK_EXT_present_timing. This greatly injures the development of good standards, preventing the healthy, collaborative environment where people can work together to make standards that work well.
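For context: the closest thing shipping today is the VK_GOOGLE_display_timing extension, which exposes the kind of frame-timing feedback a cross-vendor VK_EXT_present_timing would standardize. A minimal sketch of reading that feedback, assuming `device` & `swapchain` come from the usual Vulkan setup and the extension was enabled at device creation:

    /* Sketch: frame-timing feedback via VK_GOOGLE_display_timing. */
    #include <vulkan/vulkan.h>
    #include <stdio.h>

    void report_past_presents(VkDevice device, VkSwapchainKHR swapchain)
    {
        /* Extension entry points must be fetched at runtime. */
        PFN_vkGetPastPresentationTimingGOOGLE get_timings =
            (PFN_vkGetPastPresentationTimingGOOGLE)
                vkGetDeviceProcAddr(device, "vkGetPastPresentationTimingGOOGLE");
        if (!get_timings)
            return; /* extension not available on this driver */

        /* First call asks how many timing records are pending. */
        uint32_t count = 0;
        get_timings(device, swapchain, &count, NULL);
        if (count == 0)
            return;

        VkPastPresentationTimingGOOGLE timings[64];
        if (count > 64)
            count = 64;
        get_timings(device, swapchain, &count, timings);

        /* Compare when each frame was asked to hit the screen vs. when it did. */
        for (uint32_t i = 0; i < count; i++)
            printf("frame %u: desired %llu ns, actual %llu ns\n",
                   timings[i].presentID,
                   (unsigned long long)timings[i].desiredPresentTime,
                   (unsigned long long)timings[i].actualPresentTime);
    }

Whether numbers like these are trustworthy or meaningful is exactly the kind of thing you can only verify or improve when the driver underneath is open.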
Another example is EGLStreams, which is not that bad but is a very different approach to handling video buffers from what everyone else does, & it has been obstructing the use of the newer Wayland display server on Nvidia hardware for 6 years now[2]. Nvidia wants their thing, & closed drivers mean no one can play around & attempt to make their hardware work even if they wanted to. Ridiculously harsh limitations, no choice, no experimenting.
Oh yeah, users don't give a shit. But it's a further shrinking of the playing field toward corporate giants that'll only share details with other giants to develop products.
The problem isn't inherently that the drivers are closed source; it's that Nvidia is actively hostile toward the ecosystem. For instance, Mesa added a generic buffer management API (GBM) that allows compositors like Weston to be hardware accelerated using OpenGL. Nvidia could have followed suit and supported GBM, but instead went their own route with EGLStreams. So now Wayland, XWayland, and every single Wayland compositor has to implement Nvidia-specific code to support their hardware.
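To illustrate what "generic" buys you: any driver that implements GBM lets a compositor bootstrap EGL the same way. A minimal sketch of that path (device path & resolution are placeholder assumptions; error handling omitted):

    /* Sketch: how a compositor gets an EGL-renderable surface via GBM. */
    #include <fcntl.h>
    #include <unistd.h>
    #include <gbm.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    int main(void)
    {
        /* Open the DRM device & wrap it in a GBM device. */
        int fd = open("/dev/dri/card0", O_RDWR); /* placeholder path */
        struct gbm_device *gbm = gbm_create_device(fd);

        /* Allocate a surface whose buffers GBM manages for scanout. */
        struct gbm_surface *surf = gbm_surface_create(gbm, 1920, 1080,
            GBM_FORMAT_XRGB8888, GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);

        /* Bind EGL to the GBM device; from here it's standard EGL/GL. */
        EGLDisplay dpy = eglGetPlatformDisplay(EGL_PLATFORM_GBM_KHR, gbm, NULL);
        eglInitialize(dpy, NULL, NULL);

        /* ... choose an EGLConfig, create a context & window surface
           from `surf`, render, then page-flip the locked buffers ... */

        eglTerminate(dpy);
        gbm_surface_destroy(surf);
        gbm_device_destroy(gbm);
        close(fd);
        return 0;
    }

With EGLStreams there is no equivalent of this device/surface pair; buffers live inside an opaque stream the compositor can't allocate or manage itself, which is roughly why every compositor needs a second code path for Nvidia.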
Honestly, I really don't see the GeForce play. Nvidia tried it with Tegra and failed pretty miserably. Mali and Adreno pretty much cornered the market from that era (with PowerVR pivoting over to Apple). I just don't see their IP really hitting home with the type of workloads you see in SoCs.
The primary driver for most SoC GPUs since the qHD days is pushing pixels for UI layers, which has a different set of requirements and features compared to modern GPUs used for gaming or ML. They're almost exclusively tiling-based and biased toward power consumption over raw horsepower.
> Honestly, I really don't see the GeForce play. Nvidia tried it with Tegra and failed pretty miserably. Mali and Adreno pretty much cornered the market from that era
I want to be nice, but I don't know what rock you've been sleeping under. The TX2 is 3 years old, & it's not just Nvidia cornering the entire hapless AI/ML market with proprietary CUDA that keeps it & the Jetson platform the #1 most obvious go-to for robotics: in spite of a fairly trash, not-very-good ARM CPU, those few NV GPU cores are way better than the rest of the ARM offerings. Even outside ML, the NV ARM GPU offerings radically outstrip everyone else. No one else has the RAM bandwidth to begin to compete, much less the cores. 3 years have passed & the only part to match the TX2's 60GB/s is NV's own top-end Xavier, at 137GB/s. No one else is playing in the same league NV has been playing in with ARM GPUs. I don't know how you could call this massive, roaring, colossal success a failure. To say nothing of the Nintendo Switch.
The Tegra thing is interesting, and I wrote about it in the other thread too.
It is a failure for NVidia in that they launched it as a mainstream mobile phone/tablet part, and as far as I'm aware it's not used in anything in that market outside the NVidia Shield (and the Switch, of course).
But it has seen success in robotics and self driving cars, because NVidia makes it easy to use and it has great performance.
So it's not obvious how to judge it. Commercially, compared to their initial goals, it is probably a failure. But it has opened new markets that didn't exist before, so in that sense it's a success?
Horsepower isn't everything: usually cost and power consumption come first in SoC selection, followed by feature set (of which the GPU is just one part of a larger picture).
If you want to really succeed in the SoC space (which is where Arm has), then what you need is volume, and I don't think Tegra ever made any serious inroads there.
The switch is a game console and so it sits somewhat outside of the traditional high-volume SoC market.