The days of Nvidia proprietary drivers being a safe bet are long gone. That is especially true for any sort of Wayland desktop, but it applies to X11 as well.
Intel drivers should be good as well, since they use the same Mesa code base.
With ROCm no longer depending on AMD Pro, there is no reason to step away from the default GPU drivers provided by your distro, provided they are relatively new.
While I am sure that there are still going to be professional-grade proprietary apps that recommend Nvidia... for most of us the only real reason to choose Nvidia on Linux is CUDA. And, personally, I would rather lease time in the cloud or have a second workhorse GPU PC separate from my desktop for that.
Unfortunately Nvidia is, by far, the most popular option for Windows users. Over a 4:1 ratio according to Steam's hardware survey.
So most new Linux users are still going to have to suffer through dealing with their GPU drivers.
Mobile users suffer more problems than people with dedicated desktop GPUs, but it has still gotten a lot better.
The one thing to be careful about with AMD GPUs is that for most GPU OEMs, AMD is just an afterthought. So they get sub-par QA and heatsinks compared to their more popular Nvidia models.
It is best to go with card makers that only sell AMD GPUs, like Sapphire, PowerColor, and XFX. I am partial to Sapphire.
Once the rewritten "amdgpu" driver came out, things got much better. With the first few cards released after that (IIRC the Polaris GPUs, the RX 400 series), the situation reversed. I still have had occasional issues with various Nvidia cards (normally driver updates breaking things), but for almost a decade now I have not had issues with AMD GPUs under Linux.
[0] Except for pro features while using workstation cards. You need to use a proprietary driver for those, but even those share a lot of code with the open source driver.
So it is both driver changes and architectural changes.
There is also AMDGPU-PRO, which is the proprietary version based on AMDGPU. It used to be that you'd need it for ROCm, but that hasn't been true for a while now. There really isn't any reason to use the "pro" version anymore, unless you have some special proprietary app that requires it.
Open source GPU drivers are based on the Mesa stack, so they share a common code base and support for things like Vulkan.
So it is sorta similar to how DirectX works. With old-school OpenGL drivers, each stack was proprietary to the GPU manufacturer, so there were lots of quirks and extensions that applied to only one GPU or another. That is one of the reasons DirectX displaced OpenGL in gaming... Microsoft 'owned' the DirectX/Direct3D stack.
Well, the open source equivalent to that is Mesa. Mesa provides API support in software, and it is then ported to each GPU with "dri drivers".
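To make the "dri drivers" part concrete: you can see which Mesa drivers your distro actually ships by looking in its dri directory. Here is a minimal Python sketch that checks a few common install paths; the paths are assumptions and vary by distro, and very recent Mesa builds may consolidate drivers into a single combined library:

```python
#!/usr/bin/env python3
# Minimal sketch: list Mesa DRI driver libraries found in common install paths.
# The candidate paths below are assumptions (Debian/Ubuntu multiarch, Fedora,
# Arch); adjust for your distro. Recent Mesa releases may ship one combined
# "megadriver" instead of a separate .so per GPU family.
import glob
import os

CANDIDATE_DIRS = [
    "/usr/lib/x86_64-linux-gnu/dri",  # Debian/Ubuntu
    "/usr/lib64/dri",                 # Fedora/openSUSE
    "/usr/lib/dri",                   # Arch and others
]

for d in CANDIDATE_DIRS:
    if not os.path.isdir(d):
        continue
    drivers = sorted(os.path.basename(p) for p in glob.glob(os.path.join(d, "*_dri.so")))
    print(f"{d}:")
    for name in drivers:
        print(f"  {name}")  # e.g. radeonsi_dri.so, iris_dri.so, nouveau_dri.so
```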
For gaming things have improved tremendously with "Proton", which is essentially Wine with vastly improved Direct3D support.
This is accomplished with "DXVK", which is a Direct3D to Vulkan translator.
This way Linux essentially gets close to "native Windows speed" for most games that support Proton in one way or another.
Which means that most games run on Linux now. Probably over 75% of what is available on Steam, although "running" doesn't mean it is perfect.
One of the biggest problems with Linux gaming nowadays is anti-cheat for competitive online games. Most of the anti-piracy and anti-cheat software games use can technically work on Linux, but it is really up to the game publisher to make it work and support it. Linux gamers can sometimes get it working themselves, but they also get flagged, booted, and even have their accounts locked on suspicion of cheating.
Now that I have it working I see random glitches here and there that I can't pin down. For some Electron apps I have to turn off GPU acceleration or no window shows up at all: the app launches, the process exists, it's in the dock as active, but the window never appears.
Getting a new laptop from work to replace this one and I'm really hoping it won't have nvidia hardware - or at least, if it does I can disable it and the Intel GPU will work fine also.
There's a reason why a lot of us sat on the sidelines and were looking forward to the 16". There is no slippery slope here, the differentiated product lines 100% make sense.
Edit: there is another class I could see making sense - desktop replacement. Those chassis tend to be pretty chunky because they put desktop parts into a laptop. Think a 10 lb laptop with a battery that lasts 20-30 minutes. But I'm not sure the market is large enough for them to pursue it.
E.g. sockets and chipsets change and will force incompatible changes, no matter how much Framework would like to keep things stable.
On the OP page, it says:
> Pick up all these upgrades from our Marketplace to extend the life of your existing Framework Laptop 16.
Unless you already have the Ryzen AI 300 motherboard - in which case you're up to date - you can upgrade your motherboard right now:
https://frame.work/marketplace?compatibility%5B%5D=amd_ryzen...
You can hardly expect Framework to reconfigure the physical structure of your laptop to support a new GPU card when the device didn't have one to begin with.
You seem to be looking for something to complain about.
I've upgraded my Framework 13 a bunch already since I bought it in 2022, and will hopefully continue to do so for years.
Framework does work with ODMs (Compal, I believe, is their main one?) to design mainboards for their chassis, which are designed specifically for Framework. It's not like they just take an off-the-shelf design and build it without any modifications.
And yes, chipsets change. (A "socket" changing isn't really a thing when we're talking about a laptop where the CPU/SoC is soldered in.) Generally this isn't a problem, though: as long as you can design something that physically fits in the chassis and supports the features you want, you're fine.
I'd be more concerned about what I'd be able to do with an older 16-inch mainboard, as the 13-inch has the Cooler Master case options.
Still rocking the Intel Tiger Lake 13-inch here, mixed Windows / Ubuntu workflow, loads of RAM.
Given both Framework and NVIDIA's checkered histories around Linux driver support, I see no reason to revisit that, but it is interesting to see the voices in this thread with positive NVIDIA experience.
Compared to AMD and Intel, NVIDIA is very much not an 'out of the box', or stable experience.
Not completely true either: it eventually supported most of the normal 3d primitives, but gaming performance was never a priority because there were few developers and they weren't employed by AMD/ATI -- which also meant that some cards would only reach full feature support after their EOL, sadly.
The amdgpu driver also benefits from a lot of the groundwork that has been done since. The radeon driver is older than kernel features like KMS (kernel modesetting) and GEM (graphics execution manager), and the LLVM-based shader compiler in Mesa (userspace). I'd say that the radeon driver was actually the proving ground for many of these features, because it was the most capable open source 3d driver: the Intel 845/915 hardware barely supported 3d operations, and the only 3d-capable open source driver for Nvidia was the reverse-engineered nouveau driver.
Luckily, many people working on the amdgpu driver are actually on AMD's payroll these days.
I forgot that name "fglrx", probably a mental self-defense mechanism. Those were some bad times, trying to get different display outputs to work at the same time, guessing and testing values in xorg.conf, so on. There was some community utility someone wrote to try and help with installation, reinstallation, configuration and reconfiguration, but the name eludes me now.
I would edit my post to correct it, but it seems the edit window has passed.
The only issue you may run into is if your distro doesn't include the firmware. E.g. this was the case with Debian 11, where you had to enable the non-free repo.
The only other problem you could conceivably have is that the card isn't supported by the kernel because it is too new. This can be fixed by upgrading the kernel. In Debian you can use a backports kernel; I am sure there are similar options in other distros.
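As a quick sanity check for both cases, you can look at which kernel driver (if any) is bound to your GPU and whether the amdgpu firmware directory is present. A rough Python sketch, assuming the standard sysfs layout and firmware under /lib/firmware (file names and compression vary by distro):

```python
#!/usr/bin/env python3
# Rough sketch: show which kernel driver is bound to each PCI display
# controller, and whether amdgpu firmware files are present. Paths are
# assumptions (standard sysfs layout, firmware under /lib/firmware).
import os

PCI_DEVICES = "/sys/bus/pci/devices"
FIRMWARE_DIR = "/lib/firmware/amdgpu"

for dev in sorted(os.listdir(PCI_DEVICES)):
    dev_path = os.path.join(PCI_DEVICES, dev)
    try:
        with open(os.path.join(dev_path, "class")) as f:
            pci_class = f.read().strip()
    except OSError:
        continue
    if not pci_class.startswith("0x03"):  # PCI class 0x03xxxx = display controller
        continue
    driver_link = os.path.join(dev_path, "driver")
    driver = os.path.basename(os.readlink(driver_link)) if os.path.islink(driver_link) else "(no driver bound)"
    print(f"{dev}: class {pci_class}, driver {driver}")

if os.path.isdir(FIRMWARE_DIR):
    print(f"{FIRMWARE_DIR}: {len(os.listdir(FIRMWARE_DIR))} firmware files")
else:
    print(f"{FIRMWARE_DIR} not found - firmware package may be missing")
```

If a brand-new card shows "(no driver bound)", a newer kernel (e.g. from backports) is usually the fix; if the firmware directory is missing or nearly empty, it's the firmware package.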
When I was using my old 1080Ti, I had constant issues with the NVIDIA drivers. Acceleration didn't work on the second screen sometimes. There was some magic setting that would unset itself.
Anyway, in case anyone is interested: it seems the code itself is cited as MIT, however it has a "when it becomes a Linux .ko it becomes GPLv2" clause https://github.com/NVIDIA/open-gpu-kernel-modules/blob/580.7... and they do go out of their way to say "lol, needs binary blobs" https://github.com/NVIDIA/open-gpu-kernel-modules/blob/580.7...
That XFree86.run has always struck me as "you're gonna *what*?"
You assume Framework will just abandon models willy nilly and make slight model line changes to break compatibility like moving from 16” to 17”, but in reality they have no track record of doing that.
The original 13” model has been around for 5+ years and it’s been 100% forward and backward compatible through multiple iterations of parts. Framework has never discontinued a product line.
Obviously we can’t predict the future.
I believe the Framework CEO himself mentioned in an interview that the chipset and socket are kind of at the core of designing the whole laptop, because they dictate the placement of the cooling and all other components. I sadly didn't bookmark that YouTube video, so I cannot provide a link.
And fwiw, Apple is the only company that could make their laptops fully compatible and upgradeable, because they've got the relevant stack under their own control. Sadly, they're not interested in reducing ewaste, as that would mean less profit for them
Ah and hardware video decoding never ever worked again.
So much for the so-called advantages of an open source driver.
Maybe it's just a problem with older Nvidia GPUs, but it's not a gamble I want to take.
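Not a fix, but a quick way to see what the driver actually exposes for hardware decode: the vainfo (VA-API) and vdpauinfo (VDPAU) tools list the supported codec profiles. A small Python sketch that just runs whichever is installed; it assumes the libva-utils / vdpauinfo packages (names vary per distro), and note that Nvidia's proprietary stack historically leans on VDPAU/NVDEC while Intel and AMD use VA-API:

```python
#!/usr/bin/env python3
# Sketch: run vainfo and/or vdpauinfo if installed, to list the hardware
# video decode profiles the current driver exposes. Assumes the libva-utils
# and vdpauinfo packages (names vary per distro) provide these tools.
import shutil
import subprocess

for tool in ("vainfo", "vdpauinfo"):
    path = shutil.which(tool)
    if path is None:
        print(f"{tool}: not installed")
        continue
    print(f"--- {tool} ---")
    # check=False: these tools exit non-zero when no usable device/driver is
    # found, which is itself useful information here.
    subprocess.run([path], check=False)
```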
But the user-space portions are probably more significant for performance than the kernel drivers. Here we have:
- r300 and r600 (open source OpenGL backend for older hardware, sits on top of the radeon kernel driver, not much development happening)
- radeonsi (open source OpenGL backend for newer hardware, sits on top of either the radeon or amdgpu kernel drivers depending on hardware version and kernel configuration)
- fglrx (closed source OpenGL driver on top of the fglrx kernel driver, both obsolete now)
- radv (open source Vulkan driver on top of amdgpu)
- amdgpu-pro (closed source Vulkan driver on top of amdgpu) - not sure if there is also still a proprietary OpenGL driver, but if there is, no one should care since radeonsi works well enough
- amdvlk (open source dumps of amdgpu-pro without the proprietary shader compiler, on top of amdgpu) - a sketch after this list shows how to check which of these Vulkan ICDs are installed
Then you have different shader compilers which also significantly affect both shader compile time and runtime performance:
- internal compiler used by r600
- LLVM (used by radeonsi and amdvlk)
- ACO (used by radv and possibly radeonsi these days)
- AMD's proprietary compiler (used by fglrx and amdgpu-pro)
And for X.org you also have different display drivers (fglrx, radeon, modesetting).
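Since several of those Vulkan drivers (radv, amdvlk, amdgpu-pro) can be installed side by side, it helps to see which ICD manifests the Vulkan loader will actually find. A small Python sketch under the usual assumptions: these are the conventional loader search paths, they may differ on your system, and VK_ICD_FILENAMES / VK_DRIVER_FILES environment overrides take precedence.

```python
#!/usr/bin/env python3
# Sketch: list installed Vulkan ICD manifests and the driver library each one
# points to. The search paths below are the conventional ones used by the
# Vulkan loader; they may vary, and environment overrides take precedence.
import glob
import json
import os

ICD_DIRS = [
    "/usr/share/vulkan/icd.d",
    "/usr/local/share/vulkan/icd.d",
    "/etc/vulkan/icd.d",
    os.path.expanduser("~/.local/share/vulkan/icd.d"),
]

for d in ICD_DIRS:
    for manifest in sorted(glob.glob(os.path.join(d, "*.json"))):
        try:
            with open(manifest) as f:
                icd = json.load(f).get("ICD", {})
        except (OSError, json.JSONDecodeError):
            continue
        lib = icd.get("library_path", "?")
        api = icd.get("api_version", "?")
        # e.g. radeon_icd.x86_64.json -> libvulkan_radeon.so (radv)
        print(f"{manifest}: {lib} (API {api})")
```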
They use the same front end but that says very little about the quality of the overall driver. Performance is mostly determined by the shader compiler and other hardware-specific parts which obviously differ between Intel and AMD.