  • The only thing that comes to mind is battery damage. Keeping your laptop constantly plugged in and heating up during intensive work / gaming sessions will speed up the rate at which the battery ages. However, it’s a Framework, so repairs are reasonably cheap and doable - so even this point stands on shaky ground.

    Aside from that - there is no loss. I, too, use my laptop as a desktop replacement with a dock nowadays. Laptops have gotten so good that, while desktops are certainly still faster, many users - even heavy ones - are fine with the performance a laptop delivers.



  • I just cannot find enough specs online. It seems to use the standard M.2 form factor (encouraging - you can put an Intel AX210 in an old T480 today, so these things have generally had a degree of backward compatibility), but I can’t seem to find info on whether it is a stand-alone interface or CNVi. If it’s stand-alone, I think it is very likely to work, even on AMD variants. If it’s CNVi, it requires 12th gen or higher, but it should still work, assuming the CNVi interface hasn’t really changed in 13th gen (which has so far been 12th gen but more efficient, with no real changes).

    There should be no harm in trying it out. In the worst case - if you want to be super careful - I’d Google whether there is any known way to reset the CNVi firmware state on Insyde BIOS, but I doubt it would cause permanent damage. On AMD there is really nothing bad it can do: there is no CNVi firmware to touch, so the worst thing that can happen is that it is not detected as a WiFi adapter, and that’s the end of it.
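
    If you do try it, here’s a minimal way to check whether the card was picked up at all - a sketch assuming a Linux install with the usual tools on hand:

      # Does the card show up on the bus, and did a driver bind to it?
      lspci -nnk | grep -i -A3 -E 'network|wireless'
      # Any firmware or driver complaints during boot?
      sudo dmesg | grep -i -E 'wifi|wlan|firmware'

    If the card shows up in lspci but no driver binds on the Intel boards, that would point at a CNVi-only module rather than a dead card.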

    No way around trying. Someone has to do it first.


  • Thank you! I’ve already had some people give me crap for not buying a Gen 3 instead, but all the data I’ve found indicates that during heavy use the performance difference with a PCIe 4.0 x4 NVMe drive is noticeable, and the P41 is actually more efficient than the P31: while its power consumption is higher during reads and writes, it takes about half the time to complete the same operation, and it also races to idle much faster than the P31. The controller on the P41 Platinum seems to be just very, very good on power, with no known rivals (except the Solidigm P44 Pro, which has the same SK Hynix Aries controller and very similarly performing flash - but it’s €250 vs. €120, and I won’t pay that much for a margin-of-error difference).
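
    To make the race-to-idle math concrete with purely illustrative numbers (not measured figures): a drive that draws 6 W but finishes a transfer in 10 s spends 60 J, while a nominally “lower-power” drive drawing 4 W for 25 s spends 100 J on the same work - and it keeps the rest of the system awake longer on top. Energy is power × time; the peak wattage on the spec sheet alone tells you very little.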




  • Frankly, regardless of what the optimization guide says, I would not use TLP. TLP is widely known in the Linux community for being very troublesome in this respect. Your CPU’s frequency governor very likely does a much better job on its own anyway, and TLP needs to be fine-tuned for a specific machine to work well. Personally, I think the performance penalty you pay for TLP is not worth the battery life advantage. Sure, my laptop lasts longer on TLP, but it’s also slower - not rocket science, right? If you put the brakes on your CPU, it will run slower and draw less power. Seems about right.
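
    If you want to see what TLP (or anything else) has actually done to the CPU before deciding, here’s a quick sketch - the exact sysfs paths vary a bit between intel_pstate and amd-pstate:

      # Current frequency scaling governor
      cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
      # Energy/performance preference, if the driver exposes one
      cat /sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference
      # Intel only: 1 here means Turbo Boost is disabled
      cat /sys/devices/system/cpu/intel_pstate/no_turbo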

    I would try one of the following options:

    • Drop TLP completely, use power-profiles-daemon plus powertop, and work through the tunables to turn off features you don’t need and claw back some power
    • Tweak your TLP config yourself (see the sketch after this list). Make sure Turbo Boost is allowed, and edit the CPU-related parameters to be less conservative. You can leave the USB device suspend stuff alone if you want.
    • Use Fedora Workstation and don’t do anything else. It already has pretty good defaults for battery life and, on all hardware I tested, it gives me longer battery run-time than Ubuntu and Arch do on a comparable configuration. Make sure to stick to GNOME on Wayland and avoid installing extensions - some of them are programmed so badly that they end up consuming a lot of power unnecessarily.
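
    For the second option, a minimal sketch of a drop-in override - the file name is just an example, and you should double-check the parameter names against man tlp.conf for your TLP version:

      # /etc/tlp.d/10-less-conservative.conf (example name)
      # Keep Turbo Boost available on battery too
      CPU_BOOST_ON_AC=1
      CPU_BOOST_ON_BAT=1
      # Balanced energy/performance policy instead of the most conservative one
      CPU_ENERGY_PERF_POLICY_ON_AC=balance_performance
      CPU_ENERGY_PERF_POLICY_ON_BAT=balance_power

    Apply it with sudo tlp start. For the first option, powertop’s “Tunables” tab shows the toggles interactively, and sudo powertop --auto-tune flips them all to “Good” in one go.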

    Also, make sure you are booting Linux with mem_sleep_default=deep to minimize power drain during standby; although, as with all modern laptops except MacBooks nowadays, you should use standby as little as you can and prefer a full shutdown whenever feasible.
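
    A quick sketch of how to check and set that, assuming a GRUB-based setup (the config regeneration command varies by distro):

      # Which sleep modes does the kernel support? The active one is in brackets.
      cat /sys/power/mem_sleep
      # Add mem_sleep_default=deep to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
      sudo grub2-mkconfig -o /boot/grub2/grub.cfg   # Fedora; on Debian/Ubuntu: sudo update-grub

    Note that some modern laptops only implement s2idle and never offer deep - if it isn’t listed in mem_sleep, the parameter won’t do anything.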



  • I had not really thought about that, but you’re right. The example holds throughout my experience. I use Linux daily nowadays, but I used to be a Windows power user.

    I can confidently claim Linux is easier to use than Windows. And I mean it seriously; I’m not trying to be controversial. It’s nowhere near as complex as Windows, and structurally many things are simpler - there is less abstraction. So while it might require a bit of a learning curve in places, once you’ve climbed it, you’re done, because you fundamentally know how the system works. Also, while the GUIs are slowly getting better and covering more of the experience, for the most part Linux distros don’t try to hide the underlying workings from the end user, and documentation is provided for just about everything, so if you’re curious how something works, you can just read up on it and get a decent grasp.

    Windows tries to hide how it works from the user, and that is what makes it so hard to debug. When something breaks on Linux, after a few years of experience and a “clinical eye”, I can pretty much always accurately guess where in the stack the disruption happened. Maybe it takes two or three tries to get it right, but at the end of the day I have an honest grasp of how a modern Linux system is put together, and I can get my hands dirty. On Windows it’s… not that simple. I feel like I am taking shots in the dark, guessing semi-randomly and hoping the next attempt will fix it rather than leave my system in an even more broken state. The admin tools are there - regedit, PowerShell, Event Log, Performance Monitor, whatever you want - but it feels like doing surgery blindfolded: trusting what you read on some forum and hoping this long-ass PowerShell command will do it.


  • Windows is the only platform with this kind of backward compatibility. If they didn’t have to worry about that, they could just hop onto the Core kernel, add a compatibility layer for NT applications, and end up with a superior architecture compared to what we have now.

    However, that would basically murder backward compatibility for applications and hardware drivers, and that would be a disaster. So they cannot do that, and they’re stuck piling layer upon layer of new stuff on top of legacy code that they have to be very careful about changing.

    Apple fundamentally doesn’t care. They have the clout to be able to tell developers “alright, either you port your program to this new completely arbitrary set of requirements or all your Apple customers are going to be very unhappy”, because their clients aren’t really big enterprise deployments, but individual users. Regular customers, self-employed artists, and design firms. Client laptops for software companies. Nothing too important. This allows them to improve as much as they want with changes as breaking as they want, like moving through 3 different architectures over the lifetime of their desktop OS and having everyone just swallow the pill.

    Linux is a different beast entirely. It has frankly not been relevant on the desktop until recently - and right now it’s in the middle of a bit of a critical period. Development has sprung back up after being fundamentally dead / very slow for a long time, and the community has finally decided to abandon a lot of the legacy that was holding Linux back and replace it with shiny new stuff that is much more modern and, for most purposes, works better: PulseAudio is being replaced with PipeWire, the 39-year-old X server is being phased out in favour of Wayland compositors, and graphical applications are now being shipped through Flatpak, with several benefits (isolation, a choice of install locations, the option to delete all data on uninstall, very easy add-on installation for a program like OBS through the store, and deploy once, run on all distros). And it’s doable because Linux’s market share sits in that niche where it’s now high enough to be relevant, but still low enough that breaking changes are possible without too much disruption.
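
    As a concrete taste of those Flatpak benefits - a minimal sketch, assuming Flathub is already configured as a remote:

      # Install OBS from Flathub, system-wide or per-user
      flatpak install flathub com.obsproject.Studio
      flatpak install --user flathub com.obsproject.Studio
      # Uninstall it and wipe its data in one go
      flatpak uninstall --delete-data com.obsproject.Studio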

    Windows already has the monopoly so it’s kinda… cornered


  • Sadly this is just how it goes for Windows sometimes. You’re right in your suspicion that Linux’s driver infrastructure is miles ahead, and this wouldn’t have happened on Linux.

    Still, if I were you, I’d try doing an in-place reinstall, then double-checking that the driver package is the correct one, and then trying to apply it. That should be the closest you can get to a clean install without deleting any data.
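
    Before (or alongside) that, the built-in repair tools are worth a shot from an elevated prompt - these are the standard Windows image and system-file checks, though how much they actually help varies:

      DISM /Online /Cleanup-Image /RestoreHealth
      sfc /scannow

    The in-place reinstall itself is just mounting the installation ISO and running setup.exe from inside Windows, choosing to keep files and apps.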

    Failing that… I’ll be honest. I hate troubleshooting Windows. Hours of headaches spent on vague error messages with no documentation. When I have an issue on Windows and an in-place reinstall doesn’t fix it, I just take a quick backup and reinstall the OS clean. It’s not worth fighting with. The system is so complex that any one of ten thousand different things may have gone wrong, and you don’t have the tools to properly troubleshoot it anyway unless you’re a senior Windows sysadmin or the like. A clean reinstall is more time-effective, and it also brings the usual benefits: all the small errors that pile up on Windows over time go away, any unnecessary software you never got around to uninstalling is gone, and so on.