• 1 Post
  • 71 Comments
Joined 14 days ago
Cake day: January 6th, 2026

  • I think it depends a lot on what you are building.

    For bigger projects and apps leveraging the mobile platform I’m 100% with you.

    These kinds of frameworks can still be a good fit for a quick MVP demo, as a stepping stone for porting an existing web app, or if all you really want is a glorified web view (or are PWAs enough for the last one these days?)

    React Native specifically is in terrible shape though, and IMO something to avoid.





  • The concept is attractive.

    Since back before “atomic” and “immutable” were fashionable buzzwords, I’ve had a few Alpine installations running something like this. Their installer supports it. https://wiki.alpinelinux.org/wiki/Immutable_root_with_atomic_upgrades

    I guess I’m also not alone in having been running OpenWrt with atomic upgrades for many years.

    Since then I’ve been running a ublue fork (Aurora) for a while now. Forking it and running the builds on my own infra instead of relying on their GitHub works after hacking up the workflow files, but it’s quite redundant and inefficient, with IMO one too many intermediate layers (kinoite -> akmods -> main -> aurora/silverblue/bazzite -> iso) downloading the same things multiple times despite spending considerable overhead on caching. It’s clear that building outside of their GitHub org is not really actively supported.

    Also tried openSUSE microOS (Aeon) a year or two back for a while. I want to like it but find zypper and transactional-update pretty uncomfortable and TBH sometimes still confusing to work with. Installing it on encrypted RAID was daunting IIRC. Rough edges. Enough out-of-date docs on the official site to make Debian wiki look like ArchWiki in comparison.

    KDE Linux looks promising but it was still in a very early and undocumented stage last I looked. Great to see the progress.

    More recently I’ve been looking at Arkane Linux and have been using it for some months now. It’s an immutable distro with an Arch base. Much easier to customize and maintain than the ublue options, and a lot less time spent triggering and waiting for builds - while having less stuff pulled from third-party servers in the process, and an easy way to fork packages by cloning and submoduling an AUR repo. A lot more straightforward to make work without relying on GitHub. If you’re looking at rolling your own builds and are comfortable with Arch, I highly recommend checking it out. My fav so far.

    https://arkanelinux.org/

    https://codeberg.org/arkanelinux/arkdep

    Given the self-contained nature of Debian - cloning the Debian sources is enough to do a complete offline build of everything - I think it’d be the most interesting base for a sustainable immutable distro unless you go to the opposite end with “distroless” (no comment). Looking forward to one.


  • kumi@feddit.online to Linux@lemmy.ml: Drag and Drop is an absolute mess

    It’s not as black and white as they say. Flatpak is not a bad choice per se, but it’s not without tradeoffs, and those can come with catches like this because of the security model. There is no one-size-fits-all here. If you want all your apps to have access to everything your user does, and you value convenience over the sandboxing, flatpaks might not be the best choice for your situation. Also, as with any repo that takes external third-party uploads, quality varies a lot between apps and maintainers on Flathub. Some are excellent and some are in a sorry state. Before installing from Flathub, it’s a good idea to do some basic due diligence on the package and maintainer.

    I agree with IanTwenty that the UX has room for improvement in making it more obvious what’s going on and making it easier to manage customizations and overrides. For the time being, getting comfortable with Flatseal and learning more about Flatpaks seems like the best way for a user to make it work for them if the defaults don’t work out.
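    For the CLI-inclined, the same per-app overrides that Flatseal manages can also be inspected and written with flatpak override. These are just sandbox-config commands; the app ID below is a placeholder, not a real package:

```shell
# Show what the app requests by default (app ID is a placeholder)
flatpak info --show-permissions com.example.App

# Grant read-only access to one host folder, for your user only
flatpak override --user --filesystem=~/Downloads:ro com.example.App

# Review your per-user overrides, or wipe them to go back to defaults
flatpak override --user --show com.example.App
flatpak override --user --reset com.example.App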

    Flatpak has tradeoffs and whatever is on flathub is not guaranteed to always be your best pick. That doesn’t make it Bad. Going as far as calling them harmful in general is hyperbole. It can still be a great option for many users.






  • Apart from what others said about power/throttling, I wonder if the filled up memory during the upgrade (or other memory-heavy use) pushes some central pages to swap and then they stay there after?

    After the upgrade, once you have plenty of free memory again, you can force everything back to RAM by temporarily disabling swap:

    swapoff $swapdev && swapon $swapdev  
    

    To list swap devices, just run swapon.
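    Before cycling swap it’s worth checking that available RAM comfortably exceeds used swap, since swapoff has to move everything back into RAM and will fail partway otherwise. A read-only sketch (the device path in the comment is just an example):

```shell
# Free RAM vs swap usage, in kB
grep -E '^(MemFree|MemAvailable|SwapTotal|SwapFree):' /proc/meminfo

# Active swap devices and how much of each is in use
cat /proc/swaps

# If MemAvailable exceeds used swap (SwapTotal - SwapFree) with headroom,
# cycling is safe, as root, e.g.:
# swapoff /dev/sda2 && swapon /dev/sda2
```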

    Also switching to an X11 window manager can be quite a lot snappier than modern GNOME for older hardware. You could try Xfce, Cinnamon, MATE, or KDE with the X session.

    If it’s not throttling/thermals, I wouldn’t be surprised if those two together are what made things worse after the dist upgrade.

    If you’ve been swapping heavily over time you might also want to check disk health with smartctl and check that you don’t have related errors in dmesg.

    If you press tab in htop you can also see if there is high IO load going on.
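    A quick pass along those lines might look like this. The device path is an example; smartctl comes from the smartmontools package, and it and dmesg typically need root, so those lines are left commented:

```shell
# SMART health summary (as root; example device path)
# smartctl -H /dev/sda

# Kernel log lines that often accompany a failing disk
# dmesg | grep -iE 'i/o error|ata[0-9]|nvme'

# Cumulative per-device I/O counters, readable without root
head -n 5 /proc/diskstats
```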






  • Phone. And Location 🙃

    One example of how the permissions UI on Android is too coarse. Arguably mocking location is a questionable use, but this pattern crops up everywhere. I think users should have more fine-grained control over what apps can access, regardless of what devs put in their manifests. It’s reasonable for a user to want an app to have access to GPS coordinates and network access but not cell or wifi info.

    In general GrapheneOS gives more flexibility and power to the user than stock but I’m not sure if they go far enough to support what you want to do.



    Possibly oversimplifying, and I haven’t had a proper read yet: if you trust the hardware and supply-chain security of Intel but not the operational security of Cloudflare or AWS, this would let you exchange messages with the LLM without infrastructure operators who terminate the TLS encryption being able to read the messages in cleartext.

    This is a form of Confidential Computing based on Trusted Execution Environments. IMO the real compelling use of TEEs is Verifiable Computing. If you have three servers all with chips and TEEs from different vendors, you can run the same execution on all of them and compare results, which should always agree. You will be safe from the compromise of any single one of them. For Confidential Computing, any single one being compromised means the communication is compromised. The random nature of LLM applications makes Verifiable Computing non-trivial and I’m not sure what the state-of-art is there.
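    A toy sketch of that comparison step, with three hypothetical attested outputs standing in for results from servers using different vendors’ TEEs; the result is accepted only on full agreement:

```shell
# Hypothetical outputs from three independently attested servers
a="output-hash-1"
b="output-hash-1"
c="output-hash-1"

# Accept only if all three agree: a single compromised server then
# surfaces as a mismatch instead of silently winning.
if [ "$a" = "$b" ] && [ "$a" = "$c" ]; then
  echo "verified"
else
  echo "mismatch"
fi
```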

    And yes it does look like it has overhead.

    > This seems impossible from a scalability perspective, as even small LLMs require huge quantities of RAM and compute. Did I miss something fundamental here?

    Well isn’t it the other way around? If the per-user resources are high, the additional sublinear overhead of isolating gets relatively smaller. It costs more to run 1000 VMs with 32MB RAM each vs 2 VMs with 16GB RAM each.
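    Back-of-the-envelope, assuming a hypothetical fixed per-VM overhead of 64 MiB (the figure is made up; the point is only that it’s fixed per VM):

```shell
overhead=64   # hypothetical fixed per-VM overhead in MiB

# 1000 small VMs at 32 MiB useful RAM each: overhead tripples the footprint
echo "$((1000 * overhead)) MiB overhead on $((1000 * 32)) MiB useful"
# -> 64000 MiB overhead on 32000 MiB useful

# 2 large VMs at 16 GiB useful RAM each: overhead is under half a percent
echo "$((2 * overhead)) MiB overhead on $((2 * 16384)) MiB useful"
# -> 128 MiB overhead on 32768 MiB useful
```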

    However I guess this might get in the way of batching and sharing resources between users? Is this mentioned?