I know how awkward that title is and I apologize.

OS: Home Assistant 11.2

Core: 2023.12.3

Computer: Raspberry Pi 4 Model B Rev 1.5

Explanation: I run a set of data collection scripts on my home network, and one of the pieces of data collected is the computer model. On all my other SBCs, the symlink below provides that data.

Symlink: /proc/device-tree/model

File Location: /sys/firmware/devicetree/base/model
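
For anyone who wants to reproduce the check, this is roughly what I'm looking at from a shell inside the container (a minimal sketch, not my full script):

    # the /proc path is a symlink into sysfs; on the broken install the target is gone
    readlink /proc/device-tree          # -> /sys/firmware/devicetree/base
    ls -la /sys/firmware                # completely empty here
    cat /proc/device-tree/model         # fails, since the target no longer exists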

The symlink is broken, and when I went to check the firmware directory, it was completely empty. According to ls -la, /sys/firmware was last updated December 10 at 2:40, which, when I checked my backups, is when core_2023.12.0 was installed.

Attached is what should be in the firmware folder on my other Raspberry Pi 4 Model B Rev 1.5 right now.

I ran a find from root for the model file or anything vaguely resembling it, and I can't find it. Is anyone else having this problem, or is it just happening to me? Or am I missing something?

  • whaleross@lemmy.world

    Every time I read about issues like this, I really wish the HA devs could be bothered to complement today's rolling release with a quarterly "stable" branch that gets all the bug fixes and patches but none of the monthly new features.

    The stable branch could lag three months behind on features; the latest features don't matter when all you want is a recently patched and updated system that runs your house without going bzzpth.

    • Seperis@lemmy.worldOP

      Since probably October, I've noticed some really random problems show up that never used to. And for once, I know it wasn't me messing with the code; I took a sabbatical from HA a couple of months ago to learn how to use Proxmox, and everything worked fine. It was actually a clean install on a new Raspberry Pi, as my Odroid decided to stop working and I haven't had time to learn to solder (hopefully this week, though). I was kind of wondering if the Pi was the problem.

    • realitista@lemmy.world

      I always submit this idea any time there’s a “Month of What The Heck”. Devs don’t care much for it. Honestly at this point I’d be happy to give up all new features and just stay on a stable branch.

      • whaleross@lemmy.world

        Yeah, it's totally a fun, feature-driven project reliant on community effort, despite there being a commercial venture behind it nowadays. The core devs still treat it as their baby hobby project, and nobody wants to do the boring job of maintaining a stable branch, so it's not going to happen.

        A while ago I saw another discussion on this topic that was shot down with the argument that all users have to do is stay up to date with the latest version, while also saying that users are at fault for things breaking because they update when a notification tells them there is a new latest version.

        I think it's arrogant and irresponsible, a ticking time bomb for a big bug or zero-day exploit, and not how any serious project should be administered.

        I wish I could have Linus Torvalds give his colorful opinion on this mindset toward developing the operating system for people's homes.

    • brb@feddit.nl

      I think you are more than welcome to make a fork and do this. You can backport all the fixes and not implement new features.

      I think new features are one of the core reasons the project gets more contributors, and it sorely needs those. I understand why the focus is on that now.

      If you want stability, I'd suggest finding some like-minded people and maintaining a Long Term Stable version.

      • whaleross@lemmy.world

        Indeed I could, but this is the boring job you have paid employees for, rather than putting it on users to ensure a stable version of your product.

    • Seperis@lemmy.worldOP

      I know; I'm trying to write up a clear bug report on this, but I'm honestly not sure it actually has any effect other than messing up my data collection scripts. Yes, it's annoying the hell out of me, but I've been going through the documented issues with the core and it doesn't look like anyone else has noticed a problem. I've been trying to figure out if it's created by an Alpine package that I can run, but not much luck there.

      Note: I enabled root for Home Assistant OS and the symlink and file are fine there.

  • Skull giver@popplesburger.hilciferous.nl

    /sys is a virtual filesystem; it doesn't store any real files. Most of the time it's populated by the host OS (though running code inside Docker can restrict parts of it). I'm not sure what the point of backing up that part of the filesystem is.

    If it’s missing any files, it’s probably because your packages broke, or there’s something weird about your kernel.
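
    You can confirm it's a virtual mount easily enough on the host itself (not in a container):

        # sysfs is a kernel-populated pseudo-filesystem, not data on disk
        findmnt /sys                    # FSTYPE shows "sysfs", not a block device
        grep ' /sys ' /proc/mounts      # same information if findmnt isn't installed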

    • Seperis@lemmy.worldOP

      Which is why I'm not sure I need a bug report. The part I have non-root access to is inside a Docker container, and that's all I needed to collect data. But it's such a random thing to go missing since that core update.

      • Skull giver@popplesburger.hilciferous.nl

        My point is more that I don’t think Home Assistant controls these files, so there’s not a lot they can do to fix this. I’ve looked a bit more into this, and Docker indeed blocks off access to these files.

        Home Assistant Core (the one doing basically no installation other than a virtual environment) certainly won't affect the kernel like this. If you're using a Python virtualenv, your problem lies with the OS you're running. That said, what they call "core" and what users call "core" seem to be two very different things.

        If the Docker-based Home Assistant installs have been providing access to these files, they've overridden the explicit choices of the Docker developers themselves, which seems rather risky. There's an issue upstream with a suggestion ("just turn off all protection Docker offers", basically), but the Docker team hasn't allowed access to these files for almost two years now.

        Perhaps it’s only happening now because you updated Docker recently?

        If you’re not using Docker, you should check with the upstream Linux distro to see why you can’t access the devicetree information.

        If you are using Docker, you could consider asking the home assistant people to put back whatever hack they used to expose this information, or perhaps they could instruct you with how to apply this hack yourself. Currently, Docker explicitly hides these files, though, so to get this information you’ll have to fight the underlying software.
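
        (For illustration only, and assuming you were running a container by hand with the Docker CLI rather than through the supervisor: the usual ways to punch through this are a read-only bind mount of just the devicetree directory, or full privileged mode, which is the "turn off all protection" option from that issue.)

            # expose only the devicetree info, read-only (the narrower option)
            docker run --rm -v /sys/firmware/devicetree/base:/devicetree:ro alpine \
                cat /devicetree/model

            # or run privileged, which unmasks all of /sys/firmware (and much more)
            docker run --rm --privileged alpine cat /sys/firmware/devicetree/base/model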

        • Seperis@lemmy.worldOP

          Full disclosure: I just, and I mean just, got my head wrapped around Docker and containers from installing Proxmox on my server. Right now, my Proxmox server runs an LXC container for Docker, and in Docker I run Handbrake and MakeMKV images whose GUIs run in a browser or from the command line. They connect to each other by mounting the LXC's /home/user into both; then I added a connection to the remote shares on my other server so I can send files to my media server. Yes, I did have to map all the mounts out first before I started, but hey, that's how I learn.

          Long way of saying: I am just now able to start understanding how Home Assistant works (someone said Home Assistant OS is basically a hypervisor overseeing a lot of containers, and now that I use Proxmox, that really helped), but I'm still really unfamiliar with the details.

          I installed the full Home Assistant on a dedicated Pi 4, so it's the only thing that machine does. Until yesterday, the only part I actually interacted with was the data portion, which is where all my files are, where I configure my GUI and scripts, store add-ons, etc. The container for this portion runs on Alpine Linux; I can, have, and do install/update/change/build packages I need or like to use in there. It's ephemeral, though; anything I do outside the data directory (which holds /config, /addons, etc.) gets wiped clean on update, so I reinstall them whenever HA does an update.

          When I run my data collection scripts on my Home Assistant SBC, they take their information from the container, aka Alpine Linux, including reporting my OS as Alpine. All of this worked correctly up until, according to the directory dates, December 10th at 2:40 AM, when /sys/firmware was last updated and everything in it vanished, breaking the symlink to /proc/device-tree/model. This also updated the container OS to Alpine 3.19.0. Data collection runs hourly; one of my Pis ssh's into each computer to run four data collection scripts and updates a browser page I serve from Apache, so I can check current presence and network status and also check the OS/hardware/running services of all my computers from the browser (the services script doesn't work on Alpine yet; different structure). I didn't notice until recently because work got super busy, so I was only verifying availability and network status regularly.
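
          The hourly run is basically just a loop like this (host names and output paths here are placeholders, not my real setup, and the real scripts do a lot more formatting):

              # collector Pi, run from cron: execute the collection script on each host
              for host in ha-pi media-pi beaglebone; do
                  ssh "$host" 'sh -s' < collect-info.sh > /var/www/html/status/"$host".txt
              done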

          These are the packages I install, or switch to an updated/different version of, in the Alpine container to help with this or just for fun:

          • figlet (just cute ASCII art for an ssh banner)
          • iproute2 (network info; the updated version can dump network info as JSON for storing in a variable)
          • iw (wireless adapter info)
          • jq (reads and processes JSON files)
          • procps-ng (updated uptime package for more options)
          • sed (the updated version can do more than the installed one)
          • util-linux (for the column command in bash)
          • wireless-tools (iwconfig; more wireless data if iw doesn’t have it)

          (Note: I think tr may also be updated by one of these.)

          These are the ones I use for data collection that are already installed:

          • lscpu (“Model name” “Vendor ID” “Architecture” “CPU(s)” “CPU min MHz” “CPU max MHz”)
          • uname (kernel)

          These are the files I access for data collection (a rough sketch of the reads follows the list):

          • /proc/device-tree/model (Computer model)
          • /proc/meminfo (RAM)
          • /proc/uptime (Uptime)
          • /etc/os-release (Current OS data)
          • /sys/class/thermal/thermal_zone0/temp (CPU temperature for all my SBCs except BeagleBone Black)
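
          Roughly, the reads look like this (simplified; the exact parsing in my scripts differs a bit):

              # computer model (the read that currently fails on the HA container)
              model=$(tr -d '\0' < /proc/device-tree/model)
              # RAM, uptime, OS, and CPU temperature
              mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
              up_sec=$(cut -d' ' -f1 /proc/uptime)
              os_name=$(. /etc/os-release && echo "$PRETTY_NAME")
              cpu_temp=$(awk '{printf "%.1f", $1/1000}' /sys/class/thermal/thermal_zone0/temp)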

          Until this month, all of those files were accessible both before and after I did the package updates. The only one possibly affected was /proc/uptime, by the uptime update to get more options. Again: I've been running these scripts, or versions of them, for well over a year, and I test them individually on each SBC before adding them to my data collection scripts to run remotely; all of these worked on every computer, including whatever SBC was running Home Assistant (an Odroid N2+ until it died a few months ago). And all of them work right now, except /proc/device-tree/model on my Home Assistant SBC. The only way I can get the model info is to add an extra ssh into Home Assistant itself as root and grab the data off that file (and while I'm there, get the OS data for Home Assistant instead of Alpine), save it to my shell script directory in my data container, and have my script process that file after it gets the rest from the container.
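
          The workaround looks roughly like this (the hostname and destination path are placeholders, not my real ones):

              # grab model and OS info from Home Assistant OS itself, as root,
              # and stash it where the container-side script can pick it up
              ssh root@ha-host 'tr -d "\0" < /proc/device-tree/model; echo; cat /etc/os-release' \
                  > /config/scripts/host-info.txt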

          That's why I'm weirded out; this is one of the things that has been the same on every single Linux OS I've used, including Alpine, so why on earth would this one thing change?

          This could conceivably be an Alpine issue; I downloaded Alpine 3.19.0 to run in Proxmox when I get a chance, and I kind of hope it's a deliberate change in Alpine, because otherwise I can't imagine why on earth the HA team would alter Alpine to break that symlink. Or they could be templating Alpine for the container each time and this time it accidentally broke. The entire thing is just so weird. Or maybe, though not likely, it's a bug in Alpine 3.19.0, but I doubt it; I can't possibly be the first to notice, it was released at least three weeks ago and I googled a lot.

          I'm honestly not sure it affects anything at all, but it bothers me, so here we are. Though granted, it did make me finally get off my ass and figure out how to log in to HA as root, do a badly needed refactor of my main data collection script (the one that does the ssh'ing), and clean up and refactor my computer information scripts, so maybe it was destiny.

          • Skull giver@popplesburger.hilciferous.nl

            I run HASS on an amd64 virtual machine, so it's possible there are differences between our devices. However, because both seem to be based on maintaining a set of Docker containers, I don't think there's much difference (other than the ARM-specific virtual devicetree directory not existing on my machine).

            If you run an up-to-date version of Docker, you should not have access to /sys/firmware by default. That’s a decision the Docker folks made because that directory contains things like bootloader configuration/information and Windows license keys.

            On the Linux OS itself, there shouldn’t be any such restriction. If you can’t access these files outside of Docker, there’s something wrong, probably with your kernel. You said ssh’ing into the machine works as a workaround, so I don’t think this is the case.

            What seems more likely to me is that your current host OS comes with a recent version of Docker that shields the /sys/firmware directory from Docker containers by default. If the Docker version didn't change, then I think what you're seeing is what you should've been seeing all along.

            The only way I can think of that Home Assistant could have changed this behaviour is that it could've changed the configuration of the default containers. As you can read in the GitHub issue I linked, there's a way to tell Docker to basically disable its security features (run in privileged mode and allow access to all of sysfs). It's possible that Home Assistant used to configure Docker in this manner, but no longer does.

            Running a full application in privileged mode is normally a hack to work around other problems (i.e. not exposing the proper device paths with proper access controls and just allowing the container to do whatever and probably break out of isolation), so it could be that they enabled these workarounds to work around some unrelated issue. If the unrelated issue was fixed, and the containers no longer needed to run privileged, they could’ve disabled the workaround and broken your access to sysfs in the process.

            The small Home Assistant supervisor daemon that acts as a sort of "hypervisor" (which handles updates of the other containers) does need to run in privileged mode; it needs to control Docker, so of course Docker can't be configured to stop it from doing that. It's a rather small service, though. However, I have noticed that on some installs, the supervisor daemon seems to lose its privileged mode due to a bug. It's possible that this bug also affected your seemingly privileged main container. If that's the case, running the installation script again should fix the issue.

            I googled all the terms I could think of that could affect your problem with “home assistant” but when it comes to devicetree access, only your Lemmy post seems to come up. I think your data collection setup may be rather unique among HASS users, so perhaps you really are the only one affected by this, or at least the only one who’s written a post about it.

            In my tests, none of the normal (unprivileged) Docker containers I’m running on my servers could access /sys/firmware. I tested this under Ubuntu, Debian, Manjaro, and Arch hosts. Accessing various firmware related virtual files worked fine outside Docker, of course, but inside Docker, /sys/firmware is empty. I don’t have an Alpine install but I’d be surprised if that’d handle this directory any different.
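
            (The test is easy to repeat if you want to compare on your own hardware:)

                # on the host, or over plain ssh: works on any devicetree-based board
                cat /sys/firmware/devicetree/base/model

                # inside an unprivileged container: the directory exists but is masked/empty
                docker run --rm alpine ls -la /sys/firmware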

            Normally, you could work around the limitations here by just marking your home assistant container as privileged and ignoring the potential security implications, as you may have unknowingly been doing. I think that’s not exactly an unacceptable risk for a dedicated Raspberry Pi (though it would be bad to default to this configuration). Unfortunately, Home Assistant’s supervisor recreates containers for you during updates, so marking the containers as privileged can be more of a pain than you’d expect. You can try looking into ways to customise the Home Assistant Docker configuration to grant these permissions, perhaps there’s a config file I’m not aware of that you can use to make sure the supervisor recreates the containers with the appropriate configuration. As stupid as it may be, I would personally look towards alternative solutions, like your SSH workaround; perhaps your script can check for an empty /sys/firmware directory and apply the workaround from there?
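
            Something like this, as a sketch only (the host name is a placeholder, and the ssh details depend on how you set up root access):

                # use the local files when sysfs isn't masked, otherwise ask the host over ssh
                if [ -n "$(ls -A /sys/firmware 2>/dev/null)" ]; then
                    model=$(tr -d '\0' < /proc/device-tree/model)
                else
                    model=$(ssh root@haos-host "tr -d '\0' < /proc/device-tree/model")
                fi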

            tl;dr: it’s a kernel bug if you’re not running your data collecting script inside Docker, otherwise it could be a home assistant bug/update that caused the change, but as of a year or two ago you’re not supposed to be able to read these files from within a Docker container anyway.

            For what it’s worth, I disagree with Docker’s blanket block of /sys/firmware and I hope the issue that’s open about this change will be resolved. You don’t want to leak Windows keys, but there should be an obvious way to expose the board info without disabling basic container security…