• 1 Post
  • 15 Comments
Joined 2 years ago
Cake day: June 12th, 2024

  • Eh - maybe - there are definitely hoarders with the ability to absorb 300TB. They’re not common, but they do exist. There are probably close to zero hoarders who could spare 3PB, especially for a collection they’ll never listen to most of. It’s like saying it isn’t worth digitizing wax cylinder recordings because the source is so low quality. If preservation is the goal, anything is better than nothing.


  • Normally you’d be mostly right, but in this case I have to agree that lossy existence is better than lossless absence. 300TB puts it at the upper limit of prosumer capacity, but it’s still doable from a personal archive perspective. If you went lossless FLAC, though, you’re looking at 3-6PB. That quantity is almost completely unattainable for hobbyists, and presents challenges even for enterprise outfits. This archive is the “photo of the original document” for the collection. It’s not optimal, and there’s a lot of room for improvement, but the alternative is to just not do it at all (quick math sketch below).
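    For anyone checking the math, here’s a rough sketch of where a 3-6PB figure could come from. The ~10-20x lossy-to-FLAC size ratio is my own ballpark assumption (it’s just what the 300TB → 3-6PB estimate implies), not a figure from the original comment:

    ```python
    # Back-of-the-envelope: a 300TB lossy archive scaled by an assumed
    # 10-20x size increase for lossless FLAC lands in the 3-6PB range.
    lossy_archive_tb = 300

    for ratio in (10, 20):
        print(f"{ratio}x lossless blow-up -> ~{lossy_archive_tb * ratio / 1000:.0f} PB")
    # 10x lossless blow-up -> ~3 PB
    # 20x lossless blow-up -> ~6 PB
    ```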


  • Sab might have its own mask settings - it would be worth looking at. Same thing applies here - subtract each mask digit from 7 to get the real permissions. In this case, a mask of 002 translates to 775 (sketch below). That gives the uid and gid the container is running under (probably defined in a variable somewhere) read/write/execute, but everyone else read/execute. The “everyone else” would be any account on the system (regardless of access method) that didn’t match the actual uid or gid value.
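    If it helps to see that subtraction written out (under the hood it’s a bitwise AND with the complement of the mask), here’s a minimal sketch assuming the 002 mask from above; directories start from 777 and plain files from 666:

    ```python
    # Minimal sketch: how a umask maps to effective permissions.
    umask = 0o002  # the mask value discussed above

    dir_perms  = 0o777 & ~umask  # directories start at 777 -> 0o775 (rwxrwxr-x)
    file_perms = 0o666 & ~umask  # plain files start at 666 -> 0o664 (rw-rw-r--)

    print(oct(dir_perms), oct(file_perms))  # 0o775 0o664
    ```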




  • Yeah, I’ve stopped using plex entirely. I was grandfathered in, but it just got to be too much nonsense. The license changes to unRAID don’t meet that bar, IMO. Yeah, the old license model is gone, but “buy once, upgrade free forever” is what caused plex to go the route it did. I honestly never expected to get upgrades forever - I assumed it would have to go one of a few ways for the devs to be able to feed their families, and what they chose is definitely one of the lesser evils. For a lot of use cases, it even makes sense. I stayed on 6.x for probably close to 3 years, so I would have saved money under the new scheme. I’m also willing to admit that if you’re truly dead set on free (both libre and gratis), there are plenty of solid choices there, too.


  • Yeah, they did something goofy with it, but they’re at least trying not to be nakedly evil. I got in when it was just a perpetual license, but the new model isn’t as bad as a lot of people think. TrueNAS is good too - I use the enterprise version at work and it’s done well. The biggest differences are that the enterprise version doesn’t expose containers or LXC, so I don’t know how that works, and TrueNAS/ZFS wants same-size disks where unRAID allows mixing sizes while retaining up to 2 parity disks. At work, I buy specific drives, so ZFS is great - at home I buy what’s affordable, so ZFS isn’t so good. I also saw one of your other comments, and unRAID supports hardware pass-through to containers, so exposing your AMD iGPU to jellyfin should be pretty simple. I can’t speak to how TrueNAS would handle that.


  • I’ll make the obligatory unRAID suggestion. It fits a lot of less intensive scenarios like the one you’re describing. It does carry a cost, and the licensing model is “interesting”, but it has top-tier ease of use, especially around container apps. It would also allow you to use that 1TB SSD as a cache drive, since the OS runs from USB (well, in memory, but stored on USB). You can also trial it for free for 30 days, and if you don’t like it, there are plenty of good suggestions in the thread already.