Strictly speaking, they’re leveraging free users to increase the number of domains under their DNS service. That gives them larger end-user reach, since it makes ISPs hit their DNS servers more frequently, and the increased usage better positions them to lead peering-agreement discussions with ISPs. More peering agreements mean cheaper bandwidth for their CDN and faster responses overall, which they can use as a selling point for their enterprise clients. The benefits are pretty universal, so it’s actually a good thing for everyone all around… that is, unless you’re trying to become a competitor and get your own peering agreements set up, in which case it’d be quite a bit harder for you to acquire customers at the same scale/pace.
I tend to recommend sticking with more reputable providers, even if it means a couple of dollars extra on a recurring basis. Way too many kiddie hosts pop up trying to make a quick buck over spring break/summer, and then fail to provide adequate service when it actually comes time to deliver.
It may also be a good idea to check LET/WHT before committing to anything longer than a month-to-month term with a provider.
Apple community; we don’t care about Android, sorry but not sorry. And yes, it is possible; I don’t think anyone is arguing that. The discussion is more about what should happen in “the Apple way,” where things just work, and that’s something Apple can mandate of app developers… something that might be pretty foreign and alien to the Android crowd.
I use AdGuard.
There’s a lifetime family plan (for more devices) to be had on StackSocial almost as a permanent fixture. You can occasionally stack it with extra coupons.
There’s also the problem that, sadly, Lemmy is filled with vocal users with a skewed view of the world, and they tend to be extremely polarizing. The “if you’re not one of us, who firmly believes the world should work a certain way, and if you’re not willing to shoot yourself in the foot with a shotgun to prove it as a point, then you’re one of them; you should get the eff off of Lemmy and crawl back to Reddit” kind of way. They’re so scared of losing that pedestal that they’ll go out of their way to alienate anyone who doesn’t drink their koolaid and push them off the platform so they can remain dominant. Sadly, these people also never really learned much about the real world, so those who are more experienced/educated get pushed off the platform, and we end up with a weird superstonk-culty kind of vibe everywhere.
I find myself more and more just making a comment and not looking back. It’s quite literally futile and pointless to expect any discussion of actual substance. You wonder why it’s just shitposting… well, this is why.
It is probably best to assume nothing on Lemmy is private. Any instance with at least one user subscribed to a community will receive updates (messages and votes) for that community, and instance admins can go into the database to see any private message between any users on their instance.
I know 1Password can detect the app ID and use it as match criteria. The problem is that it’s not intuitive for users to find the app ID to key into the password manager; it also doesn’t change the fact that most of the time the app is just a front end to some website, which already has an entry in the password manager, and users shouldn’t have to go out of their way to find an app ID to work around the dev’s failure to surface basic metadata about their service.
Bitter that you’re not able to use new features much? Reap what you sow.
It is unreasonable to expect platforms to open up everything to be ripped out and swapped for their competitors.
I expect platforms to get more and more cautious about what they release into unfavorable regulatory environments that offer only marginal economic benefit.
OP currently has two drives in their possession.
OP has confirmed they’re 12TB each, and in total there is 19TB of data across the two drives.
Assuming there is only one partition, each one might look something like this:
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 12345678-9abc-def0-1234-56789abcdef0
Device Start End Sectors Size Type
/dev/sda1 2048 23437499966 23437497919 12.0T Linux filesystem
OP wants to buy a new drive (also 12TB) and create a RAID5 array without losing existing data. Kind of madness, but it is achievable. OP buys a new drive and sets it up as such:
Device Start End Sectors Size Type
/dev/sdc1 2048 3906252047 3906250000 2.0T Linux RAID
Unallocated space:
3906252048 23437500000 19531247953 10.0T
Then, OP must shrink the existing partition to something smaller, say 10TB for example, and make the freed space part of their RAID5:
Device Start End Sectors Size Type
/dev/sda1 2048 19531249999 19531247952 10.0T Linux filesystem
/dev/sda2 19531250000 23437499999 3906250000 2.0T Linux RAID
Now with the 3x 2TB partitions, they can create their RAID5 initially:
sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc1
Make an ext4 filesystem on md0, copy 4TB of data into it (2TB from sda1 and 2TB from sdb1), and verify the RAID5 is working properly.
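That step might look something like the following dry-run sketch. The mount points and source paths are hypothetical placeholders, and the run wrapper just prints each command so nothing destructive executes:

```shell
#!/bin/sh
# Dry-run sketch of the initial filesystem setup on md0.
# /mnt/md0 and the /mnt/sda1//mnt/sdb1 source paths are made-up
# placeholders; adjust to the real mount points before running.
set -eu
run() { echo "+ $*"; }   # print instead of execute; remove to run for real

run mkfs.ext4 /dev/md0
run mkdir -p /mnt/md0
run mount /dev/md0 /mnt/md0
# copy ~2TB off each old drive, then verify before deleting anything
run rsync -a /mnt/sda1/data/ /mnt/md0/
run rsync -a /mnt/sdb1/data/ /mnt/md0/
```

Verifying with a checksum pass (e.g. rsync -c) before deleting the originals would be the cautious move here.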
Once OP is happy with the data on md0, they can delete the copied data from sda1 and sdb1, shrink the filesystems there (resize2fs), expand sda2 and sdb2, expand sdc1, and resize the RAID (mdadm --grow ...).
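One pass of that cycle might look roughly like this dry-run sketch. The 8T figure is a hypothetical example for a first pass, the partition resizing itself is left to fdisk/parted, and the run wrapper prints commands instead of executing them:

```shell
#!/bin/sh
# Dry-run sketch of one shrink/expand pass. Sizes are example values.
set -eu
run() { echo "+ $*"; }   # print instead of execute; remove to run for real

run umount /mnt/sda1
run e2fsck -f /dev/sda1            # required before an offline shrink
run resize2fs /dev/sda1 8T         # shrink the old filesystem (example size)
# next: shrink sda1's partition and grow sda2 with fdisk/parted (not shown)
run mdadm --grow /dev/md0 --size=max   # use the enlarged component partitions
run resize2fs /dev/md0                 # grow the filesystem on the array
```

The ordering matters: filesystem shrink before partition shrink on the way down, partition grow before array/filesystem grow on the way up.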
Rinse and repeat; at the end of the process, they’d end up with all their data in the newly created md0, which is a RAID5 volume spanning all three disks.
Hope this is clear enough and that there is no more disconnect.
Fun story but I’m most impressed with the earbud part of the story. WOW. Absolutely amazing and unexpected.
This is smart! Should help reduce the number of loops they’d need to go through and could reduce the stress on the older drives.
I’m afraid I don’t have an answer for that.
It is heavily dependent on drive speed and the number of times you’d need to repeat the cycle. Each time you copy data into the RAID, the array needs to write the data plus compute the parity; then, when you expand the array, it needs to be reshaped, which takes more time again.
My only tangentially comparable experience at a similar scale is the expansion of my RAID6 (so two parity drives here, compared to one on yours) from 5x8TB, using 20 out of 24TB, to 8x8TB. These are shucked white-label WD Red equivalents, so 5400 RPM, 256MB-cache SATA drives. Since it was a direct expansion, I didn’t need multiple passes of shrinking and expanding, but the reshape itself took my server, I think, a couple of days.
Someone else mentioned you could potentially move some data into the third drive and start with a larger initial chunk… I think that could help reduce the number of passes you’d need to do as well, may be worth considering.
They’re going for RAID5, not 6, so with the third drive there’s no additional requirement.
Say, for example, they have 2x 12TB drives with 10TB used each (they mentioned they’ve got 20TB of data currently). They can acquire a 3rd 12TB drive and create a RAID5 volume from 3x 1TB partitions, giving them 2TB of usable space on the RAID volume. They can then copy 2TB of data into the RAID volume (1TB from each of the existing drives), verify the copy worked as intended, delete the originals, shrink the outside filesystem on each of the drives by 1TB, add the newly available 1TB into the RAID, reshape the array, and rinse and repeat.
At the very end, there’d be no data left outside and the RAID volume can be expanded to the full capacity available… assuming the older drives don’t fail during this high stress maneuver.
Even if you could free up only 1GB on each of the drives, you could start the process with a RAID5 of 1GB per disk, migrate 2GB of data into it, free up those 2GB on the old disks, expand the RAID, and rinse and repeat. It will take a very long time and carry a lot of risk due to the increased stress on the old drives, but it is certainly something that’s theoretically achievable.
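As a rough sanity check on how many passes that bootstrap would take, here’s a back-of-the-envelope loop. It assumes the per-disk share roughly doubles each pass (each pass frees half the new usable space on each old disk) and ignores filesystem overhead and rounding:

```shell
#!/bin/sh
# Rough pass-count estimate for the 1GB-per-disk starting point.
per_disk=1          # GB committed to the RAID per disk at the start
outside=20000       # ~20TB (in GB) still sitting outside the array
passes=0
while [ "$outside" -gt 0 ]; do
  usable=$((2 * per_disk))            # RAID5 of 3 disks: 2x one disk's share
  outside=$((outside - usable))
  per_disk=$((per_disk + usable / 2)) # each old disk frees usable/2
  passes=$((passes + 1))
done
echo "passes needed: $passes"
```

The per-pass doubling means even a 1GB toehold takes on the order of fifteen passes, not thousands; the real cost is the reshape time on each pass, not the pass count.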
A couple of years is a lifetime in tech, but despite that, I think the deciding factor should be whether you’re actually going to need the space in the meantime. If not, waiting won’t make a difference. On the flip side, if you’re going to need it in the next couple of years anyway, then it might be easier to recognize that $320 over 2 years is less than $0.50/day… taking the initial hit and getting the benefit earlier will probably work out great.
I’ve always wanted a flush back with no camera bump, but with a thicker battery instead. Never happened, because allegedly batteries are too heavy and people don’t lift.
Pretty sure UniFi Access can also control the lock mechanism they’re describing. So it’d be a nicely integrated solution.
I think it would be a good idea to take a step back and ask what is it that you’re trying to achieve.
Userbase, the service linked, is a backend-as-a-service platform that offers authentication and a basic database you can access via their API. You’d then code your own front-end web app to interact with their service and store data there. You pay only for the storage used, per their storage tiers, which are frankly fairly priced. If that is something you need, it’s a good option, but you’d be coding the front end yourself.
If you’re only looking for authentication with OAuth, and plan to code your own API backend, then something like Authentik would be a nice self-hosted authentication provider. Others that commonly get mentioned, but that I’ve got limited/no experience with, would be Keycloak or FusionAuth. Managed services here would be your Auth0, Okta, etc.
If you’ve got a specific use case in mind, it may be a good idea to say which service you’re thinking about; the community may be able to suggest prebuilt solutions that fit better and require less lift.