The number refers to the horizontal resolution. FHD is nearly 2K pixels wide, just as 4K resolutions are nearly 4K pixels wide, although FHD is the typical term for that resolution, and QHD is more commonly called 2K than FHD is.
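For reference, a quick sketch of how the common consumer names map to pixel counts (standard 16:9 variants assumed):

```python
# Common 16:9 resolutions; the names allude to the horizontal pixel count.
RESOLUTIONS = {
    "FHD (1080p)": (1920, 1080),  # ~2K pixels wide, but rarely called 2K
    "QHD (1440p)": (2560, 1440),  # the one usually marketed as "2K"
    "4K UHD": (3840, 2160),       # ~4K pixels wide
}

for name, (w, h) in RESOLUTIONS.items():
    print(f"{name}: {w}x{h}")
```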
Tapes themselves are cheaper, but the drive cost (and potentially the operating cost?) can definitely be higher for the industrial stuff.
And my TV is still a cheap full HD (2K) screen from 2011, so I’ve got no reason to buy media in higher quality
13647/F/a weird anime
This was in a conversation about what kind of abusive behavior is acceptable. Do you think it’s also acceptable to be mean to athletes because they too cause damage to their own bodies?
Right and other people don’t get to decide to put a virus in my body, so vaccinate or mask up!
What’s the point when herd immunity is necessary?
Humans learn a lot through repetition, so there's no reason to believe that LLMs wouldn't benefit from reinforcement of higher quality information. Especially because seeing the same information in different contexts helps map the links between those contexts and dispel incorrect assumptions. But like I said, the only viable method they have for this kind of emphasis at scale is incidental replication of the more popular works in the training samples. And when something is duplicated too much, the model overfits instead.
To fix this conflict they'd need to fundamentally change big parts of how learning happens and how the algorithm learns. In particular it would need a lot more "introspective" training stages to refine what it has learned, and pretty much nobody does anything even slightly similar on large models, because they don't know how and it would be insanely expensive anyway.
Yes, but should big companies with business models designed to be exploitative be allowed to act hypocritically?
My problem isn't with ML as such, or with learning over such large sets of works, etc; it's that these companies are designing their services specifically to push the people whose works they rely on out of work.
The irony of overfitting is that having numerous copies of common works is a problem AND removing the duplicates would be a problem. The models need an understanding of what's representative of language, etc, but the training algorithms can't learn that on their own, it's not feasible to have humans teach it, and the training algorithm can't effectively detect duplicates and "tune down" their influence to stop replicating them exactly. Trying to do that last part algorithmically will ALSO break things, because it would break the model's understanding of stuff like standard legalese and boilerplate language, etc.
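To illustrate why the naive version of "tune down duplicates" doesn't cut it, here's a minimal sketch using exact hashing (helper names are mine, not from any real training pipeline). It only catches byte-identical copies; reflowed text or boilerplate with one word changed hashes differently and slips through, which is exactly the legalese problem:

```python
import hashlib
from collections import Counter

def sample_weights(corpus: list[str]) -> list[float]:
    """Down-weight exact duplicates so each distinct text contributes
    roughly equally, no matter how often it was crawled."""
    digests = [hashlib.sha256(t.encode()).hexdigest() for t in corpus]
    counts = Counter(digests)
    # Weight 1/n for a text seen n times. Near-duplicates are NOT caught:
    # they hash differently, so their influence is not reduced at all.
    return [1.0 / counts[d] for d in digests]

corpus = ["some unique post", "standard legal boilerplate", "standard legal boilerplate"]
print(sample_weights(corpus))  # [1.0, 0.5, 0.5]
```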
The current generation of generative ML doesn’t do what it says on the box, AND the companies running them deserve to get screwed over.
And yes, I understand the risk of screwing up fair use, which is why my suggestion is not to hinder learning, but to require the companies to track the copyright status of samples and inform end users of the licensing status when the system detects that a sample is substantially replicated in the output. This will not hurt anybody training on public domain or fairly licensed works, nor anybody who tracks authorship when crawling for samples, and it will also not hurt anybody who has designed their ML system to be sufficiently transformative that it never replicates copyrighted samples. It only hurts exploitative companies.
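A rough sketch of the kind of replication check I mean, using n-gram overlap against an index of known samples. The shingle size, threshold, and the index itself are assumptions for illustration, not an existing system:

```python
def shingles(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Overlapping n-word windows of the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(max(0, len(words) - n + 1))}

def flag_replication(output: str, index: dict, threshold: int = 3) -> dict:
    """Return {sample_id: license} for samples the output substantially replicates.

    `index` is a hypothetical shingle -> (sample_id, license) lookup
    built while crawling the training data.
    """
    counts: dict = {}
    for sh in shingles(output):
        if sh in index:
            sample_id, license_ = index[sh]
            n_hits, _ = counts.get(sample_id, (0, license_))
            counts[sample_id] = (n_hits + 1, license_)
    return {sid: lic for sid, (n_hits, lic) in counts.items() if n_hits >= threshold}
```

Anyone who tracked authorship while crawling already has everything needed to build that index, which is the point.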
Remember when media companies tried to sue switch manufacturers because their routers held copies of packets in RAM and argued they needed licensing for that?
https://www.eff.org/deeplinks/2006/06/yes-slashdotters-sira-really-bad
Training an AI can end up embedding copies of copyrightable segments of the originals in the model; look up sample recovery attacks. If it worked as advertised, the output would be transformative derivative work with fair use protection, but in reality it often doesn't work that way.
See also
Sometimes the bet works 🤷
https://olympics.com/en/news/steven-bradbury-australia-s-last-man-standing
And that scene where she can't pull in her non-accelerated astronaut colleague while still being in an atmosphere thin enough that he wouldn't fall behind, so he just drifts away through magic.
In Swedish, "mat" means food.
Depends on the kind of seat too. If they’re thinner ones it’s harder to avoid, especially if you’re leaning forwards. It’s not hard with normal wider seats for me, the actual reason I have a seat cushion for my bike is to protect my ass when the terrain is rough.
Yeah, but who would be able to prove it?
Instance admins could easily patch it in for their local communities (just add a filter ignoring API actions like posting and voting for some users), but it’s not official and probably won’t ever be official behavior
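A hedged sketch of what such a local patch might look like, as a check in the inbox handler. The function and setup are made up for illustration (real Lemmy internals differ), though the activity types are standard ActivityStreams vocabulary:

```python
# Hypothetical admin-side patch: drop selected activity types from listed
# users before they reach local communities, while still letting them read.
READ_ONLY_USERS = {"troll@blocked.example"}
FILTERED_TYPES = {"Create", "Like", "Dislike"}  # posting and voting

def should_accept(activity: dict) -> bool:
    """Accept everything except filtered activity types from listed users."""
    if activity.get("type") in FILTERED_TYPES:
        return activity.get("actor") not in READ_ONLY_USERS
    return True
```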
Bluesky does strict content addressing with hashes plus post IDs (unique per repository; this allows edits). So you can choose which version to refer to. If you need to archive or mirror stuff you can use the hash, and threads can carry both references so you can see which version of a comment somebody replied to, etc.
Without content addressing that’s almost impossible
A lot of this doesn't work easily on the ActivityPub model, because accounts, posts, and communities live on their host instances, and every interaction has to be relayed to them and every update has to be retrieved from them.
While you can set up mirrors with arbitrary additional moderation that can be seen from everywhere, you can’t support submission of content from instances blocked by the host instance.
The Bluesky model with content addressing can create that experience by allowing "roaming" communities, where posts and comments can be collected by multiple hosts, each applying its own filtering. Since posts are signed and comment trees use hashes of the parent, you can't manipulate others' posts undetected.
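A toy sketch of the hash-chaining part (signatures omitted for brevity; field names are illustrative, not Bluesky's actual record format):

```python
import hashlib, json

def content_id(record: dict) -> str:
    """Stable hash over a canonical serialization of a post."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

post = {"author": "alice.example", "post_id": "3k2a", "rev": 1, "text": "original"}
reply = {
    "author": "bob.example",
    "post_id": "9f1c",
    "parent_cid": content_id(post),  # pins the exact version being replied to
    "text": "replying to rev 1",
}

# If the parent is edited, its hash changes, so the mismatch is detectable
# by anyone holding the reply, no matter which host serves the thread.
edited = dict(post, rev=2, text="edited")
assert reply["parent_cid"] == content_id(post)
assert reply["parent_cid"] != content_id(edited)
```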
Bluesky already has 3rd party moderation label services and 3rd party feed generators for its Twitter-like service, and a fork replicating a forum model could have 3rd party forum views and 3rd party moderation applied similarly.
Already broken
https://en.wikipedia.org/wiki/8K_resolution