• megopie@beehaw.org

    If I had to guess, they probably did a shit job labeling training data or used pre-labeled images. Now, where in the world could they have found huge amounts of pictures of women on the internet with the specific label of “Asian”?

    Almost like most of what determines the quality of the output is not “prompt engineering” but actually the back-end work of labeling the training data properly, and you’re not actually saving much labor over more traditional methods; you’re just making the labor more anonymous, easier to hide, and thus easier to exploit and devalue.

    Almost like this shit is a massive farce, just like the “metaverse” and crypto: it will fail to be market viable and waste a shit ton of money that could have been spent on actually useful things.

    • webghost0101@sopuli.xyz

      They did literally nothing and seem to have used the default Stable Diffusion model, which is supposed to be a tech demo. It would have been easy to put “(((nude, nudity, naked, sexual, violence, gore)))” as the negative prompt.
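      For readers unfamiliar with the triple-paren syntax: in popular Stable Diffusion front ends (the AUTOMATIC1111-style convention), each pair of surrounding parentheses multiplies a term's attention weight by 1.1, so “(((nude)))” weights “nude” by roughly 1.1³ ≈ 1.33 in the negative prompt. A minimal sketch of that weighting rule, with an illustrative function name not taken from any particular codebase:

```python
# Sketch of AUTOMATIC1111-style parenthesis emphasis: each matched pair
# of surrounding parentheses multiplies the term's attention weight by 1.1.
# `emphasis_weight` is a hypothetical helper for illustration only.

def emphasis_weight(token: str, base: float = 1.1) -> tuple[str, float]:
    """Strip matched surrounding parentheses; return (term, attention weight)."""
    depth = 0
    while token.startswith("(") and token.endswith(")"):
        token = token[1:-1]  # peel one layer of emphasis
        depth += 1
    return token, round(base ** depth, 4)

print(emphasis_weight("(((nude)))"))  # → ('nude', 1.331)
print(emphasis_weight("gore"))        # → ('gore', 1.0)
```

      In the diffusers library the negative prompt itself is just passed as the `negative_prompt` argument to the pipeline; the paren-weighting syntax is a front-end convention layered on top.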

      • megopie@beehaw.org

        The problem is that negative prompts can help, but when the training data is so heavily skewed in one direction, stuff still gets through.