• 0 Posts
  • 13 Comments
Joined 1 year ago
Cake day: June 26th, 2023




  • I think that the left-right dichotomy is inherently flawed. A lot of what I believe might be considered “right-leaning” or “left-leaning,” but I cannot say that I subscribe to either ideology fully or with any fidelity.

    I will always be opposed to any view with a pervasive “moral” authority, and both the so-called left and right are obsessed with their own versions of this. The problem we run into is the false supposition that beliefs can be categorized on a spectrum spanning right to left (or, even more liberally, a spectrum spread across two dimensions). It has been a ridiculous notion from its inception, whenever that might have been.

    Building one’s identity (another silly notion, in general—identity itself being a frivolous construct that functions only as a fulcrum for the extortion of social power) upon a supposed spectrum is likewise ridiculous. You can be conservative or liberal, or anything, really. But those beliefs do not exist in a linear or planar dimension. They are so far removed from each other that one cannot fathom sliding incrementally from one to the next.

    And to each respective party, “left” and “right,” the other can be demonized as evil, even without full comprehension of the other. It’s all just so damned tribalistic and silly.




  • Let’s remove the context of AI altogether.

    Say, for instance, you were to check out and read a book from a free public library. You then go on to use some of the book’s content as the basis of your opinions. What’s more, you also absorb some of the common language structures used in that book and unwittingly reproduce them when you speak or write.

    Are you infringing on copyright by adopting the book’s views and using some of the sentence structures its author employed? At what point can we say that an author owns the language in their work? Who owns language, in general?

    Assuming that a GPT model cannot regurgitate verbatim the contents of its training dataset, how is copyright applicable to it?

    Edit: I would also imagine that if we were discussing an open source LLM instead of GPT-4 or GPT-3.5, the sentiment here would be different. What’s more, I imagine that some of the ire here stems from a misunderstanding of how transformer models are trained and how they function (see the sketch below).
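
    A minimal sketch of the training objective mentioned above, for context: transformer LLMs are trained on next-token prediction, where only numeric weights are updated to make better predictions; the training text itself is not stored as text. The example below uses PyTorch with a toy character-level “tokenizer” and a deliberately tiny model, all of which are illustrative assumptions rather than OpenAI’s actual code, data, or architecture.

        # Illustrative next-token-prediction training step (not any vendor's real code).
        import torch
        import torch.nn as nn

        # Hypothetical toy corpus and character-level "tokenizer".
        text = "who owns language in general"
        vocab = sorted(set(text))
        stoi = {ch: i for i, ch in enumerate(vocab)}
        ids = torch.tensor([stoi[ch] for ch in text])

        class TinyLM(nn.Module):
            """A deliberately tiny stand-in for a transformer: embed a token, predict the next one."""
            def __init__(self, vocab_size, dim=32):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, dim)
                self.head = nn.Linear(dim, vocab_size)

            def forward(self, x):
                return self.head(self.embed(x))  # logits over the next token at each position

        model = TinyLM(len(vocab))
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)

        # One training step: given tokens 0..n-2, predict tokens 1..n-1.
        inputs, targets = ids[:-1], ids[1:]
        logits = model(inputs)                               # shape: (n-1, vocab_size)
        loss = nn.functional.cross_entropy(logits, targets)  # penalize wrong next-token guesses
        loss.backward()
        opt.step()

        # Repeated over a large corpus, the weights come to encode statistical
        # patterns of the text rather than a verbatim copy of it, though some
        # memorization of rare or repeated strings is still possible at scale.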



  • It’s a bit of a mixed bag. I do enjoy Lemmy. I think that the conversations that take place here are interesting (though many now revolve around Reddit in one way or another). I don’t really find the front page to be as good as Reddit’s.

    And then, of course, I think the most important difference is that Lemmy draws a specific type of person, even after the Reddit migration, and there aren’t as many of us as there are average Internet users. I’m not saying Lemmings are a special breed; rather, I’m saying that we’re the sort of people who might have used Usenet at its peak. We’re the sort who might be Linux users. Many of us are morally aligned with open source technology and the ethics thereof. This makes the discussions a little less diverse on Lemmy than they are on Reddit (which can be good and bad, depending on the sort of conversation).