• 0 Posts
  • 29 Comments
Joined 1 year ago
Cake day: July 9th, 2023


  • FWIW, that father may have grown up with a father just like yours; he just made different choices. Since you can see that those were choices, you can (and probably will) make different choices too. That’s the only way we ever change: not by retroactively having perfect circumstances, but by choosing to be better each day moving forward.

    Also, as a 40+ year old myself, I think it’s important to take clear stock of the ways you’re similar to your parents (I find more every day) and also the myriad ways you are your own individual.

  •
    > What they actually mean is rather “these two things are very dissimilar”, or “these two things are unequal”.

    I think what they mean is “this is an invalid comparison”. Saying two concepts are “apples and oranges” invokes the idea that apples and oranges can’t be compared. But of course they can be compared, as fruits (which would you rather grab in the cafeteria line? Isn’t that inherently an invitation to compare? I’m with you on that).

    However, if someone were asking whether golden delicious apples are better than honeycrisp apples and a third party butted in that navel oranges are the best, they’d get the same “navel oranges and golden delicious/honeycrisp apples can’t be compared” response, because they’ve introduced a comparison that’s invalid in that context. Apples and oranges.



  • LLMs are conversation engines (hopefully that’s not controversial).

    Imagine if Google were a conversation engine instead of a search engine. You could enter your query and it would essentially describe the first search result to you, in conversation. It would basically be like searching Google with the “I’m feeling lucky” button every time.

    Google, even in its best days, would be a horrible search engine by the “I’m feeling lucky” standard, assuming you wanted an accurate result, where “accurate” means “the system understood me and provided real information useful to me”. Instead, Google returns (or returned, depending on your view) millions or billions of results in response to your query, and we’ve become accustomed to either finding what we want within the first 10 results or tweaking the search.

    I don’t know if LLMs are really less accurate than a search engine from that standpoint. They “know” many things, but a lot of it needs to be verified, and it might not be right on the first or second pass. It might require tweaking your parameters to get better output. The model has billions of parameters, but it regresses toward some common mean.

    If an LLM returned results like a search engine instead of a conversation engine, it might return billions of candidates. Most of them would be nonsense (though generally easy for a human to spot), and you’d probably still find what you want within the first 10 results, or you’d tweak your parameters.
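    To make the analogy concrete, here’s a toy sketch of the two interaction styles. The query, candidate answers, and scores are all invented for illustration; a real LLM scores token sequences rather than picking from a fixed answer table.

```python
# Toy "model": for each query, a made-up distribution over candidate answers.
# Purely illustrative; not how a real LLM or search index stores anything.
CANDIDATES = {
    "capital of france": [
        ("Paris", 0.90),
        ("Lyon", 0.06),
        ("Marseille", 0.04),
    ],
}

def feeling_lucky(query: str) -> str:
    """Conversation-engine style: hand back only the single most likely answer."""
    return max(CANDIDATES[query], key=lambda pair: pair[1])[0]

def top_k(query: str, k: int = 10) -> list[str]:
    """Search-engine style: hand back up to k ranked candidates and let the
    human pick one (or refine the query)."""
    ranked = sorted(CANDIDATES[query], key=lambda pair: pair[1], reverse=True)
    return [answer for answer, _score in ranked[:k]]
```

    The “tweak your parameters” step in either case amounts to re-running the query and re-ranking; the difference is just whether the system shows you one candidate or ten.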

    (Accordingly, I don’t really see LLMs saving all that much practical time either. They can process data and parse requests differently, but the need to verify their output means this method still involves a lot of the back and forth we had before. It’s just different.)

    (BTW, this is exactly how Stable Diffusion and Midjourney work, if you think of them as searching the model’s latent space with the prompt as the search query.)

    edit: oh look, a troll appeared and immediately disappeared. nothing of value was lost.