Let’s get the AMAs kicked off on Lemmy, shall we?

Almost ten years ago now, I wrote RFC 7168, “Hypertext Coffeepot Control Protocol for Tea Efflux Appliances”, which extends HTCPCP to handle tea brewing. Both the Coffeepot Control Protocol and the tea-brewing extension are joke Internet Standards, released on April 1st (1998 and 2014 respectively). You may be familiar with HTTP error 418, “I’m a teapot”; that comes from the 1998 standard.
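
For the talk I’ll want a concrete demo of that refusal; here’s a minimal sketch of a teapot answering a BREW request with 418, using Python’s standard http.server (the handler and response body are my own illustration, not text from either RFC):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Teapot(BaseHTTPRequestHandler):
    # BaseHTTPRequestHandler dispatches each request to do_<METHOD>,
    # so an HTCPCP BREW request lands in do_BREW.
    def do_BREW(self):
        body = b"Short and stout; this pot does not do coffee.\n"
        self.send_response(418, "I'm a teapot")
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Teapot).serve_forever()
```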

I’m giving a talk on the history of HTTP and HTCPCP at the WeAreDevelopers World Congress in Berlin later this month, and I need an FAQ section; AMA about the Internet and HTTP. Let’s try this out!

  • M-Reimer@lemmy.world

    The number one question I would ask about HTTP would be: why was the “Referer” header added in the first place, and why hasn’t it been removed from the standard to this day? In my opinion, the server I’m going to should never know where I came from.

    • Two9A@lemmy.worldOP

      I’ve just done some quick browsing to see if there’s a written-down motivation for Referer existing, and there’s this on Wikipedia: “Many blogs publish referrer information in order to link back to people who are linking to them, and hence broaden the conversation.”

      Which I guess makes sense, in the context of the original use of HTTP as an academic publishing protocol, but it’s gained cruft and nefariousness since wider adoption came about.

      There are good arguments for stripping Referer from the standard, and yours is one of the most cogent; if Referer is still a thing in another 30 years, I’d be surprised.
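
      For illustration, here’s roughly what that “link back” use looks like on the receiving end; a minimal sketch with Python’s standard http.server (the logging is my own, nothing the spec prescribes):

      ```python
      from http.server import BaseHTTPRequestHandler, HTTPServer

      class RefererLogger(BaseHTTPRequestHandler):
          def do_GET(self):
              # Browsers fill in Referer automatically when you follow a link,
              # so the server can see (and log) which page sent the visitor.
              referer = self.headers.get("Referer", "(none)")
              print(f"GET {self.path} via {referer}")
              body = b"hello\n"
              self.send_response(200)
              self.send_header("Content-Type", "text/plain")
              self.send_header("Content-Length", str(len(body)))
              self.end_headers()
              self.wfile.write(body)

      if __name__ == "__main__":
          HTTPServer(("127.0.0.1", 8080), RefererLogger).serve_forever()
      ```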

        • Two9A@lemmy.worldOP

          It should, certainly. But the original draft introducing the header had a typo, and now we’re all stuck with it.

      • kalleboo@lemmy.world

        In the early days of hypertext there was also a lot of talk of “the semantic web”, where one proposal was that all links should be two-way; Referer may have been a compromise to let people try to implement that on top of the one-way HTTP/HTML.

      • PlasmaK@lemmy.ml

        I hope the User-Agent header will be gone too. It does nothing except demand that you install Chrome, or spy on you.

        • Supermariofan67@lemmy.fmhy.ml

          There are far more robust methods of fingerprinting to spy on users anyway (adding up all the details of screen size, available fonts, language, OS, and so on), so I don’t think removing the User-Agent alone would have much impact on fingerprinting. It’s also useful as a quick and simple way to check what type of device, OS, or browser the user is on, and to serve the right content (a download link for one’s OS) or block troublesome clients (broken bots).
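
          Something like this, as a rough sketch in Python (the download paths are made up; real User-Agent parsing is messier, but the idea is just substring checks):

          ```python
          # Pick a download link from the User-Agent string. Check "android"
          # before "linux", since Android UAs also contain "Linux".
          def download_link(user_agent: str) -> str:
              ua = user_agent.lower()
              if "windows" in ua:
                  return "/downloads/setup.exe"
              if "mac os x" in ua or "macintosh" in ua:
                  return "/downloads/app.dmg"
              if "android" in ua:
                  return "/downloads/app.apk"
              if "linux" in ua:
                  return "/downloads/app.tar.gz"
              return "/downloads/"  # generic fallback page

          print(download_link("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"))
          ```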

          • PlasmaK@lemmy.ml

            (adding up all the details of screen size, available fonts, language, os, etc, etc),

            Not if you simply turn off JavaScript.

            • intensely_human@lemm.ee

              I bet you can detect window size with CSS media queries and invisible background-image URLs on rendered items.

              I don’t know if “display: none” prevents loading of background-image targets, though.
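
              Something along these lines, I’d guess; a sketch of the CSS a tracker could serve, written here as a little Python generator (the /log endpoint is hypothetical):

              ```python
              # Emit one media-query rule per width bucket; whichever rule matches
              # makes the browser fetch that background image, leaking the viewport
              # width to the (hypothetical) /log endpoint with no JavaScript at all.
              def width_probe_css(widths=(1024, 1280, 1440, 1920, 2560)):
                  rules = []
                  for w in widths:
                      rules.append(
                          f"@media (min-width: {w}px) {{\n"
                          f"  .probe {{ background-image: url('/log?min-width={w}'); }}\n"
                          f"}}"
                      )
                  return "\n".join(rules)

              print(width_probe_css())
              ```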

              • PlasmaK@lemmy.ml

                Then browsers should just download ALL background-image URLs beforehand.