• Possibly linux@lemmy.zip · 21 hours ago

    I don’t see any reason why this can’t be discussed. I think people here are just extremely anti-AI. It’s almost like forcing AI on people was a bad idea.

    • nandeEbisu@lemmy.world · 12 hours ago

      I think there’s a useful discussion to be had about why these technologies are so effective at getting people to connect with them emotionally, but they themselves don’t experience emotions any more than a fictional character in a book experiences emotion.

      Our mental model of them can experience emotion, but the physical representation is just words. In the book I’m reading there was a brutal torture scene. I felt bad for the character, but if there were an actual being experiencing that kind of torment, writing and reading the book would be horrendously unethical.

    • lime!@feddit.nu · 15 hours ago

      i don’t even understand why it’s worth discussing in the first place. “can autocomplete feel?” “should compilers form unions?” “should i let numpy rest on weekends?”

      wake me up when what the marketers call “ai” becomes more than just matrix multiplication in a loop.
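
      The “matrix multiplication in a loop” quip can be made literal with a toy sketch. This is a caricature, not any real model: the weight matrix, sizes, and the tanh nonlinearity are all made up for illustration.

```python
import numpy as np

# Caricature of the quip: a "model" as matrix multiplication in a loop.
# All names and sizes here are invented for illustration only.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))  # one weight matrix standing in for every layer
x = rng.standard_normal(8)       # a "token" embedding

for _ in range(4):               # the loop: apply the same matmul repeatedly
    x = np.tanh(W @ x)           # matmul plus a nonlinearity, nothing more

print(x.shape)                   # still just an 8-vector of numbers
```

      Nothing in the loop has state beyond the vector being transformed, which is roughly the point being made.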

      • Possibly linux@lemmy.zip · 13 hours ago

        If it were a broad discussion of intelligence, then I could see it.

        I do agree that we are nowhere close to anything that resembles actual intelligence.