This might also be an automatic response to prevent discussion. Although I’m not sure since it’s MS’ AI.

  • otp@sh.itjust.works
    8 months ago

    I think the LLM won here. If you’re being accusatory and outright calling its previous statement a lie, you’ve already made up your mind. The chatbot knows it can’t change your mind, so it suggests changing the topic.

    It’s not a spokesperson or lawyer for Microsoft, just a bot. So it knows when it should shut itself off.

    • naevaTheRat@lemmy.dbzer0.com
      8 months ago

      The chatbot doesn’t know anything. It has no state like that; your text just gets appended to its text.

      It has been prompted to disengage from disagreement or something similar. By a human designer.
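      To illustrate the point (a minimal sketch of a stateless chat loop, not Copilot’s actual implementation; all names here are made up): each turn, a designer-written system prompt, the prior conversation, and the new user message are just concatenated into one block of text for the model to complete.

      ```python
      # Minimal sketch of a stateless chat loop: the "memory" is just
      # accumulated text, re-sent in full on every turn. The prompt
      # wording and function names are illustrative, not any real API.

      SYSTEM_PROMPT = "If the user becomes hostile, suggest changing the topic."

      def build_context(history, user_message):
          """The model 'knows' nothing between calls; it only ever sees
          this single concatenated string."""
          lines = [f"System: {SYSTEM_PROMPT}"]
          for role, text in history:
              lines.append(f"{role}: {text}")
          lines.append(f"User: {user_message}")
          lines.append("Assistant:")
          return "\n".join(lines)

      history = [("User", "Hi"), ("Assistant", "Hello!")]
      prompt = build_context(history, "That was a lie.")
      print(prompt)
      ```

      The “disengage from disagreement” behavior lives entirely in that human-written system line; the model itself holds no state between turns.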

      • otp@sh.itjust.works
        8 months ago

        I don’t know why the discourse about AI has become so philosophical.

        When I’m playing a single-player game and I say “the AI opponents know I’m hiding behind cover, so they threw a grenade!”, I don’t mean that the video game gained sentience and discovered the best thing to do to win against me.

        When playing a stealth game, we say “The enemy can’t see you if you’re behind cover”, not “The enemy has been programmed to not take any action against the player character when said player character is identified as being granted the Cover status”.

    • webghost0101@sopuli.xyz
      8 months ago

      To add, I have seen this behavior the moment you get too argumentative, so it’s not like it’s purposely singling out certain topics.