I was using Bing to create a list of countries to visit. Since I have been to the majority of the African nation on that list, I asked it to remove the african countries…

It simply replied that it can’t do that due to how unethical it is to descriminate against people and yada yada yada. I explained my resoning, it apologized, and came back with the same exact list.

I asked it to check the list as it didn’t remove the african countries, and the bot simply decided to end the conversation. No matter how many times I tried it would always experience a hiccup because of some ethical process in the bg messing up its answers.

It’s really frustrating, I dunno if you guys feel the same. I really feel the bots became waaaay too tip-toey

  • marmo7ade@lemmy.world
    link
    fedilink
    arrow-up
    30
    arrow-down
    65
    ·
    1 year ago

    It’s not confusing at all. ChatGPT has been configured to operate within specific political bounds. Like the political discourse of the people who made it - the facts don’t matter.

    • TheKingBee@lemmy.world
      link
      fedilink
      arrow-up
      35
      arrow-down
      3
      ·
      1 year ago

      Or it’s been configured to operate within these bounds because it is far far better for them to have a screenshot of it refusing to be racist, even in a situation that’s clearly not, than it is for it to go even slightly racist.

      • Iceblade@lemmy.world
        link
        fedilink
        arrow-up
        6
        arrow-down
        2
        ·
        edit-2
        1 year ago

        Yes, precisely. They’ve gone so overboard with trying to avoid potential issues that they’ve severely handicapped their AI in other ways.

        I had quite a fun time exploring exactly which things chatGPT has been forcefully biased on by entering a template prompt over and over, just switching out a single word for ethnicity/sex/religion/animal etc. and comparing the responses. This made it incredibly obvious when the AI was responding differently.

        It’s a lot of fun, except for the part where companies are now starting to use these AIs in practical applications.

        • HardlightCereal@lemmy.world
          link
          fedilink
          arrow-up
          2
          arrow-down
          2
          ·
          1 year ago

          So you said the agenda of these people putting in the racism filters is one where facts don’t matter. Are you asserting that antiracism is linked with misinformation?

          • Iceblade@lemmy.world
            link
            fedilink
            arrow-up
            1
            ·
            1 year ago

            Kindly don’t claim that I said or asserted things that I didn’t. I would consider that to be rather rude.

              • Iceblade@lemmy.world
                link
                fedilink
                arrow-up
                1
                ·
                1 year ago

                So you said the agenda of these people putting in the racism filters is one where facts don’t matter.

                This quote from your previous comment is a statement, not a question, just like the one you now posted, false. You seem to have an unfortunate tendency to make claims that are incorrect. My condolences.

                • HardlightCereal@lemmy.world
                  link
                  fedilink
                  arrow-up
                  1
                  arrow-down
                  1
                  ·
                  1 year ago

                  Oh, sorry, I thought you were asking me not to make claims about what you asserted, since that made a lick of sense. Because the alternative is that you’re bald-facedly lying.

      • Hellsadvocate@kbin.social
        link
        fedilink
        arrow-up
        13
        arrow-down
        2
        ·
        1 year ago

        Probably moral guidelines that are left leaning. I’ve found that chatGPT 4 has very flexible morals whereas Claude+ does not. And Claude+ seems more likely to be a consumer facing AI compared to Bing which hardlines even the smallest nuance. While I disagree with OP I do think Bing is overly proactive in shutting down conversations and doesn’t understand nuance or context.

            • feedum_sneedson@lemmy.world
              link
              fedilink
              arrow-up
              1
              ·
              edit-2
              1 year ago

              I’m not sure. I’m not even sure what genuine social progress would look like anymore. I’m fairly certain it’s linked to material needs being met, rather than culture war bullshit (from either side of the aisle).

              • HardlightCereal@lemmy.world
                link
                fedilink
                arrow-up
                0
                ·
                1 year ago

                Social progress looks like a world where law enforcement applies the law equally to everyone, engages in restorative justice instead of punitive, where everyone complete freedom over their own body, mind, and relationships so long as it does not violate the rights of others, where immigration borders are a thing of the past, where disabilities are reasonably accommodated, where hate based on identity is gone, where slavery, human trafficking, and wage slavery are abolished, etc etc

    • Spyder@kbin.social
      link
      fedilink
      arrow-up
      16
      arrow-down
      1
      ·
      1 year ago

      @marmo7ade

      There are at least 2 far more likely causes for this than politics: source bias and PR considerations.

      Getting better and more accurate responses when talking about Europe or other English speaking countries while asking in English should be expected. When training any LLM model that’s supposed to work with English, you train it on English sources. English sources have a lot more works talking about European countries than African countries. Since there’s more sources talking about Europe, it generates better responses to prompts involving Europe.

      The most likely explanation though over politics is that companies want to make money. If ChatGPT or any other AI says a bunch of racist stuff it creates PR problems, and PR problems can cause investors to bail. Since LLMs don’t really understand what they’re saying, the developers can’t take a very nuanced approach to it and we’re left with blunt bans. If people hadn’t tried so hard to get it to say outrageous things, there would likely be less stringent restrictions.

      @Razgriz @breadsmasher

      • Coliseum7428@kbin.social
        link
        fedilink
        arrow-up
        4
        arrow-down
        1
        ·
        1 year ago

        If people hadn’t tried so hard to get it to say outrageous things, there would likely be less stringent restrictions.

        The people who cause this mischief are the ones ruining free speech.