The research from Purdue University, first spotted by news outlet Futurism, was presented earlier this month at the Computer-Human Interaction Conference in Hawaii and looked at 517 programming questions on Stack Overflow that were then fed to ChatGPT.

“Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose,” the new study explained. “Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style.”

Disturbingly, programmers in the study didn’t always catch the mistakes being produced by the AI chatbot.

“However, they also overlooked the misinformation in the ChatGPT answers 39% of the time,” according to the study. “This implies the need to counter misinformation in ChatGPT answers to programming questions and raise awareness of the risks associated with seemingly correct answers.”

  • 14th_cylon@lemm.ee · 5 months ago

    > The 5 orders of magnitude gained from general computers to ASICs is standard knowledge, you learn it in the first year of any comp sci class. You can find it all over, for example.

    so, it is just your wishful thinking. you have no proof that this is going to be true, you just blindly extrapolate from the past… wait, that is how this discussion started… 😂

    > There isn’t a difference. We don’t have some super magical mystical human thing that sets us apart.

    yes, there is, i have already answered that.

    > A way to imagine how it can be possible for a computer to have thoughts and ideas

    just imagine this thing that is at the moment impossible and we have no idea how to do it or whether it will ever be possible.

    and see, once you imagine this impossible thing becoming true, this other impossible thing also becomes true.

    q.e.d.

    how easy, huh 😂

    > I think it’s important not to let what we want to be true interfere with our analysis of what is true.

    if only you would take your own medicine.

    • AIhasUse@lemmy.world · 5 months ago

      The past is where we get all of our information from. To pretend like we can’t use the past to predict the future makes us unable to do anything. We don’t have a time machine to go see exactly how the future plays out.

      It is more common than you realise for there to be predictable trends in computing. Just go look at Moore’s law and how long it has held up (with just minor adjustments). What would be way more surprising is if we were all of a sudden at a massive turning point where we can no longer anticipate what is next. You don’t have to take my word for this. Find anyone with a background in computing to independently verify it. Even ChatGPT could really help you understand this.

      The specialized hardware efficiency gain isn’t even a mystery at all. It is simply the consequence of designing hardware that does a specific task very well. It isn’t nearly as much of a guess as you think it is. To help you picture it, imagine a vehicle that works on land, sea, and sky. It is not such a leap to say that a vehicle made to work for just the land would be much more efficient at being on land. This really isn’t anything that anyone in the computing world disagrees with. It is just your outsider point of view that is making it seem like magic to you. Again, don’t take my word. This is comp science 101 stuff that really isn’t disputed.

      As far as the thought experiment with replacing neurons goes, the technology to do so doesn’t need to exist for the point to hold true. That simply isn’t a logical requirement for thought experiments. This has nothing to do with computing in particular; it is just true of logical arguments. In order to make points, we can use thought experiments. This is something that Einstein was famous for, and not many people question his ability to form solid arguments.

      I understand that you feel passionate about this, and you really want to believe that humans are somehow magical and fundamentally different from machines. It really is understandable. I’ve given plenty of solid arguments that you really haven’t responded to at all. It has never been true that people can’t use thought experiments or past trends to help draw conclusions about the future. It is very telling that these are things you feel you must discard in order to defend your stance. These are both tools that have been reliably used for hundreds and even thousands of years.

      I would really encourage you to get ahold of some logical reasoning material and take a step back to some basics if this is something you are interested in digging into a bit deeper. It is almost never the case that initial hunches are kept after thorough investigation.

      • 14th_cylon@lemm.ee · 5 months ago

        jesus fucking christ, are you using some chatbot to drown me in a wall of text? just stop…

        > We don’t have a time machine to go see exactly how the future plays out.

        if you think you know exactly how the future plays out, you are just insane. i am not reading the rest of it. bye.

        • AIhasUse@lemmy.world · 5 months ago

          You’ve completely misunderstood. I specifically said we don’t have a time machine to see how the future plays out. All we can do is make our best guesses based on the past.

          You’ve had to throw away basic reasoning tools that have been used for ages in order for your stance to remain “safe.” I understand your fear, but honestly, you are better off embracing and understanding instead of putting your head in the sand and saying that we shouldn’t use the past to make predictions of the future.