• MrAlternateTape@lemm.ee · 1 day ago

      Well, why did it not do it right the first time, then? If the double-check gives a different result, then which is the right result? If I can ask the same question twice and get two different answers, how do I or the machine know which is the right answer? And if the machine knows, then why would it need to double-check at all? A machine can do it right the first time if it knows how, right?

      • FaceDeer@fedia.io · 1 day ago

        You said:

        As long as AI does not get it 100% right every time, it is not touching my house. And yes, a professional doesn’t reach that rate either, but at least they know and double-check themselves and know how to fix things.

        Well, why didn’t the human professional do it right the first time, then? If it’s okay for a human professional to make mistakes because they can double-check and fix them, why is it not okay for machines to do likewise?

        • MrAlternateTape@lemm.ee · 24 hours ago

          Because a machine is expected to do it right the first time. It’s supposed to do the exact same thing every time with the exact same input parameters. If you give it the exact same input every time and get a different result every time, it is not reliable enough to function as automation.

          Humans are just that. Humans. They make mistakes sometimes. The reason humans can keep doing the work is that there is no better alternative. Machines can’t do it, so who else is gonna do it? Either humans build your house or nobody does. There is little choice there.

          So if a machine is to take over that job, it had better do it right, reliably, and more cheaply. Because humans can already do it right and reliably. And there’s little money saved if a human still needs to check all the work.

          • FaceDeer@fedia.io · 21 hours ago

            Because a machine is expected to do it right the first time.

            No, it’s not. And it doesn’t have to be, because as I pointed out it can check its own work.

            You’ve got a mistaken impression of how AI works, and of how machines in general work. They can make mistakes, and they can recognize and correct those mistakes. I’m a programmer; I have plenty of first-hand experience. I’ve written code that does this myself.
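
            Stripped down, the pattern is just generate, check, retry. Here’s a toy Python sketch of the idea (`flaky_generate` stands in for an actual model call; none of these names are a real API):

            ```python
            import random

            def flaky_generate(prompt: str) -> str:
                # Stand-in for the model: right about two thirds of the time.
                return random.choice(["42", "forty-two", "42"])

            def validate(answer: str) -> bool:
                # The double-check: any test you can express in ordinary code.
                return answer.isdigit()

            def answer_with_checks(prompt: str, max_attempts: int = 5) -> str:
                for _ in range(max_attempts):
                    candidate = flaky_generate(prompt)
                    if validate(candidate):
                        return candidate  # a failed attempt never reaches the caller
                raise RuntimeError("no valid answer within the attempt budget")

            print(answer_with_checks("What is 6 * 7?"))
            ```

            The checker doesn’t need to be smart, it just needs to be reliable, and ordinary deterministic code is exactly that.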

            So if a machine is to take over that job, it better do it right and reliable and cheaper.

            Yes, that’s the plan.

      • FaceDeer@fedia.io · 2 days ago

        The term “artificial intelligence” has been in use since the 1950s and it encompasses a wide range of fields in computer science. Machine learning is most definitely included under that umbrella.

        Why do you think an AI can’t double-check things and fix them when it notices problems? It’s a fairly straightforward process.

          • barsoap@lemm.ee · 1 day ago

            What are you trying to argue, that humans aren’t Turing-complete? That would be an insane self-own. That we can decide the undecidable? That would prove you don’t know what you’re talking about; it’s called undecidable for a reason. Deciding an undecidable problem makes as much sense as a barber who shaves everyone who doesn’t shave themselves.

            Aside from that, why would you assume that checking results would, in general, involve solving the halting problem?

            • dustyData@lemmy.world · 19 hours ago

              It has nothing to do with whether humans are Turing-complete or not. No Turing machine is capable of solving an undecidable problem, but humans can solve undecidables. Machines cannot solve such problems the way a human would. So, no, humans are not machines.

              This by definition limits the autonomy a machine can achieve. A human can predict when a task will cause a logic halt and prepare or adapt accordingly; a machine can’t, unless a programmer intentionally limits it so it stops being Turing-complete and accounts for the undecidables beforehand (thus with the help of the human). This is why machines suck at unpredictable or ambiguous tasks that humans fulfill effortlessly on the daily.

              This is why a machine that adapts to the real world is so hard to make. This is why autonomous cars can only drive in pristine weather, on detailed, premapped, high-maintenance roads, with a vast array of sensors. This is why robot factories are extremely controlled and regulated environments. This is why you have to rescue your Roomba regularly. Operating on the biggest undecidable there is (i.e. the future parameters of operation) is the biggest yet-unsolved technological problem, next to sensor integration for world parametrization and modeling. Machine learning is one step down a road that is still several thousand miles long.

              • barsoap@lemm.ee · 18 hours ago (edited)

                But humans can solve undecidables.

                No, we can’t. Or, more precisely: there is no version of your assertion that would be compatible with cause and effect, with physics as we understand it.

                Don’t blame me, I didn’t do it. The universe just is that way.

                • dustyData@lemmy.world · 18 hours ago

                  Yet we live in a world where millions of humans assert their will over undecidables every day. Because we can make irrational decisions, logic be damned. Explain that one.

                  • barsoap@lemm.ee · 17 hours ago

                    That’s not deciding anything in the information-theoretical sense. We rely a lot on approximations and heuristics when it comes to day-to-day functioning.

                    You can’t decide the halting problem by saying “I’ll have a glance at it and go with whatever I think after thinking about it for half a second”. That’s not deciding the problem, that’s giving up on it, and computers are perfectly capable of doing that too.

          • FaceDeer@fedia.io · 1 day ago

            The halting problem is an abstract mathematical issue; in actual real-world scenarios it’s trivial to handle cases where you don’t know how long a process will run. Just add a check that watches for the process running too long and breaks into some kind of handler when that happens.
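
            For illustration, here’s a toy Python sketch of that timeout-and-handler idea (the never-halting worker is made up for the example):

            ```python
            import multiprocessing

            def maybe_never_halts() -> None:
                # Stand-in for work whose runtime you can't predict in advance.
                while True:
                    pass

            if __name__ == "__main__":
                worker = multiprocessing.Process(target=maybe_never_halts)
                worker.start()
                worker.join(timeout=2.0)  # the check: wait at most 2 seconds
                if worker.is_alive():
                    worker.terminate()    # the handler: stop it and do something else
                    print("Ran too long; giving up and handling it another way.")
            ```

            Nothing here decides whether the worker halts; it just enforces a time budget, which is all a real system ever needs.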

            I’m a professional programmer; I deal with this kind of thing all the time. I’ve literally written applications using LLMs that do this.