As the AI market continues to balloon, experts are warning that its VC-driven rise is eerily similar to that of the dot com bubble.
I just want to make the distinction that AI like this literally is a black box. We (currently) have no way to know why it chose the word it did, for example. You train it, but under the hood you can’t actually read out a logic tree of why each word was chosen. That’s a major pitfall of AI development: it’s very hard to know how the AI arrived at a decision. You might know it’s right, or it’s wrong… but how did the AI decide this?
At a very technical level we understand HOW it makes decisions; we just can’t actively understand every individual decision it makes (it’s simply beyond our ability currently, from what I know).
example: https://theconversation.com/what-is-a-black-box-a-computer-scientist-explains-what-it-means-when-the-inner-workings-of-ais-are-hidden-203888
Of course you can: you can look at every single activation and weight in the network. It’s tremendously hard to predict what the model will do, but once you have an output it’s quite easy to see how it came to be. How could it bloody be otherwise? You calculated all that stuff to get the output; the only thing you have to do is prune off the non-activated pathways. That kind of asymmetry is in the nature of all non-linear systems. A very similar thing applies to double pendulums: once you’ve observed one moving in a certain way, it’s easy to say “oh yes, the initial conditions must have looked like this”.
What’s quite a bit harder to do for the likes of ChatGPT, compared to double pendulums, is to see where they can possibly swing. That’s due to LLMs having a fuckton more degrees of freedom than two.
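To make the “prune off the non-activated pathways” point concrete, here’s a minimal NumPy sketch with toy weights (obviously not a real LLM, just an illustration of the principle): after a forward pass, every intermediate value is inspectable, and the units the ReLU zeroed out can be dropped entirely without changing the output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network with a ReLU hidden layer.
# The weights are random stand-ins; the point is that nothing
# about the computation is hidden once you run it.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))

x = np.array([1.0, -0.5, 2.0])

pre1 = W1 @ x             # pre-activations: fully observable
h = np.maximum(pre1, 0)   # ReLU: some units go to exactly zero
y = W2 @ h                # the output we want to "explain"

# "Pruning the non-activated pathways": units with h == 0 contributed
# nothing to y, so the active subnetwork alone reproduces the output.
active = h > 0
y_pruned = W2[:, active] @ h[active]

assert np.allclose(y, y_pruned)
print(f"{active.sum()} of {h.size} hidden units explain the output")
```

Predicting *in advance* which units will fire for an arbitrary input is the hard direction; tracing them after the fact is just bookkeeping.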
I don’t disagree with anything you said, but wanted to weigh in on the degrees-of-freedom point.
One major thing to consider is that unless we have 24/7 sensor recording with AI out in the real world and a continuous monitoring of sensor/equipment health, we’re not going to have the “real” data that the AI triggered on.
Version and model updates will also likely continue to cause drift unless managed through some sort of central distribution service.
Any large corp will have this organization and review process, or is in the process of figuring it out. Small NFT/crypto bros who jump to AI will not.
IMO the space will either head towards larger AI ensembles that try to understand where an exact rubric is applied vs. more AGI-like human reasoning. Or we’ll have to rethink the nuances of our train/test setup and how humans use language to interact with others vs. understand the world (we all speak the same language as someone else, but there’s still a ton of inefficiency).
You can observe what it does and understand its biases. If you don’t like it, you can change it by training it.
The thing is, a lot of people are not using it for that. They think it is a living, omniscient sci-fi computer that is capable of answering everything, just like they saw in the movies. No one thought that about keyboard auto-suggestions.
And with regard to people who aren’t very knowledgeable on the subject, it is difficult to blame them for thinking so, because that is how it is presented to them in a lot of news reports as well as adverts.
Oh that’s nothing new:
deleted by creator
@Reva “Hey, should we use this statistical model that imitates language to replace my helpdesk personnel?” is an ethical question because bosses don’t listen when you outright tell them that’s a stupid idea.
deleted by creator
There are people who genuinely think there’s actual intelligent thinking behind something like ChatGPT.
Reminds me of my grandmother - a poor, illiterate peasant woman - when she came to live with us in the big city, who got really confused when the same actor appeared in multiple soap operas on TV. She saw the “living truthfully in imaginary circumstances” of good actors (or, let’s be honest, the make-believe of most soap opera actors) and, because of her complete ignorance on the subject, confused acting with real life.
I think there’s a lot of this going on and, hopefully, like with my grandmother, most such people will eventually understand that a well-done, lifelike surface-level impression does not guarantee that what is behind it is a matching reality (people really living that life, in the soap opera’s case, or an actual intelligence, in this one).
The word AI has at least 3 different meanings. People who understand the subject usually just mean machine learning. But there is also AI we see in movies (which is usually a sentient computer) and AI in games (which is just programmed NPCs). I think most people confuse the stuff they see in movies with machine learning.
I think marketing execs are COUNTING on that misinterpretation to make the product seem like more than it is.
Yeah, it’s kinda scary to see how much people don’t understand modern technology. If some non-expert tells them AI can’t be trusted, they just believe it. I’ve noticed the same thing with cryptocurrencies. A non-expert says it’s a scam and people believe it even though it’s clear they don’t understand anything about that technology or what it’s made for.