ThefuzzyFurryComrade@pawb.social to Fuck AI@lemmy.world · 5 days ago
On AI Reliability (pawb.social) · 11 comments
𝕸𝖔𝖘𝖘@infosec.pub · 5 days ago
Unless something improved, they’re wrong more than 60% of the time, but at least they’re confident.

henfredemars@infosec.pub · 5 days ago
This is an excellent exploit of the human mind. AI being convincing and being correct are two very different ideals.

davidgro@lemmy.world · 5 days ago
And they are very specifically optimized to be convincing.

jsomae@lemmy.ml · 5 days ago
This is why LLMs should only be employed in cases where a 60% error rate is acceptable. In other words, almost none of the places where people are currently being hyped to use them.

friend_of_satan@lemmy.world · 5 days ago
Haha, yeah, I was going to say 40% is way more impressive than the results I get.