It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.
Wow. Where are all the news stories about THIS?
Once you start learning how these models work, the first thing you realize is that hallucinations are fundamental to the technology. Of course they’re unfixable: the model predicts plausible text, not true text. That’s literally how it works.
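A minimal sketch of why that is: a language model samples the next token from a probability distribution shaped by how fluent a continuation is, not by whether it’s true. The toy distribution below is invented for illustration (real models learn these weights from data), but the mechanism is the same: a wrong-but-plausible answer holds probability mass, so sampling will sometimes emit it.

```python
import random

# Toy next-token distribution: weights reflect plausibility, not truth.
# "Sydney" is a fluent, common continuation even though it's wrong --
# the sampler has no truth check, only probabilities.
NEXT_TOKEN = {
    ("The", "capital", "of", "Australia", "is"): {
        "Canberra": 0.60,   # correct
        "Sydney": 0.35,     # plausible but false
        "Melbourne": 0.05,  # also false
    }
}

def sample_next(context, rng):
    """Sample one continuation token according to the learned weights."""
    dist = NEXT_TOKEN[tuple(context)]
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
context = ["The", "capital", "of", "Australia", "is"]
samples = [sample_next(context, rng) for _ in range(1000)]

# Wrong answers show up at roughly their probability mass (~40% here).
wrong = sum(1 for t in samples if t != "Canberra")
print(wrong)
```

Lowering the temperature or tweaking the decoding strategy changes how often the wrong token comes out, but as long as it has probability mass, it can’t be ruled out.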
They’re broken clocks that happen to be right more than twice a day, but broken nonetheless.
Exactly like humans.
It’s an inherent issue with deep learning. Awareness of this among people who are regularly using these tools is very low, which is troubling.
https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained
That article explains the issues clearly. Thanks for sharing.
I think it should be shared more broadly.
You’re reading one right now?