I can’t help but feel this is happening with all AI. Social media comments on Facebook, Reddit, X, etc. are low effort and flooded with bots.
It was predicted early on that LLM-generated content would eventually enter a feedback loop, where each AI feeds on other AIs’ hallucinations and they all degrade fast.
Do you know how they planned to fix the problem?
I don’t recall hearing any practical solutions so far. These models work best on real data, but they grew so fast that they are now generating dramatically more artificial data than humans are generating real data, so they have hopelessly polluted their own well.
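To make the feedback-loop concern concrete, here is a deliberately crude toy simulation (purely illustrative, not any real training pipeline): a “model” that only estimates a mean and spread from its data, then trains the next generation on its own synthetic output. With small samples, the estimated spread tends to collapse over generations.

```python
import random
import statistics

# Toy sketch of the feedback loop described above (not a real training
# pipeline): each "generation" fits a mean and standard deviation to the
# previous generation's output, then samples new data from that fit.
# The tiny sample size (10 points) deliberately exaggerates the effect.

random.seed(0)

def next_generation(data, n=10):
    """Fit a Gaussian to `data`, then sample n synthetic points from it."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)], sigma

data = [random.gauss(0, 1) for _ in range(10)]  # the original "real" data
spreads = []
for _ in range(200):
    data, sigma = next_generation(data)
    spreads.append(sigma)

# The estimated spread (a crude stand-in for diversity) shrinks as each
# generation trains only on the previous generation's synthetic output.
print(f"spread after 1 generation:    {spreads[0]:.3f}")
print(f"spread after 200 generations: {spreads[-1]:.2e}")
```

The collapse comes from compounding sampling noise: each generation’s spread is a noisy, slightly biased estimate of the previous one’s, and those multiplicative errors drift toward zero. Real model collapse in LLMs is far more complicated, but the mechanism is similar in spirit.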
It’s a very difficult problem, with no obvious solutions that are cheap, easy, or even feasible, so someone is going to need a really, really smart idea to get over that hurdle. Add to that the fact that the types of AI most affected by this problem, the LLMs, are the ones currently most heavily subsidized by venture capital. So not only are they facing growing technical hurdles, they are about to get increasingly expensive to operate at the same time as the seed funding runs out and they have to switch to a revenue-positive business model.
Interesting. Maybe they will have to start proactively surveying large numbers of people instead of relying on free internet social media.
I don’t understand the appeal of AI for most things. The amount of incorrect information it gives is already too high, making it unreliable. The benefit seems to lie in brainstorming ideas or working with fiction.
Nothing is reliable. AI should get us into the habit of double-checking everything.
Very true