arstechnica ,
@arstechnica@mastodon.social avatar

Researchers describe how to tell if ChatGPT is confabulating

Finding out whether the AI is uncertain about facts or phrasing is the key.

https://arstechnica.com/ai/2024/06/researchers-describe-how-to-tell-if-chatgpt-is-confabulating/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social

greenpepper22 ,

@arstechnica so the best way to train LLMs is with highly curated data, not just random web scraping? Who would've guessed, except anyone with a brain.

eric ,
@eric@mammut.ericmitch.com avatar

@arstechnica so like when it’s trained on Wikipedia?

nazokiyoubinbou ,
@nazokiyoubinbou@mastodon.social avatar

@arstechnica The idea is sound enough. It would be nice if there were a more automated method for applying this to local LLMs, or to anything not owned by the large corporations that take your information. For local LLMs and the like, one would just have to do this manually, I guess.

Really, the basic idea is so simple that most of us were already doing it informally. But the way it's described here, you realize it could probably be automated, along the lines of the sketch below.
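
For concreteness, here is a rough sketch of what that automation might look like against a local model. This is not the researchers' implementation: the `sample_answer` callable is a placeholder for whatever local inference call you use, and grouping answers by normalized text is only a crude stand-in for the semantic clustering (grouping answers that mean the same thing despite different wording) that the paper actually relies on.

```python
import math
import random
from collections import Counter
from typing import Callable


def confabulation_score(sample_answer: Callable[[str], str],
                        prompt: str,
                        n_samples: int = 10) -> float:
    """Ask the same question several times and measure how much the answers
    disagree. Zero means every sample agreed; higher values mean the model
    gives a different answer each time, a sign it may be confabulating."""
    answers = [sample_answer(prompt) for _ in range(n_samples)]
    # Crude stand-in for semantic clustering: group answers by normalized text.
    # The researchers instead cluster answers that entail each other, so that
    # differences in phrasing alone do not count as disagreement.
    groups = Counter(a.strip().lower() for a in answers)
    total = sum(groups.values())
    return -sum((c / total) * math.log(c / total) for c in groups.values())


if __name__ == "__main__":
    # Dummy sampler for demonstration only; swap in a call to your local
    # model, run with a nonzero temperature so the answers can vary.
    dummy = lambda prompt: random.choice(["1889", "1889", "1889", "1887"])
    score = confabulation_score(dummy, "When was the Eiffel Tower completed?")
    print(f"disagreement entropy: {score:.3f}")
```

The design choice is just repeated sampling plus an entropy measure over distinct answers: a model that has the fact stored tends to converge on one answer, while a confabulating model scatters across several, regardless of how confidently each individual reply is phrased.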

donlamb_1 ,
@donlamb_1@mastodon.online avatar

@arstechnica why are we creating tools which lie to us and provide inaccurate info? I'm supposed to become Sherlock Holmes now?

grrrr_shark ,
@grrrr_shark@supervolcano.angryshark.eu avatar

@arstechnica AI cannot be certain or uncertain.

It does not KNOW anything.

It's JUST PREDICTIVE TEXT, FFS.
