@arstechnica the only work that's needed is shutting off the server farms and taking a big ole' electromagnet to whatever backups they have of the code. Or maybe just dropping all of the drives and backups in salt water.
@arstechnica Waiting for the moment (any day now) when, after illegally scraping the web to train their LLMs, these Silicon Valley lunatics sue into oblivion the poor trolls/kids who joked 10 years ago on Reddit. ...While random Joe who actually ate the glued-on pizza (ok, Joe's not very bright) will lose in court against Google because, you know, it's not like we promised accurate answers or anything.
@arstechnica
This isn't going away with cleaner data. The problem is the models' inherent inability to tell jokes and fiction apart from fact. Again, these models are essentially predictive text. LLMs were never meant to return factual information and are not programmed to verify their answers.
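To make the "predictive text" point concrete, here's a minimal sketch (assuming the Hugging Face transformers library, with GPT-2 and the pizza prompt purely as stand-ins): the loop only ever picks the statistically most likely next token, and nothing anywhere checks whether the continuation is true.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "To keep cheese from sliding off pizza, you should"
ids = tok(text, return_tensors="pt").input_ids

for _ in range(20):
    logits = model(ids).logits[:, -1, :]                   # scores for the next token only
    next_id = torch.argmax(logits, dim=-1, keepdim=True)   # take the most likely continuation
    ids = torch.cat([ids, next_id], dim=-1)                 # append and repeat; no fact-check anywhere

print(tok.decode(ids[0]))
```

If the most likely continuation in the training data happens to be a joke, the joke is what you get back.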
@arstechnica @KyleOrl I think what's most interesting about the weird results is how succinctly they demonstrate that Google is sometimes just straight up republishing unique, original content without consent.
@arstechnica it doesn’t need work; it needs a fundamental rethink of whether the technology makes sense outside of specific research or narrow use cases.
It should never have made it out of research labs or opt-in curiosities for technologists.
None of these details are interesting, and they're barely even worth reporting on.
This is a stupid, stupid bubble, and saying they need to work on parts of it is like saying we’re close and just need refinement, which is concretely untrue.