@arstechnica They already have. It isn't as if they're all going to pull their services down and throw away all the money they've invested. This changes exactly nothing.
@arstechnica Oh, this addresses all of my concerns about building LLMs from stolen material, the dramatic environmental impact, and the existing danger of these models hallucinating information and presenting it as fact.
I'm also sure they'll stick to this when they want to ship something after a competitor makes a big update, and when shareholders demand more profit.
@arstechnica Sounds like “severe” may be doing a lot of work there.
If you hypothesize an existential risk to all of humanity from pseudo-#AI, that becomes your definition of a “severe” risk. Mere algorithmic enforcement and exacerbation of existing social ills is barely a blip on that scale, so why worry about that?
They’re ALREADY deploying tools with severe risks. We already have #deepfake & search degradation problems that are not being mitigated by these companies.
@arstechnica Without transparency and auditing, these promises are largely worthless. Google AI has already started eating my search results with incoherent slop, and when I wanted to give feedback, I had to do it on each line of word-salad text independently.