@arstechnica I couldn't draw humans properly without studying the #nude form; I don't know why anyone thought that restricting an #AI library to safe-for-work images was a good idea.
If you still have a Facebook account, I would suggest that now is an excellent time to delete all the content and upload a ton of random mutant scribbles to screw with their training data.
You want to screw with my data?
My data will screw with you.
@arstechnica It doesn’t need work; it needs a fundamental rethink of whether the technology makes sense outside of specific research or narrow use cases.
It should never have made it out of research labs or opt-in curiosities for technologists.
None of these details are interesting, and they’re almost not worth reporting on.
This is a stupid, stupid bubble, and saying they need to work on parts of it is like saying we’re close and just need refinement, which is concretely untrue.
@arstechnica Sounds like “severe” may be doing a lot of work there.
If you hypothesize an existential risk to all of humanity from pseudo-#AI, that becomes your definition of a “severe” risk. Mere algorithmic enforcement and exacerbation of existing social ills is barely a blip on that scale, so why worry about it?
They’re ALREADY deploying tools with severe risks. We already have #deepfake and search-degradation problems that these companies are not mitigating.
@arstechnica Classic example of trying to shift the Overton window, i.e. to move the unthinkable (dystopian) into the realm of the possible in public discourse. The main value is not the product itself but the profits MS can extract from the shift in the discourse space: priming us all to be more ready to accept the lesser evil.
AI has many uses, but occupying my phone’s storage while doing nothing useful isn’t one of them. My phone has already been stuffed with Meta’s chatbot, and I don’t want another chatbot on my phone that has no real application.