    Thorry84,

    This highlights why names matter.

    In computer science, the term AI has long been used to describe various algorithms and techniques for processing or generating data, or for solving some predefined problem space. Nobody in CS would mistake those things for anything actually smart; even the dumbest AI algorithm is still an AI. For a long time the general public treated it mostly like that, calling NPCs in games AI, even though some of those are just a handful of if statements. Only in the movies was AI used to mean something more than an algorithm.
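    The “handful of if statements” point can be made concrete. A minimal sketch of what passes for a game NPC’s “AI” (all names and thresholds here are illustrative, not taken from any real game):

    ```python
    # A hypothetical guard NPC whose entire "AI" is a few conditionals.
    def guard_action(distance_to_player: float, health: float) -> str:
        if health < 0.2:
            return "flee"    # retreat when badly hurt
        if distance_to_player < 2.0:
            return "attack"  # player within melee range
        if distance_to_player < 10.0:
            return "chase"   # player spotted, close the distance
        return "patrol"      # default behaviour

    print(guard_action(distance_to_player=5.0, health=0.9))  # chase
    ```

    Trivial as it is, in everyday usage this would still be called the game’s AI.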

    Intelligence like that of humans, and arguably some animals, is called AGI, or artificial general intelligence. Nobody in CS seriously believes we have anything close to that, or that the large data models we’ve been using are for sure the path to get there (things like Watson and AlphaGo come to mind). (Unless you’re writing a grant application or marketing blurb, then it for sure is.)

    But the latest generations of generative models have been presented to the public under the name AI, while heavily implying they are AGI. Now the name AI has been ruined and the general public thinks we have Terminators right around the corner. This has led to a feedback loop of companies hyping up the AI, the general public treating it like something it’s not, and companies feeding on that. They’ve spouted off use cases that don’t exist or are even actively unsuitable.

    I’m not sure where this went wrong; we all know science communication is very hard. But I do know the name is a big part of it. It should have been made very explicit that these are generative tools. They create data where none existed before; this new data can contain some actual (previously known) information, or it may not. This makes them suitable for some use cases (and even then only as a starting point), but not for others.

    Of course the fault isn’t purely with the companies pushing their AI products. The people abusing them are just as bad. The judge was totally right in asking how often the lawyer cites cases they haven’t read before. But there’s a gray area in between and I think the companies have a responsibility to not present their product as something it’s not.

    Especially Microsoft, branding their thing Copilot and dressing it up as a virtual assistant. I wonder if they’ll ever be held accountable for that, and whether these new tools will find a place and become actually useful, or the bubble will burst and it will all mostly go away again.

    As someone active in CS, my personal opinion is that these large data models are interesting, but not the way forward. I feel wishful thinking has gotten in the way of progress and I’m sad to see so much money being poured into it. A friend of mine active in physics said they felt the same about string theory, and after talking it over I agree. I hope we don’t waste the next 20 years betting everything on a dead end.
