patrickcorreia,

@arstechnica “Language models are a good source of promising ideas, but they aren’t great at logical reasoning. Sometimes they make mistakes or get confused.”

No, they don’t. Language models don’t make mistakes or get confused, because they aren’t trying to be right in the first place. They have no concept of truth or correctness; they produce plausible-sounding text. This kind of sloppy anthropomorphizing leads to dangerous misuse and wasteful dead ends.
