Neuraltimes uses ChatGPT to synthesize each article from 6 sources: 2 left, 2 center, 2 right, as they state. It's like a beefed-up version of Reddit's AutoTLDR bot, but it's contaminated by sources that may not be trustworthy (regardless of their political bias). And if the AI makes a mistake, you can bet that its developer probably won't proofread or correct it. It's a machine with zero accountability. I wouldn't trust it to tell me the sky is blue.