Just because your system is automated doesn't mean it's free of bias. LLMs are trained on human-generated content, human-generated content has biases, and the model will reflect those biases. There's also GPT's proclivity for being confidently incorrect, like when it fabricated completely bogus court cases and a credulous lawyer cited them in an actual case. I wouldn't want to get my news from a source that may be lying to my face.
But that’s exactly the thing. I don’t get my news from companies that outright lie. With LLMs you don’t really know, so they’re not exactly trustworthy either.