The rapid emergence of generative AI models in the media sector demands a critical examination of the narratives these models produce, particularly in relation to sensitive topics such as politics, racism, immigration, public health, gender and violence. The ease with which generative AI can produce narratives on such topics raises concerns about potential harms, including the amplification of biases and the spread of misinformation. Our study juxtaposes content generated by a state-of-the-art generative AI model, ChatGPT-4, with actual articles from leading UK media outlets on the topic of immigration. Our case study focusses on the representation of Eastern European Roma migrants in the context of the 2016 UK Referendum on EU membership. Through a comparative critical discourse analysis, we uncover patterns of representation, inherent biases and potential discrepancies between AI-generated narratives and mainstream media discourse across different political orientations. Preliminary findings suggest that ChatGPT-4 exhibits a remarkable degree of objectivity in its reporting, demonstrates heightened racial awareness in the content it produces and consistently prioritises factual accuracy over sensationalism. These features set it apart from the right-wing media articles in our sample. This is further evidenced by the fact that, in most instances, ChatGPT-4 refrains from generating text, or does so only after considerable adjustment, when prompted with headlines it deems inflammatory. While these features can be attributed to the model’s diverse training data and architecture, the findings invite further examination to determine the full scope of ChatGPT-4’s capabilities and its potential shortcomings in representing the full spectrum of social and political perspectives prevalent in society.