In this paper, we investigate whether ChatGPT can be used to automatically annotate texts or to generate both artificial texts and their annotations. We prepared three collections of texts annotated with emotions at the level of sentences and/or whole documents. CLARIN-Emo contains opinions written by real people and manually annotated by six linguists. Stockbrief-GPT consists of articles written by humans and annotated by ChatGPT. ChatGPT-Emo is an artificial corpus created and annotated entirely by ChatGPT. We present an analysis of these corpora and the results of Transformer-based methods fine-tuned on these data. The results show that manual annotation can provide better-quality data, especially for building personalized models.