Knowledge graphs (KGs) have become popular in recommender systems because of the wealth of side information they provide. Many researchers rely on KGs to address cold start, diversity, and explainability in recommendation. However, existing approaches usually ignore entity descriptions, which supply essential content information about the entities in a KG. In this work, we propose ELECTRA-KG, a contextual language model for KG completion based on ELECTRA (Efficiently Learning an Encoder that Classifies Token Replacements Accurately). We formulate recommendation as a link prediction task over an incomplete KG: the model completes the graph by identifying missing facts between entities in our test data. We evaluate and validate our method with two sets of experiments. First, we compare our model against state-of-the-art KG embedding models on KG completion. Second, we apply it to the tag recommendation task and compare against existing baselines. Our results show that our model outperforms these baselines on tag recommendation.
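To make the link-prediction formulation concrete, the sketch below scores a candidate (head, relation, tail) triple by feeding the entities' textual descriptions through an ELECTRA encoder. This is a minimal illustration, assuming the Hugging Face `transformers` library and the public `google/electra-small-discriminator` checkpoint; the serialization format, `[CLS]` pooling, and the `score_triple` scoring head are illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch: triple plausibility scoring with an ELECTRA encoder over
# entity descriptions. The model name, pooling strategy, and scoring head
# are assumptions for illustration only.
import torch
from transformers import ElectraModel, ElectraTokenizer

tokenizer = ElectraTokenizer.from_pretrained("google/electra-small-discriminator")
encoder = ElectraModel.from_pretrained("google/electra-small-discriminator")
# Linear plausibility head (untrained here; would be fit on observed KG triples).
scorer = torch.nn.Linear(encoder.config.hidden_size, 1)

def score_triple(head_desc: str, relation: str, tail_desc: str) -> float:
    """Serialize a triple's text, encode it, and return a link-plausibility score."""
    # Segment A carries the head description and relation; segment B the tail description.
    inputs = tokenizer(
        f"{head_desc} [SEP] {relation}",
        tail_desc,
        truncation=True,
        max_length=256,
        return_tensors="pt",
    )
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # shape: (1, seq_len, hidden_size)
    cls = hidden[:, 0]            # [CLS] vector as the triple representation
    return scorer(cls).item()     # higher score = more plausible missing fact

# Example: tag recommendation cast as link prediction, ranking a candidate tag
# by the plausibility of the (item, has_tag, tag) link.
print(score_triple("A sci-fi film about dreams within dreams.",
                   "has_tag",
                   "science fiction"))
```

In this framing, recommending tags for an item amounts to scoring the `has_tag` link against every candidate tag entity and returning the top-ranked ones.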