Automatically extracting food entities from cooking recipes has recently gained significant attention as a means of facilitating data- and AI-driven solutions for healthy and sustainable diets. The state-of-the-art approaches in the literature involve a high overhead, as their supervised functionality requires a large number of labelled instances. To facilitate food entity extraction applications, we explore the use of Large Language Models (LLMs) under zero-shot settings, where no labelled instances are required. Instead, we propose a generic methodology that focuses exclusively on prompt and response engineering. We apply it to small LLMs with just 7B parameters in order to keep inference time-efficient while minimizing the required resources. Our experimental analysis yields promising results, but leaves room for significant improvement.
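To make the zero-shot setting concrete, the sketch below illustrates the general idea of prompt and response engineering with a small instruction-tuned LLM: a single prompt asks the model to list the food entities in a recipe, and the raw response is parsed into a flat list. The model name, prompt wording, and parsing rule are illustrative assumptions, not the exact configuration used in this work.

```python
# Minimal zero-shot sketch (assumed setup, not the paper's exact prompts):
# ask a small 7B instruction-tuned model for food entities, then parse the reply.
from transformers import pipeline

generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

recipe = "Chop the onions and garlic, then simmer them with tomatoes and basil."

prompt = (
    "Extract every food entity mentioned in the following recipe. "
    "Answer with a comma-separated list and nothing else.\n\n"
    f"Recipe: {recipe}\nFood entities:"
)

output = generator(prompt, max_new_tokens=64, do_sample=False)[0]["generated_text"]

# Response engineering: keep only the generated continuation and split on commas.
answer = output[len(prompt):]
food_entities = [item.strip() for item in answer.split(",") if item.strip()]
print(food_entities)  # e.g. ['onions', 'garlic', 'tomatoes', 'basil']
```

No labelled instances are involved: the only supervision signal is the instruction in the prompt, which is why the approach transfers to new recipe collections without retraining.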