Multi-agent reinforcement learning holds great promise for intelligent systems across diverse domains. However, it also faces formidable challenges, including effective credit assignment to individual agents, real-time collaboration among heterogeneous agents, and the design of an appropriate reward function to guide agent behavior. To address these issues, we propose the Graph Attention Counterfactual Multi-Agent Actor-Critic (GACMAC) algorithm, which comprises three key components. First, it employs a multi-agent actor-critic framework with a counterfactual baseline to assess the contribution of each agent's individual action. Second, it integrates a graph attention network to enhance real-time collaboration, enabling heterogeneous agents to share information effectively while performing tasks. Third, it incorporates prior human knowledge through potential-based reward shaping, improving the convergence speed and stability of the algorithm. We evaluated GACMAC on the StarCraft Multi-Agent Challenge (SMAC), a widely used benchmark for multi-agent algorithms, where it achieved a win rate of over 95%, comparable to current state-of-the-art multi-agent controllers.
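For context, the counterfactual baseline and the shaping term referred to above are commonly written in the following standard forms (a COMA-style counterfactual advantage and potential-based reward shaping); the notation here, with joint action \(\mathbf{u}\), per-agent policy \(\pi^{a}\) conditioned on the agent's action-observation history \(\tau^{a}\), and potential function \(\Phi\), is an assumed sketch, and GACMAC's exact formulation may differ:

\[
A^{a}(s,\mathbf{u}) = Q(s,\mathbf{u}) - \sum_{u'^{a}} \pi^{a}\bigl(u'^{a} \mid \tau^{a}\bigr)\, Q\bigl(s,(\mathbf{u}^{-a},u'^{a})\bigr),
\]
\[
F(s,s') = \gamma\,\Phi(s') - \Phi(s), \qquad r'(s,\mathbf{u},s') = r(s,\mathbf{u},s') + F(s,s'),
\]

where the first equation marginalizes out agent \(a\)'s own action to isolate its individual contribution to the joint value, and the second adds a shaping bonus that encodes prior knowledge in \(\Phi\) without altering the optimal policy.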