Knowledge Base Question Answering (KBQA) integrates multiple disciplines, enabling users to retrieve answers from a knowledge base (KB) without specialized query-language skills. However, KBQA systems depend on the information within their respective KBs, which limits their ability to answer questions involving facts not present in the KB. To address this, Geographic KBQA (GeoKBQA) systems have been designed to learn from and answer geographic questions using a specialized Geographic Knowledge Base (GeoKB), which contains facts and information about geographic space and thus enables them to handle complex geographic queries. Nevertheless, current GeoKBQA systems face significant challenges due to their reliance on rule-based Entity Linking models. These challenges are threefold. First, the rule-based Entity Linking approach limits adaptability to datasets beyond those of the original studies. Second, the rule-based structure of Mention Detection impedes accurate interpretation of word semantics, requiring extra processing steps. Third, the absence of Entity Disambiguation hinders resolving typos and interpreting abbreviations in queries. Our study addresses these issues by developing a model that trains BERT on geographic question–mention label datasets. This approach enhances Mention Detection and adds an Entity Disambiguation process, achieving high F1-scores and effectively linking mentions to the GeoKB. The resulting model interprets complex geographic queries with improved accuracy and can be seamlessly integrated into existing GeoKBQA systems, offering a significant performance boost.
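To make the described Entity Disambiguation step concrete, the following is a minimal, hypothetical sketch: a detected mention (possibly a typo or abbreviation) is resolved against a GeoKB entity inventory via an abbreviation lookup plus fuzzy string matching. The entity names, IDs, abbreviation table, and similarity cutoff below are all illustrative assumptions; the actual system trains a BERT-based model rather than relying on `difflib`.

```python
import difflib

# Toy GeoKB entity inventory mapping names to IDs (illustrative only).
GEOKB_ENTITIES = {
    "Mount Everest": "geo:ent_001",
    "Mississippi River": "geo:ent_002",
    "Los Angeles": "geo:ent_003",
}

# Common abbreviations mapped to canonical names (assumed, not from the study).
ABBREVIATIONS = {"LA": "Los Angeles", "Mt. Everest": "Mount Everest"}

def disambiguate(mention: str, cutoff: float = 0.75):
    """Resolve a detected mention to a GeoKB entity ID.

    Abbreviations are expanded via a lookup table; typos are
    tolerated via fuzzy matching against the entity inventory.
    Returns None when no entity is similar enough.
    """
    canonical = ABBREVIATIONS.get(mention, mention)
    matches = difflib.get_close_matches(
        canonical, GEOKB_ENTITIES, n=1, cutoff=cutoff
    )
    return GEOKB_ENTITIES[matches[0]] if matches else None
```

For example, `disambiguate("LA")` resolves the abbreviation to the Los Angeles entity, and `disambiguate("Missisippi River")` survives the missing letter. A learned disambiguation model would replace the string-similarity heuristic with contextual semantics, which is precisely the gap the rule-based pipelines leave open.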