Many currently available deep neural network (DNN) accelerators are highly application-specific and focus on supervised learning. In addition, many accelerators have rigid architectures and algorithms that prevent them from adapting to dynamic environments. In this work, we propose a neuromorphic architecture that implements a self-organizing feature map (SOFM) using ferroelectric field-effect transistors (FeFETs) for in-memory error computation. The architecture takes inspiration from biological networks and can grow new neurons to adapt to the application. Furthermore, it can modulate the distances between neurons, giving its topography greater fluidity. We demonstrate the network's ability to adapt to various datasets and even to exhibit lifelong learning and self-repair. We further demonstrate the architecture's efficiency in terms of both power and speed, as well as its robustness to device variability.
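For readers unfamiliar with SOFMs, the sketch below illustrates the conventional Kohonen-style update that such a map performs: find the best-matching unit for an input, then pull that neuron and its grid neighbors toward the input. This is a generic software illustration of the SOFM concept only, not the paper's FeFET-based in-memory architecture; the grid size, learning rate, and neighborhood width are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 8, 8, 3            # 8x8 map of 3-dimensional weight vectors (assumed sizes)
weights = rng.random((grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

def train_step(x, lr=0.1, sigma=2.0):
    """One SOFM update: find the best-matching unit, pull neighbors toward input x."""
    # Error computation: distance between the input and every neuron's weight vector
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Neighborhood function: Gaussian over grid distance to the best-matching unit
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
    # Weight update pulls each neuron toward x, scaled by its neighborhood value
    weights[...] += lr * h[..., None] * (x - weights)

for x in rng.random((1000, dim)):        # train on random 3-D inputs as a toy example
    train_step(x)
```

In the proposed architecture, the distance (error) computation in this loop is the part performed in-memory by FeFETs, and the map additionally grows neurons and modulates inter-neuron distances rather than using a fixed grid.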