State-space models (SSMs) are a powerful statistical tool for modelling time-varying systems via a latent state. In these models, the latent state is never observed directly; instead, a sequence of data points related to the state is obtained. The linear-Gaussian state-space model is widely used because it allows exact inference when all model parameters are known; however, this is rarely the case, and estimating these parameters is a challenging but essential task for performing inference and prediction. In the linear-Gaussian model, the state dynamics are described by a state transition matrix. This parameter is known to be particularly hard to estimate, since it encodes the between-step relationships of the state elements, which are never observed. In many real-world applications the transition matrix is sparse, since not every state component directly affects every other component, yet most contemporary parameter estimation methods do not exploit this feature. In this work, we take a fully probabilistic approach and propose SpaRJ, a novel simulation method that draws sparse samples from the posterior distribution of the transition matrix of a linear-Gaussian state-space model. We exploit the sparsity of the latent space by uncovering its underlying structure, and the proposed method is the first algorithm to provide a fully Bayesian quantification of the sparsity in the model. SpaRJ belongs to the family of reversible jump Markov chain Monte Carlo methods: it obtains sparsity by exploring a set of models that exhibit differing sparsity patterns in the transition matrix. The algorithm implements a new set of transition kernels specifically tailored to efficiently explore the space of sparse matrices, and we also design new, effective rules to explore transition matrices within the same level of sparsity. The methodology has strong theoretical guarantees and efficiently explores sparse subspaces, unveiling the latent structure of the data-generating process and thereby enhancing interpretability. The excellent performance of SpaRJ is showcased in a synthetic example with a 144-dimensional parameter space and in a numerical example with real data.
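To make the setting concrete, the sketch below simulates a linear-Gaussian state-space model with a sparse transition matrix and shows a toy move between sparsity patterns. It is only an illustration under assumed choices (state dimension, noise covariances, and the single-entry toggle proposal `propose_jump` are hypothetical), not the SpaRJ kernels described in this work.

```python
# Minimal sketch, assuming a standard linear-Gaussian SSM:
#   x_t = A x_{t-1} + q_t,  q_t ~ N(0, Q)
#   y_t = H x_t     + r_t,  r_t ~ N(0, R)
# with a sparse transition matrix A. The jump move below merely toggles one
# entry of a binary sparsity pattern (a "birth" or "death" of a nonzero entry),
# mimicking moves between models of different sparsity levels; it is an assumed
# illustrative proposal, not the paper's tailored transition kernels.
import numpy as np

rng = np.random.default_rng(0)

d = 4                                    # latent state dimension (A is d x d)
A_true = np.diag(0.8 * np.ones(d))       # sparse ground-truth dynamics:
A_true[0, 1] = 0.3                       # diagonal plus one off-diagonal entry
H = np.eye(d)                            # observation matrix
Q, R = 0.1 * np.eye(d), 0.5 * np.eye(d)  # state and observation noise covariances

# Simulate T steps of the model.
T = 200
x = np.zeros((T, d))
y = np.zeros((T, d))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + rng.multivariate_normal(np.zeros(d), Q)
    y[t] = H @ x[t] + rng.multivariate_normal(np.zeros(d), R)

def propose_jump(S):
    """Propose a new sparsity pattern by flipping one entry of S (0 <-> 1)."""
    S_new = S.copy()
    i, j = rng.integers(d), rng.integers(d)
    S_new[i, j] = 1 - S_new[i, j]
    return S_new

S = np.eye(d, dtype=int)                 # start from a diagonal sparsity pattern
print("proposed pattern:\n", propose_jump(S))
```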