• The GLIT method expands the selection range of the activation function.
• A tolerance solution theorem based on the neural network system is given and proved.
• The input falls into the specified interval in the sense of probability.
• The GLIT method is better suited to high-dimensional data.

Weight initialization of neural networks has an important influence on the learning process, and the selection of initial weights is closely related to the activation interval of the activation function. An improved and extended weight initialization method for neural networks with asymmetric activation functions, called GLIT (generalized LIT), is proposed as an extension of the linear interval tolerance (LIT) method; it is better suited to higher-dimensional inputs. The purpose is to expand the selection range of the activation function so that the input falls in the unsaturated region, thereby improving the performance of the network. A tolerance solution theorem based on the neural network system is then given and proved, and an algorithm for determining the initial weight interval is presented. The validity of the theorem and algorithm is verified by numerical experiments. Under the GLIT method, the input falls into any preset interval in the sense of probability. In this sense, the GLIT method also provides a theoretical basis for the further study of neural networks.
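The core idea of interval-based initialization can be illustrated with a minimal sketch. The function below is a hypothetical LIT-style rule, not the paper's exact GLIT algorithm: assuming inputs are bounded by `x_max` and the unsaturated region of the activation is taken to be `[-active_limit, active_limit]`, it bounds each weight so that the worst-case pre-activation stays inside that interval.

```python
import numpy as np

def lit_style_init(n_in, n_out, x_max=1.0, active_limit=2.0, rng=None):
    """Sample initial weights so that pre-activations stay inside the
    unsaturated interval [-active_limit, active_limit].

    Illustrative LIT-style rule; the parameter names and the specific
    bound are assumptions, not the paper's GLIT formulation.
    """
    rng = np.random.default_rng() if rng is None else rng
    # If |x_i| <= x_max for all i, then
    # |sum_i w_i * x_i| <= n_in * w_bound * x_max,
    # so choosing w_bound as below keeps every pre-activation
    # within the active interval in the worst case.
    w_bound = active_limit / (n_in * x_max)
    return rng.uniform(-w_bound, w_bound, size=(n_out, n_in))

# Usage: with inputs in [-1, 1], no pre-activation can exceed the limit.
W = lit_style_init(n_in=100, n_out=10)
x = np.random.uniform(-1.0, 1.0, size=100)
z = W @ x
print(np.all(np.abs(z) <= 2.0))  # worst-case bound holds by construction
```

Note that such a worst-case bound becomes very conservative as the input dimension grows, which is one motivation the abstract gives for a probabilistic, higher-dimensional treatment.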