• We propose two matrix factorization methods, DSMF and ISMF, with the l2,1 norm, where the former directly minimizes the squared F-norm loss function while the latter indirectly optimizes an upper bound of the F-norm loss function.
• We theoretically prove the convergence property of DSMF and discuss the convergence condition of ISMF.
• Experiments on the simulation and benchmark datasets show that our methods achieve performance comparable to that of deep learning-based matrix completion methods.

Matrix factorization is a popular matrix completion method; however, it is difficult to determine the ranks of the factor matrices. We propose two new sparse matrix factorization methods with the l2,1 norm that explicitly force the row sparseness of the factor matrices, where the rank of the factor matrices is adaptively controlled by the regularization coefficient. We further theoretically prove the convergence properties of our algorithms. The experimental results on the simulation and benchmark datasets show that our methods achieve superior performance to their counterparts. Moreover, our proposed methods attain performance comparable to that of deep learning-based matrix completion methods.
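To make the role of the l2,1 regularizer concrete, the following is a minimal sketch of one common form of such an objective; the symbols X, Omega, U, V, k, and lambda are illustrative assumptions introduced here, and the exact DSMF and ISMF formulations may differ.

```latex
% A hedged sketch of a generic l_{2,1}-regularized factorization objective
% for matrix completion (not necessarily the exact DSMF/ISMF objective).
\begin{equation}
  \min_{U \in \mathbb{R}^{k \times n},\; V \in \mathbb{R}^{k \times m}}
    \bigl\| P_{\Omega}\!\bigl(X - U^{\top} V\bigr) \bigr\|_F^2
    \;+\; \lambda \bigl( \|U\|_{2,1} + \|V\|_{2,1} \bigr),
  \qquad
  \|U\|_{2,1} = \sum_{i=1}^{k} \|u_{i,:}\|_2 ,
\end{equation}
```

where P_Omega keeps only the observed entries of X and u_{i,:} denotes the i-th row of U. Because the i-th rank-one term of U^T V vanishes whenever the i-th row of U or V is driven to zero, increasing lambda zeroes out whole rows and thereby shrinks the effective rank without fixing k in advance, which is the adaptive rank control described above.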