Matrix completion refers to recovering a low-rank matrix from a subset of its possibly noisy entries, and has a variety of important applications because many real-world signals can be modeled by an n₁ × n₂ matrix with rank r ≪ min(n₁, n₂). Most existing techniques for matrix completion assume Gaussian noise and thus are not robust to outliers. In this paper, we devise two algorithms for robust matrix completion based on low-rank matrix factorization and ℓₚ-norm minimization of the fitting error with 0 < p < 2. The first method tackles low-rank matrix factorization with missing data by iteratively solving (n₁ + n₂) linear ℓₚ-regression problems, whereas the second applies the alternating direction method of multipliers (ADMM) in the ℓₚ-space. Each ADMM iteration requires performing a least squares (LS) matrix factorization and evaluating the proximity operator of the p-th power of the ℓₚ-norm. The LS factorization is efficiently solved using linear LS regression, while the proximity operator has a closed-form solution for p = 1 and can be obtained by root finding of a scalar nonlinear equation for other values of p. The two proposed algorithms have comparable recovery capability and a computational complexity of O(K|Ω|r²), where |Ω| is the number of observed entries and K is a dimension-independent constant, typically in the hundreds to thousands. It is demonstrated that they are superior to the singular value thresholding, singular value projection, and alternating projection schemes in terms of computational simplicity, statistical accuracy, and outlier-robustness.
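The scalar proximity operator mentioned above can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the function name and the bisection scheme are my own, and the sketch covers only 1 ≤ p < 2 (for 0 < p < 1 the subproblem is nonconvex, so the stationary point must additionally be compared against y = 0).

```python
import math

def prox_abs_p(x, lam, p, iters=50):
    """Proximity operator of lam * |y|**p, i.e.
    argmin_y 0.5*(y - x)**2 + lam*|y|**p, for a scalar x.

    p == 1 has the closed-form soft-thresholding solution; for
    1 < p < 2 the first-order optimality condition
        y - x + lam * p * sign(y) * |y|**(p - 1) = 0
    is solved numerically (here by simple bisection, standing in
    for the root finding the abstract refers to).
    """
    if p == 1:
        # Soft thresholding: closed form for the l1 case.
        return math.copysign(max(abs(x) - lam, 0.0), x)
    # For 1 < p < 2 the objective is strictly convex; the minimizer
    # shares the sign of x and lies in [0, |x|].
    s = math.copysign(1.0, x)
    a = abs(x)
    g = lambda y: y - a + lam * p * y ** (p - 1)  # increasing on [0, a]
    lo, hi = 0.0, a
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    return s * 0.5 * (lo + hi)
```

In the ADMM scheme described above, this operator would be applied entrywise to the residual on the observed entries Ω; the p = 1 branch is what makes the method cheap in the most common robust setting.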