The growing popularity of machine learning (ML), which benefits from high-quality training datasets collected from multiple organizations, raises natural questions about the privacy guarantees that can be provided in such settings. Our work tackles this problem in the context of multi-party secure ML, wherein multiple organizations provide their sensitive datasets to a data user and jointly train a Naive Bayes (NB) model with that data user. We propose PPNB, a privacy-preserving scheme for training NB models, based on Homomorphic Cryptosystem (HC) and Differential Privacy (DP). PPNB achieves a balanced trade-off between efficiency and accuracy in multi-party secure ML, and enables flexible switching among different trade-offs via parameter tuning. Extensive experimental results validate the effectiveness of PPNB.
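To make the differential-privacy ingredient concrete, the sketch below shows one standard way DP can be applied to Naive Bayes training: the per-class and per-feature counts that define an NB model are perturbed with Laplace noise before release. This is a generic illustration, not the PPNB protocol itself; the function names, the single-party setting, and the simple per-statistic noise accounting are all assumptions for exposition (PPNB additionally involves homomorphic encryption across parties, which is omitted here).

```python
import numpy as np

def dp_naive_bayes_counts(X, y, epsilon, rng):
    """Compute Bernoulli Naive Bayes sufficient statistics (class counts and
    per-class feature counts) and perturb each with Laplace noise.
    Adding or removing one record changes each count by at most 1, so
    Laplace(1/epsilon) noise per released statistic is the standard mechanism
    (illustrative accounting only; a full analysis must compose budgets)."""
    classes = np.unique(y)
    n_features = X.shape[1]
    class_counts, feat_counts = {}, {}
    for c in classes:
        Xc = X[y == c]
        class_counts[c] = len(Xc) + rng.laplace(0.0, 1.0 / epsilon)
        feat_counts[c] = Xc.sum(axis=0) + rng.laplace(0.0, 1.0 / epsilon, size=n_features)
    return class_counts, feat_counts

def nb_predict(x, class_counts, feat_counts, alpha=1.0):
    """Classify one binary feature vector using the noisy counts, with
    add-alpha smoothing and clamping to keep probabilities valid."""
    best, best_score = None, -np.inf
    total = sum(max(v, alpha) for v in class_counts.values())
    for c, nc in class_counts.items():
        nc = max(nc, alpha)  # noisy counts can go negative; clamp them
        p = np.clip((feat_counts[c] + alpha) / (nc + 2 * alpha), 1e-6, 1 - 1e-6)
        score = np.log(nc / total) + np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
        if score > best_score:
            best, best_score = c, score
    return best

rng = np.random.default_rng(0)
# Toy separable data: feature 0 indicates class 1, feature 1 indicates class 0.
X = np.array([[1, 0]] * 50 + [[0, 1]] * 50)
y = np.array([1] * 50 + [0] * 50)
cc, fc = dp_naive_bayes_counts(X, y, epsilon=1.0, rng=rng)
print(nb_predict(np.array([1, 0]), cc, fc))
```

With a moderate budget such as epsilon = 1.0, the noise (scale 1) is small relative to counts of 50, so the noisy model still separates this toy data; shrinking epsilon increases the noise and degrades accuracy, which is the efficiency/accuracy/privacy tuning knob the abstract alludes to.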