Joint estimation of grasped object pose and extrinsic contacts is central to robust and dexterous manipulation. In this paper, we introduce MultiSCOPE, a state-estimation algorithm that leverages sequential frictional contact interactions (e.g., pokes) to jointly estimate contact locations and grasped object poses using only proprioception and tactile feedback. Our method reduces object pose uncertainty over a series of actions with two complementary particle filters: one estimating contact locations (CPFGrasp) and one estimating object poses (SCOPE). It accounts for uncertainty in both robot proprioception and force-torque measurements, which is essential for estimating in-hand object poses in the real world. We implement and evaluate our approach on simulated and real-world single-arm and dual-arm robotic systems, and we demonstrate that by bringing two objects into contact several times, the robots can infer contact locations and object poses simultaneously.
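The sequential-update idea underlying such an estimator can be sketched as a bootstrap particle filter. The snippet below is a minimal illustrative toy, not the paper's implementation: it assumes a scalar latent state (e.g., one contact coordinate), Gaussian measurement noise, and systematic resampling, all of which are our own simplifying assumptions.

```python
import math
import random

def particle_filter(measurements, n_particles=2000, meas_noise=0.05, seed=0):
    """Estimate a scalar latent state (e.g., a contact coordinate)
    from noisy sequential measurements with a bootstrap particle filter."""
    rng = random.Random(seed)
    # Initialize particles uniformly over a plausible range.
    particles = [rng.uniform(-1.0, 1.0) for _ in range(n_particles)]
    for z in measurements:
        # Weight each particle by the Gaussian likelihood of the measurement.
        weights = [math.exp(-0.5 * ((z - p) / meas_noise) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Systematic resampling: one random offset, evenly spaced positions.
        u = rng.random()
        positions = [(i + u) / n_particles for i in range(n_particles)]
        cum, c = [], 0.0
        for w in weights:
            c += w
            cum.append(c)
        new_particles, j = [], 0
        for pos in positions:
            while j < n_particles - 1 and cum[j] < pos:
                j += 1
            # Small jitter preserves particle diversity between updates.
            new_particles.append(particles[j] + rng.gauss(0.0, 0.01))
        particles = new_particles
    # Posterior mean as the point estimate.
    return sum(particles) / len(particles)
```

Each new contact measurement concentrates the particle set further, mirroring how repeated pokes progressively reduce pose uncertainty; the paper's filters operate over full SE(3) poses and contact geometry rather than a scalar.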