Imitation learning (IL) is a promising approach to programming dual-arm manipulation by imitating demonstrations from human experts. However, IL for dual-arm manipulation remains challenging because operating two robotic arms to collect demonstrations requires considerable effort. We therefore present a novel IL framework for dual-arm manipulation: learning dual-arm manipulation from demonstrations translated from a human and robotic arm (LfDT). Whereas IL methods typically demand demonstrations of two robotic arms, LfDT collects demonstrations of one human arm and one robotic arm, so a human expert can easily and precisely adjust their arm movements according to the movement of the robotic arm. LfDT then employs a domain-translation network to convert these demonstrations of one human and one robotic arm into demonstrations of two robotic arms, which are used to learn dual-arm manipulation via IL. Experiments demonstrate that LfDT successfully converts the demonstrations and learns dual-arm manipulation in both simulation and the real world.
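The domain-translation step above can be illustrated with a minimal, hypothetical sketch: the goal is to map states recorded from the human arm into the robot arm's state space, so that a (human, robot) demonstration becomes a (robot, robot) demonstration usable for standard IL. The actual LfDT network and its training procedure are not shown here; a least-squares linear map fit on synthetic paired trajectories stands in for the learned translation, and all array shapes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ground-truth" correspondence between the human-arm and
# robot-arm state spaces (unknown in a real setting).
true_map = rng.normal(size=(4, 4))

human_states = rng.normal(size=(100, 4))   # human-arm trajectory states
robot_states = human_states @ true_map.T   # corresponding robot-arm states

# Fit the translation; least squares stands in for the learned
# domain-translation network.
learned_map, *_ = np.linalg.lstsq(human_states, robot_states, rcond=None)

def translate(human_demo):
    """Map human-arm states into the robot arm's state space."""
    return human_demo @ learned_map

# After translation, the (human, robot) demonstration reads as a
# (robot, robot) demonstration for imitation learning.
translated = translate(human_states)
```

In the paper's setting the translation is learned from real demonstration data rather than a synthetic linear correspondence, but the interface is the same: human-arm observations in, robot-arm observations out.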