Many machine learning problems across domains involve data that naturally comprises multiple views. Multi-view learning is a family of techniques that exploits these views jointly. Here we focus on co-training-style multi-view learning in the semi-supervised setting, which leverages both labeled and unlabeled data. In many domains the amount of (unlabeled) data is so large that learning serially on a single machine is infeasible. In this work, we study distributed multi-view learning algorithms based on both the consensus and complementary principles. We also propose an efficient computational design on Hadoop for learning multiple classifiers.
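To make the co-training idea concrete, the sketch below shows the classic loop on a toy problem: two view-specific learners each pseudo-label their most confident unlabeled points and add them to the shared labeled set. This is a minimal single-machine illustration with made-up threshold classifiers and synthetic data; it is not the distributed Hadoop design described in this work.

```python
import random

def fit_threshold(xs, ys):
    # Trivial 1-D classifier: threshold halfway between the class means.
    pos = [x for x, y in zip(xs, ys) if y == 1]
    neg = [x for x, y in zip(xs, ys) if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict(thr, x):
    return 1 if x >= thr else 0

def confidence(thr, x):
    # Distance from the decision boundary as a crude confidence score.
    return abs(x - thr)

def co_train(labeled, unlabeled, rounds=5, k=2):
    # labeled: list of ((view1, view2), y); unlabeled: list of (view1, view2).
    labeled = list(labeled)
    pool = list(unlabeled)
    for _ in range(rounds):
        if not pool:
            break
        # Train one classifier per view on the current labeled set.
        t1 = fit_threshold([v[0] for v, _ in labeled], [y for _, y in labeled])
        t2 = fit_threshold([v[1] for v, _ in labeled], [y for _, y in labeled])
        new = []
        # Each view pseudo-labels its k most confident pool points.
        for thr, view in ((t1, 0), (t2, 1)):
            ranked = sorted(pool, key=lambda v: -confidence(thr, v[view]))
            for v in ranked[:k]:
                new.append((v, predict(thr, v[view])))
        # Move the freshly pseudo-labeled points into the labeled set.
        for v, y in new:
            if v in pool:
                pool.remove(v)
                labeled.append((v, y))
    return labeled

random.seed(0)
# Synthetic two-view data: class 1 clusters near 1.0, class 0 near 0.0 in both views.
labeled = [((0.1, 0.2), 0), ((0.9, 0.8), 1)]
unlabeled = [(random.gauss(0.0, 0.05), random.gauss(0.0, 0.05)) for _ in range(10)] + \
            [(random.gauss(1.0, 0.05), random.gauss(1.0, 0.05)) for _ in range(10)]
grown = co_train(labeled, unlabeled)
print(len(grown))  # the labeled set grows as confident pseudo-labels are added
```

In a distributed setting, the expensive steps (training each view's classifier and scoring the unlabeled pool) can run in parallel, which is the motivation for the Hadoop-based design studied here.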