Existing multi-view learning methods based on the information bottleneck principle exhibit impressive generalization by capturing inter-view consistency and complementarity. They leverage cross-view joint information (consistency) and view-specific information (complementarity) while discarding redundant information. By fusing features from multiple views, multi-view learning helps medical image processing produce more reliable predictions. However, multiple views of medical images often have low consistency and high complementarity, owing to modal differences in imaging or differing projection depths, which makes it difficult for existing methods to balance the two. To mitigate this issue, we improve the information bottleneck (IB) loss function with a balanced regularization term, termed the IBB loss, which reassembles the constraints of multi-view consistency and complementarity. In particular, the balanced regularization term, governed by a single trade-off factor, minimizes the mutual information between the consistency and complementarity representations to strike a balance. In addition, we devise a triplet multi-view network, named TM net, to learn consistent and complementary features from multi-view medical images. Through evaluations on two datasets, we demonstrate the superiority of our method over several competing approaches. Extensive experiments further confirm that our IBB loss significantly improves multi-view learning on medical images.
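For intuition, one plausible instantiation of the objective described above can be sketched as follows; the symbols here are illustrative assumptions (the paper's exact formulation may differ): Z is the fused representation, X_1, ..., X_V the V views, Y the label, Z_con and Z_comp the consistency and complementarity representations, and lambda the trade-off factor.

\[
\mathcal{L}_{\mathrm{IBB}}
= -\,I(Z; Y)
\;+\; \beta \sum_{v=1}^{V} I(Z; X_v)
\;+\; \lambda\, I(Z_{\mathrm{con}}; Z_{\mathrm{comp}})
\]

The first two terms are the standard IB trade-off (retain label-relevant information, compress view-specific redundancy), while the hypothetical third term corresponds to the balanced regularizer: driving the mutual information between the consistency and complementarity representations toward zero keeps the two kinds of features disentangled.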