Feature pyramid networks (FPN) are widely used for multi-scale object detection. Although many FPN-based methods have been proposed to improve detection performance, semantic differences remain between cross-scale features. As a result, simple connections cause spatial or channel information loss, while excessive connections introduce extra parameters and inference cost. Moreover, fusing too many features may lead to information decay and feature aliasing. To address these problems, we propose the Multi-scale Attention-based Feature Pyramid Networks (MAFPN), which fully exploit spatial and channel information to generate a better feature representation for each level from multi-scale features. By taking scale, spatial, and channel information into account simultaneously, MAFPN processes multi-scale inputs more comprehensively than most conventional methods. Experimental results show that MAFPN improves the detection performance of both two-stage and one-stage detectors with an acceptable increase in inference cost.
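To make the idea of attention-guided multi-scale fusion concrete, the following is a minimal, generic NumPy sketch — not the paper's actual MAFPN module. It fuses two same-resolution feature maps with squeeze-and-excitation-style channel gating followed by a simple spatial gate; all function names, shapes, and the absence of learned weights are illustrative assumptions.

```python
# Generic illustration of channel + spatial attention for feature fusion.
# NOT the paper's MAFPN: shapes and gating functions are toy assumptions.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W). Global-average-pool to (C,), then gate each channel
    # (squeeze-and-excitation style, without the learned MLP).
    pooled = feat.mean(axis=(1, 2))            # squeeze: (C,)
    gate = sigmoid(pooled)                     # excitation
    return feat * gate[:, None, None]          # reweight channels

def spatial_attention(feat):
    # feat: (C, H, W). Pool across channels to a (1, H, W) map, then gate.
    pooled = feat.mean(axis=0, keepdims=True)  # (1, H, W)
    gate = sigmoid(pooled)
    return feat * gate                         # reweight spatial positions

def fuse(feat_a, feat_b):
    # Sum two same-shape feature maps, then apply channel and spatial
    # attention so the fused map emphasizes informative channels/locations.
    fused = feat_a + feat_b
    return spatial_attention(channel_attention(fused))

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 4, 4))             # toy feature map, 8 channels
b = rng.standard_normal((8, 4, 4))             # e.g. an upsampled coarser level
out = fuse(a, b)
print(out.shape)  # (8, 4, 4)
```

In a real FPN-style fusion, `feat_b` would first be resized to `feat_a`'s resolution, and the gates would be produced by learned layers rather than parameter-free pooling.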