Corneal endothelial cell segmentation plays an important role in quantifying clinical indicators for corneal health evaluation. Although Convolutional Neural Networks (CNNs) are widely used for medical image segmentation, their receptive fields are limited. Recently, Transformers have outperformed convolutions in modeling long-range dependencies, but they lack local inductive bias, so pure transformer networks are difficult to train on small medical image datasets. Moreover, Transformer networks cannot be effectively deployed on specular microscopes because they are parameter-heavy and computationally complex. To this end, we find that appropriately limiting attention spans and modeling information at different granularities can introduce local constraints and enhance attention representations. This paper explores a hierarchical, fully self-attentional lightweight network for medical image segmentation, using Local and Global (LoGo) transformers to separately model attention representations at low-level and high-level layers. Specifically, the local efficient transformer (LoTr) layer decomposes features into finer-grained elements to model local attention representations, while the global axial transformer (GoTr) builds long-range dependencies across the entire feature space. With this hierarchical structure, we gradually and efficiently aggregate semantic features from different levels. Experimental results on segmentation tasks for the corneal endothelial cell, the ciliary body, and the liver demonstrate the accuracy, effectiveness, and robustness of our method. Compared with CNN-based and hybrid CNN-Transformer state-of-the-art (SOTA) methods, the LoGo transformer obtains the best results.
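The axial attention underlying GoTr factorizes full 2-D self-attention into attention along the height axis followed by the width axis, reducing the cost from O((HW)^2) to O(HW(H+W)) while still propagating information across the whole feature map. Below is a minimal single-head NumPy sketch of this factorization, not the paper's actual layer: it uses identity Q/K/V projections to stay dependency-free, and the function name is hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def axial_attention(x):
    """Single-head axial self-attention over a (H, W, C) feature map.

    Sketch only: a real layer would use learned Q/K/V projections and
    positional encodings; here Q = K = V = x for simplicity.
    """
    H, W, C = x.shape
    # Height axis: within each column, every position attends over H rows.
    q = k = v = x
    scores_h = np.einsum('hwc,gwc->whg', q, k) / np.sqrt(C)   # (W, H, H)
    x = np.einsum('whg,gwc->hwc', softmax(scores_h), v)
    # Width axis: within each row, every position attends over W columns.
    q = k = v = x
    scores_w = np.einsum('hwc,hgc->hwg', q, k) / np.sqrt(C)   # (H, W, W)
    x = np.einsum('hwg,hgc->hwc', softmax(scores_w), v)
    return x                                                   # (H, W, C)
```

Because each position attends along its row and its column in two cheap 1-D passes, two stacked axial layers give every position a path to every other, which is what makes this attention pattern practical at full feature-map resolution on lightweight hardware.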