Recent studies have emphasized the importance of establishing multi-dimensional information dependencies between weight vectors and input feature maps when computing attention. However, although existing networks establish this connection from different perspectives, the connections they capture remain relatively limited, and their ability to differentiate important from unimportant information is insufficient, which inevitably leads to the loss of effective information. This article studies an efficient channel attention mechanism that fuses multi-dimensional feature information, enables interaction between channel and spatial position features at both the independent-channel and global cross-channel levels, and amplifies important information while suppressing unimportant information. We propose the SW-SE block, which injects cross-channel spatial position information into the computation of channel attention, strengthens information exchange among channels, establishes closer connections, and yields channel weight vectors with better expressiveness while greatly enhancing feature sampling ability. We conducted ablation experiments on various mainstream network structures and achieved strong results on multiple tasks, e.g., classification, object detection, and visualization. We obtained top-1 accuracy gains of 3.12% and 1.41% with ResNet-50/100 on CIFAR-10/100, respectively, and a 4.01% gain on a lightweight network, along with an 8.57% improvement in object detection on PASCAL VOC2007/2012, at the cost of only a small increase in parameters and computation time.
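As a point of reference, the sketch below illustrates one way an SE-style block could inject cross-channel spatial position information into the channel-attention computation, as the abstract describes. This is a minimal sketch under stated assumptions, not the paper's actual SW-SE design: the class name SWSEBlock, the reduction parameter, and the channel-pooled spatial branch (which resembles CBAM-style spatial attention) are all illustrative.

```python
# Hypothetical sketch of an SE-style channel attention block augmented with
# cross-channel spatial information. All names and design details here are
# assumptions for illustration; the paper's SW-SE block may differ.
import torch
import torch.nn as nn


class SWSEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Standard SE excitation branch: a bottleneck MLP over pooled channels.
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
        )
        # Cross-channel spatial branch: pool over the channel dimension and
        # learn a spatial weighting map shared across all channels.
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        b, c, _, _ = x.shape
        # Summarize every spatial position across channels (mean and max),
        # then turn that into a per-position weight map.
        avg_map = x.mean(dim=1, keepdim=True)        # (b, 1, h, w)
        max_map, _ = x.max(dim=1, keepdim=True)      # (b, 1, h, w)
        s = self.sigmoid(self.spatial(torch.cat([avg_map, max_map], dim=1)))
        # Spatially weight the input before the channel squeeze, so spatial
        # position information influences the resulting channel weights.
        y = (x * s).mean(dim=(2, 3))                 # (b, c) squeeze
        w_ch = self.sigmoid(self.fc(y)).view(b, c, 1, 1)
        return x * w_ch


# Usage example: apply the block to a ResNet-style feature map.
if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)
    out = SWSEBlock(64)(feat)
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```

The key design choice this sketch tries to convey is that the spatial map is computed across channels and applied before the squeeze step, so the channel weight vector reflects where in the feature map the informative responses occur, rather than treating all positions equally as in a plain SE block.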