Accurate and rapid diagnosis of COVID-19 from chest X-ray (CXR) images plays an important role in large-scale screening and epidemic prevention. Unfortunately, identifying COVID-19 from CXR images is challenging because its radiographic features take on a variety of complex appearances, such as widespread ground-glass opacities and diffuse reticular-nodular opacities. To address this problem, we propose an adaptive attention network (AANet), which adaptively extracts the characteristic radiographic findings of COVID-19 from infected regions of varying scale and appearance. It contains two main components: an adaptive deformable ResNet and an attention-based encoder. First, the adaptive deformable ResNet, which adjusts its receptive fields to learn feature representations according to the shape and scale of the infected regions, is designed to handle the diversity of COVID-19 radiographic features. Then, the attention-based encoder models nonlocal interactions via a self-attention mechanism, learning rich contextual information to detect lesion regions with complex shapes. Extensive experiments on several public datasets show that the proposed AANet outperforms state-of-the-art methods.
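To illustrate the nonlocal interaction idea behind the attention-based encoder, the following is a minimal NumPy sketch of single-head self-attention over a set of spatial feature vectors. It is a generic illustration, not the paper's actual implementation: the projection matrices `Wq`, `Wk`, `Wv` and the function name `self_attention` are assumptions for the example, and the real encoder would operate on CNN feature maps with learned weights.

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Single-head self-attention (illustrative sketch).

    x  : (n, d) array of feature vectors, one per spatial position
    Wq, Wk, Wv : (d, d) projection matrices (assumed learned)
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # Similarity between every pair of positions -> nonlocal interactions
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Softmax over keys: each position attends to all positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a context-aware mixture of all value vectors
    return weights @ v

# Toy usage: 4 positions with 8-dim features
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

Because every output position aggregates information from all others, features for a lesion region can incorporate context from distant parts of the image, which is what lets attention handle lesions with complex, spread-out shapes.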