Deep face recognition (FR) has achieved very high accuracy on various demanding datasets and fostered successful real-world applications, even demonstrating strong tolerance to illumination change, which is commonly viewed as a major threat to FR systems. In the real world, however, the illumination variation produced by diverse lighting conditions cannot be fully covered by the limited face datasets. To this end, we first propose the physical-model-based adversarial relighting attack (ARA), denoted the albedo-quotient-based adversarial relighting attack (AQ-ARA). It generates natural adversarial light under a physical lighting model with the guidance of the target FR system and synthesizes adversarially relighted face images. Moreover, we propose the auto-predictive adversarial relighting attack (AP-ARA), which trains an adversarial relighting network (ARNet) to predict the adversarial light in a single step for each input face, enabling efficiency-sensitive applications. More importantly, we transfer these digital attacks to a physical ARA (Phy-ARA) through a precise relighting device, making the estimated adversarial lighting condition reproducible in the real world. We validate our methods against three state-of-the-art deep FR methods, i.e., FaceNet, ArcFace, and CosFace, on two public datasets. The extensive results demonstrate that our methods generate realistic adversarially relighted face images that easily fool FR, revealing the threat posed by specific light directions and strengths.
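To illustrate the core idea behind a quotient-based relighting attack, the toy numpy sketch below is a minimal, hypothetical stand-in for the pipeline the abstract describes: a simple Lambertian shading term stands in for the paper's physical lighting model, a fixed random projection stands in for a real FR embedding network (FaceNet/ArcFace/CosFace), and finite-difference hill climbing stands in for the FR-guided optimization. None of these components are the paper's actual implementation; the sketch only shows how the albedo cancels in the shading quotient and how a light direction can be pushed to increase the embedding distance of the relit face.

```python
import numpy as np

rng = np.random.default_rng(0)

def shading(normals, light):
    # Lambertian shading S = max(n . l, eps); eps avoids division by zero
    return np.clip(normals @ light, 1e-3, None)

def relight(image, normals, src_light, dst_light):
    # Albedo-quotient relighting: I' = I * S(dst) / S(src).
    # Since I = albedo * S(src), the (unknown) albedo cancels in the
    # quotient and never needs to be estimated explicitly.
    return image * shading(normals, dst_light) / shading(normals, src_light)

def embed(image):
    # Stand-in FR embedding (hypothetical): a fixed random linear
    # projection, L2-normalised. A real attack would query a deep FR model.
    W = np.random.default_rng(42).normal(size=(image.size, 8))
    v = image.ravel() @ W
    return v / np.linalg.norm(v)

def attack(image, normals, src_light, steps=100, lr=0.2):
    # Sketch of an AQ-ARA-style search: move the light direction to
    # maximise the embedding distance from the original face, using
    # central finite differences as a stand-in for model gradients.
    light = src_light.copy()
    ref = embed(image)
    def loss(l):
        return np.linalg.norm(embed(relight(image, normals, src_light, l)) - ref)
    for _ in range(steps):
        g = np.zeros(3)
        for i in range(3):
            d = np.zeros(3); d[i] = 1e-3
            g[i] = (loss(light + d) - loss(light - d)) / 2e-3
        light += lr * g
        light /= np.linalg.norm(light)  # keep a unit light direction
    return light

# Toy data: a 32x32 "face" with random, roughly camera-facing normals
n = rng.normal(size=(32 * 32, 3)); n[:, 2] = np.abs(n[:, 2]) + 1.0
normals = n / np.linalg.norm(n, axis=1, keepdims=True)
albedo = rng.uniform(0.2, 1.0, size=32 * 32)
src_light = np.array([0.0, 0.0, 1.0])
image = albedo * shading(normals, src_light)

adv_light = attack(image, normals, src_light)
adv_image = relight(image, normals, src_light, adv_light)
```

Because the attack only changes the lighting term, the perturbation stays on the physically plausible relighting manifold rather than being an arbitrary pixel-space noise, which is what makes such attacks transferable to a physical relighting device.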