A vast number of pictures are taken every day with cameras mounted on various mobile devices. Although the clarity of such images has improved significantly owing to advances in image sensor technology, their visual quality is hardly guaranteed under varying illumination conditions. In this paper, a novel yet simple method for low-light image enhancement is proposed based on the maximal diffusion value. The key idea of the proposed method is to estimate the illumination component, which tends to appear as bright pixels even under low-light conditions, by exploring multiple diffusion spaces. Specifically, the illumination component can be accurately separated from the scene reflectance by selecting the maximal value at each pixel position across those diffusion spaces, and can thus be adjusted independently for visual quality enhancement. That is, we propose to adopt the maximal value among the diffused intensities at each pixel position, the so-called maximal diffusion value, as the illumination component, since illumination buried in the dark tends to be revealed with bright intensities through the iterative diffusion process. In contrast to previous approaches, which still struggle to balance over-saturated and overly conservative restorations, the proposed method improves image quality without significant distortion while successfully suppressing noise amplification. Experimental results on benchmark datasets demonstrate the efficiency and robustness of the proposed method compared with previous approaches in the literature.
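The abstract does not specify the diffusion model or the adjustment step, so the following is only a minimal sketch of the core idea: it assumes isotropic Gaussian smoothing as the iterative diffusion process and a simple Retinex-style recomposition (image = reflectance x illumination) with gamma correction as the adjustment. The function names and the parameters `num_iters`, `sigma`, and `gamma` are hypothetical choices for illustration, not the paper's actual settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def maximal_diffusion_illumination(img, num_iters=8, sigma=1.5):
    """Per-pixel maximum over an iteratively diffused brightness map."""
    # Initial illumination cue: per-pixel maximum across the RGB channels.
    diffused = img.max(axis=2)
    illumination = diffused.copy()
    for _ in range(num_iters):
        # One diffusion step, approximated here by Gaussian smoothing.
        diffused = gaussian_filter(diffused, sigma=sigma)
        # Keep the brightest intensity observed at each pixel so far:
        # this per-pixel maximum plays the role of the "maximal diffusion value".
        illumination = np.maximum(illumination, diffused)
    return illumination

def enhance_low_light(img_uint8, gamma=0.5, eps=1e-3):
    """Retinex-style enhancement assuming image = reflectance * illumination."""
    img = img_uint8.astype(np.float64) / 255.0          # (H, W, 3) in [0, 1]
    L = np.clip(maximal_diffusion_illumination(img), eps, 1.0)
    # Divide out the estimated illumination to obtain reflectance, then
    # brighten the illumination alone (gamma < 1 lifts dark regions).
    reflectance = img / L[..., None]
    enhanced = reflectance * (L ** gamma)[..., None]
    return np.clip(enhanced * 255.0, 0.0, 255.0).astype(np.uint8)
```

Because only the illumination map is modified while the reflectance is left untouched, this kind of decomposition is one plausible way to brighten dark regions without amplifying noise as aggressively as a global gamma or histogram stretch would.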