Single-image super-resolution methods are increasingly employed owing to their broad applicability in numerous domains, such as medical imaging, display manufacturing, and digital zooming. Despite their widespread use, existing learning-based super-resolution (SR) methods are computationally expensive and inefficient for resource-constrained IoT devices. In this study, we propose a lightweight model based on a multi-agent reinforcement-learning approach that employs an agent at each pixel to construct super-resolution images under an asynchronous actor-critic policy. Over five time steps, the agents iteratively select actions from a predefined set, choosing at each new image state the action that maximizes the cumulative reward. We thoroughly evaluate the proposed method against existing super-resolution methods. Experimental results show that it outperforms existing models in both qualitative and quantitative scores despite having significantly lower computational complexity. The practicality of the proposed method is further confirmed by evaluating it on numerous IoT platforms, including edge devices.
Keywords: computer vision; image super-resolution; internet of things; lightweight image super-resolution; reinforcement learning.
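To make the pixel-level decision loop concrete, the following is a minimal sketch of the rollout described above: each pixel-level agent picks one action from a predefined set at every time step, and the per-step reward is the reduction in squared error toward the high-resolution target. The action set, the random stand-in policy, and all function names here are illustrative assumptions, not the paper's actual implementation (which learns the policy with an asynchronous actor-critic method).

```python
import numpy as np

# Hypothetical per-pixel action set (the paper uses a predefined set,
# but its exact contents are not specified in the abstract).
def act_identity(img):
    return img

def act_increment(img):
    return np.clip(img + 1.0 / 255.0, 0.0, 1.0)

def act_decrement(img):
    return np.clip(img - 1.0 / 255.0, 0.0, 1.0)

def act_box_blur(img):
    # 3x3 box filter with edge padding.
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + img.shape[0],
                          1 + dx:1 + dx + img.shape[1]]
    return out / 9.0

ACTIONS = [act_identity, act_increment, act_decrement, act_box_blur]

def step(state, action_map):
    """Apply each pixel's chosen action and return the new image state."""
    candidates = np.stack([a(state) for a in ACTIONS])          # (A, H, W)
    return np.take_along_axis(candidates, action_map[None], axis=0)[0]

def rollout(lr_img, hr_img, policy, t_max=5):
    """Run t_max steps; reward is the per-step decrease in squared error."""
    state, total_reward = lr_img, 0.0
    for _ in range(t_max):
        action_map = policy(state)                # (H, W) action indices
        next_state = step(state, action_map)
        reward = (np.sum((hr_img - state) ** 2)
                  - np.sum((hr_img - next_state) ** 2))
        total_reward += reward
        state = next_state
    return state, total_reward

# Stand-in for the learned actor: a random policy, for demonstration only.
rng = np.random.default_rng(0)

def random_policy(state):
    return rng.integers(0, len(ACTIONS), size=state.shape)

hr = rng.random((8, 8))
lr = np.clip(hr + 0.05 * rng.standard_normal((8, 8)), 0.0, 1.0)
out, total_reward = rollout(lr, hr, random_policy)
```

In the actual method, `random_policy` would be replaced by the trained actor network, and the accumulated reward would drive the asynchronous actor-critic updates during training.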