This study analyzes the effectiveness of various loss functions on performance improvement for Single Image Super-Resolution (SISR) using Convolutional Neural Network (CNN) models, which approximate the reconstruction mapping between Low Resolution (LR) and High Resolution (HR) images with convolutional filters. In total, eight loss functions are separately combined with the Adam optimizer. Experimental evaluations on different datasets show that some parametric and non-parametric robust loss functions yield impressive accuracies, whereas the remaining ones are sensitive to noise, which misleads the learning process and consequently produces lower-quality HR outcomes. Overall, using the Difference of Structural Similarity (DSSIM), Charbonnier, or L1 loss function within the optimization mechanism proves to be a proper choice, given their excellent reconstruction results. Among them, Charbonnier and L1 are the fastest when the computational time cost during the training stage is examined. (C) 2019 Elsevier Inc. All rights reserved.
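As a minimal sketch (not the authors' code) of two of the well-performing loss functions named above, the L1 and Charbonnier losses can be written in NumPy as follows; the smoothing constant `eps` is an assumed hyperparameter, not a value from the study:

```python
import numpy as np

def l1_loss(pred, target):
    """Mean absolute error between predicted and reference HR images."""
    return np.mean(np.abs(pred - target))

def charbonnier_loss(pred, target, eps=1e-3):
    """Charbonnier loss: a smooth, differentiable approximation of L1.

    sqrt(d^2 + eps^2) behaves like |d| for large residuals but is
    differentiable at zero, which makes optimization more stable.
    """
    diff = pred - target
    return np.mean(np.sqrt(diff * diff + eps * eps))

# Toy example on small image patches in [0, 1]
pred = np.array([[0.2, 0.5], [0.7, 0.9]])
target = np.array([[0.2, 0.4], [0.7, 1.0]])
l1 = l1_loss(pred, target)
charb = charbonnier_loss(pred, target)
```

Because of the `eps` term, the Charbonnier value is always at least as large as the L1 value on the same residuals, approaching it as `eps` shrinks.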