In this paper, we propose a hybrid super-resolution method that combines global and local dictionary training in the sparse domain. To represent and differentiate the feature mapping at different scales, a global dictionary set is trained at multiple structure scales, and a non-linear function is used to choose the appropriate dictionary for the initial reconstruction of the HR image. In addition, we apply Gaussian blur to the LR images to eliminate a widely used but inappropriate assumption that low-resolution (LR) images are generated from high-resolution (HR) images by bicubic interpolation. To deal with the Gaussian blur, a local dictionary is generated and iteratively updated by K-means principal component analysis (K-PCA) and gradient descent (GD) to model the blur effect during down-sampling. Compared with state-of-the-art SR algorithms, the experimental results reveal that the proposed method produces sharper boundaries and suppresses undesired artifacts in the presence of Gaussian blur. This implies that our method could be more effective in real applications, and that the HR-LR mapping relation is more complicated than bicubic interpolation.

The problem of enlarging images to a larger spatial size is known as image super-resolution (SR), which builds the mathematical relation between the low-resolution (LR) image and the high-resolution (HR) image. Although image SR is a frequent operation in image processing, the problem remains challenging because it is under-constrained and has no closed-form solution without extra constraints. The concept of image SR was first proposed and studied by Tsai and Huang in the 1980s. Over the past three decades, a variety of SR algorithms and models have been developed to deal with this problem. Among them, single-image super-resolution (SISR) is an important branch, which enlarges an image based on the image itself as the only observation. In general, existing SISR algorithms can be classified into three categories: interpolation-based, reconstruction-based, and learning-based SR.

Interpolation-based SR usually utilizes fixed functions or adaptive structure kernels to predict the missing pixels on the HR grid. It assumes that the LR observations are degraded by down-sampling, so the unknown HR pixels can be estimated from their observed neighbors. Today, interpolation-based methods are often used as the comparison baseline. However, considerable blurring and aliasing artifacts are often inevitable in the up-scaled images.

Different from interpolation-based SR, the reconstruction-based methods refine the observation model: they assume that the LR image is obtained by a series of degradations, namely down-sampling, blurring, and additive noise. Using different prior assumptions, this family of approaches enhances the features of low-resolution images through a regularized cost function. As a result, they are able to produce sharper edges and clearer textures while removing undesired artifacts. However, these methods are usually inadequate at producing novel details and perform unsatisfactorily under high scaling factors.

Compared with the aforementioned methods, learning-based SR is generally superior since it is capable of generating convincing novel details that are almost lost in the low-resolution image. Basically, these algorithms and models exploit prior texture knowledge from extensive sample images to learn the underlying mapping relations between LR and HR images. During the past decades, numerous mapping formulations have been designed; the most representative methods include neighbor-based SR, regression-based SR, sparsity-based SR, and methods using deep neural networks. Compared with neighbor-based and regression-based SR, which rely heavily on the quality and size of the sample images, sparsity-based SR is capable of learning more compact dictionaries based on signal sparse representation. As such, it has been widely studied for its superior performance in producing clear images with low computational complexity. A classic sparse model was proposed by training a joint dictionary pair in the sparse domain for image reconstruction; subsequently, the same authors introduced a coupled dictionary training approach with two acceleration schemes that overcame the sparse coding bottleneck. With a similar framework, Zeyde et al.
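The degradation pipeline that reconstruction-based SR assumes (Gaussian blur, down-sampling, additive noise) can be sketched as follows. This is a minimal illustration: the `degrade` helper, the kernel width `sigma`, and the noise level are assumptions for demonstration, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(hr, scale=2, sigma=1.0, noise_std=0.01, rng=None):
    """Simulate the LR observation model: blur, decimate, add noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    blurred = gaussian_filter(hr, sigma=sigma)         # blur operator
    lr = blurred[::scale, ::scale]                     # down-sampling
    return lr + rng.normal(0.0, noise_std, lr.shape)   # additive noise

hr = np.random.default_rng(1).random((64, 64))  # stand-in HR image
lr = degrade(hr, scale=2)
print(lr.shape)  # (32, 32)
```

Note that dropping the blur and noise terms recovers the simpler assumption the paper argues against, i.e. that LR images come from plain resampling alone.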
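For the interpolation-based baseline mentioned in the text, a cubic up-scaling can be obtained in a few lines; `scipy.ndimage.zoom` with `order=3` (cubic spline) is used here as a stand-in for bicubic interpolation, which is an assumption of this sketch rather than the exact baseline kernel.

```python
import numpy as np
from scipy.ndimage import zoom

lr = np.arange(16, dtype=float).reshape(4, 4)  # toy LR image
hr_est = zoom(lr, 2, order=3)                  # cubic spline up-scaling
print(hr_est.shape)  # (8, 8)
```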
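The coupled-dictionary idea behind sparsity-based SR can be sketched as follows: an LR patch is sparse-coded over an LR dictionary, and the same sparse code is applied to a coupled HR dictionary to synthesize the HR patch. The random dictionaries, the small `omp` solver, and the sparsity level `k=3` are toy assumptions; in practice the dictionary pair is learned jointly from training patches.

```python
import numpy as np

def omp(D, y, k=3):
    """Greedy orthogonal matching pursuit: k-sparse code of y over D."""
    residual, idx = y.copy(), []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))  # best atom
        sol, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)  # refit
        residual = y - D[:, idx] @ sol
    coef[idx] = sol
    return coef

rng = np.random.default_rng(0)
D_l = rng.standard_normal((16, 64))   # toy LR-patch dictionary
D_l /= np.linalg.norm(D_l, axis=0)    # unit-norm atoms
D_h = rng.standard_normal((64, 64))   # coupled toy HR-patch dictionary
y_l = rng.standard_normal(16)         # an LR feature patch
alpha = omp(D_l, y_l, k=3)            # shared sparse code
x_h = D_h @ alpha                     # HR patch from the same code
print(x_h.shape)  # (64,)
```

The key design choice is that both dictionaries share one sparse code, so whatever structure the code captures in the LR domain is transferred to the HR domain.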