
An Experimental-Based Review of Image Enhancement and Image Restoration Methods for Underwater Imaging

Authors: Yan Wang, Wei Song, Giancarlo Fortino, Li-Zhe Qi, Wenqiang Zhang, Antonio Liotta

Published in IEEE Xplore, 30 July 2019


Underwater images play a key role in ocean exploration but often suffer from severe quality degradation due to light absorption and scattering in the water medium. Although major breakthroughs have been made recently in the general area of image enhancement and restoration, the applicability of new methods for improving the quality of underwater images has not specifically been surveyed. In this paper, we review the image enhancement and restoration methods that tackle typical underwater image impairments, including some extreme degradations and distortions. First, we introduce the key causes of quality reduction in underwater images, in terms of the underwater image formation model (IFM). Then, we review underwater restoration methods, considering both IFM-free and IFM-based approaches. Next, we present an experimental comparative evaluation of state-of-the-art IFM-free and IFM-based methods, considering also the prior-based parameter estimation algorithms of the IFM-based methods, using both subjective and objective analyses (the code used is freely available at https://github.com/wangyanckxx/Single-Underwater-Image-Enhancement-and-Color-Restoration). From this evaluation, we pinpoint the key shortcomings of existing methods, drawing recommendations for future research in this area. Our review of underwater image enhancement and restoration provides researchers with the necessary background to appreciate challenges and opportunities in this important field.
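The IFM referenced in the abstract is commonly written as I(x) = J(x)·t(x) + B·(1 − t(x)), where I is the observed image, J the scene radiance, t the transmission map, and B the background (veiling) light. A minimal sketch of the IFM inversion step, assuming t and B have already been estimated by some prior (the function name and the t_min floor are illustrative, not from the paper):

```python
import numpy as np

def restore_ifm(observed, background_light, transmission, t_min=0.1):
    """Invert the simplified image formation model:
    I = J * t + B * (1 - t)  =>  J = (I - B) / t + B."""
    t = np.maximum(transmission, t_min)  # floor t to avoid amplifying noise
    restored = (observed - background_light) / t + background_light
    return np.clip(restored, 0.0, 1.0)   # keep radiance in the valid range
```

In practice, the quality of the result hinges on how t and B are estimated (e.g., via dark-channel-style priors), which is exactly what the reviewed prior-based algorithms differ on.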


At a Glance

  • Journal: IEEE Access
  • Format: Open Access
  • Frequency: Continuous
  • Submission to Publication: 4-6 weeks (typical)
  • Topics: All topics in IEEE
  • Average Acceptance Rate: 27%
  • Impact Factor: 3.4
  • Model: Binary Peer Review
  • Article Processing Charge: US $1,995

Featured Articles


A Broad Ensemble Learning System for Drifting Stream Classification



Increasing Light Load Efficiency in Phase-Shifted, Variable Frequency Multiport Series Resonant Converters


Interference-Aware Intelligent Scheduling for Virtualized Private 5G Networks


© 2024 IEEE - All rights reserved. Use of this website signifies your agreement to the IEEE TERMS AND CONDITIONS.

A not-for-profit organization, IEEE is the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity.

AWARD RULES:

NO PURCHASE NECESSARY TO ENTER OR WIN. A PURCHASE WILL NOT INCREASE YOUR CHANCES OF WINNING.

These rules apply to the "2024 IEEE Access Best Video Award Part 2" (the "Award").

  • Sponsor: The Sponsor of the Award is The Institute of Electrical and Electronics Engineers, Incorporated ("IEEE") on behalf of IEEE Access, 445 Hoes Lane, Piscataway, NJ 08854-4141 USA ("Sponsor").
  • Eligibility: Award is open to residents of the United States of America and other countries, where permitted by local law, who are the age of eighteen (18) and older. Employees of Sponsor, its agents, affiliates and their immediate families are not eligible to enter Award. The Award is subject to all applicable state, local, federal and national laws and regulations. Entrants may be subject to rules imposed by their institution or employer relative to their participation in Awards and should check with their institution or employer for any relevant policies. Void in locations and countries where prohibited by law.
  • Agreement to Official Rules: By participating in this Award, entrants agree to abide by the terms and conditions thereof as established by Sponsor. Sponsor reserves the right to alter any of these Official Rules at any time and for any reason. All decisions made by Sponsor concerning the Award including, but not limited to, the cancellation of the Award, shall be final and at its sole discretion.
  • How to Enter: This Award opens on July 1, 2024 at 12:00 AM ET and all entries must be received by 11:59 PM ET on December 31, 2024 (“Promotional Period”).

Entrant must submit a video with an article submission to IEEE Access. The video submission must clearly be relevant to the submitted manuscript. Only videos that accompany an article that is accepted for publication in IEEE Access will qualify. The video may include simulations, demonstrations, or interviews with other experts, for example. Your video file should not exceed 100 MB.

Entrants can enter the Award during Promotional Period through the following method:

  • The IEEE Author Portal: Entrants can upload their video entries while submitting their article through the IEEE Author Portal submission site.
  • Review and Complete the Terms and Conditions: After submitting your manuscript and video through the IEEE Author Portal, entrants should then review and sign the Terms and Conditions.

Entrants who have already submitted a manuscript to IEEE Access without a video can still submit a video for inclusion in this Award so long as the video is submitted within 7 days of the article submission date.  The video can be submitted via email to the article administrator.  All videos must undergo peer review and be accepted along with the article submission.  Videos may not be submitted after an article has already been accepted for publication. 

The criteria for an article to be accepted for publication in IEEE Access are:

  • The article must be original writing that enhances the existing body of knowledge in the given subject area. Original review articles and surveys are acceptable even if new data/concepts are not presented.
  • Results reported must not have been submitted or published elsewhere (although expanded versions of conference publications are eligible for submission).
  • Experiments, statistics, and other analyses must be performed to a high technical standard and are described in sufficient detail.
  • Conclusions must be presented in an appropriate fashion and are supported by the data.
  • The article must be written in standard English with correct grammar.
  • Appropriate references to related prior published works must be included.
  • The article must fall within the scope of IEEE Access.
  • The article must be in compliance with the IEEE PSPB Operations Manual.
  • The required IEEE intellectual property documents for publication must be completed.
  • Acceptance is at the discretion of the IEEE Access Editor-in-Chief.
  • Disqualification: The following items will disqualify a video from being considered a valid submission:
  • The video is not original work.
  • A video that is not accompanied with an article submission.
  • The article and/or video is rejected during the peer review process.
  • The article and/or video topic does not fit into the scope of IEEE Access .
  • The article and/or video does not follow the criteria for publication in IEEE Access.
  • Videos posted in a comment on IEEE Xplore .
  • Content is off-topic, offensive, obscene, indecent, abusive or threatening to others.
  • Infringes the copyright, trademark or other right of any third party.
  • Uploads viruses or other contaminating or destructive features.
  • Is in violation of any applicable laws or regulations.
  • Is not in English.
  • Is not provided within the designated submission time.
  • Entrant does not agree and sign the Terms and Conditions document.

Entries must be original. Entries that copy other entries, or the intellectual property of anyone other than the Entrant, may be removed by Sponsor and the Entrant may be disqualified. Sponsor reserves the right to remove any entry and disqualify any Entrant if the entry is deemed, in Sponsor’s sole discretion, to be inappropriate.

  • Entrant’s Warranty and Authorization to Sponsor: By entering the Award, entrants warrant and represent that the Award Entry has been created and submitted by the Entrant. Entrant certifies that they have the ability to use any image, text, video, or other intellectual property they may upload and that Entrant has obtained all necessary permissions. IEEE shall not indemnify Entrant for any infringement, violation of publicity rights, or other civil or criminal violations. Entrant agrees to hold IEEE harmless for all actions related to the submission of an Entry. Entrants further represent and warrant, if they reside outside of the United States of America, that their participation in this Award and acceptance of a prize will not violate their local laws.
  • Intellectual Property Rights: Entrant grants Sponsor an irrevocable, worldwide, royalty free license to use, reproduce, distribute, and display the Entry for any lawful purpose in all media whether now known or hereinafter created. This may include, but is not limited to, the IEEE Access website, the IEEE Access YouTube channel, the IEEE Access IEEE TV channel, IEEE Access social media sites (LinkedIn, Facebook, Twitter, IEEE Access Collabratec Community), and the IEEE Access Xplore page. Facebook/Twitter/Microsite usernames will not be used in any promotional and advertising materials without the Entrants' expressed approval.
  • Number of Prizes Available, Prizes, Approximate Retail Value and Odds of winning Prizes: Two (2) promotional prizes of $350 USD Amazon gift cards. One (1) grand prize of a $500 USD Amazon gift card. Prizes will be distributed to the winners after the selection of winners is announced. Odds of winning a prize depend on the number of eligible entries received during the Promotional Period. Only the corresponding author of the submitted manuscript will receive the prize.

The grand prize winner may, at Sponsor's discretion, have his/her article and video highlighted in media such as the IEEE Access Xplore page and the IEEE Access social media sites.

The prize(s) for the Award are being sponsored by IEEE.  No cash in lieu of prize or substitution of prize permitted, except that Sponsor reserves the right to substitute a prize or prize component of equal or greater value in its sole discretion for any reason at time of award.  Sponsor shall not be responsible for service obligations or warranty (if any) in relation to the prize(s). Prize may not be transferred prior to award. All other expenses associated with use of the prize, including, but not limited to local, state, or federal taxes on the Prize, are the sole responsibility of the winner.  Winner(s) understand that delivery of a prize may be void where prohibited by law and agrees that Sponsor shall have no obligation to substitute an alternate prize when so prohibited. Amazon is not a sponsor or affiliated with this Award.

  • Selection of Winners: Promotional prize winners will be selected based on entries received during the Promotional Period. The sponsor will utilize an Editorial Panel to vote on the best video submissions. Editorial Panel members are not eligible to participate in the Award.  Entries will be ranked based on three (3) criteria:
  • Presentation of Technical Content
  • Quality of Video

Upon selecting a winner, the Sponsor will notify the winner via email. All potential winners will be notified via their email provided to the sponsor. Potential winners will have five (5) business days to respond after receiving initial prize notification or the prize may be forfeited and awarded to an alternate winner. Potential winners may be required to sign an affidavit of eligibility, a liability release, and a publicity release.  If requested, these documents must be completed, signed, and returned within ten (10) business days from the date of issuance or the prize will be forfeited and may be awarded to an alternate winner. If prize or prize notification is returned as undeliverable or in the event of noncompliance with these Official Rules, prize will be forfeited and may be awarded to an alternate winner.

  • General Prize Restrictions:  No prize substitutions or transfer of prize permitted, except by the Sponsor. Import/Export taxes, VAT and country taxes on prizes are the sole responsibility of winners. Acceptance of a prize constitutes permission for the Sponsor and its designees to use winner’s name and likeness for advertising, promotional and other purposes in any and all media now and hereafter known without additional compensation unless prohibited by law. Winner acknowledges that neither Sponsor, Award Entities nor their directors, employees, or agents, have made nor are in any manner responsible or liable for any warranty, representation, or guarantee, express or implied, in fact or in law, relative to any prize, including but not limited to its quality, mechanical condition or fitness for a particular purpose. Any and all warranties and/or guarantees on a prize (if any) are subject to the respective manufacturers’ terms therefor, and winners agree to look solely to such manufacturers for any such warranty and/or guarantee.

11. Release, Publicity, and Privacy: By receipt of the Prize and/or, if requested, by signing an affidavit of eligibility and liability/publicity release, the Prize Winner consents to the use of his or her name, likeness, business name and address by Sponsor for advertising and promotional purposes, including but not limited to on Sponsor's social media pages, without any additional compensation, except where prohibited. No entries will be returned. All entries become the property of Sponsor. The Prize Winner agrees to release and hold harmless Sponsor and its officers, directors, employees, affiliated companies, agents, successors and assigns from and against any claim or cause of action arising out of participation in the Award.

Sponsor assumes no responsibility for computer system, hardware, software or program malfunctions or other errors, failures, delayed computer transactions or network connections that are human or technical in nature, or for damaged, lost, late, illegible or misdirected entries; technical, hardware, software, electronic or telephone failures of any kind; lost or unavailable network connections; fraudulent, incomplete, garbled or delayed computer transmissions whether caused by Sponsor, the users, or by any of the equipment or programming associated with or utilized in this Award; or by any technical or human error that may occur in the processing of submissions or downloading, that may limit, delay or prevent an entrant’s ability to participate in the Award.

Sponsor reserves the right, in its sole discretion, to cancel or suspend this Award and award a prize from entries received up to the time of termination or suspension should virus, bugs or other causes beyond Sponsor’s control, unauthorized human intervention, malfunction, computer problems, phone line or network hardware or software malfunction, which, in the sole opinion of Sponsor, corrupt, compromise or materially affect the administration, fairness, security or proper play of the Award or proper submission of entries.  Sponsor is not liable for any loss, injury or damage caused, whether directly or indirectly, in whole or in part, from downloading data or otherwise participating in this Award.

Representations and Warranties Regarding Entries: By submitting an Entry, you represent and warrant that your Entry does not and shall not comprise, contain, or describe, as determined in Sponsor’s sole discretion: (A) false statements or any misrepresentations of your affiliation with a person or entity; (B) personally identifying information about you or any other person; (C) statements or other content that is false, deceptive, misleading, scandalous, indecent, obscene, unlawful, defamatory, libelous, fraudulent, tortious, threatening, harassing, hateful, degrading, intimidating, or racially or ethnically offensive; (D) conduct that could be considered a criminal offense, could give rise to criminal or civil liability, or could violate any law; (E) any advertising, promotion or other solicitation, or any third party brand name or trademark; or (F) any virus, worm, Trojan horse, or other harmful code or component. By submitting an Entry, you represent and warrant that you own the full rights to the Entry and have obtained any and all necessary consents, permissions, approvals and licenses to submit the Entry and comply with all of these Official Rules, and that the submitted Entry is your sole original work, has not been previously published, released or distributed, and does not infringe any third-party rights or violate any laws or regulations.

12. Disputes: EACH ENTRANT AGREES THAT: (1) ANY AND ALL DISPUTES, CLAIMS, AND CAUSES OF ACTION ARISING OUT OF OR IN CONNECTION WITH THIS AWARD, OR ANY PRIZES AWARDED, SHALL BE RESOLVED INDIVIDUALLY, WITHOUT RESORTING TO ANY FORM OF CLASS ACTION, PURSUANT TO ARBITRATION CONDUCTED UNDER THE COMMERCIAL ARBITRATION RULES OF THE AMERICAN ARBITRATION ASSOCIATION THEN IN EFFECT; (2) ANY AND ALL CLAIMS, JUDGMENTS AND AWARDS SHALL BE LIMITED TO ACTUAL OUT-OF-POCKET COSTS INCURRED, INCLUDING COSTS ASSOCIATED WITH ENTERING THIS AWARD, BUT IN NO EVENT ATTORNEYS' FEES; AND (3) UNDER NO CIRCUMSTANCES WILL ANY ENTRANT BE PERMITTED TO OBTAIN AWARDS FOR, AND ENTRANT HEREBY WAIVES ALL RIGHTS TO CLAIM, PUNITIVE, INCIDENTAL, AND CONSEQUENTIAL DAMAGES, AND ANY OTHER DAMAGES, OTHER THAN FOR ACTUAL OUT-OF-POCKET EXPENSES, AND ANY AND ALL RIGHTS TO HAVE DAMAGES MULTIPLIED OR OTHERWISE INCREASED. ALL ISSUES AND QUESTIONS CONCERNING THE CONSTRUCTION, VALIDITY, INTERPRETATION AND ENFORCEABILITY OF THESE OFFICIAL RULES, OR THE RIGHTS AND OBLIGATIONS OF ENTRANT AND SPONSOR IN CONNECTION WITH THE AWARD, SHALL BE GOVERNED BY, AND CONSTRUED IN ACCORDANCE WITH, THE LAWS OF THE STATE OF NEW JERSEY, WITHOUT GIVING EFFECT TO ANY CHOICE OF LAW OR CONFLICT OF LAW, RULES OR PROVISIONS (WHETHER OF THE STATE OF NEW JERSEY OR ANY OTHER JURISDICTION) THAT WOULD CAUSE THE APPLICATION OF THE LAWS OF ANY JURISDICTION OTHER THAN THE STATE OF NEW JERSEY. SPONSOR IS NOT RESPONSIBLE FOR ANY TYPOGRAPHICAL OR OTHER ERROR IN THE PRINTING OF THE OFFER OR ADMINISTRATION OF THE AWARD OR IN THE ANNOUNCEMENT OF THE PRIZES.

  • Limitation of Liability: The Sponsor, Award Entities and their respective parents, affiliates, divisions, licensees, subsidiaries, and advertising and promotion agencies, and each of the foregoing entities' respective employees, officers, directors, shareholders and agents (the "Released Parties") are not responsible for incorrect or inaccurate transfer of entry information, human error, technical malfunction, lost/delayed data transmissions, omission, interruption, deletion, defect, line failures of any telephone network, computer equipment, software or any combination thereof, inability to access web sites, damage to a user's computer system (hardware and/or software) due to participation in this Award or any other problem or error that may occur. By entering, participants agree to release and hold harmless the Released Parties from and against any and all claims, actions and/or liability for injuries, loss or damage of any kind, to person or property, arising from or in connection with participation in and/or entry into this Award, participation in any Award-related activity, or use of any prize won. Entry materials that have been tampered with or altered are void. If for any reason this Award is not capable of running as planned, or if this Award or any website associated therewith (or any portion thereof) becomes corrupted or does not allow the proper playing of this Award and processing of entries per these rules, or if infection by computer virus, bugs, tampering, or unauthorized intervention affects the administration, security, fairness, integrity, or proper conduct of this Award, Sponsor reserves the right, at its sole discretion, to disqualify any individual implicated in such action, and/or to cancel, terminate, modify or suspend this Award or any portion thereof, or to amend these rules without notice.
In the event of a dispute as to who submitted an online entry, the entry will be deemed submitted by the authorized account holder of the email address submitted at the time of entry. "Authorized Account Holder" is defined as the person assigned to an email address by an Internet access provider, online service provider or other organization responsible for assigning email addresses for the domain associated with the email address in question. Any attempt by an entrant or any other individual to deliberately damage any web site or undermine the legitimate operation of the Award is a violation of criminal and civil laws and should such an attempt be made, the Sponsor reserves the right to seek damages and other remedies from any such person to the fullest extent permitted by law. This Award is governed by the laws of the State of New Jersey and all entrants hereby submit to the exclusive jurisdiction of federal or state courts located in the State of New Jersey for the resolution of all claims and disputes. Facebook, LinkedIn, Twitter, G+, YouTube, IEEE Xplore, and IEEE TV are not sponsors of nor affiliated with this Award.
  • Award Results and Official Rules: To obtain the identity of the prize winner and/or a copy of these Official Rules, send a self-addressed stamped envelope to Kimberly Rybczynski, IEEE, 445 Hoes Lane, Piscataway, NJ 08854-4141 USA.

Recent Progress in Digital Image Restoration Techniques: A Review


Recommendations

Image Retrieval Using Digital Image Inpainting Techniques

Image retrieval is an inverse problem in digital image processing. In this paper, the authors deal with the restoration of images using digital image inpainting methods. With this inpainting technique, one can reconstruct a missing or damaged part or can ...
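To make the idea of inpainting concrete, the sketch below fills a masked hole by repeated neighbour averaging, the simplest diffusion-based (harmonic) inpainting scheme; the function name is illustrative, and practical systems use faster or richer schemes such as fast marching or exemplar-based filling:

```python
import numpy as np

def inpaint_diffusion(image, mask, iters=200):
    """Fill pixels where mask is True by iteratively averaging the
    four neighbours -- a crude harmonic (diffusion) inpainting sketch."""
    img = image.astype(float).copy()
    img[mask] = img[~mask].mean()  # neutral initial guess inside the hole
    for _ in range(iters):
        avg = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
        img[mask] = avg[mask]  # update only the unknown pixels
    return img  # np.roll wraps at the border; adequate for an interior hole
```

Diffusion propagates smooth intensity into the hole but cannot reproduce texture, which is why exemplar-based and learning-based inpainting methods were developed.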

A Literature Survey on Blur Detection Algorithms for Digital Imaging

The development of blur detection algorithms has attracted much attention in recent years. Blur detection algorithms have proven very helpful in real-life applications and have therefore been developed in various multimedia-related research areas ...
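A common baseline among the blur detection algorithms surveyed in this area is the variance of the Laplacian: sharp images produce strong second-derivative responses, blurred ones do not. A minimal sketch (the decision threshold is application-dependent and not specified here):

```python
import numpy as np

def laplacian_variance(gray):
    """Blur score: variance of the 4-neighbour Laplacian response.
    High variance suggests a sharp image; low variance suggests blur."""
    lap = (-4 * gray
           + np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1))
    return lap.var()
```

Comparing the score of an image against the same image after smoothing gives a quick sanity check of the detector.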

Blurred image restoration: A fast method of finding the motion length and angle

Motion blur in photographic images results from camera movement or shake. Methods such as blind deconvolution are used when information about the direction and size of the blur is not known. Restoration methods, such as Lucy-Richardson or Wiener ...
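Once the motion length and angle are known, the blur reduces to a known point-spread function (PSF) and Wiener deconvolution applies directly. A minimal frequency-domain sketch, assuming a horizontal PSF and a hand-tuned noise-to-signal ratio k (both function names are illustrative):

```python
import numpy as np

def motion_psf(length, shape):
    """Horizontal motion-blur kernel (angle = 0) padded to the image shape."""
    psf = np.zeros(shape)
    psf[0, :length] = 1.0 / length  # uniform smear over `length` pixels
    return psf

def wiener_deconvolve(blurred, psf, k=0.01):
    """Wiener deconvolution: W = H* / (|H|^2 + k), applied in the Fourier domain."""
    H = np.fft.fft2(psf, s=blurred.shape)  # transfer function of the blur
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + k) * G))
```

The parameter k trades ringing against noise amplification; when the PSF itself is unknown, blind deconvolution methods estimate it jointly with the image.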

Information

Published in: Academic Press, Inc., United States


Author tags

  • Digital image
  • Image restoration
  • Degradation
  • Transformation
  • Review-article


Preserving Artistic Heritage: A Comprehensive Review of Virtual Restoration Methods for Damaged Artworks

  • Review article
  • Published: 05 September 2024



  • Praveen Kumar
  • Varun Gupta (ORCID: orcid.org/0000-0002-2633-5920)

Restoration of damaged artwork is an important task for preserving the culture and history of humankind. It is a delicate, complex, and irreversible process that requires preserving the artist's style and semantics while removing damage from the artwork, and digital restoration can guide artists in physically restoring artworks. This paper groups virtual artwork restoration methods into four categories: image processing, machine learning, encoder-decoder neural network, and generative adversarial network-based methods, and analyses the underlying merits and demerits of each. The category-wise review reveals that generative adversarial network-based methods have attracted the most attention from researchers in recent years for restoring damaged artworks. The paper describes the datasets used for training and testing artwork restoration methods and discusses the metrics used for their performance evaluation; it compares the restoration results of various methods quantitatively using those metrics and qualitatively by visual inspection. Further, the paper identifies research gaps, challenges, and future directions for research in this field. This review aims to provide researchers with an important reference for work in artwork restoration.



Isola P, Zhu JY, Zhou T, Efros AA (2017) Image-to-image translation with conditional adversarial networks. In: Proc.—30th IEEE conf. comput. vis. pattern recognition, CVPR 2017, vol 2017, pp 5967–5976. https://doi.org/10.1109/CVPR.2017.632

Park T, Liu MY, Wang TC, Zhu JY (2019) GauGAN: semantic image synthesis with spatially adaptive normalization. In: ACM SIGGRAPH 2019. https://doi.org/10.1145/3306305.3332370

Isola P, Efros AA, Ai B, Berkeley UC. Image-to-image translation with conditional adversarial networks

Adhikary A, Bhandari N, Markou E, Sachan S (2021) ArtGAN: artwork restoration using generative adversarial networks. In: 2021 13th int. conf. adv. comput. intell. ICACI 2021, pp 199–206. https://doi.org/10.1109/ICACI52617.2021.9435888

Zhang Y, Tian Y, Kong Y, Zhong B, Fu Y (2014) Residual dense network for image restoration, vol 13, no 9, pp 1–16

Kumar P, Gupta V (2023) Restoration of damaged artworks based on a generative adversarial network. Multimed Tools Appl. https://doi.org/10.1007/s11042-023-15222-2

Zhu L, Deng R, Maire M, Deng Z, Mori G, Tan P (2018) Sparsely aggregated convolutional networks. In: Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol 11216 LNCS, pp 192–208. https://doi.org/10.1007/978-3-030-01258-8_12

Cao J, Zhang Z, Zhao A, Cui H, Zhang Q (2020) Ancient mural restoration based on a modified generative adversarial network. Herit Sci 8(1):1–14. https://doi.org/10.1186/s40494-020-0355-x

Zou Z, Zhao P, Zhao X (2021) Automatic segmentation , inpainting , and classification of defective patterns on ancient architecture using multiple deep learning algorithms, pp 1–18. https://doi.org/10.1002/stc.2742

Bolya D, Zhou C, Xiao F, Lee YJ (2019) YOLACT: real-time instance segmentation. In: Proc. IEEE int. conf. comput. vis., pp 9156–9165. https://doi.org/10.1109/ICCV.2019.00925

Improved Training ofWasserstein GANs Ishaan (2014) https://doi.org/10.3997/2214-4609.201405839

Li J, Wang H, Deng Z, Pan M, Chen H (2021) Restoration of non-structural damaged murals in Shenzhen Bao’an based on a generator–discriminator network. Herit Sci 9(1):1–14. https://doi.org/10.1186/s40494-020-00478-w

Luo R, Luo R, Guo L, Yu H (2022) An ancient Chinese painting restoration method based on improved generative adversarial network. J Phys Conf Ser. https://doi.org/10.1088/1742-6596/2400/1/012005

Gan W. Wasserstein GAN

Zeng Y et al (2021) Virtual restoration of missing paint loss of mural based on generative adversarial network. J Phys Conf Ser 2400:1–5. https://doi.org/10.1088/1742-6596/2400/1/012005

Kumar P, Gupta V (2023) Artwork restoration using paired image translation-based generative adversarial networks. In: ITM Web Conf. 54, 01013 (2023)I3CS-2023, vol 01013, pp 1–12

Wu M, Chang X, Wang J (2023) Fragments inpainting for tomb murals using a dual-attention mechanism GAN with improved generators. Appl Sci. https://doi.org/10.3390/app13063972

Zhu L, Yang Y (2018) Computer vision—ECCV 2018, vol 11211. Springer, Cham

Zhu JY, Park T, Isola P, Efros AA (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proc. IEEE int. conf. comput. vis., pp 2242–2251. https://doi.org/10.1109/ICCV.2017.244

Wang HL et al (2018) Dunhuang mural restoration using deep learning. In: SIGGRAPH Asia 2018 Tech. Briefs, SA 2018. https://doi.org/10.1145/3283254.3283263

Jay J, Renou J-P, Voinnet O, Navarro L (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks Jun-Yan. In: Proc. IEEE int. conf. comput. vis., pp 183–202. https://doi.org/10.1007/978-1-60327-005-2_13

Sizyakin R, Cornelis B, Meeus L, Martens M, Voronin V, Pižurica A (2018) A deep learning approach to crack detection in panel paintings. In: Image Process. Art Investig., pp 40–42. http://closertovaneyck.kikirpa.be/

Sizyakin R et al (2020) Crack detection in paintings using convolutional neural networks. IEEE Access 8:74535–74552. https://doi.org/10.1109/ACCESS.2020.2988856

Zou Z, Zhao X, Zhao P, Qi F, Wang N (2019) CNN-based statistics and location estimation of missing components in routine inspection of historic buildings. J Cult Herit 38:221–230. https://doi.org/10.1016/j.culher.2019.02.002

van Noord N, Postma E (2017) Learning scale-variant and scale-invariant features for deep image classification. Pattern Recognit 61:583–592. https://doi.org/10.1016/j.patcog.2016.06.005

Li X, Zeng Y, Gong Y (2019) Chronological classification of ancient paintings of mogao grottoes using convolutional neural networks. In: 2019 IEEE 4th int. conf. signal image process. ICSIP 2019, pp 51–55. https://doi.org/10.1109/SIPROCESS.2019.8868392

Zou Q, Cao Y, Li Q, Huang C, Wang S (2014) Chronological classification of ancient paintings using appearance and shape features. Pattern Recognit Lett 49:146–154. https://doi.org/10.1016/j.patrec.2014.07.002

Obeso AM, Vázquez MSG, Acosta AAR, Benois-Pineau J (2017) Connoisseur: classification of styles of Mexican architectural heritage with deep learning and visual attention prediction. In: ACM int. conf. proceeding ser., vol Part F1301. https://doi.org/10.1145/3095713.3095730

Szegedy C et al (2015) Going deeper with convolutions. In: Proc. IEEE comput. soc. conf. comput. vis. pattern recognit., vol 07–12-June, pp 1–9. https://doi.org/10.1109/CVPR.2015.7298594

Krizhevsky BA, Sutskever I, Hinton GE (2012) Cnn实际训练的. Commun ACM 60(6):84–90

Llamas J, Lerones PM, Medina R, Zalama E, Jaime G (2017) Applied sciences classification of architectural heritage images using deep learning techniques. https://doi.org/10.3390/app7100992

Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2016) Rethinking the inception architecture for computer vision. In: Proc. IEEE comput. soc. conf. comput. vis. pattern recognit., pp 2818–2826. https://doi.org/10.1109/CVPR.2016.308

Cao J et al (2020) Studies in conservation ancient mural classification method based on improved AlexNet network ancient mural classi fi cation method based on improved AlexNet network. Stud Conserv. https://doi.org/10.1080/00393630.2019.1706304

Pathak D, Krahenbuhl P, Donahue J, Darrell T, Efros AA (2016) Context encoders: feature learning by inpainting. In: Proc. IEEE comput. soc. conf. comput. vis. pattern recognit., pp 2536–2544. https://doi.org/10.1109/CVPR.2016.278

Liu G, Reda FA, Shih KJ, Wang TC, Tao A, Catanzaro B (2018) Partial convolutions. In: Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol 11215 LNCS, pp 89–105

Gatys LA, Ecker AS, Bethge M (2016) Image style transfer using convolutional neural networks. In: Proc. IEEE comput. soc. conf. comput. vis. pattern recognit., pp 2414–2423. https://doi.org/10.1109/CVPR.2016.265

Johnson J, Alahi A, Fei-Fei L (2016) Perceptual losses for real-time style transfer and super-resolution. Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol 9906 LNCS, pp 694–711. https://doi.org/10.1007/978-3-319-46475-6_43

Shelhamer E, Long J, Darrell T (2017) Fully convolutional networks for semantic segmentation. IEEE Trans Pattern Anal Mach Intell 39(4):640–651. https://doi.org/10.1109/TPAMI.2016.2572683

Badrinarayanan V, Kendall A, Cipolla R (2017) SegNet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans Pattern Anal Mach Intell 39(12):2481–2495. https://doi.org/10.1109/TPAMI.2016.2644615

Song Y et al (2018) Contextual-based image inpainting: infer, match, and translate. Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol 11206 LNCS, no. d, pp 3–18. https://doi.org/10.1007/978-3-030-01216-8_1

Xie J, Xu L, Chen E (2012) Image denoising and inpainting with deep neural networks. Adv Neural Inf Process Syst 1:341–349

Liu G, Reda FA, Shih KJ, Wang TC, Tao A, Catanzaro B (2018) Image inpainting for irregular holes using partial convolutions, vol 11215. LNCS. Springer

Xu L, Ren JSJ, Liu C, Jia J (2014) Deep convolutional neural network for image deconvolution. Adv Neural Inf Process Syst 2:1790–1798

Zamir SW et al (2021) Multi-stage progressive image restoration. In: 2021 IEEE/CVF conf. comput. vis. pattern recognit. http://arxiv.org/abs/2102.02808

Gatys L, Ecker A, Bethge M (2016) A neural algorithm of artistic style. J Vis 16(12):326. https://doi.org/10.1167/16.12.326

Iizuka S, Simo-Serra E, Ishikawa H (2017) Globally and locally consistent image completion. ACM Trans Graph. https://doi.org/10.1145/3072959.3073659

Wan Z et al (2020) Old photo restoration via deep latent space translation. IEEE Trans Pattern Anal Mach Intell 45:2071–2087

Nazeri K, Ng E, Joseph T, Qureshi FZ, Ebrahimi M (2019) EdgeConnect: generative image inpainting with adversarial edge learning. http://arxiv.org/abs/1901.00212

Zhang K, Zuo W, Gu S, Zhang L, Kong H. Learning deep CNN denoiser prior for image restoration

Yu J, Lin Z, Yang J, Shen X, Lu X, Huang T (2019) Free-form image inpainting with gated convolution. In: Proc. IEEE int. conf. comput. vis., pp 4470–4479. https://doi.org/10.1109/ICCV.2019.00457

Goodfellow I et al (2020) Generative adversarial networks. Commun ACM 63(11):139–144. https://doi.org/10.1145/3422622

Waqas S, Aditya Z, Salman A, Munawar K. Multi-stage progressive image restoration number of parameters (millions), pp 14821–14831

Yu J, Lin Z, Yang J, Shen X, Lu X, Huang TS (2018) Generative image inpainting with contextual attention. In: Proc. IEEE comput. soc. conf. comput. vis. pattern recognit., pp 5505–5514. https://doi.org/10.1109/CVPR.2018.00577

Chen Y, Hu H (2018) An improved method for semantic image inpainting with GANs: progressive inpainting. Neural Process Lett. https://doi.org/10.1007/s11063-018-9877-6

Isola P, Zhu JY, Zhou T, Efros AA (2017) Image-to-image translation with conditional adversarial networks. In: Proc.—30th IEEE conf. comput. vis. pattern recognition, CVPR 2017, pp 5967–5976. https://doi.org/10.1109/CVPR.2017.632

Karras T, Aila T. A style-based generator architecture for generative adversarial networks

Faster DDO. DeblurGAN-v2: deblurring (orders-of-magnitude) faster and better

Jiang Y et al (2021) EnlightenGAN: deep light enhancement without paired supervision. IEEE Trans Image Process 30(8):2340–2349. https://doi.org/10.1109/TIP.2021.3051462

Wang TC, Liu MY, Zhu JY, Tao A, Kautz J, Catanzaro B (2018) High-resolution image synthesis and semantic manipulation with conditional GANs. In: Proc. IEEE comput. soc. conf. comput. vis. pattern recognit., pp 8798–8807. https://doi.org/10.1109/CVPR.2018.00917

Li C, Wand M (2016) Precomputed real-time texture synthesis with markovian generative adversarial networks. In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol 9907 LNCS, pp 702–716. https://doi.org/10.1007/978-3-319-46487-9_43

She W (2020) Digital object restoration using generalized regression neural network deep learning—taking Dunhuang mural restoration as an example. Int J Electr Eng Educ. https://doi.org/10.1177/0020720920928549

Download references

Author information

Authors and affiliations: Praveen Kumar & Varun Gupta, Department of Computer Science and Engineering, Chandigarh College of Engineering and Technology, Chandigarh, India.

Corresponding author: Varun Gupta.

Ethics declarations

Conflict of interest: The authors have no conflict of interest.


About this article

Kumar, P., Gupta, V. Preserving Artistic Heritage: A Comprehensive Review of Virtual Restoration Methods for Damaged Artworks. Arch Computat Methods Eng (2024). https://doi.org/10.1007/s11831-024-10175-7

Received: 26 October 2023. Accepted: 12 July 2024. Published: 05 September 2024.

Underwater Image Enhancement Based on Luminance Reconstruction by Multi-Resolution Fusion of RGB Channels


1. Introduction

2. Related Work
   2.1. Physical Model-Based Methods
   2.2. Non-Physical Model-Based Methods
   2.3. Deep Learning-Based Methods
3. Proposed Algorithm
   3.1. Decomposition of Image Color Space
   3.2. Multi-Resolution Fusion-Based Luminance Reconstruction
   3.3. Color Correction
4. Results and Discussion
   4.1. Luminance Reconstruction Evaluation
   4.2. Qualitative Evaluation
   4.3. Quantitative Evaluation
5. Conclusions
Author Contributions
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
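The outline above names the two core steps of the proposed pipeline: reconstructing a luminance image by multi-resolution fusion of the R, G, and B channels, and a subsequent color correction. As a rough illustration only (not the authors' implementation), the following sketch assumes Laplacian-pyramid fusion with contrast-based weight maps and a simple gray-world color correction; all function names are hypothetical.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian blur with zero-padded borders; sufficient for a sketch."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(np.convolve, 0, img, k, mode="same")
    out = np.apply_along_axis(np.convolve, 1, out, k, mode="same")
    return out

def pyramids(img, levels=3):
    """Gaussian and Laplacian pyramids via 2x decimation / nearest upsampling."""
    gauss = [img]
    for _ in range(levels - 1):
        gauss.append(gaussian_blur(gauss[-1])[::2, ::2])
    lap = []
    for fine, coarse in zip(gauss[:-1], gauss[1:]):
        up = np.kron(coarse, np.ones((2, 2)))[: fine.shape[0], : fine.shape[1]]
        lap.append(fine - gaussian_blur(up))
    lap.append(gauss[-1])  # coarsest residual level
    return gauss, lap

def fuse_luminance(rgb, levels=3):
    """Blend the three color channels into one luminance image, level by level,
    weighting each channel by its local contrast (high-pass magnitude)."""
    chans = [rgb[..., c].astype(float) for c in range(3)]
    weights = [np.abs(ch - gaussian_blur(ch)) + 1e-6 for ch in chans]
    total = sum(weights)
    weights = [w / total for w in weights]  # per-pixel weights sum to 1
    fused = None
    for ch, w in zip(chans, weights):
        _, lap = pyramids(ch, levels)
        w_gauss, _ = pyramids(w, levels)
        contrib = [l * g for l, g in zip(lap, w_gauss)]
        fused = contrib if fused is None else [f + c for f, c in zip(fused, contrib)]
    # collapse the fused pyramid back to full resolution
    out = fused[-1]
    for lev in reversed(fused[:-1]):
        up = np.kron(out, np.ones((2, 2)))[: lev.shape[0], : lev.shape[1]]
        out = gaussian_blur(up) + lev
    return out

def gray_world_correct(rgb):
    """Gray-world color constancy: scale each channel so all channel means match."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    return np.clip(rgb * (means.mean() / means), 0.0, 1.0)
```

In the full method, the fused luminance would replace the luminance component of the color-space decomposition (Section 3.1) before the color-correction step; the paper's actual weight maps and fusion rules may differ.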

  • Marini, S.; Fanelli, E.; Sbragaglia, V.; Azzurro, E.; Del Rio Fernandez, J.; Aguzzi, J. Tracking Fish Abundance by Underwater Image Recognition. Sci. Rep. 2018 , 8 , 13748. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Lin, S.; Chi, K.-C.; Li, W.-T.; Tang, Y.-D. Underwater Optical Image Enhancement Based on Dominant Feature Image Fusion. Acta Photonica Sin. 2020 , 49 , 13. [ Google Scholar ] [ CrossRef ]
  • Deluxni, N.; Sudhakaran, P.; Kitmo; Ndiaye, M.F. A Review on Image Enhancement and Restoration Techniques for Underwater Optical Imaging Applications. IEEE Access 2023 , 11 , 111715–111737. [ Google Scholar ] [ CrossRef ]
  • Wang, N.; Zheng, H.; Zheng, B. Underwater Image Restoration via Maximum Attenuation Identification. IEEE Access 2017 , 5 , 18941–18952. [ Google Scholar ] [ CrossRef ]
  • Wang, Y.; Liu, H.; Chau, L.P. Single Underwater Image Restoration Using Adaptive Attenuation-Curve Prior. IEEE Trans. Circuits Syst. I Regul. Pap. 2018 , 65 , 992–1002. [ Google Scholar ] [ CrossRef ]
  • Ma, J.; Fan, X.; Wu, Z.; Zhang, X.; Shi, P.; Gengren, W. Underwater dam crack image enhancement algorithm based on improved dark channel prior. J. Image Graph. 2016 , 21 , 1574–1584. [ Google Scholar ] [ CrossRef ]
  • Francescangeli, M.; Marini, S.; Martínez, E.; Del Río, J.; Toma, D.M.; Nogueras, M.; Aguzzi, J. Image dataset for benchmarking automated fish detection and classification algorithms. Sci. Data 2023 , 10 , 5. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Uemura, T.; Lu, H.; Kim, H. Marine Organisms Tracking and Recognizing Using YOLO ; Springer: Cham, Switzerland, 2020; pp. 53–58. [ Google Scholar ]
  • Rova, A.; Mori, G.; Dill, L. One Fish, Two Fish, Butterfish, Trumpeter: Recognizing Fish in Underwater Video. In Proceedings of the APR Conference on Machine Vision Applications, IAPR MVA 2007, Tokyo, Japan, 16–18 May 2007; pp. 404–407. [ Google Scholar ]
  • Lebart, K.; Smith, C.; Trucco, E.; Lane, D.M. Automatic indexing of underwater survey video: Algorithm and benchmarking method. IEEE J. Ocean. Eng. 2003 , 28 , 673–686. [ Google Scholar ] [ CrossRef ]
  • Kahanov, Y.A.; Royal, J.G. Analysis of hull remains of the Dor D Vessel, Tantura Lagoon, Israel. Int. J. Naut. Archaeol. 2001 , 30 , 257–265. [ Google Scholar ] [ CrossRef ]
  • Peng, Y.T.; Cosman, P.C. Underwater Image Restoration Based on Image Blurriness and Light Absorption. IEEE Trans. Image Process. 2017 , 26 , 1579–1594. [ Google Scholar ] [ CrossRef ]
  • Lu, H.; Li, Y.; Xu, X.; Li, J.; Liu, Z.; Li, X.; Yang, J.; Serikawa, S. Underwater image enhancement method using weighted guided trigonometric filtering and artificial light correction. J. Vis. Commun. Image Represent. 2016 , 38 , 504–516. [ Google Scholar ] [ CrossRef ]
  • Chu, X.; Fu, Z.; Yu, S.; Tu, X.; Huang, Y.; Ding, X. Underwater Image Enhancement and Super-Resolution Using Implicit Neural Networks. In Proceedings of the 2023 IEEE International Conference on Image Processing (ICIP), Kuala Lumpur, Malaysia, 8–11 October 2023; pp. 1295–1299. [ Google Scholar ]
  • Singh, N.; Bhat, A. A systematic review of the methodologies for the processing and enhancement of the underwater images. Multimed. Tools Appl. 2023 , 82 , 38371–38396. [ Google Scholar ] [ CrossRef ]
  • Han, M.; Lyu, Z.; Qiu, T.; Xu, M. A Review on Intelligence Dehazing and Color Restoration for Underwater Images. IEEE Trans. Syst. Man Cybern. Syst. 2020 , 50 , 1820–1832. [ Google Scholar ] [ CrossRef ]
  • Chiang, J.Y.; Chen, Y.C. Underwater Image Enhancement by Wavelength Compensation and Dehazing. IEEE Trans. Image Process. 2012 , 21 , 1756–1769. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Zhou, J.; Zhang, D.; Zhang, W. A multifeature fusion method for the color distortion and low contrast of underwater images. Multimed. Tools Appl. 2021 , 80 , 17515–17541. [ Google Scholar ] [ CrossRef ]
  • Deng, X.; Wang, H.; Liu, X. Underwater Image Enhancement Based on Removing Light Source Color and Dehazing. IEEE Access 2019 , 7 , 114297–114309. [ Google Scholar ] [ CrossRef ]
  • Zhang, W.; Dong, L.; Pan, X.; Zhou, J.; Qin, L.; Xu, W. Single Image Defogging Based on Multi-Channel Convolutional MSRCR. IEEE Access 2019 , 7 , 72492–72504. [ Google Scholar ] [ CrossRef ]
  • Zhou, J.; Hao, M.; Zhang, D.; Zou, P.; Zhang, W. Fusion PSPnet Image Segmentation Based Method for Multi-Focus Image Fusion. IEEE Photonics J. 2019 , 11 , 1–12. [ Google Scholar ] [ CrossRef ]
  • Schechner, Y.Y.; Averbuch, Y. Regularized Image Recovery in Scattering Media. IEEE Trans. Pattern Anal. Mach. Intell. 2007 , 29 , 1655–1660. [ Google Scholar ] [ CrossRef ]
  • Drews, P.L.J.; Nascimento, E.R.; Botelho, S.S.C.; Campos, M.F.M. Underwater Depth Estimation and Image Restoration Based on Single Images. IEEE Comput. Graph. Appl. 2016 , 36 , 24–35. [ Google Scholar ] [ CrossRef ]
  • Li, C.Y.; Guo, J.C.; Cong, R.M.; Pang, Y.W.; Wang, B. Underwater Image Enhancement by Dehazing With Minimum Information Loss and Histogram Distribution Prior. IEEE Trans. Image Process. 2016 , 25 , 5664–5677. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Carlevaris-Bianco, N.; Mohan, A.; Eustice, R.M. Initial results in underwater single image dehazing. In Proceedings of the OCEANS 2010 MTS/IEEE SEATTLE, Seattle, WA, USA, 20–23 September 2010; pp. 1–8. [ Google Scholar ]
  • Schechner, Y.Y.; Karpel, N. Clear underwater vision. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2004, Washington, DC, USA, 27 June–2 July 2004; pp. 1–9. [ Google Scholar ]
  • Uplavikar, P.; Wu, Z.; Wang, Z. All-In-One Underwater Image Enhancement using Domain-Adversarial Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, CVPR 2019, Long Beach, CA, USA, 16–20 June 2019; pp. 1–8. [ Google Scholar ]
  • Emberton, S.; Chittka, L.; Cavallaro, A. Underwater image and video dehazing with pure haze region segmentation. Comput. Vis. Image Underst. 2018 , 168 , 145–156. [ Google Scholar ] [ CrossRef ]
  • Zhang, W.; Dong, L.; Pan, X.; Zou, P.; Qin, L.; Xu, W. A Survey of Restoration and Enhancement for Underwater Images. IEEE Access 2019 , 7 , 182259–182279. [ Google Scholar ] [ CrossRef ]
  • McGlamery, B. A Computer Model for Underwater Camera Systems ; SPIE: Bellingham, WA, USA, 1980; Volume 0208. [ Google Scholar ]
  • Xie, K.; Pan, W.; Xu, S. An Underwater Image Enhancement Algorithm for Environment Recognition and Robot Navigation. Robotics 2018 , 7 , 14. [ Google Scholar ] [ CrossRef ]
  • Trucco, E.; Olmos-Antillon, A.T. Self-Tuning Underwater Image Restoration. IEEE J. Ocean. Eng. 2006 , 31 , 511–519. [ Google Scholar ] [ CrossRef ]
  • Kaiming, H.; Jian, S.; Xiaoou, T. Single image haze removal using dark channel prior. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1956–1963. [ Google Scholar ]
  • Drews, P., Jr.; Nascimento, E.d.; Moraes, F.; Botelho, S.; Campos, M. Transmission Estimation in Underwater Single Images. In Proceedings of the 2013 IEEE International Conference on Computer Vision Workshops, Sydney, Australia, 2–8 December 2013; pp. 825–830. [ Google Scholar ]
  • Galdran, A.; Pardo, D.; Picón, A.; Alvarez-Gila, A. Automatic Red-Channel underwater image restoration. J. Vis. Commun. Image Represent. 2015 , 26 , 132–145. [ Google Scholar ] [ CrossRef ]
  • Tang, Z.; Zhou, B.; Dai, X.; Gu, H. Underwater Robot Visual Enhancements Based on the Improved DCP Algorithm. Jiqiren/Robot 2018 , 40 , 222–230. [ Google Scholar ] [ CrossRef ]
  • Peng, Y.T.; Cao, K.; Cosman, P.C. Generalization of the Dark Channel Prior for Single Image Restoration. IEEE Trans. Image Process. 2018 , 27 , 2856–2868. [ Google Scholar ] [ CrossRef ]
  • Hou, G.; Li, J.; Wang, G.; Yang, H.; Huang, B.; Pan, Z. A novel dark channel prior guided variational framework for underwater image restoration. J. Vis. Commun. Image Represent. 2020 , 66 , 102732. [ Google Scholar ] [ CrossRef ]
  • Xie, J.; Hou, G.; Wang, G.; Pan, Z. A Variational Framework for Underwater Image Dehazing and Deblurring. IEEE Trans. Circuits Syst. Video Technol. 2022 , 32 , 3514–3526. [ Google Scholar ] [ CrossRef ]
  • Li, Y.; Hou, G.; Zhuang, P.; Pan, Z. Dual High-Order Total Variation Model for Underwater Image Restoration. arXiv 2024 , arXiv:2407.14868. [ Google Scholar ]
  • Iqbal, K.; Odetayo, M.; James, A.; Rosalina Abdul, S.; Abdullah Zawawi Hj, T. Enhancing the low quality images using Unsupervised Colour Correction Method. In Proceedings of the 2010 IEEE International Conference on Systems, Man and Cybernetics, Istanbul, Turkey, 10–13 October 2010; pp. 1703–1709. [ Google Scholar ]
  • Abdul Ghani, A.S.; Mat Isa, N.A. Underwater image quality enhancement through integrated color model with Rayleigh distribution. Appl. Soft Comput. 2015 , 27 , 219–230. [ Google Scholar ] [ CrossRef ]
  • Abdul Ghani, A.S.; Mat Isa, N.A. Enhancement of low quality underwater image through integrated global and local contrast correction. Appl. Soft Comput. 2015 , 37 , 332–344. [ Google Scholar ] [ CrossRef ]
  • Abdul Ghani, A.S. Image contrast enhancement using an integration of recursive-overlapped contrast limited adaptive histogram specification and dual-image wavelet fusion for the high visibility of deep underwater image. Ocean Eng. 2018 , 162 , 224–238. [ Google Scholar ] [ CrossRef ]
  • Fu, X.; Fan, Z.; Ling, M.; Huang, Y.; Ding, X. Two-step approach for single underwater image enhancement. In Proceedings of the 2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Xiamen, China, 6–9 November 2017; pp. 789–794. [ Google Scholar ]
  • Fu, X.; Zhuang, P.; Huang, Y.; Liao, Y.; Zhang, X.P.; Ding, X. A retinex-based enhancing approach for single underwater image. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 4572–4576. [ Google Scholar ]
  • Zhang, S.; Wang, T.; Dong, J.; Yu, H. Underwater image enhancement via extended multi-scale Retinex. Neurocomputing 2017 , 245 , 1–9. [ Google Scholar ] [ CrossRef ]
  • Tang, C.; von Lukas, U.F.; Vahl, M.; Wang, S.; Wang, Y.; Tan, M. Efficient underwater image and video enhancement based on Retinex. Signal Image Video Process. 2019 , 13 , 1011–1018. [ Google Scholar ] [ CrossRef ]
  • Ancuti, C.O.; Ancuti, C.; Vleeschouwer, C.D.; Bekaert, P. Color Balance and Fusion for Underwater Image Enhancement. IEEE Trans. Image Process. 2018 , 27 , 379–393. [ Google Scholar ] [ CrossRef ]
  • Li, X.; Hou, G.; Li, K.; Pan, Z. Enhancing underwater image via adaptive color and contrast enhancement, and denoising. Eng. Appl. Artif. Intell. 2022 , 111 , 104759. [ Google Scholar ] [ CrossRef ]
  • Zhang, W.; Jin, S.; Zhuang, P.; Liang, Z.; Li, C. Underwater Image Enhancement via Piecewise Color Correction and Dual Prior Optimized Contrast Enhancement. IEEE Signal Process. Lett. 2023 , 30 , 229–233. [ Google Scholar ] [ CrossRef ]
  • Li, C.; Anwar, S.; Hou, J.; Cong, R.; Guo, C.; Ren, W. Underwater Image Enhancement via Medium Transmission-Guided Multi-Color Space Embedding. IEEE Trans. Image Process. 2021 , 30 , 4985–5000. [ Google Scholar ] [ CrossRef ]
  • Pham, T.T.; Mai, T.T.N.; Lee, C. Deep Unfolding Network with Physics-Based Priors for Underwater Image Enhancement. In Proceedings of the 2023 IEEE International Conference on Image Processing (ICIP), Kuala Lumpur, Malaysia, 8–11 October 2023; pp. 46–50. [ Google Scholar ]
  • Anwar, S.; Li, C. Diving deeper into underwater image enhancement: A survey. Signal Process. Image Commun. 2020 , 89 , 115978. [ Google Scholar ] [ CrossRef ]
  • Roska, T.; Chua, L.O. The CNN universal machine: An analogic array computer. IEEE Trans. Circuits Syst. II Analog Digit. Signal Process. 1993 , 40 , 163–173. [ Google Scholar ] [ CrossRef ]
  • Cai, B.; Xu, X.; Jia, K.; Qing, C.; Tao, D. DehazeNet: An End-to-End System for Single Image Haze Removal. IEEE Trans. Image Process. 2016 , 25 , 5187–5198. [ Google Scholar ] [ CrossRef ]
  • Hou, M.; Liu, R.; Fan, X.; Luo, Z. Joint Residual Learning for Underwater Image Enhancement. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 4043–4047. [ Google Scholar ]
  • Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; Tao, D. An Underwater Image Enhancement Benchmark Dataset and Beyond. IEEE Trans. Image Process. 2020 , 29 , 4376–4389. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Islam, M.J.; Xia, Y.; Sattar, J. Fast Underwater Image Enhancement for Improved Visual Perception. IEEE Robot. Autom. Lett. 2020 , 5 , 3227–3234. [ Google Scholar ] [ CrossRef ]
  • Li, J.; Skinner, K.A.; Eustice, R.M.; Johnson-Roberson, M. WaterGAN: Unsupervised Generative Network to Enable Real-Time Color Correction of Monocular Underwater Images. IEEE Robot. Autom. Lett. 2018 , 3 , 387–394. [ Google Scholar ] [ CrossRef ]
  • Espinosa, A.R.; McIntosh, D.; Albu, A.B. An Efficient Approach for Underwater Image Improvement: Deblurring, Dehazing, and Color Correction. In Proceedings of the 2023 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW), Waikoloa, HI, USA, 3–7 January 2023; pp. 206–215. [ Google Scholar ]
  • Zhou, W.-H.; Zhu, D.-M.; Shi, M.; Li, Z.-X.; Duan, M.; Wang, Z.-Q.; Zhao, G.-L.; Zheng, C.-D. Deep images enhancement for turbid underwater images based on unsupervised learning. Comput. Electron. Agric. 2022 , 202 , 107372. [ Google Scholar ] [ CrossRef ]
  • Fabbri, C.; Islam, M.J.; Sattar, J. Enhancing Underwater Imagery Using Generative Adversarial Networks. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 7159–7165. [ Google Scholar ]
  • Wen, P.-Z.; Chen, J.-M.; Xiao, Y.-N.; Wen, Y.-Y.; Huang, W.-M. Underwater image enhancement algorithm based on GAN and multi-level wavelet CNN. J. Zhejiang Univ. (Eng. Sci.) 2022 , 56 , 213–224. [ Google Scholar ]
  • Vanmali, A.V.; Gadre, V.M. Visible and NIR image fusion using weight-map-guided Laplacian–Gaussian pyramid for improving scene visibility. Sadhana 2017 , 42 , 1063–1082. [ Google Scholar ] [ CrossRef ]
  • Hou, X.; Zhang, L. Saliency Detection: A Spectral Residual Approach. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8. [ Google Scholar ]
  • Ebner, M. Color Constancy ; Wiley Publishing: Hoboken, NJ, USA, 2007. [ Google Scholar ]
  • Liu, R.; Fan, X.; Zhu, M.; Hou, M.; Luo, Z. Real-World Underwater Enhancement: Challenges, Benchmarks, and Solutions Under Natural Light. IEEE Trans. Circuits Syst. Video Technol. 2020 , 30 , 4861–4875. [ Google Scholar ] [ CrossRef ]
  • Panetta, K.; Gao, C.; Agaian, S. Human-Visual-System-Inspired Underwater Image Quality Measures. IEEE J. Ocean. Eng. 2016 , 41 , 541–551. [ Google Scholar ] [ CrossRef ]
  • Yang, M.; Sowmya, A. An Underwater Color Image Quality Evaluation Metric. IEEE Trans. Image Process. 2015 , 24 , 6062–6071. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Yang, N.; Zhong, Q.; Li, K.; Cong, R.; Zhao, Y.; Kwong, S. A reference-free underwater image quality assessment metric in frequency domain. Signal Process. Image Commun. 2021 , 94 , 116218. [ Google Scholar ] [ CrossRef ]
  • Hou, G.; Zhang, S.; Lu, T.; Li, Y.; Pan, Z.; Huang, B. No-reference quality assessment for underwater images. Comput. Electr. Eng. 2024 , 118 , 109293. [ Google Scholar ] [ CrossRef ]


(Reconstructed tables: the original column headers and reference numbers did not survive extraction, so each method's ten values are shown as two groups of five metrics, apparently one group per test set. The 0.744 entry in the first UNTV row is likely missing a digit in the source.)

First comparison:
DCP [ ]            2.955  0.579  0.664  0.581  7.267  |  3.943  0.575  0.506  0.557  6.780
GDCP [ ]           2.652  0.604  0.866  0.599  7.195  |  1.479  0.569  0.661  0.586  7.383
Two-step [ ]       3.779  0.488  0.619  0.551  7.457  |  2.134  0.476  0.397  0.495  7.215
Fusion-based [ ]   4.219  0.450  0.520  0.548  7.419  |  2.724  0.427  0.327  0.484  7.148
UTV [ ]            2.307  0.574  0.608  0.574  6.112  |  1.105  0.526  0.357  0.494  5.313
UNTV [ ]           3.412  0.536  0.882  0.588  0.744  |  2.421  0.511  0.570  0.570  7.048
PCDE [ ]           4.936  0.506  0.447  0.510  7.719  |  2.520  0.486  0.339  0.605  6.893
Our method         4.962  0.497  0.772  0.591  7.655  |  3.330  0.477  0.440  0.550  7.393

Second comparison:
DCP [ ]            1.656  0.579  0.453  0.432  7.320  |  1.980  0.577  0.489  0.522  7.290
GDCP [ ]           4.743  0.563  0.457  0.474  7.539  |  4.824  0.565  0.501  0.469  7.565
Two-step [ ]       3.687  0.437  0.457  0.450  7.230  |  3.808  0.439  0.447  0.444  7.245
Fusion-based [ ]   3.723  0.405  0.313  0.422  7.184  |  3.860  0.415  0.333  0.515  7.230
UTV [ ]           −0.376  0.514  0.248  0.371  5.294  | −0.052  0.504  0.246  0.364  5.580
UNTV [ ]           3.888  0.505  0.748  0.494  7.639  |  3.970  0.513  0.752  0.484  7.580
PCDE [ ]           3.925  0.505  0.701  0.502  7.572  |  3.941  0.504  0.707  0.503  7.527
Our method         4.975  0.491  0.674  0.535  7.764  |  4.841  0.493  0.703  0.537  7.719
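Several of the baselines in the tables (DCP, GDCP) build on the dark channel prior, which observes that haze-free outdoor patches usually contain some pixel that is dark in at least one color channel. As background only, here is a minimal numpy sketch of the dark channel statistic itself; the window size and toy images are illustrative, not the compared implementations:

```python
import numpy as np

def dark_channel(img, patch=7):
    """Dark channel: per-pixel minimum over RGB, then a local minimum filter.

    img: HxWx3 array in [0, 1]; patch: square window size.
    """
    mins = img.min(axis=2)                       # minimum over color channels
    h, w = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

# A haze-free scene tends to have a dark channel near zero, while haze
# (or underwater backscatter) lifts it towards the airlight value.
rng = np.random.default_rng(0)
clear = rng.random((32, 32, 3)) ** 3             # mostly dark pixels
hazy = 0.3 * clear + 0.7                         # blended with bright airlight
print(dark_channel(clear).mean() < dark_channel(hazy).mean())  # True
```

Restoration methods in the DCP family then invert the haze formation model using a transmission map estimated from this statistic; the compared variants differ mainly in how that estimate is computed and regularized.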

Share and Cite

Wang, Y.; Chen, Z.; Yan, G.; Zhang, J.; Hu, B. Underwater Image Enhancement Based on Luminance Reconstruction by Multi-Resolution Fusion of RGB Channels. Sensors 2024 , 24 , 5776. https://doi.org/10.3390/s24175776






Title: AWRaCLe: All-Weather Image Restoration Using Visual In-Context Learning

Abstract: All-Weather Image Restoration (AWIR) under adverse weather conditions is a challenging task due to the presence of different types of degradations. Prior research in this domain relies on extensive training data but lacks the utilization of additional contextual information for restoration guidance. Consequently, the performance of existing methods is limited by the degradation cues that are learnt from individual training samples. Recent advancements in visual in-context learning have introduced generalist models that are capable of addressing multiple computer vision tasks simultaneously by using the information present in the provided context as a prior. In this paper, we propose All-Weather Image Restoration using Visual In-Context Learning (AWRaCLe), a novel approach for AWIR that innovatively utilizes degradation-specific visual context information to steer the image restoration process. To achieve this, AWRaCLe incorporates Degradation Context Extraction (DCE) and Context Fusion (CF) to seamlessly integrate degradation-specific features from the context into an image restoration network. The proposed DCE and CF blocks leverage CLIP features and incorporate attention mechanisms to adeptly learn and fuse contextual information. These blocks are specifically designed for visual in-context learning under all-weather conditions and are crucial for effective context utilization. Through extensive experiments, we demonstrate the effectiveness of AWRaCLe for all-weather restoration and show that our method advances the state-of-the-art in AWIR.
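The DCE and CF blocks described in the abstract amount to conditioning a restoration network on context features through attention. The sketch below is not the authors' implementation: it is a generic single-head cross-attention in numpy, with all names, shapes, and weights purely illustrative, showing how degraded-image features (queries) can attend over context embeddings (keys/values) such as CLIP features:

```python
import numpy as np

def cross_attention(queries, context, W_q, W_k, W_v):
    """Single-head cross-attention: restoration features (queries)
    attend over degradation-context features (keys/values)."""
    Q = queries @ W_q                 # (n, d)
    K = context @ W_k                 # (m, d)
    V = context @ W_v                 # (m, d)
    scores = Q @ K.T / np.sqrt(Q.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over context items
    return weights @ V                # context-conditioned features

rng = np.random.default_rng(0)
d = 16
feat = rng.normal(size=(64, d))       # features of the degraded image
ctx = rng.normal(size=(4, d))         # e.g. embeddings of context image pairs
Ws = [rng.normal(size=(d, d)) * 0.1 for _ in range(3)]
fused = cross_attention(feat, ctx, *Ws)
print(fused.shape)                    # (64, 16)
```

In the paper's setting the fused features would then be injected into the restoration backbone, so that the same network can adapt its behavior to whichever degradation the context exemplifies.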
Subjects: Computer Vision and Pattern Recognition (cs.CV)



COMMENTS

  1. A Comprehensive Review of Deep Learning-Based Real-World Image Restoration

    Real-world imagery does not always exhibit good visibility and clean content, but often suffers from various kinds of degradations (e.g., noise, blur, rain drops, fog, color distortion, etc.), which severely affect vision-driven tasks (e.g., image classification, target recognition, and tracking, etc.). Thus, restoring the true scene from such degraded images is of significance. In recent ...

  2. Image Repair and Restoration Using Deep Learning

    The art of image restoration lies in finding missing information inside the image and returning it back to its original state. While the field of fine arts and photographic restoration has been known and studied for years, recent technological advances have eliminated time-consuming processes and made image restoration far more feasible and easier to do. A novel method known as image ...

  3. Research on Image Restoration Algorithms Based on ...

    In recent years, with the rapid development of computer vision, there have been significant advancements in image restoration techniques. However, many current algorithms still struggle with accurately restoring degraded images with lost information. In this paper, we present a new image restoration algorithm called CodeFormerGAN, which is based on the Transformer model. This algorithm ...

  4. An Experimental-Based Review of Image Enhancement and Image Restoration

    Starting from this paper, we pinpoint the key shortcomings of existing methods, drawing recommendations for future research in this area. Our review of underwater image enhancement and restoration provides researchers with the necessary background to appreciate challenges and opportunities in this important field. View this article on IEEE Xplore

  5. Introduction to the Issue on Deep Learning for Image/Video Restoration

    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, VOL. 15, NO. 2, FEBRUARY 2021 ... Another active area of research is perceptual image restoration and SR. Variations of the GAN architecture have been ... 13 papers on image/video restoration and super-resolution, 5 papers on image/video compression, and 2 papers on point ...

  6. PDF SUBMISSION TO IEEE TRANSACTIONS ON IMAGE PROCESSING 1 Deep Likelihood

    We primarily focus on three image restoration tasks, i.e., inpainting, interpolation, and SISR (in which image blurring is also introduced); hence, image restoration in this paper refers to these three tasks. Our main contributions are: We propose a novel and general method to generalize off-the-shelf image restoration CNNs to succeed ...

  7. Restormer: Efficient Transformer for High-Resolution Image Restoration

    Since convolutional neural networks (CNNs) perform well at learning generalizable image priors from large-scale data, these models have been extensively applied to image restoration and related tasks. Recently, another class of neural architectures, Transformers, have shown significant performance gains on natural language and high-level vision tasks. While the Transformer model mitigates the ...

  8. PDF Dformer: Learning Efficient Image Restoration with Perceptual Guidance

    Dformer: Learning Efficient Image Restoration with Perceptual Guidance. This CVPR Workshop paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.

  9. An Old Photo Image Restoration Processing Based on Deep Neural Network

    This technology can only repair images with simple structures and small damaged areas and is difficult to apply in people's daily lives. The emergence of deep learning technology has accelerated the pace of research on image restoration. This article will discuss the methods of repairing old photos based on deep neural networks.

  10. Recent progress in digital image restoration techniques: A review

    In the future, image restoration can be applied to images with multiple degradations, because most papers present images with a single degradation only [23], [87]. Thus, another very promising research direction in image restoration is to use combinations of different kinds of distortions.

  11. Research on Image Restoration Based on CNN and Transformer

    Image restoration is an important task in the field of computer vision. Its main goal is to fill in damaged areas or remove unwanted objects to make the image look more complete and natural. In recent years, with the powerful feature extraction capabilities of deep neural networks and the use of deep learning techniques to solve the problem of image restoration, significant progress has been ...

  12. A survey of deep learning approaches to image restoration

    In this paper, we present an extensive review on deep learning methods for image restoration tasks. Deep learning techniques, led by convolutional neural networks, have received a great deal of attention in almost all areas of image processing, especially in image classification.However, image restoration is a fundamental and challenging topic and plays significant roles in image processing ...

  13. A survey of deep learning approaches to image restoration

    The ideas and novel approaches in image restoration can benefit these aforementioned tasks, and vice versa. This survey is intended as a timely update and overview of deep learning approaches to image restoration and is organised as follows. Section 2 reviews existing deep neural networks for image restoration in general, followed by detailed ...

  14. Recent progress in digital image restoration techniques:

    L. Ankita, Research paper on image restoration using decision based filtering techniques, Int. J. Eng. Dev. Res. 4 (2016) 477-481. ... V. Papyan, M. Elad, Multi-scale patch-based image restoration, IEEE Trans. Image Process. 25 (2015) 249-261.

  15. [2207.01074] Variational Deep Image Restoration

    This paper presents a new variational inference framework for image restoration and a convolutional neural network (CNN) structure that can solve the restoration problems described by the proposed framework. Earlier CNN-based image restoration methods primarily focused on network architecture design or training strategy with non-blind scenarios where the degradation models are known or assumed ...

  16. Image Restoration Application and Methods for Different ...

    Digital images are impression of a particular scenario composed of picture elements in form of pixels. Image restoration is a technique that is used to reinstitute the source or original image by extracting noise and blur from the image. We can obtain images in a wide range from day-to-day photography to astronomy, medical imaging, microscopy, remote sensing, and so on. Researchers have put ...

  17. PDF Restormer: Efficient Transformer for High-Resolution Image Restoration

    Image restoration is the task of reconstructing a high-quality image by removing degradations (e.g., noise, blur, rain drops) from a degraded input. Due to the ill-posed nature, it is a highly challenging problem that usually requires strong image priors for effective restoration. Since convolutional neural networks (CNNs) perform well at learning ...

  18. Vision Transformers in Image Restoration: A Survey

    The Vision Transformer (ViT) architecture has been remarkably successful in image restoration. For a while, Convolutional Neural Networks (CNN) predominated in most computer vision tasks. Now, both CNN and ViT are efficient approaches that demonstrate powerful capabilities to restore a better version of an image given in a low-quality format. In this study, the efficiency of ViT in image ...

  19. Multiple Adaptive Derivative Passive Image Processing Approach to

    The robustness and adaptive capacity of CNN-based methods are insufficient for practical applications, because in the current trend of CNN-based low-light image enhancement, corrective adjustment is performed after feature extraction and classification [ ] rather than in real time. The method should adaptively adjust to different application conditions and types of low ...

  20. The research on image restoration algorithm based on ...

    In this paper, the author deals with the research on image restoration algorithm based on improved total variation model. It is based on the regularization technique, puts forward an adaptive TV (Total Variation) model to achieve smooth denoising and protecting the details of image; it is the minimizing process of converting image restoration into cost function. By using conjugate gradient ...
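The paper above casts restoration as minimizing a total-variation cost. As a rough illustration of the idea (not the paper's adaptive TV model or its conjugate-gradient solver), here is a plain gradient-descent sketch of smoothed-TV denoising; all parameter values are illustrative:

```python
import numpy as np

def tv_denoise(f, lam=0.1, step=0.1, iters=200, eps=1e-2):
    """Minimize 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps)
    by plain gradient descent on a smoothed total-variation cost."""
    u = f.copy()
    for _ in range(iters):
        gx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        gy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        px, py = gx / mag, gy / mag                  # normalized gradient field
        # divergence of (px, py) via backward differences
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - f) - lam * div)
    return u

rng = np.random.default_rng(1)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0                                  # a step edge
noisy = clean + 0.2 * rng.normal(size=clean.shape)
denoised = tv_denoise(noisy)
print(np.abs(denoised - clean).mean() < np.abs(noisy - clean).mean())
```

The divergence uses periodic shifts for brevity, so the boundary handling here is cruder than in published solvers; the TV term suppresses noise while penalizing edges far less than quadratic smoothing would.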

  21. [2108.10257] SwinIR: Image Restoration Using Swin Transformer

    Image restoration is a long-standing low-level vision problem that aims to restore high-quality images from low-quality images (e.g., downscaled, noisy and compressed images). While state-of-the-art image restoration methods are based on convolutional neural networks, few attempts have been made with Transformers which show impressive performance on high-level vision tasks. In this paper, we ...

  22. Preserving Artistic Heritage: A Comprehensive Review of Virtual

    Restoration of damaged artwork is an important task to preserve the culture and history of humankind. Restoration of damaged artwork is a delicate, complex, and irreversible process that requires preserving the artist's style and semantics while removing the damages from the artwork. Digital restoration of artworks can guide artists in physically restoring artworks. This paper groups the ...

  23. UniFRD: A Unified Method for Facial Image Restoration ...

    This paper presents a Unified Facial image and video Restoration method based on the Diffusion probabilistic model (UniFRD), designed to effectively address both single- and multi-type image degradation. The noise predictor in UniFRD consists of a ViT-based encoder and a novel Separation Fusion Decoding Module (SFDM). The flexible feature optimization strategy allows for decoding complex ...

  24. Sensors

    Underwater image enhancement technology is crucial for the human exploration and exploitation of marine resources. The visibility of underwater images is affected by visible light attenuation. This paper proposes an image reconstruction method based on the decomposition-fusion of multi-channel luminance data to enhance the visibility of underwater images. The proposed method is a single ...

  25. TO APPEAR IN IEEE TRANSACTIONS ON IMAGE PROCESSING 1 Image Restoration

    represent different image restoration problems; for example: image denoising when H is the n×n identity matrix I_n, image inpainting when H is a selection of m rows of I_n, and image deblurring when H is a blurring operator. In all of these cases, a prior image model s(x) is required in order to successfully estimate x from the observations y ...
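The observation model quoted above, y = Hx, can be made concrete with a toy five-pixel "image"; the matrices below are small illustrative numpy constructions matching the three cases mentioned (identity for denoising, row selection for inpainting, a circulant blur for deblurring):

```python
import numpy as np

n = 5
I_n = np.eye(n)

# Denoising: H is the identity; y is x plus noise.
H_denoise = I_n

# Inpainting: H selects the m observed rows of I_n (here pixels 0, 2, 4).
observed = [0, 2, 4]
H_inpaint = I_n[observed, :]

# Deblurring: H is a blurring operator (here a circulant 3-tap average).
kernel = np.array([0.25, 0.5, 0.25])
H_blur = np.stack([np.roll(np.concatenate([kernel, np.zeros(n - 3)]), k - 1)
                   for k in range(n)])

x = np.arange(1.0, n + 1.0)           # a tiny "image" as a vector
print(H_denoise @ x)                  # [1. 2. 3. 4. 5.]
print(H_inpaint @ x)                  # [1. 3. 5.]
print(H_blur @ x)                     # a smoothed copy of x
```

Estimating x then means inverting H under the chosen prior s(x), which is exactly where the restoration methods surveyed here differ.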

  26. Digital image restoration

    The article introduces digital image restoration to the reader who is just beginning in this field, and provides a review and analysis for the reader who may already be well-versed in image restoration. The perspective on the topic is one that comes primarily from work done in the field of signal processing. Thus, many of the techniques and works cited relate to classical signal processing ...

  27. AdaptIR: Parameter Efficient Multi-task Adaptation

    Image restoration, aiming to recover high-quality images from their degraded counterparts, is a fundamental and long-standing problem in computer vision and further has a wide range of sub-problems, including super-resolution, image denoising, deraining, low-light image enhancement, etc. Early research [18, 68, 54] typically focused on studying each task independently, while neglecting the ...

  28. [2409.00263] AWRaCLe: All-Weather Image Restoration using Visual In

    All-Weather Image Restoration (AWIR) under adverse weather conditions is a challenging task due to the presence of different types of degradations. Prior research in this domain relies on extensive training data but lacks the utilization of additional contextual information for restoration guidance. Consequently, the performance of existing methods is limited by the degradation cues that are ...