
Automatic Identification and Description of Jewelry Through Computer Vision and Neural Networks for Translators and Interpreters

applsci-15-05538-v2.pdf (6.906 MB)
Author
Alcalde-Llergo, José M.
Ruiz Mezcua, Aurora
Ávila Ramírez, Rocío
Zingoni, Andrea
Taborri, Juri
Yeguas-Bolívar, Enrique
Publisher
MDPI
Date
2025
Subject
Image captioning
Accessory classification
Jewelry recognition
Deep learning
Computer vision
Natural language descriptions
Abstract
Identifying jewelry pieces presents a significant challenge due to the wide range of styles and designs. Currently, precise descriptions are typically limited to industry experts. However, translators and interpreters often require a comprehensive understanding of these items. In this study, we introduce an innovative approach to automatically identify and describe jewelry using neural networks. This method enables translators and interpreters to quickly access accurate information, aiding in resolving queries and gaining essential knowledge about jewelry. Our model operates at three distinct levels of description, employing computer vision techniques and image captioning to emulate expert analysis of accessories. The key innovation involves generating natural language descriptions of jewelry across three hierarchical levels, capturing nuanced details of each piece. Different image captioning architectures are utilized to detect jewels in images and generate descriptions with varying levels of detail. To demonstrate the effectiveness of our approach in recognizing diverse types of jewelry, we assembled a comprehensive database of accessory images. The evaluation process involved comparing various image captioning architectures, focusing particularly on the encoder–decoder model, crucial for generating descriptive captions. After thorough evaluation, our final model achieved a captioning accuracy exceeding 90%.
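The abstract describes an encoder–decoder pipeline in which a visual encoder produces image features and a decoder generates a natural language caption token by token. The sketch below illustrates that general scheme only — it is a toy, NumPy-only stand-in with random weights, not the paper's actual architecture; the vocabulary, dimensions, and function names (`encode_image`, `decode_greedy`) are hypothetical, and a real system would use trained CNN/transformer components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary; a real captioning model learns one from training captions.
VOCAB = ["<start>", "<end>", "a", "gold", "silver", "ring", "necklace", "with", "gemstone"]
STOI = {w: i for i, w in enumerate(VOCAB)}
HIDDEN = 16

def encode_image(image: np.ndarray, dim: int = HIDDEN) -> np.ndarray:
    """Stand-in for a CNN encoder: random projection of pixels to a feature vector."""
    W = rng.standard_normal((dim, image.size))
    return np.tanh(W @ image.ravel())

def decode_greedy(features, W_h, W_o, max_len=8):
    """Greedy RNN-style decoder: feed the previous token back in, pick the
    highest-scoring next token, stop at <end> or max_len."""
    h = features.copy()
    token = STOI["<start>"]
    caption = []
    for _ in range(max_len):
        onehot = np.zeros(len(VOCAB))
        onehot[token] = 1.0
        h = np.tanh(W_h @ np.concatenate([h, onehot]))  # recurrent state update
        logits = W_o @ h                                # score every vocab word
        token = int(np.argmax(logits))                  # greedy choice
        if VOCAB[token] == "<end>":
            break
        caption.append(VOCAB[token])
    return caption

# Usage with random "weights" and a random 8x8 "image":
image = rng.standard_normal((8, 8))
features = encode_image(image)
W_h = rng.standard_normal((HIDDEN, HIDDEN + len(VOCAB)))
W_o = rng.standard_normal((len(VOCAB), HIDDEN))
caption = decode_greedy(features, W_h, W_o)
```

With random weights the output caption is meaningless; the point is the control flow — encode once, then decode greedily — which is the shape shared by the encoder–decoder captioning architectures the study compares. The paper's three hierarchical description levels would correspond to decoders (or prompts) trained on captions of increasing detail.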
URI
http://hdl.handle.net/10396/33210
Source
Alcalde-Llergo, J. M., Ruiz-Mezcua, A., Ávila-Ramírez, R., Zingoni, A., Taborri, J., & Yeguas-Bolívar, E. (2025). Automatic Identification and Description of Jewelry Through Computer Vision and Neural Networks for Translators and Interpreters. Applied Sciences, 15(10), 5538. https://doi.org/10.3390/app15105538
Publisher's version
https://doi.org/10.3390/app15105538
Collections
  • DCSH-Artículos, capítulos, libros...
  • DACETE-Artículos, capítulos, libros...
  • Artículos, capítulos, libros...UCO

DSpace software copyright © 2002-2015  DuraSpace
Contact Us | Send Feedback
© Biblioteca Universidad de Córdoba
Biblioteca  UCODigital
 

 
