Fast gradual matching measure for image retrieval based on visual similarity and spatial relations

2006 ◽  
Vol 21 (7) ◽  
pp. 711-723
Author(s):  
Jean-François Omhover ◽  
Marcin Detyniecki


Author(s):  
Richard Chbeir ◽  
Youssef Amghar ◽  
Andre Flory

Several approaches have been proposed for image retrieval, each describing images according to the requirements of its application domain. No global approach exists for image retrieval in complex domains (such as medicine), where content is multifaceted. This paper presents a framework for retrieving medical images: a three-dimensional approach applied to the medical domain, together with the elements required for both the knowledge base and the retrieval process. The proposed approach, built on this multifaceted aspect, makes it possible to describe an image through its multifaceted content (contextual, physical, and semantic). Conceptual relations are presented for designing a knowledge base that supports coherent and efficient indexing and retrieval, and the spatial relations these processes require are also described.


2011 ◽  
Vol 62 (2) ◽  
pp. 479-505 ◽  
Author(s):  
Carlos Arturo Hernández-Gracidas ◽  
Luis Enrique Sucar ◽  
Manuel Montes-y-Gómez

Author(s):  
Sagarmay Deb

Images are generated everywhere and from many sources: satellite pictures, biomedical, scientific, entertainment, and sports imagery, and more, captured by video cameras, ordinary cameras, X-ray machines, and so on. These images are stored in image databases. Content-based image retrieval (CBIR) techniques are applied to access these vast volumes of images efficiently. Areas where CBIR is applied include weather forecasting, scientific database management, art galleries, law enforcement, and fashion design. Initially, image representation was based on attributes of the image such as height, length, and angle; these attributes were extracted manually, managed within conventional database management systems, and used to specify queries. This entails a high level of image abstraction (Chen, Li & Wang, 2004). A feature-based object-recognition approach followed, in which the extraction of images based on color, shape, texture, and the spatial relations among objects in the image was automated. Recent CBIR research combines both of these approaches and has given rise to efficient image representations and data models, query-processing algorithms, intelligent query interfaces, and domain-independent system architectures. 
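The attribute-based style of retrieval described above can be sketched in a few lines. This is an illustrative example only; the record fields and the `query` helper are hypothetical, not taken from any of the cited systems.

```python
# Hedged sketch: attribute-based image retrieval, where manually extracted
# metadata (height, length, angle) is queried like a relational table.
# All field names and records here are illustrative assumptions.
images = [
    {"id": 1, "height": 480, "length": 640, "angle": 0},
    {"id": 2, "height": 1024, "length": 768, "angle": 90},
]

def query(db, **conditions):
    """Return images whose attributes match all the given exact conditions."""
    return [img for img in db if all(img.get(k) == v for k, v in conditions.items())]

print([img["id"] for img in query(images, angle=90)])  # [2]
```

In a real system the same query would be expressed in SQL against an image-metadata table; the point is that only the manually assigned attributes, not the image content, are searchable.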
As mentioned, image retrieval can be based on low-level visual features such as color (Antani, Rodney Long & Thoma, 2004; Deb & Kulkarni, 2007; Deb & Kulkarni, 2007a; Ritter & Cooper, 2007; Srisuk & Kurutach, 2002; Sural, Qian & Pramanik, 2002; Traina, Traina, Jr., Bueno & Chino, 2003; Verma & Kulkarni, 2004), texture (Antani et al., 2004; Deb & Kulkarni, 2007a; Zhou, Feng & Shi, 2001), shape (Ritter & Cooper, 2007; Safar, Shahabi & Sun, 2000; Shahabi & Safar, 1999; Tao & Grosky, 1999), high-level semantics (Forsyth et al., 1996), or a combination of these (Zhao & Grosky, 2001). Most work done so far, however, is based on analyzing the explicit meanings of images. Images also carry implicit meanings, which convey more, and different, information than explicit analysis alone provides. In this paper we introduce the concept of the emergence index and the analysis of the implicit meanings of an image, which we believe should be taken into account when analyzing images in image or multimedia databases.
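Color, the most widely used low-level feature above, is typically captured as a quantized histogram and compared by histogram intersection. The following is a minimal sketch of that idea, assuming a simple joint RGB binning; the function names and bin count are illustrative, not drawn from any of the cited papers.

```python
# Hedged sketch: color-histogram CBIR, a common low-level feature pipeline.
# Pixels are (r, g, b) tuples in 0-255; bins and names are assumptions.

def color_histogram(pixels, bins=4):
    """Quantize RGB pixels into a normalized joint histogram of bins**3 cells."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    n = len(pixels)
    return [h / n for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Toy retrieval: rank two "images" by color similarity to a red query image.
query = [(250, 10, 10)] * 100     # mostly-red query
red_img = [(240, 20, 20)] * 100   # falls in the same color bin
blue_img = [(10, 10, 240)] * 100  # falls in a different bin
hq = color_histogram(query)
print(histogram_intersection(hq, color_histogram(red_img)))   # 1.0
print(histogram_intersection(hq, color_histogram(blue_img)))  # 0.0
```

Texture and shape features plug into the same ranking loop; only the feature extractor and the distance measure change.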


Author(s):  
Wenjie Wang ◽  
Yufeng Shi ◽  
Shiming Chen ◽  
Qinmu Peng ◽  
Feng Zheng ◽  
...  

Zero-shot sketch-based image retrieval (ZS-SBIR), which aims to retrieve photos with sketches under the zero-shot scenario, has shown great promise in real-world applications. Most existing methods leverage language models to generate class prototypes and use them to arrange the locations of all categories in a common space for photos and sketches. Although great progress has been made, few of these methods consider whether such pre-defined prototypes are necessary for ZS-SBIR, where the locations of unseen-class samples in the embedding space are actually determined by visual appearance, and a visual embedding actually performs better. To this end, we propose a novel Norm-guided Adaptive Visual Embedding (NAVE) model that adaptively builds the common space from visual similarity instead of language-based pre-defined prototypes. To further enhance the representation quality of unseen classes for both the photo and sketch modalities, a modality norm discrepancy and a noisy-label regularizer are jointly employed to measure and repair the modality bias of the learned common embedding. Experiments on two challenging datasets demonstrate the superiority of NAVE over state-of-the-art competitors.
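The retrieval step common to embedding-based ZS-SBIR methods, ranking photos by their similarity to a sketch in a shared embedding space, can be sketched as follows. This is not the NAVE model itself; the embeddings, dimensions, and helper names below are illustrative assumptions.

```python
# Hedged sketch: nearest-neighbor retrieval in a shared sketch-photo
# embedding space via cosine similarity. Embeddings here are toy vectors;
# a real system would obtain them from trained encoders.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(sketch_emb, photo_embs):
    """Return photo indices ranked by similarity to the sketch embedding."""
    return sorted(range(len(photo_embs)), key=lambda i: -cosine(sketch_emb, photo_embs[i]))

sketch = [1.0, 0.0]                     # toy sketch embedding
photos = [[0.0, 1.0], [0.9, 0.1]]       # toy photo embeddings
print(retrieve(sketch, photos))         # [1, 0]
```

Under this view, what a method like NAVE changes is how the embeddings are produced and calibrated across modalities; the ranking mechanism stays the same.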


2008 ◽  
Author(s):  
Maciej A. Mazurowski ◽  
Brian P. Harrawood ◽  
Jacek M. Zurada ◽  
Georgia D. Tourassi
