Superpixel Cut for Figure-Ground Image Segmentation

Author(s):  
Michael Ying Yang ◽  
Bodo Rosenhahn

Figure-ground image segmentation has long been a challenging problem in computer vision. Apart from the difficulty of establishing an effective framework to divide the image pixels into meaningful groups, the notions of figure and ground often need to be properly defined by providing either user inputs or object models. In this paper, we propose a novel graph-based segmentation framework, called superpixel cut. The key idea is to formulate foreground segmentation as finding a subset of superpixels that partitions a graph over superpixels. The problem is formulated as a Min-Cut problem. To this end, we propose a novel cost function that simultaneously minimizes the inter-class similarity and maximizes the intra-class similarity. This cost function is optimized using parametric programming. After a small learning step, our approach is fully automatic and fully bottom-up, requiring no high-level knowledge such as shape priors or scene content. It recovers coherent components of images, providing a set of multiscale hypotheses for high-level reasoning. We evaluate the proposed framework by comparing it to other generic figure-ground segmentation approaches. Our method achieves improved performance on state-of-the-art benchmark databases.
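As a rough illustration of the graph construction (not the authors' implementation: the superpixel algorithm, Gaussian edge weights, and the two hard seeds below are assumptions, and the paper's parametric-programming optimizer is replaced by a plain s-t min cut to keep the sketch self-contained):

```python
# Sketch: bipartition a graph over superpixels with a min s-t cut.
import numpy as np
import networkx as nx
from skimage import data, segmentation, graph as sk_graph

image = data.astronaut()
labels = segmentation.slic(image, n_segments=200, compactness=10, start_label=0)

# Mean color per superpixel as a simple appearance feature.
n = labels.max() + 1
means = np.array([image[labels == i].mean(axis=0) for i in range(n)])

# Region adjacency graph; edge capacity = color similarity, so the min cut
# prefers to separate dissimilar neighboring superpixels.
rag = sk_graph.rag_mean_color(image, labels)
G = nx.Graph()
for u, v in rag.edges():
    w = np.exp(-np.linalg.norm(means[u] - means[v]) ** 2 / (2 * 30.0 ** 2))
    G.add_edge(u, v, capacity=w)

# Hypothetical seeds standing in for the learned figure/ground terms:
# tie one superpixel to the source and one to the sink with infinite capacity.
G.add_edge("s", 100, capacity=float("inf"))
G.add_edge("t", 0, capacity=float("inf"))
cut_value, (figure_side, _) = nx.minimum_cut(G, "s", "t")
figure_mask = np.isin(labels, [i for i in figure_side if i != "s"])
```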


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ha Min Son ◽  
Wooho Jeon ◽  
Jinhyun Kim ◽  
Chan Yeong Heo ◽  
Hye Jin Yoon ◽  
...  

Although computer-aided diagnosis (CAD) is used to improve the quality of diagnosis in various medical fields such as mammography and colonography, it is not used in dermatology, where noninvasive screening tests are performed only with the naked eye and avoidable inaccuracies may exist. This study shows that CAD may also be a viable option in dermatology by presenting a novel method to sequentially combine accurate segmentation and classification models. Given an image of the skin, we decompose the image to normalize it and extract high-level features. Using a neural network-based segmentation model to create a segmented map of the image, we then cluster sections of abnormal skin and pass this information to a classification model. We classify each cluster into different common skin diseases using another neural network model. Our segmentation model achieves better performance compared to previous studies, and also achieves a near-perfect sensitivity score in unfavorable conditions. Our classification model is more accurate than a baseline model trained without segmentation, while also being able to classify multiple diseases within a single image. This improved performance may be sufficient to use CAD in the field of dermatology.
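The segment-then-classify pipeline can be pictured with a small sketch (assumed interfaces throughout: classify_crop, the binary segmentation map, and the area threshold are hypothetical stand-ins, not the paper's models):

```python
# Sketch: cluster abnormal pixels into lesions, then classify each lesion.
import numpy as np
from scipy import ndimage

def classify_crop(crop):
    """Stand-in for the second neural network (hypothetical): returns a
    disease label for one lesion crop."""
    return "disease_a" if crop.mean() > 0.5 else "disease_b"

def diagnose(image, seg_map, min_area=25):
    """Cluster abnormal pixels into connected components and classify each."""
    labeled, _ = ndimage.label(seg_map > 0.5)          # clustering step
    results = []
    for slc in ndimage.find_objects(labeled):
        crop = image[slc]
        if crop.size >= min_area:                      # drop spurious blobs
            results.append((slc, classify_crop(crop)))
    return results

# Toy usage: random data stands in for the segmentation model's output.
rng = np.random.default_rng(0)
img = rng.random((128, 128))
seg = ndimage.binary_dilation(img > 0.92, iterations=2).astype(float)
print(diagnose(img, seg)[:3])
```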


2021 ◽  
Vol 15 (2) ◽  
pp. 1-25
Author(s):  
Amal Alhosban ◽  
Zaki Malik ◽  
Khayyam Hashmi ◽  
Brahim Medjahed ◽  
Hassan Al-Ababneh

Service-Oriented Architectures (SOA) enable the automatic creation of business applications from independently developed and deployed Web services. As Web services are inherently a priori unknown, how to deliver reliable Web service compositions is a significant and challenging problem. Services involved in an SOA often do not operate under a single processing environment and need to communicate using different protocols over a network. Under such conditions, designing a fault management system that is both efficient and extensible is a challenging task. In this article, we propose SFSS, a self-healing framework for SOA fault management that predicts, identifies, and resolves faults in SOAs. In SFSS, we identified a set of high-level exception handling strategies based on the QoS performance of the different component services and the preferences articulated by the service consumers. Multiple recovery plans are generated and evaluated according to the performance of the selected component services, and the best recovery plan is then executed. We assess the overall service dependence (i.e., the degree to which the service is independent of other services) using the generated plan and the available invocation information of the component services. The experimental results show that the proposed technique enhances service selection quality by choosing the services with the highest scores and improves overall system performance. The results also indicate the applicability of SFSS, showing improved performance in comparison to similar approaches.
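As a sketch of the plan-evaluation step (all names, weights, and the QoS model below are hypothetical; the abstract does not specify SFSS's scoring function), one might score each candidate recovery plan from the observed QoS of its component services and execute the best one:

```python
# Sketch: score recovery plans by component-service QoS and pick the best.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    reliability: float   # 0..1, observed success rate
    latency_ms: float

@dataclass
class RecoveryPlan:
    services: list

def score(plan, w_rel=0.7, w_lat=0.3):
    """Weighted QoS score: reward reliability, penalize latency.
    Weights model the consumer's stated preferences (hypothetical)."""
    rel, lat = 1.0, 0.0
    for s in plan.services:
        rel *= s.reliability        # reliability of a chain multiplies
        lat += s.latency_ms         # latencies along a chain add up
    return w_rel * rel - w_lat * (lat / 1000.0)

plans = [
    RecoveryPlan([Service("backup-pay", 0.99, 120)]),
    RecoveryPlan([Service("retry-pay", 0.90, 40), Service("audit", 0.99, 30)]),
]
best = max(plans, key=score)   # execute_best_plan(best) would follow
print(best)
```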


2014 ◽  
Vol 548-549 ◽  
pp. 1179-1184 ◽  
Author(s):  
Wen Ting Yu ◽  
Jing Ling Wang ◽  
Long Ye

Image segmentation with low computational burden has long been an important goal for researchers. One popular image segmentation method is the normalized cut algorithm, but it is ill-suited to high-resolution images because its computational cost is very high [1]. To address this problem, we propose a novel approach to high-resolution image segmentation based on Normalized Cuts. The proposed method first preprocesses an image with the normalized cut algorithm to form segmented regions, and then applies k-Means clustering to those regions. The experimental results verify that the proposed algorithm achieves improved performance compared to the normalized cut algorithm.
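One way to picture the two-stage idea (a sketch under assumed details: the superpixel pre-step, the mean-color features, and k are our choices, not the paper's) is to run a normalized-cut partition on a downsampled image and then cluster the resulting regions with k-Means, which is far cheaper than running the normalized cut at full resolution:

```python
# Sketch: normalized cut to form regions, then k-Means on region features.
import numpy as np
from skimage import data, transform, segmentation, graph
from sklearn.cluster import KMeans

image = transform.rescale(data.astronaut(), 0.25, channel_axis=-1)

# Stage 1: normalized cut over a superpixel region adjacency graph.
labels = segmentation.slic(image, n_segments=300, start_label=0)
rag = graph.rag_mean_color(image, labels, mode="similarity")
ncut_labels = graph.cut_normalized(labels, rag)

# Stage 2: k-Means on per-region mean colors to merge regions into k groups.
regions = np.unique(ncut_labels)
feats = np.array([image[ncut_labels == r].mean(axis=0) for r in regions])
km = KMeans(n_clusters=4, n_init=10).fit(feats)

final = np.zeros_like(ncut_labels)
for r, c in zip(regions, km.labels_):
    final[ncut_labels == r] = c      # final segmentation map
```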


1971 ◽  
Vol 13 (2) ◽  
pp. 329-336 ◽  
Author(s):  
W. Rutter ◽  
T. R. Laird ◽  
P. J. Broadbent

SUMMARY
1. Forty Greyface (Border Leicester ♂ × Blackface ♀) and 40 North Country Cheviot ewes, due to lamb to a Suffolk ram in April 1969, were housed during December 1968 in eight groups of 10 ewes, the breeds being penned separately.
2. One pen of each breed received a basal diet of either hay, grass silage or arable silage with a 'high' level of concentrate supplementation, and one pen of each breed was given hay with a 'low' level of concentrate supplementation. Within each pen, five of the 10 ewes were clipped at housing and the other five were clipped in the following June.
3. Voluntary intakes of the basal diets declined with advancing stage of pregnancy, particularly for those receiving grass silage. Feed had no differential effects on the performance of the ewes in terms of wool yield, body-weight change, birth weight of lamb per ewe or perinatal lamb mortality.
4. The Greyfaces clipped at housing yielded less wool than those Greyfaces clipped in the following June. Time of clipping had no influence on the wool yield of the Cheviots, and wool grades were not affected by time of clipping. Ewes clipped in December performed significantly better than ewes not clipped until June, with a higher proportion of ewes producing multiple births (P<0·1), a higher total birth weight of lamb per ewe (P<0·01) and reduced perinatal mortality of the lambs (P<0·05). The total effect of this improved performance was that the clipped ewes produced 53·5% more live lambs than those ewes not clipped until the following June.


1969 ◽  
Vol 28 (2) ◽  
pp. 623-629 ◽  
Author(s):  
Charles G. Halcomb ◽  
Peggy Blackwell

This research was designed to test the hypothesis that relevant incentives would result in improved performance on a visual monitoring task. Course credit was used as an incentive due to its apparent relevance for the college population. Two groups of Ss were employed. One group received credit made contingent on performance; the other group received credit for participation. The contingent group performed at a higher level than did the non-contingent group. Level of performance for both groups was high, suggesting that a relevant incentive can be effective in maintaining a high level of performance over time.


Author(s):  
R E Crump ◽  
J G E Bryan ◽  
D Nicholson ◽  
R Thompson ◽  
G Simm

In order that genetic progress in British beef breeds can be improved, performance traits have been recorded by the Meat and Livestock Commission for many years, and a large number of pedigree beef herds have recorded with the Meat and Livestock Commission during this period. Until recently, these records were used only for within-herd contemporary comparisons, so that the results for animals could not be compared across herds or over time.

Through the use of Individual Animal Model Best Linear Unbiased Prediction (BLUP), differences between herds and between contemporary groups within herds can be accounted for, provided there are genetic links between herds and contemporary groups. As a result of the small pedigree herd size in Great Britain, typically fewer than 20 animals, sires are often chosen from outside the herd in order to reduce inbreeding. This practice has resulted in a relatively high level of connectedness between contemporary groups, which enables the BLUP procedure to disentangle management and genetic effects.
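For reference, the individual animal model behind this kind of BLUP evaluation can be written in its standard textbook form (the generic formulation, not an equation taken from this paper):

```latex
% Individual animal model: fixed effects (e.g. herd, contemporary group)
% in b, and a random additive genetic effect per animal in u.
y = Xb + Zu + e, \qquad u \sim N(0, A\sigma_u^2), \qquad e \sim N(0, I\sigma_e^2)

% Henderson's mixed model equations give BLUE of b and BLUP of u jointly,
% with \alpha = \sigma_e^2/\sigma_u^2 and A the pedigree relationship matrix:
\begin{pmatrix} X'X & X'Z \\ Z'X & Z'Z + A^{-1}\alpha \end{pmatrix}
\begin{pmatrix} \hat{b} \\ \hat{u} \end{pmatrix}
=
\begin{pmatrix} X'y \\ Z'y \end{pmatrix}
```

The across-herd comparison described above works precisely because the relationship matrix A links animals in different herds through shared outside sires, letting the equations separate the herd and contemporary-group effects in b from the genetic merit in u.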

