Highlighting with Flicker: Some Basic Visual Search Considerations

Author(s):  
Karl F. Van Orden ◽  
Joseph DiVita

Previous research has demonstrated that search times are reduced when flicker is used to highlight color-coded symbols, but that flicker is not distracting when subjects must search for non-highlighted symbols. This prompted an examination of flicker and other stimulus dimensions in a conjunctive search paradigm. In all experiments, at least 15 subjects completed a minimum of 330 trials in which they indicated the presence or absence of target stimuli on a CRT display that contained 8, 16, or 32 items. In Experiment 1, subjects searched for blue-steady or red-flickering (5.6 Hz) circular targets among blue-flickering and red-steady distractors. Blue-steady targets produced a more efficient search rate (11.6 msec/item) than red-flickering targets (19.3 msec/item). In Experiment 2, a conjunction of flicker and size (large and small filled circles) yielded the opposite pattern: search performance for large-flickering targets was unequivocally parallel. In Experiment 3, conjunctions of form and flicker yielded highly serial search performance. The findings are consistent with the response properties of the parvocellular and magnocellular channels of the early visual system, and suggest that search is most efficient when one of these channels can be filtered out completely.
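The per-item rates above can be read through the standard linear model of serial search. As a worked illustration in LaTeX notation (the linear form and a shared intercept are assumptions for exposition, not values reported in the abstract):

    % Standard serial-search model: response time grows linearly with display size N,
    % with slope s in msec/item.
    \mathrm{RT}(N) = \mathrm{RT}_0 + s \cdot N

Under this reading, at the largest display size used (N = 32), the blue-steady advantage over red-flickering targets amounts to roughly (19.3 − 11.6) × 32 ≈ 246 msec, assuming equal intercepts across conditions.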

Author(s):  
Rachel J. Cunio ◽  
David Dommett ◽  
Joseph Houpt

Maintaining spatial awareness is a primary concern for operators, but relying only on visual displays can overload the visual system and lead to performance decrements. Our study examined the benefits of providing spatialized auditory cues for maintaining visual awareness as a method of combating visual system overload. We examined the visual search performance of seven participants in an immersive, dynamic (moving), three-dimensional virtual reality environment under three conditions: no cues, non-masked spatialized auditory cues, and masked spatialized auditory cues. Results indicated a significant reduction in visual search time from the no-cue condition when either auditory cue type was presented, with the masked auditory condition being the slower of the two. The results of this study can inform attempts to improve visual search performance in operational environments, such as determining appropriate display types for providing spatial information.


Perception ◽  
10.1068/p2933 ◽  
2000 ◽  
Vol 29 (2) ◽  
pp. 241-250 ◽  
Author(s):  
Jiye Shen ◽  
Eyal M Reingold ◽  
Marc Pomplun

We examined the flexibility of guidance in a conjunctive search task by manipulating the ratios between different types of distractors. Participants were asked to decide whether a target was present or absent among distractors sharing either colour or shape. Results indicated a strong effect of distractor ratio on search performance. Shorter latency to move, faster manual response, and fewer fixations per trial were observed at extreme distractor ratios. The distribution of saccadic endpoints also varied flexibly as a function of distractor ratio. When there were very few same-colour distractors, the saccadic selectivity was biased towards the colour dimension. In contrast, when most of the distractors shared colour with the target, the saccadic selectivity was biased towards the shape dimension. Results are discussed within the framework of the guided search model.


2019 ◽  
Vol 5 (1) ◽  
Author(s):  
Alejandro Lleras ◽  
Zhiyuan Wang ◽  
Anna Madison ◽  
Simona Buetti

Recently, Wang, Buetti and Lleras (2017) developed an equation to predict search performance in heterogeneous visual search scenes (i.e., multiple types of non-target objects simultaneously present) based on parameters observed when participants perform search in homogeneous scenes (i.e., when all non-target objects are identical to one another). The equation was based on a computational model in which every item in the display is processed with unlimited capacity and independently of the others, with the goal of determining whether the item is likely to be a target. The model was tested in two experiments using real-world objects. Here, we extend those findings by testing the predictive power of the equation with simpler objects. Further, we compare the model’s performance under two stimulus arrangements: spatially-intermixed displays (items randomly placed around the scene) and spatially-segregated displays (identical items presented near each other). This comparison allowed us to isolate and quantify the facilitatory effect of processing displays that contain identical items (homogeneity facilitation), a factor that improves performance in visual search above and beyond target-distractor dissimilarity. The results suggest that homogeneity facilitation effects in search arise from local item-to-item interactions (rather than from rejecting items as “groups”) and that the strength of those interactions might be determined by stimulus complexity (with simpler stimuli producing stronger interactions and, thus, stronger homogeneity facilitation effects).
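A minimal sketch of the kind of prediction such an equation makes, assuming the logarithmic functional form from the homogeneous-search work (RT = a + Σ D_i · ln(N_i + 1), with D_i the logarithmic slope for distractor type i and N_i its count); the parameter values below are hypothetical, not the authors’ fits:

    from math import log

    def predicted_rt(a, distractors):
        """Predict heterogeneous-search RT from homogeneous-search parameters.

        a           -- intercept (msec) estimated from homogeneous displays
        distractors -- iterable of (D_i, N_i): logarithmic slope (msec) for
                       distractor type i and the number of items of that type
        Each distractor type contributes independently and logarithmically,
        mirroring unlimited-capacity, independent processing of every item.
        """
        return a + sum(D * log(N + 1) for D, N in distractors)

    # Hypothetical values: 400-msec intercept; 30 msec/log-unit for 12
    # target-similar distractors, 10 msec/log-unit for 6 dissimilar ones.
    print(predicted_rt(400.0, [(30.0, 12), (10.0, 6)]))  # ~496 msec

Note that spatial arrangement does not enter this form at all, which is what makes the intermixed-versus-segregated comparison a clean test for homogeneity facilitation above and beyond the equation’s predictions.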


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 32-32 ◽  
Author(s):  
A von Mühlenen ◽  
H J Müller ◽  
R Groner

Three visual search experiments were designed to investigate the processes involved in the efficient detection of motion-form conjunction targets. In Experiment 1, the number of movement directions in the display was varied, along with whether or not the target direction was predictable. Search was less efficient when items moved in multiple directions compared to just one direction; whether items moved in two, three, or four directions made relatively little difference. Pre-cuing the target direction facilitated search to a small, but non-negligible, extent; the facilitation was not due to better predictability of the display region that contained the target at the start of a trial. Experiment 2 was designed to estimate the relative contributions of stationary and moving nontargets to the search rate. Search rates were primarily determined by the number of moving nontargets; stationary nontargets sharing the target form also exerted a significant effect, but one only about half as strong as that of moving nontargets; stationary nontargets not sharing the target form had little influence. In Experiment 3 we examined the effects of movement speed and item size on search performance. Increasing the speed of the moving items (> 1.5 deg s−1) facilitated target detection when the task required segregating the moving from the stationary items; when no segregation was necessary, increasing the movement speed impaired performance. When the display items were ‘large’, motion speed had little effect on target detection; but when the items were ‘small’, search efficiency declined with item movement faster than 1.5 deg s−1. A ‘parallel continuous processing’ account of motion-form conjunction search is proposed.


2019 ◽  
Author(s):  
Yunhui Zhou ◽  
Yuguo Yu

Abstract
Humans perform sequences of eye movements to search for a target in complex environments, but the efficiency of human search strategies is still controversial. Previous studies showed that humans can optimally integrate information across fixations and determine the next fixation location. However, these models ignored the temporal control of eye movements and the limited capacity of human memory, and their predictions did not agree well with the details of human eye movement metrics. Here, we measured the temporal course of the human visibility map and recorded the eye movements of human subjects performing a visual search task. We further built a continuous-time eye movement model that incorporates saccadic inaccuracy, saccadic bias, and memory constraints in the visual system. This model agreed with many spatial and temporal properties of human eye movements, and showed statistical dependencies between successive eye movements similar to those observed in humans. In addition, our model predicted that the human saccade decision is shaped by a memory capacity of around 8 recent fixations. These results suggest that the human visual search strategy is not strictly optimal in the sense of fully utilizing the visibility map, but instead balances search performance against the costs of performing the task.

Author Summary
How humans determine when and where to make eye movements during visual search is an important unsolved issue. Previous studies suggested that humans can optimally use the visibility map to determine fixation locations, but we found that such models did not agree with the details of human eye movement metrics, because they ignored several realistic biological limitations of human brain function and could not explain the temporal control of eye movements. Instead, we showed that considering the temporal course of visual processing and several constraints of the visual system greatly improved predictions of the spatiotemporal properties of human eye movements, while only slightly affecting search performance in terms of the median number of fixations. Therefore, humans may not use the visibility map in a strictly optimal sense, but instead balance search performance against the costs of performing the task.
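A schematic sketch of the memory constraint described above: inhibition of return limited to roughly the 8 most recent fixations, so older locations become eligible for refixation. The greedy random selection rule and all names here are illustrative assumptions, not the authors’ continuous-time model:

    import random
    from collections import deque

    def search_fixations(locations, is_target, memory_capacity=8,
                         max_fixations=200):
        """Fixation loop with a limited inhibition-of-return memory.

        Only the `memory_capacity` most recent fixations are remembered;
        older ones fall out of the deque and may be revisited.
        """
        memory = deque(maxlen=memory_capacity)
        fixations = []
        for _ in range(max_fixations):
            candidates = [loc for loc in locations if loc not in memory]
            if not candidates:  # everything was recently visited
                candidates = list(locations)
            # Stand-in for a visibility-map-weighted choice of the next saccade.
            loc = random.choice(candidates)
            fixations.append(loc)
            memory.append(loc)
            if is_target(loc):
                break
        return fixations

With a capacity of 8, such a searcher occasionally refixates locations examined long ago, illustrating how a model can trade strict optimality for a lower memory cost.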


2019 ◽  
Author(s):  
Elizabeth J. Halfen ◽  
John F. Magnotti ◽  
Md. Shoaibur Rahman ◽  
Jeffrey M. Yau

Abstract
Although we experience complex patterns over our entire body, how we selectively perceive multi-site touch over our bodies remains poorly understood. Here, we characterized tactile search behavior over the body using a tactile analog of the classic visual search task. Participants judged whether a target stimulus (e.g., a 10-Hz vibration) was present or absent on the upper or lower limbs. When present, the target stimulus could occur alone or with distractor stimuli (e.g., 30-Hz vibrations) at other body locations. We varied the number and spatial configurations of the distractors as well as the target and distractor frequencies, and measured the impact of these factors on search response times. First, we found that response times were faster on target-present trials than on target-absent trials. Second, response times increased with the number of stimulated sites, suggesting a serial search process. Third, search performance differed depending on stimulus frequencies. This frequency-dependent behavior may be related to perceptual grouping effects based on timing cues. We constructed models to explore how the locations of the tactile cues influenced search behavior. Our modeling results reveal that, in isolation, cues on the index fingers make relatively greater contributions to search performance than stimulation experienced at other body sites. Additionally, co-stimulation of sites within the same limb, or simply on the same body side, preferentially influences search behavior. Our collective findings identify some principles of attentional search that are common to vision and touch, but others that highlight key differences that may be unique to body-based spatial perception.

New & Noteworthy
Little is known about how we selectively experience multi-site touch over the body. Using a tactile analog of the classic visual search paradigm, we show that tactile search behavior for flutter cues is generally consistent with a serial search process. Modeling results reveal the preferential contributions of index finger stimulation and of two-site interactions involving ipsilateral and within-limb patterns. Our results offer initial evidence for spatial and temporal principles underlying tactile search behavior over the body.
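The site-specific and pairwise effects reported above suggest a simple additive picture; the sketch below is a hypothetical illustration of that idea (the additive form, site names, and all weights are assumptions, not the authors’ fitted model):

    def tactile_search_rt(sites, site_weights, pair_weights, base=600.0):
        """Illustrative additive model of tactile search response time (msec).

        sites        -- set of stimulated body sites
        site_weights -- per-site contribution in isolation (negative = faster)
        pair_weights -- adjustment applied when both sites of a pair are
                        stimulated, e.g. within-limb or same-side grouping
        """
        rt = base + sum(site_weights[s] for s in sites)
        for pair, adjustment in pair_weights.items():
            if set(pair) <= sites:
                rt += adjustment
        return rt

    site_weights = {"left_index": -40.0, "left_palm": 25.0, "right_foot": 30.0}
    pair_weights = {("left_index", "left_palm"): -15.0}  # within-limb pair
    print(tactile_search_rt({"left_index", "left_palm"},
                            site_weights, pair_weights))  # 570.0 msec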


2015 ◽  
Vol 74 (1) ◽  
pp. 55-60 ◽  
Author(s):  
Alexandre Coutté ◽  
Gérard Olivier ◽  
Sylvane Faure

Computer use generally requires manual interaction with human-computer interfaces. In this experiment, we studied the influence of manual response preparation on co-occurring shifts of attention to information on a computer screen. Participants carried out a visual search task on a computer screen while simultaneously preparing to reach for either a proximal or a distal switch on a horizontal device, with either their right or left hand. The response properties were not predictive of the target’s spatial position. The results mainly showed that the preparation of a manual response influenced visual search: (1) the visual target whose location was congruent with the goal of the prepared response was found faster; (2) the visual target whose location was congruent with the laterality of the response hand was found faster; (3) these effects had a cumulative influence on visual search performance; (4) the magnitude of the influence of the response goal on visual search was marginally negatively correlated with the speed of response execution. These results are discussed within the general framework of structural coupling between perception and motor planning.


Author(s):  
Gwendolyn Rehrig ◽  
Reese A. Cullimore ◽  
John M. Henderson ◽  
Fernanda Ferreira

Abstract
According to the Gricean Maxim of Quantity, speakers provide the amount of information listeners require to correctly interpret an utterance, and no more (Grice in Logic and conversation, 1975). However, speakers do often violate the Maxim of Quantity, especially when the redundant information improves reference precision (Degen et al. in Psychol Rev 127(4):591–621, 2020). Redundant (non-contrastive) information may facilitate real-world search if it narrows the spatial scope under consideration or improves target template specificity. The current study investigated whether non-contrastive modifiers that improve reference precision facilitate visual search in real-world scenes. In two visual search experiments, we compared search performance when perceptually relevant, but non-contrastive, modifiers were included in the search instruction. Participants (NExp. 1 = 48, NExp. 2 = 48) searched for a unique target object following a search instruction that contained either no modifier, a location modifier (Experiment 1: on the top left; Experiment 2: on the shelf), or a color modifier (the black lamp). In Experiment 1 only, the target was located faster when the verbal instruction included either modifier, and there was an overall benefit of color modifiers in a combined analysis of the scenes and conditions common to both experiments. The results suggest that violations of the Maxim of Quantity can facilitate search when the violations include task-relevant information that either augments the target template or constrains the search space, and when at least one modifier provides a highly reliable cue. Consistent with Degen et al. (2020), we conclude that listeners benefit from non-contrastive information that improves reference precision, and engage in rational reference comprehension.

Significance statement
This study investigated whether providing more information than someone needs to find an object in a photograph helps them find that object more easily, even though it means they must interpret a more complicated sentence. Before searching a scene, participants were given information about where the object was located in the scene, what color the object was, or only what object to search for. The results showed that the additional information helped participants locate the object more easily only when at least one piece of information communicated what part of the scene the object was in, which suggests that more information can be beneficial as long as it is specific and helps the recipient achieve a goal. We conclude that people will pay attention to redundant information when it supports their task. In practice, our results suggest that instructions in other contexts (e.g., real-world navigation, smartphone apps, prescription instructions) can benefit from the inclusion of what appears to be redundant information.


Ergonomics ◽  
1992 ◽  
Vol 35 (3) ◽  
pp. 243-252 ◽  
Author(s):  
Dohyung Kee ◽  
Eui S. Jung ◽  
Min K. Chung
