
How well can we summarise what we see?

Understanding the Efficiency of Visual Statistics

Does our visual system get it right – or does it exaggerate? What does this tell us about how we interpret what we see?

With a quick glance, we can get the gist of what a photo is showing.

If we see more than 20 trees, we reckon we’re looking at a forest. If most of those trees look tilted, we guess that the camera was tilted. We can even estimate the camera angle.

While it is easy to estimate the tilt of an individual tree when we look directly at it, we are forced to rely on summary statistics, such as average tilt, when asked to describe the trees in our peripheral vision.

In this project, City, University of London researchers investigated how good we are at forming statistical summaries, such as average tilt.

Their conclusions are influencing our understanding of sight, of patterns in development and disease, and of how people read radiological images.

What did we explore and how?

Researchers Joshua Solomon and Michael Morgan wanted to find out how efficiently our central and peripheral vision calculates image statistics.

Solomon’s team measured how well people could distinguish patterns of greater and lesser degrees of regularity. They also measured how well people could distinguish groups of circles on the basis of average size and size variance.

In both cases, performance was consistent with observers using approximately five items from each group (or five components of each pattern).
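To make the idea of limited efficiency concrete, here is a minimal sketch (in Python, with entirely hypothetical stimulus parameters; it is not the authors' experimental code) of an observer who must say which of two groups of circles has the larger average size, but who averages only a subsample of the items. The fewer items that enter the average, the noisier the estimate and the worse the discrimination, which is how an effective sample of roughly five items shows up in performance.

```python
import numpy as np

rng = np.random.default_rng(0)

def proportion_correct(n_items=8, n_sampled=5, delta=0.1, sd=0.3, n_trials=100_000):
    """Simulated two-interval mean-size discrimination.

    Each group contains `n_items` circles whose log-sizes are normally
    distributed; group B is larger on average by `delta`.  The observer
    averages only `n_sampled` circles per group and chooses the group with
    the larger sample mean.  All parameter values are illustrative only.
    """
    a = rng.normal(0.0, sd, size=(n_trials, n_items))
    b = rng.normal(delta, sd, size=(n_trials, n_items))
    est_a = a[:, :n_sampled].mean(axis=1)   # subsampled estimate of group A's mean
    est_b = b[:, :n_sampled].mean(axis=1)   # subsampled estimate of group B's mean
    return np.mean(est_b > est_a)

for k in (1, 5, 8):
    print(f"circles averaged per group: {k}, proportion correct: "
          f"{proportion_correct(n_sampled=k):.3f}")
```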

The results suggest that our visual system contains hard-wired neural circuits for these summaries: one computes the average size of several things at once; another computes the variance of the distances between them.

There also seems to be an after-effect from looking at a series of patterns that are fairly regular. Afterwards, less regular patterns (i.e. those with greater variance in the distances between components) appear even less regular than they really are.

"If our intuition regarding crowding (i.e. that it reflects an obligatory statistical analysis with high efficiency) proves correct, then our work may influence the much larger group of scientists currently trying to understand that topic."

Joshua Adam Solomon (Principal Investigator)

Benefits and influence of the research

The team’s findings have already inspired other researchers’ work, including projects to:

  • understand how people read radiological images. Researchers found that, after people looked at a series of dense radiological images (mammograms), subsequent images appeared less dense. This could help explain how the order and type of images (fatty vs dense) can affect diagnoses. (Kompaniez, Abbey, Boone and Webster)
  • study patterns in development and disease. Researchers exploring where bristle cells occur on fruit flies used a version of the model that Solomon built for this work. They showed that as the fruit flies get older, the patterns become more regular. (Protonotarios, Baum et al., J. R. Soc. Interface, 2014)
  • create rules for quickly measuring how well people can distinguish between objects and background (contrast sensitivity).
  • assess how mice recognise motion.
  • measure signals sent by the eye’s rod cells used in peripheral vision and night vision.

The researchers hope their work will also help scientists investigating crowding, the phenomenon whereby objects in peripheral vision become harder to identify when they are surrounded by nearby clutter.

According to Google Scholar, Solomon’s h-index in August 2018 was 28.

Also according to Google Scholar, Watson, Yang, Solomon, & Villasenor (1997, IEEE Trans. Image Proc. 6:1164) has been cited in 8 patents. Assignees include Canon Inc, Microsoft Corp, Lucent Technologies Inc, HRL Labs LLC, and NASA. Co-author Watson worked on behalf of the latter assignee, which also cited another paper (Watson & Solomon, 1997, J. Opt. Soc. Am. A 14:2379) in one further patent.

Solomon & Pelli (1994, Nature 369:395) has been cited in 4 patents. Assignees include Sharp Laboratories of America, Allergan Inc, and Perception Tech LLC.

Current projects include an investigation into the human visual system's ability to discriminate increased blur (a disproportionate loss of high spatial frequencies) from a more general reduction in contrast (i.e., at all spatial frequencies) and an investigation of individual differences in adaptability.
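As a rough illustration of that distinction (a sketch only, assuming NumPy and SciPy; the random "image" and filter settings are arbitrary stand-ins, not the stimuli used in the project), the code below shows that blurring removes mainly high-spatial-frequency content, whereas a uniform contrast reduction scales every spatial frequency by the same factor:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
image = rng.normal(size=(256, 256))           # zero-mean stand-in for a natural image

blurred = gaussian_filter(image, sigma=2.0)   # blur: attenuates high frequencies only
low_contrast = 0.5 * image                    # contrast cut: attenuates all frequencies

def high_frequency_share(img):
    """Fraction of spectral energy above half the maximum spatial frequency."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    centre = np.array(spectrum.shape) // 2
    fy, fx = np.indices(spectrum.shape) - centre[:, None, None]
    radius = np.hypot(fy, fx)
    return spectrum[radius > radius.max() / 2].sum() / spectrum.sum()

print(f"original:      {high_frequency_share(image):.3f}")
print(f"blurred:       {high_frequency_share(blurred):.3f}")       # share drops
print(f"low contrast:  {high_frequency_share(low_contrast):.3f}")  # share unchanged
```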

The researchers

  • Joshua Adam Solomon (Principal Investigator)
  • Michael Morgan (Co-Investigator)

More about this research

  • School of Health Sciences
  • Related academics: Professor Joshua Solomon, Professor Michael Morgan
  • Status: Completed
  • Topics: Computer science and informatics, Health
  • Industry/sector: Optometry and healthcare technologies
  • Funder: EPSRC
  • Project partners/stakeholders: City, University of London, United Kingdom (Lead Research Organisation) and EPSRC, United Kingdom (Co-funder)
  • Publication link: External link