One of my chores takes me outside early in the morning and, if I time it right, I get to hear a charming chorus of birdsong from the trees in the gardens down our road, a relaxing layered soundscape of tuneful calls, chatter, and chirruping.
Interestingly, although I can tell from the number and variety of trills that there must be a large number of birds around, they are tricky to spot.
I have found that by staring loosely at something, such as the silhouette of a tree's crown against the slowly brightening sky, I see more birds out of the corner of my eye than if I scan to look for them. The reason seems to be that my peripheral vision picks up movement against the wider background that direct inspection can miss.
An optometrist I am not, but I do find myself staring at data a great deal, seeking relationships, patterns, or gaps. I idly wondered whether, if I filled my visual field with data, I might be able to exploit my peripheral vision in that quest.
I have a wide monitor for work so I tried it ... without success.
Perhaps that is not surprising since the data does not move of its own accord. But reflecting on this experience I realised that I already do things to help find those patterns, and I might think of them as moving the data.
For example, given an unknown data set in a spreadsheet I will try things like this:
- use formatting to highlight specific values, or ranges, or superimpose a heatmap.
- use pivot tables to repeatedly slice the data in different ways.
- create derived data, such as totals or percentages or comparisons.
- transpose the data.
- sort by each column.
- plot pairs of variables against each other.
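The operations above can be sketched in a few lines of plain Python. This is a minimal illustration on a small, made-up table (all the names and values here are invented for the example), not an account of any particular spreadsheet's features:

```python
# A tiny, hypothetical table of quarterly figures, one dict per row.
rows = [
    {"region": "North", "q1": 120, "q2": 150},
    {"region": "South", "q1": 200, "q2": 180},
    {"region": "East",  "q1": 90,  "q2": 140},
]

# Derived data: a total per row, then each row's share of the grand total.
for r in rows:
    r["total"] = r["q1"] + r["q2"]
grand_total = sum(r["total"] for r in rows)
for r in rows:
    r["pct"] = round(100 * r["total"] / grand_total, 1)

# Sort by a column and scan for surprises; repeat with other columns.
by_total = sorted(rows, key=lambda r: r["total"], reverse=True)

# Transpose: re-express the table as one list per column instead of
# one dict per row, which can make column-wise patterns easier to spot.
transposed = {k: [r[k] for r in rows] for k in rows[0]}
```

Each transformation shows the same numbers from a different angle, which is the point: the data itself "moves" even though nothing new is added.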
In these operations I don't know what I am looking for. I may not even know what I am looking at. What I'm trying to do, by moving the data around, is expose any birds hiding inside. My own experience is that this can work, and work very well.
But perhaps someone else has already documented ways to exploit peripheral vision in data analysis? I didn't find anything directly relevant but did come across a handful of interesting ideas.
Peripheral vision is related to inattentional blindness because, as I understand it, it gives us the false impression of seeing the whole vista in front of us, despite our focus being only on a small piece at a time. (Peripheral vision is mainly for looking rather than seeing)
While peripheral vision might not be a magic wand for data analysis, it does appear that it's possible to improve the sharpness with which peripheral vision operates through learning by repeated exposure to the same kinds of scenarios within a feedback loop. (Peripheral vision and pattern recognition: A review)
My reading of this, again as an amateur, is that it offers a potential explanation for how, say, a professional radiologist can learn to rapidly, perhaps even subconsciously, identify abnormalities in x-rays. (Analysis of Perceptual Expertise in Radiology – Current Knowledge and a New Perspective)
And if some of that talk about feedback and learning sounds AI-ish to you, you won't be surprised to find that there's ongoing research into simulating Peripheral Vision for AI Models. Which is all fine, but it gets us nowhere in particular, and so some would dismiss it, and this post, as wasted effort.
To me that kind of thinking is strictly for the birds.
I had an idea and followed some of its threads to interesting places, made some connections, and got no practical right-now value from it. But that's all good. That's how exploration works. The thrill of the chase, the euphoria of discovery, and the stashing away of information that might come in handy in some other situation can be their own reward.
Image: Bing Image Creator