Bayesian Hallucination?

I've noticed that, when I'm using the computer, I "hallucinate" user interface states in my peripheral vision. Usually, these hallucinations are related to instant messenger or e-mail status notifications. I imagine seeing that someone is online who is not, or that a new message has been received when it has not.

I'm not actually losing my mind. In fact, my brain is automatically trading off the costs of checking notifications with unreliable sensory input. This phenomenon has a nice explanation in terms of optimizing costs and benefits using unreliable information from peripheral vision.
  • let u be the utility (benefit) of responding to a notification,
  • let c be the cost of verifying whether a notification is real or imagined,
  • let p(present) be the probability that a notification is really there.
I want to check a notification as long as the expected benefit of responding to the notification outweighs the cost: check notification if and only if E(u) > c
[0] E(u) = u * p(present)
[1] check notification if and only if: u * p(present) > c
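As a minimal sketch of decision rule [1] (all the numbers here are invented for illustration):

```python
def should_check(u, p_present, c):
    """Decision rule [1]: check the notification iff u * p(present) > c."""
    return u * p_present > c

# Hypothetical values: responding is worth 10 "utility units", checking costs 1.
print(should_check(u=10.0, p_present=0.05, c=1.0))  # 10 * 0.05 = 0.5 < 1 -> False
print(should_check(u=10.0, p_present=0.30, c=1.0))  # 10 * 0.30 = 3.0 > 1 -> True
```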
This is nice, but simplified. How do I know p(present) given some unreliable observation θ in peripheral vision, that is, p(present|θ)? This can be computed using Bayes' theorem:
[2] p(present|θ) = p(θ|present) * p(present) / p(θ)
Here, p(θ|present) is the probability of observing θ when the notification is really there, p(θ) is the probability of observing θ overall, and p(present) is the background probability of the notification being present.

So, plugging expression [2] for p(present|θ) into inequality [1]:
[3] check if and only if: u * p(θ|present) * p(present) / p(θ) > c
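To make [3] concrete, here's a hedged sketch that assumes the peripheral observation θ is a single number drawn from one of two overlapping Gaussians, one for "present" and one for "absent" (the means, spread, base rate, utility, and cost are all made-up values, not measurements):

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Probability density of x under a Gaussian with mean mu and std sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def check_given_observation(theta, u, c, p_present,
                            mu_present=1.0, mu_absent=0.0, sigma=1.0):
    """Decision rule [3]: u * p(theta|present) * p(present) / p(theta) > c."""
    like_present = gaussian_pdf(theta, mu_present, sigma)
    like_absent = gaussian_pdf(theta, mu_absent, sigma)
    # Marginal p(theta), by total probability over present/absent.
    p_theta = like_present * p_present + like_absent * (1 - p_present)
    posterior = like_present * p_present / p_theta  # p(present|theta), eq. [2]
    return u * posterior > c

# Weak vs. strong peripheral evidence, with a 5% base rate of notifications:
print(check_given_observation(theta=0.2, u=10.0, c=1.0, p_present=0.05))  # -> False
print(check_given_observation(theta=2.5, u=10.0, c=1.0, p_present=0.05))  # -> True
```

With these numbers, a faint flicker (θ = 0.2) isn't worth getting up for, but a strong one (θ = 2.5) is, even though the base rate of real notifications is low.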
All of these terms are measurable, though I can't really say how my visual system learned their values. Peripheral observations θ are noisy, and θ will have different but overlapping distributions depending on whether or not a stimulus is present.

If the expected benefit of responding to a notification is high, the threshold of evidence needed before checking drops. So, my visual system has automatically optimized unreliable peripheral vision for my benefit. Anyway, this is the story I'm telling myself to make me feel better about seeing things that aren't there.


  1. I recently did a lab involving similar psychophysics: detecting a signal in noise. Sadly it was much less implemented than your pseudo-coded example, but nevertheless, a fun problem in perception.

    Here's the model we used

    I only have a cursory knowledge of Signal Detection Theory, but I'd be willing to say u*p(θ|present) relates to d' and criterion is cost.

    Then again, criterion is a measure of action, whether you actually check to confirm if it's real. It's assumed you will only check if you're sure you've perceived a signal...

    ...and from the way you described your hallucinations it sounds like you're always sure the signal is there and would always check. I'm not that up on the lit regarding this subject, so I don't have a super stringent definition of criterion. I really want to say that criterion only affects the action in a perception-action loop, but I'll play it safe and won't.

    Back to seeing shit that isn't there.

    So for whatever reason you've managed to lower d' so that you can't distinguish signal from noise. I guess the best experiment to test this would be to increase cost/criterion and see if the hallucinations persist.
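    For what it's worth, here's a short sketch of the standard equal-variance Gaussian signal detection quantities mentioned above (the hit rate and false-alarm rate below are invented, not from the lab):

    ```python
    from statistics import NormalDist

    def d_prime_and_criterion(hit_rate, false_alarm_rate):
        """Equal-variance Gaussian SDT: d' = z(H) - z(FA), criterion c = -(z(H) + z(FA)) / 2."""
        z = NormalDist().inv_cdf  # inverse of the standard normal CDF
        zh, zfa = z(hit_rate), z(false_alarm_rate)
        return zh - zfa, -(zh + zfa) / 2

    # A high false-alarm rate (seeing notifications that aren't there) alongside a
    # modest hit rate yields a low d' and a liberal (negative) criterion:
    dp, crit = d_prime_and_criterion(hit_rate=0.80, false_alarm_rate=0.40)
    ```

    A liberal criterion means you report "signal" on relatively weak evidence, which matches always being sure the notification is there.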

  2. "So for whatever reason you've managed to lower d' so that you can't distinguish signal from noise. I guess the best experiment to test this would be to increase cost/criterion and see if the hallucinations persist."

    good call.

    So, it's not really surprising that the brain is optimizing to determine whether or not to check into some noisy signal. What is surprising is that, subjectively, I don't experience "that looks a bit like a notification, let's go check it", I experience "that definitely is a notification, let's go investigate", even when there's nothing there. The net behaviors are identical, except that in the first case I am aware of the degree of noise in the signal. In reality, I think my visual cortex is making decisions for me and not telling me how reliable that choice is. Which means non-conscious mechanisms have trained visual cortex to understand the cost/benefit trade-off of investigating computer notifications in the periphery. It also means that I've been exposed to these user interface components so much that their appearance has been stored in visual systems, I think, which is a little weird.