How to see what one sees

Following the previous article's premise, I'm keen on finding a safe way to light up the eye via infrared without harming it.
In the meantime I also made progress on a different path to help him, sending his family a Tobii Eye Tracker 5 so they can start learning how to communicate in this new way.

I also completely forgot to explain what I wanted to do in the first place and why infrared matters in our use case.

Firstly, my friend is, at least temporarily, sensitive to light, which instantly rules out lighting up the eye with an LED in the visible spectrum.
Secondly, and I think it's an almost equally relevant reason: light in the visible spectrum introduces glare, which we don't like because it complicates reading the eye.
Thirdly, it's cooler in dark mode, and that's also important.

Glare showcase

The final idea for this project is to use a webcam without the IR-cut filter, an infrared light source to light up the eye for easier visualization, and software to fit the eye in 3D, grab the orientation vector, and move the mouse accordingly.
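The last step of that pipeline, turning an orientation vector into a cursor position, can be sketched in a few lines. Everything here is an assumption: I'm pretending the tracking software hands us a unit gaze vector, and `max_angle_deg` is a made-up calibration constant for how far the eye rotates to reach the screen edge.

```python
import numpy as np

def gaze_to_screen(gaze, screen_w=1920, screen_h=1080,
                   max_angle_deg=25.0):
    """Map a unit gaze vector (x right, y up, z toward the screen)
    to pixel coordinates. max_angle_deg is the gaze angle that
    reaches the screen edge -- a placeholder calibration value."""
    gaze = np.asarray(gaze, dtype=float)
    gaze = gaze / np.linalg.norm(gaze)
    # Horizontal/vertical angles of the gaze ray, in degrees.
    ax = np.degrees(np.arctan2(gaze[0], gaze[2]))
    ay = np.degrees(np.arctan2(gaze[1], gaze[2]))
    # Normalize to [0, 1], clamp at the edges, then scale to pixels.
    u = np.clip(0.5 + ax / (2 * max_angle_deg), 0.0, 1.0)
    v = np.clip(0.5 - ay / (2 * max_angle_deg), 0.0, 1.0)
    return int(u * (screen_w - 1)), int(v * (screen_h - 1))

# Looking straight ahead lands in the middle of the screen.
print(gaze_to_screen([0.0, 0.0, 1.0]))  # -> (959, 539)
```

A real version would need per-user calibration instead of a fixed angle, but the shape of the problem is just this: two angles in, one pixel out.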

This was the easy part; then comes playing with the pupil area and the eyelid movements, which, if you remember, in our case are mostly involuntary.

How does the eye work?

Wanna skip? Here's a short summary.

Light goes through the cornea, which bends it, then through the pupil, the opening in the iris, and from there through the lens, which bends the light again.

Behind the latter sits about 80% of the eye's volume, filled with vitreous humor, which finally carries the light to the retina, where photoreceptors read the signal and send it to the brain via the optic nerve.


Given that the pupil changes its size to adapt to the amount of light received, we can use this data (once properly computed) to hopefully assess whether a movement was voluntary or not.
My hope is that the pupil does not change size during an involuntary movement, and that the only muscles involved in one are the eyelid muscles.
Assuming this theory holds, it should make it easier to tell whether the movement we just saw was voluntary or not.
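If the theory holds, the simplest possible detector falls out of it: look at how much the pupil diameter moved during the window around an eye movement. This is only a sketch of the idea; both the feature and the threshold are guesses that would have to be tuned on the real recordings.

```python
import numpy as np

def looks_voluntary(pupil_diameters, threshold=0.15):
    """Flag an eye movement as voluntary if the pupil diameter
    stayed roughly constant over the movement window.

    pupil_diameters: diameters (mm) sampled during the movement.
    threshold: maximum relative spread tolerated -- a placeholder
    value, to be fitted on labeled recordings.
    """
    d = np.asarray(pupil_diameters, dtype=float)
    # Relative spread: (max - min) as a fraction of the mean diameter.
    spread = (d.max() - d.min()) / d.mean()
    return bool(spread <= threshold)

print(looks_voluntary([3.0, 3.02, 2.98, 3.01]))  # steady pupil -> True
print(looks_voluntary([3.0, 3.6, 3.9, 3.2]))     # big swing   -> False
```

In practice a single threshold on raw diameter probably won't survive contact with lighting changes, so this would likely grow into something that first normalizes for ambient light, but it shows where the pupil data plugs in.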

As soon as I solve the IR LED issue I'll ship one of the prototype cameras to his family and they'll record a video of his eye.
The video will show his eye only, while he performs some simple tasks such as following a dot or reading a paragraph.
I think we'll also need a video of what's in front of him during the eye recording, started at the same time.
This should make it easier to match what he sees to what's on the screen and outside of it.
It would also spare us the labeling part, which is tedious, so overall a win-win.

According to these two papers, the pupil also dilates slightly the further the gaze moves from the center of view, leading to possible errors in the reading.
Or, if we account for it, a cleaner reading of the eye's gaze vector.

Pupil dilation
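Accounting for it could be as simple as subtracting the eccentricity-related trend from the measured diameter before feeding it to any voluntary/involuntary logic. The linear model and the slope below are placeholders, not values from the papers; the real relation would be fitted from their data or from our own recordings.

```python
def eccentricity_corrected_diameter(diameter_mm, eccentricity_deg,
                                    slope_mm_per_deg=0.01):
    """Remove the slight pupil dilation that comes with looking away
    from the center of view. slope_mm_per_deg is a made-up constant;
    a fitted (and probably nonlinear) model would replace it."""
    return diameter_mm - slope_mm_per_deg * eccentricity_deg

# A 3.1 mm reading at 10 degrees off-center corrects down to 3.0 mm.
print(eccentricity_corrected_diameter(3.1, 10.0))
```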

A chommie shared this video, which showcases two cameras I have besides the one I already shared, and some workarounds I can implement for the infrared.

I've been a bit busy lately: I'm leaving my current job and moving between countries, so unfortunately I'll only be able to research and study for a few weeks.

On the bright side, my friend's family has also received a Tobii Dynavox in the meantime, and as more good news, he'll start rehabilitation therapy soon ❤️