Akhil Nagori, an eighth-grade student, has designed a pair of glasses that transcribes printed text into audio in real time for people who are blind or visually impaired.
The hardware consists of a Raspberry Pi Zero 2 W, a battery, and an official Raspberry Pi camera module, all mounted on a pair of glasses.
When the wearer presses a button, the camera takes a picture of the text, which is then read aloud to the wearer through speech synthesis.
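The button-to-speech flow described above can be sketched as a small pipeline. This is a minimal illustration, not the project's actual code: the hardware-facing steps (camera capture, text recognition, speech synthesis) are passed in as callables, so on a real device they could be backed by something like picamera2, Tesseract, and espeak, while simple stubs stand in here. All of those bindings are assumptions.

```python
from typing import Callable

def read_aloud_once(capture: Callable[[], bytes],
                    ocr: Callable[[bytes], str],
                    speak: Callable[[str], None]) -> str:
    """One button press: photograph the page, extract the text, speak it.

    On the real glasses, `capture` might wrap the Pi camera, `ocr` might
    call an OCR engine such as Tesseract, and `speak` might invoke a
    text-to-speech engine; these are illustrative assumptions.
    """
    image = capture()          # grab a still frame from the camera
    text = ocr(image).strip()  # run text recognition on the frame
    if text:                   # only speak when something was found
        speak(text)
    return text

# Stand-in implementations so the sketch runs without any hardware.
spoken: list[str] = []
result = read_aloud_once(
    capture=lambda: b"fake-image-bytes",
    ocr=lambda img: "  The quick brown fox.  ",
    speak=spoken.append,
)
print(result)  # -> The quick brown fox.
```

Keeping the hardware behind plain function arguments also makes the logic easy to test on a desktop before deploying to the Pi.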
This project showcases how far machine learning and artificial intelligence have advanced: they can now power devices that help people with disabilities carry out everyday tasks independently. Building such a device would once have demanded significant research and expense, but today's mature AI and machine-vision libraries make it possible to create similar devices cost-effectively.
This design could be improved further by recognising text continuously, without the need for a button-activated picture, making the device completely hands-free.