5.1 The Beginning: Mainly a Facilitator (2000–2005)
5.2 Early Phase: Focus on Embedded Systems (2005–2010)
5.2.1 ASSISTECH and COP315
A 30-member user group was involved in the early stages of product design and helped arrive at the requirements.
The product was validated by 150 users in 6 cities before its eventual launch on 31st March 2014. We have not come across any other disability product anywhere in the world that has been tested by so many users before launch.
During off-peak hours, when travel is convenient because buses are less crowded, the bus stop may have no other passengers waiting.
Even when many other passengers are waiting, a visually impaired (VI) person cannot tell whom to approach for information: help may end up being sought from someone who is busy (perhaps on the phone) or is a visitor unfamiliar with the routes, often resulting in unpleasant situations.
5.3 Collaborations and Research: Formation of ASSISTECH (2010–2013)
5.3.1 Student Projects to Research
5.3.2 NVDA Activities
5.3.3 TacRead and DotBook
5.4 Change of Focus: Technology to Users (2013–2016)
5.4.1 Tactile Graphics Project
5.4.2 More Research Projects and International Collaboration
5.5 Consolidation and Growth (2016–)
The legacy documents available in the digital library in Indian languages use fonts that are not recognized by screen-reader software. These need to be converted into accessible formats such as ePUB.
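The core of the problem is that legacy Indic fonts assign glyph shapes to arbitrary byte codes, so a screen reader sees meaningless ASCII rather than Unicode text. A minimal sketch of the first conversion step, remapping legacy glyph codes to Unicode, might look like the following; the mapping entries are hypothetical, and each real legacy font needs its own mapping table:

```python
# Sketch of legacy-font-to-Unicode remapping, the first step toward
# producing accessible formats such as ePUB from legacy documents.
# The glyph codes below are hypothetical examples for Devanagari.

LEGACY_TO_UNICODE = {
    "k": "\u0915",   # hypothetical legacy code for Devanagari KA
    "K": "\u0916",   # hypothetical legacy code for Devanagari KHA
    "g": "\u0917",   # hypothetical legacy code for Devanagari GA
}

def remap(legacy_text: str) -> str:
    """Replace each legacy glyph code with its Unicode codepoint,
    leaving unmapped characters (spaces, punctuation) unchanged."""
    return "".join(LEGACY_TO_UNICODE.get(ch, ch) for ch in legacy_text)

print(remap("k K g"))  # three Devanagari letters, now readable by a screen reader
```

Real conversions are harder than a character map suggests: Indic legacy fonts often encode half-forms and matras as separate glyphs that must be reordered into logical Unicode sequences.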
Mathematics still poses a major challenge, as equations in PDF documents are very often available only as images. Even otherwise, delivering a complex equation in an audio format that is both linear and comprehensible is a challenge. One of the visually impaired students in the group (Mr. Akashdeep Bansal) has taken it up as his PhD research topic. We are also collaborating with Prof. Volker Sorge at the University of Birmingham (UK), who has extensive experience in this field.
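To illustrate why linearization is hard, the toy sketch below walks a Presentation MathML tree and produces spoken text. It handles only a few tags; production systems handle far richer structure, prosody, and disambiguation. Note how the output for a nested fraction is already ambiguous to the ear, which is exactly the comprehensibility problem described above.

```python
import xml.etree.ElementTree as ET

# Toy linearizer for Presentation MathML: turns a small expression tree
# into left-to-right spoken text. Only mi/mn/mo, mfrac and msup are
# handled; everything else reads its children in order.

def speak(node) -> str:
    tag = node.tag.split("}")[-1]          # drop any XML namespace prefix
    if tag in ("mi", "mn", "mo"):
        return node.text.strip()
    if tag == "mfrac":
        num, den = list(node)
        return f"the fraction {speak(num)} over {speak(den)}"
    if tag == "msup":
        base, exp = list(node)
        return f"{speak(base)} to the power {speak(exp)}"
    # default (math, mrow, ...): read children left to right
    return " ".join(speak(child) for child in node)

mathml = ("<math><mfrac><mrow><mi>x</mi><mo>+</mo><mn>1</mn></mrow>"
          "<msup><mi>y</mi><mn>2</mn></msup></mfrac></math>")
print(speak(ET.fromstring(mathml)))
# prints: the fraction x + 1 over y to the power 2
```

A listener cannot tell from that sentence whether the denominator is y or y squared, or where the numerator ends; conveying such grouping through pauses, earcons, or navigable sub-expressions is part of what the research addresses.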
Navigating through tables efficiently needs some research as well as tooling.
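One direction such tooling can take is header-aware cell navigation: whenever the cursor moves, the reader announces the column and row headers along with the cell value so the listener never loses context. A minimal sketch, with an invented example table:

```python
# Sketch of header-aware table navigation for a screen reader.
# Each move re-announces column header and row header with the value,
# so the listener keeps context. The table contents are invented.

class TableNavigator:
    def __init__(self, headers, rows):
        self.headers = headers          # column headers
        self.rows = rows                # each row: [row header, values...]
        self.r, self.c = 0, 1           # cursor starts at first data cell

    def announce(self) -> str:
        row = self.rows[self.r]
        return f"{self.headers[self.c]}, {row[0]}: {row[self.c]}"

    def move(self, dr, dc):
        # clamp the cursor inside the data region (column 0 is row headers)
        self.r = max(0, min(len(self.rows) - 1, self.r + dr))
        self.c = max(1, min(len(self.headers) - 1, self.c + dc))
        return self.announce()

nav = TableNavigator(
    ["City", "Users", "Year"],
    [["Delhi", "60", "2013"], ["Mumbai", "40", "2014"]],
)
print(nav.announce())     # Users, Delhi: 60
print(nav.move(1, 1))     # Year, Mumbai: 2014
```

The open research questions sit on top of this basic mechanism: skimming strategies, merged cells, and deciding when re-announcing headers is helpful versus verbose.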
Diagrams require an associated description for audio delivery. Recent AI techniques are making great progress in automatically describing images, and we would like to adapt these techniques for the automatic generation of diagram descriptions.
Detecting stray animals such as dogs and cattle on the street, for safety
Detecting potholes at a distance, again for safety
Reading multi-lingual street signage, to assist in navigation
Face detection, for social inclusion