Golisano College presents Syed Billah, a faculty candidate in HCI/Accessibility.
Abstract: People with vision impairments rely on special-purpose assistive technologies, such as screen readers and screen magnifiers, to interact with computing devices. These technologies have come a long way, evolving from an afterthought into mainstream, built-in features of many modern computing devices, ranging from desktops and laptops to tablets, phones, and wearables. However, a significant accessibility gap remains between how people with and without vision impairments benefit from these devices. For instance, screen readers are still locked in to a single platform and do not support cross-platform interoperability; they also rely on keyboard-only interfaces to adapt 2D graphical user interfaces (GUIs) designed for "point-and-click" interaction with a mouse. Analogously, screen magnifiers magnify raw screen pixels indiscriminately as a blanket operation, ignoring the semantics of the underlying screen content, such as whitespace vs. non-whitespace in UI elements. These shortcomings hamper the productivity of people with vision impairments, creating disproportionate barriers to education, employment, and empowerment.
This talk will present my research on transformative assistive technologies for narrowing this accessibility gap. My research identifies the sources of the gap across the board, from operating systems to application UIs to input devices, and addresses them by leveraging techniques and best practices from systems, AI, and user-centered design in HCI. The outcome of my research can enable people with vision impairments to access any device, from desktops and mobile devices to cloud servers and wearables, using interfaces that are uniform across devices and platforms, efficient to interact with, and conducive to independence, such as filling out non-digital, printed forms independently with a conventional pen.
Bio: Syed Masum Billah is a PhD candidate in the Computer Science Department at Stony Brook University. His primary research interests lie at the intersection of Human-Computer Interaction (HCI), Accessible Computing, Computer Systems, and Applied Machine Learning. His PhD thesis focuses on making computer accessibility ubiquitous, frictionless, and efficient for people with vision impairments. To that end, he builds interactive systems, designs intelligent interaction techniques, and works on improving accessibility APIs in operating systems to address fundamental limitations of today's assistive technologies. His research has appeared in the CHI, ASSETS, IUI, and EuroSys conferences. In recognition of his contributions to accessibility, he recently received the Catacosinos Fellowship for Excellence in Computer Science at Stony Brook University.
When and Where
12:00 PM-1:00 PM
Open to the Public