DIAGRAM Center
I just attended two days of meetings in Washington DC on the first year of the DIAGRAM Center, held at the Office of Special Education Programs in the Department of Education. The goal of the DIAGRAM R&D Center is to greatly improve access to graphical information for students with print disabilities (for example, helping blind students get access to important graphics inside textbooks). This is becoming crucially important as the problem of delivering access to text is increasingly solved by the move to ebook publishing and solutions like our Bookshare library. Of course, just as we're solving the text problem, more and more content is moving to richer, more visual forms like graphics, simulations, and Flash!
The first exciting part of our work has been delivered by the National Center on Accessible Media, one of our two key partners in DIAGRAM (along with the DAISY Consortium). The initial part of the project was to do a detailed survey of existing assistive technology products, to get a baseline for current support for accessible graphics. But it's turned out to be one of the best surveys of assistive technology we've ever seen. It should be a huge resource for the field: check out the product matrices in the Research and Development section of the DIAGRAM web site.
We are also building a content model for making images more accessible. The intention is to define an XML content model that will make it easier to present alternatives to the original graphical content for people who are blind or print disabled. We are using the modular approach of the DAISY Authoring and Interchange Framework, which defines modules and profiles for the representation of books, journals, and so on. Using this modular approach, HTML and EPUB documents would have graphical elements linked to specific instances of descriptions or image alternatives that use the content model. So, a blind student looking at a complex scientific diagram in a high school science course would be able to hear a detailed description of the main elements in that diagram. The goal is for that student to gain access to the same learning a sighted student would get from the diagram.
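As a rough sketch of the linking idea (the file name, IDs, and description text here are illustrative assumptions, not the final DIAGRAM content model), an HTML document could tie a graphic to a structured description like this:

```html
<!-- Hypothetical example: the image name, id values, and description
     wording are made up for illustration. The img element points at a
     description elsewhere in the document via aria-describedby, so a
     screen reader can voice the description for a blind student. -->
<img src="cell-diagram.png"
     alt="Diagram of an animal cell"
     aria-describedby="cell-diagram-desc"/>

<div id="cell-diagram-desc">
  The cell membrane encloses the cytoplasm, which contains the
  nucleus, mitochondria, and other labeled organelles.
</div>
```

In the DIAGRAM approach, the description itself would be an instance of the XML content model, so the same description could be rendered as speech, Braille, or on-screen text.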
I saw an initial demonstration of a web-based image description tool called Poet. It makes it possible for people describing graphics (publisher production staff, illustrators, alternative media producers like Benetech, and volunteers) to work from a standard web browser and interactively add image descriptions to DAISY books. The described book can then be re-published and made available to users who want the image descriptions voiced by digital talking book software or players, or rendered in Braille.
We’ve also engaged in a major technical standards issue. The main tool for image description in the current version of HTML has been the LONGDESC attribute. It can be attached to a graphic, and many assistive technology products (like screen readers for the blind) know how to alert the user to the existence of a long description and how to read it aloud (or present it in refreshable Braille). There had been a recommendation in the HTML 5 standards process to drop LONGDESC, which greatly concerned us. We're hearing a startling lack of sensitivity to accessibility in this process. With some of the other top leaders working on DIAGRAM, we need to inform disability activists of this issue before it's too late!
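For readers unfamiliar with it, this is how LONGDESC works in HTML today (the image and page names below are placeholders):

```html
<!-- The longdesc attribute holds a URL pointing to a full prose
     description of the image. Sighted users see only the graphic;
     a screen reader announces that a long description exists and
     lets the user navigate to it. -->
<img src="water-cycle.png"
     alt="The water cycle"
     longdesc="water-cycle-description.html"/>
```

The short ALT text gives a quick label, while the page referenced by LONGDESC can carry the detailed explanation a complex diagram needs. Dropping the attribute without an equivalent replacement would leave no standard way to make that link.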