Publication Type

Conference Proceeding Article

Version

Accepted Version

Publication Date

10-2021

Abstract

Audio descriptions (ADs) can increase access to videos for blind people. Researchers have explored different mechanisms for generating ADs, and some of the most recent studies involve paid novices who receive feedback from reviewers to improve the quality of their ADs. However, reviewer feedback is not instantaneous. To explore the potential for real-time feedback through automation, in this paper we analyze 1,120 comments that 40 sighted novices received from a sighted or a blind reviewer. We find that feedback patterns tend to fall under four themes: (i) Quality: comments on different AD quality variables; (ii) Speech Act: the utterance or speech action that reviewers used; (iii) Required Action: the action that reviewers recommended authors take to improve the AD; and (iv) Guidance: the additional support that reviewers offered authors. We discuss which of these patterns could be automated within the review process as design implications for future collaborative AD authoring systems.

Keywords

Audio Description, collaborative writing, video accessibility, visual impairment

Discipline

Graphics and Human Computer Interfaces | Software Engineering

Publication

ASSETS '21: Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility, Virtual Event, October 18-22, 2021

First Page

1

Last Page

4

ISBN

9781450383066

Identifier

10.1145/3441852.3476550

Publisher

ACM

City or Country

New York

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.1145/3441852.3476550
