"A picture is worth a thousand words". But does this old adage holds true for modern-day documentation and collaboration workspaces like confluence?
We believe it does. Here is a simple example: suppose I want to tell people how important it is to take care of their car. On the left-hand side is a plain table containing the car's value year by year, and on the right is a sleek graph. The table conveys the same information as the graph on the right, but it is harder to read, and even though it offers more precise data points, it delivers the message with less pizzazz. The graph on the right tells you clearly: if you don't take care of your car, its value drops drastically.
But here is the accessibility conundrum: if the document only contains the graph, how does a screen reader read it?
WCAG guidelines require that images carry alt text, which assistive technology can then read out. However, enforcing a rule like that in a collaborative workspace like Confluence is an administrative nightmare.
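To see why enforcement is painful, consider the kind of audit an administrator would otherwise have to run by hand: scanning every page for images that lack alt text. The sketch below uses Python's standard-library HTML parser purely for illustration; a real Confluence audit would go through its REST API, and the sample page markup here is invented.

```python
# Hypothetical audit sketch: find <img> tags with no (or empty) alt attribute.
from html.parser import HTMLParser

class MissingAltAuditor(HTMLParser):
    """Collect the src of every <img> that has no usable alt text."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # An absent or empty alt attribute leaves screen readers guessing.
            if not attrs.get("alt"):
                self.missing.append(attrs.get("src", "<no src>"))

# Invented example page: one image with alt text, one without.
page = ('<p><img src="car-values.png">'
        '<img src="car-graph.png" alt="Car value over time"></p>')
auditor = MissingAltAuditor()
auditor.feed(page)
print(auditor.missing)  # → ['car-values.png']
```

Running a scan like this across thousands of pages, and then chasing authors to fill in the gaps, is exactly the manual workload that automatic captioning removes.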
Addteq's Unstoppable for Confluence plugin uses Azure Cognitive Services to auto-generate captions for images. Automatic image captioning helps users access the important content of any image, be it a graph, a chart, a generic image, or an image of text. The video below compares how the NVDA screen reader treats Confluence images that lack user-generated alt text, with and without Unstoppable.
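For the curious, Azure Cognitive Services exposes image captioning through the Computer Vision "Describe Image" endpoint. The sketch below only builds such a request without sending it; the endpoint host, key, and image URL are placeholders, and it does not reflect how Unstoppable integrates internally.

```python
# Minimal sketch of preparing an Azure Computer Vision v3.2 Describe Image call.
# Endpoint, key, and image URL are placeholders, not real credentials.
import json
from urllib import request

AZURE_ENDPOINT = "https://example-resource.cognitiveservices.azure.com"  # placeholder
AZURE_KEY = "YOUR_SUBSCRIPTION_KEY"  # placeholder

def build_describe_request(image_url: str, max_candidates: int = 1) -> request.Request:
    """Build (but do not send) a Describe Image request for a public image URL."""
    url = f"{AZURE_ENDPOINT}/vision/v3.2/describe?maxCandidates={max_candidates}"
    body = json.dumps({"url": image_url}).encode("utf-8")
    return request.Request(
        url,
        data=body,
        headers={
            "Ocp-Apim-Subscription-Key": AZURE_KEY,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_describe_request("https://example.com/car-value-graph.png")
print(req.full_url)
```

The JSON response from this endpoint contains candidate captions with confidence scores, which a plugin can surface as generated alt text for a screen reader.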
Unstoppable strives to make Atlassian tools compliant with Section 508 and the WCAG norms. The World Wide Web Consortium has multiple guidelines on images, alt text, and image captions, including two WCAG success criteria on images of text: one at Level AA and one at Level AAA. With this feature, Unstoppable will help your Confluence achieve AAA conformance for images.