Facebook has launched a new automatic text feature which uses machine learning technology to identify objects in photographs. Users of Facebook who are blind or visually impaired can now benefit from the technology when using the social networking site.
The artificial intelligence system uses image recognition technology to caption photographs with keywords to improve the accessibility of the site for disabled users.
Writing on the 'Research at Facebook' blog, software engineer Shaomei Wu said: “While visual content provides a fun and expressive way for people to communicate online, it also creates challenges for people with low vision or blindness. The challenges arise in both creating and consuming visual content. As a result, some people can feel isolated and frustrated when they cannot fully participate in the interaction around visual content.
“To achieve our mission of making the world more open and connected, we have to connect people of all backgrounds and abilities. We work closely with the blind and visually impaired community to ensure that Facebook works well, even with a News Feed full of photos and videos.”
More than two billion photos are shared across Facebook, Instagram, Messenger and WhatsApp every day worldwide, and visual content provides a fun, alternative way for people to communicate with each other online. However, photographs, gifs and videos can prove problematic for people who are blind or visually impaired.
Nearly 40 million people worldwide are blind, while 246 million are severely visually impaired, and they can feel excluded from conversations online when photos or other visual content is involved.
Though the technology is still under development and will launch on Apple iOS systems first, the social network is taking its first steps towards making Facebook more accessible to its blind or visually impaired users.
Automatic alternative text, or automatic alt text, generates a description of a photograph by using object recognition technology. People who use screen readers on iOS devices will be able to hear a list of items that the photo may contain as they swipe past photos on the social network.
Previously, people using screen readers would only hear the name of the person who shared the photo followed by the word ‘photo’. Facebook says it can now offer a ‘richer description of what’s in a photo’; for instance, someone might hear “Image may contain three people, smiling, outdoors".
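In broad strokes, the feature maps an image to a set of recognised concepts and reads out the confident ones. The sketch below is purely illustrative: Facebook has not published its model, labels or thresholds, so the function name, tag list and confidence cutoff here are all assumptions.

```python
# Hypothetical sketch of assembling alt text from object-recognition output.
# The detection step itself (image -> labelled concepts with confidence
# scores) is assumed to have already run; only the caption formatting
# is shown here.
def build_alt_text(detections, threshold=0.8):
    """Keep detections above a confidence threshold and format a caption."""
    tags = [label for label, score in detections if score >= threshold]
    if not tags:
        return "Photo"  # fall back to the old generic announcement
    return "Image may contain " + ", ".join(tags)

# Example concepts a recogniser might return for one photo
detections = [("three people", 0.95), ("smiling", 0.90),
              ("outdoors", 0.85), ("car", 0.40)]
print(build_alt_text(detections))
# → Image may contain three people, smiling, outdoors
# ("car" is dropped because its confidence falls below the threshold)
```

Screen readers such as VoiceOver on iOS would then speak this generated string as the photo's alternative text.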
In March 2016, Facebook published a new study in partnership with Cornell University, based on surveys and interviews with blind Facebook users, to better understand their everyday experiences when using the platform.
The new study has helped to inform the development of the AI-powered automatic alternative (alt) text to provide better captioning for visual content on Facebook.
A previous study revealed that people with visual impairments comment on and like photos as often as those without visual impairments, though they create and share fewer photos than the average Facebook user.
Writing about Facebook’s plans to improve the social network for people with low vision, Ms Wu concluded: “Learning about the challenges blind people face in a visual web inspired us to develop new technologies that we hope can help address some of the issues.
“We are in the process of rolling out AI-powered automatic alt text to all screen reader users on mobile and web, and will continue to explore ways to use our innovative technologies to make Facebook useful and enjoyable for all, regardless of background or physical ability.”