It came out yesterday and I have had it since then.
On 6 Apr 2016, at 15:26, Dominique Farrell <hollyandopal@xxxxxxxxx> wrote:
Those who are blind or vision impaired will soon be able to find out what
is contained in photos on Facebook.
The system, called ‘automatic alternative text’, was launched today and
creates captions for photos posted on the site. Using a screen reader, which
uses a device’s built-in text-to-speech function to read out text, the system
will offer a brief summary of an image when it encounters one.
So for example, a photo with your friends outside may be described as “image
may contain three people, smiling, outdoors”.
Currently, a screen reader will only read out the name of the person who
posted the photo, followed by the word ‘photo’, before skipping to the next
post. This update will try to describe what is in the photo.
The description appears at the bottom of the image and is repeated by the
phone’s text-to-speech function.
The descriptions aren’t the most comprehensive, focusing instead on basic
labels like people, trees, and cars, and the system only identifies objects
when it’s confident about what’s in the image. It is a work in progress, but
Facebook hopes it will continue to improve over time.
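The confidence gating described above can be sketched roughly as follows. This is an illustrative assumption, not Facebook's actual implementation: the function name, labels, scores, and threshold value are all made up for the example.

```python
# A minimal sketch (NOT Facebook's real code) of how a confidence
# threshold might decide which labels end up in a caption.

def build_alt_text(detections, threshold=0.8):
    """Keep only labels the model is confident about, then
    assemble them into a short 'image may contain ...' caption."""
    confident = [label for label, score in detections if score >= threshold]
    if not confident:
        # Fall back to the old bare 'photo' announcement
        return "photo"
    return "Image may contain: " + ", ".join(confident)

# Example: two confident labels pass the threshold, a low-scoring one is dropped.
detections = [("three people", 0.95), ("smiling", 0.90), ("bicycle", 0.40)]
print(build_alt_text(detections))  # Image may contain: three people, smiling
```

Only dropping uncertain labels, rather than guessing, matches the article's point that the system stays silent about objects it isn't sure of.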
It does this through object-recognition technology developed by Facebook’s
accessibility team, which was created five years ago. In recent months, the
company had been demoing the service and explaining how it used artificial
intelligence to power it.
Those who use the iOS version in the US will have access to it first, and the
feature will roll out to users in other countries over time.
When it arrives, you will need to turn on VoiceOver on iOS to use it. Go to
Settings > General > Accessibility > VoiceOver, where you will see the
different options and settings for it.