
Wednesday, October 14, 2020

Microsoft says its AI can describe images 'as well as people do'

 I recall some portion of this claim being made before, but had not heard much new from Microsoft since. I know sight-impaired people who could use this capability. We also worked on the general idea of 'captioning', which turns out to be tough to do well in general.

Microsoft says its AI can Describe Images 'as Well as People Do'  By Devindra Hardawar, @devindra, in Engadget

It’s a new milestone for AI that could genuinely help the visually impaired. 

Describing an image accurately, and not just like a clueless robot, has long been a goal of AI. In 2016, Google said its artificial intelligence could caption images almost as well as humans, with 94 percent accuracy. Now Microsoft says it’s gone even further: Its researchers have built an AI system that’s even more accurate than humans — so much so that it now sits at the top of the leaderboard for the nocaps image captioning benchmark. Microsoft claims it’s two times better than the image captioning model it’s been using since 2015.

And while that’s a notable milestone on its own, Microsoft isn’t just keeping this tech to itself. It’s now offering the new captioning model as part of Azure's Cognitive Services, so any developer can bring it into their apps. It’s also available today in Seeing AI, Microsoft's app for blind and visually impaired users that can narrate the world around them. And later this year, the captioning model will also improve your presentations in PowerPoint for the web, Windows and Mac. It’ll also pop up in Word and Outlook on desktop platforms.
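
For developers, image captioning surfaces through the Computer Vision image-description API in Cognitive Services. Below is a minimal sketch of how an app might request captions over REST; the API version, parameter names, and environment-variable names are assumptions for illustration, not details confirmed in the article.

```python
# Illustrative sketch: calling the Azure Computer Vision "describe" endpoint,
# which returns caption candidates for an image. Endpoint version and env-var
# names are assumed for this example.
import os
import requests

endpoint = os.environ["AZURE_VISION_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
key = os.environ["AZURE_VISION_KEY"]            # Cognitive Services subscription key

def describe_image(image_url, max_candidates=3):
    """Request caption candidates for an image URL and return them with confidences."""
    response = requests.post(
        f"{endpoint}/vision/v3.2/describe",
        params={"maxCandidates": max_candidates, "language": "en"},
        headers={"Ocp-Apim-Subscription-Key": key,
                 "Content-Type": "application/json"},
        json={"url": image_url},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["description"]["captions"]

if __name__ == "__main__":
    for caption in describe_image("https://example.com/photo.jpg"):
        print(f"{caption['confidence']:.2f}  {caption['text']}")
```

An app like Seeing AI would presumably consume captions of this kind and read the highest-confidence text aloud to the user.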
