
Google DeepMind?

Yes, Google's DeepMind has built an algorithm that can read human lip movements. Shocking, right? Many people are wondering how it actually works, but the algorithm has already beaten human lip readers. A pair of new studies shows that a machine can understand what you are saying without hearing a sound.

Google DeepMind and artificial intelligence are now working together to help millions of hearing-impaired people read what's being said in the world around them.

Researchers at Oxford University and Google DeepMind have developed an artificial intelligence system, trained on thousands of hours of BBC video broadcasts, that far outperforms a professional lip reader.


What Does Google DeepMind's System Make Possible?

"A machine that can lip read opens up a host of applications: 'dictating' instructions or messages to a phone in a noisy environment; transcribing and redubbing archival silent films; resolving multi-talker simultaneous speech; and, improving the performance of automated speech recognition in general,"


How Does Google DeepMind's AI Algorithm Work?

The BBC video clips on which the software was tested first had to be prepared using machine learning. The problem the researchers faced was that the audio and video streams were out of sync by up to about a second. This left the AI unable to match the words being said to the way the speaker moved their lips.

Therefore, a computer system was first taught the relationship between sounds and mouth shapes, so that the audio and video could be brought into alignment. The AI then automatically processed all 5,000 hours of audio and video and, surprisingly, aligned them with no detectable lag.
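The alignment step described above can be framed as estimating the lag between two signals: the audio's energy envelope and a per-frame measure of how open the speaker's mouth is. A minimal sketch of that idea in Python (the signal names, the synthetic data, and the brute-force search are illustrative assumptions, not DeepMind's actual pipeline):

```python
import numpy as np

def estimate_av_offset(audio_env, mouth_open, max_lag):
    """Estimate the lag (in frames) between an audio energy envelope
    and a per-frame mouth-openness signal via cross-correlation."""
    a = (audio_env - audio_env.mean()) / (audio_env.std() + 1e-8)
    v = (mouth_open - mouth_open.mean()) / (mouth_open.std() + 1e-8)
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        # Slide one signal against the other and score the overlap.
        if lag >= 0:
            score = np.dot(a[lag:], v[:len(v) - lag])
        else:
            score = np.dot(a[:lag], v[-lag:])
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Synthetic check: an audio track delayed by 5 frames should yield lag 5.
rng = np.random.default_rng(0)
v = rng.standard_normal(200)   # stand-in for mouth-openness per frame
a = np.roll(v, 5)              # stand-in for audio envelope, 5 frames late
print(estimate_av_offset(a, v, max_lag=25))  # → 5
```

Once the offset is known, shifting one stream by that many frames re-syncs the pair, which is what makes the subsequent training usable.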

Google DeepMind's AI Algorithm Even Works for Translation

Google Translate, the company's translation tool, which earlier used Neural Machine Translation to translate whole phrases rather than single words, will now use Multilingual Neural Machine Translation, the company stated.

The organization will use the new translation system, but will not completely discard the old one, which will serve as the foundation for the new one. Machine learning and data mining are the basic concepts behind this technology.
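In Google's published description of multilingual NMT, a single shared model is told which language to produce by prepending an artificial token to the source sentence. A small sketch of that data-preparation step (the `<2xx>` token format follows the published description; the function name and sentences here are illustrative):

```python
def tag_for_target(source_sentence: str, target_lang: str) -> str:
    """Prepend an artificial target-language token so one shared
    model can be asked to translate into many languages."""
    return f"<2{target_lang}> {source_sentence}"

# The same English source, tagged for two different target languages.
pairs = [
    ("How are you?", "es"),
    ("How are you?", "ja"),
]
tagged = [tag_for_target(src, tgt) for src, tgt in pairs]
print(tagged[0])  # <2es> How are you?
```

The appeal of this design is that one model shares parameters across all language pairs, which is also what enables zero-shot translation between pairs it never saw together during training.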

Live Better with the Integration of Google DeepMind & AI:

    This system is relevant to any context that uses speech recognition and a camera, such as:
  • Adding speech recognition to hearing aids. Lip-reading systems could improve hearing aids by transcribing conversations in real time. Around 20% of Americans suffer from hearing loss, according to the Hearing Loss Association of America, and by age 65, one in three people has hearing loss. With an aging population, demand for hearing aids or lip-reading devices is only going to increase.
  • Augmenting camera-equipped sunglasses. This technology could augment products like Spectacles, Snap's camera-equipped sunglasses. Anyone wearing such glasses would theoretically be able to receive full transcriptions of conversations in real time, provided they can get a close enough look at the speaker's lips. This could be useful in loud locations.
  • Enabling silent dictation and voice commands. Another interesting use case for lip-reading technology is letting people mouth commands to their devices in silence. In this scheme, users wouldn't have to speak out loud to Siri anymore. It also opens the door to visual passwords, because people's lips move differently. A big reason consumers are reluctant to use a voice assistant is that they're shy about speaking out loud to their devices, especially in public.

BBC Appreciates Google DeepMind AI’s Interface

Researchers at Oxford University used Google's DeepMind to watch more than 5,000 hours of TV, including shows such as Newsnight, BBC Breakfast and Question Time, for the 'Lip Reading Sentences in the Wild' study. The AI analyzed a total of 118,000 sentences, a much larger sample than in previous research such as the LipNet study, which contained only 51 unique words.

The sample used in this DeepMind study comprised no fewer than 17,500 unique words, which made it a significantly harder challenge, but ultimately resulted in a much more accurate algorithm.

Tweaking the timing...

What added to the task was the fact that often the video and audio in the recordings were out of sync by up to a second.

To prepare all the samples for machine learning, DeepMind first had to assume that the majority of clips were in sync, watch them all to learn a basic relationship between mouth shapes and sounds, and then, using that knowledge, rewatch every clip and correct the audio wherever the lips were out of sync with the speech.



Only then was the system able to go through all 5,000 hours once more for the deep analysis of exactly which words corresponded to which mouth shapes and movements.
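At a high level, the system described here encodes each video frame into a feature vector, and a character-level decoder then attends over those features to decide which mouth shapes matter for each output character. A toy numpy sketch of the attention step alone (the dimensions, random vectors, and variable names are placeholders, not the trained model):

```python
import numpy as np

rng = np.random.default_rng(42)

T, d = 75, 16                          # 75 video frames, 16-dim visual features
frames = rng.standard_normal((T, d))   # stand-in for per-frame encoder outputs
query = rng.standard_normal(d)         # stand-in decoder state for one character

# Dot-product attention: score every frame against the decoder state,
# normalize with softmax, then take the weighted sum of frame features.
scores = frames @ query
weights = np.exp(scores - scores.max())
weights /= weights.sum()
context = weights @ frames             # context vector fed to the "speller"

print(weights.shape, context.shape)    # (75,) (16,)
```

The point of the attention weights is interpretability as much as accuracy: for each character the model emits, the weights say which stretch of mouth movement it was "looking at".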

A deeply stunning result – not just lip-service
The result of this research and development was a system that can interpret human speech across a wide range of speakers found in a variety of lighting and filming environments.

The system successfully deciphered phrases such as “We know there will be hundreds of journalists here as well” and “According to the latest figures from the Office of National Statistics”.

    Practical Applications Forecast for Google DeepMind and AI
  • Silent dictation in public spaces (Siri would no longer need to hear your voice).
  • Speech Recognition in Noisy Environments.
  • Improved hearing aids.

Finally, we can say that DeepMind is also positioned to provide an AI-based phone app named "Streams", which will help doctors identify patients at risk. It looks like the inclusion of artificial intelligence in people's daily lives will be the new normal in the coming age, maybe ten years from now, or maybe just five. Watch out for that.