Google’s new artificial intelligence model can hear, speak and translate!


Artificial intelligence is constantly evolving. While many people already use AI to help with their jobs, the technology can do far more. Google has spent years working to unlock AI's full potential, and its latest effort is AudioPaLM: a model that can listen, speak, and translate with high accuracy.

Google researchers have introduced AudioPaLM, a new language model that can listen to speech with impressive accuracy and translate it into other languages. The model uses a multimodal architecture that combines the strengths of two existing models, PaLM-2 and AudioLM.

AudioLM excels at preserving paralinguistic information such as speaker identity and tone of voice. Combining the two models yields AudioPaLM, which also draws on the linguistic expertise of PaLM-2 for a thorough understanding of both text and speech.

AudioPaLM uses a shared vocabulary that represents both speech and text with a limited set of discrete tokens. This allows the model to handle tasks such as speech recognition, text-to-speech synthesis, and speech-to-speech translation within a single architecture and training process.
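To make the shared-vocabulary idea concrete, here is a minimal sketch (not Google's actual implementation) of how one token space can cover both modalities: text tokens occupy one range and discrete audio codes are offset into a range above it, so a single decoder can emit either kind. The vocabulary sizes and function names are hypothetical.

```python
# Hypothetical sizes: a text vocabulary plus a discrete audio codebook,
# packed into one shared token space.
TEXT_VOCAB_SIZE = 32_000      # assumed text (SentencePiece-style) vocabulary
AUDIO_CODEBOOK_SIZE = 1_024   # assumed number of discrete audio codes

def audio_code_to_token(code: int) -> int:
    """Map a discrete audio code into the shared vocabulary,
    offset past the text-token range."""
    assert 0 <= code < AUDIO_CODEBOOK_SIZE
    return TEXT_VOCAB_SIZE + code

def token_kind(token: int) -> str:
    """Classify a shared-vocabulary token as text or audio."""
    return "text" if token < TEXT_VOCAB_SIZE else "audio"

# One sequence can interleave both modalities, which is why a single model
# and training objective can cover ASR, TTS, and speech-to-speech translation.
mixed = [17, 204, audio_code_to_token(5), audio_code_to_token(900)]
print([token_kind(t) for t in mixed])  # ['text', 'text', 'audio', 'audio']
```

The key design point is that once both modalities live in one token space, the task (recognize, synthesize, or translate) is just a matter of what token types appear in the input and output sequences.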

AudioPaLM outperforms existing systems in speech translation and can even perform speech-to-text translation for language pairs it has never encountered. It can also transfer a voice across languages from short spoken prompts, capturing and reproducing distinct voices in different languages.
