Chat At Your Own Risk: Google Issues Warning As It Launches AI Chatbot


New Delhi: Google has opened its experimental artificial intelligence (AI) chatbot to the public, and you can now register to chat with the AI-driven bot trained on the company’s controversial language model. Google has already warned that early previews of its LaMDA (Language Model for Dialogue Applications) model “may display inaccurate or inappropriate content”.

‘AI Test Kitchen’ by Google is an app where people can learn about, experience, and give feedback on Google’s emerging AI technology. “Our goal is to learn, improve and innovate responsibly on AI together. We’ll be opening up to small groups of people gradually,” said the company.

According to Alphabet and Google CEO Sundar Pichai, ‘AI Test Kitchen’ is “meant to give you a sense of what it might be like to have LaMDA in your hands”. The ability of these language models to generate infinite possibilities shows potential, “but it also means they don’t always get things quite right”.

Risk minimised, not eliminated: Google

“And while we’ve made substantial improvements in safety and accuracy in the latest version of LaMDA, we’re still at the beginning of a journey,” said Google. “We have added multiple layers of protection to the AI Test Kitchen. This work has minimised the risk, but not eliminated it,” it added.

5 points to know about Google and Meta AI chatbots:

  1. Both Google and Meta (formerly Facebook) have recently unveiled their AI conversational chatbots, asking the public to give feedback.
  2. The initial reports are scary: the Meta chatbot, named BlenderBot 3, called Mark Zuckerberg “creepy and manipulative” and claimed that Donald Trump will always be the US president.
  3. Meta said last week that all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks.
  4. “BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better,” the company wrote in a blog post.
  5. Last month, Google fired an engineer for breaching its confidentiality agreement after he claimed that the tech giant’s conversational AI is “sentient” because it has feelings, emotions and subjective experiences.
