New Delhi: Google has opened its experimental artificial intelligence (AI) chatbot to the public, and you can now register to chat with the AI-driven bot trained on the company’s controversial language model. Google has already warned that early previews of its LaMDA (Language Model for Dialogue Applications) model “may display inaccurate or inappropriate content”.
‘AI Test Kitchen’ by Google is an app where people can learn about, experience, and give feedback on Google’s emerging AI technology. “Our goal is to learn, improve and innovate responsibly on AI together. We’ll be opening up to small groups of people gradually,” said the company.
According to Alphabet and Google CEO Sundar Pichai, ‘AI Test Kitchen’ is “meant to give you a sense of what it might be like to have LaMDA in your hands”. The ability of these language models to generate infinite possibilities shows potential, “but it also means they don’t always get things quite right”.
Risk minimised, not eliminated: Google
“And while we’ve made substantial improvements in safety and accuracy in the latest version of LaMDA, we’re still at the beginning of a journey,” said Google. “We have added multiple layers of protection to the AI Test Kitchen. This work has minimised the risk, but not eliminated it,” it added.
5 points to know about Google and Meta AI chatbots:
- Both Google and Meta (formerly Facebook) have recently unveiled their AI conversational chatbots, asking the public to give feedback.
- The initial reports are unsettling: Meta’s chatbot, named BlenderBot 3, described Mark Zuckerberg as “creepy and manipulative” and claimed that Donald Trump will always be the US president.
- Meta said last week that all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks.
- “BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better,” the company said in a blog post.
- Last month, Google fired an engineer for breaching its confidentiality agreement after he claimed that the tech giant’s conversational AI is “sentient” because it has feelings, emotions and subjective experiences.