ChatGPT Or Google Bard? Privacy Or Performance? Outstanding Questions Answered


Large language models (LLMs) are becoming increasingly popular, with two of the most well-known being Google Bard and ChatGPT. Both systems can generate human-quality text, but there are key differences between the two, both functionally and legally. While there are plenty of articles discussing functionality, few have delved into the legal repercussions of using each product under its terms. This article offers a light overview of the technical differences and then a deeper dive into the legal differences of engaging with each tool.

Technically Speaking, They’re Not That Different

If you ask Bard what the differences are, it will tell you that Google Bard is a more accurate, relevant, and readable chatbot than ChatGPT. How convenient. It also seemed to hallucinate within this prompt, claiming that ChatGPT is more accessible. When prompted to explain why ChatGPT is more accessible, it replied, “I apologize for the confusion. I said that ChatGPT is more accessible because it is available in a free tier and a paid tier.” Clearly the system had forgotten that it, Google Bard, also has a free tier; in fact, its entire product is free.

When I asked ChatGPT what the difference is, it responded with “As of my knowledge cutoff in September 2021, there isn’t an AI product called ‘Google Bard’ that was developed by Google. Perhaps you’ve confused the name or the product is something that has been introduced after my last update.” Similarly unhelpful.

What Are They Good For?

Both tools are great for rough drafts. They’re relatively reliable with structured data and highly curated prompts. They’re great for creative imagination and projects that don’t require legal responsibility. They’re decently reliable when providing answers about objective facts that have been validated throughout history.

Neither tool should be used, or be allowed to be used without human oversight, in decisions that produce legal or similarly significant effects concerning consumers. Neither tool should be used to produce final, published drafts of journalism or reliable news media. Neither tool should be assumed to be reliable when deciphering unstructured data or new, untested information.

In sum, both products are fancy Magic 8 Balls. They’re very powerful in many ways but reliability and accuracy should be assumed to be experimental at best, a note that Google Bard has made very clear in its onboarding experience.

What Data Are They Using?

Google Bard has been trained on what Google is calling an “infiniset” of data chosen to enhance its dialogue, and it has real-time access to the internet, meaning Bard should be able to pull more accurate and relevant information. ChatGPT has been trained on an undisclosed data set that extends only to 2021 and does not currently have internet access for all users. However, Plus subscribers can access the beta feature by switching on the “Web browsing” toggle in Settings and selecting the GPT-4 model in the prompt view.

While the idea of giving ChatGPT or Bard complete, unfettered access to the internet seems exciting, this has been the downfall of every other internet-connected chatbot in history, and it’s safer to assume that neither of these systems will turn out much differently. Why? The internet has not raised its moral standards, and algorithms are only incrementally better at distinguishing safe from unsafe content when selecting training data.

It is problematic that these companies assume all data on the internet (copyrighted or not, trademarked or not, etc.) is safe and free to use. It’s unnerving that they assume there is no libel or slander in the data they’re using for training, or that their AI can remove it from its own training data before it’s pumped in. It’s reckless to assume their automations will vet hate speech and pedophilia, an assumption we’ve seen proven wrong over multiple decades.

It will be interesting to see how these services develop but ultimately I’m not expecting high results from any dataset that includes the cesspool of the entire internet, uncurated. At the very least, there will have to be guard rails of some sort.

Which Tool Is More Privacy Protecting?

The short summary is that neither is great. Both are highly invasive tools with robust access to incredibly sensitive information and relatively little control over any data entered into their systems. At a high level, ChatGPT should probably be considered the more privacy-protecting of the two, but let’s go through the details of why that’s true.

Overview of Google Bard’s Privacy Practices

When prompted “which tool is more privacy protecting,” Bard told me, “Google Bard is more privacy protecting than ChatGPT because it does not collect or store any personal information about its users. This means that Google Bard cannot track your browsing activity or use your data for advertising purposes. ChatGPT, on the other hand, does collect some personal information about its users, such as their IP address and device information. This information can be used to track your browsing activity and target you with advertising.” Google claims to be more privacy protecting simply by playing on the societal narrative that privacy is only about whether data is used for advertising purposes. This is a false dichotomy and incredibly dangerous to privacy.

The problem with this is that most people will believe it to be true. However, if you read the fine print this is a lie—or perhaps a hallucination—from Google Bard.

First of all, you have to be logged into a Google account to use Bard; you aren’t allowed to use a non-Google or anonymous account. So don’t tell me Google doesn’t know who I am. Second, Google does allow users to delete their prompts (as does ChatGPT), but that doesn’t mean it deletes the conversations.

As noted in their policies, Google sends conversations to human reviewers for annotation and, as you can see in the screenshot above, it does not delete conversations annotated by human reviewers. Even when you turn off the activity tracker, they keep the information for 48 hours to “process any feedback”.

To be fair to their claim, Google does delete any account information associated with the conversation, so your account info isn’t sent along with it. However, if a user were to put any PII into the system via conversation; if the user were to share confidential or proprietary information; if the user were talking to the system about their life, their finances, or anything else sensitive, all of that would be retained by Google’s system.

Despite the wealth of personally identifiable information being sent within input text, Google’s public-facing claim is that the text they save should be labeled as “anonymous” because they took the user’s name, user ID, and other “identifiers” off the actual conversation.

This is privacy greenwashing. The problem with this form of privacy is that it’s enough to satisfy those who are unaware of what’s going on but it’s nowhere near the privacy-protecting requirements defined by law. Moreover, we’ve seen that even when Google promises to delete data it’s generally a PR statement.

Additionally, if you read the Generative AI Additional Terms of Service from Google you’ll realize that this 300-word notice simply states that when using Google Bard you’re not only agreeing to the Generative AI Additional Terms of Service, you’re agreeing to its company-wide Terms of Service, meaning all the data they retain can be used across their company in Google’s giant data slush fund.

So, whether you’re an individual using these tools or a corporate entity testing them out for internal use, be aware that anything you put into the system can and will be used by Google if an annotator has touched it, something you’ll never know or be able to verify. In practice, it’s safest to assume everything you put in will be saved and used to train Google’s systems.

Overview of ChatGPT’s Privacy Practices

While Google’s Bard sounds invasive, and it is, much the same can be said about ChatGPT. We’ve already seen breaches of ChatGPT’s data and irresponsible use by corporate actors who fed proprietary company information or source code into the system. And that’s just what’s been reported. Remember: once it’s in, there’s no pulling it out, and there’s a good chance that if someone asks a similar question, or one your content could answer, your proprietary information will be repurposed by the system.

However, OpenAI’s strategy of building in public has shown that the company is at least publicly promising to do better. Another benefit of ChatGPT is that it has an API, which means it’s connected to more plugins and also has been integrated into more tools. Notably, ChatGPT is widely available in the low- and no-code community meaning it’s easier to prototype systems with and can be accessed by more people—perhaps this is what Bard meant but didn’t fully explain when it said ChatGPT is more accessible.
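To make the API point concrete, here is a minimal sketch of calling ChatGPT programmatically over OpenAI’s public chat-completions REST endpoint. The endpoint URL and payload shape reflect OpenAI’s documented API, but the helper names, prompt, and model choice here are illustrative assumptions, not code from either vendor:

```python
import json
import os
import urllib.request

# OpenAI's public chat-completions endpoint (requires an API key).
API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_payload(prompt, model="gpt-3.5-turbo"):
    """Assemble the JSON body for a single-turn chat completion request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_chatgpt(prompt):
    """Send the prompt to the endpoint and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_chat_payload(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

It is exactly this kind of few-lines integration, with no official Bard equivalent at the time of writing, that makes ChatGPT the easier tool to wire into plugins, no-code platforms, and production prototypes.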

Which Tool Is Best?

The best chatbot for you will depend on your specific needs and preferences. If you are looking for a chatbot that is connected to real-time internet results then Google Bard is a good option, but there’s no API and no way to use it in production. If you are looking for a chatbot that is more accessible and immediately usable in production code (or No-Code), then ChatGPT is a better choice. The best way to decide which chatbot is right for you is to test them out and see which one provides the results you prefer in the form you need.

While Bard and ChatGPT are the biggest names in the game, I would advise readers not to stop looking. There are thousands of alternatives popping up around the world, many of which are doing much more to protect users’ privacy and give companies a platform to build upon now and into the future.

If you’re looking for a tool that lets you use the power of ChatGPT in production while protecting the privacy of your confidential company data as well as your users’ data, check out Private AI’s Private GPT, a service created to ensure all data going to ChatGPT is monitored and any PII or sensitive details are extracted before being sent through (a privacy-focused spigot on the firehose, if you will). Private AI does this by identifying, removing, and replacing PII, PCI, and PHI within semi- and unstructured data before passing along a sanitized version of what otherwise would have been sent through.
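To illustrate the general idea of scrubbing a prompt before it leaves your network, here is a deliberately simplistic sketch. This is not Private AI’s actual implementation: the regex patterns and placeholder labels are my own assumptions, and production PII detection must catch names, addresses, and free-text identifiers that pattern matching alone will miss:

```python
import re

# Illustrative patterns only; real PII detection goes far beyond regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace each detected PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Email jane.doe@example.com or call 555-867-5309 about SSN 123-45-6789."
safe_prompt = redact(prompt)
# Only safe_prompt would be forwarded to the LLM provider;
# the original text never leaves your environment.
```

Even this toy version shows the design choice that matters: redaction happens on your side of the wire, so the LLM provider only ever sees placeholders, not the sensitive values themselves.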

Ultimately, we’re living through a giant social experiment, and legally it’s allowed to happen because these companies post disclaimers stating that it is experimental. I hope we see a future with more private instances stood up, more tools like Private GPT created, and more professionals aware of their impact. I’m a big supporter of the future of generative AI, and that does include LLMs, but we need to take this conversation about privacy seriously while developing that future.
