Google’s attempts to make Bard as good as ChatGPT might mean that ethics have taken a back seat
Once upon a time (two years ago, in 2021), Alphabet Inc., the parent company of Google LLC, vowed to study the ethics of AI. In fairness, a group dedicated to AI ethics already existed at Google, tasked with providing feedback on how ethically sound the company's products are.
So before March of 2023, which marks the release of Bard to a closed-off group of individuals, that ethics group was hard at work trying to convince Google that Bard wasn't ready for even a limited rollout.
Despite that, the company proceeded as planned and released Bard, seemingly with the sole aim of putting a competitor to ChatGPT and the Microsoft-backed OpenAI on the board. This, in turn, left the ethics group demoralized and in shambles, as numerous key players left Google soon after.
Did Google forgo ethics just to release Bard early?
Bard can do loads more than just chitchat and provide information. Composition, coding and image rendering are among its many talents.
Bloomberg reports that Google seemingly fed Bard low-quality information just so it could unveil it sooner. The claim is backed by internal company documents that Bloomberg examined, but did not specify, and by the accounts of current and former members of the ethics group.
Naturally, the Big G didn't take this lying down. It claims that ethics remains a top priority for Bard and for AI in general. That would make sense, given that Google hesitated to dabble in AI for years precisely because of the moral dilemmas involved.
Yet it seemingly took rising competition for the company to change its overall stance. ChatGPT and OpenAI, and arguably any modern AI, would not exist without Google's very own research, so is it wrong to want a piece of the delicious pie if you grew the ingredients for it?
Google's Bard: pathological liar or fast learner?
Now, imagine a Pixel X Pro with tons of exclusive features, powered by Bard. Is that not a product?
Google very much believes that all safety checks were in place before it released Bard… as an experiment, not a product. Is that label a form of risk prevention? And if it is, how come we're expecting numerous Bard-powered features for services such as Docs, Slides, Gmail and YouTube, which are effectively standalone products, to come out of said experiment? Is that ethical?
Google's ethics group has a response, and it is hesitation: the hesitation to speak up at all, because those who do reportedly get a "You are just trying to slow down the process" in return. Has ethics taken a back seat to business ventures? Food for thought.
Before releasing Bard in March, Google granted its employees internal access to the AI in order to gather feedback. Here are some snippets of what Google employees had to say about Bard:
- Pathological liar
- Cringe-worthy
- Provided advice that would end in disaster
- "... worse than useless: please do not launch"
Google launched Bard anyway. But here is a different perspective: limited, yet still public, access is an opportunity for Bard to learn and correct itself. After all, Google is prolific in terms of algorithms, so is it far-fetched to imagine that all of this is part of a deliberate plan to let Bard learn and grow, just as ChatGPT did in the past?
Again: food for thought.