- Google is warning staff not to put confidential material into chatbots, Reuters reported.
- Several companies have issued similar warnings, but Google's also covers its own chatbot, Bard.
- Google also told engineers not to use AI tools for writing code, as they could produce undesirable suggestions.
Google, one of the biggest players in the AI race, is warning its staff about using chatbots — including its own, Reuters reported.
Its parent company, Alphabet, told employees they shouldn’t put confidential material into the likes of ChatGPT and Bard, four people familiar with the situation told the news agency.
That’s because AI companies train their chatbots’ understanding of language by using messages sent by users. Human reviewers could read the chats and therefore see internal information, or the AI could reproduce and leak it by itself, as one study found.
Similar alerts have been issued by companies including Walmart, Microsoft, and Amazon — which said it’d seen ChatGPT answers that “closely” resembled internal material.
Google's warning is particularly interesting because it covers its own chatbot as well.
It also cautioned its engineers not to use the AI tools for writing code.
Earlier this month, Google announced that Bard is now able to execute code by itself to better answer questions about logic and math. That followed an April update that allowed it to generate code.
The company told Reuters that while Bard can be helpful for programmers, it may also give undesirable code suggestions.
Insider reported in February that Google previously warned staff training Bard ahead of its release not to give it internal information.
In guidance given to Googlers rewriting the chatbot’s responses, the company said: “Don’t describe Bard as a person, imply emotion, or claim to have human-like experiences.”
Google did not immediately respond to a request for comment from Insider, made outside normal working hours.