New Delhi: Following Microsoft, Google has become a regular headline grabber for incorporating AI features into its apps and businesses. However, it appears the company wants to be cautious about AI use in its own workplace. According to Reuters, citing insiders, the company has advised its employees on how to use AI chatbots, including Bard, Google’s rival to ChatGPT.
According to insiders, Alphabet, Google’s parent company, urged staff not to enter confidential information into these AI models, a directive the company confirmed, citing its long-standing information-security policy.
Google warns staff regarding AI models
ChatGPT and Bard are two prominent generative AI tools that can answer a wide variety of questions in a conversational format. However, there is concern that data entered into them may be used for training and assessed by “human reviewers.” The caution also extends to the direct use of computer code generated by these models: the company cited Bard’s tendency to make “undesired code suggestions” as the basis for the warning.
Google is not the first to issue such a warning. Apple and Samsung have also reportedly cautioned their employees against using AI chatbots in the workplace to avoid leaks of sensitive data.
According to Reuters, which cited Insider, the company also instructed employees not to share internal details with Bard during its pre-launch testing. The company’s privacy notice, updated on June 1, states: “Don’t include confidential or sensitive information in your Bard conversations.” Meanwhile, Bard is now available in more than 180 countries and 40 languages.