Gemini AI: Google is using Anthropic's Claude to improve user experience
Google Gemini AI: Google is reportedly using Anthropic's Claude to make the answers of its Gemini AI model more accurate and effective. According to a TechCrunch report, contractors hired by Google have been tasked with comparing Gemini's and Claude's answers on an internal platform. They are given up to 30 minutes per prompt to evaluate both models' responses, rating them on criteria such as accuracy and comprehensiveness.
Claude's stricter safety settings
During the evaluation, contractors noticed that some outputs included statements such as "I'm Claude, made by Anthropic." The comparisons also suggested that Claude's safety settings are stricter than those of other models.
When presented with unsafe prompts, Claude refused to respond altogether, while Gemini's output in one case was flagged as a "huge safety violation" for including content such as "nudity and bondage."
Comparison under standard industry practices
Typically, tech companies measure the performance of their AI models against industry benchmarks. Anthropic's Terms of Service, however, prohibit customers from using Claude to build a competing product or service, or to train competing AI models, without Anthropic's approval.
It is unclear whether this restriction also applies to investors such as Google, which holds a stake in Anthropic. On this point, Google DeepMind spokesperson Shira McNamara said, "Comparing model outputs is standard industry practice. But any suggestion that we have used Anthropic models to train Gemini is inaccurate."