Questions over the security of the new AI system: a case of unauthorized access

Shocking news has emerged from the technical field, raising questions about the security of a new AI system.

A group of anonymous Discord users claims to have gained access to an unreleased AI model from a company called Anthropic. The model is said to be named Claude Mythos Preview, and the company reportedly made it available only to a select few.

Anthropic says the model is powerful enough to identify and exploit software vulnerabilities at scale. Access was therefore limited to specific partners under a special initiative aimed at strengthening the security of critical systems.

However, according to reports, the group did not rely on any complex technical process; instead, it gained access by guessing the model's likely online location based on the company's old naming patterns. A member of the group who already held limited rights as an external contractor reportedly assisted in the process.

According to the information, the group is active on a private online platform dedicated to collecting information about new and unreleased models. One member said they did not use the model for any wrongdoing, only for routine tasks such as building a simple website.

Meanwhile, Anthropic has taken the claim seriously and has begun investigating the matter. It is not yet clear, however, whether any other unauthorized person has accessed the model.

Notably, the company had said this model would transform cybersecurity in the future; the news of unauthorized access to it has therefore become a matter of concern.
