Anthropic’s AI Identifies Vulnerabilities in Nearly Every Computer System. Concern Grows.


SouthernWorldwide.com – Anthropic’s latest artificial intelligence model, named Mythos, can identify vulnerabilities in nearly every computer system in the world. That capability has led the company to exercise caution, opting not to release the model publicly out of concern that it could fall into the wrong hands.

The company, known for developing the Claude AI chatbot, disclosed in a recent website post that Mythos has already pinpointed thousands of weak points across all major operating systems and web browsers. While this function could significantly enhance the security of critical systems, it also raises alarms about the potential for malicious actors to exploit Mythos for cyberattacks against financial institutions, healthcare providers, government infrastructure, and various other organizations.

Instead of a public release, Anthropic is collaborating with a select group of major corporations, including Amazon, Apple, Cisco, JPMorgan Chase, and Nvidia. This initiative, known as Project Glasswing, aims to enable these key companies to test the AI model and bolster their defenses against cyber threats. The overarching goal is to help these entities strengthen their security posture before adversaries gain access to Mythos or similar advanced AI tools.

Cybersecurity experts emphasize that the concerns surrounding Mythos highlight the inherent risks of AI being weaponized for harmful purposes. Alissa Valentina Knight, CEO of cybersecurity firm Assail, described the situation as a “wake-up call,” stating that the threat is already present. She said defenders already struggle to keep pace with human hackers, and that AI-driven attacks, which are significantly faster and more capable, pose an even greater challenge.

The capabilities of Mythos have also captured the attention of federal officials. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell reportedly met with top bank executives to discuss Mythos and other emerging AI-related cybersecurity risks. Anthropic has also provided briefings on Mythos’s capabilities to senior U.S. government officials and key industry stakeholders.

In a separate development, IMF Managing Director Kristalina Georgieva indicated in an interview that the world currently lacks the capacity to safeguard the international monetary system against large-scale cyber threats. She noted the exponential growth of these risks and expressed a strong desire for increased focus on the necessary safeguards to ensure global financial stability in the age of AI.

Anthropic has not yet responded to requests for comment. In its post, however, the company underscored the severe consequences of misusing tools like Mythos, warning of potential fallout for economies, public safety, and national security.

These dire warnings underscore a concerning reality: hackers already have access to sophisticated AI models and are employing them for malicious activities. These activities include the creation of autonomous “agents” capable of executing attacks without human intervention.

According to cybersecurity experts, these AI-powered attacks range from the dissemination of malware and identity theft to the creation of deepfake videos and ransomware operations. A recent report by PwC highlighted that AI-enabled tools have empowered even less-skilled threat actors to conduct high-volume, high-speed attacks, while advanced adversaries are using AI to enhance precision, scale automation, and reduce attack timelines.

The report further noted that the interval between the public release of a new AI capability and its weaponization by threat actors significantly shortened in 2025, a trend expected to accelerate in 2026. Other AI tools, though not yet as effective as Mythos in uncovering software vulnerabilities, are already amplifying risks to consumers, businesses, and governments.

Zach Lewis, Chief Information Officer at the University of Health Sciences and Pharmacy in St. Louis, explained that hackers are leveraging AI to improve phishing attacks, making them more personalized and harder to detect. He anticipates that the release of Mythos will lead to a surge in discovered vulnerabilities and subsequent attacks, with cyberattacks likely to keep increasing until vulnerabilities can be patched in near real time.

Knight elaborated that AI excels at identifying software bugs due to its ability to rapidly scan vast amounts of code, a task humans struggle with. She pointed out that humans are often the weakest link in security, prone to making mistakes during the coding process, which can lead to undiscovered vulnerabilities.

Some security experts have raised questions about Anthropic’s strategic approach to the Mythos release, suggesting that the limited access might be a tactic to attract more potential customers. This speculation is fueled by the fact that both Anthropic and rival OpenAI are reportedly planning initial public offerings by the end of the year, according to The Wall Street Journal. Peter Garraghan, founder and Chief Science Officer at AI security platform Mindgard, suggested that Anthropic might be using this situation for marketing purposes, potentially in anticipation of their IPO.


Anthropic has consistently positioned itself as a leader in AI safety, emphasizing its commitment to responsible AI development and its implementation of guardrails. Malek Ben Sliman, a marketing lecturer at Columbia Business School, noted that Anthropic’s decision to control the release of Mythos and launch Project Glasswing aligns with this brand image. He believes that this approach allows Anthropic to be perceived as a protector of responsible AI while simultaneously serving as an effective marketing and advertising strategy.