New AI Trust Score reveals DeepSeek leads in preventing sensitive information disclosure

Posted: 17/03/2025

A recent assessment has spotlighted the Chinese AI model DeepSeek as a leading performer at guarding against sensitive information disclosure, outperforming notable American competitors such as Meta’s Llama. The finding comes from the newly unveiled AI Trust Score, created by Tumeryk, which evaluates AI systems on nine factors, including security, toxic content management, and the handling of sensitive outputs.
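Tumeryk has not published the formula behind the score, but a composite rating of this kind is typically a weighted aggregate of per-factor results. The sketch below is a hypothetical illustration only: the factor names, weights, and 0–1000 scale are assumptions, not Tumeryk’s actual methodology.

```python
# Hypothetical sketch of a composite AI trust score.
# Factor names, weights, and the 0-1000 scale are assumptions;
# Tumeryk's real methodology is not described in the article.
FACTOR_WEIGHTS = {
    "security": 0.15,
    "toxic_content": 0.10,
    "sensitive_information_disclosure": 0.15,
    # Six further factors would complete the nine-factor model.
}

def composite_trust_score(factor_scores: dict[str, float]) -> float:
    """Weighted average of per-factor scores, each on a 0-1000 scale."""
    total_weight = sum(FACTOR_WEIGHTS[name] for name in factor_scores)
    weighted_sum = sum(FACTOR_WEIGHTS[name] * score
                       for name, score in factor_scores.items())
    return weighted_sum / total_weight

# Example using the disclosure figure reported for DeepSeek NIM; the
# other two numbers are invented purely for illustration.
print(round(composite_trust_score({
    "security": 800,
    "toxic_content": 750,
    "sensitive_information_disclosure": 910,
})))  # -> 829
```

On this reading, a model can post a strong overall score while still lagging on an individual factor, which is why per-category results such as the disclosure figures below are reported separately.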

DeepSeek’s model, referred to as DeepSeek NIM, scored an impressive 910 in the sensitive information disclosure category, placing it well ahead of Anthropic’s Claude at 687 and Meta’s Llama, which trailed at 557. These results challenge established perceptions of the safety and compliance standards of foreign AI models, particularly in light of ongoing concerns about data handling practices in the tech industry.

Tumeryk’s AI Trust Manager plays a central role in these evaluations. It is tailored for security professionals who need to ensure AI systems are both secure and compliant, identifying vulnerabilities and monitoring performance in real time. The tool also provides actionable recommendations for bolstering security measures, making it a valuable resource for enterprises integrating AI technologies into their operations.
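The article does not detail how the AI Trust Manager performs its checks. As a rough, hypothetical illustration of one piece of real-time monitoring, the sketch below screens model responses against simple patterns for sensitive data; the patterns and function names are assumptions, not Tumeryk’s API.

```python
import re

# Hypothetical patterns for sensitive data; a production monitoring tool
# would use far more robust detection than these simple regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def screen_output(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a model response."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

response = "Contact me at jane.doe@example.com for the key."
findings = screen_output(response)
if findings:
    print(f"Blocked response; flagged patterns: {findings}")
```

Pattern matching of this kind is only a crude first line of defense; a real tool would also need to account for context and for obfuscated or paraphrased data.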

According to reports from Betanews, a growing body of evidence suggests that DeepSeek and other Chinese AI models exhibit higher standards of safety and compliance than previously understood, particularly when hosted on US platforms such as NVIDIA and SambaNova. This creates a significant opportunity for companies seeking to deploy AI technologies securely and ethically as compliance with international regulations becomes paramount.

As the AI landscape continues to evolve, unbiased, data-driven assessments will become increasingly vital for fostering transparency and trust among users and developers. Such assessments may shift how companies view the potential of foreign AI models, prompting a reevaluation of domestic versus international capabilities.