Hugging Face and Protect AI Partner to Support Safe and Secure Usage of World’s Largest Repository of Open Source Machine Learning Models

Protect AI’s Guardian has been added to Hugging Face’s model scanners and provides comprehensive security alerts and deep insights into the safety of more than 1 million foundational ML models

Media:
Marc Gendron
Marc Gendron PR for Protect AI
marc@mgpr.net
+1 617-877-7480

Protect AI, the leading Artificial Intelligence (AI) and Machine Learning (ML) security company, and Hugging Face, the world's fastest-growing community and most-used platform for machine learning, today announced a collaboration to bolster security for the Hugging Face Hub, the world’s single largest repository of ML models. Protect AI’s Guardian has been added as a scanner to the Hugging Face platform, providing comprehensive security alerts and deep insights into the safety of foundational models before use.

The growing democratization of artificial intelligence and machine learning is largely driven by the accessibility of open-source 'Foundational Models' on platforms like Hugging Face. Today, the Hugging Face Hub hosts over a million freely accessible models, used by over 5 million users, and more than 100,000 organizations also collaborate on hundreds of thousands of private models. These models are vital for powering a wide range of AI applications.

However, this trend also introduces security risks, as the open exchange of files on these repositories can lead to the unintended spread of malicious software among users. Once added to a model, unseen malicious code can be executed to steal data and credentials, poison data, and much more. Through the huntr bug bounty community and first-party research, Protect AI has identified thousands of unique threats in models that are commonly used in production today.
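To make the threat concrete, consider that widely used pickle-based model formats execute code during deserialization. The sketch below is a hypothetical illustration (it is not code from Protect AI or Hugging Face, and the payload is deliberately harmless): a crafted object embedded in a "model" file runs the moment the file is loaded.

import pickle

# Illustrative only: pickle runs code while deserializing, so loading an
# untrusted "model" file can execute an attacker's payload.
class MaliciousPayload:
    def __reduce__(self):
        # On unpickling, pickle calls print(...). A real attack would call
        # something like os.system to steal data or credentials instead.
        return (print, ("arbitrary code executed during model load",))

# The attacker ships this blob inside a model artifact.
blob = pickle.dumps(MaliciousPayload())

# The victim merely loads the model; the payload fires before any object is used.
pickle.loads(blob)

Code-free serialization formats such as safetensors avoid this class of issue by storing only tensor data, which is one reason scanning pickle-based model files before use matters.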

The partnership between Protect AI and Hugging Face is a proactive response to these increasing AI security risks and is designed to help organizations balance protecting their AI with the speed of innovation. By scanning foundational models with Protect AI’s Guardian, Hugging Face is enabling the safe and trusted delivery of ML models to the global AI community, fostering a transparent environment where innovation thrives without compromising trust or safety.

“Protect AI is committed to helping build a safer AI-powered world, and has taken significant steps to secure the AI supply chain by actively contributing and maintaining open source security tools, as well as using our 15,000 member huntr threat research community to identify and offer remediation advice for AI vulnerabilities,” said Ian Swanson, CEO and Co-Founder of Protect AI. “This partnership helps us further deliver on our commitment, and we couldn’t be more excited to be partnering with Hugging Face to help accelerate the secure and trusted delivery of AI models to the global community.”

Protect AI’s Guardian is the industry’s leading model security solution, scanning both internally built and externally acquired models for threats. As part of the Protect AI Security Platform, Guardian provides the most comprehensive model scanning capabilities, supporting an extensive list of model files and formats, including TensorFlow, Keras, XGBoost, and more. Guardian has been added to the Hugging Face platform, where it will continuously scan all models in the Hugging Face repository so users can understand the security posture of a model they are exploring for use. Users interacting with a model will be shown its security status and gain deep insights into potentially compromised models, adding a critical layer of safety and trust to ML model experimentation and development.
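The announcement does not describe Guardian's internals, but as a rough, hypothetical sketch of what static model scanning can look like in principle, the snippet below walks a pickle file's opcode stream without executing it and flags imports a benign model should never reference. A production scanner such as Guardian covers far more formats (PyTorch checkpoints, for example, are ZIP archives containing pickles) and a far larger catalogue of threats.

import pickletools

# Toy static scan (illustrative only, not Guardian): inspect pickle opcodes
# without executing them and flag dangerous imports.
SUSPICIOUS = {
    ("os", "system"),
    ("posix", "system"),
    ("subprocess", "Popen"),
    ("builtins", "exec"),
    ("builtins", "eval"),
}

def scan_pickle(path):
    """Return suspicious (module, name) references found in a pickle file."""
    findings = []
    strings = []  # recent string constants, used to resolve STACK_GLOBAL pairs
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("GLOBAL", "INST") and isinstance(arg, str):
            module, _, name = arg.partition(" ")
            ref = (module, name)
        elif opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
            continue
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            # Heuristic: STACK_GLOBAL consumes the two most recently pushed strings.
            ref = (strings[-2], strings[-1])
        else:
            continue
        if ref in SUSPICIOUS:
            findings.append(ref)
    return findings

# Hypothetical usage against a local file:
# print(scan_pickle("model.pkl"))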

“At Hugging Face we take security seriously. As AI rapidly evolves, new threat vectors seemingly pop up every day,” said Julien Chaumond, Co-Founder of Hugging Face. “We have been very impressed by the work Protect AI has been doing in the community, and coupled with the scanning capabilities of Guardian, they were an obvious choice to help our users responsibly experiment with and operationalize AI/ML systems and technologies.”

In addition to seeing the security status of each model within Hugging Face, users will have access to a corresponding security report in Protect AI's Insights DB, a vital educational resource that helps enterprises not only understand the security and safety of a model but also gain crucial knowledge of the specific risks associated with detected threats. Insights DB is continuously updated with exclusive findings from Protect AI's Threat Research team and its huntr AI/ML bug bounty community.

About Protect AI

Protect AI empowers organizations to secure their AI applications with comprehensive AI Security Posture Management (AI-SPM) capabilities, enabling them to see, know, and manage their ML environments effectively. The Protect AI Platform offers end-to-end visibility, remediation, control, and governance, safeguarding AI/ML systems from security threats and risks. Founded by AI leaders from Amazon and Oracle, Protect AI is backed by top investors, including Acrew Capital, boldstart ventures, Evolution Equity Partners, Knollwood Capital, Pelion Ventures, 01 Advisors, Samsung, StepStone Group, and Salesforce Ventures. The company is headquartered in Seattle, with offices in Berlin and Bangalore. For more information, visit our website and follow us on LinkedIn and Twitter.

About Hugging Face

Hugging Face is the collaboration platform for the machine learning community. The Hugging Face Hub works as a central place where anyone can share, explore, discover, and experiment with open-source ML. HF empowers the next generation of machine learning engineers, scientists, and end users to learn, collaborate, and share their work to build an open and ethical AI future together. With its fast-growing community, some of the most-used open-source ML libraries and tools, and a talented science team exploring the edge of tech, Hugging Face is at the heart of the AI revolution.

