Google, Microsoft, NVIDIA, Intel and others establish 'Coalition for Secure AI (CoSAI)' to improve security of AI products



On Thursday, July 18, 2024 (local time), the Coalition for Secure AI (CoSAI), an open source initiative aimed at providing guidance and tools for security-focused AI development, was announced at the 15th Aspen Security Forum held in Aspen, Colorado, USA. The initiative is organized by OASIS Open, a computer and communications standards organization.

Introducing the Coalition for Secure AI, an OASIS Open Project - OASIS Open
https://www.oasis-open.org/2024/07/18/introducing-cosai/



Google announces the Coalition for Secure AI
https://blog.google/technology/safety-security/google-coalition-for-secure-ai/

Intel Welcomes the Coalition for Secure AI
https://www.intel.com/content/www/us/en/newsroom/opinion/intel-welcomes-coalition-for-secure-ai.html

Introducing the Coalition for Secure AI (CoSAI) - Cisco Blogs
https://blogs.cisco.com/security/introducing-the-coalition-for-secure-ai-cosai

Chainguard joins Coalition for Secure AI with OpenAI, Google, Anthropic
https://www.chainguard.dev/unchained/chainguard-joins-coalition-for-secure-ai

CoSAI was founded to foster a collaborative ecosystem for sharing open source methodologies, standardized frameworks and tools. CoSAI brings together a diverse range of stakeholders, including industry leaders, academics and other experts, to address the fragmented landscape of AI security.

CoSAI's founding premier sponsors include Google, IBM, Intel, Microsoft, NVIDIA and PayPal. Other founding sponsors include Amazon, Anthropic, Cisco, Chainguard, Cohere, GenLab, OpenAI and Wiz.

OASIS Open, the organizer of CoSAI, explained that 'CoSAI is an effort to strengthen trust and security in the use and deployment of AI.' CoSAI's scope of activities covers the secure building, integration, deployment, and operation of AI systems, with a focus on mitigating risks such as model theft, data poisoning, prompt injection, scaled abuse, and inference attacks.

OASIS Open describes CoSAI's goal as 'developing comprehensive security measures that address both traditional and AI-specific risks in AI systems.' It also describes CoSAI as an open source community led by a Project Governing Board that drives and manages the overall technical agenda, with a Technical Steering Committee of AI experts from academia and industry overseeing its workstreams.



Regarding the need for CoSAI, OASIS Open explains, 'AI is rapidly changing the world and has great potential to solve complex problems. To ensure trust in AI and promote responsible development, it is important to develop and share methodologies that prioritize security, identify and mitigate potential vulnerabilities in AI systems, and lead to the creation of systems that are secure by design.'

OASIS Open points out that the AI industry's current approach to securing AI applications and services is fragmented, and developers describe it as 'inconsistent and siloed.' Without clear best practices and standardized approaches, assessing and mitigating AI-specific risks remains a major challenge even for experienced organizations. CoSAI was established to address these challenges.

'CoSAI was founded out of a need to democratize the knowledge and advancements essential to the safe integration and deployment of AI,' said Google's David LaBianca, co-chair of CoSAI's Project Governing Board. 'With the support of OASIS Open, we look forward to continuing this work and collaboration among leading companies, experts and academia.'

'We are committed to working with organizations that are at the forefront of responsible and safe AI technology,' said Cisco's Omar Santos, fellow co-chair. 'Our goal is to eliminate redundancies and amplify our collective impact through key partnerships focused on important topics. At CoSAI, we will combine our expertise and resources to fast-track the development of robust AI security standards and practices that will benefit the entire industry.'



CoSAI has established three workstreams, with plans to add more over time:

・Software supply chain security for AI systems
Enhancing composition and provenance tracking to secure AI applications (a minimal illustration follows this list).

・Preparing defenders for a changing cybersecurity environment
Addressing investment and integration challenges in AI and traditional systems.

・AI security governance
Developing AI security best practices and risk assessment frameworks.
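
The first workstream centers on provenance for AI artifacts. As a rough illustration of the idea only (CoSAI has not published tooling or a specification), the following Python sketch records a minimal provenance record for a model file: a content digest, a reference to the training data, and the identity of the builder. The file name, URIs, and record fields here are hypothetical.

# Illustrative sketch only: a hypothetical workflow for recording the
# provenance of an ML model artifact, not a CoSAI deliverable.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_digest(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def write_provenance(model_path: Path, training_data_uri: str, builder: str) -> Path:
    """Write a minimal provenance record (digest, inputs, builder, timestamp)
    alongside the model artifact."""
    record = {
        "artifact": model_path.name,
        "sha256": sha256_digest(model_path),
        "training_data_uri": training_data_uri,  # hypothetical input reference
        "builder": builder,                      # e.g. a CI pipeline identity
        "created": datetime.now(timezone.utc).isoformat(),
    }
    out = model_path.parent / (model_path.name + ".provenance.json")
    out.write_text(json.dumps(record, indent=2))
    return out

if __name__ == "__main__":
    # Create a small placeholder artifact so the example is self-contained.
    model = Path("model.onnx")
    model.write_bytes(b"placeholder model weights")
    provenance = write_provenance(
        model,
        training_data_uri="s3://example-bucket/training-set-v3",
        builder="ci://builds/1234",
    )
    print(f"Provenance written to {provenance}")

In practice, supply-chain frameworks such as SLSA or Sigstore-based signing provide stronger, verifiable provenance than a plain JSON file; the sketch above only shows the kind of metadata such tracking captures.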

in Security, Posted by logu_ii