In a significant move to address the lack of oversight in health-related artificial intelligence (AI), the Coalition for Health AI (CHAI) launched its Applied Model Card. The tool, unveiled on Thursday, is designed to provide transparency for health AI systems and set a precedent in the largely unregulated industry. Described as a “nutrition label” for AI, the card aims to give users clear insights into a model’s development, risks, and intended applications.
CHAI’s Bold Step Toward Transparency
Since its inception in 2021, CHAI has worked to develop best practices for responsible AI use in healthcare. With more than 3,000 private and public organizations in its network, including health systems, insurers, and tech companies, the coalition has aligned a broad range of stakeholders on key principles. The Applied Model Card is its latest milestone, a collaborative effort to standardize transparency in health AI.
Dr. Brian Anderson, CHAI’s co-founder and CEO, emphasized the importance of this initiative. Speaking to Newsweek, he described the card as a foundational step toward building trust and understanding in health AI. “This isn’t just about accountability; it’s about making these tools more reliable and equitable for everyone,” Anderson said.
The card is open source, and CHAI is accepting community feedback on it until January 22, underscoring the coalition’s commitment to inclusivity and adaptability. Health tech companies are encouraged to test the model card and suggest improvements.
What the Applied Model Card Brings to the Table
The Applied Model Card introduces a structured approach to sharing critical information about health AI models. Developers can disclose details like release dates, regulatory approvals, and global availability. It also includes sections for intended use, risks, limitations, and ethical considerations.
A standout feature is the “trust ingredients” section, where developers can answer pressing questions about their models:
- Does the AI require ongoing maintenance?
- How does it address bias in its algorithms?
- Who funded its development, and who was consulted during its design?
These disclosures, alongside principles like fairness, safety, and usability, offer a comprehensive overview for potential users.
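To make that structure concrete, the short sketch below shows one way a vendor could capture these disclosures as structured, machine-readable data. It is purely illustrative: the field names, the ModelCard and TrustIngredients classes, and the example product are hypothetical and do not reflect CHAI’s actual schema or published template.

```python
# Illustrative sketch only: the fields and classes here are hypothetical,
# not CHAI's actual Applied Model Card schema.
from dataclasses import dataclass, asdict
from typing import List
import json


@dataclass
class TrustIngredients:
    """Hypothetical grouping of the 'trust ingredients' disclosures."""
    requires_ongoing_maintenance: bool
    bias_mitigation_summary: str
    funding_sources: List[str]
    parties_consulted: List[str]


@dataclass
class ModelCard:
    """Hypothetical representation of a health AI model card."""
    model_name: str
    release_date: str                 # e.g. "2024-11-15"
    regulatory_approvals: List[str]
    availability_regions: List[str]
    intended_use: str
    known_risks: List[str]
    limitations: List[str]
    ethical_considerations: List[str]
    trust_ingredients: TrustIngredients


# Example instance a vendor might publish alongside product documentation.
card = ModelCard(
    model_name="ExampleSepsisRiskModel",   # hypothetical product name
    release_date="2024-11-15",
    regulatory_approvals=["None (non-device clinical decision support)"],
    availability_regions=["United States"],
    intended_use="Flag adult inpatients at elevated sepsis risk for nurse review.",
    known_risks=["False negatives in populations underrepresented in training data"],
    limitations=["Not validated for pediatric patients"],
    ethical_considerations=["Alerts should support, not replace, clinical judgment"],
    trust_ingredients=TrustIngredients(
        requires_ongoing_maintenance=True,
        bias_mitigation_summary="Performance audited across age, sex, and race subgroups.",
        funding_sources=["Internal R&D"],
        parties_consulted=["Clinical advisory board", "Patient representatives"],
    ),
)

# Serializing to JSON makes the disclosures easy to compare across vendors.
print(json.dumps(asdict(card), indent=2))
```

In practice, CHAI’s card is a documentation template rather than a code artifact; the point of the sketch is simply that standardized fields are what make side-by-side comparison of vendors possible.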
Bridging the Gap Between Stakeholders
One of the more contentious issues in health AI is data input transparency. Health systems need to understand the data used to train AI models to determine their suitability for specific populations. However, developers often hesitate to share proprietary details, fearing a loss of competitive advantage.
CHAI has managed to broker an initial agreement on this front. While the model card is still a draft, it represents progress in reconciling these competing interests. Anderson acknowledged that the card will likely evolve over time, as stakeholders refine their expectations and standards.
The Case for AI Nutrition Labels in Healthcare
Health AI is a booming market, attracting approximately $11 billion in venture capital investments in the United States last year, according to the World Economic Forum. For healthcare providers, navigating the crowded field of vendors can be overwhelming. The Applied Model Card aims to simplify this process, enabling an “apples-to-apples comparison” between different AI systems.
Dr. Daniel Yang, vice president of AI and emerging technologies at Kaiser Permanente, noted the deluge of pitches he receives daily—many of which lack relevance to his organization’s needs. The model card could streamline evaluations by presenting standardized information, reducing time spent on preliminary research.
For health systems, these cards could become an integral part of procurement and governance processes. Anderson explained that many organizations are already requesting such disclosures for inventory management and internal governance.
The Bigger Picture: Aligning Private and Public Standards
CHAI’s efforts align closely with initiatives from federal agencies such as the FDA and the Office of the National Coordinator for Health Information Technology (ONC). Both have published sample model cards for AI-enabled medical devices and predictive tools, emphasizing risk management, performance, and limitations.
Anderson pointed out that CHAI’s model card goes further, offering more comprehensive disclosures. “It’s encouraging to see the private and public sectors working together to establish transparency standards,” he said. “This alignment is critical for building trust in these technologies.”
What Lies Ahead for Health AI
For now, participation in the Applied Model Card framework remains voluntary. While some hope the government will eventually mandate similar disclosures, others see private-sector leadership as the more pragmatic path. Either way, CHAI’s initiative sets a high bar for accountability in health AI.
This effort could pave the way for greater adoption of standardized practices, helping health systems confidently integrate AI into their operations. As the industry grapples with balancing innovation and oversight, tools like the Applied Model Card offer a roadmap for responsible AI development.