
AI in Mental Healthcare Settings: Balancing Innovation and Ethics

May 6, 2025
Photo 272636030 © Konevaelvira | Dreamstime.com

Henry O’Connell, CEO and Co-Founder, Canary Speech

The growing buzz around artificial intelligence solutions peaked in March when OpenAI closed a $40 billion funding round, the largest private tech deal on record. It's the latest sign that global leaders are investing major resources into AI (China recently announced its own multi-billion dollar investment fund) with or without a crystal-clear view of where it all leads.

While major investors contemplate how to capitalize on AI technologies, how should entrepreneurs forge a unique path to bringing their solutions to market? The answer begins with the basics. Even a small company can do the big things necessary: investing in cybersecurity, emphasizing data privacy, and putting ethical use of AI at the center of every business partnership.

At the ground level where artificial intelligence technologies integrate with behavioral, cognitive, and other neurological healthcare platforms in clinical settings, tech innovators are collaborating on a careful approach to the most pressing ethical concerns facing patients and providers.

B2B audits

Potential healthcare industry partners whose people and products use AI must assess each other’s protocols for AI usage with a keen eye toward the ethical handling of potentially personal information. Clinical providers, and the third-party firms they contract with, must agree to standards that reduce the potential for any AI-based technology to be misused.

With or without AI, companies operating in healthcare must abide by HIPAA requirements to be good partners. Similarly, industry leaders are quick to promote "responsible use" of AI in their own internal systems. But how can their contracting organizations and clients be sure?

A reasonable framework for auditing AI specifically is for the two partners to agree to three separate audits: Partner A completes a self-audit, Partner B audits Partner A, and a third party, Partner C, audits both A and B. The scopes of the three audits will overlap somewhat, but that is the baseline degree of scrutiny Fortune 500 companies expect from their partners in healthcare settings.

De-identifying data is an essential component of any protocol that involves patient data-sharing. Use an anonymous ID to pass data back and forth with a doctor or other clinician, and re-associate the data on the clinician's side of the firewall so that third-party businesses never learn a patient's identity.
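As a minimal sketch of that pattern, the Python below keeps the identity map on the provider's side of the firewall and sends only a random token to the outside party. The PatientRegistry class, the record number, and the payload fields are illustrative assumptions, not any particular vendor's implementation.

```python
import secrets

class PatientRegistry:
    """Lives on the provider's side of the firewall; never shared externally."""

    def __init__(self):
        self._token_to_patient = {}  # anonymous ID -> patient identity
        self._patient_to_token = {}  # patient identity -> anonymous ID

    def tokenize(self, patient_id: str) -> str:
        """Return a stable anonymous ID for a patient, minting one if needed."""
        if patient_id not in self._patient_to_token:
            token = secrets.token_hex(16)  # random, so it carries no PHI
            self._patient_to_token[patient_id] = token
            self._token_to_patient[token] = patient_id
        return self._patient_to_token[patient_id]

    def reidentify(self, token: str) -> str:
        """Re-associate third-party results with the patient, provider-side only."""
        return self._token_to_patient[token]

# Provider side: strip identity before anything leaves the firewall.
registry = PatientRegistry()
anon_id = registry.tokenize("MRN-004512")  # hypothetical record number
outbound = {"id": anon_id, "speech_sample": "..."}  # no name, DOB, or record number

# The third party returns results keyed only by the anonymous ID...
inbound = {"id": anon_id, "risk_score": 0.42}

# ...and the provider re-links them to the chart internally.
patient = registry.reidentify(inbound["id"])
```

Because the token is random rather than derived from the patient's identity, it carries nothing an outside party could reverse; the mapping needed to re-identify results never crosses the firewall.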

Third-party certificates

Many industries utilize independent certifications unaffiliated with any government body as a way to transparently demonstrate their claims to consumers. A company can be HIPAA-compliant but still fail an IT risk assessment. HIPAA establishes minimum requirements for data privacy and security, while an IT risk assessment evaluates broader vulnerabilities, including:

  • Third-party access controls (e.g., external API integrations)
  • Endpoint security (e.g., securing mobile devices used in telehealth)
  • AI model security (e.g., protecting against adversarial attacks and data poisoning)
  • Incident response readiness (e.g., how quickly a company can detect and mitigate a breach)

For mental healthcare organizations considering AI vendors, it is essential to go beyond HIPAA and request evidence of broader IT security assessments.
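One lightweight way to structure that request is a vendor-assessment checklist keyed to the categories above. The Python sketch below is illustrative only; the VendorRiskAssessment class and the evidence strings are assumptions, not a standard assessment instrument.

```python
from dataclasses import dataclass, field

@dataclass
class VendorRiskAssessment:
    """Illustrative checklist covering areas a HIPAA review alone may not reach."""
    vendor: str
    findings: dict = field(default_factory=dict)

    def record(self, area: str, passed: bool, evidence: str) -> None:
        """Log one assessment area along with the evidence the vendor supplied."""
        self.findings[area] = {"passed": passed, "evidence": evidence}

    def gaps(self) -> list:
        """Return the areas where the vendor could not demonstrate a control."""
        return [area for area, f in self.findings.items() if not f["passed"]]

assessment = VendorRiskAssessment(vendor="ExampleAI Inc.")  # hypothetical vendor
assessment.record("third_party_access", True, "API keys scoped per integration, rotated quarterly")
assessment.record("endpoint_security", True, "MDM enforced on telehealth mobile devices")
assessment.record("ai_model_security", False, "No adversarial or data-poisoning testing documented")
assessment.record("incident_response", True, "Documented 24-hour detection and mitigation SLA")

print(assessment.gaps())  # -> ['ai_model_security']
```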

The threat of cyberattacks against the healthcare industry is increasing. In 2024, there were 13 data breaches involving more than 1 million healthcare records each, including the biggest healthcare data breach of all time, which affected an estimated 100 million individuals. Penetration tests (pen tests) conducted by ethical hackers can be a valuable safeguard for healthcare cybersecurity.

System and Organization Controls 2 (SOC 2) is among five sets of standards organizations use to assess privacy, security, and administrative processes that ensure the confidentiality, integrity, and availability of data. Because SOC 2 controls closely align with the requirements of HIPAA, it is the most relevant of the five to the healthcare industry. It can provide assurance to the C-suite, business partners, and regulators that an organization has designed appropriate controls to protect data (SOC 2 Type 1) and is operating those controls effectively over time (SOC 2 Type 2).

ISO 27001 and HITRUST for information and data security, and ISO 42001 for responsible AI, are among the most sought-after certifications today. Unlike SOC 2, which focuses on reporting security controls, ISO 27001 emphasizes ongoing risk management and security governance. HITRUST, which integrates HIPAA, NIST, and other compliance standards to ensure high-level security assurance for organizations handling sensitive health data, is widely adopted among large healthcare enterprises and SaaS providers.

These safeguards are administered by multinational organizations that operate independently of national governments to help companies deliver safe products consistently. Certification has become an established signal to potential industry partners that mental healthcare data will remain safe.

With CIOs reportedly under pressure from their boards to adopt new technology, the potential exists for an "AI governance gap": innovation moving too rapidly for regulations and regulators to keep up, which in turn keeps mental health organizations from adopting AI as quickly as they otherwise might.

Against this backdrop, industry leaders are taking a cautious approach to AI implementation independent of regulatory activity, relying on transparency between partner organizations to keep new technologies patient-centric. Although AI's potential to transform mental healthcare is vast, implementation must be vigilant and deliberate to ensure the safety of patient data.