By Emma Bergon & Ellen Hallsworth
Artificial Intelligence (AI) has the potential to transform how we deliver health care and human services. Anthropic’s recent launch of the Claude for Healthcare product demonstrated just some of AI’s potential to ease the burdens of documentation, credentialing, and prior authorizations, which represent a fraction of the back-office functions that account for up to 30% of the cost of U.S. health care.
It’s challenging for providers and payers to navigate this brave new world. We’re seeing exponential growth in AI vendors, and an “arms race” where payer investment in AI is outpacing provider adoption. Given the variety of offerings, it can be hard to know how to prioritize investment, how to select the right vendor for your organization, and how to measure success.
At BluePath Health, we’ve been working with providers and community-based organizations (CBOs) to help with AI vendor selection processes. For CBOs, this can be especially challenging: they face workforce pressures, resource constraints, and unstable funding structures. In these environments, AI can be particularly useful in reducing burnout and enabling staff to spend more time on direct service delivery. However, making decisions about investing in infrastructure is especially difficult when resources are already stretched thin.
Here are six considerations that providers should bear in mind when selecting an AI vendor:
1. Return on Investment
One of the big unanswered questions around AI’s adoption in health care is who pays. In a financially constrained environment, few public or commercial payers are willing to foot the bill for provider investment in AI.
For provider organizations, it’s important to think about the business case for integrating AI into your workflows. Will the return on investment come through increased productivity? From improved patient experience? Or from increased staff engagement and efficiency, and reduced turnover? As the body of evidence around AI in health care continues to grow, it is important to consider how these insights translate to organizational objectives, especially in human services settings. Many provider organizations are thinking about AI in terms of stemming workforce attrition and improving patient experience rather than increasing productivity, at least in the short term.
As the business case is developed, it is important to identify the metrics that will be used to measure success and ensure access to both qualitative and quantitative data to track them. Consider what analytics vendors can provide and whether data and reporting can be tailored to organizational needs.
2. Vendor Relationships
Relationships matter. While product specifications, technical capability and analytics are all key factors, many vendor selection decisions come down to customer service.
Given how new AI is, the quality of an organization’s relationships with vendors is especially important. Success with AI requires ongoing learning and iteration, making strong, productive vendor partnerships essential. Organizations should consider whether vendors understand their mission, the populations they serve, and their goals, as well as whether vendors are responsive to feedback and bring relevant health care expertise. It’s also advisable to work with vendors that have a clear product roadmap, provide defined service-level agreements (SLAs), and have incident management protocols.
Organizations should consider a vendor’s relationships not only with leadership and procurement teams, but also with frontline staff. Leadership excitement about AI is often tempered by skepticism and even suspicion among frontline staff. Adoption of AI tools is often widespread but shallow. Many organizations acquire technologies that are not consistently used by a majority of providers on a regular basis. This limits potential return on investment. What kind of adoption assistance, training, and ongoing support will vendors offer? Are they able to make a persuasive case about the benefits to providers and patients?
3. Privacy & Security
Whether an organization is a covered entity (CE) or a business associate (BA), careful consideration is required when any AI product interacts with protected health information (PHI). AI large language models (LLMs) often ingest large amounts of patient data to improve their offerings. This can conflict with HIPAA’s “minimum necessary” standard, which requires limiting the use and disclosure of patient information to the minimum needed for a given purpose.
Ask questions to understand how vendors will use organizational data, how they will de-identify it, how they will store it, and what that means for HIPAA compliance. Get clarity on ownership of the data, and what happens at the end of the contract. Most AI vendors tout high levels of HIPAA compliance, but it’s important not to take claims at face value. Conduct extensive due diligence before signing contracts.
Similarly, it is important to consider how new contracts with AI vendors affect your existing privacy and security governance. It may be necessary to amend business associate agreements, update privacy and security governance and policies, and invest in staff training.
4. Market Trends
The AI landscape shifts rapidly. For instance, over the past year, AI ambient scribing has received a huge amount of hype and investment. This year, Epic and other EHR vendors launched integrated ambient scribing, likely causing significant disruption in the market for standalone scribing. Much of the excitement has now shifted from generative AI that produces and records information and insights, to agentic AI that performs multi-step tasks like patient intake or utilization management, reducing human burden.
As organizations navigate a complex and evolving landscape, they must balance sustainable medium- to long-term decision-making with change management. In cases where standalone vendors are considered, careful attention should be given to interoperability and integration with existing systems and workflows.
Try to understand how vendors are positioned within the marketplace. Are they prioritizing similar clients? What are their plans for growth? What’s their funding position? Who are their competitors and what’s their value proposition in a crowded market? How would consolidation or adoption of their technology by existing players impact their business?
Organizations should consider the level of specificity required for a vendor to be a good fit. This includes evaluating whether a more general solution can be customized at a reasonable cost or whether a solution tailored to the needs of the populations served is necessary. As CBOs align with health and human services initiatives such as California Advancing and Innovating Medi-Cal (CalAIM), vendor fit within specific market segments becomes increasingly important. Solutions designed primarily for clinical settings may not meet the unique needs of CBOs, and experience has shown that highly medicalized AI products are often a poor fit for social care environments. While products focused on social care and human services are emerging, they have developed more slowly than clinically oriented solutions.
Defining requirements with sufficient specificity to address current needs, while maintaining flexibility to adapt to future change, is critical to long-term success.
5. Future Policy and Regulation
Policy often struggles to keep up with the pace of technological change. This is especially true for AI in the current moment.
2025 saw a flurry of legislative activity at the state level, driven by concerns around AI’s impact on patient care, patient safety, workforce, access, equity, and transparency. Illinois received a great deal of attention when HB1806 banned the use of AI chatbots in providing therapeutic care. In California, SB243 establishes guidelines requiring clear and conspicuous labeling of patient-facing content generated by AI. AB1064 would have gone further in limiting the use of any chatbot that could foreseeably cause harm to minors, but was vetoed by the Governor.
At the national level, the Food and Drug Administration (FDA) is the key regulatory body for AI and has approved over 1,000 AI-enabled devices for use in health care. The previous Federal Administration published a strategic plan for the use of AI in health care in early 2025. For most of the past year, the current administration has shown limited appetite to legislate or regulate around AI. Toward the end of 2025, however, the U.S. Department of Health and Human Services (HHS) began to engage more actively, releasing a strategy in early December for the use of AI in its internal operations. Later that month, HHS published a request for information (RFI) emphasizing the need to accelerate adoption of AI, while stressing the importance of interoperability and secure patient data in support of this goal.
The patchwork nature of state-based AI regulation, combined with the absence of consistent national regulation, presents challenges for providers and vendors operating across multiple states and may slow the pace of innovation. It’s helpful to engage with professional bodies and trade groups that are monitoring this space. It is important to track legislation in each state of operation and to develop policies and make investments that can be applied across markets. Ongoing horizon scanning and proactive risk mitigation will be essential as the regulatory environment continues to evolve in the coming years.
6. Patient Safety & Equity
AI comes with risks, both to patient safety and to equity and inclusion.
AI’s predictive capabilities can help manage some risks in care delivery more accurately. All AI requires a degree of human oversight, but overconfidence in AI and its predictive potential can breed human complacency, resulting in potential harm to patients and liability for organizations. AI hallucinations can provide misleading and potentially dangerous information to patients and clinicians.
Organizations should establish governance frameworks for AI use, including processes to review, monitor, and mitigate risks to patient or client safety. Consideration should also be given to how safeguards are communicated to patients and clients to build trust. Provider organizations may benefit from additional training for both clinical and administrative staff to support the effective and safe use of AI.
In terms of equity, AI has the potential to overcome language barriers in care delivery. There’s growing evidence that using AI in areas like patient intake can increase access to care for diverse groups, due to reduced stigma and judgment. AI can also provide after-hours support for those who might struggle to access care during the working day.
Though AI technologies don’t have the ingrained biases that their human counterparts often do, they have often been trained on selective, historical data that contains biases. There’s a risk that, while seeming “neutral”, AI perpetuates existing inequities. Just as organizations develop a business case for AI, they should also plan for its impact on equity and inclusion and track relevant metrics.
For CBOs and safety-net providers, thinking about equity and trust is especially important. Ask potential vendors questions about the data used to train their AI. Ask about data they gather on demographics and whether they can share this with you. Ask about their own organizational commitment to equity and ensure that their values align with yours.
***
2026 is likely to be a tipping point for AI’s adoption at scale in health and social care. It’s clear that from a procurement perspective, success will depend on developing and communicating a clear business case supported by metrics and KPIs, horizon scanning to understand the market and regulation, and putting patients’ security, safety, and inclusion at the heart of what you do.
If you’d like to talk to BluePath Health about how we can help you select the right AI vendor for your organization, please contact John Weir (john.weir@bluepathhealth.com).