Executive Perspective from Dr. Bickkie Solomon
President, Founder, and Executive Consultant, Stat Rx LLC
From your perspective, what breaks first as AI adoption accelerates toward 2030?
Dr. Solomon:
Organizational readiness and ownership break first, not the technology.
AI initiatives fail when organizations deploy tools without clear executive accountability, defined decision rights, and escalation pathways. As adoption accelerates, AI rapidly exposes misalignment across leadership, clinical operations, IT, legal, and compliance. Those gaps quickly become enterprise risk.
From an executive perspective, AI amplifies weak ownership structures. Without a named enterprise sponsor and a clearly accountable owner, pilots either stall or scale prematurely, creating exposure rather than value.
That is why AI must be treated as a change- and risk-management initiative, not just a technical deployment. When implemented correctly, AI improves workflow efficiency, increases traceability, reduces burnout, strengthens compliance, and delivers measurable return on investment, all while reducing risk and allowing clinicians to focus on higher-value work.
Dr. Solomon:
Healthcare leaders must treat AI governance as an enterprise operating and risk function, not as a purely technical exercise or a static, policy-only one.
Unlike less regulated industries, healthcare requires continuous regulatory readiness, not episodic compliance. In pharmacy, that means assuming daily inspection readiness for oversight bodies such as the Food and Drug Administration, Drug Enforcement Administration, state boards of pharmacy, the Centers for Medicare and Medicaid Services, The Joint Commission, and state departments of health. These are representative examples rather than an exhaustive list, and the oversight landscape varies by healthcare setting and scope of practice.
Governance must explicitly integrate patient safety, clinical accountability, privacy and professional liability, and full auditability and traceability. These are operational requirements that must be designed into AI use cases before pilots begin, not added later.
Effective AI governance also requires standing multidisciplinary oversight, including executives, clinical leadership, pharmacy operations, compliance, legal, informatics, and data leaders, with clear ownership, decision rights, and escalation paths.
Most importantly, governance must scale in parallel with pilots. If governance follows deployment, clinicians lose trust, executives lose defensibility, and adoption stalls regardless of technical performance. In healthcare, AI only scales when leadership can explain, audit, and defend decisions every day, not just during an inspection.
Dr. Solomon:
You pilot and scale AI by integrating it into existing systems and workflows, not by creating parallel work or added layers of process that increase cognitive and operational burden.
AI must be layered into current platforms and processes, so it removes friction rather than adding operational burden. The objective is to eliminate repetitive tasks, increase traceability, and allow clinicians to work at the top of their profession, not introduce another tool they have to manage.
From a change-management standpoint, frontline staff must be engaged early in planning, not after a pilot struggles. They work the system every day and surface workflow constraints, safety risks, and practical blind spots that leadership often cannot see.
Transparency around goals, impact, and expectations is critical. When staff understand how AI improves their work and reduces burden, adoption increases. When they perceive it as surveillance or extra work, adoption fails.
Ultimately, successful pilots and scalable AI depend on frontline adoption. If a solution requires daily clinician participation and they reject it, the pilot will fail regardless of technical performance. This is a change-management discipline focused on workflow integration, ownership, and trust, not a siloed technology decision.
Dr. Solomon:
You manage underperforming pilots through structured continuous improvement, not abandonment or blind persistence.
A pilot is a learning vehicle, not a pass-fail event. Pilots should be managed using Plan-Do-Check-Act (PDCA) cycles to enable rapid feedback loops, along with Lean Six Sigma principles, specifically Define, Measure, Analyze, Improve, Control (DMAIC), to identify root causes before any decision to scale.
Continuous Quality Improvement must be embedded from day one, with defined success criteria, metrics, and review cadence. This process must also operate within a Just Culture framework. Teams need confidence that pilots are intended to identify system and process issues, not penalize individuals.
When a pilot underperforms, leaders must assess whether the issue is data quality, workflow integration, training, governance design, or ownership, rather than defaulting to blaming the technology. This disciplined approach prevents premature scaling, limits downstream risk, and preserves trust across the organization.
Dr. Solomon:
The greatest AI risks emerge at the intersection of human judgment, automation, and unclear accountability.
In the pharmacy domain, these risks surface quickly because pharmacy is a high-risk, high-stakes, and highly visible area of healthcare. While the operational footprint may be relatively small, the downstream impact of failure (patient harm, regulatory exposure, or service disruption) is disproportionately large.
For example, when AI is applied to sterile compounding activities, such as supporting beyond-use dating, monitoring environmental trends, prioritizing batches, or assisting with formulation verification, the risk is not the alert itself. The risk lies in over-reliance on automated output, unclear ownership of final clinical or operational decisions, performance drift that is not actively monitored, and limited ability to trace outputs back to source data during regulatory review.
These are operational and governance risks rather than technical failures. At an enterprise level, leaders must define where human oversight applies, establish clear decision ownership, and ensure auditability and traceability across the full lifecycle so AI-supported decisions remain explainable, defensible, and adjustable in real-world practice.
Dr. Solomon:
You balance agility and standardization by stabilizing the foundation and allowing controlled flexibility above it.
Data standards, governance structures, safety controls, and audit mechanisms must remain consistent and predictable. Agility belongs at the use-case, pilot, and iteration level.
In pharmacy, Boards of Pharmacy have established expectations for documentation, traceability, access controls, licensing, and inspection readiness. These controls do not change when piloting an AI-supported workflow.
What evolves is how the use case is tested, refined, and scaled based on evidence.
When organizations attempt to make foundational controls flexible, risk increases. When everything is locked down, innovation stalls. Clear separation between what must remain stable and what can evolve enables both safety and speed at enterprise scale.
Dr. Solomon:
There are several early warning signs that consistently indicate weak data foundations.
These include heavy manual effort to prepare data, inconsistent clinical or operational definitions, limited ability to trace AI outputs back to source data, and over-reliance on vendors to compensate for internal gaps.
In pharmacy, this may appear as inconsistent interpretation of formulary status, beyond-use dating, or inventory classifications, which undermines confidence in AI-supported decisions.
Data relevance over time is equally critical. Using pre–COVID-19 utilization or supply chain data to predict future-state demand ignores fundamental shifts in care delivery, shortages, and sourcing patterns. Models built on outdated assumptions may be technically sound but operationally misleading.
When these conditions exist, AI is not creating value. It is masking structural weaknesses.
Dr. Solomon:
The critical mindset shift is moving from treating AI as a series of projects to treating it as a durable organizational capability.
By 2030, success will not be defined by who adopted AI first, but by who built systems that executives can defend, clinicians trust, and organizations can sustain over time. That requires clear executive ownership, governance and auditability embedded from the outset, and a disciplined focus on measurable return.
In healthcare and pharmacy, this also means using AI to remove low-value, repetitive work that contributes to burnout, while strengthening supply chain resilience in an environment where drug shortages are persistent and disruptive.
AI is a tool. The value comes from how it is governed, integrated into operations, and scaled over time, not from the technology itself.
Dr. Bickkie Solomon, PharmD, MBA-HM, BCSCP, CPEL, CPH, CSSBB, is a healthcare executive, pharmacist, and Founder of Stat Rx LLC, with over 20 years of leadership experience, including 14 years in healthcare leadership across complex, highly regulated environments. A former Vice President of Pharmacy and current Director of Pharmacy and Assistant Professor of Pharmacy, she has held enterprise accountability for multi-site health systems and $100M+ portfolios, and advises organizations on AI governance, auditability, and scalable adoption aligned with patient safety, regulatory readiness, and sustainable operational execution.
This interview reflects independent industry perspectives shared by Dr. Bickkie Solomon. The views expressed are her own and are intended for general industry discussion only. They do not represent the policies, positions, or implementation guidance of any current or former employer or affiliated healthcare organization.