Medical AI and Clinician Surveillance — The Risk of Becoming Quantified Workers

“There has been less focus on AI [artificial intelligence] trained on clinician data that health care systems and insurers could use to manage clinicians’ practice. But clinicians have reason to worry about becoming AI “data subjects.”

Quantification of clinician practice could help health care systems improve quality of care and facilitate documentation to support transparency and utilization review, and physicians should take the lead in helping to achieve those aims (one of us holds equity in medical AI companies). But medical AI tools, including those that are introduced with a goal of improving patient care, also create a glide path for turning clinicians into “quantified workers” — workers whose daily tasks are monitored and controlled by AI technologies, which deny them autonomy and the benefit of discretion based on their expertise. [..] Examples of such management [AI-based “mechanical managers”] approaches include keystroke logging, recording screenshots of workers’ computers at dictated intervals without their consent, and using sensors to track workers’ locations. Workers subject to unwanted surveillance often rate their work experience as worse than before surveillance was implemented.

There are several ways in which AI-based monitoring tools designed to benefit patients and clinicians might be used for clinician surveillance. First, ambient AI scribe tools, which transcribe and interpret patient and clinician speech to generate a structured note, have been rapidly adopted with a goal of reducing the burden associated with documentation and improving documentation accuracy. But ambient dictation systems introduce new capabilities for monitoring clinicians. By analyzing speech patterns, sentiment, and content, health care systems could use AI scribes to assess how often clinicians’ recommendations deviate from institutional guidelines.
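
To make the monitoring capability concrete, consider a minimal sketch, assuming a hypothetical institution that screens scribe transcripts against a phrase list. The rule set, names, and transcript below are invented for illustration; real systems would rely on far more sophisticated language models, but the surveillance mechanics are the same.

```python
# Hypothetical sketch: scanning an ambient-scribe transcript for phrases that a
# monitoring system might treat as deviations from institutional guidelines.
# The rule set and transcript are invented; no real system is described here.
from dataclasses import dataclass

# Assumed institutional rule set mapping a clinical topic to "off-guideline"
# recommendation phrases (illustrative only).
OFF_GUIDELINE_PHRASES = {
    "antibiotics": ["antibiotics for your cold", "z-pack just in case"],
    "imaging": ["mri right away for low back pain"],
}

@dataclass
class Flag:
    topic: str
    phrase: str
    position: int  # character offset of the first match in the transcript

def flag_deviations(transcript: str) -> list[Flag]:
    """Return each off-guideline phrase found in a visit transcript."""
    text = transcript.lower()
    flags = []
    for topic, phrases in OFF_GUIDELINE_PHRASES.items():
        for phrase in phrases:
            pos = text.find(phrase)
            if pos != -1:
                flags.append(Flag(topic, phrase, pos))
    return flags

if __name__ == "__main__":
    sample = ("Patient reports three days of congestion. "
              "Clinician: I'll give you antibiotics for your cold.")
    for f in flag_deviations(sample):
        print(f"deviation[{f.topic}] at char {f.position}: '{f.phrase}'")
```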

In addition, these systems could detect “efficiency outliers” — clinicians who spend more time conversing with patients than employers consider ideal, at the expense of conducting new-patient visits or more total visits. [..] Akin to automated quality-improvement dashboards for tracking adherence to chronic-disease–management standards, AI models may generate performance scores on the basis of adherence to scripted protocols, average time spent with each patient, or degree of shared decision making, which could be inferred with the use of linguistic analysis. Even if these metrics are established to support quality-improvement goals, hospitals and health care systems could leverage them for evaluations of clinicians or performance-based reimbursement adjustments.
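
As a sketch of how such scoring could work, the snippet below rolls per-visit metrics into a single clinician "performance score" and flags efficiency outliers by visit length. The toy data, weights, and shared-decision-making proxy are entirely invented; the point is that once these metrics exist, combining and ranking them is trivial.

```python
# Hypothetical sketch: aggregating per-visit metrics into a clinician
# "performance score" and flagging "efficiency outliers" by visit length.
# The toy data, weights, and proxies are invented for illustration.
from statistics import mean, pstdev

# Toy rows: (clinician_id, visit_minutes, adhered_to_protocol, sdm_phrase_count)
visits = [
    ("dr_a", 12, True, 1), ("dr_a", 15, True, 2),
    ("dr_b", 34, False, 6), ("dr_b", 29, True, 5),
]

def performance_scores(rows):
    by_clinician = {}
    for cid, minutes, adhered, sdm in rows:
        by_clinician.setdefault(cid, []).append((minutes, adhered, sdm))
    scores = {}
    for cid, vs in by_clinician.items():
        avg_minutes = mean(v[0] for v in vs)
        adherence = mean(1.0 if v[1] else 0.0 for v in vs)
        sdm_rate = mean(v[2] for v in vs)
        # Arbitrary weighting: shorter visits and stricter protocol adherence
        # score higher -- exactly the incentive structure the article warns about.
        scores[cid] = round(0.5 * adherence + 0.3 * (10 / avg_minutes)
                            + 0.2 * sdm_rate / 10, 3)
    return scores

def efficiency_outliers(rows, z_cutoff=1.0):
    durations = [r[1] for r in rows]
    mu, sd = mean(durations), pstdev(durations)
    return sorted({r[0] for r in rows if sd and (r[1] - mu) / sd > z_cutoff})

print(performance_scores(visits))                # {'dr_a': 0.752, 'dr_b': 0.455}
print("outliers:", efficiency_outliers(visits))  # ['dr_b']
```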

AI-based monitoring may also be used for analysis and summarization of patient messages on electronic health record (EHR) portals. Such tools could help clinicians triage patient concerns more efficiently, in some cases providing automated responses and in others escalating concerns to members of the care team. But they could also be used to monitor clinicians’ responsiveness, tone, and diagnostic reasoning. Hospitals might track response times, frequency of follow-up recommendations, or alignment with “ideal” message structures.
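
The responsiveness metrics mentioned here are straightforward to compute from message metadata alone, without reading message content. A minimal sketch, assuming an invented message-log format:

```python
# Hypothetical sketch: computing per-clinician median response times to portal
# messages. The log format and timestamps are invented for illustration.
from datetime import datetime
from statistics import median

# Toy log rows: (clinician_id, message_received, reply_sent), ISO 8601 strings.
message_log = [
    ("dr_a", "2025-06-01T09:00", "2025-06-01T09:40"),
    ("dr_a", "2025-06-02T14:00", "2025-06-03T08:15"),
    ("dr_b", "2025-06-01T11:00", "2025-06-01T11:05"),
]

def median_response_minutes(rows):
    per_clinician = {}
    for cid, received, replied in rows:
        delta = datetime.fromisoformat(replied) - datetime.fromisoformat(received)
        per_clinician.setdefault(cid, []).append(delta.total_seconds() / 60)
    return {cid: median(minutes) for cid, minutes in per_clinician.items()}

print(median_response_minutes(message_log))
# {'dr_a': 567.5, 'dr_b': 5.0}
```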

AI-generated responses to patient questions are already reported to be more empathetic than clinician responses. Reimbursement for many physicians is tied to patient-reported measures of physician–patient communication; a natural extension of this approach would be to tie reimbursement to AI interpretation of physician–patient communication. For example, clinicians may face scrutiny or financial penalties if their responses to patient messages deviate from established measures of empathy, as determined by AI — even if deviations reflect the exercise of clinical judgment. AI-driven oversight risks prioritizing algorithmic conformity over individualized care. Payers or malpractice insurers could also analyze and use communication that is routinely shared with them by health care systems (such as for billing and legal purposes) in ways that may diverge from the interests of clinicians.
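
To illustrate the failure mode, here is a deliberately crude sketch of lexicon-based "empathy" scoring; the lexicon and threshold are invented, and real systems would use learned models, but the incentive problem is identical: a terse, clinically sound reply scores as unempathetic.

```python
# Hypothetical sketch: a crude "empathy score" of the kind that could be tied
# to reimbursement. Lexicon and threshold are invented for illustration.
EMPATHY_LEXICON = {"sorry", "understand", "concern", "glad", "thank"}

def empathy_score(message: str) -> float:
    """Fraction of words drawn from the empathy lexicon."""
    words = message.lower().split()
    return sum(w.strip(".,!?") in EMPATHY_LEXICON for w in words) / max(len(words), 1)

def flag_for_review(message: str, threshold: float = 0.05) -> bool:
    return empathy_score(message) < threshold

# A terse but clinically appropriate reply falls below the threshold -- the
# "algorithmic conformity" risk described above.
reply = "Stop the medication today and call the office if the rash spreads."
print(empathy_score(reply), flag_for_review(reply))  # 0.0 True
```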

Although we aren’t aware of any examples of AI-based monitoring being used in decisions to terminate a clinician’s employment or in lawsuits against clinicians, cases of automated monitoring leading to such outcomes arose in the pre-AI era. One case involved a patient undergoing spinal surgery who had a serious intraoperative complication. Although the anesthesiologist involved reported having monitored the patient continuously, the EHR’s automatic time stamps revealed long periods with no data entry and no alarm responses — calling into question the physician’s testimony that proper monitoring had occurred. This case illustrates an important double-edged sword: as AI-based ambient monitoring systems generate more granular data on clinician–patient communication and practice patterns, use of such technology may appropriately capture patient-safety issues, but it may also have serious professional implications for clinicians. Although harms associated with AI monitoring could affect all clinicians, loss of autonomy may particularly threaten those with the least power in the workplace.
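
The time-stamp audit in that case is simple to reproduce. A minimal sketch, with invented timestamps and an assumed 15-minute cutoff, that finds documentation gaps of the kind the EHR revealed:

```python
# Hypothetical sketch: detecting long gaps in time-stamped EHR entries, as in
# the anesthesia case above. Timestamps and the cutoff are illustrative.
from datetime import datetime, timedelta

entry_times = ["2025-06-01T08:00", "2025-06-01T08:05",
               "2025-06-01T08:52", "2025-06-01T09:00"]

def documentation_gaps(timestamps, cutoff=timedelta(minutes=15)):
    """Return (start, end) pairs of consecutive entries separated by > cutoff."""
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    return [(a, b) for a, b in zip(times, times[1:]) if b - a > cutoff]

for start, end in documentation_gaps(entry_times):
    print(f"no entries between {start:%H:%M} and {end:%H:%M}")
# -> no entries between 08:05 and 08:52
```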

Clinicians who find such a future dystopian aren’t without recourse. One effective strategy in industries affected by AI quantification has involved trying to disrupt the efficacy of AI surveillance. For example, office workers who have been subject to invasive productivity surveillance have found workarounds, such as “mouse jigglers,” to thwart AI monitoring. But thwarting AI that informs patient care might harm patients or increase malpractice liability.

Another strategy draws on the power of “naming and shaming.” In 2018, Amazon patented a bracelet that would deploy haptic feedback to steer warehouse workers to place items in the correct bin. Workers alerted media outlets to the technology’s development, and the resulting coverage prompted widespread outrage. Amid a surge of unionization efforts at its warehouses, the company scrapped plans to implement these tools. The risk of negative publicity related to use of AI tools in medicine could help clinicians gain seats on AI-governance committees. Clinician leadership could support the enforcement of agreements not to use AI-generated data to penalize clinicians in employment decisions, make it harder for organizations to collect clinician-specific data, and help ensure other privacy protections.

Finally, the law may be used to resist problematic monitoring of clinicians. Unionization could be helpful in pushing back against AI surveillance; a nursing union recently protested Kaiser Permanente’s implementation of chatbots and other AI tools. Clinicians are also protected by the Occupational Safety and Health Act, which should give them a right to refuse the adoption of AI technologies that might make their workplace unsafe and to report utilization of such tools, though use of the act in this way is largely untested.

Several advocates have testified before Congress on the need for an AI worker bill of rights that would delineate how employers may use AI technologies without violating employment laws, including antidiscrimination laws such as Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act, and the Americans with Disabilities Act. A clinician bill of rights for AI, created by physician advocacy organizations in partnership with clinician and patient groups, hospital systems, AI developers, and civil society organizations, could encourage hospitals and health care systems to voluntarily adopt protections against problematic uses of AI. Provisions might include rights of information, participation, and privacy and quality assurance (see box).

Potential Provisions in a Clinician Bill of Rights for AI.*

Right of Information
Clinicians must be informed when AI is used in a patient’s care.
Right of Participation
Health care systems must commit to governance processes that involve clinicians in decisions about implementing AI tools that could affect their autonomy and livelihood; clinicians must have the opportunity to ask questions and express concerns about the use of AI tools without fear of retaliation.
Right of Privacy
Health care systems must state clearly to clinicians when and with whom AI analysis of clinician care delivery will be shared and justify sharing beyond that which is required by law or for industry self-regulation.
Quality Assurance
For AI tools that may pose more than minimal risk to patients, health care systems must commit to preimplementation review and regular postimplementation assessments, with results of that analysis shared with clinicians.
*AI denotes artificial intelligence.


Many AI innovations, including tools permitting ambient scribing or summarization, could benefit patients and clinicians. Yet medicine must heed lessons from industries in which AI adoption has resulted in reduced autonomy for workers and inferior working conditions. AI threatens to transform clinicians into data subjects; if they don’t act now, clinicians may face a future as quantified workers. By organizing, engaging in advocacy, and seeking proactive legal recourse, clinicians can help ensure that their autonomy is prioritized alongside patients’ health during the AI revolution.”

Full article: I.G. Cohen, I. Ajunwa, and R.B. Parikh, New England Journal of Medicine, June 14, 2025.