
Think Twice Before Asking for That Photograph

Even when there’s minimal medical liability risk, it is the physician’s duty to protect a patient’s reputation and well-being when circumstances call for it.

Since well before the onset of COVID-19 restrictions, doctors and other healthcare providers have relied on patients to provide photographs and videos to address medical issues, especially when seeking remote consultation. A diagnosis can often be made by viewing an image of a skin rash or other condition, allowing prompt, thorough resolution. Physicians and patients alike take this easy access to photography for granted, sometimes failing to understand the implications of electronically taking, transmitting, or sharing a picture across different platforms or social media.


In this “Case of the Month,” we examine problems related to photography and the uploading of what may be considered inappropriate content, with possible sanctions and criminal implications for the patient or the patient’s family.

Recently, The New York Times highlighted two cases in which parents of minor children, in consultation with their pediatricians, took photos of their child’s genitalia to assist in diagnosing a medical problem. One case occurred in San Francisco and the other in Houston. While we will focus primarily on the San Francisco case, the circumstances of the two are nearly identical. In both cases, intimate photos of a genital rash were taken and sent to the doctor via a secure portal for viewing on the other end by a medical professional. And in both cases, what seemed an innocent exchange of information by way of photography became a nightmare for the parents. The problem was not on the doctor’s side.

In the San Francisco case, the parents called their pediatrician to schedule a weekend appointment for an emergency consultation. The nurse scheduling the appointment asked the parents to take photos of the rash and send them so the doctor could review them prior to the visit.

Digital images were automatically uploaded from the parent’s phone to the cloud. Monitoring for inappropriate content, Google flagged the photographs and sent a warning to the phone’s owner that his accounts had been locked and suspended. Google suspended the accounts because it had detected “harmful content that was a severe violation of Google’s policies and might be illegal.” A “learn more” link led to a list of possible reasons, including child abuse and sexual exploitation.1 In addition, a report was made to the CyberTipline of the National Center for Missing & Exploited Children, which in turn reported the matter to the local police for follow-up and investigation.

A little technical background helps explain how large technology companies identify questionable content. Google uses artificial intelligence to monitor and scan the millions of images uploaded to its cloud. These capabilities have been refined over the years with the intent of detecting inappropriate content that is being circulated or trafficked, and of identifying unknown potential victims of abuse. Google has also made this technology available to other technology companies, including Facebook. Apple has delayed adopting similar technology due to privacy concerns.
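To make the mechanics a little more concrete, here is a toy Python sketch of one common ingredient of such systems: comparing a perceptual hash of an uploaded image against a database of hashes of previously identified material. This is a simplified illustration under stated assumptions, not any company’s actual implementation; production tools such as Microsoft’s PhotoDNA use far more robust hashes, detecting never-before-seen images is a separate machine-learning problem, and the value in KNOWN_HASHES below is invented for the example.

```python
# Toy perceptual-hash matching, for illustration only.
# Requires the Pillow imaging library: pip install Pillow
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Compute a simple 64-bit 'average hash' of an image."""
    # Shrink to an 8x8 grayscale thumbnail so the hash survives
    # resizing, recompression, and other minor edits.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    average = sum(pixels) / len(pixels)
    # Each bit records whether a pixel is brighter than the average.
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > average else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")

# Hypothetical database of hashes of previously identified material.
KNOWN_HASHES = {0x81C3E7FF7EC38100}

def is_flagged(path: str, threshold: int = 5) -> bool:
    """Flag an image whose hash is close to any known hash."""
    h = average_hash(path)
    return any(hamming_distance(h, k) <= threshold for k in KNOWN_HASHES)
```

Matching on a small Hamming distance rather than exact equality is what lets systems of this kind catch re-encoded or lightly edited copies of the same image.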

Once an image is flagged, a human content moderator reviews it to determine whether it meets the federal definition of child sexual abuse material. If it does, the user’s account is locked and searched for other exploitative material, and, as required by law, a report is made to the CyberTipline. The Center reports that it received 29.3 million reports in 2021, about 80,000 per day; these statistics include previously identified content that continues to circulate, and emphasis is placed on investigating new cases to expedite the protection of new potential victims. The CyberTipline shares information with other technology companies and reports that it referred more than 4,260 potential new child abuse cases to authorities in 2021. Google alone made over 600,000 reports of child abuse material and disabled over 270,000 user accounts.2
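For readers who find a flowchart in code easier to follow, the reporting sequence just described can be summarized in a purely schematic sketch. Every function below is a hypothetical stand-in for one step; it models the order of operations described in the reporting, not any provider’s actual systems.

```python
# Schematic sketch of the flag -> review -> lock -> report flow.
# All names here are invented stand-ins, not a real provider's API.

def human_moderator_confirms(image_id: str) -> bool:
    # Stand-in for a trained reviewer checking the flagged image
    # against the federal definition of child sexual abuse material.
    print(f"Reviewing flagged image {image_id} ...")
    return True  # assume the reviewer confirms, for demonstration

def lock_account(user_id: str) -> None:
    print(f"Account {user_id} locked and suspended.")

def search_account_for_material(user_id: str) -> list[str]:
    # Stand-in for searching the rest of the account for related material.
    print(f"Searching account {user_id} for other exploitative material.")
    return []

def file_cybertipline_report(user_id: str, image_id: str, related: list[str]) -> None:
    # Providers are legally required to report confirmed material to
    # NCMEC's CyberTipline, which may refer the case to local police.
    print(f"CyberTipline report filed for account {user_id}.")

def handle_flagged_image(user_id: str, image_id: str) -> None:
    """Run the steps described in the article, in order."""
    if not human_moderator_confirms(image_id):
        return  # judged a false positive; nothing further happens
    lock_account(user_id)
    related = search_account_for_material(user_id)
    file_cybertipline_report(user_id, image_id, related)

handle_flagged_image("user-123", "img-456")
```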

With respect to emerging artificial intelligence, Google continues to train and refine its monitoring capabilities so as not to be overly sensitive in flagging innocent images, such as someone giving a baby a bath or a young child running naked through a sprinkler.

Given this information, physicians must walk a fine line: asking parents to do what seems right to address their children’s medical issues can expose those parents to automated content monitoring that may deem the images inappropriate or exploitative.

Suzanne B. Haney, MD, chair of the Council on Child Abuse and Neglect for the American Academy of Pediatrics, advises parents against taking such photos of their children, even when directed by a doctor. She states, “The last thing you want is for a child to get comfortable with someone photographing their genitalia.”3

An informal poll of physicians found that many were unaware of this problem; most also said they would not ask a parent to take such sensitive or intimate photographs of a child, preferring that the child be brought in because image quality is often problematic and in-person encounters are more effective. Dr. Haney adds that if, as a last resort, a photo is necessary, the parent should take the picture, send it to the healthcare provider through a secure platform, and immediately delete it so it is not uploaded to the cloud.4

The intent of this article is to raise awareness of this issue and to stress that great discretion must be exercised when it comes to medical photography. Again, this is not so much a problem for physicians as for parents of minor children. In their defense, however, parents would claim that a physician, a trusted healthcare professional, asked them to take and send the photographs, even if more appropriate or safer alternatives were available. Both parents and physicians need to be aware of the risks of photography and act accordingly, depending on the circumstances.

In the San Francisco case, the father was investigated by the local police and was able to demonstrate that the pictures were taken and shared for a legitimate medical purpose. Even so, Google refused to restore his account after he was cleared, and he decided not to challenge the company because of the legal expense involved.

Discretion with photography is not limited to pediatric cases; it should be exercised across the board, for all patients.

Brad Dunkin is a Senior Risk Management and Patient Safety Specialist for CAP. Questions or comments related to this article should be directed to BDunkin@CAPphysicians.com.

References:

1. A Dad Took Photos of His Naked Toddler, Google Flagged Him as a Criminal. The New York Times. August 21, 2022.

2. Google AI Flagged Parents’ Accounts for Potential Abuse Over Nude Photos of Their Sick Kids. The Verge. August 21, 2022.

3. Google Flags Photos of Father’s Sick Son as Child Abuse, Informs Police. PetaPixel. August 21, 2022.

4. NYT: Parents Lose Google Accounts Over Abuse Image False Positives. PC Magazine. August 21, 2022.