
Lessons from analyzing 4 big surveys with AI

April 10, 2023   |   By Isaac Kadosh

At Hoag, we launched a big initiative to build a centralized patient experience across multiple Hoag networks. To get it rolling, we needed solid, data-backed evidence: we had to know what patients truly thought about our facilities and digital services.

So, we sent out surveys to thousands of patients.

There were four different surveys, each focusing on a different aspect of our hospital: the in-hospital experience, digital services, urgent care, and more.

Goals

The goal of these surveys was twofold. First, we wanted to establish a baseline of how satisfied our patients were with the existing services; that data would be crucial for shaping our future research and improvement plans. Second, we wanted a benchmark to measure against as we worked to maintain and improve satisfaction across Hoag.

Each survey consisted of ten questions, mostly multiple-choice, plus one open-ended question that let patients share their overall thoughts. We wanted comprehensive feedback to guide our decision-making.

However, analyzing that many responses, particularly the open-ended ones, was a daunting, time-consuming task. I needed a strategy for handling this wealth of information effectively.

AI to the rescue

So I decided to test out ChatGPT. It had launched just a few months earlier, and people were raving about its capabilities. I started with the multiple-choice questions, feeding the survey data into ChatGPT, eager to see how it would perform. The open-ended responses, which needed a more careful approach, would come later.

Guess what? ChatGPT handled the multiple-choice questions like a champ! It gave me all the numbers we needed, even the average scores for each survey. I was thrilled! I had the baselines ready, and I could add them to our report. Things were looking good, and we were excited to move forward.
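For context, here’s roughly the aggregation we were asking for, done by hand. This is a minimal sketch in Python, assuming a hypothetical CSV with one row per patient and 1–5 ratings in columns named q1 through q9 (none of those names are our real ones):

```python
# Minimal sketch of the multiple-choice aggregation.
# Assumes a hypothetical file "survey_responses.csv" with one row per
# patient and numeric 1-5 ratings in columns q1..q9.
import pandas as pd

df = pd.read_csv("survey_responses.csv")
question_cols = [c for c in df.columns if c.startswith("q")]

per_question = df[question_cols].mean().round(2)     # average per question
overall = round(df[question_cols].mean().mean(), 2)  # survey-wide average

print(per_question)
print(f"Overall average: {overall}")
```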

Unexpected turns

But then it was time for the open-ended question. We had hundreds of responses, and going through them one by one would take forever. That’s where ChatGPT was supposed to shine. I fired up a chat with GPT-4, thinking it would do wonders, and fed it around 300 responses in chunks of 25 at a time.
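For the curious, the workflow looked roughly like this. It’s a sketch using the OpenAI Python client; the batching mirrors what I described, but the model name and prompt wording here are illustrative, not our exact ones:

```python
# Sketch of the chunked workflow: send ~300 free-text responses to the
# model 25 at a time and collect its summaries. The prompt and model
# name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chunks(items, size=25):
    for i in range(0, len(items), size):
        yield items[i:i + size]

def summarize_batch(responses):
    numbered = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(responses))
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You summarize patient survey feedback."},
            {"role": "user", "content": f"Group these responses by theme and sentiment:\n{numbered}"},
        ],
    )
    return completion.choices[0].message.content

# summaries = [summarize_batch(batch) for batch in chunks(all_responses)]
```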

Well, here’s where things took an unexpected turn. ChatGPT started hallucinating! It listed responses that we never gave it. Can you believe that? It was making stuff up! Worse, it ignored some of the real responses we fed it and replaced them with its own creations.

At that point, I realized that trusting ChatGPT blindly was a mistake. Understanding human sentiment is a tough nut for AI systems to crack. Some responses were tricky, like when someone said, “I like Hoag, but I hate MyChart.” That has both positive and negative elements, so what should we do with it? Humans still have the upper hand when it comes to interpreting human responses.
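For responses like that, one reasonable approach is to stop forcing a single label and flag them for a human to read. Here’s a toy sketch; the cue-word lists are made up for illustration, not a real sentiment lexicon:

```python
# Toy heuristic: if a response contains both positive and negative cue
# words, route it to a human instead of trusting a single sentiment
# label. The cue lists are illustrative only.
POSITIVE = {"like", "love", "great", "helpful"}
NEGATIVE = {"hate", "slow", "confusing", "frustrating"}

def needs_human_review(response: str) -> bool:
    words = set(response.lower().split())
    return bool(words & POSITIVE) and bool(words & NEGATIVE)

print(needs_human_review("I like Hoag, but I hate MyChart."))  # True
```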

Striking the balance

So, we learned our lesson. Open-ended analysis by machines needs to be handled with caution, and we can’t fully rely on their outputs. They process text differently than we do, and humans are still better at reading emotion and nuance in free-form answers. It’s always good to question and double-check what they come up with.
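And “double-check” can be partly automated. One cheap sanity check, in hindsight: verify that every response the model quotes back actually appeared in the batch we sent it. A deliberately naive sketch, assuming the model quotes responses on their own lines in double quotes:

```python
# Naive sanity check: flag lines in the model's output that look like
# quoted responses but don't match anything we actually sent. Exact
# matching is crude (a real check would fuzzy-match), but it's enough
# to catch wholesale inventions like the ones we saw.
def find_suspect_quotes(model_output: str, sent_responses: list[str]) -> list[str]:
    sent = {r.strip().lower() for r in sent_responses}
    suspects = []
    for line in model_output.splitlines():
        text = line.strip().strip('"')
        looks_like_quote = line.strip().startswith('"') and len(text) > 20
        if looks_like_quote and text.lower() not in sent:
            suspects.append(text)
    return suspects
```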