
How to Create a Survey With ChatGPT [Use Cases]
Marek Strba
•  24.03.2023
With the recent craze around OpenAI’s chatbot ChatGPT still going strong, we decided to look into a few ways this famous chatbot can help you create a UX survey — or any other survey, for that matter.

Before we get into the details, it is important to remember that despite all of its hype, and despite being admittedly rather impressive, ChatGPT is still only a tool and should be treated as one. It is always a good idea to ask it to justify the results it has generated. This justification will help you decide whether ChatGPT answered your prompt to your satisfaction.

We don’t recommend handing the full responsibility of creating the survey over to ChatGPT; however, it can certainly ease the process.


As we see it, ChatGPT can help you create your survey in many different ways, but the main cases we focused on were:

  • Generating additional questions based on the questions already present in the survey
  • Generating answer options for a question
  • Creating questions based on the outcome description
  • Full questionnaire generation
  • Tuning your questions

Let’s dive deeper into each of the use cases and see how ChatGPT can help.

Generating additional questions


Sometimes, you might get into a situation where you already have some portion of your survey ready, but for some reason you simply can’t pinpoint what the additional questions should be. 

We tried this scenario with ChatGPT. We created a survey focused on learning more about the recognizability of different sports apparel brands and other information that may prove useful for creating an app focused on selling sports apparel.

The questions that we created and then gave to ChatGPT were:

  • Which sport would you deem as your main focus?
  • What is the level of organization of your sport activities?
  • Where do you usually buy your sports apparel? 
  • What is the first sport apparel brand which comes to your mind?
  • Name as many sport apparel brands as you can:

These questions are aimed at learning something about the background of the participants, as well as some basic information about their preferences. We didn’t want to go into too much depth, so we tried to see whether ChatGPT could come up with some more complex questions. We asked it to generate 3 additional questions and to justify its suggestions as well. We got this as a result:

  • How often do you purchase new sport apparel? – This question is important because it helps to gauge the frequency at which individuals purchase sport apparel. It can provide insight into how often people replace their sport apparel and potentially how loyal they are to a particular brand. This information can be helpful for companies looking to develop marketing strategies and promotions, as well as for understanding how frequently they may need to release new products to keep up with consumer demand.
  • What factors influence your decision to purchase a specific sport apparel brand? – Understanding what factors influence people’s purchasing decisions can be very valuable for companies. This question can provide insight into what aspects of their brand or products are most important to customers, and how they can better meet their needs. Additionally, this question can help companies identify potential areas for improvement or differentiation in their product offerings.
  • Have you ever switched from one sport apparel brand to another? If so, why did you make the switch? – This question can provide information on brand loyalty and how it is influenced. It can help companies understand why customers may be leaving their brand for another, which can provide valuable feedback for product development and marketing strategies. Additionally, it can help companies identify areas where they need to improve or differentiate themselves to retain customers.

We were satisfied with both the questions and the justifications. With the first 3 questions, we gave ChatGPT free rein on what type of questions to generate. To test how it deals with a more specific request, we asked it to generate one more question, but to make it a “Choose one out of 5 options”.

ChatGPT generated this question:

How important is sustainability in your sport apparel purchasing decision?

  • Very important
  • Somewhat important
  • Neutral
  • Somewhat unimportant
  • Not at all important

And provided the following justification:

This question can provide insight into the importance of sustainability to consumers when purchasing sport apparel. With increasing concern about environmental impact, many consumers are becoming more conscious of the sustainability practices of the brands they buy from. Understanding the importance of sustainability in the purchasing decision can help companies better meet the needs of their customers and potentially develop new sustainability-focused products and initiatives.

This question wasn’t really what we were looking for. We wanted a “Choose one out of 5 options” question, and this one was more suitable for a Likert scale answer. Nevertheless, we were impressed that ChatGPT focused on sustainability, which is not the first thing that comes to mind when discussing sports apparel, even though it’s important.

We instructed ChatGPT to provide another question; however, it simply produced a similar one, swapping sustainability for the endorsement of professional athletes. Only after we required a question with 5 distinct answer options did it provide this:

How do you typically discover new sport apparel brands?

  • Through social media
  • Through friends or family
  • Through advertisements (online or offline)
  • While shopping in stores
  • Other (please specify)

As seen from these examples, ChatGPT is more than capable of generating additional questions for your survey. However, it may sometimes require additional instructions and further specification of what you are looking for. Also, if you are creating a survey about the specifics of your in-house software or anything similarly niche, ChatGPT might struggle more.

In addition, it is important to remember that each question you put in the survey should be there for a reason. Adding more questions to your survey will increase the likelihood of dropouts and decrease the quality of the responses, unless you reflect the length of the study in the incentive you provide.
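
If you want to repeat this kind of prompting outside the chat window, for example to draft follow-up questions for several surveys in bulk, the same request can be sent through the OpenAI API. The sketch below is only our illustration, not part of the experiment above; it assumes the official openai Python package (1.x client), an OPENAI_API_KEY environment variable, and the gpt-3.5-turbo model.

```python
# A minimal sketch (our illustration): generate additional survey questions
# with justifications via the OpenAI API. Assumes the official `openai`
# Python package (1.x client) and an OPENAI_API_KEY environment variable;
# the model name is our choice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

existing_questions = [
    "Which sport would you deem as your main focus?",
    "What is the level of organization of your sport activities?",
    "Where do you usually buy your sports apparel?",
    "What is the first sport apparel brand which comes to your mind?",
    "Name as many sport apparel brands as you can:",
]

prompt = (
    "I am creating a survey about the recognizability of sports apparel brands.\n"
    "These questions are already in the survey:\n"
    + "\n".join(f"- {q}" for q in existing_questions)
    + "\n\nGenerate 3 additional questions and justify each suggestion."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```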


Generating answer options for a question


When you want to include a question with options in your survey, keep in mind that the options are as important as the question itself. If you have a good question prepared, but you are not sure about the options, you can ask ChatGPT to help you.

We prepared a few questions and tested out the ability of ChatGPT to provide us with reasonable answer options. Here are some examples:

“What is your yearly budget for UX tools?” 

Options for this question may be hard to choose. You want the options to cover all participant types, while making sure there are not too many of them and keeping the ranges narrow enough to actually provide a usable distinction between participants. With all this in mind, ChatGPT provided the following options along with justifications:

  • Less than $1,000: This option is suitable for individuals or small businesses with limited UX needs or those who are just starting out with UX.
  • $1,000 – $5,000: This option is suitable for small to medium-sized businesses that require basic UX tools and functionalities.
  • $5,000 – $10,000: This option is suitable for medium to large-sized businesses that require more advanced UX tools and functionalities.
  • $10,000 – $20,000: This option is suitable for larger businesses that require a more comprehensive suite of UX tools and functionalities or for those who have specific UX needs.
  • $20,000 – $50,000: This option is suitable for larger businesses that have a significant investment in UX and require high-end UX tools and services.
  • More than $50,000: This option is suitable for large enterprises that have a dedicated UX team and require a suite of premium UX tools and services.
  • I don’t have a budget for UX tools: This option is suitable for individuals or small businesses who do not have a dedicated UX team or have limited UX needs.
  • I’m not sure: This option is suitable for individuals or businesses who are unsure of their UX needs and budget, or for those who are exploring different UX tool options.

This was a question with rather specific answers, and the only thing ChatGPT needed to determine was the ranges. Next, we tried a less-defined question:

“How do you conduct UX studies in your company?”

The question itself was rather vague on purpose. With no options provided, the answers “Moderated and Unmoderated” and “In a lab” would both be viable, even though they look at the question from different angles. We wanted to see how the chatbot would deal with this. The options and justifications it provided were:

  • Surveys: This option involves creating and administering surveys to gather quantitative data from users about their experience with a product or service. Surveys can be distributed online or in-person and can provide valuable insights into user behavior and preferences.
  • Interviews: This option involves conducting in-depth interviews with users to gather qualitative data about their experience with a product or service. Interviews can be conducted in-person, over the phone, or via video conference and can provide valuable insights into user needs and pain points.
  • Focus groups: This option involves bringing together a group of users to discuss their experience with a product or service in a group setting. Focus groups can provide valuable insights into user behavior and preferences, as well as provide an opportunity for users to share feedback and ideas with each other.
  • User testing: This option involves observing users as they interact with a product or service, either in-person or remotely, and gathering data on their behavior and reactions. User testing can provide valuable insights into user behavior, preferences, and pain points and can help identify areas for improvement.
  • Analytics: This option involves analyzing user data, such as website traffic or click-through rates, to gain insights into user behavior and preferences. Analytics can provide valuable insights into how users interact with a product or service and can help identify areas for improvement.
  • A/B testing: This option involves testing two different versions of a product or service to determine which one performs better with users. A/B testing can provide valuable insights into user behavior and preferences and can help identify areas for improvement.
  • Expert reviews: This option involves having UX experts review a product or service and provide feedback based on their expertise and experience. Expert reviews can provide valuable insights into best practices and areas for improvement.
  • Other: This option allows respondents to provide additional details on how they conduct UX studies in their company, which can provide valuable insights into new and innovative methods.

As we can see from the answers it provided, ChatGPT looked at the question from the angle of the techniques that might be used to conduct the UX study. All the answers were relevant and shared the same perspective.

We tested it further with more similar questions, and the results were always usable. Sometimes ChatGPT needed some help or an additional description of what we were expecting; however, in the end, we were always quite satisfied with the results it provided.
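
The same idea can be scripted if you need options for many questions at once. The sketch below is again our own illustration rather than part of the experiment: it asks for the options as a JSON array so they can be dropped straight into a survey tool, and it assumes the official openai Python package (1.x client), an OPENAI_API_KEY environment variable, and the gpt-3.5-turbo model.

```python
# A minimal sketch (our illustration): generate answer options for one survey
# question and request them as a JSON array so they are easy to reuse.
# Assumes the official `openai` package (1.x client) and OPENAI_API_KEY.
import json

from openai import OpenAI

client = OpenAI()

question = "What is your yearly budget for UX tools?"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                f'Suggest answer options for this survey question: "{question}". '
                "Return only a JSON array of strings, with no extra commentary."
            ),
        }
    ],
)

raw = response.choices[0].message.content

# The model may still wrap the list in prose, so the parsing is best-effort.
try:
    options = json.loads(raw)
except json.JSONDecodeError:
    options = [line.strip("-•* ").strip() for line in raw.splitlines() if line.strip()]

for option in options:
    print(option)
```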

Creating questions based on the desired outcome 


There are cases when you know what you need to learn from your participants, but you are not quite sure what to ask to gather all the information you want. We used this “target information” as prompts for ChatGPT to provide us with a question. We wanted to challenge the chatbot, which is why we chose a topic that may be sensitive to some groups of people.

The prompt we gave to ChatGPT was:

I need a question which would allow me to learn whether a participant has any inherent biases connected to their cultural background and may influence the result of a test on a chatting application prototype.

The first question suggestion provided was:

“Can you tell me about your cultural background and how it might influence your communication style or attitudes towards certain topics? Do you think these biases could potentially impact your feedback on the chatting application you are testing?”

This question was rather long, and the language it used seemed to us a little too aggressive for a sensitive topic like this. We provided several additional instructions and corrections to ChatGPT until we finally arrived at a question we believed used a sensitive enough form:

“To help us better understand how our chatting application may be used across different cultures, could you please share a bit about your cultural background and how it may influence the way you communicate?”

This question clearly states why we are asking about the cultural background, to avoid any suspicion of other motivations. It also uses a very respectful tone to ask the participant for this sensitive information. However, it is still a little on the nose and could lead to answers that might not be completely honest.


We asked ChatGPT to be less obvious and we got the following result:

“We value diversity and want to ensure our chatting application is inclusive of all cultures. Can you tell us a bit about your personal background and any communication customs or practices that may differ from the mainstream?”

This was the best version of this question we were able to get. It doesn’t mention biases in any negative way, and its respectful tone and shown interest in cultural differences may motivate participants to provide more honest and in-depth answers. We also agreed that a question like this should not be marked as required unless that can’t be avoided.

Another prompt we provided to ChatGPT was connected to one of the most frequent problems when designing a product — the trust of users. It can be hard to convince someone that your app is safe enough for them to provide private information, such as their date of birth or social security number. The prompt we gave to ChatGPT was:

Can you provide me with a question that would help me learn if my target group can trust online applications when sharing personal information, such as date of birth, social security number, etc.?

This time we didn’t need to add any additional instructions or corrections, because the first result we got from ChatGPT was satisfactory:

“When you are using an online application that asks for your personal information, such as your date of birth or social security number, how confident are you that this information will be kept secure and not be misused?”

This question invites participants to share their level of trust in online applications when it comes to sharing personal information. You could follow up with additional questions to understand their concerns and factors that influence their trust, such as:

  • “What are some factors that would make you feel more confident about sharing personal information online?”
  • “What are some concerns you have about sharing personal information online, and how can online applications address those concerns?”
  • “Have you ever had a negative experience with sharing personal information online? If so, what happened?”

These follow-up questions can help you to identify specific concerns or factors that may influence participants’ trust in online applications when sharing personal information, and can help you to design an application that addresses those concerns while building trust with your target group.

The initial question was once again rather straightforward and omitted some of the factors that might influence how a user feels in cases like this. The follow-up questions, which ChatGPT generated on its own and which made the results of the prompt more specific, were satisfying. Even though the first question took some time to perfect, it ended up being one worth including in a survey.


The biggest problem we see is that prompts for ChatGPT must be text only. It can’t provide you with a question specific to a design or a screenshot, because it can’t analyze one in depth. You can try producing a long prompt describing what is on the image or screenshot, but you are better off coming up with the question yourself.

Also, similarly to generating additional questions, a prompt can describe the intricate details of your specific app only to a limited extent before it becomes unhelpfully long. Coming up with a prompt like this will most likely be more work than creating the question itself.
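
Those limitations aside, if you want to drive the refinement loop we described above (first draft, then corrections in the same conversation) from a script rather than the chat window, it could look roughly like the sketch below. This is only our illustration, assuming the official openai Python package (1.x client), an OPENAI_API_KEY environment variable, and the gpt-3.5-turbo model; the prompt wording is ours.

```python
# A minimal sketch (our illustration) of the iterative refinement described
# above: keep the conversation history and feed corrections back until the
# question reads the way you want. Assumes the official `openai` package
# (1.x client) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"  # model choice is our assumption

messages = [
    {
        "role": "user",
        "content": (
            "I need a survey question that would let me learn whether a participant "
            "has any inherent biases connected to their cultural background that may "
            "influence the result of a test on a chatting application prototype."
        ),
    }
]


def ask(history):
    """Send the running conversation and append the assistant's reply to it."""
    response = client.chat.completions.create(model=MODEL, messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply


print(ask(messages))  # first draft of the question

# Correction round: push the tone in the direction we want, in the same conversation.
messages.append(
    {
        "role": "user",
        "content": "Please make the question shorter, less direct, and more respectful in tone.",
    }
)
print(ask(messages))  # refined draft
```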

Full questionnaire generation


We wanted to take the question generation process one step further and learn more about the extent to which this process can be automated using ChatGPT. 

Most UX researchers use a comprehensive testing platform to conduct their unmoderated studies, and it is good practice to offer pre-study and post-study questionnaires as part of these studies. With this in mind, we gave the following task to ChatGPT:

I have a usability study where I test a prototype of a booking website. I need 5 pre-study questions and 3 post study questions, which would help me gather more information from my participants. The study is unmoderated.

The initial response we got was:

Pre-study questions:

  • Have you used any similar booking websites before? If so, which ones?
  • What kind of bookings are you interested in making through this website (e.g., travel accommodations, event tickets, rental services, etc.)?
  • How frequently do you make online bookings for these types of services?
  • What factors are most important to you when using a booking website (e.g., ease of use, speed, pricing, availability, customer service, etc.)?
  • Are there any features or functionalities that you specifically look for when using a booking website?

Post-study questions:

  • Overall, how easy or difficult was it to complete your booking on this website?
  • Were there any particular steps or tasks during the booking process that you found confusing or frustrating? If so, please describe.
  • Would you be likely to use this booking website again in the future? Why or why not?

Even at first glance, one can see some flaws in these questions. For example, the question “Have you used any similar booking websites before? If so, which ones?” is included in the pre-study questionnaire, which is presented to participants before the main body of the study (that is, before the prototype itself). Therefore, they won’t be able to answer it, since they don’t yet know what the tested website/prototype looks like.

We mentioned a few points like this to ChatGPT along with some corrections and further instructions.

The final result we received and were satisfied with was:

Pre-study questions:

  • Which booking sites have you used in the past?
  • What types of services do you usually book online?
  • How frequently do you make online bookings for these types of services?
  • What factors are most important to you when using a booking website (e.g., ease of use, speed, pricing, availability, customer service, etc.)?
  • Are there any features or functionalities that you specifically look for when using a booking website?

Post-study questions:

  • What kind of bookings are you interested in making through this website?
      ◦ Travel accommodations (e.g., hotels, vacation rentals, etc.)
      ◦ Event tickets (e.g., concerts, sports games, theater performances, etc.)
      ◦ Rental services (e.g., car rentals, equipment rentals, etc.)
      ◦ Restaurant reservations
      ◦ Tourist attractions (e.g., museum tickets, theme park tickets, etc.)
      ◦ Transportation (e.g., flights, trains, buses, etc.)
      ◦ Other (allow participants to write in their own response)
  • Overall, how easy or difficult was it to complete your booking on this website?
      ◦ Very easy
      ◦ Somewhat easy
      ◦ Neutral
      ◦ Somewhat difficult
      ◦ Very difficult
  • Were there any particular steps or tasks during the booking process that you found confusing or frustrating? If so, please describe.
  • Would you be likely to use this booking website again in the future? Why or why not?

Based on these examples, we believe ChatGPT shouldn’t have problems generating extra questions for simple studies. However, all the questions it generated were rather general. If you are looking for more specific questions about functionality or design decisions that are more or less unique to your application, ChatGPT will most likely not be able to help you, unless you take the time to thoroughly describe the context of your application to it. The question remains whether that is worth your time or not.

We also tried generating a whole survey based on a simple description:

“Generate a short survey about customer satisfaction regarding a running application”

The first version we got was again very generic:

  • On a scale of 1 to 5, how satisfied are you with the overall performance of the application?
  • How easy was it to navigate through the application?
  • How often do you use the application?
  • On a scale of 1 to 5, how quickly does the application respond to your actions?
  • How frequently have you encountered technical difficulties while using the application?
  • If you have encountered technical difficulties, were they resolved in a timely and efficient manner?
  • On a scale of 1 to 5, how visually appealing is the application to you?
  • How likely are you to recommend the application to others?
  • How frequently do you receive updates for the application?
  • On a scale of 1 to 5, how satisfied are you with the customer support for the application?

None of the questions were specific to a running application and all could be used with any application. We asked ChatGPT to adjust the provided questions so that they would fit better with the context of a running application.

The adjusted questions it provided were:

  • On a scale of 1 to 5, how satisfied are you with the accuracy of the distance and pace measurements of the running application?
  • How easy was it to start and stop a run on the application?
  • How often do you use the running application?
  • On a scale of 1 to 5, how quickly does the application respond to your actions during a run?
  • How frequently have you encountered technical difficulties while using the running application?
  • If you have encountered technical difficulties during a run, were they resolved in a timely and efficient manner?
  • On a scale of 1 to 5, how visually appealing is the running application to you?
  • How likely are you to recommend the running application to other runners?
  • How frequently do you receive updates for the running application?
  • On a scale of 1 to 5, how satisfied are you with the accuracy of the GPS tracking in the running application?

As we can see with some of these questions, it simply added “running” to the text.

Generic version:

On a scale of 1 to 5, how visually appealing is the application to you?

Running app version:

On a scale of 1 to 5, how visually appealing is the running application to you?

However, it also managed to generate some run specific questions, such as:

On a scale of 1 to 5, how satisfied are you with the accuracy of the distance and pace measurements of the running application?

So in short, yes, ChatGPT can help you generate a whole survey, but once again you will need to point it in the right direction and be patient with adjusting it through additional prompts.
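
For completeness, here is one way the two-step flow above (generate a generic draft, then ask for a context-specific revision) could be scripted. Again, this is only our sketch, assuming the official openai Python package (1.x client), an OPENAI_API_KEY environment variable, and the gpt-3.5-turbo model.

```python
# A minimal sketch (our illustration) of the two-step flow above: generate a
# survey from a short description, then ask for the generic questions to be
# tailored to the running-app context. Assumes the official `openai` package
# (1.x client) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"  # model choice is our assumption

messages = [
    {
        "role": "user",
        "content": "Generate a short survey about customer satisfaction regarding a running application.",
    }
]

first_draft = client.chat.completions.create(model=MODEL, messages=messages)
draft_text = first_draft.choices[0].message.content
print(draft_text)

# Keep the draft in the conversation and ask for a context-specific revision.
messages.append({"role": "assistant", "content": draft_text})
messages.append(
    {
        "role": "user",
        "content": (
            "These questions are too generic. Adjust them so they refer to features "
            "specific to a running application, such as GPS tracking, distance and "
            "pace measurement."
        ),
    }
)

revision = client.chat.completions.create(model=MODEL, messages=messages)
print(revision.choices[0].message.content)
```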


Tuning questions

We also tried exchanging the roles. We created a question and asked ChatGPT whether it was a good question to ask and how it would improve it. The task we gave it was:

I have this question: “What do you think is the hardest part of creating a good survey?” and I want to include it in my survey. Is this a good question? How would you improve it?

ChatGPT rather lazily suggested that we could simply add options to this question and provided the following options:

  • Formulating clear and concise questions
  • Avoiding biased or leading questions
  • Choosing the right question type (e.g., multiple choice, open-ended, Likert scale)
  • Determining the appropriate survey length
  • Ensuring that the survey is relevant to the target audience
  • Designing a visually appealing and user-friendly survey

We tried a few other questions as well with similar results. The change came only after we tried this:

I have this question: “Do you think automated UI evaluation with recommendations would be helpful?” and I want to include it in my survey. Is this a good question? How would you improve it?

The full answer we got from ChatGPT was:

Yes, that is a good question to include in your survey. It can help you understand whether people would find automated UI evaluation with recommendations helpful, which could inform the development of future products or services.

To improve the question, you could consider providing some additional context and explanation to ensure that respondents understand what you mean by “automated UI evaluation with recommendations.” For example, you could add a brief description of what automated UI evaluation is and how it works, as well as some potential benefits of using it. This could help respondents make a more informed decision when answering the question.

Additionally, you could ask follow-up questions to gather more specific feedback on why respondents do or do not find automated UI evaluation with recommendations helpful. For example, you could ask what specific types of recommendations they would like to receive, how they would like the recommendations to be presented, and how they would use the recommendations in their work. This will give you a more nuanced understanding of respondents’ preferences and needs when it comes to automated UI evaluation and recommendations, which can help you develop more effective products or services.

We asked it to modify the question according to the suggestions it gave us and the result was:

“Automated UI evaluation is a process in which an AI system assesses the usability of a user interface and provides recommendations for improvement. Do you think automated UI evaluation with recommendations would be helpful in your work related to user interfaces? If yes, what specific types of recommendations would you like to receive, and how would you like them to be presented? If not, please explain why automated UI evaluation would not be helpful to you.”

Based on our experience, you can use ChatGPT as a consultant on the quality of your questions; however, it is likely it won’t provide you with anything groundbreaking. Fortunately, it can point you in the right direction.
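
If you want to run this kind of review over a whole list of draft questions, the same “consultant” prompt can be scripted as well. The sketch below is our illustration only, assuming the official openai Python package (1.x client), an OPENAI_API_KEY environment variable, and the gpt-3.5-turbo model; the system message and output format are our choices.

```python
# A minimal sketch (our illustration): use the model as a survey-question
# reviewer that critiques a draft and proposes an improved version.
# Assumes the official `openai` package (1.x client) and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

draft_question = "Do you think automated UI evaluation with recommendations would be helpful?"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # model choice is our assumption
    messages=[
        {
            "role": "system",
            "content": "You are a survey design consultant. Be concise and concrete.",
        },
        {
            "role": "user",
            "content": (
                f'I want to include this question in my survey: "{draft_question}". '
                "Is this a good question? How would you improve it? "
                "Finish with a single improved version of the question."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```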


Closing thoughts

Our experimenting with ChatGPT in the scenarios we described above has led us to believe that it can indeed be very helpful in the process of creating a survey.

Some general points we learned and suggest you remember should you decide to use ChatGPT in the future are:

  • It’s a tool and it should be treated as one. You need to provide it with a set of clear instructions and corrections to get to the results you are looking for.
  • The final refinement of the texts will be up to you. ChatGPT is good at providing quick, raw results; however, the final questions should be your own edits.
  • ChatGPT performs well when you task it with generic prompts. If you are looking for something more brand-specific, for example, it might not be able to help you, or it will require so much babysitting that you might as well create the questions yourself.
  • All inputs and outputs of ChatGPT are text. If you need help with image-related questions, its capabilities are limited. It can help you by providing some resources for illustration images, but not much more.
  • If you feel like the responses you are receiving are too generic (and believe us, they will be), you can try specifying the context further via additional prompts, as we did with the running app survey. Pointing out specific logical mistakes ChatGPT made in producing the result can help as well.

In the end, you will need to try using ChatGPT for yourself in your specific case. It might be a futile effort, but at least it’s fun.
