Using ChatGPT in Research: Limitations

Things to Consider

While ChatGPT and other AI tools may sound like a great and efficient way to conduct research, there are limitations to consider:

  1. Potential Bias: ChatGPT's answers are shaped directly by the prompt you provide, so how you phrase your question can introduce bias. For example, if you frame the topic negatively (Ex: What are the disadvantages of eating broccoli every day?), ChatGPT may address only that side of the argument, because that is all you asked for. To cover the full topic, rephrase your question to include both sides of the argument, or use additional prompts.
  2. False/Outdated Information: ChatGPT is a language model, not a fact-checked database, so it is not perfect and will inevitably make mistakes. It may provide inaccurate or outdated answers to complex questions, so it is very important to double-check its responses against credible sources. In some cases, when it does not know an answer, it may make up an "answer" that merely sounds correct. In this way, ChatGPT prioritizes providing an answer regardless of accuracy.
  3. Legal Limitations: Some questions cannot be answered well by AI because the underlying information is not freely available online. Questions in specialized areas, such as the legal and medical fields, are best answered by experts. In addition, ChatGPT has no real-world experience, which can greatly affect its answers to practical questions.
  4. Incorrect Bibliographic Information: ChatGPT does not always provide accurate citations; citation rules change over time, and it may even cite sources that do not exist. It's important to double-check any citations made with AI tools.
  5. Long Answers: ChatGPT's answers tend to be unnecessarily long and formal, which can waste time in the research process.
  6. Human Insight: Chatbots cannot understand slang, idioms, regional and cultural differences, or context the way humans can. As a result, they often take questions at face value and give very literal answers. This can limit the quality of ChatGPT as a research tool, especially in the humanities and social sciences, where perspective and context play a large role in writing and research.

If you apply these limitations to the sample prompt in the previous tab ("Impacts of eating sugar on humans"):

  1. Potential Bias: The prompt did not ask for one side of the question (advantages or disadvantages), yet most of the information retrieved focused on the negative aspects of sugar consumption. This is important to consider, especially for debatable topics.
  2. False/Outdated Information: While the information provided appears correct, it's critical to fact-check it in case it came from an outdated or nonexistent source. This is especially true for factual claims, such as those in the medical and legal fields.
  3. Legal Limitations: Not all accurate information on sugar consumption is freely available online; some may only be obtainable from an expert in the field (and ChatGPT does not retrieve information directly from experts).
  4. Long Answers: The sample responses were very long for such a short prompt.
  5. Human Insight: While many studies on sugar consumption show similar findings, individual bodies may react to sugar differently. ChatGPT answers this prompt in general terms and does not account for exceptions.