Why ChatGPT Raises Trust Issues
Many recent LinkedIn posts extol the virtues of ChatGPT for copywriting, content generation, and business and marketing planning. Whilst machine-learning models such as ChatGPT produce text that may appear similar to published news content, it is easy to overlook that the base function of an ML model is to generate output that is statistically plausible by repackaging content created by others. The model does this without developing any understanding of what it produces and, although the end product may seem well-reasoned and persuasive, mistaking that output for objective fact or scientific knowledge is problematic. Doing so creates a knowledge gap in decision-making that could introduce unknown errors if the content is relied upon. In contrast, content produced by mass media and related news sources is verifiable, vetted by experienced journalists, and subject to fact-checking and editorial standards.
A scan of current news headlines shows that major media outlets have also picked up on this problem. Relevant links are included under 'Additional References' below.
I was curious what would happen if I asked ChatGPT to assess its own capabilities and limitations in this regard. Given a detailed prompt, it responded as follows:
"As an AI language model, my responses are generated based on patterns and associations learned from large amounts of text data. While I strive to provide accurate information, the accuracy of my responses may vary based on several factors, including the quality of the data used to train me and the complexity of the question being asked. In general, the accuracy of the content I provide can be comparable to the content published by journalists on leading media platforms, as both are based on publicly available information and are subject to similar standards of accuracy and reliability. However, it's important to note that the content I provide is not necessarily fact-checked or reviewed by human experts before being presented to users, whereas media platforms typically have editorial standards and practices in place to ensure the accuracy and credibility of their content. Therefore, it's always a good idea to verify the information provided by any source, including me, with other reliable sources and to exercise critical thinking and judgement when evaluating information."
In short, it is always good practice to scrutinise where information comes from. Not all information is created equal, and verifying its accuracy is a fundamental requirement for sound decision-making. In this particular case, it is advisable to supplement ChatGPT output with additional, credible sources to form a more complete picture before the end content is relied upon.
Additional References
Read the World Economic Forum article here
2 Feb 2023 - The Guardian - "ChatGPT maker OpenAI releases ‘not fully reliable’ tool to detect AI generated content"
14 Dec 2022 - Harvard Business Review - "ChatGPT Is a Tipping Point for AI"
8 Dec 2022 - The Atlantic - "ChatGPT Is Dumber Than You Think"