ChatGPT, developed by the Microsoft-backed research lab OpenAI, has dominated the artificial intelligence (AI) news stream since its initial launch in December 2022. ChatGPT is a generative AI tool built on natural language processing (NLP) that, depending on the model a user can access, generates text output from text or image input in a format that mimics human conversation. The first publicly available tool of its kind, ChatGPT has become one of the fastest-growing consumer applications of all time, attracting 100 million monthly active users earlier this year.[1] Of course, like most new technologies, ChatGPT has its risks and limitations.
ChatGPT Is Amazing, but Sometimes It’s Amazingly Wrong
While ChatGPT has extensive potential to advance all types of tasks, there are also system flaws that plague its reliability. By now, you have likely heard about ChatGPT “hallucinations”: output that is slightly or completely incorrect, or even nonsensical, yet delivered with confidence. In one reported example, the Guardian found that, when a researcher inquired about a particular topic, ChatGPT had cited as a source a Guardian news article, complete with a made-up title and the name of a real reporter. The article in question had never been written.
ChatGPT’s answers also appear to reflect biases in the dataset it was trained on, a concern that OpenAI has addressed publicly.[2] And while OpenAI seeks to tackle misinformation, questions arise about what threshold should apply in deeming certain information “misinformation” and who is responsible for making those decisions.
Can We Govern ChatGPT?
Bill C-27, the Digital Charter Implementation Act, 2022, is currently at second reading in the House of Commons.[3] In part, the Bill would enact the Artificial Intelligence and Data Act (AIDA), the first Canadian legislation to specifically address AI.
AIDA’s current text leaves many of its “essential components” to future regulations. However, the recently released companion document to AIDA from Innovation, Science and Economic Development Canada (the “Companion Document”) provides a window into the Government of Canada’s intent and key considerations in establishing a regulatory framework for AI.[4]
A continued pain point for OpenAI, it appears, will be transparency. The Companion Document defines transparency as “providing the public with appropriate information about how high-impact AI systems are being used” and states that “the information provided should be sufficient in order to allow the public to understand the capabilities, limitations, and potential impacts of the systems”. ChatGPT, however, is a “black box” system: it is difficult to understand how it works and how it arrives at a given output.
International privacy regulators have already acknowledged the regulatory privacy concerns arising from ChatGPT. In late March, the Italian Data Protection Authority temporarily banned access to ChatGPT, citing concerns about compliance with EU privacy regulation.[5] In the UK, the government is calling on regulators to create rules for AI through “tailored, context-specific approaches that suit the way AI is actually being used in their sectors.”[6]
In early April 2023, the Office of the Privacy Commissioner of Canada announced that it had launched an investigation into OpenAI in response to a complaint alleging the collection, use and disclosure of personal information without consent.[7]
The Dark Side of ChatGPT
While every new technology has pros and cons, ChatGPT’s ability to generate human-like text raises specific concerns about its use for malicious purposes, including impersonating individuals.
Cybercriminals have exploited ChatGPT’s drafting abilities to craft more sophisticated phishing emails. While individuals can still look to other indicators of an email’s authenticity, one traditional red flag, a message riddled with typos and grammatical mistakes, may no longer be reliable. Cybercriminals can also build fake chatbots, powered by technology like ChatGPT, to trick employees into divulging sensitive information.[8] These concerns about bad actors’ use of ChatGPT led the European Union’s law enforcement agency to issue a warning about the tool’s potential for misuse,[9] identifying ChatGPT as an avenue for increased phishing attempts, disinformation, and cybercrime.[10]
Ethical hackers have put ChatGPT to the test as well. In one instance, a researcher tricked the tool into building undetectable malware designed to locate and exfiltrate certain documents, despite ChatGPT’s directive to refuse malicious requests.[11] Although OpenAI works to close off the ways users can misuse ChatGPT, there is no telling if, or when, the tool will be fully secure against inappropriate or dangerous inputs.
Where Does Liability Lie for ChatGPT’s Output?
The risk of misinformation remains one of the most significant threats to ChatGPT’s reliability. One question that arises is whether OpenAI could be held liable for ChatGPT’s incorrect statements or for the subsequent spread of that misinformation. This question is now being tested by a regional Australian mayor, who is taking formal steps toward a defamation action against OpenAI, alleging that ChatGPT erroneously identified him as a guilty party in a foreign bribery scandal when, in fact, he was the whistleblower and was never charged with a crime.[12]
Conclusion
ChatGPT has the potential to boost productivity in a great number of areas. However, before using it professionally, one must consider the associated risks and limitations. In the words of OpenAI co-founder Sam Altman, “it’s a mistake to be relying on it for anything important right now.”[13] Perhaps, for now, ChatGPT is best treated just as Bing frames it when you open a search: “What do you want to have fun with today?”
[3] Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, online: LEGISinfo <https://www.parl.ca/legisinfo/en/bill/44-1/c-27>.
[13] Sam Altman [@sama], “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. it's a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.” (December 10, 2022), online: Twitter <https://twitter.com/sama/status/1601731295792414720?lang=en>.
Any article or other information or content expressed or made available in this Section is that of the respective author(s) and not of the OBA.