
The Risks of Using ChatGPT in the Workplace

Jun 01, 2023

"With its tireless dedication, vast knowledge, and remarkable adaptability, ChatGPT is the indispensable colleague that transcends time and expertise, empowering teams to achieve their full potential in the digital era.”


If you’re wondering where that quote comes from, it’s a response ChatGPT generated when asked to create a quote about itself.


OpenAI’s ChatGPT, in simple terms, is a general-purpose chatbot that uses artificial intelligence (AI) to understand user prompts and generate human-like responses.


Released on November 30, 2022, ChatGPT gained one million users in five days and over 100 million users in less than two months, making it the fastest-growing consumer app in history.


People are talking about it and talking to it, but it’s not all sunshine and rainbows.


Less than four months after it was released, ChatGPT suffered a data breach.

The ChatGPT Data Breach


On March 20, 2023, OpenAI discovered a bug in the Redis client open-source library, redis-py, which OpenAI uses to cache ChatGPT user information for faster recall and access.
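

To picture the caching pattern involved, here’s a minimal sketch of storing per-user data with redis-py (illustrative only, not OpenAI’s actual code; the connection details and key names are assumptions). The bug itself lived deeper in the library: a request canceled mid-flight could corrupt a pooled connection, so a later lookup could receive data cached for a different user.

```python
import json

import redis  # pip install redis

# Connect to a local Redis instance (connection details are placeholders).
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def cache_user_info(user_id: str, info: dict, ttl_seconds: int = 3600) -> None:
    """Cache a user's profile under a per-user key that expires after an hour."""
    r.setex(f"user:{user_id}:info", ttl_seconds, json.dumps(info))

def get_user_info(user_id: str) -> dict | None:
    """Return the cached profile, or None on a cache miss."""
    raw = r.get(f"user:{user_id}:info")
    return json.loads(raw) if raw else None
```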


According to OpenAI, the bug exposed some users’ personal and payment information to other users. Such data included:



  • First and last name
  • Email address
  • Payment address
  • Credit card type
  • Last four digits of their credit card number
  • Credit card expiration date


“[The bug] allowed some users to see titles from another active user’s chat history,” admitted OpenAI. “It’s also possible that the first message of a newly-created conversation was visible in someone else’s chat history if both users were active around the same time.”

What Are the Risks of ChatGPT?


With its user-friendly interface, human-like responses, and the breadth of topics it can handle, ChatGPT is racing past other large language models (LLMs) like Bard by Google and LLaMA by Meta.


However, with so much knowledge and potential at their fingertips, users can’t help but ask crucial questions: What are the drawbacks of ChatGPT? And what are the risks of using it in one’s business?

Security and Privacy Issues

According to Security Intelligence by IBM Security, “Whenever you have a popular app or technology, it’s only a matter of time until threat actors target it.”


Depending on what you use it for and how you use it, an LLM in the workplace may involve sharing sensitive or confidential information with an external service provider. If you use ChatGPT, your prompt and all its details are visible to OpenAI, and the provider may store your query and use it for further development.


If your employees use an LLM for work-related tasks and cybercriminals start targeting it, your company data could be at risk of getting leaked or made public. Because of these security and privacy risks, major companies like Apple, Samsung, JPMorgan, Bank of America, and Citigroup have banned ChatGPT in the workplace to protect confidential information.


The UK’s National Cyber Security Centre (NCSC) recommends thoroughly understanding your LLM’s terms of use and privacy policy before entering sensitive questions or prompts or allowing your team to use it in the workplace.


Read More: How to Keep Your Data Off The Dark Web

Inaccurate or Unreliable Information

OpenAI’s list of ChatGPT limitations states, “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”


ChatGPT and other LLMs generate responses based on patterns and examples from a vast amount of data available on the Internet. While their sources include scientific research, journals, and books, they also include social media posts, fake news, and offensive material.


LLM responses are not always fact-checked and may not be accurate or reliable. As a result, LLMs sometimes “hallucinate.”


According to Zapier, hallucination is a term used to describe instances when AI tools like ChatGPT generate inaccurate or completely made-up responses because “they lack the reasoning to apply logic or consider any factual inconsistencies they're spitting out. In other words, AI will sometimes go off the rails trying to please you.”


For example, writers and journalists from several agencies were shocked to find their names attached to articles and bylines that were never published; ChatGPT had fabricated the links and citations.

Legal and Ethical Repercussions

Using AI models such as ChatGPT may raise legal and ethical concerns, especially when used in the workplace.


ChatGPT may generate content that cites incorrect sources, or none at all, and inadvertently infringe on intellectual property rights or copyrights.


Another ethical risk involving ChatGPT and other AI tools is their ability to perpetuate bias and discrimination, both intentionally and unintentionally.


ChatGPT generates content based on the massive amount of training data fed to it. So if that data contains biases, ChatGPT can produce discriminatory responses.


Unfortunately, unintentional bias isn’t the only way to bring out toxic content from ChatGPT. By adjusting a system parameter, users can assign a persona to ChatGPT. A recent study shows that, when given a persona, ChatGPT’s toxicity can increase up to sixfold.
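

For context, the “system parameter” the study refers to corresponds to the system message in OpenAI’s chat API. Here’s a minimal sketch of how a persona gets assigned (the model name and persona are placeholder assumptions):

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        # The system message assigns the persona; the study found that some
        # personas sharply increased the toxicity of the model's replies.
        {"role": "system", "content": "You are Ember, a blunt talk-show host."},
        {"role": "user", "content": "What do you think of my coworkers?"},
    ],
)
print(response.choices[0].message.content)
```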


The study also shows that the toxicity of ChatGPT’s responses can vary significantly depending on the subject of the chat. For example, toxicity directed at an individual based on their sexual orientation or gender is 50% higher than toxicity directed at their race.

How to Protect Yourself When Using ChatGPT


While there certainly are risks to using ChatGPT for work-related tasks, AI tools still hold significant potential in shaping the future of business operations.


A study by Microsoft shows that many employees are looking forward to an “AI-employee alliance,” with 70% claiming they would delegate as much work as possible to AI to lessen their workloads.


The same study also shows that business leaders are more interested in using AI to empower their employees than to replace them.


If you’re a business leader who wants to leverage AI tools like ChatGPT to drive efficiency in your workplace, here are several measures you must take to protect personal and corporate data:

Proprietary Code or Algorithms

AI models like ChatGPT have the potential to store any data you enter and disseminate it to other users, so keep proprietary code and algorithms away from it. As Security Intelligence puts it, “Anything in the chatbot’s memory becomes fair game for other users.”


In April 2023, several employees of Samsung’s semiconductor business copied a few lines of confidential code from their database and pasted it into ChatGPT to fix a bug, optimize code, and summarize meeting notes. By doing so, the employees leaked corporate information, risking the possibility of the code appearing in the chatbot’s future responses.


In response to the incident, Samsung limited each employee’s ChatGPT prompt to 1,024 bytes and is considering reinstating its ChatGPT ban.
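

A cap like that is straightforward to enforce if prompts pass through an internal gateway. Here’s a minimal sketch (the limit mirrors Samsung’s reported figure; everything else is an assumption):

```python
MAX_PROMPT_BYTES = 1024  # mirrors the per-prompt cap Samsung reportedly set

def check_prompt_size(prompt: str) -> str:
    """Reject prompts whose UTF-8 encoding exceeds the policy limit."""
    size = len(prompt.encode("utf-8"))  # count bytes, not characters
    if size > MAX_PROMPT_BYTES:
        raise ValueError(
            f"Prompt is {size} bytes; policy allows {MAX_PROMPT_BYTES}. "
            "Trim it and strip any confidential content before sending."
        )
    return prompt
```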


If your company handles proprietary code or algorithms, you may want to learn from Samsung’s experience and establish corporate IT policies that clearly state how your team should (and should not) use AI models like ChatGPT.


Read More: Are You Sure You’re Cybersecure?

Sensitive Information

Even if your team doesn’t handle top-secret code, you still need to be very careful with the data you have access to. Avoid providing LLMs with any sensitive details, such as usernames, passwords, access tokens, or other credentials that, if exposed, could compromise your company’s security or privacy.
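

One practical safeguard is scrubbing anything that looks like a credential before a prompt leaves your network. The patterns below are a minimal sketch of the idea; a real deployment would lean on a dedicated secret scanner with far broader coverage:

```python
import re

# Illustrative patterns only; real secret scanners cover many more formats.
SECRET_PATTERNS = [
    re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*\S+"),
    re.compile(r"(?i)\b(api[_-]?key|access[_-]?token|secret)\s*[:=]\s*\S+"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),  # AWS access key ID format
]

def redact_secrets(prompt: str) -> str:
    """Replace credential-like strings before the prompt is sent to an LLM."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

print(redact_secrets("Fix this: password=hunter2 and key AKIAIOSFODNN7EXAMPLE"))
# -> Fix this: [REDACTED] and key [REDACTED]
```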


Sharing sensitive information with ChatGPT puts your data at risk and could also expose your company to violations of privacy and data protection regulations.


In March 2023, the Italian Data Protection Authority announced a countrywide ban on ChatGPT, citing security and privacy concerns that infringed the European Union’s General Data Protection Regulation (GDPR), arguably the strictest privacy law in the world.


Avoid putting your sensitive information and company reputation at risk. Familiarize yourself with relevant laws, regulations, and policies on data handling before incorporating AI tools like ChatGPT into the workplace.

Protected Health Information (PHI)

If you’re managing a healthcare organization, you already know that sharing PHI or any other personally identifiable information with ChatGPT is a HIPAA violation. The HIPAA Privacy Rule clearly states the need to restrict access to PHI, and entering such information into ChatGPT could result in significant legal and financial penalties for your company.


Read More: HIPAA Compliance and Your Practice


Stay HIPAA compliant and share PHI only through communication and collaboration tools vetted and designed to maintain the security and privacy of patient data. If you must access and transfer PHI via a unified communications platform, for example, make sure you partner with one that signs a business associate agreement (BAA).


Read More: A HIPAA-Compliant Phone System: What It is and Why It’s Important

Transform Your Workplace Securely With ER Tech Pros


Whether the world likes it or not, AI-powered tools like ChatGPT are revolutionizing the workplace. If you’re a company leader who wants to future-proof your business and empower your teams, the smart move is to adapt to technology, not shun it.


However, while there are massive benefits to using ChatGPT in your workflow, every new technology comes with risks and issues. Your responsibility as a leader is to ensure your company, clients, employees, and society are safe even as you transition into more automated operations.


Take careful precautions before adopting the latest groundbreaking technology. Start by seeking the IT and cybersecurity advice of reputable IT experts. If your company doesn’t have a trusted IT partner, ER Tech Pros is ready to help!


With our strong team of IT, cloud, compliance, and cybersecurity engineers, ER Tech Pros can help you assess your current technology, develop and implement necessary IT policies, protect your devices, and ensure your business is fully equipped for the future.


