Navigating the Gray Area: Legal Implications of ChatGPT Use in Business

As a DC business lawyer, I’ve seen firsthand how technology is rapidly changing the way companies operate. One emerging technology that has caught my attention is the use of ChatGPT in business. While this artificial intelligence program can be incredibly useful for streamlining operations and improving customer service, its use also raises important legal questions. 

As companies navigate this gray area, it is crucial to understand the legal implications of ChatGPT use to ensure compliance and minimize potential risks. In this post, our DC business attorney explores some of the key legal considerations businesses should consider when incorporating ChatGPT into their operations.


What is ChatGPT?

ChatGPT is a "large language model tool" that can answer questions and write things that the chatbot user requests, like a poem or an email. ChatGPT can draw upon its vast knowledge base and combine that information with its understanding of words in context.

Given enough information, ChatGPT can mimic an individual's speech patterns so that the words it strings together sound like that person's work product. The more that ChatGPT sounds like a human being, the more frightened some people become.

Still, more than a million users signed up to test the chatbot, so there is great interest in this technology despite the risks ChatGPT can pose. Drawing on its language model and the enormous amount of online text it was trained on, the chatbot can create surprisingly good content, even though that content often includes incorrect information.

How Are Businesses Using ChatGPT?

Companies have seized upon ChatGPT as a way to increase efficiency at the office. A chatbot can make writing responses and other things easier and quicker in a business setting, just as electric typewriters, word processors, computers, and printers improved the process of doing work in offices when they first hit the market.

Some companies are concerned that employees might enter sensitive information into ChatGPT, so they have banned or restricted its use in the office; Amazon, Accenture, JPMorgan Chase, and Verizon are reported to have done so. Other companies, including Duolingo, Snap Inc., and Microsoft, have experimented with the technology to improve their customer service.

AI examples in business. The mental health company Koko ran an experiment using ChatGPT to respond to users of its mental health services. Needless to say, the experiment was not well received when the public learned about it.

Koko's cofounder, Rob Morris, assured people that the chatbot did not speak directly to Koko's users. Instead, only the Koko employees used the AI bot to help them write responses to the users.

Another company, DoNotPay, announced that its chatbot would advise traffic court defendants while they were in court, a plan the company's founder later abandoned.

Benefits of AI in business. Some supporters of AI tools like chatbots make wild claims about their potential uses in business, but OpenAI is quick to caution people that ChatGPT is not a magic wand.

People who use ChatGPT still need to check the chatbot's work for accuracy. Sometimes, ChatGPT writes "plausible-sounding" content that is simply wrong.

Still, there are benefits of AI in business, for example:

  1. Assigning boring, repeatable tasks to AI can make the workplace more efficient and productive.

  2. "Human error" could get reduced when businesses use AI to perform rules-based tasks, similar to how calculators reduce mathematical errors.

  3. Just as computers and copy machines allowed businesses to work faster, AI can also speed up how businesses get the job done.

  4. AI could allow companies to deliver more personalized customer service because AI could call upon a vast amount of customer information and customize the interactions between the company and the client.

  5. Because AI can process a vast amount of data in seconds, a business can stay on top of issues and act before a problem arises, rather than waiting for a machine to break down, for example, and then reacting to the situation.

Business intelligence automation. Business intelligence automation combines AI with business process management (BPM) and robotic process automation (RPA) to plow through massive amounts of data in little time and deliver useful insights for companies.

Businesses used to hire data analysts for this work; for most companies, business intelligence automation can perform the same tasks far more quickly and at a lower cost. Each company should weigh the financial value of the insights it expects from business intelligence automation, along with the cost of the technology, against the cost and benefits of having a data analyst do the work.

Business automation solutions. Using AI can help reduce the human bias that can skew the interpretation of data. Business automation solutions can also generate embedded insights and rank them, saving analysts time.

AI in law. Law firms are finding AI useful for organizing and managing documents, billing, and other routine tasks. When a firm improves its efficiency and reduces its errors, its profitability can improve. A firm can use AI to search electronic records far faster than staff could review paper files, retrieving in seconds information that could take hours to find manually.

AI legal research. Lawyers and paralegals have used computers to perform legal research for decades. AI takes this task to another level. By using legal research software together with practice management software, law firms can complete legal research faster and extract information from specific case files to increase the accuracy of the legal research results.

What Are Some Potential Legal Problems with ChatGPT?

As with any significant technological advancement, there are potential legal issues with ChatGPT. The courts will have to sort out these and other legal issues as they arise:

  • Who owns the content ChatGPT generates, and who has the right to use it?

  • The very fact that ChatGPT can share personal data it learned during training could violate data protection legislation.

  • Is the generated content protected intellectual property? Whether it is intellectual property or not, how does one protect the generated content?

  • Will third-party intellectual property rights be infringed when the chatbot is deployed? The chatbot was trained on a massive amount of copyrighted material.

  • Who is responsible if a company uses ChatGPT and it creates defamatory or offensive content?

  • Who will be at risk of liability if the AI reveals, for example, a trade secret, confidential code, or other sensitive information on the internet? Depending on what the user allows, the natural language processing technology might be configured to post content to the internet while performing its services.

  • People who use ChatGPT to generate social media posts could create incorrect or misleading content, even fake news. Because social media spreads information, accurate or false, so quickly and to so many people, the fallout could be extraordinary.

ChatGPT Risks and the Need for Corporate Policies

Employers must create corporate policies that set out reasonable steps for employees to minimize risk when using ChatGPT. The technology raises complex questions of law that should be addressed before this generative pre-trained transformer comes into widespread use.

Employers Should Consider These Risks When Employees Use ChatGPT

When an exciting tool like ChatGPT comes along, employers need to balance the artificial intelligence tool's benefits against its risks when deciding if and how to implement the automated chatbot. Here are some of the potential dangers of the commercial use of ChatGPT:

  • Workers may rely on ChatGPT-created content without exercising due diligence to check it for accuracy.

  • Without engaging in critical thinking, an employee might merely proofread ChatGPT-generated content for grammar and spelling without checking whether the information is accurate.

  • While generating content, ChatGPT could divulge confidential information, such as a trade secret or personal data.

Depending on how it is used, ChatGPT-generated responses could violate privacy laws and pose additional risks. By limiting the tool to specific tasks in specific circumstances, an employer can manage the risk of using ChatGPT.

Accuracy and Reliance on AI

Face it, some people are lazy. ChatGPT makes mistakes, so everything generated by this AI tool should be proofread before it goes out, but some people will skip that step. If an employee lets ChatGPT do some of their work without checking it for accuracy, a nonsensical answer could embarrass the company.

Also, ChatGPT relies on information it learned during training, and that data can become outdated or inaccurate; the model's knowledge cutoff can affect the accuracy of the material it creates. Employees therefore need to engage in rigorous fact-checking before sending out ChatGPT-generated content.

Privacy and Confidentiality

Privacy and confidentiality are two of the primary concerns about using ChatGPT in business. Imagine how a patient would react if they discovered that an email to their doctor's office was answered by ChatGPT rather than a human being. Who would have access to the information used by the chatbot and to ChatGPT's response? Eventually, companies might have to disclose whether content was written by a human or generated by AI.

In the near future, the quality of this AI technology might make it virtually impossible to tell AI-generated, humanlike text from something written by a real person. The trust people place in professionals like doctors, lawyers, accountants, financial advisors, and mental health service providers could erode quickly.

Other Employer Concerns

Unsettled questions about the legality of ChatGPT use in business could expose companies to legal risk. Employees may fear losing their jobs to AI that can perform their tasks faster and more efficiently, and an employer might be tempted to use AI to write a blog post or generate other content.

Employers should brace themselves for claims that they violated employment discrimination laws if they lay off workers whose content-creation duties the new technology now performs. Employers who use ChatGPT will want to talk with a DC business attorney who can build safeguards and legal protections into employment decisions that result from using AI to perform workplace tasks. Such issues should be addressed before problems arise.

River

A former attorney, River now provides SEO consultation, writes content, and designs websites for attorneys, business owners, and digital nomad influencers. He is constantly in search of the world’s best taco.

http://www.thepageonelawyer.com