Prompts & Don'ts: A savvy guide to ethically engaging with LLMs

Published: Jul 12, 2023

Language has the power to inspire, persuade, and connect. With the advent of AI in the realm of language, we've embarked on a journey that has extended human potential in ways we could never have imagined.

Large language models (LLMs) such as ChatGPT, built on the GPT-4 architecture, are an exciting manifestation of this evolution, enabling an incredible range of applications.

With the power of these innovative technologies comes the responsibility to use them ethically and legally. Here, we look at the "Prompts & Don'ts" of working with LLMs to help you navigate potential pitfalls and adhere to best practices.

1. Content matters: prompts & responses

Prompt in the right direction
The power of LLMs is in the prompts. Use clear, specific prompts to get the best results. But remember, you're responsible for the prompts you use.
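For teams working with an LLM through an API, the difference is easy to see in practice. The sketch below is a minimal illustration only: it assumes the openai Python SDK (v1+), an API key in the OPENAI_API_KEY environment variable, and an illustrative model name, none of which are prescribed by this guide.

```python
# A minimal sketch of vague vs. specific prompting, assuming the openai
# Python SDK (v1+) and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Vague prompt: leaves the model to guess audience, scope, and format.
vague_prompt = "Write something about contracts."

# Specific prompt: states audience, scope, length, and format explicitly.
specific_prompt = (
    "Write a 150-word plain-language summary of what a non-disclosure "
    "agreement is, aimed at a startup founder with no legal background, "
    "as three short bullet points."
)

for prompt in (vague_prompt, specific_prompt):
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")
```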

Don't rely on the model's filters
While LLMs are designed with safeguards against producing inappropriate responses, these safeguards are not foolproof. Never rely solely on an LLM's built-in filters; always monitor and control the output, especially in public-facing applications.
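In practice, that can mean adding your own screening layer between the model and your users. The sketch below is one possible approach rather than a definitive implementation: it assumes the openai Python SDK (v1+) and uses its moderation endpoint as an example of an extra check, which you would combine with your own domain-specific rules, logging, and human review.

```python
# A minimal sketch of screening model output before it reaches end users,
# assuming the openai Python SDK (v1+). In production you would layer your
# own domain-specific rules, logging, and human review on top of this.
from openai import OpenAI

client = OpenAI()

def generate_reply(user_message: str) -> str:
    """Generate a reply and hold it back if the moderation check flags it."""
    completion = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user", "content": user_message}],
    )
    reply = completion.choices[0].message.content

    # Built-in safeguards are not foolproof, so screen the output explicitly.
    moderation = client.moderations.create(input=reply)
    if moderation.results[0].flagged:
        return "Sorry, I can't share that response."  # fallback; flag for human review
    return reply
```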

2. Intellectual property: rights & responsibilities

Prompt for original content
LLMs are capable of generating impressive, original content, which makes them great tools for brainstorming and developing ideas. Remember, however, that using an LLM does not exempt you from respecting intellectual property laws.

Don't assume LLM-generated content is free from copyright
The legal landscape regarding AI-generated content is complex and still evolving. Do not assume that AI-generated content is automatically free of copyright or other intellectual property rights. Consider consulting a legal expert if you plan to use LLM results in a commercial context.

3. Privacy: protecting personal data

Prompt anonymously
When interacting with an LLM, avoid providing personal data. This covers obvious identifiers such as names and email addresses, but also any information that can be used to single out a specific person. The GDPR applies as soon as you process personal data, and given how new the technology is, it's better to be safe than sorry.
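If you build prompts programmatically, one precaution is to strip obvious identifiers before anything leaves your systems. The sketch below is a deliberately simple illustration using only Python's standard library; the patterns are examples, not a complete safeguard, and genuine anonymisation requires far more than regex substitution.

```python
# A minimal sketch of stripping obvious personal data from a prompt before it
# is sent to an LLM. The patterns below are illustrative only and do not catch
# every piece of information that could single out a person.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"\+?\d[\d\s()-]{7,}\d"), "[PHONE]"),       # phone-like numbers
]

def redact_personal_data(prompt: str) -> str:
    """Replace obvious personal identifiers with placeholders before prompting."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

# Example: the model still gets the task, but not the customer's details.
print(redact_personal_data(
    "Draft a reply to jane.doe@example.com, phone +46 70 123 45 67, about her refund."
))
# -> "Draft a reply to [EMAIL], phone [PHONE], about her refund."
```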

Don't use LLMs to process sensitive information
LLMs should not be used to handle or process sensitive information. Even if you trust the model's ability to forget information, the risks of data breaches, misuse, and violations of privacy regulations make this a clear 'don't'.

4. Accountability: understanding limitations

Prompt with awareness
Remember that LLMs are tools and they don't possess human judgment or understanding. Their responses are based on patterns they've learned from a large dataset of text, and they do not have beliefs, desires, or consciousness.

Don't use LLMs for decision-making in critical areas
While LLMs can provide useful insights and suggestions, they should not be relied upon as the sole basis for critical decisions, particularly in areas like healthcare, law, or safety. LLMs do not understand the real world and its consequences the way humans do, and they cannot provide infallible advice or guidance.

In conclusion, LLMs like ChatGPT open up a world of possibilities. But as we explore this exciting frontier, it's important that we proceed with caution and a keen awareness of the ethical and legal implications. By following the "Prompts & Don'ts" in this guide, you can enjoy the benefits of engaging with LLMs while mitigating many of the potential risks. Stay curious, stay informed, and keep prompting responsibly!

We recently released our general AI Policy; contact us to learn more.

Disclaimer:
Please note: Pocketlaw is not a substitute for an attorney or law firm. So, should you have any legal questions on the content of this page, please get in touch with a qualified legal professional.
