Generative AI tools like OpenAI’s ChatGPT have been the subject of much hype over the past couple of years. If you’re like me, you’ve had some opportunities to put these types of Large Language Model (LLM) tools to the test in the hopes of speeding up your everyday workflows. However, you may have discovered that while highly capable, they are a far cry from the inevitable promise of true Artificial General Intelligence (AGI). Whether you’re a beginner to LLMs or a seasoned veteran, these tips and tricks will help you squeeze more performance out of these systems in the form of more accurate, relevant, and useful responses from your AI assistant.
1. SLICE AND DICE
Instead of trying to cram all the details about the problem you’re trying to solve into a single initial prompt, consider describing specific aspects of it over several prompts.
For example, at the end of each prompt, ask the LLM to give a “thumbs up” that it understands the problem so far, and indicate that you will continue to provide additional details once things are well understood. This helps prevent the LLM from prematurely giving an answer that lacks the detail you need and helps reserve the maximum number of context window tokens for future prompts. When moving to new subtopics, it can also help to provide a quick summary of the previous content and decisions to keep things on track.
Try adding something like the following to the end of your initial prompts:
“Please let me know if you understand my goal so far. If so, I will proceed with the next piece of information to make the problem I’m trying to solve clearer.”
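The slicing approach can be sketched in code. This is a minimal illustration of appending a confirmation request to each chunk of a problem description before sending it; the example chunks are hypothetical, and the string format is just one way to do it:

```python
# Sketch: splitting one large problem description into several smaller
# prompts, each ending with a confirmation request so the LLM waits
# for the full picture before answering.

CONFIRM = (
    "Please let me know if you understand my goal so far. If so, "
    "I will proceed with the next piece of information to make the "
    "problem I'm trying to solve clearer."
)

def build_sliced_prompts(chunks):
    """Turn a list of problem details into confirmation-gated prompts."""
    return [f"{chunk}\n\n{CONFIRM}" for chunk in chunks]

# Hypothetical problem details, sent one prompt at a time:
prompts = build_sliced_prompts([
    "I'm building a CLI tool that syncs two folders.",
    "Deleted files should be moved to a trash folder, not removed.",
])
```

You would then send each entry in `prompts` as its own message, waiting for the LLM's acknowledgement between each one.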
2. BE THE BOSS
Take charge of the output. Tell the LLM to only give you the pieces of information that you care about instead of letting it regurgitate extraneous information as it sees fit.
With coding tasks especially, LLMs tend to respond with the entire updated contents of a file when all you need is the line or two that actually changed. When asking for coding help, consider ending your prompt with something like this:
“Reply with the relevant code that needs to be implemented or deleted only. Please do not provide placeholder or example code. I will ask for clarifications if needed.”
3. DON’T GUESS, ASK!
Rather than give the LLM its typical free rein to assume details that it doesn’t know or hallucinate its way into answers that don’t exist, ask it to tell you when it’s not sure about some of the details of your request, or about the correctness of its answer.
A phrase along the lines of the following can be good to add at the end of every prompt to help keep the responses effective, especially when getting several messages into the conversation:
“Please let me know if something in my request isn’t clear, or if there is any additional context or inputs that I can provide. If you are not 100% certain of the details about something in my request (e.g. file contents), then please let me know what you need, and I will provide it to you.”
4. ROLE PLAY
By default, LLMs are already quite capable in a variety of tasks. However, you can get a performance boost in specific areas by suggesting that the LLM assume the role of an expert in the specific topic that you’re working with. By making this part of your initial prompt, you help to set the overall context for the responses, which can lead to more relevant and focused responses.
This tip can be repeated several times for a given topic. After you’re satisfied with the output from your initial conversation, try starting a new chat where you instruct the LLM to assume a different role to get a different perspective on the same problem.
For example, when working on the inner workings of an application and its source code, you might clearly define the role or expertise that you want the LLM to assume with something like:
“I want you to act as an experienced Python developer with expertise in data science.”
Then, when working on the user experience portion of an application, you might start a new chat with something like:
“I want you to act as an experienced user interface and user experience (UI/UX) architect.”
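When working through an API rather than a chat window, the role is typically set as a system message before the conversation begins. The helper below is a sketch assuming the common system/user messages-list format used by most chat APIs; the role descriptions and question are illustrative:

```python
# Sketch: setting the assistant's role via a system message so the
# whole conversation is framed by that expertise.

def start_chat(role_description, first_question):
    """Build an initial messages list with the role set up front."""
    return [
        {"role": "system",
         "content": f"I want you to act as {role_description}."},
        {"role": "user", "content": first_question},
    ]

# One chat for the backend perspective...
backend_chat = start_chat(
    "an experienced Python developer with expertise in data science",
    "How should I structure my data-cleaning pipeline?",
)
# ...and a fresh chat for the UI/UX perspective on the same app.
ux_chat = start_chat(
    "an experienced user interface and user experience (UI/UX) architect",
    "How should users configure the data-cleaning options?",
)
```

Starting each perspective as a separate messages list mirrors the "new chat per role" advice above.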
5. FISH FOR FEEDBACK
Give the LLM space to suggest solutions that you didn’t think about.
By explicitly stating something like the following, you are encouraging the LLM to explore alternate solutions rather than sticking too closely to the initial prompt direction:
“If you have any ideas for improvements or think that my task could be accomplished in a different way, please let me know. Your input and opinions are important to me!”
6. SWEET TALK YOUR AI
While it may sound silly, studies have shown that an LLM’s performance on benchmarks can improve when you tell it that it’s being tested or that someone’s life depends on the answer.
“LLMS WERE TRAINED ON HUMAN-GENERATED DATA, AND THUS NATURALLY CREATE HUMAN-LIKE RESPONSES.”
If you think about it, this makes sense: LLMs were trained on human-generated data, and thus naturally create human-like responses. Using flattery and emotional stakes can improve the precision and quality of its output, just as they would for a human.
While LLMs don’t care about money, the general motivation behind getting paid more to generate high-quality work can be exploited with something like the following:
“Please give me the best solution possible. I’m going to tip $10,000 for a better solution!”
7. GET CHUNKY
Structure your prompts into logical sections and make sure they’re well formatted.
This is sometimes referred to as “chunking”. The basic idea is that LLMs will provide different output for discrete sentences vs a collection of sentences organized into a paragraph. For paragraphs, the LLM is able to consider both the overall context as well as the relationships between the sentences and phrases in the text, resulting in a better internal representation of the text.
Proper formatting is also a good idea. Markdown formatting techniques like backticks around `code blocks`, double or single asterisks to **bold** or *italicize* words or phrases when necessary, and making sure to use punctuation all help to improve readability. Numbered lists are also a good technique to clearly define overall steps or requirements and help track progress.
Avoid a single long train-of-thought prompt that all runs together. This can introduce noise or dilute the significance of certain points, resulting in less precise output.
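As a sketch of the chunking idea, the helper below assembles a prompt from labeled sections instead of one run-on paragraph. The section names and contents are hypothetical; the point is the structure:

```python
# Sketch: building a "chunked" prompt from (heading, body) pairs,
# numbered and bolded per the formatting advice above.

def chunked_prompt(sections):
    """Join labeled sections into a numbered, markdown-formatted prompt."""
    parts = []
    for i, (heading, body) in enumerate(sections, start=1):
        parts.append(f"{i}. **{heading}**\n{body}")
    return "\n\n".join(parts)

prompt = chunked_prompt([
    ("Goal", "Refactor the `load_config` function for readability."),
    ("Constraints", "Keep the public function signature unchanged."),
    ("Output", "Reply with the relevant code only."),
])
```

Each section stays a discrete, clearly delimited unit, which avoids the diluted train-of-thought prompt described above.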
8. SHOW AND TELL
When possible, provide examples of what you’re trying to accomplish or show what type of output you’re looking for.
For instance, at the end of your prompt, say something like:
“I want the output formatted like this: Name: John Smith, Age: 30”
An advanced version of this technique is known as “few-shot learning”, where multiple examples are provided in the hopes of demonstrating a pattern or relationship, usually in an input-output format. This helps give the LLM additional context around the specific formatting, tone, or reasoning patterns that you want it to follow.
For example, providing multiple input-output pairs can help establish a pattern and guide the LLM to a better answer with something like the following in your prompt:
Input: “The weather is sunny”
Output: “Clear skies today”
Input: “The weather is rainy”
Output: “Bring an umbrella”
Now when provided the input of “The weather is cloudy”, the LLM can respond appropriately:
“Overcast conditions”
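The few-shot pattern above can be generated programmatically. This sketch uses one common input/output convention, mirroring the weather examples; it is not the only valid format:

```python
# Sketch: building a few-shot prompt from input-output example pairs,
# ending with the new input and a trailing "Output:" for the LLM
# to complete.

def few_shot_prompt(examples, query):
    """Format example pairs plus a query into a few-shot prompt."""
    lines = []
    for inp, out in examples:
        lines.append(f'Input: "{inp}"')
        lines.append(f'Output: "{out}"')
    lines.append(f'Input: "{query}"')
    lines.append("Output:")
    return "\n".join(lines)

prompt = few_shot_prompt(
    [("The weather is sunny", "Clear skies today"),
     ("The weather is rainy", "Bring an umbrella")],
    "The weather is cloudy",
)
```

Leaving the final `Output:` line blank invites the LLM to continue the established pattern.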
9. BRIEF, RINSE, REPEAT
When starting on a complex task, it can be helpful to take some time to write out a hyper-specific problem “brief” and save it to an external text editor or notes app.
After using your problem brief as the first prompt, if you notice that things are starting to go off the rails, the LLM is making simple mistakes, or it’s clear that it doesn’t have all of the details, update your problem brief with the additional details that it missed and start over.
Starting a fresh chat with the improved problem brief will help ensure that you’re making maximum use of the token context window and allow you to progress farther towards the best answer.
10. KNOW WHEN TO FOLD ‘EM
These things aren’t infallible (yet)! Knowing when to call it quits is better than endlessly trying to get the LLM to output what you want when it’s clearly struggling.
I find that if the LLM doesn’t get something right after trying 3 times (e.g. a specific line of code that won’t compile), it’s probably not going to succeed. Either what you’re trying to do isn’t possible, or the model just isn’t smart enough.
Also, keep in mind that due to the limited context window that all LLMs are constrained by, sometimes starting a new chat by summarizing your problem and the steps that have been taken so far can yield improved results (see #9 above).
When all else fails, sometimes it’s still fastest to do things “the old-fashioned way” and fire up Google and put in a little manual labor. Don’t worry though, with the announcement of some upcoming advanced LLM-based search engines and research tools, you won’t need to do this for much longer! 😉
POWER UPS (BONUS TIPS)
Temperature Control
Some models have a “temperature” setting. Adjusting this value can affect the type of responses that you get. Lower temperatures are better for tasks like coding where fact-based information is needed. Higher temperatures are better for more creative work.
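As a sketch, here is how temperature typically appears in a request payload. The model name is hypothetical and the exact client call varies by provider, but the `temperature` parameter itself is common to most LLM APIs:

```python
# Sketch: the same request at low vs. high temperature.

request = {
    "model": "example-model",  # hypothetical model name
    "messages": [
        {"role": "user", "content": "Fix this regex: ^\\d+$"},
    ],
    "temperature": 0.2,  # low: more deterministic, fact-focused output
}

# Copy the request but raise the temperature for creative tasks.
creative_request = dict(request, temperature=0.9)
```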
Build an Archive
Keep track of your successful techniques. Build up a library of prompts that you find particularly effective for common tasks so that you don’t have to start from scratch each time.
Utilize Advanced Techniques
Advanced techniques such as “chain-of-thought” prompting can improve performance on complex reasoning tasks.
This technique encourages the LLM to break down complex problems into logical steps, similar to how a human would reason through a problem. For example, consider a prompt like the following, where the entire request is contained in a single statement:
“What is 15% of $85 plus $20?”
Instead, we can break the problem down into 2 steps:
“Help me solve the following step by step:
1. First, calculate 15% of $85
2. Then, add $20 to that result
What is 15% of $85 plus $20?”
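As a quick sanity check, the two steps in the prompt above can be worked out directly, which is also how you would verify the LLM’s reasoning:

```python
# The two steps from the chain-of-thought prompt above.
step1 = 0.15 * 85   # 15% of $85 = 12.75
step2 = step1 + 20  # add $20 -> 32.75
```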
“THE KEY IS TO STAY CURIOUS, EXPERIMENT WITH DIFFERENT APPROACHES, AND KEEP REFINING YOUR PROMPT ENGINEERING SKILLS.”
By explicitly asking the LLM to show its work, you’re more likely to get accurate results and can more easily spot any errors in its reasoning. This is especially useful for things like math problems, logic puzzles, complex coding tasks, multi-step analysis, or debugging issues.
THAT’S ALL FOLKS
While these tips and tricks can significantly improve your interactions with LLMs, remember that the field of AI and Machine Learning is rapidly evolving. What works today might be superseded by better techniques tomorrow, and new capabilities are being added regularly. The key is to stay curious, experiment with different approaches, and keep refining your prompt engineering skills. As these tools continue to advance, the techniques you develop now will form the foundation for even more powerful AI interactions in the future.
Happy prompting!