To get faster responses and lower token usage from models with a “thinking” feature, add urgency to your prompt.
⚠️ Might not work on all models, e.g. Groq.
Your response is time-critical; get to an answer as quickly as possible.
Think as little as possible. If you keep getting the same answer while thinking, stop thinking and provide the final answer.
{your_prompt}
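To apply this, the preamble simply goes in front of the actual task. A minimal sketch in Python, where call_llm is a hypothetical stand-in for whatever chat client you actually use:

```python
# A minimal sketch, not a specific SDK: call_llm is a hypothetical stand-in
# for your provider's chat/completions call.

URGENCY_PREAMBLE = (
    "Your response is time-critical; get to an answer as quickly as possible. "
    "Think as little as possible. If you keep getting the same answer while "
    "thinking, stop thinking and provide the final answer.\n\n"
)

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # replace with your provider's chat call

def ask_quickly(task_prompt: str) -> str:
    # The urgency preamble goes first so the model sees it before the task.
    return call_llm(URGENCY_PREAMBLE + task_prompt)
```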
Ask the LLM whether it has any questions that would help clarify the project.
For tasks with many steps, you can break the task up and chain together the LLM’s responses (see the sketch after these tips).
Ask for rewrites
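A minimal sketch of chaining in Python, covering both of the tips above: each step’s output feeds the next prompt, and the last step asks for a rewrite. call_llm is a hypothetical stand-in for your client, and the step prompts are only illustrative:

```python
# A minimal sketch of prompt chaining with a rewrite pass at the end.
# call_llm is a hypothetical stand-in; the step prompts are illustrative.

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # replace with your provider's chat call

def chained_task(task: str) -> str:
    # Step 1: plan the work.
    outline = call_llm(f"Create a short outline for the following task:\n{task}")
    # Step 2: feed the previous response into the next prompt.
    draft = call_llm(f"Write a first draft that follows this outline:\n{outline}")
    # Step 3: ask for a rewrite of the draft.
    return call_llm(
        "Rewrite the draft below so it is clearer and more concise, "
        f"keeping its structure intact.\n\nDraft:\n{draft}"
    )
```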
Long context prompting
When dealing with long documents, put the doc before the details and the query
Have the LLM find relevant quotes before answering, and have it answer only if it finds relevant quotes
Tell the LLM to read the document carefully because it will be asked questions about it later
Longform input data MUST be in XML tags so it’s clearly separated from the instructions
You are a master copy editor. Here's a draft document for you to work on:

<doc>{{DOCUMENT}}</doc>

Please thoroughly edit this document, assessing and fixing grammar and spelling as well as making suggestions for where the writing could be improved. Improved writing in this case means:
1. More reading fluidity and sentence variation
2. ...
I'm going to give you a document. Read the document carefully, because I'm going to ask you a question about it. Here's the document:

<document>{{TEXT}}</document>

First, find the quotes from the document that are most relevant to answering the question, and then print them in numbered order. Quotes should be relatively short. If there are no relevant quotes, write "No relevant quotes" instead.

Then answer the question, starting with "Answer:". Do not include or reference quoted content verbatim in the answer. Don't say "According to Quote [1]" when answering. Instead, make references to quotes relevant to each section of the answer solely by adding their bracketed numbers at the end of relevant sentences.

Thus, the format of your overall response should look like what's shown between the <examples></examples> tags. Make sure to follow the formatting and spacing exactly.

<examples>
[Examples of question + answer pairs using parts of the given document, with answers written exactly like how the LLM's output should be structured]
</examples>

If the question cannot be answered by the document, say so.

Here is the first question: {{QUESTION}}
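A minimal sketch in Python that combines the tips above (document first, wrapped in XML tags, quotes before the answer, question last); call_llm is a hypothetical stand-in for your client:

```python
# A minimal sketch of the long-context tips: the document goes first, wrapped
# in XML tags, and the question comes last. call_llm is a hypothetical
# stand-in for your provider's chat call.

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # replace with your provider's chat call

def ask_about_document(document: str, question: str) -> str:
    prompt = (
        "I'm going to give you a document. Read it carefully, because I'm "
        "going to ask you a question about it.\n"
        f"<document>{document}</document>\n"
        "First, find the quotes from the document that are most relevant to "
        "answering the question and print them in numbered order. If there "
        "are no relevant quotes, write \"No relevant quotes\" instead.\n"
        f"Then answer this question: {question}"
    )
    return call_llm(prompt)
```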
🦮 Guiding the LLM
Guiding the coding assistant towards satisfactory outcomes is therefore increasingly important in our daily work.
Three critical measures are required to work successfully in an AI-assisted coding setup:
Well-structured Requirements: defines the destination.
# Coding pattern preferences
- Always prefer simple solutions.
- Avoid duplication of code whenever possible, which means checking for other areas of the codebase that might already have similar code and functionality.
- Write code that takes into account the different environments: dev, test, and prod.
- You are careful to only make changes that are requested, or that you are confident are well understood and related to the change being requested.
- When fixing an issue or bug, do not introduce a new pattern or technology without first exhausting all options for the existing implementation. And if you finally do this, make sure to remove the old implementation afterwards so we don't have duplicate logic.
- Keep the codebase very clean and organized.
- Avoid writing scripts in files if possible, especially if the script is likely only to be run once.
- Avoid having files over 200-300 lines of code. Refactor at that point.
- Mocking data is only needed for tests; never mock data for dev or prod.
- Never add stubbing or fake data patterns to code that affects the dev or prod environments.
- Never overwrite my .env file without first asking and confirming.

# Coding workflow preferences
- Focus on the areas of code relevant to the task.
- Do not touch code that is unrelated to the task.
- Write thorough tests for all major functionality.
- Avoid making major changes to the patterns and architecture of how a feature works, after it has shown to work well, unless explicitly instructed.
- Always think about what other methods and areas of code might be affected by code changes.
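Rules like these are usually applied as standing instructions, for example via the assistant’s project rules file or its system prompt. A minimal sketch of the system-prompt variant, assuming the rules are stored in a hypothetical coding_rules.md and with call_llm_with_system standing in for your client:

```python
# A minimal sketch, assuming the rules above live in a plain-text file named
# coding_rules.md (hypothetical) and the assistant is reached through a
# generic chat API; call_llm_with_system is a hypothetical stand-in.

from pathlib import Path

def call_llm_with_system(system: str, user: str) -> str:
    raise NotImplementedError  # replace with your provider's chat call

def ask_coding_assistant(task: str, rules_path: str = "coding_rules.md") -> str:
    # The rules act as standing instructions applied to every coding request.
    rules = Path(rules_path).read_text()
    return call_llm_with_system(system=rules, user=task)
```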