AI tips

💸 Cost saving

Using AI models via their APIs can get expensive and cost you more than €300/month if you are not careful.
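
A rough way to keep an eye on this is to compute the cost of each call from its token usage. A minimal Python sketch, using placeholder per-token prices (check your provider's current pricing):

# Minimal sketch: estimate the cost of a single API call from its token usage.
# The per-token prices below are placeholders, not real pricing.
INPUT_PRICE_PER_1K = 0.003   # €/1K input tokens (placeholder)
OUTPUT_PRICE_PER_1K = 0.015  # €/1K output tokens (placeholder)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Example: a call with 2,000 input tokens and 500 output tokens
print(f"{estimate_cost(2000, 500):.4f} €")  # 0.0135 €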

💬 Prompt engineering

  • For faster responses and lower token usage, add urgency to your prompt when using models with a “thinking” feature (see the sketch after this list).
    • ⚠️ Might not work on all models
    • E.g. Groq.

Your response is time critical, get to an answer as quickly as possible. Think as little as possible. If you keep getting the same answer while thinking, stop thinking and provide the final answer. {your_prompt}

  • Ask the LLM whether it has any questions that would help clarify the project.
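
A minimal Python sketch of the urgency preamble above, assuming an OpenAI-compatible endpoint such as Groq's; the model name is a placeholder:

import os
from openai import OpenAI  # Groq exposes an OpenAI-compatible API

URGENCY_PREAMBLE = (
    "Your response is time critical, get to an answer as quickly as possible. "
    "Think as little as possible. If you keep getting the same answer while thinking, "
    "stop thinking and provide the final answer. "
)

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.environ["GROQ_API_KEY"],
)

def fast_answer(prompt: str, model: str = "some-reasoning-model") -> str:
    # Prepend the urgency preamble so the "thinking" model spends fewer tokens reasoning.
    response = client.chat.completions.create(
        model=model,  # placeholder: pick an actual reasoning model from your provider
        messages=[{"role": "user", "content": URGENCY_PREAMBLE + prompt}],
    )
    return response.choices[0].message.content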

8 key prompt engineering techniques

  1. User / Assistant role formatting
  2. Be clear and direct
    • Number & list the instructions step by step for complex tasks
    • Ex: “Write a haiku about robots” → “Write a haiku about robots. Skip the preamble; go straight into the poem.”
  3. Assign roles (aka role prompting)
    • Assigning roles changes the LLM’s response in 2 ways:
      • changes tone and demeanor to match the specified role
      • improves the LLM’s accuracy in certain situations
    • Ex: “You are a master of …”
  4. Use XML tags
    • Using XML tags helps the LLM (especially Claude) understand the prompt’s structure
  5. Use structured prompt templates
    • Think of prompts like functions in programming - separate the variables from the instructions
      • wrap variables in XML tags as good organization practice
    • More structured prompt templates allow for:
      • easier editing of the prompt itself
      • much faster processing of multiple datasets
I will tell you the name of an animal. Please respond with the noise that animal makes.
<animal>{{ANIMAL}}</animal>
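
A minimal Python sketch of the template-as-function idea, reusing the animal example above; the helper name is made up for illustration:

# Minimal sketch: a prompt "function" that separates the variable from the instructions
# and wraps the variable in XML tags, as in the animal example above.
ANIMAL_PROMPT = (
    "I will tell you the name of an animal. "
    "Please respond with the noise that animal makes.\n"
    "<animal>{animal}</animal>"
)

def build_animal_prompt(animal: str) -> str:
    return ANIMAL_PROMPT.format(animal=animal)

# Reusing the same template over a whole dataset of inputs:
for animal in ["cow", "cat", "dog"]:
    print(build_animal_prompt(animal))
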
  6. Prefill LLM’s response
User: Please write a haiku about a cat. Use JSON format with the keys as "first_line", "second_line", and "third_line".
Assistant (prefilled): {
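
A minimal Python sketch of prefilling, assuming the Anthropic Messages API (where the prefill is the content of a trailing assistant message); the model name is a placeholder:

import anthropic  # prefilling works with the Anthropic Messages API; other APIs may differ

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-haiku-latest",  # placeholder model name
    max_tokens=200,
    messages=[
        {"role": "user", "content": 'Please write a haiku about a cat. Use JSON format with '
                                    'the keys as "first_line", "second_line", and "third_line".'},
        # Prefill: the conversation ends with a partial assistant turn,
        # so the model continues from "{" and outputs JSON only.
        {"role": "assistant", "content": "{"},
    ],
)
print("{" + response.content[0].text)  # re-attach the prefilled opening brace
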
  7. Have LLM think step by step
    • Increases the intelligence of responses but also increases latency by lengthening the output.
  8. Use examples (aka n-shot prompting)
    • The single most effective tool for getting the LLM to behave as desired.
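
A minimal Python sketch of n-shot prompting, with made-up sentiment examples embedded before the real question:

# Minimal sketch: few-shot (n-shot) prompting by embedding example pairs in the prompt.
EXAMPLES = [
    ("Is this review positive or negative? 'Loved it, would buy again.'", "Positive"),
    ("Is this review positive or negative? 'Broke after two days.'", "Negative"),
]

def build_few_shot_prompt(question: str) -> str:
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in EXAMPLES)
    return f"{shots}\n\nQ: {question}\nA:"

print(build_few_shot_prompt("Is this review positive or negative? 'Does the job, nothing special.'"))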

Advanced prompt engineering

  • Chaining prompts
    • For tasks with many steps, you can break the task up and chain together the LLM’s responses (see the sketch at the end of this section)
    • Ask for rewrites
  • Long context prompting
    • When dealing with long documents, put the document before the details and the query
    • Have LLM find relevant quotes first before answering, and to answer only if it finds relevant quotes
    • Have LLM read the document carefully because it will be asked questions later
    • Longform input data MUST be in XML tags so it’s clearly separated from the instructions
You are a master copy editor. Here's a draft document for you to work on:
<doc>
{{DOCUMENT}}
</doc>
 
Please thoroughly edit this document, assessing and fixing grammar and spelling as well as making suggestions for where the writing could be improved. Improved writing in this case means:
1. More reading fluidity and sentence variation
2. ...
I'm going to give you a document.
Read the document carefully, because I'm going to ask you a question about it.
Here's the document:
<document>{{TEXT}}</document>
 
First, find the quotes from the document that are most relevant to answering the question, and then print them in numbered order. Quotes should be relatively short. If there are no relevant quotes, write "No relevant quotes" instead.
 
Then answer the question, starting with "Answer:". Do not include or reference quoted content verbatim in the answer. Don't say "According to Quote[1]" when answering. Instead, make references to quotes relevant to each section of the answer solely by adding their bracketed numbers at the end of relevant sentences.
 
Thus, the format of your overall response should look like what's shown between the <examples></examples> tags. Make sure to follow the formatting and spacing exactly.
 
<examples>
[Examples of question + answer pairs using parts of the given document, with answers written exactly like how the LLM's output should be structured]
</examples>
 
If the question cannot be answered by the document, say so.
Here is the first question: {{QUESTION}}
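
A minimal Python sketch of chaining two prompts, assuming an OpenAI-compatible chat API; the model name and the complete() helper are placeholders for illustration:

from openai import OpenAI  # any chat-completion API works; the OpenAI client is used as an example

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def complete(prompt: str, model: str = "gpt-4o-mini") -> str:
    response = client.chat.completions.create(
        model=model,  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: produce a first draft
draft = complete("Write a short product description for a solar-powered bike light.")

# Step 2: feed the first response back in and ask for a rewrite
final = complete(
    "Here is a draft:\n<draft>\n" + draft + "\n</draft>\n"
    "Rewrite it to be more concise and remove any repetition."
)
print(final)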

🦮 Guiding the LLM

Guiding the coding assistant towards satisfactory outcomes is of ever-growing importance in our daily work.

3 critical measures required to work successfully in an AI-assisted coding setup:

📜 References

📏 Convention

🤖 Agentic

🔬 Code exploration

🛠️ Tools