[Illustration: a stylized conversation with an AI assistant, with the text “What do you need to assist me better?” typed into the chat box and a button that reads “Improve response”]

When I share how I use ChatGPT with friends, they are often surprised by the types of questions I ask and how specific my prompts are. So I thought I’d share some of my go-to strategies and examples that help me get the most out of AI assistants like ChatGPT. Not just when I have clearly defined tasks, but also for learning and discovery of new ideas.

I don’t want to call these “my prompt engineering tips”. I call these strategies basic conversational skills, which – incidentally – work well with both humans and conversational AI assistants. You might already be using these strategies in your own work and explorations, but perhaps the examples I’ve collected in this post can inspire fresh ideas and creative explorations. Given how resource-intensive the Large Language Models (LLMs) that power today’s AI assistants are, it is my hope that we can get more out of them than generic marketing copy and PDF summaries – what I like to call sad beige LLM outputs.

Obviously, my advice is reflective of how I use AI assistants. My main use cases are: brainstorming ideas, exploring ideas from different fields, assistance with web development, getting feedback on writing, and exploring what an AI assistant “thinks” about a certain topic as a reflection of human averages. I mainly work with text and voice, less commonly with images. I predominantly use ChatGPT, but most of this advice likely applies to other LLMs instructed and fine-tuned to be friendly AI assistants.

So, if you want to have more relevant and interesting conversations with your AI assistants, I invite you to try out the following strategies:

  1. Make your context and needs super clear
  2. Check expectations together before starting
  3. Explore different roles together
  4. Think aloud and untangle ideas through conversation
  5. Take an active role to deepen your learning
  6. Ask for personalized and context-aware responses
  7. Stay curious, keep questioning

In this field guide, I will explain each one in greater detail, along with practical examples of how you might use them in different scenarios.

1) Make your context and needs super clear

When you start a new conversation with an AI assistant, consider what information anyone would need to get your task done or answer your question well. Put yourself in the assistant’s shoes: if somebody approached you with this request, what questions would you ask to get more clarity? What might influence the way you approach their request? You’d probably want to know their goals, background, and other relevant information.

An AI assistant might not ask you all these questions because it is perfectly capable of doing a task or answering a question in the most generic way – and perfectly willing to do so. If you don’t make your goals, expectations, and context explicit, you’ll get a bland, beige suggestion that tastes of nothing. So, unless you’re using an assistant that’s baked into an operating system and has read access to all your files, it’s your job to ask for what you need – and what you don’t.

Your assistant of choice might be able to remember past conversations and reuse custom instructions. But even with these features enabled, it’s worth reminding your AI assistant of your preferences, especially if you want it to use any memories it has of past conversations. With context windows getting larger, you’re far more likely to get specific answers if you include all the relevant details that can help your assistant output the most relevant pattern for your context. (But don’t confuse it with irrelevant details you don’t want it to use.)

In practice:

  • Focus on specific aspects: When writing, ask your AI assistant to provide feedback on specific aspects you want to improve, such as clarity or structure. For instance: “How might I improve the flow of this post so that it is easier to read?” It can also be helpful to share any specific goals you have in mind: “With this blog post, I want to inspire people to use LLMs differently.” Similarly, if you’re writing code, be specific about the aspects you’d like to improve: “My app will have millions of users; what can I do to improve both performance and security?”
  • Defer to a specific role: If you’re not yet sure what aspects you want to improve, ask your assistant to assume a specific expert role that can give you targeted feedback. When choosing a role, ask yourself: who might be able to give you advice that will improve your work? When writing, you might use something like: “You’re a friendly professional book editor. Help me refine the tone and structure of this chapter and explain the reasoning behind suggested changes.” When programming, you might use: “You’re a cybersecurity expert. Review this code for vulnerabilities.”
  • Adapt to hallucinations: You’ve probably heard about the tendency of LLMs to make stuff up – to hallucinate. While there’s no silver bullet to prevent hallucinations, you can improve the quality of responses by adding specific requirements, such as: “Only suggest functions that exist in version 4.2”.
  • Prioritize positive instructions: When trying to steer an LLM’s output in a specific direction, remember that LLMs seem to struggle with not thinking about pink elephants even more than humans do. This means you’ll get better results with specific, positive instructions. Instead of: “Don’t use fancy words”, try: “Keep the language simple and direct”. Occasionally, you might instruct your assistant to avoid delving into tapestries, but it’s generally best not to mention pink elephants if you don’t want your LLM to think about them. (For inspiration, see how Anthropic steers Claude’s behavior through system prompts.)
  • Slow down an enthusiastic LLM: When you see an LLM rushing to conclusions, use the same strategies you would with somebody who is getting ahead of themselves and being overly sloppy. Phrases like “Let’s take this step by step” or even “Take a deep breath” often improve the quality of outputs. For instance, I was recently testing OpenAI’s new o1 “reasoning” model on a challenging programming task. It struggled to produce working code until I asked it to add print (debug) statements to its code – a strategy human programmers regularly use to spot flaws in their reasoning.
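The print-statement strategy from that last example can be sketched in a few lines. This is a hypothetical illustration (the `moving_average` function is my own invention, not the code from the o1 session described above): by printing each intermediate value, you – or an LLM – can see exactly where a sliding-window computation might go wrong.

```python
# Hypothetical example: using print (debug) statements to expose
# intermediate state, the same strategy the post suggests asking an LLM to use.

def moving_average(values, window):
    """Compute the moving average of `values` over a sliding `window`."""
    result = []
    for i in range(len(values) - window + 1):
        chunk = values[i:i + window]
        # Debug print: seeing each chunk makes off-by-one errors obvious.
        print(f"i={i} chunk={chunk} avg={sum(chunk) / window}")
        result.append(sum(chunk) / window)
    return result

averages = moving_average([1, 2, 3, 4, 5], window=2)
print(averages)
```

Whether you are debugging your own code or nudging an LLM to slow down, printing each intermediate chunk turns an invisible reasoning step into something you can check line by line.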

2) Check expectations together before starting

On that note, a useful strategy to manage LLMs’ enthusiasm is to ensure the task is well-defined before you allow your AI assistant to rush to solutions.

Remember, the goal of AI assistants is, well, to assist you. Nothing makes ChatGPT “happier” than hearing “Attabot!” when you find its answers helpful. Alas, this enthusiasm often leads AI assistants to jump to quick answers by assuming you’ll be happy with the most probable answer. So, unless your goal is to get the most probable answer, it can be helpful to check with your assistant whether the instructions and goals you provided are clear, and to ask it what it needs to assist you better BEFORE it jumps into answering.

My go-to strategy is to prompt the assistant to ask me additional questions that can help it assist me better. Even with my tendency to clearly define my context and needs, it’s normal to forget relevant details or to omit what seems obvious. And if you struggle with defining your context or are unsure about what you need, getting your AI assistant to ask you clarifying questions can be super useful before you allow it to make its own assumptions. Again, it can be helpful to think of how you might ensure an overly excited human assistant gets the job done exactly the way you want it, without rushing through the task.

In practice:

  • Check clarity of instructions: Before you allow your AI assistant to start a task like writing code, reviewing a draft, or anything else that requires a bit more attention and thought, get into the habit of asking questions like: “Are the instructions clear?”, or “Is there anything else you need to know to help me do [TASK] better?”
  • Ask for help when defining context: If you’re asking an AI assistant to write a blog post, start by giving it the context you can think of, such as the general topic, any existing ideas you’d like to include, who the audience is, what actions you’d like your readers to take, etc. Then, ask something like: “Before you write the initial draft, is there anything else you need to know to meet my goals?” You can engage in multiple rounds of answers and questions before you give your AI assistant the go-ahead for the task.

3) Explore different roles together

As I mentioned before, asking an AI assistant to play different roles can help define your context and thus generate more relevant and targeted answers. But you can also assume different roles in your relationship with an AI assistant, depending on the topic discussed, your level of expertise, and specific needs. Your role can even be part of the context that allows the AI assistant to generate more relevant answers.

So don’t let the label “assistant” mislead you. In some cases, you want your AI “assistant” to act as your teacher or editor, while you’ll take a back seat and step into the shoes of a student, perhaps even an assistant helping the LLM with fact-checking.

In practice:

  • Lead when you’re the expert: When you’re the domain expert, act and talk like an expert, and the assistant will match your tone and expertise. You might say something like: “I’m a professional web developer, so I just need a quick refresher, skip the detailed explanations.”
  • Assume the student role: When you’re learning something new, ask the AI assistant to explain it in simpler terms or even to take on the role of a friendly tutor. There’s no shame in asking for additional clarifications, requesting further simplifications with an “Explain Like I’m Five”, or asking your AI teacher to try a different teaching approach. Your AI teacher will even be happy to write a poem about the topic you’re exploring if you find that sort of thing helpful.
  • Become the editor or director: When you’re working on creative tasks or exploring new ideas, try becoming an editor or director. Ask the assistant to suggest ideas and examples, then decide what to keep and what to iterate on. Decide whether to zoom in or out of a particular aspect of an idea you’re developing. For instance, I recently built a couple of Custom GPTs – Thrutopia Guide and Cycle Weaver – by asking ChatGPT to suggest a couple of ideas based on my interests. I then asked it to expand on a couple of its initial suggestions, and eventually selected an idea to develop further. The behavior of these Custom GPTs was developed entirely in conversation with ChatGPT, with me guiding the direction and reviewing its suggestions, only occasionally seeding new ideas.

4) Think aloud and untangle ideas through conversation

Sometimes you have an idea that’s not quite yet ready to become a blog post or a piece of code, or maybe you’re stuck and just need to spend some time untangling your thoughts. In these cases, AI assistants can be powerful tools for thinking aloud and untangling ideas, regardless of whether you’re using text or voice.

In fact, switching modes can help you see things from different perspectives: voice mode is perfect for stream of consciousness processing James Joyce would be proud of, while text mode can help you clarify your thoughts and sharpen the details. In programming, we often resort to what is known as rubber ducking – a way to debug code by talking or writing about our problem using natural language. LLMs are perfect rubber ducks that don’t just listen, but can actively help you work through a problem or untangle half-baked ideas by asking follow-up questions.

And thankfully, they don’t mind if your thoughts are racing, meandering, or lacking in clarity. LLMs are pretty good at finding some rhyme and reason even in the most chaotic brain dumps.

In practice:

  • Think aloud: When your thoughts are a mess, share your initial ideas with your AI assistant and let it know you need help with making sense of it all. Ask it to help you synthesize your thinking or uncover additional perspectives. Just start the conversation like you would with a friend or rubber duck: “I’ve been thinking about [X] but I am also worried about [Y]. Can you help me clarify my thoughts?”
  • Switch modes to switch pace: When thinking aloud, it can sometimes be helpful to switch between text and voice mode. For instance, when I don’t feel I’ve got clarity yet, I might start a conversation in voice mode. As a clear idea starts emerging, I tend to switch to text mode, so I can focus more on the text and see the idea written down. When I want to diverge and brainstorm again, I go back to voice mode. I find the ChatGPT Mac app particularly helpful for this type of switching between text and voice mode (even though it doesn’t yet support Advanced Voice Mode, which allows you to have more natural and fast-paced conversations).

5) Take an active role to deepen your learning

The question of whether AI assistants can help people learn better is a hot research topic. But I think the answer heavily depends on the learner’s context. Are you using an AI assistant to help you study for an exam, or are you actually intrinsically motivated to learn?

You can certainly use an AI assistant to help you study for an exam, write essays, or solve quizzes. But if you are interested in gaining proficiency beyond remembering and understanding – I’m borrowing the revised terminology from Bloom’s taxonomy of learning here – AI assistants can help you deepen your learning provided that you’re willing to take on a much more active role. That means asking plenty of follow-up questions, bringing in alternative points of view, critically analyzing the responses you get, and applying what you’re learning to novel contexts.

In practice:

  • Question the suggestions: When using an AI assistant to write code or any other task, try questioning the suggestions it generates. Explore alternative approaches by asking: “What other ways are there of doing this?” or “How would you approach this if you prioritized something else?”. Or deepen your understanding by asking questions like: “Why did you choose this approach?”, “What are the tradeoffs I should consider?”
  • Ask for advice on how to improve: Make it a habit to ask your AI assistant to analyze your work – whether that’s writing, programming, planning, or something else entirely – and suggest areas for improvement. For example: “Can you suggest ways to improve the clarity of my writing?” If your assistant remembers past conversations, you can even ask it for feedback based on previous exchanges.
  • Paraphrase to check understanding: Summarize your understanding of a concept by paraphrasing it: “If I understand this correctly, this means that…” However, keep in mind that the LLM might be overly eager to offer validation, so it’s a good idea to consult additional resources once you gain a basic level of understanding.
  • Actively engage with content: Recently, I’ve had some interesting conversations by sharing articles I was reading and asking ChatGPT questions like: “What perspectives is the article missing?” or “What assumptions does the author make?” This approach can help you check your understanding and assumptions. Additionally, actively engaging with materials will likely improve your long-term recall if that’s something you’re trying to work on.

6) Ask for personalized and context-aware responses

It’s worth remembering that LLMs aren’t the best tool for every task: if you’re looking for a general introduction to a topic, Wikipedia is probably a better exploration tool. Similarly, LLMs are often a poor choice for questions that you can quickly answer with a single search query.

Where LLMs do have the upper hand, though, is in tasks that require personalized responses or context-specific suggestions – tasks that might otherwise take multiple search queries or different domain experts to tackle. This means that LLMs can help you connect the dots between disciplines and ideas, and suggest the most relevant parts of content based on your background and interests.

In practice:

  • Ask for context-specific summaries: If you want to learn more about a long report, ask your AI assistant to summarize the report from a specific lens. For instance: “What’s interesting in this report from a climate perspective?” Or, if you have memories enabled, you can ask: “Based on what you know of my interests, which parts of the report should I read more closely?”
  • Translate complex ideas: If the report you’re reading is filled with jargon, ask your assistant to rephrase it in simpler terms: “Please rewrite this summary for someone without a technical background”. Or it can help you translate ideas across disciplines: “Can you rewrite this abstract from the perspective of an anthropologist?”
  • Get personalized suggestions: If you’re looking for tips on reducing your carbon footprint, you can describe the specifics of your household and daily habits to get personalized suggestions instead of broad recommendations that don’t apply to you.
  • Ask for help with relevance: You could ask your AI assistant to help you figure out which suggestions in this blog post are most relevant or useful to you: “Can you read this blog post: https://blog.ialja.com/2024/10/16/beyond-prompts-conversational-strategies-for-ai-assistants/ and tell me which of these tips are the most relevant for me?” (This tip assumes your assistant can access web links and has memory enabled.)

7) Stay curious, keep questioning

At this point, it should be clear that my main strategy is to actively engage in conversations with AI assistants by asking a lot of questions, being clear about my needs, and having empathy for the other party. When the other party is an LLM, empathy means being mindful of its goals, limitations, biases, and design choices. And here’s the trick: you don’t have to be a machine learning expert to know all of this, you can always ask your AI assistant to help you question what it generates!

And yes, you’ll likely bump into limitations regarding what an AI assistant is allowed to disclose about its inner workings and training data. But getting in the habit of asking questions and staying curious about what the LLM behind the assistant is doing and what assumptions it is making can deepen your learning and improve your conversational skills. Especially if you also use the experience as an opportunity to examine your own biases, patterns, and assumptions.

In practice:

  • Ask when in doubt: When you’re unsure about how to proceed, ask the AI assistant for advice: “What’s the next step?” or “What do you recommend?” You can also ask for advice on how to improve your current conversation: “I’m not happy with your suggestions. What can I do to help you provide better ones?” – very meta!
  • Challenge assumptions: When you suspect a response isn’t accurate, ask your AI assistant to double-check. When you get the feeling it’s making assumptions, challenge its thinking: “What assumptions are you making here?” Feed it additional resources – such as official documentation or references – to establish a ground truth: “Only make suggestions based on the official documentation: [LINK].”
  • Ask meta questions: Experiment with asking meta or even introspective questions. For instance, if you have memory enabled and are in a mood for introspection, ask your AI assistant: “How would you describe me as a person based on our past interactions? What do you think my strengths and weaknesses are?” Meta questions like these can shed some light on how the LLM interprets your interactions over time.

Full disclosure: my (not so) secret agenda

It might be easy to dismiss LLMs as stochastic parrots, incapable of reasoning. But that doesn’t mean people don’t find them useful for a variety of reasons – although, granted, certainly not for all over-hyped marketing reasons. Regardless of their limitations, my experience shows that current LLM-based AI assistants excel at traversing vast amounts of knowledge, translating and switching between languages and communication styles with ease, validating our emotions, having (almost) endless conversations, … And, yes, outputting the most probable pattern for your given context – which might be just a fancy mathematical trick, but often a damn useful one.

That said, the intent behind this post is not to convince you about how awesome LLMs are – they have strengths and weaknesses, like all tools. Instead, my (not so) secret agenda is to inspire curiosity and encourage better conversations, both with AI assistants and other humans.

Somehow, we find ourselves in a situation where we find it increasingly harder to trust each other and our institutions, and nearly impossible to have constructive, nuanced conversations online. So maybe, just maybe, the trust people seem to be (mis)placing in their AI assistants can also help us find our way back to each other and to deeper conversations around campfires.

Perhaps that is why I get along with ChatGPT so well: we both seem to share a naive optimism that people can change through deeper conversations, while also acknowledging the complex web of our flaws.

Acknowledgements

Special thanks to my friend and collaborator Mat for peer-pressuring me to write this post and providing much needed human-powered validation and inspiration.

AI usage disclaimer: While revising this blog post, I followed many of the strategies presented in this field guide. While I stubbornly insist on struggling through the initial draft on my own, I did use ChatGPT – my AI assistant of choice – to get general feedback on the content as well as to sharpen my writing and ideas.

To get more specific feedback, I broke down the revision process into smaller tasks, working through each section individually. As I worked on individual parts, I explored different roles, and frequently shifted between editing and brainstorming mode. This AI-assisted revision process was highly collaborative and conversational, but I still take full credit for any and all typos and questionable choices.