Ever felt like AI is a mystical, all-knowing pal — Jarvis, GLaDOS, or HAL 9000? I used to imagine my code auto-generating itself while I sipped a cappuccino. Spoiler: reality is less glamorous.

In my humble opinion, today’s AI isn’t sentient. It won’t suddenly solve your existential problems or take over the world. It can mimic understanding of jokes or casual conversation, but it doesn’t actually comprehend them the way a human does. There is no “intelligence” in current AIs. What it is, however, is a Massive Probabilistic Autocomplete — a glorified, supercharged “predict the next piece of text” machine, wrapped in hype, marketing, and just enough pop culture references to make you feel like you’re on the cutting edge. Biased opinion, I know, but stick with me.

Here’s the kicker: all this hype (plus every social-media post screaming “AI replacing humans”) sets the wrong expectations. People pay for another AI-powered IDE subscription thinking they’re getting an artificial genius coworker. They prompt ChatGPT or Claude with the mental model of a Jarvis-like entity, then spend the next hour disappointed when it doesn’t read their mind or fully automate their job.

So, how do we actually get value out of today’s AI without losing our sanity (or our subscription fees)? That’s exactly what I want to share. My biased approach is simple and, I’d say, practical: treat AI for what it is - a tool! Understand a few basic concepts behind this tool and use it to genuinely boost productivity. It’s as simple as that.

AI Is a Tool, Not a Friend (Friendship Is Definitely Not Magic)

In our heads, “AI” might conjure up a friendly robot buddy or some mysterious algorithm with a personality. But the truth is: current AIs have no personality. They won’t really chat about your weekend or remember that you like cats. They can only… pretend! Again, it’s a tool - just a tool. Yes, powerful, maybe even a bit spooky, but still just a tool, like a hammer or a calculator.

And I hope you wouldn’t say something like “Dear Calculator, would you kindly compute 2+2 for me?”, would you? I’d bet not — you’d just punch in 2+2 and hit enter. Same with all these AIs like ChatGPT: they don’t care about pleasantries or your ego; they care about input and output. So treat it as a tool, not your buddy. Don’t start prompts with “Hi” or “Dear ChatGPT…”, and definitely skip the emojis and polite fluff. Every extra word in your prompt is a token you’re paying for. For example:

“Hey ChatGPT! 🙏 I’m working on a banking app, and could you please write me a TypeScript function to validate an IBAN? Thanks a lot in advance!” - wasteful prompt

“Generate a TypeScript function that validates an IBAN number.” - good one

Same request, but the second one is direct and focused. That saves precious context space and tokens, so the model isn’t parsing “how are you” or “thanks,” and it’s more likely to give you exactly what you need.
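For reference, here’s roughly the kind of function that second prompt might come back with. Treat it as a minimal sketch of the standard mod-97 check, not production banking code (it skips country-specific length rules), and it assumes an ES2020+ target for BigInt; the function name is just my placeholder.

```typescript
// Minimal sketch: validate an IBAN's shape and mod-97 checksum.
// Real-world code should also check country-specific lengths.
export function isValidIban(input: string): boolean {
  // Normalize: strip whitespace, uppercase.
  const iban = input.replace(/\s+/g, "").toUpperCase();

  // Basic shape: 2 letters (country), 2 check digits, then 11-30 alphanumerics.
  if (!/^[A-Z]{2}\d{2}[A-Z0-9]{11,30}$/.test(iban)) return false;

  // Move the first four characters to the end, then map letters to numbers (A=10 ... Z=35).
  const rearranged = iban.slice(4) + iban.slice(0, 4);
  const numeric = rearranged.replace(/[A-Z]/g, (ch) =>
    (ch.charCodeAt(0) - 55).toString()
  );

  // A valid IBAN leaves a remainder of 1 when taken mod 97 (BigInt handles the size).
  return BigInt(numeric) % 97n === 1n;
}
```

Even for a snippet this small, the rule from later in this article applies: you review it, you test it, you own it.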
It’s like speaking binary (well, not literally), but trust me — the AI works better when you do. And this isn’t just me talking; even AI experts point it out. As Lucy Tancredi (Dicentra AI founder & Senior VP at FactSet) wrote in “AI Strategies Series: How LLMs Do—and Do Not—Work”:

“In essence, modern generative AI like ChatGPT is like your phone’s predictive text on steroids. Understanding this can help explain why hallucinations happen. The text generated is predictive based on common language patterns, not factual based on research.”

Be direct, be clear, and treat it like the supercharged autocomplete it is. The less fluff you give it, the less room for misinterpretation — and the closer you get to actually getting things done instead of reading a creative novel written by your “helper.”

Tokens and Costs: Be an Efficient Prompt-Wizard

Models like GPT break text into tokens (basically words or word pieces) and process them. Depending on your API plan or subscription, costs may be associated with the number of tokens processed. Think of tokens like currency:

Short, precise prompts are cheaper and faster.
Long, rambling prompts are expensive, slow, and more prone to detours.

Additionally, token counts vary by language; non-English text often uses more tokens and may cost more! If you’re curious, check out the excellent example by Morten Rand-Hendriksen in the article “Is OpenAI imposing a Token Tax on Non-English Languages?”. If not, just keep prompts concise and in English to save tokens and get focused output. In general, that’s all you need to know about it.
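If you want to see the difference in numbers, you can count tokens locally. A rough sketch, assuming the gpt-tokenizer npm package (whose encode function turns text into an array of token ids); exact counts vary by model and tokenizer:

```typescript
// Rough sketch: compare the token cost of the two IBAN prompts from earlier.
// Assumes the gpt-tokenizer npm package; exact counts depend on the tokenizer/model.
import { encode } from "gpt-tokenizer";

const wasteful =
  "Hey ChatGPT! 🙏 I'm working on a banking app, and could you please write me a TypeScript function to validate an IBAN? Thanks a lot in advance!";
const direct = "Generate a TypeScript function that validates an IBAN number.";

// encode() returns token ids; the array length is what you pay for.
console.log("wasteful prompt:", encode(wasteful).length, "tokens");
console.log("direct prompt:  ", encode(direct).length, "tokens");
```

Multiply that gap across every message in a long chat and the savings stop looking trivial.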
Keep Context Clean

If your chat covers many topics, the AI’s context can get messy. Like any memory, human or artificial, it might try to answer your last question but get confused. Keep chats focused on one topic at a time.

For example, I once had a long chat covering everything: work, hobbies, life, and so on. In particular, I touched on comic books, vinyl records, travel, and festivals. After this messy conversation, I asked, “What is Doom?”. The AI had no idea what I meant and asked me to clarify: Did I mean Doctor Doom from Marvel? The DOOM video game soundtracks on vinyl? The rapper MF DOOM’s last concert? Or maybe the IATA airport code “DOM” in the Dominican Republic — possibly a typo?

The point is… all of these interpretations are valid. And that’s the problem! The model can’t decide what you mean without a clear, focused context. So try to keep your chats on one topic at a time: finish one thread, close it, then start the next. Your AI (and your sanity) will thank you.

“Like a Robot” (Because You’re Not in a Poetry Contest)

I enjoy texts with metaphors, quotes, or playful grammar. As a non-native English speaker (and a failed songwriter at heart), this helps me learn and have fun. But here’s the truth: the AI doesn’t care. Eloquence, slang, or “clever” phrasing only adds noise. Keep it plain. Example:

“Synthesise an argumentation routine to execute assignments validity” → too vague.
“Write a script that generates arguments to check if assignments are valid.” → better.

Yes, the second one is quite boring for humans… but the AI executes it instantly! So skip flowery prose, idioms, and inside jokes (unless that’s the goal, of course). Your prompts should use basic, literal language. The AI is not your English professor — it’s a “text calculator”. The clearer you are, the better it works.

Collaborate with AI (and Each Other’s AI)

If you’re new to “prompt engineering,” it can feel strange at first. Don’t worry — there’s a cheat code: ask an AI to help you craft your prompts! I do this all the time. Think of it as an AI tag-team: one AI helps phrase your question so another AI can answer it better. For example, if you’re struggling to write a prompt for Cursor… ask ChatGPT!

Optimize the following prompt for another AI: "I want a GitHub Actions workflow file that triggers on PR and runs ESLint." Make it concise and clear.

The result might be gold. This “double-AI” approach reduces errors and teaches you better phrasing for next time. The more you use one AI to help another, the more you train yourself to write clearer prompts. Of course, this isn’t the only method. There’s also “chain-of-thought prompting”, but that’s a separate topic. Here, we’re focusing on practical tips for using AI in daily development — not a guide to becoming a professional prompt engineer.
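If you want to make that tag-team step repeatable instead of pasting it into a chat window every time, you can script it. A minimal sketch, assuming the official openai npm package, an OPENAI_API_KEY in the environment, and a placeholder model name:

```typescript
// Minimal sketch of the "double-AI" step: ask one model to rewrite a rough prompt
// before you hand it to another tool (an AI-powered IDE, another model, etc.).
// Assumes the official "openai" npm package and OPENAI_API_KEY in the environment;
// the model name below is a placeholder, not a recommendation.
import OpenAI from "openai";

const client = new OpenAI();

async function optimizePrompt(roughPrompt: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "user",
        content: `Optimize the following prompt for another AI: "${roughPrompt}". Make it concise and clear.`,
      },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

// Usage: feed in the rough idea, then paste the optimized prompt into Cursor (or wherever).
optimizePrompt(
  "I want a GitHub Actions workflow file that triggers on PR and runs ESLint."
).then(console.log);
```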
So… What’s next?

Now what? Well, you’re limited only by your imagination (and the current state of technology, of course). Let’s say you’re a developer, and your typical day often looks like: “Open task manager, pick a task, do it, repeat.” With AI, you could speed up this workflow (or even partially automate it — though please don’t fully replace yourself). Using an AI-powered IDE or a few AI tools, your chats could look like:

“Implement ticket TIC-123”
“Fix failing tests”
“Open PR and describe changes”
“Apply recommended changes, reply to PR comments, commit, and push”

And so on. It starts to feel like AI is replacing humans at work… but let’s be real: it’s not the magic AI from pop culture. Even pop culture knows it’s better to have a human supervise the AI to prevent catastrophic mistakes. With today’s AI tools (again, just tools), AI cannot replace you. But people who know how to use these tools wisely can! Think of it like a business analyst with a calculator: you wouldn’t hire a calculator; you hire the analyst who knows how to use it to produce faster, higher-quality results.

And I lied: the key is not just to learn your tools, but to learn to use them wisely. Don’t use AI to stroke your ego; otherwise, it becomes an ego machine. What’s the point of a tool that “resolves all PR comments” if your original prompt was basically “Resolve all comments, explain why I’m right, and show others they’re wrong”, huh?

One of the most quoted statements by Yann LeCun (VP & Chief AI Scientist at Meta), mentioned in multiple articles (such as “Hallucinations Could Blunt ChatGPT’s Success”), still holds true:

“Large language models have no idea of the underlying reality that language describes […] Those systems generate text that sounds fine, grammatically, semantically, but they don’t really have some sort of objective other than just satisfying statistical consistency with the prompt.”

Always check AI outputs. AI doesn’t guarantee truth; it provides results. You’ve probably experienced it: AI generates confident nonsense, and it’s surprisingly easy to convince an AI that 2+2=5. Use AI to save time, but always keep your brain in the loop. Always.

Happy coding!