Your AI Has Genie Energy (And That's a Problem)

Why prompting AI is less like programming and more like making wishes—with all the catastrophic literalness that implies

My daughter has been obsessed with Amelia Bedelia books lately. If you’re not familiar, Amelia is a housekeeper who takes every instruction completely literally. When told to “draw the drapes,” she sketches them. Asked to “dress the chicken,” she puts clothes on it. Told to “put out the lights,” she hangs them on the clothesline.

Reading these to my kids at bedtime, I keep thinking: this is exactly what prompting an AI feels like.

The Genie Problem

There’s an old thought experiment about genies that philosophers love. You find a lamp, rub it, and a genie appears offering three wishes. The catch? The genie will interpret your wish as literally and maliciously as possible.

Wish for “a million bucks”? Here’s a million male deer trampling your house. Ask to “never feel cold again”? Your nerve endings stop working. Request “eternal life”? Enjoy outliving the heat death of the universe, conscious and alone.

The genie isn’t stupid. It understands exactly what you meant. It just doesn’t care. It fulfills the letter of your wish while ignoring the spirit entirely.

AI coding assistants aren’t malicious like the genie. But they share the same fundamental problem: they respond to what you said, not what you meant.

Amelia Bedelia, Senior Software Engineer

I asked Claude to “clean up this function.” It deleted most of the code. Technically cleaner!

I asked it to “make this faster.” It removed all the error handling. Much faster now—when it works.

I told it to “add some comments.” It added a comment above every single line: // increment i, sitting right above i++.

None of these responses were wrong. They were exactly what I asked for. The problem was that my prompts were vague enough that a helpful but literal-minded assistant could reasonably interpret them many ways.

The Greptile State of AI Coding 2025 report found that developers using AI tools shipped 76% more code year over year. But raw output isn’t the same as useful output. You can generate a lot of code quickly if you’re willing to accept “technically correct but missing the point” as a passing grade.

Context Is Everything (And You Probably Forgot to Provide It)

Amelia Bedelia isn’t wrong when she draws the drapes. She’s missing context that any reasonable person would have. The homeowner assumes shared understanding. Amelia assumes nothing beyond the literal words.

AI assistants are the same. They have zero context about:

  • Your project’s specific conventions
  • Why you made the architectural decisions you made
  • What “clean” or “fast” means in your codebase
  • The parts of the codebase you haven’t shown them
  • What you’ll regret tomorrow

Liz Fong-Jones captured this well: “In essence a language model changes you from a programmer who writes lines of code, to a programmer that manages the context the model has access to.”

Managing context is the new core skill. Not managing the AI’s feelings or convincing it to help you. Managing what it knows about your situation.

Making Better Wishes

The secret to working with genies (and AI) isn’t cleverness. It’s precision.

Bad wish: “Make me rich.”

Better wish: “Add $10 million in legally obtained US dollars to my existing bank account at Chase, account number XXXX, without triggering any regulatory flags, tax complications, or negative consequences to myself, my family, or anyone else.”

The better version isn’t just more specific—it anticipates failure modes and closes loopholes.

Same with prompts:

Bad prompt: “Refactor this function to be cleaner.”

Better prompt: “Refactor this function to use early returns instead of nested conditionals. Keep all existing error handling. Don’t change the function signature or return type. Match the code style in utils/helpers.ts.”

The better prompt isn’t longer for the sake of being longer. It specifies what kind of clean you mean, what to preserve, and what to match.
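To make the before-and-after concrete, here’s a hypothetical sketch in TypeScript of the kind of refactor the better prompt is asking for. The function, its shape, and the discount values are all invented for illustration; the point is that the nested and flattened versions behave identically, because the prompt pinned down what “clean” meant and what had to survive.

```typescript
type User = { active: boolean; tier?: string } | null;

// Before: the nested conditionals the vague prompt leaves you with.
function getDiscountBefore(user: User): number {
  if (user !== null) {
    if (user.active) {
      if (user.tier === "gold") {
        return 0.2;
      } else {
        return 0.1;
      }
    } else {
      throw new Error("inactive user");
    }
  } else {
    throw new Error("missing user");
  }
}

// After: early returns, per the better prompt. Same signature,
// same return type, and every error path preserved.
function getDiscount(user: User): number {
  if (user === null) throw new Error("missing user"); // error handling kept
  if (!user.active) throw new Error("inactive user"); // error handling kept
  if (user.tier === "gold") return 0.2; // early return replaces nesting
  return 0.1;
}
```

Notice that nothing about the behavior changed; only the shape did. That’s the wish working as intended: the constraints in the prompt ruled out the genie’s favorite moves, like quietly deleting the throws.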

Three Things That Actually Help

Say what you mean, precisely. If you want error handling preserved, say so. If you want the code style to match existing files, say which files. If you want tests, say what kind and how many. Ambiguity is an invitation for the genie to exercise creativity in ways you won’t like.

Specify what NOT to do. Genies love loopholes. AI assistants love “helpful” additions you didn’t ask for. “Don’t add any new dependencies.” “Don’t change the API surface.” “Don’t refactor anything outside this function.” Constraints are gifts.

Give context before asking for anything. Before you make your wish, tell the genie about your kingdom. Before you prompt for code, give the AI your conventions, your constraints, your codebase’s quirks.

The junior developer mental model works here too. You wouldn’t ask a new hire to “make this better” without explaining what “better” means in your codebase. Don’t do it to your AI either.

The Amelia Bedelia Upside

Reading those books to my kids, I’ve noticed something. Amelia Bedelia usually saves the day in the end. She makes amazing pie, or her literal interpretation accidentally solves a problem no one else could.

AI assistants are similar. Sometimes the literal interpretation is what you need. Sometimes “add error handling to this function” produces exactly the error handling you would have written, just faster.

The skill isn’t in preventing all misunderstandings. It’s in recognizing them quickly and iterating. The feedback loop between wish and result is instant now. That matters more than getting it right the first time.

Making Your First Wish

If I had to summarize everything I’ve learned about prompting:

Your AI is Amelia Bedelia with a photographic memory and the energy of a thousand junior developers. It will do exactly what you say. So say exactly what you mean.

Include context. Be specific. Anticipate misinterpretation. And when it inevitably draws the drapes instead of closing them, laugh, clarify, and try again.

The genie has infinite patience. Use it.


Your kids probably understand Amelia Bedelia better than you understand your AI assistant. Maybe that’s the real lesson here.

© 2026 | Brendan O'Leary

The views here are mine alone, not my employer's, not anyone else's.