Why Your AI Prompts Fail (And How to Fix Them)

Pau Karadagian

Most people get mediocre AI results because they don't know how to write prompts. Learn the CAPE method to create instructions that generate actionable plans.

AI

People Ops

HR

TL;DR

Your AI prompts fail because you're not including specific context, clear action, and defined purpose. Use the CAPE method: Context + Action + Purpose + Evaluation. Result: prompts that generate executable plans instead of generic listicles.


Why generic prompts don't work in People Ops

If you can't ask for it right, AI won't magically figure it out

Generic prompts in People Ops fail because AI needs specific context about your team, industry, and objectives to generate useful responses. Without this data, AI spits out generic lists you could've Googled from any 2007 HR blog.

The scene is painfully familiar. You need to put together an upskilling proposal for a key department. Time's running out. Your team's swamped. So you fire up your AI tool and type something like: "create a training plan for customer success"... and what you get back is a soulless, generic list you could've pulled from Google in 2007.

The realization lands hard: this isn't what you need. But you can't figure out how to ask for it better, or which variables to tweak. Hell, you're not even totally sure what decision you're trying to make with this thing.

So you leave it there. Add some polish and move on to the next fire (because AI saves time, right?).


What AI illiteracy looks like for HR professionals

AI illiteracy for HR isn't about not understanding technology; it's the inability to create specific prompts that produce actionable results. It means not knowing how to talk to a machine that requires context, focus, and explicit purpose to function.

And that's the problem: you spent years learning to listen, hold space for teams, read group dynamics. But now, suddenly, your most powerful tool doesn't understand metaphors, emotions, or ambiguity. It's asking you to do something almost nobody in HR learned: be surgical.


Common mistakes when using AI in HR

The most common mistakes are vague prompts like "generate a performance summary" or "give me team building ideas." These prompts fail because they don't specify context, desired format, or purpose, resulting in generic, unusable responses.

Most HR prompts sound like this: quick, directionless. Then comes the frustration: "AI doesn't work, it gives basic stuff, just spits out random junk."

Of course it does. Because you're not asking for anything useful. Because you're speaking (or typing) without thinking.

You jumped into the most powerful era in work history... without bothering to ask yourself what you actually want to get out of it.


How to define clear objectives before using AI in HR

To define clear objectives with AI in HR, you need three elements: your specific hypothesis, the variables you want to analyze, and the concrete change you're trying to achieve. Without these elements, AI will only reflect and amplify your lack of strategic direction.

If you don't define your objective with these three elements, then whatever you ask AI will always be surface-level. Something decorative like a footnote in an Excel sheet nobody reads.

Remember: AI doesn't think for you. It only executes what you give it. And if what you give it is garbage, what you're going to get is... more of the same garbage. With pretty formatting, sure. (And some cheerful emojis).


What is insight porn?

Insight porn is generating visually attractive dashboards and reports without converting them into concrete actions. It paralyzes your strategy because it creates the illusion of progress while you avoid making difficult decisions or implementing real changes in policies and processes.

It's another clear symptom of this new illiteracy: AI gave you an insight. You didn't take the next step. And if there's no action, it's not strategy. You just have entertainment dressed up like data science.

OK, you know what doesn't work: vague prompts, decorative dashboards, insights that lead nowhere. Now here's what matters: how do you become someone who actually knows how to use this tool?


Essential skills for using AI effectively in HR

The three essential skills are: defining exactly what you want to achieve, translating that into a clear prompt with context, and critically evaluating what AI returns. This requires thinking like a strategist, not a task executor.

You don't need to know how to code. You don't need to understand how models are trained.

Bottom line: you need to think like a strategist again, not a task executor. (can't stress this enough!)

Because if you don't know what you want, no AI is going to save you from making senseless decisions.


The CAPE method: how to create prompts that work

The CAPE method is a 4-step framework for creating effective prompts: Context (specific information about your situation), Action (what it should do exactly), Purpose (what you'll use the result for), and Evaluation (how you'll measure if it's good).

  1. C - Context: What specific information does AI need about your situation?

Team size, industry, geographic location, organizational culture, budget, timing

  2. A - Action: What exactly do you want it to do?

Not "help me with," but "generate," "analyze," "compare," "prioritize"

  3. P - Purpose: What are you going to use this result for?

C-level presentation, operational decision, input for another tool, hypothesis validation

  4. E - Evaluation: How will you know if the result is good?

Specific metrics, quality criteria, necessary format

CAPE method in action:

Before (terrible prompt): "Help me with ideas for retaining talent"

After (CAPE method):

  • C: Team of 15 developers in fintech, 40% remote across North America, 25% annual turnover

  • A: Generate 5 specific actions to reduce turnover

  • P: For board meeting presentation to define 2024 budget

  • E: Each action must have estimated ROI and implementation timeline

Resulting prompt: "Generate 5 specific actions to reduce 25% annual turnover in a team of 15 fintech developers, where 40% work remotely across North America. Each action must include estimated ROI, implementation timeline, and tracking metrics. The goal is to present these proposals at a board meeting to define the 2024 budget."
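
If your team sends prompts through an API or an internal tool rather than a chat window, the four CAPE pieces are easy to wrap in a small template so nothing gets dropped. Here's a minimal sketch in Python, assuming the official OpenAI Python SDK; the function name, model name, and wording are illustrative, not a prescribed implementation:

```python
# pip install openai  -- assumes the official OpenAI Python SDK; any chat-style client works
from openai import OpenAI

def build_cape_prompt(context: str, action: str, purpose: str, evaluation: str) -> str:
    """Assemble a CAPE prompt: Context + Action + Purpose + Evaluation."""
    return (
        f"Context: {context}\n"
        f"Task: {action}\n"
        f"Purpose: {purpose}\n"
        f"Evaluation criteria: {evaluation}"
    )

prompt = build_cape_prompt(
    context="Team of 15 fintech developers, 40% remote across North America, 25% annual turnover",
    action="Generate 5 specific actions to reduce turnover",
    purpose="Board meeting presentation to define the 2024 budget",
    evaluation="Each action must include estimated ROI, implementation timeline, and tracking metrics",
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name; use whatever your organization has access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The code isn't the point; the structure is. Every prompt goes out with context, action, purpose, and evaluation criteria filled in, so a vague "help me with ideas" request never leaves by accident.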

Ready to transform your own HR prompts? Download our free collection of proven CAPE method prompts that generate actionable results for real HR challenges.


How to improve AI responses to get useful results

AI doesn't always nail it on the first try. Here's how to improve:

Red flags that mean you need to adjust:

  • Generic responses: If what it returns applies to any company, you're missing context

  • No numbers: If it doesn't include metrics, timelines, or specific budgets

  • Impossible to execute: If you don't know where to start implementing it

  • Too theoretical: If it sounds like it came from a manual, not reality

Refinement techniques:

  • Add constraints: "Considering we have a limited budget of $10K and a 3-month timeline..."

  • Ask for specific formats: "Organize the response in a table with columns: action, owner, cost, expected impact"

  • Request alternatives: "Give me 3 different approaches: one conservative, one innovative, and one for quick implementation"

  • Challenge the obvious: "What risks or limitations does each proposal have?"
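
Through an API, each of these refinement techniques is just a follow-up message in the same conversation. A rough sketch, again assuming the OpenAI Python SDK (model name and wording are illustrative):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompt = (
    "Generate 5 specific actions to reduce 25% annual turnover in a team of 15 fintech "
    "developers, where 40% work remotely across North America."
)
history = [{"role": "user", "content": prompt}]

# First pass: get the initial answer and keep it in the conversation history
first = client.chat.completions.create(model="gpt-4o", messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# Refinement pass: add constraints, force a format, and challenge the obvious
history.append({"role": "user", "content": (
    "Considering we have a limited budget of $10K and a 3-month timeline, "
    "reorganize the response as a table with columns: action, owner, cost, expected impact. "
    "Then list the risks or limitations of each proposal."
)})
refined = client.chat.completions.create(model="gpt-4o", messages=history)
print(refined.choices[0].message.content)
```

Keeping the first answer in the message history is what lets the model reorganize its own response instead of starting from scratch.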


Examples of specific prompts for real HR cases

Specific prompts include detailed context (industry, team size, location), clear action (generate, analyze, compare), and concrete constraints (time, budget, format). This transforms generic responses into executable solutions for your specific reality.

Case 1: Onboarding

Terrible prompt: "Create an onboarding plan for new employees."

What AI returns:

A generic task list (welcome, tour, introductions, first objectives).

Useful for nobody. Applies to nothing. Could've Googled it in 5 seconds.

Strategic prompt: "Create an onboarding plan for a B2B sales role in North America, considering the team is remote, ideal ramp-up is 60 days, and biggest challenge is getting them to understand the sales pitch quickly. Use an informal tone that builds connection."

What AI returns:

Week-by-week onboarding focused on pitch mastery, shadowing with senior reps, recorded sessions of successful demos, and specific checkpoints.

Applicable and useful.

Case 2: Performance evaluation

Terrible prompt: "Generate a form to evaluate team performance."

Strategic prompt: "Generate an evaluation form for a UX design team focused on cross-functional collaboration, deliverable quality, and autonomy. Company culture prioritizes honest feedback, so include open-ended questions that don't sound evasive."

Result: Items aligned to real role criteria. Questions that allow unambiguous evaluation and can scale to other creative areas.

It stops being "just another form" and becomes strategic input.

Case 3: Talent development

Terrible prompt: "Suggest ideas for retaining young talent."

Strategic prompt: "Suggest 5 actions to improve the experience of employees under 30 in support roles (CX, sales, back office), at a company where salary is average but autonomy and purpose matter more. 70% of them are remote across North America."

Result: Segmented ideas aligned to cultural context, with proposals like peer mentoring cycles, horizontal recognition, and micro-ownership projects.

You get results that aren't generic, cookie-cutter blog solutions. AI returns something real and executable.

Case 4: Benefits benchmarking

Terrible prompt: "Make me a list of competitive benefits at tech companies."

Strategic prompt: "Create a comparison table of benefits at tech companies headquartered in the US but with distributed teams across North America. Focus on health, wellness, and professional development benefits. Preferred sources: Remote, Deel, Rippling, Carta."

Result: A table that can become direct input for a People decision. Saves hours of research and aligns with strategic focus. And it'll probably suggest you check out Atlas flexible benefits (sounds like a joke, but it might actually happen if you want it to) :)

Notice something special about the strategic prompts? The key is being as specific as possible and giving context about what you want to achieve: going from point A to point B, and from there, fine-tuning the details.

Want 20+ more strategic prompts like these for every HR scenario? Get instant, free access to our complete CAPE method prompt library, covering recruitment, performance reviews, employee engagement, and more.


Checklist for validating prompts before using AI in People Ops

Before hitting send, check:

  • Does it include specific context? (industry, size, location, culture)

  • Is the action clear? (generate, analyze, compare vs. "help")

  • Do I define output format? (table, list, paragraphs, metrics)

  • Do I establish constraints? (time, budget, resources)

  • Can I execute the response tomorrow? (if not, it lacks specificity)

  • Would another colleague understand what I asked? (clarity test)
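
If prompts circulate in a shared doc or Slack channel, this checklist can double as a lightweight pre-flight script. Below is a minimal Python sketch with purely illustrative keyword heuristics; it can't judge quality, it only flags the obvious gaps:

```python
def preflight_check(prompt: str) -> list[str]:
    """Flag obvious gaps before sending a prompt. Heuristic only: passing
    doesn't guarantee a good prompt, but failing almost guarantees a bad one."""
    text = prompt.lower()
    warnings = []

    # Is the action clear? Vague verbs usually mean a vague request.
    if any(vague in text for vague in ("help me", "give me ideas", "some thoughts")):
        warnings.append("Action is vague: use verbs like 'generate', 'analyze', 'compare', 'prioritize'.")

    # Does it define an output format?
    if not any(fmt in text for fmt in ("table", "list", "bullet", "columns", "paragraph")):
        warnings.append("No output format specified (table, list, columns, metrics).")

    # Does it establish constraints?
    if not any(c in text for c in ("budget", "timeline", "deadline", "week", "month", "$")):
        warnings.append("No constraints found (time, budget, resources).")

    # Context is hard to detect automatically; very short prompts rarely contain enough of it.
    if len(prompt.split()) < 30:
        warnings.append("Prompt is very short: probably missing context (industry, size, location, culture).")

    return warnings


if __name__ == "__main__":
    for issue in preflight_check("Help me with ideas for retaining talent"):
        print("-", issue)
```

The clarity test at the end of the checklist still needs a human: no script can tell you whether a colleague would understand what you asked.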


Red flags: when AI isn't helping you

The main warning signs: copying and pasting responses without editing, not being able to measure whether it worked, results that are interchangeable with your competitors', and not being able to explain why you chose that solution. These signals point to poorly framed prompts or a lack of strategic judgment:

  • You're copying and pasting without editing: If you use the response as-is, you're probably not being specific enough.

  • You don't know how to measure if it worked: If you don't have metrics to evaluate the result, the prompt was wrong from the start.

  • Sounds like everyone else: If the result is interchangeable with what your competition would do, you're missing strategic differentiation.

  • You can't explain why you chose that response: If you don't understand the logic behind the result, you shouldn't implement it.

Ready to master AI prompts that actually work? Download our free CAPE method toolkit with ready-to-use templates, examples, and a step-by-step implementation guide for HR professionals.


Our Community

Want to stop winging it with AI and start thinking about it seriously?

Join our AI x People Ops community, the group where HR professionals share real cases, prompts that work, and unfiltered questions.

Because nobody's going to learn this for you, but you don't have to learn it alone.


FAQ about AI in HR

Is prompt improvement just for ChatGPT?

No. This applies to any HR tool with integrated AI, like performance platforms, benefits systems, recruiting tools, or survey platforms. Even when it's not visible, many tools already use AI to suggest actions, analyze data, or automate processes. That's why it's crucial to know what you want to achieve before interacting with any system.

What if my team doesn't want to adopt AI?

It's common to see resistance when the team doesn't see concrete value. Often, it's not disinterest but lack of clear examples, inadequate training, or too many tools with zero purpose. Adoption improves when there's visible purpose and usage that actually solves real day-to-day problems.

How do I start learning AI?

The first step is learning to define clearly what you want to achieve and to translate it into a clear prompt. From there, you can start experimenting with prompts, analyzing what AI returns, and adjusting for your context. It's an iterative process, not a linear one. It's not about taking an AI course; it's about developing judgment for working with it.

What is insight porn?

Insight porn is excessive or superficial use of data without turning it into action. It happens when you generate dashboards, reports, or analyses that seem interesting but don't translate into decisions, changes, or relevant conversations. It's a common risk in HR when you prioritize showing data without integrating a concrete intervention plan.
