How-to · May 12, 2026

How to manage an AI coworker

Most teams that hire an AI employee underuse it for weeks because nobody owns the manager role. Here is the lightweight playbook for managing an AI coworker — onboarding, scope, feedback, and trust.


Most teams that hire an AI employee underuse it. The pattern is consistent: they buy it, give it three jobs the first day, expect autonomous behavior by Wednesday, and quietly stop using it by month two.

The fix isn't more capability. It's treating the AI coworker the way you treat a human coworker: onboarded, managed, trusted incrementally.

Week one: give it one job and watch

Pick the single highest-leverage repeating job the team currently does badly. A weekly report nobody likes writing. A morning briefing pulled from three tools. Dormant-lead follow-ups. One job, not three.

Give the AI coworker access only to the tools that job needs. Tell it the outcome you want, the channel it should operate in, and the schedule. Then sit next to it for the first three runs — review the output, correct in-thread, let it ship.
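If your platform exposes the coworker's scope as configuration, week one fits in a few lines. Here's a minimal sketch; the field names (`job`, `tools`, `channel`, `schedule`, `approval`) are hypothetical, not Junior's actual settings:

```python
# Hypothetical week-one scope: one job, only the tools that job needs,
# one channel, a fixed schedule, and a human approving every output.
week_one_scope = {
    "job": "weekly-pipeline-report",   # the one job, not three
    "tools": ["crm:read"],             # access limited to what the job needs
    "channel": "#sales-ops",           # where it operates
    "schedule": "Mon 08:00",           # when it runs
    "approval": "every-output",        # review the first three runs in-thread
}
```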

Week two: expand carefully

After three good runs, expand by:

  • Adding one more tool to the scope (e.g., now it can also read the CRM)
  • Loosening one approval rule (e.g., it can now post to Slack without approval, but emails still need a click)

Never expand both in the same week. You want to know which change is responsible if something drifts.
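One reason to expand a single axis at a time: in config terms, each option is a one-line change, so if output drifts you know exactly which line to revert. Continuing the hypothetical sketch above:

```python
# Option A: add one tool, leave approvals untouched.
week_two_a = dict(week_one_scope, tools=["crm:read", "calendar:read"])

# Option B: loosen one approval rule, leave tools untouched.
week_two_b = dict(week_one_scope, approval="slack-auto, email-gated")

# Pick A or B for the week; never both, or you can't attribute drift.
```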

The autonomy ladder

| Stage | What the AI does | When to advance |
|---|---|---|
| 1. Read-only | Summarizes, reports, watches | First 3 outputs are good |
| 2. Drafts | Writes emails, docs, replies for human review | First 5 drafts need <2 edits each |
| 3. Approval-gated writes | Sends/posts with one-click approval | First 10 approvals are auto-yes |
| 4. Autonomous writes | Sends/posts on its own under guardrails | Trust earned over weeks |

Skipping stages is how teams end up with a Junior that sent the wrong thing to the wrong person on day five. Don't skip stages.
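The ladder is simple enough to encode, which turns "don't skip stages" from a hope into a check. A sketch under the same assumptions as the configs above; the advancement thresholds come straight from the table, everything else (the record shape, the function) is hypothetical:

```python
from enum import IntEnum

class Autonomy(IntEnum):
    READ_ONLY = 1        # summarizes, reports, watches
    DRAFTS = 2           # writes emails, docs, replies for human review
    APPROVAL_GATED = 3   # sends/posts with one-click approval
    AUTONOMOUS = 4       # sends/posts on its own under guardrails

def ready_to_advance(stage: Autonomy, history: list[dict]) -> bool:
    """Return True when the ladder's advancement criterion is met.

    `history` holds one record per output, oldest first; the record shape
    ({"good": ..., "edits": ..., "approved": ...}) is hypothetical.
    """
    if stage is Autonomy.READ_ONLY:       # first 3 outputs are good
        return len(history) >= 3 and all(h["good"] for h in history[:3])
    if stage is Autonomy.DRAFTS:          # first 5 drafts need <2 edits each
        return len(history) >= 5 and all(h["edits"] < 2 for h in history[:5])
    if stage is Autonomy.APPROVAL_GATED:  # first 10 approvals are auto-yes
        return len(history) >= 10 and all(h["approved"] for h in history[:10])
    return False  # stage 4: trust is earned over weeks, not counted
```

The point isn't to automate promotion. It's that every stage has a countable exit criterion, so "ready for more autonomy" is never a vibe.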

Pick a manager

Every AI coworker should have one human owner. That person:

  • Reviews output weekly for the first month
  • Owns the feedback loop (corrections in-thread, not in a separate doc)
  • Adjusts scope as the team's needs change

This is the single biggest predictor of whether the AI coworker works out for a team. Teams with an owner ramp fast. Teams without one churn quietly.

Feedback that works

Like with a human: specific, tied to one action, in-thread.

  • ✓ "Tone too formal for this customer — the rest of our thread with them is casual."
  • ✗ "Be more casual."

Junior carries context across sessions, so specific in-thread corrections can be referenced and applied next time. The vague version doesn't give it anything to apply.

What if it drifts?

Pause autonomous actions, drop back one rung on the ladder, find the specific instance that caused the drift, correct it in-thread, then climb back up. Same as you'd handle a human teammate who got something wrong.
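In terms of the hypothetical `Autonomy` ladder above, the recovery move is a one-liner:

```python
def handle_drift(stage: Autonomy) -> Autonomy:
    """Drop back one rung after a drift incident; never below read-only.

    Pausing autonomous actions, finding the specific instance, and
    correcting it in-thread happen outside this function.
    """
    return Autonomy(max(stage - 1, Autonomy.READ_ONLY))
```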


Hiring an AI employee is the easy part — onboarding it well is what separates teams that get value in week one from teams that quietly stop using it. If you're starting fresh, the home page walks through the hire flow, and the use-case pages cover concrete first-week jobs by role.

