
Using the Intelligence Agent

This guide covers how to use the Intelligence Agent effectively — structuring questions clearly, refining results, navigating sessions, and interpreting the outputs.


The agent responds best to specific, goal-oriented questions. The more context you provide, the more useful the response.

Less effective:

“Show me bots”

More effective:

“Show me the bot rate per campaign for the last 30 days, sorted by highest bot rate first”

Even better:

“Show me campaigns from March with a bot open rate above 20%. I want to understand if the spike is coming from Apple MPP or from security scanners.”

  1. Include a time frame — “last 7 days”, “in February”, “since March 1st”. Without one, the agent defaults to the last 30 days.
  2. Specify what to group by — “by campaign”, “by IP”, “by day”, “by ASN”. Grouping shapes the output significantly.
  3. Ask follow-up questions — The agent maintains context within a session, so you can ask follow-ups without repeating all the filters.
  4. Name specific campaigns — If you know the campaign name, include it. Partial matching is supported.
  5. Request a comparison — “Compare bot rates between my newsletter and my promotional campaigns” produces more insight than a single-campaign query.

If the first response is not quite right, ask the agent to adjust:

  • “Show that as a line chart instead of a bar chart”
  • “Filter this to bot events only, remove suspicious”
  • “Add a column for the top bot reason”
  • “Sort by event count descending”
  • “Limit this to the top 10 IPs”
  • “Break this down by day instead of by campaign”

You can also ask the agent to explain its methodology:

  • “How did you calculate the bot rate?”
  • “Why is this campaign flagged as high-bot?”
  • “What does the ‘Cloud IP’ detection reason mean?”
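To make the first of those methodology questions concrete, a per-campaign bot rate is typically the share of tracked events classified as bot. The sketch below illustrates that arithmetic only; the field names (`campaign`, `classification`) and the sample events are invented, not the product's actual schema or method.

```python
# Illustrative sketch of a per-campaign bot-rate calculation.
# Field names and sample data are assumptions for this example.
from collections import defaultdict

def bot_rate_per_campaign(events):
    """Return {campaign: bot rate %} for events classified
    as "bot", "suspicious", or "human"."""
    totals = defaultdict(int)
    bots = defaultdict(int)
    for e in events:
        totals[e["campaign"]] += 1
        if e["classification"] == "bot":
            bots[e["campaign"]] += 1
    return {c: round(100 * bots[c] / totals[c], 1) for c in totals}

events = [
    {"campaign": "Spring Sale", "classification": "bot"},
    {"campaign": "Spring Sale", "classification": "human"},
    {"campaign": "Spring Sale", "classification": "human"},
    {"campaign": "Spring Sale", "classification": "suspicious"},
    {"campaign": "Newsletter", "classification": "human"},
]
print(bot_rate_per_campaign(events))  # {'Spring Sale': 25.0, 'Newsletter': 0.0}
```

Note that "suspicious" events count toward the denominator but not the bot numerator here; asking the agent how it treats suspicious events is a good use of the methodology questions above.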

When you notice an unusual spike in bot activity, a typical investigation workflow looks like this:

  1. Confirm the spike — “Was there a spike in bot opens between March 10–17?”
  2. Identify the source — “Which campaigns sent during that period had the highest bot rates?”
  3. Find the driving factor — “What are the top detection reasons for bot events in that campaign?”
  4. Isolate by IP — “Which IPs contributed the most bot events for campaign ‘Spring Sale’ on March 14th?”
  5. Check provider — “What percentage of those bot events came from SES vs Mailgun?”
  6. Cross-reference with list — “Did the bot rate differ between the ‘Active subscribers’ segment and the ‘Re-engagement’ list in that campaign?”

Bar chart — bot rate per campaign

Bars represent the bot rate (%) per campaign: the x-axis lists campaign names and the y-axis shows the bot rate percentage.

  • Long bars — High bot rate; investigate detection reasons
  • Bars split by color — If colored segments show bot vs suspicious, the combined height is the total non-human rate

Line chart — bot rate over time

Shows how the bot rate has changed over the selected period.

  • Flat line — Stable bot rate; typical for healthy sending programs (10–20% is normal)
  • Sharp spike — Sudden increase, often correlated with a specific campaign or a new list segment
  • Gradual climb — List decay, growing proportion of stale or bot-seeded addresses
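The "sharp spike" pattern above can also be checked programmatically. The sketch below is one illustrative heuristic (flag a day whose rate is well above the average of the preceding days); the 2× threshold and the daily series are invented, and this is not the agent's actual detection method.

```python
def find_spikes(daily_rates, threshold=2.0):
    """Flag day indices whose bot rate is at least `threshold`
    times the average of all preceding days.
    Illustrative heuristic only, not the agent's method."""
    spikes = []
    for i in range(1, len(daily_rates)):
        baseline = sum(daily_rates[:i]) / i
        if baseline > 0 and daily_rates[i] >= threshold * baseline:
            spikes.append(i)
    return spikes

# A stable ~15% rate (the "healthy" band above) with one jump on day 5.
rates = [14.0, 16.0, 15.0, 15.5, 14.5, 45.0, 16.0]
print(find_spikes(rates))  # [5]
```

A flat series returns no spikes, matching the "stable bot rate" reading above; a gradual climb also returns nothing, since no single day doubles the running baseline.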

Doughnut chart — engagement distribution

Shows the proportional split between Bot, Suspicious, and Human classifications.

  • The human slice represents real, trackable engagement
  • If the human slice is below 50%, the engagement metrics reported by your ESP are significantly inflated

Evidence panel

The evidence panel appears when the agent surfaces the reasoning behind specific classifications. Each row shows:

  • Rule fired — The detection rule name (e.g. “Apple MPP”, “Cloud IP”, “Sub-second open”, “Honeypot click”)
  • Score contribution — How many points this rule added to the bot confidence score
  • Details — Supporting data (IP address, user agent string, timing in milliseconds, etc.)
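To make the "score contribution" column concrete, here is a minimal sketch of how per-rule points could combine into a bot confidence score. The rule names come from the list above, but the point values, the 50-point threshold, and the three-way classification are invented for illustration; they are not the product's actual scoring.

```python
# Hypothetical point values per detection rule; the real
# contributions used by the product are not documented here.
RULE_SCORES = {
    "Apple MPP": 30,
    "Cloud IP": 25,
    "Sub-second open": 35,
    "Honeypot click": 60,
}

def classify(rules_fired, threshold=50):
    """Sum the contribution of every rule that fired and compare
    against an assumed bot-confidence threshold."""
    score = sum(RULE_SCORES.get(r, 0) for r in rules_fired)
    label = "bot" if score >= threshold else "suspicious" if score > 0 else "human"
    return score, label

print(classify(["Cloud IP", "Sub-second open"]))  # (60, 'bot')
print(classify(["Apple MPP"]))                    # (30, 'suspicious')
```

Reading the evidence panel this way explains why a single weak signal may only mark an event as suspicious, while two rules firing together can push it over the bot threshold.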

  • Name sessions descriptively — The agent does not auto-name sessions. Use the pencil icon to rename sessions to something like “March 2026 bot spike investigation” or “Klaviyo list B cleanup audit”.
  • One topic per session — Keep related questions in the same session so the agent carries context. Start a new session for unrelated investigations.
  • Save notable results — If the agent surfaces an important finding, copy the chart or summary before closing the session. Sessions are retained but outputs are not exported automatically.

Every Monday, ask:

“Give me a summary of bot activity from last week. Which campaigns had the highest bot rates? Were there any unusual spikes?”

This gives you a quick pulse without navigating charts manually.

Before sending to a new list or segment:

“What was the bot rate when I last sent to the ‘Re-engagement 90 days’ segment?”

High historical bot rates on a list segment correlate with poor list quality, which will affect your deliverability.

After a campaign with worse-than-expected engagement:

“What was the real human click rate for the ‘April Newsletter’ campaign after removing bots? How does that compare to my average?”

This separates genuine engagement decline from bot-inflation artifacts.
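The arithmetic behind that question is simple to sketch: subtract bot clicks from total clicks before dividing by delivered messages. All numbers below are invented for illustration.

```python
def human_click_rate(delivered, total_clicks, bot_clicks):
    """Click rate (%) after removing bot clicks. Illustrative only."""
    return round(100 * (total_clicks - bot_clicks) / delivered, 2)

# Invented example: 10,000 delivered, 800 total clicks, 350 from bots.
esp_reported = round(100 * 800 / 10_000, 2)   # rate as the ESP would report it
real = human_click_rate(10_000, 800, 350)     # rate after removing bots
print(esp_reported, real)  # 8.0 4.5
```

In this invented example the ESP-reported rate is nearly double the human rate, which is exactly the kind of gap the question above is designed to surface.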

If you suspect a shared IP is hurting your deliverability:

“Which IPs sent the most bot events via SES in the last 30 days? Include the ASN and the bot rate per IP.”

Compare these IPs against your IP Reputation data to see if high-bot IPs correlate with poor inbox placement.


Open the Intelligence Agent →