Priyanka Gosai

At what point does giving AI more access start making it worse?

I’ve been testing this with an AI agent we use for outbound workflows.

The agent’s job is simple: take a lead, generate a personalized outreach email, and send it.

Before:
The agent only had access to the lead’s basic details (name, company, role) and a prompt to write the email.
Output was consistent, clean, and predictable (though personalisation was limited).

What we changed:
We gave it more access:

  • company website data

  • LinkedIn summaries

  • past posts on social media

  • multiple tools to choose from (enrichment, scraping, email formatting)

After:
The results actually got worse:

  • emails became longer and less focused

  • it sometimes picked irrelevant details

  • occasionally used the wrong tone

It felt like more context introduced more noise than value.

Curious if others have seen something similar:

Where did adding more access or integrations start degrading output instead of improving it?


Replies

Umair

The problem isn't more access, it's that you gave the agent the freedom to decide what matters. When it only had name/company/role, the constraint forced relevance. Remove the constraint and it tries to use everything, because that's what LLMs do: they fill space. The fix is simple: give it more data, but explicitly tell it to pick ONE detail max. The access wasn't the issue; the lack of a selection heuristic was.
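A rough sketch of what that selection heuristic could look like: rank the enriched signals, keep exactly one, and build the prompt around only that. The scoring weights, field names, and example signals here are all made up for illustration.

```python
# Illustrative selection heuristic: pick ONE signal before the model
# ever sees the context, instead of handing it everything.

def pick_one_signal(signals):
    """Return the single most relevant personalization detail.
    Scoring is a placeholder: favor role-relevant, recent signals."""
    return max(signals, key=lambda s: s["relevance"] * 2 + s["recency"])

def build_prompt(lead, signal):
    """Constrain the email prompt to that one detail only."""
    return (
        f"Write a three-sentence outreach email to {lead['name']} "
        f"({lead['role']} at {lead['company']}). "
        f"Reference exactly one detail: {signal['text']}. "
        "Mention nothing else about the lead."
    )

signals = [
    {"text": "posted last week about scaling their SDR team", "relevance": 3, "recency": 3},
    {"text": "company raised a Series A in 2021", "relevance": 1, "recency": 1},
    {"text": "shares hiking photos on LinkedIn", "relevance": 0, "recency": 2},
]
lead = {"name": "Alex", "role": "Head of Sales", "company": "Acme"}

# The hiring post wins: highest relevance and recency score (9 vs 3 vs 2).
print(build_prompt(lead, pick_one_signal(signals)))
```

The point is that the ranking happens in plain code, outside the model, so the model never gets the chance to "use everything".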

Priyanka Gosai

@umairnadeem That’s a good way to put it. The moment you remove constraints, it defaults to using everything. Explicitly forcing it to pick one signal is what actually brings the quality back.

cecilia

This hits close to home, Priyanka. I see the exact same pattern on the recruiting side. When we loaded our outreach agent with every candidate signal (GitHub, blog posts, social media), the messages got longer and less human. The fix was identical: constrain it to pick ONE detail that makes the person feel seen, and build the whole message around that. More data should sharpen focus, not dilute it.

Priyanka Gosai

@ceciliatran Exactly this. The moment it tries to use everything, it loses the human feel.

We saw something similar with a resume evaluation agent. The more we allowed it to search the web for additional context, the more confused and inconsistent the output became. Once we constrained it to evaluate based only on the resume and pick a few key signals, the quality improved significantly.

Limiting it to one strong signal or a defined scope seems to consistently bring back clarity and relevance.
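For the defined-scope version, the constraint can be as blunt as an allowlist applied before anything reaches the model. The source names below are invented for illustration:

```python
# Illustrative scope filter: drop every context source the evaluation
# is not allowed to use, so only the resume ever reaches the prompt.

ALLOWED_SOURCES = {"resume"}

def scope_context(sources):
    """Keep only allowed sources; web search results never get through."""
    return {name: text for name, text in sources.items() if name in ALLOWED_SOURCES}

sources = {
    "resume": "8 years backend, led a payments migration",
    "web_search": "blog by a different person with the same name",
    "social": "unrelated vacation posts",
}

# Only the resume survives the filter.
print(scope_context(sources))
```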

Sai Tharun Kakirala

The paradox of choice applies to AI too. More options often lead to worse decisions. We ran into a similar pattern building Hello Aria (AI assistant via WhatsApp/Telegram, launching April 10th on PH): when the AI had access to everything, it tried to optimize for everything and ended up doing none of it well. Our fix was identical: explicit constraints. "Pick the ONE most important thing right now" beats "here's everything I know about you, do something useful." Access is cheap. Judgment about what to ignore is the hard part, and that's what you have to explicitly build in, not assume the model will figure it out on its own.

Priyanka Gosai

@sai_tharun_kakirala Completely agree. Giving access is easy, but deciding what to ignore is where things actually break. We’ve started seeing better results only when we force that constraint upfront.