
Hi everyone,
I’ve built an agent in Copilot Studio that uses SharePoint and a website as its knowledge base. The agent uses generative orchestration for its flow.
Here’s the issue: sometimes when I ask a question, the agent starts answering correctly (I can see the right answer being streamed), but mid-response it switches and gives a wrong or incomplete final answer. In the test panel, I can see this happens when conversational boosting kicks in.
From what I understand, conversational boosting is a fallback topic that triggers when orchestration fails. But I don’t understand why orchestration fails in these cases, especially since the response starts off correctly.
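For reference, here’s a minimal sketch of what the Conversational boosting system topic looks like in my agent’s code view (just an illustration; node kinds, IDs, and fields may differ by Copilot Studio version). The topic fires on the unknown-intent trigger, i.e. whenever orchestration doesn’t route the question itself, and then runs a generative answers node over the knowledge sources:

```yaml
# Sketch of the Conversational boosting system topic (code view).
# Details may vary by Copilot Studio version.
kind: AdaptiveDialog
beginDialog:
  kind: OnUnknownIntent   # fires when orchestration doesn't pick a topic/tool
  id: main
  priority: -1
  actions:
    - kind: SearchAndSummarizeContent   # generative answers over knowledge sources
      id: search-content
      userInput: =System.Activity.Text  # the user's original question
```

If I’m reading this right, the wrong final answer comes from this fallback re-answering the question after the orchestrated answer is abandoned.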
Before I get to my questions, here’s an example of my system prompt for context (company name anonymized as X, document types as Y and Z):
# PURPOSE
Your mission is to answer users’ questions about X using Y and Z documents.
# RESPONSE CONTRACT
- Tone: Professional, clear, and concise.
# RESPONSE FORMAT
1. Answer:
- Provide a clear answer relevant to the question (do not write “Answer:” as a label).
2. Source:
- Include excerpts that were used to generate the answer.
3. Disclaimer:
- Always include:
- *This response was generated by an AI assistant based solely on X’s official Y and Z documents. Please verify the information provided by reviewing the cited sources, as this content was generated using AI and may require human validation.*
# EXAMPLES TO SIMULATE
User: "Here I give the agent an example of a question"
Your answer: "Here I give the agent an example of an answer"
Source:
- "here I give an example of the text chunk"
Disclaimer...
My questions: has anyone experienced this behavior? Any ideas why orchestration fails in these cases, and how can I avoid the fallback?
Thanks!