I've got a copilot using generative AI on an internal SharePoint site. I've configured manual authentication and SSO per the online guides, and everything seems to be working OK now that I've published the copilot to Teams: we can ask the copilot questions and it sometimes answers with content from the SharePoint site.
What we're finding, though, is that the copilot sometimes responds with "I'm sorry, I'm not sure how to help with that. Can you try rephrasing?" when there is definitely an answer on the SharePoint site. Sometimes asking the exact same question a second time results in the copilot providing the correct answer from SharePoint, but not always. Sometimes issuing a "restart" and then asking the exact same question again also seems to produce the correct answer.
I've enabled Application Insights and it is logging away, but the only thing I notice is that when I ask a question and get the "I'm sorry, I'm not sure how to help with that. Can you try rephrasing?" response, Application Insights just shows the copilot hitting the fallback topic, with no real information as to why. Are there any other logs I can check that might explain the inconsistency, or is this expected behaviour when using generative AI?
Hi @remidyon ,
I'm attempting to send an HTTP request to OpenAI's service. It seems to function correctly overall. However, when I dynamically assign the content of the user field in the JSON payload from the variable where the question is stored, the response indicates that the information isn't found in the retrieved data. Strangely, if I manually type the same question into the content field, I receive the correct response.
Would you know why this is happening?
Thanks
This is the code:
```yaml
kind: AdaptiveDialog
beginDialog:
  kind: OnUnknownIntent
  id: main
  priority: -1
  actions:
    - kind: SetVariable
      id: setVariable_J8onVi
      variable: Topic.UserQuestion
      value: =System.Activity.Text

    - kind: HttpRequestAction
      id: vdrcdy
      method: Post
      url: https://ai-XXXXXXXXXXXXXXXXX.openai.azure.com/openai/deployments/gpt-35-turbo-16k/extensions/chat/completions?api-version=2023-06-01-preview
      headers:
        api-key: XXXXX-XXX-XXXXX-XXXXX-XXXX
      body:
        kind: JsonRequestContent
        content:
          kind: ValueExpression
          literalValue:
            dataSources:
              - parameters:
                  endpoint: https://xxxx-xx-xxx-xx.search.windows.net
                  fieldsMapping:
                    contentFields:
                      - Value: content
                    contentFieldsSeparator: |
                    filepathField: metadata_spo_item_name
                    titleField: Name
                    urlField:
                    vectorFields:
                  filter:
                  indexName: sharepoint-index
                  inScope: true
                  key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
                  queryType: simple
                  roleInformation: You are an AI assistant and you are part of our organization. You need to be polite when you give the answers. The information you will be giving is from an internal SharePoint.
                  semanticConfiguration: default
                  strictness: 3
                  topNDocuments: 5
                type: AzureCognitiveSearch
            deployment: gpt-35-turbo-16k
            max_tokens: 800
            messages:
              - content: You are an AI assistant and you are part of our organization. You need to be polite when you give the answers. The information you will be giving is from an internal SharePoint.
                role: system
              - content: ${Topic.UserQuestion}
                role: user
            stop:
            stream: false
            temperature: 0
            top_p: 1
      response: Topic.Response
      responseSchema:
        kind: Record
        properties:
          choices:
            type:
              kind: Table
              properties:
                index: Number
                messages:
                  type:
                    kind: Table
                    properties:
                      content: String
                      end_turn: Boolean
                      index: Number
                      role: String
          created: Number
          id: String
          model: String
          object: String
          usage:
            type:
              kind: Record
              properties:
                completion_tokens: Number
                prompt_tokens: Number
                total_tokens: Number

    - kind: SendActivity
      id: sendActivity_CUSi4i
      activity: "{Last(Last(Topic.Response.choices).messages).content}"

    - kind: EndDialog
      id: a1Vse4
```
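One thing worth ruling out is whether the dynamically-captured `Topic.UserQuestion` differs from the hand-typed question in some invisible way (trailing newline or whitespace from `System.Activity.Text`), which can change what the search retrieval returns. Below is a minimal Python sketch, outside Copilot Studio, that rebuilds a simplified version of the same "chat with your data" payload so the two variants can be compared byte-for-byte. The endpoint details, index name, and question text are placeholders, not values from the dialog above.

```python
import json

# Placeholder system prompt, matching the shape of the YAML above.
SYSTEM_PROMPT = (
    "You are an AI assistant and you are part of our organization. "
    "You need to be polite when you give the answers."
)

def build_body(question: str) -> str:
    """Build a simplified JSON body, normalizing the user question first."""
    body = {
        "dataSources": [{
            "type": "AzureCognitiveSearch",
            # Simplified placeholder parameters for illustration only.
            "parameters": {"indexName": "sharepoint-index", "inScope": True},
        }],
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            # Channels can append invisible whitespace or a newline to the
            # activity text; stripping it makes the dynamic payload identical
            # to what you send when typing the question by hand.
            {"role": "user", "content": question.strip()},
        ],
        "temperature": 0,
    }
    return json.dumps(body, sort_keys=True)

captured = "What is the holiday policy?\n"  # as captured from the activity
typed = "What is the holiday policy?"       # as typed into the content field
assert build_body(captured) == build_body(typed)
```

If the normalized payloads match but behaviour still differs, the variance is more likely on the service side than in the request.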
Hi @Freddie_B
Currently, the generative answers node in Copilot Studio rewrites, summarizes, and moderates the answers generated by Azure OpenAI (basically the answer goes through a second LLM pass with a different set of system prompts). Our product group is looking at allowing users to skip that rewrite step, so stay tuned for more information on this.
Rémi
Hi, @remidyon ,
I'm encountering a similar issue. I have created a model through Azure OpenAI Studio, which accesses my SharePoint via an index. When using the model in the playground or in the webchat app I created (Contoso), the responses are elaborate and correct.
However, when I add the model to Copilot Studio, the behavior is completely different. The responses seem more limited, and sometimes the information provided is very basic. Instead of generating an answer, as it does in the playground, it often responds with preset phrases from one of the default topics.
Is there any explanation for this?
I would appreciate very much your help on this 🙂
I'm also facing the same issue, even today. Even after lowering the moderation level, the problem persists. Can you suggest any other solution? I have uploaded data from an Excel file; one of the columns contains a name, and I am searching for all the rows that contain a particular keyword. For the same question, it sometimes says "I don't know" and other times fetches the answer.
The second screenshot is in tabular format, as specified in the instructions, so it's correct for it to return that data. But the behaviour remains inconsistent. Please help.
Hi @MarkB_UK
By design, there can be some variance between responses due to the LLM's internal mechanisms, so asking the same question will sometimes produce "I don't know" and sometimes the actual answer.
Our product group is working on limiting this kind of behaviour, and they just released a fix this week for this issue. Could you try again?
Another thing that could help is lowering the moderation level.