Hi everyone,
Imagine you have an investment application form and you are building an agent that helps users formulate answers for this form.
The challenge is that each answer has a strict character limit.
The agent should:
- ask the user for input,
- compare the input against available best practices,
- generate a well-formulated answer,
- make sure the answer does not exceed, let’s say, 2,000 characters including spaces,
- then continue question by question.
The problem is that LLMs cannot reliably count characters; they usually only estimate.
How would you design such a solution from a character-counting and answer-adjustment perspective?
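For context: the counting part itself is trivially deterministic once the text is outside the model — an ordinary string-length function counts every character, spaces and punctuation included. A minimal Python illustration (the sample answer is made up):

```python
# Character count is exact in plain code -- no estimation involved.
answer = "Diversify across asset classes to reduce risk."
print(len(answer))  # spaces and punctuation are included in the count
```

So the open question is less about counting and more about where in the agent flow the measured length can be fed back into the generation step.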
I tried one approach with Generative Answers: the agent generates an answer block, measures its length, and if it exceeds the limit, the loop repeats a few times.
But I am not fully happy with the answer quality when using Generative Answers.
A custom prompt as a tool also tends to hallucinate too much.
If I use a child agent, the answer quality is better, but then it is not clear to me how to store the AI-generated answer in a variable so I can measure its length.
Any ideas? ^^