Agents are answering state questions without querying live data (no current tool enforcement)

Agents developed on Base 44 currently appear to use a tool-suggestion model rather than tool enforcement, which allows them to respond to state or status questions without first querying live data. As a result, agents may return generic advisory responses instead of concrete results (such as counts, lists, or a specific record, report, or other element pulled from the app's data), even when data functions are available. Prompting or agent instructions alone do not reliably prevent this behavior, which makes it difficult to build AI features that depend on accurate, real-time application or system state. I was told this is a current limitation; addressing it would be very helpful for developers building data-driven AI functionality on the platform, as without it the agent is not functionally useful for such use cases.

When users ask state-based questions (for example, “How many devices are connected right now?”), the agent currently responds with generic guidance or explanatory text instead of querying live data. The same happens for similar cases such as active users, storage usage, failed workflows, integrations, or backups, where the agent explains how to find the information rather than returning the actual value. The expected behavior is that for these stateful questions, the agent should first execute the relevant data function and then respond with a concrete result (e.g., “0 devices connected,” “2 failed workflows”). Currently the app data, the tools configured for the AI, and the application's navigation, structure, and features are all invisible to the AI, so it generates generalized boilerplate responses, rendering it ineffective for production use. This is problematic given that most modern applications employ exactly this kind of AI.
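
For reference, this is roughly what the requested enforcement mode looks like in an OpenAI-style chat completions API (a minimal TypeScript sketch; the `getConnectedDevices` function and its wiring are hypothetical, but `tool_choice: "required"` is a real parameter in that API):

```typescript
import OpenAI from "openai";

const client = new OpenAI();

// Hypothetical data function; any Base 44 data function would slot in here.
const tools = [
  {
    type: "function" as const,
    function: {
      name: "getConnectedDevices",
      description: "Return the current count of connected devices.",
      parameters: { type: "object", properties: {}, required: [] },
    },
  },
];

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "user", content: "How many devices are connected right now?" },
  ],
  tools,
  // "required" forces the model to emit a tool call before answering --
  // the enforcement mode currently missing from the platform.
  tool_choice: "required",
});

// The reply now carries tool calls instead of free-form advisory text.
console.log(response.choices[0].message.tool_calls);
```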

I tried two workarounds:

1. Routing middleware: a required gateway that intercepts all queries, classifies them as state-based or general, forces data functions to run first for state queries, and only then passes the results to the agent, ensuring state questions always return real data rather than generic responses (see the middleware sketch after this list).

2. Frontend validation layer: logic in the chat interface that checks the agent's responses to state queries and validates that tool calls were actually made; if not, it shows "Data currently unavailable" instead of the generic response (see the validation sketch after this list).
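
Here is roughly what the routing middleware looked like (a minimal TypeScript sketch; `classifyQuery`, `runDataFunction`, and `callAgent` are hypothetical stand-ins for the app's own pieces, not Base 44 APIs):

```typescript
type QueryKind = "state" | "general";

// Naive keyword classifier; a production version would be more robust.
function classifyQuery(query: string): QueryKind {
  const stateHints = /how many|currently|right now|status|connected|failed|usage/i;
  return stateHints.test(query) ? "state" : "general";
}

async function handleQuery(
  query: string,
  runDataFunction: (query: string) => Promise<unknown>,
  callAgent: (query: string, data?: unknown) => Promise<string>,
): Promise<string> {
  if (classifyQuery(query) === "state") {
    // Force the data function to run first, then hand the result to the
    // agent so it can only answer from real data.
    const data = await runDataFunction(query);
    return callAgent(query, data);
  }
  return callAgent(query);
}
```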
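
And the frontend validation layer, in the same spirit (again a sketch; `AgentResponse` and its `toolCalls` field are assumptions about what the chat interface can observe):

```typescript
// Assumed shape of what the chat interface can see about a reply.
interface AgentResponse {
  text: string;
  toolCalls: string[]; // names of data functions the agent actually invoked
}

// If a state query produced no tool calls, refuse to show the generic answer.
function validateStateResponse(
  isStateQuery: boolean,
  response: AgentResponse,
): string {
  if (isStateQuery && response.toolCalls.length === 0) {
    return "Data currently unavailable";
  }
  return response.text;
}
```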

However, the AI still responded with the same generalized or vague replies. Updating agent instructions alone is not sufficient because instructions can influence behavior but do not enforce execution of data tools; the platform currently does not support a required or mandatory tool mode that forces a function call before a response is generated; and there is no built-in response validation that blocks or rejects non-data answers for state-based queries. As a result, even correctly defined data functions can be bypassed, leading to generic responses where real, live data is expected.

Src Ticket: #698841c2


Status: In Review
Board: 💡 Feature Request
Date: 6 days ago
Author: Jordan Young
