I began using the platform on a Builder subscription, but within the first day I needed to upgrade to Pro, and shortly afterwards to the Elite plan, in order to continue developing my application.
Within my first week I had exhausted approximately 95% of my monthly message credits. This was not due to building new functionality, but primarily to repeatedly fixing regressions introduced by the platform’s AI development agent.
As a result, I now have no option but to stop development for the next three weeks until the subscription renews, because there is currently no way to purchase additional credits.
The core issue appears to be a lack of safeguards in how the AI development agent operates. It can make changes without assessing their impact on existing functionality, without any protection to stop previously working features from breaking, and without a reliable way to roll back to a stable version.
This has created a development cycle where previously resolved issues reappear and must be corrected again, consuming credits each time. Based on my experience, this appears to be a structural limitation in the current development workflow rather than a one-off issue.
During development, the in-app AI development agent repeatedly introduced bugs into parts of the application that had previously been working correctly.
Even when the agent acknowledged the mistake and corrected it, similar issues would often reappear in later changes.
At present there appears to be no mechanism to:
Evaluate the impact of a change before it is applied
Protect existing working functionality
Reliably restore a known working version
Without these safeguards, development can easily fall into a regression loop, where the same problems repeatedly return and require additional prompts to resolve.
Each iteration consumes credits and slows development progress.
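For illustration, the kind of safeguard described above is straightforward to sketch. The function names and workflow below are my own assumptions for the example, not anything the platform actually exposes: snapshot the project, apply the change, run a check, and restore the snapshot if the check fails.

```python
import os
import shutil
import tempfile

def apply_change_safely(project_dir, apply_change, run_tests):
    """Apply a change only if a verification step still passes afterwards.

    `apply_change` and `run_tests` are callables supplied by the caller.
    This is an illustrative sketch of a regression guard, not a real
    platform workflow.
    """
    # Snapshot the project so a regression can be rolled back.
    backup_root = tempfile.mkdtemp(prefix="snapshot-")
    backup_dir = os.path.join(backup_root, "project")
    shutil.copytree(project_dir, backup_dir)

    apply_change(project_dir)

    if run_tests(project_dir):
        shutil.rmtree(backup_root)  # change accepted; discard the snapshot
        return True

    # Regression detected: restore the known-good version.
    shutil.rmtree(project_dir)
    shutil.copytree(backup_dir, project_dir)
    shutil.rmtree(backup_root)
    return False
```

A guard like this would catch a regression at the moment it is introduced, instead of leaving the user to discover it later and spend credits diagnosing it.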
Because these regressions required repeated investigation and correction, the majority of my credits were consumed debugging issues introduced by the agent itself, rather than building new features.
Currently there appears to be no way for users to:
Stop an agent caught in a regression cycle
Recover credits spent resolving platform-introduced problems
Purchase additional credits during the current billing cycle
The platform interface also displays a “Purchase more credits →” prompt when credits are exhausted, even though purchasing additional credits is not currently available. This creates a confusing user experience and suggests functionality that does not actually exist.
Interestingly, some of the more advanced AI enrichment features worked extremely well and delivered fast, useful results.
However, the most basic data operations proved unreliable. In particular:
File imports sometimes required multiple attempts to complete successfully
Database updates were inconsistent
Long-running background tasks did not appear to complete or fail cleanly
The job queue system did not appear to provide clear retry, timeout, or failure behavior
These types of operations are generally well understood in software engineering, so their instability creates a significant reliability concern for anyone trying to build a production-quality application.
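To show how well-trodden this ground is, here is a minimal sketch of explicit retry, timeout, and failure semantics for a background job. All names are illustrative and the timeout is cooperative (the job checks its own deadline); this is not the platform's actual job queue API.

```python
import time

class JobFailed(Exception):
    """Raised when a job exhausts its retries: failure is explicit, never silent."""

def run_with_retries(job, max_attempts=3, timeout_s=30.0, backoff_s=1.0):
    """Run `job` with explicit retry, timeout, and failure behavior.

    `job` is any callable taking a monotonic deadline; it is expected to
    stop work once the deadline passes.
    """
    for attempt in range(1, max_attempts + 1):
        deadline = time.monotonic() + timeout_s
        try:
            return job(deadline)              # the job either succeeds...
        except Exception as exc:
            if attempt == max_attempts:       # ...or fails loudly after N tries
                raise JobFailed(f"gave up after {attempt} attempts") from exc
            time.sleep(backoff_s * attempt)   # linear backoff between retries
```

The point is not this particular implementation, but that retries, timeouts, and clean failure are a solved pattern that a job queue should surface to the user.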
From my perspective, many of these issues stem from the absence of a structured framework governing how the AI agent performs changes to an application.
I believe both the AI agents and the Base44 organisation would benefit from implementing a recognized service management framework such as ITIL (other frameworks are available). Even a lightweight approach covering Incident Management, Problem Management, and Change Management could significantly reduce the types of issues described here.
For example, the current process appears to lack:
Impact assessment before applying changes
Safeguards to prevent modifications during active incidents
Reliable rollback capability
Clear change tracking showing what was modified and when
Without these controls, the AI agent effectively has unrestricted ability to modify application behavior without the safeguards typically present in modern development environments.
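The "clear change tracking" control above can be as simple as a log entry per change recording what was modified, when, and which snapshot to restore. The class and field names below are my own assumptions for the sketch, not the platform's schema:

```python
import time

class ChangeLog:
    """Minimal change tracking: each change records what, when, and how to undo it.

    Illustrative only; the field names are assumptions, not a real schema.
    """
    def __init__(self):
        self.entries = []

    def record(self, description, files_touched, rollback_ref):
        self.entries.append({
            "description": description,
            "files": files_touched,
            "rollback_ref": rollback_ref,  # e.g. a snapshot id or commit hash
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        })

    def latest_rollback_ref(self):
        # The most recent entry points at the state to restore if it broke something.
        return self.entries[-1]["rollback_ref"] if self.entries else None
```

With a record like this, "roll back to the version before the agent broke the import" becomes a single, cheap operation rather than a string of corrective prompts.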
Another difficulty I encountered is that, although the platform is presented as a no-code solution, I have repeatedly been asked to perform tasks that are typically the responsibility of technical specialists.
This has included being asked to:
Investigate code-level issues
Diagnose system behavior
Identify technical faults
Provide detailed explanations of what may be happening internally
In practice, this places the user in the role of developer, systems analyst, and technical investigator, which is very different from the expectation created by the platform’s messaging that coding knowledge is not required.
What I am not looking for:
Sympathy or reassurance
Temporary workarounds
General assurances that the issue will not happen again
What I am asking for:
Credit restoration: Credits that were consumed fixing agent-introduced regressions should be restored.
Migration support: Assistance moving the project to a clean environment without consuming additional credits.
Platform acknowledgement: Confirmation that the regression loop and reliability issues described above are recognized and being addressed.
User interface correction: The “Purchase credits” prompt should not be shown if additional credits cannot actually be purchased.
I have invested a considerable amount of time working through these issues and reporting them constructively. My intention in posting this is not simply to complain, but to highlight platform-level reliability concerns that may affect other users building applications on the platform.
The platform clearly has significant potential. However, improving reliability safeguards, development controls, and support structures would greatly improve the experience for users trying to build real applications.
In Review
Feature Request
4 days ago

Chip