
Speed is the primary promise of AI development. Tools like Lovable allow founders to ship functional applications in record time. However, speed often comes at the cost of security.
In a recent internal demonstration, our team ran security checks on a job board application built entirely with AI. Despite looking fully functional, the application had critical vulnerabilities. Within seconds, we were able to exploit two of them, including a complete bypass of the admin approval process (detailed below).
Most concerning was that the platform's built-in scanner failed to detect either issue.
This highlights a fundamental reality of AI development: AI agents are designed to build, not secure. When you rely on the same AI to write and check its own code, you introduce a "blind spot."
This guide outlines our internal framework for hardening AI-generated applications using an external Security Prompt workflow.
Watch the video guide to learn the right way to audit your code using GitHub.
To properly secure an app built with Lovable (or any AI builder), you must decouple the builder from the security checker. An external Large Language Model (LLM) with fresh context is required to execute these prompts effectively.
A common mistake is copying and pasting individual files for checks. For a comprehensive review, the external AI needs access to the full repository context.
Generic instructions like "check this for bugs" are insufficient. We utilize specific Security Prompts designed to identify distinct vulnerability classes, such as Cross-Site Scripting (XSS) or insecure API endpoints.
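For example, rather than "check this for bugs," a targeted prompt might read: "Review every place this application renders user-supplied content and flag any output that is not escaped or sanitized, listing the file and component for each potential XSS vector."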
Crucial Rule: Do not dump the entire security report back into the building agent.
AI models degrade in performance when overloaded with context.
Simple code scans often miss Row Level Security (RLS) vulnerabilities. This was the specific flaw that allowed us to bypass the admin approval process in our demo. The code was syntactically correct, but the logic was flawed.
To catch this, we use a two-step Security Prompt sequence:
1. The Authorization Rules Writer
First, we run a prompt that analyzes the application's intended logic and writes out plain-English rules.
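For the job board demo, that output might look something like: "Only admins can approve postings. Unapproved postings must never be visible to the public. Users can edit only their own listings."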
2. The RLS Policy Analyzer
Next, we run a prompt that compares those written rules against the actual database policies (e.g., in Supabase/PostgreSQL). This cross-reference highlights where the code fails to enforce the business logic.
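For illustration, here is a minimal sketch of what a correct policy might look like for the admin-approval rule from our demo. The table and column names (jobs, status, profiles.role) and the connection string are assumptions, not the demo's actual schema; in Supabase you could run the same SQL from the dashboard's SQL editor instead of psql.

# Sketch only: enforce "unapproved jobs are never publicly visible" at the database layer
psql "$DATABASE_URL" <<'SQL'
ALTER TABLE jobs ENABLE ROW LEVEL SECURITY;

-- The public (and regular users) can only read approved postings
CREATE POLICY "read approved jobs only"
  ON jobs FOR SELECT
  USING (status = 'approved');

-- Only admins can update a posting (including flipping its status to approved);
-- auth.uid() is Supabase's helper for the current user's ID
CREATE POLICY "admins update jobs"
  ON jobs FOR UPDATE
  USING (EXISTS (
    SELECT 1 FROM profiles
    WHERE profiles.id = auth.uid() AND profiles.role = 'admin'
  ));

-- A real application would also add policies letting owners and admins read unapproved drafts
SQL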
For teams without access to paid LLM subscriptions (like Claude Pro or ChatGPT Team), effective security checks are still possible using GitHub Codespaces and Google’s Gemini.
1. Initialize the Environment
Navigate to the GitHub repository and launch a "Codespace." This creates a cloud-based development environment directly in the browser.
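If you prefer the terminal, the GitHub CLI can launch the same environment. A minimal sketch, assuming gh is installed and authenticated (OWNER/REPO is a placeholder for your repository):

# Create a Codespace for the repository, then open a shell inside it
gh codespace create --repo OWNER/REPO
gh codespace ssh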
2. Install the Gemini CLI
Google’s Gemini model offers a high-performance free tier suitable for running security prompts. In the Codespace terminal, install the necessary tools:
npm install -g @google/gemini-cli
3. Authenticate and Secure
Authenticate using a personal Google account to access the free tier. Once logged in via the terminal, you can paste the security prompts directly into the command line. Gemini will process the codebase and produce analysis comparable to the paid alternatives.
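As a minimal sketch of that flow (the exact flag names may differ between CLI versions; your-repo is a placeholder):

# Codespaces opens the terminal at the repository root, so Gemini sees the full codebase
cd /workspaces/your-repo
# Start an interactive session and paste a Security Prompt at the prompt...
gemini
# ...or run a single targeted check non-interactively
gemini -p "List every API endpoint that modifies data without verifying the caller's role."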
We have compiled the full suite of 20+ Security Prompts used in our process, covering input validation, authentication logic, and spam prevention.
Building with AI does not eliminate the need for engineering discipline. If a team is "vibe coding" (building via prompts without security reviews), it is likely deploying vulnerabilities.
To ensure your application is production-ready, decouple the builder from the checker, give an external LLM the full repository context, run targeted Security Prompts, and verify that your database policies actually enforce your business logic.
We have probably built something similar before. Let us help you.