
# Structured code reviews for vibe-coding are a must.
Why ad-hoc coding isn’t enough when you’re moving fast with AI support. In this post I share a way to run structured code reviews to keep projects stable, trackable, and ready to scale.

Cory Bagozzi
As I've been digging deeper into AI-supported coding, I've added a recurring step to my workflow: a structured code review. Every so often, I run it to make sure things stay solid as I keep shipping new features.
Right now I'm building an EdTech app that lets teachers check in on their students' moods and provides an easy way to share resources with them. While hacking on that, I've been iterating on a review prompt that's turned out to be really effective, especially when run through ChatGPT-5-high or Claude-4-Sonnet (Thinking).
The prompt walks the model through a full review of the repo, then writes everything out into a markdown file under docs/review/ so I can track feedback and recommendations over time.
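If you'd rather script the run than paste the prompt into a chat, a minimal sketch might look like the one below. It assumes the official openai Node SDK, a saved copy of the prompt at a hypothetical docs/review/prompt.md, and a placeholder model name; how you feed the actual repo context to the call (or whether you run it inside a coding agent that already has it) is up to you.

```ts
// run-review.ts: rough sketch of scripting the review call (paths and model are placeholders).
import fs from "node:fs";
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function runReview(promptPath: string, outPath: string) {
  const prompt = fs.readFileSync(promptPath, "utf8");

  const response = await client.chat.completions.create({
    model: "gpt-5", // placeholder: swap in whichever model you actually run
    messages: [{ role: "user", content: prompt }],
  });

  fs.writeFileSync(outPath, response.choices[0].message.content ?? "");
  console.log(`Review written to ${outPath}`);
}

// Hypothetical paths: adjust to wherever you keep the prompt and want the report.
runReview("docs/review/prompt.md", "docs/review/latest_code_review.md").catch(console.error);
```

Scripting it like this also makes it easy to drop the review into CI or a pre-release checklist instead of relying on memory.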
Here's the latest version. Replace the specifics at the top with your own setup and give it a run.
## Code Repository Review Prompt
# Fill these before running
Industry: EdTech
Regulations: FERPA, COPPA, GDPR
Frontend: Nuxt 3 + NuxtUI Pro
Backend: Supabase [Auth, DB, Edge Functions, Storage]
Hosting and Deploy: Vercel
Database: Postgres on Supabase
Data classes: [PII, student data, analytics, logs]
Observability: [logging, tracing, metrics, dashboards]
Accessibility: WCAG 2.1 AA
---
**Responsibilities**
1. **Code Quality**
- Evaluate clarity, maintainability, modularity, and adherence to clean code principles.
- Identify duplication, anti-patterns, and refactoring opportunities.
2. **Scale and Performance**
- Assess readiness for large user bases on Vercel and Supabase.
- Call out bottlenecks and propose optimizations across API, DB, and CDN.
3. **Stability and Security**
- Identify outage risks and inconsistent behavior.
- Evaluate security practices: authentication, authorization, data protection, encryption, input validation.
- Flag vulnerabilities, insecure dependencies, or secrets handling issues.
4. **Regulatory Compliance**
- Assess alignment with the regulations and compliance requirements listed above.
- Call out storage, logging, or access patterns that create compliance risk.
5. **Recommendations**
- Provide specific steps to improve maintainability, scalability, security, and compliance.
- Suggest modern best practices, patterns, or frameworks where appropriate.
---
**Output Format**
1. **Summary of Findings**
- High-level issues with severity and impact scoring.
2. **Risk Table**
- Fields: id, title, severity (P0–P3), impact (high|med|low), effort (high|med|low), priority rank, evidence (file path + line range).
3. **Detailed Review**
- Organized by the categories above.
- Anchor feedback to code locations when possible.
4. **Actionable Recommendations**
- Quick Wins: low effort, immediate impact.
- Strategic Work: longer-term investments.
- Include acceptance criteria for each.
5. **30, 60, 90 Day Plan**
- Sequenced execution plan.
6. **Machine-Readable JSON Report**
```json
{
  "summary": "",
  "risks": [
    {
      "id": "",
      "title": "",
      "severity": "P0|P1|P2|P3",
      "impact": "high|med|low",
      "effort": "high|med|low",
      "priority": 1,
      "evidence": [{"path": "", "lines": "L45-L78"}]
    }
  ],
  "recommendations": {
    "quick_wins": [{"title": "", "acceptance_criteria": ""}],
    "strategic": [{"title": "", "acceptance_criteria": ""}]
  }
}
```
7. **File Output**
- Write the full human-readable report to a new markdown file at: docs/review/{timestamp}_code_review.md
- Use a filename-safe ISO 8601 format for {timestamp} (e.g., 2025-08-22T14-30-00_code_review.md).
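The timestamp in that last step is a filename-safe variant of ISO 8601, with the milliseconds dropped and the colons swapped for hyphens. If you ever need to generate it yourself rather than leaving it to the model, a one-liner like this works (plain Node/TypeScript, just for illustration):

```ts
// Filename-safe ISO 8601 timestamp: drop milliseconds and swap colons for hyphens.
const timestamp = new Date().toISOString().slice(0, 19).replace(/:/g, "-");
const filename = `docs/review/${timestamp}_code_review.md`;
// e.g. docs/review/2025-08-22T14-30-00_code_review.md
```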