The Testing Loop
AI-generated code needs verification. Build this habit.
The Verification Flow
AI generates code
Get the initial implementation.
You read it (actually read it)
Don't just glance. Understand what it does.
Run it locally
npm run dev
Check edge cases
What happens with empty input? Invalid data? Network errors? (See the probe sketch after this flow.)
Run tests
npm run test
Build passes
npm run build
Only then commit
git add .
git commit -m "Add feature X"Test-Driven Vibecoding
Test-Driven Vibecoding
Even better: write tests first.
Step 1: "Write a test for a function that validates email addresses"
→ AI writes test
Step 2: "Now implement the function to make the test pass"
→ AI writes implementation
Step 3: Run test → verify it passes
Step 4: "Add edge case tests for: empty string, missing @, multiple @s"
→ AI adds more tests
Step 5: Iterate until solid
Tests give AI a clear target and catch regressions.
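As a concrete sketch of what steps 1 and 2 might produce (Vitest is an assumption here; any runner with an expect-style API reads the same):

// validateEmail.test.ts - written first (step 1), extended in step 4
import { describe, it, expect } from 'vitest';
import { validateEmail } from './validateEmail';

describe('validateEmail', () => {
  it('accepts a normal address', () => {
    expect(validateEmail('user@example.com')).toBe(true);
  });
  it('rejects an empty string', () => {
    expect(validateEmail('')).toBe(false);
  });
  it('rejects a missing @', () => {
    expect(validateEmail('user.example.com')).toBe(false);
  });
  it('rejects multiple @s', () => {
    expect(validateEmail('a@b@example.com')).toBe(false);
  });
});

// validateEmail.ts - the implementation (step 2)
export function validateEmail(input: string): boolean {
  // Exactly one @, non-empty local part, domain with at least one dot.
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input);
}

The order is the point: the test pins down behavior before the implementation exists, so regressions show up as red tests rather than surprises.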
Quick Sanity Checks
Before accepting AI code, verify:
- Does this actually solve my problem?
- Are there any obvious security issues?
- Does it handle errors?
- Is it consistent with the rest of the codebase?
- Would I be embarrassed to show this to a senior dev?
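Two of those checks are easier with concrete patterns in mind. An illustrative sketch of what an "obvious security issue" and an unhandled error look like (Db, db.query, and the names here are hypothetical):

type Db = { query: (sql: string, params?: unknown[]) => Promise<unknown> };

// Bad: string-built SQL lets user input rewrite the query (injection),
// and the empty catch swallows any failure silently.
function loadUserUnsafe(db: Db, userId: string) {
  return db.query(`SELECT * FROM users WHERE id = ${userId}`).catch(() => undefined);
}

// Better: a parameterized query keeps input as data, and errors
// propagate to a caller that can actually handle them.
function loadUser(db: Db, userId: string) {
  return db.query('SELECT * FROM users WHERE id = $1', [userId]);
}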
Build Before Commit (Always)
# Make this a habit - every single time
npm run build # or your build command
npm run test # run test suite
npm run lint # check for issues
# Only if all pass:
git add .
git commit -m "Add feature X"🚫
Never commit broken code. If build fails, fix it before committing.
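One way to make the habit mechanical is a git pre-commit hook. A minimal sketch, assuming your package.json defines these three scripts; save it as .git/hooks/pre-commit:

#!/bin/sh
# Run the full check before every commit; abort on any failure.
npm run build && npm run test && npm run lint || {
  echo "Checks failed - commit aborted."
  exit 1
}

Enable it with chmod +x .git/hooks/pre-commit; git commit --no-verify bypasses it in a genuine emergency.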
Common Test Prompts
"Write unit tests for the validateEmail function covering:
- Valid emails
- Empty string
- Missing @ symbol
- Missing domain
- Multiple @ symbols"

"Create integration tests for the /api/users endpoint:
- GET returns list of users
- POST creates new user
- POST with invalid data returns 400
- Unauthorized request returns 401"

"Add test coverage for error cases in the PaymentService"
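A sketch of what the integration-test prompt might come back with. Supertest plus Vitest and the ./app import are assumptions; adapt them to your framework and auth scheme:

// users.api.test.ts
import { describe, it, expect } from 'vitest';
import request from 'supertest';
import { app } from './app'; // hypothetical Express app export

describe('/api/users', () => {
  it('GET returns a list of users', async () => {
    const res = await request(app)
      .get('/api/users')
      .set('Authorization', 'Bearer test-token'); // stand-in auth
    expect(res.status).toBe(200);
    expect(Array.isArray(res.body)).toBe(true);
  });

  it('POST with invalid data returns 400', async () => {
    const res = await request(app)
      .post('/api/users')
      .set('Authorization', 'Bearer test-token')
      .send({ email: 'not-an-email' });
    expect(res.status).toBe(400);
  });

  it('unauthorized request returns 401', async () => {
    const res = await request(app).get('/api/users');
    expect(res.status).toBe(401);
  });
});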