https://gitnation.com/contents/vibe-coding-at-enterprise-scale-what-happens-when-ai-joins-your-dev-…
Wesley talked about how he put AI to the test by having two teams compete against each other: Team 1 (the DIY Team) had three engineers without AI, while Team 2 (the AI Team) had two engineers with AI. It was a highly structured test, with time tracking, spreadsheets, documentation, and so on.
- The AI Team was on average 45% faster than the DIY Team
- AI is language agnostic: a developer with Python knowledge could deliver a TypeScript project on the AI Team
- Quality met expectations, verified with Sonar analysis
But that’s not all: the most important point is that engineers must know what to do and what the output should look like. It is not just a prompt saying “build this”; it has to be specific. **AI is not a replacement for architectural clarity or engineering expertise**, but a multiplier of it.
Wesley recommended the following points:
- work with atomic tasks, kept as small as possible
- refine each task into a prompt
- be crystal clear in your prompts and instructions
- one objective per task, with a clear output
- split features into smaller deliverables
- validate everything
- give feedback and iterate with the AI
- use markdown files for rules and plans
- have the AI work through a TODO list
- git commit after each successful task, so you stay in control and don’t lose work to later hallucinations
So work in smaller units and commit often.
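As an illustration, an atomic task refined into a prompt might look like the fragment below. The task, file names, and helper are invented for this example, not taken from the talk:

```markdown
## Task: add e-mail validation to the signup form

Objective (one only): reject invalid e-mail addresses in `SignupForm.tsx`.

Expected output:
- a `validateEmail(input: string): boolean` helper with unit tests
- the submit button disabled while the field is invalid

Out of scope: styling changes, other form fields.
```

One objective, a concrete expected output, and an explicit out-of-scope line keep the AI from wandering beyond the task.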
He also recommends giving the AI rich context:
- optimise context to reduce hallucination
- use instruction files and memory banks for persistence
- guide the AI with specs, mocks, and so on
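For example, a persistent instruction file (the file name and rules are assumptions for illustration) can encode project conventions the AI re-reads at the start of every session:

```markdown
# project-rules.md — read before every task

- Stack: TypeScript, React, Vitest; no new dependencies without approval.
- Follow the API spec in `docs/openapi.yaml`; use the mocks in `mocks/` for test data.
- Touch only the files named in the current task; update `TODO.md` when a task is done.
```

Because the file lives in the repo, the same rules survive across sessions instead of being re-prompted each time.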
So the recommended loop is:
AI works on a task → Human reviews → Human manual tests → Automatic validation → git commit → repeat
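The final gate of that loop can be sketched in shell. This is a minimal sketch, not the speaker's tooling: `validate` stands in for the project's real automated checks, and the example runs in a throwaway repo so it is self-contained.

```shell
# Work in a throwaway repo so the sketch has no side effects.
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

validate() { test -s result.txt; }   # stand-in for the real validation suite

echo "task output" > result.txt      # the AI's change, after human review
if validate; then
  git add -A
  git commit -q -m "task: produce result.txt"   # one commit per successful task
else
  echo "validation failed; iterate with the AI before committing"
fi

git rev-list --count HEAD
```

Committing only after validation passes is what makes the loop safe: if a later task hallucinates, `git revert` or `git reset` returns you to the last known-good state.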