@chapterme

Chapters (Powered by ChapterMe) - 
0:00 Intro
0:54 Some vibe coding tips from YC X25 founders
4:12 First, pick your tools and make a plan 
6:27 Use version control 
7:23 Write tests 
8:14 Remember, LLMs aren't just for coding
8:57 Bug fixes
11:10 Documentation
12:03 Functionality 
13:20 Choose the correct stack
15:08 Refactor frequently
15:40 Keep experimenting!
16:21 Outro

@mihirsinhparmar255

The best way I have found to solve bugs is to prompt:

Conduct a root cause analysis for the above error. Create hypotheses, validate them against facts (the current codebase and official documentation), and repeat until you find the root cause. Neither assume nor presume: verify and validate every assumption.

@billvivino

I have so many new clients with broken apps because they were vibe-coded! Thanks for promoting it! Keep going.

@b2brish

This was a super practical and motivating guide. Thank you! I love how you broke down vibe coding into actionable steps, especially the tips on planning with the LLM, using version control, and writing high-level tests. The advice on switching models and keeping code modular is spot-on. Excited to experiment more and see how fast these tools keep evolving!

@constantinelinardakis8394

5:00 work with the LLM to write a comprehensive plan
6:30 use version control
7:25 write high-level tests
8:58 bug fixes (copy-paste the error message)
10:12 add logging, and switch models if it can't fix the bugs
10:45 write instructions for the LLM
11:09 documentation
11:40 use the LLM as a teacher
12:04 functionality (use a separate codebase) *key here
13:20 right tech stack
14:20 use screenshots
15:10 refactor frequently

@Pastacheese

I always write a PRD (Product Requirements Document) with everything I need from the LLM to code. Then I make the changes in the PRD and implement it in the IDE.
I have improved a lot of aspects of my code structure, and the LLM knows exactly what I'm talking about.

@s-code-b

This is one of the most USEFUL soft-dev vids I've ever seen. 🙏

@amirnathoo4600

Working with the LLM to create an action/project plan is actually what I do as well. I am working on an open-source project, and having and maintaining that document is what I have found works best. It helps both me and the LLM.
And it's a living document which I ask the LLM to update periodically, so it's always part of the context.

What I also do (with Cursor) is create rules that establish a very clear workflow when starting a task. For example, having the LLM summarize the chosen approach to solving a problem and telling it to wait for approval before jumping into code. This helps me stay very focused and be prepared for what the LLM is going to do.
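A rules file implementing the summarize-then-wait workflow described above might look something like this. The filename and wording are purely illustrative, not the commenter's actual setup:

```
# .cursorrules (illustrative sketch)
Before writing any code for a task:
1. Summarize the approach you plan to take in a few bullet points.
2. List the files you expect to change.
3. Wait for my explicit approval before editing anything.
After each completed step, update PLAN.md so the plan stays part of the context.
```

Because the rules load with every request, the agent pauses for approval by default instead of needing the reminder in each prompt.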

And as always the correct attitude when using an AI code assistant should be to “trust, but verify” so you don’t end up shipping something broken in production.

@joshuacj6603

Great advice. I also noticed that telling the agent what has worked, so it keeps that in memory, is a good idea: what is working, and what we are improving or adding, so it doesn't try to reinvent the wheel and take you two steps backwards.

@kamalaman6237

In the "Fix Bug" section, Cursor and Windsurf can already tail your test logs and react to them without the need for copy-pasting. Just tell your agent to run the test and enter the agentic development workflow. It's awesome!!
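Even without an agent that tails logs, the same idea reduces to one command whose full output the agent can read directly. A minimal sketch using Python's stdlib `unittest`; the file and test names here are made up for illustration:

```shell
# Create a tiny test file (illustrative), then run it with verbose output.
cat > test_math.py <<'EOF'
import unittest

class TestMath(unittest.TestCase):
    def test_add(self):
        # A trivial assertion standing in for a real project test.
        self.assertEqual(1 + 1, 2)

if __name__ == "__main__":
    unittest.main()
EOF

# One command for the agent to run; failures appear in the same stream.
python3 -m unittest -v test_math 2>&1 | tail -3
```

Pointing the agent at a single command like this keeps the run-fix-rerun loop inside the agent instead of in your clipboard.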

@joshualana

People often ask me how I cope with LLM threads when they run out of memory. My simple response: before I start coding, I tell the AI to come up with a comprehensive plan for the project. That lets me refer the AI back to the plan.

@zacharycutler8138

When providing context in an initial prompt, be sure to include the real-world/business context of what you are building. Inform the LLM of your project's CVP (customer value proposition). I found this invaluable both in helping to steer technical implementation and in bringing to light potential tradeoffs between technical and substantive (e.g., scientific, UX, business model) concerns.

@wojciechzdzarski1838

Protip from me: adding "ask me questions in case of ambiguity" after each prompt can save hours :)

@ThomasDeGan

You absolutely have to double check what it's generating. I commit code after every successful generation so I can easily revert everything back when the AI starts to really hallucinate. Definitely don't be afraid to call it out. I have responded with "Why are you doing X? It seems like that's not the right way to implement the change"   -- It generally responds with "Yes, you're right! I should have done it this way."

@mrbjjackson

Great list. I'd love to hear more about the testing part of this process. I think that's one area my vibe coding is lacking. If you are looking for follow up ideas please consider this.

@MichaelG68581

Great tips. Another one is to make sure you resolve all linter warnings/errors. Cursor has the LLM read those, and it will get very bothered even by minor things and can get into a bit of a loop trying to fix them. Depending on the framework you're using, you may need to install a VS Code extension that makes the linter smarter for that framework.

@srs.shashank

I found the following points insightful: 1. writing test cases and then asking the LLM to generate code around them, which could be a good guardrail; 2. using git reset and starting over to avoid patching layers of poorly written code.

Some tips based on my experience using LLMs while coding:
+ Counter-question the LLM if your instinct says a generated response/code snippet isn't correct. More often than not, human instinct is right.
+ Don't directly ask the LLM to implement/generate code for what you want. Instead, ask for a similar product, see what it generates, ponder whether it makes sense, refine it, and then generate for the actual use case. This step acts as a barrier against impulsively copy-pasting code, reducing layers of badly implemented code.
+ When using a new library, or when pasting any error to the LLM, it is useful to mention which library version you use, since different versions could need a different fix.
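The version tip in the list above can be one paste-ready command. A sketch, assuming `python3` and `pip` are on the PATH (the `head -5` cutoff is arbitrary; paste whichever lines are relevant to the error):

```shell
# Capture the interpreter version plus installed package versions,
# to paste alongside the error message you give the LLM.
python3 -c 'import sys; print("python", sys.version.split()[0])'
python3 -m pip freeze 2>/dev/null | head -5
```

With exact versions in the prompt, the model is less likely to suggest a fix that only applies to a newer or older release of the library.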

@katsup_07

Imagine someone who's never baked before decides to make a fancy birthday cake. They don’t follow a recipe — they just throw in some flour (“that feels like enough”), eggs (“two seems good”), sugar (“I like it sweet”), and some colorful sprinkles on top because it “looks fun.” They don’t measure anything, and they don’t preheat the oven — they just go with the vibe. Surprisingly, the cake bakes, and it even tastes okay! So, feeling confident, they decide to bake 100 more for a big event — all using the same guesswork method.

But then the problems start: Some cakes come out raw inside. Others are dry or collapse in the middle. People get sick because the eggs weren’t cooked properly. What looked good in a small, casual setting turned into a disaster when scaled up — all because they didn’t understand the basics of baking or follow a tested method.

When you’re experimenting for fun, “vibe baking” is harmless. But when others rely on your results — like customers, users, or teammates — you need skills, structure, and testing. The same applies to software engineering. The code needs to be carefully checked and understood to avoid major problems as it scales. AI can generate code and significantly speed up workflows, but vibe-driven coding is not sustainable for serious projects. Current AI is still too limited and error-prone to build without careful instructions and review.

@JohnalExpedition

Thank god I found this channel! I am currently building a personal finance helper app with zero knowledge of coding. It's pretty sick that we can do this now!

@ralphmoreau2768

This was a great video. Really appreciate the Aqua suggestion; it's really cool to see where voice UI is going.