@chapterme

Chapters (Powered by ChapterMe) - 
0:00 Intro
0:54 Some vibe coding tips from YC X25 founders
4:12 First, pick your tools and make a plan 
6:27 Use version control 
7:23 Write tests 
8:14 Remember, LLMs aren't just for coding
8:57 Bug fixes
11:10 Documentation
12:03 Functionality 
13:20 Choose the correct stack
15:08 Refactor frequently
15:40 Keep experimenting!
16:21 Outro

@mihirsinhparmar255

Best way to solve bugs I have found is:

Conduct a root cause analysis for the above error. Create hypotheses, validate them against facts (the current codebase and official documentation), and repeat until you find the root cause. Neither assume nor presume; verify and validate every assumption.

@constantinelinardakis8394

5:00 work with the LLM to write a comprehensive plan
6:30 use version control
7:25 write high-level tests
8:58 bug fixes (copy-paste the error message)
10:12 add logging and switch models if it can't fix the bugs
10:45 write instructions for the LLM
11:09 documentation
11:40 use the LLM as a teacher
12:04 functionality (use a separate codebase) * key here
13:20 right tech stack
14:20 use screenshots
15:10 refactor frequently

@Pastacheese

I always write a PRD (Product Requirements Document) with everything I need from the LLM to code. Then I make the changes on the PRD first and implement them in the IDE.
I have improved a lot of aspects of my code structure, and the LLM knows exactly what I'm talking about.
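
A rough skeleton of such a PRD, as one possible illustration (the section names here are mine, not the commenter's):

    Product Requirements Document - <feature name>
    1. Problem / goal: what the user needs and why.
    2. Scope: what is in and what is explicitly out of this change.
    3. User stories / flows: the step-by-step behaviour expected.
    4. Technical constraints: stack, existing modules to reuse, APIs to call.
    5. Acceptance criteria: concrete checks the generated code must pass.

Revisions happen in this document first; only the approved version gets pasted into the IDE prompt.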

@IkedaBC

Never heard anyone refer to Hallucinations as vision quests 😂

@wojciechzdzarski1838

Protip from me - adding "ask me questions in case of ambiguity" after each prompt can save hours :)

@billvivino

I have so many new clients whose apps are broken because they were vibe-coded! Thanks for promoting it! Keep going.

@joshualana

People often ask me how I cope with LLM threads when they run out of memory. My simple response: before I start coding, I tell the AI to produce a comprehensive plan for the project, which lets me refer the AI back to the plan later.

@katsup_07

Imagine someone who's never baked before decides to make a fancy birthday cake. They don’t follow a recipe — they just throw in some flour (“that feels like enough”), eggs (“two seems good”), sugar (“I like it sweet”), and some colorful sprinkles on top because it “looks fun.” They don’t measure anything, and they don’t preheat the oven — they just go with the vibe. Surprisingly, the cake bakes, and it even tastes okay! So, feeling confident, they decide to bake 100 more for a big event — all using the same guesswork method.

But then the problems start: Some cakes come out raw inside. Others are dry or collapse in the middle. People get sick because the eggs weren’t cooked properly. What looked good in a small, casual setting turned into a disaster when scaled up — all because they didn’t understand the basics of baking or follow a tested method.

When you’re experimenting for fun, “vibe baking” is harmless. But when others rely on your results — like customers, users, or teammates — you need skills, structure, and testing. The same applies to software engineering. The code needs to be carefully checked and understood to avoid major problems as it scales. AI can generate code and significantly speed up workflows, but vibe-driven coding is not sustainable for serious projects. Current AI is still too limited and error-prone to build without careful instructions and review.

@amirnathoo4600

Working with the LLM to create an action/project plan is actually what I do as well. I am working on an open source project, and having and maintaining that document is what I've found works best. It helps both me and the LLM.
And it's a living document which I ask the LLM to update periodically so it's always part of the context.

What I also do (with Cursor) is create rules that establish a very clear workflow when starting a task. For example, having the LLM summarize the chosen approach to solving a problem and telling it to wait for approval before jumping into code (a rough sketch of such a rules file follows this comment). This helps me stay very focused and be prepared for what the LLM is going to do.

And as always the correct attitude when using an AI code assistant should be to “trust, but verify” so you don’t end up shipping something broken in production.
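
A minimal sketch of what such a rules file could look like (the wording and the PLAN.md name are illustrative, not the commenter's; Cursor reads project rules from a .cursorrules file or the .cursor/rules directory):

    Before writing any code for a new task:
    1. Restate the task in your own words.
    2. Summarize your chosen approach and list the files you expect to touch.
    3. Ask clarifying questions if anything is ambiguous.
    4. Wait for my explicit approval before editing any code.
    After each completed task, update PLAN.md so it stays current.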

@ThomasDeGan

You absolutely have to double check what it's generating. I commit code after every successful generation so I can easily revert everything back when the AI starts to really hallucinate. Definitely don't be afraid to call it out. I have responded with "Why are you doing X? It seems like that's not the right way to implement the change" -- it generally responds with "Yes, you're right! I should have done it this way."
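
A minimal sketch of that commit-after-every-generation loop in plain git (the commit message and placeholder SHA are just examples):

    git add -A
    git commit -m "AI: add pagination to orders list"   # snapshot after a good generation
    # ...a later generation goes off the rails...
    git reset --hard HEAD        # drop the uncommitted AI changes in tracked files
    git revert <bad-commit-sha>  # or undo a change that was already committed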

@kamalaman6237

In the "Fix Bug" section, Cursor and Windsurf can already tail your test logs and react to them without the need for copy-pasting. Just tell your agent to run the test and enter the agentic development workflow. It's awesome!!

@zacharycutler8138

When providing context in an initial prompt, be sure to include the real-world/business context of what you are building. Inform the LLM of your project's CVP (customer value proposition). I found this invaluable both in helping to steer the technical implementation and in bringing to light potential tradeoffs between technical and substantive (e.g., scientific, UX, business model) concerns.

@s-code-b

This is one of the most USEFUL soft-dev vids I've ever seen. 🙏

@josedominguez8144

For larger projects with more people working on them, code review is fundamental to keeping things consistent and making sure we aren't breaking anything.

@MikeO89

Looking forward to the VULNs and CVEs coming out the next YC batches! 😂

@PKAnane

To help the LLM identify UI functionality, in addition to giving it screenshots, I have set up Playwright and I tell it to use Playwright to run the tests.
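
A minimal sketch of the kind of Playwright test the agent can be told to run (the URL, selector names, and copy here are hypothetical):

    import { test, expect } from '@playwright/test';

    test('signup button opens the signup form', async ({ page }) => {
      // assumes the dev server is already running locally
      await page.goto('http://localhost:3000');
      await page.getByRole('button', { name: 'Sign up' }).click();
      await expect(page.getByRole('heading', { name: 'Create your account' })).toBeVisible();
    });

Run it with npx playwright test and let the agent read the failures instead of guessing from screenshots alone.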

@j--__--p-e5x

I usually use Lovable or something similar to quickly set up a very nice-looking UI, then merge it into Cursor to be able to go more into the details and develop the complex backend parts and additional features.

Usually I have struggled both to spin up a nice-looking UI straight off the bat and to get authentication working directly from Cursor.

The above typically fixes both.

@JordiB.E.

Great video, thanks for uploading and for the great info.
One tip I would add is to try out aider. It's like an open-source Cursor. I'm enjoying programming with aider.
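
For anyone curious, a minimal sketch of getting started with it (assuming the usual pip install; the model name and file are only examples):

    pip install aider-chat
    cd your-repo
    aider --model gpt-4o app.py   # opens a chat session with app.py in context

aider works directly against your git repo and commits its own changes, which fits nicely with the version-control advice earlier in the video.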

@MichaelG68581

Great tips. Another one is to make sure you resolve all linter warnings/errors: Cursor has the LLM read those, and it will get very bothered even by minor things and get into a bit of a loop trying to fix them. Depending on the framework you're using, you may need to install a VS Code extension that makes the linter smarter for that framework.