Learn how to fine-tune a reasoning model that performs chain-of-thought reasoning on user queries. We will use Camel-AI to build a chain-of-thought dataset and Unsloth to fine-tune a small Qwen model on that custom dataset.
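The workflow in a nutshell: generate question / reasoning / answer triples, then format each record into a single training string for supervised fine-tuning. A minimal sketch (the field names and the `<think>` wrapper are illustrative assumptions, not the exact schema from the notebook):

```python
# Sketch: turn one chain-of-thought record into an SFT training string.
# Field names ("question", "reasoning", "answer") and the <think> tags
# are assumptions for illustration, not the exact dataset schema.

def format_cot_example(record: dict) -> str:
    """Wrap the reasoning trace in <think> tags so the model learns to
    emit its chain of thought before giving the final answer."""
    return (
        f"### Question:\n{record['question']}\n\n"
        f"### Response:\n<think>\n{record['reasoning']}\n</think>\n"
        f"{record['answer']}"
    )

sample = {
    "question": "What is 17 * 3?",
    "reasoning": "17 * 3 = (10 * 3) + (7 * 3) = 30 + 21 = 51.",
    "answer": "51",
}
print(format_cot_example(sample))
```

At training time a function like this would be mapped over the whole dataset before handing the texts to the trainer.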
#finetuning #unsloth #chainofthought
LINKS:
Notebook: https://tinyurl.com/5n6nrreu
dataset used: https://huggingface.co/datasets/zjrwt...
Unsloth website: https://unsloth.ai/
Camel-AI: https://www.camel-ai.org/
💻 RAG Beyond Basics Course:
https://prompt-s-site.thinkific.com/c...
Let's Connect:
🦾 Discord: / discord
☕ Buy me a Coffee: https://ko-fi.com/promptengineering
🔴 Patreon: / promptengineering
💼Consulting: https://calendly.com/engineerprompt/c...
📧 Business Contact: engineerprompt@gmail.com
Become Member: http://tinyurl.com/y5h28s6h
💻 Pre-configured localGPT VM: https://bit.ly/localGPT (use Code: PromptEngineering for 50% off).
Sign up for the newsletter (localGPT):
https://tally.so/r/3y9bb0
00:00 Introduction to CoT fine-tuning
01:43 CoT dataset generation with Camel-AI
07:06 Fine-tuning with Unsloth
13:44 How the model performs
All Interesting Videos:
Everything LangChain: • LangChain
Everything LLM: • Large Language Models
Everything Midjourney: • MidJourney Tutorials
AI Image Generation: • AI Image Generation Tutorials