Links to the book:
https://amzn.to/4fqvn0D (Amazon)
https://mng.bz/M96o (Manning)
Link to the GitHub repository: https://github.com/rasbt/LLMs-from-sc...
This is a supplementary video explaining how to instruction-finetune an LLM, covering sections 7.2 through 7.8 of the book.
00:00 7.2 Preparing a dataset for supervised instruction finetuning
15:37 7.3 Organizing data into training batches
39:17 7.4 Creating data loaders for an instruction dataset
46:44 7.5 Loading a pretrained LLM
54:25 7.6 Finetuning the LLM on instruction data
1:14:20 7.7 Extracting and saving responses
1:23:56 7.8 Evaluating the finetuned LLM
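For reference, the dataset-preparation step in section 7.2 formats each instruction–response entry into a single prompt string before tokenization. Below is a minimal sketch of an Alpaca-style template for this; the exact template wording, the helper name format_input, and the sample entry are assumptions here and may differ slightly from the version shown in the video and book.

# Minimal sketch of Alpaca-style prompt formatting for instruction finetuning.
# The template wording and the helper name `format_input` are assumptions;
# see section 7.2 for the exact version used in the book.

def format_input(entry):
    instruction_text = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request."
        f"\n\n### Instruction:\n{entry['instruction']}"
    )
    # Some entries carry an optional input field; omit the section if it is empty.
    input_text = f"\n\n### Input:\n{entry['input']}" if entry.get("input") else ""
    return instruction_text + input_text

sample = {
    "instruction": "Rewrite the sentence in passive voice.",
    "input": "The chef cooked the meal.",
    "output": "The meal was cooked by the chef.",
}

prompt = format_input(sample)
full_text = prompt + f"\n\n### Response:\n{sample['output']}"
print(full_text)

During training, the response portion is appended to the prompt as shown above; later sections (7.3 onward) handle batching, padding, and loss masking over these formatted texts.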
You can find additional bonus materials on GitHub:
Generating a Dataset for Instruction Finetuning, https://github.com/rasbt/LLMs-from-sc...
Direct Preference Optimization (DPO) for LLM Alignment, https://github.com/rasbt/LLMs-from-sc...
Building a User Interface to Interact With the Instruction Finetuned GPT Model, https://github.com/rasbt/LLMs-from-sc...
Evaluating Instruction Responses Using the OpenAI API and Ollama, https://github.com/rasbt/LLMs-from-sc...