@wilfredomartel7781

Very well explained 👍👍👍👍

@aurelius2515

Very helpful video, thanks! In your examples the cross-encoder step doesn't seem to add much. That may have something to do with the corpus size, but also the model seems tuned more to word-level similarity than to semantic similarity at the sentence level.

@adityaadhikary3617

1) I have unlabelled sentence pairs from my company-specific data that I'd like to train on. Can I label them using one of the cross-encoders?
2) Is it advisable to add my company-specific words to the vocabulary of the pretrained model before fine-tuning it on my data?
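Regarding question 1: yes, using a cross-encoder to score unlabelled pairs and keeping only the confident ones as "silver" training data is a known approach (similar in spirit to Augmented SBERT). A minimal sketch of the filtering logic is below; the scoring function is stubbed so it runs offline, but in practice it would be something like `CrossEncoder("cross-encoder/stsb-roberta-base").predict(pairs)` from the sentence-transformers library. The thresholds and example pairs are illustrative assumptions, not values from the video.

```python
# Sketch: pseudo-label unlabelled sentence pairs with a cross-encoder's scores.
# score_pairs is a stub standing in for CrossEncoder.predict so this runs offline.

def score_pairs(pairs):
    # Stub: in practice, return cross-encoder similarity scores in [0, 1] per pair.
    return [0.91, 0.12, 0.55]

def pseudo_label(pairs, hi=0.7, lo=0.3):
    """Keep only confidently scored pairs as silver training data."""
    scores = score_pairs(pairs)
    silver = []
    for (a, b), s in zip(pairs, scores):
        if s >= hi:
            silver.append((a, b, 1.0))   # confident positive
        elif s <= lo:
            silver.append((a, b, 0.0))   # confident negative
        # scores between lo and hi are discarded as ambiguous
    return silver

pairs = [
    ("reset my password", "how do I change my password"),
    ("reset my password", "quarterly revenue report"),
    ("invoice overdue", "billing reminder"),
]
silver = pseudo_label(pairs)
```

With the stubbed scores, only the first two pairs survive the confidence filter; the silver set then feeds the bi-encoder fine-tuning.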

@adriangabriel3219

Great video! How would you evaluate your Q&A system? Would you define a validation set of question-answer pairs and compare against the re-ranked answers? What metric would you use? Comparing n-grams of the re-ranked and original answers?
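One common answer to the question above (not from the video itself): when answers are retrieved from a fixed pool, rank-based IR metrics such as Mean Reciprocal Rank and Recall@k are usually preferred over n-gram overlap. A minimal sketch, assuming each validation question is mapped to the id of its one known correct answer:

```python
# Minimal sketch: evaluate a retrieval/re-ranking pipeline with MRR and Recall@k,
# given ranked answer ids per query and the gold answer id for each query.

def mrr(ranked_ids_per_query, gold_ids):
    """Mean Reciprocal Rank: mean of 1/rank of the first correct answer."""
    total = 0.0
    for ranked, gold in zip(ranked_ids_per_query, gold_ids):
        rr = 0.0
        for rank, doc_id in enumerate(ranked, start=1):
            if doc_id == gold:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(gold_ids)

def recall_at_k(ranked_ids_per_query, gold_ids, k=3):
    """Fraction of queries whose gold answer appears in the top k."""
    hits = sum(gold in ranked[:k]
               for ranked, gold in zip(ranked_ids_per_query, gold_ids))
    return hits / len(gold_ids)

# Hypothetical ranked answer ids for three validation questions.
ranked = [["a", "b", "c"], ["b", "a", "c"], ["c", "a", "b"]]
gold = ["a", "a", "a"]
print(mrr(ranked, gold))              # (1 + 1/2 + 1/2) / 3
print(recall_at_k(ranked, gold, k=1))
```

Computing these once for the original ranking and once for the re-ranked lists directly quantifies how much the cross-encoder step helps.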

@thepresistence5935

cool 😎!

@ax5344

The overview is covered in the first 1:18. I feel that if you slowed down a little and illustrated the whys a bit more, we could follow along more easily. I'm watching a second time; let me see if I can figure out a more concrete why.

@ax5344

Is there a typo? Lines 19 and 22 are talking about different numbers.

@ax5344

Can you number this series (so we know how to watch them)? Thanks!