Reputation: 766
I'm trying to create my model for question answering based on BERT and can't understand what fine-tuning means. Do I understand correctly that it is like adaptation to a specific domain? And if I want to use it with a Wikipedia corpus, do I just need to integrate the unchanged pre-trained model into my network?
Upvotes: 2
Views: 697
Reputation: 1399
Fine-tuning is more like adapting the pre-trained model to the downstream task. However, some recent state-of-the-art results suggest that fine-tuning doesn't help much with QA tasks. See also the following post.
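To make "adapting to the downstream task" concrete, here is a minimal sketch, assuming the Hugging Face transformers library (not mentioned in the question): the pre-trained BERT encoder is loaded, a fresh span-prediction head is attached, and both are trained on labelled question/answer pairs. The answer-span indices below are hypothetical placeholders, not real labels.

```python
from transformers import BertTokenizerFast, BertForQuestionAnswering
import torch

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
# The QA head on top of the pre-trained encoder is randomly initialized here.
model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")

question = "Who wrote the play?"
context = "The play was written by William Shakespeare in 1603."
inputs = tokenizer(question, context, return_tensors="pt")

# One fine-tuning step: labels are the token positions of the gold answer span.
start_positions = torch.tensor([8])  # hypothetical gold start index
end_positions = torch.tensor([9])    # hypothetical gold end index
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
outputs.loss.backward()  # gradients flow into all BERT weights, not just the new head
```

In this sketch every parameter of the encoder is updated, which is what distinguishes fine-tuning from simply running the frozen pre-trained model inside your network.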
Upvotes: 0
Reputation: 1882
Fine-tuning is adapting (refining) the pre-trained BERT model to two things: the domain of your data and the downstream task (e.g. classification, question answering, etc.).
You can use pre-trained models as-is at first, and if the performance is sufficient, fine-tuning for your use case may not be needed.
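As a rough illustration of "using a pre-trained model as-is", here is a minimal sketch, again assuming the Hugging Face transformers library: a publicly available checkpoint that was already fine-tuned on SQuAD is loaded and queried directly, without any further training on your own data. The question and context strings are made up for the example.

```python
from transformers import pipeline

# Load a BERT checkpoint that has already been fine-tuned for extractive QA.
qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

result = qa(
    question="What does fine-tuning do?",
    context=(
        "Fine-tuning adapts a pre-trained language model to a specific "
        "domain or task by continuing training on labelled examples."
    ),
)
print(result["answer"], result["score"])
```

If the answers and confidence scores from such an off-the-shelf model are good enough on your Wikipedia data, you may not need to fine-tune further; otherwise you would continue training it on question/answer pairs from your own domain.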
Upvotes: 2