anna_kos

Reputation: 766

BERT fine tuning

I'm trying to create my model for question answering based on BERT and I can't understand what fine tuning means. Do I understand correctly that it is a kind of adaptation to a specific domain? And if I want to use it with a Wikipedia corpus, do I just need to plug the unchanged pre-trained model into my network?

Upvotes: 2

Views: 697

Answers (2)

Fine-tuning means adapting the pre-trained model to the downstream task. However, recent state-of-the-art results suggest that fine-tuning doesn't help much with QA tasks. See also the following post.
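To make the terminology concrete, here is a minimal sketch of what one fine-tuning step for extractive QA could look like, assuming the Hugging Face transformers library and PyTorch; the toy example, token indices, and learning rate are placeholders, not a recommended setup.

    import torch
    from transformers import BertTokenizerFast, BertForQuestionAnswering

    # Start from the pre-trained checkpoint; a QA span-prediction head is added on top.
    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")

    # One toy (question, context) pair; real training data would provide the
    # answer span positions for every example.
    encoding = tokenizer(
        "Who wrote Hamlet?",
        "Hamlet was written by Shakespeare.",
        return_tensors="pt",
    )
    start_positions = torch.tensor([10])  # placeholder index of the answer token
    end_positions = torch.tensor([10])    # placeholder index of the answer token

    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
    model.train()
    outputs = model(**encoding,
                    start_positions=start_positions,
                    end_positions=end_positions)
    outputs.loss.backward()  # gradients flow into the pre-trained weights
    optimizer.step()         # updating those weights is what "fine-tuning" means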

Upvotes: 0

Adnan S

Reputation: 1882

Fine tuning is adapting (refining) the pre-trained BERT model to two things:

  1. Domain
  2. Task (e.g. classification, entity extraction, etc.).

You can use pre-trained models as-is at first, and if the performance is sufficient, fine tuning for your use case may not be needed.
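As a sketch of the "use it as-is" option, the snippet below loads a checkpoint that its publisher already fine-tuned on SQuAD and runs it through the Hugging Face question-answering pipeline; the model name and example text here are just illustrative assumptions.

    from transformers import pipeline

    # A BERT checkpoint already fine-tuned for extractive QA by its publisher.
    qa = pipeline(
        "question-answering",
        model="bert-large-uncased-whole-word-masking-finetuned-squad",
    )

    result = qa(
        question="Who wrote Hamlet?",
        context="Hamlet is a tragedy written by William Shakespeare.",
    )
    print(result["answer"], result["score"])

If answers like this are accurate enough on your Wikipedia data, you can stop here; otherwise, fine-tuning on examples from your own domain is the next step.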

Upvotes: 2
