When a model is trained on Dataset A, its early layers learn fundamental features such as edges, textures, and basic shapes. If these layers are frozen before training on Dataset B, they retain the representations learned from Dataset A. Does freezing them prevent the model from capturing new simple features that are present in Dataset B but were absent from Dataset A? In other words, are the retained early representations general enough to handle new low-level variations, or does freezing restrict adaptation to new low-level patterns?
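To make the setup concrete, here is a minimal NumPy sketch of the mechanics of freezing (not any specific framework's API, and the two-layer linear model is a hypothetical stand-in for a real network): layer 1 plays the role of the "early layers" pretrained on Dataset A, and only layer 2 receives gradient updates on Dataset B.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8))          # stand-in for "Dataset B" inputs
y = rng.normal(size=(32, 1))          # stand-in targets

W1 = rng.normal(size=(8, 4)) * 0.1    # early layer, "pretrained" on Dataset A
W2 = rng.normal(size=(4, 1)) * 0.1    # task head, trainable

W1_before = W1.copy()
W2_before = W2.copy()

lr = 0.01
for _ in range(100):
    h = X @ W1                        # frozen early features
    pred = h @ W2
    grad_pred = 2 * (pred - y) / len(X)
    grad_W2 = h.T @ grad_pred
    # grad_W1 = X.T @ (grad_pred @ W2.T)  # computed here but never applied:
    #                                     # freezing means skipping this update
    W2 -= lr * grad_W2                # only the head adapts to Dataset B

assert np.allclose(W1, W1_before)     # frozen layer is unchanged
```

The point the sketch makes explicit: freezing does not blur or average the early representations, it skips their update entirely, so any simple feature of Dataset B that `W1` cannot already express will never be learned by `W1`; the head can only recombine the features the frozen layer already produces.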