Reputation: 41
I'm trying to figure out a way to augment data for a Seq2Seq model, but I'm limited in the number of training samples: I have 200.
My idea is to use ChatGPT to generate several similar output sequences for each single input. Would this approach confuse the model?
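A minimal sketch of what I mean, where `paraphrase` is a hypothetical stand-in for the ChatGPT call (here faked with trivial rewrites so the snippet runs on its own):

```python
def paraphrase(text, n_variants=2):
    """Placeholder for an LLM call returning n paraphrases of `text`.
    A real version would call the ChatGPT API; this fake keeps the
    sketch self-contained."""
    return [f"{text} (variant {i + 1})" for i in range(n_variants)]

def augment(pairs, n_variants=2):
    """Add (input, paraphrased_output) pairs next to the originals."""
    augmented = list(pairs)  # keep the original 200 samples
    for src, tgt in pairs:
        for alt_tgt in paraphrase(tgt, n_variants):
            # Same input mapped to a paraphrased target:
            # this creates a one-to-many input->output mapping.
            augmented.append((src, alt_tgt))
    return augmented

data = [("translate: hello", "bonjour"), ("translate: cat", "chat")]
augmented = augment(data, n_variants=2)
print(len(augmented))  # 2 originals + 2 inputs * 2 variants = 6
```

So each input would end up with multiple valid targets in the training set, and I'm unsure whether that one-to-many mapping is a problem for training.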
Upvotes: 1
Views: 23