Reputation: 7703
What is the possible reason for "Required leaf false alarm rate achieved. Branch training terminated."?
The following commands were used for training:
for creating samples
-img imgs/CHE_one_wb.jpg -num 300 -bg imgs/negat.dat -vec imgs/vector.vec -info imgs/smpl/info.txt -maxxangle 0.1 -maxyangle 0 -maxzangle 0.1 -maxidev 100 -bgcolor 255 -bgthresh 0 -w 20 -h 35
-img imgs/CHE_one_wb.jpg -num 300 -bg imgs/negat.dat -info imgs/smpl/info.txt -maxxangle 0.1 -maxyangle 0.1 -maxzangle 0.1 -maxidev 100 -bgcolor 255 -bgthresh 0 -w 20 -h 35
for training
-data imgs/cascade/ -vec imgs/vector.vec -bg imgs/negat.dat -numPos 200 -numNeg 40 -numStages 10 -featureType LBP -maxFalseAlarmRate 0.9 -w 20 -h 35
RESULT OF TRAINING
Upvotes: 33
Views: 43511
Reputation: 61
I have also had some problems training my cascade properly. I followed the steps that @Dmitry Zaytsev wrote, but I couldn't obtain a proper cascade.xml. I always got messages like:
"Required leaf false alarm rate achieved. Branch training terminated."
or
"The most possible reason is insufficient count of samples in given vec-file."
Finally I made it work. I want to share my thoughts, even though I'm not completely sure they are correct. Take them as my own experience, keeping in mind that I'm not a developer.
The parameters -w and -h are really meaningful. Everything started to work properly when I figured out that the vector's samples should have the minimum possible size, and I've read from experienced developers that 24 is a good minimum for that. I kept the ratio between the axes (width and height) and used -w 24 and -h 66 for my shape detector.
I also realized something about the background images: size matters. 100 x 100 pixels is a good size (seen in tutorials), but the important point is that the background images have to be larger than your vector samples' images! This means that if you use:
opencv_createsamples -vec yourvectorsamplesimages.vec -info path/to/data/file.info -num NumOfPositivesImages -w 24 -h 66
the -w and -h values have to be lower than 100. I concluded this because only with such values did opencv_traincascade afterwards give me a proper trained cascade.xml, as @Dmitry Zaytsev stated before.
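The two checks described above can be sketched as a small helper (hypothetical code, not part of OpenCV; the 120 x 330 object size in the example is made up for illustration):

```python
# Hypothetical sanity check for a createsamples -w/-h choice:
# the sample size must fit inside the negative (background) images,
# and should roughly preserve the object's aspect ratio.

def check_sample_size(w, h, bg_w, bg_h, obj_w, obj_h, tol=0.1):
    """Return a list of warnings for the given -w/-h values."""
    warnings = []
    if w >= bg_w or h >= bg_h:
        warnings.append("sample size must be smaller than the background images")
    sample_ratio = w / h
    object_ratio = obj_w / obj_h
    if abs(sample_ratio - object_ratio) / object_ratio > tol:
        warnings.append("sample aspect ratio differs from the object's")
    return warnings

# Example: 24x66 samples, 100x100 negatives, object roughly 120x330 px.
print(check_sample_size(24, 66, 100, 100, 120, 330))  # -> []
```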
I know my message is not rigorous and may even be wrong, but struggling for 4 days to get a proper cascade.xml led me to draw these conclusions.
Just trying to help, as your messages helped me. :-)
Thanks
Upvotes: 1
Reputation: 887
This is not an error! Given the samples presented and the training settings, your cascade has reached its desired potential. Either add more data OR make your requirements stricter! For now it simply performs as well as you requested...
You have three ways now...
2. You can get rid of this by increasing your data, i.e. the number of your positive and negative images. (Try to keep more positives than negatives.)
3. I read this one somewhere, so I don't know whether it will work or not: you can adjust -minHitRate 0.995 and -maxFalseAlarmRate 0.5. These two parameters have default values, and you may get rid of your problem by increasing them to 0.998 and 0.7, and so on.
But like I said this is not an error.
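The -minHitRate setting in option 3 also interacts with how many samples the vec file must hold. A rule of thumb often quoted in Stack Overflow answers (an approximation, not an official OpenCV formula) can be sketched as:

```python
# Rule of thumb sometimes quoted for opencv_traincascade: the vec file
# should contain at least
#   numPos + (numStages - 1) * (1 - minHitRate) * numPos + S
# samples, where S counts samples lost to false-negative rejections
# (unknown in advance; assumed 0 here). This is an approximation only.

def min_vec_samples(num_pos, num_stages, min_hit_rate, s=0):
    # round() instead of ceil() to sidestep floating-point noise
    return round(num_pos + (num_stages - 1) * (1 - min_hit_rate) * num_pos + s)

# With the question's -numPos 200, -numStages 10 and the default
# minHitRate of 0.995, the vec file of 300 samples has some headroom:
print(min_vec_samples(200, 10, 0.995))  # -> 209
```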
Upvotes: 0
Reputation: 7703
I have achieved my goal and trained a good cascade. Training should continue until the acceptanceRatio reaches a value around
0.000412662
or less. But if you get an acceptanceRatio like 7.83885e-07, your cascade is probably overtrained and it won't find anything; try to set fewer stages.
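As a rough illustration of why too many stages lead to overtraining, assuming acceptanceRatio behaves approximately like the product of the per-stage false alarm rates (an assumption for illustration, not the exact definition OpenCV uses):

```python
# With -maxFalseAlarmRate 0.5, every extra stage roughly halves the
# acceptance ratio, so an excessive stage count drives it toward zero
# (the overtraining symptom described above).

def approx_acceptance_ratio(max_false_alarm_rate, num_stages):
    return max_false_alarm_rate ** num_stages

print(approx_acceptance_ratio(0.5, 11))  # ~4.9e-4, healthy territory
print(approx_acceptance_ratio(0.5, 21))  # ~4.8e-7, overtrained territory
```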
!!! And one more important thing: when you train your cascade, you should have more than one feature per stage, starting from stage 2 or 3. If you have only one feature per stage, you won't get a good cascade. You should work on your training images (negative and positive samples). Normal training will look like this:
For training I used the following command:
-data imgs/cascade/ -vec imgs/vector.vec -bg imgs/negat.dat -numPos 1900 -numNeg 900 -numStages 12 -featureType HAAR -minHitRate 0.999 -maxFalseAlarmRate 0.5 -w 24 -h 30
Both feature types work almost equally well; sometimes HAAR is a little better, but it is significantly slower than LBP.
Upvotes: 49
Reputation: 762
You set maxFalseAlarmRate=0.9.
This means that in each stage, no more than 90% of the 40 negative samples (i.e. 36 samples) may lie inside the boundary of the positives. As soon as the algorithm manages to put at least 4 samples outside that boundary, it can go to the next stage.
This worked for a few stages, until it happened (by mere chance) that fewer than 36 samples were inside the positive boundary from the very beginning (remember that negative sample extraction is a random process).
So when the algorithm was supposed to perform the separation, its job was already done and it did not know how to proceed.
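The arithmetic in this answer can be spelled out with the question's settings (-numNeg 40, -maxFalseAlarmRate 0.9):

```python
import math

# Per-stage bookkeeping with the question's settings.
num_neg = 40
max_false_alarm_rate = 0.9

# At most 90% of the negatives may still fall inside the positive boundary.
max_misclassified = math.floor(num_neg * max_false_alarm_rate)  # 36
# So the stage is done once at least this many negatives are rejected.
min_rejected = num_neg - max_misclassified                      # 4

print(max_misclassified, min_rejected)  # 36 4
# If fewer than 36 negatives start out inside the positive boundary,
# the stage's goal is met before any training happens.
```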
Upvotes: 6
Reputation: 1200
If you have a small amount of data, you need fewer stages to achieve the required false alarm rate you set. This means that the cascade classifier is "good enough", so it doesn't have to grow further. The total false positive ratio is actually the product of every stage's ratio, so after some point the target value is reached.
In your options you set it to 0.9. Consider making it higher, like 0.95 or more.
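The multiplication described above can be sketched directly (assuming the overall false alarm rate is simply the product of identical per-stage rates):

```python
# Overall false alarm rate as the product of per-stage rates.
def overall_false_alarm(per_stage_rate, num_stages):
    return per_stage_rate ** num_stages

# The question's -maxFalseAlarmRate 0.9 over 10 stages is a loose target,
# while the more common 0.5 is far stricter for the same stage count.
print(overall_false_alarm(0.9, 10))  # ~0.35
print(overall_false_alarm(0.5, 10))  # ~0.001
```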
Apart from that, your datasets are small, so it's easier for the algorithm to get good results when validating on them during training. The smaller the dataset, the easier the classifier is to train, so fewer stages are required. But this doesn't mean it will perform better on real data. Also, if you keep the training set small and set a higher ratio, the classifier will need more stages to finish and will be more complex, but it is very likely to be over-trained on the training set.
To conclude, if the nature of your positives and negatives makes them easy to separate, then you don't need so many samples. Of course that depends on what you are training the algorithm for. With your amount of samples, the 10 stages you set are a lot, so the algorithm terminates earlier (which is not necessarily bad).
When I was training on faces, I think I had around a thousand positives (including all the rotations/deviations) and 2-3 thousand negatives, and needed a classifier of around 11-13 stages, if I remember correctly.
The tutorial of Naotoshi Neo had helped me a lot.
Also, as Safir mentioned, I notice now that you have too few negative samples compared to the positive ones. They should be at least equal in number, preferably around 1.5-2 times more than the positives.
Upvotes: 7
Reputation: 902
The number of negatives is too small compared to the number of positives and the number of stages.
Upvotes: 7