Reputation: 71
I'm writing a program in C to measure the access time to certain websites. The site names are stored in the elements of the urls array. If I take out the for (y = 0; y < iterations; y++) loop, everything runs fine. But if I keep it, urls[0], the first website, gets corrupted after the inner for loop finishes and y is incremented.
What's causing this?
char *urls[50];
char str1[20];

void *wget(void *argument)
{
    int threadid;
    threadid = *((int *)argument);
    strcpy(str1, "wget -q --spider ");
    strcat(str1, urls[threadid]);
    system(str1);
}
for (y = 0; y < iterations; y++)
{
    for (j = 0; j < numthreads; j++)
    {
        thread_args[j] = j;
        clock_gettime(CLOCK_REALTIME, &bgn);
        rc = pthread_create(&threads[j], NULL, wget, (void *) &thread_args[j]);
        rc = pthread_join(threads[j], NULL);
        clock_gettime(CLOCK_REALTIME, &nd);
        times[j] = timediff(bgn, nd);
    }
}
Upvotes: 1
Views: 182
Reputation: 8116
My bet is that one of the strings in urls, plus the wget prefix, is longer than 20 bytes and is overwriting adjacent data. Make str1 larger, and move it into your wget function (multiple threads should not be writing to one shared buffer without locking).
Upvotes: 3
Reputation: 13946
Some possibilities...

str1 appears to be shared among all the threads. That's a recipe for trouble right there.

str1 is only 20 chars long. It's hard to believe the whole wget command line, including the URL, will fit in 20 chars. So you're writing off the end of str1.

Consider making str1 a local variable in wget(), and either make it a char array big enough to hold the largest possible wget command line you might have, or dynamically allocate it and free it within wget(), with a size based on the length of the constant part of the command line plus the current URL.
Upvotes: 5