Server code spawning too many instances of PHP

I've got a server which has an action triggered by a frequent cron job.
This is a PHP application built on SilverStripe (4.0).
The issue I'm facing is that the PHP processes stay alive and also keep database connections open. This means that after a few days the site stops working entirely, once the SQL server stops accepting new connections.

The system has two tasks on cron jobs:

One takes a massive CSV file and splits it into smaller sub-files, which are then imported into the database. This one uses a lock file to prevent it from colliding with a previously running instance, though I'm not sure the lock is actually working.

The second task processes all the records which have been updated in large batches.

Either of these could be the source of the overload, but I'm not sure how to narrow it down.
What's the best way to diagnose the source of the issue?

Upvotes: 0

Views: 70

Answers (1)

Barry

Reputation: 3318

In terms of debugging, this is like profiling any other task: profile the application with something like Xdebug and KCachegrind. To ensure that processes do not run for too long, you can limit `max_execution_time` in the php.ini used by the CLI (note that for the CLI it defaults to 0, i.e. unlimited).
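As a quick way to try this without editing php.ini, you could override the setting from the cron command itself. This is a sketch; the script path is a placeholder:

```shell
# Override the limit for this run only (script path is a placeholder).
php -d max_execution_time=300 /path/to/task.php

# Alternatively, coreutils timeout enforces a wall-clock limit, which also
# covers time the process spends waiting on the database.
timeout 300 php /path/to/task.php
```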

If you still want the CLI process to be able to run for a long time overall, but no single row to take too long, reset the execution time limit on a per-row basis:

$allowed_seconds_per_row = 3;
foreach ($rows_to_process as $row) {
    // set_time_limit() restarts the timeout counter from zero on each call,
    // so the script is only killed if a single row exceeds the limit.
    set_time_limit($allowed_seconds_per_row);
    $this->process($row);
}

You can also register a shutdown function to record the state as the script ends.
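A minimal sketch of such a shutdown function, which logs progress and peak memory whether the script finishes normally or dies on a fatal error; `$lastRow` is a placeholder for whatever progress marker the real task tracks:

```php
<?php
// Record state when the script ends, for any reason (normal exit,
// fatal error, or the time limit being hit).
$lastRow = 0;

register_shutdown_function(function () use (&$lastRow) {
    $error = error_get_last();   // non-null if we exited on a fatal error
    error_log(sprintf(
        'Task exiting at row %d, peak memory %d bytes, error: %s',
        $lastRow,
        memory_get_peak_usage(true),
        $error ? $error['message'] : 'none'
    ));
});

for ($i = 1; $i <= 10; $i++) {
    $lastRow = $i;               // real per-row work would happen here
}
```

Checking the log after the site next locks up would tell you which task died, where it was, and how much memory it was holding.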

Memory is likely a key cause of the failures, so focus the debugging on memory usage; it can be kept under control by unsetting variable data as you go.
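For the batch-processing task, that could look like the sketch below: process rows in fixed-size batches and release each batch before fetching the next. `fetchBatch()` here is a hypothetical stand-in for the real database query:

```php
<?php
// Stand-in for the real query: pretend there are 1200 rows in total.
function fetchBatch(int $offset, int $size): array
{
    $total = 1200;
    $count = max(0, min($size, $total - $offset));
    return $count ? range($offset, $offset + $count - 1) : [];
}

$batchSize = 500;
$offset    = 0;
$processed = 0;

while (($batch = fetchBatch($offset, $batchSize)) !== []) {
    foreach ($batch as $row) {
        $processed++;            // real per-row work would happen here
    }
    $offset += count($batch);
    unset($batch);               // drop the reference so PHP can free the rows
    gc_collect_cycles();         // reclaim any circular references
}

echo $processed, "\n";           // prints 1200
```

Logging `memory_get_usage(true)` after each batch would show whether memory actually drops between batches or keeps climbing toward the limit.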

Upvotes: 2
