PseudoAj

Reputation: 5977

Laravel 5.3: What is causing 'maximum execution time of 30 seconds exceeded'

The problem

I am using Laravel 5.3 to import a huge tab-separated file (more than 1 million rows and more than 25 columns) into a MySQL database using functions in my controller code (I am refraining from posting all the code here). While processing the file, I run into the following error:

FatalErrorException in Connection.php line 720:

Maximum execution time of 30 seconds exceeded

Please note that the application imports a different number of rows on different runs before failing.

Question

I know we can fix this using either of the following:

  1. changing php.ini, as suggested here
  2. adding ini_set('max_execution_time', 300); at the beginning of public/index.php, as suggested here (a minimal sketch follows this list)
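
To be concrete, the second option boils down to a single call before the framework boots. A minimal sketch, assuming a stock Laravel 5.3 front controller at public/index.php and an arbitrary 300-second budget:

    <?php
    // public/index.php -- near the top, before the framework is bootstrapped.
    // 300 seconds is an arbitrary budget; size it to what the import realistically needs.
    ini_set('max_execution_time', 300);

    // ... the rest of the stock front controller continues unchanged ...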

A varied number of reasons might be behind this, and I am more interested in knowing where exactly it is running out of time. Laravel doesn't provide any more detail than the message above. I would really appreciate it if someone could suggest ways to debug this. Things that would help:

Environment

Upvotes: 5

Views: 9400

Answers (1)

Michael - sqlbot

Reputation: 179384

It's not a specific operation running out of time. It's... everything, combined, from start to finish.

max_execution_time integer

This sets the maximum time in seconds a script is allowed to run before it is terminated by the parser. This helps prevent poorly written scripts from tying up the server. The default setting is 30.

http://php.net/manual/en/info.configuration.php#ini.max-execution-time

The idea, here, is that for a web service, generally speaking, only a certain amount of time from request to response is reasonable. Obviously, if it takes 30 seconds (an arbitrary threshold of "reasonableness") to return a response to a web browser or API client, something probably isn't working as intended. A lot of requests tying up server resources that way would leave the server unresponsive to subsequent requests, taking the entire site down.

The max_execution_time parameter is a protective control to mitigate the degradation of a site when a script -- for example -- gets stuck in an endless loop or otherwise runs for an unreasonable amount of time. The script execution is terminated, freeing resources that were being consumed, usually in an unproductive way.
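
If you want to confirm which limit is actually in effect for the failing request (php.ini, per-directory overrides, and earlier ini_set() calls can all change it), a quick check near the top of the import code is enough. A minimal sketch, assuming Laravel's default Log facade alias:

    // Logs the effective limit for this request; a value of "0" would mean unlimited.
    \Log::info('Effective max_execution_time: ' . ini_get('max_execution_time'));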

Is the time aggregated across all requests to a method?

It's the total runtime for everything in the script -- not one specific operation.
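
Since the 30 seconds are spent across the whole request, the practical way to find where they go is to log elapsed time at a few checkpoints. A minimal sketch inside a hypothetical controller method; parseTsv() and insertRows() are placeholders for whatever your import actually does:

    $start = microtime(true);
    $checkpoint = function ($label) use ($start) {
        // Elapsed seconds since the start of the method, written to the Laravel log.
        \Log::info(sprintf('[import] %s after %.2fs', $label, microtime(true) - $start));
    };

    $checkpoint('request received');

    $rows = $this->parseTsv(storage_path('app/import.tsv'));  // placeholder: read and parse the file
    $checkpoint('file parsed');

    $this->insertRows($rows);                                  // placeholder: write the rows to MySQL
    $checkpoint('rows inserted');

Whichever checkpoint never makes it into the log marks the stage that blew the budget.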

Does memory overload cause this?

Not typically, except perhaps when the system is constrained for memory and uses a swap file, since swap thrashing can consume a great deal of time.

Will it help by chunking the data and handling it through multiple requests?

In this case, yes, it may make sense to work in smaller batches, which (generally speaking) should reduce the runtime of each request. Everything is a tradeoff, though: larger batches may or may not be more efficient in terms of processing time per unit of work, which is workload-specific and rarely linear.
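
As an illustration of the batching idea, the sketch below reads the TSV line by line and inserts every 1,000 rows in a single query. The file path, table name, column mapping, and chunk size are all assumptions to adapt to your schema; \DB is Laravel's default facade alias for the database manager:

    $handle = fopen(storage_path('app/import.tsv'), 'r');
    $batch = [];

    while (($fields = fgetcsv($handle, 0, "\t")) !== false) {
        $batch[] = [
            'col_a' => $fields[0],   // hypothetical column mapping
            'col_b' => $fields[1],
        ];

        if (count($batch) === 1000) {
            \DB::table('import_rows')->insert($batch);  // one multi-row INSERT per chunk
            $batch = [];
        }
    }

    if ($batch) {
        \DB::table('import_rows')->insert($batch);      // flush the final partial chunk
    }

    fclose($handle);

Splitting the chunks across multiple requests (or a queued job) then keeps any single request comfortably inside the execution limit.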

Upvotes: 2
