Adam

Reputation: 29119

Are PHP files executed in parallel or sequentially?

If two users execute the same PHP file, will it be executed in parallel or sequentially? Example:

If I have a database table data which has only one column, id, would it be possible for the following code to produce the same outcome for two different users?

1.  $db=startConnection();
2.  $query="SELECT id FROM data";
3.  $result=$db->query($query)or die($db->error);
4.  $zeile=mysqli_fetch_assoc($result);
5.  $number=$zeile['id'];
6.  $newnumber=$number+1;
7.  echo $number;
8.  $update = "UPDATE data SET id = '$newnumber'  WHERE id = '$number'";
9.  $db->query($update)or die($db->error);
10. mysqli_close($db);

If it is not executed in parallel, does that mean that when 100 people load a PHP file with a loading time of 1 second, one of them has to wait 99 seconds?


Edit: In the comments it is stated that I could mess up my database. I guess this is how it could happen:

User A executes the file from 1.-7.; at that moment user B executes the file from 1.-7.; then A runs 8.-10. and B runs 8.-10. In this scenario both users would end up with the same number on the screen.

Now let's take the following example:

1.  $db=startConnection();
2.  $query=" INSERT INTO data  VALUES ()";
3.  $result=$db->query($query)or die($db->error);
4.  echo $db->insert_id;                        
5.  mysqli_close($db);

Let's say A executes the file from 1.-3.; at that moment user B executes the file from 1.-5.; after that user A runs 4.-5. I guess in this scenario both would also have the same number on the screen, right? Do transactions prevent both scenarios?

Upvotes: 0

Views: 2788

Answers (3)

user1039663

Reputation: 1335

It depends. Usually PHP scripts are executed in parallel, but there are several things that can make them execute sequentially and thus skew your tests:

  1. most probably the browser: if you use the same browser for both tests, it may serialize the requests. Use two or more browsers, or even better a script calling curl.
  2. the server configuration may serialize calls to a certain URL or from a certain IP; check from your PC and a handheld device to rule this out.
  3. some scripts implement protections such as locks, POSIX synchronization methods (critical sections, semaphores, ...) or DB transactions, so that certain critical parts are not executed in parallel and inconsistencies are avoided.

The best option is to test in your own scenario: add a sleep or a processor-intensive loop inside your script/critical region/protected code area/DB transaction code and check, as in the sketch below. Remember to use both the same and different URLs/IPs/browsers to get a better overall picture.
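
For example, here is a minimal test sketch along those lines (the file name partest.php and the 5-second sleep are arbitrary choices; request the file from two browsers or two curl calls at roughly the same time):

<?php
// partest.php - check whether two requests are handled in parallel.
// If the printed time windows of two simultaneous requests overlap,
// the requests ran in parallel; if not, something is serializing them.
$start = microtime(true);
sleep(5);                                  // simulate 5 seconds of "work"
$end = microtime(true);
printf("started at %.3f, finished at %.3f (pid %d)\n", $start, $end, getmypid());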

Upvotes: 0

Jivan

Reputation: 23098

Parallel or Sequential?

Part of your question was about whether PHP runs in parallel or sequentially. As I have read everything and its opposite on that topic, I decided to test it myself.

Field testing:

On a LAMP stack running PHP 5.5 w/ Apache 2, I made a script with a very expensive loop:

// Naive recursive Fibonacci - deliberately CPU-expensive
function fibo($n)
{
    return ($n > 1) ? fibo($n - 1) + fibo($n - 2) : 1;
}

$start = microtime(true);
print "result: ".fibo(38);
$end = microtime(true);
print " - took ".round(($end - $start), 3).' s';

Result with 1 script running:

result: 63245986 - took 19.871 s

Result with 2 scripts running at the same time in two different browser windows:

result: 63245986 - took 20.753 s

result: 63245986 - took 20.847 s

Result with 3 scripts running at the same time in three different browser windows:

result: 63245986 - took 26.172 s

result: 63245986 - took 28.302 s

result: 63245986 - took 28.422 s

CPU usage while running 2 instances of the script: [screenshot]

CPU usage while running 3 instances of the script: [screenshot]

So, it's parallel!

Although you can't easily use multithreading inside a PHP script (it is possible, but not trivial), Apache takes advantage of your server having multiple cores to dispatch the load.

So if your 1-second script is run by 100 users at the same time and you have 100 CPU cores, the 100th user will hardly notice anything. If you have 8 CPU cores (which is more common), then the 100th user will theoretically have to wait something like 100 / 8 = 12.5 seconds for his instance of the script to begin. In practice, as the "benchmark" above shows, each instance's performance degrades when other instances are running at the same time on other cores. So it could be a lot more. But not 100 seconds more.
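
As a rough back-of-envelope sketch of that estimate (the numbers are simply the ones from the paragraph above, not measurements):

$users      = 100;   // simultaneous requests
$cores      = 8;     // CPU cores on the server
$scriptTime = 1.0;   // seconds one request takes when running alone

// Naive estimate: the last request effectively queues behind the others
$estimatedWait = ($users / $cores) * $scriptTime;   // 100 / 8 = 12.5 s
echo "rough wait before the last script begins: {$estimatedWait} s";
// In practice contention between the running scripts makes this larger,
// but nowhere near 100 s.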

Upvotes: 3

Dador

Reputation: 5425

You can say that PHP files are executed in parallel (in most cases it is so, but this depends on the web server).

Yes, it is possible that the code in your question produces the same outcome for two different users.

How to avoid this possibility?

1) If you are using MySQL, you can use transactions together with "SELECT ... FOR UPDATE" to avoid this possibility. Just using a transaction wouldn't help!

2) Be sure that you are using InnoDB or another storage engine that supports transactions. MyISAM, for example, doesn't support transactions. You can also run into problems if any form of snapshotting is enabled in the database for handling reads of locked records. (A sketch for checking and converting the table's engine follows after the example code below.)

3) Example of using "SELECT ... FOR UPDATE":

$db = startConnection();

// Start transaction
$db->query("START TRANSACTION") or die($db->error);

// Your SELECT request, but with a "FOR UPDATE" lock
$query = "SELECT id FROM data FOR UPDATE";
$result = $db->query($query);

// Roll back changes if there is an error
if (!$result)
{
    $db->query("ROLLBACK");
    die($db->error);
}

$zeile = mysqli_fetch_assoc($result);
$number = $zeile['id'];
$newnumber = $number + 1;
echo $number;

$update = "UPDATE data SET id = '$newnumber'  WHERE id = '$number'";
$result = $db->query($update);

// Roll back changes if there is an error
if (!$result)
{
    $db->query("ROLLBACK");
    die($db->error);
}

// Commit changes in the database after the requests executed successfully
$db->query("COMMIT");

mysqli_close($db);
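
Regarding point 2 above, here is a minimal sketch (reusing startConnection() and the table name data from the question; the statements themselves are standard MySQL) for verifying the table's storage engine and converting it to InnoDB if necessary:

$db = startConnection();

// Check which storage engine the table currently uses
$result = $db->query("SHOW TABLE STATUS WHERE Name = 'data'") or die($db->error);
$status = mysqli_fetch_assoc($result);
echo "current engine: " . $status['Engine'];

// Convert to InnoDB so that transactions and row locks actually take effect
if ($status['Engine'] !== 'InnoDB') {
    $db->query("ALTER TABLE data ENGINE = InnoDB") or die($db->error);
}

mysqli_close($db);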

Why wouldn't just using a transaction help?

A plain transaction will lock only for writes. You can test the examples below by running two MySQL console clients in two separate terminal windows. I did so, and that's how it works.

We have client#1 and client#2, which execute in parallel.

Example #1. Without "SELECT ... FOR UPDATE":

client#1: BEGIN
client#2: BEGIN
client#1: SELECT id FROM data // fetched id = 3
client#2: SELECT id FROM data // fetched id = 3
client#1: UPDATE data SET id = 4  WHERE id = 3
client#2: UPDATE data SET id = 4  WHERE id = 3
client#1: COMMIT
client#2: COMMIT

Both clients fetched the same id (3).

Example #2. With "SELECT ... FOR UPDATE":

client#1: BEGIN
client#2: BEGIN
client#1: SELECT id FROM data FOR UPDATE // fetched id = 3
client#2: SELECT id FROM data FOR UPDATE // here! client#2 will wait for end of transaction started by client#1
client#1: UPDATE data SET id = 4  WHERE id = 3
client#1: COMMIT
client#2: client#1 ended its transaction and client#2 fetched id = 4
client#2: UPDATE data SET id = 5  WHERE id = 4
client#2: COMMIT

Hey, I think such read-locks reduce performance!

"SELECT ... FOR UPDATE" do read-lock only for clients that use "SELECT ... FOR UPDATE". That's good, cause it means that such read-lock wouldn't affect on standart "SELECT" requests without "FOR UPDATE".

Links

MySQL documentation: "SELECT ... FOR UPDATE" and other read-locks

Upvotes: 5
