Leo Stein

Reputation: 157

PHP webservice doesn't handle multiple requests at the same time

I developed a webservice (PHP/MySQL) that simply outputs a coupon code as a JSON string.

How it works: the application receives one parameter (email), then queries the database table for a coupon code that has not yet been assigned. It then updates that coupon code's row to mark it as assigned (a SELECT/UPDATE routine).

After that, the JSON is output like this:

echo '{"couponCode": "'. $coupon_code . '"}';

That's all.
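(For reference, the same output built with `json_encode` instead of string concatenation, which keeps the JSON valid even if a code ever contains a quote or backslash:)

```php
<?php
// Equivalent output, but with special characters escaped automatically.
$coupon_code = "ABC-123";
echo json_encode(["couponCode" => $coupon_code]);
```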

The problem is that the webservice receives 10000 requests in approximately 1 minute. This occurs only once a day. If I look in the raw Apache logs I can see that it received exactly 10000 requests each time, but in my table only 984 rows have been updated (i.e. 984 coupon codes given out). I tested it multiple times and it varies between 980 and 986 each time. The log file created by the webservice doesn't show any errors and reflects exactly what was updated in the database: between 980 and 986 new lines each time.

My question is: what happened to the missing requests? Does the server not have enough memory to handle that many requests in such a short period of time? (When I test with 5000 requests it works OK.)

If it helps, here's the function that gets a new coupon code:

function getNewCouponCode($email){

    $stmt = $this->connector->prepare("SELECT * FROM coupon_code WHERE email = '' ORDER BY id ASC LIMIT 1");
    $stmt2 = $this->connector->prepare("UPDATE coupon_code SET email = :email WHERE id = :id");

    try{

        $this->connector->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        $this->connector->beginTransaction();
        $this->connector->exec("LOCK TABLES coupon_code WRITE");

        /*TRANSACTION 1*/
        $stmt->execute();
        $result["select"] = $stmt->fetch(PDO::FETCH_ASSOC);
        /*TRANSACTION 1*/

        /*TRANSACTION 2*/
        $stmt2->bindParam(":email", $email);
        $stmt2->bindParam(":id", $result["select"]["id"]);
        $result["update"] = $stmt2->execute();
        /*TRANSACTION 2*/

        $this->connector->commit();
        $this->connector->exec('UNLOCK TABLES');

        return $result;

    }catch(Exception $e) {
        $this->connector->rollBack();
        $this->connector->exec('UNLOCK TABLES');
        $result["error"] = $e->getMessage();
        return $result;
    }

}

Thanks in advance.

Upvotes: 1

Views: 833

Answers (1)

Brad

Reputation: 163438

10,000 requests per minute is a pretty significant load for a PHP application designed the way yours is, and for an Apache web server. It sounds like you're running all of this on a single server.

First off, whatever is slamming you 10k times per minute should know to re-try later on if it gets a failure. Why isn't that happening? If that remote system is under your control, see if you can fix that.

Next, you'll find that Nginx's event-driven model is much more efficient than Apache's process/thread model for what you're doing.

Now, on to your application: it doesn't look like you actually need a SELECT and then an UPDATE. Why not just a single UPDATE, and then check the result? That way the claim is atomic on its own and you don't need any of that table locking (which is really going to slow you down).
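A sketch of that single-UPDATE approach (assuming MySQL, which supports `ORDER BY ... LIMIT` in UPDATE, and that each email only ever claims one coupon; adjust to your schema):

```php
<?php
function getNewCouponCode(PDO $connector, string $email): ?array {
    // Atomically claim one unassigned coupon. Concurrent requests can't
    // grab the same row, so there is no SELECT-then-UPDATE race and no
    // need for LOCK TABLES.
    $claim = $connector->prepare(
        "UPDATE coupon_code SET email = :email
         WHERE email = '' ORDER BY id ASC LIMIT 1"
    );
    $claim->execute([":email" => $email]);

    if ($claim->rowCount() === 0) {
        return null; // no unassigned coupons left
    }

    // Read back the row we just claimed.
    $fetch = $connector->prepare(
        "SELECT * FROM coupon_code WHERE email = :email LIMIT 1"
    );
    $fetch->execute([":email" => $email]);
    return $fetch->fetch(PDO::FETCH_ASSOC) ?: null;
}
```

If `rowCount()` returns 0 you know immediately that the pool is exhausted (or the claim was lost), instead of silently dropping the request the way the locked SELECT/UPDATE pair can under load.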

Upvotes: 4
