Reputation: 143
I am developing a website. Currently, I'm on cheapo shared hosting, but a boy can dream, and I'm already thinking about what happens with larger numbers of users on my site.
The visitors will require occasional database writes, as their progress in the game on the site is logged.
I thought of minimizing the queries by writing progress and other info live into the $_SESSION variable, and only writing the contents of $_SESSION to the database when the session is destroyed (logout, browser close, or timeout).
Questions:
Is that possible? Is there a way to execute a function when the session is destroyed by timeout or by closing the browser?
Is that sensical? Are a couple of hundred concurrent SQL queries going to be a problem for a shared server, and is the idea of using $_SESSION as a buffer going to alleviate some of this?
Upvotes: 7
Views: 298
Reputation: 437914
Is there a way to execute a function when the session is destroyed by timeout or by closing the browser?
Yes, but it might not work the way you imagine. You can define your own custom session handler using session_set_save_handler, and part of the definition is supplying the destroy and gc callback functions. These two are invoked when a session is destroyed explicitly and when it is destroyed due to having expired, so they do exactly what you ask.
However, session expiration due to timeout does not occur with clockwork precision; a whole lot of time might pass before an expired session is actually "garbage-collected". In addition, garbage collection triggers probabilistically, so in theory there is a chance that expired sessions are never garbage-collected at all.
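For illustration, a minimal sketch of such a handler (not production code): extending the built-in SessionHandler keeps the default file-based storage working while exposing the destroy hook. flushProgressToDb() is a hypothetical helper you would write yourself. Note that the gc callback is even less helpful for this purpose: as far as I know, the default handler removes expired sessions in bulk without telling you which ones were collected.

class FlushingSessionHandler extends SessionHandler
{
    public function destroy($id): bool
    {
        // Runs on an explicit session_destroy(), e.g. at logout.
        flushProgressToDb($id); // hypothetical: persist buffered progress
        return parent::destroy($id);
    }
}

session_set_save_handler(new FlushingSessionHandler(), true);
session_start();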
Is that sensical? Are a couple of hundred concurrent SQL queries going to be a problem for a shared server, and is the idea of using $_SESSION as a buffer going to alleviate some of this?
I really wouldn't do this: as described above, you cannot rely on the destroy and gc callbacks firing promptly (or at all), so any progress buffered only in the session could easily be lost.
What about alternatives?
Well, since we're talking about el cheapo shared hosting, you are definitely not going to be in control of the server, so anything that involves PHP extensions (e.g. memcached) is conditional. Database-side caching is also not going to fly. Moreover, the load on your server is going to be affected by variables outside your control, so you can't really do any capacity planning.
In any case, I'd start by making sure that the database itself is structured optimally and that the code is written in a way that minimizes load on the database (free performance just by typing stuff in an editor).
After that, you could introduce read-only caching: usually there is a lot of stuff that you need to display but don't intend to modify. For data that "almost never" gets updated, a session cache that you invalidate whenever you need to can be an easy and very effective improvement (a few spurious invalidations are fine, as long as they are rare in the grand scheme of things).
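As a rough sketch of that invalidation idea, using a hypothetical $_SESSION['cache'] array; updateQuestionInDb() is a made-up write helper you would supply:

function updateQuestion($id, $data)
{
    updateQuestionInDb($id, $data);  // hypothetical: persist the change
    unset($_SESSION['cache'][$id]);  // invalidate; the next read repopulates it
}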
Finally, you can add per-request caching (in variables) if you are worried about pulling the same data from the database twice during a single request.
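A per-request cache can be as simple as a static variable memoizing lookups for the lifetime of one request; fetchUserFromDb() is again a hypothetical loader:

function getUserRow($id)
{
    static $cache = array();  // lives only for the current request
    if (!isset($cache[$id])) {
        $cache[$id] = fetchUserFromDb($id);
    }
    return $cache[$id];
}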
Upvotes: 6
Reputation: 18859
Is that sensical? Are a couple of hundred concurrent SQL queries going to be a problem for a shared server, and is the idea of using $_SESSION as a buffer going to alleviate some of this?
No. First and foremost, you never know what happens to a session (a logout is obvious, whereas a timeout is nearly undetectable), so it's not a trustworthy caching mechanism at any rate. If there are results that you query multiple times over the span of a few requests, and which don't change all too often, save the results of those queries to a dedicated caching mechanism, such as APC or memcached.
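For example, a rough read-through pattern with the Memcached extension might look like this (assuming the extension is available and a memcached server is reachable; loadQuestionFromDb() is a hypothetical loader):

$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$question = $mc->get('question_' . $id);
if ($mc->getResultCode() === Memcached::RES_NOTFOUND) {
    $question = loadQuestionFromDb($id);          // hypothetical database read
    $mc->set('question_' . $id, $question, 300);  // cache for five minutes
}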
Now, I understand your webhost will not provide these caching systems, but then, you can probably do other things to optimise your site. For starters, even my most complex software products (which are fairly complex) query the database only about six times per page on average. If a result is reusable, I tend to cache it, and that lowers the number of queries.
On top of that, writing decent queries matters more: the quality of your design and queries trumps their quantity. If you get your schema, indexes and queries right, ten queries are faster than one query that's not optimised. I'd invest my time in learning how to write efficient queries and reading up on indexing, rather than trying to overcome the problem with a "workaround" such as caching in a session.
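To make that concrete (the progress table, its columns and the credentials below are all made up for illustration): the single biggest win is usually an index on the column you filter by, plus a prepared statement that uses it.

// Hypothetically: ALTER TABLE progress ADD INDEX idx_user (user_id);
// lets MySQL locate a user's rows directly instead of scanning the table.
$mysqli = new mysqli('localhost', 'db_user', 'db_pass', 'game');
$userId = 42; // example id
$stmt = $mysqli->prepare('SELECT level, score FROM progress WHERE user_id = ?');
$stmt->bind_param('i', $userId);
$stmt->execute();
$row = $stmt->get_result()->fetch_assoc();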
Good luck, and I hope your site becomes such a big success that you actually need the advice above ;)
Upvotes: 2
Reputation: 76910
Actually, you could use $_SESSION as a buffer to avoid duplicate reads; that seems a good idea to me (memcached is even better). It is surely not for delaying writes, though; that is much more complex and should be handled by the database. You could use a simple hash that you save in $_SESSION:
$cache = array();
$_SESSION['cache'] = $cache;
Then, when you have to make a query:
if (isset($_SESSION['cache'][$id])) {
    // cache hit: reuse the stored result
    $question = $_SESSION['cache'][$id];
} else {
    // cache miss: retrieve your $question from the database, then save it
    $_SESSION['cache'][$id] = $question;
}
Upvotes: 1
Reputation: 3755
It's not a good idea to write data when the session is destroyed. Since the session data can be destroyed by a garbage collector configured by your host, you have no idea when the session is really closed; all you know is when the user's cookie goes out of date.
So... I suggest you use either a shared-memory (RAM) cache system like memcache (if your host offers it) or a disk-based cache system.
By the way, if your queries are optimized, your columns correctly indexed, etc., your shared hosting can handle tons of queries at the "same time".
Upvotes: 5