Reputation: 67
I am using CodeIgniter for my API implementation. The server resources and technologies used are as follows:
SUMMARY
Framework : CodeIgniter
Database : MySQL (Hosted on RDS) (1 MASTER & 2 SLAVE)
Hosting : AWS t2.Micro
Web Server : Nginx
Following is the LOADER.IO report from my test:
My API MIN RESPONSE TIME: 383 ms
NUMBER OF HITS: 10000 / 1 MIN CONCURRENT
As you can see in the image below, the AVERAGE RESPONSE is 6436 ms.
I am expecting at least 100000 users / 1 MIN watching an event on my application.
I would appreciate it if anybody could help with some OPTIMIZATION suggestions.
MAJOR THINGS I have done so far
1) SWITCHED TO NGINX FROM APACHE
2) MASTER / SLAVE configuration (1 MASTER, 2 SLAVES)
3) CHECKED each INDEX in USER JOURNEY in the application
4) CODE OPTIMIZATION: as you can see, 383 ms is a good response time for an API
5) USED MySQL's EXPLAIN to check the execution plan of queries
Upvotes: 2
Views: 440
Reputation: 142356
For 1667 SELECTs per second, you may need to have multiple Slaves. With such, you can scale arbitrarily far.
However, it may be that the SELECTs can be made efficient enough to not need the extra Slaves. Let's see the queries. Please include SHOW CREATE TABLE and EXPLAIN SELECT ... .
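For reference, the diagnostics being asked for can be gathered like this (the table name and query here are hypothetical placeholders for whatever is in the actual user journey):

```sql
-- Show the full table definition, including its indexes (hypothetical table name):
SHOW CREATE TABLE event_views;

-- Show the optimizer's execution plan for a representative query:
EXPLAIN SELECT user_id, watched_at
FROM   event_views
WHERE  event_id = 123;
```

The EXPLAIN output reveals whether the query uses an index (the `key` column) and roughly how many rows it scans, which is what makes the extra Slaves avoidable or not.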
It is possible to run thousands of simple queries per second.
"100000 / 1 MIN" -- Is that 100K connections? Or 100K queries from a smaller number of connections? There is a big difference -- establishing a connection is more costly than performing a simple query. Also, having 100K simultaneous connections is more than I have every heard of. (And I have seen thousands of servers. I have seen 10K connections (high-water-mark) and 3K "Threads_connected" -- both were in deep do-do for various reasons. I have almost never seen more than 200 "Threads_running" -- that is actual queries being performed simultaneously; that is too many for stability.)
Ouch -- with the query_cache_size at 256MB on 1GB of RAM, you don't have room for anything else! That is a tiny server. Even on a larger server, do not set that tunable to more than 50M. Otherwise the "pruning" slows things down more than the QC speeds them up!
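As a sketch, the cache can be shrunk at runtime to the suggested ceiling and then made permanent in my.cnf. (This applies to MySQL 5.x only; the query cache was removed entirely in MySQL 8.0.)

```sql
-- Shrink the query cache to the 50M ceiling suggested above (MySQL 5.x only):
SET GLOBAL query_cache_size = 50 * 1024 * 1024;

-- Confirm the new setting took effect:
SHOW GLOBAL VARIABLES LIKE 'query_cache_size';
```

SET GLOBAL does not survive a restart, so the same value should also be written under [mysqld] in my.cnf.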
And, how big are the tables in question?
And, SHOW VARIABLES LIKE '%buffer%';
And, what version are you running? Version 5.7 is rated at about 64 simultaneous queries before the throughput stops improving, and (instead), response time heads for infinity.
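A quick way to check both the version and the current concurrency, using standard MySQL status commands:

```sql
-- Server version:
SELECT VERSION();

-- Open connections vs. queries actually executing right now:
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL STATUS LIKE 'Threads_running';
```

If Threads_running regularly approaches the ~64 figure mentioned above, throughput has plateaued and response time will start climbing.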
To do realistic benchmarking, you need to provide realistic values for connections and queries per second.
The heavy-hitters deliver millions of web pages per day. The typical page involves: connect, do a few queries, build html, disconnect -- all (typically) in less than a second. But only a small fraction of the time is any query actually running. That is, 100 connections may equate to 0-5 queries running at any instant.
Please talk about the queries per second that need to be run. And please limit the number of queries running simultaneously.
Upvotes: 1
Reputation: 6565
I would suggest you focus on tuning your MySQL to get faster query execution, and thus save time. To do this, I would suggest the following:
You can set these in the /etc/my.cnf (Red Hat) or /etc/mysql/my.cnf (Debian) file:
# vi /etc/my.cnf
Then append the following directives:
query_cache_size = 268435456
query_cache_type=1
query_cache_limit=1048576
In the above example, the maximum size of an individual query result that can be cached is set to 1048576 bytes (1 MB) using the query_cache_limit system variable. Note that these sizes are in bytes, not KB.
These changes will make your queries return results faster by caching the results of frequently executed queries; MySQL invalidates a cached result whenever rows in the underlying tables are updated. This is done by the MySQL engine, and this is how you can save time.
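To verify whether the cache is actually helping once these directives are in place, the standard Qcache status counters can be inspected (MySQL 5.x only; the query cache was removed in MySQL 8.0):

```sql
-- All query-cache counters at once:
SHOW GLOBAL STATUS LIKE 'Qcache%';
```

Comparing Qcache_hits against Qcache_inserts gives the hit ratio, and a steadily growing Qcache_lowmem_prunes indicates the cache is being evicted faster than it helps.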
ONE MORE SUGGESTION:
As you are using t2.micro, you get 1 GiB of RAM and 1 vCPU. So I would suggest moving to t2.medium, which gives you 4.0 GiB of RAM and 2 vCPUs.
Upvotes: 1