Reputation: 93
I noticed lately that my Laravel project in an AWS Elastic Beanstalk setup has been acting strangely: the server goes down after only a few minutes of running. On a t3.small it goes down roughly every 50 minutes. The health tab says that memory is exhausted, or something along those lines; the environment goes "Severe" for about 5-10 minutes and then recovers without me doing anything, so the monitoring graph is basically one long zigzag. On a t3.nano it goes down approximately every 5 minutes.
Here are some things that I've done that I suspect to be the cause:
Here are some facts:
Here's an observation I had from the logs: there are "internal dummy connection" entries related to Apache, and their timestamps line up exactly with the times the downtime occurs.
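For reference, a minimal Python sketch to line those entries up against the downtimes (assuming the default Apache access_log path on the instance; note that "internal dummy connection" lines are Apache's own wake-up requests to its child processes, so the overlap may be coincidental):

from collections import Counter

LOG = "/var/log/httpd/access_log"  # default Apache access log path; adjust if yours differs

per_minute = Counter()
with open(LOG, errors="replace") as f:
    for line in f:
        if "internal dummy connection" in line:
            # Timestamps look like [23/Nov/2018:19:07:35 +0000]; bucket by minute.
            start = line.find("[") + 1
            per_minute[line[start:start + 17]] += 1

for minute, hits in sorted(per_minute.items()):
    print(minute, hits)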
I've tried every hint in the logs, from juggling different settings in the cron job to other possible causes. I've also asked my peers, but no one has encountered such an error before; in fact, they tested my cron job and it works properly for them.
I also have this in /var/log/httpd/error_log:
[Fri Nov 23 19:07:35.208657 2018] [suexec:notice] [pid 3142] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Fri Nov 23 19:07:35.228633 2018] [http2:warn] [pid 3142] AH10034: The mpm module (prefork.c) is not supported by mod_http2. The mpm determines how things are processed in your server. HTTP/2 has more demands in this regard and the currently selected mpm will just not do. This is an advisory warning. Your server will continue to work, but the HTTP/2 protocol will be inactive.
[Fri Nov 23 19:07:35.228644 2018] [http2:warn] [pid 3142] AH02951: mod_ssl does not seem to be enabled
[Fri Nov 23 19:07:35.229188 2018] [lbmethod_heartbeat:notice] [pid 3142] AH02282: No slotmem from mod_heartmonitor
[Fri Nov 23 19:07:35.267841 2018] [mpm_prefork:notice] [pid 3142] AH00163: Apache/2.4.34 (Amazon) configured -- resuming normal operations
[Fri Nov 23 19:07:35.267860 2018] [core:notice] [pid 3142] AH00094: Command line: '/usr/sbin/httpd -D FOR
Upvotes: 0
Views: 1181
Reputation: 901
This is a case of running into the CPU credit and throttling restrictions of the burstable t2/t3 EC2 instance families. One CPU credit allows one vCPU to run at 100% utilization for one minute. Running t2/t3 instances earn credits at a constant rate per hour that depends on the instance size, so prolonged load above that baseline gradually drains the balance, and once it is empty the instance is throttled. That matches the states you describe, and it also explains why the t3.nano, which earns credits more slowly, fails so much faster than the t3.small.
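As a rough illustration of the arithmetic (the figures below are assumptions taken from AWS's published t3 table: a t3.small earns 24 credits/hour and a t3.nano 6, both with 2 vCPUs; the starting balances are made up):

# Minute-by-minute CPU credit balance under sustained load.
# 1 credit = 1 vCPU at 100% utilization for 1 minute.
def minutes_until_empty(start_credits, earn_per_hour, vcpus, utilization):
    credits = start_credits
    minutes = 0
    while credits > 0 and minutes < 24 * 60:  # cap the simulation at one day
        credits += earn_per_hour / 60.0       # credits earned this minute
        credits -= vcpus * utilization        # credits spent this minute
        minutes += 1
    return minutes

# Both instance sizes pinned at 100% CPU on both vCPUs:
print("t3.small:", minutes_until_empty(60, 24, 2, 1.0), "minutes")  # ~38
print("t3.nano: ", minutes_until_empty(10, 6, 2, 1.0), "minutes")   # ~6

The numbers land in the same ballpark as the ~50 and ~5 minute cycles described above.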
It's advisable to use higher-tier, fixed-performance instances (m3.medium and above) to sustain production workloads consistently. Placing a load balancer in front of multiple instances is also a great way to maintain availability.
More information can be found here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-credits-baseline-concepts.html
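One way to confirm this is to check the instance's CPUCreditBalance metric in CloudWatch. A minimal boto3 sketch (the instance ID is a placeholder, and AWS credentials are assumed to be configured):

from datetime import datetime, timedelta
import boto3

# Pull the last 6 hours of CPUCreditBalance samples for one instance.
cloudwatch = boto3.client("cloudwatch")
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUCreditBalance",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder ID
    StartTime=datetime.utcnow() - timedelta(hours=6),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1))

If the balance hits zero right before each "Severe" window, credit exhaustion is the culprit.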
Upvotes: 2