Reputation: 993
I'm trying to use a standard HTML form with a file-type input element. This form works fine on small files, but if the upload process takes longer than 20 seconds, the connection is dropped and the upload ends prematurely.
The infrastructure is as follows: 1 VPC containing 1 EC2 t2.micro instance running Amazon Linux with Apache 2.4/PHP 7.3/MySQL installed, connected to 1 EFS mount, accessible through an elastic IP.
I spent a full day yesterday and more hours today trying to figure this out. I thought EC2 -> Limits was the culprit, since it was set to 20, but that limit refers to the number of instances, not a connection time limit.
The php.ini directives have been set accordingly:
max_execution_time = 900
post_max_size = 1048M
upload_max_filesize = 1024M
max_input_time = -1
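As a sanity check (a minimal sketch on my part, assuming mod_php under Apache; php.ini edits only take effect after the web server is restarted), the effective values can be confirmed from a test script:
<?php
// Confirm the loaded configuration matches the php.ini edits above;
// stale values usually mean Apache wasn't restarted or a different
// php.ini is being loaded than the one that was edited.
echo 'max_execution_time: ',  ini_get('max_execution_time'), "\n";  // expect 900
echo 'post_max_size: ',       ini_get('post_max_size'), "\n";       // expect 1048M
echo 'upload_max_filesize: ', ini_get('upload_max_filesize'), "\n"; // expect 1024M
echo 'max_input_time: ',      ini_get('max_input_time'), "\n";      // expect -1
?>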
And yes, the code works fine on a non-AWS server. I'm testing with the most basic HTML uploader you can make:
<?php
if ($_FILES) {
    // Move the uploaded file out of PHP's tmp dir to a local path.
    move_uploaded_file($_FILES['wtf']['tmp_name'],
        '/localpath/' . basename($_FILES['wtf']['name']));
}
?>
<html>
<body>
<form method="post" enctype="multipart/form-data">
<input type="file" name="wtf">
<input type="submit">
</form>
</body>
</html>
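A variant of the handler that surfaces the failure server-side instead of failing silently (a sketch of mine, not part of the original test; UPLOAD_ERR_PARTIAL is PHP's constant for error code 3, which shows up in the $_FILES dump further down):
<?php
// Check the upload error code before moving the file.
// UPLOAD_ERR_PARTIAL (3) means the connection was cut off
// mid-upload, which matches the behavior described here.
if ($_FILES) {
    $err = $_FILES['wtf']['error'];
    if ($err === UPLOAD_ERR_OK) {
        move_uploaded_file($_FILES['wtf']['tmp_name'],
            '/localpath/' . basename($_FILES['wtf']['name']));
    } else {
        error_log("Upload failed with error code $err");
    }
}
?>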
This returns the ERR_CONNECTION_ABORTED message. Of course, it works perfectly fine for all files that upload in under the 20-second cutoff.
I've tried installing swap memory (the default Amazon Linux AMI doesn't include any), adding a load balancer, and setting its idle_timeout to 600 seconds. I've tried files of varying sizes, but it has nothing to do with the size; it's all about the time the upload takes. Without fail, it aborts at about 20 seconds every time.
I've had issues like this in the past with AWS, and at that point it came down to "stickiness" of the ELB (load balancer). I wasn't originally using a load balancer on the EC2 instance I was testing the code on, and when I enabled one, it turned out that the Edit Stickiness option only applies to classic load balancers. The apparently comparable setting on current load balancers is idle timeout (accessible from EC2 -> Load Balancers -> [select load balancer] -> Description tab -> Attributes).
Additional tests 12 days later:
This is still an issue. I've tried so many things, but nothing has gotten this to work.
I've tried using curl to call the basic uploader provided above. I created a blank file and attempted to submit it to the form:
dd if=/dev/zero of=testfile.txt count=502400 bs=1024
curl -k -i -X POST -H "Content-Type: multipart/form-data" \
    -F "file_field=@/var/www/html/testfile.txt" \
    https://domain.tld/test_upload.php
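To separate time from size more directly, curl can throttle the transfer so even a small file takes more than 20 seconds (my own suggested variant, not one of the original tests; the explicit Content-Type header is dropped here so curl generates the multipart boundary itself):
# Throttle the upload to ~100 KB/s; if the cutoff is time-based,
# even a few-MB file should abort the same way after ~20 seconds.
curl -k -i -X POST --limit-rate 100K \
    -F "file_field=@/var/www/html/testfile.txt" \
    https://domain.tld/test_upload.php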
Again, it has nothing to do with the size of the files; it's all time related. On a slow connection I can't upload a 25 MB file; on the server directly I can upload a 400 MB file, but it cuts off somewhere above that. Without fail, it ends after 15 to 20 seconds.
I also tried uploading through localhost, and the same issue exists, so it seems to be a server configuration issue, unless there is an unknown AWS layer at play.
curl -k -i -X POST -H "Content-Type: multipart/form-data" \
    -F "file_field=@/var/www/html/testfile.txt" \
    localhost/test_upload.php
The return from $_FILES on uploads that take longer than 15-20 seconds is below; error code 3 is UPLOAD_ERR_PARTIAL, meaning the file was only partially uploaded:
Array
(
[name] => testfile.txt
[type] =>
[tmp_name] =>
[error] => 3
[size] => 0
)
The browser reports ERR_CONNECTION_ABORTED, which led me to look up similar issues others have had with EC2 uploads, but nothing suggested has helped.
Directives I've tried in php.ini:
post_max_size = 1048M
upload_max_filesize = 1024M
memory_limit = 512M
max_input_time = 3600
max_execution_time = 900
ignore_user_abort = On
upload_tmp_dir = /var/www/custom_tmp
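If directives like these seem to have no effect, it's worth confirming which php.ini is actually being loaded (a quick check; note that the CLI and the Apache module can load different files, so phpinfo() in the browser is the more reliable test):
# Show which php.ini the CLI loads; compare against phpinfo()
# output served through Apache to rule out editing the wrong file.
php -i | grep "Loaded Configuration File"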
It's been suggested that the default tmp folder might have restrictions that cause issues, but trying to change it to a different folder didn't affect it.
The memory_limit has no bearing on any of this, as mentioned by others, but I tried it anyway.
Directives I've tried in httpd.conf:
AllowOverride All
Timeout 1200
KeepAlive On / Off
KeepAliveTimeout 1200
KeepAlive is for making multiple requests over the same TCP connection, so it's not exactly related; either way, on or off, it had no impact.
When uploading, I watched the file in the tmp folder being written. It would continue to grow until it hit that same timeout, then it was suddenly deleted and the HTML form returned the ERR_CONNECTION_ABORTED error.
I've tried putting a copy of the php.ini in the base HTML folder, no joy. I've set the Apache directives in the .htaccess file, again no change. I've tried using an HTTP connection instead of HTTPS. Nothing helps.
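For reference, a php.ini in the document root is only honored in CGI/FastCGI per-directory setups, and PHP directives in .htaccess only work under mod_php; a minimal .htaccess sketch under that assumption:
# Requires mod_php and AllowOverride All (set above); under
# PHP-FPM these lines have no effect and the values must live
# in php.ini or the FPM pool configuration instead.
php_value upload_max_filesize 1024M
php_value post_max_size 1048M
php_value max_execution_time 900
php_value max_input_time -1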
This is a standard Amazon Linux AMI (HVM) t2.micro instance. I don't think it being a micro should matter; I've set up an upload server on a t1.micro before that had no issues uploading large files, the differences being that it was not in a VPC and it was running on EC2-Classic.
Upvotes: 3
Views: 2511
Reputation: 31
We recently had the same issue as you; adding this to httpd.conf helped us:
RequestReadTimeout header=0 body=0
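This fits the symptoms: mod_reqtimeout's default in Apache 2.4 is header=20-40,MinRate=500 body=20,MinRate=500, so a request body that stalls below the minimum rate gets aborted after roughly 20 seconds, matching the cutoff in the question. Setting 0 disables the check entirely; a gentler variant (my own suggestion, not part of the original answer) keeps the slow-loris protection while allowing long uploads:
# Alternative to disabling the check entirely: extend the body
# timeout but keep a minimum transfer rate as slow-loris protection.
RequestReadTimeout header=20-40,MinRate=500 body=900,MinRate=500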
Upvotes: 0
Reputation: 11
I was having a similar issue with an EBS multi-container, NLB, .NET stack. Small files (<25 MB) would POST just fine. Large files (up to 350 MB) failed silently: there was no response at all, the API endpoint was not invoked, Postman reported a socket hangup, and curl reported error 52.
To solve this I switched from the Network Load Balancer (NLB) to an Application Load Balancer (ALB). I also applied the settings from the post Increasing client_max_body_size in Nginx conf on AWS Elastic Beanstalk, but set the max to 350M, and added settings to increase the timeouts and buffers. The final content section of the injected proxy.conf was:
content: |
client_max_body_size 350m;
client_body_buffer_size 128k;
proxy_connect_timeout 360;
proxy_send_timeout 360;
proxy_read_timeout 360;
Upvotes: 1