Reputation: 97
Currently, I'm using GCS in "interoperability mode" so that it accepts S3 API requests. Using the official multipart upload example here (with the appropriate endpoint set), the initial POST request that initiates the upload:
POST /bucket/object?uploads HTTP/1.1
Host: storage.googleapis.com
Authorization: AWS KEY:SIGNATURE
Date: Wed, 07 Jan 2015 13:34:04 GMT
User-Agent: aws-sdk-java/1.7.5 Linux/3.13.0-43-generic Java_HotSpot(TM)_64-Bit_Server_VM/24.72-b04/1.7.0_72
Content-Type: application/x-www-form-urlencoded; charset=utf-8
Transfer-Encoding: chunked
Connection: Keep-Alive
results in this response:
HTTP/1.1 400 Bad Request
Content-Length: 55
Date: Wed, 07 Jan 2015 13:34:05 GMT
Server: UploadServer ("Built on Dec 19 2014 ...")
Content-Type: text/html; charset=UTF-8
Alternate-Protocol: 443:quic,p=0.02
The request's content type is not accepted on this URL.
Could that be an AWS client issue, or does GCS not support S3's multipart upload yet?
Most of the other actions I have tried (download object, list bucket objects, etc.) seem to work fine.
Upvotes: 7
Views: 9584
Reputation: 12145
Update: As of May 2021, Google Cloud Storage (GCS) supports S3-compatible multipart uploads.
https://cloud.google.com/storage/docs/multipart-uploads
The AWS SDK will work seamlessly once you configure the appropriate endpoint.
GCS doesn't support the S3 multipart upload interface.
If you want to perform a chunk-parallel upload, you can use object composition instead - see https://cloud.google.com/storage/docs/composite-objects
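The chunk-parallel pattern behind object composition can be sketched as follows. This is an illustration only: `upload_chunk` and `compose` are stand-ins writing to an in-memory dict, where a real implementation would call the GCS API (e.g. `gsutil cp` for the chunks and `gsutil compose` for the final object):

```python
# Illustrative sketch of chunk-parallel upload + compose. FAKE_BUCKET,
# upload_chunk, and compose are stand-ins for real GCS operations.
from concurrent.futures import ThreadPoolExecutor

FAKE_BUCKET = {}  # stand-in for a GCS bucket


def upload_chunk(name: str, data: bytes) -> str:
    """Stand-in for uploading one chunk as its own object."""
    FAKE_BUCKET[name] = data
    return name


def compose(chunk_names: list, dest: str) -> str:
    """Stand-in for GCS compose: concatenates source objects
    (the real API accepts up to 32 components per request)."""
    FAKE_BUCKET[dest] = b"".join(FAKE_BUCKET[n] for n in chunk_names)
    return dest


def parallel_upload(data: bytes, dest: str, chunk_size: int) -> str:
    """Split data into chunks, upload them in parallel, then compose."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    names = [f"{dest}.part{i}" for i in range(len(chunks))]
    with ThreadPoolExecutor() as pool:
        list(pool.map(upload_chunk, names, chunks))
    return compose(names, dest)
```

After composing, the intermediate chunk objects can be deleted; composition happens server-side, so no data is re-uploaded.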
Upvotes: 7
Reputation: 680
Google Cloud Storage (GCS) now supports the S3-style multipart upload API. As such, use cases like the one in this question should just work.
Upvotes: 8