user2067515

Reputation: 85

S3 bucket: "The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint"

Hi, I'm using the code below to get the size of a bucket. I've researched all over, but the only way I found was to loop through each file. While looping, some buckets appear to have been created in a different region, and I end up with the error below:

AWS::S3::PermanentRedirect: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint. from /home//.rvm/gems/ruby-1.9.2-p180/gems/aws-s3-0.6.2/lib/aws/s3/error.rb:38:in `raise'

The endpoint is us-west-1. I need help fixing the above issue, and also: how do I switch my code dynamically to the region my bucket belongs to? I'd also appreciate suggestions on adding exception handling in case of failure. My code is below. Please feel free to comment.

def get_bucket
  s3 = AWS::S3::Base.establish_connection!(:access_key_id => @config[:ACCESS_KEY_ID], :secret_access_key => @config[:SECRET_ACCESS_KEY])
  if !s3.nil?
    AWS::S3::Service.buckets.each do |bucket|
      puts bucket.inspect
      if !bucket.nil?
        size = 0
        # I'm hard-coding the bucket names below so the code doesn't fail
        if ![
             'cf-templates-m01ixtvp0jr0-us-west-1',
             'cf-templates-m01ixtvp0jr0-us-west-2',
             'elasticbeanstalk-us-west-1-767904627276',
             'elasticbeanstalk-us-west-1-akiai7bucgnrthi66w6a',
             'medidata-rave-cdn'
            ].include? bucket.name
          bucket_size = AWS::S3::Bucket.find(bucket.name)
          if !bucket_size.nil?
            bucket_size.each do |obj|
              if !obj.nil?
                size += obj.size.to_i
              end
            end
          end
        end

        load_bucket(bucket.name,bucket.creation_date,size,@config[:ACCOUNT_NAME])
      end
    end
  end
end
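Since the question also asks about exception handling, here is a minimal sketch of one approach: extract the size loop into a helper that rescues per-bucket failures (such as `AWS::S3::PermanentRedirect`) so one mis-regioned bucket doesn't abort the whole run. The helper name `safe_bucket_size` is my own, not part of the original code or the aws-s3 gem.

```ruby
# Sum object sizes for one bucket, but rescue any per-bucket error
# (e.g. AWS::S3::PermanentRedirect for a bucket in another region)
# and return 0 instead of letting the loop die.
def safe_bucket_size(objects)
  size = 0
  objects.each { |obj| size += obj.size.to_i }
  size
rescue StandardError => e
  warn "skipping bucket: #{e.message}"
  0
end
```

Inside `get_bucket` you could then call `size = safe_bucket_size(AWS::S3::Bucket.find(bucket.name))` instead of the inner loop.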

Upvotes: 4

Views: 13119

Answers (2)

Guss

Reputation: 32315

The problem is that buckets can exist in different regions. While you can list all buckets from a single connection (unlike other AWS entities, which are locked to the location they were created in), other operations on a bucket require you to connect to the specific "endpoint" (region) to which that bucket is constrained.

My solution is to check where the bucket is located and then re-login to that region:

s3 = AWS::S3.new(@awscreds)
if s3.buckets[bucket].location_constraint != @awscreds[:region]
  # need to re-login, otherwise the S3 upload will fail
  s3 = AWS::S3.new(@awscreds.merge(region: s3.buckets[bucket].location_constraint))
end
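The older aws-s3 gem used in the question has no `:region` option, so another way to apply the same idea is to point the connection at the bucket's endpoint hostname directly. A hypothetical helper (`s3_endpoint` is my name, not an AWS API) mapping a location constraint to the classic S3 endpoint, assuming the legacy dashed endpoint form and that an empty constraint means US Standard:

```ruby
# Map an S3 location constraint to its classic endpoint hostname.
# US Standard (empty/nil constraint or us-east-1) uses the bare
# endpoint; other regions use the legacy dashed form.
def s3_endpoint(region)
  if region.nil? || region.empty? || region == 'us-east-1'
    's3.amazonaws.com'
  else
    "s3-#{region}.amazonaws.com"
  end
end
```

You could then re-establish the aws-s3 connection with `:server => s3_endpoint(constraint)` before iterating a bucket's objects.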

Upvotes: 1

TerminalDilettante

Reputation: 615

I don't understand how you're building the URL to access your bucket.

If it's in US Standard, you can say http://s3.amazonaws.com/BUCKETNAME/path/to/file. If it's anywhere else, that doesn't work, and you must use http://BUCKETNAME.s3.amazonaws.com/path/to/file instead (non-coincidentally, this is why bucket names are limited to domain-allowed characters: lowercase letters and numbers).
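The two URL styles can be sketched with a small hypothetical helper (`s3_url` is illustrative, not an AWS API; the bucket and key are assumptions):

```ruby
# Build an S3 object URL in either style:
# virtual-hosted (bucket in the hostname) or path-style (bucket in the path).
def s3_url(bucket, key, virtual_hosted: true)
  if virtual_hosted
    "http://#{bucket}.s3.amazonaws.com/#{key}"
  else
    "http://s3.amazonaws.com/#{bucket}/#{key}"
  end
end
```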

This article may be of help: http://docs.aws.amazon.com/general/latest/gr/rande.html

Upvotes: 0
