Rakesh

Reputation: 31

How do I get logs from EC2 instances created by Auto Scaling?

I have an EC2 instance created with Auto Scaling enabled in Amazon Web Services; depending on the web load, instances are created and terminated automatically. How do I get the logs from the instances that are created automatically?

Upvotes: 2

Views: 8193

Answers (4)

AWS PS

Reputation: 4710

The best method is to install and configure the AWS CloudWatch Logs agent on your instances. This is the best practice; I have implemented it many times, and it supports Auto Scaling instances.

To install it, see https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html

Here is an Ubuntu tutorial:

https://www.petefreitag.com/item/868.cfm
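
Once the agent is shipping logs to CloudWatch Logs, the log events outlive the instances, so you can pull them back even after Auto Scaling has terminated the machines. A minimal sketch using boto3; the region and the log group name /var/log/syslog are only assumptions, use whatever group your agent configuration writes to:

    import time
    import boto3

    logs = boto3.client("logs", region_name="us-east-1")  # assumed region

    # Fetch the last hour of events from the (assumed) log group the agent writes to.
    response = logs.filter_log_events(
        logGroupName="/var/log/syslog",                   # placeholder group name
        startTime=int((time.time() - 3600) * 1000),       # epoch milliseconds
    )

    for event in response["events"]:
        print(event["logStreamName"], event["message"])

Each instance typically writes to its own log stream, so the stream name tells you which (possibly long-gone) instance produced the line.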

Upvotes: 0

mahendra rathod

Reputation: 1638

I personally followed the approach below to get the logs of Auto Scaling instances.

I installed the AWS CloudWatch agent on the EC2 instances and sent all logs to AWS CloudWatch Logs, and created the CloudWatch log groups per environment (a small sketch follows the link below).

https://medium.com/tensult/to-send-linux-logs-to-aws-cloudwatch-17b3ea5f4863
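
If you want one log group per environment, you can pre-create the groups and set a retention period so logs from terminated instances stick around. A small sketch with boto3; the group names are placeholders:

    import boto3

    logs = boto3.client("logs")

    # Hypothetical per-environment group names; use your own naming scheme.
    for env in ("dev", "staging", "prod"):
        group = f"/myapp/{env}/nginx-access"
        try:
            logs.create_log_group(logGroupName=group)
        except logs.exceptions.ResourceAlreadyExistsException:
            pass  # group was created earlier
        # Keep 30 days of logs so terminated instances' logs remain available.
        logs.put_retention_policy(logGroupName=group, retentionInDays=30)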

Another way is to configure AWS lifecycle hooks, where you set up a hook and ship the logs based on the EC2 instance states below (a sketch follows the CLI reference).

autoscaling:EC2_INSTANCE_LAUNCHING
autoscaling:EC2_INSTANCE_TERMINATING

https://docs.aws.amazon.com/cli/latest/reference/autoscaling/put-lifecycle-hook.html
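
The linked CLI command has a direct boto3 equivalent. A rough sketch of registering a termination hook; the group name, queue ARN, and role ARN are placeholders. The hook holds the instance in a Terminating:Wait state so something (for example, a script triggered by the notification) has time to copy the logs off before the instance disappears:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # All names and ARNs below are placeholders for illustration.
    autoscaling.put_lifecycle_hook(
        LifecycleHookName="copy-logs-before-terminate",
        AutoScalingGroupName="my-asg",
        LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
        NotificationTargetARN="arn:aws:sqs:us-east-1:123456789012:asg-log-hook",
        RoleARN="arn:aws:iam::123456789012:role/asg-lifecycle-hook-role",
        HeartbeatTimeout=300,      # seconds the instance stays in Terminating:Wait
        DefaultResult="CONTINUE",  # let termination proceed if nothing responds
    )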

You can also use AWS EFS (Elastic File System): create an EFS file system, mount it on the Auto Scaling instances with AWS user data (bootstrap), and point your web server's log path at the mounted EFS directory.
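
A rough sketch of wiring that into the user data of a launch template with boto3; the file system ID, AMI ID, and paths are placeholders, and it assumes the AMI can install amazon-efs-utils:

    import base64
    import boto3

    ec2 = boto3.client("ec2")

    # Mount a (placeholder) EFS file system at boot; configure the web server
    # to write its logs under /var/log/myapp afterwards.
    user_data = """#!/bin/bash
    yum install -y amazon-efs-utils
    mkdir -p /var/log/myapp
    mount -t efs fs-12345678:/ /var/log/myapp
    """

    ec2.create_launch_template(
        LaunchTemplateName="asg-efs-logs",           # placeholder name
        LaunchTemplateData={
            "ImageId": "ami-0123456789abcdef0",      # placeholder AMI
            "InstanceType": "t3.micro",
            "UserData": base64.b64encode(user_data.encode()).decode(),
        },
    )

Since every instance sees the same shared directory, include the instance ID or hostname in the log file names so instances do not overwrite each other's logs.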

One more option is s3fs, which was already mentioned above (but keep in mind that AWS does not provide support for third-party tools like s3fs).

Upvotes: 1

Markus

Reputation: 1

If you go with S3 for the log files, which I would suggest if you expect a sizable volume of them, then use EMR to churn through the log files on S3, either on demand or as a scheduled job via AWS Data Pipeline.
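
For the EMR part, a rough sketch with boto3 of a transient cluster that runs a Spark job over the logs in S3 and shuts down afterwards; the bucket, job script, and instance types are placeholders:

    import boto3

    emr = boto3.client("emr", region_name="us-east-1")  # assumed region

    # All names, buckets, and the job script below are placeholders.
    emr.run_job_flow(
        Name="log-crunch",
        ReleaseLabel="emr-6.10.0",
        Applications=[{"Name": "Spark"}],
        Instances={
            "MasterInstanceType": "m5.xlarge",
            "SlaveInstanceType": "m5.xlarge",
            "InstanceCount": 3,
            "KeepJobFlowAliveWhenNoSteps": False,  # terminate when the step finishes
        },
        Steps=[{
            "Name": "parse-access-logs",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": [
                    "spark-submit",
                    "s3://my-log-bucket/jobs/parse_logs.py",  # placeholder job script
                    "s3://my-log-bucket/raw/",                # input log files
                    "s3://my-log-bucket/parsed/",             # output location
                ],
            },
        }],
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )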

Upvotes: 0

Kyle Wild

Reputation: 8915

Here's a thread on the AWS developer forums with some suggestions:

https://forums.aws.amazon.com/message.jspa?messageID=183672

Because you're using Auto Scaling, I assume that the NFS and syslog approaches wouldn't be high-availability enough to handle your log load.

The consensus from that thread is that S3 is the best bet for guaranteeing storage. If you go that route, processing/searching your logs could become a bit of a chore.

One creative option would be to create a MongoDB server/cluster, perhaps exposed via a simple webservice, to aggregate the massive influx of log entries from your n app servers. I've used MongoDB for storage and analysis of some pretty huge metrics/transactions data sets (in the tens to hundreds of millions of records per day), and it has performed admirably.
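
For illustration, the aggregation side of that can be very small. A sketch with pymongo; the hostname, database, and field names are all made up:

    from datetime import datetime, timezone

    from pymongo import MongoClient

    # Placeholder connection string; point it at your MongoDB server/cluster.
    client = MongoClient("mongodb://logs.internal.example.com:27017")
    collection = client["logging"]["app_logs"]

    # Index the timestamp so time-range queries across all app servers stay fast.
    collection.create_index("ts")

    def write_log_entry(instance_id: str, level: str, message: str) -> None:
        """Store one log line sent up by an app server."""
        collection.insert_one({
            "ts": datetime.now(timezone.utc),
            "instance_id": instance_id,
            "level": level,
            "message": message,
        })

    write_log_entry("i-0abc123def456", "INFO", "request handled in 42 ms")

Exposing write_log_entry behind a simple web service (or having the app servers write directly) gives you a single queryable store that survives instance churn.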

Upvotes: 5
