Reputation: 340
I started working with Amazon CloudWatch Logs. The question is: does AWS use Glacier or S3 to store the logs? They use Kinesis to process the logs with filters. Can anyone please tell me the answer?
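For context, the filtering I am referring to is a CloudWatch Logs subscription filter that forwards matching log events to a Kinesis stream. A minimal sketch with boto3, assuming the log group, Kinesis stream, and IAM role already exist (all names below are placeholders, not real resources):

```python
import boto3

logs = boto3.client("logs")

# Forward log events that match the filter pattern to an existing Kinesis stream.
# Log group name, stream ARN, and role ARN are made-up placeholders.
logs.put_subscription_filter(
    logGroupName="/my/app/log-group",
    filterName="forward-errors-to-kinesis",
    filterPattern="ERROR",  # only events containing "ERROR" are forwarded
    destinationArn="arn:aws:kinesis:us-east-1:123456789012:stream/my-log-stream",
    roleArn="arn:aws:iam::123456789012:role/CWLtoKinesisRole",
)
```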
Upvotes: 1
Views: 3224
Reputation: 44132
AWS is likely to use S3, not Glacier.
Glacier would cause problems if you wanted to access older logs: retrieving data stored in Amazon Glacier can take a few hours, and that is definitely not the reaction time one expects from a CloudWatch log-analysis solution.
Also, the price for storing 1 GB of ingested logs seems to be derived from 1 GB stored on Amazon S3. The S3 price for 1 GB stored per month is 0.03 USD, and the price for storing 1 GB of logs per month is also 0.03 USD.
On the CloudWatch pricing page there is a note:
"Data archived by CloudWatch Logs includes 26 bytes of metadata per log event and is compressed using gzip level 6 compression. Archived data charges are based on the sum of the metadata and compressed log data size."
According to Henry Hahn's (AWS) presentation on CloudWatch, it is "3 cents per GB and we compress it," ... "so you get 3 cents per 10 GB".
This makes me believe they store it on Amazon S3.
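A back-of-the-envelope check of that pricing claim (a sketch only; the ~10:1 gzip ratio is my assumption based on the "3 cents per 10 GB" quote, while the 26-byte metadata overhead and the 0.03 USD/GB-month price come from the pricing note above):

```python
# Rough estimate of CloudWatch Logs archival cost per month.
GZIP_RATIO = 0.10          # assumed ~10:1 compression for text logs (my assumption)
METADATA_BYTES = 26        # per log event, per the CloudWatch pricing note
PRICE_PER_GB_MONTH = 0.03  # USD, same as the S3 storage price

def monthly_archive_cost(raw_bytes, num_events):
    """Estimate monthly archived-storage cost for a batch of log events."""
    archived_bytes = raw_bytes * GZIP_RATIO + num_events * METADATA_BYTES
    return archived_bytes / 1024**3 * PRICE_PER_GB_MONTH

# Example: 10 GB of raw logs made up of 10 million events
print(monthly_archive_cost(10 * 1024**3, 10_000_000))  # ~0.037 USD, in the ballpark of "3 cents per 10 GB"
```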
Upvotes: 1
Reputation: 1
They are probably using DynamoDB. S3 (and Glacier) would not be a good fit for files that are appended to very frequently.
Upvotes: -1