Reputation: 875
I have about 28GB of Data-In for a little over 13.5 million rows stored in Windows Azure Table Storage.
6 columns, all ints except 1 decimal and 1 datetime. The PartitionKey is about 10 characters long; the RowKey is a GUID.
This is just a sanity check: does this seem about right?
The SQL database I migrated the data from holds WAY more data and is only 4.9 GB.
Is there a way to condense the size? I don't suspect renaming properties will put much of a dent in it.
*Note: this was only a sampling of the data, used to estimate costs for the long haul.
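For reference, the back-of-the-envelope arithmetic behind those numbers looks like this (a minimal sketch; the figures are the ones above and the names are just illustrative):

```python
# Observed storage per row, using the figures above.
total_bytes = 28 * 1024**3        # ~28 GB reported for the table
row_count = 13_500_000            # a little over 13.5 million rows

bytes_per_row = total_bytes / row_count
print(f"observed: ~{bytes_per_row:.0f} bytes per row")  # roughly 2,200 bytes
```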
Upvotes: 0
Views: 420
Reputation: 71118
Well... something doesn't seem to add up right.
Your numbers are approximately an order of magnitude larger than I'd expect (about 2,000 bytes per entity). Even accounting for serialization overhead, I don't see how you're getting such a large size. Just curious: how did you compute the current table size? Have you run multiple tests, leaving data behind from previous runs? And are you measuring just the table size, or the total storage used in the storage account? If the latter, there may be other tables (such as diagnostics) also consuming space.
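For comparison, here's a rough sketch using the commonly cited per-entity size estimate for Azure Table Storage (4 bytes, plus twice the combined key length, plus per-property overhead). The property-name lengths and the decimal-stored-as-double assumption are mine, since they aren't given in the question:

```python
# Rough per-entity size estimate, based on the commonly cited billing formula:
#   4 bytes
#   + (len(PartitionKey) + len(RowKey)) * 2 bytes
#   + for each property: 8 bytes + len(property name) * 2 bytes + size of value
# Property-name lengths below are assumptions for illustration.

pk_len = 10                  # "about 10 characters"
rk_len = 36                  # a GUID stored as a string RowKey

# (name length, value size in bytes) for the 6 properties described:
# 4 ints (4 bytes each), 1 decimal assumed stored as a double (8), 1 datetime (8)
properties = [(10, 4), (10, 4), (10, 4), (10, 4), (10, 8), (10, 8)]

entity_bytes = 4 + (pk_len + rk_len) * 2
for name_len, value_bytes in properties:
    entity_bytes += 8 + name_len * 2 + value_bytes

print(f"estimated: ~{entity_bytes} bytes per entity")                      # ~300 bytes
print(f"estimated total: ~{entity_bytes * 13_500_000 / 1024**3:.1f} GB")   # ~3.7 GB
```

That lands roughly an order of magnitude below the ~2,000 bytes per entity implied by the 28 GB figure, which is why something doesn't add up.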
Upvotes: 1
Reputation: 15861
Renaming the properties on persisted entities should have some impact on the size, since each entity stores its own property names. Unfortunately, that will only apply to data saved in the future; existing data does not change just because you've renamed the properties.
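To put a rough number on it, here's a sketch of the savings from shorter property names; the current 10-character name length is an assumption:

```python
# Each entity stores its property names, so a shorter name saves
# 2 bytes per character removed, per property, per entity.
old_name_len = 10     # assumed current average property-name length
new_name_len = 2      # e.g. rename to a 2-character abbreviation
properties = 6
rows = 13_500_000

savings_per_entity = (old_name_len - new_name_len) * 2 * properties
total_gb = savings_per_entity * rows / 1024**3
print(f"~{savings_per_entity} bytes per entity, ~{total_gb:.1f} GB across all rows")
```

Under those assumptions that's on the order of 100 bytes per entity, and again, only for entities written after the rename.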
Upvotes: 0