Reputation: 1
I have a problem with the data size of a single document, and I am not sure whether my data model is right. Here is the situation: I have 25 servers, 200 counters per server, and 1440 data points per day (one per minute). My goal is to store the daily counter data of each server, but query performance is important to me.
My first data model (one document per server):
{ "_id":Object(....),
"serverId":0
"counters": [
{ "counterId":0 , "inserted": ISODate(...) , "counterValue":122.2131 , "min":121 , "max":33333} ,
.
.1440 times "counterId":0
.
{ "counterId":1 , "inserted": ISODate(...) , "counterValue":122.2131 , "min":121 , "max":33333} ,
.
.1440 times "counterId":1
.
.
. 200 Counters
.
{ "counterId":200 , "inserted": ISODate(...) , "counterValue":122.2131 , "min":121 , "max":33333}
.
.1440 times "counterId":200
.
] },
.
.
.25 Server
.
.
{ "_id":Object(....),
"serverId":25
"counters": [
{ "counterId":0 , "inserted": ISODate(...) , "counterValue":122.2131 , "min":121 , "max":33333} ,
.
.1440 times "counterId":0
.
{ "counterId":1 , "inserted": ISODate(...) , "counterValue":122.2131 , "min":121 , "max":33333} ,
.
.1440 times "counterId":1
.
.
. 200 Counters
.
{ "counterId":200 , "inserted": ISODate(...) , "counterValue":122.2131 , "min":121 , "max":33333}
.
.1440 times "counterId":200
.
] },
Problems:
- Aggregation framework query performance: computing the average counterValue of the counters for one day takes 12 seconds.
- Data size: modelling it this way is simply not possible; the document hit the 16 MB BSON limit when I tried.
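The daily-average query I run against this model looks roughly like the following (a minimal sketch in the mongo shell; the collection name serverCounters and the exact dates are illustrative):

// Sketch: average counterValue per counter for one day, first model.
db.serverCounters.aggregate([
  // One array element per counter per minute after $unwind.
  { $unwind: "$counters" },
  { $match: { "counters.inserted": { $gte: ISODate("2013-03-21T00:00:00Z"),
                                     $lt: ISODate("2013-03-22T00:00:00Z") } } },
  { $group: { _id: "$counters.counterId",
              avgValue: { $avg: "$counters.counterValue" } } }
])

The $unwind stage has to expand all 200 x 1440 array elements of every server document before anything can be filtered, which is presumably where the time goes.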
My second data model (one document per counter):
{ "_id":Object(....),
"counterId":0
"servers": [
{ "serverId":0, "inserted": ISODate(...) , "counterValue":122.2131 , "min":121 , "max":33333} ,
.
.1440 times "serverId":0
.
{ "serverId":1 , "inserted": ISODate(...) , "counterValue":122.2131 , "min":121 , "max":33333} ,
.
.1440 times "serverId":1
.
.
. 25 Server
.
{ "serverId":25 , "inserted": ISODate(...) , "counterValue":122.2131 , "min":121 , "max":33333}
.
.1440 times "serverId":200
.
] },
.
.
.200 Counter
.
.
{ "_id":Object(....),
"counterId":200
"servers": [
{ "serverId":0, "inserted": ISODate(...) , "counterValue":122.2131 , "min":121 , "max":33333} ,
.
.1440 times "serverId":0
.
{ "serverId":1 , "inserted": ISODate(...) , "counterValue":122.2131 , "min":121 , "max":33333} ,
.
.1440 times "serverId":1
.
.
. 25 Server
.
{ "serverId":25 , "inserted": ISODate(...) , "counterValue":122.2131 , "min":121 , "max":33333}
.
.1440 times "serverId":200
.
] },
Problems:
- Aggregation framework query performance: computing the average counterValue of the counters for one day takes 7 seconds.
- Data size: the document hit the 16 MB BSON limit when I tried, though it works when I split each document into two pieces.
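The equivalent sketch for this model just unwinds the servers array instead (again, the collection name is illustrative):

// Sketch: same daily average, second model (one document per counter).
db.counterServers.aggregate([
  { $unwind: "$servers" },
  { $match: { "servers.inserted": { $gte: ISODate("2013-03-21T00:00:00Z"),
                                    $lt: ISODate("2013-03-22T00:00:00Z") } } },
  // Each document already represents one counter, so group on counterId.
  { $group: { _id: "$counterId",
              avgValue: { $avg: "$servers.counterValue" } } }
])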
My third data model (one document per sample):
{ "_id" : ObjectId("515921e3bbea58d25eb07b22"), "serverId" : 0, "counterId" : 0, "inserted" : ISODate("2013-03-21T00:26:30Z"), "counterValue" : 0.03256159112788737, "min" : -29.967438408872113, "max" : 20.032561591127887 }
... 7 billion documents in total ...
Problems:
- Aggregation framework query performance: computing the average counterValue of the counters for one day takes 7 seconds.
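With the flat model the pipeline needs no $unwind at all (sketch; the collection name events is illustrative):

// Sketch: daily average per counter and server, third model.
db.events.aggregate([
  // Select one day's worth of samples.
  { $match: { inserted: { $gte: ISODate("2013-03-21T00:00:00Z"),
                          $lt: ISODate("2013-03-22T00:00:00Z") } } },
  // Drop serverId from the _id to average across servers instead.
  { $group: { _id: { counterId: "$counterId", serverId: "$serverId" },
              avgValue: { $avg: "$counterValue" } } }
])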
So I cannot decide which data model I should use.
Thanks.
Upvotes: 0
Views: 88
Reputation: 17589
Not sure why you are choosing MongoDB here. I would use your third option, but it lacks indexes: add indexes on the serverId and counterId fields and you'll get much better times.
Also, you will probably end up having separate collections for reads and writes, i.e.:
- Use one collection as you do in your third option.
- Once an hour/day/week (most likely once a day), run a query to get the aggregated data and put it into another collection, also with indexes, which you will query from then on. After that, remove the raw data if you don't need it anymore. (See the sketch just below.)
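A minimal sketch of such a daily roll-up job in the mongo shell, assuming the flat events collection from your third option; the dailyStats collection name is my own:

// Roll one day of raw samples up into a pre-aggregated collection.
var day = ISODate("2013-03-21T00:00:00Z");
var next = ISODate("2013-03-22T00:00:00Z");
db.events.aggregate([
  { $match: { inserted: { $gte: day, $lt: next } } },
  { $group: { _id: { counterId: "$counterId", serverId: "$serverId" },
              avgValue: { $avg: "$counterValue" },
              min: { $min: "$min" },
              max: { $max: "$max" } } }
]).forEach(function (doc) {  // on pre-2.6 shells, iterate the .result array instead
  db.dailyStats.insert({ day: day,
                         counterId: doc._id.counterId,
                         serverId: doc._id.serverId,
                         avgValue: doc.avgValue,
                         min: doc.min,
                         max: doc.max });
});
// Index the read-side collection too, as suggested above.
db.dailyStats.ensureIndex({ day: 1, counterId: 1, serverId: 1 });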
Here is the help page on indexes and the FAQ.
In your case it is going to be:
db.events.ensureIndex({ counterId: 1, serverId: 1, inserted: 1 })
(Note that the values in an index specification are sort directions, 1 or -1; the order of the keys is what sets their precedence.)
You may play with the order of the fields in the index, as it may lead to different results.
Also, if you are going to aggregate once an hour as I said, you'd better put the timestamp as the first indexed field:
db.events.ensureIndex({ inserted: 1, counterId: 1, serverId: 1 })
NOTE: obviously, adding indexes will improve the speed of reading data from the collection, but will slow down writing data to it.
Upvotes: 1