Reputation: 113
I have a fairly long report of about 20 pages, mainly charts (about 40), all using a subsample of the same dataset. This "master" report is iterated about 200 times by passing a parameter with 200 different values.
I was wondering whether there is a best practice for such a case in terms of the number of RDL files and datasets. Here are the options:
I see an advantage in option 1, as the shared dataset can be cached on the server, making report generation much faster after the first iteration, but I'm open to other approaches that might have other advantages.
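For what it's worth, option 1 boils down to each report dataset holding a SharedDataSetReference instead of its own query. A minimal RDL sketch, where the item path /Datasets/MasterData and the parameter EntityId are made-up placeholders:

```xml
<!-- Report-side dataset (in the .rdl) that points at a shared dataset
     published on the server. All names and paths are placeholders. -->
<DataSet Name="MasterData">
  <SharedDataSet>
    <SharedDataSetReference>/Datasets/MasterData</SharedDataSetReference>
    <QueryParameters>
      <!-- Forward the per-iteration report parameter to the shared dataset -->
      <QueryParameter Name="EntityId">
        <Value>=Parameters!EntityId.Value</Value>
      </QueryParameter>
    </QueryParameters>
  </SharedDataSet>
  <Fields>
    <Field Name="Amount">
      <DataField>Amount</DataField>
    </Field>
  </Fields>
</DataSet>
```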
Upvotes: 0
Views: 236
Reputation: 3389
I think there is no definitive solution to this; it all depends on several factors. For example:
The caching problem is easy to solve, I think: if you don't need your data refreshed every hour or minute, you can cache it on a daily basis.
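In Report Manager that is just the shared dataset's cache/refresh settings; programmatically it corresponds to SetCacheOptions on the ReportingService2010 SOAP endpoint. A rough sketch of the request body, assuming a placeholder item path and a 02:00 daily schedule (the exact SOAP serialization may differ):

```xml
<!-- Sketch of a SetCacheOptions call: cache the shared dataset and
     expire the cache once a day. Item path and start time are placeholders. -->
<SetCacheOptions
    xmlns="http://schemas.microsoft.com/sqlserver/reporting/2010/03/01/ReportServer"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <ItemPath>/Datasets/MasterData</ItemPath>
  <CacheItem>true</CacheItem>
  <Item xsi:type="ScheduleExpiration">
    <ScheduleDefinition>
      <StartDateTime>2015-01-01T02:00:00</StartDateTime>
      <DailyRecurrence>
        <DaysInterval>1</DaysInterval>
      </DailyRecurrence>
    </ScheduleDefinition>
  </Item>
</SetCacheOptions>
```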
I always prefer one dataset, because when I make changes I only have to make them once, in one place. On the other hand, I have had datasets with millions of rows; for the sake of performance I had to split such huge datasets into smaller pieces with subreports (etc.). That improved performance, but it is a pain in the *** when you have to change something.
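For the subreport route, each piece is simply a Subreport element in the parent RDL that forwards the iteration parameter; a minimal sketch with placeholder names:

```xml
<!-- Parent-report RDL: embed a child report and forward the
     per-iteration parameter. Report path and names are placeholders. -->
<Subreport Name="ChartsBlock">
  <ReportName>/Reports/ChartsBlock</ReportName>
  <Parameters>
    <Parameter Name="EntityId">
      <Value>=Parameters!EntityId.Value</Value>
    </Parameter>
  </Parameters>
</Subreport>
```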
So I think your situation will tell you which of your 4 options to pick.
Upvotes: 1