Reputation: 3432
Every night we run a full cube process through an SSIS job that executes this script:
<Batch>
  <Process>
    <Type>ProcessFull</Type>
    <Object>
      <DatabaseID>OurDatabase</DatabaseID>
    </Object>
  </Process>
</Batch>
After adding a measure and processing, the cube now shows measure values that are inflated far beyond what they should be. A measure that should read 11.8 million now reads 684 million. The underlying data is accurate, but the aggregations are not. I can see no pattern to how the numbers are inflated.
However, if I redeploy the cube via XMLA with a full process attached to the Alter, it works fine. I would rather not have to do this by hand every morning at 1 AM, so any ideas would be helpful.
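For reference, the manual workaround is essentially an Alter that redeploys the database definition followed by a ProcessFull in the same batch. A minimal sketch, assuming the standard XMLA engine namespace and with the full database definition (as generated by the Deployment Wizard) left as a placeholder:

<Batch Transaction="true" xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- Redeploy the database definition (placeholder; the actual
       Database element comes from our deployment script) -->
  <Alter AllowCreate="true" ObjectExpansion="ExpandFull">
    <Object>
      <DatabaseID>OurDatabase</DatabaseID>
    </Object>
    <ObjectDefinition>
      <!-- full <Database> definition goes here -->
    </ObjectDefinition>
  </Alter>
  <!-- Then fully reprocess the redeployed database -->
  <Process>
    <Type>ProcessFull</Type>
    <Object>
      <DatabaseID>OurDatabase</DatabaseID>
    </Object>
  </Process>
</Batch>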
It should also be noted that we rolled back to the previous cube schema and still have this problem. We have also tried restarting the SSAS service in production with no success. This problem cannot be recreated in any other environment.
Upvotes: 0
Views: 154
Reputation: 5999
Do you have any partitions that could have inaccurate filters, resulting in multiple partitions reading in the same data?
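To illustrate what that looks like, here is a hypothetical partition bound to a query (object and column names are made up): if the WHERE ranges of two such partitions overlap, the same fact rows are loaded into both partitions and the measure totals are double-counted even though the source data is correct.

<!-- Hypothetical partition; a second partition whose QueryDefinition
     range overlaps this one would re-read the same rows. -->
<Partition xmlns="http://schemas.microsoft.com/analysisservices/2003/engine"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <ID>FactSales 2022</ID>
  <Name>FactSales 2022</Name>
  <Source xsi:type="QueryBinding">
    <DataSourceID>OurDataSource</DataSourceID>
    <QueryDefinition>
      SELECT * FROM dbo.FactSales
      WHERE OrderDateKey &gt;= 20220101 AND OrderDateKey &lt; 20230101
    </QueryDefinition>
  </Source>
</Partition>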
Alternatively, have you tried:
<Batch Transaction="true" ProcessAffectedObjects="true">
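i.e. wrapping the same ProcessFull command from the question in a single transaction and letting the server also reprocess affected objects. A minimal sketch, assuming the standard XMLA engine namespace:

<Batch Transaction="true" ProcessAffectedObjects="true"
       xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Process>
    <Type>ProcessFull</Type>
    <Object>
      <DatabaseID>OurDatabase</DatabaseID>
    </Object>
  </Process>
</Batch>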
Upvotes: 1