Reputation: 1
I'm using the Change Deployment Configuration operation for auto-scaling. I'm not calling the REST API directly; instead I'm using the Microsoft sample library Microsoft.Samples.WindowsAzure.ServiceManagement.dll
in the way described here: http://blog.maartenballiauw.be/post/2011/03/21/Windows-Azure-and-scaling-how-(NET).aspx.
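For context, the scaling change itself is just an edit to the Instances count in the deployment's ServiceConfiguration.cscfg, which is then submitted through the change-configuration call as in the blog post. A minimal sketch of that edit is below; the SetInstanceCount helper and roleName parameter are illustrative and not part of the sample library, only the .cscfg namespace and element names come from the service configuration schema.

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

class ScaleConfig
{
    // XML namespace used by ServiceConfiguration.cscfg files.
    static readonly XNamespace Ns =
        "http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration";

    // Change the Instances count for one role in the service configuration XML.
    // The resulting XML is what gets submitted with Change Deployment Configuration.
    static string SetInstanceCount(string configurationXml, string roleName, int instanceCount)
    {
        var doc = XDocument.Parse(configurationXml);

        var instances = doc.Root
            .Elements(Ns + "Role")
            .Where(r => (string)r.Attribute("name") == roleName)
            .Select(r => r.Element(Ns + "Instances"))
            .Single();

        instances.SetAttributeValue("count", instanceCount);
        return doc.ToString();
    }
}
```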
The issue is: I would expect that when I scale my service up, the current instances stay untouched and only new ones start. Instead, it looks like the instances that are already running are also updated and become temporarily unavailable. This is most visible when I scale from 1 to 2 instances - there is a period when the service is not available at all.
Does anybody have a recommendation for how to scale up or down while guaranteeing that the current instances are not touched?
Upvotes: 0
Views: 348
Reputation: 66882
I think the problem you are seeing is probably caused by not handling the Changing event on RoleEnvironment - http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleenvironment.changing.aspx
If you handle this event and ensure the Cancel flag is not set, then I think your existing instances will remain alive.
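As a minimal sketch (assuming a standard web/worker role entry point), the handler just needs to leave Cancel at its default so the change is applied without restarting the instance:

```csharp
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Handle the Changing event so configuration and topology changes
        // (such as adding instances) can be accepted in place.
        RoleEnvironment.Changing += RoleEnvironmentChanging;
        return base.OnStart();
    }

    private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
    {
        // Adding or removing instances arrives as a RoleEnvironmentTopologyChange;
        // configuration edits arrive as RoleEnvironmentConfigurationSettingChange.
        // The default Visual Studio template sets e.Cancel = true for setting
        // changes, which recycles (restarts) the instance. Leaving Cancel at its
        // default of false lets the change apply while the instance stays online.
        e.Cancel = false;
    }
}
```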
Update - see http://social.msdn.microsoft.com/Forums/en-IE/windowsazuretroubleshooting/thread/3a29e642-f5e1-4712-a93c-687e4032b816
Upvotes: 1