Reputation: 49
I am in a confusing situation: I develop plug-ins for WordPress and push them to my git repository. The WordPress site runs on AWS, and for every push to git I have to create a new environment with Elastic Beanstalk.
Once I push to git, I first create a DEV environment and pull the changes that I want to push to production. For example, I have changes c1, c2, c3, c4, c5 and I want to push c1, c2, c3. I pull those changes and create the DEV environment, then create the TEST environment to test. Once that passes, I create the UAT (customer test) environment. Let's say the customer did not like c3 and asked us to push only c1 and c2. In that case I have to recreate the DEV, TEST and UAT environments and retest, because removing c3 might affect other code as well. I have to send the code to UAT again because at that point I have repackaged it, and therefore it needs a new UAT.
I am looking for a way to reduce the number of times I send the same code to UAT. Technically, I am not supposed to send the same code to UAT again.
I was thinking about pushing each change individually rather than packaging them together; this would take away the redundancy in UAT, but it would add more work for the test team, which would lead to a bottleneck.
P.S. I cannot create automated tests, because the changes are mostly about graphics and visuals, and there are thousands of pages to test; it just doesn't make sense to write test scripts for everything. Are there any suggestions?
Upvotes: 0
Views: 42
Reputation: 39824
Technically you're not sending the same code to UAT: after the c3 rejection you're sending back c1+c2, not c1+c2+c3 - not the same code.
Unfortunately, with an integration solution based on post-commit verification there is no truly deterministic way to minimize the number of UAT submissions, because you have no way of knowing in advance which commit will cause a UAT rejection.
As you noticed, the most predictable way of moving forward is also the most costly: running UAT for every change. The only way to reduce UAT submissions is to submit multiple changes bundled together - the larger the bundle, the fewer UAT submissions. But this creates a conflict: the chances of failing UAT also increase with the bundle size, and so does the number of bisection retries required to identify the culprit (that's assuming only one culprit per bundle; if there are several, it's even worse).
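To make that bisection cost concrete, here's a minimal sketch, assuming a single culprit per bundle; `uat_accepts()` is a hypothetical stand-in for running one (manual) UAT round on a subset of the changes:

```python
# Minimal sketch: bisect a rejected bundle to find the culprit change.
# Assumes a single culprit per bundle; uat_accepts() is a hypothetical
# wrapper around one (manual) UAT round for the given subset.
def find_culprit(changes, uat_accepts):
    while len(changes) > 1:
        half = len(changes) // 2
        first = changes[:half]
        # Re-test the first half; keep whichever half UAT still fails on.
        changes = changes[half:] if uat_accepts(first) else first
    return changes[0]

# Each uat_accepts() call costs one UAT round, so a rejected bundle of
# size n needs roughly log2(n) extra rounds to isolate the culprit.
culprit = find_culprit(["c1", "c2", "c3", "c4"], lambda cs: "c3" not in cs)
print(culprit)  # -> c3
```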
I'd run an analysis on the UAT submissions over the most recent 2-4 weeks or so and determine at what bundle size the probability of a UAT rejection reaches something like 30-50%, then pick the largest power of 2 below that value (which makes the bisection you'd need to perform on failure easier). Say, for example, the analysis suggests a value of 5; then pick 4 as the bundle size.
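As a rough sketch of that analysis - the history data, threshold and names below are illustrative assumptions, not your actual UAT records:

```python
# Minimal sketch: pick a bundle size from recent UAT history.
# Assumes you can export each UAT submission as a (bundle_size, rejected)
# pair; the sample data and threshold here are illustrative only.
from collections import defaultdict

history = [(1, False), (2, False), (3, False), (4, True),
           (4, False), (5, True), (6, True), (2, False)]

REJECTION_THRESHOLD = 0.4  # somewhere in the 30-50% band

# Observed rejection rate at each bundle size.
totals, rejects = defaultdict(int), defaultdict(int)
for size, rejected in history:
    totals[size] += 1
    rejects[size] += rejected

# Smallest bundle size whose rejection rate reaches the threshold
# (fall back to the largest observed size if none does).
risky = min((s for s in totals if rejects[s] / totals[s] >= REJECTION_THRESHOLD),
            default=max(totals))

# Largest power of 2 strictly below the risky size (eases later bisection).
bundle_size = 1
while bundle_size * 2 < risky:
    bundle_size *= 2

print(f"risky size: {risky}, suggested bundle size: {bundle_size}")
```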
If you don't have enough changes to fill the bundle, I'd again suggest picking the largest power of 2 and leaving the rest for the next bundle - other changes may be merged in the meantime and maybe you can fill that next bundle. The exception is when you already know about dependencies between changesets that require them to travel together in the same bundle, which may land you between the preferred values. It's up to you whether you pick up those dependent changes (higher risk) or leave all of them for the next bundle.
You should also keep monitoring the bundle size vs. chance of UAT rejection trend (both the product and UAT evolve, things change) - you may need to adjust the preferred bundle size from time to time.
Side comment: you can always build some custom UAT wrapper script(s) to make it appear as an automated test that you can hook into a CI/CD pipeline. Only it'll have nondeterministic queue-wait and/or execution times. And, if its execution is indeed manual, it can also be less reliable.
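A minimal sketch of such a wrapper, assuming the UAT team records its verdict somewhere the pipeline can poll - the file path, verdict format and timeout below are all hypothetical:

```python
# Minimal sketch of a UAT "wrapper" a CI/CD pipeline could run as if it
# were a test step. The verdict file and polling scheme are assumptions;
# in practice you'd poll whatever system your UAT team signs off in.
import sys
import time
from pathlib import Path

VERDICT_FILE = Path("/shared/uat/verdict.txt")  # hypothetical location
POLL_SECONDS = 60
TIMEOUT_SECONDS = 8 * 60 * 60  # queue/execution time is unpredictable

deadline = time.time() + TIMEOUT_SECONDS
while time.time() < deadline:
    if VERDICT_FILE.exists():
        verdict = VERDICT_FILE.read_text().strip().lower()
        # Exit code 0 = "test" passed, non-zero = failed, like any CI step.
        sys.exit(0 if verdict == "accepted" else 1)
    time.sleep(POLL_SECONDS)

sys.exit(1)  # no verdict in time: treat as a failure
```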
Upvotes: 1