sviklim

Reputation: 1064

Execute Spock Test in Different Environments

I have a Selenium test which is executed with the help of the Spock framework. In general it looks like this:

import org.openqa.selenium.By
import org.openqa.selenium.Capabilities
import org.openqa.selenium.WebDriver
import org.openqa.selenium.remote.RemoteWebDriver
import spock.lang.Specification

class SeleniumSpec extends Specification {
    URL remoteAddress // Address of SE grid
    Capabilities caps // Desired capabilities
    WebDriver driver  // Web driver

    def setup() {
        // a fresh RemoteWebDriver session is created before each feature method
        driver = new RemoteWebDriver(remoteAddress, caps)
    }

    def "some test"() {
        expect:
        driver.findElement(By.cssSelector("p.someParagraph")).text == 'Some text'
    }
    // other tests go here ...
}

The point here is that my specification describes the behavior of some component (in most cases, web views/pages). So the methods are expected to implement some business-related logic (something like 'click on a button and expect a message in another field'). But another thing I would like to verify is that the behavior is exactly the same in all browsers (capabilities).

To achieve this in an 'ideal' world, I'd like to have a mechanism to specify that a particular test class should be run several times, but with different parameters. For now, though, I only see the ability to apply data sets to a single method.
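
For context, this is the kind of per-method data-driven testing Spock supports out of the box; a minimal, generic sketch (not tied to the Selenium setup above):

import spock.lang.Specification
import spock.lang.Unroll

class DataDrivenSpec extends Specification {

    // the where: block parameterizes only this single feature method;
    // there is no built-in way to parameterize the whole specification class
    @Unroll
    def "maximum of #a and #b is #c"() {
        expect:
        Math.max(a, b) == c

        where:
        a | b || c
        1 | 3 || 3
        7 | 4 || 7
    }
}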

So far I have come up with only a few ideas to implement this (based on my current knowledge of the Spock framework):

  1. Use a list of drivers and execute each action over all list members. So each call to 'driver' would be replaced with a 'drivers.each { it }' invocation. This approach, on the other hand, makes it hard to discover exactly which of the drivers failed the test.
  2. Use method parameters and data sets to initialize a fresh copy of the web driver on each iteration (see the sketch after this list). This approach seems more in line with the Spock philosophy, but it requires the heavy operation of driver and web application initialization to be performed every time. It also removes the ability to perform 'step-by-step' testing, since the driver's state won't be preserved between test methods.
  3. A combination of these approaches, where drivers are kept in a map, and each test invocation specifies the exact name of the driver to be used.
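
A minimal sketch of option 2, assuming a Selenium grid at a placeholder URL and a Selenium version that provides the DesiredCapabilities factory methods (both are assumptions, not taken from the original setup); every iteration pays for a fresh RemoteWebDriver session:

import org.openqa.selenium.By
import org.openqa.selenium.WebDriver
import org.openqa.selenium.remote.DesiredCapabilities
import org.openqa.selenium.remote.RemoteWebDriver
import spock.lang.Specification
import spock.lang.Unroll

class CrossBrowserSpec extends Specification {

    URL remoteAddress = new URL('http://localhost:4444/wd/hub') // placeholder grid address

    @Unroll
    def "paragraph text is correct in #caps.browserName"() {
        given: 'a fresh driver for this iteration'
        WebDriver driver = new RemoteWebDriver(remoteAddress, caps)

        expect:
        driver.findElement(By.cssSelector("p.someParagraph")).text == 'Some text'

        cleanup:
        driver.quit()

        where:
        caps << [DesiredCapabilities.firefox(), DesiredCapabilities.chrome()]
    }
}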

I'd appreciate it if anybody who has encountered this case could share ideas on how to properly organize the testing process. Other approaches, or the pros and cons of those above, are also welcome. Considering another test tool could be an option as well.

Upvotes: 0

Views: 593

Answers (1)

jk47

Reputation: 765

You could create an abstract BaseSpec which contains all your features, but do not set up the driver in that spec. Then create a sub-spec for each browser you want to test, e.g.

class FirefoxSeleniumSpec extends BaseSeleniumSpec {

  def setupSpec() {
    // assumes 'driver' is declared @Shared in BaseSeleniumSpec,
    // so it can be assigned from setupSpec()
    driver = new FirefoxDriver()
  }

}

And then you can run all of the sub-specs to test all the browsers.
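
For completeness, a minimal sketch of what the abstract base spec could look like, assuming the driver field is made @Shared so that each subclass's setupSpec() can assign it:

import org.openqa.selenium.By
import org.openqa.selenium.WebDriver
import spock.lang.Shared
import spock.lang.Specification

abstract class BaseSeleniumSpec extends Specification {

  // @Shared so subclasses can assign it in setupSpec()
  @Shared WebDriver driver

  def "some test"() {
    expect:
    driver.findElement(By.cssSelector("p.someParagraph")).text == 'Some text'
  }

  // other features shared by all browsers go here ...

  def cleanupSpec() {
    driver?.quit()
  }
}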

Upvotes: 1
