Reputation: 101
I am running into a problem running a spec test in a random seed order. It passes when I run the test by itself, which is really frustrating me. How can I fix this?
describe MarketingInfo do
  let(:question) { create(:marketing_question) }
  let(:answer) { create(:marketing_answer, marketing_question: question) }
  let(:marketing_info) { MarketingInfo.new(create(:account)) }

  describe '#create' do
    let(:result) { marketing_info.create(info) }

    context 'when valid' do
      let(:info) { { question.id => answer.id } }
      specify { expect(result).to be_true }
    end

    context 'when invalid' do
      let(:info) { { question.id => '' } }
      specify { expect(result).to be_false }
    end
  end
end
and
class MarketingInfo
  def initialize(answerable)
    @answerable = answerable
    @marketing_responses = []
  end

  # build_marketing_response and valid? are defined elsewhere in the class
  def create(response_data)
    response_data.each do |question_id, answer_array|
      m_response = build_marketing_response(question_id, answer_array)
      @marketing_responses << m_response if m_response
    end
    valid?
  end
end
Here is the fail message when run with random seed:
1) MarketingInfo#create when valid should be true
   Failure/Error: specify { expect(result).to be_true }
     expected: true value
          got: false
   # ./spec/form_objects/marketing_info_spec.rb:29:in `block (4 levels) in <top (required)>'
Upvotes: 4
Views: 1687
Reputation: 3595
For posterity: when a test is flaky in isolation, it likely depends on a non-deterministic part of the system, like the system time, a random number, or a network connection.
When a test is flaky when run with other tests (especially when changing the order of the tests causes the flakiness), some other test is leaking state into the environment, and the flaky test fails because its assumptions about the environment are no longer true; the sketch below shows a minimal case.
And when a test is flaky when run in parallel on CI, that's probably due to a race condition between two tests that are both accessing the same global state.
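As a contrived illustration of the order-dependent case (the spec and names here are invented), the second example below passes on its own but fails whenever it happens to run after the first, because the first leaks state through a global variable:
$counter = 0  # global state shared by every example in the suite

RSpec.describe 'order-dependent examples' do
  it 'mutates the shared state' do
    $counter += 1
    expect($counter).to be > 0
  end

  it 'assumes the shared state is untouched' do
    # Passes in isolation; fails whenever the example above ran first.
    expect($counter).to eq(0)
  end
end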
I gave a more detailed answer on this question.
Upvotes: 0
Reputation:
Whenever RSpec tests pass individually but fail as a group, it can mean a couple of things (I've come to learn this from experience). Sometimes the suite is smelly and the tests are order-dependent. That doesn't sound like the issue here, though. Otherwise, it could mean the database is not responding the way you expect when you run your tests.
In any case, I found this blog post particularly helpful for debugging these kinds of situations; it has you checking the specs.log file for the failing test to see what happened before the test ran.
Maybe you should clear your instance variables after each test is run?
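As a rough sketch of that idea (this assumes a Rails-style setup with the database_cleaner gem, which may not match your stack), you can reset the database around every example so no spec sees another spec's leftovers:
# spec/spec_helper.rb
require 'database_cleaner'

RSpec.configure do |config|
  config.before(:suite) do
    DatabaseCleaner.clean_with(:truncation)  # start the suite from an empty database
    DatabaseCleaner.strategy = :transaction  # wrap each example in a rollback
  end

  config.around(:each) do |example|
    DatabaseCleaner.cleaning { example.run } # clean up whatever the example created
  end
end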
Upvotes: 1
Reputation: 15945
Another option for debugging interdependencies between examples is RSpec Bisect. It will try to isolate a minimal set of examples that still reproduces the failure:
$ rspec --seed 123 --bisect
Bisect started using options: "--seed 123"
Running suite to find failures... (1 minute 4.16 seconds)
Starting bisect with 1 failing example and 600 non-failing examples.
Checking that failure(s) are order-dependent... failure appears to be order-dependent
Round 1: bisecting over non-failing examples 1-600 . ignoring examples 1-199 (22.55 seconds)
Round 2: bisecting over non-failing examples 200-400 .. ignoring examples 200-299 (28.87 seconds)
Round 3: bisecting over non-failing examples 300-350 .. multiple culprits detected - splitting candidates (37.26 seconds)
Round 4: bisecting over non-failing examples 330-335 .. multiple culprits detected - splitting candidates (43.32 seconds)
...
Bisect complete! Reduced necessary non-failing examples from 600 to 10 in 25 minutes 16 seconds.
The minimal reproduction command is:
rspec './spec/controllers/etc_controller_spec.rb[1:1:1,1:1:2,1:2:1,1:3:1]' './spec/models/thing_spec.rb[1:1:2:1,1:1:2:2]' ... --seed 123
Feeding in a seed that is known to produce the failure can speed things up.
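For example (the seed here is illustrative), the summary of a failing randomized run already prints the seed it used, and you can hand that exact seed to --bisect so it reproduces the same ordering instead of hunting for a failing order first:
# The tail of a failing randomized run includes something like:
#   Randomized with seed 123
# Re-run bisect with that exact seed to reproduce the same ordering:
$ rspec --seed 123 --bisect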
Upvotes: 4