Reputation: 593
On the DUT I have two channels, each consisting of a data interface and a sideband interface. The transactions that are sent down these channels must be kept in order, but one channel can stall back while the other channel catches up. I.e.: I send transaction A down channel 0 and transaction C down channel 1, but channel 1 will not accept transaction C until channel 0 has received transaction B.
Furthermore, the data interface can be slower than the sideband interface on each channel, and certain sideband transactions do not require data to be sent with them.
Currently the tests are set up to create the individual data and sideband sequences, place them into queues, then split the queues across the number of channels and send them. However, this is becoming difficult to maintain with interface changes on the channels and a varying number of channels per configuration. So ideally I'd like to write the test sequence so that it has no knowledge of how many channels there are or which interface needs data for the abstract transaction.
The top sequence should just generate sequences like this:
`uvm_do(open_data_stream_sequence);
`uvm_do_with(send_data_sequence, {send_data_sequence.packet_number == 0;});
`uvm_do_with(send_data_sequence, {send_data_sequence.packet_number == 1;});
`uvm_do_with(send_data_sequence, {send_data_sequence.packet_number == 2;});
`uvm_do_with(send_data_sequence, {send_data_sequence.packet_number == 3;});
`uvm_do(close_data_stream_sequence);
The problem with this approach is that I do not want one channel to block the other, or one interface to block the other, unless both are stalled back. If I use a virtual sequence like the one above, the open_data_stream_sequence may stall on that individual channel when I want to pipeline the send_data_sequence into the other channel, or it may stall on the sideband interface when I want to pipeline the send_data_sequence data transaction onto the same channel's data interface.
However, I'm struggling to figure out how to implement the arbitration between the subsequencers. I thought about sequence layering and the use of FIFOs in a kind of middle layer that only stalls when all interfaces are saturated. Are there any UVM tricks I'm missing?
Upvotes: 0
Views: 1331
Reputation: 7573
I don't fully understand what your conditions for stalling are, but what I can tell you is that it's going to be complicated in any case.
The code you wrote will execute in a linear fashion, but what you're describing is parallel behavior. What you can do is start all sequences in parallel and block or release driving them based on events. These events would be highly specific to your application:
fork
  `uvm_do(open_data_stream_sequence);
  begin
    @(unblock_channel_0);
    `uvm_do_with(send_data_sequence, {send_data_sequence.packet_number == 0;});
  end
  // ...
  begin
    @(done_e);
    `uvm_do(close_data_stream_sequence);
  end
join
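For example, here is a minimal sketch of one way those events could be produced, under assumptions that are not in the question: chan0_seqr is a hypothetical handle to channel 0's sequencer, and the sequence handles are assumed to be members of the virtual sequence, typed with the classes named in the question. The branch that opens the stream is the one that releases the data branch:

open_data_stream_sequence  open_seq;   // class names taken from the question
send_data_sequence         data_seq;
close_data_stream_sequence close_seq;
event unblock_channel_0, done_e;

task body();
  fork
    begin
      `uvm_do_on(open_seq, chan0_seqr)
      -> unblock_channel_0;  // channel 0 stream is open, release its data
    end
    begin
      @(unblock_channel_0);
      `uvm_do_on_with(data_seq, chan0_seqr, {data_seq.packet_number == 0;})
      -> done_e;
    end
    begin
      @(done_e);
      `uvm_do_on(close_seq, chan0_seqr)
    end
  join
endtask

The key point is that each branch only waits on the events it actually depends on, so a stall on one channel or interface does not hold back the others.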
Upvotes: 0
Reputation: 1237
I don't think there's a way of getting around writing some code that understands the current channel state and schedules things in an optimal way (or intentionally sub-optimal, if the test calls for it). You will need some amount of queuing in order to allow, for example, sideband requests with no data to pass data requests when the data channels are stalled.
That can still be encapsulated in a base-class virtual sequence, let's call it 'scheduler', in such a way that the stimulus is oblivious to its implementation. The scheduler would have a 'start_sequence' API that starts the given sequence on a channel's sequencer, or queues it up to start as soon as a channel is not stalled. The test writer can sub-class 'scheduler' for every top-level sequence they want to write and put in the "start_sequence(data0); start_sequence(data1); start_sequence(sideband0);" calls, where each of the dataN/sidebandM virtual sequences looks like the one you described in your question.
'start_sequence' should return immediately to allow full saturation of the channels, or could block when all channels are saturated to reduce unnecessary queuing.
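A rough sketch of what such a scheduler could look like. Everything here is illustrative, not an existing UVM facility: the class and method names, the explicit channel argument, and the per-channel semaphore that stands in for the real, application-specific "is this channel stalled?" test.

import uvm_pkg::*;
`include "uvm_macros.svh"

class scheduler extends uvm_sequence #(uvm_sequence_item);
  `uvm_object_utils(scheduler)

  uvm_sequencer_base seqrs[];   // one sequencer handle per channel
  semaphore          tokens[];  // limits outstanding sequences per channel

  function new(string name = "scheduler");
    super.new(name);
  endfunction

  // Hook up the channel sequencers; one token per channel means at most
  // one outstanding sequence per channel, which models the stall.
  function void set_channels(uvm_sequencer_base s[]);
    seqrs  = s;
    tokens = new[s.size()];
    foreach (tokens[i]) tokens[i] = new(1);
  endfunction

  // Start 'seq' on channel 'ch' as soon as that channel is free.
  // Returns immediately so the other channels can be saturated in parallel.
  virtual task start_sequence(uvm_sequence_base seq, int ch);
    fork
      begin
        tokens[ch].get(1);          // queue behind earlier work on this channel
        seq.start(seqrs[ch], this); // blocks until the channel accepts and finishes it
        tokens[ch].put(1);
      end
    join_none
  endtask

  // Sub-classes override body() with their start_sequence(...) calls and
  // can end with 'wait fork;' to let all queued sequences drain.
  virtual task body();
  endtask
endclass

A test's top-level sequence would then extend scheduler and its body() would just be a series of start_sequence(data0, 0); start_sequence(sideband0, 0); start_sequence(data1, 1); calls followed by wait fork; with the channel selection and stall policy refined to whatever the application actually needs.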
Upvotes: 1