Sean McLoughlin

Reputation: 43

How to handle inter-process communication between TLM FIFOs that may or may not be written to this timestep

I want a UVM component that has two input TLM FIFOs and one output AP. Of the two inputs, one receives packets that build state, and the other receives packets that query the state that was previously built. The output AP broadcasts the state that the query AF requested. Below is a simple example modeling a cache (new, build, etc. are omitted for brevity).

class cache_model extends uvm_component;
  `uvm_component_utils(cache_model)

  // The two TLM FIFO inputs
  uvm_tlm_analysis_fifo#(cache_write_pkt_t) write_af;
  uvm_tlm_analysis_fifo#(tag_t) read_query_req_af;
  
  // The query response output
  uvm_analysis_port#(data_t) read_query_rsp_ap;

  // The internal state being modeled
  data_t cache[tag_t];

  task run_phase(uvm_phase phase);
    super.run_phase(phase);
    fork
      forever process_writes();
      forever process_queries();
    join_none
  endtask

  protected task process_writes();
    cache_write_pkt_t pkt;
    write_af.get(pkt);
    // Converts the pkt to a tag and data and writes the cache
  endtask

  protected task process_queries();
    tag_t tag;
    read_query_req_af.get(tag);
    read_query_rsp_ap.write(cache[tag]);
  endtask
endclass

The problem I'm facing is the order of execution between the two process_ tasks I've created. If there is both a write and a read to the cache in the same simulation timestep, I want the write to be processed first and then the read (I want the read to get the most recently written data). But it's entirely possible that the packets are pushed to the AFs in a different order.

I naively tried the following, but it doesn't work: write_af may not have been pushed to yet when process_queries begins executing, even though it will be pushed to later in the same simulation timestep:

event process_writes_done;

protected task process_writes();
  cache_write_pkt_t pkt;
  write_af.get(pkt);
  // Converts the pkt to a tag and data and writes the cache
  ->process_writes_done;
endtask

protected task process_queries();
  tag_t tag;
  read_query_req_af.get(tag);
  if (!write_af.is_empty()) begin
    wait(process_writes_done.triggered);
  end
  read_query_rsp_ap.write(cache[tag]);
endtask

In general, this can be extrapolated to any number of dependencies between forked processes that are waiting on TLM FIFOs.

This is the kind of scenario where people add #0 to force ordering, but I know that's not a good idea. So how can I guarantee process ordering in this scenario? Or is there a better methodology to follow when a component is waiting on many possibly-dependent FIFO packets?

Upvotes: 3

Views: 556

Answers (1)

dave_59

Reputation: 42748

If you want the write to be processed before the read, you need to delay the read. Normally you would use a non-blocking assignment (NBA) to prevent this kind of race condition, but the UVM TLM ports were not set up to do that. And you are correct in thinking that inserting #0 delays is a bad idea: they have a way of accumulating (#0 #0 ...) and just postpone the race condition.

UVM does have a global task, uvm_wait_for_nba_region, that blocks for one iteration of the active/NBA event regions. You can put it in front of the write() call to read_query_req_af in your monitor, or in front of the read_query_rsp_ap.write() call in your process_queries() task.
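
For the second option, a minimal sketch of process_queries() with the wait inserted just before the response is broadcast:

protected task process_queries();
  tag_t tag;
  read_query_req_af.get(tag);
  // Block for one trip through the active/NBA regions so any
  // process_writes() running in this timestep updates the cache first.
  uvm_wait_for_nba_region();
  read_query_rsp_ap.write(cache[tag]);
endtask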

If you want to generalize this to any number of input analysis FIFOs, we need to assume all of the input FIFOs are written to in sync with the same clock event in the same timestep. Otherwise you have a lot more explaining to do.

You can still have a process for each input FIFO, but you then need an extra output process that executes after the NBA region, looks at which input FIFOs received a transaction, and handles the ordering you require.

event evict, write, read;

// declarations for the transactions e, w, r,
// outside the tasks so they can be shared
epkt e;
cache_write_pkt_t w;
tag_t r;

task process_evict();
   evict_af.get(e);
   ->>evict; //NBA event trigger
endtask
task process_write();
   write_af.get(w);
   ->>write;
endtask
task process_read();
   read_af.get(r);
   ->>read;
endtask
task process_output();
   @(evict or write or read); // wake when any input event triggers in the NBA region
   if (evict.triggered) begin
     // stuff you need to do if there was an evict
   end
   if (write.triggered) begin
     // stuff you need to do if there was a write
   end
   if (read.triggered) begin
     // stuff you need to do if there was a read
     read_query_rsp_ap.write(cache[r]);
   end
endtask
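
One way to hook these up, mirroring the run_phase from the question, is to fork all four processes (this sketch assumes the evict_af/write_af/read_af fifos are declared and connected elsewhere):

task run_phase(uvm_phase phase);
  super.run_phase(phase);
  fork
    forever process_evict();
    forever process_write();
    forever process_read();
    forever process_output();
  join_none
endtask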

Upvotes: 1
