FusterCluck

Reputation: 503

Concatenate std_logic bits into unsigned number

In the following code I am trying to concatenate three std_logic inputs to make a three-bit unsigned output. The syntax needed to do this seems non-intuitive, and I don't understand it. Can someone explain what's going on here?

When I say "fails" in the comments, I mean that synthesis produces the following error message: found '4' definitions of operator "&", cannot determine exact overloaded matching definition. It then gives two line numbers in numeric_std and two in std_logic_1164 (but I don't have those particular versions of the source files to check).

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;

entity Thingy is
    Port ( a : in  STD_LOGIC;
           b : in  STD_LOGIC;
           c : in  STD_LOGIC;
           clk : in  STD_LOGIC;
           decoded : out  UNSIGNED (2 downto 0));
end Thingy;

architecture Behavioral of Thingy is
begin

    process (clk)
        variable dec : STD_LOGIC_VECTOR(2 downto 0) := (others => '0');
    begin
        if rising_edge(clk) then
            -- Intermediate variable, works ok.
            dec := a & b & c;
            decoded <= unsigned(dec);

            -- Also ok. Implicit conversion from std_logic to unsigned?
            -- decoded <= a & b & c;

            -- No intermediate variable, fails.
            -- decoded <= unsigned(std_logic_vector(a & b & c));

            -- Fails.
            -- decoded <= unsigned(a & b & c);
        end if;
    end process;

end Behavioral;

Upvotes: 2

Views: 6486

Answers (1)

wjl

Reputation: 7785

Let's look at your various cases.

First, you simply assign the concatenated std_logics to an unsigned signal, and, as you say, it works:

decoded <= a & b & c;

So this works fine (and it should!), and it is simple and straightforward, so why do you have a problem with it? An unsigned is, by definition, literally an array of std_logic elements. There is nothing odd here. This is the best way to write this.


Here you are trying to do a series of conversions:

decoded <= unsigned(std_logic_vector(a & b & c));

It is failing because the compiler can't convert the result of the & operators, whose array type it can't yet infer, into a std_logic_vector. But this syntax, although it looks almost the same, should do what you want, because it isn't doing a conversion; it is a qualified expression that simply tells the compiler the type of the expression:

decoded <= unsigned(std_logic_vector'(a & b & c));

Similarly, this can be fixed up in the same way:

decoded <= unsigned(a & b & c);

Use this instead:

decoded <= unsigned'(a & b & c);

The primary misunderstanding I see is that you may be thinking that std_logics concatenated together are somehow the same as a std_logic_vector. That is not at all true. A std_logic_vector is just one particular array type built from std_logic elements. Other examples are unsigned, signed, and any other user-defined array type you may create.

When you concatenate std_logic elements with &, the result has a sort of "universal" type that can be inferred on assignment, or can be explicitly tagged with a qualified expression, but it can't be passed to a type conversion, because it doesn't yet have a known type! This is why the syntax with ' works, where your original tries did not.
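Putting it together, here's a sketch of the process body with all three working forms, based on the Thingy entity from your question (this is illustrative; each assignment alone would do the job):

```vhdl
-- Assumes the question's ports:
--   a, b, c : in std_logic;  decoded : out unsigned(2 downto 0)
process (clk)
begin
    if rising_edge(clk) then
        -- 1. Direct assignment: the target's type (unsigned) lets the
        --    compiler infer the type of the concatenation.
        decoded <= a & b & c;

        -- 2. Qualified expression: the ' fixes the concatenation's type
        --    as std_logic_vector, so the conversion to unsigned is legal.
        decoded <= unsigned(std_logic_vector'(a & b & c));

        -- 3. Qualify the concatenation as unsigned directly;
        --    no conversion needed at all.
        decoded <= unsigned'(a & b & c);
    end if;
end process;
```

Note that in forms 2 and 3 no data is transformed; the qualification only resolves which overloaded & the compiler should pick.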

Hope that helps. Good luck!

Upvotes: 7
