elmakhloufi assaad

Reputation: 11

Reading binary file in VHDL

I want to access, in VHDL, the pixel contents of a binary image file produced by the ENVI software: the image is 100*100 pixels, and each pixel is coded in 16 bits.

 library IEEE;
    USE ieee.std_logic_1164.ALL;
    Use ieee.numeric_bit.ALL;

    library std;
    use std.textio.all;

    entity image_bin is
    end entity;

    architecture behavioral of image_bin is
    type image is array(0 to 99,0 to 99) of std_logic_vector(15 downto 0);
    signal image(i,j):bit_vector(15 downto 0);

    begin
      process
      type t_file is file of bit_vector;
      file infile: text is in "C:\Users\hp\Desktop\file\T4.bin";
      variable pixel_image:bit_vector(15 downto 0);
         begin
         IF start'EVENT AND start = '1' THEN
         for i in to 99 loop
         for j in to 99 loop
         read(infile,pixel_image);
         image(i,j)<=pixel_image;
         end loop;
         end loop;
         file_close (infile);
         end if;
         end process;

Upvotes: 0

Views: 3840

Answers (2)

JHBonarius

Reputation: 11281

Way before you start coding VHDL for an FPGA, especially complex algorithms, you should do two things:

  • Learn basic VHDL
  • Understand the capabilities of FPGAs

You can use file I/O in VHDL to initialize FPGA memories (RAM/ROM) or in simulation constructs, but not in real-time data-processing applications. As I say in the comments: a normal FPGA does not have file I/O capabilities in a hardwired block. If you want something like this, it is probably best to use a microcontroller/CPU, or use a SoC FPGA, which has a hard-wired ARM core on board, and perform this in software. But when you're there, why use an FPGA at all?


Mistakes in your code

There are several programming mistakes in your code, which could have easily been prevented if you'd followed some tutorial.

With the first mistake, I am questioning whether you have any programming experience at all. You define a type, and next you probably want to define an object of that new type. But instead you write some illegal code:

signal image(i,j) : bit_vector(15 downto 0);

How would the parser know what i and j are? They are not declared at that point; they only appear later as loop parameters inside the process. Secondly, why are you suddenly using bit_vector, when you were using std_logic_vector just before?

Example of how it should be:

type [type name] is [type details];
signal [object name] : [type name] := [optional initialization details];
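
For the 100-by-100 image from the question, that could look like the following sketch (the names image_type and img are mine, not from the original code):

type image_type is array(0 to 99, 0 to 99) of std_logic_vector(15 downto 0);  -- one 16-bit word per pixel
signal img : image_type;                                                       -- the object holding the pixel data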

Next mistake: the process has no sensitivity list. A process statement is the start of a sequential processing block. Without a sensitivity list (or a wait statement), it starts immediately and nothing ever suspends it, so it will try to loop forever without simulation time advancing. This will of course cause problems. Solution: use a sensitivity list.

[label:] process ([trigger signal names])
begin
end process;

Internally the process actually has a trigger condition: start'event and start='1'... Where's this start coming from? Probably missing an input port on the entity? But this start should be the trigger for your process.
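
If the idea is indeed an external start trigger, the missing pieces could look something like this sketch (the port name start comes from your snippet; the entity port and its placement are my assumption of what was intended):

library ieee;
use ieee.std_logic_1164.all;

entity image_bin is
    port (start : in std_logic);   -- assumed missing input port
end entity;

-- and inside the architecture body:
process (start)                    -- the process now wakes up only when start changes
begin
    if start'event and start = '1' then
        -- file reading goes here
    end if;
end process;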


How then?

What you can do is read a file and initialize an array. (Please note that multi-dimensional arrays are not supported in synthesis, as there is no such thing as multi-dimensional RAM, so you will have to flatten your array and perform some index arithmetic.) For the next example, I created a file "test.bin" and filled it with the string "12345678" = 8 characters = 8*8 bit = 64 bit = 2*2*16 bit -> good for a 2-by-2 array of 16-bit values.
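
As a side note on the flattening, a synthesizable one-dimensional equivalent of a 2-by-2 array could look like this sketch (all names are mine):

subtype data_type is std_logic_vector(15 downto 0);
type flat_array_type is array(0 to 3) of data_type;   -- 2*2 = 4 entries in one dimension
signal flat_array : flat_array_type;
-- element (i, j) of the two-dimensional view lives at index i*2 + j,
-- e.g. flat_array(1*2 + 0) corresponds to row 1, column 0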

Then you should realize that the textio package can only read text! So the easiest approach is to declare a file of character, read every 8 bits (one byte) as a character, and then concatenate two of them to form a 16-bit value.

Example code:

entity read_bin is end entity;

library ieee;

architecture behavioral of read_bin is
    use ieee.std_logic_1164.all;
    subtype data_type is std_logic_vector(15 downto 0);
    type two_dim_array_type is array(0 to 1, 0 to 1) of data_type;

    impure function init_two_dim_array return two_dim_array_type is
        use std.textio.all;
        type character_file is file of character;
        file file_pointer : character_file;
        variable upper : character;
        variable lower : character;
        variable output : two_dim_array_type;
        use ieee.numeric_std.all;
    begin
        file_open(file_pointer, "test.bin", READ_MODE);
        for i in 0 to 1 loop
            for j in 0 to 1 loop
                read(file_pointer, upper); -- first 8 bits
                read(file_pointer, lower); -- second 8 bits
                output(i, j) := 
                    std_logic_vector(to_unsigned(character'pos(upper),8))&
                    std_logic_vector(to_unsigned(character'pos(lower),8));
            end loop;
        end loop;
        file_close(file_pointer);
        return output;
    end function;

    signal two_dim_array : two_dim_array_type := init_two_dim_array;
begin
end architecture;

Voila:

(screenshot: ModelSim simulation output)


The big question

The big question for me is: how come you get a PhD position in telecommunication and embedded systems when you make these programming mistakes and lack knowledge of the capabilities of FPGAs? I'm a PhD candidate in Electrical Engineering myself, and this is stuff we teach our first-year bachelor students. When I applied for my PhD position, I went through a few rounds of assessments where my knowledge was extensively tested. There's a lot of competition for these positions.

My advice: please get some local support for realizing this. You don't want to spend the next 3 years of your PhD trying to learn VHDL to an adequate level, only to find out you have just 1 year left to do some actual research and publish results...

Upvotes: -1

user1818839

Reputation:

The question is far too unspecific, but here are a couple of pointers.

Binary file I/O works in VHDL but is only guaranteed to be compatible with itself, i.e. you can read files your simulation originally wrote. Beyond that, you may need to experiment.
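
For example, a write-then-read round trip within the same simulator is the safe case (a minimal sketch; the file name is illustrative):

process
    type char_file is file of character;
    file f : char_file;
    variable c : character;
begin
    file_open(f, "roundtrip.bin", WRITE_MODE);
    write(f, 'A');                 -- write one byte
    file_close(f);
    file_open(f, "roundtrip.bin", READ_MODE);
    read(f, c);                    -- read the same byte back
    file_close(f);
    report "read back: " & c;
    wait;                          -- done, suspend forever
end process;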

I've read 3rd-party files in Modelsim with little in the way of surprises (possibly unexpected endianness, I can't remember), and I expect GHDL would also work well.

However, Xilinx ISIM (in the ISE10.1 era) insisted on seeing a specific binary header on each file, whose format Xilinx refused to document even when explicitly asked.

I wrote a file from ISIM, inspected its contents, and then, using the Linux head and tail commands, grafted that file's header onto the image files I needed to read and stripped it off any further output files, so I could interchange binary files with the rest of the world. ISIM was perfectly happy with that, but it was a pain.


EDIT: now we have some code ...

First, there is nothing wrong with doing this in VHDL as opposed to Python; there will be a bit more typing involved (in both senses of typing), but whichever language you use, you will have to understand what you are writing.

And you will have to finish the code (some of it is missing) and fix the trivial mistakes (you have a type and a signal with the same name ... sloppy practice in any language) and other trivial syntax errors.

So the next step is to finish it, compile it, and fix those errors. If you don't understand the errors after double checking the erroneous line against your VHDL textbooks, post the error message and ask what it means and what to do about it - at least, that would be an actual question.

You are also using VHDL-1987 file declarations and declaring the file of type text; you'll have to upgrade to VHDL-1993 file handling. It shouldn't be difficult to find examples of "VHDL-1993 binary file I/O" by the usual methods - try it.
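
Concretely, the two styles look something like this sketch (the file type is illustrative; "T4.bin" is the file from the question):

-- VHDL-1987 style, as used in the question (the declaration opens the file implicitly):
--   file infile : text is in "T4.bin";

-- VHDL-1993 style: declare the file, then open and close it explicitly,
-- here with a file type suited to binary data
type char_file is file of character;
file infile : char_file;
-- in the process or function body:
--   file_open(infile, "T4.bin", READ_MODE);
--   ... read(infile, some_character_variable); ...
--   file_close(infile);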

Upvotes: 0
