I am writing a multithreaded C++ program to connect to various sensors, read from them, and write out chunks of data in parallel. The software runs on a Raspberry Pi 5. One of the sensors is a Sony IMX219 camera, officially compatible with libcamera and the Raspberry Pi. I first implemented this in Python, but it was too slow and resource-intensive using multiprocessing, so I switched to C++. I have most of the system running, but I have two strange problems: first, I cannot get the camera into the 8-bit mode that works in Python; second, in the 10-bit mode (unpacked to 16-bit), which does work, I get a fairly consistent pattern of black pixels in the image. The pattern shifts slightly between frames, but the overall distribution stays similar. I checked the pixel intensities via a histogram, and the values are exactly 0.
I have tried adjusting FPS, gain, and exposure settings, thinking these might be related, but observed no change. I also turned off AWB and AE with no effect. In Python, at least on the build running a few weeks ago, these spots do not appear. I am hoping to find a solution that gets rid of them. Below is a snippet of my code showing how I retrieve and capture the data.
static void world_frame_callback(libcamera::Request *request) {
    // Ignore invalid image data (e.g. requests cancelled during application shutdown)
    if (request->status() == libcamera::Request::RequestCancelled) { return; }
    // There should be a single buffer per capture
    const libcamera::Request::BufferMap &buffers = request->buffers();
    auto buffer_pair = *buffers.begin();
    // (Unused) libcamera::Stream *stream = buffer_pair.first;
    libcamera::FrameBuffer *buffer = buffer_pair.second;
    // The frame metadata tells us whether the frame was captured successfully and
    // gives the frame sequence number; gaps in the sequence indicate dropped frames
    const libcamera::FrameMetadata &metadata = buffer->metadata();
    // Check that the frame was captured without any sort of error
    if (metadata.status != libcamera::FrameMetadata::Status::FrameSuccess) {
        std::cout << "World | Frame unsuccessful" << '\n';
        return;
    }
    // Retrieve the per-camera context stashed in the buffer cookie
    auto *data = reinterpret_cast<world_callback_data *>(buffer->cookie());
    // RAW images have a single plane, so map that plane into our address space
    const libcamera::FrameBuffer::Plane &pixel_data_plane = buffer->planes().front();
    void *memory_map = mmap(nullptr, pixel_data_plane.length, PROT_READ, MAP_SHARED,
                            pixel_data_plane.fd.get(), pixel_data_plane.offset);
    if (memory_map == MAP_FAILED) {
        std::cout << "World | Failed to map buffer memory!" << std::endl;
        return;
    }
    // Copy the frame out, then unmap so we don't leak a mapping on every frame
    const uint8_t *pixel_data = static_cast<const uint8_t *>(memory_map);
    std::memcpy(data->buffer->data() + data->buffer_offset, pixel_data, pixel_data_plane.length);
    munmap(memory_map, pixel_data_plane.length);
}
Here is a snippet of how I initialize the camera with regard to its mode. Using picamera in Python, I get a message saying it was able to put the camera into SRGGB8, but in C++ I get a message that it put it into SRGGB16 (even though I've requested 8-bit as below and confirmed it is a supported mode):
// Retrieve the first available camera from the manager to be the world camera
camera = cm->get(cameras[0]->id());
// Acquire the camera
camera->acquire();
// Define the configuration for the camera (this MUST be raw for raw images)
std::unique_ptr<libcamera::CameraConfiguration> config = camera->generateConfiguration( { libcamera::StreamRole::Raw} );
libcamera::StreamConfiguration &streamConfig = config->at(0);
streamConfig.pixelFormat = libcamera::formats::SRGGB8;
streamConfig.size.width = 640;
streamConfig.size.height = 480;
// validate() can return Adjusted, meaning it silently swapped in a different
// pixel format; checking only for Invalid hides that
libcamera::CameraConfiguration::Status status = config->validate();
if (status == libcamera::CameraConfiguration::Invalid) {
    std::cerr << "World | ERROR: Invalid configuration" << std::endl;
    return -1;
}
if (status == libcamera::CameraConfiguration::Adjusted) {
    std::cout << "World | Configuration adjusted to " << streamConfig.pixelFormat.toString() << std::endl;
}
camera->configure(config.get());
And here is a snippet of how I read the frames back in for analysis (which is where I see the black spots):
def world_parser(buffer: np.ndarray):
    buffer = buffer.view(np.uint16)
    # First, we retrieve the shape of an individual frame
    frame_shape: np.ndarray = np.array([480, 640])
    # Now, let's calculate how many frames we have
    num_frames: int = int(buffer.shape[0] / np.prod(frame_shape))
    return buffer.reshape(num_frames, *frame_shape)
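For reference, if the capture side copies `plane.length` bytes per frame and the rows carry stride padding, the parser has to trim that padding before reshaping. A sketch of what that could look like, assuming the stride is known from the C++ side (the default value below is purely illustrative, not a measured IMX219 stride):

```python
import numpy as np

def world_parser_strided(buffer: np.ndarray, width=640, height=480, stride=1536):
    """Trim per-row stride padding from raw uint8 data, then view as uint16.

    'stride' must match the bytes-per-row the capture side actually copied
    (libcamera's StreamConfiguration::stride); 1536 here is a placeholder.
    """
    row_bytes = width * 2                        # SRGGB16: 2 bytes per pixel
    frame_bytes = stride * height                # bytes per frame as captured
    num_frames = buffer.size // frame_bytes
    frames = buffer[: num_frames * frame_bytes].reshape(num_frames, height, stride)
    # Drop the padding at the end of each row, then reinterpret as 16-bit pixels
    pixels = frames[:, :, :row_bytes].copy().view(np.uint16)
    return pixels.reshape(num_frames, height, width)

# Demo with synthetic data: padding bytes are zero, pixel bytes are 0xAB
stride, width, height = 24, 8, 4
raw = np.zeros(stride * height, dtype=np.uint8)
raw.reshape(height, stride)[:, : width * 2] = 0xAB
out = world_parser_strided(raw, width=width, height=height, stride=stride)
assert out.shape == (1, height, width)
assert (out == 0xABAB).all()                     # no zero "black" pixels remain
```

If the stride equals `width * 2` exactly, this reduces to the original parser, so comparing the buffer size per frame against `640 * 480 * 2` bytes would quickly confirm or rule out padding as the cause.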