IO.Nathan

Reputation: 21

HTTP ZIP-Streaming using ChunkedInput

I've been porting an NIO streaming server to HTTP using Netty. It supports downloading ZIP archives that are generated on demand on the server. After some research and prototyping, I wrote my own ChunkedInput implementation to provide this feature.

On each call to nextChunk I read some bytes of the current file, zip them, and write them to a ChannelBuffer, which is then returned. This repeats as long as there are files and data left to process (isEndOfInput returns false until then).

The problem is that the transferred archive cannot be opened. The error message tells me that some bytes are missing.

I am certain that I'm sending all the data, because I count the total number of bytes returned through the ChannelBuffers. This sum differs from the size of the downloaded archive by exactly the number of missing bytes. For debugging purposes, in every iteration of nextChunk I read the ChannelBuffer's current content and write it to a FileOutputStream. The resulting file is a valid archive and can be opened, so the error can't be in the zipping itself.

Are there any special considerations when implementing a custom ChunkedInput in combination with a ChunkedWriteHandler, or am I missing something?

Upvotes: 0

Views: 576

Answers (2)

Norman Maurer

Reputation: 211

You need to use a new ChannelBuffer for each chunk, because the ChannelBuffer is queued for writing. If you reuse the same ChannelBuffer you may corrupt its content.
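The effect of reusing one buffer can be illustrated without Netty at all. The sketch below (plain Java, with a List standing in for ChunkedWriteHandler's internal write queue, which only flushes chunks to the socket later) shows how every queued chunk ends up aliasing the last chunk's bytes; the class and method names are illustrative, not part of any Netty API.

```java
import java.util.ArrayList;
import java.util.List;

public class ReuseDemo {

    // Broken pattern: one shared buffer is mutated and queued repeatedly,
    // so every queued "chunk" aliases the same underlying bytes.
    static String brokenResult() {
        List<byte[]> queue = new ArrayList<>(); // stands in for the handler's write queue
        byte[] shared = new byte[1];
        for (byte b = 'a'; b <= 'c'; b++) {
            shared[0] = b;     // overwrite the buffer in place
            queue.add(shared); // the queue now holds the SAME array again
        }
        return drain(queue);
    }

    // Correct pattern: a fresh buffer per chunk, as ChunkedFile does.
    static String correctResult() {
        List<byte[]> queue = new ArrayList<>();
        for (byte b = 'a'; b <= 'c'; b++) {
            queue.add(new byte[] { b });
        }
        return drain(queue);
    }

    // "Flush" the queue later, as the handler writes queued chunks.
    static String drain(List<byte[]> queue) {
        StringBuilder out = new StringBuilder();
        for (byte[] chunk : queue) {
            out.append((char) chunk[0]);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(brokenResult());  // ccc - earlier chunks were clobbered
        System.out.println(correctResult()); // abc
    }
}
```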

Upvotes: 1

IO.Nathan

Reputation: 21

I looked at the implementation of ChunkedFile and noticed that nextChunk always returns a new ChannelBuffer via ChannelBuffers.wrappedBuffer( bytes ). In my ChunkedInput implementation I was using a single ChannelBuffer that was updated in nextChunk and then returned, so no new ChannelBuffer was created per call. I changed my nextChunk method to return a new ChannelBuffer on each call, and it works now.
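A minimal sketch of this corrected pattern, using java.util.zip in place of the Netty types so it stays self-contained (the class name ZipChunkSource and its file-map input are hypothetical, not from the question's code): each nextChunk call drains the ZIP bytes produced so far into a freshly allocated array, which in the real handler would be wrapped via ChannelBuffers.wrappedBuffer(bytes).

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipChunkSource {
    private final ByteArrayOutputStream pending = new ByteArrayOutputStream();
    private final ZipOutputStream zip = new ZipOutputStream(pending);
    private final Iterator<Map.Entry<String, byte[]>> files;
    private boolean finished;

    public ZipChunkSource(Map<String, byte[]> input) {
        this.files = input.entrySet().iterator();
    }

    public boolean isEndOfInput() {
        return finished && pending.size() == 0;
    }

    // Returns null when exhausted; otherwise a freshly allocated byte[]
    // (in Netty: return ChannelBuffers.wrappedBuffer(chunk) instead).
    public byte[] nextChunk() throws IOException {
        if (files.hasNext()) {
            Map.Entry<String, byte[]> f = files.next();
            zip.putNextEntry(new ZipEntry(f.getKey()));
            zip.write(f.getValue());
            zip.closeEntry();
        } else if (!finished) {
            zip.finish(); // writes the ZIP central directory
            finished = true;
        }
        if (pending.size() == 0) {
            return null;
        }
        byte[] chunk = pending.toByteArray(); // fresh copy on every call
        pending.reset();
        return chunk;
    }

    public static void main(String[] args) throws IOException {
        Map<String, byte[]> files = new LinkedHashMap<>();
        files.put("readme.txt", "generated on demand".getBytes());
        ZipChunkSource src = new ZipChunkSource(files);
        ByteArrayOutputStream all = new ByteArrayOutputStream();
        byte[] chunk;
        while ((chunk = src.nextChunk()) != null) {
            all.write(chunk); // in Netty, each chunk is queued for write here
        }
        System.out.println("archive bytes: " + all.size());
    }
}
```

Because each returned array is a copy, the chunk queued inside ChunkedWriteHandler can no longer be overwritten by the next iteration, which is exactly why the fix works.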

Upvotes: 0
