Reputation: 105
I'm working on a P2P file-sharing system in Python 3 and I've come across an issue I don't know exactly how to fix.
Each peer has a server process and a client process; the client process connects to the other nodes, runs each connection in its own thread, and listens for data over a socket. When downloading from only one other peer, the file is written correctly with no problem, but when the download is split across multiple peers, the file is corrupted. The data is received correctly from both peers, so I'm thinking this is a file-write issue.
When I get data from a peer, I open the file, seek to the position the data belongs at, write it, and close the file. Would locks be the solution to this?
This is the code that runs in its own thread and is constantly listening:
def handleResponse(clientConnection, fileName, fileSize):
    # Listen for connections forever
    try:
        while True:
            #fileName = ""
            startPos = 0
            data = clientConnection.recv(2154)
            # If a response, process it
            if (len(data) > 0):
                split = data.split(b"\r\n\r\n")
                #print(split[0])
                headers = split[0].replace(b'\r\n', b' ').split(b' ')
                # Go through the split headers and grab the startPos and fileName
                for i in range(len(headers)):
                    if (headers[i] == b"Range:"):
                        startPos = int(headers[i+1])
                        #fileName = headers[i+2].decode()
                        break
                # Write the file at the seek pos
                mode = "ab+"
                if (startPos == 0):
                    mode = "wb+"
                with open("Download/" + fileName, mode) as f:
                    f.seek(startPos, 0)
                    f.write(split[1])
                    f.close()
Upvotes: 0
Views: 204
Reputation: 105
Answered by Steffen Ullrich.
The solution is to open the file in rb+ instead of ab+, then seek to the position and write. Do note that if the file does not exist, open will throw an exception, since rb+ does not create it.
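A minimal sketch of that write path, assuming one chunk is written per call; the writeChunk name and the existence check are just illustrative, not part of the answer:

import os

def writeChunk(path, startPos, chunk):
    # rb+ requires the file to exist, so create an empty file first if needed
    if not os.path.exists(path):
        with open(path, "wb"):
            pass
    # Unlike ab+ (append mode always writes at the end of the file),
    # rb+ honors the seek position, so each chunk lands at its own offset
    with open(path, "rb+") as f:
        f.seek(startPos)
        f.write(chunk)

In a threaded downloader it can be simpler to create the file once before starting the peer threads, so every thread can just open it in rb+ without the existence check.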
Upvotes: 1