Reputation: 157
Imagine we have a piece of code that cuts large data into smaller chunks and does some processing on each chunk.
def node_cut(input_file):
    NODE_LENGTH = 500
    count_output = 0
    node_list = []
    for line in input_file.readlines():
        if len(node_list) >= NODE_LENGTH:
            count_output += 1
            return (node_list, count_output)
            node_list = []
        node, t = line.split(',')
        node_list.append(node)
if __name__ == '__main__':
    input_data = open('all_nodes.txt', 'r')
    node_list, count_output = node_cut(input_data)
    some_process(node_list)
When node_cut returns the first data list, the for loop stops instead of continuing through the rest of the large data. How can I make it hand back each chunk while the loop keeps going?
Upvotes: 7
Views: 9198
Reputation: 500773
Use yield:
def node_cut(input_file):
    NODE_LENGTH = 500
    count_output = 0
    node_list = []
    for line in input_file.readlines():
        if len(node_list) >= NODE_LENGTH:
            count_output += 1
            yield (node_list, count_output)
            node_list = []
        node, t = line.split(',')
        node_list.append(node)

if __name__ == '__main__':
    with open('all_nodes.txt', 'r') as input_data:
        for node_list, count_output in node_cut(input_data):
            some_process(node_list)
Upvotes: 7
Reputation: 363787
Use yield instead of return. See this question or this (somewhat old) article for how it works.
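To illustrate the difference in isolation: return ends the function on the first chunk, while yield turns it into a generator that produces every chunk. A minimal sketch (names here are illustrative, not from the question):

```python
def chunks(items, size):
    """Yield successive size-sized chunks from items."""
    chunk = []
    for item in items:
        chunk.append(item)
        if len(chunk) >= size:
            yield chunk  # hand this chunk to the caller, then resume here
            chunk = []
    if chunk:  # don't drop the final partial chunk
        yield chunk

print(list(chunks(range(7), 3)))  # [[0, 1, 2], [3, 4, 5], [6]]
```

Each `yield` suspends the function and resumes it on the next iteration of the caller's for loop, so all chunks are processed instead of just the first.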
Upvotes: 5