Reputation: 13
I am writing a script in Python to automate a GIS process. The general process is as follows:
1. An ArcGIS model produces a .dbf.
2. The script writes the records of the .dbf to a .txt file.
3. Headers are added to the .txt file.
4. The finished .txt file is renamed and opened.
For the most part, the process works very well. The problem I am encountering is that the .txt file is incomplete. The resulting .dbf from the ArcGIS model has 7304 records, but the .txt file only has 7232 records. It is almost as if, when writing the .txt file, the script just gives up before reaching the end. I cannot seem to figure out what is causing this to happen.
I will attach a portion of the script as well as the .txt file output. Any help/suggestions would be very much appreciated.
DBF = r'Q:\GIS_Mapping\BillingDept\ERU\DO_NOT_TOUCH\ReportOutput\ERU.dbf'
output_directory = r'Q:\GIS_Mapping\BillingDept\ERU\DO_NOT_TOUCH\ERU_Output'
ERU_file = os.path.join(output_directory, 'ERU.txt')

arcpy.AddMessage('Creating ERU file')
print "3"

# write each record of the .dbf to the .txt file
report = open(ERU_file, "w")
cursor = arcpy.SearchCursor(DBF)
for row in cursor:
    ACCT = row.getValue('ACCT')
    STR_ACCT = str(ACCT)
    NEW_ACCT = STR_ACCT.replace('.0', '')
    IMPAREA = row.getValue('IMPAREA')
    STR_IMPAREA = str(IMPAREA)
    NEW_IMPAREA = STR_IMPAREA.replace(".0", ".00")
    SWCODE = row.getValue('SWCODE')
    STR_SWCODE = str(SWCODE)
    report.write(NEW_ACCT + "," + NEW_IMPAREA + "," + STR_SWCODE + '\n')
del (ERU_file)
print "4"

# copy the .txt into a temporary file with a header row prepended
arcpy.AddMessage('Adding headers')
headers = ['"ACCT","IMPAREA","SWCODE"']
filename = r"Q:\GIS_Mapping\BillingDept\ERU\DO_NOT_TOUCH\ERU_Output\ERU.txt"
tmp = open('TMP', 'w')
orig = open(filename, 'r')
tmp.write('\t'.join(headers) + '\n')
for line in orig.readlines():
    tmp.write(line)
orig.close()
tmp.close()

# replace the old final file with the new one and open it
arcpy.AddMessage('Headers added, renaming file')
os.remove(r'Q:\GIS_Mapping\BillingDept\ERU\DO_NOT_TOUCH\ERU_Final\ERU.txt')
os.rename('TMP', r'Q:\GIS_Mapping\BillingDept\ERU\DO_NOT_TOUCH\ERU_Final\ERU.txt')
print "5"
os.startfile(r'Q:\GIS_Mapping\BillingDept\ERU\DO_NOT_TOUCH\ERU_Final\ERU.txt')
arcpy.AddMessage('Done')
Below is a portion of the .txt output without the headers. As you can see, the process is running fine and then just stops after 222415613,0.00.
600414006,0.00,1
602311015,0.00,1
910010858,0.00,1
2000716007,0.00,1
220735804,0.00,1
910010076,0.00,1
300724505,0.00,1
910012468,0.00,1
303737006,0.00,1
503143201,10079.33,2
213001881,0.00,1
2007212003,0.00,1
4080010042,0.00,1
4030010111,0.00,1
4090020013,0.00,1
910011618,0.00,1
221624400,0.00,1
600934006,0.00,1
505531404,0.00,1
215232207,0.00,1
600432514,0.00,1
600432011,0.00,0
404834003,0.00,1
222415613,0.00
Attached is a screenshot of the .dbf. As you can see, after the 222415613 record the information continues on as normal for about another 50 or so records.
Upvotes: 0
Views: 1196
Reputation: 678
Change del (ERU_file) to report.close(). del (ERU_file) just deletes the string identifying the location of the file; it doesn't actually close the open file handle and flush the data to disk.
Or better yet, use a with statement. Change

report = open(ERU_file, 'w')

to

with open(ERU_file, 'w') as report:

and add a level of indent to your cursor declaration and for loop.
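As a rough sketch, the write section of the posted script would then look something like this (condensed slightly; same paths and field names as in the question, and the rest of the script is unchanged):

with open(ERU_file, "w") as report:
    cursor = arcpy.SearchCursor(DBF)
    for row in cursor:
        NEW_ACCT = str(row.getValue('ACCT')).replace('.0', '')
        NEW_IMPAREA = str(row.getValue('IMPAREA')).replace('.0', '.00')
        STR_SWCODE = str(row.getValue('SWCODE'))
        report.write(NEW_ACCT + "," + NEW_IMPAREA + "," + STR_SWCODE + '\n')
# leaving the with block closes the file and flushes the write buffer,
# so ERU.txt is complete before the header step reopens it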
What's happening is that you're opening a second copy of the file, with orig = open(filename, 'r'), while the first copy, report, is still open with data still in the write buffer and not flushed to disk. When the script finishes running, that data is flushed to disk as part of Python's cleanup, which is why you do see it in the file when you look yourself.
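You can reproduce the buffering effect outside of arcpy with a quick test (demo.txt here is just an example file name):

# write a file without closing it, then count the lines on disk
f = open('demo.txt', 'w')
for i in range(10000):
    f.write('line %d\n' % i)

# part of the data may still be sitting in the write buffer at this point,
# so the count can come up short, just like the truncated ERU.txt
print(len(open('demo.txt').readlines()))

f.close()  # closing flushes the buffer and completes the file
print(len(open('demo.txt').readlines()))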
Upvotes: 1