Reputation: 484
I have a tab-separated text file in the format
id | field 1 | field 2 ...
I want to insert this into a MySQL database with id as the primary key, but the text file may contain duplicate ids.
Upvotes: 0
Views: 73
Reputation: 1145
Presuming a Unix shell, I'd do this:
awk '!x[$1]++' inputfile.tsv > uniqfile.tsv
then do your import from the uniq'd file (see the import sketch below).
Edit: to be clear, that script uniq's the input file on the first field: the hash x counts occurrences keyed on field 1, and a row is printed only when its count is still zero, i.e. on the first occurrence of that id.
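For the import itself, MySQL's LOAD DATA can read the de-duplicated file directly. A minimal sketch, assuming a table named yourTable with columns id, field1, and field2 (placeholder names — adjust to your schema):

LOAD DATA LOCAL INFILE 'uniqfile.tsv'
INTO TABLE yourTable
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
(id, field1, field2);

Note that LOCAL requires local_infile to be enabled on both the server and the client.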
Upvotes: 0
Reputation: 94625
Read the text file line by line, parse each line, and use the INSERT ... ON DUPLICATE KEY UPDATE syntax.
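A minimal sketch, assuming a table named yourTable with columns id, field1, and field2 (placeholder names — adjust to your schema):

INSERT INTO yourTable (id, field1, field2)
VALUES (:id, :field1, :field2)
ON DUPLICATE KEY UPDATE
  field1 = VALUES(field1),
  field2 = VALUES(field2);

With this, a row whose id already exists updates the existing row instead of failing, so the last occurrence of each id in the file wins.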
Upvotes: 2
Reputation: 23713
I would do a SELECT before the INSERT and count the number of rows returned by the SELECT. Something like this:
SELECT * FROM yourTable WHERE yourTable.id = :id
If that returns any rows, skip the line and move on to the next; otherwise, insert it.
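In application code, the check-then-insert pair might look like this (a sketch; yourTable and the column names are placeholders):

SELECT COUNT(*) FROM yourTable WHERE id = :id;
-- if the count is 0:
INSERT INTO yourTable (id, field1, field2) VALUES (:id, :field1, :field2);

Be aware that check-then-insert is racy if anything else writes to the table concurrently, which is one more reason to add the unique constraint suggested in the edit below.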
Edit: this is an application-side check. It would be good if you could also add a unique constraint to guarantee uniqueness in the database itself. Something like:
ALTER TABLE yourTable ADD CONSTRAINT ukID UNIQUE (id)
Upvotes: 1