Ashish Gaur

Reputation: 2050

Decrease the output of bcp export

In our project we use the bcp command to export about a million rows and log its output to a file. For bcp import I can control the verbosity of the output by using the -b switch, which specifies the number of rows to import per batch. The output, which looks like this:

Starting copy...
1000 rows sent to SQL Server. Total sent: 1000
1000 rows sent to SQL Server. Total sent: 2000
1000 rows sent to SQL Server. Total sent: 3000
1000 rows sent to SQL Server. Total sent: 4000
1000 rows sent to SQL Server. Total sent: 5000
1000 rows sent to SQL Server. Total sent: 6000
1000 rows sent to SQL Server. Total sent: 7000
1000 rows sent to SQL Server. Total sent: 8000
1000 rows sent to SQL Server. Total sent: 9000
1000 rows sent to SQL Server. Total sent: 10000
1000 rows sent to SQL Server. Total sent: 11000
1000 rows sent to SQL Server. Total sent: 12000
SQLState = 22001, NativeError = 0

can easily be shortened by increasing the batch size passed with the -b switch:

Starting copy...
10000 rows sent to SQL Server. Total sent: 10000
SQLState = 22001, NativeError = 0

12406 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total     : 75     Average : (165413.3 rows per sec.)
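
For reference, an import command along these lines produces the condensed output above (a sketch only; "indata.txt" is a placeholder input file, the other names are the same placeholders used below, and -b 10000 matches the batch size shown):

REM sketch: "indata.txt" is a placeholder; -b 10000 sets the import batch size
bcp Temp.dbo.TestTable in indata.txt -t , -f file.fmt -S Server -U user -P password -b 10000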

But for bcp export I can't control the output, and for a million rows the log becomes far too big. For example, the command below

bcp Temp.dbo.TestTable out outdata.txt -t , -f file.fmt -S Server -U user -P password -m 10

outputs this :

Starting copy...
1000 rows successfully bulk-copied to host-file. Total received: 1000
SQLState = S1000, NativeError = 0
Error = [Microsoft][ODBC Driver 11 for SQL Server]Warning: BCP import with a format file will convert empty strings in delimited columns to NULL.
1000 rows successfully bulk-copied to host-file. Total received: 2000
1000 rows successfully bulk-copied to host-file. Total received: 3000
1000 rows successfully bulk-copied to host-file. Total received: 4000
1000 rows successfully bulk-copied to host-file. Total received: 5000
1000 rows successfully bulk-copied to host-file. Total received: 6000
1000 rows successfully bulk-copied to host-file. Total received: 7000
1000 rows successfully bulk-copied to host-file. Total received: 8000
1000 rows successfully bulk-copied to host-file. Total received: 9000
1000 rows successfully bulk-copied to host-file. Total received: 10000
1000 rows successfully bulk-copied to host-file. Total received: 11000
1000 rows successfully bulk-copied to host-file. Total received: 12000
1000 rows successfully bulk-copied to host-file. Total received: 13000
1000 rows successfully bulk-copied to host-file. Total received: 14000
1000 rows successfully bulk-copied to host-file. Total received: 15000
1000 rows successfully bulk-copied to host-file. Total received: 16000
1000 rows successfully bulk-copied to host-file. Total received: 17000
1000 rows successfully bulk-copied to host-file. Total received: 18000
1000 rows successfully bulk-copied to host-file. Total received: 19000
1000 rows successfully bulk-copied to host-file. Total received: 20000
1000 rows successfully bulk-copied to host-file. Total received: 21000
1000 rows successfully bulk-copied to host-file. Total received: 22000

I have tried passing the -b switch to bcp out, but it always exports in batches of 1000, and filtering the output afterwards with grep or sed would take too much time. Thanks for your help.

Upvotes: 7

Views: 10627

Answers (5)

Nigel Fds

Reputation: 813

A very simple solution is to use Out-Null (this works when bcp is run from PowerShell, where Out-Null is available).

Just append | out-null to the end of the command.

Eg:

bcp  xxxxxxxxxxxxxxxxxxxx   | out-null
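
As a fuller sketch, using the question's placeholder table, format file, server, and credentials:

# PowerShell sketch: names below are the question's placeholders; Out-Null discards the pipeline output
bcp Temp.dbo.TestTable out outdata.txt -t , -f file.fmt -S Server -U user -P password -m 10 | out-null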

Upvotes: 1

BRebey

Reputation: 691

I know this is old, but I stumbled across it while looking for the same answer and saw that there was no good answer here. I came up with the following, which got me what I wanted, and I thought I'd post it here in case it helps someone else:

DECLARE @BCP_CMD varchar(4000);
SET @BCP_CMD = 'bcp ...etc...'; -- set this to your full bcp command line
DROP TABLE IF EXISTS #bcpOutput;
CREATE TABLE #bcpOutput ([BCP_Output] varchar(2048));
INSERT INTO #bcpOutput ([BCP_Output]) EXECUTE master..xp_cmdshell @BCP_CMD;
SELECT BCP_Output FROM #bcpOutput
WHERE
    -- Here I filter out blank lines, all those "rows successfully..." lines, etc.
    BCP_Output not like ''
    AND BCP_Output not like '%rows successfully%'
    AND BCP_Output not like '%S1000%'
    AND BCP_Output not like '%empty strings in delimited%'
    AND BCP_Output not like '%Starting copy%'
    AND BCP_Output not like '%Network packet size%';

This produces:

BCP_Output
69673 rows copied.
Clock Time (ms.) Total     : 406    Average : (171608.38 rows per sec.)

An alternate method technically "works" just fine: append

 | grep -v -e "rows successfully\|Starting copy\|...and so on..."

to your "bcp..." command. For me, though, it turned a <5 second bcp operation into almost 30 seconds, which was unacceptable. Maybe in your environment piping to grep works better. Might be worth a try.

Upvotes: 2

ravish.hacker

Reputation: 1183

  1. You can either redirect output to a file using >

    bcp sometable out outfile -S Server -U user -P password > export.log

    Note the > export.log bit at the end. This captures the console output in export.log, so if the command fails you can check the log.

  2. bcp also provides the output parameter -o

    bcp sometable out outfile -S Server -U user -P password -o export.log

    This time, notice the -o export.log at the end.

Upvotes: 2

KaM

Reputation: 11

Block the command-line output by redirecting it to nul:

bcp Temp.dbo.TestTable out outdata.txt -t , -f file.fmt -S Server -U user -P password -m 10 > nul

Upvotes: 1

one angry researcher

Reputation: 612

There seems to be no solution for this within bcp itself. There is a workaround, however: wrap your bcp command line in an xp_cmdshell call and specify the no_output option:

EXEC xp_cmdshell 'bcp Temp.dbo.TestTable out outdata.txt -t , -f file.fmt -S Server -U user -P password -m 10', no_output
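
Since no_output discards everything, one way to still catch failures is to check the return code from xp_cmdshell (a sketch using the same placeholder command; xp_cmdshell returns 0 on success):

-- sketch: no_output suppresses the result set; @rc still reports success or failure
DECLARE @rc int;
EXEC @rc = xp_cmdshell 'bcp Temp.dbo.TestTable out outdata.txt -t , -f file.fmt -S Server -U user -P password -m 10', no_output;
IF @rc <> 0
    RAISERROR('bcp export failed', 16, 1);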

Upvotes: 4
