Reputation: 3541
Is there a way to increase the number of rows sent to SQL Server per second?
I have over 7M records (a 1.5 GB file), and it takes around 30 minutes to import them all into the SQL Server table from the bcp file.
Is there a way to increase this to something like 10,000 or even 100,000 rows sent each time? I ran into some issues with bcp before I finally resolved them, but every time I had to debug them I had to wait half an hour, which was frustrating. Also, we may need to reload the data repeatedly, so it would be helpful to know a faster way for future bcp loads.
Some stats here:
7217765 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total : 1765235 Average : (4088.84 rows per sec.)
The bcp file is on a network drive, and the table DDL is as follows:
CREATE TABLE [dbo].[Historical_Fact](
    [Style Code] [varchar](100) NULL,
    [Trim Code] [varchar](100) NULL,
    [Country] [varchar](100) NULL,
    [Org] [varchar](100) NULL,
    [Prod Code] [varchar](100) NULL,
    [Report Year] [varchar](100) NULL,
    [Scen] [varchar](100) NULL,
    [Volume] [decimal](25, 10) NULL,
    [EBIT] [decimal](25, 10) NULL,
    [NSR] [decimal](25, 10) NULL,
    [Measures] [varchar](100) NULL,
    [Values] [decimal](25, 10) NULL
) ON [PRIMARY]
Upvotes: 2
Views: 3371
Reputation: 72258
From the docs:
-b batch_size
Specifies the number of rows per batch of imported data. Each batch is imported and logged as a separate transaction that imports the whole batch before being committed. By default, all the rows in the data file are imported as one batch. To distribute the rows among multiple batches, specify a batch_size that is smaller than the number of rows in the data file. If the transaction for any batch fails, only insertions from the current batch are rolled back. Batches already imported by committed transactions are unaffected by a later failure.
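As a rough sketch of how that might look on the command line (the server name, file path, and format switch below are placeholders; keep whatever options your existing command already uses and just add -b):

bcp YourDatabase.dbo.Historical_Fact in "\\networkshare\Historical_Fact.bcp" -S YourServer -T -c -b 100000

With -b 100000, each batch of 100,000 rows is committed as its own transaction, so a failure only rolls back the current batch rather than the whole 7M-row import.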
Upvotes: 2