Sandhurst

SqlBulkCopy Not Working

I have a DataSet populated from an Excel sheet. I want to use SqlBulkCopy to insert records into the Lead_Hdr table, where LeadId is the primary key.

I get the following error while executing the code below:

The given ColumnMapping does not match up with any column in the source or destination

string ConStr = ConfigurationManager.ConnectionStrings["ConStr"].ConnectionString;

using (SqlBulkCopy s = new SqlBulkCopy(ConStr,SqlBulkCopyOptions.KeepIdentity))
{
    if (MySql.State==ConnectionState.Closed)
    {
        MySql.Open();
    }

    s.DestinationTableName = "PCRM_Lead_Hdr";
    s.NotifyAfter = 10000;

    #region Comment
    s.ColumnMappings.Clear();

    #region ColumnMapping
    s.ColumnMappings.Add("ClientID", "ClientID");
    s.ColumnMappings.Add("LeadID", "LeadID");
    s.ColumnMappings.Add("Company_Name", "Company_Name");
    s.ColumnMappings.Add("Website", "Website");
    s.ColumnMappings.Add("EmployeeCount", "EmployeeCount");
    s.ColumnMappings.Add("Revenue", "Revenue");
    s.ColumnMappings.Add("Address", "Address");
    s.ColumnMappings.Add("City", "City");

    s.ColumnMappings.Add("State", "State");
    s.ColumnMappings.Add("ZipCode", "ZipCode");
    s.ColumnMappings.Add("CountryId", "CountryId");

    s.ColumnMappings.Add("Phone", "Phone");
    s.ColumnMappings.Add("Fax", "Fax");
    s.ColumnMappings.Add("TimeZone", "TimeZone");
    s.ColumnMappings.Add("SicNo", "SicNo");
    s.ColumnMappings.Add("SicDesc", "SicDesc");

    s.ColumnMappings.Add("SourceID", "SourceID");
    s.ColumnMappings.Add("ResearchAnalysis", "ResearchAnalysis");
    s.ColumnMappings.Add("BasketID", "BasketID");
    s.ColumnMappings.Add("PipeLineStatusId", "PipeLineStatusId");

    s.ColumnMappings.Add("SurveyId", "SurveyId");
    s.ColumnMappings.Add("NextCallDate", "NextCallDate");
    s.ColumnMappings.Add("CurrentRecStatus", "CurrentRecStatus");
    s.ColumnMappings.Add("AssignedUserId", "AssignedUserId");
    s.ColumnMappings.Add("AssignedDate", "AssignedDate");
    s.ColumnMappings.Add("ToValueAmt", "ToValueAmt");
    s.ColumnMappings.Add("Remove", "Remove");
    s.ColumnMappings.Add("Release", "Release");

    s.ColumnMappings.Add("Insert_Date", "Insert_Date");
    s.ColumnMappings.Add("Insert_By", "Insert_By");
    s.ColumnMappings.Add("Updated_Date", "Updated_Date");
    s.ColumnMappings.Add("Updated_By", "Updated_By");

    #endregion
    #endregion

    s.WriteToServer(sourceTable);

    s.Close();

    MySql.Close();
}

Upvotes: 20

Views: 37151

Answers (7)

Dickster

Reputation: 3009

I would go with the staging idea; however, here is my approach to handling the case-sensitive nature of the mappings. Happy to be critiqued on my LINQ.

using (SqlConnection connection = new SqlConnection(conn_str))
{
    connection.Open();
    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = string.Format("[{0}].[{1}].[{2}]", targetDatabase, targetSchema, targetTable);
        var targetColumnsAvailable = GetSchema(conn_str, targetTable).ToArray();
        foreach (DataColumn column in dt.Columns)
        {
            if (targetColumnsAvailable.Select(x => x.ToUpper()).Contains(column.ColumnName.ToUpper()))
            {
                var tc = targetColumnsAvailable.Single(x => String.Equals(x, column.ColumnName, StringComparison.CurrentCultureIgnoreCase));
                bulkCopy.ColumnMappings.Add(column.ColumnName, tc);
            }
        }

        // Write from the source to the destination.
        bulkCopy.WriteToServer(dt);
        bulkCopy.Close();
    }
}

and the helper method

private static IEnumerable<string> GetSchema(string connectionString, string tableName)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlCommand command = connection.CreateCommand())
    {
        command.CommandText = "sp_columns";
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.Add("@table_name", SqlDbType.NVarChar, 384).Value = tableName;

        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                yield return (string)reader["column_name"];
            }
        }
    }
}

Upvotes: 1

Asad

Reputation: 419

One of the reasons is that SqlBulkCopy is case sensitive. Follow these steps:

  1. First, find your column in the source table using the `Contains` method in C#.
  2. Once your destination column matches a source column, get that column's index and use the source column's name in SqlBulkCopy.

For example:

// Get the columns from the source table
string sourceTableQuery = "Select top 1 * from sourceTable";
DataTable dtSource = SQLHelper.SqlHelper.ExecuteDataset(transaction, CommandType.Text, sourceTableQuery).Tables[0]; // I use SqlHelper to execute the query; any ADO.NET approach works

for (int i = 0; i < destinationTable.Columns.Count; i++)
{
    // Check if the destination column exists in the source table
    // (the Contains method is not case sensitive)
    if (dtSource.Columns.Contains(destinationTable.Columns[i].ColumnName))
    {
        // Once the column matches, get its index
        int sourceColumnIndex = dtSource.Columns.IndexOf(destinationTable.Columns[i].ColumnName);
        // Use the source table's column name rather than the destination
        // table's, so the mapping avoids the case-sensitivity problem
        bulkCopy.ColumnMappings.Add(dtSource.Columns[sourceColumnIndex].ColumnName, dtSource.Columns[sourceColumnIndex].ColumnName);
    }
}
bulkCopy.WriteToServer(destinationTable);
bulkCopy.Close();

Upvotes: 2

jocheng

Reputation: 371

I thought a long time about answering... Even if the column names match exactly, including case, you will get the same error if the data types differ. So check both the column names and their data types.
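
One way to check both at once, before calling WriteToServer, is to pull the destination's schema with a zero-row query and diff it against the source DataTable. This is only a sketch; sourceTable, the table name, and connStr are assumptions carried over from the question:

// Sketch: diff the source DataTable against the destination schema.
// sourceTable, the table name and connStr are assumed from the question.
using (var conn = new SqlConnection(connStr))
using (var cmd = new SqlCommand("SELECT TOP 0 * FROM PCRM_Lead_Hdr", conn))
{
    conn.Open();
    using (var reader = cmd.ExecuteReader(CommandBehavior.SchemaOnly))
    {
        DataTable destSchema = reader.GetSchemaTable();
        foreach (DataColumn src in sourceTable.Columns)
        {
            DataRow[] match = destSchema.Select("ColumnName = '" + src.ColumnName + "'");
            if (match.Length == 0)
                Console.WriteLine("Missing in destination: " + src.ColumnName);
            else if ((Type)match[0]["DataType"] != src.DataType)
                Console.WriteLine("Type mismatch on " + src.ColumnName);
        }
    }
}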

P.S.: staging tables are definitely the way to import.

Upvotes: 0

Tony Basallo

Reputation: 3084

The answer by Marc would be my recommendation (use a staging table). This ensures that, as long as your source doesn't change, you'll have fewer issues importing in the future.

However, in my experience, you can check the following issues:

  • the column names match in source and destination
  • the column types match

If you think you did this and still have no success, you can try the following:

  1. Allow nulls in all columns in your table
  2. Comment out all column mappings
  3. Rerun, adding one column at a time, until you find where your issue is (see the sketch below)

That should bring out the bug.
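
Step 3 can be scripted. Below is a rough sketch that retries the copy with one more mapping each time, inside a transaction that is always rolled back so the probing never commits test rows; columnNames, sourceTable, and ConStr are assumptions carried over from the question:

// Sketch: probe mappings one at a time to find the offending column.
// columnNames, sourceTable and ConStr are assumed from the question.
var mapped = new List<string>();
using (var conn = new SqlConnection(ConStr))
{
    conn.Open();
    foreach (string name in columnNames)
    {
        mapped.Add(name);
        using (SqlTransaction tx = conn.BeginTransaction())
        using (var probe = new SqlBulkCopy(conn, SqlBulkCopyOptions.KeepIdentity, tx))
        {
            probe.DestinationTableName = "PCRM_Lead_Hdr";
            foreach (string m in mapped)
                probe.ColumnMappings.Add(m, m);
            try
            {
                probe.WriteToServer(sourceTable);
            }
            catch (InvalidOperationException ex)
            {
                Console.WriteLine("First failing column: " + name + " - " + ex.Message);
                break;
            }
            finally
            {
                tx.Rollback(); // never keep the probe rows
            }
        }
    }
}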

Upvotes: 2

Tareq

Reputation: 1417

I've encountered the same problem while copying data from access to SQLSERVER 2005 and i found that the column mappings are case sensitive on both data sources regardless of the databases sensitivity.

Upvotes: 40

Rich

What I have found is that every column in the input must match a column in the table. The table can have more columns than the input and it will still load; if the input has columns the table lacks, you'll receive the error.

Upvotes: 0

Marc Gravell

Reputation: 1064244

Well, is it right? Do the column names exist on both sides?

To be honest, I've never bothered with mappings. I like to keep things simple: I tend to have a staging table on the server that looks like the input, then I SqlBulkCopy into the staging table, and finally run a stored procedure to move the data from the staging table into the actual table (a sketch follows the list below); advantages:

  • no issues with live data corruption if the import fails at any point
  • I can put a transaction just around the SPROC
  • I can have the bcp work without logging, safe in the knowledge that the SPROC will be logged
  • it is simple ;-p (no messing with mappings)
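
Here is a minimal sketch of that flow. The staging table (Lead_Hdr_Staging) and stored procedure (usp_Merge_Lead_Hdr) are hypothetical names, not something from the question:

// Sketch of the staging pattern; table and procedure names are hypothetical.
using (var conn = new SqlConnection(ConStr))
{
    conn.Open();

    // 1. Bulk copy into a staging table that mirrors the input exactly,
    //    so no column mappings are needed.
    using (var bulk = new SqlBulkCopy(conn))
    {
        bulk.DestinationTableName = "Lead_Hdr_Staging";
        bulk.WriteToServer(sourceTable);
    }

    // 2. Let a stored procedure (with its own transaction) move the data
    //    into the real table; a failure never corrupts live data.
    using (var cmd = new SqlCommand("usp_Merge_Lead_Hdr", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.ExecuteNonQuery();
    }
}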

As a final thought: if you are dealing with bulk data, you can get better throughput using an IDataReader (since it is a streaming API, whereas DataTable is a buffered API). For example, I tend to hook CSV imports up using CsvReader as the source for a SqlBulkCopy. Alternatively, I have written shims around XmlReader to present each first-level element as a row in an IDataReader - very fast.
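
Assuming the LumenWorks CsvReader (which implements IDataReader), that hookup looks roughly like this; the file and table names are placeholders:

// Sketch: stream a CSV into SqlBulkCopy via IDataReader (nothing is
// buffered in memory). Assumes the LumenWorks CsvReader; names are placeholders.
using (var csv = new CsvReader(new StreamReader("leads.csv"), true))
using (var bulk = new SqlBulkCopy(ConStr))
{
    bulk.DestinationTableName = "Lead_Hdr_Staging";
    bulk.WriteToServer(csv);
}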

Upvotes: 22
