pepr

Reputation: 20792

SQL: How to alter all varchar columns in all tables to nvarchar?

I need to convert all varchar columns in about 40 tables (filled with data) to nvarchar columns. The conversion is planned to happen on a dedicated MS SQL Server used only for this purpose, and the result is then to be moved to Azure SQL.

Where should the conversion be done: on the old SQL Server, or after the move, on Azure SQL?

According to Remus Rusanu's answer https://stackoverflow.com/a/8157951/1346705, new nvarchar columns are created in the process, and the old varchar columns are dropped. The space can be reclaimed with DBCC CLEANTABLE or with ALTER TABLE ... REBUILD. Are the dropped varchar columns packed into the backup file, or does the backup/restore also remove the dropped columns?
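For illustration, the two cleanup options look like this (MyDb and dbo.MyTable are made-up names):

```sql
-- Hypothetical names; reclaims the space left by the dropped varchar columns.
DBCC CLEANTABLE (N'MyDb', N'dbo.MyTable');
-- Alternatively, rebuild the table (and its clustered index, if any):
ALTER TABLE dbo.MyTable REBUILD;
```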

Can the process be somehow automated using a universal SQL script? Or is it necessary to write a script for each individual table?

Context: We are a 3rd party with respect to the enterprise information system. Our product reads from the information system's SQL database and presents the data in a way that would otherwise be expensive to implement in the IS. The enterprise information system is now being migrated to a new version that is to run on Azure SQL. The database of the IS has been changed heavily, and one of the changes was to abandon the old 8-bit text encoding (varchar) and use Unicode (nvarchar) instead. Our system was also used for collecting manually typed data, using the same encoding as the old IS.

Migration is to be done by taking an old-style backup (SqlCmd producing xxx.bak files) and restoring it on another good old SQL Server. Then we run a script that removes all the tables, views, and stored procedures that can be reconstructed from the IS. One of the main reasons is that the SQL code uses features that are not accepted by SqlPackage.exe, the new backup tool that produces the xxx.bacpac file. The bacpac file is then restored into Azure SQL.
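For completeness, the pipeline looks roughly like this (server and database names are made up):

```sql
-- On the old server: produce the physical backup.
BACKUP DATABASE [ErpDb] TO DISK = N'C:\backups\ErpDb.bak' WITH INIT;

-- On the staging server: restore it.
RESTORE DATABASE [ErpDb] FROM DISK = N'C:\backups\ErpDb.bak' WITH REPLACE;

-- After dropping the objects SqlPackage cannot handle, export the bacpac:
-- SqlPackage.exe /Action:Export /SourceServerName:staging
--     /SourceDatabaseName:ErpDb /TargetFile:C:\backups\ErpDb.bacpac
```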

Upvotes: 0

Views: 233

Answers (2)

TheGameiswar

Reputation: 28930

Where should the conversion be done: on the old SQL Server, or after the move, on Azure SQL?

I would do it on the local SQL Server first. Running this on the Azure database might cause you to run into issues like hitting your DTU limits or disk I/O throttling.

Are the dropped varchar columns packed into the backup file, or does the backup/restore also remove the dropped columns?

The space won't be released back to the filesystem, and a backup doesn't process free space, so you will not see much change there. You might want to read more on DBCC CLEANTABLE, though, before proceeding.

Can the process be somehow automated using a universal SQL script? Or is it necessary to write a script for each individual table?

It can be automated: you can use dynamic SQL to read the column types from the system catalog and generate the ALTER statements, as shown in the sketch below. You will also have to check whether any of those columns are part of indexes; if so, you have to drop those indexes first.
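A minimal sketch of that approach, assuming none of the varchar columns are computed or referenced by indexes, constraints, or statistics (those would have to be dropped and recreated around the ALTER):

```sql
-- Generate one ALTER COLUMN statement per varchar column in the database.
-- For varchar, sys.columns.max_length is the length in characters (-1 = max).
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'ALTER TABLE ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
    + N' ALTER COLUMN ' + QUOTENAME(c.name)
    + N' nvarchar(' + CASE c.max_length WHEN -1 THEN N'max'
                      ELSE CAST(c.max_length AS nvarchar(10)) END + N')'
    + CASE WHEN c.is_nullable = 1 THEN N' NULL' ELSE N' NOT NULL' END
    + N';' + CHAR(13)
FROM sys.columns c
JOIN sys.tables  t  ON t.object_id   = c.object_id
JOIN sys.schemas s  ON s.schema_id   = t.schema_id
JOIN sys.types   ty ON ty.user_type_id = c.user_type_id
WHERE ty.name = 'varchar';

PRINT @sql;                    -- review the generated script first
-- EXEC sys.sp_executesql @sql;  -- then run it
```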

Upvotes: 1

Dan Guzman

Reputation: 46231

I suggest making the schema changes beforehand on the old instances. Even if you don't bother cleaning up space with DBCC CLEANTABLE or ALTER ... REBUILD, the resultant bacpac size will be the same because, unlike a physical backup/restore, a bacpac file is just a compressed package of schema and data.

Consider using SQL Server Data Tools (SSDT) to facilitate the schema changes. It will take care of all the dependencies (constraints, indexes, etc.) that are a challenge for a "universal" T-SQL solution. SSDT generally generates a migration script that employs temp tables for such schema changes, so the end result won't have wasted space in your old database; a sketch of that pattern follows below. However, you will need enough unused space in the database to hold the old and new objects side by side.
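For illustration, a sketch of the table-rebuild pattern SSDT typically generates (dbo.Customer and its columns are made up; identity columns, foreign keys, and indexes need extra steps):

```sql
BEGIN TRANSACTION;

-- Create the replacement table with the nvarchar column.
CREATE TABLE dbo.tmp_Customer (
    Id   int           NOT NULL PRIMARY KEY,
    Name nvarchar(100) NOT NULL
);

-- Copy the data; varchar values convert implicitly to nvarchar.
INSERT INTO dbo.tmp_Customer (Id, Name)
SELECT Id, Name
FROM dbo.Customer;

-- Swap the tables.
DROP TABLE dbo.Customer;
EXECUTE sp_rename N'dbo.tmp_Customer', N'Customer';

COMMIT;
```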

Upvotes: 1
