PavelPraulov

Reputation: 629

Performance of SQL inserts with Liquibase

In our project we are using Liquibase 4.6.1 to install the cumulative (initial) state of the DB and the test data via SQL scripts with many INSERTs. We do this installation for both MySQL and Oracle. For MySQL it is fine, because we can use multi-row inserts like:

INSERT INTO `${schema-user.name}`.`TABLE`
( `FIELD_1`, ...,  `FIELD_N` )
VALUES
( 0, ..., 0 ),
( 99, ..., 99 );

That is why the installation of schema A takes about 5 seconds on a local PC with a MySQL instance in a Docker container.

However, there is no multi-row INSERT ... VALUES syntax for Oracle, so in general we just insert rows one by one. And that is quite painful: on the same PC, with an Oracle instance inside a container, it takes about 30 seconds.

Here is what I have tried:

  1. INSERT ALL:
INSERT ALL
    INTO "${schema-user.name}".TABLE (FIELD_1,..., FIELD_N)
    VALUES (1, .., N)
    ...
    INTO "${schema-user.name}".TABLE (FIELD_1,..., FIELD_N)
    VALUES (1, .., N)
SELECT 1 FROM DUAL
/
  2. INSERT ... SELECT from a WITH clause:
insert /*+ APPEND NOLOGGING */ into "${schema-user.name}".TABLE (FIELD_1,..., FIELD_N)
with vals as (
    select 1, .., N from dual union all
    select 1, .., N from dual
    )
select * from vals
/

The approaches above do not give any improvement.

A specific feature of schema A is that its tables mostly have no indexes or constraints except the PK, and there are no triggers on the tables. So I don't see a problem of many indexes/triggers impacting the performance here.

From Liquibase's logs I can see that LB reads the SQL file and then does something for quite a long time before it starts to apply the actual INSERTs, like the following:

[2022-02-10 17:56:03] FINE [liquibase.changelog] Reading ChangeSet: cumulative/cumulative.root.yml::1643287602-install-test-data-schema-a::ppraulov
[2022-02-10 17:56:03] FINE [liquibase.resource] Closing duplicate stream for file:/home/ppraulov/path/schema-a.test-data.sql
[2022-02-10 17:56:03] FINE [liquibase.configuration] No configuration value for liquibase.fileEncoding found
[2022-02-10 17:56:03] FINE [liquibase.configuration] Configuration liquibase.fileEncoding is using the default value of UTF-8
<pause some seconds>
[2022-02-10 17:53:44] FINE [liquibase.executor] Executing with the 'jdbc' executor
[2022-02-10 17:53:44] FINE [liquibase.executor] 1 row(s) affected
[2022-02-10 17:53:44] FINE [liquibase.database] Executing Statement: INSERT INTO "schema-a"."TABLE"
...

As a last resort we have the option of installing the test data from CSV files, which is faster; however, it is still far from the desired level of 5 seconds that we get for the MySQL installation.
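
For reference, a CSV-based changeSet via Liquibase's loadData change type would look roughly like this (the changeSet id, file path and column names below are just placeholders, not taken from our actual changelog):

databaseChangeLog:
  - changeSet:
      id: install-test-data-schema-a-csv
      author: ppraulov
      changes:
        - loadData:
            # schema property resolved by Liquibase, same as in the SQL scripts
            schemaName: ${schema-user.name}
            tableName: TABLE
            # placeholder path to the exported CSV file
            file: cumulative/schema-a.table.csv
            columns:
              - column:
                  name: FIELD_1
                  type: NUMERIC
              - column:
                  name: FIELD_N
                  type: NUMERIC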

So my question is: what are possible ways to improve the performance of the installation on Oracle using SQL scripts?

Upvotes: 1

Views: 694

Answers (1)

thatjeffsmith

Reputation: 22447

If you want to use Liquibase via our Oracle toolkit, i.e. SQLcl, we allow you to batch-load rows into your table from a changeLog using the LOAD command, which rips through a CSV file.

I talk about this in detail here

Concepts: SQLcl is free. We have taken the community edition of Liquibase and enhanced the Oracle support. It uses the 4.6.x library, but also has access to ALL of the features of SQLcl, a more modern take on SQL*Plus.

So if you wanted to use the SQLcl (from SQL Developer) features, like generating DDL or loading rows into a table, that would look like this:

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog 
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog" 
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
        xmlns:n0="http://www.oracle.com/xml/ns/dbchangelog-ext" 
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog 
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.6.xsd">
        <changeSet id="abcd1234567890" author="Generated" failOnError="false"   runAlways="true" >
                <n0:runOracleScript objectName="MY_SCRIPT" objectType="SCRIPT" ownerName="FUNKY" sourceType="STRING" replaceIfExists="false">
                        <n0:source><![CDATA[
cd C:\liquibase\load_table
set sqlformat csv
set feedback off
spool locations.csv
select * from hr.locations;
spool off
create table locations as select * from hr.locations where 1=2;
load locations locations.csv
commit;
                        ]]></n0:source>
                </n0:runOracleScript>
        </changeSet>
</databaseChangeLog>

And since it's batching the rows via JDBC, it will be MUCH faster than row-by-row inserts.
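
To see why batching helps, here is roughly the standard JDBC batch pattern that this boils down to (a minimal sketch only; the connection URL, credentials, table and column names are made-up placeholders, and you need the Oracle JDBC driver on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsertSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- adjust to your environment.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521/XEPDB1", "schema-a", "secret")) {
            con.setAutoCommit(false);
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO \"TABLE\" (FIELD_1, FIELD_N) VALUES (?, ?)")) {
                for (int i = 0; i < 1000; i++) {
                    ps.setInt(1, i);
                    ps.setInt(2, i);
                    ps.addBatch();     // queue the row client-side instead of sending it now
                }
                ps.executeBatch();     // send all queued rows in one round trip
            }
            con.commit();
        }
    }
}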

Disclaimer: I'm a product manager at Oracle and SQLcl is one of my products.

Upvotes: 2
