Reputation: 607
I'm experiencing really slow data ingestion in OrientDB using the Blueprints Java API.
Specifically, I'm loading ~1M nodes and 3M edges from several CSV files using plocal
mode and the OrientGraphNoTx
class (unfortunately I couldn't use the ETL, since it doesn't let me read a file containing edges between existing nodes).
The code is written in Scala and runs for approximately an hour and a half.
The schema of the database contains 5 vertex classes, 7 edge classes, and 6 indexes. The attributes I use to create edges are indexed with UNIQUE_HASH_INDEXes.
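For reference, each lookup attribute is indexed roughly like this (the class and property names here are illustrative, not my real schema):

    // Create a unique hash index on the attribute used for edge lookups
    graph.command(new OCommandSQL(
      "CREATE INDEX Person.id ON Person (id) UNIQUE_HASH_INDEX")).execute()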
Creating edges between existing nodes was the most time-consuming operation (probably because there are so many edges); the code I used is below.
Does anybody have any idea how to optimise it?
import java.nio.charset.Charset
import java.nio.file.{Files, Paths}

import scala.collection.JavaConversions._ // lets us iterate the Java CSVParser in a for-comprehension

import com.orientechnologies.orient.core.exception.OCommandExecutionException
import com.orientechnologies.orient.core.sql.OCommandSQL
import com.tinkerpop.blueprints.impls.orient.OrientGraphNoTx
import org.apache.commons.csv.CSVFormat

/**
 * Adds edges to the graph.
 * Assumes edgesPath points to a CSV file with format (from, to).
 */
def addEdges(edgesPath: String,
             fromTable: String, fromAttribute: String,
             toTable: String, toAttribute: String,
             edgeType: String, graph: OrientGraphNoTx) {
  logger.info(s"Adding edges from '$edgesPath'...")
  val in = Files.newBufferedReader(Paths.get(edgesPath), Charset.forName("utf-8"))
  val records = CSVFormat.DEFAULT
    .withHeader("from", "to")
    .withSkipHeaderRecord(hasHeader) // hasHeader is a field of the enclosing class
    .parse(in)
  var errors = 0
  for (r <- records) {
    val (src, target) = (r.get("from"), r.get("to"))
    if (src != "" && target != "") {
      try {
        // One CREATE EDGE command per CSV row; both subqueries resolve the
        // endpoints through the UNIQUE_HASH_INDEXes on the lookup attributes.
        graph.command(new OCommandSQL(s"CREATE EDGE $edgeType FROM (" +
          s"SELECT FROM $fromTable WHERE $fromAttribute = '$src') " +
          s"TO (SELECT FROM $toTable WHERE $toAttribute = '$target')")).execute()
      } catch {
        case e: OCommandExecutionException => errors += 1
      }
    } //if
  } //for
  in.close()
  if (errors > 0)
    logger.warn(s"Couldn't create $errors edges due to missing sources/targets or internal errors")
  logger.info("done.")
} //addEdges
Upvotes: 1
Views: 104
Reputation: 1949
If you are working in plocal and you need a one-off batch import, try disabling the WAL (write-ahead log) for your importer:
OGlobalConfiguration.USE_WAL.setValue(false);
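For example, wrapped around the import like this (a minimal sketch; the database path and the arguments to the question's addEdges are placeholders):

    import com.orientechnologies.orient.core.config.OGlobalConfiguration
    import com.tinkerpop.blueprints.impls.orient.OrientGraphNoTx

    // Disable the write-ahead log *before* opening the plocal database
    OGlobalConfiguration.USE_WAL.setValue(false)
    val graph = new OrientGraphNoTx("plocal:/path/to/databases/mydb") // placeholder URL
    try {
      addEdges("edges.csv", "Person", "id", "Company", "id", "WorksFor", graph) // illustrative arguments
    } finally {
      graph.shutdown()
      // Restore the default once the batch import is done
      OGlobalConfiguration.USE_WAL.setValue(true)
    }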
Upvotes: 2