Reputation: 752
I have a database with over 8 million records which I need to process in particular ways, and the processing is written in Java. After looking some things up, I found several related posts.
This is my code that returns the items stored in the Tags column of my MySQL database:
public ResultSet getAllTags() {
    String query = "SELECT Tags FROM dataset";
    ResultSet rs = null;
    try {
        connection = ConnectionFactory.getConnection(DATABASE);
        preparedStatement = connection.prepareStatement(query, ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_READ_ONLY);
        preparedStatement.setFetchSize(Integer.MIN_VALUE);
        rs = preparedStatement.executeQuery(query);
        // following line is for testing, to see what comes out of the resultset
        System.out.println("output: " + rs.getString(1));
        return rs;
    } catch (Exception ex) {
        ex.printStackTrace();
        return null;
    } finally {
        closeAll();
    }
}
Here I return the ResultSet so that I can process each row in an rs.next() loop. However, at the line rs = preparedStatement.executeQuery(query); it starts to eat all of my computer's free memory (I work on Mac OS X with 8 GB of RAM; with only Eclipse open I have about 5 GB free, and while the application runs it drops below 100 MB free), forcing me to shut down the database connection and the application. So I assume this can be called a memory leak?
Can someone explain what I'm doing wrong and why this issue occurs even though I follow the instructions found on other pages dealing with a similar number of records?
Upvotes: 4
Views: 1934
Reputation: 592
Did you consider positive values for the fetch size, such as 256, 512, 1024 or 2048? I would expect setting a negative fetch size to have no effect; however, this can vary by driver implementation, so you should verify the actual behavior in the driver documentation. For reference, here is the driver's setFetchSize implementation (a minimal sketch using a positive value follows the driver source below):
public void setFetchSize(int rows) throws SQLException {
    synchronized (checkClosed().getConnectionMutex()) {
        if (((rows < 0) && (rows != Integer.MIN_VALUE))
                || ((this.maxRows != 0) && (this.maxRows != -1) && (rows > this.getMaxRows()))) {
            throw SQLError.createSQLException(
                    Messages.getString("Statement.7"), //$NON-NLS-1$
                    SQLError.SQL_STATE_ILLEGAL_ARGUMENT, getExceptionInterceptor()); //$NON-NLS-1$ //$NON-NLS-2$
        }
        this.fetchSize = rows;
    }
}
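A minimal sketch of this suggestion, reusing the table and column names from the question. Whether a positive fetch size actually results in batched fetching depends on the driver; with MySQL Connector/J it typically also requires useCursorFetch=true on the connection (see the other answer), so treat this as something to measure, not a guaranteed fix:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PositiveFetchSizeSketch {
    // Reads the Tags column with a moderate, positive fetch size instead of Integer.MIN_VALUE.
    public static void printTags(Connection connection) throws Exception {
        try (PreparedStatement ps = connection.prepareStatement(
                "SELECT Tags FROM dataset",
                ResultSet.TYPE_FORWARD_ONLY,
                ResultSet.CONCUR_READ_ONLY)) {
            ps.setFetchSize(1024); // experiment with 256/512/1024/2048 and measure memory use
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}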
Upvotes: 1
Reputation: 691695
The only thing you're doing wrong is using a stupid database driver (MySQL's), which by default reads the whole result set into memory.
Try using the useCursorFetch and defaultFetchSize properties described in http://dev.mysql.com/doc/connector-j/en/connector-j-reference-configuration-properties.html to avoid that, and you should be able to iterate through the rows without fetching everything into memory (not tested, though).
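A minimal sketch of that approach; only the two connection properties come from the documentation linked above, while the host, schema and credentials are placeholders:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CursorFetchSketch {
    public static void main(String[] args) throws Exception {
        // useCursorFetch=true asks the driver to use a server-side cursor;
        // defaultFetchSize controls how many rows are retrieved per round trip.
        try (Connection connection = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/mydb?useCursorFetch=true&defaultFetchSize=1000",
                     "user", "password");
             PreparedStatement ps = connection.prepareStatement("SELECT Tags FROM dataset");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // process one row at a time
            }
        }
    }
}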
Note that the line
System.out.println("output: " + rs.getString(1));
will throw an exception, since you haven't called next() on the result set yet. Also note that if closeAll() closes the connection, the caller won't be able to iterate through the result set, since it will be closed. You should do the iteration before closing the connection.
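Here is a minimal sketch of that restructuring, reusing ConnectionFactory and DATABASE from the question; the Consumer callback is just one possible design, and try-with-resources replaces the explicit closeAll():
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.function.Consumer;

// Processes each Tags value inside the method, before the connection is closed,
// instead of returning the ResultSet to the caller.
public void processAllTags(Consumer<String> handler) {
    String query = "SELECT Tags FROM dataset";
    try (Connection connection = ConnectionFactory.getConnection(DATABASE);
         PreparedStatement ps = connection.prepareStatement(
                 query, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
        ps.setFetchSize(Integer.MIN_VALUE); // stream rows, as described in the documentation quoted below
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                handler.accept(rs.getString(1));
            }
        }
    } catch (Exception ex) {
        ex.printStackTrace();
    }
}
A caller could then pass a callback, for example processAllTags(tag -> System.out.println(tag));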
Note that the documentation of the driver says:
By default, ResultSets are completely retrieved and stored in memory. In most cases this is the most efficient way to operate, and due to the design of the MySQL network protocol is easier to implement. If you are working with ResultSets that have a large number of rows or large values, and cannot allocate heap space in your JVM for the memory required, you can tell the driver to stream the results back one row at a time.
To enable this functionality, create a Statement instance in the following manner:
stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
java.sql.ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(Integer.MIN_VALUE);
But you used TYPE_SCROLL_SENSITIVE and not TYPE_FORWARD_ONLY.
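For illustration, here is the prepareStatement call from the question adjusted to match the documented streaming requirements; this is only a sketch of that one change:
preparedStatement = connection.prepareStatement(
        query,
        ResultSet.TYPE_FORWARD_ONLY,
        ResultSet.CONCUR_READ_ONLY);
preparedStatement.setFetchSize(Integer.MIN_VALUE);
// execute the prepared statement itself, without passing the SQL string again
rs = preparedStatement.executeQuery();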
Upvotes: 5