Reputation: 253
So I have the following issue. I have a Java application that connects to a Redshift database via JDBC. Executing a simple SELECT query yields a ResultSet that is not null (I checked).
Now, when I try to extract data from this ResultSet, the getter methods will sometimes return null, even though a preceding ResultSet.wasNull() returns false and a manual check in the database also reveals that the record in question does not contain a null value for that attribute.
Sample code:
private void test(){
    ResultSet rs = null;
    ResultSetMetaData rsmd = null;
    String getTable = "SELECT * from myTable";
    try(PreparedStatement prep = myConnection.prepareStatement(getTable
            , ResultSet.TYPE_FORWARD_ONLY
    )){
        rs = prep.executeQuery();
        rsmd = rs.getMetaData();
        int columns = rsmd.getColumnCount();
        while(rs.next()){ // loop over all rows
            for(int col = 1; col <= columns; col++){
                if(rs.wasNull())
                    System.out.println("NULL Value detected!");
                System.out.println("Column Name: " + rsmd.getColumnName(col));
                System.out.println("Column value: " + rs.getString(col));
            }
        }
    } catch (SQLException e) {
        e.printStackTrace();
    }
}
This works most of the time, except that for some records rs.getString(col) will simply return null (even though the data type in the DB is CHAR or VARCHAR). This does not necessarily happen for all columns: I have seen a couple of columns read and printed correctly until, eventually, a null appeared, even though wasNull() returned false.
So far, I have observed this behaviour for ResultSet.getString(), ResultSet.getDate() and ResultSet.getTimestamp(), but never when extracting any sort of number (getInt()...). In all these cases, I checked the DB to make sure the JDBC method matches the data type.
I tried using different driver versions, and I tried using rs.getBytes(col) instead, but the result remains the same: I just cannot get rid of those nulls.
I also successfully did an UNLOAD-operation, unloading the troublesome table to S3 and manually checking the exported csv files. The UNLOAD runs without any issue, the CSVs look fine to me (all values as expected).
Now I could of course manually handle those "unwanted NULLs" by skipping the records in which they occur, but I'd rather avoid glossing over this error like that without understanding its cause.
So my question would be:
Am I doing something fundamentally wrong in my code? Is there a known issue with jdbc / Redshift that could cause this? Can someone maybe point me to some sources or give me some ideas on what else I could try?
Upvotes: 2
Views: 3021
Reputation: 109146
From the javadoc of ResultSet.wasNull():

    Reports whether the last column read had a value of SQL NULL. Note that you must first call one of the getter methods on a column to try to read its value and then call the method wasNull to see if the value read was SQL NULL.
    Returns: true if the last column value read was SQL NULL and false otherwise

In other words, you should use it after reading a column. Be aware that for object types it makes no sense to call wasNull, as the object getters will already return null.

This method is intended to be used when reading columns using the primitive getters (like getInt()), because those will return 0 for null values. You can then use wasNull() to check if the value was really 0 or actually null.
In other words, change your code to:
while(rs.next()){ // loop over all rows
    for(int col = 1; col <= columns; col++){
        String value = rs.getString(col);
        if(value == null)
            System.out.println("NULL Value detected!");
        System.out.println("Column Name: " + rsmd.getColumnName(col));
        System.out.println("Column value: " + value);
    }
}
Or, if you really want to use wasNull() (but really, you shouldn't!):
while(rs.next()){ // loop over all rows
    for(int col = 1; col <= columns; col++){
        String value = rs.getString(col);
        if(rs.wasNull())
            System.out.println("NULL Value detected!");
        System.out.println("Column Name: " + rsmd.getColumnName(col));
        System.out.println("Column value: " + value);
    }
}
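The distinction above (primitive getters mapping SQL NULL to 0, with wasNull() as the only way to tell the difference) can be demonstrated without a live database. The sketch below uses a java.lang.reflect.Proxy as a hypothetical stand-in for a ResultSet whose column 1 is SQL NULL and whose column 2 holds 42; the class, method behaviour, and data are assumptions for illustration, not a real driver:

```java
import java.lang.reflect.Proxy;
import java.sql.ResultSet;

public class WasNullDemo {

    // Minimal fake ResultSet (hypothetical data): column 1 is SQL NULL,
    // column 2 contains the value 42. Only getInt() and wasNull() are handled.
    static ResultSet fakeResultSet() {
        final boolean[] lastWasNull = {false};
        return (ResultSet) Proxy.newProxyInstance(
                ResultSet.class.getClassLoader(),
                new Class<?>[] {ResultSet.class},
                (proxy, method, args) -> {
                    switch (method.getName()) {
                        case "getInt":
                            int col = (Integer) args[0];
                            lastWasNull[0] = (col == 1); // column 1 is SQL NULL
                            return col == 1 ? 0 : 42;    // primitive getter maps NULL to 0
                        case "wasNull":
                            return lastWasNull[0];
                        default:
                            throw new UnsupportedOperationException(method.getName());
                    }
                });
    }

    public static void main(String[] args) throws Exception {
        ResultSet rs = fakeResultSet();

        int a = rs.getInt(1);  // returns 0, because the column was SQL NULL
        System.out.println(a + " wasNull=" + rs.wasNull()); // 0 wasNull=true

        int b = rs.getInt(2);  // returns the real value
        System.out.println(b + " wasNull=" + rs.wasNull()); // 42 wasNull=false
    }
}
```

Note that there is no analogous ambiguity for getString(): it already returns null for SQL NULL, which is why checking the returned value directly (as in the first snippet above) is sufficient.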
Upvotes: 1