Reputation: 523
I have an issue with mapping data retrieved via JDBi3 from a PostgreSQL query in my DAO interface.
In my Dropwizard app I have a Book DTO class which has a Many-To-Many relation with the Author and Category DTO classes, and I have a problem with mapping the queried rows onto the BookDTO class. Here are the code snippets of the DTO classes:
class BookDTO {
    private Long bookId;
    // other fields omitted for brevity
    private List<Long> authors;
    private List<Long> categories;
    // empty constructor + constructor with all fields excluding the Lists + getters + setters
}
class AuthorDTO {
    private Long authorId;
    // other fields omitted for brevity
    private List<Long> books;
    // empty constructor + constructor with all fields excluding the List + getters + setters
}
class CategoryDTO {
    private Long categoryId;
    // other fields omitted for brevity
    private List<Long> books;
    // empty constructor + constructor with all fields excluding the List + getters + setters
}
...and since I am using JDBi3 DAO interfaces for the CRUD operations, this is what my method for querying all books in the database looks like:
@Transaction
@UseRowMapper(BookDTOACMapper.class)
@SqlQuery("SELECT book.book_id AS b_id, book.title, book.price, book.amount, book.is_deleted, author.author_id AS aut_id, category.category_id AS cat_id FROM book " +
"LEFT JOIN author_book ON book.book_id = author_book.book_id " +
"LEFT JOIN author ON author_book.author_id = author.author_id " +
"LEFT JOIN category_book ON book.book_id = category_book.book_id " +
"LEFT JOIN category ON category_book.category_id = category.category_id ORDER BY b_id ASC, aut_id ASC, cat_id ASC")
List<BookDTO> getAllBooks();
...and this is what the map method of the BookDTOACMapper class looks like:
public class BookDTOACMapper implements RowMapper<BookDTO> {
    @Override
    public BookDTO map(ResultSet rs, StatementContext ctx) throws SQLException {
        final long bookId = rs.getLong("b_id");
        String title = rs.getString("title");
        double price = rs.getDouble("price");
        int amount = rs.getInt("amount");
        boolean is_deleted = rs.getBoolean("is_deleted");
        Set<Long> authorIds = new HashSet<>();
        Set<Long> categoryIds = new HashSet<>();
        // rs.getLong returns 0 when the LEFT JOIN produced NULL, hence the > 0 checks
        long authorId = rs.getLong("aut_id");
        if (authorId > 0) {
            authorIds.add(authorId);
        }
        long categoryId = rs.getLong("cat_id");
        if (categoryId > 0) {
            categoryIds.add(categoryId);
        }
        // advance over the following rows as long as they belong to the same book
        while (rs.next()) {
            if (rs.getLong("b_id") != bookId) {
                break;
            } else {
                authorId = rs.getLong("aut_id");
                if (authorId > 0) { authorIds.add(authorId); }
                categoryId = rs.getLong("cat_id");
                if (categoryId > 0) { categoryIds.add(categoryId); }
            }
        }
        final List<Long> authorIdsList = new ArrayList<>(authorIds);
        final List<Long> categoryIdsList = new ArrayList<>(categoryIds);
        return new BookDTO(bookId, title, price, amount, is_deleted, authorIdsList, categoryIdsList);
    }
}
The problem I encounter is that invoking my GET method (defined in the Resource class, which calls the getAllBooks() method from BookDAO) displays inconsistent results, while the query itself returns the proper rows.
Most questions I've managed to find on Stack Overflow, in the official JDBi3 docs and on Google Groups deal with a One-To-Many relationship and use the @UseRowReducer annotation with a class that implements LinkedHashMapRowReducer<TypeOfEntityIdentifier, EntityName>, but for this Many-To-Many case I could not find a way to implement it. Any example/suggestion is welcome. :)
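For reference, the One-To-Many examples I found look roughly like the sketch below. The class, column and getter names are just placeholders based on my schema, and it assumes a row mapper for BookDTO is registered (e.g. via @RegisterBeanMapper); I just could not work out how to apply this pattern to both of my collections at once:
import java.util.ArrayList;
import java.util.Map;

import org.jdbi.v3.core.result.LinkedHashMapRowReducer;
import org.jdbi.v3.core.result.RowView;

// placeholder reducer following the One-To-Many examples from the docs
public class BookAuthorReducer implements LinkedHashMapRowReducer<Long, BookDTO> {
    @Override
    public void accumulate(Map<Long, BookDTO> container, RowView rowView) {
        // one BookDTO per b_id, created from the current row the first time the id is seen
        BookDTO book = container.computeIfAbsent(
                rowView.getColumn("b_id", Long.class),
                id -> rowView.getRow(BookDTO.class));
        if (book.getAuthors() == null) {
            book.setAuthors(new ArrayList<>());
        }
        // collect the joined author id, if the LEFT JOIN matched anything
        Long authorId = rowView.getColumn("aut_id", Long.class);
        if (authorId != null) {
            book.getAuthors().add(authorId);
        }
    }
}
// and on the DAO method, instead of @UseRowMapper:
// @UseRowReducer(BookAuthorReducer.class)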
Thank you in advance.
Versions of used tools:
Dropwizard framework 1.3.8
PostgreSQL 11.7
Java 8
Upvotes: 1
Views: 698
Reputation: 523
As zloster mentioned in his answer, I chose the 2nd option (following this answer for Many-To-Many relationships), which was to edit the PostgreSQL query in the @SqlQuery annotation above the List<BookDTO> getAllBooks(); method. The query now uses the array_agg aggregate function in the SELECT statement to collect the related ids into an ARRAY, and looks like this:
@UseRowMapper(BookDTOACMapper.class)
@SqlQuery("SELECT b.book_id AS b_id, b.title, b.price, b.amount, b.is_deleted, ARRAY_AGG(aut.author_id) as aut_ids, ARRAY_AGG(cat.category_id) as cat_ids " +
"FROM book b " +
"LEFT JOIN author_book ON author_book.book_id = b.book_id " +
"LEFT JOIN author aut ON aut.author_id = author_book.author_id " +
"LEFT JOIN category_book ON category_book.book_id = b.book_id " +
"LEFT JOIN category cat ON cat.category_id = category_book.category_id " +
"GROUP BY b_id " +
"ORDER BY b_id ASC")
List<BookDTO> getAllBooks();
Therefore the map(..) method of the BookDTOACMapper class had to be edited and now looks like this:
@Override
public BookDTO map(ResultSet rs, StatementContext ctx) throws SQLException {
    final long bookId = rs.getLong("b_id");
    String title = rs.getString("title");
    double price = rs.getDouble("price");
    int amount = rs.getInt("amount");
    boolean is_deleted = rs.getBoolean("is_deleted");
    Set<Long> authorIds = new HashSet<>();
    Set<Long> categoryIds = new HashSet<>();
    /* rs.getArray() retrieves a java.sql.Array; calling getArray() on it
       returns an array of Objects, which is cast to an array of Long elements */
    Long[] autIds = (Long[]) (rs.getArray("aut_ids").getArray());
    Long[] catIds = (Long[]) (rs.getArray("cat_ids").getArray());
    Collections.addAll(authorIds, autIds);
    Collections.addAll(categoryIds, catIds);
    final List<Long> authorIdsList = new ArrayList<>(authorIds);
    final List<Long> categoryIdsList = new ArrayList<>(categoryIds);
    return new BookDTO(bookId, title, price, amount, is_deleted, authorIdsList, categoryIdsList);
}
Now all results are consistent; here's a screenshot of the query result in pgAdmin 4.
Upvotes: 1
Reputation: 1157
This will be too long for a comment:
while (rs.next()) { if (rs.getLong("b_id") != bookId) { break; } else {
The first if after the while is eating the row that follows the current book's rows (the bookId that was current when the row mapper was called): the rs.next() in your loop has already consumed the first row of the next book when you break, so that row's bookId, authorId, etc. are never processed (never put into the Java objects). That's why you get inconsistent results while the query itself returns proper results.
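To make this concrete, here is roughly what JDBi does with a RowMapper behind the scenes (a simplified sketch, not the actual library source):
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

import org.jdbi.v3.core.mapper.RowMapper;
import org.jdbi.v3.core.statement.StatementContext;

class JdbiIterationSketch {
    // simplified illustration of how the library drives a RowMapper over the ResultSet
    static <T> List<T> mapAll(ResultSet rs, RowMapper<T> mapper, StatementContext ctx) throws SQLException {
        List<T> results = new ArrayList<>();
        while (rs.next()) {                    // JDBi advances the cursor...
            results.add(mapper.map(rs, ctx));  // ...and hands the current row to map()
        }
        return results;
    }
}
So any row that your map() consumes with its own rs.next() and then abandons with break is never handed to map() again - JDBi's next rs.next() simply moves past it.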
So you need to revisit how you process the data. I see two paths:
1. Put the cursor back one row whenever you detect a different bookId. It is possible to achieve this with scrollable ResultSets - i.e. request a scrollable ResultSet and, before the break;, call rs.previous(). On the next call to the row mapper the processing will then start from the correct row in the result set (see the sketch at the end of this answer).
2. Change the SQL so that all the data for one book comes back in a single row - e.g. aggregate the author and category ids in the query itself - and map each row to one BookDTO without touching the cursor in the mapper.
Also take your time and check the other answers of https://dba.stackexchange.com/users/3684/erwin-brandstetter. They give invaluable insights into SQL and PostgreSQL.
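A minimal sketch of path 1, as a change inside the existing map(..) method - untested, and it assumes the statement is created with a scrollable result set (in plain JDBC that would be connection.prepareStatement(sql, ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY)); wiring that into JDBi's statement creation is not shown here:
while (rs.next()) {
    if (rs.getLong("b_id") != bookId) {
        // step back so this row (the first one of the next book) is not lost;
        // JDBi's own rs.next() will land on it and call map() for the next book
        rs.previous();
        break;
    }
    authorId = rs.getLong("aut_id");
    if (authorId > 0) { authorIds.add(authorId); }
    categoryId = rs.getLong("cat_id");
    if (categoryId > 0) { categoryIds.add(categoryId); }
}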
Upvotes: 2