Reputation: 2927
I know this question has been asked before (example), and I do agree that one query with a join is faster than performing a separate query for each record returned by the first query.
However, since a join tends to generate redundant fields, will this slow down the network?
Let's say I have a table HOTEL, and each hotel has a number of images in a table HOTEL_IMAGE. HOTEL has 20 fields. Performing a join on HOTEL_IMAGE will repeat those 20 fields for every hotel image. Will this query still be faster over the network?
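To make the concern concrete, here is a minimal sketch (hypothetical schema and data, using SQLite through Python's sqlite3 for illustration) showing how a join repeats the hotel columns once per image row:

```python
import sqlite3

# Hypothetical schema: one hotel with 3 images. The join repeats every
# HOTEL column once per HOTEL_IMAGE row in the result set.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE HOTEL (ID INTEGER PRIMARY KEY, Name TEXT, City TEXT);
    CREATE TABLE HOTEL_IMAGE (ID INTEGER PRIMARY KEY, HotelID INTEGER, ImageFile TEXT);
    INSERT INTO HOTEL VALUES (1, 'Grand Hotel', 'Vienna');
    INSERT INTO HOTEL_IMAGE VALUES (101, 1, 'a.jpg'), (102, 1, 'b.jpg'), (103, 1, 'c.jpg');
""")
rows = conn.execute("""
    SELECT h.ID, h.Name, h.City, i.ID, i.ImageFile
    FROM HOTEL h JOIN HOTEL_IMAGE i ON i.HotelID = h.ID
""").fetchall()
for row in rows:
    print(row)
# The hotel columns (1, 'Grand Hotel', 'Vienna') appear in all 3 rows;
# that repetition is the redundant data the question is about.
```

With 20 hotel fields instead of 3, each extra image row would carry all 20 again, which is the traffic being asked about.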
Upvotes: 6
Views: 4918
Reputation: 22524
That depends a lot on your actual data, but from what I have seen, if you have a well-parameterized DB with fresh statistics, it is much better to put the join in SQL and let the DB figure out what to do.
Anyway, DB queries are, in my opinion, the first thing you should profile. It is no coincidence that every good DBMS ships with plenty of performance-measuring tools. And you need to profile with data as close to your actual data as possible (a recent copy of your production environment is best).
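As a sketch of that kind of profiling, most databases expose a query-plan tool (EXPLAIN in MySQL/PostgreSQL, execution plans in SQL Server); below is SQLite's EXPLAIN QUERY PLAN driven from Python, with a hypothetical schema:

```python
import sqlite3

# Hypothetical tables and index; EXPLAIN QUERY PLAN shows how the DB
# intends to execute the join before you run it against real data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE HOTEL (ID INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE HOTEL_IMAGE (ID INTEGER PRIMARY KEY, HotelID INTEGER);
    CREATE INDEX idx_image_hotel ON HOTEL_IMAGE (HotelID);
""")
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT * FROM HOTEL h JOIN HOTEL_IMAGE i ON i.HotelID = h.ID
""").fetchall()
for step in plan:
    # The last column is the human-readable plan step, e.g. a scan of
    # HOTEL followed by an index lookup on HOTEL_IMAGE.
    print(step[-1])
```

The exact wording of the plan output varies between SQLite versions, but the point stands: check what the optimizer does with your real schema and statistics rather than guessing.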
Upvotes: 2
Reputation: 7681
As a solution to avoid duplication in the joined data, you can return multiple recordsets from a single query, if your database supports it. The first recordset returns the master records, and the second returns the detail records plus the key fields from the master query.
select ID, Name, ... from HOTEL where <.... your criteria>;
select h.ID as HotelID, i.ID, i.Description, i.ImageFile, .... from HOTEL_IMAGE i
join HOTEL h on h.ID = i.HotelID and ( <.... same criteria for HOTEL> )
I'm not sure whether the result of the master query would be cached so that the second select can reuse it, but it will save traffic for sure.
We are using this approach for queries that tend to return multi-level joined results.
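A sketch of this two-recordset approach, adapted to SQLite via Python (SQLite cannot return two recordsets from one call, so the two selects run separately; databases such as SQL Server can batch them; table and column names are hypothetical):

```python
import sqlite3

# Hypothetical schema and data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE HOTEL (ID INTEGER PRIMARY KEY, Name TEXT, City TEXT);
    CREATE TABLE HOTEL_IMAGE (ID INTEGER PRIMARY KEY, HotelID INTEGER, ImageFile TEXT);
    INSERT INTO HOTEL VALUES (1, 'Grand Hotel', 'Vienna'), (2, 'Plaza', 'Paris');
    INSERT INTO HOTEL_IMAGE VALUES
        (101, 1, 'a.jpg'), (102, 1, 'b.jpg'), (201, 2, 'c.jpg');
""")

# Recordset 1: master rows only; each hotel's fields are sent once.
hotels = conn.execute(
    "SELECT ID, Name FROM HOTEL WHERE City = 'Vienna'"
).fetchall()

# Recordset 2: detail rows plus the key back to the master recordset,
# filtered with the same criteria as the master query.
images = conn.execute("""
    SELECT h.ID AS HotelID, i.ID, i.ImageFile
    FROM HOTEL_IMAGE i
    JOIN HOTEL h ON h.ID = i.HotelID AND h.City = 'Vienna'
""").fetchall()

# Stitch the detail rows to their master row on the client side.
images_by_hotel = {}
for hotel_id, image_id, image_file in images:
    images_by_hotel.setdefault(hotel_id, []).append(image_file)
```

The wide master fields cross the network once per hotel instead of once per image, at the cost of a small client-side merge.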
Upvotes: 0
Reputation: 3436
Don't use select *; select only the columns you need. If you do this, a join will be faster. (I'm not sure why you would ever want to do this with two queries; you have to make two connections to your database, etc.)
Upvotes: 1