Reputation: 21542
I'm currently doing some tests and I noticed the following:

select field1 from table1

will result in an index fast full scan when field1 is the primary key, and thus a low cost (in my case it is 4690), whereas

select field2 from table1

will result in a table access full (there's no constraint or index on field2, yet even with a regular index the result is the same), with a cost of 117591.
I'm aware of the gain when indexes/constraints are involved in JOIN/WHERE clauses, but in my case nothing is filtered: I don't understand why the PK column should be faster when I am retrieving all the rows anyway...
Is it because of the uniqueness? Tom says that a unique index is the same as a conventional index, structurally, which really makes me wonder why selecting the PK would cost less than any other column.
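For reference, the plans can be inspected with EXPLAIN PLAN and DBMS_XPLAN; a minimal sketch using the placeholder names from the question:

```sql
-- Sketch: show the optimizer's plan for each query
-- (table1/field1/field2 are the placeholder names above).
EXPLAIN PLAN FOR select field1 from table1;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

EXPLAIN PLAN FOR select field2 from table1;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```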
Thanks for enlightening me :-)
rgds.
Upvotes: 1
Views: 290
Reputation: 231821
A single-column b-tree index does not store entries for rows where the indexed column is NULL. So if you have an index on field2 but field2 allows NULL, Oracle can't scan the index without risking incorrect results: any rows with a NULL field2 would be missing. A full table scan is therefore the only valid way for Oracle to retrieve the field2 value for every row in table1. If you add a NOT NULL constraint to field2, Oracle should at least be able to consider doing a full scan of the index.
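A minimal sketch of that change, using the placeholder names from the question:

```sql
-- With field2 nullable, this index alone can't answer the query,
-- because rows where field2 IS NULL have no index entry.
CREATE INDEX table1_field2_idx ON table1 (field2);

-- Declaring the column NOT NULL guarantees every row has an index entry,
-- so an index fast full scan becomes a legal alternative to a full table scan.
ALTER TABLE table1 MODIFY (field2 NOT NULL);
```

If the column genuinely must allow NULLs, a common workaround is a composite index that appends a NOT NULL column or a constant (e.g. on (field2, 0)), which forces an entry for every row.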
Of course, whether the optimizer chooses to use the index (and the cost it ultimately assigns to it) depends on the statistics you've gathered on both the index and the table. If your statistics are inaccurate, the optimizer's cost estimates will be inaccurate, and the plan it generates is likely to be inefficient. That's one of the reasons people are usually advised to be cautious about putting too much credence in Oracle's estimate of a plan's cost: if you're looking at a plan, it's likely because you suspect it is inefficient, which implies you can't rely on the cost. You're generally much better served looking at the cardinality estimates for each step and determining whether they make sense given your distribution of data.
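For completeness, a sketch of refreshing the statistics mentioned above with DBMS_STATS (placeholder table name; CASCADE also gathers stats on the table's indexes):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,       -- current schema
    tabname => 'TABLE1',   -- placeholder table from the question
    cascade => TRUE        -- gather index statistics as well
  );
END;
/
```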
Upvotes: 4