Reputation: 8483
endIndex returns the same value as count. Is this correct behaviour or a bug?
var ar = [1, 2, 3, 4]
ar.count // 4
ar.endIndex // 4
Upvotes: 40
Views: 10067
Reputation: 1
endIndex and count are only the same if startIndex is 0. Note that startIndex need not be zero:
let someSlice = someArray[100...200] // someSlice.startIndex is 100
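To make that concrete, here is a minimal sketch (someArray is hypothetical, just filled with enough elements to support the slice):
let someArray = Array(0...300)        // 301 elements, indices 0...300
let someSlice = someArray[100...200]  // ArraySlice over indices 100...200
someSlice.startIndex                  // 100
someSlice.count                       // 101
someSlice.endIndex                    // 201 = startIndex + count, not count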
Upvotes: 0
Reputation: 299623
There seem to be a lot of comments here that endIndex being count elements after start is counter-intuitive. I can't speak to what's intuitive to one person or another, but it's not random. It's a very sensible way for it to work.
I believe some folks are thinking of indexes like this, with the index pointing directly at the "box" containing the value:
|-----|-----|-----|-----|
|  0  |  1  |  2  |  3  |
|-----|-----|-----|-----|
   ↑                 ↑
 start              end
But this is not the right way to think about them. The correct way is this, with the index pointing to the border that you would begin reading the value from:
|-----|-----|-----|-----|
|  0  |  1  |  2  |  3  |
|-----|-----|-----|-----|
↑                       ↑
start                 end
If one begins at the start and advances count
times, one should expect to be at the end, and that's what this achieves.
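That invariant is easy to check directly; a minimal sketch using the standard index(_:offsetBy:):
let letters = ["a", "b", "c", "d"]
// Advancing count steps from startIndex lands exactly on endIndex.
letters.index(letters.startIndex, offsetBy: letters.count) == letters.endIndex // true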
Consider now the case of the empty list:
||
↑
start
end
Everything works exactly as expected. If you begin at start and advance 0 elements, you arrive at end. I say "advance" here on purpose. Indexes don't have to be integers. They just have to be something that has an "advance" operation and an end.
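String is the standard example of non-integer indexes; a short sketch of the same "advance until you hit end" walk, using opaque String.Index values:
let word = "héllo"
var i = word.startIndex          // String.Index, not an Int
while i < word.endIndex {
    print(word[i])               // each Character in turn
    i = word.index(after: i)     // "advance", no integer arithmetic
}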
If endIndex were instead the "index of the last element," what would it be for an empty list? There is no last element. Making it -1 would be a possible dodge when indexes are signed integers, but Swift doesn't require Collection indexes to be integers at all (in fact, they usually aren't; Array is very unusual). Swift could make endIndex Optional, but that makes the math even uglier.
As HangarRash notes in the comments, for Collections with Int indexes that don't start at zero (ArraySlice and sliced Data in particular), endIndex wouldn't be count - 1 in any case. For an empty slice, even offsetting with startIndex + count - 1 wouldn't be -1 any more (so you can't just check for -1 to mean "empty"), and could point into the containing Array, which would clearly be wrong. On the other hand, the rules endIndex = startIndex + count and "empty collections have startIndex == endIndex" still work identically for slices. This approach is consistent over a wide variety of cases without requiring work-arounds.
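A sketch of those two rules on an empty slice (the array literal here is just for illustration):
let numbers = [10, 20, 30]
let empty = numbers[2..<2]  // an empty ArraySlice
empty.startIndex            // 2
empty.endIndex              // 2 = startIndex + count (2 + 0)
empty.isEmpty               // true, because startIndex == endIndex
// No -1 anywhere, and endIndex never points into `numbers`.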
This insight, that the distance from the start to the end should be the count (rather than one less than the count), long predates Swift and is closely related to why many languages number arrays from zero rather than one. Dijkstra was discussing this in 1982 ("the difference between the bounds as mentioned equals the length of the subsequence"). Counter-intuitive as it seems (and I certainly find it surprising at times!), this approach has been very powerful. It's not about performance or "C uses pointers" or anything like that. It's because it makes algorithms more beautiful.
Upvotes: 8
Reputation: 118761
count is the number of items in the collection, whereas endIndex is the Index (from the Collection protocol) which is just past the end of the collection.
For Array, these are the same. For some other collections, such as ArraySlice, they are not:
let array = ["a", "b", "c", "d", "e"]
array.startIndex // 0
array.count // 5
array.endIndex // 5
let slice = array[1..<4] // elements are "b", "c", "d"
slice.startIndex // 1
slice.count // 3
slice.endIndex // 4
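One practical consequence, continuing from the snippet above: slices keep the parent array's indices, so subscripting a slice with 0 can trap. A hedged sketch:
slice[slice.startIndex]  // "b"; the slice's valid indices are 1..<4
// slice[0]              // would trap: 0 is outside the slice's range
for i in slice.indices { // iterates 1, 2, 3
    print(slice[i])      // prints "b", "c", "d"
}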
Upvotes: 42
Reputation: 43330
Array.endIndex is meant to be one past the end of the array (or the same as count) for iteration purposes, not subscripting.
let x = [1, 2, 3, 4]
var i = x.startIndex
while i < x.endIndex {
    print(x[i])
    i = x.index(after: i)
}
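Because startIndex..<endIndex is a half-open range, the same traversal can be spelled more idiomatically (a sketch, reusing x from above):
for i in x.indices {  // indices is startIndex..<endIndex
    print(x[i])
}
for element in x {    // or skip indexes entirely
    print(element)
}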
Upvotes: 17