ImprobMyster

Reputation: 17

Is Array of Objects Access Time O(1)?

I know that accessing an element of an array takes O(1) time when all elements are of one type, using the formula address(array[n]) = start address of array + n * sizeOf(type). But suppose you have an array of objects, and those objects could have any number of fields, including nested objects. Can we still consider the access time to be constant?

Edit: I am mainly asking about Java, but I would like to know if there is a difference if I choose another mainstream language like Python, C++, or JavaScript.

For example, in the code below:

class tryInt {
    int a;
    int b;
    String s;

    public tryInt() {
        a = 1;
        b = 0;
        s = "Sdaas";
    }
}

class tryobject {
    // nested class, giving tryobject a different layout than tryInt
    public class tryObject1 {
        int a;
        int b;
        int c;
    }

    String sss;

    public tryobject() {
        tryObject1 o = new tryObject1();
        sss = "dsfsdf";
    }
}

public class Main {
    public static void main(String[] args) {
        System.out.println("Hello World!");
        Object[] arr = new Object[5];
        arr[0] = new tryInt();     // smaller object
        arr[1] = new tryobject();  // larger object
        System.out.println(arr[0]);
        System.out.println(arr[1]);
    }
}

Since a tryInt object should take less space than a tryobject object, how can the array still use the formula address(array[n]) = start address of array + n * sizeOf(type)? The element types are no longer the same, so it seems this formula should/will fail.

Upvotes: 1

Views: 1815

Answers (4)

O. Jones

Reputation: 108676

The answer to your question is it depends.

If it's possible to random-access the array when you know the index you want, then yes, it's an O(1) operation.

On the other hand, if each item in the array is a different length, or if the array is stored as a linked list, it's necessary to start looking for your element at the beginning of the array, skipping over elements until you find the one corresponding to your index. That's an O(n) operation.

In the real world of working programmers and collections of data, this O(x) stuff is inextricably bound up with the way the data is represented.

Many people reserve the word array to mean a randomly accessible O(1) collection. The pages of a book are an array. If you know the page number you can open the book to the page. (Flipping the book open to the correct page is not necessarily a trivial operation. You may have to go to your library and find the right book first. The analogy applies to multi-level computer storage ... hard drive / RAM / several levels of processor cache)

People use list for a sequentially accessible O(n) collection. The sentences of text on a page of a book are a list. To find the fifth sentence, you must read the first four.

I mention the meaning of the words list and array here for an important reason for professional programmers. Much of our work is maintaining existing code. In our justifiable rush to get things done, sometimes we grab the first collection class that comes to hand, and sometimes we grab the wrong one. For example, we might grab a list O(n) rather than an array O(1) or a hash O(1, maybe). The collection we grab works well for our tests. But, boom!, performance falls over just when the application gets successful and scales up to holding a lot of data. This happens all the time.
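To make the "grabbed the wrong collection" trap concrete, here is a minimal Java sketch (Java, since the question uses it). Both collections implement the same `List` interface, so the code compiles and tests fine either way, but the cost of indexed access is completely different:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class CollectionChoiceDemo {
    public static void main(String[] args) {
        List<Integer> arrayList = new ArrayList<>();
        List<Integer> linkedList = new LinkedList<>();
        for (int i = 0; i < 100_000; i++) {
            arrayList.add(i);
            linkedList.add(i);
        }

        // Same call, very different cost:
        int a = arrayList.get(50_000);   // O(1): direct index arithmetic
        int b = linkedList.get(50_000);  // O(n): walks node-by-node from the nearer end
        System.out.println(a == b);      // true; only the access cost differs
    }
}
```

With small test data both versions look fine; the `LinkedList` version only "falls over" once the list grows large, which is exactly the scaling failure described above.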

To remedy that kind of problem we need a practical understanding of these access issues. I once inherited a project with a homegrown hashed dictionary class that took O(n³) time when inserting lots of items into the dictionary. It took a lot of digging to get past the snazzy collection-class documentation to figure out what was really going on.

Upvotes: 3

templatetypedef

Reputation: 372814

In Java, the Object type is a reference to an object rather than an object itself. That is, a variable of type Object can be thought of as a pointer that says “here’s where you should go to find your Object” rather than “I am an actual, honest-to-goodness Object.” Importantly, the size of this reference - the number of bytes used up - is the same regardless of what type of thing the Object variable refers to.

As a result, if you have an Object[], then the cost of indexing into that array is indeed O(1), since the entries in that array are all the same size (namely, the size of an object reference). The sizes of the objects being pointed at might not all be the same, as in your example, but the pointers themselves are always the same size and so the math you’ve given provides a way to do array indexing in constant time.
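A small sketch of this point: every slot in the `Object[]` below holds a reference of identical size, even though the objects referred to are wildly different in size, so `arr[i]` is plain index arithmetic in all three cases. (`Small`, `Large`, and the class names here are illustrative, not from the question.)

```java
public class ReferenceArrayDemo {
    static class Small { int a; }
    static class Large { long[] payload = new long[1000]; }

    public static void main(String[] args) {
        Object[] arr = new Object[3];
        arr[0] = new Small();
        arr[1] = new Large();
        arr[2] = "a string";

        // Each access computes base + index * referenceSize,
        // independent of the size of the referenced object.
        System.out.println(arr[1].getClass().getSimpleName()); // Large
    }
}
```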

Upvotes: 1

Lajos Arpad

Reputation: 76574

In general, an array denotes a fixed-size memory range storing elements of the same size. Under this usual concept of an array, if objects are members of the array, then under the hood the array stores object references, and looking up the i'th element means finding the reference/pointer it contains, with O(1) complexity; resolving the address that reference points to is the language's job.

However, there are arrays which do not comply with this definition. For example, in JavaScript you can easily add items to an array, which suggests that JavaScript arrays are somewhat different from a fixed-size allocated range of same-sized elements. Also, in JavaScript you can add elements of any type to an array. So, in general I would say the complexity is O(1), but there are quite a few important exceptions to this rule, depending on the technology.

Upvotes: 0

Chris Beck

Reputation: 16204

The answer depends on context.

It's really common in textbooks to treat array access as O(1) because it simplifies analysis. And in fairness, it is O(1) CPU instructions on today's architectures.

But:

  • As the dataset grows toward infinity, it no longer fits in memory. If the "array" is implemented as a database structure spread across multiple machines, you'll end up with a tree structure and probably logarithmic worst-case access times. If you don't care about data size going to infinity, then big-O notation may not be the right fit for your situation.
  • On real hardware, memory accesses are not all equal: there are many layers of caches, and cache misses cost hundreds or thousands of cycles. The O(1) model for memory access tends to ignore that.
  • In theory work, random access machines access memory in O(1), but Turing machines cannot. Hierarchical cache effects tend to be ignored; some models, like the transdichotomous RAM, try to account for them.
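The cache point can be seen even in plain Java. The sketch below (illustrative, not a rigorous benchmark) traverses the same 2-D array twice: row-major order touches memory sequentially, while column-major order jumps far ahead on every access. Both perform the same number of "O(1)" array accesses and compute the same sum, yet on real hardware the strided version is typically several times slower because of cache misses:

```java
public class CacheOrderDemo {
    public static void main(String[] args) {
        int n = 2048;
        int[][] m = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                m[i][j] = i + j;

        long rowMajor = 0, colMajor = 0;
        for (int i = 0; i < n; i++)        // sequential: m[i][0], m[i][1], ...
            for (int j = 0; j < n; j++)
                rowMajor += m[i][j];
        for (int j = 0; j < n; j++)        // strided: m[0][j], m[1][j], ...
            for (int i = 0; i < n; i++)
                colMajor += m[i][j];

        System.out.println(rowMajor == colMajor); // true; only the speed differs
    }
}
```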

In short, this is a property of your model of computation. There are many valid and interesting models of computation and what to choose depends on your needs and your situation.

Upvotes: 0
