Reputation: 495
I am trying to write a class to store millions of 3D coordinates. At first, I tried to use a fixed-size array to store the coordinate data:
#ifndef DUMPDATA_H
#define DUMPDATA_H
#define ATOMNUMBER 2121160
#include <string>
using namespace std;

class DumpData
{
public:
    DumpData(string filename);
    double m_atomCoords[ATOMNUMBER][3];
};
#endif // DUMPDATA_H
The program compiled, but I got segfaults when I ran it on an Ubuntu 14.04 system (64-bit). So I changed the array to a vector by declaring:
vector < vector <double> > m_atomCoords;
Then the program worked. I am just wondering: are there limitations on declaring very large arrays in a class?
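For context, here is a minimal sketch of how the vector-based member is set up (the resize call and the omitted file reading are just to show the intent; the real constructor parses the dump file):

#include <string>
#include <vector>

#define ATOMNUMBER 2121160

class DumpData
{
public:
    DumpData(std::string filename)
    {
        // Give every atom a 3-element inner vector up front; the
        // vector's storage lives on the heap, not on the stack.
        m_atomCoords.resize(ATOMNUMBER, std::vector<double>(3, 0.0));
        // ... open 'filename' and read the coordinates into m_atomCoords ...
    }
    std::vector< std::vector<double> > m_atomCoords;
};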
Upvotes: 4
Views: 1896
Reputation: 42924
The stack is a very precious and scarce resource, so I'd just use the heap to allocate big data.
If you have an array of 3D coordinates, instead of using a vector<vector<double>>, I'd just define a class to represent a 3D point, using either three separate double data members or a raw array of three doubles, e.g.:
class Point3D {
private:
    double m_vec[3]; // X, Y and Z
    // or:
    // double x;
    // double y;
    // double z;

public:
    double X() const {
        return m_vec[0];
        // or:
        // return x;
    }

    // ... other setters/getters, etc.
};
and then I'd just use a std::vector<Point3D> as a data member inside your DumpData class.
(A Point3D class defined as above has less overhead than a std::vector<double>, and also offers a higher level of semantics, so it's a better choice.)
With the default allocator, std::vector will allocate the memory for the huge number of Point3Ds from the heap (not from the stack), which works well. This detail is also hidden from the client of DumpData, making a nice, simple public interface for the DumpData class.
Upvotes: 2
Reputation: 81916
In general, the stack has a limited size.
This will likely cause a stack overflow:
int main() {
    DumpData x("dump.txt"); // "dump.txt" is only an example filename
}
While these won't:
#include <memory>

int main() {
    // "dump.txt" is only an example filename for the DumpData constructor.
    static DumpData x("dump.txt");                   // static storage, not the stack
    auto y = std::make_unique<DumpData>("dump.txt"); // heap allocation (std::make_unique requires C++14)
}
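To put numbers on it: the array member alone is 2,121,160 × 3 × 8 bytes ≈ 48 MB, while the default stack limit on Linux is typically around 8 MB, so an automatic DumpData simply cannot fit on the stack. A quick sanity check (assuming the question's header is named DumpData.h):

#include <cstdio>
#include "DumpData.h" // the header from the question (file name assumed)

int main() {
    // Roughly 2,121,160 * 3 * sizeof(double) bytes, i.e. about 48 MB.
    std::printf("sizeof(DumpData) = %zu bytes\n", sizeof(DumpData));
}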
Upvotes: 6