ozzyzig

Reputation: 719

Postgres -> ERROR: invalid memory alloc request size 1212052384

I am getting this error in psql (10.4) when running an INSERT query on a table.

ERROR:  invalid memory alloc request size 1212052384

The data I'm trying to insert is geographic point data; judging by the file size (303MB), it's around 2-3 million points, i.e. individual records. Is this too large for a one-off INSERT? The SQL query is below; it copies JSON data from a text file and inserts it into the database. Would it use less memory to store the individual shapes as records rather than the individual points?

delete from shapes;

create temporary table temp_json (values text);
copy temp_json from '/Users/me/model/json/shapes_routes.json';
insert into shapes

select  values->>'shape_id' as shape_id,
        (CAST(values->>'shape_pt_lat' as real)) as shape_pt_lat,
        (CAST(values->>'shape_pt_lon' as real)) as shape_pt_lon,
        (CAST(values->>'shape_pt_sequence' as integer)) as shape_pt_sequence,
        (CAST(values->>'shape_dist_traveled' as real)) as shape_dist_traveled,
        values->>'route_id' as route_id

from   (
           select json_array_elements(replace(values,'\','\\')::json) as values 
           from   temp_json
       ) a;

drop table temp_json;

Upvotes: 3

Views: 5953

Answers (2)

Jasen

Reputation: 12412

300MB is fairly small really; I did a 28GB insertion earlier today (but I used COPY tablename FROM STDIN).

You could try rewriting the subquery a as a CTE instead.
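A sketch of what that CTE form might look like, reusing the table and column names from the question (note that before PostgreSQL 12, CTEs are always materialized, which changes how the intermediate result is held):

```sql
-- Same insert, with the subquery "a" rewritten as a CTE.
-- Table and column names are taken from the question.
with elems as (
    select json_array_elements(replace(values, '\', '\\')::json) as values
    from temp_json
)
insert into shapes
select values->>'shape_id' as shape_id,
       cast(values->>'shape_pt_lat' as real) as shape_pt_lat,
       cast(values->>'shape_pt_lon' as real) as shape_pt_lon,
       cast(values->>'shape_pt_sequence' as integer) as shape_pt_sequence,
       cast(values->>'shape_dist_traveled' as real) as shape_dist_traveled,
       values->>'route_id' as route_id
from elems;
```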

Upvotes: 0

Bohemian

Reputation: 425083

In most cases relational databases do not handle batch processing of large amounts of data very well. Not least among the reasons are RDBMS logging requirements, which effectively cap how much data can change in one transaction.

Moving lots of data in is an operational concern, so an operational approach is a good and logical choice.

Break your file up, externally to the DB (using command-line tools), into many smaller files of say 10K each, and load them as you would the single large file. Keep the chunking logic outside the DB.
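As a sketch of that chunking step (file names and the 10,000-line chunk size are illustrative; a sample file stands in for the real one):

```shell
# Create a small sample file standing in for shapes_routes.json,
# one JSON object per line, 25,000 lines.
seq 1 25000 | sed 's/.*/{"shape_id":"&"}/' > shapes_routes.json

# Split it into files of 10,000 lines each: chunk_aa, chunk_ab, chunk_ac.
split -l 10000 shapes_routes.json chunk_

# Each chunk can then be loaded like the single large file was, e.g.:
#   for f in chunk_*; do psql -c "\copy temp_json from '$f'"; done
ls chunk_*
```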

Upvotes: 3
