veradiego31

Reputation: 408

PostgreSQL query is very slow

I have a table with 300000 rows, and when I run a simple query like

   select * from diario_det;

it takes 41041 ms to return the rows. Is that normal? How can I optimize the query?

I use PostgreSQL 9.3 on CentOS 7.

Here is my table:

CREATE TABLE diario_det
(
  cod_empresa numeric(2,0) NOT NULL,
  nro_asiento numeric(8,0) NOT NULL,
  nro_secue_pase numeric(4,0) NOT NULL,
  descripcion_pase character varying(150) NOT NULL,
  monto_debe numeric(16,3),
  monto_haber numeric(16,3),
  estado character varying(1) NOT NULL,
  cod_pcuenta character varying(15) NOT NULL,
  cod_local numeric(2,0) NOT NULL,
  cod_centrocosto numeric(4,0) NOT NULL,
  cod_ejercicio numeric(4,0) NOT NULL,
  nro_comprob character varying(15),
  conciliado_por character varying(10),
  CONSTRAINT fk_diario_det_cab FOREIGN KEY (cod_empresa, cod_local, cod_ejercicio, nro_asiento)
      REFERENCES diario_cab (cod_empresa, cod_local, cod_ejercicio, nro_asiento) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT fk_diario_det_pc FOREIGN KEY (cod_empresa, cod_pcuenta)
      REFERENCES plan_cuenta (cod_empresa, cod_pcuenta) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION
 )
WITH (
   OIDS=TRUE
);
 ALTER TABLE diario_det
   OWNER TO postgres;

-- Index: pk_diario_det_ax

-- DROP INDEX pk_diario_det_ax;

 CREATE INDEX pk_diario_det_ax
   ON diario_det
   USING btree
   (cod_pcuenta COLLATE pg_catalog."default", cod_local, estado COLLATE pg_catalog."default");

Upvotes: 1

Views: 1420

Answers (1)

Vyktor

Reputation: 20997

Very roughly, the size of one row is 231 bytes; times 300000 rows, that's 69,300,000 bytes (~69 MB) that has to be transferred from the server to the client.

I think that 41 seconds is a bit long, but the query is bound to be slow because of the amount of data that has to be loaded from disk and transferred.
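
A quick way to check how big the table actually is on disk (pg_relation_size counts just the table, pg_total_relation_size includes indexes and TOAST data):

    SELECT pg_size_pretty(pg_relation_size('diario_det'))       AS table_size,
           pg_size_pretty(pg_total_relation_size('diario_det')) AS total_size;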

You can optimise the query by:

  • selecting just the columns you are going to use, not all of them (if you need just cod_empresa, that would reduce the total amount of transferred data to ~1.2 MB, but the server would still have to iterate through all records - slow)
  • filtering only the rows you are going to use - a WHERE clause on indexed columns can really speed the query up (see the sketch after this list)
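
A minimal sketch combining both ideas; the column list and the filter values below are hypothetical, substitute ones that match your data. The WHERE clause lines up with your pk_diario_det_ax index on (cod_pcuenta, cod_local, estado), so the planner can use it:

    SELECT nro_asiento, monto_debe, monto_haber
    FROM diario_det
    WHERE cod_pcuenta = '1.1.01'  -- hypothetical account code
      AND cod_local = 1
      AND estado = 'A';           -- hypothetical state flag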

If you want to know what is happening in your query, play around with EXPLAIN and EXPLAIN ANALYZE.
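
For example, EXPLAIN ANALYZE actually runs the query and shows where the time goes (the filter value is the same hypothetical one as above):

    EXPLAIN ANALYZE
    SELECT nro_asiento, monto_debe, monto_haber
    FROM diario_det
    WHERE cod_pcuenta = '1.1.01';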

Also, if you're running a dedicated database server, be sure to configure it properly so that it can use plenty of system resources.
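
For example, in postgresql.conf (rough starting values assuming a dedicated server with ~8 GB of RAM; tune them for your hardware):

    shared_buffers = 2GB         # ~25% of RAM is a common starting point
    effective_cache_size = 6GB   # tells the planner how much the OS can cache
    work_mem = 16MB              # memory per sort/hash operation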

Upvotes: 3
