malaverdiere

Reputation: 1537

PostgreSQL: BYTEA vs OID+Large Object?

I started an application with Hibernate 3.2 and PostgreSQL 8.4. I have some byte[] fields that were mapped as @Basic (= PG bytea) and others that got mapped as @Lob (= PG Large Object). Why the inconsistency? Because I was a Hibernate noob.
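
To make the inconsistency concrete, the mappings looked roughly like this (entity and field names are invented for this example):

    import javax.persistence.Basic;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Lob;

    @Entity
    public class Attachment {

        @Id
        private Long id;

        // Plain byte array mapping -> a PostgreSQL bytea column
        @Basic
        private byte[] thumbnail;

        // LOB mapping -> an oid column pointing into pg_largeobject
        @Lob
        private byte[] document;

        // getters and setters omitted
    }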

Now, those fields are at most 4 KB (the average is 2-3 KB). The PostgreSQL documentation says that large objects are good when the values are big, but I couldn't find what 'big' means.

I have upgraded to PostgreSQL 9.0 with Hibernate 3.6, and I was forced to change the annotation to @Type(type="org.hibernate.type.PrimitiveByteArrayBlobType"). That bug brought a potential compatibility issue to my attention, and I eventually found out that Large Objects are a pain to deal with compared to a normal field.
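
After the upgrade, the affected field in the same (invented) entity looked roughly like this:

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import org.hibernate.annotations.Type;

    @Entity
    public class Attachment {

        @Id
        private Long id;

        // The annotation Hibernate 3.6 forced me to switch to for the large-object field
        @Type(type = "org.hibernate.type.PrimitiveByteArrayBlobType")
        private byte[] document;
    }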

So I am thinking of changing all of it to bytea. But I am concerned that bytea fields are encoded in hex, so there is some overhead in encoding and decoding, and this would hurt performance.

Are there good benchmarks comparing the performance of the two? Has anybody made the switch and seen a difference?

Upvotes: 13

Views: 13317

Answers (4)

Paul Draper

Reputation: 83255

tl;dr Use bytea

...unless you need streaming or >1GB values


Bytea: A byte sequence that works like any other TOAST-able value. Limited to 1GB per value, 32TB per table.

Large object: Binary data split up into multiple rows. Supports seek, read, and write like an OS file, so operations don't require loading it all into memory at once. Limited to 4TB per value, 32TB per database.


Large objects have the following downsides:

  1. There is only one large object table per database.

  2. Large objects aren't automatically removed when the "owning" record is deleted. See the lo_manage function in the lo module (sketched after this list).

  3. Since there is only one table, large object permissions have to be handled record by record.

  4. Streaming is awkward, and client drivers support it less well than plain bytea.

  5. It's part of the system schema, so you have limited to no control over options like partitioning and tablespaces.
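
Regarding point 2: a minimal sketch of the cleanup-trigger setup, run here through plain JDBC to keep everything in one language (table, column, and connection details are invented; assumes PostgreSQL 9.1+, where the contrib lo module is packaged as an extension):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class LoManageSetup {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "secret");
                 Statement st = conn.createStatement()) {
                // The lo extension provides the lo_manage() trigger function.
                st.execute("CREATE EXTENSION IF NOT EXISTS lo");
                // Remove the underlying large object when the owning row is
                // deleted or its oid column is updated.
                st.execute("CREATE TRIGGER t_document_lo "
                        + "BEFORE UPDATE OR DELETE ON attachment "
                        + "FOR EACH ROW EXECUTE PROCEDURE lo_manage(document)");
            }
        }
    }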


I venture that 93% of real-world uses of large objects would be better served by bytea.

Upvotes: 4

Chris Travers

Reputation: 26464

Basically there are cases where each makes sense. bytea is simpler and generally preferred. The client libraries handle the decoding for you, so that's not an issue.

However, LOBs have some neat features, such as the ability to seek within them and to treat the LOB as a byte stream instead of a byte array.

"Big" means "Big enough you don't want to send it to the client all at once." Technically bytea is limited to 1GB compressed and a lob is limited to 2GB compressed, but really you hit the other limit first anyway. If it's big enough you don't want it directly in your result set and you don';t want to send it to the client all at once, use a LOB.

Upvotes: 8

Peter Eisentraut

Reputation: 36729

I don't have a comparison of large objects and bytea handy, but note that the switch to the hex output format in 9.0 was made partly because it is faster than the previous custom escape encoding. As far as text encodings of binary data go, you probably won't get much faster than what is there currently.

If that is not good enough for you, you can consider using the binary protocol between the PostgreSQL client and server. Then you basically get the data straight from disk, much like with large objects. I don't know whether the PostgreSQL JDBC driver supports that yet, but a quick search suggests not.
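
If and when the driver does support it, enabling it would presumably look something like the sketch below. The binaryTransfer URL property is the name later pgjdbc releases use for this, so treat it as an assumption for the driver versions discussed in this thread; the table and connection details are placeholders too.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class BinaryTransferDemo {
        public static void main(String[] args) throws Exception {
            // Ask the driver to use the binary wire format where it can.
            String url = "jdbc:postgresql://localhost/mydb?binaryTransfer=true";
            try (Connection conn = DriverManager.getConnection(url, "user", "secret");
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT payload FROM attachment WHERE id = ?")) {
                ps.setLong(1, 1L);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        // getBytes returns raw bytes either way; binary transfer just
                        // skips the text encode/decode step on the wire.
                        byte[] data = rs.getBytes(1);
                        System.out.println("fetched " + data.length + " bytes");
                    }
                }
            }
        }
    }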

Upvotes: 1

Frank Heikens

Reputation: 127086

But I am concerned that bytea fields are encoded in hex

bytea input can be in hex or escape format; that's your choice. Storage will be the same either way. As of version 9.0, the default output format is hex, but you can change this by editing the bytea_output parameter.
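
To illustrate: the setting only changes the text representation the server sends; a driver that understands both formats (pgjdbc does) hands you the same raw bytes either way. A small sketch, with placeholder table and connection details:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ByteaOutputDemo {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "secret");
                 Statement st = conn.createStatement()) {
                // Switch this session back to the pre-9.0 escape format if you prefer it.
                st.execute("SET bytea_output = 'escape'");

                try (ResultSet rs = st.executeQuery(
                        "SELECT payload FROM attachment LIMIT 1")) {
                    if (rs.next()) {
                        byte[] data = rs.getBytes(1); // unaffected by bytea_output
                        System.out.println("got " + data.length + " bytes");
                    }
                }
            }
        }
    }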

I haven't seen any benchmarks.

Upvotes: 5
