Reputation: 84650
I'm working on setting up a project with another developer, a very experienced and capable coder whose skill and competence are not in question. Recently I sent him a demo of some work I had done, and he seemed a bit surprised that I had chosen a Firebird database. The conversation went something like this:
Him: Why did you pick Firebird? SQLite would be faster.
Me: SQLite is embedded only, and it doesn't scale well. Plus it's missing a lot of features, including stored proc support.
Him: Firebird has scalability problems too, when your database size gets beyond the amount of RAM available.
Me: What do you mean?
Him (direct quote from his email): Massive slowdowns. Apparently when the indexes + data don't fit RAM (physical RAM, not virtual RAM), it enters a "slow mode" of sorts, we've been able to alleviate it to some extent by increasing the memory usage values of FireBird conf, but then there is a risk of sudden "out of memory" failure if for some reason it can't acquire the memory (as contrarily to MSSQL or MySQL f.i., FireBird doesn't reserve the physical RAM at startup, only progressively). Also somewhere above 8 GB the slowdowns seem to stay regardless of memory, even on 24 GB machines. So we progressively migrate those to Oracle / MSSQL.
Now, as I said, this is a very smart, capable developer. On the other hand, the Firebird website claims that people are using it in production with databases over 11 TB in size, which should be practically impossible if what he says is true.
So I have to wonder, does this problem really exist in Firebird, or is he overlooking something, perhaps some configuration option he's not aware of? Is anyone familiar with the issue he's describing?
Upvotes: 1
Views: 555
Reputation: 109162
As I commented earlier, what the other developer describes could be attributed to a bug that surfaces from the combination of the filesystem cache on 64-bit Windows and the fact that Firebird reads its database file(s) with FILE_FLAG_RANDOM_ACCESS. For some reason this caused the filesystem cache not to release pages, so the cache could grow beyond the available physical memory (and eventually even beyond the available virtual memory); see this blog post for details. This issue has been fixed in Firebird 2.1.5 and 2.5.2 with CORE-3971.
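If the slowdowns stem from filesystem-cache interaction rather than that bug, the relevant knobs live in firebird.conf. A minimal sketch, assuming Firebird 2.5 (the parameter names exist in that version; the values below are purely illustrative, not recommendations for any particular workload):

```
# firebird.conf (Firebird 2.5) -- illustrative values only

# Page cache per database, measured in database pages.
# With a 16 KB page size, 65536 pages is about 1 GB of
# cache managed by Firebird itself.
DefaultDbCachePages = 65536

# Database size (in pages) above which Firebird stops using
# the OS filesystem cache and relies solely on its own page
# cache, sidestepping runaway cache growth on 64-bit Windows.
FileSystemCacheThreshold = 65536
```

Raising DefaultDbCachePages is the "increasing the memory usage values" the other developer mentions, with the same out-of-memory trade-off he describes, since Firebird acquires that memory progressively rather than reserving it at startup.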
The case studies on firebirdsql.org list several examples of databases in the tens or hundreds of gigabytes, and they don't seem to suffer from performance problems.
A company that offers Firebird recovery and performance optimization services did a test with a 1-terabyte database a while back. That page also lists three examples of relatively large Firebird databases.
Disclosure: I develop a database driver for Firebird, so I am probably a bit biased ;)
Upvotes: 5