Serge

Reputation: 1601

Redis + phpredis losing keys — memory overflow?

I'm new to Redis, testing it with PHP on a small box with just 512 MB of RAM, using the phpredis client.

I inserted 3 million integer values into a set, but the sCard() method for that set returns a count of only about 270k.

Is this a memory limit I've hit? And how can I check for errors while inserting?

The application: there are two binary files storing sequences of four-byte unsigned integers, which I want to load into Redis for a fast in-memory diff. Here's my insert method (error-checking lines omitted):

function loadToRedis($id, $filename) {
    $length = filesize($filename) / 4; // how many ids are there? Each is 4 bytes.
    $divisor = 100;                    // how many ids to insert in a single batch

    printf("Length of %s: %d 4-byte numbers\n", $filename, $length);
    $fp = fopen($filename, 'rb'); // 'b': read in binary mode
    for ($b = 0; $b < ceil($length / $divisor); $b++) { // ceil avoids a final empty batch
        $set = array($id); // first element is the key, the rest are the members
        for ($i = $b * $divisor; $i < min(($b + 1) * $divisor, $length); $i++) {
            $bytes = unpack('L', fread($fp, 4)); // 'L': 4-byte unsigned int, machine byte order
            array_push($set, array_shift($bytes));
        }

        // Equivalent to $this->redis->sAdd($id, $member1, $member2, ...)
        call_user_func_array(array($this->redis, 'sAdd'), $set);
    }
    fclose($fp);
    printf("%d items in the set named %s\n", $this->redis->sCard($id), $id);
}
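
For the error-checking part: phpredis' sAdd() returns the number of members actually added, or FALSE on failure, so each batch call inside the loop could be guarded like this (a sketch only; getLastError() availability depends on the phpredis version):

        // Sketch: check each batch for failure (replaces the bare call above).
        // Assumes sAdd() returns FALSE on error and a recent-enough phpredis.
        $added = call_user_func_array(array($this->redis, 'sAdd'), $set);
        if ($added === false) {
            printf("sAdd failed on batch %d: %s\n", $b, $this->redis->getLastError());
        }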

So, after loading the first of the two 3M-value files, the size of the first set is only about 270k, and the second file seems to never make it into Redis at all:

Length of /var/www/.../dat/OLD_26750264: 3123758 4-byte numbers
270457 items in the set named OLD_26750264
Length of /var/www/.../dat/NEW_26750264: 3125000 4-byte numbers
0 items in the set named NEW_26750264

Redis INFO output right after this:

redis_version:2.4.10
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
gcc_version:4.4.6
process_id:8416
uptime_in_seconds:1471232
uptime_in_days:17
lru_clock:1618016
used_cpu_sys:387.21
used_cpu_user:414.13
used_cpu_sys_children:0.03
used_cpu_user_children:0.32
connected_clients:1
connected_slaves:0
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:19997864
used_memory_human:19.07M
used_memory_rss:22544384
used_memory_peak:27022288
used_memory_peak_human:25.77M
mem_fragmentation_ratio:1.13
mem_allocator:jemalloc-2.2.5
loading:0
aof_enabled:0
changes_since_last_save:0
bgsave_in_progress:0
last_save_time:1379328354
bgrewriteaof_in_progress:0
total_connections_received:153
total_commands_processed:16073
expired_keys:0
evicted_keys:0
keyspace_hits:99
keyspace_misses:83
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:835
vm_enabled:0
role:master
db0:keys=2,expires=0

Upvotes: 1

Views: 1101

Answers (1)

Serge

Reputation: 1601

I figured it out: maxmemory was reached much sooner than I expected. In further tests with maxmemory = 40mb, only 1,048,600 integer values fit into a set. That's 44.62 bytes per integer on average, which is not very efficient. Most of that overhead is presumably the hashtable encoding Redis switches to once a set outgrows the compact intset representation (set-max-intset-entries, 512 entries by default).
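
In case it helps anyone else: the ceiling can be inspected and raised at runtime with CONFIG GET/SET. A minimal standalone sketch using phpredis (the 512 MB value, host, and key names are just examples):

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// Inspect the current ceiling; returns e.g. array('maxmemory' => '41943040').
$max = $redis->config('GET', 'maxmemory');
printf("maxmemory: %s bytes\n", $max['maxmemory']);

// Raise it to 512 MB, given in bytes for compatibility with older servers.
$redis->config('SET', 'maxmemory', (string)(512 * 1024 * 1024));

// Once both sets fit, the diff itself is a single server-side command:
// values present in OLD but missing from NEW.
$gone = $redis->sDiff('OLD_26750264', 'NEW_26750264');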

Upvotes: 2
