Daniel Jankowski

Reputation: 15

Vespa.ai query service not available

I've deployed a Vespa application with Docker, using the following schema:

schema bert_search {
    document bert_search {

        field title type string {
            indexing: attribute | summary
        }
        field hash type string {
            indexing: attribute | index
        }
        field plain_text type string {
            indexing: summary | index
            index: enable-bm25
        }
        field text_embedding type tensor<int8>(x[128]) {
            indexing: attribute | index
            attribute {
                distance-metric: angular
            }
            index {
                hnsw {
                    max-links-per-node: 16
                    neighbors-to-explore-at-insert: 500
                }
            }
        }
    }

    fieldset default {
        fields: plain_text
    }
    rank-profile embedding_similarity inherits default {
        first-phase {
            expression: closeness(text_embedding)
        }
    }
    rank-profile bm25 inherits default {
        first-phase {
            expression: bm25(plain_text)
        }
    }
    rank-profile bm25-embedding-similarity inherits default {
        first-phase {
            expression: bm25(plain_text) + closeness(text_embedding)
        }
    }
}
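
For reference, the simplest query I plan to run against this schema is a plain BM25 search over the default fieldset, roughly like this (the query text is only an illustration):

# Keyword search ranked by the bm25 rank profile defined above
vespa query 'yql=select * from bert_search where userQuery()' \
    'query=example search text' \
    'ranking=bm25'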

After deployment I'm able to feed Vespa with the data, but I'm not able to make a query. In the Vespa container I can see these messages:

runserver(configserver) running with pid: 26
1688376323.746318       43dbbea6e5d1    28      -       vespa-start-services    info    Too low vm.max_map_count [65530] - trying to increase it to 262144
1688376323.746327       43dbbea6e5d1    28      -       vespa-start-services    warning Could not increase vm.max_map_count - current value [65530] too low, should be at least 262144
Starting config proxy using tcp/43dbbea6e5d1:19070 as config source(s)
Waiting for config proxy to start
runserver(configproxy) running with pid: 89
1688376325.143058       43dbbea6e5d1    28      -       start-services  warning Could not ping configproxy: exit status 1
config proxy started after 1s (runserver pid 89)
runserver(config-sentinel) running with pid: 266
Deploying. Waiting for Vespa to start 60...

Success: Deployed .

Waiting up to 1m0s for query service to become available ...

The last message, about waiting for the query service, keeps showing indefinitely, and even after one minute there is no indication of whether the query service has failed or not.

On another machine there is no such problem and the query service comes up quickly. Both machines' specifications are very similar, and I don't think the problem is memory or other resources. Could this be a firewall problem?

The machine specification:

System: Debian 11
RAM: 32GB
CPU: 11th Gen Intel(R) Core(TM) i7-11700K @ 3.60GHz 
GPU: RTX 3080

Upvotes: 0

Views: 207

Answers (1)

Kristian Aune

Reputation: 996

"Could this be a firewall problem?"

We know some users have had issues when running Vespa locally while on a VPN, being unable to reach the query API port.
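
A quick way to narrow that down is to check the query API both from the host and from inside the container; the port 8080 mapping and the container name vespa below are assumptions, so adjust them to your setup:

# From the host: goes through Docker's port mapping, the host firewall and any VPN
curl -s http://localhost:8080/ApplicationStatus

# From inside the container: bypasses the host network entirely
docker exec vespa vespa status

If the in-container check answers but the host-side one does not, the problem is host networking (firewall, VPN, port mapping) rather than Vespa itself.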

https://docs.vespa.ai/en/jdisc/container-components.html#troubleshooting is a place to start. Run vespa log for a more detailed log dump; it should indicate whether any Vespa processes are failing to start.
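
If the Vespa CLI on the host isn't set up to reach the container, grepping the raw log file inside it is an alternative; the container name vespa and the log path below are the defaults for the vespaengine/vespa image:

# Show recent warnings and errors from the Vespa log inside the container
docker exec vespa sh -c "grep -E 'warning|error' /opt/vespa/logs/vespa/vespa.log | tail -n 50"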

To eliminate a possible port problem, you can also exec into the container and run vespa query "select * from bert_search where true".
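
For example, something like this (the container name vespa is again an assumption):

# Run a sanity-check query from inside the container, bypassing the host network
docker exec vespa vespa query "select * from bert_search where true"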

Upvotes: 0
