punkish

Reputation: 15278

what is the right way to generate ollama embeddings?

In the embedding models documentation, the suggested way to generate embeddings is

ollama.embeddings({
    model: 'mxbai-embed-large',
    prompt: 'Llamas are members of the camelid family',
})

but I don't see the above syntax anywhere in the ollama-js documentation. Where is ollama.embeddings() documented? What is the canonical way to generate ollama embeddings, especially with multiple inputs?

Additionally, the ollama-js documentation says: "The Ollama JavaScript library's API is designed around the Ollama REST API".

But the REST API uses { model: …, input: … } as input while the example provided above uses { model: …, prompt: … } as input (prompt instead of input). Why this confusing inconsistency?
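For reference, a minimal sketch of the two REST payload shapes the question is contrasting (model name and texts here are just examples): the legacy /api/embeddings endpoint takes prompt and returns a single embedding, while the newer /api/embed endpoint takes input, which may be a string or an array of strings, and returns embeddings (plural):

```javascript
// Sketch of the two request payload shapes (model/texts are example values).

// Legacy: POST /api/embeddings -> responds with { embedding: number[] }
const legacyPayload = {
  model: "mxbai-embed-large",
  prompt: "Llamas are members of the camelid family",
};

// Current: POST /api/embed -> responds with { embeddings: number[][] },
// one vector per input string; `input` may also be a single string.
const currentPayload = {
  model: "mxbai-embed-large",
  input: [
    "Llamas are members of the camelid family",
    "Camels are also camelids",
  ],
};

console.log(Object.keys(legacyPayload)); // [ 'model', 'prompt' ]
console.log(Object.keys(currentPayload)); // [ 'model', 'input' ]
```

So the { model, prompt } example on the embedding-models page matches the legacy endpoint, while { model, input } matches the current one.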

Upvotes: 0

Views: 427

Answers (2)

punkish

Reputation: 15278

Diving into the source code, it seems that ollama.embed() has superseded ollama.embeddings(). Time for Ollama to update its web pages.

Update: turns out I was correct
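To illustrate, a hedged sketch of using embed() with multiple inputs (model name and texts are example values, not from the docs). The key difference from the old embeddings() call: input replaces prompt, accepts an array, and the response field is embeddings (an array of vectors) rather than embedding:

```javascript
// A tiny pure helper illustrating the embed() response shape:
// embed() resolves to { embeddings: number[][] }, one vector per input.
function firstEmbedding(response) {
  return response.embeddings[0];
}

// Live usage (commented out; requires `npm install ollama` and a running
// server; model name is an example):
//
// import ollama from "ollama";
// const response = await ollama.embed({
//   model: "mxbai-embed-large",
//   input: [
//     "Llamas are members of the camelid family",
//     "Camels are also camelids",
//   ],
// });
// response.embeddings.length === 2, one vector per input string

// Offline demonstration of the helper against a mocked response:
const mock = { embeddings: [[0.1, 0.2], [0.3, 0.4]] };
console.log(firstEmbedding(mock));
```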

Upvotes: 0

M.Ali El-Sayed

Reputation: 1779

It seems to me there is confusion between LangChain embeddings and ollama.embed, so in your Node project root run

 npm install ollama --save 

then use something like the following code

import { Ollama } from "ollama";

// OLLAMA_API_URL and OLLAMA_MODEL_NAME are placeholders -- set them to
// your server URL (e.g. "http://localhost:11434") and your model name.
const ollama = new Ollama({ host: OLLAMA_API_URL });

async function textToEmbeddingOllama(text) {
  try {
    const response = await ollama.embed({
      model: OLLAMA_MODEL_NAME,
      input: text,
      truncate: false,
      keep_alive: "1.5h",
    });
    console.log(response);
    // embed() returns { embeddings: number[][] }; for a single input
    // string, the vector we want is the first (and only) entry.
    return response.embeddings[0];
  } catch (error) {
    console.error("Error generating embeddings with Ollama:", error);
    throw error;
  }
}

Upvotes: 0
