Carla

Reputation: 3390

How to start a Local langchain4j Server?

I'm trying to learn langchain4j. Some of the available examples use an OpenAI API key, but I'd prefer to download a model and experiment with it locally. Can anyone point me to examples that use a locally downloaded model? As I understand it, this could be done by starting a local model server:

import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.localai.LocalAiChatModel;
public class HowAreYou {

    public static void main(String[] args) {
        ChatLanguageModel model = LocalAiChatModel.builder()
                .baseUrl("http://localhost:8080")
                .modelName("lunademo")
                .temperature(0.9)
                .build();

        String answer = model.generate("How are you?");
        System.out.println(answer);
    }

}

However, I haven't found an example of how to start a model server from a model file. Any help? Thanks

Upvotes: 0

Views: 202

Answers (1)

Lachezar Parvov

Reputation: 1

Please check https://localai.io/

Start LocalAI: run the image with Docker to get a functional, self-hosted clone of the OpenAI API 🚀:

docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu
# Do you have an Nvidia GPU? Use one of these instead
# CUDA 11
# docker run -p 8080:8080 --gpus all --name local-ai -ti localai/localai:latest-aio-gpu-nvidia-cuda-11
# CUDA 12
# docker run -p 8080:8080 --gpus all --name local-ai -ti localai/localai:latest-aio-gpu-nvidia-cuda-12
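
Once the container is up, you can sanity-check it before wiring up langchain4j, since LocalAI exposes OpenAI-compatible REST endpoints on port 8080. A minimal sketch (the model name `gpt-4` is an assumption based on the aliases the AIO images typically preconfigure; check the output of `/v1/models` for what your instance actually serves):

```shell
# list the models the server has loaded
curl http://localhost:8080/v1/models

# send a chat completion to one of the listed models
# ("gpt-4" is an assumed alias; substitute a name from the list above)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "How are you?"}]}'
```

The model name you see in `/v1/models` is the value to pass to `.modelName(...)` in the langchain4j builder from the question.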

Upvotes: 0
