Ollama not found with Quarkus, forbidden with Firefox RESTED plugin


I'm running Ollama in Docker.
I also tried installing it as a service; the issue is the same.
I'm on Ubuntu 22.04.
I created a Quarkus project with the quarkus-langchain4j-ollama dependency.
When I run the project with mvn quarkus:dev and then hit "d", I'm on the Quarkus Dev UI.
If I go to the chat and ask a question, I get a 404 error:

"Received: 'Not Found, status code 404' when invoking: Rest Client method: 'io.quarkiverse.langchain4j.ollama.OllamaRestApi#generate'"

If I check the Ollama Logs, I can see a call to "/api/generate".
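
For reference, the extension is configured through application.properties; a minimal sketch of the relevant entries (property names per the quarkus-langchain4j-ollama extension; the values are only illustrative, and the base-url line shows the assumed default, so it can normally be omitted):

# tag of the model to request from Ollama; it must match a model that has been pulled
quarkus.langchain4j.ollama.chat-model.model-id=mistral
# where the extension's REST client sends its calls (assumed default value)
quarkus.langchain4j.ollama.base-url=http://localhost:11434
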
My Docker compose file looks like this:

services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    volumes:
      - ~/docker/data/ollama:/root/.ollama
    ports:
      - 11434:11434
    environment:
      - OLLAMA_HOST=0.0.0.0
      - OLLAMA_ORIGINS=http://0.0.0.0:11434

If I try to access it with the RESTED Firefox plugin, I get a 403 Forbidden error.
And still, I can see a call to "/api/generate" in the Ollama logs.
And if I run the following curl, it works perfectly:

curl -X POST http://localhost:11434/api/generate -d '{ "model": "mistral", "prompt":"Tell me a joke" }'
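
For comparison, adding "stream": false makes /api/generate return the whole answer as a single JSON object rather than a stream of chunks, which can make the response easier to read when testing by hand (same request, sketched):

curl -X POST http://localhost:11434/api/generate -d '{ "model": "mistral", "prompt":"Tell me a joke", "stream": false }'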

Note: if I update OLLAMA_ORIGINS, the error from the RESTEasy Reactive client in Quarkus turns into

"io.netty.channel.AbstractChannel$AnnotatedConnectException: Connexion refusée: localhost/127.0.0.1:11434" (connection refused)

There are 2 answers

BenjaminD (BEST ANSWER)

Finally, on the Quarkus side, it was all caused by a typo: in my application.properties, I typed

quarkus.langchain4j.ollama.chat-model.model-id=mistral:lastest

with an extra "s" in "latest".

With

quarkus.langchain4j.ollama.chat-model.model-id=mistral:latest

it works perfectly.
It's strange that the error was a plain 404 with no further explanation.
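
A quick way to catch this kind of typo is to ask Ollama which model tags are actually installed, since the 404 itself carries no detail (sketch, assuming the default port):

curl http://localhost:11434/api/tags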

On the Firefox plugin side, it was fixed by adding an environment variable to the Docker compose file: OLLAMA_ORIGINS=moz-extension://*, or more permissively: OLLAMA_ORIGINS=*
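
In the compose file, that corresponds to something like this (only the environment block shown, OLLAMA_HOST kept as before):

environment:
  - OLLAMA_HOST=0.0.0.0
  - OLLAMA_ORIGINS=moz-extension://*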

k33g_org

The DNS name of Ollama in the compose stack is the name of the service. So you should use these environment variables:

    environment:
      - OLLAMA_HOST=ollama
      - OLLAMA_ORIGINS=http://ollama:11434
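
Note that the ollama service name only resolves from other containers on the same compose network. If the Quarkus application itself runs as a service in the stack, it could point its base URL at that name; a sketch with hypothetical service and image names (QUARKUS_LANGCHAIN4J_OLLAMA_BASE_URL is the environment-variable form of the assumed quarkus.langchain4j.ollama.base-url property):

    services:
      quarkus-app:
        image: my-quarkus-app   # hypothetical image of the Quarkus application
        environment:
          # maps to quarkus.langchain4j.ollama.base-url
          - QUARKUS_LANGCHAIN4J_OLLAMA_BASE_URL=http://ollama:11434
        depends_on:
          - ollama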

Of course, you need to pull the model before you can query Ollama. Alternatively, you can build an image with the model preloaded: https://github.com/whales-demos/llm-in-container/blob/main/ollama-04/Dockerfile
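
For example, assuming the container is named ollama as in the compose file from the question, the model can be pulled into the running container (a sketch):

    docker exec -it ollama ollama pull mistral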

If you have an issue with the volume, try this:

    volumes:
      - ./ollama:/root/.ollama