How to run LangChain Ollama with an ngrok URL?


I ran a script to get the ngrok url:

import asyncio
import os

# Point LD_LIBRARY_PATH at the system NVIDIA libraries so Ollama can use the GPU
os.environ.update({'LD_LIBRARY_PATH': '/usr/lib64-nvidia'})

async def run_process(cmd):
  print('>>> starting', *cmd)
  p = await asyncio.create_subprocess_exec(
      *cmd,
      stdout=asyncio.subprocess.PIPE,
      stderr=asyncio.subprocess.PIPE,
  )

  async def pipe(lines):
    async for line in lines:
      print(line.strip().decode('utf-8'))

  await asyncio.gather(
      pipe(p.stdout),
      pipe(p.stderr),
  )

# Register the ngrok auth token (top-level await assumes a notebook such as Colab)
await asyncio.gather(
  run_process(['ngrok', 'config', 'add-authtoken', 'mytoken'])
)

# Start the Ollama server and expose port 11434 through an ngrok tunnel
await asyncio.gather(
    run_process(['ollama', 'serve']),
    run_process(['ngrok', 'http', '--log', 'stderr', '11434']),
)
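For context, here is a minimal sketch of how the public tunnel URL could be read once ngrok is up, assuming the agent's local API at http://127.0.0.1:4040/api/tunnels is reachable (get_ngrok_url is just an illustrative helper, not part of the original script):

import json
import urllib.request

def get_ngrok_url():
    # The local ngrok agent lists its active tunnels as JSON
    with urllib.request.urlopen('http://127.0.0.1:4040/api/tunnels') as resp:
        tunnels = json.load(resp)['tunnels']
    # public_url looks like https://xxxx.ngrok-free.app
    return tunnels[0]['public_url']

url = get_ngrok_url()
print(url)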

After that, I ran export OLLAMA_HOST=<ngrok url> and ollama pull llama2 in the terminal on my Mac.
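For example (the hostname below is a placeholder; passing a full URL in OLLAMA_HOST is an assumption about the client version):

export OLLAMA_HOST=https://xxxx.ngrok-free.app
ollama pull llama2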

Finally, I ran the code below using Python:

from langchain.llms import Ollama  # langchain_community.llms in newer versions

ollama = Ollama(base_url=url, model="llama2")
print(ollama("why is the sky blue"))

But it gave a 404 error.

I also tried installing ngrok via Python and setting the auth token, expecting it to connect to the URL, but it still gave me a 404 error.
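(As a sanity check, not part of the original attempt: Ollama exposes a plain HTTP API, and GET /api/tags lists the pulled models, so hitting it through the tunnel shows whether the 404 comes from ngrok/Ollama rather than from LangChain. The URL below is a placeholder.)

import requests

url = "https://xxxx.ngrok-free.app"  # placeholder for the real ngrok URL

# /api/tags returns the models the server has pulled; a 200 here means
# the tunnel and the Ollama server are both reachable
resp = requests.get(f"{url}/api/tags")
print(resp.status_code)
print(resp.text)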

1 Answer

Vesman Martin:

I faced the same issue too, and I'm still having trouble with the Ollama server: at first it was not responding, but I am able to communicate with it now. Try opening port 11434 with sudo ufw allow 11434/tcp on the machine hosting Ollama. If that does not work, try the following:

sudo systemctl stop ollama.service
sudo nano /etc/systemd/system/ollama.service
# add an Environment line under the [Service] section:

[Service]
Environment="OLLAMA_HOST=0.0.0.0"

sudo systemctl daemon-reload
sudo systemctl restart ollama.service
ngrok http --log stderr 11434
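After the restart you can confirm the server is listening locally before going through ngrok; the root endpoint of a running Ollama server replies with a short status message:

curl http://localhost:11434
# expected output: Ollama is running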

Now use the ngrok URL in your code or wherever you need it. However, I am still getting this error:

raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))