Connection error in LangChain with Llama 2 model downloaded locally

    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate/ (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x00000214F6282A50>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))

I have downloaded a Llama model from the Hugging Face Hub. Is this the correct way to use a locally downloaded model with LangChain? I'm not sure, so can anyone check and confirm? I also can't understand where the error above comes from.

from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_community.llms.ollama import Ollama

model_path = "model\\llama-2-7b-chat.ggmlv3.q8_0.bin"
llm = Ollama(model=model_path)
prompt = PromptTemplate.from_template("Generate a blog post about {topic}.")
chain1 = LLMChain(llm=llm, prompt=prompt)
topic = "Future of graphic design for beginners"
output = chain1.invoke({"topic": topic})
print(output)
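A note on what the traceback likely means, as a hedged sketch rather than a definitive answer: the `Ollama` LangChain wrapper is an HTTP client that talks to a locally running Ollama server on `localhost:11434`. `WinError 10061` ("connection actively refused") means nothing is listening on that port, i.e. the Ollama server is not running. The `model` argument is also expected to be the name of a model the Ollama server serves (e.g. `"llama2"` after running `ollama pull llama2`), not a path to a GGML file on disk. The helper below is an illustrative, hypothetical check (the function name `ollama_running` is my own) that you could run before building the chain to get a clearer error message:

```python
import socket

def ollama_running(host="localhost", port=11434, timeout=1.0):
    """Return True if something is listening on the Ollama port.

    This only checks TCP reachability; it does not verify that the
    listener is actually an Ollama server.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if not ollama_running():
        # Assumed remedy: start the server first, e.g. `ollama serve`
        # (or launch the Ollama desktop app), then `ollama pull llama2`.
        print("Ollama server not reachable on localhost:11434")
    else:
        # With the server up, pass a served model NAME, not a file path:
        # llm = Ollama(model="llama2")
        print("Ollama server reachable")
```

If the goal is to run a downloaded GGML file directly without Ollama, a different LangChain integration such as `LlamaCpp` (backed by `llama-cpp-python`) is the usual route, though recent versions of that library expect GGUF rather than GGML v3 files.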