Question List
llama.cpp llama_cublas enabled, but only 75 MB / 6 GB of VRAM used when running ./main
183 views
Asked by djbritt
codellama generates newline character repeatedly
536 views
Asked by benna121
Unexpected Continuous Conversation from LlamaCpp Model in LangChain
523 views
Asked by Eren Kalinsazlioglu
llama-index: multiple calls to query_engine.query always gives "Empty Response"
707 views
Asked by Jamie Dixon
I am trying to integrate the LLaMA-2 model locally using Next.js and Node.js; getting error "Error parsing JSON: Error: spawn UNKNOWN"
428 views
Asked by Ankit Vashishta
Converting a TinyStories Llama model to GGUF for llama.cpp
401 views
Asked by Ammar Husain
No GPU support while running llama-cpp-python inside a docker container
2.1k views
Asked by Pratyush
LangChain with llama2: slow local inference
234 views
Asked by Muhammad Muneeb Ur Rahman
Suppress LlamaCpp stats output
606 views
Asked by sten
Deploy app with llama-cpp-python dependency on Vercel
282 views
Asked by cbacelar
How to fix 'type=value_error' when loading a wizard-vicuna model into PrivateGPT?
570 views
Asked by pol0
Could not load Llama model from path: ./Models/llama-7b.ggmlv3.q2_K.bin. Received error Llama.__init__() got an unexpected keyword argument 'input'
3.6k views
Asked by rahularyansharma
How to use decapoda-research/llama-7b-hf with LoRA fine-tuning in llama.cpp?
865 views
Asked by Khoi V
Persist VectorStoreIndex (LlamaIndex) locally
350 views
Asked by 6core
llama-cpp-python on GPU: Delay between prompt submission and first token generation with longer prompts
638 views
Asked by jhthompson12
CMAKE in requirements.txt file: Install llama-cpp-python for Mac
545 views
Asked by Maxl Gemeinderat
llama.cpp conversion of a fine-tuned HF (Hugging Face) LLaMA-2 7B model fails
239 views
Asked by Vikram Murthy
Streaming local LLM with FastAPI, Llama.cpp and Langchain
1.6k views
Asked by Maxl Gemeinderat
llama-cpp-python Log printing on Ubuntu
223 views
Asked by San Vik
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based project
1.3k views
Asked by Kaustubh Ratna