Question List
10 TechQA 2023-10-13 16:11:26
Is it possible to fine tune or use RAG on the CoreML version of Llama2?
250 views
Asked by Mike Ike
Compare two strings by meaning using LLMs
1.7k views
Asked by root
Implementation (and behavioral) differences between AutoModelForCausalLMWithValueHead and AutoModelForCausalLM?
364 views
Asked by Deshwal
How do I know the right data format for fine-tuning different LLMs?
152 views
Asked by John
CUDA OutOfMemoryError but free memory is always half of required memory in error message
328 views
Asked by olivarb
Query with my own data using langchain and pinecone
835 views
Asked by javascript-wtf
Could not find a version that satisfies the requirement python-magic-bin
420 views
Asked by Debrup Paul
Any way to improve the performance of querying a locally persisted chromadb?
599 views
Asked by mlee_jordan
Grid-based decision making with Llama 2
102 views
Asked by skvp