Question List
Is it possible to fine-tune or use RAG on the CoreML version of Llama2?
247 views
Asked by Mike Ike
Compare two strings by meaning using LLMs
1.7k views
Asked by root
Implementation (and functional) differences between AutoModelForCausalLMWithValueHead and AutoModelForCausalLM?
374 views
Asked by Deshwal
How do I determine the right data format for fine-tuning different LLMs?
147 views
Asked by John
CUDA OutOfMemoryError, but the free memory reported in the error message is always half of the required memory
328 views
Asked by olivarb
Querying my own data using langchain and pinecone
827 views
Asked by javascript-wtf
Could not find a version that satisfies the requirement python-magic-bin
424 views
Asked by Debrup Paul
Any way to improve the performance of querying a locally persisted chromadb?
605 views
Asked by mlee_jordan
Grid-based decision making with Llama 2
105 views
Asked by skvp