Question List
Is it possible to fine tune or use RAG on the CoreML version of Llama2?
215 views
Asked by Mike Ike
Compare two strings by meaning using LLMs
1.7k views
Asked by root
Implementation (and working) differences between AutoModelForCausalLMWithValueHead vs AutoModelForCausalLM?
331 views
Asked by Deshwal
How do I know the right data format for different LLMs finetuning?
115 views
Asked by John
CUDA OutOfMemoryError but free memory is always half of required memory in error message
292 views
Asked by olivarb
Query with my own data using langchain and pinecone
788 views
Asked by javascript-wtf
Could not find a version that satisfies the requirement python-magic-bin
386 views
Asked by Debrup Paul
Any possibility to increase performance of querying chromadb persisted locally
568 views
Asked by mlee_jordan
Grid based decision making with Llama 2
64 views
Asked by skvp