| Topic | Replies | Views | Activity |
| --- | --- | --- | --- |
| About the Models category | 0 | 4177 | August 12, 2020 |
| Testing hugging face in langchain vs code | 0 | 2 | June 12, 2025 |
| Resubmit Model Card for LLaMA 3.1 / 3.2 on Hugging Face | 1 | 7 | June 11, 2025 |
| Looking to fine-tune FLUX Schnell; want to hire a professional | 0 | 7 | June 11, 2025 |
| PyTorch-Image models | 4 | 38 | June 11, 2025 |
| Training models | 3 | 74 | June 10, 2025 |
| Can Small Models Reflect? Prompt-only Metacognition in LLaMA 3B (No Fine-Tuning) | 1 | 27 | June 10, 2025 |
| Wav2Vec2 WER remains 1.00 and returns blank transcriptions | 14 | 2853 | June 10, 2025 |
| Model File Lookup by SHA256 Hash | 8 | 153 | June 10, 2025 |
| Fine-Tuning LLMs on Large Proprietary Codebases | 5 | 149 | June 10, 2025 |
| FLUX.1-dev in n8n | 1 | 21 | June 10, 2025 |
| Having issues accessing tokenizer_config.json for T5 | 3 | 51 | June 10, 2025 |
| [Help Needed] Suicide Risk Detection from Long Clinical Notes (Few-shot + ClinicBERT approaches struggling) | 2 | 25 | June 10, 2025 |
| Transformer model works locally but not in Docker container | 6 | 2206 | June 10, 2025 |
| Does using any embedding model for fine-tuning or inference affect model performance? | 1 | 21 | June 10, 2025 |
| QuantumAccel – Symbolic Quantum Logic Gates for Efficient Computation | 1 | 14 | June 10, 2025 |
| Model Tuning and Re-Tuning Problems | 2 | 33 | June 10, 2025 |
| Error in generating model output: InferenceClient.chat_completion() got an unexpected keyword argument 'last_input_token_count' | 2 | 41 | June 10, 2025 |
| Why can't I reproduce benchmark scores from papers like Phi, Llama, or Qwen? Am I doing something wrong or is this normal? | 2 | 32 | June 10, 2025 |
| Segment Anything Model (SAM) ValueError: Invalid image type | 2 | 11 | June 10, 2025 |
| Efficient batch inference using stacked past_key_values for multiple continuation candidates | 1 | 15 | June 10, 2025 |
| Error while initializing ZeroGPU | 5 | 29 | June 10, 2025 |
| Why Do We Settle for Less? | 25 | 217 | June 10, 2025 |
| Any thoughts on Novita AI? | 1 | 32 | June 10, 2025 |
| Building a Multilingual Multi-Task Model in the Finance Domain | 2 | 42 | June 10, 2025 |
| LLM architecture Dots1ForCausalLM conversion to GGUF | 1 | 36 | June 7, 2025 |
| Opus-MT: Translation returns <unk> token | 3 | 11 | June 6, 2025 |
| Convert the models downloaded in .cache/huggingface/hub/ to original format | 1 | 17 | June 6, 2025 |
| Unable to access Llama 3.3 despite obtaining it through Llama.com | 1 | 19 | June 4, 2025 |
| Running and testing BharatGPT-3B-Indic | 5 | 33 | May 28, 2025 |