walking-octopus

@[email protected]


walking-octopus ,

If you want to host a capable pretrained model, feel free to check out LLaMA, especially llama.cpp, since it allows for speedy inference. For the front-end, there's text-generation-webui, the official web UI, Serge, XInference, or chatbot-ui with LocalAI (a server that exposes llama.cpp behind OpenAI's API schema).
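Since LocalAI speaks OpenAI's schema, a client just POSTs the standard chat-completions request body to it. A minimal sketch of that body, assuming a hypothetical model name and LocalAI's default localhost port:

```python
import json

# Hedged sketch: LocalAI serves an OpenAI-compatible API, so the standard
# chat-completions request body works against it. The model name and the
# endpoint below are assumptions; match them to your own LocalAI config.
payload = {
    "model": "ggml-model",  # assumption: whatever model name LocalAI loaded
    "messages": [
        {"role": "user", "content": "Summarize llama.cpp in one sentence."}
    ],
    "temperature": 0.7,
}

# POST this to http://localhost:8080/v1/chat/completions
# with the header Content-Type: application/json.
body = json.dumps(payload)
```

Any OpenAI-compatible client library should work the same way once pointed at the local base URL.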

For the model fine-tunes, I'd personally recommend WizardLM. It's not perfect, far from it, but it seems the closest to GPT-3.5 in my experience. Be sure to never trust what it says, though: it hallucinates less than other fine-tunes I've seen, but still does so frequently enough.

There isn't really much of a need to train a model on a particular community. If you need it to work with changing facts, just throw results from a search engine into the context window. Most of these models were already trained on huge datasets including Reddit, so...
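"Throwing search results into the context window" is just prompt assembly: prepend retrieved snippets to the question so the model grounds its answer in them. A minimal sketch with placeholder snippets:

```python
# Hedged sketch of retrieval-into-context prompting. The snippets below
# are placeholders standing in for real search-engine results.
snippets = [
    "Lemmy 0.18 removed websockets in favor of an HTTP API.",
    "kbin federates with Lemmy over ActivityPub.",
]
question = "How do Lemmy and kbin interoperate?"

# Join the snippets into a bulleted context block, then wrap the question.
context = "\n".join(f"- {s}" for s in snippets)
prompt = (
    "Answer using only the sources below.\n\n"
    f"Sources:\n{context}\n\n"
    f"Question: {question}\nAnswer:"
)
```

The resulting `prompt` string is what you'd send as the user message; no retraining involved.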

If you want to fine-tune it on the most helpful comments to make sure it generates more consistent advice, I'd recommend QLoRA and a ~1k-instruction dataset like in the LIMA paper. Though again, I'm not sure there's any use for that.
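A LIMA-style dataset is just a small, hand-curated set of instruction/response pairs. A sketch of one record in the common Alpaca-style JSON layout (the field names are a community convention, not a standard, and the content here is made up):

```python
import json

# Hedged sketch: one instruction-tuning record. A LIMA-style set is
# roughly 1,000 of these, picked for quality rather than quantity.
record = {
    "instruction": "Explain what quantization does to a language model.",
    "input": "",  # optional extra context; empty for a plain instruction
    "output": "Quantization stores the model's weights at lower precision, "
              "shrinking memory use at a small cost in accuracy.",
}

# JSON Lines (one record per line) is a common on-disk format for such sets.
line = json.dumps(record)
```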

[PROJECT] An application to search through Synology Photos using natural language captions ( kbin.social )

I save and back up all my photos on a Synology NAS instead of using one of the online providers. However, Synology Photos doesn't have good search capabilities. So I built a project to search through the images using natural language captions, and found that it works really well....
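The core of this kind of search is embedding the text query and every photo into a shared vector space (as CLIP does) and ranking photos by cosine similarity. A self-contained sketch using random vectors as stand-ins for real CLIP embeddings:

```python
import numpy as np

# Hedged sketch: the embeddings here are random stand-ins for CLIP outputs.
# In the real app, a CLIP image encoder produces photo_embeddings and a
# CLIP text encoder produces query_embedding.
rng = np.random.default_rng(0)
photo_embeddings = rng.normal(size=(5, 512))  # one 512-d vector per photo
# Pretend the query matches photo 2: its embedding plus a little noise.
query_embedding = photo_embeddings[2] + rng.normal(scale=0.01, size=512)

def cosine_rank(query, photos):
    """Return photo indices sorted from most to least similar to the query."""
    q = query / np.linalg.norm(query)
    p = photos / np.linalg.norm(photos, axis=1, keepdims=True)
    return np.argsort(p @ q)[::-1]

ranking = cosine_rank(query_embedding, photo_embeddings)
```

With precomputed photo embeddings, each search is a single matrix-vector product, which stays fast even for large libraries.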

walking-octopus ,

There's been some work on getting CLIP to run in pure C++ with GGML quantization, and there's a curious FasterViT model I saw a few months ago, so hopefully this can soon be made faster at inference and easier to host as a single binary.
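The principle behind the quantization is mapping float32 weights to small integers with a shared scale. GGML's real formats (block-wise Q4/Q8 and friends) are more involved, but a minimal per-tensor 8-bit sketch shows the idea:

```python
import numpy as np

# Hedged sketch of weight quantization, not GGML's actual format: map
# float32 weights to int8 with one per-tensor scale, cutting storage
# roughly 4x at a small accuracy cost.
weights = np.random.default_rng(1).normal(size=1000).astype(np.float32)

scale = np.abs(weights).max() / 127.0          # fit the range into int8
quantized = np.round(weights / scale).astype(np.int8)
restored = quantized.astype(np.float32) * scale  # dequantize at inference

# Round-to-nearest bounds the per-weight error by about scale / 2.
max_error = np.abs(weights - restored).max()
```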
