OpenWebUI is connected to TabbyAPI's OpenAI endpoint. I'll try reducing the temperature and see if that makes it more accurate.
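For what it's worth, a minimal sketch of how I'd test a lower temperature straight against the endpoint. The base URL, API key, and model name are placeholders for whatever TabbyAPI is actually serving (it defaults to port 5000):

```python
# Minimal sketch: probing a lower sampling temperature against an
# OpenAI-compatible endpoint. Base URL, key, and model name are
# placeholders, not the real deployment's values.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:5000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="Qwen2.5-14B-exl2",  # hypothetical model name
    messages=[{"role": "user", "content": "Summarize RAII in two sentences."}],
    temperature=0.3,  # lower = less random token sampling
    max_tokens=256,
)
print(resp.choices[0].message.content)
```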
Context was set to anywhere between 8k and 16k. It would respond in English properly, and then, about halfway to three-quarters of the way through a response, it would start outputting tokens in either a foreign language (Russian or Chinese, in the case of Qwen 2.5) or things that don't make sense (random code snippets, improperly formatted text). Sometimes the text repeated as well, but I thought that might have been a template problem, because it seemed to be answering the question twice.
Otherwise, all settings are the defaults.
I tried it with both Qwen 2.5 14B and Llama 3.1. Both were exl2 quants produced by bartowski.
Perplexica works. It supports Ollama and custom OpenAI-compatible providers.
Super useful guide. However, after playing around with TabbyAPI, the responses from models quickly become gibberish, usually halfway through or towards the end. I'm using exl2 models off of HuggingFace, with Q4, Q6, and FP16 cache. Any tips? Also, how do I control context length on a per-model basis? Is it max_seq_len in config.json?
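If I'm reading the sample config right, the knob seems to live in config.yml (not config.json), something like this. The option names are my understanding of config_sample.yml, and the model folder name here is just an example:

```yaml
# Guessing at the relevant slice of TabbyAPI's config.yml -- option
# names as I understand them from config_sample.yml; folder name is mine.
model:
  model_dir: models
  model_name: Qwen2.5-14B-exl2   # example folder under model_dir
  max_seq_len: 16384             # context length the model is loaded with
  cache_mode: Q6                 # FP16, Q8, Q6, or Q4 KV cache quantization
```

For per-model overrides, I think you can also drop an override file in the model's own folder, but double-check the TabbyAPI wiki on that.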
You can right-click the URL bar on sites that support the OpenSearch XML standard, which I guess is what they wanted to replace the button with. But I don't really know why they moved it behind an about:config setting. It could at least be a checkbox or something you can enable.
It brings back the "add custom search engine" button, which for some reason is hidden by default.
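(IIRC the pref in question is browser.urlbar.update2.engineAliasRefresh, flipped to true in about:config, though I'd double-check the exact name.)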
Depends on the continuity and who’s writing it, but often yes. He was notably portrayed this way in the Justice League cartoon.
Even the smell of olives makes me gag. I absolutely cannot eat them. Olive oil is fine, but actual olives, no. It doesn't matter if they're old, new, canned, or fresh; they're absolutely disgusting. One of the few foods I outright cannot and will not eat.
I use a Misskey fork for microblogging, and I can't even get Lemmy posts to load. The profiles of communities do, but that's it.
Ah, right. What I really meant to ask was whether it can do protocols other than HTTP.
Which I don’t think it can…
Are you able to tunnel ports other than 80 and 443 through Cloudflare?
I find it somewhat unclear how this works. Is it JavaScript on the static site itself that loads the comments onto the posts?
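If so, I'd guess the script just calls the instance's public API from the browser; roughly this request, shown here in Python for illustration (the instance URL and post_id are placeholders):

```python
# Sketch of the request such a comment loader would make against
# Lemmy's public v3 API. Instance URL and post_id are placeholders.
import requests

resp = requests.get(
    "https://lemmy.world/api/v3/comment/list",
    params={"post_id": 123456, "max_depth": 8, "sort": "Hot"},
    timeout=10,
)
resp.raise_for_status()
for view in resp.json()["comments"]:
    print(f'{view["creator"]["name"]}: {view["comment"]["content"][:80]}')
```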
Will existing projects have to adapt their codebases to work with ActivityPods? I assume yes.
So is there a way to follow someone on Threads now? Or at least get one’s instance to load a post? Where are the details of this beyond Zuckerberg’s post?
WordPress is open source, for one.
Can just use an external image host in the meantime.
Your instance probably has a very low upload size limit.
Edit: lemm.ee has a limit of something like 100 kB.
Fluffy is also a desktop client.
https://agnos.is/posts/tech-recruitment-is-out-of-control.html
This was my experience at the beginning of 2024. It was bad enough that I had to write a blog post about it.