

It seems Signal has already pushed out a fix for this. The attack abused Signal's QR codes to actually link a device while presenting itself as a way to join a group.
Paywalled: https://www.wired.com/story/russia-signal-qr-code-phishing-attack/
Well, in the case of legacy GPUs you are forced to downgrade drivers, which means you can no longer use a recent GPU and a legacy GPU simultaneously, if that's what you were hoping for.
But if you do go the legacy-driver route, they work fine.
I can't speak to Vulkan, but I had an old GTX 680 from 2012 which worked without issue until about a year ago. I was able to get it recognized by nvidia-smi.
I had it running using the proprietary drivers, with the instructions from here, using the legacy method: https://rpmfusion.org/Howto/NVIDIA#Legacy_GeForce_600.2F700
Is that what you did?
PS: By "working without issue" I mean gaming on it using Proton.
In this case, without clicking any links in the email, why not simply go to the Proton website manually and log in, for good measure?
I hear you. I have always seen this problem solved by putting the link in the description and having the host say "link in the description". I hadn't come across a situation where an audio-only format was accessible with no way to interact with the content, but in some corner cases it does make sense.
I don't understand in what circumstances anyone would want to use link shorteners. I can only find reasons not to use them:
DeepSeek is good at reasoning and Qwen is good at programming, but I find Llama 3.1 8B well suited for creativity, writing, translation and other tasks which fall outside the scope of your two models. It's a decent all-rounder, at about 4.9GB in q4_K_M.
Tldw: the guy tests the RX 6800 at 1080p, 1440p and 4K across 19 games on Windows 11 vs Nobara 41.
Allegedly, Nobara beats Windows in all games except two (The Witcher 3 and CS2), across almost all resolutions, by single-digit percentages.
I think the requested salary plays a big role. If a candidate asking 60k for a typical 100k annual role was still rejected over salary misalignment, I would be much more critical of the company.
One thing I find useful is turning installation/setup instructions into Ansible roles and tasks. If you're unfamiliar, Ansible is a tool for automating configuration across large-scale server infrastructures. In my case I only manage two servers, but it is useful to parse instructions and convert them to Ansible, which helps me learn and understand Ansible at the same time.
Here is an example of instructions I find interesting: how to set up Docker on Alpine Linux: https://wiki.alpinelinux.org/wiki/Docker
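To give a feel for what that conversion looks like, here is a minimal sketch of the Alpine wiki's manual steps (install the package, enable and start the service) as Ansible tasks. This is my illustration, not the wiki's own playbook; it assumes the `community.general.apk` module for Alpine's package manager and the generic service module for OpenRC.

```yaml
# Hedged sketch: the manual "apk add docker; rc-update add docker boot;
# service docker start" steps from the Alpine wiki, expressed as Ansible tasks.
- name: Install Docker on Alpine
  community.general.apk:
    name: docker
    state: present

- name: Enable Docker at boot and start it now
  ansible.builtin.service:
    name: docker
    enabled: true
    state: started
```

The nice part is that the tasks are idempotent: re-running the play changes nothing if Docker is already installed and running, unlike re-running the raw shell commands.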
Results are actually quite good even for smaller 14B self-hosted models like the distilled versions of DeepSeek, though I’m sure there are other usable models too.
I also find it helpful for programming, both for writing code and for learning.
I would not rely on it for factual information, but it usually does a decent job of pointing in the right direction. Another use I have is helping with spell-checking in a foreign language.
Regarding photos, and videos specifically:
I know you said you are starting with selfhosting so your question was focusing on that, but I would like to also share my experience with ente which has been working beautifully for my family, partner and myself. They are truly end to end encrypted, with the source code available on github.
They have reasonable prices. If you feel adventurous you can actually also host it yourself. They have advanced search features and face recognition which all run on device (since they can’t access your data) and it works very well. They have great sharing and collaborating features and don’t lock features behind accounts so you can actually gather memories from people on your quota by just sharing a link. You can also have a shared family plan.
Ollama, latest version. I have it set up with Open WebUI (though that shouldn't matter). The 14B model is around 9GB, which easily fits in the 12GB.
I’m repeating the 28 t/s from memory, but even if I’m wrong it’s easily above 20.
Specifically, I’m running this model: https://ollama.com/library/deepseek-r1:14b-qwen-distill-q4_K_M
Edit: I confirmed I do get 27.9 t/s, using default ollama settings.
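As a rough sanity check on that ~9GB figure (my own back-of-the-envelope estimate, not Ollama's exact accounting), a q4_K_M quantization averages roughly 4.8–5 bits per weight, so a 14B-parameter model lands right around that size:

```python
def approx_model_size_gb(params_billions: float, bits_per_param: float = 4.85) -> float:
    """Rough on-disk/VRAM size of quantized weights.

    q4_K_M mixes 4- and 6-bit blocks and averages ~4.8-5 bits per parameter;
    this ignores the KV cache and runtime overhead, so actual usage is higher.
    """
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

size = approx_model_size_gb(14)
print(f"~{size:.1f} GB")  # ~8.5 GB of weights, consistent with the ~9 GB reported
```

The remaining headroom in a 12GB card goes to the KV cache and CUDA overhead, which is why the fit is comfortable but not enormous.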
You can. I'm running a 14B DeepSeek model on mine. It achieves 28 t/s.
From what I understand, sealed sender is implemented on the client side, and that's what's in the GitHub repo.
It's unfortunate that you react like this. I don't claim to be an expert, and never have. I've only been asking for evidence, but all we get are assumptions, and they all seem to stem from the claim that the CIA indirectly funded Signal (which I'm neither disputing nor validating).
The concern is valid: the Snowden leaks caused a lot of justified distrust in many companies. But so far there is no evidence that Signal is part of any of it, and given the continued endorsement by security experts, I'm inclined to trust them.
Are you implying that Signal is withholding information from the Californian government, while providing the full extent of their data to the government?
This comes back to the earlier point that there is no proof Signal even has more data than they have shared.
They have to know who the message needs to go to, granted. But they don't have to know who it comes from, which is why the sealed sender technique works. The recipient verifies the message via the keys exchanged earlier if they have communicated with that correspondent before; otherwise it is a new message request.
So I don't see how they can build social graphs if they don't know who the sender of all messages is; they can only plot recipients, which is not enough.
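The sealed-sender idea above can be sketched in a few lines. This is a toy illustration of the concept only, not Signal's actual protocol or cryptography: the point is simply that the server routes by the recipient field, while the sender's identity travels inside the encrypted payload that only the recipient can open.

```python
# Toy model of sealed sender (NOT Signal's real protocol): the server
# sees only "to"; "from" lives inside the opaque payload.
import json

def seal(sender: str, recipient: str, body: str, encrypt) -> dict:
    """Build an envelope whose only plaintext metadata is the recipient."""
    inner = json.dumps({"from": sender, "body": body})
    return {"to": recipient, "payload": encrypt(inner)}

def server_route(envelope: dict) -> str:
    """All the server learns: where to deliver. The sender stays hidden."""
    return envelope["to"]

# Stand-in "encryption" for the sketch; real systems use the recipient's keys.
encrypt = lambda s: s[::-1]
unseal = lambda p: json.loads(p[::-1])

env = seal("alice", "bob", "hi", encrypt)
print(server_route(env))       # the server can only plot the recipient: bob
print(unseal(env["payload"]))  # only the recipient recovers the sender
```

Since every envelope the server handles looks like `(recipient, opaque blob)`, aggregating traffic yields a list of who receives messages but no sender–recipient edges, which is exactly the social-graph argument above.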
Would you be able to share more info? I remember reading their issues with docker, but I don’t recall reading about whether or what they switched to. What is it now?