• 1 Post
  • 91 Comments
Joined 2 months ago
Cake day: December 10th, 2024


  • There might be a universe in which magic exists. However, there is no universe in which I exist and magic exists. That’s because I was born into a mundane version of the universe, so while there are infinite possibilities, the probability of my existence in a magical universe is 0

    That doesn’t really follow. Specifically, you’re placing way too much confidence (infinitely more confidence than you should, in fact) in your ability to know exactly how your universe works. You’re saying there are zero hypothetical worlds in which you are the person you are now and magic also exists. I’m sure you can see how this is not true: for all you know, magic is very obvious in your world and you just got mind-controlled, a minute ago, into your current state of mind. Or maybe you simply never noticed it and hence grew up thinking you’re in a mundane universe, which is very unlikely but not probability-0. Or one of many, many other explanations, all of which are unlikely (nothing involving a universe with magic in it is going to be likely), but very much not probability-0.
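The distinction between "very unlikely" and "probability-0" matters because of how Bayes' rule behaves: a prior of exactly zero can never be raised by any evidence, while even a tiny nonzero prior can be. A minimal sketch (the specific numbers are made up for illustration):

```python
# Bayes' rule: posterior = P(E|H) * P(H) / P(E).
# If the prior P(H) is exactly 0, the posterior is 0 no matter how strong
# the evidence is; a tiny nonzero prior can still be updated upward.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / evidence

# H: "magic exists but I've somehow never noticed it."
# Evidence E that is 10,000x more likely if H is true still can't rescue
# a prior of exactly zero:
print(posterior(0.0, 0.9999, 0.0001))   # stays 0.0 forever
print(posterior(1e-9, 0.9999, 0.0001))  # jumps from 1e-9 to roughly 1e-5
```

This is why assigning probability 0 to anything you could conceivably be wrong about overstates your certainty: it commits you to never changing your mind, regardless of what you later observe.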



  • Sure, in Firefox itself it wasn’t a severe vulnerability. It’s way worse on standalone PDF readers, though:

    In applications that embed PDF.js, the impact is potentially even worse. If no mitigations are in place (see below), this essentially gives an attacker an XSS primitive on the domain which includes the PDF viewer. Depending on the application this can lead to data leaks, malicious actions being performed in the name of a victim, or even a full account take-over. On Electron apps that do not properly sandbox JavaScript code, this vulnerability even leads to native code execution (!). We found this to be the case for at least one popular Electron app.



  • There’s no real need for pirate ai when better free alternatives exist.

    There are plenty of open-source models, but they very much aren’t better, I’m afraid to say. Even if you have a powerful workstation GPU and can afford to run the serious 70B open-source models at low quantization, you’ll still get results significantly worse than the cutting-edge cloud models. That’s both because the most advanced models are proprietary, and because they are big and would require hundreds of gigabytes of VRAM to run, which you can trivially rent from a cloud service but can’t easily fit in your own PC.

    The same goes for image generation - compare results from proprietary services like Midjourney to the ones you can get with local models like SD3.5. I’ve seen some clever hacks in image generation workflows - for example, using image segmentation to detect a generated image’s face and hands, then running a secondary model over just those regions in a second pass to make sure they come out right. But AFAIK, these are hacks that modern proprietary models don’t need, because they have gotten past those problems and just do faces and hands correctly the first time.

    This isn’t to say that running transformers locally is always a bad idea; you can get great results this way - but claims that they’re better than the nonfree ones are mostly cope.
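The "hundreds of gigabytes of VRAM" point follows from simple arithmetic: the weights alone need roughly parameter-count times bytes-per-parameter, before counting the KV cache or activations. A back-of-the-envelope sketch:

```python
# Rough VRAM needed just to hold model weights:
# memory ≈ parameter_count × bytes_per_parameter.
# This ignores the KV cache, activations, and runtime overhead,
# all of which add more on top.

def weight_vram_gb(params_billion: float, bits_per_param: int) -> float:
    return params_billion * 1e9 * bits_per_param / 8 / 1e9  # in GB

for bits in (16, 8, 4):
    print(f"70B at {bits}-bit: ~{weight_vram_gb(70, bits):.0f} GB")
# 70B at 16-bit: ~140 GB  (multiple datacenter GPUs)
# 70B at 8-bit:  ~70 GB
# 70B at 4-bit:  ~35 GB   (borderline even for a high-end workstation card)
```

A model several times larger than 70B at full precision lands well into the hundreds of gigabytes, which is why renting cloud GPUs is the only practical option for the biggest models.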



  • The thing I said I did? Yes; here’s the processed image:

    If you mean the math in the post, I can’t read it in this picture, but it’s probably just some boring solid-of-revolution integrals - basically the same thing I did, except breaking the vase’s visible shape into analytically simple parts, whereas I got the shape from the image directly.
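For context, the solid-of-revolution (disk-method) volume is V = π ∫ r(x)² dx. A minimal sketch of the image-driven version described above, where the radius profile would come from per-row measurements of the vase in the image (the profile here is a made-up cylinder, just to sanity-check the sum):

```python
import math

# Disk method: V = pi * integral of r(x)^2 dx, approximated as a
# Riemann sum over thin slices. In the image-based approach, `radii`
# would hold one measured radius per pixel row of the vase's outline.

def volume_of_revolution(radii, dx):
    # sum of pi * r^2 * dx over all slices
    return math.pi * sum(r * r for r in radii) * dx

# Sanity check against a cylinder: r = 2, height 10 -> V = pi * 2^2 * 10
radii = [2.0] * 1000
dx = 10.0 / len(radii)
print(volume_of_revolution(radii, dx))  # ~125.66, i.e. 40*pi
```

The analytic approach in the post would instead split r(x) into simple pieces (cylinders, cones, arcs) and integrate each piece in closed form; both routes compute the same integral.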