• 0 Posts
  • 381 Comments
Joined 2 years ago
Cake day: July 11th, 2023

  • How is the drill baby drill crowd going to compete against mini stars in a can?

    Nu-Cu-Lar Bad? That’s…about as far as they’ll make it. To be fair, that might be as far as they need to. It’s all the oil companies will approve of them learning, at least.

    Of course, it sounds like the big problem remains: getting more power out of the reaction than you spend keeping it going, even presuming they can keep extending reaction lifetimes until they’re functionally unlimited.





  • I wasn’t suggesting it as “font list and you’re done”. I was using it as an example because it’s one where I’m apparently really unusual.

    I would think you’d basically want to spoof all known fingerprinting metrics to be whatever is the most common and doesn’t break compatibility with the actual setup too much. Randomizing them seems way more likely to break a ton of sites, but inconsistently, which seems like a bad solution.

    I mean, hypothetically you could also set up exceptions for specific sites that need different answers for specific fields, essentially telling the site whatever it wants to hear to work, but that’s going to be a lot of ongoing work.


  • The crazy part about fingerprinting is that if you block the fingerprint data, they use that block to fingerprint you. That’s why the main strategy is to “blend in”.

    So, essentially the best way to actually resist fingerprinting would be to spoof the results to look more common. For example, when I checked amiunique.org, one of the most unique elements was my font list. But for 99% of sites you could spoof a font list that contains only the most common fonts (which you have) and nothing else, and that would make you “blend in” without harming functionality, barring a handful of specific sites that rely on a special font, which might need to be set as exceptions.
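The “blend in” idea has a simple information-theoretic reading: the rarer your value for an attribute, the more identifying bits it leaks. A minimal sketch, with made-up frequencies (real numbers would come from a panel like amiunique.org; the category names here are hypothetical):

```python
import math

# Hypothetical share of visitors reporting each font-list fingerprint.
# These frequencies are invented for illustration only.
FONT_LIST_SHARE = {
    "common_fonts_only": 0.62,    # the spoofed "most common" list
    "common_plus_office": 0.25,
    "designer_font_pack": 0.008,  # an unusual list: highly identifying
}

def surprisal_bits(share: float) -> float:
    """Bits of identifying information leaked by reporting this value.
    Rarer values carry more bits and make you easier to single out."""
    return -math.log2(share)

for name, share in FONT_LIST_SHARE.items():
    print(f"{name}: {surprisal_bits(share):.1f} bits")
```

Spoofing every attribute to its most common value minimizes the bits leaked per attribute, which is exactly why it beats randomizing.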





  • No, the opposite; it’s a classic example showing that correlation doesn’t necessitate causation.

    Right, but ice cream sales and shark attacks have a shared cause, and it’s the weather. Humans both get in the ocean where they are shark-accessible more often and also buy more ice cream when it’s hot out.

    Basically causation is X->Y. But there are other relationships between X and Y, and in the case of ice cream sales and shark attacks it’s W->X and W->Y (one doesn’t cause the other, but they are caused by the same thing). It’s also possible for two things to correlate without any connection whatsoever, because sometimes things just happen to move in the same directions at the same times for a while.
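The W->X, W->Y case is easy to demonstrate numerically. A quick simulation sketch (all numbers invented): temperature W drives both ice cream sales X and shark attacks Y, with no link between X and Y, yet the two still correlate strongly:

```python
import random

random.seed(0)

# W = daily temperature (the shared cause). X = ice cream sales,
# Y = shark attacks. Neither causes the other; both depend on W plus noise.
days = 500
W = [random.gauss(20, 8) for _ in range(days)]
X = [w * 3 + random.gauss(0, 10) for w in W]    # sales rise when it's hot
Y = [w * 0.5 + random.gauss(0, 3) for w in W]   # more swimmers when it's hot

def pearson(a, b):
    """Plain Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

print(f"corr(ice cream, shark attacks) = {pearson(X, Y):.2f}")
```

The correlation comes out strongly positive even though X and Y never interact; conditioning on W (e.g., comparing only days of similar temperature) would make it largely vanish.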

    People have trouble dealing with that, and much magical thinking arises from the belief that X and Y happening together necessarily means X and Y are connected in some fashion, because humans are very good at seeing patterns even when they don’t exist.

    That’s literally where the “vaccines cause autism” thing started: kids start showing clear signs of autism at about the same age they get several vaccines. The guy who originally proposed it, in a deeply flawed study, was only claiming it was the combined MMR specifically, not all vaccines generically. He produced the study in an attempt to sell a separate MMR series that could be spaced out (rather than being one shot with all three), which would allegedly prevent the effect, because he would directly profit from his vaccine series being used instead of the combined MMR.


  • …it would be if in your analogy GMail blocks Yahoo because they don’t like the politics of their CEO, Outlook blocks both GMail and Yahoo to create a safe space, and you left Protonmail out of the list entirely because almost everyone else is blocking them for not banning users who email the wrong kind of porn to each other.

    It’s not a big deal until you realize the notion that they all talk to each other is mostly a lie, and all the big ones block dozens of instances each. Hell, the threads on the larger instances about whether Threads and Truth Social should be defederated if they ever enable federation were some of the highest-activity topics on Lemmy for a bit. So were the posts cheering about Burggit shutting down their Lemmy server.



  • and the stuff about apple seeds being dangerously poisonous is just some bullshit

    The short version being that apple seeds are in fact poisonous, but you’d have to eat much more of them than you’d find in a single apple, and you’d have to break or crush the seeds in the process to release the poison. The dose makes the poison and all.


  • SSNs are reused. Someone dies and their number gets reassigned.

    Not even that. If you were born before 2014 or so and you’re from somewhere relatively populous, there’s a pretty good chance there’s more than one living human with your SSN right now. SSNs were never meant to be unique; the pairing of SSN and name was meant to be unique, but no one really checked for that for most of the history of the program, so it really wasn’t either. The combination of SSN, name, and age/birthdate should actually be unique, though, because of how they were assigned even back in the day.



  • Schadrach@lemmy.sdf.org to Science Memes@mander.xyz · What Refutes Science... · 11 days ago

    AI’s primary use case so far is to further concentrate wealth with the wealthy,

    Under capitalism, everything further concentrates wealth with the wealthy because the wealthy are best able to capitalize on anything. Wealth gives you the means to better pursue further wealth.

    and to replace employees.

    So what you’re saying is that we need to dismantle every piece of automation and go back to manufacturing everything by hand with the most basic hand tools possible? Because that will maximize the number of people needed to be employed to produce, well, anything. Anything else is using technology to replace employees.

    Or is it just that, now that people working office jobs they thought were automation-proof are getting partially automated, automation has become a bad thing?



  • In parallel to what Hawk wrote, AI image generation is similar. The idea is that through training you essentially produce an equation (really a bunch of weighted nodes, but functionally they boil down to a complicated equation) that can recognize a thing (say dogs), and can measure the likelihood any given image contains dogs.

    If you run this equation backwards, it can take any image and show you how to make it look more like dogs. Do this for other categories of things. Now when you ask for a dog lying in front of a doghouse chewing on a bone, it generates some white noise (think “snow” on an old TV) and asks the math to make it look maximally like a dog, a doghouse, a bone, and chewing all at the same time, possibly repeating a few times until another pass doesn’t make the result look much more like a dog, doghouse, bone, or chewing. That’s your generated image.
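The “run it backwards” loop can be sketched in miniature. This is a toy stand-in, not how real systems are built: the “dog-likeness” score here is a hand-picked function with a known target vector, whereas a real model’s score comes from learned weights and its gradient from automatic differentiation. The loop shape is the point: start from noise, repeatedly nudge toward a higher score.

```python
import random

random.seed(1)

# Toy stand-in for a trained recognizer that scores how "dog-like" an
# image is. Here the ideal "dog" is just a fixed target vector (invented
# for illustration); in a real system there is no known target, only
# learned weights.
TARGET = [0.8, 0.2, 0.5, 0.9]  # pretend pixel values of a perfect dog

def dog_score(img):
    # Higher (closer to 0) means more dog-like.
    return -sum((p - t) ** 2 for p, t in zip(img, TARGET))

def score_gradient(img):
    # d/dp of -(p - t)^2 is -2(p - t); a real model gets this via autodiff.
    return [-2 * (p - t) for p, t in zip(img, TARGET)]

# Start from white noise and repeatedly nudge it toward a higher score.
img = [random.random() for _ in TARGET]
for _ in range(200):
    grad = score_gradient(img)
    img = [p + 0.05 * g for p, g in zip(img, grad)]

print(f"final dog score: {dog_score(img):.6f}")
```

After a couple hundred steps the noise has been pushed almost exactly onto the score’s maximum, which is the toy analogue of the noise turning into the requested image.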

    The reason they have trouble with things like hands is because we have pictures of all kinds of hands at all kinds of scales in all kinds of positions, and the model doesn’t have actual hands to compare to, just thousands upon thousands of pictures that say they contain hands, from which it has to figure out what a hand even is by statistical analysis of examples.

    LLMs do something similar, but with words. They have a huge number of examples of writing, many of them tagged with descriptors, and are essentially piecing together an equation for what language looks like from statistical analysis of examples. The technique used for LLMs will never be anything more than a sufficiently advanced Chinese Room, not without serious alterations. That however doesn’t mean it can’t be useful.
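The “statistical analysis of examples” flavor can be shown with the crudest possible version: a bigram model that only counts which word follows which. This is nothing like a real LLM in scale or mechanism (real models use learned embeddings and attention over huge corpora), but it shows text emerging purely from observed statistics:

```python
import random
from collections import defaultdict

random.seed(2)

# A tiny toy corpus; a real LLM trains on billions of documents.
corpus = (
    "the dog chased the ball . the dog chewed the bone . "
    "the cat chased the dog . the cat ignored the ball ."
).split()

# Count which word follows which: the simplest "language statistics".
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

# Generate by repeatedly sampling a statistically plausible next word.
word, out = "the", ["the"]
for _ in range(8):
    word = random.choice(following[word])
    out.append(word)

print(" ".join(out))
```

The output is locally plausible word-by-word with no understanding anywhere, which is the Chinese Room point in nine lines of counting.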

    For example, one could hypothetically amass a bunch of anonymized medical imaging including confirmed diagnoses and a bunch of healthy imaging, and train a machine learning model to identify signs of disease and put priority flags and notes about detected potential diseases on the images to help expedite treatment when needed. After it’s seen a few thousand times as many images as a real medical professional will see in their entire career, it would likely even be more accurate than humans.
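The learn-from-labeled-examples idea behind that hypothetical can be sketched at toy scale. Everything here is invented: each “image” is reduced to one made-up summary feature, and the model is a one-feature logistic regression rather than the deep network a real imaging system would use. The shape of the pipeline (labeled examples in, a flagging function out) is the same:

```python
import math
import random

random.seed(3)

# Hypothetical training data: one summary feature per anonymized image
# (say, a texture statistic), labeled 1 = confirmed disease, 0 = healthy.
def make_example():
    sick = random.random() < 0.5
    feature = random.gauss(2.0 if sick else -2.0, 1.0)
    return feature, int(sick)

train = [make_example() for _ in range(400)]

# Fit a one-feature logistic regression by plain gradient descent.
w, b = 0.0, 0.0
for _ in range(300):
    gw = gb = 0.0
    for x, y in train:
        p = 1 / (1 + math.exp(-(w * x + b)))  # predicted disease probability
        gw += (p - y) * x
        gb += (p - y)
    w -= 0.01 * gw / len(train)
    b -= 0.01 * gb / len(train)

def flag(x):
    """Priority-flag an image whose predicted disease probability is high."""
    return 1 / (1 + math.exp(-(w * x + b))) > 0.5

correct = sum(flag(x) == bool(y) for x, y in train)
print(f"training accuracy: {correct / len(train):.2%}")
```

On this deliberately easy synthetic data the flagger gets most cases right; the point is only that the flagging behavior falls out of labeled examples, with no rules hand-written anywhere.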