Migrated over from [email protected]

  • 0 Posts
  • 13 Comments
Joined 2 months ago
Cake day: June 28th, 2025

  • The problem isn’t the tech itself. Getting a pretty darn clean 4K output from 1080p or 1440p, at a small static frametime cost, is amazing.

    The problem is that the tech has been abused as permission to slack on optimization, or used in contexts where there just isn’t enough data for a clean picture, like in upscaling to 1080p or less. Used properly, on a well optimized title, this stuff is an incredible proposition for the end user, and I’m excited to see it keep improving.


  • Oh, does it? I was literally thinking to myself that Teardown was an interesting example of destruction, and wondering how they did their lighting. RT makes perfect sense, that must be one of the earliest examples of actually doing something you really couldn’t without RT (at least not while lighting it well).

    But yes, agreed that recent performance trends are frustrating, smearing DLSS and frame gen over everything to cover for terrible performance. Feels like we’re in a painful in-between period with a lot of awkward stuff going on, plus deadlines/crunch/corporate meddling causing games to come out half-baked. Hopefully this tech reaches maturity soon and we can have some of this cool new stuff without so many other compromises.


  • The big benefit of raytracing now, imo (which most games aren’t doing), is that it frees games up to introduce dynamic destruction again. We used to have all kinds of destructible walls and bits and bobs around, with flat lighting, but baked lighting has really limited what devs can do, because if you break something you need a solution to handle all the ways the lighting changes, and for the majority of games they just make everything stiff and unbreakable.

    Raytracing is that solution. Plug and play, the lighting just works when you blow stuff up. DOOM: TDA is the best example of this currently (although still not a direct part of gameplay), with a bunch of destructible stuff everywhere, and that actually blows up with a physics sim rather than a canned animation. All the little boards have perfect ambient occlusion and shadows, because raytracing just does that.

    It’s really fun, if minor, and one of the things I actually look forward to more games doing with raytracing. IMO that’s why raytracing has whelmed most people, because we’re used to near-flawless baked lighting, and haven’t really noticed the compromises that baked lighting has pushed on us.


  • Hazzard@lemmy.zip to Today I Learned@lemmy.world, *Permanently Deleted* (edited, 4 days ago)

    Honestly, I’m a bit relieved that OpenAI is at least trying to intervene here. When I heard they backtracked and re-released 4o, alarm bells went off for me that they were going to give in and just rake in profit off this type of dangerous AI addiction. Sounds like at least some of that original non-profit “managing the future of AI” concern is still there, if obviously far less than I’d like.


  • Bit of an odd answer, but for me (and my wife), the last piece of the puzzle was really budgeting. The invisible, constant financial stress is a lot, and adds to that feeling of “pretending” when you’re not even sure if buying groceries will cause a bill to bounce, let alone hanging out with friends who always seem to comfortably have the money to do whatever it is you’re doing.

    It’s been several years now (early 30s; we started budgeting in our late 20s). It took us a while to figure it out and progress was slow, but I can “see the line” now, towards retirement and towards home ownership. We have no more credit card debt (just student loans left, which we’re working on), and we budget “fun money” that I save up to make big purchases like a 7900XTX without any guilt or credit.

    We’re also having our first kid soon, and at least financially, I’m not stressed about it at all, which would’ve been impossible in our twenties. Getting our financials in hand and headed in the right direction has just done massive work in helping me feel like I know what I’m doing, and that our life is actually getting better rather than stuck in place.


  • Hazzard@lemmy.zip to cats@lemmy.world, Cat raised with dogs (edited, 12 days ago)

    Mhm, of course, critical thinking in general is absolutely important, although I take some issue with describing looking for artifacts as “vague hunches”. Fake photos have existed for ages, and we’ve found consistent ways to spot and identify them: checking shadows, the directionality of light in a scene, the fringes of detailed objects, black levels and highlights, and even advanced techniques like bokeh and motion blur. You don’t see many people casting doubt on the validity of old pictures of Trump and Epstein together, for example, despite the long existence of Photoshop and advanced VFX. Hell, even this image could have been photoshopped, and you’d be relying on your eyes to catch the evidence if it were.

    The techniques I’ve outlined here aren’t likely to become irrelevant in the next 5+ years, given they’re based on how the underlying technology works, similar to how LLMs aren’t likely to 100% stop hallucinating any time soon. More than that, I actually think there’s a lot less incentive to work these minor kinks out than something like LLM hallucination, because these images already fool 99% of people, and who knows how much additional processing power it would take to run this at a resolution where you could get something like flawless tufts of grass, in a field that’s already struggling to make a profit given the high costs of generating this output. And if/when these techniques become invalid, I’ll put in the effort to learn new ones, as it’s worthwhile to be able to quickly and easily identify fakes.

    As much as I wholeheartedly agree that we need to think critically and evaluate things based on facts, we live in a world where the U.S. President was posting AI videos of Obama just a couple weeks ago. He may be an idiot who is being obviously manipulative, but it’s naive to think we won’t eventually get bad actors like him who try to manipulate narratives like that with current events, where we can’t rely on simply fact-checking history, or that someone might weave a lie that doesn’t have obvious logical gaps, and we need some kind of technique to verify images to settle the inevitable future “he said, she said” debates. The only real alternative is to just never trust a new photo again, because we can’t 100% prove anything new hasn’t been doctored.

    We’ve survived in a world with fake imagery for decades now, I don’t think we need to roll over and accept AI as unbeatable just because it fakes things differently, or because it might hypothetically improve at hiding itself in the future.

    Anyway, rant over, you’re right, critical thinking is paramount, but being able to clearly spot fakes is a super useful skill to add to that kit, even if it can’t 100% confirm an image as real. I believe these are useful tools to have, which is why I took the time to point them out despite the image already having been proven as not AI by others dating it before I got here.


  • True, someone else did some reverse image searching before I got here, but I think it’s an important skill to develop without relying on dating the image, as that will only work for so long, and there will likely be more important things than memes that will need to be proven/disproven in the future. A reverse image search probably won’t help us with the next political scandal, for example. It’s a pretty good backup to have when it applies though, nice that it proves me correct here.



  • I’d recommend you get some practice identifying and proving AI-generated images. I agree this has a bit of that “look”, but in this case I’m quite certain it’s just repeated image compression or a cheap camera. Here are the major details I looked at after seeing your comment:

    • The grass at the bottom left. AI is frequently sloppy with little details and straight lines, usually the ones in the background. In this case, you can look at any blade of grass and follow it, and its path makes sense. The same happens with the lines in the tiles, the water stains, etc.
    • The birthmark on the large brown dog. In this case, this is a set of three photos, which gives us an easy way to spot AI. AI generated images start from random noise, so you’d never get the exact same birthmark, consistent across different angles, from a prompt like “large brown dog with white birthmark on chest”. Spotting a change in the birthmark, or a detail like it, would be a dead giveaway, but I can’t spot any.
    • There are other tricks as well, such as looking for strange variations in contrast and exposure from the underlying noise, but those are more difficult to explain in text. Corridor Digital has some good videos demonstrating it with visual examples if you’re interested, but suffice to say I don’t pick up on that here either.

    It’s useful to be able to prove or disprove your suspicions, as well as to be able to back them up with something as simple as “this is AI generated, just look at the grass”. Hope this helps!


  • I’ll give two answers to this question, from the perspective of a Christian reading the Old Testament/Torah.

    Wouldn’t it be effective to convince followers of a religion if a religion could accurately predict a scientific phenomenon before its followers have the means of discovering it?

    This is interpretative, but if there is a God, he seems big on free will. Why give humanity the option to sin in the garden at all? Why not just reveal himself in the sky each morning? Why even bother creating a universe that can be explained without him? There’s an abundance of easy ways God could make himself irrefutable, and yet in the Bible he makes us “in His image”, and offers us choices like that tree in the garden.

    Furthermore, why even create us to sin in the first place? My interpretation of the Torah is that God is big on relationship, and that free will is a key part of that. Just like a human relationship based on a love potion is kinda creepy, and a pale imitation of something real, it seems like God doesn’t want to be irrefutable.

    I think that’s the more relevant answer to your question, but I’ll also give the only example that comes to mind of the Bible seemingly imparting “scientific knowledge”, which is to look at the laws around “cleanliness”. Someone else already mentioned some “unclean” animals, but if you read more, they pretty consistently seem like good advice around bacteria. Some examples of times you need to “purify” (essentially take a bath) that seem like common sense now:

    • being around dead bodies
    • touching blood that’s not yours
    • having your period
    • etc.

    Reading this as a modern person aware of germs, many of these “laws” seem like they would have kept the death rate of faithful Jews a lot lower than their neighbours in that day.


  • Hazzard@lemmy.zip to Fediverse@lemmy.world, NSFW on Lemmy (edited, 19 days ago)

    Exactly what I’ve done. Set my settings to hide NSFW, blocked most of the “soft” communities like hot girls and moe anime girls and whatever else (blocking the lemmynsfw.com instance is a great place to start), and I use All frequently. That’s how I’ve found all the communities I’ve subscribed to, but frankly, my /all feed is small enough that I usually see all my subscribed communities anyway.



  • Ugh, this is what our legacy product has: microservices that literally cannot be scaled, because they rely on internal state, and are also all deployed on the same machine.

    Trying to do something as simple as updating Python versions is a nightmare, because you have to do all the work 4 or 5 times.

    Want to implement a linter? Hope you want to do it several times. And all for zero microservice benefits. I hate it.
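
    To make the scaling problem concrete, here’s a minimal sketch (hypothetical names, nobody’s actual code) of why in-process state blocks horizontal scaling: two replicas of the same service each keep their own counter, so a load balancer splitting traffic between them gives inconsistent answers, which is exactly why such services can’t just be scaled out.

```python
class CounterService:
    """A 'microservice' that keeps its state in process memory."""

    def __init__(self):
        # Internal state: lost on restart, and invisible to other replicas.
        self.count = 0

    def hit(self):
        self.count += 1
        return self.count


# Two replicas behind a naive round-robin "load balancer".
replicas = [CounterService(), CounterService()]
results = [replicas[i % 2].hit() for i in range(6)]

# Each replica only sees half the traffic, so the "global" count diverges.
print(results)  # [1, 1, 2, 2, 3, 3] instead of [1, 2, 3, 4, 5, 6]
```

    The usual fix is to move that state out of the process (a database, Redis, etc.) so any replica can serve any request, which is what makes stateless services scalable in the first place.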