

I have a 7950X, a pile of RAM, and an unfairly expensive RTX 4000-series GPU. The cursor occasionally hitches for ~400 ms when I do things like open Task Manager or resume from the lock screen, so that checks out, unfortunately.
I’m not saying you’re wrong or that Web Environment Integrity is a good thing, but a primary source and citation are needed for this statement:
It enforces the original markup and code from a server to be the markup and code that the browser interprets and executes, preventing any post-loading modifications.
Circular dependencies can be removed in almost every case by splitting out a large module into smaller ones and adding an interface or two.
In your bot example, you have a circular dependency where (for example) the bot needs to read messages, then run a command from a module, which then needs to send messages back.
 v-----------\
bot      command_foo
 \-----------^
This can be solved by making a command conform to an interface, and shifting the responsibility of registering commands to the code that creates the bot instance.
main <-----------\
 ^                \
 |                 \
bot ---> command_foo
The bot module would expose the Bot class and a Command interface. The command_foo module would import Bot and export a class implementing Command. The main function would import Bot and CommandFoo, and create an instance of the bot with CommandFoo registered:
// bot module
export interface Command {
  onRegister(bot: Bot, command: string): void;
  onCommand(user: User, message: string): void;
}

// command_foo module
import {Bot, Command, User} from "bot";

export class CommandFoo implements Command {
  private bot!: Bot; // assigned when the command is registered

  onRegister(bot: Bot, command: string) {
    this.bot = bot;
  }

  onCommand(user: User, message: string) {
    this.bot.replyTo(user, "Bar.");
  }
}

// main
import {Bot} from "bot";
import {CommandFoo} from "command_foo";

let bot = new Bot();
bot.registerCommand("/foo", new CommandFoo());
bot.start();
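The Bot class itself isn’t shown above. As a rough sketch of the remaining pieces (the User type, replyTo, and the dispatch logic here are hypothetical, purely for illustration):

// bot module (continued) - a hypothetical sketch of the Bot class
export interface User {
  name: string;
}

export class Bot {
  private commands = new Map<string, Command>();

  registerCommand(name: string, command: Command) {
    this.commands.set(name, command);
    command.onRegister(this, name);
  }

  replyTo(user: User, message: string) {
    // Stand-in for a real chat API call.
    console.log(`@${user.name}: ${message}`);
  }

  start() {
    // A real bot would connect to a chat service and listen here; we
    // simulate a single incoming "/foo" message to show the dispatch path.
    this.dispatch({name: "alice"}, "/foo");
  }

  private dispatch(user: User, message: string) {
    this.commands.get(message.split(" ")[0])?.onCommand(user, message);
  }
}

Note that bot still never imports command_foo: every arrow in the dependency graph points one way.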
It’s a few more lines of code, but it removes the circular dependency, reduces coupling, and adds flexibility. It’s easier to write unit tests for, and users are free to extend it with whatever commands they want, without needing to modify the bot module to add them.
Oh cool, there’s a 200mp camera. Something that only pro photographers care about lol.
Oh this is a fun one! Trained, professional photographers generally don’t care either, since more megapixels aren’t guaranteed to make better photos.
Consider two sensors that take up the same physical space and capture light with the same efficiency/ability, but are 10 vs. 40 megapixels. (Note: realistically, a higher density would mean design trade-offs and tighter manufacturing tolerances.)
From a physics perspective, the higher-megapixel sensor collects the same total amount of light, but divides it among more, smaller pixels. The captured image has higher spatial resolution, but each individual pixel receives less light.
So imagine we have 40 photons of light:
More Pixels      Fewer Pixels
-----------      ------------
 1 2  1 5
 2 6  2 3         11  11
 1 9  0 1         15   3
 4 1  1 1
When you zoom in to the individual pixels, the higher-resolution sensor will appear more noisy. This can be mitigated by pixel binning, which groups (or “bins”) those physical pixels into larger, virtual ones—essentially mimicking the lower-resolution sensor. Software can get crafty and try to use some more tricks to de-noise it without ruining the sharpness, though. Or if you could sit completely still for a few seconds, you could significantly lower the ISO and get a better average for each pixel.
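To make the binning concrete, here’s a small TypeScript sketch (the grid values are the 40-photon example from above) of how 2×2 binning collapses each block of physical pixels into one virtual pixel:

// Sum each 2x2 block of the high-res sensor into one virtual pixel,
// reproducing the "fewer pixels" grid from the example above.
function bin2x2(pixels: number[][]): number[][] {
  const binned: number[][] = [];
  for (let y = 0; y < pixels.length; y += 2) {
    const row: number[] = [];
    for (let x = 0; x < pixels[y].length; x += 2) {
      row.push(pixels[y][x] + pixels[y][x + 1] +
               pixels[y + 1][x] + pixels[y + 1][x + 1]);
    }
    binned.push(row);
  }
  return binned;
}

const highRes = [
  [1, 2, 1, 5],
  [2, 6, 2, 3],
  [1, 9, 0, 1],
  [4, 1, 1, 1],
];
console.log(bin2x2(highRes)); // [ [ 11, 11 ], [ 15, 3 ] ]

Each virtual pixel combines the signal of four physical ones, trading resolution for less apparent noise.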
Strictly from a physics perspective (and assuming the sensors are the same overall quality), higher megapixel sensors are better simply because you can capture more detail and end up with similar quality when you scale the picture down to whatever you’re comparing it against. More detail never hurts.
… Except when it does. Unless you save your photos as RAW (which take a massive amount of space), they’re going to be compressed into a lossy image format like JPEG. And the lovely thing about JPEG is that it takes advantage of human vision to strip away visual information that we generally wouldn’t perceive, like slight color changes and high-frequency details (like noise!).
And you can probably see where this is going: the way that the photo is encoded and stored destroys data that would have otherwise ensured you could eventually create a comparable (or better) photo. Luckily, though, the image is pre-processed by the camera software before encoding it as a JPEG, applying some of those quality-improving tricks before the data is lost. That leaves you at the mercy of the manufacturer’s software, however.
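To make that concrete, here’s a toy illustration of quantization, the lossy step at the heart of JPEG (this is not the real codec, just the principle, and the numbers are made up):

// Quantization: divide, round, multiply back. Small high-frequency
// values round to zero and are gone for good.
const coefficients = [152, 31, -9, 4, 2, 1, 1, 0]; // pretend DCT output
const quantStep = 10;
const quantized = coefficients.map((c) => Math.round(c / quantStep));
const restored = quantized.map((q) => q * quantStep);
console.log(quantized); // [15, 3, -1, 0, 0, 0, 0, 0]
console.log(restored);  // [150, 30, -10, 0, 0, 0, 0, 0] - the fine detail is gone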
In summary: more megapixels is better in theory. In practice, bad software and image compression negate the advantages that a higher resolution provides, and higher-density sensors likely mean lower-quality data. Also, don’t expect more megapixels to mean better zoom. You would need an actual lens for that.
If that were the case, wouldn’t the mouse jump when the latest frame is presented? For me, it’s more that it just stays still until after Windows stops having a fuss.