• 2 Posts
  • 706 Comments
Joined 2 years ago
Cake day: June 5th, 2023

  • Here’s the thing, we scientists need our cheerleaders. We spend our time getting good at doing science, so it’s worth it to hire someone who is good at hyping and advocating for our work. Go listen to the recordings of James Webb trying to explain to JFK that we need to do a shit-ton of science before we can get to the moon. JFK just plain doesn’t understand the magnitude of what he’s asking for. He thinks we could do it in 6 months. This lady is our champion.

• Liz@midwest.social to Greentext@sh.itjust.works: Anon's PC works · 2 months ago

    A lot of the efficiency gains in the last few years come from better chip design, in the sense that manufacturers are improving their on-chip algorithms and improving how the CPU decides to cut power to various components. The easy example is how much more power efficient an ARM-based processor is compared to an equivalent x86-based processor. The fundamental set of operations designed into the chip is based on the instruction set standard (ARM vs. x86), and that in and of itself contributes to power efficiency. I believe RISC-V is also supposed to be a more efficient instruction set.

    Since the speed of the processor is limited by how far the electrons have to travel, miniaturization is really the key to single-core processor speed. There has been some recent success in shrinking the chip’s physical components, but not much. The current generation of CPUs has to deal with errors caused by quantum tunneling, and the smaller you make the transistors, the worse it gets. It’s been a while since I learned about chip design, but I do know that we’ll have to make a fundamental change in chip “construction” if we want faster single-core speeds. For example, at one point power was delivered to the chip components on the same plane as the logic itself, but that was running into density and power (thermal?) limits, so someone invented backside power delivery and chips kept on getting smaller. These days, the smallest features on a chip are maybe four dozen atoms wide.
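    To put a rough number on how travel distance limits clock speed, here’s a back-of-envelope sketch. The 5 GHz clock and the half-light-speed on-chip propagation figure are my own illustrative assumptions, not numbers from the comment above:

    ```python
    # Back-of-envelope: how far can a signal travel in one clock cycle?
    # Assumed numbers (illustrative): 5 GHz clock, signals moving at
    # roughly half the vacuum speed of light on-chip.
    C = 3.0e8               # speed of light, m/s
    clock_hz = 5.0e9        # assumed 5 GHz clock
    signal_speed = 0.5 * C  # assumed on-chip propagation speed

    cycle_time = 1 / clock_hz             # seconds per clock cycle
    distance = signal_speed * cycle_time  # meters covered per cycle

    print(f"One cycle lasts {cycle_time * 1e12:.0f} ps")
    print(f"A signal covers about {distance * 1000:.0f} mm per cycle")
    ```

    Under those assumptions a cycle lasts 200 ps and a signal covers about 30 mm, which is only a few die-widths. Real signals also pass through many transistors per cycle, each adding delay, so the usable path per cycle is far shorter. That’s why shrinking distances matters so much.
    
    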

    I should also say, there’s not the same kind of pressure to push single-core speeds higher and higher like there used to be. These days, pretty much any chip can run fast enough to handle most users’ needs without issue. There are only so many operations per second needed to run a web browser.



• Liz@midwest.social to Greentext@sh.itjust.works: Anon's PC works · 2 months ago

    We’ve reached the physical limits of silicon transistors. Speed is determined by transistor size (to a first approximation), and we just can’t make them any smaller without running into problems we’re essentially unable to solve, thanks to physics. The next time computers get faster, it will involve some sort of fundamental material or architecture change. We’ve actually made fundamental changes to chip design a couple of times already, but they were “hidden” because they slotted into the smooth improvement in speed/power/efficiency at the time.





• But the opposite is true when you’re by yourself. If you’re staring at the terminal, literally infinite commands are possible. If you’ve got a GUI, the designers had to spend a little time thinking about what all the operations in the program were, and how to organize and access them. You, the user, then get to navigate this mini help guide that is the GUI in order to figure out what you need to do. Yes, it’s more work for the programmer, but that’s the entire point of programming: do a little more work up front to save yourself and others a lot of work down the road.