Ok, I think I do know the answer, but I never learned it, so I want to learn it today. It’s been about a year now that we can reliably make 3 nm chips, which is impressive on a scale of size. But why is it better? My theory is simply: we can make a product the same size but add more on it because everything is smaller, making it stronger and faster for more complex operations. Which would mean it’s not the chip that’s impressive on its own, just the size of it.

Or there is something else, and I’d love to get the full explanation and understand chips better

  • partial_accumen@lemmy.world · 1 year ago

    Until we have reliable, wide-temperature-range superconductors, electronics are limited by electrical resistance in the materials that conduct electricity. So the materials inside CPUs have resistance. With chemistry we’ve lowered it as much as we can, but for the material to still be a semiconductor (the kind of material that makes transistors and CPUs work) there are practical limits, and with humanity’s knowledge today we’ve hit them.

    Take your hand palm flat and place it on the floor next to your foot. Put some weight on your hand and drag it quickly from your toes to your heel. Your hand got a little warm from the friction, right? Now imagine doing that same hand-dragging exercise from your bedroom all the way to your living room. HOT HOT HAND! Friction is the same thing that causes heat in CPUs: the electrons rubbing against the conductor’s resistance as they flow.

    So we’ve got heat limiting us, and the more distance we have, the more heat we have, the more limits on CPU speed we have.
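    To put a rough formula on the friction analogy: the heat dissipated in a conductor is classic Joule heating, P = I²·R, and resistance grows with the length of the path. A minimal sketch with made-up numbers (the values are illustrative, not real CPU figures):

```python
# Joule heating: power dissipated in a resistive conductor is P = I^2 * R.
# Resistance R grows with the length of the path, so more distance means
# more heat for the same current. Numbers here are illustrative only.
def joule_heat_w(current_a, resistance_ohm):
    """Heat dissipated, in watts, for a given current and resistance."""
    return current_a ** 2 * resistance_ohm

print(joule_heat_w(1.0, 0.5))  # short, low-resistance path -> 0.5 W
print(joule_heat_w(1.0, 2.0))  # 4x the resistance -> 4x the heat, 2.0 W
```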

    So with present day CPUs, how can we make less heat? Use less distance in the CPU from place to place inside it.

    This is where we come to your 3 nm (nanometers). This is the measurement of the width of one part (called the “gate”) of a single transistor inside the CPU. It’s 3 times smaller than, say, a 9 nm gate technology CPU. Our new CPU has 3 times less distance for signals to travel, which also means it needs less electricity to do the same work. Less electricity also means less heat, because there are fewer electrons rubbing against the conductor’s resistance.

    So less distance to travel, and fewer electrons needed to travel. That’s good stuff for making faster CPUs!
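    As a back-of-envelope, the standard approximation for CMOS switching power is P ≈ C·V²·f: shrinking reduces capacitance and allows a lower supply voltage, and since voltage enters squared, the savings compound. A sketch with made-up numbers, not real process data:

```python
# Classic CMOS dynamic-power approximation: P ~ C * V^2 * f.
# All values below are hypothetical, just to show the scaling.
def dynamic_power_w(capacitance_f, voltage_v, frequency_hz):
    """Approximate switching power in watts."""
    return capacitance_f * voltage_v ** 2 * frequency_hz

old_node = dynamic_power_w(1e-9, 1.0, 3e9)    # bigger transistors
new_node = dynamic_power_w(0.5e-9, 0.8, 3e9)  # smaller: less C, lower V
print(old_node, new_node)  # roughly 3 W vs under 1 W at the same 3 GHz clock
```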

    So now you ask: why are we stopping at 3 nm? Why not 1 nm right now? In short, we don’t have the technology for it yet. CPUs are made with, believe it or not, a photographic process! Light in the specific shape of the CPU circuit is shone on specially prepared silicon, and chemicals make part of that silicon conduct and part NOT conduct. This is semiconductor lithography. I could go down a whole separate line for this, but it isn’t what you asked, so I’ll leave off right here.

  • SHITPOSTING_ACCOUNT@feddit.de · 1 year ago

    The smaller it is the less power it needs. That also means it generates less heat, allowing you to do more computation without the device melting.

  • Hazdaz@lemmy.world · 1 year ago

    Smaller process means less energy. Less energy means less heat. Less heat can mean faster operation. So without changing any of the layout or logic of the chip itself to make it more efficient, just shrinking the process alone will give you a speed boost.

    But it goes further than that. Chips are cut from a wafer, and the cost to make that wafer is (for the most part) constant. Whether you can make 20 chips or 2,000 chips from that one wafer, it ultimately costs the same. That means the more CPUs that can be made per wafer, the lower the per-CPU cost.
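    The wafer arithmetic above is easy to sketch (the price is made up; real wafer costs vary by node and fab):

```python
# Per-chip cost for a fixed wafer cost: more chips per wafer -> cheaper chips.
def cost_per_chip(wafer_cost_usd, chips_per_wafer):
    return wafer_cost_usd / chips_per_wafer

wafer_cost = 10_000.0  # hypothetical cost to process one wafer
print(cost_per_chip(wafer_cost, 20))    # 500.0 dollars per chip
print(cost_per_chip(wafer_cost, 2000))  # 5.0 dollars per chip
```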

    So you get a more power efficient, cooler, faster and cheaper chip when you shrink the process. The entire semiconductor industry is so dependent on this idea that it invests billions into it every year because it is so vitally important.

    • Granixo · 1 year ago

      You are literally correct, the best kind of correct.

    • spauldo@lemmy.ml · 1 year ago

      Power efficiency and the amount of components you can fit into the same area are the big reasons.

      3nm isn’t what you use for regular run-of-the-mill chips like voltage regulators and ADCs. It’s for things like processors, where you have a metric buttload of complexity all in a tiny package.

      We can’t really clock silicon much faster than we do now, so speed increases come from having more cores, more pipelines, and more complicated tricks that let you do more with the same clock speed. People don’t want to buy new devices that aren’t faster than their old devices.

      Taiwanese fabs have pushed the state of the art for quite some time now, so if China is catching up then that will get some people’s attention. But Chinese fabs generally don’t participate in the global supply chain so I personally think it’s not going to have much impact in the west.

  • jet@hackertalks.com · 1 year ago

    The faster you want a chip to go, the less time there is between cycles, and it takes time for signals to propagate across silicon. The smaller the window, the less workable area you have. An entire CPU wants to be clocked together, meaning you want all the components running at more or less the same speed so they can work together efficiently.

    At 10 GHz, even at the speed of light a signal can only travel about 3 cm per cycle. And the propagation speed of signals in silicon is even lower.
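    The per-cycle distance is a one-liner to check (the 0.5 factor below is an assumed, illustrative signal velocity; real on-chip propagation depends on the interconnect):

```python
# Distance a signal can cover in one clock period.
C_M_PER_S = 299_792_458  # speed of light in vacuum

def distance_per_cycle_cm(clock_hz, fraction_of_c=1.0):
    return fraction_of_c * C_M_PER_S / clock_hz * 100  # metres -> cm

print(round(distance_per_cycle_cm(10e9), 1))       # 3.0 cm at c
print(round(distance_per_cycle_cm(10e9, 0.5), 1))  # 1.5 cm at an assumed 0.5c
```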

    This is a reason multi-core processors have become more common: they’re different time domains. Each processor core is at the limit of usable area within the time constraint, so to get more computation power you add more cores instead. Sadly, most programmers still write single-threaded programs; only people who absolutely need performance bother with writing real multithreaded ones. So realistically, on your 64-core machine, you’re probably only using one or two cores at any time.
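    For the curious, splitting work across cores looks like this in outline (a toy sketch; in CPython the GIL limits CPU-bound threads, so real parallel number-crunching would use a process pool or another language, but the structure is the same):

```python
# Split a big summation into chunks and hand each chunk to a worker.
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(bounds):
    start, stop = bounds
    return sum(range(start, stop))

def parallel_sum(n, workers=4):
    step = n // workers
    bounds = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum, bounds))

assert parallel_sum(1_000_000) == sum(range(1_000_000))
```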

    Other people have already talked about temperature, so I won’t.

    • ozymandias117@lemmy.world · 1 year ago

      This is also why we use speculative execution and pipelines of various lengths per core for single-threaded execution.

      A long pipeline creates big delays when an instruction wasn’t the correct one, but on average it saves time
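      The average-case trade-off can be written out directly (the 95% prediction accuracy and 20-cycle flush penalty are assumed, illustrative numbers):

```python
# Expected cycles per branch: a correct prediction costs ~1 cycle,
# a misprediction flushes the pipeline and costs the full penalty.
def avg_cycles_per_branch(prediction_accuracy, flush_penalty_cycles):
    return prediction_accuracy * 1 + (1 - prediction_accuracy) * flush_penalty_cycles

print(f"{avg_cycles_per_branch(0.95, 20):.2f}")  # 1.95 -> close to 1 on average
print(f"{avg_cycles_per_branch(0.50, 20):.2f}")  # 10.50 -> why predictors matter
```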

  • weew@lemmy.ca · 1 year ago

    The semiconductor industry has been shrinking chips for basically as long as it has existed. Look up Moore’s Law.

    In any case, the smaller the transistor, the more chips you can produce in each batch, and the chips use less power and run faster. Usually. Sometimes things don’t go right, but that’s the general trend.
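    Moore’s law is easy to play with as a toy formula, transistor counts doubling roughly every two years (the doubling period is the usual rule-of-thumb figure, not a law of physics):

```python
# Rough Moore's-law projection: count doubles every ~2 years.
def projected_transistors(start_count, years, doubling_period_years=2):
    return start_count * 2 ** (years / doubling_period_years)

# The Intel 4004 (1971) had about 2,300 transistors; 50 years of doubling
# every 2 years lands in the tens of billions, which matches modern chips.
print(f"{projected_transistors(2_300, 50):,.0f}")
```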

  • rufus@discuss.tchncs.de · 1 year ago

    With a smaller feature size, you make the chip cheaper, more energy efficient and you can/could bump the frequency.

    You can fit more dies on a wafer, so each one gets cheaper. And smaller transistors use less energy. I’m pretty sure the high-frequency stuff also gets ‘easier’ with less distance and material involved.
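    A crude dies-per-wafer estimate (this ignores edge loss and defects, so real yields are lower; the die sizes are just examples):

```python
import math

# Usable dies per wafer ~ wafer area / die area (optimistic upper bound).
def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

# Shrinking the die from 100 mm^2 to 50 mm^2 roughly doubles the count.
print(dies_per_wafer(300, 100))  # 706 on a standard 300 mm wafer
print(dies_per_wafer(300, 50))   # 1413
```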

  • max@feddit.nl · 1 year ago

    I’m not super well-versed in this field, but I know the basics. People are excited about 3nm processes because it allows you to fit more transistors on the same die or make smaller dies with the same performance as a bigger one on the old process. It also has reduced energy consumption, and thus reduced heat production. How exactly, I’m not sure.

  • marcos@lemmy.world · 1 year ago

    Well, for a start, those 3 nm are supposed to represent the feature size, not the transistor size. The feature size is like the resolution of an image (in fact, it is very much like one), so as you reduce it you can get more and more detailed things, but those things don’t necessarily shrink. Thus you don’t necessarily get more transistors from a finer process.

    But then, the sizes you read about right now aren’t real feature sizes either. They’re calculated by a complex algorithm that takes everything into account, from the cost of the factories to the chip’s power dissipation, and only terminates on the marketing department’s computers. You can’t expect to just read them and learn something useful.

  • Granixo · 1 year ago

    Shrinking is better because it inherently makes electronic components more efficient and cheaper to produce in bulk.

    However, if you ask me, I believe it would be nicer if, for a couple of years, developers and manufacturers shifted their focus to making better-optimized software (mostly just getting rid of all the bloatware) and hardware (bringing dedicated sound and network chips back).