I’ve seen people talking about it and experienced it myself with a server, but why does Linux run so well on ARM (especially compared to Windows)?

  • nyan@lemmy.cafe · 58 points · 1 year ago

    Linux, and much of the open-source software that goes with it, has been multi-architecture for a long time. If you take something that already runs pretty decently on x86, x86_64, PA-RISC, Motorola 68000, PowerPC, MIPS, SPARC, and Intel Itanium CPUs, porting it to yet another architecture is, while not trivial, at least mostly a known problem.

    Windows, by contrast, was built for descendants of the Intel 8088, period. It’s unsurprising that porting it is a hard problem and that results aren’t always satisfactory.

    (Apple built macOS on top of a kernel with a heavily modified BSD layer, and BSD has also been ported around quite a bit, so they also have a ports-are-a-known-problem advantage.)

    • unfnknblvbl@beehaw.org · 6 points · 1 year ago

      Windows, by contrast, was built for descendants of the Intel 8088, period.

      This is not quite true. Windows NT was built to support multiple architectures from the start.

      • apt_install_coffee@lemmy.ml · 18 points · 1 year ago

        NT is not the majority of Windows code, though; for Windows to be multi-architecture, all of it has to work on the new architecture: NT, drivers, and userspace.

        For Linux, if an existing userspace application doesn’t work on aarch64, somebody somewhere will build a port. For Windows, so much of the stack is proprietary that Microsoft is the only one able to build that port.

        Not because “Windows bad”, just a consequence of such a locked-down system, which has nothing open source to inherit from.
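
        To put a concrete face on that, here’s a hypothetical sketch of what porting a userspace application often amounts to. Suppose an app ships an x86-only fast path in a little timing helper (the helper and the scenario are made up for illustration): with the source in hand, a porter can add an aarch64 branch or simply lean on the portable fallback and recompile; with a proprietary binary, only the vendor can.

        #define _POSIX_C_SOURCE 200809L
        #include <stdint.h>
        #include <stdio.h>
        #include <time.h>

        /* Hypothetical helper: read some cheap timestamp counter. */
        static uint64_t read_ticks(void)
        {
        #if defined(__x86_64__)
            uint32_t lo, hi;                  /* original x86-only fast path */
            __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
            return ((uint64_t)hi << 32) | lo;
        #elif defined(__aarch64__)
            uint64_t ticks;                   /* the branch a porter adds */
            __asm__ __volatile__("mrs %0, cntvct_el0" : "=r"(ticks));
            return ticks;
        #else
            struct timespec ts;               /* portable fallback for everything else */
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
        #endif
        }

        int main(void)
        {
            printf("ticks: %llu\n", (unsigned long long)read_ticks());
            return 0;
        }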

      • Billegh@lemmy.world · 3 points · 1 year ago

        Yes, and DEC Alpha too. But all of that ceased with Windows 2000. The only porting since then was from 32-bit to 64-bit x86. I’m willing to bet money I don’t have that Microsoft really only expected a port to 128-bit x86 until ARM started gaining steam.

    • DigitalMuffin@sh.itjust.works · 3 points, 1 downvote · 1 year ago

      No. Windows has a portable architecture, and it’s quite simple for Microsoft to compile it for whatever processor they want. Just change the HAL and you’re ready to go.
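
      For what it’s worth, the general idea behind a HAL can be sketched roughly like this. This is a generic illustration in C with made-up names (struct hal_ops and friends), not Microsoft’s actual interface: the portable parts of the OS call through a small table of platform operations, and only that table has to be reimplemented for each architecture or board.

      #include <stdio.h>

      /* Table of platform operations the portable code calls through. */
      struct hal_ops {
          void (*early_console_write)(const char *msg); /* debug output before real drivers exist */
          void (*enable_interrupts)(void);
          void (*cpu_idle)(void);                       /* wait for the next interrupt */
      };

      /* A pretend x86-flavoured implementation; an ARM port would supply its
       * own hal_ops with the same signatures, and the code above it would not change. */
      static void x86_console_write(const char *msg) { fputs(msg, stderr); }
      static void x86_enable_interrupts(void)        { /* sti on real hardware */ }
      static void x86_cpu_idle(void)                 { /* hlt on real hardware */ }

      static const struct hal_ops hal = {
          .early_console_write = x86_console_write,
          .enable_interrupts   = x86_enable_interrupts,
          .cpu_idle            = x86_cpu_idle,
      };

      int main(void)
      {
          hal.early_console_write("portable code calling through the HAL\n");
          hal.enable_interrupts();
          return 0;
      }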

    • kristoff@infosec.pub · 2 points · 1 year ago

      Hi, perhaps a stupid question, but what exactly is required to port an OS to a different architecture? OK, there’s the boot process and low-level language compilers… but what else?

      How much code actually has to be rewritten, and how much just needs a “make” to be recompiled?

      Kr.

      • nyan@lemmy.cafe · 4 points · 1 year ago

        Not my area, but since OSs are really low-level (obviously), they can be affected by details of the host architecture that we don’t often think about. Endianness, for instance.
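
        For a concrete feel of what endianness means in practice, here’s a tiny, self-contained C demo (nothing kernel-specific about it): the same four bytes in memory read back as different integers depending on the byte order of the machine you run it on.

        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            const uint8_t bytes[4] = {0x01, 0x02, 0x03, 0x04};
            uint32_t value;

            memcpy(&value, bytes, sizeof value); /* reinterpret the raw bytes as one 32-bit integer */
            /* prints 0x04030201 on little-endian machines (x86, most ARM setups),
             * 0x01020304 on big-endian ones */
            printf("0x%08" PRIx32 "\n", value);
            return 0;
        }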

        I opened up the source package for the kernel I’m currently running (6.1.42) and looked at it. The smallest set of architecture-specific code is the ~2MB for sh (I assume that’s SuperH, a 32-bit RISC architecture from the early 1990s). 32-bit ARM takes up 27MB, although if you check the individual files, a fair amount of that is device trees and the like. So we’re talking about less than 50MB of arch-specific source code for most platforms, and probably less than 10MB in many cases, but it depends on the design of the architecture and how many times it’s been extended.
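
        If you want to repeat that check yourself, here’s a rough C sketch that totals the file sizes under one arch/ subdirectory of an unpacked kernel tree. The directory path below is just an assumption; point it at wherever your source actually lives.

        /* Sum the sizes of regular files under one arch/ subdirectory.
         * The directory name is an assumption -- adjust it to your tree. */
        #define _XOPEN_SOURCE 500
        #include <ftw.h>
        #include <stdio.h>
        #include <sys/stat.h>

        static long long total_bytes;

        static int add_size(const char *path, const struct stat *sb,
                            int typeflag, struct FTW *ftwbuf)
        {
            (void)path; (void)ftwbuf;
            if (typeflag == FTW_F)            /* count regular files only */
                total_bytes += sb->st_size;
            return 0;                         /* keep walking */
        }

        int main(void)
        {
            const char *dir = "linux-6.1.42/arch/arm";  /* assumed location */

            if (nftw(dir, add_size, 16, FTW_PHYS) != 0) {
                perror("nftw");
                return 1;
            }
            printf("%s: %.1f MB of source\n", dir, total_bytes / 1e6);
            return 0;
        }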

        Looking at individual file names, topics addressed in the kernel’s arch-specific code files appear to include booting, low-level memory access, how to idle the CPU, crypto primitives, interrupts, suspending/hibernating the system and other power management, virtualization facilities if the CPU provides them, crash dumps and stack traces, and, yes, endianness.
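
        To make one of those concrete, here’s a compile-time sketch of just the “how to idle the CPU” item. It’s a different (privileged) instruction on each architecture, which is exactly the sort of thing every arch/ directory has to supply; the function name here is made up for illustration.

        #include <stdio.h>

        static inline void arch_idle_sketch(void)   /* illustrative name, kernel-side flavour */
        {
        #if defined(__x86_64__) || defined(__i386__)
            __asm__ __volatile__("hlt");            /* x86: halt until the next interrupt */
        #elif defined(__aarch64__) || defined(__arm__)
            __asm__ __volatile__("wfi");            /* ARM: wait for interrupt */
        #else
            /* every other port supplies its own equivalent */
        #endif
        }

        int main(void)
        {
        #if defined(__x86_64__) || defined(__i386__)
            puts("this build would idle with 'hlt'");
        #elif defined(__aarch64__) || defined(__arm__)
            puts("this build would idle with 'wfi'");
        #else
            puts("no idle instruction known here");
        #endif
            return 0;   /* arch_idle_sketch() itself needs kernel privilege to run */
        }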

        You may also need additional drivers for odd bits of hardware not used by other systems. Or you may not, but it’s a common sticking point with ARM SoCs and other small-format machines.

        That’s just the kernel. You’ll also need to establish a working cross-compiler before you can get your kernel onto the system. At that point, you can probably bootstrap much of the rest by running make and get to a working command-line system (GUI is going to be more of a crapshoot, requiring additional work on video acceleration and such in order to run well). And there may be odd warts in other pieces of software, each requiring a few lines of code that add up over time.
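
        The first rung of that bootstrapping ladder usually looks something like this: compile a trivial program with the cross-compiler on the machine you already have, then copy it to (or boot it on) the target. The toolchain name in the comment is the usual Debian/Ubuntu one; treat it as an assumption, since yours may be called something else.

        /* hello.c -- smoke test for a freshly set-up cross-toolchain.
         * Build on the host with something like (assumed toolchain name):
         *   aarch64-linux-gnu-gcc -static -o hello hello.c
         * then run the binary on the target (or under qemu-user). */
        #include <stdio.h>

        int main(void)
        {
            printf("hello from the target architecture\n");
            return 0;
        }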