
Xen Project Officially Ports Its Hypervisor To Raspberry Pi 4 (theregister.com)

The Xen Project has ported its hypervisor to the 64-bit Raspberry Pi 4. The Register reports: The idea to do an official port bubbled up from the Xen community and then reached the desk of George Dunlap, chairman of the Xen Project's Advisory Board. Dunlap mentioned the idea to an acquaintance who works at the Raspberry Pi Foundation, and was told that around 40 percent of Pis are sold to business users rather than hobbyists. With more than 30 million Arm-based Pis sold as of December 2019, and sales running at a brisk 600,000-plus a month in April 2020, according to Pi guy Eben Upton, Dunlap saw an opportunity to continue Xen's drive towards embedded and industrial applications.

Stefano Stabellini, who by day works at FPGA outfit Xilinx, and past Apache Foundation director Roman Shaposhnik took on the task of the port. The pair clocked that the RPi 4's system-on-chip used a regular GIC-400 interrupt controller, which Xen supports out of the box, and thought this was a sign this would, overall, be an easy enough job. That, the duo admitted, was dangerous optimism. Forget the IRQs, there was a whole world of physical and virtual memory addresses to navigate. The pair were "utterly oblivious that we were about to embark on an adventure deep in the belly of the Xen memory allocator and Linux address translation layers," we're told. [The article goes on to explain the hurdles that were ahead of them.]

"Once Linux 5.9 is out, we will have Xen working on RPi4 out of the box," the pair said. [...] Stefano Stabellini told The Register that an official Xen-on-RPi port will make a difference in the Internet-of-Things community, because other Arm development boards are more costly than the Pi, and programmers will gravitate towards a cheaper alternative for prototyping. He also outlined scenarios, such as a single edge device running both a real-time operating system alongside another OS, each dedicated to different tasks but inhabiting the same hardware and enjoying the splendid isolation of a virtual machine rather than sharing an OS as containers. George Dunlap also thinks that an official Xen-on-RPi port could also be of use to home lab builders, or perhaps just give developers a more suitable environment for their side projects than a virtual machine or container on their main machines.
Stay tuned to Project EVE's GitHub page for more details about how to build your own Xen-for-RPi. Hacks to get it up and running should also appear on the Xen Project blog.
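
As a rough illustration of the dual-OS scenario Stabellini describes: on a working Xen-on-RPi4 install, each guest would be described by an xl toolstack config file in dom0 and started with xl create. Everything below (the guest name, kernel image, partition, and bridge) is a hypothetical example, not something taken from the article or the Project EVE instructions, and the exact options for an RPi4 build may differ:

    # /etc/xen/rtos-guest.cfg: hypothetical xl config for one of two guests
    name    = "rtos-guest"                     # domain name shown by xl list
    vcpus   = 1                                # give the real-time workload its own vCPU
    memory  = 256                              # MiB of RAM for this guest
    kernel  = "/boot/guests/rtos-kernel.bin"   # direct-boot kernel image for the guest
    cmdline = "console=hvc0"                   # guest console on the Xen PV console
    disk    = [ 'phy:/dev/mmcblk0p3,xvda,w' ]  # hand an SD-card partition to the guest
    vif     = [ 'bridge=xenbr0' ]              # network via a bridge in dom0

The second, general-purpose OS would get its own config file; dom0 starts each guest with xl create and can reach its console with xl console.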

Comments Filter:
  • The abstraction layer crowd has found ARM, as if it already wasn't slow enough. Can't wait to run all my software in bloated insecure containers inside a VM.

    • Comment removed (Score:5, Informative)

      by account_deleted ( 4530225 ) on Tuesday September 29, 2020 @06:28PM (#60555400)
      Comment removed based on user account deletion
      • by raymorris ( 2726007 ) on Tuesday September 29, 2020 @08:49PM (#60555736) Journal

        Indeed. There *is* no abstraction layer for most code; it runs natively on the CPU. The I/O has to go through the hypervisor, which doesn't slow it down significantly because it would otherwise go through a driver anyway. With virtio there basically is no driver in the guest; it's just remapping a memory page (a descriptor sketch follows right after this comment).

        On security, Xen is highly secure for separating guests.
        With a few patches to Xen for things like tick count, it can be shown that a program running in Xen can't even *detect* that it's running in a VM, much less escape that VM.

        Where there can be a slight loss of security is between two programs running in the same VM. If you're using VMs, don't run super-sensitive code in the same VM as untrusted code that takes input from the internet. The issue there is that with a normal OS, the kernel runs in ring 0 and the applications run in ring 3. Certain important privileged instructions cannot be run in ring 3; the CPU will only allow the kernel to run them. That's key to securing one program from another, to making sure that an application can't do kernel things like messing with another process (a small user-space demo follows below). In a paravirtualized VM, the hypervisor has ring 0, so the guest kernel runs in ring 3 just like the applications. So you lose the protections provided by the CPU and its privileged instructions, trusting the hypervisor to handle that instead.
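
        To make the virtio remark above concrete: in the split-virtqueue layout from the virtio spec, the guest mostly fills in small descriptors that point at pages of its own memory, and the backend reads those pages directly. The C sketch below just restates that descriptor layout for illustration; it is the generic virtio 1.x structure, not Xen code (classic Xen PV drivers use grant tables instead, but the shared-memory idea is the same):

            #include <stdint.h>

            /* One entry in a virtio split-virtqueue descriptor table (virtio 1.x).
             * The guest fills these in; the backend walks the table and accesses
             * the referenced guest pages directly, with no per-request device
             * emulation. Fields are little-endian on the wire; plain integers
             * are used here for brevity. */
            struct vring_desc {
                uint64_t addr;   /* guest-physical address of the buffer */
                uint32_t len;    /* length of the buffer in bytes */
                uint16_t flags;  /* VRING_DESC_F_* bits below */
                uint16_t next;   /* index of the next descriptor if chained */
            };

            #define VRING_DESC_F_NEXT     1   /* buffer continues via the 'next' field */
            #define VRING_DESC_F_WRITE    2   /* buffer is write-only for the device */
            #define VRING_DESC_F_INDIRECT 4   /* buffer holds a table of descriptors */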
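
        And to see the ring-protection point for yourself on any x86 Linux box (a generic demo, nothing Xen-specific): a privileged instruction such as HLT simply faults when executed from ring 3, which is the mechanism the kernel (or, under virtualization, the hypervisor) relies on to keep code in its lane.

            /* hlt_demo.c: build with "cc -o hlt_demo hlt_demo.c" (x86 Linux only) */
            #include <signal.h>
            #include <stdio.h>
            #include <unistd.h>

            static void on_fault(int sig)
            {
                (void)sig;
                /* The #GP fault from user mode arrives as SIGSEGV; use the
                 * async-signal-safe write() rather than printf() here. */
                static const char msg[] = "faulted: ring 3 may not execute HLT\n";
                ssize_t n = write(STDOUT_FILENO, msg, sizeof msg - 1);
                (void)n;
                _exit(0);
            }

            int main(void)
            {
                signal(SIGSEGV, on_fault);
                puts("attempting to execute HLT from user space (ring 3)...");
                __asm__ volatile("hlt");   /* privileged: traps instead of running */
                puts("this line should never be reached");
                return 1;
            }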

  • Now if only we had enough RAM to make use of it...

    On the other hand, instead of running a bunch of VMs, why not just buy a bunch of RPi4s and not be so RAM-restricted? It's not like they cost all that much (unless you're going for the 4GB or 8GB versions).

    • The nice thing about VMs is remote access to the console. If you run stuff in a VM, you can recover an otherwise unbootable system without physical access as long as you don't lose access to the hypervisor (or dom0 in the case of Xen).

      • I think it would be more fun to have an array of compute modules that abstracts the console and provides a common network switch/switches. A 1U board that hosts a few dozen compute modules would sure be better for me than the docker hell I am stuck in with some platforms now.
        • There is a 1U system from Supermicro that can take 2x i7 CPUs with lots of RAM, 4x 1GbE ports, 4x 2.5" bays, and a redundant power supply. We have used these machines for real-time image processing; when we first got the info about them, it looked like a nice way to run the (qemu) VMs that weren't running as fast as I liked on a Core 2 Duo with only 4GB. But when the first one arrived and powered up... it sounded like a jet at takeoff, with the fans running at max speed until booting finished. There were 12 of them!
    • by tzanger ( 1575 )

      insert obligatory "imagine a beowulf cluster of these!"...

    • by AmiMoJo ( 196126 )

      For the Pi with 4GB of RAM, Docker containers are a better option if you can live without the extra isolation and other benefits of a VM. There are some great IoT stacks based on Docker containers on the Pi.

    • by kriston ( 7886 )

      Too bad that ship has sailed. We like containers way more than we like VMs.

  • Struggling to find many use-cases where this would be an advantage over Docker these days. Docker has been available on the Pi for some time, is even more lightweight (making great use of the Pi's limited memory resources), yet achieves the same real-world level of workload isolation.

    Hell, you can then easily scale up by running k3s.

    • Docker is great for well-thought-out solutions, or for things you just don’t care about... but for the stuff in the middle, I struggle to troubleshoot Docker problems because everything is abstracted within various Docker containers and internal communication that I don’t have any access to. Sure, I need to get better with Docker, but when a container doesn’t do any logging and I just get silent failures between two different containers, I truly am in Docker Hell. I end up just throwing anoth
      • Yeah, I get that. Docker containers aren't always the easiest to debug in and get right. And I work with them every day for my day job!

    • by damaki ( 997243 )
      You do not usually run Docker on bare metal but in a VM on a hypervisor. And on an 8 GB RAM server, it does make sense to use a hypervisor, as you may want to quickly spin up a new VM instance for a quick test without having to plug in new physical hardware. An underpowered CPU is not a major issue for a hypervisor either; I have spent literally years playing with hypervisors on Pentiums (the new ones) and on a 10-year-old Core 2 Duo machine.
      • I run containers "at scale" (ie, multiple thousands) for federal government (thank you, Kubernetes and OpenShift!)

        In my stack, I try to get rid of VMs and get Docker as close to the metal as possible, because:

        * Running a VM layer underneath OpenShift creates two layers that are both trying to shift load. Doesn't make for great load balancing decisions.
        * Splitting HW up into VMs adds extra overhead at multiple layers (another docker engine which saps 15% of a core, etc)

        So yes, it's perfectly valid to remov

    • It's not about isolation, it's about real-time performance. If you need to control something timing-sensitive, for example motion control for robotics or financial transactions, that really doesn't run reliably in non-real-time environments, which is what all applications on top of a normal operating system are. It's also why mainframes are still a thing; many finance systems are built as real-time systems, so you can't run them on any random cluster. (A rough wake-up-jitter sketch follows after this thread.)
      • You make a good point. But if your workload relies on hardware interrupts and the RTC, then you're probably not running either Docker nor Xen.
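
      On the real-time point a couple of comments up: one way to see what "doesn't run reliably" means is to measure wake-up jitter, i.e. how late a periodic task actually wakes versus when it asked to. The sketch below is a generic Linux measurement, not tied to Xen, Docker, or the Pi: it sleeps on a 1 ms period and reports the worst overshoot it sees. On a loaded, non-real-time kernel the worst case can drift into milliseconds, which is exactly the problem for motion control.

          /* jitter.c: build with "cc -O2 -o jitter jitter.c" on Linux */
          #include <stdio.h>
          #include <time.h>

          #define NSEC_PER_SEC 1000000000L
          #define PERIOD_NS    1000000L      /* 1 ms period */
          #define ITERATIONS   5000

          static long ts_diff_ns(struct timespec a, struct timespec b)
          {
              return (a.tv_sec - b.tv_sec) * NSEC_PER_SEC + (a.tv_nsec - b.tv_nsec);
          }

          int main(void)
          {
              struct timespec next, now;
              long worst = 0;

              clock_gettime(CLOCK_MONOTONIC, &next);
              for (int i = 0; i < ITERATIONS; i++) {
                  /* Advance the absolute deadline by one period. */
                  next.tv_nsec += PERIOD_NS;
                  if (next.tv_nsec >= NSEC_PER_SEC) {
                      next.tv_nsec -= NSEC_PER_SEC;
                      next.tv_sec  += 1;
                  }
                  /* Sleep until the deadline, then see how late we actually woke. */
                  clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
                  clock_gettime(CLOCK_MONOTONIC, &now);
                  long late = ts_diff_ns(now, next);
                  if (late > worst)
                      worst = late;
              }
              printf("worst wake-up latency over %d cycles: %ld us\n",
                     ITERATIONS, worst / 1000);
              return 0;
          }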

    • by kriston ( 7886 )

      Came here to say this. Raspbian has shipped with container support for a few years now.
