[Nix-dev] [RFC] Declarative Virtual Machines
Leo Gaspard
leo at gaspard.io
Tue Apr 25 23:39:17 CEST 2017
On 04/23/2017 08:40 PM, Volth wrote:
> I did not do benchmarks, just noticed that boot of /-on-tmpfs +
> /nix/store on 9pfs is slower. The performance was not critical.
> Anyway, thank you for msize suggestion, I will try it.
>
> There was more resentment than a point :)
> If I need to make something just a bit different from what an existing
> tool has been designed for, I cannot reuse existing code.
> That "a bit different" could be, for example, creating a NixOS .qcow2
> on a remote Ubuntu server. I cannot use make-nix-disk, so I copy-paste
> some code from it. It uses runInLinuxVM, which cannot be used on
> Ubuntu either, so code from runInLinuxVM is copied with some
> modifications (libguestfs cannot be used, because
> "switch-to-configuration" does not work on its appliance). So it
> results in a new tool of 500-1000 lines, made of copy-pasted and
> slightly modified snippets.
> What you do is something similar - "same as nixos-containers, but for
> qemu" - which has some basic assumptions hardcoded, such as "shared nix
> store", "host is nixos too" and "VM is to run on the same machine
> where it was built". The next guy, whose task does not fit the
> assumptions, ends up creating another big tool which also creates
> qcow2/vdi/raw/whatever and launches qemu/virtualbox/docker in just a
> slightly different way.
> The point is that the existing guest-creation-and-control tools are not
> flexible enough, and this is why we have so many of them doing very
> similar things, and keep planning and making new ones (besides those
> already in nixos and nixops, I have seen other tools on github, and I
> believe many of us have our own).
> Although all these tools are happy to use the NixOS module system, they
> might be just as happy to share and reuse something else: definitions
> of machines, of networks, a sophisticated tool to work with VM images
> (independent of runInLinuxVM), ...
I feel like there are two issues that you are pointing out: creation and
usage of a nixos guest.
For the creation of the disk image of a nixos guest, I believe
nixos-prepare-root is the only nixos-specific part: the next step is
copying a root FS to a disk image, and that has nothing to do with
nixos. If such tools do not exist yet, they could be written by people
involved with nix, but they would neither need nor expect a nix
environment. (make-disk-image is an example of a useless assumption
being hardcoded; I would guess it can be rewritten much more cleanly
once nixos-prepare-root lands. Before that, the easiest approach is the
current one, doing the build inside a nix VM; afterwards, there will be
no reason to keep it that way except the amount of work needed to
change it.)
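To make the split concrete, here is a hedged sketch of the
non-nixos-specific half; it assumes a root tree in $ROOT (e.g. as
produced by nixos-prepare-root) and standard tooling (truncate,
mkfs.ext4 with -d, qemu-img) - file names are purely illustrative:

```shell
# Sketch only: turn an already-prepared root FS into a disk image.
# Nothing here is nixos-specific; $ROOT could hold any Linux root tree.
IMG=nixos.img
truncate -s 2G "$IMG"                # allocate a sparse raw image
# mkfs.ext4 -d "$ROOT" "$IMG"        # populate the FS straight from the tree
# qemu-img convert -O qcow2 "$IMG" nixos.qcow2   # optional format conversion
```

The commented steps need the corresponding tools installed; none of
them require a nix environment on the build host.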
As for running a nixos guest, there is nothing nixos-specific about it:
it's just booting a disk image, so the tools to run one are the same as
for any other linux VM (or even a windows one, assuming a disk image
with a bootloader).
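For instance, booting such an image needs nothing beyond a stock qemu
invocation; the flags below are standard qemu-system options, and the
image name is hypothetical:

```shell
# Illustrative only: booting a NixOS qcow2 is the same as booting any image.
run_vm() {
  qemu-system-x86_64 \
    -enable-kvm \
    -m 1024 \
    -drive file=nixos.qcow2,format=qcow2 \
    -nic user
}
# run_vm   # uncomment on a host with KVM and the image present
```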
Now, the problem this RFC attempts to solve is running a nixos guest on
a nixos host in a way that is fully integrated with nixos. This does
mean encoding some assumptions: as there is no need for maximal
configurability (one just wants to "run a VM"), an arbitrary
virtualization solution can be picked, the host can be assumed to be
nixos, etc. Discussing these choices can certainly be important, but I
don't think the fact that containers.* uses systemd-nspawn as a backend
should matter in any way to the end user, even though it is really
important to the developer.
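For concreteness, the end-user interface could mirror containers.*;
everything below is a hypothetical sketch of what such options might
look like, not an existing NixOS API:

```nix
# Hypothetical option names, modeled on the existing containers.* interface.
{
  vms.webserver = {
    autoStart = true;
    memorySize = 1024;          # MiB, passed through to the backend
    config = { config, pkgs, ... }: {
      services.nginx.enable = true;
      networking.firewall.allowedTCPPorts = [ 80 ];
    };
  };
}
```

The point is that, as with containers.*, the backend choice would be
invisible at this level.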
As for the other things you envision sharing, definitions of machines
can (as far as I know) only be shared through the nixos module system,
and nixos-prepare-root appears to (I haven't checked its exact
behaviour) write a root FS directly based on a module system
configuration, hence having minimal dependencies.
Definitions of networks are necessary for complex deployments, where
nixops is appropriate. In this RFC I'm trying to keep things as simple
as possible, and just having a bridge connecting all VMs and the host
is enough for most (all?) use cases. (It may not cover a VM with two
IPs, but that is rare enough to be safely ignored by the nixos module
system, which as far as I know tries to keep things simple and to
reduce the options offered to users to the ones they are actually
likely to use, even though some edge cases may not be covered.)
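On the host side, such a shared bridge could be a minimal sketch along
these lines; option names may vary across nixpkgs versions, and
tap0/tap1 as well as the addresses are illustrative:

```nix
# Host-side sketch: one bridge shared by the host and all VM tap devices.
{
  networking.bridges.br0.interfaces = [ "tap0" "tap1" ];
  networking.interfaces.br0.ipv4.addresses = [
    { address = "10.233.0.1"; prefixLength = 24; }
  ];
}
```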
Hope I'm not too far beside the point!
Leo
PS: would you be OK with continuing this discussion on github [1], with
a link to this mail thread in the archives? That way we could keep
track of the discussions, which was, among other things, the reason why
RFCs were introduced, I think :)
[1] https://github.com/NixOS/rfcs/pull/12