[Nix-dev] RE: [Nix-commits] SVN commit: nix - 21928 - eelco - in nixos/trunk: lib lib/test-driver modules/virtualisation tests

Sander van der Burg - EWI S.vanderBurg at tudelft.nl
Thu May 20 23:47:35 CEST 2010


Hmmm, I probably was not entirely right about this, since the server also needs to know how to reach the client. NAT takes care of this; otherwise the server also needs a default gateway pointing to the router :-)
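
For concreteness, a rough sketch of the NAT-free variant (a sketch only, reusing the addresses the test below already assigns: the router is 192.168.1.2 on the inside network and 192.168.2.2 on the outside one):

    # Hypothetical variant of tests/nat.nix: no SNAT rule on the
    # router; instead the server also routes back through it.
    client =
      { config, pkgs, ... }:
      { virtualisation.vlans = [ 1 ];
        networking.defaultGateway = "192.168.1.2"; # router, inside
      };

    router =
      { config, pkgs, ... }:
      { virtualisation.vlans = [ 2 1 ];
        # IP forwarding still has to be enabled, e.g. from the test
        # script:
        #   $router->mustSucceed("echo 1 > /proc/sys/net/ipv4/ip_forward");
      };

    server =
      { config, pkgs, ... }:
      { virtualisation.vlans = [ 2 ];
        services.httpd.enable = true;
        services.httpd.adminAddr = "foo at example.org";
        networking.defaultGateway = "192.168.2.2"; # router, outside
      };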

Maybe it's better for me to take some sleep now :-)


-----Original message-----
From: nix-dev-bounces at cs.uu.nl on behalf of Sander van der Burg - EWI
Sent: Thu 20-5-2010 23:43
To: nix-dev at cs.uu.nl; Eelco Dolstra - EWI
Subject: [Nix-dev] RE: [Nix-commits] SVN commit: nix - 21928 - eelco - in nixos/trunk: lib lib/test-driver modules/virtualisation tests
 
Very nice!

But do you really have to use NAT in the test case? I believe that if the gateway is properly configured on the client and IP forwarding is turned on on the router, you're basically there, right?

NAT will only make the server "think" that a connection from the client comes from the router, but I don't think this is really required.

I'm not sure about this, but can't you use a hostname to identify the default gateway?
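
(Concretely, and this is just a guess, something like

    networking.defaultGateway = "router"; # hypothetical: works only if the
                                          # gateway name is resolved via
                                          # /etc/hosts when the route is set

but I don't know whether the route setup resolves host names at all.)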

-----Original message-----
From: nix-commits-bounces at cs.uu.nl on behalf of Eelco Dolstra
Sent: Thu 20-5-2010 23:07
To: nix-commits at cs.uu.nl
Subject: [Nix-commits] SVN commit: nix - 21928 - eelco - in nixos/trunk: lib lib/test-driver modules/virtualisation tests
 
Author: eelco
Date: 2010-05-20 21:07:32 +0000 (Thu, 20 May 2010)
New Revision: 21928

You can view the changes in this commit at:
   https://svn.nixos.org/viewvc/nix?rev=21928&view=rev

Added:
   nixos/trunk/tests/nat.nix
Modified:
   nixos/trunk/lib/build-vms.nix
   nixos/trunk/lib/test-driver/Machine.pm
   nixos/trunk/modules/virtualisation/qemu-vm.nix
   nixos/trunk/tests/default.nix

Log:
* Allow more complex network topologies in distributed tests.  Each
  machine can now declare an option `virtualisation.vlans' that causes
  it to have network interfaces connected to each listed virtual
  network.  For instance,

    virtualisation.vlans = [ 1 2 ];

  causes the machine to have two interfaces (in addition to eth0, used
  by the test driver to control the machine): eth1 connected to
  network 1 with IP address 192.168.1.<i>, and eth2 connected to
  network 2 with address 192.168.2.<i> (where <i> is the index of the
  machine in the `nodes' attribute set).  On the other hand,
  
    virtualisation.vlans = [ 2 ];

  causes the machine to only have an eth1 connected to network 2 with
  address 192.168.2.<i>.  So each virtual network <n> is assigned the
  IP range 192.168.<n>.0/24.
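
  To make the numbering concrete (a sketch; the node names are made
  up), given

    nodes =
      { alice = { config, pkgs, ... }: { virtualisation.vlans = [ 1 2 ]; };
        bob   = { config, pkgs, ... }: { virtualisation.vlans = [ 2 ]; };
      };

  alice (machine index 1) gets eth1 = 192.168.1.1 and eth2 =
  192.168.2.1, while bob (machine index 2) gets a single eth1 =
  192.168.2.2.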

  Each virtual network is implemented using a separate multicast
  address on the host, so guests really cannot talk to networks to
  which they are not connected.

* Added a simple NAT test to demonstrate this.

* Added an option `virtualisation.qemu.options' to specify QEMU
  command-line options.  Used to factor out some commonality between
  the test driver script and the interactive test script.
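
  For example (a sketch; "-vga std" is just the option's example
  value), a machine can pass extra flags to QEMU from its
  configuration:

    virtualisation.qemu.options = "-vga std";

  The VM run script appends these at the end of the
  qemu-system-x86_64 invocation, after $QEMU_OPTS.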


Changes:

Modified: nixos/trunk/lib/build-vms.nix
===================================================================
--- nixos/trunk/lib/build-vms.nix	2010-05-20 15:38:20 UTC (rev 21927)
+++ nixos/trunk/lib/build-vms.nix	2010-05-20 21:07:32 UTC (rev 21928)
@@ -40,7 +40,7 @@
           for i in $out/vms/*; do
             port2=\$((port++))
             echo "forwarding localhost:\$port2 to \$(basename \$i):80"
-            QEMU_OPTS="-redir tcp:\$port2::80 -net nic,vlan=1,model=virtio -net socket,vlan=1,mcast=232.0.1.1:1234" \$i/bin/run-*-vm &
+            QEMU_OPTS="-redir tcp:\$port2::80" \$i/bin/run-*-vm &
           done
           EOF
           chmod +x $out/bin/run-vms
@@ -71,26 +71,57 @@
     
       machines = lib.attrNames nodes;
 
-      machinesWithIP = zip machines
-        (map (n: "192.168.1.${toString n}") (lib.range 1 254));
+      machinesNumbered = zip machines (lib.range 1 254);
 
-      # Generate a /etc/hosts file.
-      hosts = lib.concatMapStrings (m: "${m.second} ${m.first}\n") machinesWithIP;
+      nodes_ = lib.flip map machinesNumbered (m: lib.nameValuePair m.first
+        [ ( { config, pkgs, nodes, ... }:
+            let
+              interfacesNumbered = zip config.virtualisation.vlans (lib.range 1 255);
+              interfaces = 
+                lib.flip map interfacesNumbered ({ first, second }:
+                  { name = "eth${toString second}";
+                    ipAddress = "192.168.${toString first}.${toString m.second}";
+                  }
+                );
+            in
+            { key = "ip-address";
+              config =
+                { networking.hostName = m.first;
+                
+                  networking.interfaces = interfaces;
+                    
+                  networking.primaryIPAddress =
+                    lib.optionalString (interfaces != []) (lib.head interfaces).ipAddress;
+                  
+                  # Put the IP addresses of all VMs in this machine's
+                  # /etc/hosts file.  If a machine has multiple
+                  # interfaces, use the IP address corresponding to
+                  # the first interface (i.e. the first network in its
+                  # virtualisation.vlans option).
+                  networking.extraHosts = lib.flip lib.concatMapStrings machines
+                    (m: let config = (lib.getAttr m nodes).config; in
+                      lib.optionalString (config.networking.primaryIPAddress != "")
+                        ("${config.networking.primaryIPAddress} " +
+                         "${config.networking.hostName}\n"));
+                  
+                  virtualisation.qemu.options =
+                    lib.flip lib.concatMapStrings interfacesNumbered ({ first, second }:
+                      "-net nic,vlan=${toString second},model=virtio " +
+                      # Use 232.0.1.<vlan> as the multicast address to
+                      # connect VMs on the same vlan, but allow it to
+                      # be overridden using the $QEMU_MCAST_ADDR_<vlan>
+                      # environment variable.  The test driver sets
+                      # this variable to prevent collisions between
+                      # parallel builds.
+                      "-net socket,vlan=${toString second},mcast=" +
+                      "\${QEMU_MCAST_ADDR_${toString first}:-232.0.1.${toString first}:1234} "
+                    );
 
-      nodes_ = map (m: lib.nameValuePair m.first [
-          { key = "ip-address";
-            config =
-              { networking.hostName = m.first;
-                networking.interfaces =
-                  [ { name = "eth1";
-                      ipAddress = m.second;
-                    }
-                  ];
-                networking.extraHosts = hosts;
-              };
-          }
+                };
+            }
+          )
           (lib.getAttr m.first nodes)
-        ]) machinesWithIP;
+        ] );
 
     in lib.listToAttrs nodes_;
 

Modified: nixos/trunk/lib/test-driver/Machine.pm
===================================================================
--- nixos/trunk/lib/test-driver/Machine.pm	2010-05-20 15:38:20 UTC (rev 21927)
+++ nixos/trunk/lib/test-driver/Machine.pm	2010-05-20 21:07:32 UTC (rev 21928)
@@ -11,9 +11,14 @@
 
 
 # Stuff our PID in the multicast address/port to prevent collisions
-# with other NixOS VM networks.
-my $mcastAddr = "232.18.1." . ($$ >> 8) . ":" . (64000 + ($$ & 0xff));
-print STDERR "using multicast address $mcastAddr\n";
+# with other NixOS VM networks.  See
+# http://www.iana.org/assignments/multicast-addresses/.
+my $mcastPrefix = "232.18";
+my $mcastSuffix = ($$ >> 8) . ":" . (64000 + ($$ & 0xff));
+print STDERR "using multicast addresses $mcastPrefix.<vlan>.$mcastSuffix\n";
+for (my $n = 0; $n < 256; $n++) {
+    $ENV{"QEMU_MCAST_ADDR_$n"} = "$mcastPrefix.$n.$mcastSuffix";
+}
 
 
 sub new {
@@ -107,7 +112,7 @@
         dup2(fileno($serialC), fileno(STDOUT));
         dup2(fileno($serialC), fileno(STDERR));
         $ENV{TMPDIR} = $self->{stateDir};
-        $ENV{QEMU_OPTS} = "-nographic -no-reboot -redir tcp:65535::514 -net nic,vlan=1,model=virtio -net socket,vlan=1,mcast=$mcastAddr -monitor unix:./monitor";
+        $ENV{QEMU_OPTS} = "-nographic -no-reboot -redir tcp:65535::514 -monitor unix:./monitor";
         $ENV{QEMU_KERNEL_PARAMS} = "hostTmpDir=$ENV{TMPDIR}";
         chdir $self->{stateDir} or die;
         exec $self->{startCommand};

Modified: nixos/trunk/modules/virtualisation/qemu-vm.nix
===================================================================
--- nixos/trunk/modules/virtualisation/qemu-vm.nix	2010-05-20 15:38:20 UTC (rev 21927)
+++ nixos/trunk/modules/virtualisation/qemu-vm.nix	2010-05-20 21:07:32 UTC (rev 21928)
@@ -68,6 +68,37 @@
             database in the guest).
           '';
       };
+
+    virtualisation.vlans = 
+      mkOption {
+        default = [ 1 ];
+        example = [ 1 2 ];
+        description =
+          ''
+            Virtual networks to which the VM is connected.  Each
+            number <replaceable>N</replaceable> in this list causes
+            the VM to have a virtual Ethernet interface attached to a
+            separate virtual network on which it will be assigned IP
+            address
+            <literal>192.168.<replaceable>N</replaceable>.<replaceable>M</replaceable></literal>,
+            where <replaceable>M</replaceable> is the index of this VM
+            in the list of VMs.
+          '';
+      };
+
+    networking.primaryIPAddress =
+      mkOption {
+        default = "";
+        internal = true;
+        description = "Primary IP address used in /etc/hosts.";
+      };
+
+    virtualisation.qemu.options =
+      mkOption {
+        default = "";
+        example = "-vga std";
+        description = "Options passed to QEMU.";
+      };
       
   };
 
@@ -94,13 +125,14 @@
       # hanging the VM on x86_64.
       exec ${pkgs.qemu_kvm}/bin/qemu-system-x86_64 -m ${toString config.virtualisation.memorySize} \
           -no-kvm-irqchip \
-          -net nic,model=virtio -net user -smb / \
+          -net nic,vlan=0,model=virtio -net user,vlan=0 -smb / \
           -drive file=$NIX_DISK_IMAGE,if=virtio,boot=on,werror=report \
           -kernel ${config.system.build.toplevel}/kernel \
           -initrd ${config.system.build.toplevel}/initrd \
           ${qemuGraphics} \
           $QEMU_OPTS \
-          -append "$(cat ${config.system.build.toplevel}/kernel-params) init=${config.system.build.bootStage2} systemConfig=${config.system.build.toplevel} regInfo=${regInfo} ${kernelConsole} $QEMU_KERNEL_PARAMS"
+          -append "$(cat ${config.system.build.toplevel}/kernel-params) init=${config.system.build.bootStage2} systemConfig=${config.system.build.toplevel} regInfo=${regInfo} ${kernelConsole} $QEMU_KERNEL_PARAMS" \
+          ${config.virtualisation.qemu.options}
     '';
 
     
@@ -186,7 +218,7 @@
   # host filesystem and thus deadlocks the system.
   networking.useDHCP = false;
 
-  networking.defaultGateway = "10.0.2.2";
+  networking.defaultGateway = mkOverride 200 {} "10.0.2.2";
 
   networking.nameservers = [ "10.0.2.3" ];
 

Modified: nixos/trunk/tests/default.nix
===================================================================
--- nixos/trunk/tests/default.nix	2010-05-20 15:38:20 UTC (rev 21927)
+++ nixos/trunk/tests/default.nix	2010-05-20 21:07:32 UTC (rev 21928)
@@ -11,6 +11,7 @@
   installer = pkgs.lib.mapAttrs (name: complete) (call (import ./installer.nix));
   kde4 = apply (import ./kde4.nix);
   login = apply (import ./login.nix);
+  nat = apply (import ./nat.nix);
   openssh = apply (import ./openssh.nix);
   portmap = apply (import ./portmap.nix);
   proxy = apply (import ./proxy.nix);

Added: nixos/trunk/tests/nat.nix
===================================================================
--- nixos/trunk/tests/nat.nix	                        (rev 0)
+++ nixos/trunk/tests/nat.nix	2010-05-20 21:07:32 UTC (rev 21928)
@@ -0,0 +1,55 @@
+# This is a simple distributed test involving a topology with two
+# separate virtual networks - the "inside" and the "outside" - with a
+# client on the inside network, a server on the outside network, and a
+# router connected to both that performs Network Address Translation
+# for the client.
+
+{ pkgs, ... }:
+
+{
+
+  nodes =
+    { client = 
+        { config, pkgs, ... }:
+        { virtualisation.vlans = [ 1 ];
+          networking.defaultGateway = "192.168.1.2"; # !!! ugly
+        };
+
+      router = 
+        { config, pkgs, ... }:
+        { virtualisation.vlans = [ 2 1 ];
+          environment.systemPackages = [ pkgs.iptables ];
+        };
+
+      server = 
+        { config, pkgs, ... }:
+        { virtualisation.vlans = [ 2 ];
+          services.httpd.enable = true;
+          services.httpd.adminAddr = "foo at example.org";
+        };
+    };
+
+  testScript =
+    ''
+      startAll;
+
+      # The router should have access to the server.
+      $server->waitForJob("httpd");
+      $router->mustSucceed("curl --fail http://server/ >&2");
+
+      # But the client shouldn't be able to reach the server.
+      $client->mustFail("curl --fail --connect-timeout 5 http://server/ >&2");
+
+      # Enable NAT on the router.
+      $router->mustSucceed(
+          "iptables -t nat -F",
+          "iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 192.168.1.0/24 -j ACCEPT",
+          "iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -j SNAT --to-source 192.168.2.2", # !!! ugly
+          "echo 1 > /proc/sys/net/ipv4/ip_forward"
+      );
+
+      # Now the client should be able to connect.
+      $client->mustSucceed("curl --fail http://server/ >&2");
+    '';
+
+}


