[Nix-dev] Fwd: Fwd: Improving the Developer Experience in the Nix Community
Bryce L Nordgren
bnordgren at gmail.com
Tue Jul 10 22:19:43 CEST 2012
Again I replied only to an individual. D'oh. I sense this is going to be a
pattern with me and this list.
---------- Forwarded message ----------
From: Bryce L Nordgren <bnordgren at gmail.com>
Date: Tue, Jul 10, 2012 at 8:53 AM
Subject: Re: [Nix-dev] Fwd: Improving the Developer Experience in the Nix
Community
To: Florian Friesdorf <flo at chaoflow.net>
On Sat, Jun 30, 2012 at 2:01 AM, Florian Friesdorf <flo at chaoflow.net> wrote:
> On Fri, 29 Jun 2012 16:50:26 +0200, Bryce L Nordgren <bnordgren at gmail.com>
> wrote:
> > Consensus as the only operating rule excludes Nix from the workplace.
> > Nearly all workplaces have nonnegotiable policies, and it's likely that
> > these will not be compatible. So there must always be an adaptation layer
> > between a more or less generic distribution and the specific policies on
> > site. The adaptation layer is bidirectional: the distribution should be
> > prepared to accept and de-specialize contributions from any particular
> > environment, just as each participant must be prepared to specialize the
> > generic distribution to their needs.
> >
> > An important part of Consensus is recognizing when it doesn't apply.
>
> Sorry, I can't follow your argument / do not understand how this
> collides with consensus and why you think it does not apply.
>
> Can you describe why specifically it does not apply? I don't mean that
> we should use the exact implementation of noisebridge, but I'd love if
> we'd come up with our own one.
>
Sorry, I was out of the office for a week or so. I believe my point here is
that consensus cannot be the only mechanism in place. Other mechanisms need
to reduce the need to agree. If you will, these other mechanisms need to
support heterogeneity.
In the above, I envision the Nix community as composed of distribution
maintainers, IT support staff at an institution which has adopted Nix, and
end users. As I read the recent conflict on the list, IT support at a
specific institution is dissatisfied with the response time for patches to
work their way into the main distribution, hence the fork of nixos/nixpkgs.
What bothers me is not the fork; it's the competition.
I think it should be expected that each institution will maintain its own
copy (branch) of nixos/nixpkgs, just as it should be expected to mirror
the distribution channels locally for its own users. Likewise, each
institution should be expected to set up its own Hydra instance to build
variants not produced by the nixos.org servers. This should be normal, as
it provides all the advantages of Nix/NixOS/Hydra to a specific community
of users: benefiting from the common builds at nixos.org while supporting
local control over any part of the system without depending on an
external entity's response time.
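To make this concrete: setting up such a Hydra instance mostly amounts to
pointing a jobset at the site's branch and giving it an expression that
lists the variants to build. The sketch below assumes a jobset input named
"nixpkgs" that tracks the site's branch; the file name and the package
choices are placeholders, not a recommendation.

    # release-local.nix -- hypothetical jobset expression for a site-local Hydra.
    # "nixpkgs" is a jobset input pointing at the site's branch of nixos/nixpkgs.
    { nixpkgs }:

    let
      pkgs = import nixpkgs { };
    in
    {
      # Variants the nixos.org servers do not build for this site;
      # each attribute becomes one Hydra job.
      subversion = pkgs.subversion;
      postgresql = pkgs.postgresql;
    }

The site's machines can then pull these builds from the local Hydra in the
same way they pull the common builds from nixos.org.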
Now, since this is expected, I think a plan should be made to exploit it.
How can this situation be made to foster cooperation instead of
competition? In essence, extra man-hours are being devoted to packaging
software for particular environments; modules are being written to
configure particular subsets of machine types (appliances), hardware (say,
a handful of computer models ordered off a contract), and environments
(intranet/DMZ/internet; identity store via Kerberos/Samba/LDAP); more
Hydra servers are building more variants; and more file servers are
hosting more compiled artifacts. How can this hodgepodge of different
things, managed for different purposes by different participants,
strengthen the community instead of fracturing it?
I don't know the hows for everything, but it seems to me that the
functional requirements from an IT support point of view might be:
1] Retain control over the software deployed on the machines they are
responsible for.
2] Be able to contribute "generic" changes to the upstream (new packages,
patches, and variants not currently built).
3] Be able to leverage pre-built variants from the upstream.
4] Locally modify any package at any time.
5] Locally add packages which do not currently exist (a sketch of 4] and
5] follows this list).
6] Provide an environment in which vendors can compile necessary
proprietary software.
7] Compile and host locally modified packages, locally maintained packages,
and variants not pre-built by the upstream.
8] Keep tabs on what software has been deployed to which machines.
9] Keep things stable. Only deploy to end users after testing. Know that
the upgrade will work.
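As an illustration of 4] and 5]: Nixpkgs already gives a site a hook for
this kind of local control without touching the upstream tree, namely
packageOverrides. This is only a sketch; the local paths
(./pkgs/openssh-site, ./pkgs/site-tool) are hypothetical.

    # Site-wide Nixpkgs configuration (e.g. a config.nix shared by the site).
    {
      packageOverrides = pkgs: {
        # 4] Locally modify a package: shadow the upstream expression with a
        #    local copy that carries the site's patches (hypothetical path).
        openssh = pkgs.callPackage ./pkgs/openssh-site { };

        # 5] Locally add a package which does not exist upstream (hypothetical).
        siteTool = pkgs.callPackage ./pkgs/site-tool { };
      };
    }

The same overrides can be fed to the site's Hydra, so the locally modified
and locally added packages are built and hosted right alongside the
variants mentioned above (requirement 7).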
If we assume that the packages and modules are handled via git, the
remaining big questions are: How are the upstream's pre-built artifacts
merged with locally built ones? How are contributions to the upstream
separated from customizations which shouldn't be shared for one reason or
another?
Back to Consensus: the current model for NixOS distribution seems to
follow that of all other Linux distributions. There is One Set of
Packages. There is One Set of Modules. Creating a second set does not
necessarily benefit the first. Yet with Nix, where a large part of the
functionality of a machine is encoded in reusable modules, it will be
necessary to create at least a site-specific set of modules. These
site-specific modules will implement local policies and will probably not
be common to any two institutions. There is no reason to force agreement
between institutions by retaining the One Set of Modules mentality. I
think what is needed is something along the lines of [1], where there is
a mechanism for agreement inside each of the subcommunities. The
important point is that the scope of the need to agree is limited.
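For concreteness, the kind of site-specific module I have in mind is
nothing exotic: a single file that encodes local policy once and is
included by every machine configuration at that site, while machines
elsewhere never see it. The sketch below is illustrative only; the option
choices stand in for whatever a given site's policy actually is.

    # site-policy.nix -- hypothetical NixOS module encoding one site's local
    # policy, included from each machine's configuration at that site.
    { config, pkgs, ... }:

    {
      # Baseline services and settings mandated by the institution.
      services.openssh.enable = true;
      time.timeZone = "Europe/Amsterdam";

      # Site-specific choices such as the identity store (Kerberos vs. LDAP
      # vs. Samba) or the intranet/DMZ/internet split would be encoded here
      # as well.
    }

Two institutions with incompatible policies simply maintain two such
modules; neither needs the other's agreement.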
Bryce
[1] http://nixos.org/wiki/The_Many_Cooks_Method