[Nix-dev] Security channel proposal
Domen Kožar
domen at dev.si
Thu Sep 25 20:00:17 CEST 2014
Note that from a business perspective a server admin usually wants to do
the following two things:
1) be notified if any of the installed software packages has a security
vulnerability
2) take automated/manual action to upgrade ONLY those packages, without
bumping any other versions

Having a faster Hydra doesn't solve 2).
Domen
On Thu, Sep 25, 2014 at 7:07 PM, Wout Mertens <wout.mertens at gmail.com>
wrote:
> On Thu, Sep 25, 2014 at 6:33 PM, Michael Raskin <7c6f434c at mail.ru> wrote:
>
>> >> I bet against our package set being buildable in 2 hours, because the
>> >> time-critical path will likely hit some non-parallelizable package.
>> >>
>> >
>> >I think most large projects can be compiled via distcc, which means that
>> >all you need is parallel make.
>>
>> WebKitGTK… (there is a comment about failing to make it work with a
>> parallel build)
>>
>
> ... https://trac.webkit.org/wiki/WebKitGTK/SpeedUpBuild#distcc
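>
> For reference, a typical distcc setup looks roughly like this (the host
> names are placeholders):
>
>     # on the client: list the machines that will accept compile jobs
>     export DISTCC_HOSTS="build1 build2 build3"
>     # oversubscribe -j and route compiler calls through distcc
>     make -j12 CC="distcc gcc" CXX="distcc g++"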
>
>> >> The LibreOffice build is inherently a single-machine task, so to speed
>> >> it up you need something like two octocore CPUs in the box.
>> >>
>> >
>> >Case in point:
>> >https://wiki.documentfoundation.org/Development/BuildingOnLinux#distcc_.2F_Icecream
>> >Building with "icecream" defaults to 10 parallel builds.
>> >
>> >Also, with ccache the original build time of 1.5 hours (no java/epm) is
>> >reduced to 10 minutes on subsequent runs.
>>
>> How would the ccache cache be managed for that? How would it work with
>> rented instances being network-distant from each other?
>>
>
> Perhaps with persistent block storage that gets re-attached when an
> instance is spun up, or by using a central NFS server:
> https://ccache.samba.org/manual.html#_sharing_a_cache
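>
> A minimal sketch of that shared setup, roughly following the manual (the
> mount point is made up):
>
>     # every build machine points at the same cache directory
>     export CCACHE_DIR=/mnt/shared-ccache  # shared NFS mount
>     export CCACHE_UMASK=002               # keep cache files group-writable
>     ccache gcc -c foo.c                   # compiles now hit the shared cache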
>
>> >> With such a goal, we would need to recheck all the dependency paths and
>> >> optimise the bottlenecks.
>> >Sounds good :)
>>
>> We have too little manpower for timely processing of pull requests.
>> I think that starting a huge project should be done with full knowledge
>> that it can fail just because it needs too much energy.
>>
>
> I would start by autogenerating sensible metrics, which could then lead to
> incremental improvements as people tackle packages one by one. For example,
> perhaps the total number of dependencies (all depths) multiplied by the
> compile time of the package.
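>
> A rough sketch of how such a metric could be computed (build-times.txt,
> holding "drv-path seconds" pairs, is an assumed input, e.g. extracted
> from hydra logs; it doesn't exist out of the box):
>
>     # rank derivations by (transitive dependency count * build seconds)
>     while read drv seconds; do
>       deps=$(nix-store --query --requisites "$drv" | wc -l)
>       echo "$(( deps * seconds )) $drv"
>     done < build-times.txt | sort -rn | head -20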
>
>
>> >> Maybe making dependency replacement work reliably (symlinking into
>> >> a special directory and referring to this directory?) is more feasible…
>> >
>> >Can you elaborate?
>>
>> One of the brute-force ways is just to declare «we need reliable global
>> dependency rewriting». In that case we could have a symlink for
>> every package ever used as a dependency, so a replacement would mean
>> destructively repointing this symlink.
>>
>> I.e. you depend on /nix/store/aaa-bash, there is a symlink
>> /nix/store/aab-bash to /nix/store/aaa-bash, and the builder sees just
>> /nix/store/aab-bash.
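>>
>> Concretely (assuming GNU ln; the store paths are made up):
>>
>>     # the indirection every dependent refers to
>>     ln -s /nix/store/aaa-bash /nix/store/aab-bash
>>     # security fix: repoint it at the patched build
>>     ln -sfn /nix/store/ccc-bash /nix/store/aab-bash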
>
>
> Perhaps this could be done by nix-store instead: just provide a list of
> replacement packages, and it will move the original packages away and
> symlink the replacements in their place. We could also prototype this with
> a pretty simple script (sketched below).
>
> So almost what you're saying, except that /nix/store/aaa-bash gets moved
> to /nix/store/OLD-aaa-bash and /nix/store/aaa-bash becomes a symlink to
> aab-bash. No pre-determining of dependency rewrite targets.
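>
> A minimal prototype of such a script (an untested sketch; assumes it runs
> as root and that nothing is using the store concurrently):
>
>     #!/bin/sh -e
>     # usage: replace-dep.sh OLD NEW
>     old=$1; new=$2
>     mv "$old" "${old%/*}/OLD-${old##*/}"  # aaa-bash -> OLD-aaa-bash
>     ln -s "$new" "$old"                   # aaa-bash now points at NEW
>
> e.g. replace-dep.sh /nix/store/aaa-bash /nix/store/aab-bash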
>
> Wout.
>