[Nix-dev] octave and atlas, tuning for performance, which is the nix way here?
Isaac Dupree
isaacdupree at charter.net
Wed Oct 22 23:18:32 CEST 2008
Roggenkamp, Steve wrote:
> Then we're back to the morass with typical package management systems.
> The trouble comes with defining what "well-contained" means.
It's not hard at all, with a little work, to define what
depends on what and to control it! (As long as you can
safely rewrite path-hashes, that is.)
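To sketch why rewriting is safe (a toy illustration, not Nix's actual code): store hashes are fixed-length strings, so swapping one for another never shifts byte offsets inside a binary. The store path and hashes below are made up.

```python
# Toy sketch of Nix-style path-hash rewriting. Because old and new
# hashes have equal length, replacing one with the other leaves every
# other byte offset in the file unchanged.
def rewrite_hash(data: bytes, old_hash: bytes, new_hash: bytes) -> bytes:
    assert len(old_hash) == len(new_hash), "hash lengths must match"
    return data.replace(old_hash, new_hash)

blob = b"/nix/store/" + b"a" * 32 + b"-atlas/lib/libatlas.so"
patched = rewrite_hash(blob, b"a" * 32, b"b" * 32)
assert len(patched) == len(blob)
```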
> What you really want is an application interface with well-defined
> semantics. Then, in theory, any package that implemented the interface
> conforming to the semantics should be interchangeable with any other
> package implementing the same. The trouble with this is that we do not
> have good means to strictly define the semantics of software execution,
> other than by running it.
Sure, but who said that
"standard-oo-modified-to-link-to-special-atlas" had to
behave the same as "standard-oo"? They're explicitly
different entities...
> The Nix package management system uses the generated hashes to ensure
> compatibility and repeatability. It makes a lot of sense.
Repeatability is still guaranteed (if you do it right)!
Compatibility, of course, has always relied on the
correctness of whatever compiler optimizations are used,
even now.
> The problem with it is that as you go lower in the software stack and
> change things, it requires you to rebuild more packages. Inconvenient,
> but it ensures you get the same behavior.
It's true. Take our hypothetical example: some configure
check in OpenOffice might give a different result depending
on whether the version of ATLAS it's compiled against has
been optimized for the specific machine (not entirely
implausible -- mplayer, for example, always tells me which
optimizations it's using). Then something might go slightly
odd in the mangled version. But we admitted that it was
mangled, and we also hoped (and tested?) that the difference
was too small to matter.
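To make the hypothetical concrete, here's a toy sketch (the flag and macro names are invented, not from any real configure script) of how a configure-time CPU probe bakes the build machine's features into the result:

```python
# Invented example: a configure check that records the build machine's
# CPU features as a build-time constant. Two machines with different
# feature sets then produce genuinely different builds from the same
# source.
def configure(cpu_flags: set) -> dict:
    return {"HAVE_SSE2": "sse2" in cpu_flags}

# Same source, two build machines, two different results:
assert configure({"sse2", "mmx"}) != configure({"mmx"})
```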
If/when a package build produces a bit-by-bit deterministic
result, we can experiment with randomly tweaking the
dependencies in various ways and discover for ourselves
which changes are small enough that they don't affect the
result at all.
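With bit-by-bit determinism, the test itself becomes trivial (a sketch; the build outputs here are just stand-in byte strings, not real artifacts):

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def tweak_was_harmless(original: bytes, rebuilt: bytes) -> bool:
    # A tweaked dependency "didn't matter" iff the artifact rebuilt
    # against it is bit-identical to the original.
    return sha256(original) == sha256(rebuilt)

assert tweak_was_harmless(b"same binary", b"same binary")
assert not tweak_was_harmless(b"binary", b"binary, but different")
```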
> It seems to me the trick
> should be to arrange your processes to minimize the changes to the lower
> levels of your software stack.
Well, some people try to do that by delaying recompilation
for specific systems until runtime -- virtual machines and
such. Then everything is entirely functionally managed,
except for the big "if" that the code can *and will*
deliberately do different things at runtime depending on the
exact processor model, and one can never be sure it's the
same... [not to mention the rest of the hardware that the
kernel and applications deal with, actual hardware or
microcode bugs, etc.]
all approaches have weaknesses.
-Isaac