[Nix-dev] A few questions about ARM support and NixOS on a Chromebook
Luke Clifton
ltclifton at gmail.com
Wed Feb 11 02:45:18 CET 2015
Yes, I have done this method in the past as well, but as you say, it only
helps with C and C++. I'd rather have a setup that works for everything
first, and then selectively try to optimise pieces of it (for example with
distcc). I was also wondering how this would play with Nix and getting
"reproducible" builds. I assume that the cross compilers aren't set up by
Nix, and that some sort of override is happening? I'll take a look at the
notes on the wiki.
On 10 February 2015 at 23:14, Wout Mertens <wout.mertens at gmail.com> wrote:
> There's another option: build natively with distcc pointing to
> cross-compilers on x86 boxes. All the configuration etc. happens natively,
> and the compiles themselves are sped up. I wonder if that's the approach
> Vladimír Čunát took earlier? He made the notes on how to set up distcc for
> the Raspberry Pi on the wiki, IIRC.
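> As a rough illustration (not from the wiki notes; the host names and the
> cross triple below are assumptions), the distcc setup amounts to something
> like this on the ARM machine:

```shell
# Farm compile jobs out to x86 boxes that have an ARM cross-compiler
# installed under the same name. Host names and the target triple are
# made up for illustration only.
export DISTCC_HOSTS="x86-box1 x86-box2 localhost"
export CC="distcc armv6l-unknown-linux-gnueabihf-gcc"
make -j8 CC="$CC"
```

> Configuration, linking and anything non-C/C++ still runs natively on the
> ARM box; only the compiler invocations are distributed.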
>
> Yet another option: run the above setup in QEMU, running the whole thing on
> an x86 box with the heaviest lifting done natively.
>
> These only speed up C/C++ of course.
>
> Wout.
>
> On Mon, Feb 9, 2015, 5:01 PM Harald van Dijk <harald at gigawatt.nl> wrote:
>
>> On 09/02/2015 15:57, James Haigh wrote:
>>
>> On 09/02/15 14:16, Harald van Dijk wrote:
>>
>> On 09/02/2015 14:55, James Haigh wrote:
>>
>> On 28/01/15 07:42, Luke Clifton wrote:
>>
>> Hi Bjørn,
>>
>> I have read that thread. I agree with you 100% that native builds (on
>> real or virtual hardware) are the only way this can work. Upstream doesn't
>> usually care whether their software can cross-compile, and they couldn't
>> maintain it themselves even if they did. Sometimes it isn't even an
>> option; e.g. GHC still can't cross-compile Template Haskell.
>>
>> I don't understand why cross-compilation is even a thing, other than
>> decades of false assumptions being baked into compilers.
>> As I understand, if a compiler (and by ‘compiler’ I'm referring to
>> the whole toolchain required for compilation) is taking source code and
>> compilation options as input, and giving object code for the specified
>> platform as output, it is called ‘cross-compiling’ if the specified target
>> platform is different from the platform that the compiler is running on. If
>> GCC is running on ARM, compiling code ‘natively’ to ARM successfully, it is
>> counterintuitive that it would fail to build for ARM if GCC is running on
>> x86. And vice versa. A compiler should produce object code for a target
>> platform that implements the source code – it may not have the same
>> efficiency as the output of other compilers (or with other compilation
>> options), but should have the same correctness when execution completes. If
>> the source code being compiled is a specific version of the GCC source code
>> itself, and it is compiled for both x86 and ARM, then if the compilation is
>> computationally correct, both compilations of GCC should produce programs
>> that, although they will compute in different ways and with different
>> efficiency, should give the exact same object code when given the same
>> source code and parameters. So if the target platform parameter is ARM,
>> they should both build exactly the same ARM machine code program.
>>
>> All of this is true, but the toolchain usually doesn't have any problems
>> with cross-compilations.
>>
>> However, evidently this is not the case unfortunately. So the
>> compilers or their toolchains are, in essence, receiving the platform that
>> they are running on as ‘input’ to the build, and making assumptions that
>> this build platform has something to do with the target platform. I.e. they
>> are _aware_ of the platform that they're building on, whereas
>> theoretically, they shouldn't be. Apparently this has a lot to do with
>> configure scripts.
>>
>> The configure scripts, or similar machinery in non-autoconf packages, are
>> part of the package, not part of the toolchain. Many programs use runtime
>> checks in configure scripts. A trivial example that hopefully doesn't exist
>> in any real package:
>>
>> If a package only compiles on platforms where sizeof(int) == 4, or needs
>> special code on platforms where sizeof(int) != 4, its configure script
>> might try to detect those platforms by compiling and linking
>>
>> int main(void) {
>>     return sizeof(int) != 4;
>> }
>>
>> and then executing it. If the execution succeeds (i.e. returns zero),
>> then sizeof(int) == 4. If the execution doesn't succeed, then the configure
>> script assumes that sizeof(int) != 4, even though it's entirely possible
>> that execution failed only because the generated executable is for a
>> different platform.
>>
>> Other examples are build systems that build generator tools during the
>> build and run them to produce further source files to compile. The
>> generator tool must be compiled with the build compiler, not with the
>> host compiler, or its execution will fail when cross-compiling. Still,
>> many packages build such tools with the host compiler anyway, because
>> upstream only tests native compilations. This, too, is not an issue with
>> the toolchain.
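>> The correct arrangement can be sketched in a Makefile fragment (all names
>> are hypothetical; many packages spell the native compiler CC_FOR_BUILD):

```make
# Hypothetical fragment. CC targets the host (possibly a cross-compiler);
# CC_FOR_BUILD runs on the build machine and must be used for any tool
# that executes during the build itself.
CC           ?= arm-linux-gnueabihf-gcc
CC_FOR_BUILD ?= gcc

gen: gen.c                 # code generator: must run at build time
	$(CC_FOR_BUILD) -o $@ $<

tables.c: gen              # generated source for the real program
	./gen > $@

prog: main.c tables.c      # the actual program, for the host platform
	$(CC) -o $@ main.c tables.c
```

>> Packages that use $(CC) for gen break exactly when build and host differ.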
>>
>> But what I'm saying is that if the package succeeds in compiling natively
>> but fails to cross-compile, then this is an issue with the
>> compiler/toolchain. Yes it can be solved by writing configure scripts that
>> support cross-compiling, but really, the compiler toolchain should isolate
>> this such that the compilation is deterministic regardless of build
>> platform.
>> In your example, I'm saying that it should be the job of the compiler
>> toolchain to ensure that ‘sizeof(int) == 4’ gives the correct result for
>> the target platform. If the only feasible way to do this deterministically
>> is to run the configure scripts in a virtual machine, then this QEMU
>> technique should be considered part of the standard compiler toolchain.
>> That way, the determinism and isolation from the build platform is achieved
>> in the compiler toolchain, and upstream packages do not have to make any
>> effort to support cross-compilation beyond supporting the target platform.
>>
>> Oh. So the current toolchain does support cross-compilations, but
>> requires packages to be aware of it and handle it appropriately. You're
>> saying packages shouldn't need to be aware of it. Okay, that's possible,
>> but that's a proposal for an entirely different toolchain (but you're not
>> using the word in the sense most people do), not necessarily a problem with
>> the current toolchain. It does support what you claim it doesn't support,
>> it just doesn't support what you want it to support.
>>
>> What you're suggesting effectively means dropping all support for
>> cross-compilations everywhere, because every package would see a native
>> environment. You can do that if you like, with no support needed from any
>> package whatsoever, and what's more, with no support needed from Nix
>> whatsoever either. You can simply run Nix itself in a QEMU environment. The
>> only time cross-compilations then become an issue is with bootstrapping. I
>> don't know about GHC, but for example GNAT is written in Ada, so you need
>> GNAT for your host platform in order to build GNAT for your host platform,
>> and by dropping support for cross-compilation, you would not be able to use
>> the GNAT for your build platform to build the GNAT for your host platform.
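>> For what it's worth, the transparent emulation being discussed is usually
>> done on Linux with user-mode QEMU; a rough sketch (the triple, sysroot
>> path and qemu binary name are assumptions, not from this thread):

```shell
# Cross-compile a configure-style probe, then run it under user-mode QEMU
# instead of natively. With binfmt_misc registered for ARM ELF binaries,
# plain ./conftest would even work transparently.
arm-linux-gnueabihf-gcc -o conftest conftest.c
qemu-arm -L /usr/arm-linux-gnueabihf ./conftest
echo "exit status: $?"   # 0 would mean the runtime check passed on the target
```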
>>
>> Cheers,
>> Harald van Dijk
>> _______________________________________________
>> nix-dev mailing list
>> nix-dev at lists.science.uu.nl
>> http://lists.science.uu.nl/mailman/listinfo/nix-dev
>>
>