09-29-2012, 11:36 AM   #737
knc1
Quote:
Originally Posted by Kai771

Since I never tried compiling anything other than kpdfv, these are just speculations of a noob, but I'd expect most other apps would work with the latest CS/MG, given the same switches used with kpdfv. I think that if you use the latest source of any opensource project you'd like to build, it will require more recent gcc than 4.2, if not now, then soon. So personally, I'd go with the latest, and regress if necessary.
Any objective is worthy of a specific description.

For new work, if your objective is to compile code targeted at a device that uses recent glibc system libraries,
then that requires a gcc version newer than 4.2 to compile those glibc system libraries.
That objective in itself sets a minimum gcc version.
Neither the vendor of the gcc release, the target of the compilation, nor the producer of the compiler binaries changes that minimum gcc version.

For work targeting an existing system, the type and version of the existing system libraries will likely set a different minimum gcc version. This sort of objective may also set a maximum gcc version, along with other limits on the selection.
The prime deciding factor is the type and version of the system libraries running on the targeted existing system.

For work targeting an existing Kindle system, just stating the objective as "a Kindle system" is not specific enough to qualify the objective.
The Kindle models, and the software systems that run them, have evolved over time. Not all Kindle models run the same type and version of system libraries.
A person's choice in this situation is to better qualify the objective.
If qualified as a specific Kindle model, then that specifies the type and version of system libraries the work is targeted at and that in turn may set requirements on gcc version, per the above.
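The glibc check above can be made concrete. On the target device itself, running the C library directly (`/lib/libc.so.6`) prints its version. Here is a minimal sketch of mapping that version to a minimum gcc; the version pairing below is an illustrative assumption for the sketch, not an authoritative compatibility table:

```shell
# Sketch only: pick a minimum gcc from the target's glibc version.
# The pairing below is an illustrative assumption; always verify
# against the release notes of the glibc version you must match.
min_gcc_for_glibc() {
    case "$1" in
        2.1[2-9]*) echo "4.4" ;;  # newer glibc: assume a newer gcc is needed
        *)         echo "4.2" ;;  # older glibc: a 4.2-era gcc suffices
    esac
}

# On the device, the glibc version can be read with:  /lib/libc.so.6
min_gcc_for_glibc "2.12"   # -> 4.4
```

Substitute the version your specific Kindle model actually reports; that single fact does most of the qualifying described above.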

Quote:
Originally Posted by Kai771
Regarding Linaro, I think that it might be better than CS/MG. It seems Linaro is pretty popular with building android roms, for example. However, it won't work out of the box (on K3. I think it works out of the box on K5). To make it work on K3, you'd have to build it yourself. I'm reluctant to do it, at least for now. It might not be that hard, but... idk, it seems like too much work for me. So I think I'll stick with CS/MG for now.
There is another consideration, not included in the above statements:
which CS/MG release channel is being discussed.
The "for purchase" version, generally released at 3-month intervals and updated during its deployment, will be very close (within 3 months or less) to the Linaro releases.
The "for free" version will always be at least 6 to 9 months older than the Linaro project.
This is an artifact of the company not releasing the sources to the current "for purchase" version until that version has been replaced.
With CS/MG, to get a tool-chain comparable to the Linaro project, you have to purchase the binary builds.

It is misleading to compare apples and 9-month-old oranges.

The Linaro project targets only the ARM processors: a single-focus project.
CS/MG targets most of the common processors used in the world of embedded systems: a multiple-focus project. This can reduce the degree of attention paid to any single processor target.
These two large organizations do not work in a vacuum. Both publish their changes to the upstream projects, so what gets fixed by one will appear in the sources of the other through the commonality of those upstream projects.

The GPLv2, section 3, requires the release of "all scripts and other files ...", i.e. everything required to duplicate the binary from the sources.

For Linaro, they use the crosstool-ng build system to create their binary releases. They also post the complete crosstool-ng system "as configured" with each release of the sources and binaries they make.
You only have to run the crosstool-ng system, possibly with changes to its configuration to match your fully qualified objective.
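That crosstool-ng workflow can be sketched roughly as follows. This assumes ct-ng is installed; the sample name is only a nearby starting point, and the tweaks a K3 target needs are assumptions you must take from Linaro's posted "as configured" files rather than from this sketch:

```shell
# Hedged sketch of a crosstool-ng run.  The ct-ng verbs are real, but
# the sample chosen and the configuration changes for your device are
# assumptions -- adapt them from the config posted with each release.
ct-ng list-samples                  # list the shipped sample configs
ct-ng arm-unknown-linux-gnueabi     # seed .config from a nearby sample
ct-ng menuconfig                    # match glibc/kernel to your target
ct-ng build                         # result lands under ~/x-tools/
```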

For CS/MG, they include all of the scripting and configuration they used in the source bundle for each release.
You only have to run it the way they ran it, again possibly with changes to its configuration to match your fully qualified objective.

No "invented here" build effort required in either case (other than your local changes to the configuration).

Quote:
Originally Posted by Kai771
If I were to build Linaro, however, I'd go for the latest stable release, and build that. Bleeding edge is, well... too bleeding edge for me . In my opinion, bleeding edge is for experts, and since I'm just a noob, it's clearly not for me.

Correct me if I'm wrong, but don't apps built for K3 (nano for example) work on K5? If so, I'd use armv6 etc settings for all but performance critical apps (for example, nano doesn't need to be optimized for armv7. Some Video codec, on the other hand...). Some might say it's because I only have K3, but I'm not a fan of optimizations for optimization sake.
Applications can be built to run on both the K3 and the K5 if and only if:
The lowest common denominator of the hardware resources is sufficient; and
(presuming from the non-specific context above: "using the Amazon-lab126 libraries that are pre-installed"):
The lowest common denominator of mutual compatibility between system libraries and kernel headers is used.
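A minimal sketch of that lowest-common-denominator approach, on the assumption that the K3's core is ARMv6 with VFP while the K5's is ARMv7 (verify against your device), with illustrative compiler flags:

```shell
# Sketch: choose CFLAGS for the oldest model you must support.
# Binaries built for ARMv6 also run on the K5's newer core, but not
# vice versa.  Flag choices here are assumptions to verify.
lcd_cflags() {
    case "$1" in
        k3) echo "-march=armv6 -mfpu=vfp -mfloat-abi=softfp -O2" ;;
        k5) echo "-march=armv7-a -mfpu=neon -mfloat-abi=softfp -O2" ;;
    esac
}

# Hypothetical cross-compile of a portable tool (toolchain prefix is
# an assumption; use whatever your CS/MG install provides):
#   arm-none-linux-gnueabi-gcc $(lcd_cflags k3) -o nano nano.c
lcd_cflags k3
```

A performance-critical app (the video codec from the quote above) is where a separate ARMv7/NEON build for the K5 earns its keep; a text editor gains nothing from it.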

If claims of "ABI compatibility" are relied on in making the above configuration decisions, be prepared to discover and deal with any oversights in those claims.

Quote:
Originally Posted by Kai771
So, these are my opinions. I don't know how useful they may be to anyone, but I hope that answers your question .
As opinions giving a general overview of the situation, not bad.
As guidance for others, something more specific than generalized opinions is required.

For readers interested in entering this world of systems building . . . .
I strongly recommend that they take the time to work through at least one build of LFS (Linux From Scratch):
http://www.linuxfromscratch.org/lfs/

Once you master that, if you are interested in doing cross-system development, I recommend working through the cross-building specialization of LFS:
http://trac.cross-lfs.org/