Lionel Debroux (./16) :
You have a narrow-minded view of the examples, as serving purely documentation purposes (for a significant number of areas of the headers / library). While being examples is their primary purpose, they can also be used for testing purposes - manual testing first, potentially automated testing later (remember the topic where we brainstormed about special instruction sequences à la WTI for programs sending messages to the emulator?).
Still, it doesn't make sense to complain about a "set of problems" (those on-calc name conflicts) which is purely due to trying to use the examples for something they were not designed for.
As for you, Lionel, you at least did contribute stuff, a long time ago, but you did it in a form which is extremely hard to integrate into the existing project and never bothered trying to improve this situation.
On the part I emphasised, you're outright lying. While it's a fact that my efforts remained unfinished (and therefore not useful to you) until after the end of my studies in CS (and a rewrite in a programming language much more appropriate for the task, which I learnt outside of school in-between), I did try.
On your side, you don't want to use the results produced by the 100-ish SLOC of Perl code I wrote in 2008
Sorry, that's not what I thought of as "trying to improve this situation". I was thinking about improving the actual patchset, splitting it into pieces which can be individually reviewed and merged.
waiting for some hypothetical perfect rewrite of the documentation tools (which, albeit TIGCC/GCC4TI-specific, have worked well enough) that would solve several shortcomings and add a new output format.
Indeed, that would be the proper solution. Your Perl script 1. is unsafe, as it doesn't fully understand the file format and 2. does entirely the wrong thing, adding hardcoded header references instead of removing them, i.e. the exact opposite of what we want.
Despite the informal spec, I have
participated in a nascent rewrite (ask konrad and MathStuf if you don't believe me) of tools matching the current Update* workflow (though the "documentation-to-headers" approach could be questioned; most projects use the opposite "code-to-documentation" approach), which didn't get far.
That just shows that you guys are either not competent enough or didn't spend enough time on it. I'll probably end up having to write it all myself, as usual.

And then people ask why I tend to work alone so much…

On my side, I have used the results produced by that small Perl code, and this has made it possible to integrate a number of documentation snippets.
What also made it possible is that you have just added your own snippets because you think they are perfect. They're not. There are plenty of things needing fixing in them. So they need to be split into small reviewable pieces (see e.g. the LKML patch submission guidelines to get an idea of how I expect them to look) which can be proofread, tested where they include code (e.g. address hacks), fixed and merged individually.
This is a much more pragmatic approach.
s/pragmatic/short-term/g

You never care about the long-term benefits; you're always happy with quick hacks which are less work in the short term but will lead to much more work in the long run. If you believe there will be no "long run", as you have given to understand on several occasions when I pointed out your lack of long-term vision, then why are you maintaining that project in the first place? In the short term, the existing TIGCC 0.96 Beta 8 is just fine.
Remember, in AMS native programs, startup code such as SAVE_SCREEN, the overly used __set_in_use_bit code, or other program support code such as the internal F-Line emulator, gets duplicated across the vast majority of programs (that's what some people term "the stub of _nostub programs"). And we didn't just save "2 or 4 bytes": -16 bytes on SAVE_SCREEN (the code uses fewer registers, reducing side effects on other startup code bits), -10 bytes on the internal F-Line emulator. Library functions shrink as well, but they are used by a lower proportion of programs: -22 bytes for OSV*Timer, -2 bytes for _rowread, -2 bytes for the VTI detection. Not to mention the sprite routines, on which you never bothered to integrate even changes that I contributed to TIGCC at the same time they were being made in ExtGraph, in 2002-2003, let alone Joey Adams' 2005 modifications + feature extensions.
That code which ends up everywhere is also code which risks breaking a huge number of programs if it's incorrect, and so it necessitates a lot of QA. It requires checking the entire startup code to make sure the changes in register usage don't cause trouble, as startup sections sometimes depend on each other's results to save space. It should also be tested in multiple configurations. So when I ask "what testing have you done?" and the answer is "none, it's obviously correct", an answer which also shows that the patch author is unfamiliar with the workings of the TIGCCLIB startup code (which is extremely sensitive to register assignments), that doesn't inspire confidence at all.
Now if the patch fixes some actual bug, I might turn a blind eye to the testing requirements (you have already noticed that I don't believe testing to be the solution to all problems), but for a minor optimization, the benefits are extremely low, so I don't want to take any risks. I have already been burned twice by an "optimization" which broke things (in both cases, it was submitted by you), and GCC4TI has been burned by one too (which was again yours, putting your "broken optimization" count at 3). It's better to have negligibly larger, but perfectly working code than to have "optimized" code which doesn't work.
On the one hand, you're pushing people to optimize their programs for size, but on the other hand, you aren't even working on the extremely low-hanging fruit. Do as I say, not as I do.
Because those "extremely low-hanging fruit" are the ones with the highest risk of regressions, and they save so little space that they have little to no practical benefits. And I'm not convinced you tested those changes adequately before merging them. In fact, you definitely didn't test the sprite routines, as those didn't work at all in your release. Yet you're always quick to accuse me of not testing stuff, even when I
cannot test it myself due to not having the required hardware.
It would be much more useful to work on e.g. the compiler, where you can save hundreds if not thousands of bytes on many programs.
But it's much harder, and:
That's assuming that the optimization on the m68k target is improving as the GCC version number is increasing... and it isn't.
Work on the compiler is not necessarily limited to upgrading to a new version (but of course that's part of it). For example, I did some tuning of my own: adjusting 68k instruction costs under -Os so that GCC uses multiplication and division instructions instead of longer shift-and-add sequences, emitting linear test-and-jump sequences for sparse switches instead of balanced trees under -Os (because trees require more jumps and thus more space), and adding extra 68k peepholes which improve both size and speed.
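To give an idea of what I mean by a sparse switch (made-up example code, not from TIGCC): when the case values are spread far apart, GCC cannot build a jump table and must fall back to compare-and-branch code, and under -Os a plain linear sequence of comparisons is smaller than the balanced tree of branches it would otherwise generate:

short classify_event(short code)   /* case values too sparse for a jump table */
{
    switch (code) {
    case 3:     return 0;
    case 170:   return 1;
    case 2049:  return 2;
    case 9999:  return 3;
    default:    return -1;
    }
}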
Definitely not in terms of speed (e.g. http://gcc.gnu.org/bugzilla/show_bug.cgi?id=40454), and our own experience with GCC 4.0 and 4.1 hasn't been problem-free at all. Patrick can write more if he wants.
My size measurements have shown program sizes going down on the vast majority of programs.
There are more pressing issues than spending a lot of time upgrading to a new compiler version (which is known to be likely to yield worse results), and then spending a lot more time testing it.
Well, I don't consider the same issues "pressing" as you do at all, sorry. Saving 2 bytes in some library function sure isn't "pressing". Upgrading GCC would also automatically bring us features users (including you!) have been asking for, such as being able to set optimization switches per file in the IDE, without even having to touch the IDE. Instead, we'd get #pragmas for this purpose that work across all platforms supported by GCC, a much better solution than some custom UI in the IDE significantly complicating its code and the TPR project format.
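To give an idea (the function names below are made up, and this needs a GCC with option pragmas, i.e. 4.4 or later), a per-file override would look roughly like this:

/* Everything below in this file is compiled for size, regardless of the
   project-wide optimization switches. */
#pragma GCC optimize ("Os")

void draw_background(void)   /* hypothetical non-critical routine: gets -Os */
{
    /* ... */
}

/* A single hot routine can still request speed instead. */
__attribute__((optimize("O2")))
void mix_audio(void)         /* hypothetical time-critical routine: gets -O2 */
{
    /* ... */
}

No IDE or TPR changes are needed for any of this; it's all in the source files.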
And again, if you're planning to stay on GCC 4.1 forever, that's quite short-term thinking.
One of the primary goals wrt. testing would be to do a better job testing than you did yourself with GCC 4.0 and 4.1 before proposing them to users... which shouldn't be too hard, given how low your standards were.
… says the author and committer of the sprite routine "optimizations" which didn't work at all!

My GCC updates at least worked on the programs I tested them on!
You're trolling all the time against "Micro$oft", but you used their "methods".
Unlike them, I didn't label my testing versions "releases".

And anyway, not everything they do is necessarily bad.
Due to its large footprint, the only place where the LZMA decompression code could make real sense would be if embedded into a generic launcher (ttstart or SuperStart).
This is nonsense. The compression is so much stronger that for any program of sufficient size, where "sufficient" is around 20 KB uncompressed, the saved space in the PPG already compensates for the larger launcher. E.g., after adding the launcher size, my Backgammon game is already smaller when compressed with LZMA than with pucrunch. (Let's call pucrunch by its name; "ttpack" is just pucrunch.)
And its massive slowness (15-20 seconds to decompress a ~60 KB program, i.e. more than an order of magnitude slower than ttunpack and nearly an order of magnitude slower than ttunpack-super-duper-small) makes it significantly less desirable.
That's a startup-only cost and has no effect whatsoever on runtime speed, i.e. gameplay (for games) or usability (for utilities).
christop (./17) :
It'll take me some time to completely convert my code from the pseudo-global variables (members inside a single "globals" struct named "G") that TIGCC forced upon my OS to real global variables. It will allow me to interact with C more easily from assembly, and I can easily rewrite some time-critical code in asm (most notably the audio driver's interrupt handler). Actually, I can probably convert one module at a time, and declare "G" as a regular global variable. I would also have to change the start of the heap, but those are the only changes that I see as necessary.
A better solution than your structure, and one which works with the current TIGCC, is to have an assembly header of equates:
.equ var1,0x5B00
.equ var2,0x5B02
.equ var3,0x5B06
…
and to .include that into all .s files, and to pull it into all .c files with an asm(".include ...") statement. Then you can just declare your variables as extern in a C header. Alternatively, you could have a C header of the form:
#define var1 (*(short*)0x5B00)
#define var2 (*(long*)0x5B02)
#define var3 (*(unsigned short*)0x5B06)
…
and you could generate the assembly header (or even both a GNU as and an A68k header) from that C header by a simple
sed invocation.
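For instance (a rough sketch only: it assumes every line of the C header follows exactly the #define pattern above, and the file names globals.h and globals.inc are just placeholders), the GNU as header could be generated with:

sed -n 's/^#define \([A-Za-z_][A-Za-z0-9_]*\) (\*(.*\*)\(0x[0-9A-Fa-f]*\))$/.equ \1,\2/p' globals.h > globals.inc

The -n together with the trailing p prints only the lines that match, so comments and include guards in the header are simply dropped; an A68k variant would only need a slightly different substitution.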
That way, you don't need the syntactic sugar in the linker and you'll automatically get the smaller and faster short references when you use those variables as destination operands, which doesn't work with PpHd's current linker patch.