Why does the NIST RS274NGC G-Code Interpreter use this code style?


I just compiled the NIST RS274NGC G-Code Interpreter and saw an unbelievable 890 warnings from gcc.

200 of them were caused by this array:

char * _rs274ngc_errors[] = {
/*   0 */ "No error",
/*   1 */ "No error",
/*   2 */ "No error",
/*   3 */ "No error",
/*   4 */ "A file is already open", // rs274ngc_open
<...>

which, according to my basic understanding, should be const char *.

Then I saw these macros (they actually appear several times in different .cc files):

#define AND              &&
#define IS               ==
#define ISNT             !=
#define MAX(x, y)        ((x) > (y) ? (x) : (y))
#define NOT              !
#define OR               ||
#define SET_TO           =

Then I saw a lot of "suggest braces around empty body in an 'else' statement [-Wempty-body]" warnings, caused by really strange control-flow-altering macros like this one (yes, with a dangling else!):

#define PRINT0(control) if (1)                        \
          {fprintf(_outfile, "%5d \n", _line_number++); \
           print_nc_line_number();                    \
           fprintf(_outfile, control);                \
          } else

The accompanying report claims:

A.5 Interpreter Bugs

The Interpreter has no known bugs

All of that makes me wonder - why is it written so strangely? I can understand macros like PRINT0 - error handling in C can be a real pain - but why would anyone use SET_TO instead of =?

I can believe that all this code was generated, but couldn't it have been generated in a warning-free way?

I'm not an expert in any way, I'm just really curious.


There are 2 answers

Answer by Mark (accepted):

As Hans and Foad point out, this was the norm back then. It's referred to as K&R C, after the inventors of the language, Brian Kernighan and Dennis Ritchie. (Note that K&R C can also refer to a formatting style they popularized while writing the first book about the language.)

K&R C was quite forgiving of things that a compiler in C99 or C11 mode would treat as UB (undefined behavior) or a flat-out syntax error, such as defining a function taking args without specifying the types of the args. Back then, such a function was assumed to take int args.

IIRC the first major overhaul of the language was ANSI C '89; computer power, compilers, and the popularity of the language had changed drastically between its invention and that standardization.

If antiquated C interests you, you may wish to look at the source of the Bourne shell. Quoting Wikipedia's Bourne Shell article:

Stephen Bourne's coding style was influenced by his experience with the ALGOL 68C compiler ...

... Bourne took advantage of some macros to give the C source code an ALGOL 68 flavor. These macros (along with the finger command distributed in Unix version 4.2BSD) inspired the IOCCC – International Obfuscated C Code Contest.

An actual example of the code can be found on rsc's site, and (what I assume is) complete source can be found here.


By the way, if you think the number of warnings you got is a lot, you should try turning on additional warnings. There are plenty that are reliable and useful but aren't on by default, though I don't have an up-to-date list of them. -Wall -Wextra would be a decent start.

Answer by Foad S. Farimani:

Well, actually, if you look at this from the computer-science or mathematical point of view of its time, it makes sense. C syntax is nowadays so mainstream that we don't even think about it. But back then = meant equality, and it still does in a mathematical context. Using = as the assignment operator is a bizarre and rather confusing choice; novice programmers sometimes confuse it with the mathematical equals. In contrast, R uses <- or ->, while Maxima and Modelica use :=, all languages designed by mathematicians. and, or, not, and is are also pretty good choices, which are, for example, used as keywords in Python. From a programming point of view, using preprocessor macros this way is a horrible idea, but if you think about the rationale, it is actually C that is to blame.