The OpenGL specification (glspec45.core.pdf) defines exact bit widths for all GL types:
2.2.1 Data Conversion For State-Setting Commands
...
An implementation must use exactly the number of bits indicated in the table to represent a GL type.
...
But the so-called OpenGL registry (gl.xml, the official machine-readable OpenGL API specification) naively defines those types as plain aliases of the corresponding C types.
...
<type>typedef short <name>GLshort</name>;</type>
<type>typedef int <name>GLint</name>;</type>
<type>typedef int <name>GLclampx</name>;</type>
...
This is potentially unsafe.
Why weren't fixed-width types used instead?
Why is there no preprocessor logic to pick the correct underlying types? Why are there no compile-time checks of the type sizes?
And, more importantly, when writing an OpenGL function/extension loader, how should I define those types?
Yes, it is.
So?
Potentially unsafe is not actually unsafe. As long as you do not try to run this code on an odd-ball platform or compiler that doesn't provide the expected sizes for these types, it's fine. It works for GCC, Clang, VC++, and virtually every other major compiler, and these definitions work on Windows, macOS, Linux, Android, iOS, and plenty of other systems and ABIs.
I understand the desire to adhere more strongly to the C and C++ standards. But the reality is that these definitions are fine for most real systems in use. And for the systems where they are not, it is up to the OpenGL providers for those systems to supply alternative definitions.
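If you want some insurance against the odd-ball case, a loader can check the sizes at build time. A minimal sketch, assuming a C11 compiler; the assertion lines are illustrative and do not come from gl.xml:

#include <limits.h>   /* CHAR_BIT */

typedef short GLshort;
typedef int GLint;

/* Fail the build if the C aliases do not have the bit widths the
   OpenGL specification requires for the corresponding GL types. */
_Static_assert(sizeof(GLshort) * CHAR_BIT == 16, "GLshort must be 16 bits");
_Static_assert(sizeof(GLint) * CHAR_BIT == 32, "GLint must be 32 bits");

On the mainstream platforms listed above these assertions compile away to nothing; on a platform where one fires, that is the cue to switch to the vendor's alternative definitions.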
As for why fixed-width types weren't used: OpenGL existed before such types did. Remember, fixed-width integer types date from C99/C++11, while OpenGL dates to 1992.
Back in 1992, C++ wasn't even standardized yet. What "compile-time checks" could have existed back then? Also, OpenGL is defined in terms of C, and C's compile-time computation abilities consist primarily of extreme macro programming techniques.
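For what it's worth, the closest thing 1992-era C had to a compile-time size check was exactly that kind of macro trickery. A sketch of the classic negative-array-size trick; the macro name is made up for illustration and nothing like it appears in the registry:

/* Pre-C11 "static assert": if the condition is false, this declares an
   array type with size -1, which is a constraint violation and stops
   compilation. */
#define GL_COMPILE_TIME_CHECK(cond, tag) \
    typedef char gl_check_##tag[(cond) ? 1 : -1]

/* Assumes 8-bit bytes, which is itself only a convention. */
GL_COMPILE_TIME_CHECK(sizeof(int) == 4, GLint_is_32_bits);

Checks like this were possible, but ugly and nonstandard, and the registry does not use them.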
Unless you have a specific need to do otherwise, define them exactly as gl.xml says to.
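In practice that means copying the typedefs straight out of gl.xml into your loader header. A sketch of what such a hand-written header could look like (the file name is just an example):

/* glloader.h - GL type definitions taken verbatim from gl.xml */
typedef short GLshort;
typedef int GLint;
typedef int GLclampx;
/* ...and so on for the remaining GL types... */

If a platform ever does need different underlying types, the overrides will come from that platform's GL provider, as noted above.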