After thinking about it a bit, I decided to clear up any possible “mysteries” as to “where I’m coming from” and “what my biases are”. (I’m rather sure I’m re-stating, to some degree, things I’ve already posted on this bulletin board; bad memory as always; although I’m also quite sure that a large part of this has never been posted here before, and I will make a sincere attempt to figure out a way not to do this again in the future.) This is kind of long; but storage is cheap and you don’t have to read any more of it than you are interested in. You might also want to skip down to the last section, which is a summary; the previous sections are nothing but a detailed explanation of exactly why the things in that summary are there.
So first off, my “history”. When I was in college I discovered that I was very good at anything to do with computers; not just programming, literally anything. (When I discovered this I was studying Electrical Engineering, specializing in digital circuit design.) (Take note of what follows the next few sentences before you conclude that I am just bragging.) In my entire career I never met anybody who was as good as I was, much less better. There are only two times in my entire career when anybody found a bug in my code after I had released it into production; and in one of those cases the program specifications had only been given to me verbally (actually quite common, believe it or not) and I had misunderstood them, so the program worked perfectly in terms of doing what I thought it was supposed to do; unfortunately, that wasn’t what my manager wanted it to do. I’m not really sure that that should be considered a bug. I was also, my employers estimated, about an order of magnitude faster than the typical programmer. Now the qualifications: First, I tend to believe that many of the people who regularly post on this bulletin board, as well as Rex himself, are as good as or better than I was. Secondly, and possibly most importantly, this was the very first thing I had ever discovered in my entire life that I was actually good at, as opposed to “just competent”, “barely competent”, or even worse; and I never discovered anything else that I was actually “good” at, either before, while I was an active programmer-type person, or after I was forced to retire. (The one exception was teaching computer-related courses; but I just consider that to be an “aspect” of my skill in dealing with computers as a whole. One of the things that probably made me very good at teaching was my Electrical Engineering background specializing in digital hardware design, and therefore my very “deep” understanding of computers; I was able to impart that understanding to students even though almost none of them, of course, had that background.) That could be because, while my memory is so bad now as to be almost crippling, it was never what you would really call “good”. And for these reasons this was the very first thing in my life that I was actually proud of.
So I was in college when I “discovered” this; and my complete programming-language history from then to now is as follows: Fortran on an IBM 1130 (a so-called minicomputer; it was a 16-bit machine with 48K of memory that was probably less powerful than the later Commodore 64), as well as IBM 1130 Assembler; then Fortran on the CDC (Control Data Corporation) 6600, a so-called “supercomputer” that consisted of several (I no longer remember the exact number) parallel floating-point-only processors with 60-bit words, along with 12 (as I remember) integer-only 16-bit “peripheral” processors to handle I/O, which were actually a single piece of physical hardware time-sliced 12 ways. I also did a very limited amount of assembler on that machine (assembler was actively discouraged there and you had to “jump through hoops”, so to speak, to actually use it; although it really wasn’t all that sensible on a floating-point-only machine in the first place, and the “peripheral processors”, not too surprisingly, were totally unavailable to “average” (as opposed to “systems”) programmers). I will note here that there were only four languages in general use at that time: Fortran; assembler (for the particular machine you were dealing with, of course); COBOL (which was absolutely not used at an “engineering” school); and Basic, which was starting to come into vogue. In my junior year, as I remember, the school acquired a PDP (Programmed Data Processor) 11/45, a 16-bit machine that had the hardware capability to “bank switch”, allowing more than 64K of memory (although 64K was all that was available to a single program at any given point in time). This machine was programmed using a version of Basic; and effectively from the user perspective (and probably even from the perspective of a “systems” programmer, short of some out-of-the-ordinary stuff) the only language the computer ran was Basic; there wasn’t even the capability to run any other language. Even the (rather primitive) operating system was an “extension” of the Basic language. (I wrote the device-driver software for some graphics terminals for that machine, in Basic – again, this was in the early to mid-70’s. And I found out, somewhat accidentally, after I had finished the software, that the college had gotten the terminals for free in exchange for writing the device-driver software for said terminals, which I, of course, had written. I must admit I had wondered why one particular member of the faculty had taken such a “deep” interest in what I was doing, “suggesting” requirements and testing procedures, when I was doing this just for the “fun” and “challenge” of it.) The college also had a PDP 11/10 that had no software of any kind pre-installed; programs were loaded from paper tape, and the software needed to do that had to be entered manually by flipping switches on the front panel of the machine. The next machine I had experience with (on my first “real” job as a programmer) was an IBM 370 mainframe, for which I exclusively wrote Assembler-language programs at the direction of my employer. At a later point (and another company) I learned PL/I; and I taught PL/I (as well, I might add, as IBM 370 Assembler, and even COBOL) full time for a significant period at a Fortune 500 company. I then learned C somewhere along the way (I really no longer remember where or why), and then “transitioned” fully to C++.
Now I suppose I will brag a bit here. I was working part time as an instructor at a “business” school teaching C++, and somewhere along the line my employer learned that Microsoft was offering a C++ certification exam. He wanted me to take said exam (he had been “advertising” me as the “instructor who was only rated 9’s and 10’s by the students”, and he wanted to add Microsoft C++ certification to that). He was willing to pay for it entirely (plus the transportation and hotel and so on), and I had no objections, so I signed up for the test and took it in Atlanta, Georgia. Well, it was a two-hour test that I finished in about 45 minutes, and I left. The next day I was called in Chicago by the Microsoft employee who had administered the test, from wherever he was in Washington State. He told me that he had assumed I had “given up” on the test because I had left after only 45 minutes – as had several other people who were taking the exam at the same time, judging by the comments they made as they left. The reality was that I had tied for the 2nd-highest score ever to that point in time; and he added that “if you had spent another ten or fifteen minutes going over your answers before you left” I would have had “the first perfect score ever” to that point in time. I will add here, with some humor, that I don’t really know how correct he was in that presumption, because there has always been, for me, for whatever reason(s), a somewhat inverse relationship between how difficult a question was and how likely I was to get it right.
I will also note here that it is my belief that the reason Microsoft introduced this exam was that they felt C++ was a very difficult language (which it was), and that there were too many “charlatans” out there who claimed to know the language well (and really didn’t) and who claimed to represent Microsoft in some way, which Microsoft absolutely did not want; so they developed and administered this test to “weed these charlatans out”. However, it is also my belief that Microsoft eventually came to feel that C++ was just too difficult, with the result that they developed and marketed a much-simplified “implementation” of C++ that they called C# (“C-Sharp”).
Next, the “consequences” of that “history”. Until TCC (and this is now true for the batch language of cmd.exe as well, since the introduction of the “/A” (“arithmetic”) parameter on the “Set” statement), there was a clear-cut, hardware-defined distinction between “numeric” and “character” (string) values; and this difference was pretty much absolute. “Numeric” values came in one of quite a few possible “formats”: pure-integer values (signed or unsigned) that were 8, 16, 32, or 64 bits long; in the case of the IBM mainframe, BCD (“binary-coded decimal”) numbers ranging from one to fifteen digits long; and floating-point formats (generally binary for most machines, but actually base-16 (hexadecimal) for the IBM mainframes, believe it or not) ranging in size from 4 to 16 bytes. The binary integers could be considered to have a fixed binary (as opposed to decimal) point, but this was purely an attribute of the imagination of the programmer. For the IBM mainframe, BCD numbers could be considered to have a fixed number of decimal (of course) places, and the assembler provided very limited support for this (the P-prime (P’) attribute, as I remember it was called), but very few assembler programmers were even aware of the existence of this data attribute, much less used it. And floating-point numbers could be coded as if they had a fixed number of either binary or hex or decimal places at the discretion of the assembly-language programmer; but this existed purely in the “imagination” of that programmer; there was no support for it whatsoever from either the hardware or the software. I will note here that one of the jobs I had fairly early in my career was to entirely rewrite, from scratch, the “calculation engine” for a mainframe spreadsheet program (à la Excel) in floating point. The program had been written to use BCD arithmetic up to that point, but the software vendor (who I worked for, of course) decided to recode it using (IBM’s) floating-point format because they felt that BCD numbers took up too much storage and were too slow. (The BCD values were strictly integers, 15 digits contained in 8 bytes, for a maximum of 999,999,999,999,999; and yes, negative zero was a “real” value, but it was for most purposes exactly equivalent to plus zero. There was absolutely no support for digits after the decimal point, and “scaling” by multiples of 10 was left entirely up to the programmer.) So they hired me specifically to entirely re-code the “calculation engine”, as well as all of the numeric input-conversion and output-formatting routines, in floating point. And they initially wanted me to carry all floating-point numbers internally as their “real” values multiplied by 100, matching what they had largely done with the binary-coded decimal, so that round-off errors for financial values (such as getting .9999999 when the answer really should have been 1.00) did not occur. (The president of this company – a very small company that had only about a half-dozen employees, including the president and vice-president – hired me because he had previously been a contractor at a company where I worked and was familiar with my work.) However, at some point (I really don’t remember the details) they gave up on that idea and I used “regular”, unscaled, floating point.
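As an aside, that same scale-by-100 trick happens to be the natural way to handle money in the integer-only “Set /A” arithmetic discussed below. A minimal sketch (the amounts and variable names are made up purely for illustration):

    rem Money is carried as its "real" value multiplied by 100 -- that is,
    rem as a whole number of cents -- so addition and subtraction are exact.
    set /a cents=1099+250
    set /a dollars=cents/100
    set /a pennies=cents-(dollars*100)
    echo Total: %dollars%.%pennies%

This prints “Total: 13.49”; a real routine would also have to zero-pad the pennies (5 cents must print as “.05”), which is exactly the kind of job the “@Format” functions described below exist for.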
And there are some aspects of that job that I’m quite proud of today. Number one: even though the internal “accuracy” of those floating-point numbers was somewhere between 14½ and 17 decimal digits (since the floating-point format was base 16, it wasn’t a constant value in base 10) and the code I wrote only operated to a smaller number of digits (I believe the number was 12, but it was 25 years ago), this allowed me to code “sophisticated” (if you’ll pardon me) rounding when converting the floating point to decimal and formatting it for output, and results like “.99999999” almost never occurred. Another thing that I am proud of is this: for reasons I no longer remember, they didn’t want me to use the code from the run-time libraries of either Fortran or PL/I; they wanted me to write that code from scratch, which I did. We are talking here about log, log10, e to the x, 10 to the x (in fact, anything to the power of anything), the trigonometric routines (sin, cos, tan, asin, acos, atan, and the hyperbolic functions), and all the other numeric routines the spreadsheet was capable of (it was too long ago for me to really remember the complete list). And, being somewhat paranoid, I suppose, they were very concerned about the speed and accuracy of my code vs. that of the high-level-language run-time libraries, and they did extensive testing: my code was always at least as accurate, if not more accurate (i.e., on all the tests you can think of, like “sin(x)**2 + cos(x)**2 = 1” and “e**(loge(x)) = x” and the like), and always at least as fast, if not faster, than the run-time library code. (I really have no theories as to why my code at least seemed to be better than the run-time library code.) I got all of the algorithms for my routines (this was pretty much before the very existence of the Internet) from what was called the “Mathematical Handbook”, a very thick red book that contained the formulas for all of these things, as well as page after page of tables of logarithms (both natural and base 10) and page after page of tables listing the values of the trigonometric functions for probably thousands of values (I haven’t seen said book for probably 20 years, and I no longer remember the precision to which those tables were carried). (I’ll note that the reason such a book existed then and no longer exists now is that this was at the “dawn” of the introduction of “scientific” calculators; what few there were were very expensive.) And finally, I’ll add that I developed, on my own, a square-root routine that used nothing but straight comparisons, straight additions and/or subtractions, and multiplication or division by the number 4 (which was very fast) – all very fast operations for the floating-point hardware. And my code was as accurate as it was theoretically capable of being. (It’s another, rather long, story as to where I got the idea for the algorithm; I’ll just say that I didn’t “invent” the “concepts” behind the algorithm, just its implementation in IBM’s base-16 floating-point hardware/instruction set.)
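I can’t reproduce the original routine here (it was specific to IBM’s base-16 floating point, and it’s long gone), but the general digit-by-digit idea behind that kind of square root can be sketched even in the integer-only “Set /A” arithmetic discussed below. Note that this integer version also halves the partial root, so it is looser than the strict “nothing but division by 4” property of the original; it is an illustration of the technique, nothing more:

    @echo off
    rem Integer digit-by-digit square root: nothing but compares,
    rem adds/subtracts, and divisions by 2 and 4.
    setlocal
    set /a n=1000000
    set /a bit=1073741824
    :shrink
    rem Find the largest power of 4 that does not exceed n.
    if %bit% gtr %n% set /a bit/=4
    if %bit% gtr %n% goto shrink
    set /a root=0
    set /a left=n
    :next
    if %bit% equ 0 goto done
    set /a trial=root+bit
    if %left% geq %trial% (set /a left-=trial & set /a root=root/2+bit) else set /a root/=2
    set /a bit/=4
    goto next
    :done
    echo The integer square root of %n% is %root%
    endlocal

Each pass through the loop settles one “digit” of the root (one bit here; one base-16 digit on the mainframe), which is why nothing stronger than division by a small power of the base is ever needed.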
So the bottom line is this: TCC (and now the batch language of cmd.exe, via the “Set” command with the “/A” parameter, which is also shared by TCC) doesn’t really have “numeric” data types; strictly speaking, all data is “character” data. However, based on my previous history I still prefer to maintain the “illusion” of numeric values, and this “illusion” has no downside whatsoever that I can think of “off the top of my head”. In terms of this “illusion”, 12.96 is a number, whereas “00006931” is a character string that happens to have the numeric value of 6931.
And the “@Format” functions are there to convert (usually assumed to be) numeric values to character strings; whereas the statement “Set Variable+=0” will effectively convert a character string that contains a numeric value to a number. (That “Set” statement produces an error if the value of “Variable” cannot be interpreted as a valid number; neither the “@Format” function nor the “@FormatN” function “cares” whether the “number” (the 2nd argument of both functions) is actually numeric. For “@Format” it is a complete irrelevancy: the numeric-ness of the string would only matter at all if a leading “0” were specified on the “format” argument in the first place, and if that leading zero is specified, the leading or trailing zeroes are placed on the result, where applicable, no matter what the value of the “string” argument is, numeric or otherwise. For “@FormatN”, the number is “defined” by whatever precedes the first non-numeric character of the 2nd argument (the “value”); so if the first character of the 2nd argument is a “Q”, for instance, the returned result is simply zero, formatted in whatever way the first argument specifies.)
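To make that concrete, here is a short hypothetical TCC session (the variable name and values are made up, and the results shown in the “rem” comments simply follow the behavior just described):

    set Variable=00006931
    set Variable+=0
    echo %Variable%
    rem -> 6931; the character string has been "converted" to a number

    echo %@format[08,42]
    rem -> 00000042; the leading "0" in the format zero-fills the result
    echo %@format[08,abc]
    rem -> 00000abc; @Format pads the same way whether or not the string is numeric
    echo %@formatn[8.2,Q5]
    rem -> 0.00 (right-justified to width 8); "Q" is non-numeric, so the
    rem    value is taken as zero and formatted per the "8.2" format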
So it is my “bias” that there is a clear distinction between character strings that happen to contain a valid numeric value and actual “numbers”.