This has bugged me for a LONG time. First of all, my first machine didn't have "characters" at all. It was a KIM-1, and characters were just patterns on the figure-8 style (seven-segment) LED readout - you couldn't even draw some letters, like Z or X. Yeah, very strange. After that I had a ZX81 - this didn't use ASCII either, not at all, but plugged into the telly and displayed 8 by 8 pixel chars (a whole 64 of them).
The CPU of this machine, and plenty of others, had some very odd assembler ops to cope with converting "raw" numbers into displayable ones. Many CPUs did; it required extra flags and extra instructions... and they were only there 'cos it was quicker than doing it in software. The x86 still has them.
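(If you've never met one of those ops: here's roughly what a decimal-adjust like the Z80/x86 DAA is doing after a packed-BCD add, sketched in C. The function name and the dropped carry are my simplification, not any chip's actual behaviour.)

```c
#include <stdint.h>

/* Rough sketch of a decimal adjust: whenever a BCD digit
 * has spilled past 9, add 6 to drag it back into 0-9. */
static uint8_t bcd_add(uint8_t a, uint8_t b)
{
    unsigned sum = a + b;

    if ((a & 0x0F) + (b & 0x0F) > 9)  /* low digit spilled past 9 */
        sum += 0x06;
    if (sum > 0x99)                   /* result spilled past decimal 99 */
        sum += 0x60;

    return (uint8_t)sum;              /* e.g. bcd_add(0x19, 0x28) == 0x47; carry out is dropped */
}
```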
Eventually I had a real machine that had ASCII(ish).
So, I posit the question: why isn't the character '0' the number 0 (0x00), '1' 0x01, and so on up to '9' at 0x09, THEN A-Z...
Think about it. When we converted integers to text strings we'd no longer have to add 0x30 to each output character. It's worse still if you're printing hex, as there's a gap between '9' and 'A', so you have to detect that and add a little more.
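Here's the familiar dance in C, next to what it could have been. The second function obviously assumes my made-up encoding, so don't go calling it anywhere real:

```c
#include <stdio.h>

/* ASCII forces arithmetic on every digit: '0' lives at 0x30 and 'A' at 0x41,
 * so there's a 7-byte gap to hop over between '9' and 'A'. */
static char nibble_to_hex_ascii(unsigned v)
{
    v &= 0x0F;
    return (v < 10) ? (char)('0' + v)         /* add 0x30 for 0-9 */
                    : (char)('A' + v - 10);   /* add 0x37 for A-F: mind the gap */
}

/* Under the layout I'm ranting for ('0' == 0x00 ... '9' == 0x09, then A-Z)
 * the same job would be a plain store: */
static char nibble_to_hex_ranty(unsigned v)
{
    return (char)(v & 0x0F);                  /* the value IS the character code */
}

int main(void)
{
    for (unsigned v = 0; v < 16; v++)
        printf("%u -> '%c' (0x%02X)\n",
               v, nibble_to_hex_ascii(v), (unsigned char)nibble_to_hex_ascii(v));
    (void)nibble_to_hex_ranty;                /* hypothetical encoding, nothing to print with it here */
    return 0;
}
```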
How many machine cycles have been wasted doing this? It might not seem important on the quad-core, nitrogen-cooled behemoth under your desk, but however fast the hardware gets, the waste is still there.
And it has carried on - we've inherited this baggage in the Unicode spec, whose first 128 code points are just ASCII... Strange...
Maybe that 0x00 was useful as a string terminator... but the fact is, you SHOULD remember the length of the string anyway. Blimey, 0xFF is just as easy to detect in hardware as 0x00 is, if you really want a terminator.
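Just to show the alternative isn't exotic, here's a length-carrying string in a few lines of C. The struct and its field names are invented for illustration; plenty of real libraries do the same sort of thing.

```c
#include <stddef.h>
#include <string.h>

/* Sketch of "remember the length" instead of hunting for a terminator byte. */
struct lstring {
    size_t      len;    /* explicit length: no scan needed, and 0x00 is just another byte */
    const char *data;   /* not required to be terminated */
};

static struct lstring lstring_from_cstr(const char *s)
{
    struct lstring out = { strlen(s), s };
    return out;
}
```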
Yours,
Ranty 'burnttoy' MacRant.