Byte Ordering: On Holy Wars and a Plea for Peace (1980)

70 points
4 months ago
by oumua_don17

Comments


ale42

Have a look at the date of the document... although the content is serious, the way it's discussed might be a bit in the line of these: https://en.wikipedia.org/wiki/April_Fools'_Day_Request_for_C...

4 months ago

pianosaurus

Who needs CSS for justified text when you can just do it 1980-style monospaced justification? This makes me happy, and I have no idea why.

4 months ago

evnix

Read it from a tiny screen and you will realise why it is a bad idea.

4 months ago

082349872349872

Reading from a tiny screen being the root bad idea.

4 months ago

nine_k

I remember using physical screens that only had 64 or even 40 columns. (A printout was more practical for reading anyway. Just tolerate a few minutes of dot-matrix noise.)

4 months ago

kps

It wasn't meant to be read from a screen. (A browser doesn't show the Form Feed character at each page number.)

4 months ago

silvestrov

I think it's less that the font is monospace and more that it's a good, high-contrast font rather than the thin, light-gray fonts many sites use to look fancy.

For 90%+ of the web sites I read I only need a "fix font" for the body text and not a full "reader" version of the page.

4 months ago

fuglede_

This tom7 video might also make you happy then: https://www.youtube.com/watch?v=Y65FRxE7uMc

4 months ago

petee

Probably because it just works, simply. Whereas with the complexity we have today we can barely align things consistently; it's almost comedy

Tip for those with mobile issues: rotate to landscape and the words get bigger ;)

4 months ago

hoseja

I hate it so much, full of wrong linebreaks. I don't need typewriter or dot matrix compatibility.

4 months ago

082349872349872

> Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration. —SKB

4 months ago

yarg

That reads like an XKCD alt-text.

(That would've been RM; SKB is Stan Kelly-Bootle)

4 months ago

bonoboTP

It's actually somewhat of a reality in some image processing code, and some people feel really passionate about whether the top-left corner pixel is located at (0.5, 0.5) or (0, 0).

4 months ago

1000100_1000101

Direct3D made this a thing. Trying to draw unscaled 2D elements, you often end up with blurry images as it bilinearly filters with the neighbouring pixels.

This is because of a mismatch between where it considers pixels to be located, where texture samples are considered to be located, and which texture coordinates are sampled when rasterizing a given pixel. See detail at [0].

If your graphics API was blurring all your images, you'd be passionate about that half-pixel offset too.

[0] https://www.gamedev.net/blogs/entry/1848486-understanding-ha...

4 months ago

bonoboTP

I can relate. Spent countless hours on this stuff with computer vision and convnets. The intricacies of align_corners, implementation differences between deep learning frameworks, striding and pooling when numbers aren't neatly divisible, uuh.

4 months ago

djbusby

The center of the pixel is at 0.5, but the top-left corner of the top-left pixel is at 0, and the bottom-right corner of that pixel is at 1.

4 months ago

marcosdumay

Or the top left is located at -0.5 and every other place has the coordinates one would naively expect.

4 months ago

danbruc

Pixels are point samples, they do not have corners, they are not little squares. [1]

[1] http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf

4 months ago

account42

Except when they are. No camera samples points; almost all are closer to the little-squares model.

4 months ago

danbruc

I am quite deep into dangerous half-knowledge territory here, but I think this is wrong. The optics of a camera will transform a point source according to its point spread function [1] and you are then integrating the contributions of all point spread functions overlapping a given sensor element, which often is a little square. So taking into account the optics, what you have actually sampled is not a square, you only integrated across a square. And each spot within the sensor element was illuminated with different light that got integrated together into a single pixel value, so you can not just turn around and say that each spot on the sensor element was illuminated with the same color, the one you got from integrating across the entire sensor element. If the scene you imaged had no frequency content above the Nyquist frequency [2], then you should be able to exactly reconstruct the illumination of the sensor, including at scales smaller than a single sensor element.

[1] https://en.wikipedia.org/wiki/Point_spread_function

[2] https://en.wikipedia.org/wiki/Nyquist_frequency

4 months ago

yarg

Funnily enough, I actually wrote code that does that very thing this morning.

4 months ago

ranger207

xkcd did reference it in https://xkcd.com/394/

4 months ago

[deleted]
4 months ago

ginko

To this day I think that article mixed up the terms, causing confusion ever since. "Little-endian" to me implies that the least significant byte of a word is at the end of a byte sequence, but it's the other way round.

I understand that it's from Gulliver's Travels where it's about which end to start breaking an egg from - but without knowing this you can easily end up getting this wrong.

4 months ago

ithkuil

Natural languages...

The word "End" can also mean any "extremity" and not just the opposite of "beginning". Otherwise phrases "on both ends of the spectrum" wouldn't make sense.

Thus, a positional encoding of a number has one side (end) where the impact of digits is much higher (big) than the other side (end) where the impact is lower (little).

Little end: the side with lower "weight"

Big end: the side with higher "weight"

Being "little endian" is a property of the encoding or architecture, not the property of the word. The word is not "little endian", i.e. its "end" is not "little". The encoding is little endian in that it starts with the little end of the word. You're rightly confused because the fact we're now suddenly talking about the start of the word is implicit and based on the assumption that the reader knows Gulliver's tale.

4 months ago

imtringued

You're doubling down on the ambiguity and thereby proving the point.

If end means start, then why use this word?

Little endian is in reality little startian and big endian is big startian.

In fact, we could simplify this even further and just call big endian, startian and little endian, just endian.

4 months ago

ithkuil

You're right it's ambiguous. It's a playful reference to a piece of literature.

It's also quite an old terminology which is not going to change.

If we could come up with a new terminology from the start we could find better options.

For example:

* Least/Most Significant First (LSF/MSF)

* Low/High Address Least Significant (LALS/HALS)

Etc

4 months ago

nine_k

It was / is a very straightforward question. Given this C fragment:

  u16 x = 1;
  u8 * px = (u8 *)&x;
What byte does px point to? LSB order means that it points to the least significant byte (the one with value 1); MSB order means it points to the most significant byte (value 0).
4 months ago

Y_Y

    int* x; // x is an int-pointer
    int *y; // dereferencing y gives an int
    int * z; // int multiplied by z

I'm being silly, but floating the asterisk between the type and the identifier gives me the same feeling as the "array indices start at 0.5" compromise mentioned earlier.

(For the record, the second way is the universal and objective truth.)

4 months ago

ireflect

But when you say "the second way" are you counting from zero or from one?

4 months ago

rsynnott

Given the context, you've got to wonder if the ambiguous terminology was deliberate.

4 months ago

tempodox

While I prefer to crack my eggs from the little end, I insist on big-endian byte order. Sadly, modern CPUs are mostly made by barbarians (i.e. Little-Endians).

4 months ago

kstenerud

I actually did a writeup on this: https://www.technicalsourcery.net/posts/on-endianness/

TLDR: Little endian is better for most data situations (and incidentally is a more natural ordering for humans), so it's good that it won out in the end.

4 months ago

wakawaka28

The writeup does not convey a consistent message on "naturalness"... The normal way numbers are written in most or all of the world is big-endian so obviously that is the one that would be found "natural" to most people, regardless of whatever perceived advantages going little-endian has. Furthermore, number names in every language I know of start with the biggest units. The direction of writing does not matter so much as the direction of reading. Anything other than big-endian would require readers to skip around in text to actually say the name of a number in a sentence.

4 months ago

kstenerud

What we consider "natural" now is not what was originally considered "natural" during the early centuries of the Hindu-Arabic numerals' journey.

In fact, we can still see the vestiges of the "low order digits first" convention in some languages even today (for example, in German). Even Greek numbers underwent reversals in the early years (earliest known evidence circa 4th century BC).

4 months ago

da_chicken

That still doesn't imply "naturalness". Quite the opposite. It implies that both are natural since both are adopted and both have been switched to after previously adopting the other.

Remember, too, that most people consider every system that they learn first as "natural". Like it's equally true that historically people did not select base 10 very often. Base 12 and base 60 were both popular as well if they're even using positional numbering at all. Nevermind how long we went in positional numbering without a zero. Is zero then unnatural? I think it must be. Is "naturalness" even virtuous then?

4 months ago

kstenerud

"naturalness" is a rather pointless thing to argue over. I only mentioned it in response to the parent (and I only mentioned it in the article to highlight how arbitrary it is).

My point in the article was that the numbering system was originally little endian because it made things easier when multidigit numbers grow in magnitude in the same direction as you write (least significant digit to the right in this case as it was a right-to-left writing system at the time). This written ordering was then maintained for compatibility reasons in the parts of the world that eventually settled on left-to-right. And the vocalizations ultimately followed those of the dominant cultures of the times - which used left-to-right (with some vestigial exceptions - see my sister comment).

4 months ago

Someone

> In fact, we can still see the vestiges of the "low order digits first" convention in some languages even today (for example, in German).

Also in English, with the numbers thirteen through nineteen (and, fairly well hidden, eleven and twelve)

4 months ago

kstenerud

In French it gets even weirder:

80 is "four 20s" (quatrevingt)

92 is "four 20s and 12" (quatrevingt douze)

Not sure where that comes from...

4 months ago

linguae

If I remember correctly, I believe French’s vigesimal system is a vestige from the non-Romance language spoken in France before Latin was introduced there. French isn’t the only language with vigesimal elements; English has the word “score,” made most famous by Abraham Lincoln’s Gettysburg Address (“four score and seven years ago…”).

4 months ago

wakawaka28

I forgot about that one and also the English numbers less than twenty. But in any case, if you consider bigger numbers, the most significant units invariably come first.

4 months ago

kstenerud

Yes, because by the time that the bigger numbers were in common usage, the world had settled upon "big endian" reading of numbers. But it's hard to kill an already entrenched system, thus vier-tausend-neun-hundert-acht-und-dreißig (four-thousand-nine-hundred-eight-and-thirty)

4 months ago

wakawaka28

It's not mere convention. Reading the most significant digits first lets you abbreviate in an ad-hoc fashion and it also supports adding more precision as the number is being written. "Most significant digit" is a clue that those are the figures people consider most important in general. They want to hear those digits first, abbreviate or round to those higher place values, etc. So, big-endian makes a lot of sense for writing and speaking. Little-endian really offers no advantages for speaking or writing. And in doing arithmetic, decimal answers expand in either direction (multiplication to the more significant, division and roots to the less significant), so you can't say one form is universally preferable there either. So big-endian wins!

4 months ago

rsynnott

Oddly, I believe French as spoken in Belgium actually _does_ have a proper word for 80.

4 months ago

Someone

Septante, octante/huitante and nonante are used for 70, 80, 90 in various French-speaking countries/regions (Switzerland, Belgium, parts of Canada: https://en.wikipedia.org/wiki/Acadian_French#Numerals).

4 months ago

tempodox

English also has fourscore (four twenty) = 80.

4 months ago

somat

I always thought eleven and twelve were remnants from when the system was more base twelveish.

It was never actually base 12, since that would have required inventing 0. But sometimes I wonder if we would not have been better off sticking with base 12.

And before someone chimes in with "base 10 is natural because we have 10 fingers": no, we have 8 fingers, so by that logic we should be using base 8. The real win of base 12 is counting with your finger bones (there are twelve of them), using your thumb to keep your count; use both hands and you can get to 100 (144 in base 10). This is probably why base-twelveish was so common: shepherds counting sheep. Try counting to 100 in base 10 on your fingers; not so natural now, is it?

4 months ago

Someone

> I always thought eleven and twelve were remnants from when the system was more base twelveish.

https://www.etymonline.com/search?q=eleven disagrees. It says eleven means “one left” and twelve “two left”, with an implicit “over ten”. That, to me, doesn’t look like base twelve was leading.

4 months ago

wakawaka28

Do you have any evidence that, say, "three million, one hundred twenty-five thousand, two hundred sixty-nine" would have ever been spoken starting with "nine" and ending with "three million"? I guess that wouldn't be the dumbest thing humans have ever done, but it sure sounds impractical.

4 months ago

kstenerud

The "dumbness" depends entirely on what you're used to. There's no actual need to lead off with the most significant digit other than convention.

"three million, one hundred twenty-five thousand, two hundred sixty-nine"

"nine, sixty, two hundred, thousands five, twenty, one hundred, millions three"

It could work either way.

And in fact, in the early days of the Hindu-Arabic numeral system's penetration into Europe, they actually DID lead off with the least significant digit (although numbers larger than thousands were rarely used, and the archaic wordings have only survived in the ones and tens digits - if at all for a particular language).

4 months ago

wakawaka28

>It could work either way.

Yes but one of those ways does not support abbreviation or interruption, or fractions. If you see a 9 digit number for example, you might want to just round off to one or two significant digits while reading it. Having the smallest components first presents obstacles to speech in much the same way Roman numerals do.

4 months ago

kstenerud

That's also convention talking. Looking at the number 443937215, it's trivial to identify the first two or the last two digits. And for counting digits to get an idea of the magnitude, we use separators like so: 443,937,215 (or 443.937.215 depending on what country you're in).

The only difference is whether you estimate this number as "about four hundred and forty million" or "about ten and five hundred million"

4 months ago

wakawaka28

It's not just the convention talking. People chose the convention over time to be the most convenient overall. But whatever, I don't care enough to keep arguing. Look at my other comments in this thread for more explanation if you want.

4 months ago

AnimalMuppet

> What we consider "natural" now is not what was originally considered "natural" during the early centuries of the Hindu-Arabic numerals' journey.

Which no doubt affected the byte order that the early Hindus and Arabs used for their processors. For processors made in the 20th and 21st centuries, however, the numeral order used by people in those centuries is a more relevant data point.

By raw human nature, humans can do it either way. For the humans we have now, with their background, one way is definitely more "natural" than the other. That is, it's more natural to them, because they come with a cultural background.

4 months ago

[deleted]
4 months ago

Tor3

Well, the bits jump around: bytes are ordered, but the bits aren't, with LE (between bytes), so there's always room for a never-ending discussion about what's best ("Are you a bit person? Or a byte person? Then your preferences may differ"). The best argument for LE would be that a processor like e.g. the 6502 could start processing the least significant byte while fetching the most significant byte, and that on a VAX you could pass a 4-byte integer to a Fortran function expecting a 2-byte integer and it would actually work (as long as the value was < 65536). That was actually done a lot back in the day, and created problems when recompiling the Fortran code for a BE architecture.

4 months ago

Tor3

I had another look at the actual article/RFC: This is more than just the little endian/big endian byte order, it's about the bit order of serial messages, where, unlike bytes, the bit order could actually be different (for bytes stored in memory the bits of each individual byte in modern / semi-modern computers are always stored in the same order whether it's a LE or a BE memory architecture). In a serial protocol you could send the most significant bit of the stream first, or the least significant bit first, and that's what's at first discussed in that RFC.

4 months ago

kstenerud

Yes, that's right. Since CPUs abstract away the bit ordering, it only starts to matter when dealing with bit-oriented communications. And since it is abstracted away, there's no real benefit to be gained by BE or LE at the bit-level.

4 months ago

akira2501

LE is easier to handle, BE is easier to read.

4 months ago

nicce

LE is also much faster in terms of performance. For example, because of the efficiency of the add or subtract operations.

4 months ago

bregma

At the circuit level (where the performance is determined) it doesn't matter if your half-adder combines its left and right inputs or its right and left inputs.

So no, there is no difference in efficiency or performance when it comes to endianness. The only time it would make a difference is if your memory bus width is less than your wordsize and you lack any kind of caching.

4 months ago

foldr

I wonder if you're thinking of the efficiency penalty for dealing with big-endian values on a little-endian architecture. As the sister comment says, there's nothing inherently more efficient about either byte order when it comes to the processor performing arithmetic operations on values in its native byte order.

4 months ago

p_l

Unless you have a bit-serial machine (close to extinct), LE vs BE matters not for ALU so long as you don't have to run the computation through multiple rounds (like having 32bit ALU but doing 64bit arithmetic)

4 months ago

danbruc

> [...] so long as you don't have to run the computation through multiple rounds (like having 32bit ALU but doing 64bit arithmetic)

That does not sound right; your byte ordering should not affect the ALU, it will always perform the same operations. If you are doing a multi-word add, you have to add from least significant to most significant word because of the carry. And the ALU has no idea what you are adding, whether the numbers are independent or part of a multi-word integer. At best I could imagine that there might be some impact when fetching operands, as in big-endian you have to fetch from decreasing addresses, which might be less efficient than fetching from increasing addresses.

4 months ago

magicalhippo

> so long as you don't have to run the computation through multiple rounds

To be fair, this is not exactly uncommon for many workloads, even on today's 64-bit machines.

4 months ago

Tor3

.. or a 6502 microprocessor doing 16-bit arithmetic with an 8-bit ALU. The 6502 was designed by a team who used to work with the big endian 6800, and chose little endian for their 6502 for a slight performance improvement.

4 months ago

p_l

That falls into my second case of using smaller ALU in two steps :)

4 months ago

[deleted]
4 months ago

weinzierl

Brilliant write up!

I do not really understand the "Sorting unknown uint-struct blobs" point.

Could you give an example or explain in more detail, what a "unknown uint-struct blob" is?

The odd/even advantage could be put even more strongly, because every additional bit you know from the little end gives additional information about the number's divisibility. For example, one bit tells divisibility by two (aka odd/even), two bits tell divisibility by four, and so on.

4 months ago

kstenerud

For example, if you had a file that comprised the following struct:

    struct someblob {
        uint64_t timestamp;
        uint64_t checksum;
        uint32_t item_count;
        struct something items[0];
    };
Even if you didn't know that a collection of files were structured this way, you could still read, say, the first 128 bits as an unsigned integer and compare them, and they'd just happen to be naturally ordered because the timestamp field grows from right to left, and would have precedence over the "lower 64 bits" of the checksum field.

It's a very minor benefit (of dubious real-world utility), but I wanted to be comprehensive :P

4 months ago

weinzierl

Thanks! That makes sense.

Mentally, I would put this in the "conventional" advantage category, because it relies on comparing fixed length chunks of memory and computationally it should not make a difference if `timestamp` is stored LE or BE for sorting.

4 months ago

foldr

A simpler case is reading only a fraction of a field. For example, suppose that you have a 8 byte key and you read the first four bytes of it. On a big-endian architecture, those are the high bytes and you can sort with them just fine (up to some level of detail). On a little-endian architecture, you'll be sorting by the lower bytes and the results will be meaningless. So the big-endian architecture allows you to sort by the first n bytes of a struct without caring what fields it contains. While there is obviously no guarantee that the results of this will be meaningful in the general case, it is far more likely than for a little-endian architecture.

4 months ago

weinzierl

My counter argument to this would be that it is as expensive to compare LE k[4]s with each other as it is BE k[0]s.

As long as you deal with fixed length chunks of data accessing it from either end should be equal effort (in first approximation[1]).

This is qualitatively different from the odd/even case, because for a number of unknown length you can tell odd/even in O(1) for LE but only in O(n) for BE (you have to find the LSB in n steps).

Mathematically there is more information in just having the LSBs than in just having the MSBs without knowing the whole number and its length. I think this is the only reason why LE is marginally better; everything else boils down to convention.

[1] I know that on modern architectures it can be faster to read memory upwards than downwards, because of the pre-fetcher, but this is what I meant with the advantage is because of convention. If we had a symmetric pre-fetcher the point would be moot.

4 months ago

foldr

True. There is a significant asymmetry, though, in that you are more likely to be in a situation where you know the starting address of an object and a minimum size than you are to be in a situation where you know the end address of an object and a minimum size. Strictly speaking that's also an arbitrary convention (as I guess the address of a struct could be defined as the address of its last byte), but it's a near-universal one.

4 months ago

kstenerud

Actually, in this case it would. Consider the layout (byte-by-byte):

    BE: t8 t7 t6 t5 t4 t3 t2 t1 c8 c7 c6 c5 c4 c3 c2 c1
In the big endian case, the byte-by-byte of the struct naturally places the timestamp at the high end of the 128 bit value you blindly read.

    LE: t1 t2 t3 t4 t5 t6 t7 t8 c1 c2 c3 c4 c5 c6 c7 c8
In the little endian case, it's the CHECKSUM at the high end of the 128 bit value.
4 months ago

weinzierl

I think we agree, but it nags me that I still can't follow your line of thought.

Do you want to:

- Compare just the timestamp, so

    1970-01-01 00:00 0x01
    1970-01-01 00:00 0x00
    1970-01-01 00:00 0x01
    1970-01-01 00:01 0x01
    1970-01-01 00:01 0x00
    1970-01-01 00:01 0x01
could be a valid ordering, with the first three and last three in arbitrary ordering, because the checksum doesn't play a role.

- Compare timestamp and checksum, in the sense of ordering all files with the same checksum by timestamp, like this

    1970-01-01 00:00 0x00
    1970-01-01 00:01 0x00
    1970-01-01 00:02 0x00
    1970-01-01 00:00 0x01
    1970-01-01 00:01 0x01
    1970-01-01 00:02 0x01
- Compare timestamp and checksum, in the sense that files with the same timestamp are ordered by checksum, in effect grouping equal checksum files together under their respective date.

    1970-01-01 00:00 0x00
    1970-01-01 00:00 0x01
    1970-01-01 00:00 0x02
    1970-01-01 00:01 0x01
    1970-01-01 00:01 0x02
    1970-01-01 00:01 0x02
    1970-01-01 00:01 0x03
In the first case you could just compare the first 64 bits, so I don't think that's it. The second case would be an advantage for little-endian, so it doesn't support your argument. The third case supports the argument for BE, but is an unusual thing to want.

In other words: Is the checksum crucial for your line of argumentation, or could you make your point with just a timestamp? If not, why not compare just 64-bit. If yes, I don't follow why BE is better in this case.

4 months ago

kstenerud

Basically, (and this is getting really esoteric at this point), if you use big endian byte ordering in your data structures when saving to disk, then you can place items in order of descending "sorting order" importance at the beginning of your file. Anyone wishing to sort such files wouldn't need to know anything about the actual structure of the file, or what is stored where. They could simply choose an arbitrary number of bits to read (say, 512 bits), do a big endian sort based on that, and it will always come out right (even though they're technically reading more than they have to).

    struct myfile {
        uint32_t year;
        uint8_t month; // Assuming packed structs here
        uint8_t day;
        uint32_t seconds;
        uint16_t my_custom_ordering;
        uint8_t some_flags;
        uint64_t a_checksum_or_something;
        char name[100];
        ...
    }
Reading the first 64 bytes from this file will give year, then month, then day, then seconds, then my_custom_ordering, then some_flags, then a_checksum_or_something, then the first few bytes of name (assuming we used big endian byte ordering). The extra bytes won't hurt anything because they're lower order when we compare.

To do this with little endian ordered data, you would have to:

1) Reverse the ordering of the "sortable" fields to: my_custom_ordering, seconds, day, month, year

2) Know in advance that you have to read exactly 12 bytes (no more, no less) from any file using this structure. If you read any more, you'll get random ordering based on the reverse of what's in the "name", "a_checksum_or_something", and "some_flags" fields (because they comprise the "higher order" bytes when reading little endian).

3) If you were to add another field "my_extra_custom_ordering", you'd have to adjust the number of bytes you read. With big endian ordering, you can still read 64 bytes and not care. You'd only care once your "sortable fields" exceeds 64 bytes - at which point you'd read, say, 100 bytes to be completely arbitrary... It doesn't matter because with BE everything just sorts itself out.

The comparator function is also much simpler with BE: Just do a byte-by-byte compare until you find a difference. With LE, you have to start at a specific offset (in the above case, 11) and decrement towards 0.

4 months ago

weinzierl

That made it click. Thanks a lot for your patience and the detailed explanation.

4 months ago

tekacs

This comes in really really handy in lexicographical ordering.

For example, if storing in the keys of a KV store a pattern of:

[u32, String, u32, String, …]

If you want those arrays to be sorted lexicographically, you’ll want to store those u32 instances in big endian, so that both those and the strings sort from left-to-right.

4 months ago

[deleted]
4 months ago

USiBqidmOOkAqRb

Silly aside: I didn't read the book and was very confused for a long time since I assumed endian meant the specified byte goes last.

4 months ago

djbusby

Little End In (first) == little endian.

4 months ago

dcminter

It may be a handy mnemonic, but it's not the etymology (you may know this, but I can't resist the pedantic opportunity!)

It comes from Swift's satire about egg eaters. The end in question was the small or large end of the egg and Big Endians broke the big end with the spoon - i.e. it went into the egg cup small end down.

The -ian suffix here is analogous to Christ-ian or Keynes-ian and has nothing to do with "in".

4 months ago

feverzsj

I think most industry wire protocols are still big endian.

4 months ago

unnah

Yes, the so-called "network byte order". Now that big-endian has lost on all other fronts, it is time to switch to little-endian in all future network protocols. We could call it the krowten byte order.

4 months ago

SSLy

Lmao, this has a raw, actual ASCII control code in the text: 0x0C, FORM FEED (FF).

4 months ago

dasyatidprime

^L is still normal as a separator in Emacs Lisp code files!

4 months ago

p_l

TECO (and thus Emacs) supported commands like "read page", which turned into movement by page in Emacs; that's why ^L shows up in Emacs Lisp (and sometimes in similar vintage code).

4 months ago

[deleted]
4 months ago

[deleted]
4 months ago