Arthur Whitney's 'B' Language (kparc.com)
151 points by chrispsn on March 26, 2019 | 107 comments


I studied this code for a long time. I have lots of notes, going through everything almost line by line. I would be embarrassed to make them public because I never finished (at some point, I was just rewriting the x64 reference), but if somebody is interested I do not mind sharing.

I would recommend that everyone doing C have a very careful look at this. The code is full of nice tricks, and you start appreciating the terse style after a while. I understand why this style is not more commonly used, but I really like it.
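To give a flavour of the style, here is a toy example of my own imitating it (the DO macro is the one from Whitney's J Incunabulum, linked elsewhere in this thread; the rest is mine, not a line from b.c):

    #include<stdio.h>
    typedef long I;                                   /* one-letter type alias */
    #define R return
    #define DO(n,x) {I i=0,_n=(n);for(;i<_n;++i){x;}} /* Incunabulum loop macro */
    I sum(I*x,I n){I s=0;DO(n,s+=x[i])R s;}           /* a whole function per line */
    int main(){I v[]={3,1,4,1,5};printf("%ld\n",sum(v,5));R 0;}
Once the handful of macros is in your head, each definition reads as a single visual unit, and that is most of the trick.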


I wish* there were a simplified implementation of this (or APL) in "normal"-looking code.

Array languages are the most "obscure" of all the paradigms I have looked into. Even concatenative ones have at least a few resources about how they work.

Also, the community lives in a parallel world! So when I ask about stuff, they redirect me to https://code.jsoftware.com/wiki/Essays/Incunabulum or https://github.com/kevinlawler/kona as if that somehow makes everything obvious ...

P.S.*: because I'm building a language that tries to draw some ideas from kdb.


You have ivy, for example: https://github.com/robpike/ivy by Rob Pike and, of course, gofmt'ed. I have had a quick look at a few APL implementations and some of them are very readable (by normal standards).

However, although I totally get your point (I've been there), I have to say there is much more to learn if you try to understand that parallel world. It can take a while, but I think it is worth it.


> Array languages are the most "obscure" of all the paradigms I have looked into. Even concatenative ones have at least a few resources about how they work.

Have you tried looking at APL books? There are dozens of books on APL you can find. There are journals and conference proceedings from ACM, IBM, Dyalog, and Vector UK. I started to get into APL more seriously last year and lack of resources is definitely not a problem in my experience, lack of time to read them all is. Also, it's more about thinking of algorithms in terms of matrix and array operations. A lot of the techniques and algorithms can be easily adapted from Matlab.


Well, there are books like "Mastering Dyalog APL", but they don't help you think in the language. There is the "Finnish APL Idioms" book, which helps a bit. APL and J certainly have libraries, but a lot of functions and libraries that are built into most languages are omitted from APL because you can replicate them with 3 characters. That's where knowing idioms helps. I am just a novice, but I really enjoy writing APL & J.

Aaron Hsu has been on a couple of APL-related posts on HN that I have linked to multiple times. He is a Scheme guru who switched to APL a few years back and wrote the Co-Dfns parallel GPU compiler for Dyalog APL. He has some enlightening talks on YouTube as well. I mention him here because his APL code is extremely terse and similar to Whitney's C, and he did an excellent job defending it in the HN post, which changed the way I look at code. Also, as a university TA, he has taught multiple courses on APL as a first programming language, and he has talked about writing a book that tries to teach how to THINK in APL, not just "this is how you transpose a matrix using this symbol", etc.

My problem is that I know what the symbols do, but I don't know how to switch from imperative/OO thinking to thinking in terms of array primitives (inverse, transpose, sort, outer product, etc.), as sketched below.
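For what it's worth, here is the contrast as I understand it, sketched in C since that's the thread's lingua franca (my own illustration, nothing authoritative):

    /* imperative habit: one item at a time, index bookkeeping in view */
    double dot(const double *x, const double *y, int n) {
        double s = 0;
        for (int i = 0; i < n; i++)
            s += x[i] * y[i];
        return s;
    }
    /* array habit: "multiply the vectors, then sum-reduce the result".
       In K the whole function above is just  +/x*y  -- the loop, the
       index and the accumulator all disappear into two primitives. */
The hard part is learning to reach for the second description first.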


"J for C programmers" is, in my opinion, an excellent resource for how to think in array-oriented style. http://www.jsoftware.com/help/jforc/contents.htm


I've flipped through it before, but I don't remember it being very enlightening (note that I didn't read the full thing or do all the exercises). I'll have to give it another shot. Thanks for the tip!


I would love to take a look at your notes. I'm fascinated by this and would greatly appreciate any help approaching it


Here it is for anyone who wants to have a look: https://docs.google.com/document/d/1W83ME5JecI2hd5hAUqQ1BVF3...

I recommend keeping the original files at hand for reference. Once you have understood the code the first time, they are a better reference than my many-page document. When AW says he hates scrolling, it is not because he is lazy; scrolling just doesn't go well with this style.

I am sure there are mistakes. If you find any, please let me know.


> b.c

> That was the easy part. Now it gets complicated.

Haha lol'ed. These notes are amazing. Thank you!


Thank you! This is fantastic!


You are a hero


Thank you! This is awesome


I assume you actually managed to compile the code and run t.b? What invocation did you use?


I'd love a copy of the notes if you have them.


For those interested but needing a little help deciphering, Thomas Lackner has a handy repo:

https://github.com/tlack/b-decoded


Q is one of the most important languages here at BitMEX. Complicated or not, it's on kdb that all our trades happen, up to $8B in 24h volume last summer; it's highly efficient at its task, and it has never made us, or any of our customers, lose a single satoshi.


With all due respect, requests to BitMEX during times of high load have >50% probability of hitting a 503 Service Unavailable error. Meanwhile, competitors with similar volume/activity don't have this problem.


They've done studies, you know. They say 60% of the time, it works every time.


Whitney’s code is what I expect code to look like post-singularity, when hyper-intelligent AIs program themselves!


+1 thanks

You got me thinking about how AlphaGo reinvented the style of Go for human players. A real AI taught one of our programming languages would code in a way that would be alien to us.

Whitney’s code is fascinating. I thought that someone might use a good C IDE to rename symbols to longer, human-readable names, but that would harm the terseness of the code.


Doesn't look like B. I'm a bit irritated.

https://en.wikipedia.org/wiki/B_(programming_language)


Whitney is so fascinating to me. His code is simultaneously awe-inspiring and horrifying. It's like he struck a Faustian bargain to gain programming powers beyond the ken of mere mortals, and is now cursed to write transcendently beautiful code that manifests as unintelligible gibberish to everyone else.


Everything surrounding APL gives me that feeling, but then I remind myself that, just like Chinese (which appears just as baffling to those who haven't learned the language but have experience with Latin family ones only), there's a not-insignificant number of people who use the language every day and are highly productive at it.

I like terse code and my style is closer to the early UNIX/K&R, yet I've noticed a lot of people already find that too terse; the "normal" style these days seems to be gradually getting more and more verbose (look at typical C# or Java, for example.)

IMHO the readability argument in favour of verbosity is a bit of a red herring --- you can easily "understand" a file full of extremely verbosely named functions, each consisting of a single line, but that doesn't help at all with seeing the big picture. Judging by the amount of file-flipping and stack-jumping when I have to debug a project in that style, it probably hinders it.

On the other hand, this is "code with a learning curve". You can't glance at it and understand immediately, but once you do take the time to read it carefully, you can understand more of the whole than if it was written in a more verbose style.


My sense is that, once you get to a certain size of project, it becomes both impossible and undesirable for everyone in the project to be able to keep the whole thing in their head. And that this boundary is something of an event horizon in terms of what's desirable in a coding style - up becomes down and down becomes up.

There's also a lot of code that's so tightly coupled to some business domain that it's natural to write it in the language of the business. That way you don't have to maintain separate mental models and translate between them.


If you think this is horrifying you should see what people-who-aren't-Whitney write when they're copying his style, or adding to his codebase.


I think this is one of the reasons he does not release his code. K is such a small language, with a very limited number of primitives that were chosen for the task they wanted to solve. Every user would want to swap the one primitive he never uses for the one he uses all the time, so we would have lots of k implementations, each with its own mess of spaghetti code and slightly different behaviors.

Speaking for myself, I would love a FOSS version of k, but I am sure as hell I would patch a couple of things, and I am sure I would write much worse code than Whitney's (in his style or in mine).


I think the closest thing is https://github.com/kevinlawler/kona


Kevin Lawler and Scott Locklin (he posts on here sometimes) tried to make a tsdb similar to kdb+ called Kerf, I think. I don't think it worked out, but it would've been nice to have a more affordable competitor.


FWIW, jd is pretty good and is definitely more affordable than Kx or Kerf.


Thanks Scott! I remember reading one of your posts somewhere talking about Jd. I'm curious whether you still use it, or whether it (or your admittedly non-expert J knowledge) made sticking with R, Torch7, Lush, TensorFlow, etc. more tenable. I figure you might also use Kerf. Speaking of which, would you and Kevin ever open-source the code if you're not selling it anymore?


I use jd on my side projects; it's great. J in general is what I invest in for side projects; powerful tool, best user community and decent support for the types of things I need.


Thanks for the recommendation. I'll keep that in mind. If you ever get the chance I'd love to read some more J related articles as I find the language weirdly fascinating even though it ain't easy. The community doesn't write a lot of marketing junk either. I've been reading through "J the Natural Language for Analytic Computing" to scratch the itch when it comes up. I'm always impressed when I find that it has things like derivatives as a native primitive.


That's a really badass trick when you look at how it is implemented. Another cool thing to look at is Obverse. https://code.jsoftware.com/wiki/Vocabulary/Inverses

There's a lot of material on their website, though it's not well organized and some of it is out of date.

https://code.jsoftware.com/wiki/Main_Page

I find it rewarding to screw around with, though the only jobs you're gonna get are K/Kx related.


I've heard of obverse, but had forgotten about it. That is neat. Thanks for sharing.


A more up-to-date version: https://github.com/JohnEarnest/ok


I think a good introduction to this style of coding is the first "proof of concept" prototype interpreter for a small subset of J, also written by Whitney:

https://code.jsoftware.com/wiki/Essays/Incunabulum

Once I realized it was K&R C, I thought it was pretty straightforward.
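For anyone who hasn't seen pre-ANSI C, the giveaway is the old-style function definition, where parameter types are declared between the parameter list and the body. A minimal illustration (mine, not from the essay):

    typedef long I;
    #define R return
    /* K&R-style definition: the types follow the parameter list */
    I mx(a,b)I a,b;{R a>b?a:b;}
    /* ANSI equivalent: I mx(I a, I b) { return a > b ? a : b; } */
Modern compilers still accept this in older C modes; the syntax was only removed in C23.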


This is worth a look. Some of the old A+ stuff is as well. I'm told by reliable sources that an awful lot of Art's code looked like this over the years; even the more recent K7 stuff. Art's gonna do his thing. Opinions differ as to whether or not it is a generally good idea, but you can't argue with the results, and at this point I find stuff like the J source code to be fairly readable, even if it is really different from what most people are used to. Basically, you're just expressing C as APL primitives. If you understand APL primitives, it's not so bad.

If you want to see APL expressed as C primitives, something like Nial is pretty good: https://github.com/danlm/QNial7


I ran it through gcc -E and clang-format.

https://pastebin.com/21qiRDi5
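(In other words, roughly `gcc -E b.c | clang-format`: expand the macros with the preprocessor first, then let the formatter reflow the result. The exact flags may have differed.)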


And not surprisingly, that makes it less readable. It is a lot harder to have one-character names when you need to scroll across many pages to look up what they mean. So the more code you have, the longer your names need to be. As Eric Evans points out, naming things in computer code is difficult and important; wrong names can be misleading. This style avoids the problem by avoiding names.



This is from 2015. I really wish something more from Whitney/kparc would materialize.

Whitney/kparc previously covered on HN:

https://news.ycombinator.com/from?site=kparc.com


He just released a new product a few weeks ago: http://shakti.com

Downloadable now: https://www.reddit.com/r/apljk/comments/b1l5hi/shakti_trial_...

Didn't get much attention on HN: https://news.ycombinator.com/item?id=19326007



I did not know that was his. Thanks for the tip.


Well, at least there are comments, right?


    // :+-*% ^&|<=>  x64 JJ Jj o2 cc tst RET cll psh pop acdbsbsd89..  o[m[s|d]] c3 eb+1 e8+4 e9+4 [f2/66/4*][0f] 5* 7*+1 b*+4 0f8*+4  03 23 2b 3b (6b 83) 89 8b ..



Binaries can now be obtained from Anaconda: https://anaconda.org/shaktidb/shakti

Not sure what was on gitlab, I assume not source?


The install for Anaconda is over 500MB.

If it is anything like k, the "shakti" binary is probably small, maybe under 260KB.

Is there a way to download just the shakti binary?

What are the restrictions on the trial version of shakti?

Is it time-limited (e.g., 30 days)?

Does it need an internet connection to be able to phone home? (I think FD/Kx started doing this with their 64bit trial.)

The backstory for all this is that Whitney sold his remaining interest in Kx around July of last year for around 53 million.

Quick US trademark search does not show any filings for a "shakti" mark that covers computer software.


Just download it ‘manually’ from https://anaconda.org/shaktidb/shakti/files and tar -xf it.

bin/k is 185K


Thank you! Exactly what I was looking for. They even give us a static binary.

Alas, I am getting "Illegal instruction (core dumped)". Will have to try another kernel.


You need AVX.


I guess I need to try a different CPU. Cheers.


There's miniconda, which is much, much smaller (a few megs) and can still install shakti, AFAIK.

miniconda is just the package manager of anaconda, with no preinstalled packages.


License is here: https://shakti.com/license

No phoning home, etc., although the license does say you're limited to 30 days (it remains to be seen if/how that is enforced).

FWIW, kx didn't _have_ a 64bit trial before they released it with the phone-home stuff; before that, the trial was limited to 32bit only (still available & no connection required).


I did not mean to imply otherwise. The phoning home was introduced with the 64bit trial.

If I recall correctly, the 32bit trial used to time out after some number of hours after it was launched. This prevented, e.g., running it continuously as a server. Not sure if they still do that.

Going further back, I also recall early k trials that put limits on the size of the workspace.


Yeah the 32bit timeout was removed some time ago


If you just want the conda environment build/maintenance tool, just install 'miniconda' from the Anaconda site.[1] You don't have to have a whole numpy/scipy/jupyter environment if you don't want it.

[1] https://docs.conda.io/en/latest/miniconda.html


404: Not Found


Why does the makefile need to run $(CC) under sudo?


It's writing directly to /bin.
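Presumably a rule shaped something like `b: b.c ; sudo $(CC) -o /bin/b b.c` (my guess at the shape, not the actual makefile): compiling and installing straight into /bin in one step is what drags sudo in.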


Got it. I guess that answers my question but leaves my eyebrow raised.


Is that the only thing that makes you raise your eyebrow?

:)


If I’m honest, I think it’s the most intelligible file in the folder (besides the readme), so its issues were more accessible to me.


This code is a horrid abomination. I don't care if this guy is a genius and could single-handedly write the best OS in the history of computing; his code is write-only, unmaintainable garbage. There's nothing elegant or praiseworthy about this at all.


In your opinion. I have no issues with it, but it helps if you work a lot with k/J, I guess. It is not write-only at all; it just feels that way because it is so terse that if you change something, it will generally affect a larger percentage of the code than in other languages. As for readability: again, it is so terse that it will probably take you more time to understand an A4 page of this than an A4 page of C#. However, I find it easier to comprehend complex code written like this than the same code spread out over hundreds of files with design patterns sprinkled on top. But YMMV; until you have done significant work in an APL, I am not sure you can comment the way you did. It is another world.

In short: like Whitney, I hate scrolling in code. Multiple high-res monitors with bunches of tabs and files open, scrolling around and trying to figure things out with IDE ‘jump to implementation’, ‘find references’, etc. is just not very efficient compared to this. In my humble opinion.


I, too, hate scrolling in code (and almost never used Visual Studio's "jump to X" navigation helpers) but the reality is that each and every non-trivial program we write (which is practically ALL of them) will never come close to fitting on our screens. The real problem is always the limit of our ability to hold the mental model of the code within our mindscape as we operate on the patient. For me, having another abstraction to wade through on the screen is just another murky impediment to my reshaping the model.

As an inveterate simplifier of code and its formatting for clarity, I have gone to doing only a single thing on a single line, even variable declarations [note I only have personal projects now]. The primary purpose behind this is that such extreme simplification makes the code easier to process algorithmically for meta-code (i.e., IDE-ish) work, and narrows the horizontal extent of its format on both paper and screen.

Now that I think about it, it seems that Whitney's perspective might flow from his not using a code-folding editor, where each chunk of code can be collapsed to its header comment stating what it does. Aren't we usually dealing with the higher-level flow of logical chunks instead of the nitty-gritty details of each?


If you don't know the language, of course most things will seem incomprehensible.

    b[Ii]{h:#x;l:0;while(h>l)$[y>x[i:/l+h];l:i+1;h:i];l}
    b -> function name
     [Ii] -> type declaration
          h:#x;l:0; -> initial definitions
And the rest is a bog-standard binary search with an implicit return of l. Honestly, it's more understandable than Java with annotations or other magic constructs.
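For the curious, here is my reading of it expanded into C. Treat it as an interpretation; in particular, taking `i:/l+h` as the midpoint is my assumption:

    /* lower-bound binary search: index of the first element >= y */
    int b(int *x, int n, int y) {        /* b[Ii]                */
        int h = n, l = 0;                /* h:#x; l:0            */
        while (h > l) {                  /* while(h>l)           */
            int i = l + (h - l) / 2;     /* i:/l+h  -- midpoint  */
            if (y > x[i]) l = i + 1;     /* $[y>x[i]; l:i+1;     */
            else h = i;                  /*            h:i]      */
        }
        return l;                        /* implicit return of l */
    }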


itsnotthatitcantbereadbysomeonewhoknowsthelanguageitsthatitsincrediblydenseandhardtoreadandlacksanymeaningfulcontextclues


I actually had no problem reading that!

At first glance, it looks unsightly but when one actually starts reading it (all of it, not just looking for "clues"), how it looks makes little difference.

These threads that mention k are always entertaining. There is usually some amount of comments like yours.

There is this concept called "Tall Poppy Syndrome". I am not sure what we might call this stuff here (it should have a name), but we see some of it every time k comes up in a thread on HN.

Fortunately we also see people commenting who like APL or at least are curious. Not everyone is trying to cut Arthur Whitney down for being good at what he does.


    sentence(lang HumanLanguage){
        if(lang!="EN"){
            throw_exception(UNKNOWN_LANGUAGE_EXCEPTION);
        }
        with subject(combine_adjective_noun("context", "clues")){
            verb[to_be::present::plural]"are"([adjective]"overestimated");
        }
        full_stop;
    }
Do you find this more readable than a simple English sentence?

AW's code is way too terse, but most code is much more verbose than I'd like it to be. Finding the sweet spot is difficult (and it may change depending on whether you are coding just for yourself or for a larger audience). In order to find a good compromise, it is always interesting to explore the possibilities in both directions.

We have code with many safety measures, extensive comments, and visual clues, and we have code like this; in contradiction with every theory, the terse style does quite well in terms of bugs and programmer efficiency. Why? Is that a coincidence? Could we have the best of both worlds?

I do not say you should use this style for your projects, you don't even have to like it a little bit, but criticizing it because of its disadvantages without trying to understand its advantages is not a very productive point of view, in my opinion.


Thank you for saving me the trouble of having to write the same thing.

We programmers often seem to forget that we spend far, far more time reading code than writing it, therefore readability should be a (if not the) primary consideration when engineering it.

That said, I do understand where Whitney's terseness impetus comes from: I have always lamented how little of my code will fit on the screen, with the number of vertical lines being the limiting factor; that is precisely why my taskbar is on the left-hand-side. (And don't get me started on monitors designed for watching wide-screen movies -- 1600x1200 FTW!)

As such, I can't really fault his intention as much as his execution, and I feel that ultimately the solution to his (really, our) problem is that all our code (not just his) needs an IDE that can expand such terse definitions as per the current programmer's preferred style. This has been my perspective for many years, especially after wrangling various SQL dialects and the mostly awful formatting preferences of my peers.

As I see it, the ideal solution is to store all code files as their token streams and have a default format that can be customized by each programmer in the IDE. As a result, each program would be stored as its pure content (which would also help with version control, so long as whitespace is ignored) AND each programmer would get to work with their preferred perspective. Of course, the problem with this methodology is that the tokenizer and formatter had better be flawless or you're f*ed, not to mention the fact that there are various IDEs that people like to work with.


Did he ever release K OS?

I've seen a few bits and pieces from Geocar but never saw a release or download.


It says same ops, but it seems some are missing? I see no modulus. Am I reading this right?


Where is k.c?


Why would you expect a k.c? This is b. There's a b.c.


It's referenced in the Makefile.


On a possibly unhelpful tangent, I'm expecting it to look something like this: https://code.jsoftware.com/wiki/Essays/Incunabulum



I'm so tempted to just respond with "wanker", though I know HN norms don't allow that. Either the code is for human consumption (in which case, fulfil that goal) or you're writing demoscene assembler which displays virtuosity without any positive impact on society.


I think the problem is, we intuitively estimate the readability of code based on the space it takes up, when we should take into account the information density.

That code is extremely readable - if you assume it will take just as long to understand as the entirety of the GCC compiler: https://github.com/gcc-mirror/gcc


Read "A programming language", where APL came from - https://www.amazon.com/Programming-Language-Kenneth-Iverson/... - you might get more understanding why it is the way it is.

It is mathematical notation, put into what Dijkstra called "technologies of the past", completed (to be Turing-complete) and made executable.

At least, I think, this is the basic idea.


The code I linked to was written in one session (probably) and it really isn't comparable to GCC in complexity. It's an interpreter, but one where the author has made no attempt to make the process of interpretation legible. He's too busy "achieving" things with his code.

I guess there should be a niche for people like that, but the APL/post-APL community seems to think that Arthur has special computer science knowledge that can be best (or only) expressed in this form. We all know that's bullshit.


Companies pay large licensing fees to use this software. There must be some "positive impact" on their business.


k is probably omitted because it's not open-source.


Programming is not telling the machine what to do. That's the easy part.

What programming is about, and what's hard, is telling the next programmer what the machine does.

This makes the easy part a bit easier, and the hard part much harder, than using C. And I already disliked C to begin with, for similar reasons.


Now I think you're right, but there are two strategies, which boil down to the following: "you can either make a program that's so complex there is no obvious defect, or you can make a program so simple it's obvious there is no defect."

What it says is that the hard part of making the next programmer understand what the machine does is not reading the code; it's the code's complexity. Training yourself to read terse code is O(1); understanding code is at least O(N), where N is the number of lines of code.

Before you start arguing, try it. Try writing code as simple and succinct as possible. You won't go back to anything else because it works. It does make programming simpler and more productive.


I did, once, when I was at school. It was great - everything was so fast to type, and the program (a somewhat complex game) would fit on two screens. I almost finished it in a week.

And then I had exams, and did not touch the computer for two weeks. When I came back, I had forgotten it all. Cx? Dzy? Those names were meaningless.

This was a very important lesson: programs must be maintainable. Unless you want to spend your entire life working on a single piece of software, you want to be able to context switch, and do it quickly.


Many people use descriptive variable names to carry semantic information.

APLers use extremely consistent, stereotypical naming conventions which also carry semantic information. Think "Hungarian Notation" except it's just the prefix part. With consistently applied terse names, the same idea will result in the same code. Easier to visually pattern-match.

It's also worth considering that longer names are not necessarily more meaningful. It's pretty common for everyday programmers to use single letters for something like a loop induction variable. Sometimes longer is just... longer. Consider these three semantically identical K definitions:

    a:{x*x}
    b:{[n]n*n}
    c:{[number]number*number}
Is that third version really clearer than the first?


Most people have difficulty with the fact that an array expression can very compactly represent some powerful functions. For example,

  (+/*)\:
is matrix multiplication. The following computes the transitive closure of a binary relation (represented as a boolean square matrix):

  {x|x(|/&)\:x}
Its core has a similar "shape" as matrix multiplication:

  (|/&)\:
So in a sense variable names don't matter all that much! To grok this code you have to stop thinking in terms of item-at-a-time operations and start thinking in terms of operations over collections of items. If you know how to write unix shell pipelines, you are already somewhat familiar with this paradigm, except that a shell doesn't provide most of the more useful features of array languages!
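The "same shape" observation becomes concrete if you expand both into C (my own sketch): swap the reduction + and the pairwise * for | and &, and matrix multiplication turns into the transitive-closure step.

    /* c = a times b over (+,*): ordinary matrix multiplication */
    void matmul(int n, int a[n][n], int b[n][n], int c[n][n]) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                int s = 0;
                for (int k = 0; k < n; k++) s += a[i][k] * b[k][j]; /* (+/*) */
                c[i][j] = s;
            }
    }
    /* the same skeleton over (|,&): one step of the transitive closure */
    void step(int n, int a[n][n], int b[n][n], int c[n][n]) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                int s = 0;
                for (int k = 0; k < n; k++) s |= a[i][k] & b[k][j]; /* (|/&) */
                c[i][j] = s;
            }
    }
Only the innermost pair of operators differs; everything else is identical, which is exactly what the one-glyph difference between (+/*)\: and (|/&)\: expresses.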


I think a good rule of thumb is to use longer names in bigger scopes. So a variable in a short loop or an argument to a 1-line function can be a single char, while arguments to medium-length functions are a word, and globals (including functions) are one or more words.

To get to your example, the preferred signature would be:

    square:{x*x}


You could have made a note of what the variable names stood for and put it at the top or end of each source file, in a separate file or even on a piece of paper.

There is nothing inherently disadvantageous about using abbreviations as long as you have a key to the abbreviations.


> Cx? Dzy? Those names were meaningless.

A programmer doesn't rely on variable names to understand what the machine does, or only very little. The contrary is a myth. In other words, you always check what a variable contains anyway. That being said, I make a distinction between obfuscated code and less code. The former is about presentation, which in my opinion you're free to decide, while the latter is about the number of nodes in the AST. Sort of.


I hereby inform you that I, a programmer, _do_ use variable names to understand what other programmers want the machine to do.

This is 100% real, and not a myth.


> Training yourself to read terse code is O(1), understanding code is O(N) where N is the number of lines of code.

Even granting that, this:

> f[iII]{m:0;$[k:-x;W(x){x-:1;j:y;N(k)y[i]:y[+i];y[k]:j;j:f[k;y;z];$[m<j;m:j;]};{N(#y)z[i]:y[i];W(j:z){m+:1;N(j){k:z[i];z[i]:z[j];z[j]:k;j-:1}}}];m}

Is not a single line of "terse code". It's a bunch of lines of code with the newlines and spaces removed.

> Try writing code as simple and succinct as possible.

I already strive to do that (I don't always succeed). But the operative words here are as possible. Not "above anything else".


I didn't mean to say "the number of nodes in the AST". The presentation and the variable names don't matter in terms of less code, I believe. Whitney's code is obfuscated, which is distinct from less code, IMO. To some extent. Succinct code is kind of great.

Just to confirm, I'm saying that after the user, the most important thing is less code.

In more practical terms, I'd say try "above anything else" to see where the actual limit is. In my experience, it's way further out than where traditional programming sits.


Worth noting that in the Excel world (a primitive array language), for business users, dense formulas without spaces or newlines are the norm (although it is possible to use those features). Syntax highlighting and step-by-step evaluation features make them easier to work with.


"Programming is not telling the machine what to do."

What is that called then?

I am interested in controlling hardware, whatever that is called.

I think Torvalds once said in a comment that this is why he wrote his own kernel. He wanted better control of his hardware. Something like that.

"... telling the next programmer what the machine does."

You mean what the code does?

This sounds like drudgery. I can see why one might be overly sensitive about how source code looks before even attempting to read it.

I am just a hobbyist writing programs for myself. There is no "next programmer".


Unless you are writing one-off programs that you never come back to -- which is certainly possible! -- it's possible that "future you" is the next programmer.


That is true. There are some I never edit as long as they continue to work. Others I edit on a regular basis. Others are really "one-offs" that I keep just in case. If I need them again in the future, I might rewrite them from scratch if my learning has changed significantly since the last edit.

I have found the best thing I can do for the "future me" is to keep programs relatively small and keep the number of files low. This is one reason I like terseness and the idea of keeping "everything" (almost) on one page/screen.

I am just not capable of understanding a large project as deeply as I would like. I thought this paragraph summarises the issue well:

"To them, once you've sufficiently studied that screen or two of code, you can understand all of it at the same time. If it's spread out over thousands of files, it's very difficult to understand all of it, which leads to bugs, unnecessary abstraction, and the need for advanced tooling just to work with your own project's code."

This is from https://github.com/tlack/b-decoded


Bingo.


It is tiring and unproductive to parse other programmers' BS made-up semi-languages. If your shit is so complex you can't handle "tab-switches" you are doing it wrong. Stop crafting these 20-dimensional rhombicosidodecahedrons if you work in a team, please. Just use regular paper and fold a bunch of swans.


Whitney follows the enlightened solution to the NL=\n|\r\n debate: just don't ever end your lines.


It is notable that newlines are a relatively recent invention in the history of writing anyway. But then again, so is whitespace.


Which also solves the tab vs spaces debate.


I believe the modern term is 'galaxy brain'



