One notable thing about Scheme and Common Lisp is that many of their identifiers and keywords are excessively long: (string->integer), (lambda), (concatenate), etc. Yes, you can easily assign aliases to them, but doing so has downsides of its own.
So Spark will provide short identifiers for the most common operations (either macros or functions) by default. There won't be too many such aliases, because they would enlarge the core and require more time for the uninitiated to learn and become familiar with, but some will be considered.
For example, a variation on my favourite Scheme statement is:
(vector-set! myarray idx (+ (vector-ref myarray idx) 2))
Which in Perl is:
$myarray[$idx] += 2;
And in Spark would be:
(+= (myarray idx) 2)
Which isn’t much worse than Perl. We’re not trying to beat Perl 5, Perl 6, or much less J at code golfing in every case - this isn’t a peeing contest. We’re just trying to make easy tasks easy.
Furthermore, while +=, -=, /= and friends will be defined, there will also be a macro (defined by the prelude and used there) to define your own ${op}= operators, like (list=) or (myfunc=).
And as you can see, we will try very hard to make every sane expression usable as an lvalue.
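Since Spark’s syntax is still unsettled, here is a purely hypothetical sketch of how such an operator-defining macro might look in use (the names def-op= and list= are illustrative only, not a committed design):

(def-op= list)            ; hypothetically defines a (list=) operator
(list= x 4 5)             ; would expand to something like (set! x (list x 4 5))
(+= (myarray idx) 2)      ; places, not just variables, work as lvalues

The idea is similar in spirit to Common Lisp’s define-modify-macro, which builds read-modify-write operators on top of setf-able places.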
And in case you’re worried, there will be “++” and “--” operators too, with post-increment/post-decrement and pre-increment/pre-decrement variations.
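A hypothetical sketch of how the variations might be spelled (the exact names, such as post++, are illustrative guesses, not settled syntax):

(++ i)               ; pre-increment: bump i, evaluate to the new value
(post++ i)           ; post-increment: bump i, evaluate to the old value
(-- (myarray idx))   ; increment/decrement operators accept places too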
Finally, inspired by Arc, we decided to do something about excessive parens. So we will have a (with (k1 v1 k2 v2 k3 v3) … ) scoping construct instead of the unwieldy Scheme (let*) and (letrec) (both will be easily replaceable by (with …) with some macro or VM trickery). And we’ll have a C-style for loop instead of the obscure (do …), and a while loop, and a Perl 5/Perl 6-style foreach loop, and maybe other loops too. And you can always use recursion.
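For contrast, a hypothetical sketch of what this might look like next to the Scheme it replaces (Spark’s binding and loop syntax is not finalised; the Spark forms below are illustrative only):

; Scheme:
(let* ((a 1) (b (+ a 1)) (c (+ b 1)))
  (display c))

; Hypothetical Spark equivalent - fewer nested parens:
(with (a 1 b (+ a 1) c (+ b 1))
  (print c))

; Hypothetical C-style for loop:
(for (i 0) (< i 10) (++ i)
  (print i))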
However, we’re not going down the Arc route of assigning extremely short identifiers that are hard to pronounce and grep for. (fn) - how do you pronounce it? fnn…? There’s no such sound in Hebrew, so it’s verboten by your Hebrew-speaking overlords. We like (fun …) because it puts the fun back in function (“Functional!! Parallelism!!!!” - oh wait! Wrong language.), and we like (sub …) because it puts the “sub” back in subroutine. And all Hebrew speakers will rejoice, because they can pronounce “cat” exactly like “cut”, and Perl like Pérl, and Lisp like Leesp, and they can pronounce TeX and LaTeX with an honest-to-god khaph (or a Heth if they put their mind to it), and the god of the Israelite programmers saw there were only 5 and a half vowels, and he was pleased.
Seriously now, I don’t like (fn) because it’s hard to pronounce, doesn’t sound right when you read it in your mind’s ear, and is obnoxious. While being succinct is a noble goal, picking psychologically sound and intuitive conventions is also important. I recall searching the Arc tutorial and documentation for a (not) function, only to find it was spelled (no):
(if (no soup) (print "soup is false"))
“If no soup”. Oh no, no! No soup for you. For one year!!! We’re going to have (not) like everybody else, and also a “!” alias:
(if (not soup) (print "soup is false"))
(if (! soup) (print "soup is false"))
So we’re going to borrow stuff from Arc, but only when it makes sense. Spark should have an up-to-date documentation manual right from the start, kept as current as possible, and it will naturally have automated tests, which will serve as automatically verifiable examples. Arc really had none of that, and you often needed to read the implementation code, or one of the example web applications.