Commit messages
pair or a loose symbol.
The ability to bind chunks of code to run in the session when a module is bound.
Add some missing builtins:
* and
* or
* uncons
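
A rough sketch of how `uncons` might be exposed as a Python-level builtin; `Pair`, `register_builtin`, and the environment layout are hypothetical names used only for illustration:

    # Hypothetical sketch: how `uncons` could be wired up as a builtin.
    # `Pair` and `register_builtin` are illustrative, not Actinide's API.
    class Pair:
        def __init__(self, car, cdr):
            self.car = car
            self.cdr = cdr

    def uncons(pair):
        # Split a pair into its head and tail in a single call.
        return pair.car, pair.cdr

    def register_builtin(env, name, fn):
        env[name] = fn

    env = {}
    register_builtin(env, "uncons", uncons)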
mostly syntax highlighting
makes things much easier to visually parse
Fixes #1
I forgot about wrapping.
Fixes #2
Started a language manual outline.
Removed stray primer.
This includes a fairly complete quasiquote system, and a complete rework of the expander.
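
For flavor, here is a stripped-down version of what a quasiquote expander does, using Python lists as stand-ins for s-expressions; it ignores unquote-splicing and nested quasiquotes, and the real expander certainly differs:

    # Simplified quasiquote expansion: `(a ,b)  =>  (cons 'a (cons b '()))
    def expand_quasiquote(form):
        if not isinstance(form, list):
            return ["quote", form]
        if form and form[0] == "unquote":
            return form[1]
        # Rebuild the list element by element with cons.
        expanded = ["quote", []]
        for element in reversed(form):
            expanded = ["cons", expand_quasiquote(element), expanded]
        return expanded

    assert expand_quasiquote(["a", ["unquote", "b"]]) == \
        ["cons", ["quote", "a"], ["cons", "b", ["quote", []]]]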
Two reasons:
1. Making it a builtin defeats tail call optimization, as builtins do not participate in TCO.
2. The `begin` form is occasionally generated by the macro expander, and should not rely on library support.
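
A minimal sketch of the special-form version, in continuation-passing style with hypothetical names: every form but the last is evaluated for effect, and the last form is evaluated with the caller's continuation, which is what keeps it in tail position:

    # Hypothetical evaluator fragment; `evaluate` takes an expression,
    # an environment, and a continuation.
    def eval_begin(forms, env, evaluate, continuation):
        if not forms:
            return continuation(None)
        head, rest = forms[0], forms[1:]
        if not rest:
            # Last form: reuse the caller's continuation (tail position).
            return evaluate(head, env, continuation)
        # Earlier forms: evaluate for effect, then continue with the rest.
        return evaluate(
            head, env,
            lambda _value: eval_begin(rest, env, evaluate, continuation))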
This makes it a bit easier to pass and return multiple values, and cuts down on the number of extra tuples created and dismantled.
This cuts down the cost of calling a function, as it now reuses an existing continuation rather than reconstructing the continuation for each call. Tail calls are now slightly more explicit.
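
One way to picture the difference, with purely illustrative names: a call in argument position needs a fresh continuation that remembers the pending work, while a call in tail position just passes the caller's continuation through:

    # Evaluating something like (+ 1 (f x)): the call to f is not in tail
    # position, so a new continuation is built to finish the addition.
    def eval_add_one_of_f(f, x, continuation):
        return f(x, lambda result: continuation(result + 1))

    # Evaluating (f x) as the whole body: the call is in tail position,
    # so the existing continuation is reused unchanged.
    def eval_f(f, x, continuation):
        return f(x, continuation)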
This doesn't support macro expansion, but does support some basic syntax
niceties. Macro expansion requires quote and quasiquote support.
This implements a continuation-passing interpreter, which means we get tail calls for free. I stopped short of implementing call/cc, because I don't think we need it, but we can get there if we have to.
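
A toy illustration of the general technique, not Actinide's actual code: in continuation-passing style each step can return a thunk describing the next step, and a small trampoline loop runs them, so deep chains of tail calls never grow the Python stack:

    # Toy trampoline. Assumes final results are not callable.
    def trampoline(step):
        while callable(step):
            step = step()
        return step

    def countdown(n, continuation):
        if n == 0:
            return lambda: continuation("done")
        # Tail call: return a thunk instead of recursing directly.
        return lambda: countdown(n - 1, continuation)

    # Runs without hitting Python's recursion limit.
    print(trampoline(countdown(1_000_000, lambda value: value)))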
Ports are the lisp abstraction of files and streams. Actinide ports additionally guarantee a peek operation.
This makes ``tokenize`` (now ``read_token``) callable as a lisp function, as it
takes a port and reads one token from it. This is a substantial refactoring.
As most of the state is now captured by closures, it's no longer practical to
test individual states as readily. However, the top-level tokenizer tests
exercise the full state space.
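
A minimal sketch of a port that guarantees a peek on top of an arbitrary text stream, using a one-character buffer; the class and method names are illustrative rather than Actinide's actual interface:

    import io

    class Port:
        # Wraps a text stream and guarantees a one-character peek.
        def __init__(self, stream):
            self.stream = stream
            self.buffer = None  # holds the peeked character, if any

        def peek(self):
            if self.buffer is None:
                self.buffer = self.stream.read(1)  # "" at end of input
            return self.buffer

        def read_char(self):
            if self.buffer is not None:
                char, self.buffer = self.buffer, None
                return char
            return self.stream.read(1)

    port = Port(io.StringIO("(+ 1 2)"))
    assert port.peek() == "("
    assert port.read_char() == "("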
* Add a top-level test that roundtrips sequences of tokens. (This found a real bug. Thanks, Hypothesis!)
* Remove type conversion from the tokenizer. This simplifies the code, and makes testing considerably easier.
* Fix some bugs in string literal parsing (again: Thanks, Hypothesis!)

Document the test cases, and the case-by-case strategy, better. This also involved prying apart some tests that cover multiple cases.

Stop treating empty strings as if they were EOFs. (Thanks, Hypothesis!)

fixup! Stop treating empty strings as if they were EOFs. (Thanks, Hypothesis!)

Remove type conversion from the tokenizer. It turns out that this made the tokenizer harder to test, because it was doing too many things. The tokenizer now _only_ divides the input port into tokens, without parsing or converting those tokens.

Fix up tests for fuck
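
The roundtrip property from the first bullet might look roughly like this; `render`, `tokenize`, and the token strategy here are stand-ins for whatever the real test suite uses:

    # Hypothetical property test: rendering a token sequence and then
    # tokenizing the result should give back the original tokens.
    from hypothesis import given, strategies as st

    symbols = st.text(alphabet="abcdefghijklmnopqrstuvwxyz+-*/<>=?", min_size=1)

    def render(tokens):
        return " ".join(tokens)   # stand-in for the real printer

    def tokenize(text):
        return text.split()       # stand-in for the real tokenizer

    @given(st.lists(symbols))
    def test_tokens_roundtrip(tokens):
        assert tokenize(render(tokens)) == tokens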