This is primarily renames and repackagings.

---

(This is part of a larger reorganization.)

---

sequence.

---

Per-channel event sequences were a cute idea, but they made reasoning about event resumption much, much harder (case in point: recovering the order of events in a partially ordered collection is quadratic, since it's basically a topological sort). The minor overhead of a global sequence number is likely tolerable, and this simplifies both the API and the internals.
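To make the simplification concrete, here is a minimal sketch of resumption against a single global sequence, assuming an sqlx `SqlitePool`; `channel_id` and `body` are illustrative column names (only `message` and `sequence` appear elsewhere in this log):

```rust
use sqlx::SqlitePool;

#[derive(sqlx::FromRow)]
struct StoredEvent {
    sequence: i64,
    channel_id: i64,
    body: String,
}

/// Fetch every event after the client's resume point, regardless of channel.
/// With one global sequence there is no per-channel bookkeeping to merge.
async fn events_since(pool: &SqlitePool, last_seen: i64) -> sqlx::Result<Vec<StoredEvent>> {
    sqlx::query_as::<_, StoredEvent>(
        "SELECT sequence, channel_id, body FROM message \
         WHERE sequence > ? ORDER BY sequence",
    )
    .bind(last_seen)
    .fetch_all(pool)
    .await
}
```

The resume point becomes a single integer instead of a per-channel vector.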

---

This (a) reduces the amount of secret-passing that's needed, and (b) allows tests to log out in a more straightforward manner.
Ish. The fixtures are a mess, but so is the nomenclature. Fix the latter and the former will probably follow.

---

expires.
When tokens are revoked (logout or expiry), the server now publishes an internal event via the new `logins` event broadcaster. These events are used to guard the `/api/events` stream. When a token revocation event arrives for the token used to subscribe to the stream, the stream is cut short, disconnecting the client.
In service of this, tokens now have IDs, which are non-confidential values that can be used to discuss tokens without their secrets being passed around unnecessarily. These IDs are not (at this time) exposed to clients, but they could be.
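A rough sketch of the guard, assuming a tokio `broadcast` channel underneath the `logins` broadcaster; the type names and wiring here are illustrative, not the real code:

```rust
use tokio::sync::broadcast;

// Hypothetical non-confidential token identifier (see above); the real type is not shown here.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct TokenId(u64);

#[derive(Clone, Debug)]
enum LoginEvent {
    Revoked(TokenId),
}

/// Forward chat events to one client until the token behind the stream is revoked.
async fn stream_events(
    my_token: TokenId,
    mut chat_events: broadcast::Receiver<String>,
    mut logins: broadcast::Receiver<LoginEvent>,
) {
    loop {
        tokio::select! {
            Ok(event) = chat_events.recv() => {
                // Forward `event` to the client here; elided in this sketch.
                let _ = event;
            }
            Ok(LoginEvent::Revoked(id)) = logins.recv() => {
                if id == my_token {
                    // The token backing this stream is gone; cut it short.
                    break;
                }
            }
            else => break, // both senders dropped
        }
    }
}
```

Only the token ID crosses the broadcaster, so no secret has to be passed around to make the guard work.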

---

Trying to reliably do expiry mid-request was causing some anomalies:
* Creating a channel with a duplicate name would fail, then succeed after listing channels.
Fixing this "correctly" would have meant reasoning about exactly which operations needed to trigger expiry, which was very hard, so expiry now runs on every request.
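For illustration, the per-request pass can be a single helper that every handler (or a middleware layer) calls first; the table and column names below are assumptions rather than the real schema:

```rust
use sqlx::SqlitePool;

/// Delete anything whose expiry time has passed. Running this unconditionally
/// at the start of every request means each endpoint sees the same
/// post-expiry view of the database.
async fn run_expiry(pool: &SqlitePool) -> sqlx::Result<()> {
    // `expires_at` is a stand-in name for however expiry is actually stored.
    sqlx::query("DELETE FROM message WHERE expires_at <= datetime('now')")
        .execute(pool)
        .await?;
    Ok(())
}
```

Because it runs unconditionally, there is no longer a question of which operations need to trigger it.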

---

It now includes events for all channels. Clients are responsible for filtering.
The schema for channel events has changed: each event now carries a channel name and ID, in the same format as the sender's name and ID, plus a `"type"` field, whose only valid value (as of this writing) is `"message"`.
This is groundwork for delivering message deletion (expiry) events to clients, and notifying clients of channel lifecycle events.
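Sketching the revised payload as a serde struct; the field names and types are inferred from the description above rather than copied from the real schema:

```rust
use serde::Serialize;

#[derive(Serialize)]
struct ChannelEvent {
    /// Only "message" is emitted today; expiry and channel lifecycle events
    /// are expected to introduce new values here later.
    r#type: String,
    /// Channel identification, mirroring the sender fields below.
    channel_id: i64,
    channel_name: String,
    sender_id: i64,
    sender_name: String,
    body: String,
}
```

Since the stream now carries every channel, clients filter on the channel fields themselves.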

---

This also has the happy effect of removing an unwrap. This feels like a more coherent way of achieving the same result.

---

This'll catch style issues, mostly.

---

This silences some `-Wclippy::pedantic` warnings, and it's just a good thing to do.
I've made the choice to have the doc comment face programmers, and to provide `hi --help` and `hi -h` content via Clap attributes instead of inferring it from the doc comment.
Internal (private) "rustdoc" comments have been converted to regular comments until I learn how to write better rustdoc.
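Roughly the split being described, with invented help text (the real strings and flags aren't shown here):

```rust
use clap::Parser;

/// Maintainer-facing note: parsed once at startup and treated as read-only
/// configuration after that. This text is for rustdoc, not for users.
#[derive(Parser)]
#[command(
    name = "hi",
    about = "Example user-facing summary for `hi --help`",
    long_about = None
)]
struct Cli {
    // Plain comment: invisible to both rustdoc and --help.
    #[arg(long, help = "Example user-facing flag description")]
    server: String,
}
```

The doc comments stay useful under `cargo doc`, while the `--help` output is governed entirely by the attributes.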

---

vector-of-sequence-numbers stream resume.

---

streams.
The timestamp-based approach had some formal problems. In particular, it assumed that time always went forwards, which isn't necessarily the case:
* Alice calls `/api/channels/Cfoo` to send a message.
* The server assigns time T to the request.
* The server stalls somewhere in send() for a while, before storing and broadcasting the message. If it helps, imagine blocking on `tx.begin().await?` for a while.
* In this interval, Bob calls `/api/events?channel=Cfoo`, receives historical messages up to time U (after T), and disconnects.
* The server resumes Alice's request and finishes it.
* Bob reconnects, setting his Last-Event-Id header to timestamp U.
In this scenario, Bob never sees Alice's message unless he starts over. It wasn't in the original stream, since it wasn't broadcast while Bob was subscribed, and it's not in the new stream, since Bob's resume point is after the timestamp on Alice's message.
The new approach avoids this. Each message is assigned a _sequence number_ when it's stored. Bob can be sure that his stream included every event, since the resume point is identified by sequence number, even if the server finishes processing messages out of chronological order:
* Alice calls `/api/channels/Cfoo` to send a message.
* The server assigns time T to the request.
* The server stalls somewhere in send() for a while, before storing and broadcasting.
* In this interval, Bob calls `/api/events?channel=Cfoo`, receives historical messages up to sequence Cfoo=N, and disconnects.
* The server resumes Alice's request, assigns her message sequence M (after N), and finishes it.
* Bob resumes his subscription at Cfoo=N.
* Bob receives Alice's message at Cfoo=M.
There's a natural mutual exclusion on sequence numbers, enforced by sqlite, which ensures that no two messages have the same sequence number. Since sqlite promises that transactions are serializable by default (and enforces this with a whole-DB write lock), we can be confident that sequence numbers are monotonic, as well.
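The guarantee can be pictured like this (a sketch with sqlx; the real assignment strategy and schema aren't shown in this log, and the per-channel bookkeeping is omitted):

```rust
use sqlx::SqlitePool;

/// Store a message and hand back the sequence number it was assigned.
async fn store_message(pool: &SqlitePool, body: &str) -> sqlx::Result<i64> {
    // The whole read-increment-insert happens in one write transaction,
    // which sqlite serializes, so two messages can't observe the same MAX.
    let mut tx = pool.begin().await?;
    let sequence: i64 =
        sqlx::query_scalar("SELECT COALESCE(MAX(sequence), 0) + 1 FROM message")
            .fetch_one(&mut *tx)
            .await?;
    // `sequence` is assumed UNIQUE in the schema, so even a logic error here
    // could not hand out the same number twice.
    sqlx::query("INSERT INTO message (sequence, body) VALUES (?, ?)")
        .bind(sequence)
        .bind(body)
        .execute(&mut *tx)
        .await?;
    tx.commit().await?;
    Ok(sequence)
}
```

The uniqueness constraint supplies the mutual exclusion; the serialized write transaction is what makes the numbers monotonic.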
The scenario above is, to put it mildly, contrived and unlikely, which is exactly what motivated me to fix it: these kinds of bugs are fiendishly hard to identify, let alone reproduce or understand.
I wonder how costly cloning a map is going to turn out to be…
A note on database migrations: sqlite3 really, truly has no `alter table … alter column` statement. The only way to modify an existing column is to build a replacement table with the desired schema and copy the data into it. If `alter column` existed, I would create the new `sequence` column in `message` in a much less roundabout way. Fortunately, these migrations assume that they are being run _offline_, so operations like "replace the whole table" are reasonable.
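For the record, the replace-the-whole-table dance looks roughly like this; the column list and the backfill of `sequence` from the old row ID are illustrative, and it assumes the offline, single-writer setting described above:

```rust
use sqlx::SqliteConnection;

/// Add a `sequence` column by rebuilding the table, since sqlite has no
/// `ALTER TABLE … ALTER COLUMN`.
async fn add_sequence_column(conn: &mut SqliteConnection) -> sqlx::Result<()> {
    for stmt in [
        // 1. Build the replacement table with the schema we actually want.
        "CREATE TABLE message_new (
            id       INTEGER PRIMARY KEY,
            sequence INTEGER NOT NULL UNIQUE,
            body     TEXT NOT NULL
        )",
        // 2. Copy the data across, seeding `sequence` from the old row id.
        "INSERT INTO message_new (id, sequence, body)
            SELECT id, id, body FROM message",
        // 3. Swap the new table into place.
        "DROP TABLE message",
        "ALTER TABLE message_new RENAME TO message",
    ] {
        sqlx::query(stmt).execute(&mut *conn).await?;
    }
    Ok(())
}
```

This is expressed as code for concreteness; the same four statements could just as well live in a plain `.sql` migration file.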

---

This is intended to make it a bit more opaque to callers, and to free me up to experiment with the event ID format. It also makes event IDs tractable for testing.
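A guess at the shape rather than the actual type: a newtype keeps the format private to one module, so callers and tests can compare and round-trip IDs without caring what's inside.

```rust
/// Opaque event identifier. The inner representation is deliberately private
/// so it can change without touching callers.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct EventId(String);

impl EventId {
    /// Render the ID, e.g. for an SSE `id:` field or a Last-Event-Id header.
    pub fn as_str(&self) -> &str {
        &self.0
    }
}
```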