| Commit message | Author | Age |
| ... |
|
For endpoints that are unavailable, that default behaviour no longer needs to be specified: `Required(app)` will do that for you. For endpoints that should instead redirect until setup is completed, `Required(app).with_fallback(…response…)` will do that.
To make this a bit harder to break by accident, the default unavailable response is now its own type.
|
The two middlewares were identical but for the specific `IntoResponse` impl used to generate the response when setup has not been completed. However, unifying them while still using `from_fn_with_state` led to this horrorshow:
    .route_layer(middleware::from_fn_with_state(
        app.clone(),
        |state, req, next| {
            setup::middleware::setup_required(UNAVAILABLE, state, req, next)
        },
    ))
It's a lot to read, and it surfaces the entire signature of a state-driven middleware `fn` into the call site solely to close over one argument (`UNAVAILABLE`).
Rather than doing that, I've converted this middleware into a full blown Tower middleware, following <https://docs.rs/axum/latest/axum/middleware/index.html#towerservice-and-pinboxdyn-future>. I considered taking this further and implementing a custom future to remove the allocation for `Box::pin`, but honestly, that allocation isn't hurting anyone and this code already got long enough in the translation.
The new API looks like:
    .route_layer(setup::Required::or_unavailable(app.clone()))
Or like:
    .route_layer(setup::Required::with_fallback(app.clone(), RESPONSE))
One thing I would have liked to avoid is the additional `app.clone()` argument, but there isn't a way to extract the _state_ from a request inside of an Axum middleware. It has to be passed in externally - that's what `from_fn_with_state` is doing under the hood, as well. Using `State` as an extractor doesn't work; the `State` extractor is special in a _bunch_ of ways, and this is one of them. Other extractors would work. Realistically, I'd probably want to explore interfaces like
    .route_layer(setup::Required(app).or_unavailable())
or
    .route_layer(app.setup().required().or_unavailable())
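For reference, here's a minimal sketch of the Tower-middleware shape this describes, following the linked Axum guide. The `SetupState` trait, the synchronous `is_setup_complete` check, and the `StatusCode` fallback are illustrative stand-ins for Pilcrow's actual state and response types:
    use std::future::Future;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    use axum::body::Body;
    use axum::http::{Request, StatusCode};
    use axum::response::{IntoResponse, Response};
    use tower::{Layer, Service};

    // Stand-in for whatever the real application state exposes.
    pub trait SetupState: Clone {
        fn is_setup_complete(&self) -> bool;
    }

    #[derive(Clone)]
    pub struct RequiredLayer<A> {
        app: A,
        fallback: StatusCode,
    }

    impl<A> RequiredLayer<A> {
        pub fn or_unavailable(app: A) -> Self {
            Self::with_fallback(app, StatusCode::SERVICE_UNAVAILABLE)
        }

        pub fn with_fallback(app: A, fallback: StatusCode) -> Self {
            Self { app, fallback }
        }
    }

    impl<S, A: Clone> Layer<S> for RequiredLayer<A> {
        type Service = Required<S, A>;

        fn layer(&self, inner: S) -> Self::Service {
            Required { inner, app: self.app.clone(), fallback: self.fallback }
        }
    }

    #[derive(Clone)]
    pub struct Required<S, A> {
        inner: S,
        app: A,
        fallback: StatusCode,
    }

    impl<S, A> Service<Request<Body>> for Required<S, A>
    where
        S: Service<Request<Body>, Response = Response>,
        S::Future: Send + 'static,
        A: SetupState,
    {
        type Response = Response;
        type Error = S::Error;
        type Future = Pin<Box<dyn Future<Output = Result<Response, S::Error>> + Send>>;

        fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
            self.inner.poll_ready(cx)
        }

        fn call(&mut self, req: Request<Body>) -> Self::Future {
            if self.app.is_setup_complete() {
                // Setup is done: pass the request through untouched.
                Box::pin(self.inner.call(req))
            } else {
                // Setup is not done: short-circuit with the configured fallback.
                let fallback = self.fallback;
                Box::pin(async move { Ok::<_, S::Error>(fallback.into_response()) })
            }
        }
    }
With a `Layer` in hand, the call sites collapse back to a single `route_layer(…)` line, as above.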
|
The added suppression for `manual_ok_err` is a mixed choice; I'd prefer `r.ok()` in most senses, but `BroadcastStream` is still new enough that I wouldn't be entirely surprised if the Tokio team added new error variants that we'd want to expressly handle.
I do feel a bit better suppressing individual [`clippy::pedantic`][pedantic] lints; they're allow-by-default for this reason anyways, and I opted into them (see 452c8d0d9edb9894c108b6d577806c7c9d0071dd) knowing that not all of them would be perfectly appropriate.
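For context, the lint fires on matches that reimplement `Result::ok()` by hand, like this sketch (not the actual Pilcrow code):
    // Clippy would suggest `r.ok()` here; the explicit match leaves room to
    // handle future error variants expressly.
    #[allow(clippy::manual_ok_err)]
    fn value_only<T, E>(r: Result<T, E>) -> Option<T> {
        match r {
            Ok(v) => Some(v),
            Err(_) => None,
        }
    }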
[pedantic]: https://doc.rust-lang.org/clippy/
Merges prop/missed-lints into main.
|
The added suppression for `manual_ok_err` is a mixed choice; I'd prefer `r.ok()` in most senses, but `BroadcastStream` is still new enough that I wouldn't be entirely surprised if the Tokio team added new error variants that we'd want to expressly handle.
I do feel a bit better suppressing individual [`clippy::pedantic`][pedantic] lints; they're allow-by-default for this reason anyways, and I opted into them (see 452c8d0d9edb9894c108b6d577806c7c9d0071dd) knowing that not all of them would be perfectly appropriate.
[pedantic]: https://doc.rust-lang.org/clippy/
|
This has a few material consequences (a build-script sketch follows the list):
* It will be (much) easier to reorganize the source tree, as the path to the output is no longer relative to where the config files are when building the final binary. If we do decide to move `ui` into its own child crate, we won't have to make a bunch of (very similar) changes to the Svelte build process at that time.
* There is less chance of a stale build contaminating a new one, since changes to the crate change the project hash in `OUT_DIR`. For example, while working on this change, `OUT_DIR` was at various points:
* `target/debug/build/pilcrow-7cfeef3536ddd3e7/out`
* `target/debug/build/pilcrow-09d4ddbc12bef36b/out`
* `target/release/build/pilcrow-070d373bd5f850a1`
This may use more space on disk, but it's all reclaimable with `cargo clean` and Rust is _far_ more profligate with disk space than Svelte will ever be.
* It's more consistent with Cargo's expectations around generated source files, and thus potentially easier to onboard Rust developers into.
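The build-script side of this, as a rough sketch: it assumes a Vite-based Svelte project living in `ui/` whose build accepts `--outDir`, which may not match the real layout or tooling exactly.
    // build.rs (sketch)
    use std::{env, path::PathBuf, process::Command};

    fn main() {
        // Cargo provides OUT_DIR, e.g. target/debug/build/pilcrow-<hash>/out.
        let out_dir = PathBuf::from(env::var("OUT_DIR").expect("cargo sets OUT_DIR"));
        let ui_out = out_dir.join("ui");

        // Rebuild the UI whenever its sources change.
        println!("cargo:rerun-if-changed=ui/src");

        // Build the Svelte app straight into OUT_DIR, rather than into a path
        // relative to the UI's config files.
        let status = Command::new("npm")
            .args(["run", "build", "--", "--outDir"])
            .arg(&ui_out)
            .current_dir("ui")
            .status()
            .expect("failed to run npm");
        assert!(status.success(), "Svelte build failed");
    }
The crate can then locate the built assets via `env!("OUT_DIR")` at compile time.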
|
Notably, one of them was hiding a real (if unreachable) bug, by converting a "the token you have presented is not valid" scenario into an internal server error.
|
| |
|
|
| |
To make unread handling of empty channels coherent (and to make it possible to mark an empty channel as having been read), they need to be associated with a specific point in time. This change exposes their creation time in the snapshot - it was already part of the event view, though the client doesn't know that yet.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
A heartbeat is an event that the server synthesizes any time an event stream has been idle for longer than some timeout. They allow clients to detect disconnection and network problems, which would otherwise go unnoticed because event streams are a one-way channel. Most network problems only become clear when the offended party tries to _send_ something, and subscribing to an event stream only sends something during the request phase.
Technically, Pilcrow has always sent these, since we started using Axum's SSE support: it defaults to sending a dummy event after 15 seconds (consisting of `":\n\n"`, which is then ignored). I've built Pilcrow's heartbeat support out of that, by customizing the event sent back. The results _mostly_ look like existing events, but there are two key differences:
* Heartbeats don't have `id` fields in the event stream. They're synthetic, and they don't participate in either the "resume at" sequence management, or the last-event-id header-based resumption management.
* Heartbeats have an `event` but no `type` field in the message body. There are no subtypes.
To make it less likely that clients will race with the server on expiring timeouts, heartbeats are sent about five seconds early. In this change, heartbeats are due after 20 seconds, but are sent after 15. If it takes longer than five seconds for a heartbeat to arrive, a client can and should treat that as a network problem and reconnect, but I'd really like to avoid that happening over differences smaller than a second, so I've left a margin.
I originally sketched this out in conversation with @wlonk as having each event carry a deadline for the next one. I ultimately opted not to do that for a few reasons. First, Axum makes it hard - the built-in keep-alive support only works with a static event, and cannot make dynamic ones whose payloads might vary (for example if the deadline is variable). Second, it's complex, to no apparent gain, and adds deadline information to _every_ event type.
This implementation, instead, sends deadline information as part of boot, as a fixed interval in seconds. Clients are responsible for working out deadlines based on message arrivals. This is fine; heartbeat-based connection management is best effort at the best of times, so a few milliseconds of slop in either direction won't hurt anything.
The existing client ignores these events entirely, which is convenient.
The new heartbeat event type is defined alongside the main event type, to make it less likely that we'll inadvertently make changes to one but not the other. We can still do so advertently, I just don't want it to be an accident.
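Roughly, the wiring looks like the following sketch, built on Axum's SSE keep-alive support; the function name, the empty JSON body, and the stream type are illustrative rather than Pilcrow's actual code:
    use std::{convert::Infallible, time::Duration};

    use axum::response::sse::{Event, KeepAlive, Sse};
    use futures::Stream;
    use serde_json::json;

    fn with_heartbeats<S>(events: S) -> Sse<S>
    where
        S: Stream<Item = Result<Event, Infallible>>,
    {
        // Heartbeats look like ordinary events, but carry no `id` field and
        // no `type` field in the body, so they stay out of resumption.
        let heartbeat = Event::default()
            .event("heartbeat")
            .json_data(json!({}))
            .expect("static heartbeat payload serializes");

        Sse::new(events).keep_alive(
            // Due at 20 seconds, sent at 15: leave a margin so clients don't
            // race the server on expiring timeouts.
            KeepAlive::new()
                .interval(Duration::from_secs(15))
                .event(heartbeat),
        )
    }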
|
357116366c1307bedaac6a3dfe9c5ed8e0e0c210 wasn't updated (and wasn't quite correct then, either).
I haven't found a way to derive it from the name of the type.
|
In a discussion with wlonk, we both agreed that 15 days is _too_ aggressive, but also that it's not quite time to implement configurable expiry.
|
There are a couple of migration suggestions from `cargo fix --edition` that I have deliberately skipped, which are intended to make sure that the changes to `if let` scoping don't bite us. They don't, I'm pretty sure, and if I turn out to be wrong, I'd rather fix the scoping issues (as they arise) than use `match` (`cargo fix --edition`'s suggestion).
This change also includes a bulk reformat and a clippy cleanup.
NOTA BENE: As this requires a new Rust toolchain, you'll need to update Rust (`rustup update`, normally) or the server won't build. This also applies to the Debian builder Docker image; it'll need to be rebuilt (from scratch, pulling its base image again) as well.
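For the record, the scoping change in question: in the 2024 edition, temporaries in an `if let` scrutinee are dropped before the `else` block runs. A contrived illustration (not from this codebase) of the kind of code that behaves differently, and that `cargo fix --edition` would rewrite to a `match`:
    use std::sync::Mutex;

    fn first_or_default(cell: &Mutex<Option<String>>) -> String {
        if let Some(v) = &*cell.lock().unwrap() {
            v.clone()
        } else {
            // 2021 edition: the guard from the scrutinee is still held here,
            // so this second lock() deadlocks or panics.
            // 2024 edition: the guard has already been dropped, so this works.
            cell.lock().unwrap().get_or_insert_with(String::new).clone()
        }
    }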
|
| | |
|
| | |
|
| |\ |
|
| | | |
|
| |/
|
|
|
|
|
|
|
|
|
|
| |
The original retention values were loosely based on Slack's retention, for lack of a more specific motivator. Today's election results have changed my views; the service now defaults to retention more in line with the needs of communities for which deep message history may be a risk:
* Unused channels expire after 7 days.
* Used channels expire when their last message expires (as before).
* Deleted channels are purged after 6 hours (which is in line with the purge behaviour of messages).
* Messages expire after 15 days.
* Deleted messages are purged after 6 hours (as before).
No changes have been made to token expiry.
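Spelled out as durations (the constant names are hypothetical; the real configuration may be shaped differently):
    use std::time::Duration;

    pub const UNUSED_CHANNEL_EXPIRY: Duration = Duration::from_secs(7 * 24 * 60 * 60); // 7 days
    pub const MESSAGE_EXPIRY: Duration = Duration::from_secs(15 * 24 * 60 * 60); // 15 days
    pub const DELETED_PURGE_DELAY: Duration = Duration::from_secs(6 * 60 * 60); // 6 hours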
|
| |
|
|
|
|
| |
Using requests to drive background work (expiring things, mainly) is a hack to avoid the complexity of background workers, but it's reaching its limits. In the live deployment at `hi.grimoire.ca`, we found that requests for the UI were taking 300+ milliseconds as the expiry process required database access. The DB there is slow, which is a separate issue, but it was also being accessed many times for little benefit.
Since only the API is actually _affected_ by expiry, I've scoped the middleware down to just those endpoints.
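A sketch of that scoping, assuming hypothetical `AppState`, `api_routes`, and `ui_routes` names; the real router setup will differ:
    use axum::extract::{Request, State};
    use axum::middleware::{self, Next};
    use axum::response::Response;
    use axum::Router;

    #[derive(Clone)]
    struct AppState; // stand-in for the real application state

    impl AppState {
        async fn expire_stale(&self) { /* expire channels, messages, … */ }
    }

    async fn run_expiry(State(state): State<AppState>, req: Request, next: Next) -> Response {
        // Drive expiry from API traffic only; UI asset requests skip it.
        state.expire_stale().await;
        next.run(req).await
    }

    fn router(api_routes: Router<AppState>, ui_routes: Router<AppState>, state: AppState) -> Router {
        Router::new()
            .nest(
                "/api",
                api_routes.route_layer(middleware::from_fn_with_state(state.clone(), run_expiry)),
            )
            .merge(ui_routes)
            .with_state(state)
    }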
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is meant to limit the number of messages that event replay needs to examine. Previously, the query required a table scan; table scans on `message` can be quite large, and should really be avoided. The new schema allows replays to be carried out using an index scan.
The premise of this change is that, for each (channel, message), there is a range of event sequence numbers that the (channel, message) may appear in. We'll notate that as `[start, end]` in the general case, but they are:
* for active channels, `[ch.created_sequence, ch.created_sequence]`.
* for deleted channels, `[ch.created_sequence, ch_del.deleted_sequence]`.
* for active messages, `[mg.sent_sequence, mg.sent_sequence]`.
* for deleted messages, `[mg.sent_sequence, mg_del.deleted_sequence]`.
(The two "active" ranges may grow in future releases, to support things like channel renames and message editing. That won't change the logic, but those two things will need to update the new `last_sequence` field.)
There are two families of operations that need to retrieve based on these ranges:
* Boot creates a snapshot as of a specific `resume_at` sequence number, and thus must include any record whose `start` falls on or before `resume_at`. We can't exclude records whose `end` is also before it, as their terminal state may be one that is included in boot (e.g. active channels).
* Event replay needs to include any events that fall after the same `resume_at`, and thus must include events from any record whose `end` falls strictly after `resume_at`. We can't exclude records whose `start` is also strictly after `resume_at`, as we'd omit them from replay, inappropriately, if we did.
This gives three interesting cases:
1. Record fully after `resume_at`:
       event sequence --increasing-->
       x-a   …    x    …    x+k  …
       resume_at  start     end
   This record should be omitted by boot, but included for event replay.
2. Record fully before `resume_at`:
       event sequence --increasing-->
       x    …    x+k  …   x+a
       start     end      resume_at
   This record should be omitted for event replay, but included for boot.
3. Record overlapping `resume_at`:
       event sequence --increasing-->
       x    …    x+a    …     x+k
       start     resume_at    end
   This record needs to be included for both boot and event replay.
However, the bounds of that range were previously stored in two tables (`channel` and `channel_deleted`, or `message` and `message_deleted`, respectively), which sqlite (indeed most SQL implementations) cannot index. This forced a table scan, leading to the program considering every possible (channel, message) during event replay.
This commit adds a `last_sequence` field to channels and messages, which is set to the above values as channels and messages are operated on. This field is indexed, and queries can use it to rapidly identify relevant rows for event replay, cutting down the amount of reading needed to generate events on resume.
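The replay side then looks something like this sketch (sqlx with SQLite assumed; table and column names follow the prose above but are illustrative):
    use sqlx::SqliteConnection;

    async fn replay_candidate_message_ids(
        conn: &mut SqliteConnection,
        resume_at: i64,
    ) -> sqlx::Result<Vec<i64>> {
        // `last_sequence` is indexed, so this is an index range scan rather
        // than a full scan over `message`.
        sqlx::query_scalar("SELECT id FROM message WHERE last_sequence > ?")
            .bind(resume_at)
            .fetch_all(conn)
            .await
    }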
|
This is an inconsequential change for actual clients, since "resume from the beginning" was never a preferred mode of operation, and it simplifies some internals. It should also mean we get better query plans where `coalesce(cond, true)` was previously being used.
|
| |
|
|
| |
This utility was needed to support a database migration with existing data. I have it on good authority that no further databases exist that are in the state that made this tool necessary.
|
| |
|
|
| |
I mean, it always does, but I'd rather get a panic during message/channel reconstruction than wrong results if that assumption is ever violated inadvertently.
|
| | |
|
| |
|
|
| |
The protocol here re-checks the caller's password, as an "I left myself logged in" anti-pranking check.
|
| | |
|
| |
|
|
| |
Thankfully, channel creation only happens in one place, so we don't need a state machine for this.
|
| | |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Nasty design corner. Logins need to be created in three places:
1. In tests, using `app.logins().create(…)`;
2. On initial setup, using `app.setup().initial(…)`; and
3. When accepting invites, using `app.invites().accept(…)`.
These three places do the same thing with respect to logins, but also do a varying mix of other things. Testing is the simplest and _only_ creates a login. Initial setup and invite acceptance both issue a token for the newly-created login. Accepting an invite also invalidates the invite. Previously, those three functions have been copy-pasted variations on a theme. Now that we have validation, the copy-paste approach is no longer tenable; it will become increasingly hard to ensure that the three functions (plus any future functions) remain in synch.
To accommodate the variations while consolidating login creation, I've added a typestate-based state machine, which is driven by method calls (sketched below):
* A creation attempt begins with `let create = Create::begin()`. This always succeeds; it packages up arguments used in later steps, but does nothing else.
* A creation attempt can be validated using `let validated = create.validate()?`. This may fail. Input validation and password hashing are carried out at this stage, making it potentially expensive.
* A validated attempt can be stored in the DB, using `let stored = validated.store(&mut tx).await?`. This may fail. The login will be written to the DB; the caller is responsible for transaction demarcation, to allow other things to take place in the same transaction.
* A fully-stored attempt can be used to publish events, using `let login = stored.publish(self.events)`. This always succeeds, and unwraps the state machine to its final product (a `login::History`).
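A condensed, self-contained sketch of that shape; every type here, the fake password "hashing", and the elided DB/event plumbing are placeholders rather than Pilcrow's real definitions:
    #[derive(Debug)]
    pub struct ValidationError;

    #[derive(Debug, Clone)]
    pub struct Login {
        pub name: String,
    }

    pub struct Create {
        name: String,
        password: String,
    }

    pub struct Validated {
        name: String,
        password_hash: String,
    }

    pub struct Stored {
        login: Login,
    }

    impl Create {
        // Step 1: package up arguments; always succeeds, does nothing else.
        pub fn begin(name: impl Into<String>, password: impl Into<String>) -> Self {
            Create { name: name.into(), password: password.into() }
        }

        // Step 2: input validation and password hashing; may fail, may be slow.
        pub fn validate(self) -> Result<Validated, ValidationError> {
            if self.name.trim().is_empty() {
                return Err(ValidationError);
            }
            let password_hash = format!("hashed:{}", self.password); // stand-in for a real KDF
            Ok(Validated { name: self.name, password_hash })
        }
    }

    impl Validated {
        // Step 3: persist; the caller owns transaction demarcation so other
        // work can share the same transaction. (Persistence is faked here.)
        pub async fn store(self) -> Result<Stored, ValidationError> {
            let _hash = self.password_hash; // would be written to the DB
            Ok(Stored { login: Login { name: self.name } })
        }
    }

    impl Stored {
        // Step 4: publish events and unwrap the state machine to its product.
        pub fn publish(self) -> Login {
            self.login
        }
    }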
|
| |
|
|
|
|
|
|
| |
There's no good reason to use an empty string as your login name, or to use one so long as to annoy others. Names beginning or ending with whitespace, or containing runs of whitespace, are also a technical problem, so they're also prohibited.
This change does not implement [UTS #39], as I haven't yet fully understood how to do so.
[UTS #39]: https://www.unicode.org/reports/tr39/
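A sketch of the shape of those checks; the length cap and error strings are illustrative, and the UTS #39 work mentioned above is deliberately absent:
    const MAX_NAME_CHARS: usize = 64; // illustrative cap

    fn validate_login_name(name: &str) -> Result<(), &'static str> {
        if name.is_empty() {
            return Err("login names must not be empty");
        }
        if name.chars().count() > MAX_NAME_CHARS {
            return Err("login names must not be annoyingly long");
        }
        if name != name.trim() {
            return Err("login names must not begin or end with whitespace");
        }
        let mut chars = name.chars().peekable();
        while let Some(c) = chars.next() {
            if c.is_whitespace() && chars.peek().is_some_and(|next| next.is_whitespace()) {
                return Err("login names must not contain runs of whitespace");
            }
        }
        Ok(())
    }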
|
| | |
|
| |
|
|
| |
return the ID of the affected entity.
|
| |
|
|
| |
This required a re-think of the `.immediately()` combinator, to generalize it to cases where a message is _not_ expected. That (more or less immediately) suggested some mixed combinators, particularly for stream futures (futures of `Option<T>`).
|
| | |
|
| |
|
|
| |
This also found a bug! No live event was being emitted during invite accept. The only way to find out about invites was to reconnect.
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|