Commit message
Does not use runes in stores (yet).
For reasons beyond my understanding, the `sqlx prepare` command produces different results for sqlite depending on whether certain tables contain rows. This ensures that the files are generated consistently with an _empty_ database.
(Svelte 5 upgrade not included.)
This was causing messages to persist when switching channels, due to the work minimization performed by Svelte.
On Safari, `overflow: scroll` forces scrollbars even where not required, leading to a really janky display studded with stray scrollbars.
On Safari (iOS and macOS), the permissions prompt can only be done during a user gesture; mounting is sufficiently disconnected from any user gestures that it's not allowed. The browser raises an exception, which, since it is unhandled, then leaks out and interrupts SvelteKit's element unmounting, leading to the whole UI being duplicated when switching channels (the old UI is not unmounted).
Using requests to drive background work (expiring things, mainly) is a hack to avoid the complexity of background workers, but it's reaching its limits. In the live deployment at `hi.grimoire.ca`, we found that requests for the UI were taking 300+ milliseconds, as the expiry process required database access. The DB there is slow, which is a separate issue, but it was also being accessed many times for little benefit.
Since only the API is actually _affected_ by expiry, I've scoped the middleware down to just those endpoints.
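The scoping described above can be sketched as a simple path predicate run before the expensive expiry pass. This is an illustrative sketch only: `should_run_expiry`, the `/api/` prefix, and the `expire_stale` hook are assumed names, not the project's actual middleware.

```rust
// Illustrative sketch only: the real middleware, handler names, and path
// layout are assumptions, not the project's actual code.
fn should_run_expiry(path: &str) -> bool {
    // Only API endpoints observe expiry results, so UI/asset requests can
    // skip the database round-trips entirely.
    path.starts_with("/api/")
}

fn handle_request(path: &str, expire_stale: &mut dyn FnMut()) {
    if should_run_expiry(path) {
        expire_stale(); // potentially slow: touches the database
    }
    // ... dispatch to the real handler here ...
}
```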
So we know what npm and node versions this expects.
message runs.
message display.
This was causing problems when changing passwords: if the user didn't type anything in the "original password" field, the code path that sends that field to the server was just straight-up omitting the field from the message, rather than setting it to the empty string, causing a 422 Unprocessable Entity.
On investigation, we had latent bugs related to this in a bunch of spots.
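A hypothetical reconstruction of the bug and fix, sketched in Rust for consistency with the rest of this log; the real client code and field names differ.

```rust
// Hypothetical reconstruction of the omitted-field bug; names are
// illustrative, not the project's actual client code.
fn payload_buggy(original_password: &str) -> Vec<(String, String)> {
    let mut fields = Vec::new();
    if !original_password.is_empty() {
        // Bug: with an empty input, the field is omitted from the message
        // entirely, and the server answers 422 Unprocessable Entity.
        fields.push(("original_password".into(), original_password.into()));
    }
    fields
}

fn payload_fixed(original_password: &str) -> Vec<(String, String)> {
    // Fix: always send the field, even as an empty string, so the server
    // can reject it as a wrong password rather than a malformed message.
    vec![("original_password".into(), original_password.into())]
}
```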
This is meant to limit the amount of messages that event replay needs to examine. Previously, the query required a table scan; table scans on `message` can be quite large, and should really be avoided. The new schema allows replays to be carried out using an index scan.
The premise of this change is that, for each (channel, message), there is a range of event sequence numbers that the (channel, message) may appear in. We'll notate that as `[start, end]` in the general case, but they are:
* for active channels, `[ch.created_sequence, ch.created_sequence]`.
* for deleted channels, `[ch.created_sequence, ch_del.deleted_sequence]`.
* for active messages, `[mg.sent_sequence, mg.sent_sequence]`.
* for deleted messages, `[mg.sent_sequence, mg_del.deleted_sequence]`.
(The two "active" ranges may grow in future releases, to support things like channel renames and message editing. That won't change the logic, but those two things will need to update the new `last_sequence` field.)
There are two families of operations that need to retrieve based on these ranges:
* Boot creates a snapshot as of a specific `resume_at` sequence number, and thus must include any record whose `start` falls on or before `resume_at`. We can't exclude records whose `end` is also before it, as their terminal state may be one that is included in boot (eg. active channels).
* Event replay needs to include any events that fall after the same `resume_at`, and thus must include events from any record whose `end` falls strictly after `resume_at`. We can't exclude records whose `start` is also strictly after `resume_at`, as we'd omit them from replay, inappropriately, if we did.
This gives three interesting cases:
1. Record fully after `resume_at`:
       event sequence --increasing-->
          x-a    …    x    …   x+k   …
       resume_at    start      end
   This record should be omitted by boot, but included for event replay.
2. Record fully before `resume_at`:
       event sequence --increasing-->
        x    …   x+k   …    x+a
      start      end     resume_at
   This record should be omitted for event replay, but included for boot.
3. Record overlapping `resume_at`:
       event sequence --increasing-->
        x    …    x+a    …   x+k
      start    resume_at     end
   This record needs to be included for both boot and event replay.
However, the bounds of that range were previously stored in two tables (`channel` and `channel_deleted`, or `message` and `message_deleted`, respectively), which sqlite (indeed most SQL implementations) cannot index. This forced a table scan, leading to the program considering every possible (channel, message) during event replay.
This commit adds a `last_sequence` field to channels and messages, which is set to the above values as channels and messages are operated on. This field is indexed, and queries can use it to rapidly identify relevant rows for event replay, cutting down the amount of reading needed to generate events on resume.
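The two retrieval rules above reduce to one-line predicates over a record's `[start, end]` range. This is a sketch of the logic only; names are illustrative, and the real queries express these comparisons in SQL against the indexed `last_sequence` field.

```rust
// The boot and replay inclusion rules, as predicates over a record's
// `[start, end]` sequence range. Illustrative names only.
fn include_in_boot(start: i64, resume_at: i64) -> bool {
    // Boot must include any record whose range begins on or before
    // `resume_at`, even if it also ends before it.
    start <= resume_at
}

fn include_in_replay(end: i64, resume_at: i64) -> bool {
    // Replay must include any record whose range ends strictly after
    // `resume_at`, even if it also begins after it.
    end > resume_at
}
```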
This is an inconsequential change for actual clients, since "resume from the beginning" was never a preferred mode of operation, and it simplifies some internals. It should also mean we get better query plans where `coalesce(cond, true)` was previously being used.
This utility was needed to support a database migration with existing data. I have it on good authority that no further databases exist that are in the state that made this tool necessary.
I mean, it always does, but I'd rather get a panic during message/channel reconstruction than wrong results if that assumption is ever violated inadvertently.
The protocol here re-checks the caller's password, as an "I left myself logged in" anti-pranking check.
Thankfully, channel creation only happens in one place, so we don't need a state machine for this.
Nasty design corner. Logins need to be created in three places:
1. In tests, using app.logins().create(…);
2. On initial setup, using app.setup().initial(…); and
3. When accepting invites, using app.invites().accept(…).
These three places do the same thing with respect to logins, but also do a varying mix of other things. Testing is the simplest and _only_ creates a login. Initial setup and invite acceptance both issue a token for the newly-created login. Accepting an invite also invalidates the invite. Previously, those three functions had been copy-pasted variations on a theme. Now that we have validation, the copy-paste approach is no longer tenable; it will become increasingly hard to ensure that the three functions (plus any future functions) remain in sync.
To accommodate the variations while consolidating login creation, I've added a typestate-based state machine, which is driven by method calls:
* A creation attempt begins with `let create = Create::begin()`. This always succeeds; it packages up arguments used in later steps, but does nothing else.
* A creation attempt can be validated using `let validated = create.validate()?`. This may fail. Input validation and password hashing are carried out at this stage, making it potentially expensive.
* A validated attempt can be stored in the DB, using `let stored = validated.store(&mut tx).await?`. This may fail. The login will be written to the DB; the caller is responsible for transaction demarcation, to allow other things to take place in the same transaction.
* A fully-stored attempt can be used to publish events, using `let login = stored.publish(self.events)`. This always succeeds, and unwraps the state machine to its final product (a `login::History`).
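The steps above can be sketched as a minimal, self-contained typestate chain. The internals here are stand-ins: the real `validate`, `store`, and `publish` do more, `store` is async and takes the caller's transaction, and the final product is a `login::History`.

```rust
// Stand-in sketch of the typestate chain; validation, storage, and event
// publishing are stubbed out. Each step consumes `self`, so steps cannot
// be skipped or repeated.
struct Create { name: String, password: String }
struct Validated { name: String, password_hash: String }
struct Stored { name: String }
struct History { name: String }

impl Create {
    fn begin(name: &str, password: &str) -> Create {
        // Always succeeds: only packages the arguments.
        Create { name: name.to_string(), password: password.to_string() }
    }
    fn validate(self) -> Result<Validated, String> {
        // May fail: input validation and (expensive) password hashing.
        if self.password.is_empty() {
            return Err("password must not be empty".to_string());
        }
        Ok(Validated {
            name: self.name,
            password_hash: format!("hash({})", self.password),
        })
    }
}

impl Validated {
    fn store(self) -> Result<Stored, String> {
        // May fail: writes the login; the caller owns transaction
        // demarcation in the real code.
        let _ = self.password_hash;
        Ok(Stored { name: self.name })
    }
}

impl Stored {
    fn publish(self) -> History {
        // Always succeeds: unwraps the machine to its final product.
        History { name: self.name }
    }
}
```

The happy path chains as `Create::begin(…).validate()?.store()?.publish()`; because each intermediate type only exists as the output of the previous step, skipping a step is a compile error rather than a latent divergence between call sites.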