path: root/.sqlx
Commit message | Author | Age
* Store `Message` instances using their events. (Owen Jacobson, 2025-08-26)

  I found a test bug! The tests for deleting previously-deleted or previously-expired messages were using the wrong user to try to delete those messages. The tests happened to pass anyways because the message authorship check was done after the message lifecycle check. They would have no longer passed; the tests are fixed to use the sender, instead.

* Store `Conversation` instances using their events. (Owen Jacobson, 2025-08-26)

  This replaces the approach of having the repo type know about conversation lifecycle in detail. Instead, the repo type accepts events and applies them to the DB blindly. The SQL written to implement each event does, however, embed assumptions about what order events will happen in.

* Split `user` into a chat-facing entity and an authentication-facing entity. (Owen Jacobson, 2025-08-26)

  The taxonomy is now as follows:

  * A _login_ is someone's identity for the purposes of authenticating to the service. Logins are not synchronized, and in fact are not published anywhere in the current API. They have a login ID, a name, and a password.
  * A _user_ is someone's identity for the purpose of participating in conversations. Users _are_ synchronized, as before. They have a user ID, a name, and a creation instant for the purposes of synchronization.

  In practice, a user exists for every login - in fact, users' names are stored in the login table and are joined in, rather than being stored redundantly in the user table. A login ID and its corresponding user ID are always equal, and the user and login ID types support conversion and comparison to facilitate their use in this context.

  Tokens are now associated with logins, not users. The currently-acting identity is passed down into app types as a login, not a user, and then resolved to a user where appropriate within the app methods.

  As a side effect, the `GET /api/boot` method now returns a `login` key instead of a `user` key. The structure of the nested value is unchanged.

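The paired-ID relationship described above can be sketched in Rust. `LoginId` and `UserId` follow the commit's taxonomy, but the `u64` representation and the specific trait impls are illustrative assumptions, not the project's actual code:

```rust
// Sketch: two ID types that are always numerically equal, supporting
// conversion and cross-type comparison. The u64 payload is a stand-in;
// the real project may use a different underlying ID representation.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct LoginId(u64);

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct UserId(u64);

// A login always has a corresponding user with the same ID.
impl From<LoginId> for UserId {
    fn from(login: LoginId) -> Self {
        UserId(login.0)
    }
}

// Allow comparing a login ID directly against a user ID.
impl PartialEq<UserId> for LoginId {
    fn eq(&self, other: &UserId) -> bool {
        self.0 == other.0
    }
}

fn main() {
    let login = LoginId(42);
    let user: UserId = login.into();
    // The two ID spaces coincide by construction.
    assert!(login == user);
    println!("login {:?} acts as user {:?}", login, user);
}
```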
* Generate tokens in memory and then store them. (Owen Jacobson, 2025-08-26)

  This is the leading edge of a larger storage refactoring, where repo types stop doing things like generating secrets or deciding whether to carry out an operation. To make this work, there is now a `Token` type that holds the complete state of a token, in memory.

* Split the `user` table into an authentication portion and a chat portion. (Owen Jacobson, 2025-08-26)

  We'll be building separate entities around this in future commits, to better separate the authentication data (non-synchronized and indeed "not public") from the chat data (synchronized and public).

* Rename "channel" to "conversation" within the server. (Owen Jacobson, 2025-07-03)

  I've split this from the schema and API changes because, frankly, it's huge. Annoyingly so. There are no semantic changes in this, it's all symbol changes, but there are a _lot_ of them because the term "channel" leaks all over everything in a service whose primary role is managing messages sent to channels (now, conversations).

  I found a buggy test while working on this! It's not fixed in this commit, because it felt mean to hide a real change in the middle of this much chaff.

* Rename "channel" to "conversation" in the database. (Owen Jacobson, 2025-07-03)

  I've - somewhat arbitrarily - started renaming column aliases, as well, though the corresponding Rust data model, API fields and nouns, and client code still reference them as "channel" (or as derived terms).

  As with so many schema changes, this entails a complete rebuild of a substantial portion of the schema. sqlite3 still doesn't have very many `alter table` primitives, for renaming columns in particular.

* Rename the `login` module to `user`. (Owen Jacobson, 2025-03-23)

* Rename `user` to `login` at the database. (Owen Jacobson, 2025-03-23)

* Track an index-friendly sequence range for both channels and messages. (Owen Jacobson, 2024-10-30)

  This is meant to limit the number of messages that event replay needs to examine. Previously, the query required a table scan; table scans on `message` can be quite large, and should really be avoided. The new schema allows replays to be carried out using an index scan.

  The premise of this change is that, for each (channel, message), there is a range of event sequence numbers that the (channel, message) may appear in. We'll notate that as `[start, end]` in the general case, but they are:

  * for active channels, `[ch.created_sequence, ch.created_sequence]`.
  * for deleted channels, `[ch.created_sequence, ch_del.deleted_sequence]`.
  * for active messages, `[mg.sent_sequence, mg.sent_sequence]`.
  * for deleted messages, `[mg.sent_sequence, mg_del.deleted_sequence]`.

  (The two "active" ranges may grow in future releases, to support things like channel renames and message editing. That won't change the logic, but those two things will need to update the new `last_sequence` field.)

  There are two families of operations that need to retrieve based on these ranges:

  * Boot creates a snapshot as of a specific `resume_at` sequence number, and thus must include any record whose `start` falls on or before `resume_at`. We can't exclude records whose `end` is also before it, as their terminal state may be one that is included in boot (eg. active channels).
  * Event replay needs to include any events that fall after the same `resume_at`, and thus must include events from any record whose `end` falls strictly after `resume_at`. We can't exclude records whose `start` is also strictly after `resume_at`, as we'd omit them from replay, inappropriately, if we did.

  This gives three interesting cases:

  1. Record fully after `resume_at` (`resume_at` falls before `start`): this record should be omitted by boot, but included for event replay.
  2. Record fully before `resume_at` (`resume_at` falls after `end`): this record should be omitted for event replay, but included for boot.
  3. Record overlapping `resume_at` (`resume_at` falls between `start` and `end`): this record needs to be included for both boot and event replay.

  However, the bounds of that range were previously stored in two tables (`channel` and `channel_deleted`, or `message` and `message_deleted`, respectively), which sqlite (indeed most SQL implementations) cannot index. This forced a table scan, leading to the program considering every possible (channel, message) during event replay.

  This commit adds a `last_sequence` field to channels and messages, which is set to the above values as channels and messages are operated on. This field is indexed, and queries can use it to rapidly identify relevant rows for event replay, cutting down the amount of reading needed to generate events on resume.

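The two retrieval rules can be sketched as a pair of predicates over the `[start, end]` range. This is a minimal model (function names are mine, not the project's), but the three cases fall out of it directly:

```rust
// Boot snapshots at `resume_at` include any record whose range starts
// on or before the resume point.
fn include_in_boot(start: u64, _end: u64, resume_at: u64) -> bool {
    start <= resume_at
}

// Event replay after `resume_at` includes any record whose range ends
// strictly after the resume point.
fn include_in_replay(_start: u64, end: u64, resume_at: u64) -> bool {
    end > resume_at
}

fn main() {
    let resume_at = 10;

    // Case 1: record fully after resume_at -> replay only.
    assert!(!include_in_boot(11, 13, resume_at));
    assert!(include_in_replay(11, 13, resume_at));

    // Case 2: record fully before resume_at -> boot only.
    assert!(include_in_boot(3, 5, resume_at));
    assert!(!include_in_replay(3, 5, resume_at));

    // Case 3: record overlapping resume_at -> both.
    assert!(include_in_boot(8, 12, resume_at));
    assert!(include_in_replay(8, 12, resume_at));
}
```

With an indexed `last_sequence` column, the replay predicate (`end > resume_at`) becomes an index range scan rather than a table scan.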
* Resume points are no longer optional. (Owen Jacobson, 2024-10-30)

  This is an inconsequential change for actual clients, since "resume from the beginning" was never a preferred mode of operation, and it simplifies some internals. It should also mean we get better query plans where `coalesce(cond, true)` was previously being used.

* Remove `hi-recanonicalize`. (Owen Jacobson, 2024-10-30)

  This utility was needed to support a database migration with existing data. I have it on good authority that no further databases exist that are in the state that made this tool necessary.

* Add `change password` UI + API. (Owen Jacobson, 2024-10-29)

  The protocol here re-checks the caller's password, as a "I left myself logged in" anti-pranking check.

* Update stored sqlx queries (Owen Jacobson, 2024-10-29)

* Update saved SQL (Owen Jacobson, 2024-10-26)

* To make it easier to correlate deletes to the event stream, have deletes return the ID of the affected entity. (Owen Jacobson, 2024-10-25)

* Make sure (most) queries avoid table scans. (Owen Jacobson, 2024-10-23)

  I've exempted inserts (they never scan in the first place), queries on `event_sequence` (at most one row), and the coalesce()s used for event replay (for now; these are obviously a performance risk area and need addressing).

  Method:

  ```
  find .sqlx -name 'query-*.json' -exec jq -r '"explain query plan " + .query + ";"' {} + > explain.sql
  ```

  Then go query by query through the resulting file.

* Remove tabs in Rust files. (Owen Jacobson, 2024-10-22)

* Provide `hi-recanonicalize` to recover from canonicalized-name problems. (Owen Jacobson, 2024-10-22)

* Canonicalize login and channel names. (Owen Jacobson, 2024-10-22)

  Canonicalization does two things:

  * It prevents duplicate names that differ only by case or only by normalization/encoding sequence; and
  * It makes certain name-based comparisons "case-insensitive" (generalizing via Unicode's case-folding rules).

  This change is complicated, as it means that every name now needs to be stored in two forms. Unfortunately, this is _very likely_ a breaking schema change. The migrations in this commit perform a best-effort attempt to canonicalize existing channel or login names, but it's likely any existing channels or logins with non-ASCII characters will not be canonicalized correctly. Since clients look at all channel names and all login names on boot, and since the code in this commit verifies canonicalization when reading from the database, this will effectively make the server unusable until any incorrectly-canonicalized values are either manually canonicalized, or removed.

  It might be possible to do better with [the `icu` sqlite3 extension][icu], but (a) I'm not convinced of that and (b) this commit is already huge; adding database extension support would make it far larger.

  [icu]: https://sqlite.org/src/dir/ext/icu

  For some references on why it's worth storing usernames this way, see <https://www.b-list.org/weblog/2018/nov/26/case/> and the referenced talk, as well as <https://www.b-list.org/weblog/2018/feb/11/usernames/>. Bennett's treatment of this issue is, to my eye, much more readable than the referenced Unicode technical reports, and I'm inclined to trust his opinion given that he maintains a widely-used, internet-facing user registration library for Django.

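The store-two-forms idea can be sketched as follows. This is a heavily simplified model: real canonicalization applies Unicode normalization plus full case folding (typically via crates such as `unicode-normalization`), whereas the sketch below stands in `to_lowercase()` for that pipeline, and the `Names` type and its methods are my invention for illustration:

```rust
use std::collections::HashMap;

// Stand-in canonicalizer. The real thing would apply Unicode
// normalization (NFC/NFKC) plus full case folding; `to_lowercase`
// only approximates the case-folding half.
fn canonicalize(name: &str) -> String {
    name.to_lowercase()
}

// Sketch of storing every name in two forms: the display form the
// user typed, keyed by its canonical form for uniqueness checks.
struct Names {
    by_canonical: HashMap<String, String>,
}

impl Names {
    fn new() -> Self {
        Names { by_canonical: HashMap::new() }
    }

    // Reject names that collide with an existing name after
    // canonicalization, mirroring the duplicate-prevention goal.
    fn insert(&mut self, display: &str) -> Result<(), String> {
        let canonical = canonicalize(display);
        if self.by_canonical.contains_key(&canonical) {
            return Err(format!("name already taken: {}", display));
        }
        self.by_canonical.insert(canonical, display.to_string());
        Ok(())
    }

    // "Case-insensitive" lookup via the canonical form.
    fn lookup(&self, name: &str) -> Option<&String> {
        self.by_canonical.get(&canonicalize(name))
    }
}

fn main() {
    let mut names = Names::new();
    names.insert("Alice").unwrap();
    // A name differing only by case is rejected as a duplicate.
    assert!(names.insert("ALICE").is_err());
    assert_eq!(names.lookup("aLiCe"), Some(&"Alice".to_string()));
}
```

In the schema described by the commit, the canonical form would be a second stored column with a unique index, rather than an in-memory map.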
* Unicode normalization on input. (Owen Jacobson, 2024-10-21)

  This normalizes the following values:

  * login names
  * passwords
  * channel names
  * message bodies, because why not

  The goal here is to have a canonical representation of these values, so that, for example, the service does not inadvertently host two channels whose names are semantically identical but differ in the specifics of how diacritics are encoded, or two users whose names are identical. Normalization is done on input from the wire, using Serde hooks, and when reading from the database. The `crate::nfc::String` type implements these normalizations (as well as normalizing whenever converted from a `std::string::String` generally).

  This change does not cover:

  * Trying to cope with passwords that were created as non-normalized strings, which are now non-verifiable as all the paths to verify passwords normalize the input.
  * Trying to ensure that non-normalized data in the database compares reasonably to normalized data. Fortunately, we don't _do_ very many string comparisons (I think only login names), so this isn't a huge deal at this stage. Login names will probably have to Get Fixed later on, when we figure out how to handle case folding for login name verification.

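The normalize-on-construction wrapper pattern behind something like `crate::nfc::String` can be sketched in miniature. The standard library has no NFC tables, so this sketch hardcodes one composition (e + U+0301 into é) purely to demonstrate the shape of the type; the real implementation presumably uses a normalization crate:

```rust
// Sketch of a string wrapper that normalizes whenever constructed,
// in the spirit of the commit's `crate::nfc::String`.
#[derive(Debug, Clone, PartialEq, Eq)]
struct Nfc(String);

// Illustrative stand-in for NFC: composes just one decomposed
// sequence. Real code would normalize every character.
fn normalize_stand_in(s: &str) -> String {
    s.replace("e\u{0301}", "\u{00E9}")
}

impl From<String> for Nfc {
    fn from(s: String) -> Self {
        Nfc(normalize_stand_in(&s))
    }
}

impl Nfc {
    fn as_str(&self) -> &str {
        &self.0
    }
}

fn main() {
    // Decomposed and precomposed inputs normalize to the same value,
    // so two semantically identical "café" channels can't coexist.
    let decomposed: Nfc = String::from("cafe\u{0301}").into();
    let precomposed: Nfc = String::from("caf\u{00E9}").into();
    assert_eq!(decomposed, precomposed);
    println!("{}", decomposed.as_str());
}
```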
* Merge branch 'wip/retain-deleted' (Owen Jacobson, 2024-10-18)

* Switch to blanking tombstoned data with null, not empty string. (Owen Jacobson, 2024-10-18)

  This accomplishes two things:

  * It removes the need for an additional `channel_name_reservation` table, since `channel.name` now only contains non-null values for active channels, and
  * It nicely dovetails with the idea that `null` means an unknown value in SQL-land.

* Retain deleted messages and channels temporarily, to preserve events for replay. (Owen Jacobson, 2024-10-17)

  Previously, when a channel (message) was deleted, `hi` would send events to all _connected_ clients to inform them of the deletion, then delete all memory of the channel (message). Any disconnected client, on reconnecting, would not receive the deletion event, and would de-synch with the service. The creation events were immediately retconned out of the event stream, as well.

  With this change, `hi` keeps a record of deleted channels (messages). When replaying events, these records are used to replay the deletion event. After 7 days, the retained data is deleted, both to keep storage under control and to conform to users' expectations that deleted means gone.

  To match users' likely intuitions about what deletion does, deleting a channel (message) _does_ immediately delete some of its associated data. Channels' names are blanked, and messages' bodies are also blanked. When the event stream is replayed, the original channel.created (message.sent) event is "tombstoned", with an additional `deleted_at` field to inform clients. The included client does not use this field, at least yet.

  The migration is, once again, screamingly complicated due to sqlite's limited ALTER TABLE … ALTER COLUMN support.

  This change also contains capabilities that would allow the API to return 410 Gone for deleted channels or messages, instead of 404. I did experiment with this, but it's tricky to do pervasively, especially since most app-level interfaces return an `Option<Channel>` or `Option<Message>`. Redesigning these to return either `Ok(Channel)` (`Ok(Message)`) or `Err(Error::NotFound)` or `Err(Error::Deleted)` is more work than I wanted to take on for this change, and the utility of 410 Gone responses is not obvious to me. We have other, more pressing API design warts to address.

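A tombstoned created event, as described above, might be replayed along these lines. Only `deleted_at` is named by the commit; the record and event shapes here are assumptions for illustration:

```rust
// Sketch: replaying a channel.created event for a channel whose data
// has been tombstoned. Field names other than `deleted_at` are
// illustrative, not the project's actual schema.
struct ChannelRecord {
    id: u64,
    name: Option<String>,    // blanked (null) once the channel is deleted
    created_sequence: u64,
    deleted_at: Option<u64>, // set once the channel is deleted
}

#[derive(Debug, PartialEq)]
struct CreatedEvent {
    id: u64,
    name: Option<String>,
    sequence: u64,
    deleted_at: Option<u64>, // informs clients the entity is gone
}

fn replay_created(rec: &ChannelRecord) -> CreatedEvent {
    CreatedEvent {
        id: rec.id,
        name: rec.name.clone(),
        sequence: rec.created_sequence,
        deleted_at: rec.deleted_at,
    }
}

fn main() {
    let deleted = ChannelRecord {
        id: 1,
        name: None, // blanked when the channel was deleted
        created_sequence: 17,
        deleted_at: Some(90),
    };
    // The created event survives replay, tombstoned via deleted_at.
    let event = replay_created(&deleted);
    assert_eq!(event.deleted_at, Some(90));
    assert_eq!(event.name, None);
}
```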
* Get loaded data using `export let data`, instead of fishing around in $page. (Owen Jacobson, 2024-10-17)

  This is mostly a how-to-Svelte thing.

  I've also made the API responses for invites a bit more caller-friendly by flattening them and adding the ID field into them. The ID is redundant (the client knows it because the client has the invitation URL), but it makes presenting invitations and actioning them a bit easier.

* Create APIs for inviting users. (Owen Jacobson, 2024-10-11)

* Provide a separate "initial setup" endpoint that creates a user. (Owen Jacobson, 2024-10-11)

* Fix invalid migration. (Owen Jacobson, 2024-10-10)

  The original version of this migration happened to work correctly, by accident, for databases with exactly one login. I missed this, and so did Kit, because both of our test databases _actually do_ contain exactly one login, and because I didn't run the tests before committing the migration. The fixed version works correctly for all scenarios I tested (zero, one, and two users, not super thorough).

  I've added code to patch out the original migration hash in databases that have it; no further corrective work is needed, as if the migration failed, then it got backed out anyways, and if it succeeded, you fell into the "one user" case.

* Return a flat message list on boot, not nested lists by channel. (Owen Jacobson, 2024-10-09)

  This is a bit easier to compute, and sets us up nicely for pulling message boot out of the `/api/boot` response entirely.

* Provide a view of logins to clients. (Owen Jacobson, 2024-10-09)

* Simplify channel IDs in events. Remove redundant ones. (Owen Jacobson, 2024-10-09)

* Use sqlx's API, not SQL groveling, to find unwanted migrations. (Owen Jacobson, 2024-10-05)

* Start fresh with database migrations. (Owen Jacobson, 2024-10-04)

  The migration path from the original project inception to now was complicated and buggy, and stranded _both_ Kit and me with broken databases due to oversights and incomplete migrations. We've agreed to start fresh, once. If this is mistakenly started with an original-schema-flavour DB, startup will be aborted.

* List messages per channel. (Owen Jacobson, 2024-10-03)

* Add endpoints for deleting channels and messages. (Owen Jacobson, 2024-10-03)

  It is deliberate that the expire() functions do not use them. To avoid races, the transactions must be committed before events get sent, in both cases, which makes them structurally pretty different.

* Represent channels and messages using a split "History" and "Snapshot" model. (Owen Jacobson, 2024-10-03)

  This separates the code that figures out what happened to an entity from the code that represents it to a user, and makes it easier to compute a snapshot at a point in time (for things like bootstrap). It also makes the internal logic a bit easier to follow, since it's easier to tell whether you're working with a point in time or with the whole recorded history. This is hefty.

* First pass on reorganizing the backend. (Owen Jacobson, 2024-10-02)

  This is primarily renames and repackagings.

* Provide a resume point to bridge clients from state snapshots to the event sequence. (Owen Jacobson, 2024-10-01)

* Track event sequences globally, not per channel. (Owen Jacobson, 2024-10-01)

  Per-channel event sequences were a cute idea, but it made reasoning about event resumption much, much harder (case in point: recovering the order of events in a partially-ordered collection is quadratic, since it's basically graph sort). The minor overhead of a global sequence number is likely tolerable, and this simplifies both the API and the internals.

* Prevent racing between `limit_stream` and logging out. (Owen Jacobson, 2024-10-01)

* Reimplement the logout machinery in terms of token IDs, not token secrets. (Owen Jacobson, 2024-09-29)

  This (a) reduces the amount of passing secrets around that's needed, and (b) allows tests to log out in a more straightforward manner. Ish. The fixtures are a mess, but so is the nomenclature. Fix the latter and the former will probably follow.

* Shut down the `/api/events` stream when the user logs out or their token expires. (Owen Jacobson, 2024-09-29)

  When tokens are revoked (logout or expiry), the server now publishes an internal event via the new `logins` event broadcaster. These events are used to guard the `/api/events` stream. When a token revocation event arrives for the token used to subscribe to the stream, the stream is cut short, disconnecting the client.

  In service of this, tokens now have IDs, which are non-confidential values that can be used to discuss tokens without their secrets being passed around unnecessarily. These IDs are not (at this time) exposed to clients, but they could be.

* Wrap credential and credential-holding types to prevent `Debug` leaks. (Owen Jacobson, 2024-09-28)

  The following values are considered confidential, and should never be logged, even by accident:

  * `Password`, which is a durable bearer token for a specific Login;
  * `IdentitySecret`, which is an ephemeral but potentially long-lived bearer token for a specific Login; or
  * `IdentityToken`, which may hold cookies containing an `IdentitySecret`.

  These values are now wrapped in types whose `Debug` impls output opaque values, so that they can be included in structs that `#[derive(Debug)]` without requiring any additional care. The wrappers also avoid implementing `Display`, to prevent inadvertent `to_string()`s.

  We don't bother obfuscating `IdentitySecret`s in memory or in the `.hi` database. There's no point: we'd also need to store the information needed to de-obfuscate them, and they can be freely invalidated and replaced by blanking that table and asking everyone to log in again. Passwords _are_ obfuscated for storage, as they're intended to be durable.

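The opaque-`Debug` wrapper pattern looks roughly like this. `Password` is named in the commit; the impl details below are a sketch, not the project's code:

```rust
use std::fmt;

// A secret wrapper whose Debug output never includes the inner value,
// so containing structs can safely #[derive(Debug)]. It deliberately
// does not implement Display, blocking inadvertent to_string()s.
struct Password(String);

impl fmt::Debug for Password {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("Password(<redacted>)")
    }
}

#[derive(Debug)]
struct LoginAttempt {
    name: String,
    password: Password,
}

fn main() {
    let attempt = LoginAttempt {
        name: "owen".to_string(),
        password: Password("hunter2".to_string()),
    };
    // Debug-formatting the containing struct never leaks the secret.
    let logged = format!("{:?}", attempt);
    assert!(!logged.contains("hunter2"));
    assert!(logged.contains("Password(<redacted>)"));
    println!("{}", logged);
}
```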
* Expire channels, too. (Owen Jacobson, 2024-09-28)

* Delete expired messages out of band. (Owen Jacobson, 2024-09-28)

  Trying to reliably do expiry mid-request was causing some anomalies:

  * Creating a channel with a dup name would fail, then succeed after listing channels.

  It was very hard to reason about which operations needed to trigger expiry, to fix this "correctly," so now expiry runs on every request.

* Assign sequence numbers from a counter, not by scanning messages (Owen Jacobson, 2024-09-28)

* Send created events when channels are added. (Owen Jacobson, 2024-09-28)

* Use a vector of sequence numbers, not timestamps, to restart /api/events streams. (Owen Jacobson, 2024-09-25)

  The timestamp-based approach had some formal problems. In particular, it assumed that time always went forwards, which isn't necessarily the case:

  * Alice calls `/api/channels/Cfoo` to send a message.
  * The server assigns time T to the request.
  * The server stalls somewhere in send() for a while, before storing and broadcasting the message. If it helps, imagine blocking on `tx.begin().await?` for a while.
  * In this interval, Bob calls `/api/events?channel=Cfoo`, receives historical messages up to time U (after T), and disconnects.
  * The server resumes Alice's request and finishes it.
  * Bob reconnects, setting his Last-Event-Id header to timestamp U.

  In this scenario, Bob never sees Alice's message unless he starts over. It wasn't in the original stream, since it wasn't broadcast while Bob was subscribed, and it's not in the new stream, since Bob's resume point is after the timestamp on Alice's message.

  The new approach avoids this. Each message is assigned a _sequence number_ when it's stored. Bob can be sure that his stream included every event, since the resume point is identified by sequence number even if the server processes them out of chronological order:

  * Alice calls `/api/channels/Cfoo` to send a message.
  * The server assigns time T to the request.
  * The server stalls somewhere in send() for a while, before storing and broadcasting.
  * In this interval, Bob calls `/api/events?channel=Cfoo`, receives historical messages up to sequence Cfoo=N, and disconnects.
  * The server resumes Alice's request, assigns her message sequence M (after N), and finishes it.
  * Bob resumes his subscription at Cfoo=N.
  * Bob receives Alice's message at Cfoo=M.

  There's a natural mutual exclusion on sequence numbers, enforced by sqlite, which ensures that no two messages have the same sequence number. Since sqlite promises that transactions are serializable by default (and enforces this with a whole-DB write lock), we can be confident that sequence numbers are monotonic, as well.

  This scenario is, to put it mildly, contrived and unlikely - which is what motivated me to fix it. These kinds of bugs are fiendishly hard to identify, let alone reproduce or understand. I wonder how costly cloning a map is going to turn out to be…

  A note on database migrations: sqlite3 really, truly has no `alter table … alter column` statement. The only way to modify an existing column is to add the column to a new table. If `alter column` existed, I would create the new `sequence` column in `message` in a much less roundabout way. Fortunately, these migrations assume that they are being run _offline_, so operations like "replace the whole table" are reasonable.

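The Alice/Bob scenario reduces to a resume filter over globally sequenced events. A minimal model (type and function names are mine):

```rust
// Minimal model of sequence-number resume: events carry a globally
// assigned, monotonic sequence number; resuming at N replays
// everything with a sequence strictly greater than N, regardless of
// the wall-clock order in which requests were processed.
#[derive(Debug, Clone, PartialEq)]
struct Event {
    sequence: u64,
    body: String,
}

fn replay_after(events: &[Event], resume_at: u64) -> Vec<Event> {
    events
        .iter()
        .filter(|e| e.sequence > resume_at)
        .cloned()
        .collect()
}

fn main() {
    // Bob saw everything up to sequence N = 5, then disconnected.
    // Alice's stalled request was assigned sequence M = 6 and was
    // committed after Bob left.
    let events = vec![
        Event { sequence: 5, body: "bob saw this".into() },
        Event { sequence: 6, body: "alice's delayed message".into() },
    ];
    // Resuming at N = 5 delivers Alice's message: it is not lost,
    // as it would be under the timestamp scheme.
    let replayed = replay_after(&events, 5);
    assert_eq!(replayed.len(), 1);
    assert_eq!(replayed[0].sequence, 6);
}
```

Under the old timestamp scheme, the filter key was assigned before the commit, so a stalled write could land "behind" a client's resume point; here the key is assigned at store time, which closes that gap.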
* Remove the HTML client, and expose a JSON API. (Owen Jacobson, 2024-09-20)

  This API structure fell out of a conversation with Kit. Described loosely:

    kit: ok
    kit: Here's what I'm picturing in a client
    kit: list channels, make-new-channel, zero to one active channels, post-to-active.
    kit: login/sign-up, logout
    owen: you will likely also want "am I logged in" here
    kit: sure, whoami

* Expire messages after 90 days. (Owen Jacobson, 2024-09-20)

  This is intended to manage storage growth. A community with broadly steady traffic will now reach a steady state (ish) where the amount of storage in use stays within a steady band. The 90 day threshold is a spitball; this should be made configurable for the community's needs.

  I've also hoisted expiry out into the `app` classes, to reduce the amount of non-database work repo types are doing. This should make it easier to make expiry configurable later on.

  Includes incidental cleanup and style changes.
