path: root/src

* Convert the `Users` component into a freestanding struct. (Owen Jacobson, 2025-10-28)

Because `Users` is test-only and is not used in any endpoints, it doesn't need a `FromRef` impl.

* Convert the `Tokens` component into a freestanding struct. (Owen Jacobson, 2025-10-28)

As with the `Setup` component, I've generalized the associated middleware across anything that can provide a `Tokens`, where possible.

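For context on the "anything that can provide a `Tokens`" shape, here is a minimal sketch using axum's `FromRef` (written against axum 0.7); the `Tokens` contents and the `require_token` name are illustrative, not Pilcrow's actual definitions:

```rust
use axum::{
    extract::{FromRef, Request, State},
    middleware::Next,
    response::Response,
};

#[derive(Clone)]
pub struct Tokens {
    // token-validation state elided
}

// Middleware usable with any state type `S` that can provide a `Tokens`,
// rather than being bolted to the concrete `App` type.
pub async fn require_token<S>(State(state): State<S>, request: Request, next: Next) -> Response
where
    Tokens: FromRef<S>,
{
    let _tokens = Tokens::from_ref(&state);
    // ...look up and validate the request's token against `_tokens` here...
    next.run(request).await
}
```

Any state type `S` for which `Tokens: FromRef<S>` holds - including `App` itself - can then attach this middleware via `axum::middleware::from_fn_with_state`.
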
* Convert the `Setup` component into a freestanding struct. (Owen Jacobson, 2025-10-28)

The changes to the setup-requiring middleware are probably more general than was strictly needed, but they will make it work with anything that can provide a `Setup` component rather than being bolted to `App` specifically, which feels tidier.

* Convert the `Messages` component to a freestanding struct. (Owen Jacobson, 2025-10-28)

* Convert `Logins` into a freestanding component. (Owen Jacobson, 2025-10-28)

* Convert `Invites` into a freestanding component. (Owen Jacobson, 2025-10-28)

* Convert the `Events` app component into a freestanding struct. (Owen Jacobson, 2025-10-28)

This one doesn't need a `FromRef` impl at this time, as it's only ever used in a handler that also uses other components and so will need to continue receiving `App`. However, there's little reason not to make the implementation of the `Events` struct consistent.

* Convert the `Conversations` component into a freestanding struct. (Owen Jacobson, 2025-10-28)

Unlike the previous example, this involves cloning an event broadcaster as well. This is, per the documentation, how the type may be used. From <https://docs.rs/tokio/latest/tokio/sync/broadcast/fn.channel.html>:

> The Sender can be cloned to send to the same channel from multiple points in the process or it can be used concurrently from an `Arc`.

The language is less firm than the language sqlx uses for its pool, but the intent is clear enough, and it works in practice.

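As a self-contained illustration of that clone-to-share behaviour, here is a minimal sketch using tokio's `broadcast` channel; the `Event` payload and the capacity of 16 are stand-ins:

```rust
use tokio::sync::broadcast;

#[derive(Clone, Debug)]
struct Event; // stand-in for the real event payload

#[tokio::main]
async fn main() {
    let (sender, mut receiver) = broadcast::channel::<Event>(16);

    // Each component holds its own clone of the sender; every clone feeds
    // the same underlying channel.
    let for_conversations = sender.clone();
    let for_messages = sender.clone();

    for_conversations.send(Event).unwrap();
    for_messages.send(Event).unwrap();

    assert!(receiver.recv().await.is_ok());
    assert!(receiver.recv().await.is_ok());
}
```
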
* Make `Boot` a freestanding app type, rather than a view of `crate::app::App`'s internals. (Owen Jacobson, 2025-10-28)

In the course of working on web push, I determined that we probably need to make `App` generic over the web push client we're using, so that tests can use a dummy client while the real app uses a client created at startup and maintained over the life of the program's execution. The most direct implementation of that is to render App as `App<P>`, where the parameter is occupied by the specific web push client type in use. However, doing this requires refactoring at _every_ site that mentions `App`, including every handler, even though the vast majority of those sites will not be concerned with web push.

I reviewed a few options with @wlonk:

* Accept the type parameter and apply it everywhere, as the cost of supporting web push.
* Hard-code the use of a specific web push client.
* Insulate handlers &c from `App` via provider traits, mimicking what we do for repository provider traits today.
* Treat each app type as a freestanding state in its own right, so that only push-related components need to consider push clients (as far as is feasible).

This is a prototype towards that last point, using a simple app component (boot) as a testbed. `FromRef` allows handlers that take a `Boot` to be used in routes that provide an `App`, so this is a contained change. However, the structure of `FromRef` prevents `Boot` from carrying any lifetime narrower than `'static`, so it now holds clones of the state fields it acquires from `App`, instead of references. This is fine - that's just a database pool, and sqlx's pool type is designed to be shared via cloning. From <https://docs.rs/sqlx/latest/sqlx/struct.Pool.html>:

> Cloning Pool is cheap as it is simply a reference-counted handle to the inner pool state.

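A minimal sketch of the pattern described above, assuming axum's `FromRef` and an sqlx `SqlitePool`; the field layout is illustrative rather than Pilcrow's actual `App`:

```rust
use axum::extract::FromRef;
use sqlx::SqlitePool;

#[derive(Clone)]
struct App {
    db: SqlitePool,
    // ...other components elided...
}

// Freestanding state for boot-related handlers. Because `FromRef` requires
// `'static`, `Boot` holds a clone of the pool rather than a reference into
// `App`; cloning a `Pool` just copies a reference-counted handle.
#[derive(Clone)]
struct Boot {
    db: SqlitePool,
}

impl FromRef<App> for Boot {
    fn from_ref(app: &App) -> Self {
        Boot { db: app.db.clone() }
    }
}
```

Handlers that extract `State<Boot>` can then be mounted on a router whose state is `App`, without ever naming the rest of `App`'s fields.
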
* Automatically reorder imports to my preferred style. (Owen Jacobson, 2025-08-30)

* Remove entirely-redundant synchronization inside of Broadcaster. (Owen Jacobson, 2025-08-26)

Per <https://docs.rs/tokio/latest/tokio/sync/broadcast/struct.Sender.html>, a `Sender` is safe to share between threads. The clone behaviour we want is also provided by its `Clone` impl directly, and we don't need to wrap the sender in an `Arc` to share it. It's amazing what you can find in the docs.

* Consolidate `events.map(…).collect()` calls into `Broadcaster`. (Owen Jacobson, 2025-08-26)

This conversion, from an iterator of type-specific events (say, `user::Event` or `message::Event`), into a `Vec<event::Event>`, is pervasive, and it needs to be done each time. Having Broadcaster expose a support method for this cuts down on the repetition, at the cost of a slightly alarming amount of type-system nonsense in `broadcast_from`.

Historical footnote: the internal message structure is a Vec and not an individual message so that bulk operations, like expiring channels and messages, won't disconnect everyone if they happen to dispatch more than sixteen messages (the current queue depth limit) at once. We trade allocation and memory pressure for keeping the connections alive. _Most_ event publishing is an iterator of one item, so the Vec allocation is redundant.

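A sketch of what such a support method might look like, assuming `Broadcaster` wraps a `tokio::sync::broadcast::Sender<Vec<Event>>`; the event variants are stand-ins for the real `user::Event`, `message::Event`, and `event::Event` types:

```rust
use tokio::sync::broadcast;

#[derive(Clone)]
pub enum Event {
    User(String),    // stand-in for `user::Event`
    Message(String), // stand-in for `message::Event`
}

pub struct Broadcaster {
    sender: broadcast::Sender<Vec<Event>>,
}

impl Broadcaster {
    pub fn new(capacity: usize) -> Self {
        let (sender, _) = broadcast::channel(capacity);
        Broadcaster { sender }
    }

    /// Convert an iterator of type-specific events into the shared event
    /// type and publish them as a single batch.
    pub fn broadcast_from<I>(&self, events: I)
    where
        I: IntoIterator,
        Event: From<I::Item>,
    {
        let events: Vec<Event> = events.into_iter().map(Event::from).collect();
        // An error here only means there are currently no receivers.
        let _ = self.sender.send(events);
    }
}
```

In this sketch the type-system work amounts to the `Event: From<I::Item>` bound, which lets callers hand over iterators of type-specific events without converting them first.
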
* Store `User` instances using their events. (Owen Jacobson, 2025-08-26)

* Store `Message` instances using their events. (Owen Jacobson, 2025-08-26)

I found a test bug! The tests for deleting previously-deleted or previously-expired messages were using the wrong user to try to delete those messages. The tests happened to pass anyways because the message authorship check was done after the message lifecycle check. They would have no longer passed; the tests are fixed to use the sender, instead.

* Store `Conversation` instances using their events. (Owen Jacobson, 2025-08-26)

This replaces the approach of having the repo type know about conversation lifecycle in detail. Instead, the repo type accepts events and applies them to the DB blindly. The SQL written to implement each event does, however, embed assumptions about what order events will happen in.

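A minimal sketch of that "accept events and apply them blindly" shape, assuming sqlx and sqlite; the event variants, table, and column names are hypothetical:

```rust
use sqlx::SqliteConnection;

pub enum Event {
    Created { id: String, name: String },
    Deleted { id: String },
}

// The repo no longer reasons about conversation lifecycle; it translates
// each event into SQL. The ordering assumption lives in the SQL itself:
// `Deleted` presumes the `Created` row already exists.
pub async fn apply(conn: &mut SqliteConnection, event: &Event) -> sqlx::Result<()> {
    match event {
        Event::Created { id, name } => {
            sqlx::query("insert into conversation (id, name) values (?, ?)")
                .bind(id.as_str())
                .bind(name.as_str())
                .execute(&mut *conn)
                .await?;
        }
        Event::Deleted { id } => {
            sqlx::query("update conversation set deleted_at = datetime('now') where id = ?")
                .bind(id.as_str())
                .execute(&mut *conn)
                .await?;
        }
    }
    Ok(())
}
```
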
* Allow callers to pass `Instant`s to `Sequence` predicate constructors. (Owen Jacobson, 2025-08-26)

* Split `user` into a chat-facing entity and an authentication-facing entity. (ojacobson, 2025-08-26)

The taxonomy is now as follows:

* A _login_ is someone's identity for the purposes of authenticating to the service. Logins are not synchronized, and in fact are not published anywhere in the current API. They have a login ID, a name and a password.
* A _user_ is someone's identity for the purpose of participating in conversations. Users _are_ synchronized, as before. They have a user ID, a name, and a creation instant for the purposes of synchronization.

## API changes

* The `GET /api/boot` method now returns a `login` key instead of a `user` key. The structure of the nested value is unchanged. This change is not backwards-compatible; the included client and the docs have been updated accordingly.

## Server implementation

* Most app methods that took a `&User` as an identity now take a `&Login` as an identity, instead. Where a `User` is needed, the new `tx.users().for_login(&login)` database access method resolves a `Login` to its corresponding `user::History`, which can then be turned into a `User` at whatever point in time is most appropriate. This adds a few new error cases to methods that traverse the login-to-history-to-user chain. Those cases are presently unreachable, but I've fully fleshed them out so that they don't bite us later. Most of the resulting errors, however, are captured as internal server errors.
* There is a new `app.logins()` application entry point, dealing with login identities and password-based logins.
* `app.tokens()` is a bit more limited in scope: it covers only things that work with an existing token. That has the side effect of splitting up logging in (in `app.logins().with_password(…)`) and logging out (in `app.tokens().logout(…)`).

## Schema changes

The `user` table has been split:

* `login` holds the data needed for the user to log in - their login ID, their name, and their password.
* `user` now holds only the user ID and the event data for the user's `created` instant.

Reconstructing a `User` struct requires joining in data from both `login` and `user`. In theory, the relationship is one-way: every user has a login. In practice, it's reciprocal: every login has a user and every user has a login.

Relationships with downstream tables have been modified to suit:

* `message` still refers to `user` for authorship information.
* `invite` still refers to `user` for originator information.
* `token` refers to `login` for authentication information.

## Blimey, that's big

Yeah, I know. It's hard to avoid, and I'm not sure the effort of making this change in incremental steps is worth it. Authentication logic has a way of getting into all sorts of corners, and Pilcrow is no different. In order for the new taxonomy to make sense, all of the places that previously used `User` as a representation of an authenticated identity have to be updated, and it's easier to do that all at once, so that we can retire all the code that _supports_ using a `User` that way.

Merges split-user into main.

* Split `user` into a chat-facing entity and an authentication-facing entity. (Owen Jacobson, 2025-08-26)

The taxonomy is now as follows:

* A _login_ is someone's identity for the purposes of authenticating to the service. Logins are not synchronized, and in fact are not published anywhere in the current API. They have a login ID, a name and a password.
* A _user_ is someone's identity for the purpose of participating in conversations. Users _are_ synchronized, as before. They have a user ID, a name, and a creation instant for the purposes of synchronization.

In practice, a user exists for every login - in fact, users' names are stored in the login table and are joined in, rather than being stored redundantly in the user table. A login ID and its corresponding user ID are always equal, and the user and login ID types support conversion and comparison to facilitate their use in this context.

Tokens are now associated with logins, not users. The currently-acting identity is passed down into app types as a login, not a user, and then resolved to a user where appropriate within the app methods. As a side effect, the `GET /api/boot` method now returns a `login` key instead of a `user` key. The structure of the nested value is unchanged.

* Generate tokens in memory and then store them. (Owen Jacobson, 2025-08-26)

This is the leading edge of a larger storage refactoring, where repo types stop doing things like generating secrets or deciding whether to carry out an operation. To make this work, there is now a `Token` type that holds the complete state of a token, in memory.

* Split the `user` table into an authentication portion and a chat portion. (Owen Jacobson, 2025-08-26)

We'll be building separate entities around this in future commits, to better separate the authentication data (non-synchronized and indeed "not public") from the chat data (synchronized and public).

* Factor out common authentication test verification steps into helpers. (Owen Jacobson, 2025-08-26)

These checks tended to be wordy, and were prone to being done subtly differently in different locations for no good reason. Centralizing them cleans this up and makes the tests easier to follow, at the expense of making it somewhat harder to follow what the test is specifically checking.

* Return an identity, rather than the parts of an identity, when validating an identity token. (Owen Jacobson, 2025-08-25)

This is a small refactoring that's been possible for a while, and we only just noticed.

* Use the imported name, since we have it. (Owen Jacobson, 2025-08-26)

* Group Rust imports by crate. (Owen Jacobson, 2025-08-25)

I've been doing this by hand anyways, and this makes it a _ton_ less tedious to maintain. I think it looks nice. This does, however, require nightly - for formatting only.

* Stop returning a body from `POST /api/password`. (Owen Jacobson, 2025-08-24)

* Remove the now-unused return value from the final stage of user creation. (Owen Jacobson, 2025-08-24)

* Stop returning an HTTP body from `POST /api/invite/:id`. (Owen Jacobson, 2025-08-24)

As with the previous commits, the body was never actually being used.

* Stop returning body data from `POST /api/auth/login`. (Owen Jacobson, 2025-08-24)

As with `/api/setup`, the response was an ad-hoc choice, which we are not using and which constrains future development just by existing.

* Stop returning body data from `POST /api/setup`. (Owen Jacobson, 2025-08-24)

This API response was always ad-hoc, and the client doesn't use it. To free up some maneuvering room for server refactorings, stop sending it. We can add a response in the future if there's a need.

* Define a canonical "empty" response. (Owen Jacobson, 2025-08-24)

This is a bit tidier and easier to assert on than returning a bare HTTP status code, but is otherwise interchangeable with it.

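A sketch of what a canonical empty response can look like in axum; the `Empty` name and the 204 status are illustrative choices, not necessarily the ones Pilcrow made:

```rust
use axum::{
    http::StatusCode,
    response::{IntoResponse, Response},
};

/// The canonical "nothing to say" response: no body, but a named type that
/// handlers can return and tests can assert on directly.
pub struct Empty;

impl IntoResponse for Empty {
    fn into_response(self) -> Response {
        StatusCode::NO_CONTENT.into_response()
    }
}

// A handler then simply returns it:
pub async fn setup() -> Empty {
    // ...perform the actual work...
    Empty
}
```
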
* Collapse redundant "deleted_at" timestamps and "deleted" event instants. (Owen Jacobson, 2025-08-24)

These were separated as there wasn't an obvious way to serialize two fields with the same _type_ with different _prefixes_. Turns out this is a common problem, and someone's written a crate for it that remaps the names for you.

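The crate isn't named above; `serde_with`'s `with_prefix!` is one crate that remaps names this way, and the following sketch assumes it (field and type names are illustrative):

```rust
use serde::Serialize;
use serde_with::with_prefix;

#[derive(Serialize)]
struct Instants {
    at: i64, // stand-in for the real instant type
}

with_prefix!(prefix_deleted "deleted_");

#[derive(Serialize)]
struct Message {
    id: String,
    // Flattened with a prefix, this serializes as `deleted_at`, so the
    // event instant and the old timestamp no longer need separate fields.
    #[serde(flatten, with = "prefix_deleted")]
    deleted: Instants,
}
```
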
* Hoist `password` out to the top level. (Owen Jacobson, 2025-08-24)

Having this buried under `crate::user` makes it hard to split up the roles `user` fulfils right now. Moving it out to its own module makes it a bit tidier to reuse it in a separate, authentication-only way.

* Add conversions between String and Id<T>. (Owen Jacobson, 2025-08-24)

There's already an implicit conversion (via serialization); it's just awkward to use. However, we now need those conversions more directly.

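A sketch of what the direct conversions might look like, assuming the marker-parameterized `Id<T>` described in the "specializations" commit further down; whether the string-to-ID direction should really be a fallible `TryFrom` depends on validation this sketch omits:

```rust
use std::marker::PhantomData;

pub struct Id<T> {
    value: String,
    _kind: PhantomData<T>,
}

impl<T> From<String> for Id<T> {
    fn from(value: String) -> Self {
        Id { value, _kind: PhantomData }
    }
}

impl<T> From<Id<T>> for String {
    fn from(id: Id<T>) -> Self {
        id.value
    }
}
```
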
* Rust 1.89: Add elided lifetime parameters (`'_`) where appropriate. (Owen Jacobson, 2025-08-13)

Rust 1.89 added a new warning:

    warning: hiding a lifetime that's elided elsewhere is confusing
     --> src/setup/repo.rs:4:14
      |
    4 |     fn setup(&mut self) -> Setup;
      |              ^^^^^^^^^     ----- the same lifetime is hidden here
      |              |
      |              the lifetime is elided here
      |
      = help: the same lifetime is referred to in inconsistent ways, making the signature confusing
    help: use `'_` for type paths
      |
    4 |     fn setup(&mut self) -> Setup<'_>;
      |                                 ++++

I don't entirely agree with the style advice here, but lifetime elision style is an evolving area in Rust and I'd rather track the Rust team's recommendations than invent my own, so I've added all of them.

* Stop mentioning private error types in doctest boilerplate. (Owen Jacobson, 2025-08-13)

In 792de8e49fa8a3c04bfb747adadf71572d753055, `crate::cli::Error` was made private. I forgot to update the doctest that mentions it.

* Define ID types as specializations, rather than newtypes. (Owen Jacobson, 2025-07-24)

This is based heavily on the work done for normalized strings, in `crate::normalize`. The key realization in that module is that the logic distinguishing one kind of thing (normalized strings in that case, IDs in this case) can be packaged up as a type token, and that doing so may reduce the overall complexity. This implementation for ID also borrows heavily from the implementation for normalized strings.

It's less flexible: an ID implemented this way can't expose _less_ of `crate::id::ID`'s interface, whereas newtype wrappers can, for example. However, our code doesn't use that flexibility on purpose anywhere and we're relatively unlikely to change that. In return, the individual ID types require substantially less code - they do not, for example, need to re-implement `Display` for themselves.

I very nearly made the trait `Prefix`:

```rust
pub trait Prefix {
    const PREFIX: &str;
}
```

However, I think having an effectively-constant method is less surprising overall.

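A sketch of the type-token shape this describes, using the effectively-constant method rather than the rejected associated const; all names here are illustrative:

```rust
use std::fmt;
use std::marker::PhantomData;

/// Implemented by marker types such as `User` or `Message` to choose the
/// human-readable prefix for their IDs.
pub trait Kind {
    fn prefix() -> &'static str;
}

pub struct Id<T: Kind> {
    value: String,
    _kind: PhantomData<T>,
}

// One `Display` impl covers every ID type; newtype wrappers would each
// have to re-implement this themselves.
impl<T: Kind> fmt::Display for Id<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}{}", T::prefix(), self.value)
    }
}

pub struct User;

impl Kind for User {
    fn prefix() -> &'static str {
        "usr_"
    }
}

pub type UserId = Id<User>;
```
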
* Remove `pilcrow::cli::Error` from the lib crate's public interface. (Owen Jacobson, 2025-07-22)

This might be the pettiest rude change I've ever made to a Rust program. If I saw this - or did this - in code _intended_ to be used as a library, I'd be appalled.

* Stop linking to private documentation items in public docs. (Owen Jacobson, 2025-07-22)

The Pilcrow crate library docs are something of a wart; Pilcrow isn't meant to be used as a library, and the only public interface it exposes is the CLI entry point. However, we will likely be publishing Pilcrow via crates.io (among other options), and so it will _be usable_ as a library if you're desperate enough to try. The docs should at least be coherent.

* Add a `--umask` option to determine what permissions new files/databases get. (Owen Jacobson, 2025-07-18)

The new `--umask` option takes one of three values:

* `--umask masked`, the default, takes the inherited umask and forces o+rwx on.
* `--umask inherit` takes the inherited umask as-is.
* `--umask OCTAL` sets the umask to exactly `OCTAL` and is broadly equivalent to `umask OCTAL && pilcrow --umask inherit`.

This fell out of a conversation with @wlonk, who is working on notifications. Since notifications may require [VAPID] keys, the server will need a way to store those keys. That would generally be "in the pilcrow database," which led me to the observation that Pilcrow creates that database as world-readable by default. "World-readable" and "encryption/signing keys" are not things that belong in the same sentence.

[VAPID]: https://datatracker.ietf.org/doc/html/rfc8292

The most "obvious" solution would be to set the permissions used for the sqlite database when it's created. That's harder than it sounds: sqlite has no built-in facility for doing this. The closest thing that exists today is the [`modeof`] query parameter, which copies the permissions (and ownership) from some other file. We also can't reliably set the permissions ourselves, as sqlite may - depending on build options and configuration - [create multiple files][wal].

[`modeof`]: https://www.sqlite.org/uri.html
[wal]: https://www.sqlite.org/wal.html

Using `umask` is a whole-process solution to this. As Pilcrow doesn't attempt to create other files, there's little issue with doing it this way, but this is a design risk for future work if it creates files that are _intended_ to be readable by more than just the Pilcrow daemon user.

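A sketch of how the three modes could be applied at startup, assuming a libc-based implementation; `UmaskOption` and `apply_umask` are illustrative stand-ins for the real CLI types:

```rust
// `UmaskOption` and `apply_umask` are hypothetical names.
pub enum UmaskOption {
    Masked,
    Inherit,
    Octal(libc::mode_t),
}

pub fn apply_umask(option: UmaskOption) {
    // umask(2) only sets the mask, returning the previous value, so reading
    // the inherited mask is a set-then-restore dance.
    let inherited = unsafe {
        let previous = libc::umask(0);
        libc::umask(previous);
        previous
    };

    let mask = match option {
        UmaskOption::Inherit => inherited,
        // Force the "other" bits into the mask so new files are never
        // world-readable, whatever mask the daemon inherited.
        UmaskOption::Masked => inherited | 0o007,
        UmaskOption::Octal(bits) => bits,
    };

    unsafe { libc::umask(mask) };
}
```
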
* Set up a skeleton for swatches. (Owen Jacobson, 2025-07-08)

A swatch is a live, and ideally editable, example of an element of the service. They serve as:

* Documentation: what is this element, how do you use it, what does it do?
* Demonstration: what does this element look like?
* Manual test scaffolding: when I change this element like _so_, what happens?

Swatches are collectively available under `/.swatch/` on a running instance, and are set up in a separate [group] from the rest of the UI. They do not require setup or login for simplicity's sake and because they don't _do_ anything that requires either of those things.

[group]: https://svelte.dev/docs/kit/advanced-routing#Advanced-layouts-(group)

Swatches are manually curated, for a couple of reasons:

* We lack the technical infrastructure needed to do this based on static analysis; and
* Manual curation lets us include affordances like "recommended values" that would be tricky to express as part of the type or schema for the component.

The tradeoff, however, is that swatches may fall out of step with the components they depict, if not reviewed regularly. I hope that, by making them part of the development process, this risk will be mitigated through regular use.

* Move the `/ch` channel view to `/c` (for conversation). (Owen Jacobson, 2025-07-03)

* Replace `channel` with `conversation` throughout the API. (Owen Jacobson, 2025-07-03)

This is a **breaking change** for essentially all clients. Thankfully, there's presently just the one, so we don't need to go to much effort to accommodate that; the client is modified in this commit to adapt, users can reload their client, and life will go on.

* Rename "channel" to "conversation" within the server.Owen Jacobson2025-07-03
| | | | | | I've split this from the schema and API changes because, frankly, it's huge. Annoyingly so. There are no semantic changes in this, it's all symbol changes, but there are a _lot_ of them because the term "channel" leaks all over everything in a service whose primary role is managing messages sent to channels (now, conversations). I found a buggy test while working on this! It's not fixed in this commit, because it felt mean to hide a real change in the middle of this much chaff.
* Rename "channel" to "conversation" in the database.Owen Jacobson2025-07-03
| | | | | | I've - somewhat arbitrarily - started renaming column aliases, as well, though the corresponding Rust data model, API fields and nouns, and client code still references them as "channel" (or as derived terms). As with so many schema changes, this entails a complete rebuild of a substantial portion of the schema. sqlite3 still doesn't have very many `alter table` primitives, for renaming columns in particular.
* Prevent sending messages to deleted channels.Owen Jacobson2025-07-03
| | | | I've opted to make it clear in the error message which scenario - deleted vs. non-existant - a channel falls into. This isn't particularly consistent with the rest of the API, so we might need to review this decision later, but it's at least relatively harmless if it's mistaken. (Formally, they're both 404s, so clients that go by error code won't notice.)
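A sketch of an error type with that property - distinct messages, one status code - using axum; the names and messages are hypothetical:

```rust
use axum::{
    http::StatusCode,
    response::{IntoResponse, Response},
};

// The two scenarios stay distinct in the message, but collapse to the same
// status code on the wire.
pub enum SendError {
    ConversationNotFound,
    ConversationDeleted,
}

impl IntoResponse for SendError {
    fn into_response(self) -> Response {
        let message = match self {
            SendError::ConversationNotFound => "no such conversation",
            SendError::ConversationDeleted => "conversation has been deleted",
        };
        (StatusCode::NOT_FOUND, message).into_response()
    }
}
```
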
* Sending messages to a deleted channel should send to the deleted channel's ID, not to a fictitious ID. (Owen Jacobson, 2025-07-01)

The existing test scenario:

* Create a channel (with ID C1).
* Delete channel C1.
* Roll the dice to invent a channel ID C2.
* Send a message to channel C2.
* Observe that sending fails.

This was not verifying anything about the deleted channel C1 - it was basically reproducing the `nonexistent_channel` test scenario with the most marginal of garnishes on it. This is probably copy-paste damage from when this test was originally written. Sending did fail, so this test scenario passed, but it passed effectively by accident.

The new test scenario:

* Create a channel (with ID C1).
* Delete channel C1.
* Send a message to channel C1.
* Observe that sending fails.

Concerningly, sending does _not_ fail in this scenario (i.e., the test _does_ fail), so the broken test case was masking a real bug.

* Send back the current state as events, not snapshots, during client boot. (ojacobson, 2025-07-01)

There are a couple of contributing reasons for this.

* Each client's author - even ourselves - is best positioned to know how best to convert history into state to meet the needs of that specific client. There is (probably) no universal solution. You can already see this with the built-in client, where unread tracking gets stapled onto snapshots locally and maintained as events roll in, and I would expect this to happen more and more regularly over time. If we ever sprout other clients, I'd also expect their local state to be different. The API, on the other hand, must expose a surface that's universal to all clients. For boot, that was a very rote list-of-nouns data model. The other option is to expose a surface specific to one client and make other clients accommodate, which is contrary to the goals of this project.
* The need to compute snapshots adds friction when adding or changing the behaviour of the API, even when those changes only tangentially touch `/api/boot`. For example, my work on adding messages to multiple conversations got hung up in trying to figure out how to represent that at boot time, plus how to represent that in the event stream.
* The rationale for sending back a computed snapshot of the state was to avoid having the client replay events from the beginning of time, and to limit the amount of data sent back. This didn't pan out - most snapshots in practice consisted of the same data you'd get from the event stream anyways, almost with a 1:1 correspondence (with a `sent` or `created` event standing in for a `messages`, `channels`, or `users` entry). Exceptions - deleted messages and channels - were rare, and are ephemeral.
* Generating the snapshots requires loading the entire history into memory anyways. We're not saving any server-side IO by computing snapshots, but we are spending server-side compute time to generate them for clients that are then going to throw them away, as above.

This change resolves these tensions by delegating state management _entirely_ to the client, removing the server-side state snapshots. The server communicates in events only.

## Alternatives

I joked to @wlonk that the "2.0-bis" version of this change always returns `resume_point` 0 and an empty events list. That would be correct, and compatible with the client logic in this change, and would actually work. In fact, we could get rid of the event part of `/api/boot` entirely, and require clients to consume the event stream from the beginning every time they reconnect.

The main reason I _don't_ want to do this has to do with reconnects. Right now - both with snapshots, before this change, and with events, after - the client can cleanly delineate "historical" events (to be applied while the state is not presented to the user) and "current" events (to be presented to the user immediately). The `text/event-stream` protocol has no way to make that distinction out of the box, and while we can hack something in, all the approaches I can think of are nasty.

Merges boot-events into main.

* Remove the snapshot fields from `/api/boot`. (Owen Jacobson, 2025-06-20)

Clients now _must_ construct their state from the event stream; it is no longer possible for them to delegate that work to the server.

* Include historical events in the boot response. (Owen Jacobson, 2025-06-20)

The returned events are all events up to and including the `resume_point` in the same response. If combined with the events from `/api/events?resume_point=x`, using the same `resume_point`, the client will have a complete event history, less any events from histories that have been purged.

* Support querying event sequences via iterators or streams. (Owen Jacobson, 2025-06-20)

These filters are meant to be used with, respectively, `Iterator::filter_map` and `StreamExt::filter_map`. The two operations are conceptually the same - they pass an item from the underlying sequence to a function that returns an option, drop the values for which the function returns `None`, and yield the value inside of `Some` in the resulting sequence. However, `Iterator::filter_map` takes a function from the iterator elements to `Option<T>`. `StreamExt::filter_map` takes a function from the iterator elements to _a `Future` whose output is `Option<T>`_. As such, you can't easily use functions designed for one use case, for the other. You need an adapter - conventionally, `futures::future::ready`, if you have a non-async function and need an async one.

This provides two sets of sequence filters:

* `crate::test::fixtures::event` contains functions which return `Option` directly, and which are intended for use with `Iterator::filter_map`.
* `crate::test::fixtures::event::stream` contains lifted versions that return a `Future`, and which are intended for use with `StreamExt::filter_map`.

The lifting is done purely manually. I spent a lot of time writing clever-er versions before deciding on this; those were fun to write, but hell to read and not meaningfully better, and this is test support code, so we want it to be dumb and obvious. Complexity for the sake of intellectual satisfaction is a huge antifeature in this context.

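A minimal sketch of the two flavours and the manual lifting, using `std::future::ready`; the `Event` shape and filter names are illustrative:

```rust
use std::future::{ready, Future};

#[derive(Clone)]
pub enum Event {
    Created { id: String },
    Deleted { id: String },
}

/// `Iterator::filter_map`-friendly: a plain function returning a plain `Option`.
pub fn created(event: Event) -> Option<String> {
    match event {
        Event::Created { id } => Some(id),
        _ => None,
    }
}

/// `StreamExt::filter_map`-friendly: the same predicate, lifted by hand so
/// it returns a `Future` instead of a bare `Option`.
pub mod stream {
    use super::*;

    pub fn created(event: Event) -> impl Future<Output = Option<String>> {
        ready(super::created(event))
    }
}
```
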