path: root/Cargo.toml
* Crate dependency upgrades. (Owen Jacobson, 2025-11-09)

  We're still constrained by the matching dependency on libsqlite3-sys between sqlx and rusqlite, but we can at least upgrade sqlx freely. Later versions of rusqlite require newer libsqlite3-sys than sqlx supports.

  This does not upgrade rand/rand_core, as there's a breaking API change in 0.9 that needs more work to integrate.
* Remove support for detecting and rejecting databases from historical migration streams. (Owen Jacobson, 2025-11-09)

  All known instances of that migration stream have long since disappeared, and this lets us get rid of a dependency.
* De minimis "send me a notification" implementation. (Owen Jacobson, 2025-11-08)

  When a user clicks "send a test notification," Pilcrow delivers a push message (with a fixed payload) to all active subscriptions. The included client then displays this as a notification, using browser APIs to do so. This lets us verify that push notification works, end to end - and it appears to.

  The API endpoint for sending a test notification is not documented. I didn't feel it prudent to extensively document an endpoint that is intended to be temporary and whose side effects are very much subject to change. However, for posterity, the endpoint is

      POST /api/push/ping
      {}

  and the push message payload is

      ping

  Subscriptions with permanent delivery failures are nuked when we encounter them. Subscriptions with temporary failures cause the `ping` endpoint to return an internal server error, and are not retried. We'll likely want retry logic - including retry logic to handle server restarts - for any more serious use, but for a smoke test, giving up immediately is fine.

  To make the push implementation testable, `App` is now generic over it. Tests use a dummy implementation that stores sent messages in memory. This has some significant limitations, documented in the test suite, but it beats sending real notifications to nowhere in tests.
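  As a sketch of the seam this describes - with a hypothetical trait name and signature, not necessarily what Pilcrow's `App` is actually generic over - the test-side dummy can be as small as:

      use std::sync::Mutex;

      // Hypothetical abstraction over "deliver a push message".
      trait PushSender {
          fn send(&self, endpoint: &str, payload: &[u8]) -> Result<(), PushError>;
      }

      #[derive(Debug)]
      struct PushError;

      // Test dummy: records messages in memory instead of contacting a push broker.
      #[derive(Default)]
      struct RecordingSender {
          sent: Mutex<Vec<(String, Vec<u8>)>>,
      }

      impl PushSender for RecordingSender {
          fn send(&self, endpoint: &str, payload: &[u8]) -> Result<(), PushError> {
              self.sent
                  .lock()
                  .unwrap()
                  .push((endpoint.to_string(), payload.to_vec()));
              Ok(())
          }
      }

  Tests can then assert against the recorded messages rather than observing real deliveries.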
* Add an endpoint for creating push subscriptions. (Owen Jacobson, 2025-11-06)

  The semantics of this endpoint are somewhat complex, and are incompletely captured in the associated docs change. For posterity, the intended workflow is:

  1. Obtain Pilcrow's current VAPID key by connecting (it's in the events, either from boot or from the event stream).
  2. Use the browser push APIs to create a push subscription, using that VAPID key.
  3. Send Pilcrow the push subscription endpoint and keys, plus the VAPID key the client used to create it so that the server can detect race conditions with key rotation.
  4. Wait for messages to arrive.

  This commit does not introduce any actual messages, just subscription management endpoints.

  When the server's VAPID key is rotated, all existing subscriptions are discarded. Without the VAPID key, the server cannot service those subscriptions. We can't exactly notify the broker to stop processing messages on those subscriptions, so this is an incomplete solution to what to do if the key is being rotated due to a compromise, but it's better than nothing.

  The shape of the API endpoint is heavily informed by the [JSON payload][web-push-json] provided by browser Web Push implementations, to ease client development in a browser-based context. The idea is that a client can take that JSON and send it to the server verbatim, without needing to transform it in any way, to submit the subscription to the server for use.

  [web-push-json]: https://developer.mozilla.org/en-US/docs/Web/API/PushSubscription/toJSON

  Push subscriptions are operationally associated with a specific _user agent_, and have no inherent relationship with a Pilcrow login or token (session). Taken as-is, a subscription created by user A could be reused by user B if they share a user agent, even if user A logs out before user B logs in. Pilcrow therefore _logically_ associates push subscriptions with specific tokens, and abandons those subscriptions when the token is invalidated by

  * logging out,
  * expiry, or
  * changing passwords.

  (There are no other token invalidation workflows at this time.) Stored subscriptions are also abandoned when the server's VAPID key changes.
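  For reference, a request body that mirrors `PushSubscription.toJSON()` might deserialize like this (struct names and the extra VAPID field are illustrative, not Pilcrow's actual types):

      use serde::Deserialize;

      // Field names follow the JSON produced by PushSubscription.toJSON(),
      // so a browser client can forward that object verbatim.
      #[derive(Deserialize)]
      #[serde(rename_all = "camelCase")]
      struct CreateSubscription {
          endpoint: String,
          #[serde(default)]
          expiration_time: Option<f64>,
          keys: SubscriptionKeys,
          // Hypothetical extra field: the VAPID public key the client used,
          // so the server can detect races with key rotation.
          vapid_key: Option<String>,
      }

      #[derive(Deserialize)]
      struct SubscriptionKeys {
          p256dh: String,
          auth: String,
      }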
* Raise minimum Rust version to 1.90. (Owen Jacobson, 2025-10-24)

  We've made stylistic changes that follow from Rust 1.86 through 1.89 anyways, and I'm not at all confident we aren't using APIs that only exist in those versions.
* Generate, store, and deliver a VAPID key. (Owen Jacobson, 2025-08-30)

  VAPID is used to authenticate applications to push brokers, as part of the [Web Push] specification. It's notionally optional, but we believe [Apple requires it][apple], and in any case making it impossible to use subscription URLs without the corresponding private key available, and thus harder to impersonate the server, seems like a good security practice regardless.

  [Web Push]: https://developer.mozilla.org/en-US/docs/Web/API/Push_API
  [apple]: https://developer.apple.com/documentation/usernotifications/sending-web-push-notifications-in-web-apps-and-browsers

  There are several implementations of VAPID for Rust:

  * [web_push](https://docs.rs/web-push/latest/web_push/) includes an implementation of VAPID but requires callers to provision their own keys. We will likely use this crate for Web Push fulfilment, but we cannot use it for key generation.
  * [vapid](https://docs.rs/vapid/latest/vapid/) includes an implementation of VAPID key generation. It delegates to `openssl` to handle cryptographic operations.
  * [p256](https://docs.rs/p256/latest/p256/) implements NIST P-256 in Rust. It's maintained by the RustCrypto team, though as of this writing it is largely written by a single contributor. It isn't specifically designed for use with VAPID.

  I opted to use p256 for this, as I believe the RustCrypto team are the most likely to produce a correct and secure implementation, and because openssl has consistently left a bad taste in my mouth for years. Because it's a general implementation of the algorithm, I expect that it will require more work for us to adapt it for use with VAPID specifically; I'm willing to chance it, and we can swap it out for the vapid crate if it sucks.

  This has left me with one area of uncertainty: I'm not actually sure I'm using the right parts of p256. The choice of `ecdsa::SigningKey` over `p256::SecretKey` is based on [the MDN docs] using phrases like "This value is part of a signing key pair generated by your application server, and usable with elliptic curve digital signature (ECDSA), over the P-256 curve," and on [RFC 8292]'s "The 'k' parameter includes an ECDSA public key in uncompressed form that is encoded using base64url encoding." However, we won't be able to test my implementation until we implement some other key parts of Web Push, which are out of scope of this commit. (See the sketch below.)

  [the MDN docs]: https://developer.mozilla.org/en-US/docs/Web/API/PushSubscription/options
  [RFC 8292]: https://datatracker.ietf.org/doc/html/rfc8292#section-3.2

  Following the design used for storing logins and users, VAPID keys are split into a non-synchronized part (consisting of the private key), whose exposure would allow others to impersonate the Pilcrow server, and a synchronized part (consisting of event coordinates and, notionally, the public key), which is non-sensitive and can be safely shared with any user. However, the public key is derived from the stored private key, rather than being stored directly, to minimize redundancy in the stored data.

  Following the design used for expiring stale entities, the app checks for and creates, or rotates, its VAPID key using middleware that runs before most API requests. If, at that point, the key is either absent, or more than 30 days old, it is replaced. This imposes a small tax on API request latency, which is used to fund prompt and automatic key rotation without the need for an operator-facing key management interface.

  VAPID keys are delivered to clients via the event stream, as laid out in `docs/api/events.md`. There are a few reasons for this, but the big one is that changing the VAPID key would immediately invalidate push subscriptions: we throw away the private key, so we wouldn't be able to publish to them any longer. Clients must replace their push subscriptions in order to resume delivery, and doing so promptly when notified that the key has changed will minimize the gap.

  This design is intended to allow for manual key rotation. The key can be rotated "immediately" by emptying the `vapid_key` and `vapid_signing_key` tables (which destroys the rotated key); the server will generate a new one before it is needed, and will notify clients that the key has been invalidated.

  This change includes client support for tracking the current VAPID key. The client doesn't _use_ this information anywhere, yet, but it has it.
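  As mentioned above, a sketch of the key-generation side with p256 (assuming the crate's `ecdsa` feature plus the base64 and rand_core crates; storage and rotation wiring omitted):

      use base64::Engine;
      use p256::ecdsa::SigningKey;
      use rand_core::OsRng;

      fn generate_vapid_key() -> (SigningKey, String) {
          // Private half: lives only in the non-synchronized tables.
          let signing_key = SigningKey::random(&mut OsRng);

          // Public half: the uncompressed SEC1 point, base64url-encoded, which is
          // the form RFC 8292's `k` parameter and the browser's
          // `applicationServerKey` option both expect.
          let public_key = base64::engine::general_purpose::URL_SAFE_NO_PAD
              .encode(signing_key.verifying_key().to_encoded_point(false).as_bytes());

          (signing_key, public_key)
      }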
* Collapse redundant "deleted_at" timestamps and "deleted" event instants. (Owen Jacobson, 2025-08-24)

  These were separated as there wasn't an obvious way to serialize two fields with the same _type_ with different _prefixes_. It turns out this is a common problem, and someone's written a crate for it that remaps the names for you.
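  One crate that implements this pattern is serde_with, via its `with_prefix!` macro; whether or not that's the exact crate adopted here, the shape is roughly (types below are illustrative):

      use serde::Serialize;
      use serde_with::with_prefix;

      #[derive(Serialize)]
      struct EventInstant {
          at: String,
          sequence: u64,
      }

      with_prefix!(prefix_created "created_");
      with_prefix!(prefix_deleted "deleted_");

      #[derive(Serialize)]
      struct Channel {
          name: String,
          // Serializes as created_at/created_sequence and deleted_at/deleted_sequence,
          // reusing one instant type with two different field prefixes.
          #[serde(flatten, with = "prefix_created")]
          created: EventInstant,
          #[serde(flatten, with = "prefix_deleted")]
          deleted: EventInstant,
      }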
* Add a `--umask` option to determine what permissions new files/databases get. (Owen Jacobson, 2025-07-18)

  The new `--umask` option takes one of three values:

  * `--umask masked`, the default, takes the inherited umask and forces o+rwx on.
  * `--umask inherit` takes the inherited umask as-is.
  * `--umask OCTAL` sets the umask to exactly `OCTAL` and is broadly equivalent to `umask OCTAL && pilcrow --umask inherit`.

  This fell out of a conversation with @wlonk, who is working on notifications. Since notifications may require [VAPID] keys, the server will need a way to store those keys. That would generally be "in the pilcrow database," which led me to the observation that Pilcrow creates that database as world-readable by default. "World-readable" and "encryption/signing keys" are not things that belong in the same sentence.

  [VAPID]: https://datatracker.ietf.org/doc/html/rfc8292

  The most "obvious" solution would be to set the permissions used for the sqlite database when it's created. That's harder than it sounds: sqlite has no built-in facility for doing this. The closest thing that exists today is the [`modeof`] query parameter, which copies the permissions (and ownership) from some other file. We also can't reliably set the permissions ourselves, as sqlite may - depending on build options and configuration - [create multiple files][wal].

  [`modeof`]: https://www.sqlite.org/uri.html
  [wal]: https://www.sqlite.org/wal.html

  Using `umask` is a whole-process solution to this. As Pilcrow doesn't attempt to create other files, there's little issue with doing it this way, but this is a design risk for future work if it creates files that are _intended_ to be readable by more than just the Pilcrow daemon user.
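  In terms of umask(2), the three modes boil down to something like this sketch (using the libc crate; CLI parsing and error handling omitted, and not necessarily how Pilcrow implements it):

      // umask(2) has no read-only form, so the inherited mask is read by
      // setting a throwaway value and then setting the real one.
      enum UmaskOption {
          Masked,               // default: inherited umask with o+rwx forced on
          Inherit,              // inherited umask, unchanged
          Exact(libc::mode_t),  // --umask OCTAL
      }

      fn apply_umask(option: UmaskOption) {
          unsafe {
              let inherited = libc::umask(0);
              let new_mask = match option {
                  UmaskOption::Masked => inherited | 0o007,
                  UmaskOption::Inherit => inherited,
                  UmaskOption::Exact(mode) => mode,
              };
              libc::umask(new_mask);
          }
      }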
* Unify `setup_required` middlewares. (Owen Jacobson, 2025-06-17)

  The two middlewares were identical but for the specific `IntoResponse` impl used to generate the response when setup has not been completed. However, unifying them while still using `from_fn_with_state` led to this horrorshow:

      .route_layer(middleware::from_fn_with_state(
          app.clone(),
          |state, req, next| {
              setup::middeware::setup_required(UNAVAILABLE, state, req, next)
          }
      ))

  It's a lot to read, and it surfaces the entire signature of a state-driven middleware `fn` into the call site solely to close over one argument (`UNAVAILABLE`). Rather than doing that, I've converted this middleware into a full-blown Tower middleware, following <https://docs.rs/axum/latest/axum/middleware/index.html#towerservice-and-pinboxdyn-future>. I considered taking this further and implementing a custom future to remove the allocation for `Box::pin`, but honestly, that allocation isn't hurting anyone and this code already got long enough in the translation.

  The new API looks like:

      .route_layer(setup::Required::or_unavailable(app.clone()))

  Or like:

      .route_layer(setup::Required::with_fallback(app.clone(), RESPONSE))

  One thing I would have liked to have avoided is the additional `app.clone()` argument, but there isn't a way to extract the _state_ from a request inside of an Axum middleware. It has to be passed in externally - that's what `from_fn_with_state` is doing under the hood, as well. Using `State` as an extractor doesn't work; the `State` extractor is special in a _bunch_ of ways, and this is one of them. Other extractors would work.

  Realistically, I'd probably want to explore interfaces like

      .route_layer(setup::Required(app).or_unavailable())

  or

      .route_layer(app.setup().required().or_unavailable())
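  For the record, the Tower-middleware shape from the linked axum docs looks roughly like this (the `App` type and its `setup_complete` check are stand-ins, and the matching `Layer` and constructors are omitted):

      use axum::{
          body::Body,
          http::{Request, StatusCode},
          response::{IntoResponse, Response},
      };
      use futures_util::future::BoxFuture;
      use std::task::{Context, Poll};
      use tower::Service;

      #[derive(Clone)]
      struct App; // stand-in for Pilcrow's real state type

      impl App {
          async fn setup_complete(&self) -> bool {
              true // stand-in check
          }
      }

      #[derive(Clone)]
      struct Required<S> {
          inner: S,
          app: App,
          fallback: StatusCode, // e.g. SERVICE_UNAVAILABLE
      }

      impl<S> Service<Request<Body>> for Required<S>
      where
          S: Service<Request<Body>, Response = Response> + Send + 'static,
          S::Future: Send + 'static,
      {
          type Response = S::Response;
          type Error = S::Error;
          type Future = BoxFuture<'static, Result<Self::Response, Self::Error>>;

          fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
              self.inner.poll_ready(cx)
          }

          fn call(&mut self, req: Request<Body>) -> Self::Future {
              let app = self.app.clone();
              let fallback = self.fallback;
              // Call the inner service up front, as the axum docs do, and only
              // await it once we know setup has completed.
              let inner = self.inner.call(req);
              Box::pin(async move {
                  if app.setup_complete().await {
                      inner.await
                  } else {
                      Ok(fallback.into_response())
                  }
              })
          }
      }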
* Build the Sveltekit UI into Cargo's OUT_DIR. (Owen Jacobson, 2025-05-28)

  This has a couple of material consequences:

  * It will be (much) easier to reorganize the source tree, as the path to the output is no longer relative to where the config files are when building the final binary. If we do decide to move `ui` into its own child crate, we won't have to make a bunch of (very similar) changes to the Svelte build process at that time.
  * There is less chance of a stale build contaminating a new one, since changes to the crate change the project hash in `OUT_DIR`. For example, while working on this change, `OUT_DIR` was at various points:
    * `target/debug/build/pilcrow-7cfeef3536ddd3e7/out`
    * `target/debug/build/pilcrow-09d4ddbc12bef36b/out`
    * `target/release/build/pilcrow-070d373bd5f850a1`

    This may use more space on disk, but it's all reclaimable with `cargo clean` and Rust is _far_ more profligate with disk space than Svelte will ever be.
  * It's more consistent with Cargo's expectations around generated source files, and thus potentially easier to onboard Rust developers into.
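  Mechanically this is the usual Cargo arrangement: a `build.rs` that writes the UI into `$OUT_DIR`, roughly like this sketch (the npm invocation and the environment variable handed to the Svelte build are assumptions, not the exact commands Pilcrow runs):

      // build.rs (sketch)
      use std::{env, path::PathBuf, process::Command};

      fn main() {
          let out_dir = PathBuf::from(env::var("OUT_DIR").expect("Cargo sets OUT_DIR"));
          let ui_out = out_dir.join("ui");

          // Rebuild the UI when its sources change.
          println!("cargo:rerun-if-changed=ui/src");
          println!("cargo:rerun-if-changed=ui/package.json");

          // Hand the output path to the SvelteKit build instead of letting it
          // write into the source tree.
          let status = Command::new("npm")
              .args(["run", "build"])
              .current_dir("ui")
              .env("BUILD_OUT_DIR", &ui_out)
              .status()
              .expect("failed to run the UI build");
          assert!(status.success(), "UI build failed");
      }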
* Upgrade to Rust 1.85 and Rust 2024 edition. (Owen Jacobson, 2025-02-20)

  There are a couple of migration suggestions from `cargo fix --edition` that I have deliberately skipped, which are intended to make sure that the changes to `if let` scoping don't bite us. They don't, I'm pretty sure, and if I turn out to be wrong, I'd rather fix the scoping issues (as they arise) than use `match` (`cargo fix --edition`'s suggestion).

  This change also includes a bulk reformat and a clippy cleanup.

  NOTA BENE: As this requires a new Rust toolchain, you'll need to update Rust (`rustup update`, normally) or the server won't build. This also applies to the Debian builder Docker image; it'll need to be rebuilt (from scratch, pulling its base image again) as well.
* We no longer need async-trait. (Owen Jacobson, 2025-02-20)

  It was here to support axum 0.7.x.

* Upgrade to latest thiserror. (Owen Jacobson, 2025-02-19)

* Upgrade Axum to 0.8.1. (Owen Jacobson, 2025-02-19)
* Upgrade sqlx to latest. (Owen Jacobson, 2025-02-18)

  We can't quite update rusqlite to latest as well, as it uses a slightly newer libsqlite3-sys crate. I made sure this pair of versions is valid:

      % cargo tree --invert libsqlite3-sys
      libsqlite3-sys v0.30.1
      ├── rusqlite v0.32.1
      │   └── pilcrow v0.1.0 (/Users/owen/Projects/grimoire.ca/pilcrow)
      └── sqlx-sqlite v0.8.3
          └── sqlx v0.8.3
              └── pilcrow v0.1.0 (/Users/owen/Projects/grimoire.ca/pilcrow)

      libsqlite3-sys v0.30.1
      └── sqlx-sqlite v0.8.3
          └── sqlx-macros-core v0.8.3
              └── sqlx-macros v0.8.3 (proc-macro)
                  └── sqlx v0.8.3 (*)

  As both sqlx and rusqlite resolve to use the same version of libsqlite3-sys, we're fine.
* Mass-upgrade Rust dependencies. (Owen Jacobson, 2025-02-18)

* Rename the project to `pilcrow`. (Owen Jacobson, 2024-11-08)
* Upgrade dependencies. (Owen Jacobson, 2024-11-02)

  (Svelte 5 upgrade not included.)
* Remove `hi-recanonicalize`. (Owen Jacobson, 2024-10-30)

  This utility was needed to support a database migration with existing data. I have it on good authority that no further databases exist that are in the state that made this tool necessary.
* Load DB paths from a file, rather than hard-coding them in the systemd unit. (Owen Jacobson, 2024-10-30)
* Restrict login names. (Owen Jacobson, 2024-10-29)

  There's no good reason to use an empty string as your login name, or to use one so long as to annoy others. Names beginning or ending with whitespace, or containing runs of whitespace, are also a technical problem, so they're also prohibited.

  This change does not implement [UTS #39], as I haven't yet fully understood how to do so.

  [UTS #39]: https://www.unicode.org/reports/tr39/
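  The rules described above amount to a check along these lines (the length limit and error names are illustrative, not Pilcrow's actual values):

      #[derive(Debug)]
      enum NameError {
          Empty,
          TooLong,
          EdgeWhitespace,
          WhitespaceRun,
      }

      // Illustrative limit, not necessarily the one Pilcrow enforces.
      const MAX_LOGIN_CHARS: usize = 64;

      fn validate_login_name(name: &str) -> Result<(), NameError> {
          if name.is_empty() {
              return Err(NameError::Empty);
          }
          if name.chars().count() > MAX_LOGIN_CHARS {
              return Err(NameError::TooLong);
          }
          if name.starts_with(char::is_whitespace) || name.ends_with(char::is_whitespace) {
              return Err(NameError::EdgeWhitespace);
          }
          // Reject two or more consecutive whitespace characters.
          let mut previous_was_whitespace = false;
          for c in name.chars() {
              if c.is_whitespace() && previous_was_whitespace {
                  return Err(NameError::WhitespaceRun);
              }
              previous_was_whitespace = c.is_whitespace();
          }
          Ok(())
      }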
* Package `hi` for Debian. (Owen Jacobson, 2024-10-29)

  This commit provides a Docker-based build process for generating `.deb` packages, which can be run in Docker Desktop. I don't love it, but it's the best option I have _right now_ for doing this.

  The resulting packages:

  * Install `hi` (and `hi-recanonicalize`), in `/usr/bin`.
  * Create a user (`hi`) and a data directory (`/var/lib/hi`).
  * Create and start a systemd service unit for `hi`.

  Packages are built for arm64 and amd64 (aka x86_64).
* Tests for purged channels and messages. (Owen Jacobson, 2024-10-25)

  This required a re-think of the `.immediately()` combinator, to generalize it to cases where a message is _not_ expected. That (more or less immediately) suggested some mixed combinators, particularly for stream futures (futures of `Option<T>`).
* Set `charset` params on returned content types. (Owen Jacobson, 2024-10-22)

  This is a somewhat indirect change; it removes `mime_guess` in favour of some very, uh, "bespoke" mime detection logic that hardcodes mime types for the small repertoire of file extensions actually present in the UI. `mime_guess` doesn't provide a way to set params as it exports its own `Mime` struct, which doesn't provide `with_params()`.
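  The "bespoke" mapping is small enough to write out by hand; something like this, with the charset attached directly (the exact extension list is whatever the built UI actually contains):

      // Content types for the handful of extensions present in the built UI.
      // Text types carry an explicit charset; binary types don't need one.
      fn content_type(path: &str) -> &'static str {
          match path.rsplit('.').next() {
              Some("html") => "text/html; charset=utf-8",
              Some("css") => "text/css; charset=utf-8",
              Some("js") => "text/javascript; charset=utf-8",
              Some("json") => "application/json; charset=utf-8",
              Some("svg") => "image/svg+xml; charset=utf-8",
              Some("png") => "image/png",
              Some("ico") => "image/x-icon",
              _ => "application/octet-stream",
          }
      }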
* Provide `hi-recanonicalize` to recover from canonicalized-name problems. (Owen Jacobson, 2024-10-22)
* Canonicalize login and channel names. (Owen Jacobson, 2024-10-22)

  Canonicalization does two things:

  * It prevents duplicate names that differ only by case or only by normalization/encoding sequence; and
  * It makes certain name-based comparisons "case-insensitive" (generalizing via Unicode's case-folding rules).

  This change is complicated, as it means that every name now needs to be stored in two forms.

  Unfortunately, this is _very likely_ a breaking schema change. The migrations in this commit perform a best-effort attempt to canonicalize existing channel or login names, but it's likely any existing channels or logins with non-ASCII characters will not be canonicalized correctly. Since clients look at all channel names and all login names on boot, and since the code in this commit verifies canonicalization when reading from the database, this will effectively make the server unusable until any incorrectly-canonicalized values are either manually canonicalized, or removed.

  It might be possible to do better with [the `icu` sqlite3 extension][icu], but (a) I'm not convinced of that and (b) this commit is already huge; adding database extension support would make it far larger.

  [icu]: https://sqlite.org/src/dir/ext/icu

  For some references on why it's worth storing usernames this way, see <https://www.b-list.org/weblog/2018/nov/26/case/> and the referenced talk, as well as <https://www.b-list.org/weblog/2018/feb/11/usernames/>. Bennett's treatment of this issue is, to my eye, much more readable than the referenced Unicode technical reports, and I'm inclined to trust his opinion given that he maintains a widely-used, internet-facing user registration library for Django.
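  For illustration, the canonical form can be computed with off-the-shelf crates such as unicode-normalization and caseless - not necessarily the exact crates used here - and stored alongside the name as entered:

      use unicode_normalization::UnicodeNormalization;

      // The canonical form: NFC-normalized, then Unicode default case folding,
      // then re-normalized (folding can denormalize). Stored in its own column
      // and used for uniqueness checks and "case-insensitive" comparisons.
      fn canonical_form(name: &str) -> String {
          let normalized: String = name.nfc().collect();
          let folded = caseless::default_case_fold_str(&normalized);
          folded.as_str().nfc().collect()
      }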
* Unicode normalization on input. (Owen Jacobson, 2024-10-21)

  This normalizes the following values:

  * login names
  * passwords
  * channel names
  * message bodies, because why not

  The goal here is to have a canonical representation of these values, so that, for example, the service does not inadvertently host two channels whose names are semantically identical but differ in the specifics of how diacritics are encoded, or two users whose names are identical.

  Normalization is done on input from the wire, using Serde hooks, and when reading from the database. The `crate::nfc::String` type implements these normalizations (as well as normalizing whenever converted from a `std::string::String` generally).

  This change does not cover:

  * Trying to cope with passwords that were created as non-normalized strings, which are now non-verifiable as all the paths to verify passwords normalize the input.
  * Trying to ensure that non-normalized data in the database compares reasonably to normalized data. Fortunately, we don't _do_ very many string comparisons (I think only login names), so this isn't a huge deal at this stage. Login names will probably have to Get Fixed later on, when we figure out how to handle case folding for login name verification.
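  The core of the `crate::nfc::String` approach is a newtype that normalizes on every conversion in; a reduced sketch (the real type carries more trait impls and the database-read path):

      use serde::Deserialize;
      use unicode_normalization::UnicodeNormalization;

      // A string that is always in NFC form.
      #[derive(Debug, Clone, PartialEq, Eq, Deserialize)]
      #[serde(from = "std::string::String")]
      pub struct NfcString(std::string::String);

      impl From<std::string::String> for NfcString {
          fn from(s: std::string::String) -> Self {
              // Renormalize on the way in, so anything deserialized from the
              // wire (via Serde) or converted from a plain String is NFC.
              NfcString(s.nfc().collect())
          }
      }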
* Dependency upgrades (Rust). (Owen Jacobson, 2024-10-18)

* BREAKING: Remove redundant error conversions now that 1.82 is stable. (Owen Jacobson, 2024-10-18)

  MSRV is now 1.82.
* Merge branch 'wip/ui'. (Owen Jacobson, 2024-10-05)

* Render the UI at /. (Owen Jacobson, 2024-10-05; merged via wip/ui)
* Dependency update. (Owen Jacobson, 2024-10-05)

* Replace `unsafe` impl of backups with `rusqlite`. (Owen Jacobson, 2024-10-05)

  The unsafe code still exists, but I have more faith in the rusqlite authors than in myself to ensure that the code is correct.
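  For reference, the safe-API version of this is rusqlite's backup module (behind the crate's `backup` feature); paths and step size below are illustrative:

      use rusqlite::backup::Backup;
      use rusqlite::Connection;
      use std::time::Duration;

      fn back_up(db_path: &str, backup_path: &str) -> rusqlite::Result<()> {
          let src = Connection::open(db_path)?;
          let mut dst = Connection::open(backup_path)?;

          // Copy 64 pages at a time, pausing briefly between steps so other
          // connections can make progress.
          let backup = Backup::new(&src, &mut dst)?;
          backup.run_to_completion(64, Duration::from_millis(50), None)
      }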
* Use sqlx's API, not SQL groveling, to find unwanted migrations. (Owen Jacobson, 2024-10-05)

* Make a backup of the `.hi` database before applying migrations. (Owen Jacobson, 2024-10-05)

  This was motivated by Kit and I both independently discovering that sqlite3 will happily partially apply migrations, leaving the DB in a broken state.
* Represent channels and messages using a split "History" and "Snapshot" model. (Owen Jacobson, 2024-10-03)

  This separates the code that figures out what happened to an entity from the code that represents it to a user, and makes it easier to compute a snapshot at a point in time (for things like bootstrap). It also makes the internal logic a bit easier to follow, since it's easier to tell whether you're working with a point in time or with the whole recorded history.

  This one is hefty.
* Package upgrades. (Owen Jacobson, 2024-09-25)

* Write tests. (Owen Jacobson, 2024-09-20)
* Remove the HTML client, and expose a JSON API. (Owen Jacobson, 2024-09-20)

  This API structure fell out of a conversation with Kit. Described loosely:

      kit: ok
      kit: Here's what I'm picturing in a client
      kit: list channels, make-new-channel, zero to one active channels, post-to-active.
      kit: login/sign-up, logout
      owen: you will likely also want "am I logged in" here
      kit: sure, whoami
* Blanket dependency upgrades, yolo edition. (Owen Jacobson, 2024-09-18)

* App methods now return errors that allow not-found cases to be distinguished. (Owen Jacobson, 2024-09-18)
* Consolidate channel events into a single stream endpoint. (Owen Jacobson, 2024-09-15)

  While reviewing [MDN], I noticed this note:

  > SSE suffers from a limitation to the maximum number of open connections, which can be specially painful when opening various tabs as the limit is per browser and set to a very low number (6). […] This limit is per browser + domain, so that means that you can open 6 SSE connections across all of the tabs to www.example1.com and another 6 SSE connections to www.example2.com.

  I tested it in Safari; this is true, and once six streams are open, _no_ more requests can be made - in any tab, even a fresh one. Since the design _was_ that each channel had its own events endpoint, this is an obvious operations risk. Any client that tries to read multiple channels' streams will hit this limit quickly.

  This change consolidates all channel events into a single endpoint: `/events`. This takes a list of channel IDs (as query parameters, one `channel=` param per channel), and streams back events from all listed channels. The previous `/:channel/events` endpoint has been removed. Clients can selectively request events for the channels they're interested in.

  [MDN]: https://developer.mozilla.org/en-US/docs/Web/API/EventSource
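  In axum terms, the consolidated endpoint is a single SSE handler that reads repeated `channel=` params; a sketch (plain `axum::extract::Query` doesn't collect repeated keys into a `Vec`, so this assumes axum-extra's `Query`, and the stream body is a stand-in for the real merged event stream):

      use axum::response::sse::{Event, KeepAlive, Sse};
      use futures_util::stream::Stream;
      use serde::Deserialize;
      use std::convert::Infallible;

      #[derive(Deserialize)]
      struct EventsQuery {
          #[serde(default)]
          channel: Vec<String>,
      }

      async fn events(
          axum_extra::extract::Query(query): axum_extra::extract::Query<EventsQuery>,
      ) -> Sse<impl Stream<Item = Result<Event, Infallible>>> {
          // Stand-in: echo the requested channels back. The real handler merges
          // the per-channel event streams into one SSE response here.
          let stream = futures_util::stream::iter(
              query
                  .channel
                  .into_iter()
                  .map(|c| Ok(Event::default().event("subscribed").data(c))),
          );
          Sse::new(stream).keep_alive(KeepAlive::default())
      }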
* Placeholder UX, probably. (Owen Jacobson, 2024-09-14)

* Support Last-Event-Id as a method of resuming channel events after a disconnect. (Owen Jacobson, 2024-09-13)
* Tolerate panics in channel::app where they can only be triggered by implementation errors. (Owen Jacobson, 2024-09-13)

* Transmit messages via `/:chan/send` and `/:chan/events`. (Owen Jacobson, 2024-09-13)
* Add logout support. (Owen Jacobson, 2024-09-03)

* Allow login creation and authentication. (Owen Jacobson, 2024-09-03)

  This is a beefy change, as it adds a TON of smaller pieces needed to make this all function:

  * A database migration.
  * A ton of new crates for things like password validation, timekeeping, and HTML generation.
  * A first cut at a module structure for routes, templates, repositories.
  * A family of ID types, for identifying various kinds of domain thing.
  * AppError, which _doesn't_ implement Error but can be sent to clients.
* Store state in sqlite. Default to .hi in the cwd. (Owen Jacobson, 2024-08-30)

* Make it an HTTP server. (Owen Jacobson, 2024-08-30)