| Commit message | Author | Age |
|
When a user clicks "send a test notification," Pilcrow delivers a push message (with a fixed payload) to all active subscriptions. The included client then displays this as a notification, using browser APIs to do so. This lets us verify that push notifications work, end to end - and they appear to.
The API endpoint for sending a test notification is not documented. I didn't feel it prudent to extensively document an endpoint that is intended to be temporary and whose side effects are very much subject to change. However, for posterity, the endpoint is
    POST /api/push/ping
    {}
and the push message payload is
    ping
Subscriptions with permanent delivery failures are nuked when we encounter them. Subscriptions with temporary failures cause the `ping` endpoint to return an internal server error, and are not retried. We'll likely want retry logic - including retry logic to handle server restarts - for any more serious use, but for a smoke test, giving up immediately is fine.
To make the push implementation testable, `App` is now generic over it. Tests use a dummy implementation that stores sent messages in memory. This has some significant limitations, documented in the test suite, but it beats sending real notifications to nowhere in tests.
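For illustration, a minimal sketch of the shape this enables; the trait and all names here are hypothetical, not the actual Pilcrow API, and real delivery would presumably be async:

    trait PushClient: Send + Sync + 'static {
        // Deliver one push message to one subscription endpoint.
        fn send(&self, endpoint: &str, payload: &[u8]) -> Result<(), PushError>;
    }

    #[derive(Debug)]
    struct PushError;

    // Test dummy: records messages in memory instead of contacting a broker.
    #[derive(Default)]
    struct FakePush {
        sent: std::sync::Mutex<Vec<(String, Vec<u8>)>>,
    }

    impl PushClient for FakePush {
        fn send(&self, endpoint: &str, payload: &[u8]) -> Result<(), PushError> {
            self.sent.lock().unwrap().push((endpoint.to_owned(), payload.to_vec()));
            Ok(())
        }
    }

    // `App` carries the client as a type parameter.
    struct App<P: PushClient> {
        push: P,
        // ... other components ...
    }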
|
The semantics of this endpoint are somewhat complex, and are incompletely captured in the associated docs change. For posterity, the intended workflow is:
1. Obtain Pilcrow's current VAPID key by connecting (it's in the events, either from boot or from the event stream).
2. Use the browser push APIs to create a push subscription, using that VAPID key.
3. Send Pilcrow the push subscription endpoint and keys, plus the VAPID key the client used to create it so that the server can detect race conditions with key rotation.
4. Wait for messages to arrive.
This commit does not introduce any actual messages, just subscription management endpoints.
When the server's VAPID key is rotated, all existing subscriptions are discarded: without the old key, the server can no longer service them. We can't exactly notify the broker to stop processing messages on those subscriptions, so if the key is being rotated because it was compromised, this is only a partial remedy, but it's better than nothing.
The shape of the API endpoint is heavily informed by the [JSON payload][web-push-json] provided by browser Web Push implementations, to ease client development in a browser-based context. The idea is that a client can take that JSON and send it to the server verbatim, without needing to transform it in any way, to submit the subscription to the server for use.
[web-push-json]: https://developer.mozilla.org/en-US/docs/Web/API/PushSubscription/toJSON
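As a sketch of what that means server-side, the payload deserializes along these lines. The field names mirror the MDN shape linked above; the exact struct in Pilcrow (and the wrapper carrying the VAPID key from step 3) may differ:

    use serde::Deserialize;

    #[derive(Deserialize)]
    struct SubscriptionJson {
        endpoint: String,
        #[serde(rename = "expirationTime")]
        expiration_time: Option<f64>,
        keys: SubscriptionKeys,
    }

    #[derive(Deserialize)]
    struct SubscriptionKeys {
        p256dh: String, // client public key, base64url-encoded
        auth: String,   // authentication secret, base64url-encoded
    }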
Push subscriptions are operationally associated with a specific _user agent_, and have no inherent relationship with a Pilcrow login or token (session). Taken as-is, a subscription created by user A could be reused by user B if they share a user agent, even if user A logs out before user B logs in. Pilcrow therefore _logically_ associates push subscriptions with specific tokens, and abandons those subscriptions when the token is invalidated by
* logging out,
* expiry, or
* changing passwords.
(There are no other token invalidation workflows at this time.)
Stored subscriptions are also abandoned when the server's VAPID key changes.
|
In 4a91792e023a5877f8ac9b8a352e99c4486d698f, I merged in the app component struct changes, but neglected to notice that the `app.vapid()` method had ended up attached to the wrong impl block during the merge. This fixes that.
I've also carried the change to component structs through, so `Vapid` is now a freestanding component, rather than a view of the `App` struct's internals.
|
Because `Users` is test-only and is not used in any endpoints, it doesn't need a FromRef impl.
|
As with the `Setup` component, I've generalized the associated middleware across anything that can provide a `Tokens`, where possible.
|
The changes to the setup-requiring middleware are probably more general than was strictly needed, but they will make it work with anything that can provide a `Setup` component rather than being bolted to `App` specifically, which feels tidier.
|
This one doesn't need a FromRef impl at this time, as it's only ever used in a handler that also uses other components and so will need to continue receiving `App`. However, there's little reason not to make the implementation of the `Events` struct consistent.
|
Unlike the previous example, this involves cloning an event broadcaster, as well. This is, per the documentation, how the type may be used. From <https://docs.rs/tokio/latest/tokio/sync/broadcast/fn.channel.html>:
> The Sender can be cloned to send to the same channel from multiple points in the process or it can be used concurrently from an `Arc`.
The language is less firm than the language sqlx uses for its pool, but the intent is clear enough, and it works in practice.
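As a minimal illustration of the pattern (types simplified; this is not the actual `Events` definition):

    use tokio::sync::broadcast;

    #[derive(Clone, Debug)]
    struct Event; // stand-in for the real event type

    // A component can hold its own clone of the sender.
    #[derive(Clone)]
    struct Events {
        sender: broadcast::Sender<Event>,
    }

    fn demo() {
        let (sender, _keepalive) = broadcast::channel(16);
        let events = Events { sender: sender.clone() };
        // Any clone can publish, and any clone can mint new receivers.
        let rx = events.sender.subscribe();
        events.sender.send(Event).ok();
        // `rx.recv().await` would yield `Event` in an async context.
        let _ = rx;
    }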
|
`crate::app::App`'s internals.
In the course of working on web push, I determined that we probably need to make `App` generic over the web push client we're using, so that tests can use a dummy client while the real app uses a client created at startup and maintained over the life of the program's execution. The most direct implementation of that is to render App as `App<P>`, where the parameter is occupied by the specific web push client type in use. However, doing this requires refactoring at _every_ site that mentions `App`, including every handler, even though the vast majority of those sites will not be concerned with web push.
I reviewed a few options with @wlonk:
* Accept the type parameter and apply it everywhere, as the cost of supporting web push.
* Hard-code the use of a specific web push client.
* Insulate handlers &c from `App` via provider traits, mimicking what we do for repository provider traits today.
* Treat each app type as a freestanding state in its own right, so that only push-related components need to consider push clients (as far as is feasible).
This is a prototype towards that last point, using a simple app component (boot) as a testbed. `FromRef` allows handlers that take a `Boot` to be used in routes that provide an `App`, so this is a contained change. However, the structure of `FromRef` prevents `Boot` from carrying any lifetime narrower than `'static`, so it now holds clones of the state fields it acquires from App, instead of references. This is fine - that's just a database pool, and sqlx's pool type is designed to be shared via cloning. From <https://docs.rs/sqlx/latest/sqlx/struct.Pool.html>:
> Cloning Pool is cheap as it is simply a reference-counted handle to the inner pool state.
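A minimal sketch of the pattern, assuming axum's `FromRef` and sqlx's Postgres pool; the field names are hypothetical:

    use axum::extract::FromRef;
    use sqlx::PgPool;

    #[derive(Clone)]
    struct App {
        db: PgPool,
        // ... push client, broadcasters, etc. ...
    }

    // Freestanding component state: owns cheap clones, no lifetimes.
    #[derive(Clone)]
    struct Boot {
        db: PgPool,
    }

    impl FromRef<App> for Boot {
        fn from_ref(app: &App) -> Self {
            Boot { db: app.db.clone() }
        }
    }

    // Handlers can now take `State<Boot>` in a `Router` whose state is `App`.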
|
VAPID is used to authenticate applications to push brokers, as part of the [Web Push] specification. It's notionally optional, but we believe [Apple requires it][apple], and in any case, ensuring that subscription URLs cannot be used without the corresponding private key - and thus making the server harder to impersonate - seems like good security practice regardless.
[Web Push]: https://developer.mozilla.org/en-US/docs/Web/API/Push_API
[apple]: https://developer.apple.com/documentation/usernotifications/sending-web-push-notifications-in-web-apps-and-browsers
There are several implementations of VAPID for Rust:
* [web_push](https://docs.rs/web-push/latest/web_push/) includes an implementation of VAPID but requires callers to provision their own keys. We will likely use this crate for Web Push fulfilment, but we cannot use it for key generation.
* [vapid](https://docs.rs/vapid/latest/vapid/) includes an implementation of VAPID key generation. It delegates to `openssl` to handle cryptographic operations.
* [p256](https://docs.rs/p256/latest/p256/) implements NIST P-256 in Rust. It's maintained by the RustCrypto team, though as of this writing it is largely written by a single contributor. It isn't specifically designed for use with VAPID.
I opted to use p256 for this, as I believe the RustCrypto team are the most likely to produce a correct and secure implementation, and because openssl has consistently left a bad taste in my mouth for years. Because it's a general implementation of the algorithm, I expect that it will require more work for us to adapt it for use with VAPID specifically; I'm willing to chance it and we can swap it out for the vapid crate if it sucks.
This has left me with one area of uncertainty: I'm not actually sure I'm using the right parts of p256. The choice of `ecdsa::SigningKey` over `p256::SecretKey` is based on [the MDN docs] using phrases like "This value is part of a signing key pair generated by your application server, and usable with elliptic curve digital signature (ECDSA), over the P-256 curve." and on [RFC 8292]'s "The 'k' parameter includes an ECDSA public key in uncompressed form that is encoded using base64url encoding." However, we won't be able to test my implementation until we implement some other key parts of Web Push, which are out of scope for this commit.
[the MDN docs]: https://developer.mozilla.org/en-US/docs/Web/API/PushSubscription/options
[RFC 8292]: https://datatracker.ietf.org/doc/html/rfc8292#section-3.2
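Concretely, my reading implies key generation roughly like this. This is a sketch, not the exact code, and it assumes the `rand_core` and `base64` crates alongside p256 (`URL_SAFE_NO_PAD` matches RFC 8292's unpadded base64url):

    use base64::Engine as _;
    use p256::ecdsa::{SigningKey, VerifyingKey};
    use rand_core::OsRng;

    fn generate_vapid_key() -> (SigningKey, String) {
        let signing_key = SigningKey::random(&mut OsRng);
        let verifying_key = VerifyingKey::from(&signing_key);
        // `false` = uncompressed SEC1 point (0x04 || X || Y), as RFC 8292 requires.
        let point = verifying_key.to_encoded_point(false);
        let public_key = base64::engine::general_purpose::URL_SAFE_NO_PAD.encode(point.as_bytes());
        (signing_key, public_key)
    }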
Following the design used for storing logins and users, VAPID keys are split into a non-synchronized part (consisting of the private key), whose exposure would allow others to impersonate the Pilcrow server, and a synchronized part (consisting of event coordinates and, notionally, the public key), which is non-sensitive and can be safely shared with any user. However, the public key is derived from the stored private key, rather than being stored directly, to minimize redundancy in the stored data.
Following the design used for expiring stale entities, the app checks for and creates, or rotates, its VAPID key using middleware that runs before most API requests. If, at that point, the key is either absent, or more than 30 days old, it is replaced. This imposes a small tax on API request latency, which is used to fund prompt and automatic key rotation without the need for an operator-facing key management interface.
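The decision itself reduces to a small predicate; a sketch, assuming the `time` crate, with the storage access and axum middleware wiring elided and the names mine:

    use time::{Duration, OffsetDateTime};

    const MAX_KEY_AGE: Duration = Duration::days(30);

    // Replace the key if it's absent or more than 30 days old.
    fn needs_rotation(created_at: Option<OffsetDateTime>) -> bool {
        match created_at {
            None => true,
            Some(at) => OffsetDateTime::now_utc() - at >= MAX_KEY_AGE,
        }
    }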
VAPID keys are delivered to clients via the event stream, as laid out in `docs/api/events.md`. There are a few reasons for this, but the big one is that changing the VAPID key would immediately invalidate push subscriptions: we throw away the private key, so we wouldn't be able to publish to them any longer. Clients must replace their push subscriptions in order to resume delivery, and doing so promptly when notified that the key has changed will minimize the gap.
This design is intended to allow for manual key rotation. The key can be rotated "immediately" by emptying the `vapid_key` and `vapid_signing_key` tables (which destroys the rotated key); the server will generate a new one before it is needed, and will notify clients that the key has been invalidated.
This change includes client support for tracking the current VAPID key. The client doesn't _use_ this information anywhere, yet, but it has it.
|
The taxonomy is now as follows:
* A _login_ is someone's identity for the purposes of authenticating to the service. Logins are not synchronized, and in fact are not published anywhere in the current API. They have a login ID, a name and a password.
* A _user_ is someone's identity for the purpose of participating in conversations. Users _are_ synchronized, as before. They have a user ID, a name, and a creation instant for the purposes of synchronization.
In practice, a user exists for every login - in fact, users' names are stored in the login table and are joined in, rather than being stored redundantly in the user table. A login ID and its corresponding user ID are always equal, and the user and login ID types support conversion and comparison to facilitate their use in this context.
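Sketched as types (the representation here is invented for illustration):

    // Hypothetical newtypes; the real types may differ.
    #[derive(Clone, Copy, PartialEq, Eq, Debug)]
    struct LoginId(i64);

    #[derive(Clone, Copy, PartialEq, Eq, Debug)]
    struct UserId(i64);

    // A login ID and its corresponding user ID are always equal, so
    // conversion is a rewrap and comparison is defined across the two.
    impl From<LoginId> for UserId {
        fn from(id: LoginId) -> Self {
            UserId(id.0)
        }
    }

    impl PartialEq<UserId> for LoginId {
        fn eq(&self, other: &UserId) -> bool {
            self.0 == other.0
        }
    }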
Tokens are now associated with logins, not users. The currently-acting identity is passed down into app types as a login, not a user, and then resolved to a user where appropriate within the app methods.
As a side effect, the `GET /api/boot` method now returns a `login` key instead of a `user` key. The structure of the nested value is unchanged.
|
Rust 1.89 added a new warning:
    warning: hiding a lifetime that's elided elsewhere is confusing
     --> src/setup/repo.rs:4:14
      |
    4 |     fn setup(&mut self) -> Setup;
      |              ^^^^^^^^^     ----- the same lifetime is hidden here
      |              |
      |              the lifetime is elided here
      |
      = help: the same lifetime is referred to in inconsistent ways, making the signature confusing
    help: use `'_` for type paths
      |
    4 |     fn setup(&mut self) -> Setup<'_>;
      |                                 ++++
I don't entirely agree with the style advice here, but lifetime elision style is an evolving area in Rust and I'd rather track the Rust team's recommendations than invent my own, so I've applied the suggested `'_` annotations everywhere the warning appeared.
|
I've split this from the schema and API changes because, frankly, it's huge. Annoyingly so. There are no semantic changes in this, it's all symbol changes, but there are a _lot_ of them because the term "channel" leaks all over everything in a service whose primary role is managing messages sent to channels (now, conversations).
I found a buggy test while working on this! It's not fixed in this commit, because it felt mean to hide a real change in the middle of this much chaff.
|
This utility was needed to support a database migration with existing data. I have it on good authority that no further databases exist that are in the state that made this tool necessary.
|
This also found a bug! No live event was being emitted when an invite was accepted. The only way to find out about invites was to reconnect.
|
This separates the code that figures out what happened to an entity from the code that represents it to a user, and makes it easier to compute a snapshot at a point in time (for things like bootstrap). It also makes the internal logic a bit easier to follow, since it's easier to tell whether you're working with a point in time or with the whole recorded history.
This is hefty.
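The gist, in miniature (all types here are invented for illustration): history is folded into a snapshot at a chosen instant, and presentation reads only the snapshot:

    #[derive(Clone, Copy, PartialEq, PartialOrd)]
    struct Instant(u64); // stand-in for the real timestamp type

    enum Change {
        Created,
        Expired,
    }

    struct HistoryEntry {
        at: Instant,
        change: Change,
    }

    #[derive(Default)]
    struct Snapshot {
        exists: bool,
    }

    // Fold everything up to `instant`; callers never walk raw history.
    fn snapshot_at(history: &[HistoryEntry], instant: Instant) -> Snapshot {
        let mut snap = Snapshot::default();
        for entry in history.iter().filter(|e| e.at <= instant) {
            snap.exists = matches!(entry.change, Change::Created);
        }
        snap
    }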
|
This is primarily renames and repackagings.
|
expires.
When tokens are revoked (logout or expiry), the server now publishes an internal event via the new `logins` event broadcaster. These events are used to guard the `/api/events` stream. When a token revocation event arrives for the token used to subscribe to the stream, the stream is cut short, disconnecting the client.
In service of this, tokens now have IDs, which are non-confidential values that can be used to discuss tokens without their secrets being passed around unnecessarily. These IDs are not (at this time) exposed to clients, but they could be.
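A rough sketch of the guard, using tokio's broadcast channel; the event and ID types are invented for illustration:

    use tokio::sync::broadcast;

    #[derive(Clone, Copy, PartialEq, Eq)]
    struct TokenId(i64);

    #[derive(Clone)]
    enum LoginEvent {
        TokenRevoked(TokenId),
    }

    // Forward events until the token backing this stream is revoked.
    async fn guarded_stream(
        mut events: broadcast::Receiver<String>,
        mut logins: broadcast::Receiver<LoginEvent>,
        token: TokenId,
    ) {
        loop {
            tokio::select! {
                Ok(event) = events.recv() => {
                    // ... write `event` to the client ...
                    let _ = event;
                }
                Ok(LoginEvent::TokenRevoked(revoked)) = logins.recv() => {
                    if revoked == token {
                        break; // cut the stream short; the client disconnects
                    }
                }
                else => break, // both channels closed
            }
        }
    }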
|
It now includes events for all channels. Clients are responsible for filtering.
The schema for channel events has changed: each event now includes the channel's name and ID, in the same format as the sender's name and ID, as well as a `"type"` field, whose only valid value (as of this writing) is `"message"`.
This is groundwork for delivering message deletion (expiry) events to clients, and notifying clients of channel lifecycle events.
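For illustration, the described shape maps onto a tagged serde enum along these lines (field names approximate, not the actual definitions):

    use serde::Serialize;

    #[derive(Serialize)]
    struct Party {
        id: String,
        name: String,
    }

    // Serializes as {"type":"message","channel":{...},"sender":{...},...}.
    #[derive(Serialize)]
    #[serde(tag = "type", rename_all = "lowercase")]
    enum ChannelEvent {
        Message {
            channel: Party,
            sender: Party,
            body: String,
        },
    }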
|
vector-of-sequence-numbers stream resume.
|
This API structure fell out of a conversation with Kit. Described loosely:
    kit: ok
    kit: Here's what I'm picturing in a client
    kit: list channels, make-new-channel, zero to one active channels, post-to-active.
    kit: login/sign-up, logout
    owen: you will likely also want "am I logged in" here
    kit: sure, whoami
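One hypothetical way that picture maps onto routes, sketched with axum (the paths and handler names are invented, not the actual Pilcrow surface):

    use axum::{routing::{get, post}, Router};

    async fn list_channels() {}
    async fn create_channel() {}
    async fn post_message() {}
    async fn login() {}
    async fn logout() {}
    async fn whoami() {}

    fn api_router() -> Router {
        Router::new()
            .route("/api/channels", get(list_channels).post(create_channel))
            .route("/api/messages", post(post_message))
            .route("/api/login", post(login))
            .route("/api/logout", post(logout))
            .route("/api/whoami", get(whoami))
    }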
|
This is, again, groundwork for logic that requires more than just a database connection.
The login process has been changed to be more conventional, attempting login _before_ account creation rather than after it. This was not previously possible, because the data access methods used to perform these steps did not return enough information to carry out the workflow in that order. Separating storage from password validation and hashing forces the issue, and makes it clearer _at the App_ whether an account exists or not.
This does introduce the possibility of two racing inserts trying to lay claim to the same username. Transaction isolation should ensure that only one of them "wins," which matches the behavior before this change anyway.
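One common sqlx-side shape for detecting the loser of such a race is a unique constraint plus a fallback; whether the code does exactly this isn't stated, and the query and schema here are hypothetical:

    use sqlx::PgPool;

    // Returns Ok(true) if we created the login, Ok(false) if another request
    // claimed the name first (unique violation on the name column).
    async fn try_create_login(pool: &PgPool, name: &str, password_hash: &str) -> sqlx::Result<bool> {
        let result = sqlx::query("INSERT INTO login (name, password_hash) VALUES ($1, $2)")
            .bind(name)
            .bind(password_hash)
            .execute(pool)
            .await;
        match result {
            Ok(_) => Ok(true),
            Err(sqlx::Error::Database(e)) if e.is_unique_violation() => Ok(false),
            Err(e) => Err(e),
        }
    }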
|
This is a jumping-off point for adding logic that needs more than just the DB for state, such as chat message handling.
The name sucks, but it's the best I've got.
|