| Commit message | Author | Age |
| | |
|
| |
|
|
| |
I've also aligned channel creation with this (it's 409 Conflict). To make server setup more distinct, it now returns 503 Service Unavailable if setup has not been completed.
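A minimal sketch of that status mapping, using the `http` crate's `StatusCode`; the error variants and function here are illustrative, not the project's real names:

```rust
use http::StatusCode;

/// Hypothetical outcome type for channel creation; names are illustrative.
enum CreateChannelError {
    AlreadyExists,    // a channel with this name already exists
    SetupNotComplete, // server setup has not been completed yet
}

/// Map creation failures to the statuses described above.
fn status_for(err: &CreateChannelError) -> StatusCode {
    match err {
        CreateChannelError::AlreadyExists => StatusCode::CONFLICT, // 409
        CreateChannelError::SetupNotComplete => StatusCode::SERVICE_UNAVAILABLE, // 503
    }
}
```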
|
| | |
|
| | |
|
| | |
|
| | |
|
| |\ |
|
| | |
| |
| |
| | |
In passing, this fixes the problem where the client failed to subscribe after logging in: the whole subscription process is now re-run when returning to the main interface.
|
| | | |
|
| |/
|
|
| |
This is a little excessive, as PasswordHash (which StoredHash converts to) _does_ derive Debug and exposes the hash, but I'll feel better if the hash never ends up in logs.
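One way to guarantee that, sketched with a hypothetical newtype (the real type's layout may differ): write the `Debug` impl by hand so formatting the value never prints the hash.

```rust
use std::fmt;

/// Hypothetical wrapper around the stored password hash.
pub struct StoredHash(String);

// Hand-written Debug: `{:?}` output never includes the inner hash,
// so it cannot leak into logs.
impl fmt::Debug for StoredHash {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("StoredHash(<redacted>)")
    }
}
```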
|
| |
|
|
| |
We now (try to) use the identity cookie in `/ch/:channel`. This will not work, because the cookie's path doesn't include `/ch/`.
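For reference, a cookie scoped to the site root would be sent on `/ch/:channel` requests too; a minimal sketch of building such a `Set-Cookie` value (the attribute choices are assumptions, not what the server currently emits):

```rust
/// Build a Set-Cookie header value that covers the whole site,
/// including `/ch/:channel`. Attributes here are illustrative.
fn identity_cookie(token: &str) -> String {
    format!("identity={token}; Path=/; HttpOnly; SameSite=Strict")
}
```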
|
| | |
|
| | |
|
| |
|
|
|
|
| |
The original version of this migration happened to work correctly, by accident, for databases with exactly one login. I missed this, and so did Kit, because both of our test databases _actually do_ contain exactly one login, and because I didn't run the tests before committing the migration.
The fixed version works correctly for all scenarios I tested (zero, one, and two users; not exhaustive). I've added code to patch out the original migration hash in databases that have it; no further corrective work is needed: if the migration failed, it was backed out anyway, and if it succeeded, you fell into the "one user" case.
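The log doesn't show the real schema, but the class of bug described is an UPDATE whose subquery isn't correlated to the row being updated, which happens to be right only when the source table has exactly one row. A purely illustrative sketch:

```rust
use rusqlite::Connection;

/// Illustrative only: table and column names are invented.
/// The buggy shape (in the comment) gives every row the same value,
/// which is only "correct" when `logins` has exactly one row.
fn fixed_migration(conn: &Connection) -> rusqlite::Result<()> {
    conn.execute_batch(
        // Buggy, accidentally-works-for-one-login shape:
        //   UPDATE accounts SET password_hash = (SELECT password_hash FROM logins);
        // Fixed shape, correlated to the row being updated:
        "UPDATE accounts
         SET password_hash = (SELECT l.password_hash
                              FROM logins l
                              WHERE l.account_id = accounts.id);",
    )
}
```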
|
| | |
|
| |
|
|
| |
Operational experience with the server has shown that leaving the backup in place is not helpful. The near-automatic choice is to immediately delete it, and the server won't start until it has been deleted. If the backup restore succeeded, then we know the user has a copy of their database, since the sqlite3 online backups API promises to make the target database bitwise-identical to the source database, so there's little chance the user will need a duplicate.
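For reference, a minimal sketch of the online backup being relied on here, via rusqlite's `backup` module (requires the `backup` feature; the paths and step size are arbitrary):

```rust
use std::time::Duration;
use rusqlite::{backup::Backup, Connection};

/// Copy the live database into a backup file using SQLite's online
/// backup API, stepping through a few pages at a time.
fn backup_db(src_path: &str, dst_path: &str) -> rusqlite::Result<()> {
    let src = Connection::open(src_path)?;
    let mut dst = Connection::open(dst_path)?;
    let backup = Backup::new(&src, &mut dst)?;
    backup.run_to_completion(64, Duration::from_millis(5), None)
}
```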
|
| | |
|
| | |
|
| | |
|
| |
|
|
| |
This is a bit easier to compute, and sets us up nicely for pulling message boot out of the `/api/boot` response entirely.
|
| | |
|
| | |
|
| |
|
|
| |
This will make it much easier to slot in new event types (login events!).
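A sketch of what slotting in a new event type could look like with a tagged enum; the variant names and fields are invented, since the actual event shapes aren't shown in this log:

```rust
use serde::Serialize;

/// Hypothetical event payloads; a new kind of event is just a new variant.
#[derive(Serialize)]
#[serde(tag = "type", rename_all = "snake_case")]
enum Event {
    MessagePosted { channel: String, body: String },
    ChannelCreated { channel: String },
    // A future login event slots in without touching the existing variants:
    LoggedIn { user: String },
}
```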
|
| |
|
|
| |
This structure didn't accomplish anything and made certain refactorings harder.
|
| | |
|
| |
|
|
|
|
|
|
|
|
|
| |
The client now takes an initial snapshot from the response to `/api/boot`, then picks up the event stream from the first event after the snapshot was taken.
This commit removes the following unused endpoints:
* `/api/channels` (GET)
* `/api/channels/:channel/messages` (GET)
The information therein is now part of the boot response. We can always add them back, but I wanted to clear the deck for designing something more capable of dealing with client needs.
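A sketch of the snapshot-then-resume handshake this sets up; the field names and query parameter are assumptions for illustration:

```rust
use serde::Deserialize;

/// Hypothetical shape of the `/api/boot` response: a snapshot plus the
/// event sequence number it was taken at.
#[derive(Deserialize)]
struct BootResponse {
    channels: Vec<String>,
    last_seq: u64,
}

/// The client subscribes strictly after the snapshot point, so no event
/// is missed or applied twice. The endpoint and parameter are invented.
fn resume_url(boot: &BootResponse) -> String {
    format!("/api/events?since={}", boot.last_seq)
}
```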
|
| |\ |
|
| | | |
|
| | |
| |
| |
| | |
The unsafe code still exists, but I have more faith in the rusqlite authors than in myself to ensure that the code is correct.
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | |
| |
| |
| | |
This was motivated by Kit and me both independently discovering that sqlite3 will happily partially apply migrations, leaving the DB in a broken state.
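A minimal sketch of the obvious mitigation, assuming migrations run through rusqlite: wrap each migration in an explicit transaction so a failure rolls the whole step back (SQLite DDL participates in transactions, so schema changes are covered too). The function shape is illustrative.

```rust
use rusqlite::Connection;

/// Apply one migration atomically: either every statement in `sql`
/// takes effect, or none of them do.
fn apply_migration(conn: &mut Connection, sql: &str) -> rusqlite::Result<()> {
    let tx = conn.transaction()?;
    tx.execute_batch(sql)?;
    tx.commit()
}
```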
|
| |/
|
|
|
|
| |
The migration path from the original project inception to now was complicated and buggy, and stranded _both_ Kit and me with broken databases due to oversights and incomplete migrations. We've agreed to start fresh, once.
If this is mistakenly started with an original-schema-flavour DB, startup will be aborted.
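A sketch of one way such a guard could work, using `PRAGMA user_version` as the marker; the actual detection mechanism isn't described in this log:

```rust
use rusqlite::Connection;

/// Refuse to start against a database from before the reset.
/// Treating version 0 as "original schema" is an assumption for illustration.
fn check_schema_generation(conn: &Connection) -> Result<(), String> {
    let version: i64 = conn
        .query_row("PRAGMA user_version", [], |row| row.get(0))
        .map_err(|e| e.to_string())?;
    if version == 0 {
        return Err("database uses the original schema; refusing to start".into());
    }
    Ok(())
}
```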
|
| | |
|
| | |
|
| | |
|
| |
|
|
| |
It is deliberate that the expire() functions do not use them. To avoid races, the transactions must be committed before events get sent, in both cases, which makes them structurally pretty different.
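A sketch of that ordering constraint; the table, event payload, and channel type are stand-ins:

```rust
use std::sync::mpsc::Sender;
use rusqlite::Connection;

/// Hypothetical: persist a message, then announce it. Broadcasting before
/// the commit could let a subscriber observe state the database doesn't
/// have yet (or never gets, if the commit fails).
fn post_message(
    conn: &mut Connection,
    events: &Sender<String>,
    channel: &str,
    body: &str,
) -> rusqlite::Result<()> {
    let tx = conn.transaction()?;
    tx.execute(
        "INSERT INTO messages (channel, body) VALUES (?1, ?2)",
        (channel, body),
    )?;
    tx.commit()?; // committed first...
    let _ = events.send(format!("new message in {channel}")); // ...then the event goes out
    Ok(())
}
```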
|
| |
|
|
|
|
| |
This separates the code that figures out what happened to an entity from the code that represents it to a user, and makes it easier to compute a snapshot at a point in time (for things like bootstrap). It also makes the internal logic a bit easier to follow, since it's easier to tell whether you're working with a point in time or with the whole recorded history.
This is hefty.
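A sketch of what computing a point-in-time snapshot looks like with that separation; the event and state shapes are invented for illustration:

```rust
/// Invented per-entity history.
enum ChannelEvent {
    Created { name: String },
    Renamed { name: String },
    Deleted,
}

/// Point-in-time view, derived from events rather than stored.
#[derive(Default)]
struct ChannelState {
    name: Option<String>,
    deleted: bool,
}

/// Fold the first `upto` recorded events to recover what the entity
/// looked like at that moment.
fn snapshot(history: &[ChannelEvent], upto: usize) -> ChannelState {
    history.iter().take(upto).fold(ChannelState::default(), |mut s, e| {
        match e {
            ChannelEvent::Created { name } | ChannelEvent::Renamed { name } => {
                s.name = Some(name.clone());
            }
            ChannelEvent::Deleted => s.deleted = true,
        }
        s
    })
}
```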
|
| | |
|
| |
|
|
| |
This helped me discover an organizational scheme I like more.
|
| | |
|
| |
|
|
| |
This is primarily renames and repackagings.
|
| |
|
|
| |
(This is part of a larger reorganization.)
|
| |
|
|
| |
sequence.
|
| |
|
|
| |
Per-channel event sequences were a cute idea, but they made reasoning about event resumption much, much harder (case in point: recovering the order of events in a partially-ordered collection is quadratic, since it's basically a graph sort). The minor overhead of a global sequence number is likely tolerable, and this simplifies both the API and the internals.
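A sketch of the resumption query a global sequence allows; the table and column names are assumptions:

```rust
use rusqlite::Connection;

/// Everything after the last sequence number a client has seen, already
/// in the order it happened: one ordered range scan, no per-channel merge.
fn events_since(conn: &Connection, last_seen: i64) -> rusqlite::Result<Vec<(i64, String)>> {
    let mut stmt = conn.prepare(
        "SELECT seq, payload FROM events WHERE seq > ?1 ORDER BY seq",
    )?;
    let rows = stmt.query_map([last_seen], |row| Ok((row.get(0)?, row.get(1)?)))?;
    rows.collect()
}
```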
|