author     Owen Jacobson <owen@grimoire.ca>  2020-01-28 20:49:17 -0500
committer  Owen Jacobson <owen@grimoire.ca>  2020-01-28 23:23:18 -0500
commit     0d6f58c54a7af6c8b4e6cd98663eb36ec4e3accc (patch)
tree       a2af4dc93f09a920b0ca375c1adde6d8f64eb6be /wiki/dev
parent     acf6f5d3bfa748e2f8810ab0fe807f82efcf3eb6 (diff)
Editorial pass & migration to mkdocs.
There's a lot in grimoire.ca that I either no longer stand behind or feel pretty weird about having out there.
Diffstat (limited to 'wiki/dev')
-rw-r--r--  wiki/dev/buffers.md  99
-rw-r--r--  wiki/dev/builds.md  194
-rw-r--r--  wiki/dev/comments.md  8
-rw-r--r--  wiki/dev/commit-messages.md  70
-rw-r--r--  wiki/dev/configuring-browser-apps.md  108
-rw-r--r--  wiki/dev/debugger-101.md  86
-rw-r--r--  wiki/dev/entry-points.md  56
-rw-r--r--  wiki/dev/gnu-collective-action-license.md  51
-rw-r--r--  wiki/dev/go.md  112
-rw-r--r--  wiki/dev/liquibase.md  77
-rw-r--r--  wiki/dev/merging-structural-changes.md  85
-rw-r--r--  wiki/dev/on-rights.md  21
-rw-r--r--  wiki/dev/papers.md  36
-rw-r--r--  wiki/dev/rich-shared-models.md  102
-rw-r--r--  wiki/dev/shutdown-hooks.md  29
-rw-r--r--  wiki/dev/stop-building-synchronous-web-containers.md  41
-rw-r--r--  wiki/dev/trackers-from-first-principles.md  219
-rw-r--r--  wiki/dev/twigs.md  24
-rw-r--r--  wiki/dev/webapp-versions.md  27
-rw-r--r--  wiki/dev/webapps.md  5
-rw-r--r--  wiki/dev/webpack.md  236
-rw-r--r--  wiki/dev/whats-wrong-with-jenkins.md  108
-rw-r--r--  wiki/dev/why-scm.md  73
23 files changed, 0 insertions, 1867 deletions
diff --git a/wiki/dev/buffers.md b/wiki/dev/buffers.md
deleted file mode 100644
index 62bcad6..0000000
--- a/wiki/dev/buffers.md
+++ /dev/null
@@ -1,99 +0,0 @@
-# Observations on Buffering
-
-None of the following is particularly novel, but the reminder has been useful:
-
-* All buffers exist in one of two states: full (writes outpace reads), or empty
- (reads outpace writes). There are no other stable configurations.
-
-* Throughput on an empty buffer is dominated by the write rate. Throughput on a
- full buffer is dominated by the read rate.
-
-* A full buffer imposes a latency penalty equal to its size in bits, divided by
- the read rate in bits per second. An empty buffer imposes (approximately) no
- latency penalty.
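-  For a sense of scale, a full one-megabyte buffer draining at one megabit
-  per second adds roughly eight seconds of latency, no matter how quickly
-  data is being written into it.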
-
-The previous three points suggest that **traffic buffers should be measured in
-seconds, not in bytes**, and managed accordingly. Less obviously, buffer
-management needs to be considerably more sophisticated than the usual "grow
-buffer when full, up to some predefined maximum size."
-
-Point one also implies a rule that I see honoured more in ignorance than in
-awareness: **you can't make a full buffer less full by making it bigger**. Size
-is not a factor in buffer fullness, only in buffer latency, so adjusting the
-size in response to capacity pressure is worse than useless.
-
-There are only three ways to make a full buffer less full:
-
-1. Increase the rate at which data exits the buffer.
-
-2. Slow the rate at which data enters the buffer.
-
-3. Evict some data from the buffer.
-
-In actual practice, most full buffers are upstream of some process that's
-already going as fast as it can, either because of other design limits or
-because of physics. A buffer ahead of disk writing can't drain faster than the
-disk can accept data, for example. That leaves options two and three.
-
-Slowing the rate of arrival usually implies some variety of _back-pressure_ on
-the source of the data, to allow upstream processes to match rates with
-downstream processes. Over-large buffers delay this process by hiding
-back-pressure, and buffer growth will make this problem worse. Often,
-back-pressure can happen automatically: failing to read from a socket, for
-example, will cause the underlying TCP stack to apply back-pressure to the peer
-writing to the socket by delaying TCP-level message acknowledgement. Too often,
-I've seen code attempt to suppress these natural forms of back-pressure without
-replacing them with anything, leading to systems that fail by surprise when
-some other resource – usually memory – runs out.
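-
-As a minimal sketch of deliberate back-pressure, assuming a Node.js
-environment (`copyToDisk` is a hypothetical helper), the idea is simply to
-stop reading whenever the sink's buffer is full:
-
-    const fs = require('fs');
-
-    // Copy an incoming socket to disk without letting the write buffer grow
-    // without bound: pause the socket when the file stream reports it is
-    // full, and resume once it drains. While the socket is paused, the
-    // kernel's receive buffer fills and TCP pushes back on the peer.
-    function copyToDisk(socket, path) {
-      const out = fs.createWriteStream(path);
-      socket.on('data', (chunk) => {
-        if (!out.write(chunk)) {
-          socket.pause();
-          out.once('drain', () => socket.resume());
-        }
-      });
-      socket.on('end', () => out.end());
-    }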
-
-Eviction relies on the surrounding environment, and must be part of the
-protocol design. Surprisingly, most modern application protocols get very
-unhappy when you throw their data away: the network age has not, sadly, brought
-about protocols and formats particularly well-designed for distribution.
-
-If neither back-pressure nor eviction are available, the remaining option is to
-fail: either to start dropping data unpredictably, or to cease processing data
-entirely as a result of some resource or another running out, or to induce so
-much latency that the data is useless by the time it arrives.
-
------
-
-Some uncategorized thoughts:
-
-* Some buffers exist to trade latency against the overhead of coordination. A
- small buffer in this role will impose more coordination overhead; a large
- buffer will impose more latency.
-
-  * These buffers appear where data transits between heterogeneous systems: for
- example, buffering reads from the network for writes to disk.
-
- * Mismanaged buffers in this role will tend to cause the system to spend
- an inordinate proportion of latency and throughput negotiating buffer
- sizes and message readiness.
-
- * A coordination buffer is most useful when _empty_; in the ideal case, the
- buffer is large enough to absorb one message's worth of data from the
- source, then pass it along to the sink as quickly as possible.
-
-* Some buffers exist to trade latency against jitter. A small buffer in this
- role will expose more jitter to the upstream process. A large buffer in this
- role will impose more latency.
-
-  * These tend to appear in _homogeneous_ systems with differing throughputs,
- or as a consequence of some other design choice. Store-and-forward
- switching in networks, for example, implies that switches must buffer at
- least one full frame of network data.
-
-  * Mismanaged buffers in this role will _amplify_ jitter rather than smooth
-    it out. Apparent throughput will be high until the buffer fills, then
- change abruptly when full. Upstream processes are likely to throttle
- down, causing them to under-deliver if the buffer drains, pushing the
- system back to a high-throughput mode. [This problem gets worse the
- more buffers are present in a system](http://www.bufferbloat.net).
-
- * An anti-jitter buffer is most useful when _full_; in exchange for a
- latency penalty, sudden changes in throughput will be absorbed by data
- in the buffer rather than propagating through to the source or sink.
-
-* Multimedia people understand this stuff at a deep level. Listen to them when
- designing buffers for other applications.
diff --git a/wiki/dev/builds.md b/wiki/dev/builds.md
deleted file mode 100644
index abe3d19..0000000
--- a/wiki/dev/builds.md
+++ /dev/null
@@ -1,194 +0,0 @@
-# Nobody Cares About Your Build
-
-Every software system, from simple Python packages to huge enterprise-grade
-systems spanning massive clusters, has a build—a set of steps that must be
-followed to go from a source tree or a checked-out project to a ready-to-use
-build product. A build system's job is to automate these steps.
-
-Build systems are critical to software development.
-
-They're also one of the most common avoidable engineering failures.
-
-A reliable, comfortable build system has measurable benefits for software
-development. Being able to build a testable, deployable system at any point
-during development lets the team test more frequently. Frequent testing
-isolates bugs and integration problems earlier, reducing their impact. Simple,
-working builds allow new team members to ramp up more quickly on a project:
-once they understand how one piece of the system is constructed, they can
-apply that knowledge to the entire system and move on to doing useful work. If
-releases, the points where code is made available outside the development
-team, are done using the same build system that developers use in daily life,
-there will be fewer surprises during releases as the “release” build process
-will be well-understood from development.
-
-## Builds Have Needs, Too
-
-In 1943, Abraham Maslow described a [hierarchy of
-needs](http://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs) for a
-person's physical and mental well-being on the premise that all the items at
-the lowest level of the hierarchy must be met before a person will be able to
-focus usefully on higher-level needs. Maslow's hierarchy begins with a set of
-needs without which you do not have a person (for long)—physiological
-needs like “breathing,” “food,” and “water.” At the peak, there are extremely
-high-level needs that are about being a happy and enlightened
-person—“creativity,” “morality,” “curiosity,” and so on.
-
-![A three-tier pyramid. At the bottom: Automatable. Repeatable. Standardized.
-Extensible. Understood. In the middle tier: Simple. Fast. Unit tests. Part of
-the project. Environment independent. At the top: Metrics. Parallel builds.
-Acceptance tests. Product caching. IDE
-integration.](/media/dev/builds/buildifesto-pyramid.png)
-
-Builds, and software engineering as a whole, can be described the same way: at
-the top of the hierarchy is a working system that solves a problem, and at the
-bottom are the things you need to have software at all. If you don't meet
-needs at a given level, you will eventually be forced to stop what you're
-doing at a higher level and face them.
-
-Before a build is a build, there are five key needs to meet:
-
-* **It must be repeatable**. Every time you start your build on a given source
- tree, it must build exactly the same products without any further
- intervention. Without this, you can't reliably decide whether a given build
-  is “good,” and can easily wind up with a build that has to be run several
-  times, or that depends on running several commands in the right order,
-  before it produces anything usable.
-* **It must be automatable**. Build systems are used by developers sitting at
- their desks, but they’re also used by automatic build systems for nightly
- builds and continuous integration, and they can be made into parts of other
- builds. A build system that can only be run by having someone sit down at a
-  keyboard and mouse and kick it off can’t be integrated into anything
- else.
-* **It must be standardized**. If you have multiple projects that build
- similar things—for example, several Java libraries—all of them must be built
- the same way. Without this, it's difficult for a developer to apply
- knowledge from one project to another, and it's difficult to debug problems
- with individual builds.
-* **It must be extensible**. Not all builds are created equal. Where one build
- compiles a set of source files, another needs five libraries and a WSDL
- descriptor before it can compile anything. There must be affordances within
- the standard build that allow developers to describe the ways their build is
- different. Without this, you have to write what amounts to a second build
- tool to ensure that all the “extra” steps for certain projects happen.
-* **Someone must understand it**. A build nobody understands is a time bomb:
- when it finally breaks (and it will), your project will be crippled until
- someone fixes it or, more likely, hacks around it.
-
-If you have these five things, you have a working build. The next step is to
-make it comfortable. Comfortable builds can be used daily for development
-work, demonstrations, and tests as well as during releases; builds that are
-used constantly don’t get a chance to “rust” the way builds ignored until a
-release or a demo do, and they don’t hide surprises for launch day.
-
-* **It must be simple**. When a complicated build breaks, you need someone who
- understands it to fix it for you. Simple builds mean more people can
-  understand them and fewer things can break.
-* **It must be fast**. A slow build will be hacked around or ignored entirely.
- Ideally, someone creating a local build for a small change should have a
- build ready in seconds.
-* **It must be part of the product**. The team responsible for developing a
- project must be in control of and responsible for its build. Changes to it
- and bugs against it must be treated as changes to the product or bugs in the
- product.
-* **It must run unit tests**. Unit tests, which are completely isolated tests
- written by and for developers, can catch a large number of bugs, but they're
- only useful if they get run. The build must run the unit test suite for the
- product it's building every build.
-* **It must build the same thing in any environment**. A build is no good if
-  developers can only get a working build from a specific machine, or if a
- build from one developer's machine is useless anywhere else. If the build is
-  uniform across environments, any developer can cook up a build for a test or
- demo at any time.
-
-Finally, there are “chrome” features that take a build from effective to
-excellent. These vary widely from project to project and from organization to
-organization. Here are some common chrome needs:
-
-* **It should integrate with your IDEs**. This goes both directions: it should
- be possible to run the build without leaving your IDE or editor suite, and
- it should be possible to translate the build system into IDE-specific
- configurations to reduce duplication between IDE settings and the build
- configuration.
-* **It should generate metrics**. If you gather metrics for test coverage,
- common bugs, complexity analysis, or generate reports or documentation, the
- build system should be responsible for it. This keeps all the common
- administrative actions for the project in the same place as the rest of the
- configuration, and provides the same consistency that the system gives the
- rest of the build.
-* **It should support multiple processors**. For medium-sized builds that
- aren’t yet large enough to merit breaking down into libraries, being able to
- perform independent build steps in parallel can be a major time-saver. This
- can extend to distributed build systems, where idle CPU time can be donated
- to other peoples’ builds.
-* **It should run integration and acceptance tests**. Taking manual work from
- the quality control phase of a project and running it automatically during
- builds amplifies the benefits of early testing and, if your acceptance tests
-  are good, tells you when your project is done.
-* **It should not need repeating**. Once you declare a particular set of build
- products “done,” you should be able to use those products as-is any time you
- need them. Without this, you will eventually find yourself rebuilding the
- same code from the same release over and over again.
-
-## What Doesn’t Work
-
-Builds, like any other part of software development, have
-antipatterns—recurring techniques for solving a problem that introduce more
-problems.
-
-* **One Source Tree, Many Products**. Many small software projects that
- survive to grow into large, monolithic projects are eventually broken up
- into components. It's easy to do this by taking the existing source tree and
- building parts of it, and it's also wrong. Builds that slice up a single
- source tree require too much discipline to maintain and too much mental
- effort to understand. Break your build into separate projects that are built
- separately, and have each build produce one product.
-* **The Build And Deploy System**. Applications that have a server component
- often choose to automate deployment and setup using the same build system
- that builds the project. Too often, the extra build steps that set up a
- working system from the built project are tacked onto the end of an existing
- build. This breaks standardization, making that build harder to understand,
- and means that that one build is producing more than one thing—it's
- producing the actual project, and a working system around the project.
-* **The Build Button**. IDEs are really good at editing code. Most of them
- will produce a build for you, too. Don't rely on IDE builds for your build
- system, and don't let the IDE reconfigure the build process. Most IDEs don't
- differentiate between settings that apply to the project and settings that
- apply to the local environment, leading to builds that rely on libraries or
- other projects being in specific places and on specific IDE settings that
- are often buried in complex settings dialogs.
-* **Manual Steps**. Anything that gets done by hand will eventually be done
- wrong. Automate every step.
-
-## What Does Work
-
-Similarly, there are patterns—solutions that recur naturally and can be
-applied to many problems.
-
-* **Do One Thing Well**. The UNIX philosophy of small, cohesive tools works
- for build systems, too: if you need to build a package, and then install it
- on a server, write three builds: one that builds the package, one that takes
- a package and installs it, and a third that runs the first two builds in
- order. The individual builds will be small enough to easily understand and
- easy to standardize, and the package ends up installed on the server when
- the main build finishes.
-* **Dependency Repositories**. After a build is done, make the built product
- available to other builds and to the user for reuse rather than rebuilding
- it every time you need it. Similarly, libraries and other inward
- dependencies for a build can be shared between builds, reducing duplication
- between projects.
-* **Convention Over Extension**. While it's great that your build system is
- extensible, think hard about whether you really need to extend your build.
- Each extension makes that project’s build that much harder to understand and
- adds one more point of failure.
-
-## Pick A Tool, Any Tool
-
-Nothing here is new. The value of build systems has been
-[discussed](http://www.joelonsoftware.com/articles/fog0000000043.html)
-[in](http://www.gamesfromwithin.com/articles/0506/000092.html)
-[great](http://c2.com/cgi/wiki?BuildSystem)
-[detail](http://www.codinghorror.com/blog/archives/000988.html) elsewhere.
-Much of the accumulated build wisdom of the software industry has already been
-incorporated to one degree or another into build tools. What matters is that
-you pick one, then use it with the discipline needed to get repeatable results
-without thinking.
diff --git a/wiki/dev/comments.md b/wiki/dev/comments.md
deleted file mode 100644
index 7dc1a68..0000000
--- a/wiki/dev/comments.md
+++ /dev/null
@@ -1,8 +0,0 @@
-# Comment Maturity Model
-
-> * Beginners comment nothing
-> * Apprentices comment the obvious
-> * Journeymen comment the reason for doing it
-> * Masters comment the reason for not doing it another way
-
-Richard C. Haven, via [cluefire.net](http://cluefire.net/)
diff --git a/wiki/dev/commit-messages.md b/wiki/dev/commit-messages.md
deleted file mode 100644
index 6b3702d..0000000
--- a/wiki/dev/commit-messages.md
+++ /dev/null
@@ -1,70 +0,0 @@
-# Writing Good Commit Messages
-
-Rule zero: “good” is defined by the standards of the project you're on. Have a
-look at what the existing messages look like, and try to emulate that first
-before doing anything else.
-
-Having said that, here are some things that will help your commit messages be
-useful later:
-
-* Treat the first line of the message as a one-sentence summary. Most SCM
- systems have an “overview” command that shows shortened commit messages in
- bulk, so making the very beginning of the message meaningful helps make
- those modes more useful for finding specific commits. _It's okay for this to
- be a “what” description_ if the rest of the message is a “why” description.
-
-* Fill out the rest of the message with prose outlining why you made the
- change. The guidelines for a good “why” message are the same as [the
- guidelines for good comments](comments), but commit messages can be
-  significantly longer. Don't bother reiterating the contents of the change in
- detail; anyone who needs that can read the diff themselves.
-
-* If you use an issue tracker (and you should), include whatever issue-linking
- notes it supports right at the start of the message, where it'll be visible
- even in shortlogs. If your tracker has absurdly long issue-linking syntax,
- or doesn't support issue links in commits at all, include a short issue
- identifier at the front of the message and put the long part somewhere out
- of the way, such as on a line of its own at the end of the message.
-
-* Pick a tense and a mood and stick with them. Reading one commit with a
- present-tense imperative message (“Add support for PNGs”) and another commit
- with a past-tense narrative message (“Fixed bug in PNG support”) is
- distracting.
-
-* If you need rich commit messages (links, lists, and so on), pick one markup
- language and stick with it. It'll be easier to write useful commit
- formatters if you only have to deal with one syntax, rather than four.
- (Personally, I use Markdown on projects I control.)
-
- * This also applies to line-wrapping: either hard-wrap everywhere, or
- hard-wrap nowhere.
-
-## An Example
-
- commit 842e6c5f41f6387781fcc84b59fac194f52990c7
- Author: Owen Jacobson <owen.jacobson@grimoire.ca>
- Date: Fri Feb 1 16:51:31 2013 -0500
-
- DS-37: Add support for privileges, and create a default privileged user.
-
- This change gives each user a (possibly empty) set of privileges. Privileges
- are mediated by roles in the following ways:
-
- * Each user is a member of zero or more roles.
- * Each role implies membership in zero or more roles. If role A implies role
- B, then a member of role A is also a transitive member of role B. This
- relationship is transitive: if A implies B and B implies C, then A implies
- C. This graph should not be cyclic, but it's harmless if it is.
- * Each role grants zero or more privileges.
-
- A user's privileges are the union of all privileges of all roles the user is a
- member of, either directly or transitively.
-
- Obviously, a role that implies no other roles and grants no priveleges is
- meaningless to the authorization system. This may be useful for "advisory"
- roles meant for human consumption.
-
- This also introduces a user with the semi-magical name '*admin' (chosen
- because asterisks cannot collide with player-chosen usernames), and the group
- '*superuser' that is intended to hold all privileges. No privileges are yet
- defined.
diff --git a/wiki/dev/configuring-browser-apps.md b/wiki/dev/configuring-browser-apps.md
deleted file mode 100644
index 8bba0b2..0000000
--- a/wiki/dev/configuring-browser-apps.md
+++ /dev/null
@@ -1,108 +0,0 @@
-# Configuring Browser Apps
-
-I've found myself in the unexpected situation of having to write a lot of
-browser apps/single page apps this year. I have some thoughts on configuration.
-
-## Why Bother
-
-* Centralize environment-dependent facts to simplify management & testing
-* Make it easy to manage app secrets.
-
- [@wlonk](https://twitter.com/wlonk) adds:
-
- > “Secrets”? What this means in a browser app is a bit different.
-
- Which is unpleasantly true. In a freestanding browser app, a “secret” is only as secret as your users and their network connections choose to make it, i.e., not very secret at all. Maybe that should read “make it easy to manage app _tokens_ and _identities_,” instead.
-
-* Keep config data & API tokens out of app's source control
-* Integration point for external config sources (Aerobatic, Heroku, etc)
-* The forces described in [12 Factor App:
- Dependencies](http://12factor.net/dependencies) and, to a lesser extent, [12
- Factor App: Configuration](http://12factor.net/config) apply just as well to
- web client apps as they do to freestanding services.
-
-## What Gets Configured
-
-Yes:
-
-* Base URLs of backend services
-* Tokens and client IDs for various APIs
-
-No:
-
-* “Environments” (sorry, Ember folks - I know Ember thought this through carefully, but whole-env configs make it easy to miss settings in prod or test, and encourage patterns like “all devs use the same backends”)
-
-## Delivering Configuration
-
-There are a few ways to get configuration into the app.
-
-### Globals
-
- <head>
- <script>window.appConfig = {
- "FOO_URL": "https://foo.example.com/",
- "FOO_TOKEN": "my-super-secret-token"
- };</script>
- <script src="/your/app.js"></script>
- </head>
-
-* Easy to consume: it's just globals, so `window.appConfig.foo` will read them.
- * This requires some discipline to use well.
-* Have to generate a script to set them.
- * This can be a `<script>window.appConfig = {some json}</script>` tag or a
- standalone config script loaded with `<script src="/config.js">`
- * Generating config scripts sets a minimum level of complexity for the
- deployment process: you either need a server to generate the script at
- request time, or a preprocessing step at deployment time.
-
- * It's code generation, which is easy to do badly. I had originally
- proposed using `JSON.stringify` to generate a Javascript object literal,
- but this fails for any config values with `</script>` in them. That may
- be an unlikely edge case, but that only makes it a nastier trap for
- administrators.
-
- [There are more edge
- cases](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify).
- I strongly suspect that a hazard-free implementation requires a
- full-blown JS source generator. I had a look at building something out of
- [escodegen](https://github.com/estools/escodegen) and
- [estemplate](https://github.com/estools/estemplate), but
-
- 1. `escodegen`'s node version [doesn't generate browser-safe
- code](https://github.com/estools/escodegen/issues/298), so string literals
- with `</script>` or `</head>` in them still break the page, and
- 2. converting javascript values into parse trees to feed to `estemplate`
- is some seriously tedious code.
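-
-A common partial mitigation, sketched below, is to escape `<` inside the
-serialized JSON so that `</script>` (or `</head>`) can never appear literally
-in the generated tag. It runs at build or request time, not in the browser;
-the function name is hypothetical, and it is not a substitute for a real
-source generator:
-
-    // "\u003c" is a valid escape in both JSON and JavaScript, so the emitted
-    // literal still evaluates to the same values.
-    function configScriptTag(config) {
-      const json = JSON.stringify(config).replace(/</g, '\\u003c');
-      return '<script>window.appConfig = ' + json + ';</script>';
-    }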
-
-### Data Attributes and Link Elements
-
- <head>
- <link rel="foo-url" href="https://foo.example.com/">
- <script src="/your/app.js" data-foo-token="my-super-secret-token"></script>
- </head>
-
-* Flat values only. This is probably a good thing in the grand scheme of things, since flat configurations are easier to reason about and much easier to document, but it makes namespacing trickier than it needs to be for groups of related config values (URL + token for a single service, for example).
-* Have to generate the DOM to set them.
- * This is only practical given server-side templates or DOM rendering. You can't do this with bare nginx, unless you pre-generate pages at deployment time.
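-
-Reading the values back is a one-liner per value. A minimal sketch, assuming
-it runs during the initial evaluation of `app.js` as a classic (non-module)
-script, so that `document.currentScript` still points at the loading tag:
-
-    const fooUrl = document.querySelector('link[rel="foo-url"]').href;
-    const fooToken = document.currentScript.dataset.fooToken; // data-foo-token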
-
-### Config API Endpoint
-
- fetch('/config') /* {"FOO_URL": …, "FOO_TOKEN": …} */
- .then(response => response.json())
-        .then(json => someConfigurableService.configure(json));
-
-* Works even with “dumb” servers (nginx, CloudFront) as the endpoint can be a generated JSON file on disk. If you can generate files, you can generate a JSON endpoint.
-* Requires an additional request to fetch the configuration, and logic for injecting config data into all the relevant configurable places in the code.
- * This request can't happen until all the app code has loaded.
- * It's very tempting to write the config to a global. This produces some hilarious race conditions.
-
-### Cookies
-
-See for example [clientconfig](https://github.com/henrikjoreteg/clientconfig):
-
- var config = require('clientconfig');
-
-* Easy to consume given the right tools; tricky to do right from scratch.
-* Requires server-side support to send the correct cookie. Some servers will allow you to generate the right cookie once and store it in a config file; others will need custom logic, which means (effectively) you need an app server.
-* Cookies persist and get re-sent on subsequent requests, even if the server stops delivering config cookies. Client code has to manage the cookie lifecycle carefully (clientconfig does this automatically)
-* Size limits constrain how much configuration you can do.
diff --git a/wiki/dev/debugger-101.md b/wiki/dev/debugger-101.md
deleted file mode 100644
index 6d7e773..0000000
--- a/wiki/dev/debugger-101.md
+++ /dev/null
@@ -1,86 +0,0 @@
-# Intro to Debuggers
-
-(Written largely because newbies in [##java](http://evanchooly.com) never seem
-to have this knowledge.)
-
-A “debugger” is a mechanism for monitoring and controlling the execution of
-your program, usually interactively. Using a debugger, you can stop your
-program at known locations and examine the _actual_ values of its variables
-(to compare against what you expected), monitor variables for changes (to see
-where they got the values they have, and why), and step through code a line at
-a time (to watch control flow and verify that it matches your expectations).
-
-Pretty much every worthwhile language has debugging support of some kind,
-whether it's via IDE integration or via a command-line debugger.
-
-(Of course, none of this helps if you don't have a mental model of the
-“expected” behaviour of the program. Debuggers can help you read, but can't
-replace having an understanding of the code.)
-
-## Debugging Your First Program
-
-Generally, you start running a debugger because you have a known problem -- an
-exception, or code behaving strangely -- somewhere in your program that you
-want to investigate more closely. Start by setting a _breakpoint_ in your
-program at a statement slightly before the problem area.
-
-Breakpoints are instructions to the debugger, telling it to stop execution
-when the program reaches the statement the breakpoint is set on.
-
-Run the program in the debugger. When it reaches your breakpoint, execution
-will stop (and your program will freeze, rather than exiting). You can now
-_inspect_ values and run expressions in the context of your program in its
-current state. Depending on the debugger and the platform, you may be able to
-modify those values, too, to quickly experiment with the problem and attempt
-to solve it.
-
-Once you've looked at the relevant variables, you can resume executing your
-program - generally in one of five ways:
-
-* _Continue_ execution normally. The debugger steps aside until the program
- reaches the next breakpoint, or exits, and your program executes normally.
-
-* Execute the _next_ statement. Execution proceeds for one statement in the
- current function, then stops again. If the statement is, for example, a
- function or method call, the call will be completely evaluated (unless it
- contains breakpoints of its own). (In some debuggers, this is labelled “step
- over,” since it will step “over” a function call.)
-
-* _Step_ forward one operation. Execution proceeds for one statement, then
- stops again. This mode can single-step into function calls, rather than
- letting them complete uninterrupted.
-
-* _Continue to end of function_. The debugger steps aside until the program
- reaches the end of the current function, then halts the program again.
-
-* _Continue to a specific statement_. Some debuggers support this mode as a
- way of stepping over or through “uninteresting” sections of code quickly and
- easily. (You can implement this yourself with “Continue” and normal
- breakpoints, too.)
-
-Whenever the debugger halts your program, you can do any of several things:
-
-* Inspect the value of a variable or field, printing a useful representation
- to the debugger. This is a more flexible version of the basic idea of
- printing debug output as you go: because the program is stopped, you can
- pick and choose which bits of information to look at on the fly, rather than
- having to rerun your code with extra debug output.
-
-* Inspect the result of an expression. The debugger will evaluate an
- expression “as if” it occurred at the point in the program where the
- debugger is halted, including any local variables. In languages with static
- visibility controls like Java, visibility rules are often relaxed in the
- name of ease of use, allowing you to look at the private fields of objects.
- The result of the expression will be made available for inspection, just
- like a variable.
-
-* Modify a variable or field. You can use this to quickly test hypotheses: for
- example, if you know what value a variable “should” have, you can set that
- value directly and observe the behaviour of the program to check that it
- does what you expected before fixing the code that sets the variable in a
- non-debug run.
-
-* In some debuggers, you can run arbitrary code in the context of the halted
- program.
-
-* Abort the program.
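-
-As a concrete, minimal example, Node's built-in command-line debugger exposes
-these operations as commands. Assuming a script `app.js` with a suspect
-statement around line 14 and a hypothetical `order` variable, a session might
-look roughly like this (`cont`, `next`, `step`, and `out` are the resume modes
-described above; `repl` evaluates expressions in the halted frame; Ctrl+C
-returns from the repl to the debugger prompt):
-
-    $ node inspect app.js
-    debug> setBreakpoint('app.js', 14)
-    debug> cont
-    break in app.js:14
-    debug> repl
-    > order.total
-    NaN
-    debug> next
-    debug> step
-    debug> out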
diff --git a/wiki/dev/entry-points.md b/wiki/dev/entry-points.md
deleted file mode 100644
index 0e56ce0..0000000
--- a/wiki/dev/entry-points.md
+++ /dev/null
@@ -1,56 +0,0 @@
-# Entry Points
-
-The following captures a conversation from IRC:
-
-> [Owen J](https://twitter.com/derspiny): Have you run across the idea
-> of an "entry point" in a runtime yet? (You've definitely used it, just
-> possibly not known it had a name.)
->
-> [Alex L](https://twitter.com/aeleitch): I have not!
->
-> [Owen J](https://twitter.com/derspiny): It's the point where the
-> execution of the outside system -- the OS, the browser, the Node
-> runtime, whatever -- stops and the execution of your code starts. Some
-> platforms only give you one: C on Unix is classic, where there's only
-> two entry points: main and signal handlers (and a lot of apps only use
-> main). JS gives you _a shit fucking ton_ of entry points.
->
-> [Owen J](https://twitter.com/derspiny): In a browser, the pageload
-> process is an entry point: your code gets run when the browser
-> encounters a `<script>` tag. So is every event handler. There's none
-> of your code running when an event handler starts, only the browser
-> is running. So is every callback from an external service, like
-> `XmlHttpRequest` or `EventSource` or the `File` APIs. In Node, the top
-> level of your main script is an entry point, but so is every callback
-> from an external service.
->
-> [Alex L](https://twitter.com/aeleitch): Ahahahahahahaha oh my
-> god. There is no way for me to contain them all. _everything the light
-> touches._
->
-> [Owen J](https://twitter.com/derspiny): This is important for
-> reasoning about exception handling! _In JS_, exception handling only
-> propagates one direction: towards the entry point of this sequence of
-> function calls.
->
-> [Alex L](https://twitter.com/aeleitch): Yes. This is what _I_ call a
-> stack trace.
->
-> [Owen J](https://twitter.com/derspiny): If an exception escapes from
-> an entry point, the JS runtime logs it, and then the outside runtime
-> takes over again. That's one of the ways callbacks from external
-> services fuck up the idea of a stack trace as a map of control flow.
->
-> [Alex L](https://twitter.com/aeleitch): Huh. Yes. Yes I can see
-> that. I mean, in my world, control flow is a somewhat handwavey idea
-> right now. I'm starting to understand why so many people hate JS-land.
->
-> [Owen J](https://twitter.com/derspiny): Sure. But, for example, a
-> promise chain is a tool for restructuring control flow. In principle,
-> error handling should provide _some_ kind of map of that, to allow
-> programmers -- you -- to diagnose how a program reached a given error
-> state and maybe one day fix the problem. In THIS future, none of them
-> do that well, though.
->
-> [Alex L](https://twitter.com/aeleitch): Yes. Truly the darkest
-> timeline, but this reviews why I am having these concerns.
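-
-A minimal, browser-flavoured sketch of the exception-handling point above
-(`button` is assumed to be some DOM element already in hand):
-
-    try {
-      button.addEventListener('click', () => {
-        throw new Error('boom'); // runs much later, from a browser entry point
-      });
-    } catch (e) {
-      // Never reached. Registering the handler succeeds; by the time the
-      // handler throws, this call stack no longer exists, so the error
-      // propagates to the entry point and the runtime just logs it.
-    }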
diff --git a/wiki/dev/gnu-collective-action-license.md b/wiki/dev/gnu-collective-action-license.md
deleted file mode 100644
index 6a0bc3b..0000000
--- a/wiki/dev/gnu-collective-action-license.md
+++ /dev/null
@@ -1,51 +0,0 @@
-# The GPL As Collective Action
-
-Programmers, like many groups of subject experts, are widely afflicted by the
-belief that all other fields of expertise can be reduced to a special case of
-programming expertise. For a great example of this, watch [programmers argue
-about law](https://xkcd.com/1494/) (which can _obviously_ be reduced to a rules
-system, which is a programming problem),
-[consent](https://www.reddit.com/r/Bitcoin/comments/2e5a7k/could_the_blockchain_be_used_to_prove_consensual/)
-(which is _obviously_ about non-repudiatable proofs, which are a programming
-problem), or [art](https://github.com/google/deepdream) (which is _obviously_
-reducible to simple but large automata). One key symptom of this social pattern
-is a disregard for outside expertise and outside bodies of knowledge.
-
-I believe this habit may have bitten Stallman.
-
-The GNU General Public License presents a simple, legally enforceable offer: in return
-for granting the right to distribute the licensed work and its derivatives, the
-GPL demands that derivative works also be released under the GPL. The _intent_,
-as derived from
-[Stallman’s commentaries](http://www.gnu.org/philosophy/open-source-misses-the-point.en.html)
-on the GPL and on the social systems around software, is that people who _use_
-information systems should, morally and legally, be entitled to the tools to
-understand what the system will do and why, and to make changes to those tools
-as they see fit.
-
-This is a form of _collective action_, as implemented by someone who thinks of
-unions and organized labour as something that software could do better. The
-usual lens for critique of the GPL is that GPL’d software cannot be used in
-non-GPL systems (which is increasingly true, as the Free Software Foundation
-catches up with the “as a Service” model of software delivery) _by developers_,
-but I think there’s a more interesting angle on it as an attempt to apply the
-collective bargaining power of programmers as a class to extracting a
-concession from managerial -- business and government -- interests, instead. In
-that reading, the GPL demands that managerial interests in software avoid
-behaviours that would be bad for programmers (framed as “users”, as above) as a
-condition of benefitting from the labour of those programmers.
-
-Sadly, Stallman is not a labour historian or a union organizer. He’s a public
-speaker and a programmer. By attempting to reinvent collective action from
-first principles, and by treating collective action as a special case of
-software development, the GPL acts to divide programmers from non-programming
-computer users, and to weaken the collective position of programmers vis-à-vis
-managerial interests. The rise of “merit”-based open source licenses, such as
-the MIT license (which I use heavily, but advisedly), and the increasing
-pervasiveness of the Github Resume, are both simple consequences of this
-mistake.
-
-I’m pro-organized-labour, and largely pro-union. The only thing worse than
-having two competing powerful interests in the room is having only one powerful
-interest in the room. The GPL should be part of any historical case study for
-the unionization of programmers, since it captures so much of what we do wrong.
diff --git a/wiki/dev/go.md b/wiki/dev/go.md
deleted file mode 100644
index f20914b..0000000
--- a/wiki/dev/go.md
+++ /dev/null
@@ -1,112 +0,0 @@
-# I Do Not Like Go
-
-I use Go at my current day job. I've gotten pretty familiar with it. I do not like it, and its popularity is baffling to me.
-
-## Developer Ergonomics
-
-I've never met a language lead so openly hostile to the idea of developer ergonomics. To pick one example, Rob Pike has been repeatedly and openly hostile to any discussion of syntax highlighting on the Go playground. In response to reasonably-phrased user questions, his public answers have been disdainful and disrespectful:
-
-> Gofmt was written to reduce the number of pointless discussions about code formatting. It succeeded admirably. I'm sad to say it had no effect whatsoever on the number of pointless discussions about syntax highlighting, or as I prefer to call it, spitzensparken blinkelichtzen.
-
-From a [2012 Go-Nuts thread](http://grokbase.com/t/gg/golang-nuts/12asys9jn4/go-nuts-go-playground-syntax-highlighting), and again:
-
-> Syntax highlighting is juvenile. When I was a child, I was taught arithmetic using colored rods (http://en.wikipedia.org/wiki/Cuisenaire_rods). I grew up and today I use monochromatic numerals.
-
-Clearly nobody Rob cares about has ever experienced synaesthesia, dyslexia, or poor eyesight. Rob's resistance to the idea has successfully kept Go's official site and docs highlighting-free as of this writing.
-
-The Go team is not Rob Pike, but they've shared his attitude towards ergonomics in other ways. In a discussion of [union/sum types](https://github.com/golang/go/issues/19412), user ianlancetaylor rejects the request out of hand by specifically identifying an ergonomic benefit and writing it off as too minor to be worth bothering with:
-
-> This has been discussed several times in the past, starting from before the open source release. The past consensus has been that sum types do not add very much to interface types. Once you sort it all out, what you get in the end if an interface type where the compiler checks that you've filled in all the cases of a type switch. That's a fairly small benefit for a new language change.
-
-This attitude is at odds with opinions about union types in other languages. JWZ, criticising Java in 2000, wrote:
-
-> Similarly, I think the available idioms for simulating enum and :keywords fairly lame. (There's no way for the compiler to issue that life-saving warning, ``enumeration value `x' not handled in switch'', for example.)
-
-The Java team took criticism in this vein to heart, and Java can now emit this warning for `switch`es over `enum` types. Other languages - including both modern languages such as Rust, Scala, Elixir, and friends, as well as Go's own direct ancestor, C - similarly warn where possible. Clearly, these kinds of warning are useful, but to the Go team, developer comfort is not important enough to merit consideration.
-
-## Politics
-
-No, not the mailing-lists-and-meetups kind. A deeper and more interesting kind.
-
-Go is, like every language, a political vehicle. It embodies a particular set of beliefs about how software should be written and organized. In Go's case, the language embodies an extremely rigid caste hierarchy of "skilled programmers" and "unskilled programmers," enforced by the language itself.
-
-On the unskilled programmers side, the language forbids features considered "too advanced." Go has no generics, no way to write higher-order functions that generalize across more than a single concrete type, and extremely stringent prescriptive rules about the presence of commas, unused symbols, and other infelicities that might occur in ordinary code. This is the world in which Go programmers live - one which is, if anything, even _more_ constrained than Java 1.4 was.
-
-On the skilled programmers side, programmers are trusted with those features, and can expose things built with them to other programmers on both sides of the divide. The language implementation contains generic functions which cannot be implemented in Go, and which satisfy typing relationships the language simply cannot express. This is the world in which the Go _implementors_ live.
-
-I can't speak for Go's genesis within Google, but outside of Google, this underanalysed political stance dividing programmers into "trustworthy" and "not" underlies many arguments about the language.
-
-## Packaging and Distribution of Go Code
-
-`go get` is a disappointing abdication of responsibility. Packaging boundaries are communications boundaries, and the Go team's response of "vendor everything" amounts to refusing to help developers communicate with one another about their code.
-
-I can respect the position the Go team has taken, which is that it's not their problem, but that puts them at odds with every other major language. Considering the disastrous history of attempts at package management for C libraries and the existence of Autotools as an example of how this can go very wrong over a long-enough time scale, it's very surprising to see a language team in this century washing their hands of the situation.
-
-## GOPATH
-
-The use of a single monolithic path for all sources makes version conflicts between dependencies nearly unavoidable. The `vendor` workaround partially addresses the problem, at the cost of substantial repository bloat and non-trivial linkage changes which can introduce bugs if a vendored and a non-vendored copy of the same library are linked in the same application.
-
-Again, the Go team's "not our problem" response is disappointing and frustrating.
-
-## Error Handling in Go
-
-The standard Go approach to operations which may fail involves returning multiple values (not a tuple; Go has no tuples) where the last value is of type `error`, which is an interface whose `nil` value means “no error occurred.”
-
-Because this is a convention, it is not representable in Go's type system. There is no generalized type representing the result of a fallible operation, over which one can write useful combining functions. Furthermore, it's not rigidly adhered to: nothing other than good sense stops a programmer from returning an `error` in some other position, such as in the middle of a sequence of return values, or at the start - so code generation approaches to handling errors are also fraught with problems.
-
-It is not possible, in Go, to compose fallible operations in any way less verbose than some variation on
-```go
- a, err := fallibleOperationA()
- if err != nil {
- return nil, err
- }
-
- b, err := fallibleOperationB(a)
- if err != nil {
- return nil, err
- }
-
- return b, nil
-```
-
-In other languages, this can variously be expressed as
-
-```java
-    a = fallibleOperationA();
-    b = fallibleOperationB(a);
-    return b;
-```
-
-in languages with exceptions, or as
-
-```javascript
- return fallibleOperationA()
-    .then(a => fallibleOperationB(a));
-```
-
-in languages with abstractions that can operate over values with cases.
-
-This has real impact: code which performs long sequences of fallible operations expends a substantial amount of typing effort to write (even with editor support generating the branches), and a substantial amount of cognitive effort to read. Style guides help, but mixing styles makes it worse. Consider:
-
-```go
- a, err := fallibleOperationA()
- if err != nil {
- return nil, err
- }
-
- if err := fallibleOperationB(a); err != nil {
- return nil, err
- }
-
- c, err := fallibleOperationC(a)
- if err != nil {
- return nil, err
- }
-
- fallibleOperationD(a, c)
-
- return fallibleOperationE()
-```
-
-God help you if you nest them, or want to do something more interesting than passing an error back up the stack.
diff --git a/wiki/dev/liquibase.md b/wiki/dev/liquibase.md
deleted file mode 100644
index 01e989f..0000000
--- a/wiki/dev/liquibase.md
+++ /dev/null
@@ -1,77 +0,0 @@
-# Liquibase
-
-Note to self: I think this (a) needs an outline and (b) wants to become a “how
-to automate db upgrades for dummies” page. Also, this is really old (~2008)
-and many things have changed: database migration tools are more
-widely-available and mature now. On the other hand, I still see a lot of
-questions on IRC that are based on not even knowing these tools exist.
-
------
-
-Successful software projects are characterized by extensive automation and
-supporting tools. For source code, we have version control tools that support
-tracking and reviewing changes, marking particular states for release, and
-automating builds. For databases, the situation is rather less advanced in a
-lot of places: outside of Rails, which has some rather nice
-[migration](http://wiki.rubyonrails.org/rails/pages/understandingmigrations)
-support, and [evolutions](http://code.google.com/p/django-evolution/) or
-[South](http://south.aeracode.org) for Django, there are few tools that
-actually track changes to the database or to the model in a reproducible way.
-
-While I was exploring the problem by writing some scripts for my own projects,
-I came to a few conclusions. You need to keep a receipt for the changes a
-database has been exposed to in the database itself so that the database can
-be reproduced later. You only need scripts to go forward from older versions
-to newer versions. Finally, you need to view DDL statements as a degenerate
-form of diff, between two database states, that's not combinable the way
-textual diff is except by concatenation.
-
-Someone on IRC mentioned [Liquibase](http://www.liquibase.org/) and
-[migrate4j](http://migrate4j.sourceforge.net/) to me. Since I was already in
-the middle of writing a second version of my own scripts to handle the issues
-I found writing the first version, I stopped and compared notes.
-
-Liquibase is essentially the tool I was trying to write, only with two years
-of relatively talented developer time poured into it rather than six weeks.
-
-Liquibase operates off of a version table it maintains in the database itself,
-which tracks what changes have been applied to the database, and off of a
-configuration file listing all of the database changes. Applying new changes
-to a database is straightforward: by default, it goes through the file and
-applies all the changes that are in the file that are not already in the
-database, in order. This ensures that incremental changes during development
-are reproduced in exactly the same way during deployment, something lots of
-model-to-database migration tools have a problem with.
-
-The developers designed the configuration file around some of the ideas from
-[Refactoring
-Databases](http://www.amazon.com/Refactoring-Databases-Evolutionary-Addison-Wesley-Signature/dp/0321293533),
-and provided an [extensive list of canned
-changes](http://www.liquibase.org/manual/home#available_database_refactorings)
-as primitives in the database change scripts. However, it's also possible to
-insert raw SQL commands (either DDL, or DML queries like `SELECT`s and
-`INSERT`s) at any point in the change sequence if some change to the database
-can't be accomplished with its set of refactorings. For truly hairy databases,
-you can use either a Java class implementing your change logic or a shell
-script alongside the configuration file.
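-
-From memory, a minimal changelog in the XML format of that era looks roughly
-like this; the table, columns, and change set ids are made up for
-illustration, and namespace declarations are omitted:
-
-    <databaseChangeLog>
-      <changeSet id="1" author="owen">
-        <createTable tableName="users">
-          <column name="id" type="int" autoIncrement="true">
-            <constraints primaryKey="true" nullable="false"/>
-          </column>
-          <column name="email" type="varchar(255)"/>
-        </createTable>
-      </changeSet>
-      <changeSet id="2" author="owen">
-        <!-- Fall back to raw SQL when no canned refactoring fits. -->
-        <sql>UPDATE users SET email = lower(email)</sql>
-      </changeSet>
-    </databaseChangeLog>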
-
-The tools for applying database changes to databases are similarly flexible:
-out of the box, liquibase can be embedded in a fairly wide range of Java
-applications using servlet context listeners, a Spring adapter, or a Grails
-adapter; it can also be run from an ant or maven build, or as a standalone
-tool.
-
-My biggest complaint is that liquibase is heavily Java-centric; while the
-developers are planning .Net support, it'd be nice to use it for Python apps
-as well. Triggering liquibase upgrades from anything other than a Java program
-involves either shelling out to the `java` command or creating a JVM and
-writing native glue to control the upgrade process, which are both pretty
-painful. I'm also less than impressed with the javadoc documentation; while
-the manual is excellent, the javadocs are fairly incomplete, making it hard to
-write customized integrations.
-
-The liquibase developers deserve a lot of credit for solving a hard problem
-very cleanly.
-
-*[DDL]: Data Definition Language
-*[DML]: Data Manipulation Language
diff --git a/wiki/dev/merging-structural-changes.md b/wiki/dev/merging-structural-changes.md
deleted file mode 100644
index d1c7a9c..0000000
--- a/wiki/dev/merging-structural-changes.md
+++ /dev/null
@@ -1,85 +0,0 @@
-# Merging Structural Changes
-
-In 2008, a project I was working on set out to reinvent their build process,
-migrating from a mass of poorly-written Ant scripts to Maven and reorganizing
-their source tree in the process. The development process was based on having
-a branch per client, so there was a lot of ongoing development on the original
-layout for clients that hadn't been migrated yet. We discovered that our
-version control tool, [Subversion](http://subversion.tigris.org/), was unable
-to merge the changes between client branches on the old structure and the
-trunk on the new structure automatically.
-
-Curiosity piqued, I cooked up a script that reproduces the problem and
-performs the merge from various directions to examine the results. Subversion,
-sadly, performed dismally: none of the merge scenarios tested retained content
-changes when merging structural changes to the same files.
-
-## The Preferred Outcome
-
-![Both changes survive the
-merge.](/media/dev/merging-structural-changes/ideal-merge-results.png)
-
-The diagram above shows a very simple source tree with one directory, `dir-a`,
-containing one file with two lines in it. On one branch, the file is modified
-to have a third line; on another branch, the directory is renamed to `dir-b`.
-Then, both branches are merged, and the resulting tree contains both sets of
-changes: the file has three lines, and the directory has a new name.
-
-This is the preferred outcome, as no changes are lost or require manual
-merging.
-
-## Subversion
-
-![Subversion loses the content
-change.](/media/dev/merging-structural-changes/subversion-merge-results.png)
-
-There are two merge scenarios in this diagram, with almost the same outcome.
-On the left, a working copy of the branch where the file's content changed is
-checked out, then the changes from the branch where the structure changed are
-merged in. On the right, a working copy of the branch where the structure
-changed is checked out, then the changes from the branch where the content
-changed are merged in. In both cases, the result of the merge has the new
-directory name, and the original file contents. In one case, the merge
-triggers a rather opaque warning about a “missing file”; in the other, the
-merge silently ignores the content changes.
-
-This is a consequence of the way Subversion implements renames and copies.
-When Subversion assembles a changeset for committing to the repository, it
-comes up with a list of primitive operations that reproduce the change. There
-is no primitive that says “this object was moved,” only primitives which say
-“this object was deleted” or “this object was added, as a copy of that
-object.” When you move a file in Subversion, those two operations are
-scheduled. Later, when Subversion goes to merge content changes to the
-original file, all it sees is that the file has been deleted; it's completely
-unaware that there is a new name for the same file.
-
-This would be fairly easy to remedy by adding a “this object was moved to that
-object” primitive to the changeset language, and [a bug report for just such a
-feature](http://subversion.tigris.org/issues/show_bug.cgi?id=898) was filed in
-2002. However, by that time Subversion's repository and changeset formats had
-essentially frozen, as Subversion was approaching a 1.0 release and more
-important bugs _without_ workarounds were a priority.
-
-There is some work going on in Subversion 1.6 to handle tree conflicts (the
-kind of conflicts that come from this kind of structural change) more
-sensibly, which will cause the two merges above to generate a Conflict result,
-which is not as good as automatically merging it but far better than silently
-ignoring changes.
-
-## Mercurial
-
-![Mercurial preserves the content
-change.](/media/dev/merging-structural-changes/mercurial-merge-results.png)
-
-Interestingly, there are tools which get this merge scenario right: the
-diagram above shows how [Mercurial](http://www.selenic.com/mercurial/) handles
-the same two tests. Since its changeset language does include an “object
-moved” primitive, it's able to take a content change for `dir-a/file` and
-apply it to `dir-b/file` if appropriate.
-
-## Git
-
-Git also gets this scenario right, _usually_. Unlike Mercurial, Git does not
-track file copies or renames in its commits at all, preferring to infer them by
-content comparison every time it performs a move-aware operation, such as a
-merge.
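-
-A rough sketch of reproducing the scenario with Git, starting from a commit
-that already contains `dir-a/file` as in the diagrams above (branch names are
-made up):
-
-    git checkout -b rename
-    git mv dir-a dir-b
-    git commit -m "Rename dir-a to dir-b"
-
-    git checkout -
-    echo "a third line" >> dir-a/file
-    git commit -am "Add a third line"
-
-    git merge rename   # the edit lands in dir-b/file; the rename is inferred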
diff --git a/wiki/dev/on-rights.md b/wiki/dev/on-rights.md
deleted file mode 100644
index d277b8a..0000000
--- a/wiki/dev/on-rights.md
+++ /dev/null
@@ -1,21 +0,0 @@
-# On Rights
-
-Or: your open-source project is a legal minefield, and fixing that is counterintuitive and unpopular.
-
-The standard approach to releasing an open-source project in this age is to throw your code up on Github with a `LICENSE` file describing the terms under which other people may copy and distribute the project and its derived works. This is all well and good: when you write code for yourself, you generally hold the copyright to that code, and you can license it out however you see fit.
-
-However, Github encourages projects to accept contributions. Pull request activity is, rightly or wrongly, considered a major indicator of project health by the Github community at large. Moreover, each pull request represents a gift of time and labour: projects without a clear policy otherwise are often in no position to reject such a gift unless it has clear defects.
-
-This is a massive problem. The rights to contributed code are, generally, owned by the contributor, and not by the project's original authors, and a pull request, on its own, isn't anywhere near adequate to transfer those rights to the project maintainers.
-
-Intuitively, it may seem like a good idea for each contributor to retain the rights to their contributions. There is a good argument that code contributed with the intent that it be included in the published project is implicitly offered under the same license as the project as a whole, and retaining the rights can effectively prevent the project from ever switching to a more-restrictive license without the contributor's consent.
-
-However, it also cripples the project's legal ability to enforce the license. Someone distributing the project in violation of the license terms is infringing on all of those individual copyrights, and no contributor has obvious standing to bring suit on behalf of any other. Suing someone for copyright infringement becomes difficult: anyone seeking to bring suit either needs to restrict the suit to the portions they hold the copyright to (difficult when each contribution is functionally intertangled with every other), or obtain permission from all of the contributors, including those under pseudonyms or who have _died_, to file suit collectively. This, in turn, de-fangs whatever restrictions the license nominally imposes.
-
-There are a few fixes for this.
-
-The simplest one, from an implementation perspective, is to require that contributors agree in writing to assign the rights to their contribution to the project's maintainers, or to an organization. _This is massively unpopular_: asking a developer to give up rights to their contributions tends to provoke feelings that the project wants to take without giving, and the rationale justifying such a request isn't obvious without a grounding in intellectual property law. As things stand, the only projects that regularly do this are those backed by major organizations, as those organizations tend to be more sensitive to litigation risk and have the resources to understand and demand such an assignment. (Example: [the Sun Contributor Agreement](https://www.openoffice.org/licenses/sca.pdf), which is not popular.)
-
-More complex - too complex to do without an attorney, honestly - is to require that contributors sign an agreement authorizing the project's maintainers or host organization to bring suit on their behalf with respect to their contributions. As attorneys are not free and as there are no "canned" agreements for this, it's not widely done. I anticipate that it might provoke a lot of the same reactions, but it does leave contributors nominally in possession of the rights to their work.
-
-The status quo is, I think, untenable in the long term. We've already seen major litigation over project copyrights, and in the case of the [FSF v. Cisco](https://www.fsf.org/licensing/complaint-2008-12-11.pdf), the Free Software Foundation was fortunate that substantial parts of the infringing use were works to which the FSF held clear copyrights.
diff --git a/wiki/dev/papers.md b/wiki/dev/papers.md
deleted file mode 100644
index 03ae430..0000000
--- a/wiki/dev/papers.md
+++ /dev/null
@@ -1,36 +0,0 @@
-# Papers of Note
-
-On Slack:
-
-> [Ben W](https://twitter.com/bwarren24):
->
-> What are people's favorite CS papers?
-
-* Perlman, Radia (1985). ["An Algorithm for Distributed Computation of a Spanning Tree in an Extended LAN"][1]. ACM SIGCOMM Computer Communication Review. 15 (4): 44–53. doi:10.1145/318951.319004.
-
-* [The related Algorhyme][2], also by Perlman.
-
-* Guy Lewis Steele, Jr. "[Debunking the 'Expensive Procedure Call' Myth, or, Procedure Call Implementations Considered Harmful, or, Lambda: The Ultimate GOTO][3]". MIT AI Lab. AI Lab Memo AIM-443. October 1977.
-
-* [What Every Computer Scientist Should Know About Floating-Point Arithmetic][4], by David Goldberg, published in the March, 1991 issue of Computing Surveys. Copyright 1991, Association for Computing Machinery, Inc.
-
-* [RFC 1925][5].
-
-* [Ken Thompson's NFA paper][6] on regular expressions.
-
-* [The Eight Fallacies of Distributed Computing][7].
-
-* [HAKMEM][8] is another good one. It's _dense_ but rewarding.
-
-* Kahan, William (January 1965), "[Further remarks on reducing truncation errors][9]", Communications of the ACM, 8 (1): 40, doi:10.1145/363707.363723
-
-
-[1]: https://www.researchgate.net/publication/238778689_An_Algorithm_for_Distributed_computation_of_a_Spanning_Tree_in_an_Extended_LAN
-[2]: http://etherealmind.com/algorhyme-radia-perlman/
-[3]: https://dspace.mit.edu/bitstream/handle/1721.1/5753/AIM-443.pdf
-[4]: https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
-[5]: https://www.ietf.org/rfc/rfc1925.txt
-[6]: https://www.fing.edu.uy/inco/cursos/intropln/material/p419-thompson.pdf
-[7]: http://wiki.c2.com/?EightFallaciesOfDistributedComputing
-[8]: http://w3.pppl.gov/~hammett/work/2009/AIM-239-ocr.pdf
-[9]: https://dl.acm.org/citation.cfm?id=363723
diff --git a/wiki/dev/rich-shared-models.md b/wiki/dev/rich-shared-models.md
deleted file mode 100644
index 7fac072..0000000
--- a/wiki/dev/rich-shared-models.md
+++ /dev/null
@@ -1,102 +0,0 @@
-# Rich Shared Models Must Die
-
-In a gaming system I once worked on, there was a single class which was
-responsible for remembering everything about a user: their name and contact
-information, their wagers, their balance, and every other fact about a user
-the system cared about. In a system I'm working with now, there's a set of
-classes that collaborate to track everything about the domain: prices,
-descriptions, custom search properties, and so on.
-
-Both of these are examples of shared, system-wide models.
-
-Shared models are evil.
-
-Shared models _must be destroyed_.
-
-A software system's model is the set of functions and data types it uses to
-decide what to do in response to various events. Models embody the development
-team's assumptions and knowledge about the problem space, and usually reflect
-the structure of the applications that use them. Not all systems have explicit
-models, and it's often hard to draw a line through the code base separating
-the code that is the model from the code that is not as every programmer sees
-models slightly differently.
-
-With the rise of object-oriented development, explicit models became the focus
-of several well-known practices. Many medium-to-large projects are built
-“model first,” with the interfaces to that model being sketched out later in
-the process. Since the model holds the system's understanding of its task,
-this makes sense, and so long as you keep the problem you're actually solving
-in mind, it works well. Unfortunately, it's too easy to lose sight of the
-problem and push the model as the whole reason for the system around it. This,
-in combination with both emotional and technical investment in any existing
-system, strongly encourages building new systems around the existing model
-pieces even if the relationship between the new system and that model is
-tenuous at best.
-
-* Why do we share them?
- * Unmanaged growth
- * Adding features to an existing system
- * Building new systems on top of existing tools
- * Misguided applications of “simplicity” and “reuse”
- * Encouraged by distributed object systems (CORBA, EJB, SOAP, COM)
-* What are the consequences?
- * Models end up holding behaviour and data relevant to many applications
- * Every application using the model has to make the same assumptions
- * Changing the model usually requires upgrading everyone at the same time
- * Changes to the model are risky and impact many applications, even if the
- changes are only relevant to one application
-* What should we do instead?
- * Narrow, flat interfaces
- * Each system is responsible for its own modelling needs
- * Systems share data and protocols, not objects
- * Libraries are good, if the entire world doesn't need to upgrade at the
- same time
-
-It's easy to start building a system by figuring out what the various nouns it
-cares about are. In the gambling example, one of our nouns was a user (the guy
-sitting at a web browser somewhere), who would be able to log in, deposit
-money, place a wager, and would have to be notified when the wager was
-settled. This is a clear, reasonable entity for describing the goal of placing
-bets online, which we could make reasonable assumptions about. It's also a
-terrible thing to turn into a class.
-
-The User class in our gambling system was responsible for all of those things;
-as a result, every part of the system ended up using a User object somewhere.
-Because the User class had many responsibilities, it was subject to frequent
-changes; because it was used everywhere, those changes had the capability to
-break nearly any part of the overall system. Worse, because so much
-functionality was already in one place, it became psychologically easy to add
-one more responsibility to its already-bloated interface.
-
-What had been a clean model in the problem space eventually became one of a
-handful of “glue” pieces in a [big ball of
-mud](http://www.laputan.org/mud/mud.html#BigBallOfMud) program. The User
-object did not come about through conscious design, but rather through
-evolution from a simple system. There was no clear point where User became
-“too big”; instead, the vagueness of its role slowly grew until it became the
-default behaviour-holder for all things user-specific.
-
-The same problem modeling exercise also points at a better way to design the
-same system: it describes a number of capabilities the system needed to be
-able to perform, each of which is simpler than “build a gaming website.” Each
-of these capabilities (accept or reject logins, process deposits, accept and
-settle wagers, and send out notification emails to players) has a much simpler
-model and solves a much more constrained problem. There is no reason the
-authentication service needs to share any data except an identity with the
-wagering service: one cares about login names, passwords, and authorization
-tickets while the other cares about accounting, wins and losses, and posted
-odds.
-
-There is a small set of key facts that can be used to correlate all of these pieces:
-usernames, which uniquely identify a user, can be used to associate data and
-behaviour in the login domain with data and behaviour in the accounting and
-wagering domain, and with information in a contact management domain. All of
-these key facts are flat—they have very little structure and no behaviour, and
-can be passed from service to service without dragging along an entire
-application's worth of baggage data.
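-
-As a rough sketch of what that looks like (hypothetical names, with
-in-memory dictionaries standing in for each service's private storage),
-nothing richer than the username ever crosses a service boundary:
-
-    # Each service keeps its own model, keyed by the same flat username.
-    logins = {"gianna": {"password_hash": "...", "tickets": []}}
-    accounts = {"gianna": {"balance": 15000, "wagers": []}}
-    contacts = {"gianna": {"email": "gianna@example.com"}}
-
-    def place_wager(username, amount, odds):
-        account = accounts[username]      # the wagering service's own model
-        if account["balance"] < amount:
-            raise ValueError("insufficient funds")
-        account["balance"] -= amount
-        account["wagers"].append({"amount": amount, "odds": odds})
-        return username                   # only the flat key leaves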
-
-Sharing model classes between many services creates a huge maintenance
-bottleneck. Isolating models within the services they support helps encourage
-clean separations between services, which in turn makes it much easier to
-understand individual services and much easier to maintain the system as a
-whole. Kindergarten lied: sharing is _wrong_.
diff --git a/wiki/dev/shutdown-hooks.md b/wiki/dev/shutdown-hooks.md
deleted file mode 100644
index 1cc5a81..0000000
--- a/wiki/dev/shutdown-hooks.md
+++ /dev/null
@@ -1,29 +0,0 @@
-# Falsehoods Programmers Believe About Shutdown Hooks
-
-Shutdown hooks are language features allowing programs to register callbacks to run during the underlying runtime's orderly teardown. For example:
-
-* C's [`atexit`](http://man7.org/linux/man-pages/man3/atexit.3.html),
-
-* Python's [`atexit`](https://docs.python.org/library/atexit.html), which is subtly different,
-
-* Ruby's [`Kernel.at_exit`](http://www.ruby-doc.org/core-2.1.3/Kernel.html#method-i-at_exit), which is different again,
-
-* Java's [`Runtime.addShutdownHook`](http://docs.oracle.com/javase/8/docs/api/java/lang/Runtime.html#addShutdownHook-java.lang.Thread-), which is yet again different
-
-(There's an example in your favourite language.)
-
-The following beliefs are widespread and incorrect:
-
-1. **Your shutdown hook will run.** Non-exhaustively: the power can go away. The OS may terminate the program immediately because of resource shortages. An administrator or process management tool may send `SIGKILL` to the process. All of these things, and others, will not run your shutdown hook. (See the sketch at the end of this page.)
-
-2. **Your shutdown hook will run last.** Look at the shapes of the various shutdown hook APIs above: they all allow multiple hooks to be registered in arbitrary orders, and at least one _outright requires_ that hooks run concurrently.
-
-3. **Your shutdown hook will not run last.** Sometimes, you win, and objects your hook requires get cleaned up before your hook runs.
-
-4. **Your shutdown hook will run to completion.** Some languages run shutdown hooks even when the original termination request came from, for example, the user logging out. Most environments give programs a finite amount of time to wrap up before forcibly terminating them; your shutdown hook may well be mid-run when this occurs.
-
-5. **Your shutdown hook will be the only thing running.** In languages that support “daemon” threads, shutdown hooks may start before daemon threads terminate. In languages with concurrent shutdown hooks, other hooks will be in flight at the same time. On POSIX platforms, signals can still arrive during your shutdown hook. (Did you start any child processes? `SIGCHLD` can still arrive.)
-
-6. **You need a shutdown hook.** Closing files, terminating threads, and hanging up network connections are all done automatically by the OS as part of process destruction. The behaviour of the final few writes to a file handle aren't completely deterministic (unflushed data can be lost), but that's true even if a shutdown hook tries to close the file.
-
-Programs that rely on shutdown hooks for correctness should be treated as de-facto incorrect, much like object finalization in garbage-collected languages.
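-
-As a minimal demonstration of the first falsehood (Python shown, assuming a POSIX platform; any of the hooks above behaves the same way), a process that receives `SIGKILL` never runs its hook:
-
-    import atexit
-    import os
-    import signal
-
-    atexit.register(lambda: print("shutdown hook ran"))
-
-    # A normal exit runs the hook; SIGKILL does not. Comment out the kill
-    # to see the difference.
-    os.kill(os.getpid(), signal.SIGKILL)
-    print("never reached, and the hook never runs")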
diff --git a/wiki/dev/stop-building-synchronous-web-containers.md b/wiki/dev/stop-building-synchronous-web-containers.md
deleted file mode 100644
index 320b3f7..0000000
--- a/wiki/dev/stop-building-synchronous-web-containers.md
+++ /dev/null
@@ -1,41 +0,0 @@
-# Stop Building Synchronous Web Containers
-
-Seriously, stop it. It's surreally difficult to build a sane asynchronous service on top of a synchronous API, but building a synchronous service on top of an asynchronous API is easy.
-
-* WSGI: container calls the application as a function, and uses the return
- value for the response body. Asynchronous apps generally use a non-WSGI
- base (see for example [Bottle](http://bottlepy.org/docs/dev/async.html)).
-
-* Rack: container calls the application as a method, and uses the return
- value for the complete response. Asynchronous apps generally use a non-Rack
- base (see [this Github ticket](https://github.com/rkh/async-rack/issues/5)).
-
-* Java Servlets: container calls the application as a method, passing a
- callback-bearing object as a parameter. The container commits and closes
- the response as soon as the application method returns. Asynchronous apps
- can use a standard API that operates by _re-invoking_ the servlet method as
- needed.
-
-* What does .NET do?
-
-vs
-
-* ExpressJS: container calls the application as a function, passing a
- callback-bearing object as a parameter. The application is responsible for
- indicating that the response is complete.
-
-## Synchronous web containers are bad API design
-
-* Make the easy parts easy (this works)
-
-* Make the hard parts possible (OH SHIT)
-
-## Writing synchronous adapters for async APIs is easy
-
-    def adapter(request, response_callback):
-        # synchronous_entry_point is the app's ordinary, blocking handler;
-        # run it to completion, then hand the finished response to the
-        # container's callback.
-        synchronous_response = synchronous_entry_point(request)
-        return response_callback(synchronous_response)
-
-Going the other way is more or less impossible, which is why websocket
-support, HTML5 server-sent event support, and every other async tool for the
-web has an awful server interface.
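-
-To make the cost concrete, here's a sketch of the best you can do when a
-synchronous container insists on a single return value (hypothetical names,
-not any particular framework): park the calling thread until the async core
-calls back, and give up on streaming entirely.
-
-    import threading
-
-    def asynchronous_entry_point(request, respond):
-        # Stand-in for the app's async core; a real one would hand off to
-        # an event loop and invoke `respond` some time later.
-        respond("hello, " + request)
-
-    def synchronous_adapter(request):
-        # The container wants one complete response from this call, so the
-        # calling thread parks until the async core calls back -- one
-        # blocked thread per in-flight request. A streaming response
-        # (websockets, server-sent events) has nowhere to go at all.
-        done = threading.Event()
-        result = {}
-
-        def on_response(response):
-            result["response"] = response
-            done.set()
-
-        asynchronous_entry_point(request, on_response)
-        done.wait()
-        return result["response"]
-
-    print(synchronous_adapter("world"))   # "hello, world"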
diff --git a/wiki/dev/trackers-from-first-principles.md b/wiki/dev/trackers-from-first-principles.md
deleted file mode 100644
index d7c7a4c..0000000
--- a/wiki/dev/trackers-from-first-principles.md
+++ /dev/null
@@ -1,219 +0,0 @@
-# Bugs, Tasks, and Tickets from First Principles
-
-Why do we track tasks?
-
-* To communicate about what should, will, has, and will not be done.
- * Consequently, to either build consensus on what to do next or to dictate
- it.
-* To measure and communicate progress.
-* To preserve information for future use.
- * Otherwise we'd just remember it in our heads.
- * Wishlist tasks are not a bad thing!
-
-Bugs/defects are a kind of task but not the only kind. Most teams have a “bug
-tracker” that contains a lot more than bugs. Let's not let bugs dictate the
-system.
-
-* Therefore, “steps to reproduce” should not be a required datum.
-
-Bugs are an _important_ kind of task.
-
-Tasks can be related to software development artifacts: commits, versions,
-builds, releases.
-
-* A task may only be complete as of certain commits/releases/builds.
-* A task may only be valid after (or before) certain commits/releases/builds.
-
-Communication loosely implies publishing. Tracking doesn't, but may rely on
-the publishing of other facts.
-
-## Core Data
-
-Tasks are only useful if they're actionable. To be actionable, they must be
-understood. Understanding requires communication and documentation.
-
-* A protocol-agnostic _name_, for easily identifying a task in related
- conversations.
- * These names need to be _short_ since they're used conversationally. Long
- issue names will be shortened by convention whether the tracker supports
- it or not.
-* An actionable _description_ of the task.
- * Frequently, a short _summary_ of the task, to ease bulk task
- manipulation. Think of the difference between an email subject and an
- email body.
-* A _discussion_, consisting of _remarks_ or _comments_, to track the evolving
- understanding alongside the task.
-
-See [speciation](#speciation), below.
-
-## Responsibility and Ownership
-
-Regardless of whether your team operates with a top-down, command-oriented
-management structure or with a more self-directed and anarchistic process, for
-every task, there is notionally one person currently responsible for ensuring
-that the task is completed.
-
-That relationship can change over time; how it does so is probably
-team-specific.
-
-There may be other people _involved_ in a task who are not _responsible_ for
-it, in a number of roles. Just because I developed the code for a feature
-does not mean I am necessarily responsible for the feature any more, but it
-might be useful to have a “developed by” list for the feature's task.
-
-Ways of identifying people:
-
-* Natural-language names (“Gianna Grady”)
-* Email addresses
-* Login names
-* Distinguished names in some directory
-* URLs
-
-Task responsibility relationships reflect real-world responsibility, and help
-communicate it, but do not normally define it.
-
-## Workflow
-
-“Workflow” describes both the implications of the states a task can be in and
-the implications of the transitions between states. Most task trackers are, at
-their core, workflow engines of varying sophistication.
-
-Why:
-
-* Improve shared understanding of how tracked tasks are performed.
-* Provide clear hand-off points when responsibility shifts.
-* Provide insight into which tasks need what kinds of attention.
-* Integration points for other behaviour.
-
-States are implicitly time-bounded, and joined to their predecessor and
-successor states by transitions.
-
-Task state is decoupled from the real world: the task in a tracker is not the
-work it describes.
-
-Elemental states:
-
-* “Open”: in this state, the task has not yet been completed. Work may or may
- not be ongoing.
-* “Completed”: in this state, all work on a task has been completed.
-* “Abandoned”: in this state, no further work on a task will be performed, but
- the task has not been completed.
-
-Most real-world workflows introduce some intermediate states that tie into
-process-related handoffs.
-
-For software, I see these divisions, in various combinations, frequently:
-
-* “Open”:
- * “Unverified”: further work needs to be done to decide whether the task
- should be completed.
- * “In Development”: someone is working on the code and asset changes
- necessary to complete the task. This occasionally subsumes preliminary
- work, too.
- * “In Testing”: code and asset changes are ostensibly complete,
- but need testing to validate that the task has been completed
-    satisfactorily.
-* “Completed”:
- * “Development Completed”: work (and possibly testing) has been completed
- but the task's results are not yet available to external users.
- * “Released”: work has been completed, and external users can see and use
- the results.
-* “Abandoned”:
- * “Cannot Reproduce”: common in bug/defect tasks, to indicate that the
- task doesn't contain enough information to render the bug fixable.
- * “Won't Complete”: the task is well-understood and theoretically
- completable, but will not be completed.
- * “Duplicate”: the task is identical to, or closely related to, some other
- task, such that completing either would be equivalent to completing
- both.
- * “Invalid”: the task isn't relevant, is incompletely described, doesn't
- make sense, or is otherwise not appropriate work for the team using the
- tracker.
-
-None of these are universal.
-
-Transitions show how a task moves from state to state.
-
-* Driven by external factors (dev work leads to tasks being marked completed)
- * Explicit transitions: “mark this task as completed”
- * Implicit transitions: “This commit also completes these tasks”
-* Drive external factors (tasks marked completed are emailed to testers)
-
-States implicitly describe a _belief_ or a _desire_ about the future of the
-task, which is a human artifact and may be wrong or overly hopeful. Tasks can
-transition to “Completed” or “Abandoned” states when the work hasn't actually
-been completed or abandoned, or from “Completed” or “Abandoned” to an “Open”
-state to note that the work isn't as done as we thought it was. _This is a
-feature_ and trackers that assume every transition is definitely true and
-final encourage ugly workarounds like duplicating tickets to reopen them.
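-
-Pulling the states and transitions above into a toy sketch (hypothetical
-task names; Python dictionaries standing in for a real workflow engine):
-
-    # States, legal transitions, and a transition function. A real tracker
-    # attaches behaviour (notifications, required fields) to transitions.
-    TRANSITIONS = {
-        "Unverified": {"In Development", "Won't Complete", "Invalid"},
-        "In Development": {"In Testing", "Unverified"},
-        "In Testing": {"Development Completed", "In Development"},
-        "Development Completed": {"Released"},
-        "Released": {"In Development"},  # reopening is a feature
-    }
-
-    def transition(task, new_state):
-        allowed = TRANSITIONS.get(task["state"], set())
-        if new_state not in allowed:
-            raise ValueError(task["state"] + " -> " + new_state + " not allowed")
-        task["state"] = new_state
-        return task
-
-    task = {"name": "GAME-42", "state": "Unverified"}
-    transition(task, "In Development")  # fine
-    # transition(task, "Released")      # raises ValueError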
-
-## Speciation
-
-I mentioned above that bugs are a kind of task. The ways in which bugs are
-“different” is interesting:
-
-* Good bugs have a well-defined reproduction case - steps you can follow to
- demonstrate and test them.
-* Good bugs have a well-described expected behaviour.
-* Good bugs have a well-described actual behaviour.
-
-Being able to support this kind of highly detailed speciation of task types
-without either bloating the tracker with extension points (JIRA) or
-shoehorning all features into every task type (Redmine) is hard, but
-necessary.
-
-Supporting structure helps if it leads to more interesting or efficient ways
-of using tasks to drive and understand work.
-
-Bugs are not the only “special” kind of task:
-
-* “Feature” tasks show up frequently, and speciate on having room for
- describing specs and scope.
-* “Support ticket” tasks show up in a few trackers, and speciate dramatically
- as they tend to be tasks describing the work of a single incident rather
- than tasks describing the work on some shared aspect, so they tend to pick
- up fields for relating tickets to the involved parties. (Arguably, incident
- tickets have needs so drastically different that you should use a dedicated
- incident-management tool, not a task/bug tracker.)
-
-Other kinds are possible, and you've probably seen them in the wild.
-
-Ideally, speciation happens to support _widespread_ specialized needs. Bug
-repro is a good example; every task whose goal is to fix a defect should
-include a clear understanding of the defect, both to allow it to be fixed and
-to allow it to be tested. Adding specialized data for bugs supports that by
-encouraging clearer, more structured descriptions of the defect (with implicit
-“fix this” as the task).
-
-## Implementation notes
-
-If we reduce task tracking to “record changes to fields and record discussion
-comments, on a per task basis,” we can describe the current state of a ticket
-using the “most recent” values of each field and the aggregate of all recorded
-comments. This can be done ~2 ways:
-
-1. “Centralized” tracking, where each task has a single, total order of
- changes. Changes are mediated through a centralized service.
-2. “Decentralized” tracking, where each task has only a partial order over the
- history of changes. Changes are mediated by sharing sets of changes, and by
- appending “reconciliation” changes to resolve cases where two incomparable
- changes modify the same field/s. The most obvious partial order is a
- digraph.
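-
-A minimal sketch of that reduction, for the centralized case (one total
-order of changes per task, hypothetical field names):
-
-    # Each entry is either a field change or a comment, in commit order.
-    changes = [
-        {"field": "state", "value": "Open"},
-        {"comment": "Repro: place a wager while logged out."},
-        {"field": "assignee", "value": "gianna"},
-        {"field": "state", "value": "In Development"},
-    ]
-
-    def current_view(changes):
-        fields, comments = {}, []
-        for change in changes:
-            if "comment" in change:
-                comments.append(change["comment"])
-            else:
-                fields[change["field"]] = change["value"]  # last write wins
-        return fields, comments
-
-    print(current_view(changes))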
-
-Centralized tracking is a well-solved problem. Decentralized tracking so far
-seems to rely heavily on DSCM tools (Git, Mercurial, Fossil) for resolving
-conflicts.
-
-The “work offline” aspect of a distributed tracker is less interesting in as
-much as task tracking is a communications tool. Certain kinds of changes
-should be published and communicated as early as possible so as to avoid
-misunderstandings or duplicated work.
-
-Being able to separate the mechanism of how changes to tasks are recorded from
-the policy of which library of tasks is “canonical” is potentially useful as
-an editorial tool and for progressive publication to wider audiences as work
-progresses.
-
-Issue tracking is considerably more amenable to append-only implementations
-than SCM is, even if you dislike history-editing SCM workflows. This suggests
-that Git is a poor choice of issue-tracking storage backend...
diff --git a/wiki/dev/twigs.md b/wiki/dev/twigs.md
deleted file mode 100644
index c3c7505..0000000
--- a/wiki/dev/twigs.md
+++ /dev/null
@@ -1,24 +0,0 @@
-# Branches and Twigs
-
-## Twigs
-
-* Relatively short-lived
-* Share the commit policy of their parent branch
-* Gain little value from global names
-* Examples: most “topic branches” are twigs
-
-## Branches
-
-* Relatively long-lived
-* Correspond to differences in commit policy
-* Gain lots of value from global names
-* Examples: git-flow 'master', 'develop', &c.; hg 'stable' vs 'default';
- release branches
-
-## Commit policy
-
-* Decisions like “should every commit pass tests?” and “is rewriting or
- deleting a commit acceptable?” are, collectively, the policy of a branch
-* Can be very formal or even tool-enforced, or ad-hoc and fluid
-* Shared understanding of commit policy helps get everyone's expectations
- lined up, easing other SCM-mediated conversations
diff --git a/wiki/dev/webapp-versions.md b/wiki/dev/webapp-versions.md
deleted file mode 100644
index ce800e9..0000000
--- a/wiki/dev/webapp-versions.md
+++ /dev/null
@@ -1,27 +0,0 @@
-# Semver Is Wrong For Web Applications
-
-[Semantic Versioning](http://semver.org) (“Semver”) is a great idea, not least
-because it's more of a codification of existing practice than a totally novel
-approach to versioning. However, I think it's wrong for web applications.
-
-Modern web applications tend to be either totally stagnant - in which case
-versioning is irrelevant - or continuously upgraded. Users have no, or very
-little, choice as to which version to run: either they run the version currently
-on the site, or no version at all. Without the flexibility to choose to run a
-specific version, Semver's categorization of versions by what compatibility
-guarantees they offer is at best misleading and at worst irrelevant and
-insulting.
-
-Web applications must still be _versioned_; internal users and operators must be
-able to trace behavioural changes through to deployments and backwards from
-there to [code changes](commit-messages). The continuous and incremental nature
-of most web development suggests that a simple, ordered version identifier may
-be more appropriate: a [build](builds) serial number, or a version _date_, or
-otherwise.
-
-There are _parts_ of web applications that should be semantically versioned: as
-the Semver spec says, “Once you identify your public API, you communicate
-changes to it with specific increments to your version number,” and this remains
-true on the web: whether you choose to support multiple API versions
-simultaneously, or to discard all but the latest API version, a semantic version
-number can be a helpful communication tool _about that API_.
diff --git a/wiki/dev/webapps.md b/wiki/dev/webapps.md
deleted file mode 100644
index c4d99aa..0000000
--- a/wiki/dev/webapps.md
+++ /dev/null
@@ -1,5 +0,0 @@
-# Webapps From The Ground Up
-
-What does a web application do? It sequences side effects and computation. (This should sound familiar: it's what _every_ program does.)
-
-Modern web frameworks do their level best to hide this from you, encouraging code to freely intermix computation, data access, event publishing, logging, responses, _asynchronous_ responses, and the rest. This will damn you to an eternity of debugging.
diff --git a/wiki/dev/webpack.md b/wiki/dev/webpack.md
deleted file mode 100644
index 003152d..0000000
--- a/wiki/dev/webpack.md
+++ /dev/null
@@ -1,236 +0,0 @@
-# A Compiler For The Web
-
-“Compilation” - the translation of code from one language into another - is the manufacturing step of software development. During compilation, the source code, which is written with a human reader in mind and which uses human-friendly abstractions, becomes something the machine can execute. It is during this manufacturing step that a specific design (the application's source code) is realized in a form that can be delivered to users (or rather, executed by their browsers).
-
-Historically, Javascript has had no compilation process. Design and manufacturing were a single process: the browser environment allows developers to write scripts exactly as they'll be delivered by the browser, with no intervening steps. That's a useful property: most notably, it enables the “edit, save, and reload” iteration process that's so popular and so pleasant to work with. However, Javascript's target environment has a few weaknesses that limit the scale of the project you can write this way:
-
-* There's no built-in way to do modular development. All code shares a single, global namespace, and all dependencies have to be resolved and loaded - in the right order - by the developer. If you include third-party code in your project, as a developer you have to obtain that code from somewhere, and insert that into the page. You have to constantly evaluate the tradeoffs between the convenience of third-party content delivery networks versus the reliability of including third-party code directly in your app's files as-is versus the performance of concatenating it and minifying it into your main script.
-
-* Javascript as a language evolves much faster than browsers do. (Given the break-neck pace of browser evolution, that's really saying something.) Programs written using newer Javascript features, such as the `import` statement (see above) or the compact arrow notation for function literals, require some level of translation before a browser can make sense of the code. Developers targeting the browser directly must balance the convenience offered by new language features against the operational complexity of the translation process.
-
-Historically, the Javascript community has been fairly reluctant to move away from the rapid iteration process provided by the native Javascript ecosystem in the browser. In the last few years, web application development has reached a stage of maturity where those two problems have much more influence over culture and decision-making than they had in the past, and that attitude has started to change: we've seen the rise of numerous Javascript translators (compilers, by another name), and frameworks for executing those translators in a repeatable, reproducible way.
-
-# An Aside About Metaphors
-
-Physical manufacturing processes tend to have cost structures where the design step is, unit-wise, expensive, but happens once, while manufacturing is unit-wise quite cheap, but happens endlessly often over the life of the product. Software manufacturing processes are deeply weird by comparison. In software, the design step is, unit-wise, _even more_ expensive, and it happens repeatedly to what is notionally the same product, over most of its life, while the manufacturing step happens a single time, for so little cost that it's rarely worth accounting for.
-
-It's taken a long time to teach manufacturing-trained business people to stop treating development - the design step - like a manufacturing step, but we're finally getting there. Unfortunately, unlike physical manufacturing, software manufacturing is so highly automated that it produces no jobs, even though it's complex enough to support an entire ecosystem of sophisticated, high-quality tools. A software “factory,” for all intents and purposes, operates for free.
-
-# Webpack
-
-Webpack is a [compiler system](https://www.destroyallsoftware.com/talks/the-birth-and-death-of-javascript) for the web.
-
-Webpack's compilation process ingests human-friendly source code in a number of languages: primarily Javascript, but in principle any language that can be run by _some_ service the browser provides, including CSS, images, text, and markup. With the help of extensions, it can even ingest things the browser _can't_ serve, such as ES2015 Javascript, or Sass files. It emits, as a target, “bundles” of code which can be loaded using the native tools provided by the browser platform: script tags, stylesheet links, and so on.
-
-It provides, out of the box, solutions to the two core problems of browser development. Webpack provides a lightweight, non-novel module system to allow developers to write applications as a system of modules with well-defined interfaces, even though the browser environment does not have a module loader. Webpack also provides a system of “loaders” which can apply transformations to the input code, which can include the replacement of novel language features with their more-complex equivalents in the browser.
-
-Webpack differentiates itself from its predecessors in a few key ways:
-
-* It targets the whole browser runtime, rather than Javascript specifically. This allows it to include non-Javascript resources, such as stylesheets, in a coherent and consistent way; having a single tool that processes all of your source assets drastically reduces the complexity costs developers have to spend to maintain their asset processing system.
-
-  Targeting the browser as a whole also allows Webpack to offer some fairly sophisticated features. Code splitting, for example, allows developers to partition their code so that rarely-used sections are only loaded when actually needed to handle a situation.
-
-* Webpack's output format is, by default, extremely readable and easy to diagnose. The correspondences between source code and the running application are clear, which allows defects found in the running application to be addressed in the original code without introducing extra effort to work backwards from Webpack's output. (It also handles source maps quite well.)
-
-* Webpack's hooks into the application's source code are straightforward and non-novel. Webpack can ingest sources written using any of three pre-existing Javascript module systems - AMD, CommonJS, and UMD - without serious (or, often, any) changes. Where Webpack offers novel features, it offers them as unobtrusive extensions of existing ideas, rather than inventing new systems from scratch.
-
-* Finally, Webpack's human factors are quite good. The Webpack authors clearly understand the value of the human element; the configuration format is rich without being overly complex, and the watch system works well to keep the “edit, save, and reload” workflow functional and fast while adding a compile step to the Javascript development process.
-
-Webpack is not without tradeoffs, however.
-
-* Webpack's design makes it difficult to link external modules without copying them into the final application. While a classic Javascript app can, for example, reuse a library like jQuery from a CDN, a Webpack application effectively must contain its own copy of that library. There are workarounds for this, such as presuming that the `$` global will be available even without an appropriate `require`, but they're awkward to work with and difficult to reason about in larger codebases.
-
-* The module abstraction can hide a really amazing amount of [code bloat](http://idlewords.com/talks/website_obesity.htm) from developers, and Webpack doesn't provide much tooling for diagnosing or eliminating that bloat. For example, on a personal project, adding `var _ = require('lodash')` to my app caused the Webpack output to grow by a whopping half a megabyte. Surprise!
-
- Worse, given the proliferation of modules on NPM (which are almost all installable via Webpack), an app using a higher-level framework and a few third-party libraries is almost certain to contain multiple modules with overlapping capabilities or even overlapping APIs. When you have to vet every module by hand, this problem becomes apparent to the developer very quickly, but when it's handled automatically, it's very easy for module sets to grow staggeringly large.
-
-* Webpack doesn't eliminate modules during compilation. Instead, it injects a small module loader into your app (the “runtime”, by analogy with the runtime libraries for other languages) to stitch your modules together inside the browser. This code is generated at compile time, and can contain quite a bit of logic if you use the right plugins. In most cases, the cost of sending the Webpack runtime to your users is small, but it's worth being aware of.
-
-* Finally, Webpack's configuration system is behaviour-oriented rather than process-oriented, which gives it a very rigid structure. Most of the exceptions from its canned process are either buried in loaders or provided by plugins, so the plugin system ends up acting as a way to wedge arbitrary complexity back in after Webpack's core designed it out.
-
-On the balance, I've been very impressed with Webpack, and have found it to be a pretty effective way to work with browser applications. If you're not using something like Ember that comes with a pre-baked toolkit, then you can probably improve your week by using Webpack to build your Javascript apps.
-
-# Tiny Decisions
-
-To give a sense of what using Webpack is like, here's my current `webpack.config.js`, annotated with the decisions I've made so far and some of the rationales behind them.
-
-This setup allows me to run `webpack` on the CLI to compile my sources into a working app, or `webpack --watch` to leave Webpack running to recompile my app for me as I make changes to the sources. The application is written using the React framework, and uses both React's JSX syntax for components and many ES2015 language features that are unavailable in the browser. It also uses some APIs that are available in some browsers but not in others, and includes polyfills for those interfaces.
-
-You can see the un-annotated file [on Github](https://github.com/ojacobson/webpack-starter/blob/9722f2c873a956ad527947db49bbbe8ecdb4606c/webpack.config.js).
-
- 'use strict'
-
- var path = require('path')
- var keys = require('lodash.keys')
-
-I want to call this `require` out - I've used a similar pattern in my actual app code. Lodash, specifically, has capability bundles that are much smaller than the full Lodash codebase. Using `var _ = require('lodash')` grows the bundle by 500kb or so, while this only adds about 30kb.
-
- var webpack = require('webpack')
- var HtmlWebpackPlugin = require('html-webpack-plugin')
- var ExtractTextPlugin = require("extract-text-webpack-plugin")
-
- var thisPackage = require('./package.json')
-
-We'll see where all of these requires get used later on.
-
- module.exports = {
- entry: {
- app: ['app.less', 'app'],
- vendor: keys(thisPackage.dependencies),
- },
-
-Make two bundles:
-
-* One for application code and stylesheets.
-
-* One for “vendor” code, computed from `package.json`, so that app changes don't _always_ force every client to re-download all of React + Lodash + yada yada. In `package.json`, the `dependencies` key holds only dependencies that should appear in the vendor bundle. All other deps appear in `devDependencies`, instead. Subverting the dependency conventions like this lets me specify the vendor bundle exactly once, rather than having to duplicate part of the dependency list here in `webpack.config.js`.
-
- Because the dependencies are listed as entry point scripts, they will always be run when Webpack loads `vendor.[hash].js`. This makes the vendor bundle an appropriate place both for `require()`able modules and for polyfills that operate through side effects on `window` or other global objects.
-
-This config also invents a third bundle, below. I'll talk about that when I get there.
-
-A lot of this bundle structure is motivated by the gargantuan size of the libraries I'm using. The vendor bundle is approximately two megabytes in my real app, and includes not just React but a number of supporting libraries. Reusing the vendor bundle between versions helps cut down on the number of times users have to download all of that code. I need to address this, but being conscious of browser caching behaviours helps for now.
-
- resolve: {
- root: [
- path.resolve("src"),
- ],
-
-Some project layout:
-
-* `PROJECT/src`: Input files for Webpack compilation.
-
-All inputs go into a single directory, to simplify Webpack file lookups. Separating inputs by type (`js`, `jsx`, `less`, etc) would be consistent with other tools, but makes operating Webpack much more complicated.
-
- // Automatically resolve JSX modules, like JS modules.
- extensions: ["", ".webpack.js", ".web.js", ".js", ".jsx"],
- },
-
-This is a React app, so I've added `.jsx` to the list of default suffixes. This allows constructs like `var MyComponent = require('MyComponent')` to behave as developers expect, without requiring the consuming developer to keep track of which language `MyComponent` was written in.
-
-I could also have addressed this by treating all `.js` files as JSX sources. This felt like a worse option; the JSX preprocessing step _looks_ safe on pure-JS sources, but why worry about it when you can be explicit about which parser to use?
-
- output: {
- path: path.resolve("dist/bundle"),
- publicPath: "/bundle/",
-
-More project layout:
-
-* `PROJECT/dist`: the content root of the web app. Files in `/dist` are expected to be served by a web server or placed in a content delivery network, at the root path of the host.
-
- * `PROJECT/dist/bundle`: Bundled Webpack outputs for the app. A separate directory makes it easier to set Webpack-specific rules in web servers, which we exploit later in this configuration.
-
-I've set `publicPath` so that dynamically-loaded chunks (if you use `require.ensure`, for example) end up with the right URLs.
-
- filename: "[name].[chunkhash].js",
-
-Include a stable version hash in the name of each output file, so that we can safely set `Cache-Control` headers to have browsers store JS and stylesheets for a long time, while maintaining the ability to redeploy the app and see our changes in a timely fashion. Setting a long cache expiry for these means that the user only pays the transfer costs (power, bandwidth) for the bundles on the first pageview after each deployment, or after their browser cache forgets the site.
-
-For each bundle, so long as the contents of that bundle don't change, neither will the hash. Since we split vendor code into its own chunk, _often_ the vendor bundle will end up with the same hash even in different versions of the app, further cutting down the number of times the user has to download the (again, massive) dependencies.
-
- },
-
- module: {
- loaders: [
- {
- test: /\.js$/,
- exclude: /node_modules/,
- loader: "babel",
- query: {
- presets: ['es2015'],
- plugins: ['transform-object-rest-spread'],
- },
- },
-
-You don't need this if you don't want it, but I've found ES2015 to be a fairly reasonable improvement over Javascript. Using an exclude, we treat _local_ JS files as ES2015 files, translating them with Babel before including them in the bundle; I leave modules included from third-party dependencies alone, because I have no idea whether I should trust Babel to do the right thing with someone else's code, or whether it already did the right thing.
-
-I've added `transform-object-rest-spread` because the app I'm working on makes extensive use of `return {...state, modified: field}` constructs, and that syntax is way easier to work with than the equivalent `return Object.assign({}, state, {modified: field})`.
-
- {
- test: /\.jsx$/,
- exclude: /node_modules/,
- loader: "babel",
- query: {
- presets: ['react', 'es2015'],
- plugins: ['transform-object-rest-spread'],
- },
- },
-
-Do the same for _local_ `.jsx` files, but additionally parse them using Babel's React driver, to translate `<SomeComponent />` into appropriate React calls. Once again, leave the parsing of third-party code alone.
-
- {
- test: /\.less$/,
- exclude: /node_modules/,
- loader: ExtractTextPlugin.extract("css?sourceMap!less?sourceMap"),
- },
-
-Compile `.less` files using `less-loader` and `css-loader`, preserving source maps. Then feed them to a plugin whose job is to generate a separate `.css` file, so that they can be loaded by a `<link>` tag in the HTML document. The other alternative, `style-loader`, relies on DOM manipulation at runtime to load stylesheets, which both prevents it from parallelizing with script loading and causes some additional DOM churn.
-
-We'll see where `ExtractTextPlugin` actually puts the compiled stylesheets later on.
-
- ],
- },
-
- plugins: [
- new webpack.optimize.OccurrenceOrderPlugin(/* preferEntry=*/true),
-
-This plugin causes webpack to order bundled modules such that the most frequently used modules have the shortest identifiers (lexically; 9 is shorter than 10 but the same length as 2) in the resulting bundle. The order is semantically irrelevant, but making it predictable helps keep the vendor bundle's contents stable from one build to the next.
-
- new webpack.optimize.CommonsChunkPlugin({
- name: 'vendor',
- minChunks: Infinity,
- }),
-
-Move all the modules the `vendor` bundle depends on into the `vendor` bundle, even if they would otherwise be placed in the `app` bundle. (Trust me: this is a thing. Webpack's algorithm for locating modules is surprising, but consistent.)
-
- new webpack.optimize.CommonsChunkPlugin({
- name: 'boot',
- chunks: ['vendor'],
- }),
-
-Hoo boy. This one's tricky to explain, and doesn't work very well regardless.
-
-The facts:
-
-1. This creates the third bundle (“boot.[chunkhash].js”) I mentioned above, and makes the contents of the `vendor` bundle “children” of it.
-
-2. This plugin will also put the runtime code, which includes both its module loader (which is the same from build to build) and a table of bundle hashes (which is not, unless the bundles are the same), in the root-most bundle.
-
-3. I really don't want the hash of the `vendor` bundle changing without a good reason, because the `vendor` bundle is grotesquely bloated.
-
-This code effectively moves the Webpack runtime to its own bundle, which loads quickly (it's only a couple of kilobytes long). This bundle's hash changes on nearly every build, so it doesn't get reused between releases, but by moving that change to this tiny bundle, we get to reuse the vendor bundle as-is between releases a lot more often.
-
-Unfortunately, code changes in the app bundle _can_ cause the vendor bundle's constituent modules to be reordered or renumbered, so it's not perfect: sometimes the `vendor` bundle's hash changes between versions even though it contains an identical module list with different identifiers. So it goes: the right fix here is probably to shrink the bundle and to re-merge it into the `app` bundle.
-
- new ExtractTextPlugin("[name].[contenthash].css"),
-
-Emit collected stylesheets into a separate bundle, named after the entry point. Since the only entry point with stylesheets is the `app` entry point, this creates `app.[hash].css` in the `dist/bundle` directory, right next to `app.[hash].js`.
-
- new HtmlWebpackPlugin({
- // put index.html outside the bundle/ subdir
- filename: '../index.html',
- template: 'src/index.html',
- chunksSortMode: 'dependency',
- }),
-
-Generate the entry point page from a template (`PROJECT/src/index.html`), rather than writing it entirely by hand.
-
-You may have noticed that _all four_ of the bundles generated by this build have filenames that include generated chunk hashes. This plugin generates the correct `<script>` tags and `<link>` tags to load those bundles and places them in `dist/index.html`, so that I don't have to manually correct the index page every time I rebuild the app.
-
- ],
-
- devtool: '#source-map',
-
-Make it possible to run browser debuggers against the bundled code as if it were against the original, unbundled module sources. This generates the source maps as separate files and annotates the bundle with a link to them, so that the (bulky) source maps are only downloaded when a user actually opens the debugger. (Thanks, browser authors! That's a nice touch.)
-
-The source maps contain the original, unmodified code, so that the browser doesn't need to have access to a source tree to make sense of them. I don't care if someone sees my sources, since the same someone can already see the code inside the webpack bundles.
-
- }
-
-Things yet to do:
-
-* Webpack 2's “Tree Shaking” mode exploits the static nature of ES2015 `import` statements to fully eliminate unused symbols from ES2015-style modules. This could potentially cut out a lot of the code in the `vendor` bundle.
-
-* [Sean Larkin](https://twitter.com/TheLarkInn) suggests setting `recordsPath` at the top level of the Webpack config object to pin chunk IDs between runs. This works! Unfortunately, some plugins cause the records file to grow every time you run `webpack`, regardless of any changes to the output. This is, obviously, not great.
-
-* A quick primer on React server-side rendering. I know this is a Webpack primer, and not a React primer, but React-in-the-wild often relies on Webpack.
diff --git a/wiki/dev/whats-wrong-with-jenkins.md b/wiki/dev/whats-wrong-with-jenkins.md
deleted file mode 100644
index 4224eb7..0000000
--- a/wiki/dev/whats-wrong-with-jenkins.md
+++ /dev/null
@@ -1,108 +0,0 @@
-# Something's Rotten in the State of Jenkins
-
-Automated, repeatable testing is a fairly widely-accepted cornerstone of
-mature software development. Jenkins (and its predecessor, Hudson) has the
-unique privilege of being both an early player in the niche and
-free-as-in-beer. The blog space is littered with interesting articles about
-continuous builds, automated testing, and continuous deployment, all of which
-conclude on “how do we make Jenkins do it?”
-
-This is unfortunate, because Jenkins has some serious problems, and I want it
-to stop informing the discussion.
-
-## There's A Plugin For That
-
-Almost everything in the following can be addressed using one or more plugins
-from Jenkins' extensive plugin repository. That's good - a build system you
-can't extend is kind of screwed - but it also means that the Jenkins team
-haven't felt a lot of pressure to address key problems in Jenkins proper.
-
-(Plus, the plugin ecosystem is its own kind of screwed. More on that later.)
-
-To be clear: being able to fix it with plugins does not make Jenkins itself
-_good_. Plugins are a non-response to fundamental problems with Jenkins.
-
-## No Granularity
-
-Jenkins builds are atomic: they either pass en suite, or fail en suite. Jenkins has no built-in support for recording that basic compilation succeeded, unit tests failed, but linting also succeeded.
-
-You can fix this by running more builds, but then you run into problems with
-...
-
-## No Gating
-
-... the inability to wait for multiple upstream jobs before continuing a
-downstream job in a job chain. If your notional build pipeline is
-
-1. Compile, then
-2. Lint and unit test, then
-3. Publish binaries for testers/users
-
-then you need to combine the lint and unit test steps into a single build, or
-tolerate occasionally publishing between zero and two copies of the same
-original source tree.
-
-## No Pipeline
-
-The above are actually symptomatic of a more fundamental design problem in
-Jenkins: there's no build pipeline. Jenkins is a task runner: triggers cause
-tasks to run, which can cause further triggers. (Without plugins, Jenkins
-can't even ensure that chains of jobs all build the same revisions from
-source control.)
-
-I haven't met many projects whose build process was so simple you could treat
-it as a single, pass-fail task, whose results are only interesting if the
-whole thing succeeds.
-
-## Plugin the Gap
-
-To build a functional, non-trivial build process on top of Jenkins, you will
-inevitably need plugins: plugins for source control, plugins for
-notification, plugins for managing build steps, plugins for managing various
-language runtimes, you name it.
-
-The plugin ecosystem is run on an entirely volunteer basis, and anyone can
-get a new plugin into the official plugin registry. This is good, in as much
-as the barrier to entry _should_ be low and people _should_ be encouraged to
-scratch itches, but it also means that the plugin registry is a swamp of
-sporadically-maintained one-offs with inconsistent interfaces.
-
-(Worse, even some _core_ plugins have serious maintenance deficits: have a
-look at how long
-[JENKINS-20767](https://issues.jenkins-ci.org/browse/JENKINS-20767) was open.
-How many Jenkins users use Git?)
-
-## The Plugin API
-
-The plugin API also, critically, locks Jenkins into some internal design
-problems. The sheer number of plugins, and the sheer number of maintainers,
-effectively prevents any major refactoring of Jenkins from making progress.
-Breaking poorly-maintained plugins inevitably pisses off the users who were,
-quite happily, using whatever they'd cooked up, but with the maintainership
-of plugins so spread out and so sporadic, there's no easy way for the Jenkins
-team to, for example, break up the [4,000-line `Jenkins` class](https://github.com/jenkinsci/jenkins/blob/master/core/src/main/java/jenkins/model/Jenkins.java).
-
-## What Is To Be Done
-
-Jenkins is great and I'm glad it exists. Jenkins moved the state of the art
-for build servers forward very effectively, and successfully out-competed
-more carefully-designed offerings that were not, in fact, better:
-[Continuum](http://continuum.apache.org) is more or less abandoned, and when
-was the last time you saw a
-[CruiseControl](http://cruisecontrol.sourceforge.net) (caution: SourceForge)
-install?
-
-It's interesting to compare the state of usability in, eg., Jenkins, to the
-state of usability in some paid-product build systems
-([Bamboo](https://www.atlassian.com/software/bamboo) and
-[TeamCity](https://www.jetbrains.com/teamcity/) for example) on the above
-points, as well as looking at the growing number of hosted build systems
-([TravisCI](https://travis-ci.org), [MagnumCI](https://magnum-ci.com)) for
-ideas. A number of folks have also written insightful musings on what they
-want to see in the next CI tool: Susan Potter's
-[Carson](https://github.com/mbbx6spp/carson) includes an interesting
-motivating metaphor (if you're going to use butlers, why not use the whole
-butler milieu?) and some good observations on how Jenkins lets us all down,
-for example.
-
-I think it's time to put Jenkins to bed and write its successor.
diff --git a/wiki/dev/why-scm.md b/wiki/dev/why-scm.md
deleted file mode 100644
index 5985982..0000000
--- a/wiki/dev/why-scm.md
+++ /dev/null
@@ -1,73 +0,0 @@
-# Why we use SCM systems
-
-I'm watching a newly-minted co-op student dealing with her first encounter
-with Git, unhelpfully shepherded by a developer to whom everything below is
-already second nature, so deeply that the reasoning is hard to articulate. It
-is not going well.
-
-I have the same problem, and it could be me trying to give someone an intro to
-Git off the top of my head, but it's not, today. For next time, here are my
-thoughts. They have shockingly little to do with Git.
-
-## Assumptions
-
-* You're working on a software project.
-* You know how to read and write code.
-* You're human.
-* You have end users or customers - people other than yourself who care about
- your code.
-* Your project is going to take more than a few minutes to reach end of life.
-
-## The safety net
-
-Having a record of past states and known-good states means that, when (WHEN)
-you write some code that doesn't work, and when (WHEN) you're stumped as to
-why, you can throw your broken code away and get to a working state again. It
-also helps with less-drastic solutions by letting you run comparisons between
-your broken code and working code, which helps narrow down whatever problem
-you've created for yourself.
-
-(Aside: if you're in a shop that “doesn't use source control,” and for
-whatever insane reason you haven't already run screaming, this safety net is a
-good reason to use source control independently of the organization as a
-whole. Go on, it's easy; modern DSCM tools like Mercurial or Git make
-importing “external” trees pretty straightforward. Your future self thanks
-you.)
-
-## Historical record
-
-Having a record of past, released states means you can go back later and
-recover how your project has changed over time. Even if your commit practices
-are terrible, when (WHEN) your users complain that something stopped working a
-few months ago and they never bothered to mention it until now, you have some
-chance of finding out what caused the problem. Better practices around [commit
-messages](commit-messages) and other workflow-related artifacts improve your
-chances of finding out _why_, too.
-
-## Consensus
-
-Every SCM system and every release process is designed to help the humans in
-the loop agree on what, exactly, the software being released looks like and
-whether or not various releasability criteria have been met. It doesn't matter
-if you use rolling releases or carefully curate and tag every release after
-months of discussion, you still need to be able to point to a specific version
-of your project's source code and say “this will be our next release.”
-
-SCM systems can help direct and contextualize that discussion by recording the
-way your project has changed during those discussions, whether that's part of
-development or a separate post-“freeze” release process.
-
-## Proposals and speculative development
-
-Modern SCM systems (other than a handful of dismal early attempts) also help
-you _propose_ and _discuss_ changes. Distributed source control systems make
-this particularly easy, but even centralized systems can support workflows
-that record speculative development in version control. The ability to discuss
-specific changes and diffs, either within a speculative line of development or
-between a proposed feature and the mainline code base, is incredibly powerful.
-
-## The bottom line
-
-It's about the people, not the tools, stupid. Explaining how Git works to
-someone who doesn't have a good grasp on the relationship between source
-control tools and long-term, collaborative software development won't help.