Diffstat (limited to 'wiki')
78 files changed, 0 insertions, 7699 deletions
diff --git a/wiki/12factor/3-config.md b/wiki/12factor/3-config.md deleted file mode 100644 index 5d6c6c6..0000000 --- a/wiki/12factor/3-config.md +++ /dev/null @@ -1,22 +0,0 @@ -# Factor 3: Config
-
-[This section](http://www.12factor.net/config) advises using environment
-variables for everything.
-
-> [Owen J](https://twitter.com/derspiny): I think I disagree with
-> 12factor's conclusions on config even though I agree with the premises
-> and rationale in general
->
-> [Owen J](https://twitter.com/derspiny): environment variables
-> are neither exceptionally portable, exceptionally standard, nor
-> exceptionally easy to manage
->
-> [Owen J](https://twitter.com/derspiny): and therefore should not be
-> the exceptional configuration mechanism :)
->
-> [Kit L](https://twitter.com/wlonk): that's exactly the critique i have
-
-Frustratingly, the config section doesn't provide any guidance on sensible
-ways to _manage_ environment variables. In any real-world deployment, they're
-going to have to be stored somewhere; where's appropriate? `.bash_profile`?
-`httpd.conf` as `SetEnv` directives? Per-release `rc` files? `/etc/init.d`? diff --git a/wiki/12factor/7-port-binding.md b/wiki/12factor/7-port-binding.md deleted file mode 100644 index a756496..0000000 --- a/wiki/12factor/7-port-binding.md +++ /dev/null @@ -1,31 +0,0 @@ -# Factor 7: Port Binding
-
-[This](http://www.12factor.net/port-binding) is the exact point where the
-Heroku-specific features of the approach overwhelm the general features.
-
-Factor 7 is over-specific:
-
-* It presupposes the existence of a front-end routing layer, without providing
-  any insight into how to deploy, configure, provision, or manage one.
-
-* It demands HTTP (by name) rather than a more flexible “any well-standardized
-  protocol,” without explaining why. (Web apps can have non-HTTP internal
-  components.)
-
-* It dismisses the value of “pre-existing” container ecosystems that don't
-  work the way Heroku does. 
Have a giant, well-managed - [Glassfish](http://glassfish.org) cluster that you deploy components to? TOO - BAD, not Heroku-like enough for these guys even though many aspects run - along similar philosophical lines. - -* It dismisses the value of unix-as-a-container. Unix domain sockets with - controlled permissions? Psh, let's go through the network stack instead. - SysV IPC? (Yeah, I know.) Network. Pipes? Network. There's an implicit - exception for “intra-process” communication, but it's never really - identified or reasoned about. - -* Have you _seen_ the kinds of process control interfaces developers invent, - when left to their own devices? Signals and PID files are well-established - conventions, and smart, competent people still fuck those up all the time. - Command-line arguments are another frequent case of NIH stupidity. Do you - really want every app to have its own startup API? diff --git a/wiki/12factor/index.md b/wiki/12factor/index.md deleted file mode 100644 index 6e75732..0000000 --- a/wiki/12factor/index.md +++ /dev/null @@ -1,19 +0,0 @@ -# 12-Factor Apps - -Some folks over at [Heroku](http://heroku.com/) wrote up their perceived best -practices for building “software as a service”-style applications and called -it [The Twelve-Factor App](http://www.12factor.net). It's a good read, and has -lots of good advice in it. - -I have a few thoughts on it. - ------ - -* [III. Config](3-config) -* [VII. Port Binding](7-port-binding) - ------ - -At some point around sections 6 or 7, the goodness of the advice is overtaken -by the “be more like Heroku specifically”-ness of the advice, to the detriment -of their point. 
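For concreteness, the pattern factors 3 and 7 prescribe (config read from environment variables, HTTP exported by binding a port directly) reduces to something like this minimal Python sketch. It's an illustration of the advice, not an endorsement of it; `DATABASE_URL` and `PORT` are the conventional names, and the handler itself is invented for illustration:

```python
# Minimal sketch of the factor 3 + factor 7 pattern: every
# environment-dependent fact comes from os.environ, and the app exports
# HTTP by binding a port itself instead of living inside a container.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        # Factor 3: config comes from the environment, not from code.
        self.wfile.write(os.environ.get("DATABASE_URL", "unset").encode())

def serve():
    # Factor 7: the routing layer tells the app where to listen via $PORT.
    port = int(os.environ.get("PORT", "8000"))
    return HTTPServer(("", port), Handler)

# In production this would run forever:
#     serve().serve_forever()
```

None of which settles the question above: the `PORT` and `DATABASE_URL` values still have to live somewhere, whether that's `.bash_profile`, an init script, or a platform's config store.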
diff --git a/wiki/authnz/users-rolegraph-privs.md b/wiki/authnz/users-rolegraph-privs.md deleted file mode 100644 index fdbf52d..0000000 --- a/wiki/authnz/users-rolegraph-privs.md +++ /dev/null @@ -1,110 +0,0 @@ -# A Users, Roles & Privileges Scheme Using Graphs - -The basic elements: - -* Every agent that can interact with a system is represented by a **user**. -* Every capability the system has is authorized by a distinct **privilege**. -* Each user has a list of zero or more **roles**. - * Roles can **imply** further roles. This relationship is transitive: if - role A implies role B, then a member of role A is a member of role B; if - role B also implies role C, then a member of role A is also a member of - role C. It helps if the resulting role graph is acyclic, but it's not - necessary. - * Roles can **grant** privileges. - -A user's privileges are the union of the privileges granted by the transitive -closure of their roles. - -## In SQL - - create table "user" ( - username varchar - primary key - -- credentials &c - ); - - create table role ( - name varchar - primary key - ); - - create table role_member ( - role varchar - not null - references role, - member varchar - not null - references "user", - primary key (role, member) - ); - - create table role_implies ( - role varchar - not null - references role, - implied_role varchar - not null - ); - - create table privilege ( - privilege varchar - primary key - ); - - create table role_grants ( - role varchar - not null - references role, - privilege varchar - not null - references privilege, - primary key (role, privilege) - ); - -If your database supports recursive CTEs, querying this isn't awful, since we -can have the database do all the graph-walking along roles: - - with recursive user_roles (role) AS ( - select - role - from - role_member - where - member = 'SOME USERNAME' - union - select - implied_role as role - from - user_roles - join role_implies on - user_roles.role = role_implies.role - ) - 
select distinct - role_grants.privilege as privilege - from - user_roles - join role_grants on - user_roles.role = role_grants.role - order by privilege; - -If not, get a better database. Recursive graph walking with network round -trips at each step is stupid and you shouldn't do it. - -Realistic uses should have fairly simple graphs: elemental privileges are -grouped into abstract roles, which are in turn grouped into meaningful roles -(by department, for example), which are in turn granted to users. In -PostgreSQL, the above schema handles ~10k privileges and ~10k roles with -randomly-generated graph relationships in around 100ms on my laptop, which is -pretty slow but not intolerable. Perverse cases (interconnected total -subgraphs, deeply-nested linear graphs) can take absurd time but do not -reflect any likely permissions scheme. - -## What Sucks - -* Graph theory in my authorization system? It's more likely than you think. -* There's no notion of revoking a privilege. If you have a privilege by any - path through your roles, then it cannot be revoked except by removing all of - the paths that lead back to that privilege. -* Not every system has an efficient way to compute these graphs. - * PostgreSQL, as given above, has a hard time with unrealistically-deep - nested roles. diff --git a/wiki/chat/notes.md b/wiki/chat/notes.md deleted file mode 100644 index 84f60f6..0000000 --- a/wiki/chat/notes.md +++ /dev/null @@ -1,39 +0,0 @@ -# Notes towards a Chat Service - -Now: - -* Chat tools divide discussion by "channel"/"room" -* A channel is an undifferentiated sequence of remarks. -* Social dynamics in small channels: don't interrupt the current channel discussion even if you have another discussion to raise that would be within the channel's purpose. - * Conversations are bimodal: short bursts of generally-interesting remarks, or long chains of interrun responses. Not much middle ground. (Think meme channels vs discussion channels.) 
- * Small groups + robots: the robots interrupt things anyways, because they're robots. -* Social dynamics in large channels: it's moving too fast to really track, unless it's the _only_ thing you're doing. - -Slack specifically: - -* Per-social-circle UI modality makes it awkward to engage with multiple discussions at a time unless they all happen in the same place. -* Universally poor respect for consent. -* Pricing/business model issues: - -Instead: - -* A channel is a group of distinct discussions, plus a jumping-off point for new discussions. -* A user viewing a channel sees an overview of the ongoing discussions (maintained automatically or semi-automatically) along with lists of their active participants, and any initial remarks that could lead to a new discussion. -* A user can join an ongoing discussion and see the remarks to date, or duck out of it to see the summary again. -* A user can leave an ongoing discussion to indicate that they no longer expect to participate and may not respond to things said. -* Conversations "age out" of channels after they fall silent. -* Aged out conversations are still visible in archives and in the participants' clients, and necroposting brings them back. - -* New remarks to the channel appear as "prompts." -* Responding to a prompt creates a conversation. -* Prompts age out (quickly) if not responded to. - - - - - -Why: - -* Allow multiple concurrent discussions within the same nominal channel with minimal crosstalk/confusion. -* Insulate conversations from accidental interruptions, while making it easy to intentionally participate. -* Closer model to rooms full of people. diff --git a/wiki/cool-urls-can-change.md b/wiki/cool-urls-can-change.md deleted file mode 100644 index b0c489b..0000000 --- a/wiki/cool-urls-can-change.md +++ /dev/null @@ -1,66 +0,0 @@ -# Cool URLs Do Change (Sometimes) - -Required reading: [Cool URLs don't -change](http://www.w3.org/Provider/Style/URI.html). 
- -When I wrote [Nobody Cares About Your -Build](http://codex.grimoire.ca/2008/09/24/nobody-cares-about-your-build/), I -set up a dedicated publishing platform - Wordpress, as it happens - to host -it, and as part of that process I put some real thought into the choice of -“permalink” schemes to use. I opted to use a “dated” scheme, baking the -publication date of each article into its name - into its URL - for all -eternity. I'm a big believer in the idea that a URL should be a long-term name -for the appropriate bit of data or content, and every part of a dated scheme -“made sense” at the time. - -This turned out to be a mistake. - -The web is not, much, like print media. Something published may be amended; -you don't even have to publish errata or a correction, since you can correct -the original mistake “seamlessly.” This has its good and its -[bad](http://en.wikipedia.org/wiki/Memory_hole) parts, but with judicious use -and [a public history](https://github.com/ojacobson/grimoiredotca), amendment -is more of a win than a loss. However, this plays havoc with the idea of a -“publication” date, even for data that takes the form of an article: is the -publication date the date it was first made public, the date of its most -recent edit, or some other date? - -Because the name - the URL - of an article was set when I first published it, -the date in the name had to be its initial publication date. _This has -actually stopped me from making useful amendments to old articles_ because the -effort of writing a full, free-standing followup article is more than I'm -willing to commit to. Had I left the date out of the URLs, I'd feel more free -to judiciously amend articles in place and include, in the content, a short -amendment summary. 
-
-The W3C's informal suggestions on the subject state that “After the creation
-date, putting any information in the name is asking for trouble one way or
-another.” I'm starting to believe that this doesn't go far enough: _every_
-part of a URL must have some semantic justification for being there, dates
-included:
-
-1. *Each part must be meaningful*. While
-   `http://example.com/WW91IGp1c3QgbG9zdCB0aGUgZ2FtZQ==` is fairly easy to
-   render stable, the meaningless blob renders the name immemorable.
-
-2. *Each part must be stable*. This is where I screwed up worst: I did not
-   anticipate that the “date” of an article could be a fluid thing. It's
-   tempting to privilege the first date, and it's not an unreasonable
-   solution, but it didn't fit how I wanted to address the contents of
-   articles.
-
-Running a web server gives you one namespace to play with. Use it wisely.
-
-## Ok, But I've Already Got These URLs
-
-Thankfully, there's a way out - for _some_ URLs. URLs inherently name
-resources _accessed using some protocol_, and some protocols provide support
-for resources that are, themselves, references to other URLs. HTTP is a good
-example, providing a fairly rich set of responses that all, fundamentally,
-tell a client to check a second URL for the content relevant to a given URL.
-In protocols like this, you can easily replace the content of a URL with a
-reference to its new, “better” URL rather than abandoning it entirely.
-
-Names can evolve organically as the humans that issue them grow a better
-understanding of the problem, and don't always have to be locked in stone from
-the moment they're first used. 
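In HTTP, that replacement is just a `301 Moved Permanently`. A minimal sketch, mapping the real old URL from this article to an invented new path:

```python
# Sketch: retire an old dated URL by answering it with a 301 pointing at
# the new name, instead of letting the old name rot. The old path is the
# real one from this article; the new path is hypothetical.
from http.server import BaseHTTPRequestHandler

MOVED = {
    "/2008/09/24/nobody-cares-about-your-build/":
        "/nobody-cares-about-your-build/",
}

def redirect_for(path):
    """Return the replacement URL for a retired path, or None."""
    return MOVED.get(path)

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        target = redirect_for(self.path)
        if target is None:
            self.send_response(404)
        else:
            # The old name keeps working; clients are told, permanently,
            # where the content now lives.
            self.send_response(301)
            self.send_header("Location", target)
        self.end_headers()
```

Whether the table lives in application code, `httpd.conf` rewrite rules, or a CDN configuration is a deployment detail; the point is that the old name stays meaningful.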
diff --git a/wiki/dev/buffers.md b/wiki/dev/buffers.md deleted file mode 100644 index 62bcad6..0000000 --- a/wiki/dev/buffers.md +++ /dev/null @@ -1,99 +0,0 @@ -# Observations on Buffering - -None of the following is particularly novel, but the reminder has been useful: - -* All buffers exist in one of two states: full (writes outpace reads), or empty - (reads outpace writes). There are no other stable configurations. - -* Throughput on an empty buffer is dominated by the write rate. Throughput on a - full buffer is dominated by the read rate. - -* A full buffer imposes a latency penalty equal to its size in bits, divided by - the read rate in bits per second. An empty buffer imposes (approximately) no - latency penalty. - -The previous three points suggest that **traffic buffers should be measured in -seconds, not in bytes**, and managed accordingly. Less obviously, buffer -management needs to be considerably more sophisticated than the usual "grow -buffer when full, up to some predefined maximum size." - -Point one also implies a rule that I see honoured more in ignorance than in -awareness: **you can't make a full buffer less full by making it bigger**. Size -is not a factor in buffer fullness, only in buffer latency, so adjusting the -size in response to capacity pressure is worse than useless. - -There are only three ways to make a full buffer less full: - -1. Increase the rate at which data exits the buffer. - -2. Slow the rate at which data enters the buffer. - -3. Evict some data from the buffer. - -In actual practice, most full buffers are upstream of some process that's -already going as fast as it can, either because of other design limits or -because of physics. A buffer ahead of disk writing can't drain faster than the -disk can accept data, for example. That leaves options two and three. 
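Both remaining options have direct stdlib analogues. As a toy sketch, a bounded `queue.Queue` produces back-pressure by refusing (or blocking) writes once full, while a `deque` with `maxlen` implements eviction by discarding the oldest entry:

```python
# Toy illustration of the two remaining options for a full buffer.
from collections import deque
import queue

# Option 2: back-pressure. A bounded queue pushes back on the producer
# (by blocking, or here by failing fast) when the consumer falls behind.
bounded = queue.Queue(maxsize=4)
for item in range(4):
    bounded.put(item)
overflowed = False
try:
    bounded.put("one too many", block=False)
except queue.Full:
    overflowed = True  # the producer now knows to slow down

# Option 3: eviction. A deque with maxlen silently drops the oldest
# entry to make room; the surrounding protocol has to tolerate the loss.
evicting = deque(maxlen=4)
for item in range(6):
    evicting.append(item)
# evicting now holds only the 4 newest items: 2, 3, 4, 5
```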
-
-Slowing the rate of arrival usually implies some variety of _back-pressure_ on
-the source of the data, to allow upstream processes to match rates with
-downstream processes. Over-large buffers delay this process by hiding
-back-pressure, and buffer growth will make this problem worse. Often,
-back-pressure can happen automatically: failing to read from a socket, for
-example, will cause the underlying TCP stack to apply back-pressure to the peer
-writing to the socket by delaying TCP-level message acknowledgement. Too often,
-I've seen code attempt to suppress these natural forms of back-pressure without
-replacing them with anything, leading to systems that fail by surprise when
-some other resource – usually memory – runs out.
-
-Eviction relies on the surrounding environment, and must be part of the
-protocol design. Surprisingly, most modern application protocols get very
-unhappy when you throw their data away: the network age has not, sadly, brought
-about protocols and formats particularly well-designed for distribution.
-
-If neither back-pressure nor eviction is available, the remaining option is to
-fail: either to start dropping data unpredictably, or to cease processing data
-entirely as a result of some resource or another running out, or to induce so
-much latency that the data is useless by the time it arrives.
-
------
-
-Some uncategorized thoughts:
-
-* Some buffers exist to trade latency against the overhead of coordination. A
-  small buffer in this role will impose more coordination overhead; a large
-  buffer will impose more latency.
-
-  * These buffers appear where data transits between heterogeneous systems: for
-    example, buffering reads from the network for writes to disk.
-
-  * Mismanaged buffers in this role will tend to cause the system to spend
-    an inordinate proportion of latency and throughput negotiating buffer
-    sizes and message readiness. 
-
-  * A coordination buffer is most useful when _empty_; in the ideal case, the
-    buffer is large enough to absorb one message's worth of data from the
-    source, then pass it along to the sink as quickly as possible.
-
-* Some buffers exist to trade latency against jitter. A small buffer in this
-  role will expose more jitter to the upstream process. A large buffer in this
-  role will impose more latency.
-
-  * These tend to appear in _homogeneous_ systems with differing throughputs,
-    or as a consequence of some other design choice. Store-and-forward
-    switching in networks, for example, implies that switches must buffer at
-    least one full frame of network data.
-
-  * Mismanaged buffers in this role will _amplify_ jitter rather than smooth
-    it out. Apparent throughput will be high until the buffer fills, then
-    change abruptly when full. Upstream processes are likely to throttle
-    down, causing them to under-deliver if the buffer drains, pushing the
-    system back to a high-throughput mode. [This problem gets worse the
-    more buffers are present in a system](http://www.bufferbloat.net).
-
-  * An anti-jitter buffer is most useful when _full_; in exchange for a
-    latency penalty, sudden changes in throughput will be absorbed by data
-    in the buffer rather than propagating through to the source or sink.
-
-* Multimedia people understand this stuff at a deep level. Listen to them when
-  designing buffers for other applications. diff --git a/wiki/dev/builds.md b/wiki/dev/builds.md deleted file mode 100644 index abe3d19..0000000 --- a/wiki/dev/builds.md +++ /dev/null @@ -1,194 +0,0 @@ -# Nobody Cares About Your Build
-
-Every software system, from simple Python packages to huge enterprise-grade
-systems spanning massive clusters, has a build—a set of steps that must be
-followed to go from a source tree or a checked-out project to a ready-to-use
-build product. A build system's job is to automate these steps.
-
-Build systems are critical to software development. 
-
-They're also one of the most common avoidable engineering failures.
-
-A reliable, comfortable build system has measurable benefits for software
-development. Being able to build a testable, deployable system at any point
-during development lets the team test more frequently. Frequent testing
-isolates bugs and integration problems earlier, reducing their impact. Simple,
-working builds allow new team members to ramp up more quickly on a project:
-once they understand how one piece of the system is constructed, they can
-apply that knowledge to the entire system and move on to doing useful work. If
-releases, the points where code is made available outside the development
-team, are done using the same build system that developers use in daily life,
-there will be fewer surprises during releases as the “release” build process
-will be well-understood from development.
-
-## Builds Have Needs, Too
-
-In 1943, Abraham Maslow described a [hierarchy of
-needs](http://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs) for a
-person's physical and mental well-being on the premise that all the items at
-the lowest level of the hierarchy must be met before a person will be able to
-focus usefully on higher-level needs. Maslow's hierarchy begins with a set of
-needs without which you do not have a person (for long)—physiological
-needs like “breathing,” “food,” and “water.” At the peak, there are extremely
-high-level needs that are about being a happy and enlightened
-person—“creativity,” “morality,” “curiosity,” and so on.
-
-
-
-Builds, and software engineering as a whole, can be described the same way: at
-the top of the hierarchy is a working system that solves a problem, and at the
-bottom are the things you need to have software at all. If you don't meet
-needs at a given level, you will eventually be forced to stop what you're
-doing at a higher level and face them. 
- -Before a build is a build, there are five key needs to meet: - -* **It must be repeatable**. Every time you start your build on a given source - tree, it must build exactly the same products without any further - intervention. Without this, you can't reliably decide whether a given build - is “good,” and can easily wind up with a build that needs to be run several - times, or a build that relies on running several commands in the right - order, to produce a build. -* **It must be automatable**. Build systems are used by developers sitting at - their desks, but they’re also used by automatic build systems for nightly - builds and continuous integration, and they can be made into parts of other - builds. A build system that can only be run by having someone sit down at a - keyboard and mouse and kicking it off can’t be integrated into anything - else. -* **It must be standardized**. If you have multiple projects that build - similar things—for example, several Java libraries—all of them must be built - the same way. Without this, it's difficult for a developer to apply - knowledge from one project to another, and it's difficult to debug problems - with individual builds. -* **It must be extensible**. Not all builds are created equal. Where one build - compiles a set of source files, another needs five libraries and a WSDL - descriptor before it can compile anything. There must be affordances within - the standard build that allow developers to describe the ways their build is - different. Without this, you have to write what amounts to a second build - tool to ensure that all the “extra” steps for certain projects happen. -* **Someone must understand it**. A build nobody understands is a time bomb: - when it finally breaks (and it will), your project will be crippled until - someone fixes it or, more likely, hacks around it. - -If you have these five things, you have a working build. The next step is to -make it comfortable. 
Comfortable builds can be used daily for development -work, demonstrations, and tests as well as during releases; builds that are -used constantly don't get a chance to “rust” as developers ignore them until a -release or a demo and don’t hide surprises for launch day. - -* **It must be simple**. When a complicated build breaks, you need someone who - understands it to fix it for you. Simple builds mean more people can - understand it and fewer things can break. -* **It must be fast**. A slow build will be hacked around or ignored entirely. - Ideally, someone creating a local build for a small change should have a - build ready in seconds. -* **It must be part of the product**. The team responsible for developing a - project must be in control of and responsible for its build. Changes to it - and bugs against it must be treated as changes to the product or bugs in the - product. -* **It must run unit tests**. Unit tests, which are completely isolated tests - written by and for developers, can catch a large number of bugs, but they're - only useful if they get run. The build must run the unit test suite for the - product it's building every build. -* **It must build the same thing in any environment**. A build is no good if - developers can only get a working build from a specific machine, or where a - build from one developer's machine is useless anywhere else. If the build is - uniform on any environment, any developer can cook up a build for a test or - demo at any time. - -Finally, there are “chrome” features that take a build from effective to -excellent. These vary widely from project to project and from organization to -organization. Here are some common chrome needs: - -* **It should integrate with your IDEs**. 
This goes both directions: it should
-  be possible to run the build without leaving your IDE or editor suite, and
-  it should be possible to translate the build system into IDE-specific
-  configurations to reduce duplication between IDE settings and the build
-  configuration.
-* **It should generate metrics**. If you gather metrics for test coverage,
-  common bugs, complexity analysis, or generate reports or documentation, the
-  build system should be responsible for it. This keeps all the common
-  administrative actions for the project in the same place as the rest of the
-  configuration, and provides the same consistency that the system gives the
-  rest of the build.
-* **It should support multiple processors**. For medium-sized builds that
-  aren’t yet large enough to merit breaking down into libraries, being able to
-  perform independent build steps in parallel can be a major time-saver. This
-  can extend to distributed build systems, where idle CPU time can be donated
-  to other people’s builds.
-* **It should run integration and acceptance tests**. Taking manual work from
-  the quality control phase of a project and running it automatically during
-  builds amplifies the benefits of early testing and, if your acceptance tests
-  are good, tells you when your project is done.
-* **It should not need repeating**. Once you declare a particular set of build
-  products “done,” you should be able to use those products as-is any time you
-  need them. Without this, you will eventually find yourself rebuilding the
-  same code from the same release over and over again.
-
-## What Doesn’t Work
-
-Builds, like any other part of software development, have
-antipatterns—recurring techniques for solving a problem that introduce more
-problems.
-
-* **One Source Tree, Many Products**. Many small software projects that
-  survive to grow into large, monolithic projects are eventually broken up
-  into components. 
It's easy to do this by taking the existing source tree and - building parts of it, and it's also wrong. Builds that slice up a single - source tree require too much discipline to maintain and too much mental - effort to understand. Break your build into separate projects that are built - separately, and have each build produce one product. -* **The Build And Deploy System**. Applications that have a server component - often choose to automate deployment and setup using the same build system - that builds the project. Too often, the extra build steps that set up a - working system from the built project are tacked onto the end of an existing - build. This breaks standardization, making that build harder to understand, - and means that that one build is producing more than one thing—it's - producing the actual project, and a working system around the project. -* **The Build Button**. IDEs are really good at editing code. Most of them - will produce a build for you, too. Don't rely on IDE builds for your build - system, and don't let the IDE reconfigure the build process. Most IDEs don't - differentiate between settings that apply to the project and settings that - apply to the local environment, leading to builds that rely on libraries or - other projects being in specific places and on specific IDE settings that - are often buried in complex settings dialogs. -* **Manual Steps**. Anything that gets done by hand will eventually be done - wrong. Automate every step. - -## What Does Work - -Similarly, there are patterns—solutions that recur naturally and can be -applied to many problems. - -* **Do One Thing Well**. The UNIX philosophy of small, cohesive tools works - for build systems, too: if you need to build a package, and then install it - on a server, write three builds: one that builds the package, one that takes - a package and installs it, and a third that runs the first two builds in - order. 
The individual builds will be small enough to easily understand and - easy to standardize, and the package ends up installed on the server when - the main build finishes. -* **Dependency Repositories**. After a build is done, make the built product - available to other builds and to the user for reuse rather than rebuilding - it every time you need it. Similarly, libraries and other inward - dependencies for a build can be shared between builds, reducing duplication - between projects. -* **Convention Over Extension**. While it's great that your build system is - extensible, think hard about whether you really need to extend your build. - Each extension makes that project’s build that much harder to understand and - adds one more point of failure. - -## Pick A Tool, Any Tool - -Nothing here is new. The value of build systems has been -[discussed](http://www.joelonsoftware.com/articles/fog0000000043.html) -[in](http://www.gamesfromwithin.com/articles/0506/000092.html) -[great](http://c2.com/cgi/wiki?BuildSystem) -[detail](http://www.codinghorror.com/blog/archives/000988.html) elsewhere. -Much of the accumulated build wisdom of the software industry has already been -incorporated to one degree or another into build tools. What matters is that -you pick one, then use it with the discipline needed to get repeatable results -without thinking. diff --git a/wiki/dev/comments.md b/wiki/dev/comments.md deleted file mode 100644 index 7dc1a68..0000000 --- a/wiki/dev/comments.md +++ /dev/null @@ -1,8 +0,0 @@ -# Comment Maturity Model - -> * Beginners comment nothing -> * Apprentices comment the obvious -> * Journeymen comment the reason for doing it -> * Masters comment the reason for not doing it another way - -Richard C. 
Haven, via [cluefire.net](http://cluefire.net/) diff --git a/wiki/dev/commit-messages.md b/wiki/dev/commit-messages.md deleted file mode 100644 index 6b3702d..0000000 --- a/wiki/dev/commit-messages.md +++ /dev/null @@ -1,70 +0,0 @@ -# Writing Good Commit Messages
-
-Rule zero: “good” is defined by the standards of the project you're on. Have a
-look at what the existing messages look like, and try to emulate that first
-before doing anything else.
-
-Having said that, here are some things that will help your commit messages be
-useful later:
-
-* Treat the first line of the message as a one-sentence summary. Most SCM
-  systems have an “overview” command that shows shortened commit messages in
-  bulk, so making the very beginning of the message meaningful helps make
-  those modes more useful for finding specific commits. _It's okay for this to
-  be a “what” description_ if the rest of the message is a “why” description.
-
-* Fill out the rest of the message with prose outlining why you made the
-  change. The guidelines for a good “why” message are the same as [the
-  guidelines for good comments](comments), but commit messages can be
-  significantly longer. Don't bother reiterating the contents of the change in
-  detail; anyone who needs that can read the diff themselves.
-
-* If you use an issue tracker (and you should), include whatever issue-linking
-  notes it supports right at the start of the message, where it'll be visible
-  even in shortlogs. If your tracker has absurdly long issue-linking syntax,
-  or doesn't support issue links in commits at all, include a short issue
-  identifier at the front of the message and put the long part somewhere out
-  of the way, such as on a line of its own at the end of the message.
-
-* Pick a tense and a mood and stick with them. Reading one commit with a
-  present-tense imperative message (“Add support for PNGs”) and another commit
-  with a past-tense narrative message (“Fixed bug in PNG support”) is
-  distracting. 
- -* If you need rich commit messages (links, lists, and so on), pick one markup - language and stick with it. It'll be easier to write useful commit - formatters if you only have to deal with one syntax, rather than four. - (Personally, I use Markdown on projects I control.) - - * This also applies to line-wrapping: either hard-wrap everywhere, or - hard-wrap nowhere. - -## An Example - - commit 842e6c5f41f6387781fcc84b59fac194f52990c7 - Author: Owen Jacobson <owen.jacobson@grimoire.ca> - Date: Fri Feb 1 16:51:31 2013 -0500 - - DS-37: Add support for privileges, and create a default privileged user. - - This change gives each user a (possibly empty) set of privileges. Privileges - are mediated by roles in the following ways: - - * Each user is a member of zero or more roles. - * Each role implies membership in zero or more roles. If role A implies role - B, then a member of role A is also a transitive member of role B. This - relationship is transitive: if A implies B and B implies C, then A implies - C. This graph should not be cyclic, but it's harmless if it is. - * Each role grants zero or more privileges. - - A user's privileges are the union of all privileges of all roles the user is a - member of, either directly or transitively. - - Obviously, a role that implies no other roles and grants no priveleges is - meaningless to the authorization system. This may be useful for "advisory" - roles meant for human consumption. - - This also introduces a user with the semi-magical name '*admin' (chosen - because asterisks cannot collide with player-chosen usernames), and the group - '*superuser' that is intended to hold all privileges. No privileges are yet - defined. 
diff --git a/wiki/dev/configuring-browser-apps.md b/wiki/dev/configuring-browser-apps.md
deleted file mode 100644
index 8bba0b2..0000000
--- a/wiki/dev/configuring-browser-apps.md
+++ /dev/null
@@ -1,108 +0,0 @@

# Configuring Browser Apps

I've found myself in the unexpected situation of having to write a lot of browser apps/single page apps this year. I have some thoughts on configuration.

## Why Bother

* Centralize environment-dependent facts to simplify management & testing
* Make it easy to manage app secrets.

  [@wlonk](https://twitter.com/wlonk) adds:

  > “Secrets”? What this means in a browser app is a bit different.

  Which is unpleasantly true. In a freestanding browser app, a “secret” is only as secret as your users and their network connections choose to make it, i.e., not very secret at all. Maybe that should read “make it easy to manage app _tokens_ and _identities_,” instead.

* Keep config data & API tokens out of the app's source control
* Integration point for external config sources (Aerobatic, Heroku, etc)
* The forces described in [12 Factor App: Dependencies](http://12factor.net/dependencies) and, to a lesser extent, [12 Factor App: Configuration](http://12factor.net/config) apply just as well to web client apps as they do to freestanding services.

## What Gets Configured

Yes:

* Base URLs of backend services
* Tokens and client IDs for various APIs

No:

* “Environments” (sorry, Ember folks - I know Ember thought this through carefully, but whole-env configs make it easy to miss settings in prod or test, and encourage patterns like “all devs use the same backends”)

## Delivering Configuration

There are a few ways to get configuration into the app.

### Globals

    <head>
      <script>window.appConfig = {
        "FOO_URL": "https://foo.example.com/",
        "FOO_TOKEN": "my-super-secret-token"
      };</script>
      <script src="/your/app.js"></script>
    </head>

* Easy to consume: it's just globals, so `window.appConfig.foo` will read them.
  * This requires some discipline to use well.
* Have to generate a script to set them.
  * This can be a `<script>window.appConfig = {some json}</script>` tag or a standalone config script loaded with `<script src="/config.js">`.
  * Generating config scripts sets a minimum level of complexity for the deployment process: you either need a server to generate the script at request time, or a preprocessing step at deployment time.
  * It's code generation, which is easy to do badly. I had originally proposed using `JSON.stringify` to generate a JavaScript object literal, but this fails for any config values with `</script>` in them. That may be an unlikely edge case, but that only makes it a nastier trap for administrators.

    [There are more edge cases](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify). I strongly suspect that a hazard-free implementation requires a full-blown JS source generator. I had a look at building something out of [escodegen](https://github.com/estools/escodegen) and [estemplate](https://github.com/estools/estemplate), but

    1. `escodegen`'s node version [doesn't generate browser-safe code](https://github.com/estools/escodegen/issues/298), so string literals with `</script>` or `</head>` in them still break the page, and
    2. converting JavaScript values into parse trees to feed to `estemplate` is some seriously tedious code.

### Data Attributes and Link Elements

    <head>
      <link rel="foo-url" href="https://foo.example.com/">
      <script src="/your/app.js" data-foo-token="my-super-secret-token"></script>
    </head>

* Flat values only. This is probably a good thing in the grand scheme of things, since flat configurations are easier to reason about and much easier to document, but it makes namespacing trickier than it needs to be for groups of related config values (URL + token for a single service, for example).
* Have to generate the DOM to set them.
  * This is only practical given server-side templates or DOM rendering. You can't do this with bare nginx, unless you pre-generate pages at deployment time.

### Config API Endpoint

    fetch('/config') /* {"FOO_URL": …, "FOO_TOKEN": …} */
      .then(response => response.json())
      .then(json => someConfigurableService.configure(json));

* Works even with “dumb” servers (nginx, CloudFront) as the endpoint can be a generated JSON file on disk. If you can generate files, you can generate a JSON endpoint.
* Requires an additional request to fetch the configuration, and logic for injecting config data into all the relevant configurable places in the code.
  * This request can't happen until all the app code has loaded.
  * It's very tempting to write the config to a global. This produces some hilarious race conditions.

### Cookies

See for example [clientconfig](https://github.com/henrikjoreteg/clientconfig):

    var config = require('clientconfig');

* Easy to consume given the right tools; tricky to do right from scratch.
* Requires server-side support to send the correct cookie. Some servers will allow you to generate the right cookie once and store it in a config file; others will need custom logic, which means (effectively) you need an app server.
* Cookies persist and get re-sent on subsequent requests, even if the server stops delivering config cookies. Client code has to manage the cookie lifecycle carefully (clientconfig does this automatically).
* Size limits constrain how much configuration you can do.
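The `</script>` hazard discussed under “Globals” above has a lighter-weight workaround than a full-blown JS source generator, at least for JSON-representable config: JSON string escapes are also valid JavaScript string escapes, so replacing every `<` in the stringified config with its `\u003c` escape yields a literal that can never close the surrounding script element early. A sketch — the `FOO_*` names follow the examples above, and `toConfigLiteral` is a made-up name, not an established API:

```javascript
// Sketch: serialize a config object into a <script>-safe JS object literal.
// Escaping every '<' as \u003c means the output can never contain
// '</script>' (or '<!--'), so it is safe to inline into a script element.
function toConfigLiteral(config) {
  return JSON.stringify(config).replace(/</g, '\\u003c');
}

const config = {
  FOO_URL: 'https://foo.example.com/',
  FOO_TOKEN: 'sneaky</script><script>alert(1)//',
};

const literal = toConfigLiteral(config);

console.log(literal.includes('<'));                              // false
console.log(JSON.parse(literal).FOO_TOKEN === config.FOO_TOKEN); // true
```

The resulting literal drops into either variant above: inlined as `<script>window.appConfig = …;</script>` by a server, or written to a standalone `config.js` at deployment time. It doesn't help with config values that aren't representable as JSON.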
diff --git a/wiki/dev/debugger-101.md b/wiki/dev/debugger-101.md
deleted file mode 100644
index 6d7e773..0000000
--- a/wiki/dev/debugger-101.md
+++ /dev/null
@@ -1,86 +0,0 @@

# Intro to Debuggers

(Written largely because newbies in [##java](http://evanchooly.com) never seem to have this knowledge.)

A “debugger” is a mechanism for monitoring and controlling the execution of your program, usually interactively. Using a debugger, you can stop your program at known locations and examine the _actual_ values of its variables (to compare against what you expected), monitor variables for changes (to see where they got the values they have, and why), and step through code a line at a time (to watch control flow and verify that it matches your expectations).

Pretty much every worthwhile language has debugging support of some kind, whether it's via IDE integration or via a command-line debugger.

(Of course, none of this helps if you don't have a mental model of the “expected” behaviour of the program. Debuggers can help you read, but can't replace having an understanding of the code.)

## Debugging Your First Program

Generally, you start running a debugger because you have a known problem -- an exception, or code behaving strangely -- somewhere in your program that you want to investigate more closely. Start by setting a _breakpoint_ in your program at a statement slightly before the problem area.

Breakpoints are instructions to the debugger, telling it to stop execution when the program reaches the statement the breakpoint is set on.

Run the program in the debugger. When it reaches your breakpoint, execution will stop (and your program will freeze, rather than exiting). You can now _inspect_ values and run expressions in the context of your program in its current state. Depending on the debugger and the platform, you may be able to modify those values, too, to quickly experiment with the problem and attempt to solve it.

Once you've looked at the relevant variables, you can resume executing your program - generally in one of five ways:

* _Continue_ execution normally. The debugger steps aside until the program reaches the next breakpoint, or exits, and your program executes normally.

* Execute the _next_ statement. Execution proceeds for one statement in the current function, then stops again. If the statement is, for example, a function or method call, the call will be completely evaluated (unless it contains breakpoints of its own). (In some debuggers, this is labelled “step over,” since it will step “over” a function call.)

* _Step_ forward one operation. Execution proceeds for one statement, then stops again. This mode can single-step into function calls, rather than letting them complete uninterrupted.

* _Continue to end of function_. The debugger steps aside until the program reaches the end of the current function, then halts the program again.

* _Continue to a specific statement_. Some debuggers support this mode as a way of stepping over or through “uninteresting” sections of code quickly and easily. (You can implement this yourself with “Continue” and normal breakpoints, too.)

Whenever the debugger halts your program, you can do any of several things:

* Inspect the value of a variable or field, printing a useful representation to the debugger. This is a more flexible version of the basic idea of printing debug output as you go: because the program is stopped, you can pick and choose which bits of information to look at on the fly, rather than having to rerun your code with extra debug output.

* Inspect the result of an expression. The debugger will evaluate an expression “as if” it occurred at the point in the program where the debugger is halted, including any local variables. In languages with static visibility controls like Java, visibility rules are often relaxed in the name of ease of use, allowing you to look at the private fields of objects. The result of the expression will be made available for inspection, just like a variable.

* Modify a variable or field. You can use this to quickly test hypotheses: for example, if you know what value a variable “should” have, you can set that value directly and observe the behaviour of the program to check that it does what you expected before fixing the code that sets the variable in a non-debug run.

* In some debuggers, you can run arbitrary code in the context of the halted program.

* Abort the program.

diff --git a/wiki/dev/entry-points.md b/wiki/dev/entry-points.md
deleted file mode 100644
index 0e56ce0..0000000
--- a/wiki/dev/entry-points.md
+++ /dev/null
@@ -1,56 +0,0 @@

# Entry Points

The following captures a conversation from IRC:

> [Owen J](https://twitter.com/derspiny): Have you run across the idea of an "entry point" in a runtime yet? (You've definitely used it, just possibly not known it had a name.)
>
> [Alex L](https://twitter.com/aeleitch): I have not!
>
> [Owen J](https://twitter.com/derspiny): It's the point where the execution of the outside system -- the OS, the browser, the Node runtime, whatever -- stops and the execution of your code starts. Some platforms only give you one: C on Unix is classic, where there's only two entry points: main and signal handlers (and a lot of apps only use main). JS gives you _a shit fucking ton_ of entry points.
>
> [Owen J](https://twitter.com/derspiny): In a browser, the pageload process is an entry point: your code gets run when the browser encounters a `<script>` tag. So is every event handler. There's none of your code running when an event handler starts, only the browser is running. So is every callback from an external service, like `XmlHttpRequest` or `EventSource` or the `File` APIs. In Node, the top level of your main script is an entry point, but so is every callback from an external service.
>
> [Alex L](https://twitter.com/aeleitch): Ahahahahahahaha oh my god. There is no way for me to contain them all. _everything the light touches._
>
> [Owen J](https://twitter.com/derspiny): This is important for reasoning about exception handling! _In JS_, exception handling only propagates one direction: towards the entry point of this sequence of function calls.
>
> [Alex L](https://twitter.com/aeleitch): Yes. This is what _I_ call a stack trace.
>
> [Owen J](https://twitter.com/derspiny): If an exception escapes from an entry point, the JS runtime logs it, and then the outside runtime takes over again. That's one of the ways callbacks from external services fuck up the idea of a stack trace as a map of control flow.
>
> [Alex L](https://twitter.com/aeleitch): Huh. Yes. Yes I can see that. I mean, in my world, control flow is a somewhat handwavey idea right now. I'm starting to understand why so many people hate JS-land.
>
> [Owen J](https://twitter.com/derspiny): Sure. But, for example, a promise chain is a tool for restructuring control flow. In principle, error handling should provide _some_ kind of map of that, to allow programmers -- you -- to diagnose how a program reached a given error state and maybe one day fix the problem. In THIS future, none of them do that well, though.
>
> [Alex L](https://twitter.com/aeleitch): Yes. Truly the darkest timeline, but this reviews why I am having these concerns.
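The exception-propagation point in the conversation above is easy to demonstrate. A sketch for Node — `process.on('uncaughtException', …)` is Node's hook for errors that reach the runtime boundary; in a browser the equivalent observation point is `window.onerror`:

```javascript
// A try/catch around the *scheduling* call cannot catch an error thrown
// from the callback: by the time the callback runs, this stack frame is
// long gone, and the callback is executing from its own entry point.
let caughtHere = false;

// Errors that escape an entry point surface at the runtime boundary instead.
process.on('uncaughtException', (err) => {
  console.log('runtime saw:', err.message);
});

try {
  setTimeout(() => {
    throw new Error('boom');
  }, 0);
} catch (e) {
  caughtHere = true; // never runs
}

console.log('caught synchronously:', caughtHere); // caught synchronously: false
```

This is also why a promise chain's error handling matters as a control-flow tool: a `.catch` handler reintroduces an error path that ordinary stack unwinding can't provide across entry points.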
diff --git a/wiki/dev/gnu-collective-action-license.md b/wiki/dev/gnu-collective-action-license.md
deleted file mode 100644
index 6a0bc3b..0000000
--- a/wiki/dev/gnu-collective-action-license.md
+++ /dev/null
@@ -1,51 +0,0 @@

# The GPL As Collective Action

Programmers, like many groups of subject experts, are widely afflicted by the belief that all other fields of expertise can be reduced to a special case of programming expertise. For a great example of this, watch [programmers argue about law](https://xkcd.com/1494/) (which can _obviously_ be reduced to a rules system, which is a programming problem), [consent](https://www.reddit.com/r/Bitcoin/comments/2e5a7k/could_the_blockchain_be_used_to_prove_consensual/) (which is _obviously_ about non-repudiable proofs, which are a programming problem), or [art](https://github.com/google/deepdream) (which is _obviously_ reducible to simple but large automata). One key symptom of this social pattern is a disregard for outside expertise and outside bodies of knowledge.

I believe this habit may have bitten Stallman.

The GNU General Public License presents a simple, legally enforceable offer: in return for granting the right to distribute the licensed work and its derivatives, the GPL demands that derivative works also be released under the GPL. The _intent_, as derived from [Stallman’s commentaries](http://www.gnu.org/philosophy/open-source-misses-the-point.en.html) on the GPL and on the social systems around software, is that people who _use_ information systems should, morally and legally, be entitled to the tools to understand what the system will do and why, and to make changes to those tools as they see fit.

This is a form of _collective action_, as implemented by someone who thinks of unions and organized labour as something that software could do better. The usual lens for critique of the GPL is that GPL’d software cannot be used in non-GPL systems (which is increasingly true, as the Free Software Foundation catches up with the “as a Service” model of software delivery) _by developers_, but I think there’s a more interesting angle on it as an attempt to apply the collective bargaining power of programmers as a class to extracting a concession from managerial -- business and government -- interests, instead. In that reading, the GPL demands that managerial interests in software avoid behaviours that would be bad for programmers (framed as “users”, as above) as a condition of benefitting from the labour of those programmers.

Sadly, Stallman is not a labour historian or a union organizer. He’s a public speaker and a programmer. By attempting to reinvent collective action from first principles, and by treating collective action as a special case of software development, the GPL acts to divide programmers from non-programming computer users, and to weaken the collective position of programmers vis-à-vis managerial interests. The rise of “merit”-based open source licenses, such as the MIT license (which I use heavily, but advisedly), and the increasing pervasiveness of the Github Resume, are both simple consequences of this mistake.

I’m pro-organized-labour, and largely pro-union. The only thing worse than having two competing powerful interests in the room is having only one powerful interest in the room. The GPL should be part of any historical case study for the unionization of programmers, since it captures so much of what we do wrong.

diff --git a/wiki/dev/go.md b/wiki/dev/go.md
deleted file mode 100644
index f20914b..0000000
--- a/wiki/dev/go.md
+++ /dev/null
@@ -1,112 +0,0 @@

# I Do Not Like Go

I use Go at my current day job. I've gotten pretty familiar with it. I do not like it, and its popularity is baffling to me.
## Developer Ergonomics

I've never met a language lead so openly hostile to the idea of developer ergonomics. To pick one example, Rob Pike has been repeatedly and openly hostile to any discussion of syntax highlighting on the Go playground. In response to reasonably-phrased user questions, his public answers have been disdainful and disrespectful:

> Gofmt was written to reduce the number of pointless discussions about code formatting. It succeeded admirably. I'm sad to say it had no effect whatsoever on the number of pointless discussions about syntax highlighting, or as I prefer to call it, spitzensparken blinkelichtzen.

From a [2012 Go-Nuts thread](http://grokbase.com/t/gg/golang-nuts/12asys9jn4/go-nuts-go-playground-syntax-highlighting), and again:

> Syntax highlighting is juvenile. When I was a child, I was taught arithmetic using colored rods (http://en.wikipedia.org/wiki/Cuisenaire_rods). I grew up and today I use monochromatic numerals.

Clearly nobody Rob cares about has ever experienced synaesthesia, dyslexia, or poor eyesight. Rob's resistance to the idea has successfully kept Go's official site and docs highlighting-free as of this writing.

The Go team is not Rob Pike, but they've shared his attitude towards ergonomics in other ways. In a discussion of [union/sum types](https://github.com/golang/go/issues/19412), user ianlancetaylor rejects the request out of hand by specifically identifying an ergonomic benefit and writing it off as too minor to be worth bothering:

> This has been discussed several times in the past, starting from before the open source release. The past consensus has been that sum types do not add very much to interface types. Once you sort it all out, what you get in the end if an interface type where the compiler checks that you've filled in all the cases of a type switch. That's a fairly small benefit for a new language change.

This attitude is at odds with opinions about union types in other languages. JWZ, criticising Java in 2000, wrote:

> Similarly, I think the available idioms for simulating enum and :keywords fairly lame. (There's no way for the compiler to issue that life-saving warning, ``enumeration value `x' not handled in switch'', for example.)

The Java team took criticism in this vein to heart, and Java can now emit this warning for `switch`es over `enum` types. Other languages - including both modern languages such as Rust, Scala, Elixir, and friends, as well as Go's own direct ancestor, C - similarly warn where possible. Clearly, these kinds of warning are useful, but to the Go team, developer comfort is not important enough to merit consideration.

## Politics

No, not the mailing-lists-and-meetups kind. A deeper and more interesting kind.

Go is, like every language, a political vehicle. It embodies a particular set of beliefs about how software should be written and organized. In Go's case, the language embodies an extremely rigid caste hierarchy of "skilled programmers" and "unskilled programmers," enforced by the language itself.

On the unskilled programmers side, the language forbids features considered "too advanced." Go has no generics, no way to write higher-order functions that generalize across more than a single concrete type, and extremely stringent prescriptive rules about the presence of commas, unused symbols, and other infelicities that might occur in ordinary code. This is the world in which Go programmers live - one which is, if anything, even _more_ constrained than Java 1.4 was.

On the skilled programmers side, programmers are trusted with those features, and can expose things built with them to other programmers on both sides of the divide. The language implementation contains generic functions which cannot be implemented in Go, and which satisfy typing relationships the language simply cannot express. This is the world in which the Go _implementors_ live.

I can't speak for Go's genesis within Google, but outside of Google, this underanalysed political stance dividing programmers into "trustworthy" and "not" underlies many arguments about the language.

## Packaging and Distribution of Go Code

`go get` is a disappointing abdication of responsibility. Packaging boundaries are communications boundaries, and the Go team's response of "vendor everything" amounts to refusing to help developers communicate with one another about their code.

I can respect the position the Go team has taken, which is that it's not their problem, but that puts them at odds with every other major language. Considering the disastrous history of attempts at package management for C libraries and the existence of Autotools as an example of how this can go very wrong over a long-enough time scale, it's very surprising to see a language team in this century washing their hands of the situation.

## GOPATH

The use of a single monolithic path for all sources makes version conflicts between dependencies nearly unavoidable. The `vendor` workaround partially addresses the problem, at the cost of substantial repository bloat and non-trivial linkage changes which can introduce bugs if a vendored and a non-vendored copy of the same library are linked in the same application.

Again, the Go team's "not our problem" response is disappointing and frustrating.

## Error Handling in Go

The standard Go approach to operations which may fail involves returning multiple values (not a tuple; Go has no tuples) where the last value is of type `error`, which is an interface whose `nil` value means “no error occurred.”

Because this is a convention, it is not representable in Go's type system. There is no generalized type representing the result of a fallible operation, over which one can write useful combining functions. Furthermore, it's not rigidly adhered to: nothing other than good sense stops a programmer from returning an `error` in some other position, such as in the middle of a sequence of return values, or at the start - so code generation approaches to handling errors are also fraught with problems.

It is not possible, in Go, to compose fallible operations in any way less verbose than some variation on

```go
a, err := fallibleOperationA()
if err != nil {
    return nil, err
}

b, err := fallibleOperationB(a)
if err != nil {
    return nil, err
}

return b, nil
```

In other languages, this can variously be expressed as

```java
a = fallibleOperationA()
b = fallibleOperationB(a)
return b
```

in languages with exceptions, or as

```javascript
return fallibleOperationA()
    .then(a => fallibleOperationB(a))
```

in languages with abstractions that can operate over values with cases.

This has real impact: code which performs long sequences of fallible operations expends a substantial amount of typing effort to write (even with editor support generating the branches), and a substantial amount of cognitive effort to read. Style guides help, but mixing styles makes it worse. Consider:

```go
a, err := fallibleOperationA()
if err != nil {
    return nil, err
}

if err := fallibleOperationB(a); err != nil {
    return nil, err
}

c, err := fallibleOperationC(a)
if err != nil {
    return nil, err
}

fallibleOperationD(a, c)

return fallibleOperationE()
```

God help you if you nest them, or want to do something more interesting than passing an error back up the stack.

diff --git a/wiki/dev/liquibase.md b/wiki/dev/liquibase.md
deleted file mode 100644
index 01e989f..0000000
--- a/wiki/dev/liquibase.md
+++ /dev/null
@@ -1,77 +0,0 @@

# Liquibase

Note to self: I think this (a) needs an outline and (b) wants to become a “how to automate db upgrades for dummies” page.
Also, this is really old (~2008) and many things have changed: database migration tools are more widely-available and mature now. On the other hand, I still see a lot of questions on IRC that are based on not even knowing these tools exist.

-----

Successful software projects are characterized by extensive automation and supporting tools. For source code, we have version control tools that support tracking and reviewing changes, marking particular states for release, and automating builds. For databases, the situation is rather less advanced in a lot of places: outside of Rails, which has some rather nice [migration](http://wiki.rubyonrails.org/rails/pages/understandingmigrations) support, and [evolutions](http://code.google.com/p/django-evolution/) or [South](http://south.aeracode.org) for Django, there are few tools that actually track changes to the database or to the model in a reproducible way.

While I was exploring the problem by writing some scripts for my own projects, I came to a few conclusions. You need to keep a receipt for the changes a database has been exposed to in the database itself so that the database can be reproduced later. You only need scripts to go forward from older versions to newer versions. Finally, you need to view DDL statements as a degenerate form of diff, between two database states, that's not combinable the way textual diff is except by concatenation.

Someone on IRC mentioned [Liquibase](http://www.liquibase.org/) and [migrate4j](http://migrate4j.sourceforge.net/) to me. Since I was already in the middle of writing a second version of my own scripts to handle the issues I found writing the first version, I stopped and compared notes.

Liquibase is essentially the tool I was trying to write, only with two years of relatively talented developer time poured into it rather than six weeks.

Liquibase operates off of a version table it maintains in the database itself, which tracks what changes have been applied to the database, and off of a configuration file listing all of the database changes. Applying new changes to a database is straightforward: by default, it goes through the file and applies, in order, all the changes in the file that are not already in the database. This ensures that incremental changes during development are reproduced in exactly the same way during deployment, something lots of model-to-database migration tools have a problem with.

The developers designed the configuration file around some of the ideas from [Refactoring Databases](http://www.amazon.com/Refactoring-Databases-Evolutionary-Addison-Wesley-Signature/dp/0321293533), and provided an [extensive list of canned changes](http://www.liquibase.org/manual/home#available_database_refactorings) as primitives in the database change scripts. However, it's also possible to insert raw SQL commands (either DDL, or DML queries like `SELECT`s and `INSERT`s) at any point in the change sequence if some change to the database can't be accomplished with its set of refactorings. For truly hairy databases, you can use either a Java class implementing your change logic or a shell script alongside the configuration file.

The tools for applying database changes to databases are similarly flexible: out of the box, liquibase can be embedded in a fairly wide range of Java applications using servlet context listeners, a Spring adapter, or a Grails adapter; it can also be run from an ant or maven build, or as a standalone tool.

My biggest complaint is that liquibase is heavily Java-centric; while the developers are planning .Net support, it'd be nice to use it for Python apps as well. Triggering liquibase upgrades from anything other than a Java program involves either shelling out to the `java` command or creating a JVM and writing native glue to control the upgrade process, which are both pretty painful. I'm also less than impressed with the javadoc documentation; while the manual is excellent, the javadocs are fairly incomplete, making it hard to write customized integrations.

The liquibase developers deserve a lot of credit for solving a hard problem very cleanly.

*[DDL]: Data Definition Language
*[DML]: Data Manipulation Language

diff --git a/wiki/dev/merging-structural-changes.md b/wiki/dev/merging-structural-changes.md
deleted file mode 100644
index d1c7a9c..0000000
--- a/wiki/dev/merging-structural-changes.md
+++ /dev/null
@@ -1,85 +0,0 @@

# Merging Structural Changes

In 2008, a project I was working on set out to reinvent their build process, migrating from a mass of poorly-written Ant scripts to Maven and reorganizing their source tree in the process. The development process was based on having a branch per client, so there was a lot of ongoing development on the original layout for clients that hadn't been migrated yet. We discovered that our version control tool, [Subversion](http://subversion.tigris.org/), was unable to merge the changes between client branches on the old structure and the trunk on the new structure automatically.

Curiosity piqued, I cooked up a script that reproduces the problem and performs the merge from various directions to examine the results. Subversion, sadly, performed dismally: none of the merge scenarios tested retained content changes when merging structural changes to the same files.

## The Preferred Outcome

The diagram above shows a very simple source tree with one directory, `dir-a`, containing one file with two lines in it. On one branch, the file is modified to have a third line; on another branch, the directory is renamed to `dir-b`.
-Then, both branches are merged, and the resulting tree contains both sets of -changes: the file has three lines, and the directory has a new name. - -This is the preferred outcome, as no changes are lost or require manual -merging. - -## Subversion - - - -There are two merge scenarios in this diagram, with almost the same outcome. -On the left, a working copy of the branch where the file's content changed is -checked out, then the changes from the branch where the structure changed are -merged in. On the right, a working copy of the branch where the structure -changed is checked out, then the changes from the branch where the content -changed are merged in. In both cases, the result of the merge has the new -directory name, and the original file contents. In one case, the merge -triggers a rather opaque warning about a “missing file”; in the other, the -merge silently ignores the content changes. - -This is a consequence of the way Subversion implements renames and copies. -When Subversion assembles a changeset for committing to the repository, it -comes up with a list of primitive operations that reproduce the change. There -is no primitive that says “this object was moved,” only primitives which say -“this object was deleted” or “this object was added, as a copy of that -object.” When you move a file in Subversion, those two operations are -scheduled. Later, when Subversion goes to merge content changes to the -original file, all it sees is that the file has been deleted; it's completely -unaware that there is a new name for the same file. - -This would be fairly easy to remedy by adding a “this object was moved to that -object” primitive to the changeset language, and [a bug report for just such a -feature](http://subversion.tigris.org/issues/show_bug.cgi?id=898) was filed in -2002. 
However, by that time Subversion's repository and changeset formats had -essentially frozen, as Subversion was approaching a 1.0 release and more -important bugs _without_ workarounds were a priority. - -There is some work going on in Subversion 1.6 to handle tree conflicts (the -kind of conflicts that come from this kind of structural change) more -sensibly, which will cause the two merges above to generate a Conflict result, -which is not as good as an automatic merge but far better than silently -ignoring changes. - -## Mercurial - - - -Interestingly, there are tools which get this merge scenario right: the -diagram above shows how [Mercurial](http://www.selenic.com/mercurial/) handles -the same two tests. Since its changeset language does include an “object -moved” primitive, it's able to take a content change for `dir-a/file` and -apply it to `dir-b/file` if appropriate. - -## Git - -Git also gets this scenario right, _usually_. Unlike Mercurial, Git does not -track file copies or renames in its commits at all, preferring to infer them by -content comparison every time it performs a move-aware operation, such as a -merge. diff --git a/wiki/dev/on-rights.md b/wiki/dev/on-rights.md deleted file mode 100644 index d277b8a..0000000 --- a/wiki/dev/on-rights.md +++ /dev/null @@ -1,21 +0,0 @@ -# On Rights - -Or: your open-source project is a legal minefield, and fixing that is counterintuitive and unpopular. - -The standard approach to releasing an open-source project in this age is to throw your code up on Github with a `LICENSE` file describing the terms under which other people may copy and distribute the project and its derived works. This is all well and good: when you write code for yourself, you generally hold the copyright to that code, and you can license it out however you see fit. - -However, Github encourages projects to accept contributions.
Pull request activity is, rightly or wrongly, considered a major indicator of project health by the Github community at large. Moreover, each pull request represents a gift of time and labour: projects without a clear policy otherwise are often in no position to reject such a gift unless it has clear defects. - -This is a massive problem. The rights to contributed code are, generally, owned by the contributor, and not by the project's original authors, and a pull request, on its own, isn't anywhere near adequate to transfer those rights to the project maintainers. - -Intuitively, it may seem like a good idea for each contributor to retain the rights to their contributions. There is a good argument that by contributing code with the intent that it be included in the published project, the contribution is implicitly under the same license as the project as a whole, and withholding the rights can effectively prevent the project from ever switching to a more-restrictive license without the contributor's consent. - -However, it also cripples the project's legal ability to enforce the license. Someone distributing the project in violation of the license terms is infringing on all of those individual copyrights, and no contributor has obvious standing to bring suit on behalf of any other. Suing someone for copyright infringement becomes difficult: anyone seeking to bring suit either needs to restrict the suit to the portions they hold the copyright to (difficult when each contribution is functionally entangled with every other), or obtain permission from all of the contributors, including those under pseudonyms or who have _died_, to file suit collectively. This, in turn, de-fangs whatever restrictions the license nominally imposes. - -There are a few fixes for this. - -The simplest one, from an implementation perspective, is to require that contributors agree in writing to assign the rights to their contribution to the project's maintainers, or to an organization.
_This is massively unpopular_: asking a developer to give up rights to their contributions tends to provoke feelings that the project wants to take without giving, and the rationale justifying such a request isn't obvious without a grounding in intellectual property law. As things stand, the only projects that regularly do this are those backed by major organizations, as those organizations tend to be more sensitive to litigation risk and have the resources to understand and demand such an assignment. (Example: [the Sun Contributor Agreement](https://www.openoffice.org/licenses/sca.pdf), which is not popular.) - -More complex - too complex to do without an attorney, honestly - is to require that contributors sign an agreement authorizing the project's maintainers or host organization to bring suit on their behalf with respect to their contributions. As attorneys are not free and as there are no "canned" agreements for this, it's not widely done. I anticipate that it might provoke a lot of the same reactions, but it does leave contributors nominally in possession of the rights to their work. - -The status quo is, I think, untenable in the long term. We've already seen major litigation over project copyrights, and in the case of the [FSF v. Cisco](https://www.fsf.org/licensing/complaint-2008-12-11.pdf), the Free Software Foundation was fortunate that substantial parts of the infringing use were works to which the FSF held clear copyrights. diff --git a/wiki/dev/papers.md b/wiki/dev/papers.md deleted file mode 100644 index 03ae430..0000000 --- a/wiki/dev/papers.md +++ /dev/null @@ -1,36 +0,0 @@ -# Papers of Note - -On Slack: - -> [Ben W](https://twitter.com/bwarren24): -> -> What are people's favorite CS papers? - -* Perlman, Radia (1985). ["An Algorithm for Distributed Computation of a Spanning Tree in an Extended LAN"][1]. ACM SIGCOMM Computer Communication Review. 15 (4): 44–53. doi:10.1145/318951.319004. - -* [The related Algorhyme][2], also by Perlman. 
- -* Guy Lewis Steele, Jr. "[Debunking the 'Expensive Procedure Call' Myth, or, Procedure Call Implementations Considered Harmful, or, Lambda: The Ultimate GOTO][3]". MIT AI Lab. AI Lab Memo AIM-443. October 1977. - -* [What Every Computer Scientist Should Know About Floating-Point Arithmetic][4], by David Goldberg, published in the March 1991 issue of Computing Surveys. Copyright 1991, Association for Computing Machinery, Inc. - -* [RFC 1925][5]. - -* [Ken Thompson's NFA paper][6] on regular expression search. - -* [The Eight Fallacies of Distributed Computing][7]. - -* [HAKMEM][8] is another good one. It's _dense_ but rewarding. - -* Kahan, William (January 1965), "[Further remarks on reducing truncation errors][9]", Communications of the ACM, 8 (1): 40, doi:10.1145/363707.363723 - - -[1]: https://www.researchgate.net/publication/238778689_An_Algorithm_for_Distributed_computation_of_a_Spanning_Tree_in_an_Extended_LAN -[2]: http://etherealmind.com/algorhyme-radia-perlman/ -[3]: https://dspace.mit.edu/bitstream/handle/1721.1/5753/AIM-443.pdf -[4]: https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html -[5]: https://www.ietf.org/rfc/rfc1925.txt -[6]: https://www.fing.edu.uy/inco/cursos/intropln/material/p419-thompson.pdf -[7]: http://wiki.c2.com/?EightFallaciesOfDistributedComputing -[8]: http://w3.pppl.gov/~hammett/work/2009/AIM-239-ocr.pdf -[9]: https://dl.acm.org/citation.cfm?id=363723 diff --git a/wiki/dev/rich-shared-models.md b/wiki/dev/rich-shared-models.md deleted file mode 100644 index 7fac072..0000000 --- a/wiki/dev/rich-shared-models.md +++ /dev/null @@ -1,102 +0,0 @@ -# Rich Shared Models Must Die - -In a gaming system I once worked on, there was a single class which was -responsible for remembering everything about a user: their name and contact -information, their wagers, their balance, and every other fact about a user -the system cared about.
In a system I'm working with now, there's a set of -classes that collaborate to track everything about the domain: prices, -descriptions, custom search properties, and so on. - -Both of these are examples of shared, system-wide models. - -Shared models are evil. - -Shared models _must be destroyed_. - -A software system's model is the set of functions and data types it uses to -decide what to do in response to various events. Models embody the development -team's assumptions and knowledge about the problem space, and usually reflect -the structure of the applications that use them. Not all systems have explicit -models, and it's often hard to draw a line through the code base separating -the code that is the model from the code that is not, as every programmer sees -models slightly differently. - -With the rise of object-oriented development, explicit models became the focus -of several well-known practices. Many medium-to-large projects are built -“model first,” with the interfaces to that model being sketched out later in -the process. Since the model holds the system's understanding of its task, -this makes sense, and so long as you keep the problem you're actually solving -in mind, it works well. Unfortunately, it's too easy to lose sight of the -problem and push the model as the whole reason for the system around it. This, -in combination with both emotional and technical investment in any existing -system, strongly encourages building new systems around the existing -model pieces even if the relationship between the new system and the existing -model is tenuous at best. - -* Why do we share them? - * Unmanaged growth - * Adding features to an existing system - * Building new systems on top of existing tools - * Misguided applications of “simplicity” and “reuse” - * Encouraged by distributed object systems (CORBA, EJB, SOAP, COM) -* What are the consequences?
- * Models end up holding behaviour and data relevant to many applications - * Every application using the model has to make the same assumptions - * Changing the model usually requires upgrading everyone at the same time - * Changes to the model are risky and impact many applications, even if the - changes are only relevant to one application -* What should we do instead? - * Narrow, flat interfaces - * Each system is responsible for its own modelling needs - * Systems share data and protocols, not objects - * Libraries are good, if the entire world doesn't need to upgrade at the - same time - -It's easy to start building a system by figuring out what the various nouns it -cares about are. In the gambling example, one of our nouns was a user (the guy -sitting at a web browser somewhere), who would be able to log in, deposit -money, place a wager, and would have to be notified when the wager was -settled. This is a clear, reasonable entity for describing the goal of placing -bets online, which we could make reasonable assumptions about. It's also a -terrible thing to turn into a class. - -The User class in our gambling system was responsible for all of those things; -as a result, every part of the system ended up using a User object somewhere. -Because the User class had many responsibilities, it was subject to frequent -changes; because it was used everywhere, those changes had the capability to -break nearly any part of the overall system. Worse, because so much -functionality was already in one place, it became psychologically easy to add -one more responsibility to its already-bloated interface. - -What had been a clean model in the problem space eventually became one of a -handful of “glue” pieces in a [big ball of -mud](http://www.laputan.org/mud/mud.html#BigBallOfMud) program. The User -object did not come about through conscious design, but rather through -evolution from a simple system. 
There was no clear point where User became -“too big”; instead, the vagueness of its role slowly grew until it became the -default behaviour-holder for all things user-specific. - -The same problem modeling exercise also points at a better way to design the -same system: it describes a number of capabilities the system needed to be -able to perform, each of which is simpler than “build a gaming website.” Each -of these capabilities (accept or reject logins, process deposits, accept and -settle wagers, and send out notification emails to players) has a much simpler -model and solves a much more constrained problem. There is no reason the -authentication service needs to share any data except an identity with the -wagering service: one cares about login names, passwords, and authorization -tickets while the other cares about accounting, wins and losses, and posted -odds. - -There is a small set of key facts that can be used to correlate all of the pieces: -usernames, which uniquely identify a user, can be used to associate data and -behaviour in the login domain with data and behaviour in the accounting and -wagering domain, and with information in a contact management domain. All of -these key facts are flat—they have very little structure and no behaviour, and -can be passed from service to service without dragging along an entire -application's worth of baggage data. - -Sharing model classes between many services creates a huge maintenance -bottleneck. Isolating models within the services they support helps encourage -clean separations between services, which in turn makes it much easier to -understand individual services and much easier to maintain the system as a -whole. Kindergarten lied: sharing is _wrong_.
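A minimal sketch of that split (all names and logic here are invented for illustration): two services that share only a flat username, never a User object.

```python
# Hypothetical sketch: an authentication service and a wagering service that
# share only a flat key (the username), not a rich shared User class.
# All names and data are invented for illustration.

class AuthService:
    """Cares about credentials and tickets, and nothing else."""

    def __init__(self):
        self._passwords = {"alice": "hunter2"}  # toy credential store

    def login(self, username, password):
        # The only facts shared onward are flat: a username and a ticket.
        if self._passwords.get(username) == password:
            return {"username": username, "ticket": "t-001"}
        return None


class WageringService:
    """Cares about balances and wagers, keyed by the same flat username."""

    def __init__(self):
        self._balances = {}

    def deposit(self, username, amount):
        self._balances[username] = self._balances.get(username, 0) + amount
        return self._balances[username]


auth = AuthService()
session = auth.login("alice", "hunter2")

wagering = WageringService()
balance = wagering.deposit(session["username"], 50)
```

Neither service can reach into the other's data or behaviour; changing how wagering tracks balances cannot break login, and vice versa.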
diff --git a/wiki/dev/shutdown-hooks.md b/wiki/dev/shutdown-hooks.md deleted file mode 100644 index 1cc5a81..0000000 --- a/wiki/dev/shutdown-hooks.md +++ /dev/null @@ -1,29 +0,0 @@ -# Falsehoods Programmers Believe About Shutdown Hooks - -Shutdown hooks are language features allowing programs to register callbacks to run during the underlying runtime's orderly teardown. For example: - -* C's [`atexit`](http://man7.org/linux/man-pages/man3/atexit.3.html), - -* Python's [`atexit`](https://docs.python.org/library/atexit.html), which is subtly different, - -* Ruby's [`Kernel.at_exit`](http://www.ruby-doc.org/core-2.1.3/Kernel.html#method-i-at_exit), which is different again, - -* Java's [Runtime.addShutdownHook](http://docs.oracle.com/javase/8/docs/api/java/lang/Runtime.html#addShutdownHook-java.lang.Thread-), which is yet again different - -(There's an example in your favourite language.) - -The following beliefs are widespread and incorrect: - -1. **Your shutdown hook will run.** Non-exhaustively: the power can go away. The OS may terminate the program immediately because of resource shortages. An administrator or process management tool may send `SIGKILL` to the process. All of these things, and others, will not run your shutdown hook. - -2. **Your shutdown hook will run last.** Look at the shapes of the various shutdown hook APIs above: they all allow multiple hooks to be registered in arbitrary orders, and at least one _outright requires_ that hooks run concurrently. - -3. **Your shutdown hook will not run last.** Sometimes, you win, and objects your hook requires get cleaned up before your hook runs. - -4. **Your shutdown hook will run to completion.** Some languages run shutdown hooks even when the original termination request came from, for example, the user logging out. Most environments give programs a finite amount of time to wrap up before forcibly terminating them; your shutdown hook may well be mid-run when this occurs. - -5. 
**Your shutdown hook will be the only thing running.** In languages that support “daemon” threads, shutdown hooks may start before daemon threads terminate. In languages with concurrent shutdown hooks, other hooks will be in flight at the same time. On POSIX platforms, signals can still arrive during your shutdown hook. (Did you start any child processes? `SIGCHLD` can still arrive.) - -6. **You need a shutdown hook.** Closing files, terminating threads, and hanging up network connections are all done automatically by the OS as part of process destruction. The behaviour of the final few writes to a file handle isn't completely deterministic (unflushed data can be lost), but that's true even if a shutdown hook tries to close the file. - -Programs that rely on shutdown hooks for correctness should be treated as de-facto incorrect, much like object finalization in garbage-collected languages. diff --git a/wiki/dev/stop-building-synchronous-web-containers.md b/wiki/dev/stop-building-synchronous-web-containers.md deleted file mode 100644 index 320b3f7..0000000 --- a/wiki/dev/stop-building-synchronous-web-containers.md +++ /dev/null @@ -1,41 +0,0 @@ -# Stop Building Synchronous Web Containers - -Seriously, stop it. It's surreally difficult to build a sane asynchronous service on top of a synchronous API, but building a synchronous service on top of an asynchronous API is easy. - -* WSGI: container calls the application as a function, and uses the return - value for the response body. Asynchronous apps generally use a non-WSGI - base (see for example [Bottle](http://bottlepy.org/docs/dev/async.html)). - -* Rack: container calls the application as a method, and uses the return - value for the complete response. Asynchronous apps generally use a non-Rack - base (see [this Github ticket](https://github.com/rkh/async-rack/issues/5)). - -* Java Servlets: container calls the application as a method, passing a - callback-bearing object as a parameter.
The container commits and closes -the response as soon as the application method returns. Asynchronous apps -can use a standard API that operates by _re-invoking_ the servlet method as -needed. - -* What does .Net do? - -vs - -* ExpressJS: container calls the application as a function, passing a -  callback-bearing object as a parameter. The application is responsible for -  indicating that the response is complete. - -## Synchronous web containers are bad API design - -* Make the easy parts easy (this works) - -* Make the hard parts possible (OH SHIT) - -## Writing synchronous adapters for async APIs is easy - -    # Wrap a blocking, call-and-return application for an async container:
    # run the synchronous entry point, then hand its response to the callback.
    def adapter(request, response_callback):
        synchronous_response = synchronous_entry_point(request)
        return response_callback(synchronous_response) - -Going the other way is more or less impossible, which is why websocket -support, HTML5 server-sent event support, and every other async tool for the -web has an awful server interface.
- -* A task may only be complete as of certain commits/releases/builds. -* A task may only be valid after (or before) certain commits/releases/builds. - -Communication loosely implies publishing. Tracking doesn't, but may rely on -the publishing of other facts. - -## Core Data - -Tasks are only useful if they're actionable. To be actionable, they must be -understood. Understanding requires communication and documentation. - -* A protocol-agnostic _name_, for easily identifying a task in related - conversations. - * These names need to be _short_ since they're used conversationally. Long - issue names will be shortened by convention whether the tracker supports - it or not. -* An actionable _description_ of the task. - * Frequently, a short _summary_ of the task, to ease bulk task - manipulation. Think of the difference between an email subject and an - email body. -* A _discussion_, consisting of _remarks_ or _comments_, to track the evolving - understanding alongside the task. - -See [speciation](#speciation), below. - -## Responsibility and Ownership - -Regardless of whether your team operates with a top-down, command-oriented -management structure or with a more self-directed and anarchistic process, for -every task, there is notionally one person currently responsible for ensuring -that the task is completed. - -That relationship can change over time; how it does so is probably -team-specific. - -There may be other people _involved_ in a task that are not _responsible_ for -a task, in a number of roles. Just because I developed the code for a feature -does not mean I am necessarily responsible for the feature any more, but it -might be useful to have a “developed by” list for the feature's task. 
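The core data and responsibility facts above can be sketched as a single record. The field names here are illustrative assumptions, not any real tracker's schema:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the core task data and the responsibility
# relationship described above. Field names are invented, not taken
# from any real tracker.
@dataclass
class Task:
    name: str         # short, conversation-friendly, e.g. "PROJ-42"
    summary: str      # one line, like an email subject
    description: str  # actionable detail, like an email body
    responsible: str  # the one person currently ensuring completion
    comments: list = field(default_factory=list)  # the evolving discussion

t = Task("PROJ-42", "Login fails on Safari",
         "Expected: login succeeds. Actual: spinner forever.",
         "gianna@example.com")
t.comments.append("Reproduced on Safari 17.")
```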
- -Ways of identifying people: - -* Natural-language names (“Gianna Grady”) -* Email addresses -* Login names -* Distinguished names in some directory -* URLs - -Task responsibility relationships reflect real-world responsibility, and help -communicate it, but do not normally define it. - -## Workflow - -“Workflow” describes both the implications of the states a task can be in and -the implications of the transitions between states. Most task trackers are, at -their core, workflow engines of varying sophistication. - -Why: - -* Improve shared understanding of how tracked tasks are performed. -* Provide clear hand-off points when responsibility shifts. -* Provide insight into which tasks need what kinds of attention. -* Integration points for other behaviour. - -States are implicitly time-bounded, and joined to their predecessor and -successor states by transitions. - -Task state is decoupled from the real world: the task in a tracker is not the -work it describes. - -Elemental states: - -* “Open”: in this state, the task has not yet been completed. Work may or may - not be ongoing. -* “Completed”: in this state, all work on a task has been completed. -* “Abandoned”: in this state, no further work on a task will be performed, but - the task has not been completed. - -Most real-world workflows introduce some intermediate states that tie into -process-related handoffs. - -For software, I see these divisions, in various combinations, frequently: - -* “Open”: - * “Unverified”: further work needs to be done to decide whether the task - should be completed. - * “In Development”: someone is working on the code and asset changes - necessary to complete the task. This occasionally subsumes preliminary - work, too. - * “In Testing”: code and asset changes are ostensibly complete, - but need testing to validate that the task has been completed - satisfactorily.
-* “Completed”: - * “Development Completed”: work (and possibly testing) has been completed - but the task's results are not yet available to external users. - * “Released”: work has been completed, and external users can see and use - the results. -* “Abandoned”: - * “Cannot Reproduce”: common in bug/defect tasks, to indicate that the - task doesn't contain enough information to render the bug fixable. - * “Won't Complete”: the task is well-understood and theoretically - completable, but will not be completed. - * “Duplicate”: the task is identical to, or closely related to, some other - task, such that completing either would be equivalent to completing - both. - * “Invalid”: the task isn't relevant, is incompletely described, doesn't - make sense, or is otherwise not appropriate work for the team using the - tracker. - -None of these are universal. - -Transitions show how a task moves from state to state. - -* Driven by external factors (dev work leads to tasks being marked completed) - * Explicit transitions: “mark this task as completed” - * Implicit transitions: “This commit also completes these tasks” -* Drive external factors (tasks marked completed are emailed to testers) - -States implicitly describe a _belief_ or a _desire_ about the future of the -task, which is a human artifact and may be wrong or overly hopeful. Tasks can -transition to “Completed” or “Abandoned” states when the work hasn't actually -been completed or abandoned, or from “Completed” or “Abandoned” to an “Open” -state to note that the work isn't as done as we thought it was. _This is a -feature_ and trackers that assume every transition is definitely true and -final encourage ugly workarounds like duplicating tickets to reopen them. - -## Speciation - -I mentioned above that bugs are a kind of task. The ways in which bugs are -“different” is interesting: - -* Good bugs have a well-defined reproduction case - steps you can follow to - demonstrate and test them. 
-* Good bugs have a well-described expected behaviour. -* Good bugs have a well-described actual behaviour. - -Being able to support this kind of highly detailed speciation of task types -without either bloating the tracker with extension points (JIRA) or -shoehorning all features into every task type (Redmine) is hard, but -necessary. - -Supporting structure helps if it leads to more interesting or efficient ways -of using tasks to drive and understand work. - -Bugs are not the only “special” kind of task: - -* “Feature” tasks show up frequently, and speciate on having room for - describing specs and scope. -* “Support ticket” tasks show up in a few trackers, and speciate dramatically - as they tend to be tasks describing the work of a single incident rather - than tasks describing the work on some shared aspect, so they tend to pick - up fields for relating tickets to the involved parties. (Arguably, incident - tickets have needs so drastically different that you should use a dedicated - incident-management tool, not a task/bug tracker.) - -Other kinds are possible, and you've probably seen them in the wild. - -Ideally, speciation happens to support _widespread_ specialized needs. Bug -repro is a good example; every task whose goal is to fix a defect should -include a clear understanding of the defect, both to allow it to be fixed and -to allow it to be tested. Adding specialized data for bugs supports that by -encouraging clearer, more structured descriptions of the defect (with implicit -“fix this” as the task). - -## Implementation notes - -If we reduce task tracking to “record changes to fields and record discussion -comments, on a per task basis,” we can describe the current state of a ticket -using the “most recent” values of each field and the aggregate of all recorded -comments. This can be done ~2 ways: - -1. “Centralized” tracking, where each task has a single, total order of - changes. Changes are mediated through a centralized service. -2. 
“Decentralized” tracking, where each task has only a partial order over the - history of changes. Changes are mediated by sharing sets of changes, and by - appending “reconciliation” changes to resolve cases where two incomparable - changes modify the same field(s). The most obvious partial order is a - digraph. - -Centralized tracking is a well-solved problem. Decentralized tracking so far -seems to rely heavily on DSCM tools (Git, Mercurial, Fossil) for resolving -conflicts. - -The “work offline” aspect of a distributed tracker is less interesting -inasmuch as task tracking is a communications tool. Certain kinds of changes -should be published and communicated as early as possible so as to avoid -misunderstandings or duplicated work. - -Being able to separate the mechanism of how changes to tasks are recorded from -the policy of which library of tasks is “canonical” is potentially useful as -an editorial tool and for progressive publication to wider audiences as work -progresses. - -Issue tracking is considerably more amenable to append-only implementations -than SCM is, even if you dislike history-editing SCM workflows. This suggests -that Git is a poor choice of issue-tracking storage backends...
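The "record changes, derive state" reduction is easy to demonstrate for the centralized case. A minimal sketch (field names invented): current task state is a fold over a totally ordered change log, with last-write-wins fields and aggregated comments.

```python
# Minimal sketch of "centralized" append-only tracking: a task's current state
# is the most recent value of each field plus all recorded comments, derived
# by folding over a totally ordered change log. Field names are illustrative.

changes = [
    {"seq": 1, "field": "state", "value": "Open"},
    {"seq": 2, "field": "responsible", "value": "gianna"},
    {"seq": 3, "comment": "Fix merged; awaiting release."},
    {"seq": 4, "field": "state", "value": "Completed"},
]

def current_state(log):
    fields, comments = {}, []
    for change in sorted(log, key=lambda c: c["seq"]):
        if "field" in change:
            fields[change["field"]] = change["value"]  # last write wins
        else:
            comments.append(change["comment"])  # comments only aggregate
    return fields, comments

fields, comments = current_state(changes)
```

Reopening a "Completed" task is just one more appended change, not a destructive edit, which is exactly the property the workflow section argues for.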
diff --git a/wiki/dev/twigs.md b/wiki/dev/twigs.md deleted file mode 100644 index c3c7505..0000000 --- a/wiki/dev/twigs.md +++ /dev/null @@ -1,24 +0,0 @@ -# Branches and Twigs - -## Twigs - -* Relatively short-lived -* Share the commit policy of their parent branch -* Gain little value from global names -* Examples: most “topic branches” are twigs - -## Branches - -* Relatively long-lived -* Correspond to differences in commit policy -* Gain lots of value from global names -* Examples: git-flow 'master', 'develop', &c; hg 'stable' vs 'default'; - release branches - -## Commit policy - -* Decisions like “should every commit pass tests?” and “is rewriting or - deleting a commit acceptable?” are, collectively, the policy of a branch -* Can be very formal or even tool-enforced, or ad-hoc and fluid -* Shared understanding of commit policy helps get everyone's expectations - lined up, easing other SCM-mediated conversations diff --git a/wiki/dev/webapp-versions.md b/wiki/dev/webapp-versions.md deleted file mode 100644 index ce800e9..0000000 --- a/wiki/dev/webapp-versions.md +++ /dev/null @@ -1,27 +0,0 @@ -# Semver Is Wrong For Web Applications - -[Semantic Versioning](http://semver.org) (“Semver”) is a great idea, not least -because it's more of a codification of existing practice than a totally novel -approach to versioning. However, I think it's wrong for web applications. - -Modern web applications tend to be either totally stagnant - in which case -versioning is irrelevant - or continuously upgraded. Users have no, or very -little, choice as to which version to run: either they run the version currently -on the site, or no version at all. Without the flexibility to choose to run a -specific version, Semver's categorization of versions by what compatibility -guarantees they offer is at best misleading and at worst irrelevant and -insulting. 
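When users can't choose a version, the useful property of a version identifier is plain ordering, not compatibility semantics. A minimal sketch (the format and function name are invented) of a date-plus-serial identifier whose text ordering matches deploy order:

```python
from datetime import date

# Hypothetical ordered version identifier for a continuously deployed web app:
# an ISO date plus a same-day build serial. Lexicographic comparison of the
# resulting strings agrees with deployment order.
def build_version(build_date: date, serial: int) -> str:
    return f"{build_date.isoformat()}.{serial:03d}"

v1 = build_version(date(2014, 3, 9), 1)   # "2014-03-09.001"
v2 = build_version(date(2014, 3, 9), 2)   # "2014-03-09.002"
v3 = build_version(date(2014, 11, 30), 1)  # "2014-11-30.001"
assert v1 < v2 < v3  # string order matches deploy order
```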
- -Web applications must still be _versioned_; internal users and operators must be -able to trace behavioural changes through to deployments and backwards from -there to [code changes](commit-messages). The continuous and incremental nature -of most web development suggests that a simple, ordered version identifier may -be more appropriate: a [build](builds) serial number, or a version _date_, or -otherwise. - -There are _parts_ of web applications that should be semantically versioned: as -the Semver spec says, “Once you identify your public API, you communicate -changes to it with specific increments to your version number,” and this remains -true on the web: whether you choose to support multiple API versions -simultaneously, or to discard all but the latest API version, a semantic version -number can be a helpful communication tool _about that API_. diff --git a/wiki/dev/webapps.md b/wiki/dev/webapps.md deleted file mode 100644 index c4d99aa..0000000 --- a/wiki/dev/webapps.md +++ /dev/null @@ -1,5 +0,0 @@ -# Webapps From The Ground Up - -What does a web application do? It sequences side effects and computation. (This should sound familiar: it's what _every_ program does.) - -Modern web frameworks do their level best to hide this from you, encouraging code to freely intermix computation, data access, event publishing, logging, responses, _asynchronous_ responses, and the rest. This will damn you to an eternity of debugging. diff --git a/wiki/dev/webpack.md b/wiki/dev/webpack.md deleted file mode 100644 index 003152d..0000000 --- a/wiki/dev/webpack.md +++ /dev/null @@ -1,236 +0,0 @@ -# A Compiler For The Web - -“Compilation” - the translation of code from one language into another - is the manufacturing step of software development. During compilation, the source code, which is written with a human reader in mind and which uses human-friendly abstractions, becomes something the machine can execute. 
It is during this manufacturing step that a specific design (the application's source code) is realized in a form that can be delivered to users (or rather, executed by their browsers). - -Historically, Javascript has had no compilation process. Design and manufacturing were a single process: the browser environment allows developers to write scripts exactly as they'll be delivered to the browser, with no intervening steps. That's a useful property: most notably, it enables the “edit, save, and reload” iteration process that's so popular and so pleasant to work with. However, Javascript's target environment has a few weaknesses that limit the scale of the project you can write this way: - -* There's no built-in way to do modular development. All code shares a single, global namespace, and all dependencies have to be resolved and loaded - in the right order - by the developer. If you include third-party code in your project, as a developer you have to obtain that code from somewhere, and insert that into the page. You have to constantly evaluate the tradeoffs between the convenience of third-party content delivery networks versus the reliability of including third-party code directly in your app's files as-is versus the performance of concatenating it and minifying it into your main script. - -* Javascript as a language evolves much faster than browsers do. (Given the break-neck pace of browser evolution, that's really saying something.) Programs written using newer Javascript features, such as the `import` statement (see above) or the compact arrow notation for function literals, require some level of translation before a browser can make sense of the code. Developers targeting the browser directly must balance the convenience offered by new language features against the operational complexity of the translation process.
- -Historically, the Javascript community has been fairly reluctant to move away from the rapid iteration process provided by the native Javascript ecosystem in the browser. More recently, web application development has reached a stage of maturity where those two problems carry much more weight in culture and decision-making than they did in the past, and that attitude has started to change. In the last few years we've seen the rise of numerous Javascript translators (compilers, by another name), and frameworks for executing those translators in a repeatable, reproducible way. - -# An Aside About Metaphors - -Physical manufacturing processes tend to have cost structures where the design step is, unit-wise, expensive, but happens once, while manufacturing is unit-wise quite cheap, but happens endlessly often over the life of the product. Software manufacturing processes are deeply weird by comparison. In software, the design step is, unit-wise, _even more_ expensive, and it happens repeatedly to what is notionally the same product, over most of its life, while the manufacturing step happens a single time, for so little cost that it's rarely worth accounting for. - -It's taken a long time to teach manufacturing-trained business people to stop treating development - the design step - like a manufacturing step, but we're finally getting there. Unfortunately, unlike physical manufacturing, software manufacturing is so highly automated that it produces no jobs, even though it's complex enough to support an entire ecosystem of sophisticated, high-quality tools. A software “factory,” for all intents and purposes, operates for free. - -# Webpack - -Webpack is a [compiler system](https://www.destroyallsoftware.com/talks/the-birth-and-death-of-javascript) for the web.
- -Webpack's compilation process ingests human-friendly source code in a number of languages: primarily Javascript, but in principle any language that can be run by _some_ service the browser provides, including CSS, images, text, and markup. With the help of extensions, it can even ingest things the browser _can't_ serve, such as ES2015 Javascript, or Sass files. It emits, as a target, “bundles” of code which can be loaded using the native tools provided by the browser platform: script tags, stylesheet links, and so on. - -It provides, out of the box, solutions to the two core problems of browser development. Webpack provides a lightweight, non-novel module system to allow developers to write applications as a system of modules with well-defined interfaces, even though the browser environment does not have a module loader. Webpack also provides a system of “loaders” which can apply transformations to the input code, including the replacement of novel language features with their more-complex equivalents in the browser. - -Webpack differentiates itself from its predecessors in a few key ways: - -* It targets the whole browser runtime, rather than Javascript specifically. This allows it to include non-Javascript resources, such as stylesheets, in a coherent and consistent way; having a single tool that processes all of your source assets drastically reduces the complexity costs developers have to spend to maintain their asset processing system. - - Targeting the browser as a whole also allows Webpack to offer some fairly sophisticated features. Code splitting, for example, allows developers to partition their code so that rarely-used sections are only loaded when actually needed. - -* Webpack's output format is, by default, extremely readable and easy to diagnose.
The correspondences between source code and the running application are clear, which allows defects found in the running application to be addressed in the original code without introducing extra effort to work backwards from Webpack's output. (It also handles source maps quite well.) - -* Webpack's hooks into the application's source code are straight-forward and non-novel. Webpack can ingest sources written using any of three pre-existing Javascript module systems - AMD, CommonJS, and UMD - without serious (or, often, any) changes. Where Webpack offers novel features, it offers them as unobtrusive extensions of existing ideas, rather than inventing new systems from scratch. - -* Finally, Webpack's human factors are quite good. The Webpack authors clearly understand the value of the human element; the configuration format is rich without being overly complex, and the watch system works well to keep the “edit, save, and reload” workflow functional and fast while adding a compile step to the Javascript development process. - -Webpack is not without tradeoffs, however. - -* Webpack's design makes it difficult to link external modules without copying them into the final application. While a classic Javascript app can, for example, reuse a library like jQuery from a CDN, a Webpack application effectively must contain its own copy of that library. There are workarounds for this, such as presuming that the `$` global will be available even without an appropriate `require`, but they're awkward to work with and difficult to reason about in larger codebases. - -* The module abstraction can hide a really amazing amount of [code bloat](http://idlewords.com/talks/website_obesity.htm) from developers, and Webpack doesn't provide much tooling for diagnosing or eliminating that bloat. For example, on a personal project, adding `var _ = require('lodash')` to my app caused the Webpack output to grow by a whopping half a megabyte. Surprise! 
- - Worse, given the proliferation of modules on NPM (which are almost all installable via Webpack), an app using a higher-level framework and a few third-party libraries is almost certain to contain multiple modules with overlapping capabilities or even overlapping APIs. When you have to vet every module by hand, this problem becomes apparent to the developer very quickly, but when it's handled automatically, it's very easy for module sets to grow staggeringly large. - -* Webpack doesn't eliminate modules during compilation. Instead, it injects a small module loader into your app (the “runtime”, by analogy with the runtime libraries for other languages) to stitch your modules together inside the browser. This code is generated at compile time, and can contain quite a bit of logic if you use the right plugins. In most cases, the cost of sending the Webpack runtime to your users is small, but it's worth being aware of. - -* Finally, Webpack's configuration system is behaviour-oriented rather than process-oriented, which gives it a very rigid structure. Most of the exceptions from its canned process are either buried in loaders or provided by plugins, so the plugin system ends up acting as a way to wedge arbitrary complexity back in after Webpack's core designed it out. - -On balance, I've been very impressed with Webpack, and have found it to be a pretty effective way to work with browser applications. If you're not using something like Ember that comes with a pre-baked toolkit, then you can probably improve your week by using Webpack to build your Javascript apps. - -# Tiny Decisions - -To give a sense of what using Webpack is like, here's my current `webpack.config.js`, annotated with the decisions I've made so far and some of the rationales behind them. - -This setup allows me to run `webpack` on the CLI to compile my sources into a working app, or `webpack --watch` to leave Webpack running to recompile my app for me as I make changes to the sources.
The application is written using the React framework, and uses both React's JSX syntax for components and many ES2015 language features that are unavailable in the browser. It also uses some APIs that are available in some browsers but not in others, and includes polyfills for those interfaces. - -You can see the un-annotated file [on Github](https://github.com/ojacobson/webpack-starter/blob/9722f2c873a956ad527947db49bbbe8ecdb4606c/webpack.config.js). - - 'use strict' - - var path = require('path') - var keys = require('lodash.keys') - -I want to call this `require` out - I've used a similar pattern in my actual app code. Lodash, specifically, has capability bundles that are much smaller than the full Lodash codebase. Using `var _ = require('lodash')` grows the bundle by 500kb or so, while this only adds about 30kb. - - var webpack = require('webpack') - var HtmlWebpackPlugin = require('html-webpack-plugin') - var ExtractTextPlugin = require("extract-text-webpack-plugin") - - var thisPackage = require('./package.json') - -We'll see where all of these requires get used later on. - - module.exports = { - entry: { - app: ['app.less', 'app'], - vendor: keys(thisPackage.dependencies), - }, - -Make two bundles: - -* One for application code and stylesheets. - -* One for “vendor” code, computed from `package.json`, so that app changes don't _always_ force every client to re-download all of React + Lodash + yada yada. In `package.json`, the `dependencies` key holds only dependencies that should appear in the vendor bundle. All other deps appear in `devDependencies`, instead. Subverting the dependency conventions like this lets me specify the vendor bundle exactly once, rather than having to duplicate part of the dependency list here in `webpack.config.js`. - - Because the dependencies are listed as entry point scripts, they will always be run when Webpack loads `vendor.[hash].js`.
This makes the vendor bundle an appropriate place both for `require()`able modules and for polyfills that operate through side effects on `window` or other global objects. - -This config also invents a third bundle, below. I'll talk about that when I get there. - -A lot of this bundle structure is motivated by the gargantuan size of the libraries I'm using. The vendor bundle is approximately two megabytes in my real app, and includes not just React but a number of supporting libraries. Reusing the vendor bundle between versions helps cut down on the number of times users have to download all of that code. I need to address this, but being conscious of browser caching behaviours helps for now. - - resolve: { - root: [ - path.resolve("src"), - ], - -Some project layout: - -* `PROJECT/src`: Input files for Webpack compilation. - -All inputs go into a single directory, to simplify Webpack file lookups. Separating inputs by type (`js`, `jsx`, `less`, etc) would be consistent with other tools, but makes operating Webpack much more complicated. - - // Automatically resolve JSX modules, like JS modules. - extensions: ["", ".webpack.js", ".web.js", ".js", ".jsx"], - }, - -This is a React app, so I've added `.jsx` to the list of default suffixes. This allows constructs like `var MyComponent = require('MyComponent')` to behave as developers expect, without requiring the consuming developer to keep track of which language `MyComponent` was written in. - -I could also have addressed this by treating all `.js` files as JSX sources. This felt like a worse option; the JSX preprocessing step _looks_ safe on pure-JS sources, but why worry about it when you can be explicit about which parser to use? - - output: { - path: path.resolve("dist/bundle"), - publicPath: "/bundle/", - -More project layout: - -* `PROJECT/dist`: the content root of the web app. Files in `/dist` are expected to be served by a web server or placed in a content delivery network, at the root path of the host. 
- - * `PROJECT/dist/bundle`: Bundled Webpack outputs for the app. A separate directory makes it easier to set Webpack-specific rules in web servers, which we exploit later in this configuration. - -I've set `publicPath` so that dynamically-loaded chunks (if you use `require.ensure`, for example) end up with the right URLs. - - filename: "[name].[chunkhash].js", - -Include a stable version hash in the name of each output file, so that we can safely set `Cache-Control` headers to have browsers store JS and stylesheets for a long time, while maintaining the ability to redeploy the app and see our changes in a timely fashion. Setting a long cache expiry for these means that the user only pays the transfer costs (power, bandwidth) for the bundles on the first pageview after each deployment, or after their browser cache forgets the site. - -For each bundle, so long as the contents of that bundle don't change, neither will the hash. Since we split vendor code into its own chunk, _often_ the vendor bundle will end up with the same hash even in different versions of the app, further cutting down the number of times the user has to download the (again, massive) dependencies. - - }, - - module: { - loaders: [ - { - test: /\.js$/, - exclude: /node_modules/, - loader: "babel", - query: { - presets: ['es2015'], - plugins: ['transform-object-rest-spread'], - }, - }, - -You don't need this if you don't want it, but I've found ES2015 to be a fairly reasonable improvement over Javascript. Using an exclude, we treat _local_ JS files as ES2015 files, translating them with Babel before including them in the bundle; I leave modules included from third-party dependencies alone, because I have no idea whether I should trust Babel to do the right thing with someone else's code, or whether it already did the right thing. 
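For a concrete sense of what this translation buys me, here's the flavour of code the loader lets me write, along with my rough approximation (not Babel's literal output) of what it compiles down to for older browsers:

```javascript
// ES2015 as written in the app: const bindings and an arrow function.
const nums = [1, 2, 3];
const doubled = nums.map(n => n * 2);

// Babel's es2015 preset rewrites it to roughly this ES5 equivalent:
var doubled5 = nums.map(function (n) { return n * 2; });

console.log(doubled, doubled5); // both [ 2, 4, 6 ]
```

The two versions behave identically; the point of the loader is that I only ever write and debug the first form.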
- -I've added `transform-object-rest-spread` because the app I'm working on makes extensive use of `return {...state, modified: field}` constructs, and that syntax is way easier to work with than the equivalent `return Object.assign({}, state, {modified: field})`. - - { - test: /\.jsx$/, - exclude: /node_modules/, - loader: "babel", - query: { - presets: ['react', 'es2015'], - plugins: ['transform-object-rest-spread'], - }, - }, - -Do the same for _local_ `.jsx` files, but additionally parse them using Babel's React driver, to translate `<SomeComponent />` into appropriate React calls. Once again, leave the parsing of third-party code alone. - - { - test: /\.less$/, - exclude: /node_modules/, - loader: ExtractTextPlugin.extract("css?sourceMap!less?sourceMap"), - }, - -Compile `.less` files using `less-loader` and `css-loader`, preserving source maps. Then feed them to a plugin whose job is to generate a separate `.css` file, so that they can be loaded by a `<link>` tag in the HTML document. The alternative, `style-loader`, relies on DOM manipulation at runtime to load stylesheets, which both prevents it from parallelizing with script loading and causes some additional DOM churn. - -We'll see where `ExtractTextPlugin` actually puts the compiled stylesheets later on. - - ], - }, - - plugins: [ - new webpack.optimize.OccurrenceOrderPlugin(/* preferEntry=*/true), - -This plugin causes webpack to order bundled modules such that the most frequently used modules have the shortest identifiers (lexically; 9 is shorter than 10 but the same length as 2) in the resulting bundle. Providing a predictable ordering is irrelevant semantically, but it helps keep the vendor bundle ordered predictably. - - new webpack.optimize.CommonsChunkPlugin({ - name: 'vendor', - minChunks: Infinity, - }), - -Move all the modules the `vendor` bundle depends on into the `vendor` bundle, even if they would otherwise be placed in the `app` bundle. (Trust me: this is a thing.
Webpack's algorithm for locating modules is surprising, but consistent.) - - new webpack.optimize.CommonsChunkPlugin({ - name: 'boot', - chunks: ['vendor'], - }), - -Hoo boy. This one's tricky to explain, and doesn't work very well regardless. - -The facts: - -1. This creates the third bundle (“boot.[chunkhash].js”) I mentioned above, and makes the contents of the `vendor` bundle “children” of it. - -2. This plugin will also put the runtime code, which includes both its module loader (which is the same from build to build) and a table of bundle hashes (which is not, unless the bundles are the same), in the root-most bundle. - -3. I really don't want the hash of the `vendor` bundle changing without a good reason, because the `vendor` bundle is grotesquely bloated. - -This code effectively moves the Webpack runtime to its own bundle, which loads quickly (it's only a couple of kilobytes long). This bundle's hash changes on nearly every build, so it doesn't get reused between releases, but by moving that change to this tiny bundle, we get to reuse the vendor bundle as-is between releases a lot more often. - -Unfortunately, code changes in the app bundle _can_ cause the vendor bundle's constituent modules to be reordered or renumbered, so it's not perfect: sometimes the `vendor` bundle's hash changes between versions even though it contains an identical module list with different identifiers. So it goes: the right fix here is probably to shrink the bundle and to re-merge it into the `app` bundle. - - new ExtractTextPlugin("[name].[contenthash].css"), - -Emit collected stylesheets into a separate bundle, named after the entry point. Since the only entry point with stylesheets is the `app` entry point, this creates `app.[hash].css` in the `dist/bundle` directory, right next to `app.[hash].js`. 
- - new HtmlWebpackPlugin({ - // put index.html outside the bundle/ subdir - filename: '../index.html', - template: 'src/index.html', - chunksSortMode: 'dependency', - }), - -Generate the entry point page from a template (`PROJECT/src/index.html`), rather than writing it entirely by hand. - -You may have noticed that _all four_ of the bundles generated by this build have filenames that include generated chunk hashes. This plugin generates the correct `<script>` tags and `<link>` tags to load those bundles and places them in `dist/index.html`, so that I don't have to manually correct the index page every time I rebuild the app. - - ], - - devtool: '#source-map', - -Make it possible to run browser debuggers against the bundled code as if it were against the original, unbundled module sources. This generates the source maps as separate files and annotates the bundle with a link to them, so that the (bulky) source maps are only downloaded when a user actually opens the debugger. (Thanks, browser authors! That's a nice touch.) - -The source maps contain the original, unmodified code, so that the browser doesn't need to have access to a source tree to make sense of them. I don't care if someone sees my sources, since the same someone can already see the code inside the webpack bundles. - - } - -Things yet to do: - -* Webpack 2's “Tree Shaking” mode exploits the static nature of ES2015 `import` statements to fully eliminate unused symbols from ES2015-style modules. This could potentially cut out a lot of the code in the `vendor` bundle. - -* [Sean Larkin](https://twitter.com/TheLarkInn) suggests setting `recordsPath` at the top level of the Webpack config object to pin chunk IDs between runs. This works! Unfortunately, some plugins cause the records file to grow every time you run `webpack`, regardless of any changes to the output. This is, obviously, not great. - -* A quick primer on React server-side rendering.
I know this is a Webpack primer, and not a React primer, but React-in-the-wild often relies on Webpack. diff --git a/wiki/dev/whats-wrong-with-jenkins.md b/wiki/dev/whats-wrong-with-jenkins.md deleted file mode 100644 index 4224eb7..0000000 --- a/wiki/dev/whats-wrong-with-jenkins.md +++ /dev/null @@ -1,108 +0,0 @@ -# Something's Rotten in the State of Jenkins - -Automated, repeatable testing is a fairly widely-accepted cornerstone of -mature software development. Jenkins (and its predecessor, Hudson) has the -unique privilege of being both an early player in the niche and -free-as-in-beer. The blog space is littered with interesting articles about -continuous builds, automated testing, and continuous deployment, all of which -conclude with “how do we make Jenkins do it?” - -This is unfortunate, because Jenkins has some serious problems, and I want it -to stop informing the discussion. - -## There's A Plugin For That - -Almost everything in the following can be addressed using one or more plugins -from Jenkins' extensive plugin repository. That's good - a build system you -can't extend is kind of screwed - but it also means that the Jenkins team -haven't felt a lot of pressure to address key problems in Jenkins proper. - -(Plus, the plugin ecosystem is its own kind of screwed. More on that later.) - -To be clear: being able to fix it with plugins does not make Jenkins itself -_good_. Plugins are a non-response to fundamental problems with Jenkins. - -## No Granularity - -Jenkins builds are atomic: they either pass en suite, or fail en suite. Jenkins has no built-in support for recording that basic compilation succeeded, unit tests failed, but linting also succeeded. - -You can fix this by running more builds, but then you run into problems with -... - -## No Gating - -... the inability to wait for multiple upstream jobs before continuing a -downstream job in a job chain. If your notional build pipeline is - -1. Compile, then -2. Lint and unit test, then -3.
Publish binaries for testers/users - -then you need to combine the lint and unit test steps into a single build, or -tolerate occasionally publishing between zero and two copies of the same -original source tree. - -## No Pipeline - -The above are actually symptomatic of a more fundamental design problem in -Jenkins: there's no build pipeline. Jenkins is a task runner: triggers cause -tasks to run, which can cause further triggers. (Without plugins, Jenkins -can't even ensure that chains of jobs all build the same revisions from -source control.) - -I haven't met many projects whose build process was so simple you could treat -it as a single, pass-fail task, whose results are only interesting if the -whole thing succeeds. - -## Plugin the Gap - -To build a functional, non-trivial build process on top of Jenkins, you will -inevitably need plugins: plugins for source control, plugins for -notification, plugins for managing build steps, plugins for managing various -language runtimes, you name it. - -The plugin ecosystem is run on an entirely volunteer basis, and anyone can -get a new plugin into the official plugin registry. This is good, -inasmuch as the barrier to entry _should_ be low and people _should_ be encouraged to -scratch itches, but it also means that the plugin registry is a swamp of -sporadically-maintained one-offs with inconsistent interfaces. - -(Worse, even some _core_ plugins have serious maintenance deficits: have a -look at how long -[JENKINS-20767](https://issues.jenkins-ci.org/browse/JENKINS-20767) was open. -How many Jenkins users use Git?) - -## The Plugin API - -The plugin API also, critically, locks Jenkins into some internal design -problems. The sheer number of plugins, and the sheer number of maintainers, -effectively prevents any major refactoring of Jenkins from making progress.
-Breaking poorly-maintained plugins inevitably pisses off the users who were, -quite happily, using whatever they'd cooked up, but with the maintainership -of plugins so spread out and so sporadic, there's no easy way for the Jenkins -team to, for example, break up the [4,000-line `Jenkins` class](https://github.com/jenkinsci/jenkins/blob/master/core/src/main/java/jenkins/model/Jenkins.java). - -## What Is To Be Done - -Jenkins is great and I'm glad it exists. Jenkins moved the state of the art -for build servers forward very effectively, and successfully out-competed -more carefully-designed offerings that were not, in fact, better: -[Continuum](http://continuum.apache.org) is more or less abandoned, and when -was the last time you saw a -[CruiseControl](http://cruisecontrol.sourceforge.net) (caution: SourceForge) -install? - -It's interesting to compare the state of usability in, e.g., Jenkins, to the -state of usability in some paid-product build systems -([Bamboo](https://www.atlassian.com/software/bamboo) and -[TeamCity](https://www.jetbrains.com/teamcity/) for example) on the above -points, as well as looking at the growing number of hosted build systems -([TravisCI](https://travis-ci.org), [MagnumCI](https://magnum-ci.com)) for -ideas. A number of folks have also written insightful musings on what they -want to see in the next CI tool: Susan Potter's -[Carson](https://github.com/mbbx6spp/carson) includes an interesting -motivating metaphor (if you're going to use butlers, why not use the whole -butler milieu?) and some good observations on how Jenkins lets us all down, -for example. - -I think it's time to put Jenkins to bed and write its successor.
diff --git a/wiki/dev/why-scm.md b/wiki/dev/why-scm.md deleted file mode 100644 index 5985982..0000000 --- a/wiki/dev/why-scm.md +++ /dev/null @@ -1,73 +0,0 @@ -# Why we use SCM systems - -I'm watching a newly-minted co-op student dealing with her first encounter -with Git, unhelpfully shepherded by a developer to whom everything below is -already second nature, so deeply that the reasoning is hard to articulate. It -is not going well. - -I have the same problem, and it could be me trying to give someone an intro to -Git off the top of my head, but it's not, today. For next time, here are my -thoughts. They have shockingly little to do with Git. - -## Assumptions - -* You're working on a software project. -* You know how to read and write code. -* You're human. -* You have end users or customers - people other than yourself who care about - your code. -* Your project is going to take more than a few minutes to reach end of life. - -## The safety net - -Having a record of past states and known-good states means that, when (WHEN) -you write some code that doesn't work, and when (WHEN) you're stumped as to -why, you can throw your broken code away and get to a working state again. It -also helps with less-drastic solutions by letting you run comparisons between -your broken code and working code, which helps narrow down whatever problem -you've created for yourself. - -(Aside: if you're in a shop that “doesn't use source control,” and for -whatever insane reason you haven't already run screaming, this safety net is a -good reason to use source control independently of the organization as a -whole. Go on, it's easy; modern DSCM tools like Mercurial or Git make -importing “external” trees pretty straightforward. Your future self thanks -you.) - -## Historical record - -Having a record of past, released states means you can go back later and -recover how your project has changed over time. 
Even if your commit practices -are terrible, when (WHEN) your users complain that something stopped working a -few months ago and they never bothered to mention it until now, you have some -chance of finding out what caused the problem. Better practices around [commit -messages](commit-messages) and other workflow-related artifacts improve your -chances of finding out _why_, too. - -## Consensus - -Every SCM system and every release process is designed to help the humans in -the loop agree on what, exactly, the software being released looks like and -whether or not various releasability criteria have been met. It doesn't matter -if you use rolling releases or carefully curate and tag every release after -months of discussion, you still need to be able to point to a specific version -of your project's source code and say “this will be our next release.” - -SCM systems can help direct and contextualize that discussion by recording the -way your project has changed during those discussions, whether that's part of -development or a separate post-“freeze” release process. - -## Proposals and speculative development - -Modern SCM systems (other than a handful of dismal early attempts) also help -you _propose_ and _discuss_ changes. Distributed source control systems make -this particularly easy, but even centralized systems can support workflows -that record speculative development in version control. The ability to discuss -specific changes and diffs, either within a speculative line of development or -between a proposed feature and the mainline code base, is incredibly powerful. - -## The bottom line - -It's about the people, not the tools, stupid. Explaining how Git works to -someone who doesn't have a good grasp on the relationship between source -control tools and long-term, collaborative software development won't help.
diff --git a/wiki/devops/autodeploy.md b/wiki/devops/autodeploy.md deleted file mode 100644 index 801c3eb..0000000 --- a/wiki/devops/autodeploy.md +++ /dev/null @@ -1,38 +0,0 @@ -# Notes towards automating deployment - -This is mostly aimed at the hosted-apps folks; deploying packaged software for -end users requires a slightly different approach. - -## Assumptions - -1. You have one or more _services_ to deploy. (If not, what are you doing -here?) - -2. Your services are tracked in _source control_. (If not, go sort that out, -then come back. No, seriously, _now_.) - -3. You will be deploying your services to one or more _environments_. An -environment is an abstract thing: think “production,” not -“web01.public.example.com.” (If not, where, exactly, will your service run?) - -4. For each service, in each environment, there are one or more _servers_ to -host the service. These servers are functionally identical. (If not, go pave -them and rebuild them using Puppet, Chef, CFengine, or, hell, shell scripts -and duct tape. An environment full of one-offs is the kind of hell I wouldn't -wish on my worst enemy.) - -5. For each service, in each environment, there is a canonical series of steps -that produce a “deployed” system. - ------ - -1. Decide what code should be deployed. (This is a version control activity.) -2. Get the code onto the fucking server. -3. Decide what configuration values should be deployed. (This is also a - version control activity, though possibly not in the same repositories as - the code.) -4. Get the configuration onto the fucking server. -5. Get the code running with the configuration. -6. Log to fucking syslog. -7. When the machine reboots, make sure the code comes back running the same - configuration. 
diff --git a/wiki/devops/continuous-signing.md b/wiki/devops/continuous-signing.md deleted file mode 100644 index 422ec49..0000000 --- a/wiki/devops/continuous-signing.md +++ /dev/null @@ -1,7 +0,0 @@ -# Code Signing on Build Servers - -We sign things so that we can authenticate them later, but authentication is -largely a conscious function. Computers are bad at answering "is this real". - -Major signing systems (GPG, jarsigner) require presentation of credentials at -signing time. CI servers don't generally have safe tools for this. diff --git a/wiki/devops/glassfish-and-upstart.md b/wiki/devops/glassfish-and-upstart.md deleted file mode 100644 index ce5d0eb..0000000 --- a/wiki/devops/glassfish-and-upstart.md +++ /dev/null @@ -1,153 +0,0 @@ -# Glassfish and Upstart - -**Warning**: the article you're about to read is largely empirical. Take -everything in it with a grain of salt, and _verify it yourself_ before putting -it into production. You have been warned. - -The following observations apply to Glassfish 3.1.2.2. Other versions probably -act similarly, but check the docs. - -## `asadmin create-service` - -Glassfish is capable of emitting SysV init scripts for the DAS, or for any -instance. These init scripts wrap `asadmin start-domain` and `asadmin -start-local-instance`. However, the scripts it emits are (justifiably) -minimalist, and it makes some very strong assumptions about the layout of your -system's rc.d trees and about your system's choice of runlevels. The minimal -init scripts avoid any integration with platform “enhancements” (such as -Redhat's `/var/lock/subsys` mechanism and `condrestart` convention, or -Debian's `start-stop-daemon` helpers) in the name of portability, and the -assumptions it makes about runlevels and init layout are becoming -incrementally more fragile as more distributions switch to alternate init -systems with SysV compatibility layers.
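For reference, the Upstart job I eventually ended up with (anticipating the discussion below) looks roughly like this. The paths and domain name are illustrative for a Glassfish 3.x install, not canonical defaults:

```
# /etc/init/glassfish.conf - sketch; adjust paths for your own install
description "Glassfish application server"

start on runlevel [2345]
stop on runlevel [016]

# Launch the JVM directly (no asadmin), so the process doesn't fork and
# Upstart can track it without an `expect` stanza.
chdir /opt/glassfish3/glassfish/domains/domain1
exec java -jar /opt/glassfish3/glassfish/modules/glassfish.jar -domain domain1

# Ask Glassfish to shut down cleanly before Upstart falls back to SIGTERM.
pre-stop exec /opt/glassfish3/bin/asadmin stop-domain domain1
```

Note the absence of both an `expect` stanza and `respawn`; the sections below explain why each is deliberate.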
- -## Fork and `expect` - -Upstart's process tracking mechanism relies on services following one of three -forking models, so that it can accurately track which children of PID 1 are -associated with which services: - -* No `expect` stanza: The service's “main” process is expected not to fork at - all, and to remain running. The process started by upstart is the “main” - process. - -* `expect fork`: The service is expected to call `fork()` or `clone()` once. - The process started by upstart itself is not the “main” process, but its - first child process is. - -* `expect daemon`: The service is expected to call `fork()` or `clone()` - twice. The first grandchild process of the one started by upstart itself is - the “main” process. This corresponds to classical Unix daemons, which fork - twice to properly dissociate themselves from the launching shell. - -Surprisingly, `asadmin`-launched Glassfish matches _none_ of these models, and -using `asadmin start-domain` to launch Glassfish from Upstart is not, as far -as I can tell, possible. It's tricky to debug why, since JVM thread creation -floods `strace` with chaff, but I suspect that either `asadmin` or Glassfish -itself is forking too many times. - -From [this mailing list -thread](https://java.net/projects/glassfish/lists/dev/archive/2012-02/message/9), -though, it appears to be safe to launch Glassfish directly, using `java -jar -GLASSFISH_ROOT/modules/glassfish.jar -domain DOMAIN`. This fits nicely into -Upstart's non-forking expect mode, but you lose the ability to pass VM -configuration settings to Glassfish during startup. Any memory settings or -Java environment properties you want to pass to Glassfish have to be passed to -the `java` command manually. - -You also lose `asadmin`'s treatment of Glassfish's working directory. Since -Upstart can configure the working directory, this isn't a big deal. - -## `SIGTERM` versus `asadmin stop-domain` - -Upstart always stops services by sending them a signal. 
While you can dictate -which signal it uses, you cannot replace signals with another mechanism. -Glassfish shuts down abruptly when it receives `SIGTERM` or `SIGINT`, leaving -some ugly noise in the logs and potentially aborting any transactions and -requests in flight. The Glassfish developers believe this is harmless and that -the server's operation is correct, and that's probably true, but I've not -tested its effect on outward-facing requests or on in-flight operations far -enough to be comfortable with it. - -I chose to run a “clean”(er) shutdown using `asadmin stop-domain`. This fits -nicely in Upstart's `pre-stop` step, _provided you do not use Upstart's -`respawn` feature_. Upstart will correctly notice that Glassfish has already -stopped after `pre-stop` finishes, but when `respawn` is enabled Upstart will -treat this as an unexpected termination, switch goals from `stop` to -`respawn`, and restart Glassfish. - -(The Upstart documentation claims that `respawn` does not apply if the tracked -process exits during `pre-stop`. This may be true in newer versions of -Upstart, but the version used in Ubuntu 12.04 does restart Glassfish if it -stops during `pre-stop`.) - -Yes, this does make it impossible to stop Glassfish, ever, unless you set a -respawn limit. - -Fortunately, you don't actually want to use `respawn` to manage availability. -The `respawn` mode cripples your ability to manage the service “out of band” -by forcing Upstart to restart it as a daemon every time it stops for any -reason. This means you cannot stop a server with `SIGTERM` or `SIGKILL`; it'll -immediately start again. - -## `initctl reload` - -It sends `SIGHUP`. This does not reload Glassfish's configuration. Deal with -it; use `initctl restart` or `asadmin restart-domain` instead. Most of -Glassfish's configuration can be changed on the fly with `asadmin set` or -other commands anyway, so this is not a big limitation. - -## Instances - -Upstart supports “instances” of a service. 
This slots nicely into Glassfish's -ability to host multiple domains and instances on the same physical hardware. -I ended up with a generic `glassfish-domain.conf` Upstart configuration: - - description "Glassfish DAS" - console log - - instance $DOMAIN - - setuid glassfish - setgid glassfish - umask 0022 - chdir /opt/glassfish3 - - exec /usr/bin/java -jar /opt/glassfish3/glassfish/modules/glassfish.jar -domain "${DOMAIN}" - - pre-stop exec /opt/glassfish3/bin/asadmin stop-domain "${DOMAIN}" - -Combined with a per-domain wrapper: - - description "Glassfish 'example' domain" - console log - - # Consider using runlevels here. - start on started networking - stop on deconfiguring-networking - - pre-start script - start glassfish-domain DOMAIN=example - end script - - post-stop script - stop glassfish-domain DOMAIN=example - end script - -## Possible refinements - -* Pull system properties and VM flags from the domain's own `domain.xml` - correctly. It might be possible to abuse the (undocumented, unsupported, but - helpful) `--_dry-run` argument from `asadmin start-domain` for this, or it - might be necessary to parse `domain.xml` manually, or it may be possible to - exploit parts of Glassfish itself for this. - -* The `asadmin` cwd is actually the domain's `config` dir, not the Glassfish - installation root. - -* Something something something password files. - -* Syslog and logrotate integration would be useful. The configurations above - spew Glassfish's startup output and stdout to - `/var/log/upstart/glassfish-domain-FOO.log`, which may not be rotated by - default. 
diff --git a/wiki/devops/notes-on-bootstrapping-grimoire-dot-ca.md b/wiki/devops/notes-on-bootstrapping-grimoire-dot-ca.md deleted file mode 100644 index 36cea2c..0000000 --- a/wiki/devops/notes-on-bootstrapping-grimoire-dot-ca.md +++ /dev/null @@ -1,71 +0,0 @@ -# Notes on Bootstrapping This Host - -Presented without comment: - -* Package updates: - - apt-get update - apt-get upgrade - -* Install Git: - - apt-get install git - -* Set hostname: - - echo 'grimoire' > /etc/hostname - sed -i -e $'s,ubuntu,grimoire.ca\tgrimoire,' /etc/hosts - poweroff - - To verify: - - hostname -f # => grimoire.ca - hostname # => grimoire - -* Add `owen` user: - - adduser owen - adduser owen sudo - - To verify: - - id owen # => uid=1000(owen) gid=1000(owen) groups=1000(owen),27(sudo) - -* Install Puppetlabs Repos: - - wget https://apt.puppetlabs.com/puppetlabs-release-pc1-trusty.deb - dpkg -i puppetlabs-release-pc1-trusty.deb - apt-get update - -* Install Puppet server: - - apt-get install puppetserver - sed -i \ - -e '/^JAVA_ARGS=/ s,2g,512m,g' \ - -e '/^JAVA_ARGS=/ s, -XX:MaxPermSize=256m,,' \ - /etc/default/puppetserver - service puppetserver start - -* Test Puppet agent: - - /opt/puppetlabs/bin/puppet agent --test --server grimoire.ca - - This should output the following: - - Info: Retrieving pluginfacts - Info: Retrieving plugin - Info: Caching catalog for grimoire.ca - Info: Applying configuration version '1446415926' - Info: Creating state file /opt/puppetlabs/puppet/cache/state/state.yaml - Notice: Applied catalog in 0.01 seconds - -* Install environment: - - git init --bare /root/puppet.git - # From workstation, `git push root@grimoire.ca:puppet.git master` to populate the repo - rm -rf /etc/puppetlabs/code/environments/production - git clone /root/puppet.git /etc/puppetlabs/code/environments/production - -* Bootstrap puppet: - - /opt/puppetlabs/bin/puppet agent --test --server grimoire.ca diff --git a/wiki/devops/puppet-2.7-to-3.1.md b/wiki/devops/puppet-2.7-to-3.1.md deleted 
file mode 100644 index aaaf302..0000000 --- a/wiki/devops/puppet-2.7-to-3.1.md +++ /dev/null @@ -1,51 +0,0 @@ -# Notes on upgrading Puppet from 2.7 to 3.1 - -## Bad - -* As usual, you have to upgrade the puppet master first. 2.7 agents can speak - to 3.1 masters just fine, but 3.1 agents cannot speak to 2.7 masters. - -* I tried to upgrade the Puppet master using both `puppet agent` (failed when - package upgrades shut down the puppet master) and `puppet apply` (failed for - Ubuntu-specific reasons outlined below). - -* [This bug](https://projects.puppetlabs.com/issues/19308). - -* You more or less can't upgrade Puppet using Puppet. - -## Good - -* My 2.7 manifests worked perfectly under 3.1. - -* Puppet's CA and SSL certs survived intact and required no maintenance after - the upgrade. - -* The Hiera integration into class parameters works as advertised and really - does help a lot. - -* Once I figured out how to execute it, the upgrade was pretty smooth. - -* No Ruby upgrade! - -* Testing the upgrade in a VM sandbox meant being able to fuck up safely. - [Vagrant](http://www.vagrantup.com) is super awesome. - -## Package Management Sucks - -Asking Puppet to upgrade Puppet went wrong on Ubuntu because of the way Puppet -is packaged: there are three (ish) Puppet packages, and Puppet's resource -evaluation bits try to upgrade and install one package at a time. Upgrading -only “puppetmaster” upgraded “puppet-common” but not “puppet,” causing Apt to -remove “puppet”; upgrading only “puppet” similarly upgraded “puppet-common” -but not “puppetmaster,” causing Apt to remove “puppetmaster.” - -The Puppet aptitude provider for Package resources (which I use instead of -apt-get) also doesn't know how to tell aptitude what to do with config files -during upgrades. This prevented Puppet from being able to upgrade packages -even when running standalone (via `puppet apply`). 
- -Finally, something about the switchover from Canonical's Puppet .debs to -Puppetlabs' .debs caused aptitude to consider all three packages “broken” -after a manual upgrade (`aptitude upgrade puppet puppetmaster`). Upgrading the -packages a second time corrected it; this is the path I eventually took with -my production puppetmaster and nodes. diff --git a/wiki/devops/self-daemonization-sucks.md b/wiki/devops/self-daemonization-sucks.md deleted file mode 100644 index b527da8..0000000 --- a/wiki/devops/self-daemonization-sucks.md +++ /dev/null @@ -1,78 +0,0 @@ -# Self-daemonizing code is awful - -The classical UNIX approach to services is to implement them as “daemons,” -programs that run without a terminal attached and provide some service. The -key feature of a classical daemon is that, when started, it carefully -detaches itself from its initial environment and terminal, then continues -running in the background. - -This is awful and I'm glad modern init replacements discourage it. - -## Process Tracking - -Daemons don't exist in a vacuum. Administrators and owners need to be able to -start and stop daemons reliably, and check their status. The classic -self-daemonization approach makes this impossible. - -Traditionally, daemons run as children of `init` (pid 1), even if they start -out as children of some terminal or startup process. POSIX only provides -deterministic APIs for processes to manage their children and their immediate -parents; the classic daemonization protocol hands the newly-started daemon -process off from its original parent process, which knows how to start and -stop it, to an unsuspecting `init`, which has no idea how this specific -daemon is special. - -The standard workaround has daemons write their own PIDs to a file, but a -file is “dead” data: it's not automatically updated if the daemon dies, and -can linger long enough to contain the PID of some later, unrelated program. 
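A typical PID-file liveness check looks something like this (my own sketch, not code from any particular init script):

```shell
# The usual PID-file liveness check. Even this "careful" version is racy:
# between kill -0 and any later kill -TERM, the process can exit and the
# kernel can hand its pid to an unrelated program.
pid_alive() {
  pidfile="$1"
  [ -r "$pidfile" ] || return 1                   # no file: assume not running
  pid=$(cat "$pidfile" 2>/dev/null) || return 1
  case "$pid" in *[!0-9]*|'') return 1;; esac     # reject garbage contents
  kill -0 "$pid" 2>/dev/null                      # existence check; sends no signal
}
```

Note also that `kill -0` reports “dead” for live processes owned by another user, one more way these checks quietly lie.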
-PID file validity checks generally suffer from subtle (or, sometimes, quite -gross) race conditions. - -## Complexity - -The actual _code_ to correctly daemonize a process is surprisingly complex, -given the individual interfaces' relative simplicity: - -* The daemon must start its own process group - -* The daemon must detach from its controlling terminal - -* The daemon should close (and may reopen) file handles inherited from its - parent process (generally, a shell) - -* The daemon should ensure its working directory is predictable and - controllable - -* The daemon should ensure its umask is predictable and controllable - -* If the daemon uses privileged resources (such as low-numbered ports), it - should carefully manage its effective, real, and session UID and GIDs - -* Daemons must ensure that all of the above steps happen in signal-safe ways, - so that a daemon can be shut down sanely even if it's still starting up - -See [this list](http://www.freedesktop.org/software/systemd/man/daemon.html) -for a longer version. It's worse than you think. - -All of this gets even more complicated if the daemon has its own child -processes, a pattern common to network services. Naturally, a lot of daemons -in the real world get some of these steps wrong. - -## The Future - -[Supervisord](http://supervisord.org), -[Foreman](http://ddollar.github.io/foreman/), -[Upstart](http://upstart.ubuntu.com), -[Launchd](https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man1/launchctl.1.html), -[systemd](http://www.freedesktop.org/wiki/Software/systemd/), and [daemontools](http://cr.yp.to/daemontools.html) all -encourage services _not_ to self-daemonize by providing a sane system for -starting the daemon with the right parent process and the right environment -in the first place. 
- -This is a great application of -[DRY](http://c2.com/cgi/wiki?DontRepeatYourself), as the daemon management -code only needs to be written once (in the daemon-managing daemon) rather -than many times over (in each individual daemon). It also makes daemon -execution more predictable, since daemons “in production” behave more like -they do when run attached to a developer's console during debugging or -development. diff --git a/wiki/email.md b/wiki/email.md deleted file mode 100644 index 53350a8..0000000 --- a/wiki/email.md +++ /dev/null @@ -1,19 +0,0 @@ -# Why I Didn't Answer Your Email - - - -I get a lot of email, often while I'm in [the middle of something -thought-intensive](http://blog.ninlabs.com/2013/01/programmer-interrupted/). -Managing interruptions and my attention means I have to triage emails based on -only two things: who sent them, and what they wrote in the subject line. If I -didn't answer yours, it's probably not personal: I probably glanced at it when -it arrived and mentally put it on the “later” pile instead of the “now” pile, -and “later” can be a very long time indeed. - -If it was actually important that I read and respond to your email, and I -couldn't tell that from the subject, what the hell is wrong with your writing? -If you can live without my reply, well, you're smart: work something out -without me. I'll probably think it's cool. diff --git a/wiki/ethics/lg-smart-tv.md b/wiki/ethics/lg-smart-tv.md deleted file mode 100644 index f544f02..0000000 --- a/wiki/ethics/lg-smart-tv.md +++ /dev/null @@ -1,98 +0,0 @@ -# LG Smart TVs are dumb - -(Or, corporate entitlement run amok.) - -[According to a UK -blogger](http://doctorbeet.blogspot.co.uk/2013/11/lg-smart-tvs-logging-usb-filenames-and.html), LG Smart TVs not only offer “smart” features, but also -track your viewing habits _extremely_ closely by submitting events back to LG -and to LG's advertising affiliates. 
- -By his analysis, the TV sends an event to LG that identifies the specific TV: - -* every time the viewer changes channels (containing the name of the channel being watched) - -* whenever a USB device is inserted (containing the names of files stored on the USB stick) - -The page comments suggest that the TV sends back information -whenever the menu is opened, as well. - -This information is used to provide targeted advertising, likely to offset -the operational cost of the TV's “intelligent” features. Consumer protections -around personal data and tracking have traditionally been very weak, so it's -not entirely surprising that LG would choose to extract revenue this way -rather than raising the price of the product to cover the operational costs or offering the intelligent features as a subscription service. It is, nonetheless, extremely disappointing. - -## How is this harmful? - -LG uses this information to sell [targeted -advertising](http://us.lgsmartad.com/main/main.lge), extracting value for -itself out of the presence of other people's eyeballs. We've collectively -chosen to accept that content producers -- website owners, for example -- can -sell advertising as a way to augment their income from the content they -produce. However, LG is not a content producer; while you can choose to leave -a website that uses invasive ad tracking, LG's position is more analogous to -that of the web browser itself: they get to watch the customer's habits no matter what they choose to watch. - -There is a material difference between advertising targeted by time slot and -by the content distributors (television networks) on their own behalf, which -has been part of television nearly from its inception, and the kind of -personally-invasive and cross-channel targeted advertising LG is engaging in. 
-LG's ability to correlate viewing habits across every channel and across -non-public media the user watches places them in a position where they may -well derive more information about the people watching TV than those people's -own spouses or parents would be trusted with. We've already seen this kind of -comprehensive statistical modelling go wrong; [Target's advertising folks -landed in hot water last -year](http://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/) after their -purchase-habit-derived models revealed information about a customer that she -didn't even have about herself. - -LG is also taking zero care to ensure that the private information it's -silently extracting from viewers is not disseminated further. The TV sends -viewing information - channel names, file names from USB sticks, and so on - -over the internet in plain text, allowing anyone on the network path between -the TV and LG to intercept it and use it for their own ends. This kind of -information is incredibly useful for targeted fraud, and I'm sure the NSA is -thrilled to have such a useful source of personally-identifying and -habit-revealing data available for free, too. - -## Icing on the cake - -The TV's settings menu contains an item entitled “Collection of watching -info” which can be turned to “On” (the default, even if the customer rejects -the end-user license agreement on the television and disables the -“intelligent” features) or “Off.” It would be reasonable to expect that this -option would stop the TV from communicating viewing habits to the internet; -however, the setting appears to do very little. The article shows packet -captures of the TV submitting viewing information to LG with the setting in -either position. - -The setting also has no help text to guide customers to understanding what it -_actually_ does or to clarify expectations around it. 
- -## LG's stance is morally indefensible - -From the blog post, LG's representative claims that viewers “agree” to this -monitoring when they accept the TV's end-user license agreement, and that -it's up to the retailer to inform the user of the contents of the license -agreement. However: - -1. LG does not ensure that retailers tell potential buyers about the end-user license conditions; they claim it's up to the retailer's individual discretion. - -2. There's no incentive for retailers to tell customers about the license agreement, as the agreement is between LG and the customer, not between the retailer and the customer. Stopping each sale to talk about license terms is likely to reduce the number of sales, too. - -3. It would be impractical for retailers to inform customers of every license for every product they sell, as there are unique licenses for nearly every piece of software and for most computer-enabled products (i.e., most of them). Retailers do not habitually employ contract lawyers to accurately guide customers through the license agreements. - -4. LG's own packaging makes the license agreement effectively unviewable without committing the money to buy a TV. It's only presented on the TV itself after it's installed and turned on (which often voids the customer's ability to return it to the retailer), and in retailer-specific parts of LG's own website, which isn't practically available while the customer is standing in a shop considering which TV to buy. - -It is not reasonable to expect customers to assume their TV will track their -viewing habits and report them back to the manufacturer. This is not a behaviour that TVs have had over their -multi-decade existence, and it's disingenuous for LG to act as though the customer -“should have known,” in any sense, that the TV behaves this way. 
- -LG is hiding behind the modern culture of unfair post-sale contracts to -impose a novel, deeply-invasive program of customer monitoring for their own -benefit, relying on corporate law to protect themselves from consumer -reprisals. This cannot be allowed to continue; vote with your dollars. diff --git a/wiki/ethics/linkedin-intro.md b/wiki/ethics/linkedin-intro.md deleted file mode 100644 index 20b8c5c..0000000 --- a/wiki/ethics/linkedin-intro.md +++ /dev/null @@ -1,187 +0,0 @@ -# LinkedIn Intro is Unethical Software - -[LinkedIn Intro](https://intro.linkedin.com) is a mail filtering service -provided by LinkedIn that inserts LinkedIn relationship data into the user's -incoming and outgoing mail. This allows, for example, LinkedIn to decorate -incoming mail with a toolbar linking to the sender's LinkedIn account, and -to inject a short “signature” of your LinkedIn profile into -outgoing mail automatically. - -These are useful features, and the resulting interaction is quite smooth. -However, the implementation has deep, unsolvable ethical problems. - -LinkedIn Intro reconfigures the user's mobile device, replacing their mail -accounts with proxy mail accounts that use LinkedIn's incoming and outgoing -mail servers. All of LinkedIn's user-facing features are implemented using -HTML and JavaScript injected directly into the email message. - -## Password Concerns - -LinkedIn Intro's proxy mail server must be able to log into the user's real -incoming mail server to retrieve mail, and often must log into the user's real -outgoing mail server to deliver mail with correct SPF or DKIM validation. This -implies that LinkedIn Intro must know the user's email credentials, which it -acquires from their mobile device. Since this is a “use” of a password, not -merely a “validation” of an incoming password, the password must be available -_to LinkedIn_ as plain text. 
There are two serious problems with this that -are directly LinkedIn's responsibility, and a third that's indirect but -important. (Some email providers - notably Google - support non-password, -revocable authentication mechanisms for exactly this sort of use. It's not -clear whether LinkedIn Intro uses these safer mechanisms, but it doesn't -materially change my point.) - -LinkedIn has a somewhat unhappy security history. In 2012, they had a -[security -breach](http://www.nytimes.com/2012/06/11/technology/linkedin-breach-exposes-light-security-even-at-data-companies.html) -that exposed part of their authentication database to the internet. While they -have very likely tightened up safeguards in response, it's unclear whether -those include a cultural change towards more secure practices. Certainly, it -will take longer than the year that's passed for them to build better trust -from the technical community. - -Worse, the breach revealed that LinkedIn was actively disregarding known -problems with password storage for authentication. [Since at least the late -70's](http://cm.bell-labs.com/cm/cs/who/dmr/passwd.ps), the security community -has been broadly aware of weaknesses of unsalted hash-based password -obfuscation. More recently, [it's become -clear](http://www.win.tue.nl/cccc/sha-1-challenge.html) that CPU-optimized -hash algorithms (including MD5 and both SHA-1 and SHA-2) are weak protection -against massively parallel password cracking — cracking that's quite cheap -using modern GPUs. Algorithms like -[bcrypt](http://codahale.com/how-to-safely-store-a-password/) which address -this specific weakness have been available since the late 90's. LinkedIn's -leaked password database was stored using unsalted SHA-1 digests, suggesting -either a lack of research or a lack of understanding of the security -implications of their password system. - -Rebuilding trust after this kind of public shaming should have involved a -major, visible shift in the company's culture. 
There's easy marketing among -techies — a major portion of LinkedIn's audience, even now — to be done by -showing how on the ball you can be about protecting their data; none of this -marketing has appeared. The impact of raising the priority of security issues -throughout product development should be visible from the outside, as risky -features get pushed aside to address more fundamental security issues; no such -shift in priorities has been visible. It is reasonable, observing LinkedIn's -behaviour in the last year, to conclude that LinkedIn, as a company, still -treats data security as an easy problem to be solved with as little effort as -possible. This is not a good basis on which to ask users to hand over their -email passwords. - -While the security community has been making real efforts to educate users to -use a unique password for each service they use, the sad reality is that most -users still use the same password for everything. As LinkedIn Intro must -necessarily store _plain text_ passwords, it will be a very attractive target -for future break-ins, for employee malfeasance, and for United States court -orders. - -## What Gets Seen - -LinkedIn Intro is not selective. Every email that passes through an -Intro-enabled email account is visible, entirely, to LinkedIn. The fact that -the email occurred is fodder for their recommendation engine and for any other -analysis they care to run. The contents may be retained indefinitely, outside -of either the sender's or the recipients' control. LinkedIn is in a position -to claim that Intro users have given it _permission_ to be intrusive into -their email in this way. - -Very few people use a dedicated email account for “corporate networking” and -recruiting activities. 
A CEO (LinkedIn's own example) receives mail pertaining -to many sensitive aspects of a corporation's running: lawsuit notices, gossip -among the exec team, planning emails discussing the future of the company, -financials, email related to external partnerships at the C*O level, and many, -many other things. LinkedIn's real userbase, recruiters and work-seeking -people, often use the same email account for LinkedIn and for unrelated -private activities. LinkedIn _has no business_ reading these emails or even -knowing of their existence, but Intro provides no way to restrict what -LinkedIn sees. - -Users in heavily-regulated industries, such as health care or finance, may be -exposing their whole organization to government interventions by using Intro, -as LinkedIn is not known to be HIPAA, SOX, or PCI compliant. - -The resulting “who mailed what to whom” database is hugely valuable. I expect -LinkedIn to be banking on this; such a corpus of conversational data would -greatly help them develop new features targeting specific groups of users, -and could improve the overall effectiveness of their recommendation engine. -However, it's also valuable to others; as above, this information would be a -gold mine for marketers, a target for break-ins, and, worryingly, _immensely_ -useful to the United States' intelligence apparatus (who can obtain court -orders preventing LinkedIn from discussing their requests, to boot). - -(LinkedIn's recommendation engine also has issues; it's notorious for -[recommending people to their own -ex-partners](http://community.linkedin.com/questions/31650/linkedin-sent-an-ex-girlfriend-a-request-to-someon.html) -and to people actively suing one another. Giving it more data to work with -makes this more likely, especially when the data is largely unrelated to -professional concerns.) - -LinkedIn Intro's injected HTML is also suspect by default. 
Tracking email open -rates is standard practice for email marketing, but Intro allows _LinkedIn_ to -track the open rate of emails _you send_ and of emails _you receive_, -regardless of whether those emails pertain to LinkedIn's primary business or -not. - -## User Education - -All of the risks outlined above are manageable. With proper information, the -end user can make an informed decision as to whether - -* to ignore Intro entirely, or -* to use Intro with a dedicated “LinkedIn Only” email account, or -* to use Intro with everything - -LinkedIn's own marketing materials outline _absolutely none_ of these risks. -They're designed, as most app landing materials are, to make the path to -downloading and configuring Intro as smooth and unthreatening as possible: the -option to install the application is presented before the page describes what -the app _does_, and it never describes how the app _works_ — that information -is never stated outright, not even in Intro's own -[FAQ](https://intro.linkedin.com/micro/faq). Withholding the risks from users -vastly increases the chances of a user making a decision they aren't -comfortable with, or that increases their own risk of social or legal problems -down the road. - -## LinkedIn's Response - -Shortly after Intro's first round of public mockery, a LinkedIn employee -[posted a -response](http://blog.linkedin.com/2013/10/26/the-facts-about-linkedin-intro/) -to some of the security concerns. The post is interesting, and I recommend you -read it. - -The key point about the response is that it underscores how secure Intro is -_for LinkedIn_. It does absolutely nothing to discuss how LinkedIn is curating -its users' security needs. In particular: - -> We isolated Intro in a separate network segment and implemented a -> tight security perimeter across trust boundaries. 
- -A breach in LinkedIn proper may not imply a breach in LinkedIn Intro, and vice -versa, but there must be at least some data passing back and forth for Intro -to operate. The nature and structure of the security mechanisms that permit -the “right” kind of data are not elaborated on; it's impossible to decide how -well they actually insulate Intro from LinkedIn. Furthermore, a breach in -LinkedIn Intro is still incredibly damaging even if it doesn't span LinkedIn -itself. - -> Our internal team of experienced testers also penetration-tested the -> final implementation, and we worked closely with the Intro team to -> make sure identified vulnerabilities were addressed. - -This doesn't address the serious concerns with LinkedIn Intro's _intended_ -use; it also doesn't do much to help users understand how thorough the testing -was or to understand who vetted the results. - -## The Bottom Line - -_If_ LinkedIn Intro works as built, and _if_ their security safeguards are as -effective as they claim and hope, then Intro exposes its users to much greater -risk of password compromise and helps them expose themselves to surveillance, -both government and private. If either of those conditions does not hold, it's -worse. - -The software industry is young, and immature, and wealthy. There is no ethics -body to complain to; had the developers of Intro said “no,” they would very -likely have been replaced by another round of developers who would help -LinkedIn violate their users' privacy. That does not excuse LinkedIn; their -product is vile, and must not be tolerated in the market. 
diff --git a/wiki/ethics/musings.md b/wiki/ethics/musings.md deleted file mode 100644 index b9a899b..0000000 --- a/wiki/ethics/musings.md +++ /dev/null @@ -1,76 +0,0 @@ -# Undirected Musings about Ethics - -## Further reading - -* [The Fantasy and Abuse of the Manipulable User](http://modelviewculture.com/pieces/the-fantasy-and-abuse-of-the-manipulable-user) -* [Ethics for Programmers: Primum non Nocere](https://glyph.twistedmatrix.com/2005/11/ethics-for-programmers-primum-non.html) -* [The Internet with a Human Face](http://idlewords.com/bt14.htm) -* [Ethics vs Morals](http://www.diffen.com/difference/Ethics_vs_Morals) -* [Yes means Yes](http://yesmeansyesblog.wordpress.com) - -## Why bother? - -Everyone _thinks_ they're doing good most of the time. Ethical codes help -guide that sense into alignment with the surrounding social and political -context: doing good for whom, why, and with what kinds of caveats. - -## It's not about engineering, it's about people - -An ethical code for software development should not waste too much space -talking about _engineering practices_. Certainly there is value in getting -more developers and systems people to follow good engineering practice, but -an ethical code should focus on the interaction between trustworthiness, the -greater good, the personal good of _all_ the participants in the system, and -software itself. - -(This comes up in Ethics for Programmers, above.) - -It's no good to build a wonderfully-engineered system that is cheap to run -and easy to integrate with if it systematically disenfranchises and abuses -its users for the benefit of its owners, and that's a problem we actually -have via Facebook, Github, Twitter, and numerous others. - -## Ethical codes are fundamentally extrinsic - -Ethical codes exist so that others can judge our behaviour, not so that we -can judge our own behaviour. - -## Ethical codes must be constraining - -Ethical codes do not exist in a vacuum. 
A code that authorizes its adherents -to behave in any way they see fit, subject only to their own judgement, is no -ethical code at all. We already have that and the results have not been great. - -_This is important_ - a meaningful ethical code for software would probably -cripple most software business models. An ethical code that prioritizes -active consent, for example, completely cripples advertising and analytics, -and puts a big roadblock in buyouts like Instagram's. This may well be good -for society. - -## Integrity is not about contracts or legislation - -Ethics, personal integrity, and group integrity are tangled together, but -modern Western conceptions of group integrity tend to revolve around “does -this group break the law or engender lawsuits,” not “does this group act in -the best interests of people outside of it.” - -## Assumptions - -I've embedded some of my personal morality into the “ethics” articles in this -section, in the absence of a published moral code. Those, obviously, aren't -absolute, but you can reason about their validity if you assume that I -believe the “end user's” privacy and active consent take priority over the -technical cleverness or business value of a software system. - -### Consent and social software - -This has some complicated downstream effects: “active consent” means -something you can't handwave away by putting implied consent (for example, to -future changes) in an EULA or privacy statement. I haven't written much that -calls out this pattern because it's _pervasive_. - -The “end user is the real product” business model most social networks -operate on is fundamentally unethical under this code. It will always be more -valuable to the “real customers” (advertisers, analytics platforms, law -enforcement, and intelligence agencies) for users to be opted into new -measurements by default, _assuming_ consent rather than obtaining it. 
diff --git a/wiki/games/dark-souls.md b/wiki/games/dark-souls.md deleted file mode 100644 index de560b0..0000000 --- a/wiki/games/dark-souls.md +++ /dev/null @@ -1,57 +0,0 @@ -# Dark Souls is pretty important - -The following is from a Slack session and needs work into an actual writeup. - ------ - -> Owen: can I blither, I might need to talk this one out before I figure out what I’m trying to say about the game, it does something interesting with heroic arcs and moral arcs -> -> Alex: Bliiither away! -> -> Owen: so dark souls uses the trappings of heroic myth and the hero’s journey pretty consciously. you start from humble beginnings and go on to smite at god himself to set right the world -> -> except. -> -> except. -> -> at every scale, lordran is a dying world. -> -> you’ve got a curse that’ll eventually leave you a mindless rotted husk. -> -> the people you meet are all depressed and dealing with it in their own unique ways -> -> or responding to the slow collapse of the kingdoms around them -> -> that you will die is inevitable ludically (it’s the name of the HD release, for example: “Prepare To Die”) as well as narratively -> -> there is, very explicitly, no hope. -> -> only different ways to live before the end. -> -> the entire game is an exploration of whether that can be enough for a hero -> -> even your tools for helping other players are finite, and costly (as well as your tools for tormenting them) -> -> the game’s plodding, staccato pacing forces the player to stop and contemplate this on a regular basis -> -> that there is no hope isn’t even questioned, really: there’s no resolution to _why_ everything is dying, and it doesn’t really matter to any of the characters or to any of the game’s plot points -> -> for all that, I find it a surprisingly optimistic game, because the answer “yes, it’s enough that you’re a hero _now_” is one the game considers valid -> -> Alex: Huh. That’s… very contemporary. 
-> -> Owen: okay I think that might be what i got but I think Dark Souls is a rad piece of game art - ------ - -> Owen: ugh no it’s weirder than that, “everything is dying” is a tautological element of the game world. right from the start the game explains the coming fall in terms of a fire dying out -> -> it’s almost the anti-fascism: “everything was once grand, and has fallen” but instead of “and here’s who to blame and how to take it back” it’s “and this is the natural order of things" -> -> (one of the endings riffs on that by allowing you to throw your cursed self into the flames to keep them alight for a while longer; another has you extinguish the sacred flame yourself, and that’s the _good_ ending) -> -> I mean, the player-character dies as a person either way, either as a martyr or as a promethean figure of legend -> -> it’s also strongly implied that the once-grand kingdoms whose rotting corpses you’re rolling around in were, by and large, gloriously shitty places to actually live -> -> as places ruled by feuding god-kings tend to be diff --git a/wiki/git/config.md b/wiki/git/config.md deleted file mode 100644 index 9ee058b..0000000 --- a/wiki/git/config.md +++ /dev/null @@ -1,58 +0,0 @@ -# git-config Settings You Want - -Git comes with some fairly [lkml](http://www.tux.org/lkml/)-specific -configuration defaults. You should fix this. All of the items below can be set -either for your entire login account (`git config --global`) or for a specific -repository (`git config`). - -Full documentation is under `git help config`, unless otherwise stated. - -* `git config user.name 'Your Full Name'` and `git config user.email - 'your-email@example.com'`, obviously. - -* `git config push.default simple` - the default behaviour (called `matching`) - of an unqualified `git push` is to identify pairs of branches by name and - push all matches from your local repository to the remote. 
Given that - branches have explicit “upstream” configuration identifying which, if any, - branch in which, if any, remote they're associated with, this is dumb. The - `simple` mode pushes the current branch to its upstream remote, if and only - if the local branch name and the remote branch name match _and_ the local - branch tracks the remote branch. Requires Git 1.8 or later; will be the - default in Git 2.0. (For older versions of Git, use `upstream` instead, - which does not require that branch names match.) - -* `git config merge.defaultToUpstream true` - causes an unqualified `git - merge` to merge the current branch's configured upstream branch, rather than - being an error. (`git rebase` always has this behaviour. Consistent!) You - should still merge thoughtfully. - -* `git config rebase.autosquash true` - causes `git rebase -i` to parse magic - comments created by `git commit --squash=some-hash` and `git commit - --fixup=some-hash` and reorder the commit list before presenting it for - further editing. See the descriptions of “squash” and “fixup” in `git help - rebase` for details; autosquash makes amending commits other than the most - recent easier and less error-prone. - -* `git config branch.autosetupmerge always` - newly-created branches whose - start point is a branch (`git checkout master -b some-feature`, `git branch - some-feature origin/develop`, and so on) will be configured to have the - start point branch as their upstream. By default (with `true` rather than - `always`) this only happens when the start point is a remote-tracking - branch. - -* `git config rerere.enabled true` - enable “reuse recorded resolution.” The - `git help rerere` docs explain it pretty well, but the short version is that - git can record how you resolve conflicts during a “test” merge and reuse the - same approach when resolving the same conflict later, in a “real” merge. 
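Applied in one pass, the settings above come to a short setup script. A sketch — the name and email are placeholders, and `--global` writes to your account-wide config, so read before running:

```shell
# One-time setup sketch: the settings recommended above, applied
# account-wide. Name and email are placeholders; edit before running.
git config --global user.name 'Your Full Name'
git config --global user.email 'your-email@example.com'
git config --global push.default simple
git config --global merge.defaultToUpstream true
git config --global rebase.autosquash true
git config --global branch.autosetupmerge always
git config --global rerere.enabled true
```

Per-repository overrides still work afterwards: `git config` without `--global` takes precedence over these values in any repository where you set it.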
-
-## For advanced users
-
-A few things are nice when you're getting started, but become annoying when
-you no longer need them.
-
-* `git config advice.detachedHead false` - if you already understand the
-  difference between having a branch checked out and having a commit checked
-  out, and already understand what “detached head” means, the warning on
-  every `git checkout ...some detached thing...` isn't helping anyone. This
-  is also useful in repositories used for deployment, where specific commits
-  (from tags, for example) are regularly checked out.
diff --git a/wiki/git/detached-sigs.md b/wiki/git/detached-sigs.md
deleted file mode 100644
index b94013c..0000000
--- a/wiki/git/detached-sigs.md
+++ /dev/null
@@ -1,298 +0,0 @@
-# Notes Towards Detached Signatures in Git
-
-Git supports a limited form of object authentication: specific object
-categories in Git's internal model can have [GPG](/gpg/terrible) signatures
-embedded in them, allowing the authorship of the objects to be verified using
-[GPG](/gpg/cool)'s underlying trust model. Tag signatures can be used to
-verify the authenticity and integrity of the _snapshot associated with a
-tag_, and the authenticity of the tag itself, filling a niche broadly similar
-to code signing in binary distribution systems. Commit signatures can be used
-to verify the authenticity of the _snapshot associated with the commit_, and
-the authorship of the commit itself. (Conventionally, commit signatures are
-assumed to also authenticate either the entire line of history leading to a
-commit, or the diff between the commit and its first parent, or both.)
-
-Git's existing system has some tradeoffs.
-
-* Signatures are embedded within the objects they sign. The signature is part
-  of the object's identity; since Git is content-addressed, this means that
-  an object can neither be retroactively signed nor retroactively stripped of
-  its signature without modifying the object's identity.
Git's distributed - model means that these sorts of identity changes are both complicated and - easily detected. - -* Commit signatures are second-class citizens. They're a relatively recent - addition to the Git suite, and both the implementation and the social - conventions around them continue to evolve. - -* Only some objects can be signed. While Git has relatively weak rules about - workflow, the signature system assumes you're using one of Git's more - widespread workflows by limiting your options to at most one signature, and - by restricting signatures to tags and commits (leaving out blobs, trees, - and refs). - -I believe it would be useful from an authentication standpoint to add -"detached" signatures to Git, to allow users to make these tradeoffs -differently if desired. These signatures would be stored as separate (blob) -objects in a dedicated `refs` namespace, supporting retroactive signatures, -multiple signatures for a given object, "policy" signatures, and -authentication of arbitrary objects. - -The following notes are partially guided by Git's one existing "detached -metadata" facility, `git notes`. Similarities are intentional; divergences -will be noted where appropriate. Detached signatures are meant to -interoperate with existing Git workflow as much as possible: in particular, -they can be fetched and pushed like any other bit of Git metadata. - -A detached signature cryptographically binds three facts together into an -assertion whose authenticity can be checked by anyone with access to the -signatory's keys: - -1. An object (in the Git sense; a commit, tag, tree, or blob), -2. A policy label, and -3. A signatory (a person or agent making the assertion). - -These assertions can be published separately from or in tandem with the -objects they apply to. - -## Policies - -Taking a hint from Monotone, every signature includes a "policy" identifying -how the signature is meant to be interpreted. 
Policies are arbitrary strings; -their meaning is entirely defined by tooling and convention, not by this -draft. - -This draft uses a single policy, `author`, for its examples. A signature -under the `author` policy implies that the signatory had a hand in the -authorship of the designated object. (This is compatible with existing -interpretations of signed tags and commits.) (Authorship under this model is -strictly self-attested: you can claim authorship of anything, and you cannot -assert anyone else's authorship.) - -The Monotone documentation suggests a number of other useful policies related -to testing and release status, automated build results, and numerous other -factors. Use your imagination. - -## What's In A Signature - -Detached signatures cover the disk representation of an object, as given by - - git cat-file <TYPE> <SHA1> - -For most of Git's object types, this means that the signed content is plain -text. For `tree` objects, the signed content is the awful binary -representation of the tree, _not_ the pretty representation given by `git -ls-tree` or `git show`. - -Detached signatures include the "policy" identifier in the signed content, to -prevent others from tampering with policy choices via `refs` hackery. (This -will make more sense momentarily.) The policy identifier is prepended to the -signed content, terminated by a zero byte (as with Git's own type -identifiers, but without a length field as length checks are performed by -signing and again when the signature is stored in Git). - -To generate the _complete_ signable version of an object, use something -equivalent to the following shell snippet: - - # generate-signable POLICY TYPE SHA1 - function generate-signable() { - printf '%s\0' "$1" - git cat-file "$2" "$3" - } - -(In the process of writing this, I discovered how hard it is to get Unix's -C-derived shell tools to emit a zero byte.) - -## Signature Storage and Naming - -We assume that a userid will sign an object at most once. 
-
-Each signature is stored in an independent blob object in the repository it
-applies to. The signature object (described above) is stored in Git, and its
-hash recorded in `refs/signatures/<POLICY>/<SUBJECT SHA1>/<SIGNER KEY
-FINGERPRINT>`.
-
-    # sign POLICY TYPE SHA1 FINGERPRINT
-    function sign() {
-        local SIG_HASH=$(
-            generate-signable "$@" |
-            gpg --batch --no-tty --sign -u "$4" |
-            git hash-object --stdin -w -t blob
-        )
-        git update-ref "refs/signatures/$1/$3/$4" "$SIG_HASH"
-    }
-
-Stored signatures always use the complete fingerprint to identify keys, to
-minimize the risk of colliding key IDs while avoiding the need to store full
-keys in the `refs` naming hierarchy.
-
-The policy name can be reliably extracted from the ref, as the trailing part
-has a fixed length (in both path segments and bytes) and each ref begins with
-a fixed, constant prefix `refs/signatures/`.
-
-## Signature Verification
-
-Given a signature ref as described above, we can verify and authenticate the
-signature and bind it to the associated object and policy by performing the
-following check:
-
-1. Pick apart the ref into policy, SHA1, and key fingerprint parts.
-2. Reconstruct the signed body as above, using the policy name extracted from
-   the ref.
-3. Retrieve the signature from the ref and combine it with the object itself.
-4. Verify that the policy in the stored signature matches the policy in the
-   ref.
-5. Verify the signature with GPG:
-
-        # verify-gpg POLICY TYPE SHA1 FINGERPRINT
-        verify-gpg() {
-            {
-                git cat-file "$2" "$3"
-                git cat-file blob "refs/signatures/$1/$3/$4"
-            } | gpg --batch --no-tty --verify
-        }
-
-6. Verify that the key fingerprint of the signing key matches the key
-   fingerprint in the ref itself.
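Step 1 is mechanical. A sketch in shell (the function name is mine): because the fingerprint and SHA1 are always the last two path segments and the prefix is constant, everything in between is the policy — which, if I'm reading the ref rules right, may itself contain `/` without creating ambiguity.

```shell
# parse-signature-ref REF
# Split refs/signatures/<POLICY>/<SHA1>/<FINGERPRINT> into the shell
# variables POLICY, SHA1, and FINGERPRINT. The trailing two segments
# are fixed-format, so a policy name containing '/' still parses
# unambiguously.
parse-signature-ref() {
    local rest
    rest="${1#refs/signatures/}"   # drop the constant prefix
    FINGERPRINT="${rest##*/}"      # last segment
    rest="${rest%/*}"
    SHA1="${rest##*/}"             # second-to-last segment
    POLICY="${rest%/*}"            # everything remaining
}
```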
- -The specific rules for verifying the signature in GPG are left up to the user -to define; for example, some sites may want to auto-retrieve keys and use a -web of trust from some known roots to determine which keys are trusted, while -others may wish to maintain a specific, known keyring containing all signing -keys for each policy, and skip the web of trust entirely. This can be -accomplished via `git-config`, given some work, and via `gpg.conf`. - -## Distributing Signatures - -Since each signature is stored in a separate ref, and since signatures are -_not_ expected to be amended once published, the following refspec can be -used with `git fetch` and `git push` to distribute signatures: - - refs/signatures/*:refs/signatures/* - -Note the lack of a `+` decoration; we explicitly do not want to auto-replace -modified signatures, normally; explicit user action should be required. - -## Workflow Notes - -There are two verification workflows for signatures: "static" verification, -where the repository itself already contains all the refs and objects needed -for signature verification, and "pre-receive" verification, where an object -and its associated signature may be being uploaded at the same time. - -_It is impractical to verify signatures on the fly from an `update` hook_. -Only `pre-receive` hooks can usefully accept or reject ref changes depending -on whether the push contains a signature for the pushed objects. (Git does -not provide a good mechanism for ensuring that signature objects are pushed -before their subjects.) Correctly verifying object signatures during -`pre-receive` regardless of ref order is far too complicated to summarize -here. - -## Attacks - -### Lies of Omission - -It's trivial to hide signatures by deleting the signature refs. Similarly, -anyone with access to a repository can delete any or all detached signatures -from it without otherwise invalidating the signed objects. 
-
-Since signatures are mostly static, sites following the recommended no-force
-policy for signature publication should only be affected if relatively recent
-signatures are deleted. Older signatures should be available in one or more
-of the repository users' local repositories; once created, a signature can be
-legitimately obtained from anywhere, not only from the original signatory.
-
-The signature naming protocol is designed to resist most other forms of
-assertion tampering, but straight-up omission is hard to prevent.
-
-### Unwarranted Certification
-
-The `policy` system allows any signatory to assert any policy. While
-centralized signature distribution points such as "release" repositories can
-make meaningful decisions about which signatures they choose to accept,
-publish, and propagate, there's no way to determine after the fact whether a
-policy assertion was obtained from a legitimate source or a malicious one
-with no grounds for asserting the policy.
-
-For example, I could, right now, sign an `all-tests-pass` policy assertion
-for the Linux kernel. While there's no chance on Earth that the LKML team
-would propagate that assertion, if I can convince you to fetch signatures
-from my repository, you will fetch my bogus assertion. If `all-tests-pass` is
-a meaningful policy assertion for the Linux kernel, then you will have very
-few options besides believing that I assert that all tests have passed.
-
-### Ambiguous Policy
-
-This is an ongoing problem with crypto policy systems and user interfaces
-generally, but this design does _nothing_ to ensure that policies are
-interpreted uniformly by all participants in a repository. In particular,
-there's no mechanism described for distributing either prose or programmatic
-policy definitions and checks. All policy information is out of band.
-
-Git already has ambiguity problems around commit signing: there are multiple
-ways to interpret a signature on a commit:
-
-1. 
I assert that this snapshot and commit message were authored as described
-   in this commit's metadata. (In this interpretation, the signature's
-   authenticity guarantees do _not_ transitively apply to parents.)
-
-2. I assert that this snapshot and commit message were authored as described
-   in this commit's metadata, based on exactly the parent commits described.
-   (In this interpretation, the signature's authenticity guarantees _do_
-   transitively apply to parents. This is the interpretation favoured by XXX
-   LINK HERE XXX.)
-
-3. I assert that this _diff_ and commit message were authored as described in
-   this commit's metadata. (No assertions about the _snapshot_ are made
-   whatsoever, and assertions about parentage are barely sensical at all.
-   This meshes with widespread, diff-oriented policies.)
-
-### Grafts and Replacements
-
-Git permits post-hoc replacement of arbitrary objects via both the grafts
-system (via an untracked, non-distributed file in `.git`, though some
-repositories distribute graft lists for end-users to manually apply) and the
-replacements system (via `refs/replace/<SHA1>`, which can optionally be
-fetched or pushed). The interaction between these two systems and signature
-verification needs to be _very_ closely considered; I've not yet done so.
-
-Cases of note:
-
-* Neither signature nor subject replaced - the "normal" case
-* Signature not replaced, subject replaced (by graft, by replacement, by both)
-* Signature replaced, subject not replaced
-* Both signature and subject replaced
-
-It's tempting to outright disable `git replace` during signing and
-verification, but this will have surprising effects when signing a ref-ish
-instead of a bare hash. Since this is the _normal_ case, I think this merits
-more thought. (I'm also not aware of a way to disable grafts without
-modifying `.git`, and having the two replacement mechanisms treated
-differently may be dangerous.)
- -### No Signed Refs - -I mentioned early in this draft that Git's existing signing system doesn't -support signing refs themselves; since refs are an important piece of Git's -workflow ecosystem, this may be a major omission. Unfortunately, this -proposal doesn't address that. - -## Possible Refinements - -* Monotone's certificate system is key+value based, rather than label-based. - This might be useful; while small pools of related values can be asserted - using mutually exclusive policy labels (whose mutual exclusion is a matter - of local interpretation), larger pools of related values rapidly become - impractical under the proposed system. - - For example, this proposal would be inappropriate for directly asserting - third-party authorship; the asserted author would have to appear in the - policy name itself, exposing the user to a potentially very large number of - similar policy labels. - -* Ref signing via a manifest (a tree constellation whose paths are ref names - and whose blobs sign the refs' values). Consider cribbing DNSSEC here for - things like lightweight absence assertions, too. - -* Describe how this should interact with commit-duplicating and - commit-rewriting workflows. diff --git a/wiki/git/integrate.md b/wiki/git/integrate.md deleted file mode 100644 index 801ddd5..0000000 --- a/wiki/git/integrate.md +++ /dev/null @@ -1,41 +0,0 @@ -# Integrating with Git: A Field Guide - -Pretty much everything you might want to do to a Git repository when writing -tooling or integrations should be done by shelling out to one `git` command or -another. - -## Finding Git's trees - -Git commands can be invoked from locations other than the root of the work -tree or git directory. You can find either of those by invoking `git -rev-parse`. - -To find the absolute path to the root of the work tree: - - git rev-parse --show-toplevel - -This will output the absolute path to the root of the work tree on standard -output, followed by a newline. 
Since the work tree's absolute path can contain -whitespace (including newlines), you should assume every byte of output save -the final newline is part of the path, and if you're using this in a shell -script, quote defensively. - -To find the relative path from the current working directory: - - git rev-parse --show-cdup - -This will output the relative path to the root of the work tree on standard -output, followed by a newline. - -For bare repositories, both commands will output nothing and exit with a zero -status. (Surprise!) - -To find *a* path to the root of the git directory: - - git rev-parse --git-dir - -This will output either the relative or the absolute path to the git -directory, followed by a newline. - -All three of these commands will exit with non-zero status when run outside of -a work tree or git directory. Check for it. diff --git a/wiki/git/pull-request-workflow.md b/wiki/git/pull-request-workflow.md deleted file mode 100644 index 700eeb6..0000000 --- a/wiki/git/pull-request-workflow.md +++ /dev/null @@ -1,101 +0,0 @@ -# Life With Pull Requests - -I've been party to a number of discussions with folks contributing to -pull-request-based projects on Github (and other hosts, but mostly Github). -Because of Git's innate flexibility, there are lots of ways to work with pull -requests. Here's mine. - -I use a couple of naming conventions here that are not stock `git`: - -origin -: The repository to which you _publish_ proposed changes - -upstream -: The repository from which you receive ongoing development, and which will - receive your changes. - -## One-time setup - -Do these things once, when starting out on a project. Keep the results around -for later. - -I'll be referring to the original project repository as `upstream` and -pretending its push URL is `UPSTREAM-URL` below. In real life, the URL will -often be something like `git@github.com:someguy/project.git`. 
- -### Fork the project - -Use the repo manager's forking tool to create a copy of the project in your -own namespace. This generally creates your copy with a bunch of useless tat; -feel free to ignore all of this, as the only purpose of this copy is to -provide somewhere for _you_ to publish _your_ changes. - -We'll be calling this repository `origin` later. Assume it has a URL, which -I'll abbreviate `ORIGIN-URL`, for `git push` to use. - -(You can leave this step for later, but if you know you're going to do it, why -not get it out of the way?) - -### Clone the project and configure it - -You'll need a clone locally to do work in. Create one from `origin`: - - git clone ORIGIN-URL some-local-name - -While you're here, `cd` into it and add the original project as a remote: - - cd some-local-name - git remote add upstream UPSTREAM-URL - -## Feature process - -Do these things for each feature you work on. To switch features, just use -`git checkout my-feature`. - -### Create a new feature branch locally - -We use `upstream`'s `master` branch here, so that your feature includes all of -`upstream`'s state initially. We also need to make sure our local cache of -`upstream`'s state is correct: - - git fetch upstream - git checkout upstream/master -b my-feature - -### Do work - -If you need my help here, stop now. - -### Integrate upstream changes - -If you find yourself needing something that's been added upstream, use -_rebase_ to integrate it to avoid littering your feature branch with -“meaningless” merge commits. - - git checkout my-feature - git fetch upstream - git rebase upstream/master - -### Publish your branch - -When you're “done,” publish your branch to your personal repository: - - git push origin my-feature - -Then visit your copy in your repo manager's web UI and create a pull request -for `my-feature`. - -### Integrating feedback - -Very likely, your proposed changes will need work. 
If you use history-editing
-to integrate feedback, you will need to use `--force` when updating the
-branch:
-
-    git push --force origin my-feature
-
-This is safe provided two things are true:
-
-1. **The branch has not yet been merged to the upstream repo.**
-2. You are only force-pushing to your fork, not to the upstream repo.
-
-Generally, no other users will have work based on your pull request, so
-force-pushing history won't cause problems.
diff --git a/wiki/git/scratch.md b/wiki/git/scratch.md
deleted file mode 100644
index a26c98f..0000000
--- a/wiki/git/scratch.md
+++ /dev/null
@@ -1,55 +0,0 @@
-# Git Is Not Magic
-
-I'm bored. Let's make a git repository out of whole cloth.
-
-Git repos are stored in .git:
-
-    fakegit$ mkdir .git
-
-They have a “symbolic ref” (which is a text file; see [`man
-git-symbolic-ref`](http://jk.gs/git-symbolic-ref.html)) named `HEAD`, pointing
-to the currently checked-out branch. Let's use `master`. Branches are refs
-under `refs/heads` (see [`man git-branch`](http://jk.gs/git-branch.html)):
-
-    fakegit ((unknown))$ echo 'ref: refs/heads/master' > .git/HEAD
-
-They have an object database and a refs database, both of which are simple
-directories (see [`man
-gitrepository-layout`](http://jk.gs/gitrepository-layout.html) and [`man
-gitrevisions`](http://jk.gs/gitrevisions.html)). Let's also enable the reflog,
-because it's a great safety net if you use history-editing tools in git:
-
-    fakegit ((ref: re...))$ mkdir .git/refs .git/objects .git/logs
-    fakegit (master #)$
-
-Now `__git_ps1`, at least, is convinced that we have a working git repository.
-Does it work?
-
-    fakegit (master #)$ echo 'Hello, world!' 
> hello.txt - fakegit (master #)$ git add hello.txt - fakegit (master #)$ git commit -m 'Initial commit' - [master (root-commit) 975307b] Initial commit - 1 file changed, 1 insertion(+) - create mode 100644 hello.txt - - fakegit (master)$ git log - commit 975307ba0485bff92e295e3379a952aff013c688 - Author: Owen Jacobson <owen.jacobson@grimoire.ca> - Date: Wed Feb 6 10:07:07 2013 -0500 - - Initial commit - -[Eeyup](https://www.youtube.com/watch?v=3VwVpaWUu30). - ------ - -Should you do this? **Of course not.** Anywhere you could run these commands, -you could instead run `git init` or `git clone`, which set up a number of -other structures, including `.git/config` and any unusual permissions options. -The key part here is that a directory's identity as “a git repository” is -entirely a function of its contents, not of having been blessed into being by -`git` itself. - -You can infer a lot from this: for example, you can infer that it's “safe” to -move git repositories around using FS tools, or to back them up with the same -tools, for example. This is not as obvious to everyone as you might hope; people diff --git a/wiki/git/stop-using-git-pull-to-deploy.md b/wiki/git/stop-using-git-pull-to-deploy.md deleted file mode 100644 index 078c95b..0000000 --- a/wiki/git/stop-using-git-pull-to-deploy.md +++ /dev/null @@ -1,98 +0,0 @@ -# Stop using `git pull` for deployment! - -## The problem - -* You have a Git repository containing your project. -* You want to “deploy” that code when it changes. -* You'd rather not download the entire project from scratch for each - deployment. - -## The antipattern - -“I know, I'll use `git pull` in my deployment script!” - -Stop doing this. Stop teaching other people to do this. It's wrong, and it -will eventually lead to deploying something you didn't want. - -Deployment should be based on predictable, known versions of your code. 
-Ideally, every deployable version has a tag (and you deploy exactly that tag), -but even less formal processes, where you deploy a branch tip, should still be -deploying exactly the code designated for release. `git pull`, however, can -introduce new commits. - -`git pull` is a two-step process: - -1. Fetch the current branch's designated upstream remote, to obtain all of the - remote's new commits. -2. Merge the current branch's designated upstream branch into the current - branch. - -The merge commit means the actual deployed tree might _not_ be identical to -the intended deployment tree. Local changes (intentional or otherwise) will be -preserved (and merged) into the deployment, for example; once this happens, -the actual deployed commit will _never_ match the intended commit. - -`git pull` will approximate the right thing “by accident”: if the current -local branch (generally `master`) for people using `git pull` is always clean, -and always tracks the desired deployment branch, then `git pull` will update -to the intended commit exactly. This is pretty fragile, though; many git -commands can cause the local branch to diverge from its upstream branch, and -once that happens, `git pull` will always create new commits. You can patch -around the fragility a bit using the `--ff-only` option, but that only tells -you when your deployment environment has diverged and doesn't fix it. - -## The right pattern - -Quoting [Sitaram Chamarty](http://gitolite.com/the-list-and-irc/deploy.html): - -> Here's what we expect from a deployment tool. Note the rule numbers -- -> we'll be referring to some of them simply by number later. -> -> 1. All files in the branch being deployed should be copied to the -> deployment directory. -> -> 2. Files that were deleted in the git repo since the last deployment -> should get deleted from the deployment directory. -> -> 3. 
Any changes to tracked files in the deployment directory after the -> last deployment should be ignored when following rules 1 and 2. -> -> However, sometimes you might want to detect such changes and abort if -> you found any. -> -> 4. Untracked files in the deploy directory should be left alone. -> -> Again, some people might want to detect this and abort the deployment. - -Sitaram's own documentation talks about how to accomplish these when -“deploying” straight out of a bare repository. That's unwise (not to mention -impractical) in most cases; deployment should use a dedicated clone of the -canonical repository. - -I also disagree with point 3, preferring to keep deployment-related changes -outside of tracked files. This makes it much easier to argue that the changes -introduced to configure the project for deployment do not introduce new bugs -or other surprise features. - -My deployment process, given a dedicated clone at `$DEPLOY_TREE`, is as -follows: - - cd "${DEPLOY_TREE}" - git fetch --all - git checkout --force "${TARGET}" - # Following two lines only required if you use submodules - git submodule sync - git submodule update --init --recursive - # Follow with actual deployment steps (run fabric/capistrano/make/etc) - -`$TARGET` is either a tag name (`v1.2.1`) or a remote branch name -(`origin/master`), but could also be a commit hash or anything else Git -recognizes as a revision. This will detach the head of the `$DEPLOY_TREE` -repository, which is fine as no new changes should be authored in this -repository (so the local branches are irrelevant). The warning Git emits when -`HEAD` becomes detached is unimportant in this case. - -The tracked contents of `$DEPLOY_TREE` will end up identical to the desired -commit, discarding local changes. The pattern above is very similar to what -most continuous integration servers use when building from Git repositories, -for much the same reason. 
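As a self-contained sanity check of this pattern (the repository layout, file names, and the `v1.0.0` tag below are all invented for the demo), the following script shows `git checkout --force` discarding hand edits in a deploy clone, leaving the tree identical to the intended release:

```shell
#!/bin/sh
# Demo: local drift in a deploy clone does not survive the
# fetch-and-force-checkout pattern; the tree ends up matching the tag.
set -e
tmp=$(mktemp -d); cd "$tmp"

git init -q upstream && cd upstream          # stand-in canonical repo
git config user.email demo@example.com && git config user.name demo
echo 'v1' > config.txt
git add config.txt && git commit -qm 'release v1'
git tag v1.0.0
cd "$tmp"

git clone -q upstream deploy && cd deploy    # dedicated deploy clone
echo 'hand-edited on the server' >> config.txt   # simulate drift

git fetch --all -q
git checkout -q --force v1.0.0     # detached HEAD; the warning is harmless
git status --porcelain             # prints nothing: the drift is gone
cat config.txt                     # prints: v1
```

Had this clone used `git pull` instead, the hand edit would have been merged into the deployment and carried forward forever.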
diff --git a/wiki/git/survival.md b/wiki/git/survival.md deleted file mode 100644 index 60d1b62..0000000 --- a/wiki/git/survival.md +++ /dev/null @@ -1,81 +0,0 @@ -# Git Survival Guide - -I think the `git` UI is pretty awful, and encourages using Git in ways that -will screw you. Here are a few things I've picked up that have saved my bacon. - -* You will inevitably need to understand Git's “internals” to make use of it - as an SCM tool. Accept this early. If you think your SCM tool should not - expose you to so much plumbing, [don't](http://mercurial.selenic.com) - [use](http://bazaar.canonical.com) [Git](http://subversion.apache.org). - * Git weenies will claim that this plumbing is what gives Git all of its - extra power. This is true; it gives Git the power to get you out of - situations you wouldn't be in without Git. -* `git log --graph --decorate --oneline --color --all` -* Run `git fetch` habitually. Stale remote-tracking branches lead to sadness. -* `git push` and `git pull` are **not symmetric**. `git push`'s - opposite operation is `git fetch`. (`git pull` is equivalent to `git fetch` - followed by `git merge`, more or less). -* [Git configuration values don't always have the best defaults](config). -* The upstream branch of `foo` is `foo@{u}`. The upstream branch of your - checked-out branch is `HEAD@{u}` or `@{u}`. This is documented in `git help - revisions`. -* You probably don't want to use a merge operation (such as `git pull`) to - integrate upstream changes into topic branches. The resulting history can be - very confusing to follow, especially if you integrate upstream changes - frequently. - * You can leave topic branches “real” relatively safely. You can do - a test merge to see if they still work cleanly post-integration without - actually integrating upstream into the branch permanently. 
- * You can use `git rebase` or `git pull --rebase` to transplant your - branch to a new, more recent starting point that includes the changes - you want to integrate. This makes the upstream changes a permanent part - of your branch, just like `git merge` or `git pull` would, but generates - an easier-to-follow history. Conflict resolution will happen as normal. -* Example test merge, using `origin/master` as the upstream branch and `foo` - as the candidate for integration: - - git fetch origin - git checkout origin/master -b test-merge-foo - git merge foo - # run tests, examine files - git diff origin/master..HEAD - - To discard the test merge, delete the branch after checking out some other - branch: - - git checkout foo - git branch -D test-merge-foo - - You can combine this with `git rerere` to save time resolving conflicts in - a later “real,” permanent merge. - -* You can use `git checkout -p` to build new, tidy commits out of a branch - laden with “wip” commits: - - git fetch - git checkout $(git merge-base origin/master foo) -b foo-cleaner-history - git checkout -p foo -- paths/to/files - # pick out changes from the presented patch that form a coherent commit - # repeat 'git checkout -p foo --' steps for related files to build up - # the new commit - git commit - # repeat 'git checkout -p foo --' and 'git commit' steps until no diffs remain - - * Gotcha: `git checkout -p` will do nothing for files that are being - created. Use `git checkout`, instead, and edit the file if necessary. - Thanks, Git. - * Gotcha: The new, clean branch must diverge from its upstream branch - (`origin/master`, in the example above) at exactly the same point, or - the diffs presented by `git checkout -p foo` will include chunks that - revert changes on the upstream branch since the “dirty” branch was - created. The easiest way to find this point is with `git merge-base`. 
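A throwaway demonstration of that `git merge-base` trick (the repository and the branch name `foo` are invented for the demo):

```shell
#!/bin/sh
# Demo: `git merge-base` finds the commit where a topic branch diverged
# from the branch it was cut from -- the right starting point for a
# cleaned-up replacement branch.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q repo && cd repo
git config user.email demo@example.com && git config user.name demo

git commit -q --allow-empty -m 'base'       # shared history
git checkout -q -b foo
git commit -q --allow-empty -m 'wip'        # messy topic-branch work
git checkout -q -                           # back to the original branch
git commit -q --allow-empty -m 'upstream moves on'

# Prints the SHA-1 of 'base': the point where HEAD and foo diverged.
git merge-base HEAD foo
```

Starting the clean branch at exactly that commit (`git checkout $(git merge-base HEAD foo) -b foo-cleaner-history`) avoids the reverted-chunks gotcha described above.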
## Useful Resources

That is, resources that can help you solve problems or understand things, not
resources that reiterate the man pages for you.

* Sitaram Chamarty's [git concepts
  simplified](http://sitaramc.github.com/gcs/)
* Tv's [Git for Computer
  Scientists](http://eagain.net/articles/git-for-computer-scientists)

diff --git a/wiki/git/theory-and-practice/index.md b/wiki/git/theory-and-practice/index.md
deleted file mode 100644
index f257b12..0000000
--- a/wiki/git/theory-and-practice/index.md
+++ /dev/null
@@ -1,42 +0,0 @@

# Git Internals 101

Yeah, yeah, another article about “how Git works.” There are tons of these
already. Personally, I'm fond of Sitaram Chamarty's [fantastic series of
articles](http://gitolite.com/master-toc.html) explaining Git from both ends,
and of [Git for Computer
Scientists](http://eagain.net/articles/git-for-computer-scientists/). Maybe
you'd rather read those.

This page was inspired by very specific, recurring issues I've run into while
helping people use Git. I think Git's “porcelain” layer -- its user interface
-- is terrible, and does a bad job of insulating non-expert users from Git's
internals. While I'd love to fix that (and I do contribute to discussions on
that front, too), we still have the `git(1)` UI right now and people still get
into trouble with it right now.

Git follows the New Jersey approach laid out in Richard Gabriel's [The Rise of
“Worse is Better”](http://www.dreamsongs.com/RiseOfWorseIsBetter.html): given
the choice between a simple implementation and a simple interface, Git chooses
the simple implementation almost everywhere. This internal simplicity can give
users the leverage to fix the problems that its horrible user interface leads
them into, so these pages will focus on explaining the simple parts and giving
users the tools to examine them.

Throughout these articles, I've written “Git does X” a lot.
Git is
_incredibly_ configurable; read that as “Git does X _by default_.” I'll try to
call out relevant configuration options as I go, where it doesn't interrupt
the flow of knowledge.

* [Objects](objects)
* [Refs and Names](refs-and-names)

By the way, if you think you're just going to follow the
[many](http://git-scm.com/documentation)
[excellent](http://www.atlassian.com/git/tutorial)
[git](http://try.github.io/levels/1/challenges/1)
[tutorials](https://www.kernel.org/pub/software/scm/git/docs/gittutorial.html)
out there and that you won't need this knowledge, well, you will. You can
either learn it during a quiet time, when you can think and experiment, or you
can learn it when something's gone wrong, and everyone's shouting at each
other. Git's high-level interface doesn't do much to keep you on the sensible
path, and you will eventually need to fix something.

diff --git a/wiki/git/theory-and-practice/objects.md b/wiki/git/theory-and-practice/objects.md
deleted file mode 100644
index 6bf975a..0000000
--- a/wiki/git/theory-and-practice/objects.md
+++ /dev/null
@@ -1,125 +0,0 @@

# Objects

Git's basest level is a storage and naming system for things Git calls
“objects.” These objects hold the bulk of the data about files and projects
tracked by Git: file contents, directory trees, commits, and so on. Every
object is identified by a SHA-1 hash, which is derived from its contents.

SHA-1 hashes are obnoxiously long, so Git allows you to substitute any unique
prefix of a SHA-1 hash, so long as it's at least four characters long. If the
hash `0b43b9e3e64793f5a222a644ed5ab074d8fa1024` is present in your repository,
then Git commands will understand `0b43`, `0b43b9`, and other patterns to all
refer to the same object, so long as no other object has the same SHA-1
prefix.

## Blobs

The contents of every file that's ever been stored in a Git repository are
stored as `blob` objects.
These objects are very simple: they contain the file -contents, byte for byte. - -## Trees - -File contents (and trees, and Other Things we'll get to later) are tied -together into a directory structure by `tree` objects. These objects contain a -list of records, with one child per record. Each record contains a permissions -field corresponding to the POSIX permissions mask of the object, a type, a -SHA-1 for another object, and a name. - -A directory containing only files might be represented as the tree - - 100644 blob 511542ad6c97b28d720c697f7535897195de3318 config.md - 100644 blob 801ddd5ae10d6282bbf36ccefdd0b052972aa8e2 integrate.md - 100644 blob 61d28155862607c3d5d049e18c5a6903dba1f85e scratch.md - 100644 blob d7a79c144c22775239600b332bfa120775bab341 survival.md - -while a directory with subdirectories would also have some `tree` children: - - 040000 tree f57ef2457a551b193779e21a50fb380880574f43 12factor - 040000 tree 844697ce99e1ef962657ce7132460ad7a38b7584 authnz - 100644 blob 54795f9b774547d554f5068985bbc6df7b128832 cool-urls-can-change.md - 040000 tree fc3f39eb5d1a655374385870b8be56b202be7dd8 dev - 040000 tree 22cbfb2c1d7b07432ea7706c36b0d6295563c69c devops - 040000 tree 0b3e63b4f32c0c3acfbcf6ba28d54af4c2f0d594 git - 040000 tree 5914fdcbd34e00e23e52ba8e8bdeba0902941d3f java - 040000 tree 346f71a637a4f8933dc754fef02515a8809369c4 mysql - 100644 blob b70520badbb8de6a74b84788a7fefe64a432c56d packaging-ideas.md - 040000 tree 73ed6572345a368d20271ec5a3ffc2464ac8d270 people - -## Commits - -Blobs and trees are sufficient to store arbitrary directory trees in Git, and -you could use them that way, but Git is mostly used as a revision-tracking -system. 
Revisions and their history are represented by `commit` objects, which contain: - -* The SHA-1 hash of the root `tree` object of the commit, -* Zero or more SHA-1 hashes for parent commits, -* The name and email address of the commit's “author,” -* The name and email address of the commit's “committer,” -* Timestamps representing when the commit was authored and committed, and -* A commit message. - -Commit objects' parent references form a directed acyclic graph; the subgraph -reachable from a specific commit is that commit's _history_. - -When working with Git's user interface, commit parents are given in a -predictable order determined by the `git checkout` and `git merge` commands. - -## Tags - -Git's revision-tracking system supports “tags,” which are stable names for -specific configurations. It also, uniquely, supports a concept called an -“annotated tag,” represented by the `tag` object type. These annotated tag -objects contain - -* The type and SHA-1 hash of another object, -* The name and email address of the person who created the tag, -* A timestamp representing the moment the tag was created, and -* A tag message. - -## Anonymity - -There's a general theme to Git's object types: no object knows its own name. -Every object only has a name in the context of some containing object, or in -the context of [Git's refs mechanism](refs-and-names), which I'll get to -shortly. This means that the same `blob` object can be reused for multiple -files (or, more probably, the same file in multiple commits), if they happen -to have the same contents. - -This also applies to tag objects, even though their role is part of a system -for providing stable, meaningful names for commits. - -## Examining objects - -* `git cat-file <type> <sha1>`: decodes the object `<sha1>` and prints its - contents to stdout. This prints the object's contents in their raw form, - which is less than useful for `tree` objects. 
* `git cat-file -p <sha1>`: decodes the object `<sha1>` and pretty-prints it.
  This pretty-printing stays close to the underlying disk format; it's most
  useful for decoding `tree` objects.

* `git show <sha1>`: decodes the object `<sha1>` and formats its contents to
  stdout. For blobs, this is identical to what `git cat-file blob` would do,
  but for trees, commits, and tags, the output is reformatted to be more
  readable.

## Storage

Objects are stored in two places in Git: as “loose objects,” and in “pack
files.” Newly-created objects are initially loose objects, for ease of
manipulation; transferring objects to another repository or running certain
administrative commands can cause them to be placed in pack files for faster
transfer and for smaller storage.

Loose objects are stored directly on the filesystem, in the Git repository's
`objects` directory. Git takes a two-character prefix off of each object's
SHA-1 hash, and uses that to pick a subdirectory of `objects` to store the
object in. The remainder of the hash forms the filename. Loose objects are
compressed with zlib, to conserve space, but the resulting directory tree can
still be quite large.

Packed objects are stored together in packed files, which live in the
repository's `objects/pack` directory. These packed files are both compressed
and delta-encoded, allowing groups of similar objects to be stored very
compactly.

diff --git a/wiki/git/theory-and-practice/refs-and-names.md b/wiki/git/theory-and-practice/refs-and-names.md
deleted file mode 100644
index 025ae88..0000000
--- a/wiki/git/theory-and-practice/refs-and-names.md
+++ /dev/null
@@ -1,94 +0,0 @@

# Refs and Names

Git's [object system](objects) stores most of the data for projects tracked in
Git, but only provides SHA-1 hashes. This is basically useless if you want to
make practical use of Git, so Git also has a naming mechanism called “refs”
that provides human-meaningful names for objects.
There are two kinds of refs:

* “Normal” refs, which are names that resolve directly to SHA-1 hashes. These
  are the vast majority of refs in most repositories.

* “Symbolic” refs, which are names that resolve to other refs. In most
  repositories, only a few of these appear. (Circular references are possible
  with symbolic refs. Git will refuse to resolve these.)

Anywhere you could use a SHA-1, you can use a ref instead. Git interprets them
identically, after resolving the ref down to the SHA-1.

## Namespaces

Every operation in Git that uses a name of some sort, including branching
(branch names), tagging (tag names), fetching (remote-tracking branch names),
and pushing (many kinds of name), expands those names to refs, using a
namespace convention. The following namespaces are common:

* `refs/heads/NAME`: branches. The branch name is the ref name with
  `refs/heads/` removed. Names generally point to commits.

* `refs/remotes/REMOTE/NAME`: “remote-tracking” branches. These are maintained
  in tandem by `git remote` and `git fetch`, to cache the state of other
  repositories. Names generally point to commits.

* `refs/tags/NAME`: tags. The tag name is the ref name with `refs/tags/`
  removed. Names generally point to commits or tag objects.

* `refs/bisect/STATE`: `git bisect` markers for known-good and known-bad
  revisions, from which the rest of the bisect state can be derived.

There are also a few special refs directly in the `refs/` namespace, most
notably:

* `refs/stash`: The most recent stash entry, as maintained by `git stash`.
  (Other stash entries are maintained by a separate system.) Names generally
  point to commits.

Tools can invent new refs for their own purposes, or manipulate existing refs;
the convention is that tools that use refs (which is, as I said, most of them)
respect the state of the ref as if they'd created that state themselves,
rather than sanity-checking the ref before using it.
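The “anywhere you could use a SHA-1, you can use a ref” claim is easy to check in a scratch repository (the branch name `trunk` is invented for the demo):

```shell
#!/bin/sh
# Demo: a branch name, its fully-qualified ref, and the SHA-1 they
# resolve to are interchangeable anywhere git expects a revision.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q demo && cd demo
git config user.email demo@example.com && git config user.name demo
git commit -q --allow-empty -m 'first'
git branch trunk                   # creates refs/heads/trunk at HEAD

git rev-parse trunk                # prints a SHA-1 hash
git rev-parse refs/heads/trunk     # prints the same SHA-1
git log -1 --format=%s trunk       # prints: first
```

The short name `trunk` works only because git expanded it to `refs/heads/trunk` using the namespace conventions above.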
## Special refs

There are a handful of special refs used by Git commands for their own
operation. These refs do _not_ begin with `refs/`:

* `HEAD`: the “current” commit for most operations. This is set when checking
  out a commit, and many revision-related commands default to `HEAD` if not
  given a revision to operate on. `HEAD` can either be a symbolic ref
  (pointing to a branch ref) or a normal ref (pointing directly to a commit),
  and is very frequently a symbolic ref.

* `MERGE_HEAD`: during a merge, `MERGE_HEAD` resolves to the commit whose
  history is being merged.

* `ORIG_HEAD`: set by operations that change `HEAD` in potentially destructive
  ways. It records what `HEAD` resolved to just before the change, so you can
  find your way back.

* `CHERRY_PICK_HEAD` is set during `git cherry-pick` to the commit whose
  changes are being copied.

* `FETCH_HEAD` is set by the forms of `git fetch` that fetch a single ref, and
  points to the commit the fetched ref pointed to.

## Examining and manipulating refs

The `git show-ref` command will list the refs in namespaces under `refs` in
your repository, printing the SHA-1 hashes they resolve to. Pass `--head` to
also include `HEAD`.

The following commands can be used to manipulate refs directly:

* `git update-ref <ref> <sha1>` forcibly sets `<ref>` to the passed `<sha1>`.

* `git update-ref -d <ref>` deletes a ref.

* `git symbolic-ref <ref>` prints the target of `<ref>`, if `<ref>` is a
  symbolic ref. (It will fail with an error message for normal refs.)

* `git symbolic-ref <ref> <target>` forcibly makes `<ref>` a symbolic ref
  pointing to `<target>`.

Additionally, you can see what ref a given name resolves to using `git
rev-parse --symbolic-full-name <name>` or `git show-ref <name>`.
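A self-contained sketch of direct ref manipulation with those plumbing commands (the `refs/demo/…` namespace here is invented for the example; real tools would pick their own):

```shell
#!/bin/sh
# Demo: creating, reading, and deleting refs by hand, including a
# symbolic ref that points at another ref rather than at a SHA-1.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q demo && cd demo
git config user.email demo@example.com && git config user.name demo
git commit -q --allow-empty -m 'first'

git update-ref refs/demo/checkpoint HEAD     # normal ref -> SHA-1
git rev-parse refs/demo/checkpoint           # same SHA-1 as HEAD

git symbolic-ref refs/demo/alias refs/demo/checkpoint
git symbolic-ref refs/demo/alias             # prints: refs/demo/checkpoint

git update-ref -d refs/demo/checkpoint       # delete the normal ref
```

Note that deleting the normal ref leaves `refs/demo/alias` dangling; git does not sanity-check symbolic refs for you, exactly as described above.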
diff --git a/wiki/github-nomic/notes.md b/wiki/github-nomic/notes.md deleted file mode 100644 index 67541e6..0000000 --- a/wiki/github-nomic/notes.md +++ /dev/null @@ -1,118 +0,0 @@ -# Notes towards initial rules for a Github Nomic - -This document is not part of the rules of a Nomic, and is present solely as a guide to the design of [this initial ruleset](rules), for play on Github. -It should be removed before the game starts, and at no time should it be consulted to guide gameplay directly. - -Peter Suber's [Nomic](http://legacy.earlham.edu/~peters/writing/nomic.htm) is a game of rule-making for one or more players. -For details on the rationale behind the game and the reasons the game might be interesting, see Suber's own description. - -# Changes from Suber's rules - -## Format - -I've marked up Suber's rules into Markdown, one of Github's “native” text markup formats. -This highly-structured format produces quite readable results when viewed through the Github website, and allows useful things like HTML links that point to specific rules. - -I've also made some diff-friendliness choices around the structure of those Markdown documents. -For want of a better idea, the source documents are line-broken with one sentence per line, so that diffs naturally span whole sentences rather than arbitrarily-wrapped text (or unwrapped text). -Since Github automatically recombines sequences of non-blank lines into a single HTML paragraph, the rendering on the web site is still quite readable. - -I have not codified this format in the rules themselves. - -## Asynchrony - -In its original form, Nomic is appropriate for face-to-face play. -The rules assume that it is practical for the players to identify one another using out-of-game context, and that it is practical for the players to take turns. -Each player is expected to wait indefinitely (or, more likely, to apply non-game social pressure) if the preceding player takes inordinately long to complete their turn. 
-Similarly, Judgement interrupts the flow of game play and brings turns to a stop. - -This Nomic is to be played on Github, and the players are _not_ likely to be present simultaneously, or to be willing to wait indefinitely. - -It's possible for Suber's original Nomic rules to be amended, following themselves, into a form suitable for asynchronous play. -This has happened several times: for examples, see [Agora](http://agoranomic.org/) and [BlogNomic](http://blognomic.com/), though there are a multitude of others. -However, this process of amendment takes _time_, and, starting from Suber's initial rules, would require a period of one-turn-at-a-time rule-changes before the game could be played more naturally in the Github format. -This period is not very interesting, and is incredibly demanding of the initial players' attention spans. - -In the interests of preserving the players' time, I have modified Suber's initial ruleset to replace sequential play with a simple asynchronous model of play. In summary: - -* Every player can begin a turn at any time, even during another player's (or players') turn, so long as they aren't already taking a turn. -* Actions can be resolved in any order, depending on which proposals players choose to vote on, and in what order. -* The initial rules allow for players to end their turns without gathering every vote, once gameplay has proceeded far enough for non-unanimous votes to be possible. - -I have attempted to leave the rules as close to Suber's original rules as possible otherwise while implementing this change to the initial ruleset. -I have faith that the process of playing Nomic will correct any deficiencies, or, failing that, will clearly identify where these changes break the game entirely. - -I have, as far as I am able, emulated Suber's preference for succinctness over thoroughness, and resisted the urge to fix or clarify rules even where defects seem obvious to me. 
In spite of my temptation to remove it, I have even left the notion of “winning” intact.

## Rule-numbering

The intent of this Nomic is to explore whether Github's suite of tools for proposing, reviewing, and accepting changes to a corpus of text is suitable for self-governed rulemaking processes, as modelled by Nomic.
Note that this is a test of Github, not of Git: it is appropriate and intended that the players rely on non-Git elements of Github's workflow (issues, wiki pages, Github Pages, and so on), and similarly it is appropriate and intended that the authentic copy of the game in play is the Github project hosting it, not the Git repo the project contains, and certainly not forks of the project or other clones of the repository.

To support this intention, I have re-labelled the initial rules with negative numbers, rather than digits, so that proposals can be numbered starting from 1 without colliding with existing rules, and so that they can be numbered by their Pull Requests and Github issue numbers.
(A previous version of these rules used Roman numerals for the initial rules.
However, correctly accounting for the priority of new rules over initial rules, following Suber, required more changes than I was comfortable making to Suber's ruleset.)
I have made it explicit in these initial rules that Github, not the players, assigns numbers to proposals.
This is the only rule which mentions Github by name.
I have not explicitly specified that the proposals should be implemented through pull requests; this is an intentional opportunity for player creativity.

## Projects & Ideas

A small personal collection of other ideas to explore:

### Repeal or replace the victory criteria entirely

“Winning” is not an objective I'm personally interested in, and Suber's race to 200 points by popularity of proposal is structurally quite dull.
-If the game is to have a victory condition, it should be built from the ground up to meet the players' motivations, rather than being retrofitted onto the points-based system. - -### Codify the use of Git commits, rather than prose, for rules-changes - -This is unstated in this ruleset, despite being part of my intention for playing. -So is the relationship between proposals and the Git repository underpinning the Github project hosting the game. - -### Clarify the immigration and exit procedures - -The question of who the players _are_, or how one becomes a player, is left intentionally vague. -In Suber's original rules, it appears that the players are those who are engaged in playing the game: tautological on paper, but inherently obvious by simple observation of the playing-space. - -On Github, the answer to this question may not be so simple. -A public repository is _visible_ to anyone with an internet connection, and will accept _proposed_ pull requests (and issue reports) equally freely. -This suggests that either everyone is, inherently, a player, or that player-ness is somehow a function of engaging with the game. -I leave it to the players to resolve this situation to their own satisfaction, but my suggestion is to track player-ness using repository collaborators or organization member accounts. - -### Figure out how to regulate the use of Github features - -Nomic, as written, largely revolves around sequential proposals. -That's fine as far as it goes, but Github has a very wide array of project management features - and that set of features changes over time, outside the control of the players, as Github roll out improvements (and, sometimes, break things). - -Features of probable interest: - -* The `gh-pages` branch and associated web site. -* Issue and pull request tagging and approval settings. -* Third-party integrations. -* Whether to store non-rule state, as such arises, in the repository, or in the wiki, or elsewhere. 
* Pull request reactions and approvals.
* The mutability of most Github features.

### Expand the rules-change process to permit a single proposal to amend many rules

This is a standard rules patch, as Suber's initial rule-set is (I believe intentionally) very restrictive.

This may turn out to be less relevant on Github, if players are allowed to submit turns in rapid succession with themselves.

### Transition from immediate amendment to a system of sessions

Why not? Parliamentary procedure is fun, right?

In an asynchronous environment, the discrete phases of a session system (where proposals are gathered, then debated, then voted upon, then enacted as a unit) might be a better fit for the Github mode of play.

### Evaluate other models of proposal vetting besides majority vote

Github open source projects regularly have a small core team of maintainers supporting a larger group of users.
Is it possible to mirror this structure in Nomic?
Is it wise to do so?

I suspect this is only possible with an inordinately large number of players, but Github could, at least in principle, support that number of players.

Note that this is a fairly standard Nomic pastime.

diff --git a/wiki/github-nomic/rules.md b/wiki/github-nomic/rules.md
deleted file mode 100644
index 768905a..0000000
--- a/wiki/github-nomic/rules.md
+++ /dev/null
@@ -1,180 +0,0 @@

# Nomic

## Immutable Rules

### Rule -216.

All players must always abide by all the rules then in effect, in the form in which they are then in effect.
The rules in the Initial Set are in effect whenever a game begins.
The Initial Set consists of rules -216 through -201 (immutable) and rules -112 through -101 (mutable).

### Rule -215.

Initially, rules -216 through -201 are immutable, and rules -112 through -101 are mutable.
Rules subsequently enacted or transmuted (that is, changed from immutable to mutable or vice versa) may be immutable or mutable regardless of their numbers, and rules in the Initial Set may be transmuted regardless of their numbers.

### Rule -214.

A rule-change is any of the following:

1. the enactment, repeal, or amendment of a mutable rule;

2. the enactment, repeal, or amendment of an amendment of a mutable rule; or

3. the transmutation of an immutable rule into a mutable rule or vice versa.

(Note: This definition implies that, at least initially, all new rules are mutable; immutable rules, as long as they are immutable, may not be amended or repealed; mutable rules, as long as they are mutable, may be amended or repealed; any rule of any status may be transmuted; no rule is absolutely immune to change.)

### Rule -213.

All rule-changes proposed in the proper way shall be voted on.
They will be adopted if and only if they receive the required number of votes.

### Rule -212.

Every player is an eligible voter.

### Rule -211.

All proposed rule-changes shall be written down before they are voted on.
If they are adopted, they shall guide play in the form in which they were voted on.

### Rule -210.

No rule-change may take effect earlier than the moment of the completion of the vote that adopted it, even if its wording explicitly states otherwise.
No rule-change may have retroactive application.

### Rule -209.

Each proposed rule-change shall be given a number for reference.
The numbers shall be assigned by Github, so that each rule-change proposed in the proper way shall receive an integer distinct from those of all prior proposals, whether or not the proposal is adopted.

If a rule is repealed and reenacted, it receives the number of the proposal to reenact it.
If a rule is amended or transmuted, it receives the number of the proposal to amend or transmute it.
-If an amendment is amended or repealed, the entire rule of which it is a part receives the number of the proposal to amend or repeal the amendment. - -### Rule -208. - -Rule-changes that transmute immutable rules into mutable rules may be adopted if and only if the vote is unanimous among the eligible voters. -Transmutation shall not be implied, but must be stated explicitly in a proposal to take effect. - -### Rule -207. - -In a conflict between a mutable and an immutable rule, the immutable rule takes precedence and the mutable rule shall be entirely void. -For the purposes of this rule a proposal to transmute an immutable rule does not "conflict" with that immutable rule. - -### Rule -206. - -If a rule-change as proposed is unclear, ambiguous, paradoxical, or destructive of play, or if it arguably consists of two or more rule-changes compounded or is an amendment that makes no difference, or if it is otherwise of questionable value, then the other players may suggest amendments or argue against the proposal before the vote. -A reasonable time must be allowed for this debate. -The proponent decides the final form in which the proposal is to be voted on and, unless the Judge has been asked to do so, also decides the time to end debate and vote. - -### Rule -205. - -The state of affairs that constitutes winning may not be altered from achieving _n_ points to any other state of affairs. -The magnitude of _n_ and the means of earning points may be changed, and rules that establish a winner when play cannot continue may be enacted and (while they are mutable) be amended or repealed. - -### Rule -204. - -A player always has the option to forfeit the game rather than continue to play or incur a game penalty. -No penalty worse than losing, in the judgment of the player to incur it, may be imposed. - -### Rule -203. - -There must always be at least one mutable rule. -The adoption of rule-changes must never become completely impermissible. - -### Rule -202. 
- -Rule-changes that affect rules needed to allow or apply rule-changes are as permissible as other rule-changes. -Even rule-changes that amend or repeal their own authority are permissible. -No rule-change or type of move is impermissible solely on account of the self-reference or self-application of a rule. - -### Rule -201. - -Whatever is not prohibited or regulated by a rule is permitted and unregulated, with the sole exception of changing the rules, which is permitted only when a rule or set of rules explicitly or implicitly permits it. - -## Mutable Rules - -### Rule -112. - -A player may begin a turn at any time that suits them. -Turns may overlap: one player may begin a turn while another player's is in progress. -No player may begin a turn unless all of their previous turns have ended. - -All players begin with zero points. - -### Rule -111. - -One turn consists of two parts in this order: - -1. proposing one rule-change and having it voted on, and - -2. scoring the proposal and adding that score to the proposing player's score. - -A proposal is scored by taking the proposal number, adding nine to it, multiplying the result by the fraction of favourable votes the proposal received, and rounding that result to the nearest integer. - -(This scoring system yields a number between 0 and 10 for the first proposal, with the upper limit increasing by one for each new proposal; more points are awarded for more popular proposals.) - -### Rule -110. - -A rule-change is adopted if and only if the vote in favour is unanimous among the eligible voters. -If this rule is not amended before each player has had two turns, it automatically changes to require only a simple majority. - -If and when rule-changes can only be adopted unanimously, the voting may be ended as soon as an opposing vote is counted. -If and when rule-changes can be adopted by simple majority, the voting may be ended as soon as a simple majority in favour or a simple majority against is counted. 
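An illustrative sketch of Rule 111's scoring arithmetic (not itself part of the ruleset; the function name and rounding choice are inventions for this example):

```python
import math

def score_proposal(proposal_number: int, votes_for: int, votes_cast: int) -> int:
    """Score a proposal per Rule 111: add nine to the proposal number,
    multiply by the fraction of favourable votes, round to nearest."""
    raw = (proposal_number + 9) * (votes_for / votes_cast)
    return math.floor(raw + 0.5)  # round half up to the nearest integer

# The first proposal scores between 0 and 10, as the rule's note says:
print(score_proposal(1, 10, 10))  # 10
print(score_proposal(1, 0, 10))   # 0
```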
- -### Rule -109. - -If and when rule-changes can be adopted without unanimity, the players who vote against winning proposals shall receive 10 points each. - -### Rule -108. - -An adopted rule-change takes full effect at the moment of the completion of the vote that adopted it. - -### Rule -107. - -When a proposed rule-change is defeated, the player who proposed it loses 10 points. - -### Rule -106. - -Each player always has exactly one vote. - -### Rule -105. - -The winner is the first player to achieve 200 (positive) points. - -### Rule -104. - -At no time may there be more than 25 mutable rules. - -### Rule -103. - -If two or more mutable rules conflict with one another, or if two or more immutable rules conflict with one another, then the rule with the lowest ordinal number takes precedence. - -If at least one of the rules in conflict explicitly says of itself that it defers to another rule (or type of rule) or takes precedence over another rule (or type of rule), then such provisions shall supersede the numerical method for determining precedence. - -If two or more rules claim to take precedence over one another or to defer to one another, then the numerical method again governs. - -### Rule -102. - -If players disagree about the legality of a move or the interpretation or application of a rule, then the player moving may ask any other player to be the Judge and decide the question. -Disagreement for the purposes of this rule may be created by the insistence of any player. -This process is called invoking Judgment. - -When Judgment has been invoked, no player may begin his or her turn without the consent of a majority of the other players. - -The Judge's Judgment may be overruled only by a unanimous vote of the other players taken before the next turn is begun. 
-If a Judge's Judgment is overruled, then the Judge may ask any player other than the moving player, and other than any player who has already been the Judge for the question, to become the new Judge for the question, and so on, except that no player is to be Judge during his or her own turn or during the turn of a team-mate. - -Unless a Judge is overruled, one Judge settles all questions arising from the game until the next turn is begun, including questions as to his or her own legitimacy and jurisdiction as Judge. - -New Judges are not bound by the decisions of old Judges. -New Judges may, however, settle only those questions on which the players currently disagree and that affect the completion of the turn in which Judgment was invoked. -All decisions by Judges shall be in accordance with all the rules then in effect; but when the rules are silent, inconsistent, or unclear on the point at issue, then the Judge shall consider game-custom and the spirit of the game before applying other standards. - -### Rule -101. - -If the rules are changed so that further play is impossible, or if the legality of a move cannot be determined with finality, or if by the Judge's best reasoning, not overruled, a move appears equally legal and illegal, then the first player unable to complete a turn is the winner. - -This rule takes precedence over every other rule determining the winner. diff --git a/wiki/gossamer/coda.md b/wiki/gossamer/coda.md deleted file mode 100644 index 1edd5b3..0000000 --- a/wiki/gossamer/coda.md +++ /dev/null @@ -1,19 +0,0 @@ -# A Coda - -[**kit**](https://mastodon.transneptune.net/wlonk): - -> How would you make a site where the server operator can't get at a user's data, and given handling complaints and the fact that people can still screen cap receipts etc, would you? -> -> Is it a valuable goal? - -[**owen**](https://mastodon.transneptune.net/owen): - -> That's what torpedoed my interest in developing [gossamer](.) 
further, honestly -> -> meg laid out an abuse case so dismal that I consider the whole concept compromised -> -> centralizing the service a little - mastodon-ishly, say - improves the situation a bit, but if they can't get at their users' data their options are limited -> -> I think secrecy and republication resilience are kind of non-goals, and the lesson I took is that accountability (and thus locality and continuity of identity) are way more important -> -> specifically accountability between community members, not accountability to the operator or to the state diff --git a/wiki/gossamer/index.md b/wiki/gossamer/index.md deleted file mode 100644 index 7964d13..0000000 --- a/wiki/gossamer/index.md +++ /dev/null @@ -1,435 +0,0 @@ -# Gossamer: A Decentralized Status-Sharing Network - -Twitter's pretty great. The short format encourages brief, pithy remarks, and -the default assumption of visibility makes it super easy to pitch in on a -conversation, or to find new people to listen to. Unfortunately, Twitter is a -centralized system: one Bay-area company in the United States controls and -mediates _all_ Twitter interactions. - -From all appearances, Twitter, Inc. is relatively benign, as social media -corporations go. There are few reports of censorship, and while their -response to abuse of the Twitter network has not been consistently awesome, -they can be made to listen. However, there exists the capacity for Twitter, -Inc. to subvert the entire Twitter system, either voluntarily or at the -behest of governments around the world. - -(Just ask Turkish people. Or the participants in the Arab Spring.) - -Gossamer is a Twitter-alike system, designed from the ground up to have no -central authority. It resists censorship, enables individual participants to -control their own data, and allows anyone at all to integrate new software -into the Gossamer network. 
- -Gossamer does not exist, but if it did, the following notes describe what it -might look like, and the factors to consider when implementing Gossamer as -software. I have made [fatal mistakes](mistakes) while writing it; I have not -rushed to build it specifically because Twitter, Gossamer's model, is so -deeply woven into so many people's lives. A successor must make fewer -mistakes, not merely different mistakes, and certainly not more mistakes. - -The following is loosely inspired by [Rumor -Monger](http://www.mememotes.com/meme_motes/2005/02/rumor_monger.html), at -“whole world” scale. - -## Design Goals - -* Users must be in control of their own privacy and identity at all times. - (This is a major failing with Diaspora, which limits access to personal - ownership of data by being hard to run.) - -* Users must be able to communicate without the consent or support of an - intermediate authority. Short of being completely offline, Gossamer should - be resilient to infrastructural damage. - -* Any functional communication system _will_ be used for illicit purposes. - This is an unavoidable consequence of being usable for legitimate purposes - without a central authority. Rather than revealing illicit conversations, - Gossamer should do what it can to preserve the anonymity and privacy of - legitimate ones. - -* All nodes are as equal as possible. The node _I_ use is not more - authoritative for messages from me than any other node. You can hear my - words from anyone who has heard my words, and I can hear yours from anyone - who has heard your words, so long as some variety of authenticity and - privacy are maintained. - -* If an identity's secrets are removed, a node should contain no data that - correlates the owner with his or her Gossamer identities. Relaying and - authoring must be as indistinguishable as possible, to limit the utility of - traffic analysis.
- -## Public and Private Information - -Every piece of data Gossamer uses, either internally or to communicate with -other nodes, is classified as either _public_ or _private_. Public -information can be communicated to other nodes, and is assumed to be safe if -recovered out of band. Private information includes anything which may be -used to associate a Gossamer identity with the person who controls it, except -as noted below. - -Gossamer must ensure users understand what information they provide will -be made public, and what will be kept private, so that they can better decide -what, if anything, to share and so that they can better make decisions about -their own safety and comfort against abusive parties. - -Internally, Gossamer _always_ stores private information encrypted, and -_never_ transmits it to another node. Gossamer _must_ provide a tool to -safely obliterate private data. - -### Public Information - -Details on the role of each piece of information are covered below. - -* Public status updates, obviously. Gossamer exists to permit users to easily - share short messages with one another. - -* The opaque form of a user's incoming and outgoing private messages. - -* The users' identities' public keys. (But not their relationship to one - another.) - -* Any information the user places in their profile. (This implies that - profiles _must not_ be auto-populated from, for example, the user's address - book.) - -* The set of identities verified by the user's identity. - -Any other information Gossamer retains _must_ be private. - -## Republishing - -Gossamer is built on the assumption that every participant is willing to act -as a relay for every other participant. This is a complicated assumption at -the human layer. - -Inevitably, someone will use the Gossamer network to communicate something -morally repugnant or deeply illegal: the Silk Road guy, for example, got done -for trying to contract someone to commit murder.
Every Gossamer node is -complicit in delivering those messages to the rest of the network, whether -they're in the clear (status updates) or not (private messages). It's unclear -how this interacts with the various legal frameworks, moral codes, and other -social constructs throughout the world, and it's ethically troubling to put -users in that position by default. - -The strong alternative, that each node only relay content with the -controlling user's explicit and ongoing consent, is also troubling: it limits -the Gossamer network's ability to deliver messages _at all_, and exposes -information about which identities each node's owner considers interesting -and publishable. - -I don't have an obvious resolution to this. Gossamer's underlying protocol -relies on randomly-selected nodes being more likely to propagate a message -than to ignore it, because this helps make Gossamer resilient to hostile -users, nosy intelligence agencies, and others who believe communication must -be restrictable. On the other hand, I'd like not to put a user in Taiwan at -risk of legal or social reprisals because a total stranger in Canada decided -to post something vile. - -(This is one of the reasons I haven't _built_ the damn thing yet. Besides -being A Lot Of Code, there's no way to shut off Gossamer once more than one -node exists, and I want to be sure I've thought through what I'm doing before -creating a prototype.) - -## Identity in the Gossamer Network - -Every Gossamer _message_ carries with it an _identity_. Gossamer identities -are backed by public-key cryptography. However, unlike traditional public key -systems such as GPG, Gossamer identities provide _continuity_, rather than -_authenticity_: two Gossamer messages signed by the same key are from the -same identity, but there is no inherent guarantee that that identity is -legitimate. 
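A minimal sketch of what "continuity, not authenticity" means in code. All names here are hypothetical, and a SHA-256 fingerprint stands in for real signature verification, which is elided:

```python
import hashlib

def identity_of(public_key: bytes) -> str:
    # An identity *is* its key; a fingerprint is just a readable handle.
    # Nothing here attests to the human being behind the key.
    return hashlib.sha256(public_key).hexdigest()[:16]

def same_identity(msg_a: dict, msg_b: dict) -> bool:
    # Continuity: two messages are from the same identity exactly when
    # they are signed by the same key. (A real node would also check
    # that each signature verifies against its embedded key.)
    return identity_of(msg_a["public_key"]) == identity_of(msg_b["public_key"])
```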
- -Gossamer maintains relationships between identities to allow users to -_verify_ the identities of one another, and to publish attestations of that -to other Gossamer nodes. From this, Gossamer can recover much of GPG's “web -of trust.” - -**TODO**: revocation of identities, revocation of verifications. Both are -important; novice users are likely to verify people poorly, and there should -be a recovery path less drastic than GPG's “you swore it, you're stuck with -it” model. - -Gossamer encourages users to create additional identities as needed to, for -example, support the separation of work and home conversations, or to provide -anonymity when discussing reputationally-hazardous topics. Identities are not -correlated by the Gossamer codebase. - -Each identity can optionally include a _profile_: a block of data describing -the person behind the identity. The contents of a profile are chosen by the -person holding the private key for an identity, and the profile is attached -to every new message created with the corresponding identity. A user can -update their profile at will; potentially, every message can be sent with a -distinct profile. Gossamer software treats the profile it's seen with the -highest timestamp as authoritative, retroactively applying it to old messages. - -### Multiple Devices and Key Security - -A Gossamer identity is entirely contained in its private key. An identity's -key must be stored safely, either using the host operating system's key -management facilities or using a carefully-designed key store. Keys must not -hit long-term storage unprotected; this may involve careful integration with -the underlying OS's memory management facilities to avoid, eg., placing -identities in swap. This is _necessary_ to protect users from having their -identities recovered against their will via, for example, hard drive -forensics. 
- -Gossamer allows keys to be exported into password-encrypted archive files, -which can be loaded into other Gossamer applications to allow them to share -the same identity. - -**GOSSAMER MUST TREAT THESE FILES WITH EXTREME CARE, BECAUSE USERS PROBABLY -WON'T**. Identity keys protect the user's Gossamer identity, but they _also_ -protect the user's private messages (see below) and other potentially -identifying data. The export format must be designed to be as resilient as -possible, and Gossamer's software must take care to ensure that “used” -identity files are _automatically_ destroyed safely wherever possible and to -discourage users from following practices that weaken their own safety -unknowingly. - -Exported identity files are intrinsically vulnerable to offline brute-force -attacks; once obtained, an attacker can try any of the worryingly common -passwords at will, and can easily validate a password by using the recovered -keys to regenerate some known fact about the original, such as a verification -or a message signature. This implies that exported identities _must_ use a -key derivation system which has a high computational cost and which is -believed to be resilient to, for example, GPU-accelerated cracking. - -Secure deletion is a Hard Problem; where possible, Gossamer must use -operating system-provided facilities for securely destroying files. - -## Status Messages - -Status messages are messages visible to any interested Gossamer users. These -are the primary purpose of Gossamer. Each contains up to 140 Unicode -characters, a markup section allowing Gossamer to attach URLs and metadata -(including Gossamer locators) to the text, and an attachments section -carrying arbitrary MIME blobs of limited total size. - -All three sections are canonicalized (**TODO**: how?) and signed by the -publishing identity's private key. 
The public key, the identity's most recent -profile, and the signed status message are combined into a single Gossamer -message and injected into the user's Gossamer node exactly as if it had -arrived from another node. - -Each Gossamer node maintains a _follow list_ of identities whose messages the -user is interested in seeing. When Gossamer receives a novel status message -during a gossip exchange, it displays it to the user if and only if its -identity is on the node's follow list. Otherwise, the message is not -displayed, but will be shared onwards with other nodes. In this way, every -Gossamer node acts as a relay for every other Gossamer node. - -If Gossamer receives a message signed by an identity it has seen attestations -for, it attaches those attestations to the message before delivering them -onwards. In this way, users' verifications of one another's identity spread -through the network organically. - -## Private Messages - -Gossamer can optionally encrypt messages, allowing users to send one another -private messages. These messages are carried over the Gossamer network as -normal, but only nodes holding the appropriate identity key can decrypt them -and display them to the user. (At any given time, most Gossamer nodes hold -many private messages they cannot decrypt.) - -Private messages _do not_ carry the author's identity or full profile in the -clear. The author's bare identity is included in the encrypted part of the -message, to allow the intended recipient to identify the sender. - -**TODO**: sign-then-encrypt, or encrypt-then-sign? If sign-then-encrypt, are -private messages exempted from the “drop broken messages” rule above? - -## Following Users - -Each Gossamer node maintains a database of _followed_ identities. (This may -or may not include the owner's own identity.) Any message stored in the node -published by an identity in this database will be shown to the user in a -timeline-esque view. 
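The display-versus-relay split described above could be sketched like this (class and field names are illustrative, not from any implementation):

```python
class GossamerNode:
    """Minimal sketch of a node's follow-list behaviour."""

    def __init__(self, follow_list):
        self.follow_list = set(follow_list)
        self.timeline = []  # messages shown to the user
        self.store = []     # everything held for onward gossip

    def receive(self, author_identity, text):
        # A novel message is displayed iff its author is followed...
        if author_identity in self.follow_list:
            self.timeline.append((author_identity, text))
        # ...but stored for relay either way: every node relays for
        # every other node, follower or not.
        self.store.append((author_identity, text))
```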
- -Gossamer's follow list is _purely local_, and is not shared between nodes -even if they have identities in common. The follow list is additionally -stored encrypted using the node's identities (any one identity is sufficient -to recover the list), to ensure that the follow list is not easily available -to others without the node owner's permission. - -Exercises such as [Finding Paul Revere](http://kieranhealy.org/blog/archives/2013/06/09/using-metadata-to-find-paul-revere/) -have shown that the collection of graph edges showing who communicates with -whom can often be sufficient to map identities into people. Gossamer attempts -to restrict access to this data, believing it is not the network's place to -know who follows who. - -## Verified Identities - -Gossamer allows identities to sign one another's public keys. These -signatures form _verifications_. Gossamer considers an identity _verified_ if -any of the following hold: - -* Gossamer has access to the identity key for the identity itself. - -* Gossamer has access to the identity key for at least one of the identity's - verifications. - -* The identity is signed by at least three (todo: or however many, I didn't - do the arithmetic yet) verified identities. - -Verified identities are marked in the user interface to make it obvious to -the user whether a message is from a known friend or from an unknown identity. - -Gossamer allows users to sign new verifications for any identity they have -seen. These verifications are initially stored locally, but will be published -as messages transit the node as described below. Verification is a _public_ -fact: everyone can see which identities have verified which other identities. -This is a potentially very powerful tool for reassociating identities with -real-world people; Gossamer _must_ make this clear to users. - -(I'm pretty sure you could find me, personally, just by watching whose -identities I verify.)
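Those three conditions amount to a fixed-point computation over the verification graph. A sketch, using the document's placeholder threshold of three (all names are assumptions):

```python
def verified_identities(held_keys, verifications, threshold=3):
    """Compute the set of verified identities.

    held_keys: identities whose private keys this node holds.
    verifications: mapping of identity -> set of identities that have
    signed it. The threshold of three is the document's placeholder.
    """
    held = set(held_keys)
    verified = set(held)
    changed = True
    while changed:  # iterate until no new identity qualifies
        changed = False
        for identity, signers in verifications.items():
            if identity in verified:
                continue
            # Rule two: we hold the key of at least one signer.
            # Rule three: at least `threshold` verified signers.
            if signers & held or len(signers & verified) >= threshold:
                verified.add(identity)
                changed = True
    return verified
```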
- -Each Gossamer node maintains a database of every verification it has ever -seen or generated. If the node receives a message from an identity that -appears in the verification database, and if the message is under some total -size, Gossamer appends verifications from its database to the message before -reinjecting it into the network. This allows verifications to propagate -through the network. - -## Blocking Users - -Any social network will attract hostile users who wish to disrupt the network -or abuse its participants. Users _must_ be able to filter out these users, -and Gossamer must not provide blocked users with feedback that could be -used to circumvent blocks. - -Each Gossamer node maintains a database of blocked identities. Any message -from an identity in this database, or from an identity that is verified by -three or more identities in this database, will automatically be filtered out -from display. (Additionally, transitively-blocked users will automatically be -added to the block database. Blocking is contagious.) (**TODO**: should -Gossamer _drop_ blocked messages? How does that interact with the inevitable -“shared blocklist” systems that arise in any social network?) - -As with the follow list, the block database is encrypted using the node's -identities. - -Gossamer encourages users to create new identities as often as they see fit -and attempts to separate identities from one another as much as possible. -This is fundamentally incompatible with strong blocking. It will _always_ be -possible for a newly-created identity to deliver at least one message before -being blocked. _This is a major design problem_; advice encouraged. - -## Gossamer Network Primitives - -The Gossamer network is built around a gossip protocol, wherein _nodes_ -connect to one another periodically to exchange _messages_ with one another.
-Connections occur over the existing IP internet infrastructure, traversing -NAT networks where possible to ensure that users on residential and corporate -networks can still participate. - -Gossamer bootstraps its network using a number of paths: - -* Gossamer nodes in the same broadcast domain discover one another using UDP - broadcasts as well as Bonjour/mDNS. - -* Gossamer can generate _locator_ strings, which can be shared “out of band” - via email, SMS messages, Twitter, graffiti, etc. - -* Gossamer nodes share knowledge of nodes whenever they exchange messages, to - allow the Gossamer network to recover from lost nodes and to permit nodes - to remain on the network as “known” nodes are lost to outages and entropy. - -### Locators - -A Gossamer _locator_ is a URL in the `g` scheme, carrying an encoding of one -or more network addresses as well as an encoding of one or more identities -(see below). Gossamer's software attempts to determine an appropriate -locator for any identities it holds based on the host computer's network -configuration, taking into account issues like NAT traversal wherever -possible. - -**TODO**: Gossamer and UPnP, what do locators _look_ like? - -When presented with a locator, Gossamer offers to _follow_ the identities -it contains, and uses the _nodes_ whose addresses it contains to connect to -the Gossamer network. This allows new clients to bootstrap into Gossamer, and -provides an easy way for users to exchange Gossamer identities to connect to -one another later. - -(Clever readers will note that the address list is actually independent of -the identity list.) - -### Gossip - -Each Gossamer node maintains a pair of “freshness” databases, associating -some information with a freshness score (expressed as an integer). One -freshness database holds the addresses of known Gossamer nodes, and another -holds Gossamer messages.
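One possible shape for a freshness database, following the selection and update rules this section describes; the maximum score and all names are assumptions:

```python
import random

MAX_FRESHNESS = 100  # assumed ceiling; the design doesn't fix a value

class FreshnessDB:
    """One of the two per-node databases (node addresses or messages)."""

    def __init__(self):
        self.score = {}  # fact -> integer freshness

    def pick(self):
        # Random selection, weighted towards fresher facts.
        facts = list(self.score)
        weights = [max(self.score[f], 1) for f in facts]
        return random.choices(facts, weights=weights)[0]

    def peer_already_knew(self, fact):
        # Receiver knew it: both ends decrement its freshness.
        self.score[fact] -= 1

    def learned(self, fact):
        # Novel fact: stored at the freshest possible value.
        self.score[fact] = MAX_FRESHNESS
```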
- -Whenever two Gossamer nodes interact, each sends the other a Gossamer node -from its current node database, and a message from its message database. When -selecting an item to send for either category, Gossamer uses a random -selection that weights towards items with a higher “freshness” score. -(**TODO**: how?) - -When sending a fact, if the receiving node already knows the fact, both nodes -decrement that fact's freshness by one. If the receiving node _does not_ -already know the fact, the sending node leaves its freshness unaltered, and -the receiving node sets its freshness to the freshest possible value. This -system encourages nodes to exchange “fresh” facts, then cease exchanging them -as the network becomes aware of them. - -During each exchange, Gossamer nodes send each other one Gossamer node -address, and one Gossamer message. Both nodes adjust their freshness -databases, as above. - -If fact exchange fails while communicating with a Gossamer node, both nodes -decrement their peer's freshness. Unreliable nodes can continue to initiate -connections to other nodes, but will rarely be contacted by other Gossamer -nodes. - -**TODO**: How do we avoid DDOSing brand-new gossamer nodes with the full -might of Gossamer's network? - -**TODO**: Can we reuse Bittorrent's DHT system (BEP-5) to avoid having every -node know the full network topology? - -**TODO**: Are node-to-node exchanges encrypted? If so, why and how? - -### Authenticity - -Gossamer node addresses are not authenticated. Gossamer relies on freshness -to avoid delivering excess traffic to systems not participating in the -Gossamer network. (**TODO**: this is a shit system for avoiding DDOS, though.) - -Gossamer messages _are_ partially authenticated: each carries with it a -public key, and a signature. If the signature cannot be verified with the -included public key, it _must_ be discarded immediately and it _must not_ be -propagated to other nodes. 
The node delivering the message _may_ also be -penalized by having its freshness reduced in the receiving node's database. - -### Gossip Triggers - -Gossamer triggers a new Gossip exchange under the following circumstances: - -* 15 seconds, plus a random jitter between zero and 15 more seconds, elapse - since the last exchange attempt. - -* Gossamer completes an exchange wherein it learned a new fact from another - node. - -* A user injects a fact into Gossamer directly. - -Gossamer exchanges that fail, or that deliver only already-known facts, do -not trigger further exchanges immediately. - -**TODO**: how do we prevent Gossamer from attempting to start an unbounded -number of exchanges at the same time? - -### Size - -Gossamer must not exhaust the user's disk. Gossamer discards _extremely_ -un-fresh messages, attempting to keep the on-disk size of the message -database to under 10% of the total local storage, or under a -user-configurable threshold. - -Gossamer rejects over-large messages. Public messages carry with them the -author's profile and a potentially large collection of verifications. -Messages over some size (**TODO** what size?) are discarded on receipt -without being stored, and the message exchange is considered to have failed. diff --git a/wiki/gossamer/mistakes.md b/wiki/gossamer/mistakes.md deleted file mode 100644 index 23b731b..0000000 --- a/wiki/gossamer/mistakes.md +++ /dev/null @@ -1,81 +0,0 @@ -# Design Mistakes - -## Is Gossamer Up? - -[@megtastique](https://twitter.com/megtastique) points out that two factors -doom the whole design: - -1. There's no way to remove content from Gossamer once it's published, and - -2. Gossamer can anonymously share images. - -Combined, these make Gossamer the _perfect_ vehicle for revenge porn and -other gendered, sexually-loaded network abuse. 
- -This alone is enough to doom the design, as written: even restricting the -size of messages to the single kilobyte range still makes it trivial to -irrevocably disseminate _links_ to similar content. - -## Protected Feeds? Who Needs Those? - -Gossamer's design does not carry forward an important Twitter feature: the -protected feed. In brief, protected feeds allow people to be choosy about who -reads their status updates, without necessarily having to pick and choose who -gets to read them on a message by message basis. - -This is an important privacy control for people who wish to engage with -people they know without necessarily disclosing their whereabouts and -activities to the world at large. In particular, it's important to vulnerable -people because it allows them to create their own safe spaces. - -Protected feeds are not mere technology, either. Protected feeds carry with -them social expectations: Twitter clients often either refuse to copy text -from a protected feed, or present a warning when the user tries to copy text, -which acts as a very cheap and, apparently, quite effective brake on the -casual re-sharing that Twitter encourages for public feeds. - -## DDOS As A Service - -Gossamer's network protocol converges towards a total graph, where every node -knows how to connect to every other node, and new information (new posts) -rapidly push out to every single node. - -If you've ever been privy to the Twitter “firehose” feed, you'll understand -why this is a drastic mistake. Even a moderately successful social network -sees on the order of millions of messages a day. Delivering _all_ of this -directly to _every_ node _all_ of the time would rapidly drown users in -bandwidth charges and render their internet connections completely unusable. 
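Back-of-envelope arithmetic makes the problem concrete. The figures below are assumptions for illustration, not measurements:

```python
messages_per_day = 5_000_000   # assumed: a moderately successful network
bytes_per_message = 2_000      # text plus profile plus verifications

# Total-graph convergence means every node eventually receives everything:
daily_gigabytes_per_node = messages_per_day * bytes_per_message / 1e9
print(daily_gigabytes_per_node)  # 10.0 GB per node, per day
```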
- -Gossamer's design also has no concept of “quiet” periods: every fifteen to -thirty seconds, rain or shine, every node is supposed to wake up and exchange -data with some other node, regardless of how long it's been since either node -in the exchange has seen new data. This very effectively ensures that -Gossamer will continue to flood nodes with traffic at all times; the only way -to halt the flood is to shut off the Gossamer client. - -## Passive Nodes Matter - -It's impractical to run an inbound data service on a mobile device. Mobile -devices are, by and large, not addressable or reachable by the internet at -large. - -Mobile devices also provide a huge proportion of Twitter's content: the -ability to rapidly post photos, location tags, and short text while away from -desks, laptops, and formal internet connections is a huge boon for ad-hoc -social organization. You can invite someone to the pub from your phone, from -in front of the pub. - -(This interacts ... poorly with the DDOS point, above.) - -## Traffic Analysis - -When a user enters a new status update or sends a new private message, their -Gossamer node immediately forwards it to at least one other node to inject it -into the network. This makes unencrypted Gossamer relatively vulnerable to -traffic analysis for correlating Gossamer identities with human beings. - -Someone at a network “pinch point” -- an ISP, or a coffee shop wifi router -- -can monitor Gossamer traffic entering and exiting nodes on their network and -easily identify which nodes originated which messages, and thus which nodes -have access to which identities. This seriously compromises the effectiveness -of Gossamer's decentralized, self-certifying identities. diff --git a/wiki/gpg/cool.md b/wiki/gpg/cool.md deleted file mode 100644 index ae5962c..0000000 --- a/wiki/gpg/cool.md +++ /dev/null @@ -1,67 +0,0 @@ -# GPG Is Pretty Cool - -The GPG software suite is a pretty elegant cryptosystem. 
It provides:
-
-* A standard, well-maintained set of tools for creating and storing keys,
-  and associating them with identities
-
-* A suite of reliable tools for encrypting, signing, decrypting, and
-  verifying data that can be easily assembled into any combination of
-  integrity checks, authenticity checks, and privacy management
-
-* A key distribution network that does not rely on hierarchical authority
-  and that can be bootstrapped from scratch quickly and easily
-
-While GPG [sucks in a number of important ways](terrible), it's also the best
-tool we have right now for restoring privacy to private correspondence over
-the internet.
-
-## Code Signing
-
-Pretty much every Linux distribution relies on GPG for code signing. Rather
-than using GPG's web-of-trust model for key distribution, however, code
-signing with GPG usually creates a hierarchical PKI so that the root keys
-can be shipped with the operating system.
-
-This works shockingly well, and support for GPG is extremely well integrated
-into common package management systems such as apt and yum.
-
-## Source Control
-
-Which is basically code signing, admittedly, but even Git's support for GPG
-is great. Tools like Fossil embed it even deeper, and work quite well.
-
-## Email
-
-GPG's integration with email is surprisingly clever, follows a number of
-long-standing best practices for extending email, and does a _very_ good job
-of providing some guarantees that make sense in a not-terribly-long-ago view
-of email as a communications medium. In particular, if
-
-* who you talk to is not a secret, and
-* what, broadly, you are talking about is not a secret, but
-* the specifics of the discussion _are_ a secret, and
-* all participants are using GPG on their own mailers
-
-then GPG works brilliantly and modern GPG integration is very effective.
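The first two conditions fall directly out of how OpenPGP mail is framed on the wire: only the body part is encrypted, while the envelope headers stay readable by every relay. A minimal sketch of the PGP/MIME message shape (RFC 3156), with made-up addresses and a placeholder instead of real ciphertext:

```python
# Illustrative only: the MIME *shape* of a PGP/MIME message (RFC 3156).
# Addresses are invented and the payload is a placeholder -- no real
# crypto happens here. The point: From, To, and (usually) Subject travel
# in the clear; only the body part is opaque to relays.
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart

msg = MIMEMultipart("encrypted", protocol="application/pgp-encrypted")
msg["From"] = "alice@example.org"   # visible to every relay
msg["To"] = "bob@example.org"       # visible: who you talk to
msg["Subject"] = "Friday plans"     # usually visible: roughly what about

# Control part required by RFC 3156.
msg.attach(MIMEApplication(b"Version: 1\n", "pgp-encrypted"))
# In real mail this part holds the ASCII-armored OpenPGP ciphertext.
msg.attach(MIMEApplication(b"(ciphertext placeholder)\n", "octet-stream"))

wire = msg.as_string()
```

Everything a mailer needs to route and thread the message survives encryption, which is exactly why GPG fits email so cleanly when the conditions above hold, and why it falls short when who-talks-to-whom is itself sensitive.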
-
-These assumptions pretty accurately reflect the majority of email use up
-through the late 90s and early 2000s: technical or personal correspondence
-between known acquaintances.
-
-The internet has moved on from email for casual correspondence, but that
-doesn't invalidate the elegance of GPG's integration for GPG users.
-
-## Distributed Verification
-
-Even though GPG's trust model has some serious privacy costs and concerns, it
-works as a great proof of concept for CA-free identity management. That's
-huge: centralized CAs have even more onerous costs and worse risks than GPG's
-trust network, while offering less transparency to help offset those costs.
-
-Others have written some pretty interesting things on how to improve GPG's
-trust model and make it less susceptible to errors or key leaks by
-small-to-middling numbers of participants. [This
-post](https://lists.torproject.org/pipermail/tor-talk/2013-September/030235.html)
-to tor-talk last year is probably the most complete.
diff --git a/wiki/gpg/keys.md b/wiki/gpg/keys.md
deleted file mode 100644
index bf3f714..0000000
--- a/wiki/gpg/keys.md
+++ /dev/null
@@ -1,727 +0,0 @@
-# GPG Keys
-
-If you've read [GPG Is Terrible](terrible) and [GPG Is Pretty Cool](cool),
-and their references, and for some reason still feel the need to use GPG, my
-key fingerprint is `77BD C4F1 6EFD 607E 85AA B639 5023 2991 F10D FFD0`. The
-key itself is below.
- - -----BEGIN PGP PUBLIC KEY BLOCK----- - - mQENBFOWElgBCADSFR0SmdJX5yOFjejxTpjdyc2UwjglM4WqFNne7C9rYkbLGj8U - y6aVdLop4kFdiZrtuAyJrZnKawZglMar6erBgoNXe3vrbEzopPI1Uev/kY7UHSR+ - dA8EYw50/FOvDYlrJxntvIEfNYskIvhS+c8Y0HSrK9VnKfkfi7hYJP+93sqP/4Lz - oCnCWQCJSOaOdpora241/bsEU7w8MCiexCdm2NaPc6q445K5XAO5CoLkTwcJxJHM - xbPH7prSgqdDz5Y00hUDqm+ByLCMVyAFu4/6sEMWZMaOIIEh0a/kpD+xJVkXKszh - 5SsLNZ5oADj9DWHvFoemj1gOixzYlEMdqL3PABEBAAG0IE93ZW4gSmFjb2Jzb24g - PG93ZW5AZ3JpbW9pcmUuY2E+iQFABBMBCgAqAhsDBQsJCAcDBRUKCQgLBRYCAwEA - Ah4BAheAAhkBBQJYVJfKBQkFr8fyAAoJEFAjKZHxDf/QyYYH/jyWWeqoNx6R8RV5 - 7ggEv9tjTS2xUADjq6+iJPjLE1rRsH6QvbNWT3VLgP7U4sVGkWW0wWqZdvl9YPEs - Xi5mlHIiMg7drPxyWhYuwylfHbK9zGBIXf3vvc7w4QFAAVrQJYI/jyNqlBf3E2PL - y5uxWqQL9eVpRFiVsrKImZC9QHBcytwcvtEYJh3MpbqNSzozrAU6uohIQHLdv0tY - qjVg3p6adQVrMzZNfDaCsupJ7vmkNuPNRm8m/OgKU2AdgAlY2Yk4o7w7cFlmFXh+ - MV6lNyemBnpheHcCzVdVwqK9NOnrlikPKqCO7LH5jZuePgNJa8UtPu9zcAeqSXzX - VR4IkFGJARwEEwEKAAYFAlgqSLsACgkQcKkNVKzCJH7jUQgAwjEMDckA0L9maZuj - SdR1EtMk9fGamTOxRvMbU/JPhVU8EV1A5fTHTeSJY9qOGHzO5x1cU7crBoZ3Yo+w - kVkMlLskeqVwCdWUbqJWpPqGkeNce9zdisiWulCDeLzftEciiX1dwiAiXnbp7K4q - /RNCGy5kJjxwOV0+lc3uXRVNCY7k6Bp2bPdx7bEzPOHvXEeoZRp/rzzA7kSr31RL - lUnVRqggzfIlw2/HBHy+cCxUeEPNB9XwTLPZf18Q04QGrPO4popZgw0hRwUzWFWR - 4zkKS63ClP5aSUVXAK4Epw0GmziASVTybDd8b/Y/HSswwTrJf3t6rMvKp13y2FmX - ohhSJLQpT3dlbiBKYWNvYnNvbiA8b3dlbi5qYWNvYnNvbkBncmltb2lyZS5jYT6J - ARwEEwEKAAYFAlPySowACgkQ0rGZnxxR0iPuMggAhqa39V0Cv0EHzfmScxDjxBBq - pCvUqum8vxAIxZ6f+VyAwHPkVIxmY8iauhApqSMqo1slXk4so5kZDE00bE5I/pZJ - tQQMKI/Unhd11MKiA8fZ48vSD7JiOho7bi5jpUiM1pj2mV/wNCoPeCRxkFfRCNVP - 6rczG+EUbw7HhL6RcNXmphcMuIE9kzUm/U8NjvvsyXonVm/jPmi95opVl8KN0nuH - TRBD6ecoFYh4Tiuqo3C0KczRKhdPnOWiBkp0Ir2ZMXOZMxpWYBKNg7vFaEjLdtLL - Yp79zIK6nbN1z8iEsmKjgP/YdGYNRGwyPwyEgk3h4nKa0bDZVBgHfTSTRxJRa4kB - PQQTAQoAJwIbAwULCQgHAwUVCgkICwUWAgMBAAIeAQIXgAUCWFSXywUJBa/H8gAK - CRBQIymR8Q3/0KGOCADGpgiaNyjTlrgI6oPJ8B/YN6igP5Rtx7OKrc7f8/eD4WpS - UA0o1pctyL8i3jYg7iFKrOugs6+b8uiKhhuSXBNxXURAeixXAHFRBh4N56X7mnK6 - 
c2gtlwwLzqU+DB/EEqvAax+gsZk/cen7NhtbpslWzGsnobIm66lSLwcw131Q1A+W - 9Z+/c4Cfq95UJg/nbz15rhH5+yRnuOYaUIu7k+yh/PUiMQPiJYR2WDGjz28oCc3V - BK6QqJdWzqKLUQoNe4as8zJwQb/WtMmgg+NTI7gll5Rb+o/1YpsSK7HGO2GdbZiJ - M/h3BH4lk7OKWVcV92+WeuWHLdUdf5KD7mH3C4r8iQIcBBIBAgAGBQJTmeWZAAoJ - EBor3qpo/bNM9KEQAI1oQJpYLipRP7lImxU9TeyR5Z36OnsVWTU/b4hTqkhsHrkX - ThD68Ntg9HpZWO26qjEt3kNBhxOxSU4TuTUtCV6XcVT/w8fRB+E7Dr9UycWmmtie - z1AhJBW69BQeCXqRYyHYKIlwEHRdCd3sPBF91Hv2w6vpQc0FkZhtwqOCrruqrYfu - ga4F/MCKFjHRlok8EZvZAL0R/LbFicQpYdp4jkWmONxpkp+0B7uQMv1PTMx0SJwf - m3Ftg0NUHmAE3KGEdqQFqSWmFvauedu3CyTikRo9ir943HC/9VwnKnJKVGT54IIc - OVZSFuBGWavjrTQR0U2eYIXLesDidhf2Qu78ljMHCPhUI2QlgPJMJ1IKe/zOgsn9 - sEVZQSZfQ+/PqAgRDZOvkPri3d2fh8gdx0NcDJlvjGPSMmgb55sOn257ObX8oVCj - s3jnJi9ZdkaEznXlhBnT2dQpOCMTW/kfk1OESzDymfRrqaL6Zwer9XwC7n8RDCH8 - KHYoSq5D8xKMKwY6gk6BBNz5evOaFSR3a6TFU9e9cTwb7+SFP5FEazoW21ztvyin - 9O5N1kIor7GyTIDOYB9zsEFFvFVyRCzvsHewUmDOAWnE064fBWQk3RlBJN7zgNU3 - zRvwkZeue4J/0RhCIAXhXD0lWissuJ/3334lnfCUaoNNap5S0SXNl8/fx+2ZiQEc - BBABAgAGBQJUzHTeAAoJEGCvA2jnd1ioi9cH/2MYWK/Ab6zS0x1gHUcno53FG7lw - 6cFFk3DEk/XhVDR1CVb5im9xC7SJVvSEDcv/zR/DivHW6XnKvFLpyFRRwLUrPuEh - qlZra4yW5RlgB+8ZO61n2DW5EnNbz4OQxg2jBzjCm6msBmrGFwxuNxKiaFE4GmGX - ujyRyufZ41sIiRX+gQT/xiJ7TW+sNoGypsfQ9weU/RYrTl2zAXtVhks5rtHNSdNd - NFF7ATlfDM6AEMxaKBFaUm/C1yABdRNPMfrwbyCY3lNlXDkwzCso5J8A4tDQf5H+ - x18r/OotStOS3/9F3J52Hiy6s0nG1sWAeG/eLNnHiBQGSZnOAps9eqEZ+qOJARwE - EAECAAYFAlTn1uwACgkQlz1WD6hOlS4R6Af9EgA1J68QFmaeUKwugnsa9hNVHsmi - vQFQAVyskE2PMX0jjQXYl7mZSuuy4dvjpYag2qAe5awMX2H7m5UCnfZnK1U+AOTN - 8ULNg6lxHvpSiKilezxS4ofwiVwsNmY24wYHSBzeN4phm+HsIV4l22g4pqqKHj9s - KYVrYPw2p/+JGpmWhBTH1CzCjgOuz5LJoi+SHnuuHynTrHFEPzE36Dd6cOid3NEh - 3/J4B3WmLmZxUHkDxqKIp+6aGvHYwy5Nu5V57foSqt3W74viL4VFuKIDlngRwIBL - UKvnUMtHYhHWLKxDchiShq8yLoFXtjg1lGKVY4nDYXxmnPQWhclREEO0bYkCHAQT - AQoABgUCVPimjAAKCRAJzo2aIsl3RN0DD/4lYH2F2X/pKU6VLx6YwC8MGbJadoE3 - Wt1w8gC+DbvWgngmKqVvNKIP/dDUu+Y+Gd9qsfcZm1+blnN2Evab/QxoWGZliz1y - 
veAqaDhTXMGOilFIwCaUvUQqANebSpWwtsCs6I2xVpqR1X63GkOP5bckqfoxFS6S - WcMNFqNwp77/5okQblQG1Gx9hrh85j3Yjl+kWHM+Nk5q7jv+HHJ9K2irQHRjD7F9 - TR9fGZGc0tb1cLwGdOcYB1HNYEA+3F+DpHt/c/dfMs8BEm0E1P+vCCU9o4Me9b29 - RoRs34RlQcvan6hIPgLjov7mBScTnl/1KoypqTcNCk0VGCnH3suWwn3UxHo2G4B6 - BVTux9bW1Idnus/MXUO9OY5XH4RQe74x5MlOvdzW3VN4fIhjTj03GPbAqAqJP624 - RWpmXPWhQFpk4B9aI53wD9mTOdtVu5x6THFxOxrbbh7IstDF38Wwmsv3Jh3fjxrz - 7WymOEs1owjZLtXQW9Mupyf05Owe4ilkVa5C8KIMYH7D7D8DEefdlHuuF3Qe81+w - 9gBqmfK7URbMfZ/RR3Yrnhru4XLmA1icnA2Ndti4cd3DeoLcqt0Xpgj8VP8WziMj - rzlo2kjhyH818nb0zuEAdcSwGLs2B8GXWTQ/klSzjJO4JQCI5kcbMGvKeP7jnpP5 - tnY/MAREH+/p+IkBHAQTAQoABgUCWCpIuwAKCRBwqQ1UrMIkfr6cB/9ZJjd7z4tV - jFXHNzLZJDfo1vGM3kGzC1DRPkwJ4LbsPApjiK6Y4QnIJdFCWOWTWIyu18tde4KJ - BFJ+mPiznYO95EvtUy0/FkMwKweNeld/HSops0MwFFue5P3wkHPq2Sk34nN19sos - 25Ht8rIrtNw2l8EzrszhF4fhRA7XsBiCwF0diTLd3Frf0FZdMVSxbP+0to/wWI7m - Hu2/v45piOScUbB+ZN1dCBlL2htpuAmJoYFENUCGrZ008Hox13LZjDuRaW0K4ZEh - iCIbZuY4yDWTcB6KVT/A+rULvX5RfuGx01+/LpIL7Ilj2vpP+OKx2aSNduBfEu70 - 9WlQU22XuqjStCZPd2VuIEphY29ic29uIDxhbmdyeWJhbGRndXlAZ21haWwuY29t - PokBHAQTAQoABgUCU/JKmAAKCRDSsZmfHFHSI2n1CACDZwumtVBJ6eGS6rySvj+l - gH0HDX0KpdL/I0a3N7a/G/xGgJLBdn/vBaSNnvr8jAbdUXKMTMbd3Y0VxDFFS0u9 - lbiryjaljCaNFydXK+auY4HzE0PoOfTb5iPMyfXtO3e+CmyAfhP5Q5Tz+HP8gWhi - 1IvC1UGbfe0VTepUgssw02WHEb8zZ3rOvDpCBFdXYUvwbw1lkUyvlrc3hBGsOrWq - IqlRO8TR2tgZakThkfgVIZi+ZfZKfmSgHzjnb/xMXF7hT/qdCkcSklxSGUTP434n - xwVhhr1dEQYP+soaMliusuBO05wJoxQvtav9FGNdtteAi24RT+aT9JYQ2UtBXxMY - iQE9BBMBCgAnAhsDBQsJCAcDBRUKCQgLBRYCAwEAAh4BAheABQJYVJfLBQkFr8fy - AAoJEFAjKZHxDf/QS0YIAK9XOikWp1XpNdzxQprZ8/xBzSmK1v/Z4EL8+zFQPOLg - HSj5CRJ5qfw30xo2IWUs8+W+Eq3XQMte3Hw1jt8KxQH9K5Ec3Ahz86/4EXRkoXw+ - +WWypPMtyEbIyyU1dbtzj4mbCO5vQWFBZZ6talaOT0Z0pjKA1DmOruNDgdvlgnb5 - ASIqFnbTUqKRWB9VmL3+qxvQZi3sNOB5VatpbKUWrC/xFi8fVb8nHgjOlXmj0EdN - NSneC4mf15a0IQ6tuUoGMLoGDNfqrFZKHGKyGtJ4ZvqS2UY3RoMeUneC+gPsfn7o - WEKf2mhUZ8F3PBia3rEmtwbra8HI0oCbDj0llpV1w6SJAhwEEgECAAYFAlOZ5aYA - 
CgkQGiveqmj9s0z0Fg//cnnBLqytM02vmj7ltvJeEjdOjwooa8TXGLnkjWqdRtzP - CPsCeUWuIJyvsoJDxv5eGkI3MRD5b+2JxE2C+iU6NKWE1tbaPnvVn61RgLvi2mDo - 2j5nHa/AfrLubONWFydwZTc5tsNFvURr7zGWIoacG01YApHu0JpmUkrZA2nnW/lp - Po/1cHVFqvWge5jnJ/4ZBpEhnFedfRe9DYnpKQvsr/EzZtkCklrL3LjnrrdSg2id - 29VBWWyG6/SvEl2l53c2Cejo2DNrWBhr1PLslHhrl2g19xqlf5h+jeqqeLXSzEbE - aAJWhH+uk8xHGfrEQ68jWnS1Dd+aSq/6A/MCIV858b/NamZ+RX6qwGs21f9WXZHE - hGCcSiQFHr7vK5YjzpirTt1giZFmQ30duPNB3SsQ6MwNWFz4cXtOiSSUE0bJyjNo - NDMOh7jPsnbJEfNM5eKzUPRZvk6fMqsCIvPc5JUtJOvhSPdrvK7SREDgze79u51s - f2sZ79yZ9ryNrCHsnu2heQxnC5PgTTXXULg93I+aAHPIgVOk4SG+/bpluOpou0M+ - teAqvUHtaKVv6+EIebkAhgVpH50EvxMgI2N9ivU7SW6hAz1FbPOEZAd2uJhs95Ab - kUxJPG5ETEAy5JOBdmX2BlJ91PVDt+jF7QB/NH57y7Or49b0dietE1YsqvyH8fOJ - ARwEEAECAAYFAlTMdN4ACgkQYK8DaOd3WKjK7Qf9EXsoNPndlKjUkzxRe3zFZ+rQ - mqjI9mz9VQrsoFsYctDvCIel//ScsG3pQT+9Jmp2j7a/HhrxDwTdOdWR2za7DdfI - M4XtaiVFwboltFx9l9a1X5u+1xUgv7xi9+GHIHxfT5FOI/Bquamu6S87o/kYXq6d - 7ek1nfrsveEfzpCzI9jpiovy1KgupGR1w7dKIOvAaqWDcwRM//zvuZudeXziGjGc - rGoNL/FQbtVP8haC6ESVugEZcuppV90AbJ9i/sybmNx5O0/7FDuLAYtEUbzeMmZh - e7OA4FjwlowXi+mYYy+76jbIGq0maaU5h9vIS1g46Tl1lrIsDubHwe/5LGtsTYkC - HAQTAQoABgUCVPimjQAKCRAJzo2aIsl3RL1HD/47xUwcFIpH1GHyro3U8BYXg7Jr - 4za1fD9oGHhQkCAUFJt/H1MtCb0QOXHWL20jDJayB3+3UcCp27zoA0fZC+ErsX4n - RQVw6Dqx/Lv24tW3hhlcTlBTlTwoZ5ly8jNBoDeijEmaNicXQsZxvDor3eydWgX4 - F4ZN6zR8qZ/yzUuCEjboimuFX64tkL8olX595UTDVcvKsAlpbGz55Ei5+5+dhN0i - W/wOj+zB14mYk4QhFGPl05Uc/7gz2q22acUsiRf/hF2Z4OVBHuuLOZKzhs7jF6Ro - Bg9yP/VnejwKAgcKxK7o1EM7C6+WGijJi6Z+Ahv0z4pGJxEPISKoFGhxV+t2XV9J - ume23f3rUUW4L7/J3SpechK1VT4OF8ZSCvjqmVV2gTaTAID8gNYOLpuP9gq3KzEU - Kws7eLnEmlEJTmUvJCatnvNKBy+mIIWER6Ocz951njWQaGY8d9BwhVfLrCmMtgsf - dSf1iTggOwECI2ZP+AJwqEwF5qb1jWKfHh2JLUHhyftBcVzt5wGT4ZlpeDC/NO3a - Uhu8jOTcCi8ieTwZnzBvD7NguLyRFW59S7RlAiyhqcJA4pKDnDb/XFBGuOz5lu9Y - 8LCtlD6U3hbagGST2ByBtzUJYKFtD5gBYgnpQsQ9XEXPrb+LHb+bkf1tFiDSXXg4 - ecOOM8DBorw4sfy5rYkBHAQTAQoABgUCWCpIuwAKCRBwqQ1UrMIkfp8pCACg/zeJ - 
xs8x5VCJIGIefR8NxhSl4cUgS4SSUA6GcZV41abmpBV8qS5i8Tfz/ucczumWZSGH - hh1bp3SbolttDmSEpkFcNvH7wBS0Ae2g5FJKsRDGyTN6GbexIIK5jbgbkpKZPSGa - TOC2GYwpV/21jbeJwG9yb/3kj+XN167vSiXkQ8zxPIt0APCBCm0FTiuB1BYTx6IP - dxWlh/8Qj2Zla4ECFGye+8Ajr44AniDx2Ca2R39DiALkMvsFV0DdXkODYhwexQOK - eHFiVSkDSGk5DtfDwywwjj19mHTcc+LauZ8irZGsS360lf8xBVHi2LNJWAQ/Geev - srDRJn7yxhiLPikItC1Pd2VuIEphY29ic29uIDxvd2VuLmphY29ic29uQHVucmVh - c29uZW50LmNvbT6JARwEEwEKAAYFAlPyS4oACgkQ0rGZnxxR0iMPyAf+IawMC3i3 - JU6brNVXIeWjSeCAUlML10mxokf4/wr04itaVETjrVtccaaIvXcqM1pI/Kj5cXPy - urlor+sRbAedlZfo+fIu0JqhNsp0Hsu8Q6FmlpwMatjt8oF16TDOzNXOoIuzxwHB - hE+tScPQ5g+3vaoYIy0tpnfDVDh1lLyu/mL0vGzNSZHBnjTs/qvwHm4TlWaZmQl/ - cV2Z762HiHEbWOyqH2tcEsiUZmeEhcLQuWi+CPpsuwy+2kqULqoePTv7uscZafPv - nG8no+cqeHEuAmgTOufRYxpcLJgANMuIuAmv9f5wWQXX8CXpjH1VvAhaADW5cVEB - lOUJFrUC+NrXG4kBPQQTAQoAJwIbAwULCQgHAwUVCgkICwUWAgMBAAIeAQIXgAUC - WFSXywUJBa/H8gAKCRBQIymR8Q3/0HRnB/43hNzXmO5LIFXKdzc6JAG3eu4S5iXC - JzAs8DP/UAhCqQqKY+8H6RLLTbWIvhFdjmJo3zhtnEW36/5i/7tJzV3arnYGzZxp - jmuifEP2sHfQuchlKzCAr+wdEcMQm1sHI/8pQpE2B2dzGyCESrThnGiX1pi95jxs - g3xZ+BLY9Ae+jr5UPcjHIRrdyK+JXbA+zdQgf+pIcox5CzBRAghcS5Ng25ONtFFw - yDH9Ob31kv/QmYDokWRw8GWAvP+fFlv94oL/aL8AkDQsHozGMrCxE3EB1rmA8lcQ - MpVjXaLxQyWbzj8vCvgQ7VyBIgVYIZEcCB3tLBzWwS3GuCudD1cVe/s8iQIcBBIB - AgAGBQJTmeWmAAoJEBor3qpo/bNMkdcQAKG9d5/h9FRQhEBmL+klpfnJ7FsiBV8Y - H34QFsrAlz8GPnwFRcA10gAhR6vzmBOSq3GNYZsi5ScAGBFwdPrHwGBjcOTIBtln - nuv0LFcScCbNRkUWtLN0grfINBs435nTN++jMwTg6I4/4Hyzm5TeMaHORdnfYgEJ - y18YaCo593PnwdT8O4MEE4j3PlBFl+CfnfHMlckgg6+4ZkwVSv82q+1SpSoRYlZv - SvO9u1POBhLiG/2+2kyPz1RZJ/m37RGKSCj21OBCgivXLuMufUHclA3MkbeeAU1Q - q4FpHpTrzcyVHYhex2/fKxSGQ9mP7v5nmJtPPHpX0Odhoon1mSOpVHgmZ2jTbkIO - gq5doJZTm2wCFTEtOL5oZ++C3pxnX7LLcgK8ZuznVxBpqhidF3h9C0yNYOlGbQJy - 2vVKFUlv/xeWxssyeR45K4iWh9B9whtuS2Xfd8x7e1z53MZMbbpgz06yz1d0HSh7 - DwwuCWnPyjbx8ERUKQqiVbVopF+1SkaK/+qMfz5RF2jdcjGcq7MPFNEjredJc++z - 2OA88qfH8+BuLDaGwS1NANu849FT7fZcSchvnnsjufBwyYoKvDQrUCAgLPjqUYCP - 
878hCvt19LTamakQi1wTBdGv3VTMd/eT+QcbjqtBfYMXiSheh2DL6sMq0OCgJGuG - Hs/wDPQM4ymHiQEcBBABAgAGBQJUzHTeAAoJEGCvA2jnd1iomykH/3P10bjGwcnR - W1onCpHXl9a+EsedAe4xeWZLfjT9zN5sK4kNfBW6FQxFVREGBtO8iJ4BaGB07/HJ - R5GxsBoxAvfzAozmV7U2ZjUz1CKvJVolrPbeg9CVvVYTw10g5fbw7Pwi6DRaNpTU - f9k9NsY7u2tpTJd+l386mFrcMafHNqn2adkwtUxNP3sqP7BtGsPB8fDeyDz1WWoq - RpAnSwCEbEDOTIEp/7xXrekTeLxkdN8bfqi3myxQhf6kp4DVE6GY14dD162uAG9N - Kx1fMXlaB6CS+AXCN4nf930cVYJkyD/yUm9wsT0a+8ApaHnlWz7p0BcFmjhupOTF - 4Q53kG7VqxiJAhwEEwEKAAYFAlT4po0ACgkQCc6NmiLJd0TyPRAAg9t8WmNMWB7x - r9YJkU8P8NK+Uk4KfKdo7xyMNGubZQ/RyJHQx6nua8qqXXJfuf69gs/tc02g4g9C - n570wYYGgXJaPplgaoLjI/AE9pjXoC9vkPS+aMLaXVhrBJEwlXFbY9+HbajGjeBO - r3AkXtyzo/cJAsN17hGghYY0bkfylYTIZVftAnDru1tuIM9pXMI2xDP03V19CPja - /77hd/n8z0qFJBNogbRxG1eWant6vB1wCf0AGrcSqMGgJHDBtRnCxs8fbPYj02QG - bt+IRJdLfONkTjy51SGWLpE8XrPY26JrpRXqqWw7c940anGPI128to2hPNlnuXed - CKkiJbhbsE/LCIxHDwrqCmyEUvYEskn46rh51E35S4ecyHqSU24GIugUsrD3BwPa - Gu7zojtebAwmkhthe77e6lKygT1J2+TjjERfibL6pukoQvZiqPoQoilweaMBYIDQ - 3i1v3fzO+akH8m1qEoBpkLB8v9eKO4Ur6W0uVqWVgFRcKuZrXS1ErnOHcqfVf7Gy - 8GY5/Oi6FyM/1vhFo/8ZPcJjbwh/EE84zMe4W/JPQuuXh4JB02svyvD8tsd8lFdU - UqBHTpU59ajgVis5eihtEDqhDNIDFbCvH/u0bo37sFCJ/KQrIRCS4/Xnd40ROEi6 - 4b1f8y7+EjFB92cAAopEq9lC5TmMcjyJARwEEwEKAAYFAlgqSLsACgkQcKkNVKzC - JH7oEAgAoHfVdV6aRtz8bQ/Rx7KGuTGOys5MkW2o7Mh9x/baK/cPiFz9Gm+hx5y5 - 63J2f4gmTHBfyE9mRvD155t7+wjJU4EXQsu2Iej4kODBqVUikKdgFR/+xlj0zqYr - y5RC8lQzzJwZwIbibtXvco1c/CXAtbIten25Suvw5dwNjytr9yXVm+UN8hSsfpBS - r5KQzyJI5kybcXNU6qpW+a5tJRLevOzdizEO6zC5HgW8qQOmrMGsiQARMLhlVNmN - OXCrS+N20KLGKpu5HcxRnltghf6oYrG/qQUvt+1vS3kDAkesbEU0Cqo67rm1pqLD - ufyEBabpnDtXHpV/cgDDaFVMVR6marQrT3dlbiBKYWNvYnNvbiA8aHR0cDovL3R3 - aXR0ZXIuY29tL2RlcnNwaW55PokBPQQTAQoAJwIbAwULCQgHAwUVCgkICwUWAgMB - AAIeAQIXgAUCWFSXywUJBa/H8gAKCRBQIymR8Q3/0Mi2B/4hgdWpd/zN66+tFw38 - vHVJLbnYA34lD77qZja9e3m75ZvMYcO012o2OAOxee2tLdla7tQ58p3BQyolvwNe - GQX8Q6oa/0ZKkJVd8IP0LedSnQjBvSHe5HwBglvHlHFEkUHoNqvV7DeWcUH4172u - 
4itGOSLLjK/LvyStMD96M1W0OjuCLYukFS/DARJ+UAXzGuPBrUuqHoTXqen4x3I1 - j2SRwlNdptfs7yXQquU45ctTRpYkBM5tdJEF0BY+3f/n5JmwVgP8cLjUu6lFgNG0 - w0xFJg6Og0ozBI3CMCCTKFpvz6CAxunvjQR+S+2nef1v8diMs3oBwVX1V3kj+YA4 - 0BG9iQIcBBIBAgAGBQJUxotQAAoJEBor3qpo/bNMXhIP/jfJFpSzXey8VkeVkBdz - EJdUHSZtmf6aMcQiECMqiqHjh30vjZMhStBNDCrV0fTxzQEcYK9nghbQ1eeW4z4E - DCUxjtWizpqA4/3FyFojFedndSI8NO32bRRgWv/hbMksc8hHtYfIpGv5bH58NJuN - lD+yIF3s4WmUFU8BV34JW43PfF2LMCMLaa7Y/gLmSiEeooO173hde24yZF5jommE - gQzx8VTdr1GkEmiyZWhTrVDZ/jYqDeh1jVEs7gz1VsvXJN6jI+goMqxnZRhnaqEB - 8TTe0S8QssJxzyn81sHV/u2rDuXTHDMJx8VByft6VZtJsq+ei73onGt1QIO9DfLv - fcVwld1XcdobbtILc9hpXyUHirkhBj1PoHSvOlxTRlfCTP8tDkeBq05wKbJl/xRo - HEGcp2wSdv/FB+lgNvS336fv6c9I4pY94cErCusMDzePf5varM6Dj8k/FiYQ7dFY - +mAOuDfDTsyi6l2WuDMJ2gmYgoY4wj65hJUD3EeEUGcAIZtLfvLsNf+ocmZIxyeb - jv+y7BIMcGAlP4nak0n9w0DHL5ODBGlrg8EWdzZurY/hdqGqpKk9S3MgeDmGtjyq - YMbyw8EjRZhu+Xe7lyR2wJYOrfcrJlpMWoe/y/HjbJlN9nlxbjT+nXrUe8wji2AE - Wky3tgljhdDiP/J0p4PoUlH/iQEcBBABAgAGBQJUzHTeAAoJEGCvA2jnd1ioNgQH - /0J9ATZRn4S/32HE2FoE0ZHoVQV0WkP4Uo63XMnzHOCFz3kl62fFmVIuEBy2rhcd - V7Sk81T1sfIIFK3mT6OdMMHr9Tf4jj70IYfEPmNEjR2NUvNuABw4w5y6e/l7gV/6 - GAK7caK1Ln3cQBuo7ZEmm1JtbnBhrOnHBflF7OBPzTIlZF3z9RggpvlKYMIbp5P/ - TK+gWkiSmS2Ct9zXcJojZTC0p1cVwWmR87P3S3akSmRddX5EXwVqQWiFIzjBMI7h - 2pKFJgzjXg5r+/g5SuOldoyC9q+n/t/V3gLJ8bwheZjSRjxpzDLWhLgqjHA23UDJ - 2XURN66ixkBONwT78Ag17wKJAhwEEwEKAAYFAlT4po0ACgkQCc6NmiLJd0SmkQ// - ebuVkxfHHj0iJXyTaBRWgf465LvgvKeiNhdSE1+5p5+ea7mYKTBD9YRDR7luJ/3l - 8mDBrezUOdfd4auXtXBOgkB9IPCJ+XRLjMY6lfvSp0ogVY6SQJlKvmKRUsflNDsm - iALHl2uhOA1cQWDcizzkyTqe9P80bOU7nmyVFnlU/YnISvJmVCW/eicDBolb03EB - v5PuBkrMq7kDdZSvwcgz6wjg85nLSGTgw7QjOejVyvvV7cBFNlmqqtaFhbD/1HCX - d4xoftV14NPG69zmBxjJkFDP8kzFc3Yo8qm6L1OPQu0c3klr5jh9tbdoXZoCcMjQ - woLlxW4vhlhYeK4iWicmfLD7rStOhDlulBGYLxXM6HlBufVPXBDiw9eOXAc8pajv - 5/ZBgAgJBGOa9jPQ49Hn8df4B4qx19gY85CZIPyWDXbb8YN3TuHU8e2oxXTWJIOH - PY4+1Ku2qhVaq+eWng//LbyKTgmDsIODNZGFpUZlXGjf89TNHKJXXlKyKl7hcFqz - 
rVTPb3nEBukP6uIHvNjTgOYzMc64VLOMXoULeijHsIvzBq2lMEko2/WPsjPeqaZX - hzlEzYxbDOTR83myKpNQFArOHc27F6Cta1iQK4iEi8Cxh0Gd5Zs+inf244dh78gQ - +w8Zelung6eSdfUlCK73ONzwIG41vgEu40FY8seC51GJARwEEwEKAAYFAlgqSLsA - CgkQcKkNVKzCJH6sqwgAhjyYE+m3V5MQAJmwibFveQ4N0bmW3Q3o5ngoYKqzCMaV - XSptmFaxBnFDovxNtsMle579ZJI1lqQS7DWaGX9vbTVZEJmWViPfGk/f8N2XdkWY - NR+h4boP/a1kZVvDInypHTeZNuGo/EHH/uwTesXif/D+895EI8qX+lT/WilpSkVm - 1VyF/mAGLU9V1fQc8py49EqpmdNOragsoo0XlXNvOEFG6bOfj6qmr6kYNLEdtPKK - cLp0PY+bffgSHkbUekN1ur87IU6xJDkZ/jthDDpoCpzpkdz90WXoCqcb1IFDwfJc - 8DwOmzb+4siP7+o9ww6xFp49CkK7zFuDbyNIad97jLQjT3dlbiBKYWNvYnNvbiA8 - aHR0cDovL2dyaW1vaXJlLmNhLz6JAT0EEwEKACcCGwMFCwkIBwMFFQoJCAsFFgID - AQACHgECF4AFAlhUl8sFCQWvx/IACgkQUCMpkfEN/9DAdQgAsFZ5wXQjwKCn/Etk - ozOD/SEhMwG6NqQ5DomNZeCJQXjEheXRUqBTdx+cO6yJ8KSuA0+fyCbB7GT1+hR6 - h/lbbikbq5wQrFXtAChaopPioHz8RM6mmMpRXJSgCFYr6wilVSPG4RBEX0dQIulM - 1PLwU/dSDfb8o5QJVeI7bLo5PMpfcP+NzvRhGLS1kNVwNpM+c6EIHkt2QFix+Wvn - x4xHDrQDnWYM9SfHNMZ4lAt6tiiTyh7+W3qjXAWC+KFgmHLgBCDPujlELPIQycx/ - OCpdAQ/bVxyIWV3kx9XSPdcoVj7i59f7JtyfxDuTRK/fMEYJjQnHfppMT5hw96gm - bFfLg4kCHAQSAQIABgUCVMaLRgAKCRAaK96qaP2zTBqkD/9VkcEyaeYkYrU34oae - WfOR93MZS4DEBIsKjUD0QVTvVb0CLXRU+92POUCSQlqhv9nF4emrLCjDQTXgW/Tm - M6gl9UTDGsWMpmfIuTlJcQHGkqhXKN/iEgb0Xkutt+q6tnvpgX1lWOBGO6upmVqn - QYRcKVdfkArmD3tGoaIZsb/vWOgp4OGYDvGYpuaasdkLaBFGPGYEuyofkgrU8ssO - 4xq9SdGUOhgAkFrl78iTXIpe5VTOy3LoVwWSUv10JKK3Yz+LxlYDUTiXKFm/mkvV - 6NhZMDMsCyXPJcsGZlpP7fT2JrJJ5AkfPMw+6FXTuY54hBtZEze+ukrnhXd8QV7e - 149UYpMT+9OHYysLRmQ7//2HUsPGwlvQzVTzee2D/LTtFUiwt2b6aU/7yvJRTraq - CaZlotOuPM5ZlimbPUTsqahP46Y7NNx/oE/vcEAJu+C2r48gcLmf6g9IK5WUEsX8 - ZcSY/UhTECsMOqQUqmuNRRaqvujLV22V5oMKHDb8fzG0Cbgm/qP5o5RpAgCG7iL/ - xeKfi7XZJ582wpIoV4JJrGjVrzgK0Ljf/xntdCL/2hUbxM93+djJFWIaqerAkbzE - Yznt+N/ZCqSApiDwedukyiJkCPm0Zz1Dowd29g3SJsUDzroaSAEMRYkMH8EdFeOJ - FcmrMjQhQJIEypdZ5Ll3JK1v0YkBHAQQAQIABgUCVMx03gAKCRBgrwNo53dYqJfT - B/wPZvD8enoGEU4ZeXTXYQ53wYqYF13FJNikrmj8Ze+IsYuZprXJKzLRkL2DnbdN - 
W91BudibPJo0DeLiyXGA8pw2IGCllfkpa6ZtxalPJWJLAbiOmXzui/HJ2Md1tnSD - GfKCZ6MiaQQ0ceKoqOhPP7d3Vtcc5uQkzSYQu6SqKmCrjicnu+hWKAT9Iy21wvBC - LJkYMit/Bzue7NRV+PwYLdD24ZXwKfnyP9I33gcxEMIeG6L042NVUY1vsySYrcXR - sXyIvYvd2CH1FqQY1GPTcUEQbQH21v5z/PtgYv/UCckRJvEJUDE8DCF168FnflVB - 1ZHLmFNCrcLlrSKwydmuzNIUiQIcBBMBCgAGBQJU+KaNAAoJEAnOjZoiyXdE43MP - /Rf8TvMNTxLdwRL4ghHVX4GpPzfYJd1d+FvOg3Nn+H0EqFym2dJPfVxf6bXwaYgI - AICyjTXmGPGOKvHgzNMAeS5P8Myrt6VGLjt0LhTuMyxgDcuRp+thOAFtdZfXCSBP - sCbhdU3JJhfV0xoZAOarwaVhXS+brX/vnzLVF1UpGOx1pGayMBLHoMK8D3AYus5i - bblvkiyU7jDkpS4oS9YZg+7U6aA2gAassQk3k5DiC89kcPMeVQpnUFWvgl3P5qSw - 4ucatXE4nrrhaBySVNed6x0SflWubfbVZry/n9gUXd579mYxKA0RRSaRyzLCf5fk - JAFxuwbmVYDpYJcpi3QBc5b/lrG2MtRxRMeV6hF7towimT+rgu6TUWPdoKhPExrr - wScFN8Wfy/8Bg0poAWX7fWlkunKd9Xpkh9Cb41JFzS/7m8R43tlBU7Qf1c+isOb1 - pcy7Zj5q4uiymZlGZVRaLL94jvRMu5QqTZnT5llxj8KK/LMEm1BnRUS3kB3cJOPa - clcHqSx3IThcku8ft1R14dwSYmoEqiYpoTw5FayfaU179w4D1+kVwM4ZO/3hDYK8 - C+9RorSmnjW68wqXofSVJel9UdgcLXsNOnZt8oFam1KPAw5yHK8dqWMtWUTfdK2U - 6Hodz7Q8fD/miLczEvQl0EIZh65FDA+888wRpkPobYuNiQEcBBMBCgAGBQJYKki7 - AAoJEHCpDVSswiR+nU8H/jPQv/tufRQ8/pbR6iGleJupCY31T13lLDW6kJNUptZA - j/Lvth6xaUNybzXBOD0wLA88USmdVFUaLXCfb++wp3NwTk7qR/T0HvEpd1Ywg9gH - nu9zQ6+PZfs93/Sya+Z3luPsaMNKS7bD3mr2Y1eo0Maoxf1sl1pEtNF3Z76tQDIM - vBVaE9FcSukILPw/0YT30ITD2w0UQ4YCSviZK/yJOcVRR4IDrh7EBFWpoplHVPBP - Fx/l3pUJKIc5bmOE/0191Ijll2UJSXb+dJ5M+WH6MMnstrboTcSoUam9JpZFOwaL - Xn1B412yvkBx7Ho27xGYmpQYls8imUMqDBJS95CcQsa0Ik93ZW4gSmFjb2Jzb24g - PGRlcnNwaW55QGdtYWlsLmNvbT6JAT0EEwEKACcCGwMFCwkIBwMFFQoJCAsFFgID - AQACHgECF4AFAlhUl8sFCQWvx/IACgkQUCMpkfEN/9DhKQf/ZLa9qVHqVXAL3fYW - uMdZhmopLOHPxSKMRMsFl0RHwkxWJderqJou00D3vgGwom3TPxD1nH0ZAcp4GPMm - nY7Xe2fwGAv1BRjIqkp26/bLFw3w5Do3UPrgRQJ11joBZp/cymm15WdSinZ0tnQ7 - 6i98JBEIU7quiPWkuo13ByV7VswB7uV45ov7rYiWPAOzCsJ8//e0BxxNJd1CRRB5 - jqzhKHUgrc3PVqGq1pFtbN+Nt6W+4xXopcnhhME4/g6GmzHT2cbV3qj3dZKh8s5t - 0HgzJ19b6iYLDHAzrNsmSpZufmT3uri2dzPTyu7FJ2CcnKOPmYNih8v1IaAmbwnL - 
/cnTkYkBHAQQAQIABgUCVMx03gAKCRBgrwNo53dYqGY2CACXJYu+FB199mm6EvRZ - 3srLPyEHRbFcAFCvTdYLC3MJjl35LC+OkjtG8dVHE4tzPoIhJdMNNVVBfNxhfIQ1 - JBLGqJFQRo6Xh2WcZdDH+HopVDhqQG7+Euj+fPm9+I3Rg+pr3wa1PYnVIvTBFfrD - oq0yo42ZOEqiJViluXf2+N/g+8bOam1S8xIWWrvEdRDQobJ/Jr1l0s5nCqaGifAh - 0vwwhcBmoWkhdo/1rPwhf15yv0qrTZTs/2E2Wsz0bxCJJ0IR+ywlLdg9BGyjQ2MQ - V0ojJMrCX6FZjK8wmq1HIzpciLyBtRIi7ka+pmoyRE3xt6JFx+DaCW9d5xxC2gt4 - rs0XiQIcBBMBCgAGBQJU+KaNAAoJEAnOjZoiyXdEUssP/iDiNuIpIazdEYOnbEHX - ypuIqB5cC1eHYoIb6b6wkMn2Rtkpa2nZA0kjl4yRC4QeZA59QnbJha3pdSeZslUm - J5cgTGHA/luzR2TmcEpw/DJB5mGqbFTG2uRGWCIqVx2w4+ACAt4GuqEibV28YB7w - AJSBIWltrvJtwprmDi0NylSoqT80Ky8fXsU0lN/bnq3NvHDpBnTapNrk/6nq2IVy - TCVwQreSobq6/QsluoIuuuurdb86zIoaUxMCV7uPo2PZEw5bMqe2LeX0fK5MR2rp - y/X1ADAZHvEEva/XNfdAh5m328thQyDa8oGzM4sN7VzQnqPeO+nMzoKBAhrf/Q+u - 9apVF7dYfxB9q1jh0UlKXONKfd8sas4giALfCyQ9NgJ4S/w1XYXpQ60qCZ31oPIO - 3bukh9Cmv1d2yEWS/qZjsNUi1WuCVwWuA2eTggVzfZA12DCx/oj6gZ9vN5p6XPeJ - aSk3VHooMFJMuGGCYFSup1zl+y86xLGT2NoawX+QUR8Y5KRGuxBVoeYkNyh+Bm4R - Q0GKoHe+zBzfJXMi/3TKs0WElaGEo8pPv8Eucdr/pUM6hbgMLp05jiTBGSEL9cPH - glUpGgGTlmujj9zyOA0qdAhbQoZStrCOqkksVXOWrbJlcxQOuo84XXIzi/vicf3a - qOhet2E+nnBVK8vZKf/KnHAQiQEcBBMBCgAGBQJYKki7AAoJEHCpDVSswiR+7GII - AMbZlYKDi4kmJrvFO8nz4RLQVN4tm4za2jCsf39MT8GcuKOslGzrMEPCB27nPfWT - WWnlyCCEz/mv4MhP+DVO7jmTuiB0fZtTC+8FGAZtgigdRfEK+yIcFcnxN4EFzZGT - rUmWsY6MuAcMoZPxXv0yDDunIztvTC23PjZfdM6Z7zfsgReLFHZm7BhzaeZwS4OX - GZyxleUZ7oBpPxBcgaUlQOhCwpGZY7aq5AUtLMDL1HwZkktUmhNi2AGyIv1Y6rIJ - ZV+hXOq6xBCnSOdqPCpUAv+lS7cZvoVIeX051EvwFEC4B4/gzqhJG1OjwZc4saPI - gVdXCqTmYtEzafJ9oQC3zRfR/wAASib/AABKIQEQAAEBAAAAAAAAAAAAAAAA/9j/ - 4AAQSkZJRgABAQEASABIAAD/4QLiRXhpZgAATU0AKgAAAAgACgEPAAIAAAAGAAAA - hgEQAAIAAAAVAAAAjAESAAMAAAABAAEAAAEaAAUAAAABAAAAogEbAAUAAAABAAAA - qgExAAIAAAANAAAAsgEyAAIAAAAUAAAAwAE7AAIAAAAOAAAA1IKYAAIAAAAlAAAA - 4odpAAQAAAABAAABCAAAAABDYW5vbgBDYW5vbiBFT1MgNUQgTWFyayBJSQAAAAAA - SAAAAAEAAABIAAAAAUFwZXJ0dXJlIDMuNgAAMjAxNTowMzowNCAxMDo1MTo0OABP - 
d2VuIEphY29ic29uAENvcHlyaWdodCAyMDEwIC0gQWxsIFJpZ2h0cyBSZXNlcnZl - ZAAAAB2CmgAFAAAAAQAAAmqCnQAFAAAAAQAAAnKIIgADAAAAAQADAACIJwADAAAA - AQGQAACQAAAHAAAABDAyMjGQAwACAAAAFAAAAnqQBAACAAAAFAAAAo6RAQAHAAAA - BAECAwCSAQAKAAAAAQAAAqKSAgAFAAAAAQAAAqqSBAAKAAAAAQAAArKSBQAFAAAA - AQAAArqSBwADAAAAAQADAACSCQADAAAAAQAJAACSCgAFAAAAAQAAAsKSkAACAAAA - Azk1AACSkQACAAAAAzk1AACSkgACAAAAAzk1AACgAAAHAAAABDAxMDCgAQADAAAA - AQABAACgAgAEAAAAAQAAAKugAwAEAAAAAQAAAQCiDgAFAAAAAQAAAsqiDwAFAAAA - AQAAAtKiEAADAAAAAQACAACkAQADAAAAAQAAAACkAgADAAAAAQAAAACkAwADAAAA - AQABAACkBgADAAAAAQAAAAAAAAAAAAAAAQAAAH0AAAAOAAAABTIwMTU6MDM6MDQg - MTA6NTE6NDgAMjAxNTowMzowNCAxMDo1MTo0OAAAAAAHAAAAAQAAAAMAAAABAAAA - AgAAAAMAAAHFAAAA9wAAAFUAAAABAAMN3wAAADQACGiIAAAAjf/hDDNodHRwOi8v - bnMuYWRvYmUuY29tL3hhcC8xLjAvADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0i - VzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0i - YWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IlhNUCBDb3JlIDUuNC4wIj4gPHJkZjpS - REYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1z - eW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6 - eG1wPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bWxuczphdXg9Imh0 - dHA6Ly9ucy5hZG9iZS5jb20vZXhpZi8xLjAvYXV4LyIgeG1sbnM6ZGM9Imh0dHA6 - Ly9wdXJsLm9yZy9kYy9lbGVtZW50cy8xLjEvIiB4bWxuczpwaG90b3Nob3A9Imh0 - dHA6Ly9ucy5hZG9iZS5jb20vcGhvdG9zaG9wLzEuMC8iIHhtcDpSYXRpbmc9IjEi - IHhtcDpDcmVhdGVEYXRlPSIyMDE1LTAzLTA0VDEwOjUxOjQ4Ljk1IiB4bXA6TW9k - aWZ5RGF0ZT0iMjAxNS0wMy0wNFQxMDo1MTo0OC45NSIgeG1wOkNyZWF0b3JUb29s - PSJBcGVydHVyZSAzLjYiIGF1eDpMZW5zSUQ9IjE1NSIgYXV4OkxlbnNJbmZvPSI4 - NS8xIDg1LzEgMC8wIDAvMCIgYXV4OkxlbnM9IkNhbm9uIEVGIDg1bW0gZi8xLjgg - VVNNIiBhdXg6Rmxhc2hDb21wZW5zYXRpb249IjAvMSIgYXV4OkZpcm13YXJlPSJG - aXJtd2FyZSBWZXJzaW9uIDIuMC40IiBhdXg6U2VyaWFsTnVtYmVyPSIxOTIxMjAw - MzAxIiBhdXg6T3duZXJOYW1lPSJPd2VuIEphY29ic29uIiBwaG90b3Nob3A6RGF0 - ZUNyZWF0ZWQ9IjIwMTUtMDMtMDRUMTA6NTE6NDguOTUiPiA8ZGM6Y3JlYXRvcj4g - PHJkZjpTZXE+IDxyZGY6bGk+T3dlbiBKYWNvYnNvbjwvcmRmOmxpPiA8L3JkZjpT - 
ZXE+IDwvZGM6Y3JlYXRvcj4gPGRjOnJpZ2h0cz4gPHJkZjpBbHQ+IDxyZGY6bGkg - eG1sOmxhbmc9IngtZGVmYXVsdCI+Q29weXJpZ2h0IDIwMTAgLSBBbGwgUmlnaHRz - IFJlc2VydmVkPC9yZGY6bGk+IDwvcmRmOkFsdD4gPC9kYzpyaWdodHM+IDwvcmRm - OkRlc2NyaXB0aW9uPiA8L3JkZjpSREY+IDwveDp4bXBtZXRhPiAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - 
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg - ICAgICAgICAgICAgICAgICAgICAgIDw/eHBhY2tldCBlbmQ9InciPz4A/+0AslBo - b3Rvc2hvcCAzLjAAOEJJTQQEAAAAAAB6HAFaAAMbJUccAgAAAgACHAJQAA1Pd2Vu - IEphY29ic29uHAI+AAgyMDE1MDMwNBwCPwAGMTA1MTQ4HAI3AAgyMDE1MDMwNBwC - PAAGMTA1MTQ4HAJ0ACRDb3B5cmlnaHQgMjAxMCAtIEFsbCBSaWdodHMgUmVzZXJ2 - ZWQ4QklNBCUAAAAAABDe4+pbcWGiZqKXLRcZ++/g/+IMWElDQ19QUk9GSUxFAAEB - AAAMSExpbm8CEAAAbW50clJHQiBYWVogB84AAgAJAAYAMQAAYWNzcE1TRlQAAAAA - SUVDIHNSR0IAAAAAAAAAAAAAAAAAAPbWAAEAAAAA0y1IUCAgAAAAAAAAAAAAAAAA - AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAARY3BydAAAAVAAAAAz - ZGVzYwAAAYQAAABsd3RwdAAAAfAAAAAUYmtwdAAAAgQAAAAUclhZWgAAAhgAAAAU - Z1hZWgAAAiwAAAAUYlhZWgAAAkAAAAAUZG1uZAAAAlQAAABwZG1kZAAAAsQAAACI - dnVlZAAAA0wAAACGdmlldwAAA9QAAAAkbHVtaQAAA/gAAAAUbWVhcwAABAwAAAAk - dGVjaAAABDAAAAAMclRSQwAABDwAAAgMZ1RSQwAABDwAAAgMYlRSQwAABDwAAAgM - 
=zOhA - -----END PGP PUBLIC KEY BLOCK----- diff --git a/wiki/gpg/terrible.md b/wiki/gpg/terrible.md deleted file mode 100644 index b916b79..0000000 --- a/wiki/gpg/terrible.md +++ /dev/null @@ -1,139 +0,0 @@ -# GPG Is Terrible - -A discussion at work reminded me that I hadn't looked at the state of the art -for email and communications security in a while. Turns out the options -haven't changed much: S/MIME, which relies on X.509 PKI and is therefore -unusable unless you want to pay for a certificate from someone with lots of -incentives to screw you, or GPG. - -S/MIME in the wild is a total non-starter. GPG, on the other hand, is merely -really, _really_ bad. 
- -(You may want to take this with a side of [the other perspective](cool).) - -## Body Security And Nothing Else - -GPG encrypts and signs email message bodies. That's it, that's all it does -when integrated with email. Email messages contain lots of other useful, -potentially sensitive data: the subject line, for example. GPG still exposes -all of the headers for the world to see, and conversely does nothing to -detect or prevent header tampering by idiot mailers. - -(Yes. Signed headers _would_ mean that mailing lists can no longer inject -`[listname]` crud into your messages. Feature, not bug; we should be, and in -many cases already are, storing that in a header of its own, not littering -the subject line. We also need to keep improving mail tooling, to better -handle those headers.) - -In return for doing about half of its One Job, GPG demands a _lot_ from its -users. - -## The Real Name Policy - -The GPG community has a massive “legal names” fixation. [Widespread GPG -documentation](http://cryptnet.net/fdp/crypto/keysigning_party/en/extra/signing_policy.html), -and years of community inertia, stand behind expecting people to put their -legal name in their GPG key, and conversely expecting people to verify the -identity in a GPG key (generally by checking government ID) before signing it. - -As the [#nymwars](http://www.jwz.org/blog/2011/08/nym-wars/) folks can tell -you, this policy is harmful and limiting. There are good theoretical reasons -to validate _an_ identity before using its keys to secure messages, but legal -identities can be anywhere from awkward to dangerous to use. - -GPG does not _technically_ restrict users from creating autonymous keys, but -the community at large discourages their use unless they can be traced back -to some legal identity. Autonymous keys tend to go unsigned by any other key, -cutting them off from the GPG trust network's validation effect. 
- -As [@wlonk](https://twitter.com/wlonk/) put it: - -> I care about communicating with the coherent theory of mind behind @so-and-so. - -## Issuing Identities - -GPG makes issuing new identities simultaneously too easy and too hard for users. -It's hard, because the _only_ way to issue a new identity on an existing key -(and thus associated with and able to share correspondence with an existing -identity) requires that the user have access to their personal root key. There's -no way to create ad-hoc identities and bind them after the fact, making it hard -to implement opportunistic tools. (OTR's on-demand key generation fails to the -opposite extreme.) It's easy, because there's no mechanism beyond the web of -trust itself to vet newly-created keys or identities; the GPG community -compounds this by demanding that everyone carefully vet legal identities, making -it _very_ time-consuming to deploy a new name. - -## Finding Paul Revere - -It turns out autonymity in GPG would be pretty fragile even if GPG's user -community _didn't_ insist on puncturing it at every opportunity, since GPG -irrevocably publishes the social graph of its users to every keyserver they -use. You don't even have to publish it yourself; anyone who has a copy of -your public key can upload a copy for you, revealing to the world the -identities of everyone who knows you well enough to sign your key, and when -they signed it. - -A lot of people can be meaningfully identified by that information alone, -even without publishing their personal identity. - -## The Web Of Vulnerable CAs - -Each GPG user is also a unilateral signing authority. GPG's trust model means -that a compromised key can be used to confer validity onto _any_ other key, -compromising potentially many other users by causing them to trust -illegitimate keys. GPG assumes everyone will be constantly on watch for -unusual signing activity, and perfectly aware of the safety of their own keys -at all times. 
- -Given that the GPG signature graph is largely public, it should be possible to -moderate signatures using clique analysis, limiting the impact of a trusted -party who signs inauthentic identities. Unfortunately, GPG makes it challenging -to implement this by providing almost no support for iteratively deepening the -local keyring by downloading signers' keys as needed. - -## Interoperability - -Sending a GPG-signed message to a non-GPG-using normal human being is a great -way to confuse the hell out of them. You have two options: - -* In-band “cleartext” signing, which litters the email body with technical - noise, or -* PGP/MIME, which delivers a meaningless-looking “signature.asc” attachment. - -In both cases, the recipient is left with a bunch of information they (a) -can't use and (b) can't hide or remove. It might as well say “virus.dat” for -all the meaning it conveys. - -Some of this is not GPG's fault, exactly, but after over a decade, surely -either advocacy or compromise with major mail vendors should have been -possible. - -(Accidentally sending an _encrypted_ email to a non-GPG-using recipient is, -thankfully, hard enough to be irrelevant unless someone is actively spoofing -their identity.) - -## Webmail Need Not Apply - -Well, unless you want to write the message text in an editor, copy and paste -it into GPG, and copy and paste the encrypted blob back out into your -message. (Hope your webmail's online editor doesn't mangle dashes or quotes -for you!) - -Apparently Google's [finally fixing that for Chrome -users](https://code.google.com/p/end-to-end/), so that's something. - -## Mobile Need Not Apply - -<del>Safely distributing GPG keys to mobile applications is more or less -impossible, and integration with mobile mail applications is nonexistent. -Hope you only ever read your mail from a Real Computer!</del> - -vollkorn points out that the above is inaccurate. 
He posted a couple of -options for GPG on Android, and the state of the art for iOS GPG apps is -apparently better than I was able to find. See [his -comment](#comment-1422227740) for details. - -## Further Reading - -* [Secushare.org's “15 reasons not to start using PGP”](http://secushare.org/PGP) -* [Mike Perry's “Why the Web of Trust Sucks”](https://lists.torproject.org/pipermail/tor-talk/2013-September/030235.html)
\ No newline at end of file diff --git a/wiki/hire-me.md b/wiki/hire-me.md deleted file mode 100644 index adfbbf2..0000000 --- a/wiki/hire-me.md +++ /dev/null @@ -1,110 +0,0 @@ -# Hire Me - -I'm always interested in hearing from people and organizations that I can help, -whether that means coming in for a few days to talk about end-to-end testing or -joining your organization full-time to help turn an idea into reality. - -I live in and around Toronto, ON. I am more than happy to work remotely, and I -can probably help your organization learn to integrate remote work if it doesn't -already know how. - -You can see more about me as a person on -[HireMyFriend](https://hiremyfriend.io/profiles/90b8caa5) or -[LinkedIn](https://ca.linkedin.com/in/ojacobson/). You can also get a sense of -the code I write by looking at this blog, as well as my -[Bitbucket](https://bitbucket.org/ojacobson) or -[Github](https://github.com/ojacobson/) sites: I recommend starting with -[Refreshbooks](https://github.com/ojacobson/refreshbooks) or -[Sparkplug](https://github.com/ojacobson/sparkplug). - -## For Fun - -I regularly revisit problems from old jobs, interesting ideas from the internet, -and whatever else catches my fancy as a way to build up skills with specific -technologies. Right now, I'm tinkering with [AngularJS](http://angularjs.org) -and [Jersey 2](https://jersey.java.net) as a way of building lightweight, -highly-responsive web front ends. Ask me about it and I'll be more than happy to -talk your ear off. I've also run similar projects to explore Node, Django, -Flask, Rails, and other platforms for web development, as well as numerous tools -and frameworks for other platforms. - -I also mentor people new to programming, teaching them how to craft working -systems. 
This is less about teaching people to write code and more about -teaching them why we care about source control, how to think about -configuration, how and why to automate testing, and how to think about -software systems and data flow at a higher level. I strongly believe that -software development needs a formal apprenticeship program, and mentoring has -done a lot to validate that belief. - -## FreshBooks (2009-2014) - -During the five years I was with the company, it grew from a 20-person one-room -organization to a healthy, growing two-hundred-person technology company. As an -early employee, I had my hand in many, many projects and helped the development -team absorb the massive cultural changes that come with growth, while also -building a SaaS product that let others realize their dreams. Some highlights: - -* As the lead [MySQL](http://grimoire.ca/mysql/choose-something-else) database - administrator-slash-developer, I worked with the entire development team to - balance concerns about reliability and availability with ensuring new ideas - and incremental improvements could be executed without massive bureaucracy - and at low risk. This extended into diverse parts of the company: alongside - the operations team, I handled capacity planning, reliability, outage - planning, and performance monitoring, while with the development team, I - was responsible for designing processes and deploying tools to ease testing - of database changes, to ensure smooth, predictable, and _low-effort_ - deployment to production, and to train developers to make the best use of - MySQL for their projects. - -* As a tools developer, I built the [Sparkplug](https://pypi.python.org/pypi/sparkplug) - framework to standardize the tools and processes for building message-driven - applications, allowing the team to move away from monolithic web applications - towards a more event-driven suite of internal systems. 
Providing a standard - framework paid off well; building and deploying completely novel event - handlers for FreshBooks’ core systems could be completed in as little as a - week, including testing and production provisioning. - -* As an ops-ish toolsmith, I worked extensively on configuration management - for both applications and the underlying servers. I led a number of - projects to reduce the risk around deployments: creating a standard - development VM to ensure developers had an environment consistent with - reality, automating packaging and rollout to testing servers, automating the - _creation_ of testing servers, and more. As part of this work, I built - training materials and ran sessions to teach other developers how to think - like a sysadmin, covering Linux, Puppet, virtualization, and other topics. - -## Riptown Media (2006-2009) - -Riptown Media was a software development company tasked with building and -maintaining a suite of gambling systems for a single client. I was brought on -board as a Java developer, and rapidly expanded my role to encompass other -fields. - -* As the primary developer for poker-room back office and anti-fraud tools, I - worked with the customer support and business intelligence teams to better - understand their daily needs and frustrations, so that I could turn those - into meaningful improvements to their tools and processes. These - improvements, in turn, led to measurable changes in the frequency and - length of customer support calls, in fraud rates, and in the perceived value - of internal customer intelligence. - -* As a lead developer, my team put together the server half of an in-house - casino gaming platform. We worked in tight collaboration with the client - team, in-house and third-party testers, and interaction designers, and - delivered our first game in under six months. Our platform was meant to - reduce our reliance on third-party “white label” games vendors; internally, - it was a success. 
Our game received zero customer-reported defects during - its initial run. - -## OSI Geospatial (2004-2006) - -At OSI Geospatial, I led the development of a target-tracking and battlespace -awareness overlay as part of a suite of operational theatre tools. In 2004, the -state of the art for web-based geomatics software was not up to the task; this -ended up being a custom server written in C++ and making heavy use of PostgreSQL -and PostGIS for its inner workings. - -## Contact Me - -Sound good? Curious? Want to discuss any of this some more? You can get ahold of -me at owen.jacobson@grimoire.ca or on [Twitter](https://twitter.com/derspiny). diff --git a/wiki/if/messages-and-announcements.md b/wiki/if/messages-and-announcements.md deleted file mode 100644 index d4e0958..0000000 --- a/wiki/if/messages-and-announcements.md +++ /dev/null @@ -1,78 +0,0 @@ -# Messages and Announcements - -## Motivation - -Maintain a prosaic tone and delivery throughout interactions with the simulation. - -Support gameplay elements that alter the character's perception of the world, and represent those effects to the player. - -## Ultimate Disposition - -All messages are eventually disposed of in one of two ways: - -* displayed, marked up, on the web UI, or -* discarded. - -For the purposes of the following, the builtin function `notify()` delivers raw Markdown to the UI, which handles markup conversion and presentation. The method `:notify(args...)` applies unspecified transformations on `args...` before delivering the result to `notify()`. - -## Kinds of Messages - -* Diegetic messages deliver an approximation of the sensory response to events in the simulated environment. - - * Diegetic messages should be tagged, internally, with the sense or senses perceiving the events, so that simulated sensory effects can modulate the message. It should be possible for a deaf character's player to "see" lips moving, while a fully-hearing character's player should instead "hear" the words spoken. 
- -* Non-diegetic messages deliver information about the state of the simulation, or about the player's interaction with the simulator. - -## Processing Models - -### Parallel Entry Points - -In this model, there are separate entry point methods for diegetic and for non-diegetic messages. Each entry point performs appropriate processing, then calls into internal methods shared by both message forms to deliver messages to the UI. - -This model requires more names and is slightly more complex to explain, but splits the responsibility for output delivery from the responsibility for accepting non-diegetic messages, leaving each piece conceptually simpler. - -### Diegetic Filter Methods - -In this model, the internal methods are the entry point for non-diegetic messages directly, as they have no processing to perform. Diegetic messages instead call the non-diegetic machinery after performing any diegetic message effects. - -This model requires fewer names, and avoids what will in practice be a layer of "do-nothing" methods where non-diegetic entry points blindly call internal helpers, but combines the responsibility for non-diegetic output with message delivery. - -## APIs - -### Primary Diegetic Verbs - -* `recipient:you_SENSE(args...)`: a family of methods for delivering diegetic messages. Each SENSE (`you_hear`, `you_see`, `you_smell`, `you_feel`, `you_taste`) either delivers args to `recipient:notify` (see below) and returns `$true` (indicating that the recipient was capable of experiencing that sense) or ignores the arguments and returns `$false` (indicating that the recipient was not capable of experiencing that sense). - - The default implementation is equivalent to `return this:notify(@args);`. 
-
-* `recipient:you_SENSE_lines(lines)`: applies the corresponding `you_SENSE` to the elements of `lines` in order, each of which must either be a single string (passed as the first argument of the corresponding `you_SENSE`) or a list (passed as the argument list to the corresponding `you_SENSE`). Returns `$true` if every line is accepted by `you_SENSE`, or `$false` if any line is rejected. Processing stops at the first rejected line.
-
-These methods are designed to be chained, to make it simpler to simulate partial impairment while allowing the game's prose to focus on the most appropriate "available" sense for each character.
-
-```
-player:you_hear(spoken_message) || player:you_see(lips_moving_message);
-```
-
-In spite of the names, these methods _do not_ prepend "You hear" to the output. The naming distinguishes single-recipient messages meant to be sensed by a single object from messages to be delivered to the occupants of a container or room:
-
-* `room:SENSE(args...)`: calls `you_SENSE(args...)` on every occupant of `room` who is not `player`. Returns a list of objects for which `you_SENSE` returned `$false`, for use in `room:SENSE_only` (below). Skips `player`, to ease simulating situations where the character's self-perception has unique prose ("You say" vs. "Toby says").
-* `room:SENSE_all(args...)`: calls `you_SENSE(args...)` on every occupant of `room`. As with `room:SENSE`, this returns a list of objects that were unable to sense the simulated event.
-* `room:SENSE_all_but(nonrecipients, args...)`: calls `you_SENSE(args...)` on every occupant of `room` that is not in `nonrecipients`. As with `room:SENSE`, this returns a list of objects that meet those criteria that were unable to sense the simulated event.
-* `room:SENSE_only(recipients, args...)`: calls `you_SENSE(args...)` on every occupant of `room` who is in `recipients`. Returns a list of the objects in `room` that are in `recipients` but could not `you_SENSE` the message.
-
-As with the single-recipient methods, these are meant to be chained, but the structure of a chain is different:
-
-```
-player:you_hear(you_say_message) || player:you_feel(your_lips_move_message);
-unsensed = room:hear(heard_say_message);
-unsensed = room:see_only(unsensed, lips_moving_message);
-unsensed = room:smell_only(unsensed, smelly_breath_message);
-```
-
-### Primary Non-Diegetic Verbs
-
-* `recipient:tell(args...)`: a method for delivering non-diegetic messages. Delivers `args` to `recipient:notify` unaltered. Returns nothing.
-
-* `recipient:tell_lines(lines)`: applies `tell` to the elements of `lines` in order, each of which must either be a single string (passed as the first argument of `tell`) or a list (passed as the argument list to `tell`). Returns nothing.
-
-* `room:announce(args...)`, `room:announce_all(args...)`, `room:announce_all_but(nonrecipients, args...)`, and `room:announce_only(recipients, args...)` mirror the behaviour of their diegetic equivalents, calling `tell` on each appropriate occupant. However, these methods return nothing, as they are not expected to be chained. diff --git a/wiki/if/narrative-in-muds.md deleted file mode 100644 index 72cc262..0000000 --- a/wiki/if/narrative-in-muds.md +++ /dev/null @@ -1,115 +0,0 @@ -# Narrative in MUDs
-
-Design notes towards narrative conventions.
-
-## What?
-
-MUDs are engines of narration. The human interface to one is, to a first approximation, literary: everything that happens is realized as words, which are read, and the user's actions on the system are performed through writing.
-
-MUDs are a distinct subform of interactive fiction. In classic IF, the IF engine produces a single narrative, with which a single player interacts (usually synchronously - each narration by the engine leads to a single reply from the player, which then leads to further narration). MUDs, by contrast, produce _parallel_ narratives.
Each player receives their own, distinct narrative, which depicts events happening in a shared fictional space. Furthermore, these narratives conventionally occur asynchronously, with replies from each player injecting further text into many players' narratives.
-
-For example, the following three narratives depict the same sequence of events, as presented to three distinct players.
-
-First, Alice's perspective:
-
-> You open the door and enter the house.
->
-> -----
->
-> **Living Room**
->
-> You're in the sparsely-furnished living room of an insipidly-generic student apartment. A TV sits on a bench, opposite a cheap and decrepit sofa. The front door opens to the south; beside it is an indifferent pile of salt-stained boots.
->
-> Bob is here. cherise is here, seated on the sofa.
->
-> -----
->
-> You say, "Hello."
->
-> cherise says, "Called it."
->
-> Bob walks out through the front door.
->
-> You ask, "What's his problem?"
-
-Then, Bob's:
-
-> cherise sighs. "I'm sure she'll be here soon."
->
-> Alice walks in through the front door.
->
-> Alice says, "Hello."
->
-> cherise says, "Called it."
->
-> You open the front door and walk out.
->
-> -----
->
-> **Hallway**
->
-> You're standing in a grungy apartment hallway. It smells faintly of mildew and old cigarettes. An apartment's front door opens to the north.
->
-> -----
-
-Finally, cherise's perspective:
-
-> You sigh. "I'm sure she'll be here soon."
->
-> Alice walks in through the front door.
->
-> Alice says, "Hello."
->
-> You say, "Called it."
->
-> Bob walks out through the front door.
->
-> Alice asks, "What's his problem?"
-
-I've intentionally used widespread MUD narrative conventions to illustrate some points.
-
-In daily life, this sort of parallel experience of overlapping events and partial perspectives is unremarkable. In interactive fiction, it's nearly unheard-of, outside of MUDs, and in the context of literary fiction, it raises some complex questions:
-
-* Who is Alice's narrator?
Or cherise's?
-* Just how many narrators _are_ there? Do the participants share a common narrator, or are they each an independently narrated story?
-* The three perspectives shown here are the limited perspectives of each participant in the scene. Is there also an omniscient perspective? If so, does it have a coherent narrative? Is there anything about the scene that one narrator might omit, where another would include it?
-* Why does the narrator present those specific elements of the _places_ each perspective visits, and how do those choices relate to choices of perspective, tense, and person? Do Alice and cherise see the living room the same way? Should they?
-* Why the second person?
-* Why the present tense?
-
-## Some Bad History
-
-MUD narrative conventions largely arise from technical choices, themselves motivated by the context MUDs arose in. MUDs originated as the product of a literarily-unsophisticated technical community: early MUDs were engines for sharing swords-and-sorcery adventures with friends, little more than multiplayer-enabled versions of Zork or Colossal Cave (themselves somewhat limited renditions of a quest story).
-
-Even later "talker" MUDs, designed specifically around social interaction rather than around adventure, and "VR" MUDs, designed around simulating elements of a shared space through text, derive a lot of narrative conventions from those ancestors.
-
-This ancestry has some consequences.
-
-The narrators of Zork and Colossal Cave speak in the second person; the protagonist character is a faceless, ageless, genderless proxy for the player, and this allows the narrator to conflate the two to effectively engage the player with the fictional world. Early MUDs ape this convention, but MUD characters _require_ names, since it's impractical to describe more than two characters in a narrative without some way to identify each character.
The second person convention persists in those systems, even though the player and the character could (and often did) have divergent identities.
-
-Not every game uses the second person effectively. Games with a freeform 'pose' affordance allow players to inject player-constructed prose into the narrative to reflect actions not pre-envisioned by the game's authors. This almost universally breaks from the second person; a single line of prose provided by the player is not usually corrected for personal pronouns by the game before being delivered back through player narratives. If cherise runs the command
-
-> `pose waves at Alice.`
-
-most MUDs will generate the prose
-
-> cherise waves at Alice.
-
-in all three of Alice, Bob, and cherise's narratives, even though cherise's other actions in cherise's narrative are presented using the pronoun "You."
-
-This can be particularly jarring when some poses have codified support (and correctly substitute pronouns) and others do not (and rely on a generic `pose` system).
-
-## Extra-narrative Information
-
-Interactive fiction mixes narrative and extra-narrative information into the prose freely. Even discounting the player's input (which generally has a different tone and structure than the game's narrative), various gameplay situations require the presentation of non-narrative information. For example, nonsense inputs require _some_ response, so that the player understands that the game hasn't understood them, but that response describes the input-processing behaviour of the game, and doesn't narrate the story the game is telling.
-
-Most IF games present this output through the same prose flow as the game's narrative, mixed indifferently with descriptive text. The obvious alternatives (non-textual or non-narrative output) are, empirically, distracting: they forcibly remind players that they're interacting with a machine, while prosaic output blends acceptably with the narrative.
Thus:
-
-> `> flarp`
->
-> I didn't understand that.
-
-is preferable to a beep, or to turning the input region another colour.
-
-For some reason, this is one of the few situations where IF narrators refer to _themselves_. Is the narrator in fact a mediator, with an active role in the story being told?
- diff --git a/wiki/java/a-new-kind-of.md deleted file mode 100644 index 6cc81e5..0000000 --- a/wiki/java/a-new-kind-of.md +++ /dev/null @@ -1,137 +0,0 @@ -# A New Kind of Java
-
-Java 8 is almost here. You can [play with the early access
-previews](http://jdk8.java.net/download.html) right now, and I think you
-should, even if you don't like Java very much. There's so much _potential_ in
-there.
-
-## The “One More Thing”
-
-The Java 8 release comes with a slew of notable library improvements: the new
-[`java.time`](http://openjdk.java.net/jeps/150) package, designed by the folks
-behind the extremely capable Joda time library; [reflective
-access](http://openjdk.java.net/jeps/118) to parameter names; [Unicode
-6.2](http://openjdk.java.net/jeps/133) support; numerous others. But all of
-these things are dwarfed by the “one more thing”:
-
-**Lambdas**.
-
-## Ok, So..?
-
-Here's the thing: all of the “modern” languages that see regular use - C#,
-Python, Ruby, the various Lisps including Clojure, and Javascript - have
-language features allowing easy creation and use of one-method values. In
-Python, that's any object with a `__call__` method (including function
-objects); in Ruby, it's blocks; in Javascript, it's `function() {}`s. These
-features allow _computation itself_ to be treated as a value and passed
-around, which in turn provides a very powerful and succinct mechanism for
-composing features.
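For readers who haven't used these features, here's a minimal Python sketch of "computation as a value" (the names `shout`, `Repeater`, and `deliver` are invented for illustration) — a plain function and an object with `__call__` are interchangeable values that the same consumer can invoke:

```python
# Computation as a value: any callable can be handed around like data.
def shout(message):
    return message.upper() + "!"

class Repeater:
    """An object with __call__, interchangeable with a plain function."""
    def __init__(self, times):
        self.times = times

    def __call__(self, message):
        return " ".join([message] * self.times)

def deliver(transform, message):
    # 'transform' is computation passed in as an ordinary argument.
    return transform(message)

print(deliver(shout, "hello, world"))  # HELLO, WORLD!
print(deliver(Repeater(2), "hello"))   # hello hello
```

Java 8's lambdas bring this same hand-a-block-of-code-around style within reach, as the examples below show.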
Java's had the “use” side down for a long time; interfaces like `Runnable` are
-a great example of ways to expose “function-like” or “procedure-like” types to
-the language without violating Java's bureaucratic attitude towards types and
-objects. However, the syntax for creating these one-method values has always
-been so verbose and awkward as to discourage their use. Consider, for example,
-a simple “task” for a thread pool:
-
-    pool.execute(new Runnable() {
-        @Override
-        public void run() {
-            System.out.println("Hello, world!");
-        }
-    });
-
-(Sure, it's a dumb example.)
-
-Even leaving out the optional-but-recommended `@Override` annotation, that's
-still five lines of code that only exist to describe to the compiler how to
-package up a block as an object. Yuck. For more sophisticated tasks, this sort
-of verbosity has led to multi-role “event handler” interfaces, to amortize
-the syntactic cost across more blocks of code.
-
-With Java 8's lambda support, the same (dumb) example collapses to
-
-    pool.execute(() -> System.out.println("Hello, world!"));
-
-It's the same structure and is implemented very similarly by the compiler.
-However, it's got much greater informational density for programmers reading
-the code, and it's much more pleasant to write.
-
-If there's any justice, this will completely change how people design Java
-software.
-
-## Event-Driven Systems
-
-As an example, I knocked together a simple “event driven IO” system in an
-evening, loosely inspired by node.js.
Here's the echo server I wrote as an -example application, in its entirety: - - package com.example.onepointeight; - - import java.io.IOException; - - public class Echo { - public static void main(String[] args) throws IOException { - Reactor.run(reactor -> - reactor.listen(3000, client -> - reactor.read(client, data -> { - data.flip(); - reactor.write(client, data); - }) - ) - ); - } - } - -It's got a bad case of Javascript “arrow” disease, but it demonstrates the -expressive power of lambdas for callbacks. This is built on NIO, and runs in a -single thread; as with any decent multiplexed-IO application, it starts to -have capacity problems due to memory exhaustion well before it starts to -struggle with the number of clients. Unlike Java 7 and earlier, though, the -whole program is short enough to keep in your head without worrying about the -details of how each callback is converted into an object and without having to -define three or four extra one-method classes. - -## Contextual operations - -Sure, we all know you use `try/finally` (or, if you're up on your Java 7, -`try()`) to clean things up. However, context isn't always as tidy as that: -sometimes things need to happen while it's set up, and un-happen when it's -being torn down. 
The folks behind JdbcTemplate already understood that, so you
-can already write SQL operations using a syntax similar to
-
-    User user = connection.query(
-        "SELECT login, group FROM users WHERE username = ?",
-        username,
-        rows -> rows.one(User::fromRow)
-    );
-
-Terser **and** clearer than the corresponding try-with-resources version:
-
-    try (PreparedStatement ps = connection.prepareStatement("SELECT login, group FROM users WHERE username = ?")) {
-        ps.setString(1, username);
-        try (ResultSet rows = ps.executeQuery()) {
-            if (!rows.next())
-                throw new NoResultFoundException();
-            return User.fromRow(rows);
-        }
-    }
-
-## Domain-Specific Languages
-
-I haven't worked this one out, yet, but I think it's possible to use lambdas
-to implement conversational interfaces, similar in structure to “fluent”
-interfaces like
-[UriBuilder](http://docs.oracle.com/javaee/6/api/javax/ws/rs/core/UriBuilder.html).
-If I can work out the mechanics, I'll put together an example for this, but
-I'm half convinced something like
-
-    URI googleIt = Uris.create(() -> {
-        scheme("http");
-        host("google.com");
-        path("/");
-        queryParam("q", "hello world");
-    });
-
-is possible.
- diff --git a/wiki/java/install/centos.md deleted file mode 100644 index 51c83f6..0000000 --- a/wiki/java/install/centos.md +++ /dev/null @@ -1,57 +0,0 @@ -# Installing Java on CentOS
-
-Verified as of CentOS 5.8, Java 6. CentOS 6 users: fucking switch to Debian
-already. Is something wrong with you? Do you like being abused by your
-vendors?
-
-## From Package Management (Yum)
-
-OpenJDK is available via [EPEL](http://fedoraproject.org/wiki/EPEL/FAQ), from
-the Fedora project. Install EPEL before proceeding.
-
-You didn't install EPEL. Go install EPEL. [The directions are in the EPEL
-FAQ](http://fedoraproject.org/wiki/EPEL/FAQ#Using_EPEL).
Now install the JDK:
-
-    sudo yum install java-1.6.0-openjdk-devel
-
-Or just the runtime:
-
-    sudo yum install java-1.6.0-openjdk
-
-The RPMs place the appropriate binaries in `/usr/bin`.
-
-Applications that can't autodetect the JDK may need `JAVA_HOME` set to
-`/usr/lib/jvm/java-openjdk`.
-
-## By Hand
-
-The [Java SE Development Kit
-7](http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html)
-tarballs can be installed by hand. Download the “Linux x64” `.tar.gz` version,
-then unpack it in `/opt`:
-
-    cd /opt
-    tar xzf ~/jdk-7u45-linux-x64.tar.gz
-
-This will create a directory named `/opt/jdk1.7.0_45` (actual version number
-may vary) containing a ready-to-use Java dev kit.
-
-You will need to add the JDK's `bin` directory to `PATH` if you want commands
-like `javac` and `java` to work without fully-qualifying the directory:
-
-    cat > /etc/profile.d/oracle_jdk <<'ORACLE_JDK'
-    PATH="${PATH}:/opt/jdk1.7.0_45/bin"
-    export PATH
-    ORACLE_JDK
-
-(This will not affect non-interactive use; setting PATH for non-interactive
-programs like build servers is beyond the scope of this document. Learn to use
-your OS.)
-
-Installation this way does _not_ interact with the alternatives system (but
-you can set that up by hand if you need to).
-
-For tools that cannot autodetect the JDK via `PATH`, you may need to set
-`JAVA_HOME` to `/opt/jdk1.7.0_45`. diff --git a/wiki/java/install/index.md deleted file mode 100644 index 684f050..0000000 --- a/wiki/java/install/index.md +++ /dev/null @@ -1,11 +0,0 @@ -# Installing Java …
-
-This document is provided as a community service to
-[##java](irc://irc.freenode.org/##java). Provided as-is; pull requests
-welcome.
-
-1. [… on Ubuntu](ubuntu) (may also be applicable to Debian; needs verification
-   from a Debian user)
-
-2.
[… on CentOS](centos) (probably also applicable to RHEL; needs verification - from a RHEL user) diff --git a/wiki/java/install/ubuntu.md b/wiki/java/install/ubuntu.md deleted file mode 100644 index 75d3478..0000000 --- a/wiki/java/install/ubuntu.md +++ /dev/null @@ -1,84 +0,0 @@ -# Installing Java on Ubuntu - -Accurate as of: Java 7, Ubuntu 12.04. The instructions below assume an amd64 -(64-bit) installation. If you're still using a 32-bit OS, work out the -differences yourself. - -## Via Package Management (Apt) - -OpenJDK 7 is available via apt by default. - -To install the JDK: - - sudo aptitude update - sudo aptitude install openjdk-7-jdk - -To install the JRE only (without the JDK): - - sudo aptitude update - sudo aptitude install openjdk-7-jre - -To install the JRE without GUI support (appropriate for headless servers): - - sudo aptitude update - sudo aptitude install openjdk-7-jre-headless - -(You can also use `apt-get` instead of `aptitude`.) - -These packages interact with [the `alternatives` -system](http://manpages.ubuntu.com/manpages/hardy/man8/update-alternatives.8.html), -and have [a dedicated `alternatives` manager -script](http://manpages.ubuntu.com/manpages/hardy/man8/update-java-alternatives.8.html). -The `alternatives` system affects `/usr/bin/java`, `/usr/bin/javac`, and -browser plugins for applets and Java Web Start applications for browsers -installed via package management. It also affects the symlinks under -`/etc/alternatives` related to Java. - -To list Java versions available, with at least one Java version installed via -Apt: - - update-java-alternatives --list - -To switch to `java-1.7.0-openjdk-amd64` for all Java invocations: - - update-java-alternatives --set java-1.7.0-openjdk-amd64 - -The value should be taken from the first column of the `--list` output. - -### Tool support - -Most modern Java tools will pick up the installed JDK via `$PATH` and do not -need the `JAVA_HOME` environment variable set explicitly. 
For applications old -enough not to be able to detect the JDK, you can set `JAVA_HOME` to -`/usr/lib/jvm/java-1.7.0-openjdk-amd64`. - -## By Hand - -The [Java SE Development Kit -7](http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html) -tarballs can be installed by hand. Download the “Linux x64” `.tar.gz` version, -then unpack it in `/opt`: - - cd /opt - tar xzf ~/jdk-7u45-linux-x64.tar.gz - -This will create a directory named `/opt/jdk1.7.0_45` (actual version number -may vary) containing a ready-to-use Java dev kit. - -You will need to add the JDK's `bin` directory to `PATH` if you want commands -like `javac` and `java` to work without fully-qualifying the directory: - - cat > /etc/profile.d/oracle_jdk <<'ORACLE_JDK' - PATH="${PATH}:/opt/jdk1.7.0_45/bin" - export PATH - ORACLE_JDK - -(This will not affect non-interactive use; setting PATH for non-interactive -programs like build servers is beyond the scope of this document. Learn to use -your OS.) - -Installation this way does _not_ interact with the alternatives system (but -you can set that up by hand if you need to). - -For tools that cannot autodetect the JDK via `PATH`, you may need to set -`JAVA_HOME` to `/opt/jdk1.7.0_45`. diff --git a/wiki/java/kwargs.md b/wiki/java/kwargs.md deleted file mode 100644 index d745010..0000000 --- a/wiki/java/kwargs.md +++ /dev/null @@ -1,152 +0,0 @@ -# Keyword Arguments in Java - -## What - -Java arguments are traditionally passed by position: - - void foo(int x, int y, int z) - -matches the call - - foo(1, 2, 3) - -and assigns `1` to `x`, `2` to `y`, and `3` to `z` in the resulting -activation. Keyword arguments assign values to formal parameters by matching -the parameter's name, instead. - -## Why - -Fuck the builder pattern, okay? 
Patterns like - - Response r = Response - .status(200) - .entity(foo) - .header("X-Plane", "Amazing") - .build(); - -(from JAX-RS) mean the creation and maintenance of an entire separate type -just to handle arbitrary ordering and presence/absence of options. Ordering -can be done using keywords; presence/absence can be done by providing one -method for each legal combination of arguments (or by adding optional -arguments to Java). - -The keyword-argument version would be something like - - Response r = new Response( - .status = 200, - .entity = foo, - .headers = Arrays.asList(Header.of("X-Plane", "Amazing")) - ); - -and the `ResponseBuilder` class would not need to exist at all for this case. -(There are others in JAX-RS that would still make `ResponseBuilder` mandatory, -but the use case for it gets much smaller.) - -As an added bonus, the necessary class metadata to make this work would also -allow reflective frameworks such as Spring to make sensible use of the -parameter names: - - <bean class="com.example.Person"> - <constructor-arg name="name" value="Erica McKenzie" /> - </bean> - -## Other Languages - -Python, most recently: - - def foo(x, y, z): - pass - - foo(z=3, x=1, y=2) - -Smalltalk (and ObjectiveC) use an interleaving convention that reads very much -like keyword arguments: - - Point atX: 5 atY: 8 - -## Challenges - -* Minimize changes to syntax. - * Make keyword arguments unambiguous. -* Minimize changes to bytecode spec. - -## Proposal - -Given a method definition - - void foo(int x, int y, int z) - -Allow calls written as - - foo( - SOME-SYNTAX(x, EXPR), - SOME-SYNTAX(y, EXPR), - SOME-SYNTAX(z, EXPR) - ) - -`SOME-SYNTAX` is a production that is not already legal at that point in Java, -which is a surprisingly frustrating limitation. Constructs like - - foo(x = EXPR, y = EXPR, z = EXPR) - -are already legal (assignment is an expression) and already match positional -arguments. 
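For reference, Python (shown in the "Other Languages" section above) enforces the analogous rules at call time rather than compile time: an unknown keyword, or an argument passed both positionally and by keyword, is rejected with a `TypeError`. A quick sketch:

```python
def foo(x, y, z):
    return (x, y, z)

# Keywords can appear in any order, and can be mixed with positional
# arguments as long as the positional arguments come first.
assert foo(z=3, x=1, y=2) == (1, 2, 3)
assert foo(1, z=3, y=2) == (1, 2, 3)

# A keyword that matches no formal parameter is rejected...
try:
    foo(x=1, y=2, w=9)
except TypeError as e:
    print("rejected:", e)

# ...as is passing the same argument both positionally and by keyword.
try:
    foo(1, x=5, y=2, z=3)
except TypeError as e:
    print("rejected:", e)
```

The proposal below moves the equivalent checks to compile time, which Java's static typing makes straightforward.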
- -Keyword arguments match the name of the formal argument in the method -declaration. Passing a keyword argument that does not match a formal argument -is a compilation error. - -Calls can mix keyword arguments and positional arguments, in the following -order: - -1. Positional arguments. -2. Varargs positional arguments. -3. Keyword arguments. - -Passing the same argument as both a positional and a keyword argument is a -compilation error. - -Call sites must satisfy every argument the method/constructor has (i.e., this -doesn't imply optional arguments). This makes implementation easy and -unintrusive: the compiler can implement keyword arguments by transforming them -into positional arguments. Reflective calls (`Method.invoke` and friends) can -continue accepting arguments as a sequence. - -The `Method` class would expose a new method: - - public List<String> getArgumentNames() - -The indexes in `getArgumentNames` match the indexes in `getArgumentTypes` and -related methods. - -Possibilities for syntax: - -* `foo(x := 5, y := 8, z := 2)` - `:=` is never a legal sequence of tokens in - Java. Introduces one new operator-like construct; the new sequence `:=` - “looks like” assignment, which is a useful mnemonic. - -* `foo(x ~ 5, y ~ 8, z ~ 2)` - `~` is not a binary operator and this is never - legal right now. This avoids introducing new operators, but adds a novel - interpretation to an existing unary operator that's not related to its - normal use. - -* `foo(.x = 5, .y = 8, .z = 2)` - using `=` as the keyword binding feels more - natural. Parameter names must be legal identifiers, which means the leading - dot is unambiguous. This syntax is not legal anywhere right now (the dot - always has a leading expression). The dot is a “namespace” symbol already. - -To support this, the class file format will need to record the names of -parameters, not just their order. This is a breaking change, and generated -names will need to be chosen for existing class files. 
(This may be derivable
-from debug information, where present.)
-
-
-## Edge Cases
-
-* Mixed positional and keyword arguments.
-    * Collisions (same argument passed by both) are, I think, detectable at
-      compile time. This should be an error.
-* Inheritance. It is legal for a superclass to define `foo(a, b)` and for
-  subclasses to override it as `foo(x, y)`. Which argument names do you use
-  when?
-* Varargs. diff --git a/wiki/java/stop-using-class-dot-forname.md deleted file mode 100644 index b01e972..0000000 --- a/wiki/java/stop-using-class-dot-forname.md +++ /dev/null @@ -1,69 +0,0 @@ -# JDBC Drivers and `Class.forName()`
-
-The short version: stop using `Class.forName(driverClass)` to load JDBC
-drivers. You don't need this, and haven't since Java 6. You arguably never
-needed this.
-
-This pattern appears all over the internet, and it's wrong.
-
-## Backstory
-
-JDBC has more or less always provided two ways to set up `Connection` objects:
-
-1. Obtain them from a driver-provided `DataSource` class, which applications or
-   containers are expected to create for themselves.
-
-2. Obtain them by passing a URL to `DriverManager`.
-
-Most people start with the latter, since it's very straightforward to use.
-However, `DriverManager` needs to be able to locate `Driver` implementations, and
-the JVM doesn't permit class enumeration at runtime.
-
-In the original JDBC release, `Driver` implementations were expected to register
-themselves on load, similar to
-
-    public class ExampleDriver implements Driver {
-        static {
-            try {
-                DriverManager.registerDriver(new ExampleDriver());
-            } catch (SQLException e) {
-                throw new ExceptionInInitializerError(e);
-            }
-        }
-    }
-
-Obviously, applications _can_ force drivers to load using
-`Class.forName(driverName)`, but this hasn't ever been the only way to do it.
-`DriverManager` also provides [a mechanism to load a set of named classes at -startup](https://docs.oracle.com/javase/8/docs/api/java/sql/DriverManager.html), -via the `jdbc.drivers` [system property](http://docs.oracle.com/javase/tutorial/essential/environment/sysprop.html). - -## JDBC 4 Fixed That - -JDBC 4, which came out with Java 6 in the Year of our Lord _Two Thousand and -Six_, also loads drivers using the [service -provider](https://docs.oracle.com/javase/8/docs/technotes/guides/jar/jar.html#Service%20Provider) -system, which requires no intervention at all from deployers or application -developers. - -_You don't need to write any code to load a JDBC 4 driver._ - -## What's The Harm? - -It's harmless in the immediate sense: forcing a driver to load immediately -before JDBC would load it itself has no additional side effects. However, it's -a pretty clear indicator that you've copied someone else's code without -thoroughly understanding what it does, which is a bad habit. - -## But What About My Database? - -You don't need to worry about it. All of the following drivers support JDBC -4-style automatic discovery: - -* PostgreSQL (since version 8.0-321, in 2007) - -* Firebird (since [version 2.2, in 2009](http://tracker.firebirdsql.org/browse/JDBC-140)) - -* [MySQL](../mysql/choose-something-else) (since [version 5.0, in 2005](http://dev.mysql.com/doc/relnotes/connector-j/en/news-5-0-0.html)) - -* H2 (since day 1, as far as I can tell) - -* Derby/JavaDB (since [version 10.2.1.6, in 2006](https://issues.apache.org/jira/browse/DERBY-930)) - -* SQL Server (version unknown, because MSDN is archaeologically hostile) diff --git a/wiki/muds/tinyfugue-on-yosemite.md b/wiki/muds/tinyfugue-on-yosemite.md deleted file mode 100644 index 1754436..0000000 --- a/wiki/muds/tinyfugue-on-yosemite.md +++ /dev/null @@ -1,21 +0,0 @@ -# Compiling TinyFugue on Yosemite - -TinyFugue's site claims that it works on OS X. 
This is largely true, but the -switch from `gcc` to `clang` has eliminated support for some _deeply_ legacy -symbols. - -Since SourceForge is a death zone, I'll post my fix here. To get TinyFugue to -compile, apply the following patch: - - --- src/malloc.c.orig 2015-02-13 23:45:44.000000000 -0500 - +++ src/malloc.c 2015-02-13 23:45:28.000000000 -0500 - @@ -12,7 +12,6 @@ - #include "signals.h" - #include "malloc.h" - - -caddr_t mmalloc_base = NULL; - int low_memory_warning = 0; - static char *reserve = NULL; - -This symbol appears to be unused. Certainly I haven't been able to find any -references, and `tf` works well enough. diff --git a/wiki/mysql/broken-xa.md b/wiki/mysql/broken-xa.md deleted file mode 100644 index 19afe22..0000000 --- a/wiki/mysql/broken-xa.md +++ /dev/null @@ -1,29 +0,0 @@ -# MySQL's Two-Phase Commit Implementation Is Broken - -From [the fine -manual](http://dev.mysql.com/doc/refman/5.5/en/xa-restrictions.html): - -> If an XA transaction has reached the PREPARED state and the MySQL server is -> killed (for example, with kill -9 on Unix) or shuts down abnormally, the -> transaction can be continued after the server restarts. However, if the -> client reconnects and commits the transaction, the transaction will be -> absent from the binary log even though it has been committed. This means the -> data and the binary log have gone out of synchrony. An implication is that -> **XA cannot be used safely together with replication**. - -(Emphasis mine.) - -If you're solving the kinds of problems where two-phase commit and XA -transaction management look attractive, then you very likely have the kinds of -uptime requirements that make replication mandatory. “It works, but not with -replication” is effectively “it doesn't work.” - -> It is possible that the server will roll back a pending XA transaction, even -> one that has reached the PREPARED state. 
> This happens if a client connection terminates and the server continues to run, or if clients are connected and the server shuts down gracefully.

XA transaction managers assume that if every resource successfully reaches the PREPARED state, then every resource will be able to commit the transaction “eventually.” Resources that unilaterally roll back PREPARED transactions violate this assumption pretty badly.
diff --git a/wiki/mysql/choose-something-else.md b/wiki/mysql/choose-something-else.md
deleted file mode 100644
index 62afe85..0000000
--- a/wiki/mysql/choose-something-else.md
+++ /dev/null
@@ -1,736 +0,0 @@
# Do Not Pass This Way Again

Considering MySQL? Use something else. Already on MySQL? Migrate. For every successful project built on MySQL, you could uncover a history of time wasted mitigating MySQL's inadequacies, masked by a hard-won, but meaningless, sense of accomplishment over the effort spent making MySQL behave.

Thesis: databases fill roles ranging from pure storage to complex and interesting data processing; MySQL is differently bad at both tasks. Real apps all fall somewhere between these poles, and suffer variably from both sets of MySQL flaws.

* MySQL is bad at [storage](#storage).
* MySQL is bad at [data processing](#data-processing).
* MySQL is bad [by design](#by-design).
* [Bad arguments](#bad-arguments) for using MySQL.

Much of this is inspired by the principles behind [PHP: A Fractal of Bad Design](http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/). I suggest reading that article too -- it's got a lot of good thought in it even if you already know to stay well away from PHP. (If that article offends you, well, this page probably will too.)

## Storage

Storage systems have four properties:

1. Take and store data they receive from applications.
2. Keep that data safe against loss or accidental change.
3. Provide stored data to applications on demand.
4. Give administrators effective management tools.

In a truly “pure” storage application, data-comprehension features (constraints and relationships, nontrivial functions and aggregates) would go totally unused. There is a time and a place for this: the return of “NoSQL” storage systems attests to that.

Pure storage systems tend to be closely coupled to their “main” application: consider most web/server app databases. “Secondary” clients tend to be read-only (reporting applications, monitoring) or to be utilities in service of the main application (migration tools, documentation tools). If you believe constraints, validity checks, and other comprehension features can be implemented in “the application,” you are probably thinking of databases close to this pole.

### Storing Data

MySQL has many edge cases which reduce the predictability of its behaviour when storing information. Most of these edge cases are documented, but violate the principle of least surprise (not to mention the expectations of users familiar with other SQL implementations).

* Implicit conversions (particularly to and from string types) can modify MySQL's behaviour.
    * Many implicit conversions are also silent (no warning, no diagnostic), by design, making it more likely developers are entirely unaware of them until one does something surprising.
* Conversions that violate basic constraints (range, length) of the output type often coerce data rather than failing.
    * Sometimes this raises a warning; does your app check for those?
    * This behaviour is unlike many typed systems (but closely like PHP and remotely like Perl).
* Conversion behaviour depends on a per-connection configuration value (`sql_mode`) that has [a large constellation of possible states](https://dev.mysql.com/doc/refman/5.5/en/sql-mode.html), making it harder to carry expectations from manual testing over to code or from tool to tool.
* MySQL recommends UTF-8 as a character set, but still defaults to Latin-1. The implementation of `utf8` up until MySQL 5.5 was only the 3-byte [BMP](http://en.wikipedia.org/wiki/Basic_Multilingual_Plane#Basic_Multilingual_Plane). MySQL 5.5 and beyond supports a 4-byte `utf8`, but confusingly it must be selected with the character set `utf8mb4`. Implementation details of these encodings within MySQL, such as the `utf8` 3-byte limit, tend to leak out into client applications. Data that does not fit MySQL's understanding of the storage encoding will be transformed until it does, by truncation or replacement, by default.
    * Collation support is per-encoding, with one of the stranger default configurations: by default, the collation orders characters according to Swedish alphabetization rules, case-insensitively.
    * Since it's the default, lots of folks who don't know the manual inside-out and backwards observe MySQL's case-insensitive collation behaviour (`'a' = 'A'`) and conclude that “MySQL is case-insensitive,” complicating any effort to use a case-sensitive locale.
    * Both the encoding and the collation can vary, independently, by _column_. Do you keep your schema definition open when you write queries to watch out for this sort of shit?
* The `TIMESTAMP` type tries to do something smart by storing values in a canonical timezone (UTC), but it's done with so few affordances that it's very hard to even _tell_ that MySQL's done a right thing with your data.
    * And even after that, the result of `foo < '2012-04-01 09:00:00'` still depends on what time of year it is when you evaluate the query, unless you're very careful with your connection timezone.
    * `TIMESTAMP` is also special-cased in MySQL's schema definition handling, making it easy to accidentally create (or to accidentally fail to create) an auto-updating field when you didn't (did) want one.
    * `DATETIME` does not get the same timezone handling `TIMESTAMP` does.
      What? And you can't provide your own without resorting to hacks like extra columns.
    * Oh, did you want to _use_ MySQL's timezone support? Too bad, none of that data's loaded by default. You have to process the OS's `tzinfo` files into SQL with a separate tool and import that. If you ever want to update MySQL's timezone settings later, you need to take the server down just to make sure the changes apply.

### Preserving Data

... against unexpected changes: like most disk-backed storage systems, MySQL is as reliable as the disks and filesystems its data lives on. MySQL provides no additional functionality in terms of mirroring or hardware failure tolerance (such as [Oracle ASM](http://en.wikipedia.org/wiki/Automatic_Storage_Management)). However this is a limitation shared with many, _many_ other systems.

When using the InnoDB storage engine (default since MySQL 5.5), MySQL maintains page checksums in order to detect corruption caused by underlying storage. However, many third-party software applications, as well as users upgrading from earlier versions of MySQL, may be using MyISAM, which will frequently corrupt data files on improper shutdown.

The implicit conversion rules that bite when storing data also bite when asking MySQL to modify data - my favourite example being a fat-fingered `UPDATE` query where a mistyped `=` (as `-`, off by a single key) caused 90% of the rows in the table to be affected, instead of one row, because of implicit string-to-integer conversions.

... against loss: hoo boy. MySQL, out of the box, gives you three approaches to [backups](http://dev.mysql.com/doc/refman/5.5/en/backup-methods.html):

* Take “blind” filesystem backups with `tar` or `rsync`. Unless you meticulously lock tables or make the database read-only for the duration, this produces a backup that requires crash recovery before it will be usable, and can produce an inconsistent database.
    * This can bite quite hard if you use InnoDB, as InnoDB crash recovery takes time proportional to both the number of InnoDB tables and the total size of InnoDB tables, with a large constant.
* Dump to SQL with `mysqldump`: slow, relatively large backups, and non-incremental.
* Archive binary logs: fragile, complex, over-configurable, and configured badly by default. (Binary logging is also the basis of MySQL's replication system.)

If none of these is sufficient, you're left with purchasing [a backup tool from Oracle](http://dev.mysql.com/doc/refman/5.5/en/glossary.html#glos_mysql_enterprise_backup) or from one of the third-party MySQL vendors.

Like many of MySQL's features, the binary logging feature is [too](http://dev.mysql.com/doc/refman/5.5/en/binary-log.html) [configurable](http://dev.mysql.com/doc/refman/5.5/en/replication-options-binary-log.html), while still, somehow, defaulting to modes that are hazardous or surprising: the [default](http://dev.mysql.com/doc/refman/5.5/en/replication-options-binary-log.html#sysvar_binlog_format) [behaviour](http://dev.mysql.com/doc/refman/5.5/en/replication-formats.html) is to log SQL statements, rather than logging their side effects. This has led to numerous bugs over the years; MySQL (now) makes an effort to make common “non-deterministic” cases such as `NOW()` and `RAND()` act deterministically, but these have been addressed using ad-hoc solutions. Restoring binary-log-based backups can easily lead to data that differs from the original system, and by the time you've noticed the problem, it's too late to do anything about it.

(Seriously. The binary log entries for each statement contain the “current” time on the master and the random seed at the start of the statement, just in case.
If your non-deterministic query uses any other function, you're still [fucked by default](http://dev.mysql.com/doc/refman/5.5/en/replication-sbr-rbr.html#replication-sbr-rbr-sbr-disadvantages).)

Additionally, a number of apparently-harmless features can lead to backups or replicas wandering out of sync with the original database, in the default configuration:

* `AUTO_INCREMENT` and `UPDATE` statements.
* `AUTO_INCREMENT` and `INSERT` statements (sometimes). SURPRISE.
* Triggers.
* User-defined (native) functions.
* Stored (procedural SQL) functions.
* `DELETE ... LIMIT` and `UPDATE ... LIMIT` statements, though if you use these, you've misunderstood how SQL is supposed to work.
* `INSERT ... ON DUPLICATE KEY UPDATE` statements.
* Bulk-loading data with `LOAD DATA` statements.
* [Operations on floating-point values](http://dev.mysql.com/doc/refman/5.5/en/replication-features-floatvalues.html).

### Retrieving Data

This mostly works as expected. Most of the ways MySQL will screw you happen when you store data, not when you retrieve it.
However, there are a few things that implicitly transform stored data before returning it:

* MySQL's surreal type conversion system works the same way during `SELECT` that it works during other operations, which can lead to queries matching unexpected rows:

        owen@scratch> CREATE TABLE account (
            -> accountid INTEGER
            -> AUTO_INCREMENT
            -> PRIMARY KEY,
            -> discountid INTEGER
            -> );
        Query OK, 0 rows affected (0.54 sec)

        owen@scratch> INSERT INTO account
            -> (discountid)
            -> VALUES
            -> (0),
            -> (1),
            -> (2);
        Query OK, 3 rows affected (0.03 sec)
        Records: 3 Duplicates: 0 Warnings: 0

        owen@scratch> SELECT *
            -> FROM account
            -> WHERE discountid = 'banana';
        +-----------+------------+
        | accountid | discountid |
        +-----------+------------+
        |         1 |          0 |
        +-----------+------------+
        1 row in set, 1 warning (0.05 sec)

  Ok, unexpected, but there's at least a warning (do your apps check for those?) - let's see what it says:

        owen@scratch> SHOW WARNINGS;
        +---------+------+--------------------------------------------+
        | Level   | Code | Message                                    |
        +---------+------+--------------------------------------------+
        | Warning | 1292 | Truncated incorrect DOUBLE value: 'banana' |
        +---------+------+--------------------------------------------+
        1 row in set (0.03 sec)

  I can count on one hand the number of `DOUBLE` columns in this example and still have five fingers left over.

  You might think this is an unreasonable example: maybe you should always make sure your argument types exactly match the field types, and the query should use `57` instead of `'banana'`. (This does actually “fix” the problem.) It's unrealistic to expect every single user to run `SHOW CREATE TABLE` before every single query, or to memorize the types of every column in your schema, though. This example was derived from a technically-skilled but MySQL-ignorant tester examining MySQL data to verify some behavioural changes in an app.
    * Actually, you don't even need a table for this: `SELECT 0 = 'banana'` returns `1`. Did the [PHP](http://phpsadness.com/sad/52) folks design MySQL's `=` operator?

    * This isn't affected by `sql_mode`, even though so many other things are.

* `TIMESTAMP` columns (and _only_ `TIMESTAMP` columns) can return apparently-differing values for the same stored value depending on per-connection configuration even during read-only operation. This is done silently and the default behaviour can change as a side effect of non-MySQL configuration changes in the underlying OS.
* String-typed columns are transformed for encoding on output if the connection is not using the same encoding as the underlying storage, using the same rules as the transformation on input.
* Values that stricter `sql_mode` settings would reject during storage can still be returned during retrieval; it is impossible to predict in advance whether such data exists, since clients are free to set `sql_mode` to any value at any time.

### Efficiency

For purely store-and-retrieve applications, MySQL's query planner (which transforms the miniature program contained in each SQL statement into a tree of disk access and data manipulation steps) is sufficient, but only barely. Queries that retrieve data from one table, or from one table and a small number of one-to-maybe-one related tables, produce relatively efficient plans.

MySQL, however, offers a number of tuning options that can have dramatic and counterintuitive effects, and the documentation provides very little advice for choosing settings. Tuning relies on the administrator's personal experience, blog articles of varying quality, and consultants.

* The MySQL query cache defaults to a non-zero size in some commonly-installed configurations.
  However, the larger the cache, the slower writes proceed: invalidating cache entries that include the tables modified by a query means considering every entry in the cache. This cache also uses MySQL's LRU implementation, which has its own performance problems during eviction that get worse with larger cache sizes.
* Memory-management settings, including `key_buffer_size` and `innodb_buffer_pool_size`, have non-linear relationships with performance. The [standard](http://www.mysqlperformanceblog.com/2006/09/29/what-to-tune-in-mysql-server-after-installation/) [advice](http://www.mysqlperformanceblog.com/2007/11/01/innodb-performance-optimization-basics/) is to set whichever value you care about more to a large value, but this can be counterproductive if the related data is larger than the pool can hold: MySQL is once again bad at discarding old buffer pages when the buffer is exhausted, leading to dramatic slowdowns when query load reaches a certain point.
    * This also affects filesystem tuning settings such as `table_open_cache`.
* InnoDB, out of the box, comes configured to use one large (and automatically growing) tablespace file for all tables, complicating backups and storage management. This is fine for trivial databases, but MySQL provides no tools (aside from `DROP TABLE` and reloading the data from an SQL dump) for transplanting a table to another tablespace, and provides no tools (aside from a filesystem-level `rm`, and reloading _all_ InnoDB data from an SQL dump) for reclaiming empty space in a tablespace file.
* MySQL itself provides very few tools to manage storage; tasks like storing large or infrequently-accessed tables and databases on dedicated filesystems must be done on the filesystem, with MySQL shut down.

## Data Processing

Data processing encompasses tasks that require making decisions about data and tasks that derive new data from existing data.
This is a huge range of topics:

* Deciding (and enforcing) application-specific validity rules.
* Summarizing and deriving data.
* Providing and maintaining alternate representations and structures.
* Hosting complex domain logic near the data it operates on.

The further towards data processing tasks applications move, the more their SQL resembles tiny programs sent to the data. MySQL is totally unprepared for programs, and expects SQL to retrieve or modify simple rows.

### Validity

Good constraints are like `assert`s: in an ideal world, you can't tell if they work, because your code never violates them. Here in the real world, constraint violations happen for all sorts of reasons, ranging from buggy code to buggy human cognition. A good database gives you more places to describe your expectations and more tools for detecting and preventing surprises. MySQL, on the other hand, can't validate your data for you, beyond simple (and fixed) type constraints:

* As with the data you store in it, MySQL feels free to change your table definitions [implicitly and silently](http://dev.mysql.com/doc/refman/5.5/en/silent-column-changes.html). Many of these silent schema changes have important performance and feature-availability implications.
    * Foreign keys are ignored if you spell them certain, common, ways:

            CREATE TABLE foo (
                -- ...,
                parent INTEGER
                    NOT NULL
                    REFERENCES foo_parent (id)
                -- , ...
            )

        silently ignores the foreign key specification, while

            CREATE TABLE foo (
                -- ...,
                parent INTEGER
                    NOT NULL,
                FOREIGN KEY (parent)
                    REFERENCES foo_parent (id)
                -- , ...
            )

        preserves it.

* Foreign keys, one of the most widely-used database validity checks, are an engine-specific feature, restricting their availability in combination with other engine-specific features. (For example, a table cannot have both foreign key constraints and full-text indexes, as of MySQL 5.5.)
    * Configurations that violate assumptions about foreign keys, such as a foreign key pointing into a MyISAM or NDB table, do not cause warnings or any other diagnostics. The foreign key is simply discarded. SURPRISE. (MySQL is riddled with these sorts of surprises, and apologists lean very heavily on the “that's documented” excuse for its bad behaviour.)
* The MySQL parser recognizes `CHECK` clauses, which allow schema developers to make complex declarative assertions about tuples in the database, but [discards them without warning](http://dev.mysql.com/doc/refman/5.5/en/create-table.html). If you want `CHECK`-like constraints, you must implement them as triggers - but see below...
* MySQL's comprehension of the `DEFAULT` clause is, uh, limited: only constants are permitted, except for the [special case](https://dev.mysql.com/doc/refman/5.5/en/timestamp-initialization.html) of at most one `TIMESTAMP` column per table and at most one sequence-derived column. Who designed this mess?
    * Furthermore, there's no way to say “no default” and raise an error when an INSERT forgets to provide a value. The default `DEFAULT` is either `NULL` or a zero-like constant (`0`, `''`, and so on). Even for types with no meaningful zero-like values (`DATETIME`).
* MySQL has no mechanism for introducing new types, which might otherwise provide a route to enforcing validity. Counting the number of special cases in MySQL's [existing type system](http://dev.mysql.com/doc/refman/5.5/en/data-types.html) illustrates why that's probably unfixable.

I hope every client with write access to your data is absolutely perfect, because MySQL _cannot help you_ if you make a mistake.

### Summarizing and Deriving Data

SQL databases generally provide features for doing “interesting” things with sets of tuples, and MySQL is no exception.
However, MySQL's limitations mean that actually processing data in the database is fraught with wasted money, brains, and time:

* Aggregate (`GROUP BY`) queries run up against limits in MySQL's query planner: a query with both `WHERE` and `GROUP BY` clauses can only satisfy one constraint or the other with indexes, unless there's an index that covers all the relevant fields in both clauses, in the right order. (What this order is depends on the complexity of the query and on the distribution of the underlying data, but that's hardly MySQL-specific.)
    * If you have all three of `WHERE`, `GROUP BY`, and `ORDER BY` in the same query, you're more or less fucked. Good luck designing a single index that satisfies all three.
* Even though MySQL allows database administrators to [define normal functions in a procedural SQL dialect](http://dev.mysql.com/doc/refman/5.5/en/create-procedure.html), [custom aggregate functions](http://dev.mysql.com/doc/refman/5.5/en/create-function-udf.html) can only be defined by native plugins. Good thing, too, because procedural SQL in MySQL is its own kind of awful - more on that below.
* Subqueries are often convenient and occasionally necessary for expressing multi-step transformations on some underlying data. MySQL's query planner has only one strategy for optimizing them: evaluate the innermost query as written, into an in-memory table, then use a nested loop to satisfy joins or `IN` clauses. For large subquery results or interestingly nested subqueries, this is absurdly slow.
    * MySQL's query planner can't fold constraints from outer queries into subqueries.
    * The generated in-memory table never has any indexes, ever, even when appropriate indexes are “obvious” from the surrounding query; you cannot even specify them.
    * These limitations also affect views, which are evaluated as if they were subqueries.
      In combination with the lack of constraint folding in the planner, this makes filtering or aggregating over large views completely impractical.
    * MySQL lacks [common table expressions](http://www.postgresql.org/docs/9.2/static/queries-with.html). Even if subquery efficiency problems get fixed, the inability to give meaningful names to subqueries makes them hard to read and comprehend.
    * I hope you like `CREATE TEMPORARY TABLE AS SELECT`, because that's your only real alternative.
* [Window functions](http://en.wikipedia.org/wiki/Select_(SQL)#Window_function) do not exist at all in MySQL. This complicates many kinds of analysis, including time series analyses and ranking analyses.
    * Specific cases (for example, assigning rank numbers to rows) can be implemented using [server-side variables and side effects during `SELECT`](http://stackoverflow.com/questions/6473800/assigning-row-rank-numbers). What? Good luck understanding that code in six months.
* Even interesting joins run into trouble. MySQL's query planner has trouble with a number of cases that can easily arise in well-normalized data:
    * Joining and ordering by rows from multiple tables often forces MySQL to dump the whole join to a temporary table, then sort it -- awful, especially if you then use `LIMIT` to paginate the results.
    * `JOIN` clauses with non-trivial conditions, such as joins by range or joins by similarity, generally cause the planner to revert to table scans even if the same condition would be indexable outside of a join.
    * Joins with `WHERE` clauses that span both tables, where the rows selected by the `WHERE` clause are outliers relative to the table statistics, often cause MySQL to access tables in suboptimal order.
* Ok, forget about interesting joins. Even interesting `WHERE` clauses can run into trouble: MySQL can't index deterministic functions of a row, either.
  While some deterministic functions can be eliminated from the `WHERE` clause using simple algebra, many useful cases (whitespace-insensitive comparison, hash-based comparisons, and so on) can't.
    * You can fake these by storing the computed value in the row alongside the “real” value. This leaves your schema with some ugly data repetition and a chance for the two to fall out of sync, and clients must use the “computed” column explicitly.
    * Oh, and they must maintain the “computed” version explicitly.
    * Or you can use triggers. Ha. See above.

And now you know why MySQL advocates are such big fans of doing data _processing_ in “the client” or “the app.”

### Alternate Representations and Derived Tables

Many databases let schema designers and administrators abstract the underlying “physical” table structure from the presentation given to clients, or to some specific clients, for any of a number of reasons. MySQL tries to let you do this, too! And fumbles it quite badly.

* As mentioned above, non-trivial views are basically useless. Queries like `SELECT some columns FROM a_view WHERE id = 53` are evaluated in the stupidest -- and slowest -- possible way. Good luck hiding unusual partitioning arrangements or a permissions check in a view if you want any kind of performance.
* The poor interactions between triggers and binary logging's default configuration make it impractical to use triggers to maintain “materialized” views to avoid the problems with “real” views.
    * It also effectively means triggers can't be used to emulate `CHECK` constraints and other consistency features.
    * Code to maintain materialized views is also finicky and hard to get “right,” especially if the view includes aggregates or interesting joins over its source data.
      I hope you enjoy debugging MySQL's procedural SQL…
* For the relatively common case of wanting to abstract partitioned storage away for clients, MySQL actually has [a tool](http://dev.mysql.com/doc/refman/5.5/en/partitioning.html) for it! But it comes with [enough caveats to strangle a horse](http://dev.mysql.com/doc/refman/5.5/en/partitioning-limitations.html):
    * It's a separate table engine wrapping a “real” storage engine, which means it has its own, separate support for engine-specific features: transactions, foreign keys, and index types, `AUTO_INCREMENT`, and others. The syntax for configuring partitions makes selecting the wrong underlying engine entirely too easy, too.
    * Partitioned tables may not be the referent of foreign keys: you can't have both enforced relationships and this kind of storage management.
    * MySQL doesn't actually know how to store partitions on separate disks or filesystems. You still need to reach underneath MySQL to do actual storage management.
    * Partitioning an InnoDB table under the default InnoDB configuration stores all of the partitions in the global tablespace file anyways. Helpful! For per-table configurations, they still all end up together in the same file. Partitioning InnoDB tables is a waste of time for managing storage.
    * TL;DR: MySQL's partition support is so finicky and limited that MySQL-based apps tend to opt for multiple MySQL servers (“sharding”) instead.

### Hosting Logic In The Database

Yeah, yeah, the usual reaction to stored procedures and in-DB code is “eww, yuck!” for some not-terrible reasons, but hear me out on two points:

* Under the freestanding-database-server paradigm, there will usually be network latency between database clients and the database itself. There are two ways to minimize the impact of that: move the data to the code in bulk to minimize round-trips, or move the code to the data.
* Some database administration tasks are better implemented using in-database code than as freestanding clients: complex data migrations that can't be expressed as freestanding SQL queries, for example.

MySQL, as of version [5.0](http://dev.mysql.com/doc/relnotes/mysql/5.0/en/news-5-0-0.html) (released in 2003 -- remember that date, I'll come back to it), has support for in-database code via a procedural SQL-like dialect, like many other SQL databases. This includes server-side procedures (blocks of stored code that are invoked outside of any other statements and return statement-like results), functions (blocks of stored code that compute a result, used in any expression context such as a `SELECT` list or `WHERE` clause), and triggers (blocks of stored code that run whenever a row is created, modified, or deleted).

Given the examples of [other](http://www.postgresql.org/docs/7.3/static/plpgsql.html) [contemporaneous](http://msdn.microsoft.com/en-US/library/ms189826(v=sql.90).aspx) [procedural](http://docs.oracle.com/cd/B10501_01/appdev.920/a96624/toc.htm) [languages](http://www.firebirdsql.org/file/documentation/reference_manuals/reference_material/html/langrefupd15-psql.html), MySQL's procedural dialect -- an implementation of the [SQL/PSM](http://en.wikipedia.org/wiki/SQL/PSM) language -- is quite limited:

* There is no language construct for looping over a query result (no cursor `FOR` loop). This seems like a pretty fundamental feature for a database-hosted language, but no.
* There is no language construct for looping over a range. This seems like a pretty fundamental feature for an imperative language designed any time after about 1975, but no.
* The workhorse construct for iterating over a cursor is the crudest one available: the unconditional loop.
  All other iteration control is done via conditional `LEAVE` statements, as

        BEGIN
            -- MySQL requires DECLAREs in a fixed order: variables first,
            -- then cursors, then handlers.
            DECLARE done INT DEFAULT 0;
            DECLARE c_foo INTEGER;
            DECLARE c_bar INTEGER;
            DECLARE c_baz INTEGER;
            DECLARE c CURSOR FOR
                SELECT foo, bar, baz
                FROM some_table
                WHERE some_condition;
            DECLARE CONTINUE HANDLER FOR NOT FOUND
                SET done = 1;

            OPEN c;
            process_some_table: LOOP
                FETCH c INTO c_foo, c_bar, c_baz;
                IF done THEN
                    LEAVE process_some_table;
                END IF;

                -- do something with c_foo, c_bar, c_baz
            END LOOP;
            CLOSE c;
        END;

  The original “structured programming” revolution in the 1960s seems to have passed the MySQL team by.

* Okay, I lied: there are other looping constructs. MySQL also has `WHILE condition DO ... END WHILE` and `REPEAT ... UNTIL condition END REPEAT` (the latter analogous to C's `do {} while (!condition);` loop). But neither lets you loop over query results, and a `REPEAT` body always runs at least one iteration.
* There is nothing resembling a modern exception system with automatic scoping of handlers or declarative exception management. Error handling is entirely via Visual Basic-style “on condition X, do Y” instructions, which remain in effect for the rest of the program's execution.
    * In the language shipped with MySQL 5.0, there wasn't a way to signal errors, either: programmers had to resort to stunts like [intentionally issuing failing queries](http://stackoverflow.com/questions/465727/raise-error-within-mysql-function), instead. Later versions of the language addressed this with the [`SIGNAL` statement](http://dev.mysql.com/doc/refman/5.5/en/signal.html): see, they _can_ learn from better languages, eventually.
* You can't escape to some other language, since MySQL doesn't have an extension mechanism for server-side languages or a good way to call out-of-process services during queries.

The net result is that developing MySQL stored programs is unpleasant, uncomfortable, and far more error-prone than it could have been.
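Since MySQL 5.5, `SIGNAL` at least makes triggers a workable stand-in for the `CHECK` constraints MySQL throws away. A sketch -- the table, column, and trigger names here are invented for illustration:

```sql
-- Hypothetical schema: emulates CHECK (balance >= 0), which MySQL's
-- parser accepts in CREATE TABLE and then silently discards.
-- Requires MySQL 5.5 or later for SIGNAL.
DELIMITER //
CREATE TRIGGER account_balance_non_negative
BEFORE INSERT ON account
FOR EACH ROW
BEGIN
    IF NEW.balance < 0 THEN
        SIGNAL SQLSTATE '45000'
            SET MESSAGE_TEXT = 'account.balance must be non-negative';
    END IF;
END //
DELIMITER ;
```

(A complete emulation also needs a matching `BEFORE UPDATE` trigger -- and it still carries all of the trigger/binary-logging caveats above. On MySQL 5.0 the body would instead have to provoke an arbitrary failure, per the Stack Overflow stunt linked above.)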
## Why Is MySQL The Way It Is? { #by-design }

MySQL's technology and history contain the seeds of all of these flaws.

### Pluggable Storage Engines

Very early in MySQL's life, the MySQL dev team realized that MyISAM was not the only way to store data, and opted to support other storage backends within MySQL. This is basically an alright idea; while I personally prefer storage systems that focus their effort on making one backend work very well, supporting multiple backends and letting third-party developers write their own is a pretty good approach too.

Unfortunately, MySQL's storage backend interface puts a very low ceiling on the ways storage backends can make MySQL behave better.

MySQL's data access paths through table engines are very simple: MySQL asks the engine to open a table, asks the engine to iterate through the table returning rows, filters the rows itself (outside of the storage engine), then asks the engine to close the table. Alternatively, MySQL asks the engine to open a table, asks the engine to retrieve rows in range or for a single value over a specific index, filters the rows itself, and asks the engine to close the table.

This simplistic interface frees table engines from having to worry about query optimization - in theory. Unfortunately, engine-specific features have a large impact on the performance of various query plans, but the channels back to the query planner provide very little granularity for estimating cost and prevent the planner from making good use of the engine in unusual cases. Conversely, the table engine system is totally isolated from the actual query, and can't make query-dependent performance choices “on its own.” There's no third path; the query planner itself is not pluggable.

Similar consequences apply to type checking, support for new types, or even something as “obvious” as multiple automatic `TIMESTAMP` columns in the same table.
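That `TIMESTAMP` special case is easy to demonstrate. A sketch, with an invented table name; this describes behaviour as of MySQL 5.5 (the restriction was only lifted in later releases):

```sql
-- On MySQL 5.5 and earlier this fails with ER_TOO_MUCH_AUTO_TIMESTAMP_COLS:
-- only one TIMESTAMP column per table may use CURRENT_TIMESTAMP in its
-- DEFAULT or ON UPDATE clause.
CREATE TABLE audit_row (
    id      INTEGER AUTO_INCREMENT PRIMARY KEY,
    created TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP
                      ON UPDATE CURRENT_TIMESTAMP
);
```

Nothing about any particular storage engine requires this limit; it falls out of the shared, fixed table representation MySQL hands to all of them.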

Table manipulation -- creation, structural modification, and so on -- runs
into similar problems. MySQL itself parses each `CREATE TABLE` statement, then
hands off a parsed representation to the table engine so that it can manage
storage. The parsed representation is lossy: there are plenty of forms MySQL's
parser recognizes that aren't representable in a `TABLE` structure, preventing
engines from implementing, say, column or tuple `CHECK` constraints without
MySQL's help.

The [sheer number of table
engines](http://dev.mysql.com/doc/refman/5.5/en/storage-engines.html) makes
that help very slow in coming. Any change to the table engine interface means
perturbing the code of each engine, making progress on new MySQL-level
features that interact with storage, such as better query planning or new SQL
constructs, necessarily slow to implement and slow to test.

### Held Back By History

The original MySQL team focused on pure read performance and on “ease of use”
(for new users with simple needs, as far as I can tell) over correctness and
completeness, violating Knuth's laws of optimization. Many of these decisions
locked MySQL into behaviours very early in its life that it still displays
now. Features like implicit type conversions legitimately do help streamline
development in very simple cases; experience with [other
languages](http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/)
unfortunately shows that the same behaviours sandbag development and help hide
bugs in more sophisticated scenarios.

MySQL has since changed hands, and the teams working on MySQL (and MariaDB,
and Percona) are much more mature now than the team that made those early
decisions. Even so, MySQL's massive and frequently non-savvy userbase makes
it very hard to introduce breaking changes.
At the same time, adding _optional_ breaking
changes via server and client mode flags (such as `sql_mode`) increases the
cognitive overhead of understanding MySQL's behaviours -- especially when that
behaviour can vary from client to client, or when the server's configuration
is out of the user's control (for example, on a shared host, or on EC2).

A solution similar to Python's `from __future__ import` pragmas for making
breaking changes opt-in some releases in advance of making them mandatory
might help, but MySQL doesn't have the kind of highly-invested, highly-skilled
user base that would make that effective -- and it still has all of the
problems of modal behaviour.

## Bad Arguments

Inevitably, someone's going to come along and tell me how wrong I am and how
MySQL is just fine as a database system. These people are everywhere, and they
mean well too, and they are almost all wrong. There are two good reasons to
use MySQL:

1. **Some earlier group wrote for it, and we haven't finished porting our code
   off of MySQL.**
2. **We've considered all of these points, and many more, and decided that
   `___feature_x___` that MySQL offers is worth the hassle.**

Unfortunately, these aren't the reasons people actually give. The following
are much more common:

* **It's good enough.** No it ain't. There are plenty of other equally-capable
  data storage systems that don't come with MySQL's huge raft of edge cases
  and quirks.
    * **We haven't run into these problems.** Actually, a lot of these
      problems happen _silently_. Odds are, unless you write your queries and
      schema statements with the manual open and refer back to it constantly,
      or have been using MySQL since the 3.x era _daily_, at least some of
      these issues have bitten you. The ones that prevent you from using your
      database intelligently are very hard to notice in action.
* **We already know how to use it.** MySQL development and administration
  causes brain damage, folks, the same way PHP does. Where PHP teaches
  programmers that “array” is the only structure you need, MySQL teaches
  people that databases are awkward, slow, hard-to-tune monsters that require
  constant attention. That doesn't have to be true; there are comfortable,
  fast, and easily-tuned systems out there that don't require daily care and
  feeding or the love of a specialist.
* **It's the only thing our host supports.** [Get](http://linode.com/)
  [a](http://www.heroku.com/) [better](http://gandi.net/)
  [host](https://www.engineyard.com). It's not like they're expensive or hard
  to find.
    * **We used it because it was there.** Please hire some fucking software
      developers and go back to writing elevator pitches and flirting with Y
      Combinator.
* **Everybody knows MySQL. It's easy to hire MySQL folks.** It's easy to hire
  MCSEs, too, but you should be hiring for attitude and ability to learn, not
  for specific skillsets, if you want to run a successful software project.
    * **It's popular.** Sure, and nobody ever got fired for buying
      IBM/Microsoft/Adobe. Popularity isn't any indication of quality, and if
      we let popularity dictate what technology we use and improve we'll
      never get anywhere. Marketing software to geeks is _easy_ - it's just
      that lots of high-quality projects don't bother.
* **It's lightweight.** So's [SQLite 3](http://www.sqlite.org) or
  [H2](http://www.h2database.com/html/main.html). If you care about deployment
  footprint more than any other factor, MySQL is actually pretty clunky (and
  embedded MySQL has even bigger problems than freestanding MySQL).
* **It's getting better, so we might as well stay on it.** [It's
  true](http://dev.mysql.com/doc/refman/5.6/en/mysql-nutshell.html): if you
  go by feature checklists and the manual, MySQL is improving “rapidly.” 5.6
  is due out soon and superficially looks to contain a number of good
  changes. I have two problems with this line of reasoning:
    1. Why wait? Other databases are good _now_, not _eventually_.
    2. MySQL has a history of providing the bare minimum to satisfy a feature
       checkbox without actually making the feature work well, work
       consistently, or work in combination with other features.
diff --git a/wiki/packaging-ideas.md b/wiki/packaging-ideas.md
deleted file mode 100644
index a881358..0000000
--- a/wiki/packaging-ideas.md
+++ /dev/null
@@ -1,20 +0,0 @@
# Why “Web 2.0” Matters

It's not about Web 2.0. It's about every stupid industry buzzword that's ever
made programmers roll their eyes.

## Packaging ideas

“Web 2.0” gives people who don't live and breathe technology a handy hook to
group ideas on.

* New, unfamiliar ideas
* Ideas people already have but haven't articulated.

## Unpackaging ideas

A well-packaged idea has to do two things:

1. Get the idea into someone's head.
2. Let someone “unpackage” the idea in their own context and think new things
   about it.
diff --git a/wiki/people/co-op-social-media.md b/wiki/people/co-op-social-media.md
deleted file mode 100644
index afbb119..0000000
--- a/wiki/people/co-op-social-media.md
+++ /dev/null
@@ -1,51 +0,0 @@
# Notes towards a Co-Op Social Network

## Premises

* Money, and paid labor generally, is a prerequisite to a polished, usable technical system that is accessible to more than just the technical core of its contributors.

    * A social network is a technical system.

* Existing social network systems have _users_ - people who socialize through the network - and _customers_ - people who provide revenue in return for obtaining something of value.
  Those two groups have incompatible desires, and the ones with the money win.

    * _Users_ want a way to connect, communicate, plan, and organize with people they know.

    * _Customers_ want to maximize the return on their dollars.

    * Social network customers tend to be advertisers, whose goals are maximizing the visibility of their messages and maximizing the information extracted about the relationships and demographics of the users being advertised to.

    * Secondary customers: law enforcement.

* Revenue from customers and investor dollars are the only sources of money for a privately-owned, incorporated social network.

* The conflicting incentives between social networks' stated goals, which would benefit the users, and social networks' actual goals, which benefit the customers, are at the root of major, high-visibility dysfunctions in social networks:

    * The rise in news-like advertising on Facebook (it's what metrics provided by Facebook show Facebook users will respond to, after all)

    * Limited or consciously-ineffective tools for managing conflict. (If you block someone, you can't see ads they might propagate or share!)

    * Emphasis on metrics-gathering and privacy-puncturing tools even for users (automatic location posting, closed services, infinite data retention as the default).

    * Features widely requested by users generally disregarded in favour of "pet" features, features requested by customers, and doing nothing.

    * The "Nobody at Twitter uses Twitter" problem

* Social media users are poorly-served by existing social networks, as a result of these mismatched incentives.

* Accepted wisdom is that nobody will pay to use a social network, and that social networks which need money therefore must find money somewhere else.

* We can do better.

## Proposal

* Organize a social media venture as a cooperative, so that the company's own incentives line up with things that benefit the users.

    * What kind of co-op?
      Member-owned? Employee-owned? The incentives are different.

* Fund the cooperative through a combination of membership dues, user fees, public and private _grants_, and other non-transactional income sources.

* Direct the construction, modification, and maintenance of the network with direct involvement of the network's users.

    * How?

* Prioritize the welfare of users and the sustainability of the enterprise, rather than maximizing revenue or assets.
diff --git a/wiki/people/community-norms.md b/wiki/people/community-norms.md
deleted file mode 100644
index 9062f23..0000000
--- a/wiki/people/community-norms.md
+++ /dev/null
@@ -1,23 +0,0 @@
# Community Norms & Problematic Media

From a Slack discussion about [this rpg.stackexchange thread](http://meta.rpg.stackexchange.com/questions/6399/rpgs-by-and-for-white-nationalists):

> Roleplaying games are vehicles for stories. If you ignore the social context and strip them down to pure mechanics, you might as well play _any_ game - but (a) mechanics are not, themselves, morally or socially neutral, and (b) the thing Varg actually shipped is, quite consciously, laden with intent, and disregarding that wilfully enables him.
>
> Critical analysis and remixing is a great way to defang racist literature.
>
> I think maybe the point is “it’s not the game, it’s the players:” you’re (implicitly) arguing for a conscious, intentional restriction on the spectrum of tolerated opinions and beliefs. That’s fine, that’s _the nature of social spaces_, but the idea of consciously excluding someone on that basis is, at least on paper, anathema to (for lack of a better term) nerd culture.
>
> (It actually happens anyways, “on paper” and “lol these are humans” never line up. I'm talking about what we tell ourselves, and therefore how we justify our actions.)
>
> The conversation about whether to ban discussion of _the game_ is misguided, because it doesn’t address any of that.
> A bunch of folks who all agree with the prevailing opinion (that racist vehicles are a bad idea and the people who voluntarily participate in them are, at best, highly suspect) should, I think, be perfectly able to have a sensible, critical discussion of the game without violating community norms.
>
> What you’re actually asking for, and what may be the only way to manage this, is a _ban on having personal beliefs that align with the author's_.
>
> I’m +1 on that, and someone in the thread explains why perfectly: nominal inclusiveness causes marginalized people to exclude themselves for their own protection, since nominal inclusiveness prevents various social safety mechanisms that might otherwise protect marginal people from functioning.
>
> Basically, I think [geek social fallacies #1 and #2](http://www.plausiblydeniable.com/opinion/gsf.html) are driving that thread.
>
> In a context where it’s unacceptable to say “if you are racist, we want you to leave,” banning discussion of racist material is a workable stopgap, but it’s considerably more expensive (more stressful, harder to enforce, harder to explain to new community members) than fixing the problem you actually care about.
>
> I’m quite happy to ban people for having vile opinions. The “it’s just my opinion, man” stance that so, so many idiot bros (even me, sometimes) fall back on is not, basically, a good reason to tolerate people.
diff --git a/wiki/people/public-compensation.md b/wiki/people/public-compensation.md
deleted file mode 100644
index dcae172..0000000
--- a/wiki/people/public-compensation.md
+++ /dev/null
@@ -1,101 +0,0 @@
# Notes Towards A Public Compensation Database

There's a large body of evidence that silence about compensation is not in the interest of those being paid.
Let's do something about that: let's build a system for analyzing and reporting over salary info.

## Design Goals

1. Respect the safety and consent of the participants.

2. Promote a long-term, public conversation about compensation and salary, both in tech and in other fields.

## Concerns

* Compensation data is historically contentious.
  For a cautionary tale, see [@EricaJoy's Storify](https://storify.com/_danilo/ericajoy-s-salary-transparency-experiment-at-googl) about salary transparency at Google.
  Protecting participants from reprisal requires both effective transparency about how data will be collected and used, and a deep, pervasive respect for consent.

* Naive implementations of anonymity will encourage abusive submissions: defamatory posts, fictional people, attempts to skew the data in a variety of ways.
  If this tool succeeds, abuses will discredit it and may damage the larger conversation.
  Abuses may also prevent the tool from succeeding.

* _Actual laws_ around salary discussion are not uniform.
  Tools should not make it easy for people to harm themselves by mistake.

* Voluntary disclosure is an inherently unequal process.

## Design

The tool stores _observations_ of compensation as of a given date, consisting of one or more of the following compensation types:

* Salary
* Hourly wage
* Bonus packages
* Equity (at approximate or negotiated value, e.g. stock options or grants)
* “Other Compensation” of Yearly, Quarterly, Monthly, or One-Time periodicity

From these, the tool will derive a “total compensation” for the observation, used as a basis for reporting.

Each observation can carry _zero or more_ structured labels:

* Employer
    * Employer's city, district, and country
* Employee's name
    * Employee's city, district, and country
* Job Title
* Years In Role (seniority)
* Years In Field (experience)
* Sex
* Gender
* Ethnicity
* Age
* Family Status
* Disabilities

All labels are _strictly_ voluntary and will be signposted clearly in the submission process.
Every label consists of freeform text or numeric fields.
Text fields will suggest autocompletions using values from existing verified observations, to encourage submitters to enter data consistently.

There are two core workflows:

* Submitting an observation
* Reporting on observed compensation

The submission workflow opens a UI which requests a date (defaulting to the date of submission) and a compensation package.
The UI also contains expandable sections to allow the user to choose which labels to add to the submission.
Finally, the UI contains an email address field used to validate the submission.
The validation process will be described later in this document, and serves both to deter abuse and to enable post-facto moderation of a user's submissions.

The report workflow will allow users to select a set of labels and see the distribution of total compensation within those labels, and how it breaks down.
For example, a user may report on compensation for jobs in Toronto, ON, Canada with three years' experience and see the distribution of compensation, and then break that down further by gender and ethnicity.

The report workflow will also allow users to enter a tentative observation and review how it compares to other compensation packages for similar jobs.
For example, a user may enter a tentative observation for Research In Motion, for Software Team Lead jobs, with a compensation of CAD 80,000/yr, and see the percentile their compensation falls in, and the distribution of compensation observations for the same job.

## Verification

To allow moderation of observations, users must include an email address when submitting observations.
This email address _must not be stored_, since storing it would allow submissions to be traced to specific people.
Instead, the tool digests the email address with a preconfigured salt, and associates the digest with the unverified observation.
The tool then emails the given address with a verification message, and discards the address.
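Assuming a SQL backing store with a `SHA2()`-style digest function available, the digest step might look something like this sketch; the table, columns, salt value, and address are all hypothetical, not part of the design above:

```sql
-- Hypothetical: tie an unverified observation to a salted digest of the
-- submitter's address, so the address itself is never stored.
-- @salt stands in for the preconfigured server-side salt.
SET @salt = 'preconfigured-server-side-salt';

INSERT INTO unverified_observation (observed_on, total_compensation, submitter_digest)
VALUES ('2016-02-01', 80000.00,
        SHA2(CONCAT(@salt, 'submitter@example.com'), 256));
```

The same digest computed on a later submission from the same address matches the stored one, which is what makes the correlation and retroactive moderation described below possible without retaining the address.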

The verification message contains the following:

* Prose outlining the verification process.
* A brief summary of the observation, containing the date of the observation and the total compensation observed.
* A link to the unverified observation, where the user can verify or destroy the observation.

The verification system serves three purposes:

1. It discourages people from submitting spurious observations by increasing the time investment needed to get an observation into the data set.
2. It complicates automated attempts to skew the data.
3. It allows observations from the same person to be correlated with one another without necessarily identifying the submitter.

The correlation provided by the verification system also allows observations to be moderated retroactively: observations shown to be abusive can be used to prevent the author from submitting further observations, and to remove all of that author's submissions (at least, those under that address) from the data set.

Correlations may also allow amending or superseding observations safely. Needs fleshing out.

## Similar Efforts

* Piper Merriam's [Am I Underpaid](https://github.com/pipermerriam/am-i-underpaid), which attempts to address the question of compensation equality in a local way.
* As mentioned above, [@EricaJoy's Storify](https://storify.com/_danilo/ericajoy-s-salary-transparency-experiment-at-googl) covers doing this with Google Docs.
diff --git a/wiki/people/rape-culture-and-men.md b/wiki/people/rape-culture-and-men.md
deleted file mode 100644
index ca97504..0000000
--- a/wiki/people/rape-culture-and-men.md
+++ /dev/null
@@ -1,39 +0,0 @@
# This Is Rape Culture

In the last couple of years, I've been interacting with folks who take a more
active hand in gender and social issues, and it's changed the way I see the
word “rape.” It didn't entirely make sense to me how so many people could be
self-identified victims of rape culture while so few people are, even in a
euphemistic way, identifiable as rapists, so I dug a bit at my assumptions.

Growing up immersed in what I now recognize as the early stages of modern
“news” culture, rape was always reported as a violent act. Something so black
and white that if you committed rape, you would know yourself to be a rapist.
Media descriptions of rape and of rapists focussed on acts of overt violence:
“she was in the wrong neighbourhood and got raped at knifepoint,” “held down
and raped,” and so on.

Reading more recent postings on the idea of “rape culture,” however, paints a
very different picture of the same word: “raped at a party,” “too drunk to
consent,” and other depictions of rape as an act of exploitation (or,
appallingly, convenience or indifference) rather than violence.

Let me be perfectly clear here: without _active consent_, any sexual contact
is rape or is on the road to it. In that sense, violence, exploitation,
intoxication, and other forms of coercion are interchangeable and equally
vile.

However, when the public idea of rape is limited to rapes with overt violence,
it's really easy to excuse non-violent coerced sex as “not really rape.” After
all, you didn't hit her, did you? She never said _no_ and _meant it_, right?

I don't know what I'm going to do with this insight, yet, but I think it's an
important piece towards educating the next generation to be more awesome and
less dangerous to each other, and towards un-learning any bad habits and
beliefs I already have.

Relevant reading:

* [“My friend group has a case of Creepy Dude,” by Captain
  Awkward](http://captainawkward.com/2012/08/07/322-323-my-friend-group-has-a-case-of-the-creepy-dude-how-do-we-clear-that-up/)
  (which also reminded me that it's possible to be a creep to your girlfriend)
* [“Meet the Predators,” from the fantastic Yes Means
  Yes](http://yesmeansyesblog.wordpress.com/2009/11/12/meet-the-predators/),
  cited in the Captain Awkward article but worth a read on its own
  well-researched merits.
diff --git a/wiki/people/rincewind.md b/wiki/people/rincewind.md
deleted file mode 100644
index 7dbc202..0000000
--- a/wiki/people/rincewind.md
+++ /dev/null
@@ -1,32 +0,0 @@
# On Rincewind

[Rincewind](http://wiki.lspace.org/mediawiki/index.php/Rincewind), we are
told, is a wizard. On the Disc, wizarding is a profession; Pratchett based
the wizards on the English academic system, with colleges and bursars and
tenure. A wizard is a man of some academic distinction, or a student of such
a man; career wizards are uniformly well-fed, of sound body (if not
necessarily of sound mind), reasonably dressed, opinionated, crankish, and -
importantly - capable of magic.

Rincewind is a wizard: he is not well fed, having spent his life being thrust
from one adventure to the next; his body is more attuned to running away
from things than to meandering the halls or sitting by a fire; his opinions
largely revolve around “is this new thing going to eat me,” rather than more
abstract matters; importantly, he is completely incapable of magic, in spite
of years of study.

Rincewind is a wizard, and the interesting thing about that is that the
reader is expected (and I certainly did) to take both his and the narrator's
insistence on it at face value. Why shouldn't we?

-----

I had a conversation with [@aeleitch](https://twitter.com/aeleitch) a while
back, while she was teaching herself to program. I don't recall exactly what
prompted it, but at one point I told her to stop worrying about all the
better programmers out there: from everyone else's point of view, she was
already a wizard. There might be better wizards, and worse wizards, but she'd
already passed any sort of bright line delimiting “not a programmer” from
“programmer.”

I think self-identification is important, and overlooked.
diff --git a/wiki/people/why-twitter.md b/wiki/people/why-twitter.md
deleted file mode 100644
index ef20b14..0000000
--- a/wiki/people/why-twitter.md
+++ /dev/null
@@ -1,37 +0,0 @@
# Why Twitter? Survey

I [asked](http://twitter.com/derspiny/status/835317524480811008)

> Twitter frens! Why do you use Twitter?

I got some answers:

> everyone I think is cool is good at it
- [@angusiguess](http://twitter.com/angusiguess/status/835318098156716032)

> mostly to rant pointlessly about computers. also to annoy @jdiller and to follow @SwiftOnSecurity
>
> also for creative and hilarious seasonal username changes.
- (a locked account)

> i'm pretty shy and prefer the intimacy of plausible deniability and public speaking to actually leaving my house.
- [@aeleitch](http://twitter.com/aeleitch/status/835322188643336192)

> it's _the_only_ popular social network with good API access and a commitment to users retaining IP ownership of what they post.
- [@gnomon](http://twitter.com/gnomon/status/835335715043098624)

> natural selection of interesting URLs
- [@letoams](http://twitter.com/letoams/status/835343447640981508)

> It's where I learn about tech, politics and other issues; more than any other place.
> It's also where I interact with said things.
>
> I also have a lot of people I'm close to here, but in a way that doesn't fit into Facebook.
- [@jkakar](http://twitter.com/jkakar/status/835345823219118080)

> bitching. Side eye. Drunk posts.
>
> oh. Also cats.
- (another locked account)

> interesting content slot machine
- [@blagh](http://twitter.com/blagh/status/835374149740728320)
diff --git a/wiki/toronto/pan-am-carding-lab.md b/wiki/toronto/pan-am-carding-lab.md
deleted file mode 100644
index 2516547..0000000
--- a/wiki/toronto/pan-am-carding-lab.md
+++ /dev/null
@@ -1,49 +0,0 @@
# Pan-Am Games Civics Lab

It occurs to me that, with the [Pan-Am
Games](http://en.wikipedia.org/wiki/Integrated_Security_Unit) coming up, you
have a prime opportunity to do some hands-on learning about Toronto civics and
policing. (Those of you who did this lab during the G20 summit are excused. If
you are already at risk of police harassment, you are excused. If you've never
been stopped by a cop in your life, this exercise will determine 70% of your
grade.)

Your assignment: do some things that are completely within your rights and
harmless to others.

1. Dress in lower-middle class drag. Put away the props and costumes of
   authority: no suits, no loafers. Jeans, sneakers, t-shirts, and jackets
   are all in: things chosen as much for their wearability and anonymity as
   for their looks. Break them in, if you can; you'll visibly break character
   if everything is shop new.

2. Keep quiet. Tell your family where you're going - for safety - but not
   social media. If you have an assistant, tell him you're going out, but not
   where you're going. Make it as hard as possible for anyone to connect you
   to any authority or celebrity your day job gives you.

3. Get a camera. The more visible, the better; you can rent one from Vistek
   for a totally achievable number of dollars. Get a strap, too; carrying a
   camera by hand is tiring.

4. Go alone.

5. Take a long, slow stroll along the Pan-Am Games' security perimeter.

You will absolutely be stopped by the Integrated Security Unit, either through
a Toronto officer or an RCMP officer. Remember, you are entirely within your
rights to be there, with a camera, walking. (You'll need the camera to put
yourself on the radar: if you're white, the police will largely ignore you.
Taking pictures is optional, but a lot of fun.)

It's important that you do this _without_ the trappings of authority and
without witnesses. Be powerless. Put yourself at the mercy of the police you
enabled. Experience an unjustified police stop as someone who has no immediate
recourse.

That experience is no fun. It's in turns demeaning, terrifying, embarrassing,
and disempowering. [For some Torontonians, this is a daily
event](http://www.torontolife.com/informer/features/2015/04/21/skin-im-ive-interrogated-police-50-times-im-black/).

_Then_ come back and talk to me about the Toronto Police Service's [carding
program](http://www.thestar.com/news/city_hall/2015/04/20/toronto-police-carding-policy-reform-will-require-super-powers-james.html).
