

Thoughts on building better (online) communities

Mar 26, 2018 · Tags: philosophy


This is an expansion of some of my personal notes on community building, written over the past few months. It’s written with online communities first in mind, but applies more generally to all sorts of communities (including IRL ones).

I think it’s salient in the current context, namely the Facebook data-sharing controversy and the larger debate regarding the role of individuals, platforms, companies, and states in regulating and manipulating online discourse.


[Image: Utopia.]

First observation: Communities thrive when people choose to join them, not when they feel they need to join them. Obligation makes people uninterested in the community’s growth.

Second observation: Individuals have limited time and attention, and have broad discretion over how they use their time and attention.

Third observation: Individuals are also bad at actually using their discretionary powers, and many large, extant communities take advantage of this (unethically) to drive retention. As such, bad actors present the community’s operators with a conflict of interest: they lower the standard of discourse in the community, but they keep people (and their ad dollars) coming[1].

Fourth observation: Automated rule enforcement[2] makes people angry and drives feelings of disenfranchisement (which, in turn, breeds the same lack of interest in the community’s growth). Current popular automatic moderation techniques are wholly insufficient, and ML-based enforcement is likely to make the problem worse, not better[3]. (A sketch of the detection-versus-enforcement distinction follows these observations.)

Fifth observation: Unless developing an advertising/analytics empire is the goal, quantity is a secondary concern. Good communities grow (and shrink) organically.
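To make the distinction in the fourth observation (and footnote 2) concrete, here is a minimal sketch of automated detection feeding a manual review queue. It’s written in Python, and the names (`ReviewQueue`, `looks_abusive`, and so on) are hypothetical, not any real platform’s API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Report:
    author: str
    content: str
    reason: str

@dataclass
class ReviewQueue:
    """Automated *detection*: flagged content waits for a human decision."""
    pending: list[Report] = field(default_factory=list)

    def flag(self, report: Report) -> None:
        # Enqueue for manual review instead of auto-removing or auto-banning
        # (which would be automated *enforcement*).
        self.pending.append(report)

def scan_post(author: str, post: str,
              looks_abusive: Callable[[str], bool],
              queue: ReviewQueue) -> None:
    """The classifier only ever suggests; a moderator decides."""
    if looks_abusive(post):
        queue.flag(Report(author, post, "flagged by classifier"))
```

The classifier can be as sophisticated as you like; the point is the shape of the pipeline: it never acts on its own.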

So, what can we do to build a better online community?

Manage expectations

[Image: Duck season.]

In my experience, communities do better when they state their expectations up front: what behavior is expected of members, what integration into the community normally looks like, and so forth. That means no implicit/tacit expectations or rules (even seemingly obvious ones). This is especially important for international communities — individuals from different cultures may have entirely different (but not malicious) notions of what it means to be a good member of the community, and having explicit guidelines can help with those differences.

For new members, that includes:

And, more generally:

Require user investment

[Image: Karma.]

On a similar note (and following from the first observation), communities do better when users are invested in remaining in the community. Historically, this has been achieved through gamification: Reddit’s karma system, Facebook’s likes and notifications (e.g., being reminded of birthdays), and Twitter’s love/retweet counts all serve to incentivize “positive” behavior[5].

However, each of these gamifications has an adverse flipside — point systems can be manipulated by bad actors to artificially inflate the ranking of content (and, in a vicious cycle, can lend credence to those same actors when score/rank is naively associated with authority and/or credibility).
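As a toy illustration of that manipulation (the posts and accounts below are invented), consider a ranking that naively counts unique upvoting accounts, the way a simple karma system might. A bad actor with enough throwaway accounts wins every time:

```python
from collections import Counter

def naive_rank(votes: dict[str, set[str]]) -> list[tuple[str, int]]:
    """Rank posts by raw upvote count: one vote per account, no weighting."""
    return Counter({post: len(voters) for post, voters in votes.items()}).most_common()

votes = {
    "thoughtful-post": {"alice", "bob", "carol"},
    "junk-post": {f"sockpuppet{i}" for i in range(50)},  # one bad actor, 50 accounts
}
print(naive_rank(votes))  # [('junk-post', 50), ('thoughtful-post', 3)]
```

Weighting votes by account age or reputation raises the cost of the attack, but anything that can be accumulated can also be farmed.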

Thus, we want a system that requires user investment (since invested users want to retain their presence and status) but without traditional (gameable) investment mechanisms.

Some ideas:

Many of these can additionally be gamified (“Jane has been promoted X times!”, “Mark has used the showcase X times!”), but none of them requires gamification.

Enforce responsibility

[Image: The buck stops here.]

Many of the ideas listed above for user investment are susceptible to manipulation: bad actors can flood the community with junk accounts and sockpuppets to control votes, farm showcases, and so forth.

One solution to this is member responsibility, wherein members never act in isolation — their actions always have an immediate effect on others[6].

Some ideas for responsibility systems:

User vouching can also be extended/partially gamified: extant users could be allowed to vouch for other extant users (after meeting some requirements), turning the tree into a graph and allowing members to be recognized as especially responsible.
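Here’s one possible shape for such a vouching system, as a Python sketch. The class and method names (`VouchGraph`, `penalize`) and the decaying-propagation rule are my own assumptions for illustration; nothing above prescribes a particular mechanism:

```python
from collections import defaultdict

class VouchGraph:
    """Members join by being vouched for; misbehavior also costs the vouchers.

    Allowing members to have multiple vouchers is what turns the
    tree of invitations into a graph.
    """

    def __init__(self) -> None:
        self.vouchers: dict[str, set[str]] = defaultdict(set)  # member -> their vouchers
        self.standing: dict[str, float] = defaultdict(lambda: 1.0)

    def vouch(self, voucher: str, member: str) -> None:
        self.vouchers[member].add(voucher)

    def penalize(self, member: str, amount: float, decay: float = 0.5,
                 _seen: set[str] | None = None) -> None:
        """Dock the offender, then propagate a decayed share up the chain."""
        seen = _seen if _seen is not None else set()
        if member in seen or amount < 0.01:  # cycles and negligible amounts stop here
            return
        seen.add(member)
        self.standing[member] -= amount
        for voucher in self.vouchers[member]:
            self.penalize(voucher, amount * decay, decay, seen)
```

Under this sketch, `g.vouch("senior", "newbie")` followed by `g.penalize("newbie", 0.4)` docks the newcomer by 0.4 and their voucher by 0.2 — the “immediate effect on others” described above.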

Encourage user autonomy and fluidity

[Image: Tea time.]

Users like to have broad control over how they identify and present themselves, and many platforms already provide mechanisms for this — Twitter users regularly change their display name (usually a short motto or joke)[7], and subreddits allow users to select flairs that identify them as a member of a (loose) subgroup.

Some ideas on improving user autonomy and fluidity:

Balance subcommunity autonomy with cohesion

[Image: r/place]

Subcommunities, formal or informal, are a natural part of community growth — users want to discuss niche interests without changing venues, and will abandon platforms that don’t accommodate them. That means empowering subcommunities to establish their own cultures and rules.

There’s a thin line to walk, though: subcommunities need to respect the culture and rules of the overarching community, and shouldn’t be allowed to dominate it when they grow into the largest fish in the tank[9].

So, some ideas for balancing the interests of subcommunities with those of the community as a whole:


Concluding thoughts

First reflection: I’ve created and moderated several communities over the years, and have routinely failed to realize many (if not all) of the ideas I’ve suggested above. Some of that can be blamed on the medium of each community (IRL, IRC, forums, video game clans), but a lot of it comes down to personal failure — it’s genuinely hard to produce a good community.

Second reflection: A lot of the ideas listed above are difficult to scale up. That’s either an advantage or a disadvantage, depending on your perspective. Either way, they probably won’t help build the next advertising/analytics empire.


  1. At least until the bad actors drive advertisers/revenue sources away, causing reflexive enforcement (and user dissatisfaction, as members realize that the community isn’t optimizing for them, but for advertising). 

  2. As distinct from automated detection, which feeds potential problems to a manual reviewer. 

  3. ML-based enforcement is vulnerable to gamification by bad actors (see: automated copyright failures on YouTube), and is even more impersonal than traditional automatic enforcement (whitelisting/blacklisting, blocking malicious IPs). 

  4. Mandatory onboarding can be a turn-off, so new users should be able to opt-out at some cost (e.g., being badged as a new user (for longer?)). 

  5. From the platform’s perspective, not from the healthy discourse perspective. 

  6. This bears a passing resemblance to China’s Social Credit System, in the sense that it uses consequences for a member’s affiliates to manipulate that member’s behavior. However, the similarity is mostly superficial — not only is the community itself purely voluntary, but, in most of the responsibility systems proposed, the responsibility follows a chain of authority/seniority. 

  7. And, as I recall it, changing your “real” name was pretty popular on Facebook when I was in middle/high school. 

  8. It’s not exactly the same, but GitHub’s ghost is a bit like this. 

  9. If (and when) that does happen, it should be up to the larger community to decide whether to curtail or re-integrate the subcommunity. 

  10. e.g., being the first to audit abuse reports.