Presentation on Sense-making Blindspots and Systems Thinking

Having listened to the recording, it strikes me that a key framing question for applying engineering systems theory (like interfaces) to social theory is: do social systems really want to be transparent and accountable? For such systems, would clarity and accurate information be more of a feature or more of a bug? My sense is that, by and large, most social systems really prefer not to be accountable, so clarity of interface is not just an engineering problem to be solved. There is a prior political problem of interests and alignment.

Part way through the talk, Momcilo states that society can be visualized as a set of interconnected systems that are organized and rule-bound. This echoes the major strand of sociology known as “functionalism”, most associated with the name of Talcott Parsons. By biological analogy, functionalists model society as having many moving parts, which are meant to align with one another for the greater good. Quite a bit of sociology after Parsons (going on a century now), however, critiques this functionalist view in favor of one conflict model or another. For conflict sociology, some social systems are seen as predatory on other social systems, and not everyone has everyone else’s best interests at heart.

I would tend to favor such post-functionalist critical theory over functionalism. As an idealized model, perhaps systems like government really should represent the good of the whole. On a practical analytical level, however, that has never really been the case. For the purposes of gaining and retaining power, obfuscation can be more useful than clarity in many cases. So the question of blindspots for sense-making really ought to also factor in purposeful misdirection by those keen to manipulate systems for personal advantage.

I feel there’s a huge gap to bridge here, but I’ll try.

This theory sets out to challenge the myth that society is an unfathomable and unmanageable beast that has a mind of its own and does what it pleases. It also highlights logical incoherence in many systems, which is only a problem insofar as those systems are required to be coherent.

To start with, I’d say that social systems don’t have the ability to “prefer” anything. They are just sets of components and their interdependencies working together towards a purpose or function.

The only “job” of the system is to align with the will of its owner: to do a good job of addressing the underlying frustration and motivation, and to honour the explicit specification. We’re not going into “greater good” here - this is purely a logical exercise.

The misalignment between a system’s stated intentions and its implicit specification is where the mystification lies.

In the “shell game”, a small ball (or pea) is placed under one of three cups or shells, which are then shuffled. The player is asked to guess which cup hides the ball — but it’s often a scam, with sleight of hand used to deceive the player. Let’s say that the difference in sense-making needed for someone to realise that the game is rigged (and not participate) is a certain “quantity”, similar to the undiscovered (or unsurfaced) discrepancy between the explicit and the implicit interface.
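To make the analogy concrete, here is a minimal sketch (names and structure are my own illustration, not anything from the talk) contrasting the game's explicit interface ("fair 1-in-3 odds") with a rigged implicit implementation. The discrepancy is invisible in the stated rules and only surfaces in the outcome statistics:

```python
import random

def fair_game(guess: int) -> bool:
    """Explicit interface: the ball is placed at random, honest 1-in-3 odds."""
    ball = random.randrange(3)
    return guess == ball

def rigged_game(guess: int) -> bool:
    """Implicit implementation: sleight of hand moves the ball away
    from wherever the player guessed, so the player can never win."""
    ball = random.choice([cup for cup in range(3) if cup != guess])
    return guess == ball  # always False

def win_rate(game, trials: int = 10_000) -> float:
    """Estimate the player's win rate over many random guesses."""
    return sum(game(random.randrange(3)) for _ in range(trials)) / trials

print(f"fair:   {win_rate(fair_game):.2f}")    # roughly 0.33
print(f"rigged: {win_rate(rigged_game):.2f}")  # 0.00
```

The point of the sketch: no inspection of the "rules as stated" reveals the rigging; only measuring the gap between expected and observed outcomes does - which is the sense-making work the analogy asks of the player.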

Let’s look at the modern schooling system.

Validation is a big concept here. If the ownership of the system weren’t so disputable, and if the owners didn’t also have to conform to the overarching systems, they wouldn’t need to bend over backwards to get validated.

But as is, they do - and they are not indifferent to what we think.
As complex as they are, systems (social systems included) can be analysed and their workings derived in the form of implicit definitions.

The owners/operators of a system “care” about its coherence, because the only way to preserve the system’s integrity is to present the implementation as a faithful reification of its explicit interface.

When you expose a system’s incoherence, you could, in theory, declare it invalid.
If you couple that with clear accountabilities and responsibilities for the system’s operation, you can “invalidate” those people too.

What happens next? Tension, opportunities opening up, the system being tested at its limits…


Just this morning I’ve been on email with Gregg Henriques and his UTOK inner circle having a rather parallel conversation about social justifications. In that conversation, Gregg cited this previous article of his:

I was particularly struck by Vervaeke and Sengstock’s diagram (down near the end) ranging from “Con Game” to “Science Lab”. To me that speaks volumes about gaps in sensemaking, and whether or not various political actors see such gaps more as problems or more as opportunities.

Haha, it definitely seems like it. Graeber himself says it’s around 40%, but yeah, it definitely is a lot. He also points out that having so many jobs like these frames people’s ‘common sense’ about what counts as a job, how rewarding it is, the overall purpose of work, how it relates to value, how we are supposed to relate to work, etc., and the second part of the book on this theory explores these things in some depth.
