I’m opening a discussion thread in connection with the presentation I gave yesterday.
This is a re-ordered and simplified summary to provoke further thoughts on the subject. The link to the recording of the presentation will be made available.
In short, and in reverse order:
- System incoherences create tension in society and lead to social change.
- We can uncover system incoherence through a scenario-based inquiry tool (“Spec Drift Dialogues”) that poses controlled, hypothetical situations to key stakeholders (politicians, lawyers, economists, philosophers…) and compares their interpretations against the system’s stated intent and its executional reality (a minimal sketch follows below).
- All systems must present themselves as coherent to maintain legitimacy. Coherence means alignment between purpose, structure, and outcome.
- Sense-making of societal anomalies suffers from blindspots.
- Social systems are mystified on multiple levels, obfuscating their workings and interdependencies and re-routing accountability and responsibility.
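For the software-engineering readers, here is a minimal sketch of what a Spec Drift Dialogue could look like as a data model plus comparison step. It assumes nothing beyond the description above (controlled scenarios, stakeholder interpretations, stated intent, executional reality); all names are illustrative, and in practice the comparison would be a facilitated, qualitative judgement rather than string matching.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """A controlled, hypothetical situation posed to stakeholders."""
    description: str
    stated_intent: str        # what the system claims it is for
    executional_reality: str  # what the system actually does in this case

@dataclass
class Interpretation:
    """One stakeholder's reading of the scenario."""
    stakeholder: str          # e.g. "politician", "lawyer", "economist"
    reading: str

def drift_report(scenario: Scenario, interpretations: list[Interpretation]) -> dict:
    """Flag readings that diverge from the stated intent and check whether
    the stated intent matches the executional reality."""
    diverging = {i.stakeholder: i.reading
                 for i in interpretations if i.reading != scenario.stated_intent}
    return {
        "scenario": scenario.description,
        "diverging_readings": diverging,
        "intent_matches_reality": scenario.stated_intent == scenario.executional_reality,
        "coherent": not diverging and scenario.stated_intent == scenario.executional_reality,
    }
```

The interesting output is not the boolean but the spread of diverging readings: wherever stakeholders disagree with one another and with the executional reality, that gap is a candidate incoherence.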
Thanks for this summary, @Martin and for the very thought-provoking presentation yesterday. I guess it may not be obvious to those who weren’t there that we are, at least in part, talking about good practices that are currently used in software engineering. So just clarifying that.
What I picked up covered software that was so inadequately built or maintained that it was considered too complex to fix properly, and therefore ended up being patched, potentially introducing yet more layers of complexity. So maybe there is actual experience of software whose complexity compares with the social systems that tend to be the focus of our interest for 2R.
The question, and theme, that sticks with me is: are there methods, or can we develop methods, to take a complex (socio-technical) system and refactor it towards greater simplicity and understandability? Because, from my perspective, the comprehensibility of the system as experienced by participants makes a huge difference.
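To keep the software analogy concrete: in code, “refactoring towards understandability” usually means making the implicit rule explicit and named instead of patching on more special cases. A toy, entirely invented example of that move follows; whether the same move transfers to socio-technical systems is exactly the open question.

```python
# Before: the rule is buried in patched-on special cases; a reader has to
# reverse-engineer every branch to work out what the system is "for".
def fee_v1(amount, user_type, flag_a, flag_b):
    if user_type == "legacy" and not flag_a:
        return amount * 0.05 + 2
    if flag_b:
        return amount * 0.05 + 2 if amount > 100 else amount * 0.07
    return amount * 0.07

# After: behaviour is unchanged, but the intent is named and stated once.
DISCOUNT_RATE = 0.05
STANDARD_RATE = 0.07
FIXED_SURCHARGE = 2

def qualifies_for_discount(amount, user_type, flag_a, flag_b):
    """Grandfathered legacy users, and flagged bulk orders over the
    threshold, get the discounted rate."""
    return (user_type == "legacy" and not flag_a) or (flag_b and amount > 100)

def fee_v2(amount, user_type, flag_a, flag_b):
    if qualifies_for_discount(amount, user_type, flag_a, flag_b):
        return amount * DISCOUNT_RATE + FIXED_SURCHARGE
    return amount * STANDARD_RATE
```

The point is that `fee_v2` is not shorter, it is more legible: a participant can read the predicate and argue with it, which is roughly what comprehensibility “as experienced by participants” would mean.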
My main intention is to raise awareness of the mystification of systems and associated sense-making blindspots. The idea is that discovering a blindspot can present a very lucrative opportunity.
For example, if you never saw apples on the ground (your blindspot) and instead had to climb trees to get to the fruit, then noticing the fallen apples would be exactly that kind of lucrative discovery.
At the moment I’m actively looking for the right angle of approach to these topics.
Are you asking whether looking for blindspots is a “right angle of approach”? If so, would it help to take particular real examples of blindspots, and work with them?
I was very impressed and thoroughly convinced by the blindspot examples you brought up. What I don’t think I grasped so well was, who has the blindspots … at least, who has blindspots that could be enlightened? Which I would see leading on to: who is in a position to use scenario-based inquiry tools, and with whom?
Alternatively, if the blindspots are seen as systemic rather than personal, what are the processes that throw light in a way that leads to actual systemic change?
Just the most effective way of arguing that this is really fertile ground for research. Epistemologically, a blindspot is a failure in either the acquisition, the processing, or the justification of knowledge. Blindspots are not just gaps but places where the conditions for knowing are obscured and unavailable.
Good question: are blindspots built into people or into systems? I’d say both! Systems are built around them; it’s part of the design to gain competitive advantage.
In terms of systemic change: blindspots make sense-making deficient, so the required threshold of coherence is not reached.
There are many use cases, from inter-personal relationships to political, economic, and global decision-making…
Hi all, Martin, I’m posting the questions I asked on the call as requested.
I asked ChatGPT for a summary:
Are we caught in a cognitive bind when it comes to systems innovation?
Do certain personality types or cognitive styles, which are needed for complex systems thinking or systems development, inherently exclude or override other important aspects of integration, such as emotional intelligence, collective consciousness, or “we” thinking?
Do the brains or mindsets that can construct innovative systems also inherently create limitations?
Is there something about the way these individuals process information that leads to recurring “hiccups” or systemic flaws?
Are we structurally or cognitively stuck?
Are we in a situation where the very ways of thinking required to solve complex problems actually keep us from solving them in a truly holistic or sustainable way?
How does innovation shift when viewed through this lens?
If we approached system development with a more integrative perspective (one that includes different types of intelligence or awareness), what would innovation then look like? Where would it move or how would it evolve?
Does the ability to sense and process high levels of complexity come at a cost — for example, at the expense of other aspects of life, like emotional balance, social connection, or day-to-day enjoyment?
Is it inevitable that certain kinds of insight or intelligence come with trade-offs — that some kinds of depth or capacity must be gained at the cost of something else?
Is disruption necessary for meaningful change or transformation? Can we ever truly progress without going through chaos or breakdowns of the existing order?
Is a certain level of imbalance or tension essential to human development — not just socially, but individually and spiritually? If things were too peaceful or balanced, would we lose our sense of purpose or become self-destructive and seek out disruption?
How could (or should?) we consciously include the essence of disruption, or even destruction, in some small way into the systems that we’re creating, so that we allow for a flux that creates the polarity that maybe we are energetically, intentionally trying to pull through?
Interesting questions, and I certainly hope to keep the various questions alive. If I recall correctly, there was one particular question or framing which we decided to shape the next call around (I think). I suppose it is in the recording, but does anyone remember exactly what it was?
Just something that popped up in my mind which was slightly addressed in the talk; namely, that it is the specific role of some people to mystify, to lead people through the labyrinth until they get tired and lose faith. It reminded me a bit of the ‘bullshit jobs’ framework (David Graeber). Here’s a taxonomy from ChatGPT; duct-tapers and box tickers seem especially relevant:
1. Flunkies
Definition: Jobs that exist mainly to make someone else look or feel important.
Examples: Receptionists who have nothing to do, doormen, administrative assistants whose real function is just to symbolize status.
Key Trait: They serve to inflate egos or signal prestige.
2. Goons
Definition: Jobs with an aggressive or adversarial element that exist only because the other side employs people in the same role. If no one had this job, it wouldn’t be needed.
Examples: Lobbyists, corporate lawyers, telemarketers, PR specialists.
Key Trait: They exist through an arms-race logic rather than any intrinsic need.
3. Duct Tapers
Definition: Jobs that exist only to patch over a problem the organization could fix but doesn’t.
Key Trait: They keep a broken system limping along instead of letting it be repaired.
4. Box Tickers
Definition: Jobs that exist so an organization can claim it is doing something it isn’t really doing.
Key Trait: The appearance of action substitutes for the action itself.
I think that the first step is to acknowledge that people struggle with sense-making, and that as systems get more complex it gets worse. It’s a spectrum of ability when it comes to seeing through the complexification and manipulation.
If this were a conscious effort (like focusing hard on systems, or assuming a certain cynical disposition to figure out whether they are bona fide), then there would be a cost (less trusting, less open to opportunities, etc.). But I believe that this sense of “seeing wrongness” in systems is intuitive and not “effort-based”. I have no way of knowing; it just seems to me that way…
It seems that way. You need tensions, and they arise from contradictions. Of course, these “contradictions” or “incoherences” are exactly what we’re talking about. Some people (most people) can’t see them, or they don’t think they are important.
Bourdieu thought that the tension between the field and the habitus is what leads to change.
Habitus being the conservative, hierarchical inertia that carries a lot of capital, and the field being the arena where fresh, opportunistic energy gets released through the struggle for dominance.
The Marxist idea about people not being able to “see the conditions of their own subjugation” is directly applicable here: you see that something is amiss, but you have only the wrong type of tools to try to solve something you don’t have clarity about.
Chaos and disruption are needed for the “recharging” cycle - dopamine release…
Yes, probably. Although, theoretically, it’s possible to simulate it by “medicating” people with hormones and cycling them, without incurring chaos on the external world.
That’s what Trump is currently doing. He’s creating a “controlled” chaos to get people out of their comfort zone and force them to change. This is (probably) an unprecedented experiment on this scale in history. Obviously, he’s not doing it out of the goodness of his heart…
This is interesting. I think what you’re suggesting here is quite advanced. I’d say that a good angle would be to ask whether someone external would recognise what these jobs are all about. Most would just accept the statement of fact as if it were the inevitable nature of reality. Some would intuitively realise that something’s wrong. What Elon Musk did when he purchased Twitter should be analysed: what do you need in terms of personal capacities to fire a large number of people like that and not fear that everything will implode?
I find it very interesting that you all understood it in your own way, which is also different from what I set out to document. You might remember that I used the word “swirl” to try to graphically depict this hurricane of emotions and sense-making storm that takes place. I see it like a huge washing machine on a spin cycle where it’s hard to see what’s spinning, like a school of sardines running in front of a shark… It’s a chaos of small chaoses, and they are all mystifying structures that need deciphering…
All the hallmarks of an interface vs. implementation misalignment, and
"Transport Secretary Heidi Alexander described the issue as “a problem that needs to be tackled”, while the [RAC] called for a Government-backed code of conduct to be reintroduced."
is again focusing on the individual manifestation of a much wider problem of systemic corruption.
A system that issues fines with no need to prove its own correctness, while forcing citizens to prove innocence under asymmetric rules.
A system whose inner logic is sealed off, but whose effects are legally binding and financially punitive.
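For readers without the software background, “interface vs. implementation misalignment” means the behaviour a system publicly commits to and the behaviour it actually executes have drifted apart. A hypothetical sketch of the fines example, with every rule and name invented for illustration:

```python
# Stated interface: what the public is told the system does.
def stated_rule(case: dict) -> bool:
    """A fine is issued only if there is verifiable evidence of a contravention."""
    return case.get("verified_contravention", False)

# Executional reality: what the system actually does.
def executed_rule(case: dict) -> bool:
    """A fine is issued on an internal signal the citizen cannot inspect,
    or on an operator override that records no reason."""
    return case.get("internal_signal", False) or case.get("operator_override", False)

def incoherent_cases(cases: list[dict]) -> list[dict]:
    """Scenarios where the stated rule and the executed rule disagree --
    the gaps that erode trust once people discover them."""
    return [c for c in cases if stated_rule(c) != executed_rule(c)]

if __name__ == "__main__":
    cases = [
        {"verified_contravention": False, "internal_signal": True},  # fined without proof
        {"verified_contravention": True, "internal_signal": True},   # aligned
    ]
    print(incoherent_cases(cases))  # -> only the first case
```

The misalignment is not a bug in either rule taken alone; it is the gap between the contract the system presents and the logic it executes, and it only becomes visible when someone probes it with scenarios the interface never advertised.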
My preferred sources, like Habermas and Pogany, arrive at similar conclusions. It takes system breakdown, contradictions, cognitive dissonance, and chaos to provoke the emergence of new structures. Paradoxically, the current chaotic environment provides some tangible optimism about the potential for change. Motivation for change has suddenly become quite evident. (Not that those who triggered the change motivation are necessarily guiding the process. Where everything ultimately stabilizes may be different from how things appear just now.)
Definitely seems to be the main point in this case and others. Also, partly based on the podcast I sent over, I think looking into U.S. political media and coverage over the last few years would provide an interesting case study, especially moments which (for lack of a better word) might be captured as ‘surreal’: moments of complete, total, and bewildering breakdown of faith or ‘trust’ in ‘the system’, whether through the breakdown of manners and norms (mostly with Trump), through blatant lies, or through the tearing of the social fabric and ideology that everyone was used to. There is a specific quality to certain conversations where no one actually believes or trusts or has ‘faith’ in what is being said, but it is nonetheless still being said. Maybe ‘bad faith’ captures it a bit, but it also seems to be an especially interesting and unique phenomenon.
I’m partly addressing this in the interface vs. implementation misalignment, and I’m asking a question: could it be that this “breakdown of faith or trust in the system” is taking place because the system is no longer hiding its implicit specification? Before, we had faith that “unpressed buttons” would work according to their stated functionality. Now, we’ve discovered that the protection circuits are faulty, and that there are seemingly either no limits to the power of the operators, or we have no idea what those limits are.
Zygmunt Bauman connects this feeling of insecurity and uncertainty to the concept of Liquid Fear. Again, you need to have the right perspective and context to see it, because if you’re born into a society that’s already gripped by such a state, you wouldn’t be able to recognise it.