Seeing as a Human Organism
The goal of today's article is to explain the more recent neologisms that I use/support, especially those I use in the context of "systems thinking for good".
Seeing Like a _______
I like the form Seeing Like a ______. It lets you see as the system itself rather than as an individual actor within it. The form was popularized by Seeing Like a State (which also gave us the powerful associated terms of legibility/texture). However, you can also use it to "see" as other large institutions, e.g. Seeing Like an Attention Economy Aggregator (Facebook). In order to build the future, I think more folks should See Like a "Human Organism".
Seeing Like a Bottom-Up Multi-Scale Human Organism
Here are the words that result when you're "Seeing Like a Human Organism":
- "Directionally correct", "vectoring or iterating towards": The idea here is that, although you're not actually sure what the long-term result will be, you think that a current course of action is "in the right direction". Directionally correct should often be accompanied by the "foundational truth" that steadies that directional correctness. With "vectoring/iterating towards", you're almost always "thinking through time", e.g. focusing for n months and then re-evaluating. (A lean/design/explore-exploit mindset.)
- Proto-example: I love using proto-example instead of something like MVP, especially because, for me, proto-example is often connected with the form "proto-example of X, where I believe that X should be distributed throughout the system". In other words, a proto-example is something that works in a small context and has properties that would be powerful at the macro scale.
- Mechanism: This word seems to come more from economics than from systems thinking, but I still love it. Mechanism is the noun for how something works. It's a great way to understand the root causes of any outcome. (Kind of like 5 Whys-ing something.) It seems more powerful than synonyms like process or method, primarily because it has a texture of "exactness".
- API: I often use API as a way to think about the mechanisms of human-to-human interaction (instead of just computer-to-computer). This is especially helpful when thinking from a human node-based perspective, e.g. what should the APIs of all of the human nodes be? This leads to thoughts like: "what should my 21.co-style value exchange options look like?" Or, less on the API side and more on the "properties of the nodes" side, "in addition to my preferred gender pronouns, what else should people immediately know about me?" (See the sketch after this list.)
- Manifest/embody: I often use these words to show how one "instantiates" one's values. In other words, it's a way of "being" in the world, instead of simply "knowing" something.
- Nature 1.0, 2.0, 3.0: Nature 1.0 is biological nature/earth, Nature 2.0 is humanity, and Nature 3.0 is computers/machines/AI. All of these need to co-evolve with each other towards a macro SharedOutcome. These are nouns. Side note: it's weird that most of my neologisms are verb-ish, rather than noun-ish. (Bonus note: these terms are inspired by folks like Trent McConaghy!)
- Sensemaking: This is often used by folks who study the attention economy, especially when you're thinking from a "human organism" perspective. It allows you to answer questions like "how much clarity does humanity itself have in understanding its challenges/opportunities?" It's important to think of sensemaking as the start of a process of taking action (i.e. the initial inputs into the action).
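To make the "API of a human node" idea above a bit more concrete, here's a minimal TypeScript sketch. Everything in it (the HumanNodeAPI and ValueExchangeOption names, the fields, the example values) is hypothetical and just one way to picture readable "properties of the node" alongside 21.co-style value exchange options; it's not a real library or a spec.

```typescript
// Hypothetical sketch: modeling a "human node" as if it exposed an API.
// All names here are invented for illustration, not taken from any real library.

// Publicly readable "properties of the node": things people should
// immediately know about you.
interface HumanNodeProfile {
  name: string;
  pronouns: string;
  currentFocus: string;        // what you're "vectoring towards" right now
  preferredChannels: string[]; // where you're reachable
}

// A 21.co-style value-exchange option: what a given kind of interaction
// with this node costs, and what it asks for in return.
interface ValueExchangeOption {
  interaction: string; // e.g. "30-minute call", "intro to someone in my network"
  askInReturn: string; // e.g. "$20 to a charity of my choice", "a thoughtful question"
}

// The "API" of a human node: readable properties plus the exchanges it supports.
interface HumanNodeAPI {
  profile: HumanNodeProfile;
  exchangeOptions: ValueExchangeOption[];
}

// Example instance.
const exampleNode: HumanNodeAPI = {
  profile: {
    name: "Alex",
    pronouns: "they/them",
    currentFocus: "sensemaking tools for the attention economy",
    preferredChannels: ["email", "long-form letters"],
  },
  exchangeOptions: [
    { interaction: "30-minute call", askInReturn: "$20 donated to a charity I pick" },
    { interaction: "feedback on a draft", askInReturn: "feedback on one of my drafts" },
  ],
};

console.log(JSON.stringify(exampleNode, null, 2));
```

Framing it as a typed interface makes the question "what should people immediately know about me, and what exchanges do I support?" feel as answerable as designing any other API.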
You can also try to add a "do-gooder" lens to this, i.e. aim to be increasingly good for the system, rather than for individuals. If you do, these words result:
- SharedOutcomes: This is a great word to use when you're trying to find alignment with an "other", especially a "competitor". Instead of fighting them, you find that you're both trying to achieve some SharedOutcome (like providing value to customers), and you can then co-evolve towards that SharedOutcome (you're both happy if it succeeds!). This is especially powerful with "deeper" SharedOutcomes like "long-term flourishing of humanity" (which many folks hold as a SharedOutcome).
- Subtraction Mindset (引き算の美学, "the beauty of subtraction"): This has mostly been pushed by the Ethereum Foundation. It's highly related to Co-Evolving to SharedOutcomes. Instead of making yourself more important, you "become smaller" by "empowering the crowd" to achieve the SharedOutcome. (Also see Terra Nullius as the opposite [and empirically incorrect] foundation for claiming "winner" dynamics.)
Other
- Categorically Different: This is similar to orthogonal, but I prefer it in certain cases. Orthogonal still implies that the two ideas can be mapped onto the axes of a graph. Categorically different is a superset and can also apply to ideas that don't map onto a graph at all.
- Isomorphic: A way to say that the same underlying mechanism/structure holds for two ideas in a certain context. (See the sketch after this list.)
- Positive and Normative: (Classic one.) Positive and normative are "is" vs. "ought" claims. This is often used in rationalist contexts and in economics/academia.
- Also, I'm pretty excited by "weird flex, but ok", especially as a way to notice that everyone shows their authentic selves in strange ways.
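As a toy illustration of "isomorphic" in the sense above, here's a small TypeScript sketch (all names and numbers are made up) in which two ideas from different domains, compound interest and a post's spreading reach, share one underlying mechanism: exponential growth.

```typescript
// Hypothetical sketch: two ideas from different domains that share one
// underlying mechanism (exponential growth), i.e. they are "isomorphic"
// in this narrow context. Names and numbers are illustrative only.

// The shared mechanism: repeated multiplication by a growth factor.
function grow(initial: number, ratePerStep: number, steps: number): number {
  return initial * Math.pow(1 + ratePerStep, steps);
}

// Idea 1: compound interest ($1,000 at 5% per year for 10 years).
const savings = grow(1000, 0.05, 10);

// Idea 2: a social post's reach (100 viewers, 5% resharing growth per hour, 10 hours).
const reach = grow(100, 0.05, 10);

// Different domains, same structure: the mapping between them preserves
// how the mechanism behaves, which is the sense of "isomorphic" used above.
console.log({ savings: savings.toFixed(2), reach: reach.toFixed(2) });
```

The point isn't the domains themselves; it's that once you see the shared mechanism, anything you learn about one idea transfers to the other.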
For more on language, see:
Vote on my future articles here!