Posthumans will necessarily push the boundaries of human factors, ergonomics, HCI (human-computer interaction), and HRI (human-robot interaction). Some of the interactions to be accounted for are interpersonal: how will a posthuman talk to other humans in a given context?
Posthumans will face an interaction and interface legacy situation: they will have to maintain old bodily and social languages, protocols, etc. for backwards compatibility with stock humans. Sometimes the solution may fall squarely into the realm of computers and networks; e.g., people might communicate only indirectly through various software interfaces and filters. Sometimes the solutions may involve other physical entities, such as robots.
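As a toy illustration of such a software filter, consider a translation layer that downgrades a richer posthuman message for a legacy recipient. This is purely a sketch; every channel name below is hypothetical.

```python
# Toy sketch of a backwards-compatibility filter: a posthuman message
# carrying extra channels is downgraded for a stock-human recipient.
# All channel names here are invented for illustration.

def downgrade_message(message: dict, recipient_protocols: set) -> dict:
    """Strip channels the recipient cannot handle, keeping a legacy core."""
    legacy_channels = {"text"}  # assume stock humans only speak "text"
    supported = recipient_protocols | legacy_channels
    return {chan: payload for chan, payload in message.items()
            if chan in supported}

msg = {
    "text": "Hello!",
    "emotion_vector": [0.9, 0.1, 0.0],    # hypothetical affect channel
    "concept_graph": {"greeting": True},  # hypothetical semantic channel
}

# A stock human advertises no extra protocols, so only "text" survives:
print(downgrade_message(msg, set()))
```

The same filter passes richer channels through unchanged when both parties support them, which is the usual shape of a graceful-degradation layer.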
An aside on the subject of interface standards: there will certainly be pressures (such as the market) to make posthuman technology that various types of humans find functional and convenient, which leads to at least some adoption of common standards. But companies and people do not always adhere to common standards. Current technology interfaces are often defined by open standards, but sometimes those standards are not completely open (e.g., royalties must be paid to an organization), or they are proprietary and/or secret. Sometimes proprietary protocols and formats become popular; those proprietary standards are often reverse-engineered, but the originator can redefine the protocol or format at its whim, causing at least temporary incompatibilities. Reverse-engineered or not, many implementations are incomplete or break the specification. Thus there is no guarantee, at least based on human history, that any given posthuman technology will be compatible with anything else. Perhaps we will eventually curtail this situation with more adaptive protocols combined with smarter technology companies.
Even if you are a brain in a vat or a pure information entity living in a computer-based system, you will need interfaces in the form of protocols. These will start with our current protocols, but eventually posthumans may require more advanced ones; for instance, a protocol set specific to posthumans might include mental-capability handshaking and mind docking. But the physical substrate can still rear its ugly head. An example of this harsh reality: a superintelligence in Australia is conversing with a superintelligence in the United States about superstring theory, and right at the cusp of a breakthrough a shark chomps through the undersea fiber trunk and science is set back 100 years.
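A minimal sketch of what mental-capability handshaking might look like, in the spirit of existing protocol negotiation (TLS cipher negotiation, HTTP content negotiation): each mind advertises its capabilities and the parties settle on the intersection before docking. Every capability name below is invented for illustration.

```python
# Hypothetical "mental-capability handshake": two minds exchange
# capability advertisements and agree on the common subset.

def handshake(local_caps: set, remote_caps: set) -> set:
    """Return the capability set both parties support, or fail to dock."""
    shared = local_caps & remote_caps
    if not shared:
        raise ConnectionError("no common capabilities; cannot dock")
    return shared

mind_a = {"natural_language/en", "math/superstring-v7", "memory_share/v2"}
mind_b = {"natural_language/en", "math/superstring-v6", "memory_share/v2"}

print(sorted(handshake(mind_a, mind_b)))
# → ['memory_share/v2', 'natural_language/en']
```

Note that the two minds here cannot share their superstring math at all, since their versions differ; a more adaptive protocol would negotiate down to a mutually intelligible version rather than dropping the capability.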
If that example is not far out enough for the audience, then imagine faster-than-light intercommunications between intelligence clusters spread across the galaxy. But one day the nature of the universe fluctuates (due to the actions of enemy alien superintelligences), rendering obsolete the physical properties that FTL depended on and disintegrating the entire galaxy-spanning intelligence cloud.
Of course, eventually, one would expect supersmart entities to find more robust solutions for information-based intelligence. The point of this section was to illustrate just one of the many interface issues which are amplified by posthuman technology.
Change and Feedback
The discipline of human factors can already predict problems that will occur when trying to design and integrate a piece of technology into a system, and these problems apply to posthuman technology as well. But posthumans make things even more complex: the biological aspect may no longer be constant. Human factors, ergonomics, HCI, and HRI all depend on a relatively static biological norm. The occasional human falls outside the normal ranges, but for most humans a fit can be made. Not necessarily so with posthumans: advanced drugs, gene therapy, physical modifications, etc. could change the physical properties of a person, and likewise with cyborg parts and androids. Cognitive enhancements will totally change the psychological aspect of design. User-centered interaction design depends on known cognitive relationships, which no longer necessarily hold when the user is non-human.
Imagine a group of people on a mission, for instance to colonize another planet. Let’s say we give them all cognitive enhancement A. This changes how they do their jobs, sometimes in unexpected ways. Then we develop cognitive enhancement B; but to design B, we have to redefine the user as user+A and take into account the changes in the mission operation due to A. Once again, B changes not only their minds but also how they do their jobs, sometimes in unexpected ways. Now we design computer interface 2, but to do that we have to redefine human factors and HCI for users with cognitive enhancement B, in the context of the new B-enhanced mission. Computer interface 2 also changes how they do their jobs, sometimes in unexpected ways. And so on.
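The compounding loop above can be sketched mechanically: each new enhancement or interface must be designed against the current user model, which already includes every prior change, and the new artifact then shifts that model in turn. This is only an illustrative toy; the names are hypothetical.

```python
# Sketch of the compounding redesign loop: each design step targets the
# user model as modified by all previous changes, and the result of the
# step becomes part of the user model for the next one.

def design(artifact: str, user_model: tuple) -> str:
    """Pretend design step: record which user model the artifact targets."""
    return f"{artifact} for user" + "".join(f"+{m}" for m in user_model)

history = []
user_model = ()  # baseline stock-human user
for change in ("enhancement A", "enhancement B", "interface 2"):
    history.append(design(change, user_model))
    user_model += (change,)  # the deployed change shifts the user model

for step in history:
    print(step)
```

The point of the toy is the moving target: no two design steps ever face the same user, so the human factors baseline must be re-derived each round.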
It seems that the difficulty and rate of change will keep increasing for human factors and interface design, though there is one counterargument on the difficulty front. One of the limitations of modeling or observing an interaction is that the internal mechanisms of human behavior are largely unknown. But we will know most of the internal mechanisms of posthumans and AIs. Even so, human factors will need sufficiently advanced tools for modeling and predicting posthuman behavior in context, even when the design of the posthuman or AI is known.
Human factors may also have to adapt to additional feedback loops, such as when the designers of technology are themselves posthumans who are potentially also being rapidly updated. Hopefully this will lead to a trend toward better predictions of the effects of technological change and/or faster redesign dynamics for handling those effects.
 Woods, David, and Dekker, Sidney, “Anticipating the effects of technological change: a new era of dynamics for human factors,” Theoretical Issues in Ergonomics Science, vol. 1, no. 3, pp. 272-282, 2000.
 Rouse, William B., Systems Engineering Models of Human-Machine Interaction. New York: Elsevier North Holland, 1980, pp. 129-132.
Apparently a concept I developed in my spare time in 2009, which I dubbed “posthuman factors,” is very similar to some guy’s PhD dissertation in 2010 in which he also used the term posthuman factors. (And I don’t mean everything in his dissertation, but there’s a lot of overlap.)
I recently learned of this through a Wikipedia article I discovered (created in April 2011 by user Nikiburri, subsequently deleted) called “Posthuman factors.” It has a good summary:
In general, posthuman factors addresses the intersection of design practices that includes (1) the design of posthumans, (2) designing for such posthumans, especially in safe and sustainable ways, and (3) designing the design methodologies that will supersede human-centered design (i.e., “posthuman-centered design”, or the processes of design that posthumans employ).
It cites my IEET article “Why You Should Care About (Post)Human Factors,” published Jan 8, 2010, and also claims that posthuman factors was first “articulated” by Dr. Haakon Faste in his Jan 2010 doctoral dissertation “Posthuman Factors: How Perceptual Robotic Art Will Save Humanity from Extinction.”
Most likely we were both thinking about it and writing about it at around the same time (one would assume that, as with my articles mentioned above, the writing actually started in 2009). And then there are whatever projects led to this particular synthesis of concepts; e.g., in my case it connects at least as far back as my attempt to describe an interface point of view for future human/robot/posthuman/etc. interactions (“Would You Still Love Me If I Was A Robot?”).
The Posthuman factors page has a link to a Wikipedia page for Haakon Faste (created by the same user, Nikiburri), which informs us that he is a leading figure in the field of posthuman factors and that he coined the term in 2010. But I posted this article, originally under the title “Do We Need a Posthuman Factors Discipline?”, in December 2009 on my blog, so I guess that means I coined it first. It’s nice to know that I started a new field. And I’m pleased that at least one other person is thinking about these issues.
[Editor’s note: Haakon Faste is, I believe, the son of well-loved Stanford prof and product design guru Rolf Faste.
Both Rolf and Haakon Faste’s publications are worth reading, including Haakon’s thesis. See http://fastefoundation.org/publications/index_chronological.html and also http://www.haakonfaste.com/]
Samuel H. Kenyon is an amateur AI researcher, software engineer, interaction designer, actor, and writer. He blogs at http://synapticnulship.com/ where this post previously appeared. Republished under creative commons license.