Lia DiBello and Neil Sahota on Human-AI Symbiosis - Commoncog

Dr. Lia DiBello is the Chief Science Officer of ACSILabs, Inc., which makes the FutureView Platform — a virtual reality training platform used by the US military and by certain large businesses to accelerate expertise. I last talked to Lia about her groundbreaking work explicating the mental model of business expertise, and in fact created this podcast to interview her. She’s back today to talk about her use of AI to accelerate expertise.


This is a companion discussion topic for the original entry at https://commoncog.com/lia-dibello-neil-sahota-human-ai-symbiosis

I finally had a chance to watch this interview. It was very intriguing, but there were several points where I felt I needed an extra level of detail to “grok” what the practical lessons are.

@cedric: Do you have any further insight on these areas based on your background conversations with Lia and Neil?

How are these exercises different from existing problem solving? Lia gives two examples of creative problem solving: the graduation completion rates situation, and the mechanics at the NYC Transit system dealing with first pass yields, mean time to failure, and outdated equipment. If she hadn’t mentioned AI, I would have thought that the graduation scenario was a textbook example of design thinking problem solving, and NYC Transit an example of a kaizen event. So how was AI used? Did they just feed answers into a model, get feedback, and iterate? How did they teach the model the “rules” for that context? How did they validate it?

Expert systems redux? When Neil described the crop yields example, it sounded like a classic case of an expert system: enter data, the system has rules for what happens with X crop in Y soil, and you can model the effects. The key limitation of expert systems is that the models are only as good as the expert knowledge encoded in them: if your technocratic rules don’t include downstream effects on the environment, or on the political economy, you are blind to those second- and third-order consequences. Neil seems to agree with this, based on his statements about the importance of getting business experts into the process instead of just the engineers. But how does AI help to get the right rules, or to uncover blind spots?
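To make that limitation concrete, here is a minimal sketch of the kind of rule-based model I have in mind. Everything in it (the crops, soils, multipliers, and the predict_yield function) is hypothetical, invented for illustration, and not taken from the system Neil described:

```python
# Hypothetical rule base: illustrative only, not from any real system.
RULES = [
    {"crop": "maize", "soil": "loam", "yield_multiplier": 1.3},
    {"crop": "maize", "soil": "clay", "yield_multiplier": 0.8},
    {"crop": "rice",  "soil": "clay", "yield_multiplier": 1.2},
]

def predict_yield(crop: str, soil: str, baseline_tonnes: float) -> float:
    """Apply the first matching rule; otherwise fall back to the baseline."""
    for rule in RULES:
        if rule["crop"] == crop and rule["soil"] == soil:
            return baseline_tonnes * rule["yield_multiplier"]
    return baseline_tonnes  # no rule fired: the system stays silent

# The blind spot: nothing in RULES mentions water tables, runoff, or
# grain prices, so second- and third-order effects of chasing the
# highest multiplier are invisible to the model by construction.
print(predict_yield("maize", "loam", 10.0))  # prints 13.0, with no caveats
```

The question stands: does putting an AI in this loop actually surface the missing rules, or does it just make the existing ones easier to query?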

2 Likes

This was an interesting interview - I was very happy to see this in my podcast feed!

While I thought both guests made many good points, I did get fixated on a couple of items that undermined their stance a bit:

Centaur chess

The discussion of centaur chess as an example of how humans and AI should work together is quite dated. My general sense (broadly confirmed by a quick search, though still somewhat in dispute, to be fair) is that it is no longer true that a human plus a chess engine can beat a chess engine on its own. At best, the human adds minimal advantage, and there is evidence that the risk of mistakes in executing a move (putting a piece on the wrong square by accident, for instance) may more than negate that advantage.

This point could have been salvaged by talking about the differences between games like chess and Go, which have unambiguous, finite outcomes, and most other domains, which lack them. That would actually enhance the argument that AI on its own won’t be able to fully do most things humans care about (as Vaughn Tan has excellently argued in his recent series on meaning-making as an exclusively human domain).

Industrial Revolution parallels

The discussion of job obsolescence during the Industrial Revolution as essentially overblown, and of the march of technology as unstoppable, is at odds with more recent scholarship on the topic.

There is a strong argument that the Luddites, for example, had a valid point in resisting the changes induced by the introduction of automatic looms. While loom operators did eventually benefit from the changes, there was a multi-decade interlude in which they did not. It’s hard to get folks excited about losing their productive careers with the assurance that things will get better after they are long gone.

Also, there is ample evidence that rulers regularly and successfully inhibited the adoption of labor-saving technologies. Fears of what such technologies would do to workers limited the awarding of patents, loans, and other forms of official sanction. Great Britain’s reversal of this trend was a key factor in why it led the Industrial Revolution.

As much as the rise of AI appears inevitable to many, that is only because of key supports that both enable its rise and prevent those opposed to it from meaningfully disabling it. To be clear, I’m not advocating for disabling it; but acting as if that cannot happen is a mistake.

Caveats aside, I agree strongly with both guests that simulation is a critical enabler of effective use of AI. Humans and AI both need a place to learn and develop their expertise, both in the domain as a whole and in working with each other. High-fidelity simulated environments provide a lot of advantages in making this happen, so I expect VR/AR, modeling, and related efforts to be focal points for the next few years.

6 Likes

The broad theme of the interview is that integrating AI into a company is like integrating any new technology into a socio-technical system: you have to do iterative trial and error to figure out the right way to integrate; the change you are inflicting on the (as Lia puts it) “user-tool-activity system” must be coherent.

How are these exercises different from existing problem solving? Lia gives two examples of creative problem solving: the graduation completion rates situation, and the mechanics at the NYC Transit system dealing with first pass yields, mean time to failure, and outdated equipment. If she hadn’t mentioned AI, I would have thought that the graduation scenario was a textbook example of design thinking problem solving, and NYC Transit an example of a kaizen event. So how was AI used?

The two examples that she gave are not examples of AI, but examples of a training methodology that she later augmented with AI. A full list of published papers covering that training methodology (along with why it is novel — it is not just a kaizen project, or a design thinking problem, but a way to disequilibrate learners in order to get them to construct new models) is available on Commoncog here: Change Your Business: A Library of Strategic Rehearsals - Commoncog

A later published example, after they’d built out their 3D simulation platform — which is the thing they augmented with AI — is available here:

Again, the overarching thing we are talking about is “here is a user-tool-activity system that we are augmenting with technology, this is what a coherent intervention looks like.”

Or: “we have a method for accelerating expertise”, and then, “we are now accelerating expertise with AI”. The process of getting to the second sentence is not a mere sprinkling of new technology; it is “let’s iterate on the pedagogical model where it makes sense to use AI” (and not all of the AI is generative AI: the earlier randomness generator used more classical, non-neural-network methods).

That is not the issue Neil was using the crop yields example to illustrate. The main thrust of that example was: “here is a good use of AI; now compare it with a company that pays for an OpenAI subscription, sends its employees for prompt training, and then expects to reap the benefits of sprinkling magic AI dust on its processes.” Later in the interview, he talks about the recommended approach to integrating AI: treat it as an innovation project, with a developer team willing to iterate while building such augmentation systems. Don’t expect good results immediately!

As an aside, this is why I don’t do podcast interviews that often — interviewing is a skill, and I’m just not that good at it!

3 Likes

Do you expect to get better at it in some other way?

An excellent question. I keep thinking that having conversations with new folks is a good enough way to practice, but it really isn’t. I don’t yet have a plan to accelerate this. (Though, I must say, it’s not really a priority for me at the moment!)

1 Like

Maybe @cedric is trying to persuade us readers to have a podcast where we discuss the articles without him…

4 Likes

The articles on meaning-making by Vaughn really are excellent. I’m surprised I’ve not seen anyone else mention them on here.

2 Likes

Oh! Great call, I’ll start a thread on them!

I didn’t think to mention it, but I’ve been thinking about them a lot; with the caveat that I hate the term ‘meaning-making’ (and Vaughn tells me he hates it too, but can think of nothing better).

2 Likes