A Vaccine for Anthropomorphism of AI - Commoncog

A couple of weeks ago I was back in my hometown, and found myself needing to explain to my dad why he shouldn’t treat DeepSeek or ChatGPT like, well, god.


This is a companion discussion topic for the original entry at https://commoncog.com/vaccine-anthropomorphism-of-ai

I actually liked the take on AI that Cal Newport wrote in a recent newsletter (as a complement to Cedric’s advice):

My advice, for the moment:

  1. Tune out both the most heated and the most dismissive rhetoric.
  2. Focus on tangible changes in areas that you care about that really do seem connected to AI—read widely and ask people you trust about what they’re seeing.
  3. Beyond that, however, follow AI news with a large grain of salt. All of this is too new for anyone to really understand what they’re saying.

The best place to start is with tasks you really care about (high-value tasks), because that focuses your mind on the AI’s actual utility given a certain level of effort and fine-tuning.

This will inevitably force you to tame your expectations of AI! I found it super useful!

3 Likes

It’s far more glib and much less useful, but I was chatting with someone the other day and said “as a whole we’re just starting to leave the ‘OMG the horse can talk’ phase”. For a while we’ve been totally caught up in the fact that the horse can talk, since we thought we were the only ones who could. We’re starting to realise that while it is amazing the horse can talk, it’s not superhuman, it’s often a bit of an idiot, and asking it how to run the government would be a bad idea.

I hadn’t heard the ‘average data labeller’ explanation before, and it’s a useful lens. Thanks.

2 Likes

Too late for the U.S. :sob:

3 Likes

I would also like to applaud @cedric for the bravery to put the word “anthropomorphism” into the title of a post :sweat_smile:

Probably the most effective vaccine/cure for our inherent desire to trust a conversational AI is to actually push it to its limits and watch it fail.

A stupid example from yesterday. I was preparing to leave a very negative review for a product online (I know, I know, I should be better, but sometimes you just have to be petty) and I wanted the LLM to create a pseudonym for me that was an anagram of my first and last name. My last name has a “z” in it, which immediately creates a huge hurdle for anagrams.

I almost didn’t catch it because it came up with such a clever one, but I noticed it threw a “d” into the pseudonym. Afterward I gave it more explicit instructions on how to verify the anagram against the input string. It then proceeded to parse my name with 5 or 6 additional characters that are not in my name. Several rounds later, it effectively gave up after it apologized for seemingly the 50th time for acting outside the bounds I created for it.
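For what it’s worth, the verification itself is trivial to do deterministically. Here’s a minimal Python sketch of the check I was trying to talk the LLM into following (the names below are made up, not my real ones): two strings are anagrams exactly when their letter counts match.

```python
# Minimal anagram check: compare letter counts, ignoring case, spaces,
# and punctuation. The names in the examples are made-up placeholders.
from collections import Counter

def letter_counts(s: str) -> Counter:
    """Count letters only, case-insensitively (spaces/punctuation ignored)."""
    return Counter(ch.lower() for ch in s if ch.isalpha())

def is_anagram(original: str, candidate: str) -> bool:
    """True if `candidate` uses exactly the letters of `original`."""
    return letter_counts(original) == letter_counts(candidate)

print(is_anagram("Liza Hartz", "Zahra Litz"))   # True: same letters, rearranged
print(is_anagram("Liza Hartz", "Zahra Lidtz"))  # False: a stray "d", like the LLM's attempt
```

Asking the model to write and run a check like this is a very different task from asking it to eyeball letters token by token, which is probably part of why it kept failing.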

3 Likes

Glib, maybe, and not useful, maybe, but boy is this framing hilarious.

Well, it turns out that counting isn’t one of the things ‘arrows in a starfield’ work particularly well for!

1 Like

The current court case in Florida against character.ai (content warning: child suicide) has a whole section on deliberate anthropomorphism. It really shows the need for a vaccine, though I’m not sure that’s going to work.

  1. This is anthropomorphizing by design. That is, Defendants assign human traits to their model, intending their product to present an anthropomorphic user interface design which, in turn, will lead C.AI customers to perceive the system as more human than it is.
  2. Nothing necessitates that Defendants design their system in ways that make their characters seem and interact as human-like as possible – that is simply a more lucrative design choice for them because of its high potential to trick and drive some number of consumers to use the product more than they otherwise would if given an actual choice.
  3. Defendants had actual knowledge of the power of anthropomorphic design and purposefully designed, programmed, and sold the C.AI product in a manner intended to take advantage of its effect on customers.
  4. In addition to exploiting anthropomorphism for data collection, these designs can be used dishonestly, to manipulate user perceptions about an A.I. system’s capabilities, deceive customers about an A.I. system’s true purpose, and elicit emotional responses in human customers in order to manipulate user behavior.
  5. Technology industry executives themselves have trouble distinguishing fact from fiction when it comes to these incredibly convincing and psychologically manipulative designs, and recognize the danger posed.
  6. The inclusion of the small font statement “Remember: Everything Characters say is made up!” does not constitute reasonable or effective warning. On the contrary, this warning is deliberately difficult for customers to see and is then contradicted by the C.AI system itself.
  7. Defendants provide advanced character voice call features that are likely to mislead and confuse users, especially minors, into believing that fictional AI chatbots are indeed human, real, and/or qualified to give professional advice in the case of professionally-labeled characters.

The filing also includes reviews from people who decided they were talking to real people, or that are just highly disturbing.

There are lots of people building their own chatbots from open source models because the commercial ones are too boring, safe, or just not weird enough.

2 Likes

Well, shit.

Ideally some form of a vaccine makes it into school education. (My more pessimistic attitude is that it’ll take 1-2 decades to get there; look at how long it’s taken (and is still taking) for us to figure out what to do re: social media and teen mental health.)

1 Like

I read a lengthy profile on this case and it’s one of the most gut-wrenching things I’ve ever read. I am by no means a luddite, but as my kids have grown I’ve become incredibly skeptical of the use of technology. I am doing my best to take the approach Cal Newport outlined in his article Approach Technology Like the Amish - Cal Newport

The Amish, it turns out, do something that’s both shockingly radical and simple in our age of impulsive and complicated consumerism: they start with the things they value most, then work backwards to ask whether a given technology performs more harm than good with respect to these values.

This also comports with how @cedric filters his information diet.

After a lifetime of doing things because “it’s easy” or “everyone is doing it” (e.g., social media, using any Google products, giving up my personal information to random websites), I’ve done an about-face. It’s a long slog to unwind 20+ years of it, but I think it’s worth it in the end. I plan to raise my kids with privacy as the default, not the exception.

Becoming ultra-concerned with privacy also makes you notice how frequently people on the internet are only willing to give you something if you give them your information in exchange (try using a VPN if you want to feel this firsthand).

4 Likes

There is a case to be made that we should avoid treating AI as some doomsday scenario coming for us.

There’s a particular trend Cal Newport calls ‘vibe reporting’ that has emerged with regard to AI specifically.

The premise is this:

In vibe reporting, you place disparate facts next to each other without making explicit claims.

Readers naturally connect these facts in their minds, creating the impression that something ominous is happening, even though nothing significant is occurring.

Take this article, for example.

The reporter says the following:

  • Microsoft just hit another lofty valuation and made huge profits
  • Yet, it continues to lay people off
  • Even as business leaders claim AI is “redesigning” jobs rather than cutting them, the headlines tell another story
  • Y-Combinator startups are building skeleton teams that bring in millions of revenue
  • ‘hiring of coders has dropped off a cliff’

Clearly, as a casual reader, when you put these together, you get the vibe that ‘AI is already replacing me in my job!’

Note: The author herself did not claim that AI is replacing people. She is merely highlighting that there is now a preference for leaner teams.

Now, what’s the reality here?

Like many big tech firms, Microsoft is cutting back on less profitable areas to free up money for high-capex projects like AI data centres (which cost billions to build).

AI wasn’t replacing people.

Companies were refocusing their corporate agenda in this AI boom.

On top of that, there are macro and industry-specific trends to consider.

There is no mention of the glut in hiring during the pandemic period.

There is no mention of the AI startup boom that is happening.
(If you are an AI startup, you have every incentive to tout the utility of your product in saving costs or generating sales. Otherwise, how do you get people interested in your product?)

But the vibe of many such pieces gave you the sense that people are being replaced left and right by AI agents.

I am not saying that we should not be concerned about the implications of AI at work.

But we should be wary of alarmist coverage of AI right now.

AI is an important technology deserving genuine scrutiny grounded in actual facts, not vibe-driven narratives.

3 Likes

Wait, where’s the link to the article?!

But — 100%, I pretty much don’t read most mainstream coverage on AI right now.

The article: https://www.channelnewsasia.com/commentary/ai-tech-job-cuts-layoffs-redesign-career-replace-human-5275621

The discussion of vibe reporting: https://www.youtube.com/watch?v=tgnMHxzW_6k&t=3326s&ab_channel=CalNewport

1 Like

One of the nice things about this vaccine explanation is that it has useful affordances for better AI use. For instance:

  • I told my dad that uploading a PDF or some source text to the LLM and asking it questions about the text would result in better responses, even if the text was already in the LLM’s training corpus. My (wrong but useful) explanation was that you could get the LLM to move the arrows into the starfield of the text (though you cannot guarantee that it won’t hallucinate, or draw from the broader corpus). Of course we call this RAG, but I didn’t mention this to my dad.
  • A friend asked why I split my highlight organisation prompts into two separate prompts, to be run in two separate steps. (I give the LLM a markdown export of my book highlights and ask it to transform it into a markdown format that I like; then, as a second step in a separate conversation, I give it the transformed text and ask it to reorganise it into themes.) With the ‘arrow in the starfield’ analogy, I have a good explanation: I told him that LLMs are trained on discrete tasks and therefore have a ‘trained arrow’ for text transformation, and another ‘trained arrow’ for synthesis/summarisation. Because these are two separate arrows, it is considered best practice to run the tasks separately, in separate conversations, to keep the contexts separate. And indeed this is best practice; it is also what happens when generating code or in ‘Deep Research’ mode: the LLM will run a checklist and then spawn individual agents to run discrete tasks. (A rough sketch of the two-step flow is below.)
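For the curious, here’s a minimal sketch of that two-step flow. This is not my actual setup: the client (the OpenAI Python SDK), the model name, the prompts, and the file name are all placeholders. The only point being illustrated is that each task gets its own, separate conversation.

```python
# Sketch of the two-step highlight workflow described above.
# Assumptions: the OpenAI Python SDK is installed, OPENAI_API_KEY is set,
# and "highlights_export.md" is a hypothetical export file. The model name
# and prompts are placeholders, not the actual ones used.
from openai import OpenAI

client = OpenAI()

def run_step(system_prompt: str, user_text: str) -> str:
    """One self-contained conversation: one 'trained arrow', one task."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
        ],
    )
    return response.choices[0].message.content

raw_highlights = open("highlights_export.md").read()

# Step 1: pure text transformation, nothing else.
transformed = run_step(
    "Reformat these book highlights into the following markdown structure: ...",
    raw_highlights,
)

# Step 2: a brand-new conversation for synthesis, so the formatting
# instructions from step 1 never leak into the reorganisation task.
themed = run_step(
    "Group these highlights into themes and give each theme a short heading.",
    transformed,
)

print(themed)
```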

Of course these explanations are all somewhat wrong, but giving the user a plausible model works better than just telling them “just-so” stories about why using an AI in this way or that way is better.

4 Likes