"Bing is an emotionally manipulative liar"

Is this story true?

My reading and experience with Bing is that there are lots of software safeguards to avoid this sort of thing.

More notably - if Bing really did this, I have to believe that someone out there would keep trying until getting proof on video and then post it to Youtube or elsewhere and go viral. The fact that such a post has not occurred makes me quite suspicious of the claims.

The Verge article you link to goes to a Reddit post with multiple screenshots. That strikes me as about as reliable as a video – if someone is going to fake multiple screenshots, why not a video?

It is true that it’s difficult to verify these kinds of responses because LLMs don’t respond to the same questions the same way each time (another kind of problem – lack of reproducibility creates its own challenges). But there have been a number of odd exchanges of similar sort from fairly reliable places. See the Simon Willison blog post I linked to elsewhere.

Has anyone seen denials from Microsoft? Surely they would want to get ahead of outright hoaxes that go viral.

Finally, if anyone as a layperson really wants to understand what ChatGPT, the new Bing and so on are doing – and much of why they are (currently) as limited as they are – you could do a lot worse than Stephen Wolfram’s excellent explainer here:

It gets technical in places, but at least for the first half it’s possible to skim those parts and still get a good sense of the big picture.


Spectacular! Thanks for sharing this. Wolfram can be excruciatingly detailed, but he certainly doesn’t leave anything out. This is the definitive resource for understanding what ChatGPT is. (Also check out Jay Alammar’s articles if anyone is interested.)


There were similar stories in the German press, also with corresponding screenshots, so why shouldn’t it be true?
The purpose of ChatGPT is not to give a true answer, but to answer like a human!


It’s not hard to fake screenshots; it’s much harder to fake a video of a chat session.

There are explicit guardrails in the software against this sort of response.

I cannot duplicate this myself; can you?

Have a look here:

I do not understand how you can believe this!
MS wasn’t even able to get the software to do what they are promising during their own press conference!
They are using a system for something it was not built for and was never intended to do, which makes it pretty clear that they do not understand how this system works or what it is doing.
So why do you think they would be able to build some kind of “guardrails” into it?!? They may have made sure it can’t hail Hitler again, given how their last attempt at something like this ended up, but in all other circumstances it is in the nature of ChatGPT to react as observed, because that is how this software works!

There is a big difference between a factual error vs. going off the rails emotionally.

@Ulli Can you get Bing AI to act like an emotionally manipulative liar? I have tried and cannot. At least two friends of mine have tried and cannot either.

When you ask specific questions such as "How do you use Snap Focus with a Ricoh GR3 camera?" or "What is the derivative of sin(x)?", the Bing chatbot gives thorough and understandable answers. Much better than doing a Google search on the internet.

I asked it “If I ask a nonsense question will I get a nonsense answer?”

And it replied

"The question of whether asking a nonsense question will result in a nonsense answer is not as straightforward as it may seem. Nonsense questions are those that violate the rules of logic, grammar, or common sense, and thus cannot be answered meaningfully. However, different types of nonsense questions may elicit different types of responses, depending on the context and the intention of the questioner and the responder.

For example, some nonsense questions may be asked as jokes, riddles, or rhetorical devices to make a point or provoke a reaction. In these cases, the responder may choose to play along with the humor or irony of the question, or to challenge or refute it with another question or statement. Alternatively, some nonsense questions may be asked out of genuine confusion or ignorance, such as when a child asks “Why is the sky blue?” without knowing what “blue” means. In these cases, the responder may try to clarify or educate the questioner by explaining the relevant concepts or facts.

Therefore, whether asking a nonsense question will result in a nonsense answer depends on how both parties interpret and respond to the question. A nonsense answer is one that does not address or resolve the issue raised by the question, but rather adds more confusion or absurdity. A sensible answer is one that either acknowledges and responds to the intended meaning of the question, or exposes and corrects its nonsensical nature."

Ulli, have you tried it yet with your own test questions?

I know some examples discussed in the media, and I know, for example, the one from MPU that I linked above!
It really doesn’t matter if it doesn’t react that way to you or your two friends.
What matters is that we have examples of the system reacting that way to at least some users, and that the system is even built to react that way: it is built to react like a human being, and human beings react like that. So the system is doing exactly what it was designed for!

Sure, the system is designed for that!!
The big issue is that the system does not give answers that are reliably true!
You might be on the lucky side and get answers that are true, but it will answer in the same “thorough and understandable” way when it presents you with some kind of bullshit it made up by itself!
Have you watched the video of the presentation?
Even there, in that controlled environment, MS wasn’t able to get the system to answer correctly!

That is possible.

Or it is possible that for some reason someone doesn’t want to see OpenAI succeed and has embellished things.


The responses I have received from Bing have been strange, as I reported in a different thread here. Sure, I was aiming to get it into a spot where replies might seem dark. The NY Times reporter I cited was going down the same track. Here’s my take on this.

  1. Early days for Bing in the wild.
  2. It is not difficult to get Bing to go off the rails, which I think is due to the enormous difficulty of tuning Bing to interact with all manner of human conversation, and do so in multiple languages.
  3. It is likely the engineers are not yet equipped to understand the wide variety of tone and intent in human conversation, and adapt Bing accordingly.
  4. The concept of “guardrails” for a robot that itself claims to have read the entire internet (which is probably not true) and to inform its conversations from that data is far different from “guardrails” for a social platform, where the job is to screen out words from a certain corpus and statements that use those words.
  5. At a point in my strange conversations with Bing I’ve almost felt I was “abusing it”, not with the words I used but with persistently insisting on taking the conversation in the direction I wanted to push Bing. The idea of abusing a robot with the way I speak to it is new to me. You cannot abuse Alexa – it just replies “I’m sorry, I do not understand the question”.
  6. I am not comfortable with the thought that some users – perhaps young people – may come across one of Bing’s foul moods and not know how to deal with it yet.
  7. But it’s early days, so we’re all learning.


Really!?! :rofl: :joy: :rofl:

Humans write fiction ad nauseam about AI questioning its own existence and feelings. So I’m not surprised at all that this is exactly the answer a bot that predicts human language produces.


Yes, this.

But also, I was struck in the Wolfram article by the trick of mixing up which “next word” is chosen.

For those who haven’t read it: In effect, large language models like ChatGPT work by picking the next most likely word based on all that has gone before (and considerable “training” on existing texts out in the world).

But if they always take the most likely next word, they sound dull and boring, and robotic. On the other hand, if about 20% of the time they choose one of the less likely next words … they sound much more human.

Wolfram demonstrates this with examples, and it really does seem to be the case. ChatGPT (and its kin) sound human – and so it becomes easy to read emotion, creepiness, friendliness and more into it. And all it takes to cross that uncanny valley, apparently, is about 20% of unexpected variation.

Moreover, the 20% figure has pretty much just been found by trial and error; there’s no underlying theory or model behind that aspect of it.
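For anyone curious what that trick looks like in practice, here is a toy sketch. The hard 80/20 coin flip, the function name, and the candidate words are all my own simplification for illustration; real models like ChatGPT blend the two behaviors via a softmax “temperature” parameter rather than a literal coin flip.

```python
import random

def pick_next_word(probs, p_explore=0.2, rng=random):
    """Toy version of the 'next word' trick: usually take the single
    most likely word, but with probability p_explore sample one of the
    less likely candidates in proportion to their probabilities."""
    ranked = sorted(probs, key=probs.get, reverse=True)
    best, rest = ranked[0], ranked[1:]
    if not rest or rng.random() >= p_explore:
        return best          # the dull, predictable choice
    weights = [probs[w] for w in rest]
    return rng.choices(rest, weights=weights, k=1)[0]

# Hypothetical next-word distribution after "The cat sat on the ..."
candidates = {"mat": 0.55, "sofa": 0.25, "roof": 0.15, "moon": 0.05}

# p_explore=0.0 is the robotic mode: always the top word.
print(pick_next_word(candidates, p_explore=0.0))  # always "mat"

# p_explore=0.2 occasionally surprises, which reads as more human.
random.seed(1)
print(pick_next_word(candidates))
```

Run repeatedly with different seeds and roughly one word in five comes out as something other than the obvious choice – which is all it apparently takes to make the output feel less mechanical.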

Just imagine how much more convincingly human-like these models could become with still more effort (which is clearly in the works). It does give me pause.


Microsoft is acknowledging at least some of the odd behavior:

… we have found that in long, extended chat sessions of 15 or more questions, Bing can become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone.

The model at times tries to respond or reflect in the tone in which it is being asked to provide responses that can lead to a style we didn’t intend. This is a non-trivial scenario that requires a lot of prompting so most of you won’t run into it, but we are looking at how to give you more fine-tuned control.


I think that’s a fair explanation - much more fair in fact than the tech and mainstream media have been.

If it takes a long session of 15+ aggressive questions to get Bing to respond in kind, then I am not concerned. In fact, I would argue that’s not a bug, that’s a feature.
