“Mysterious Aura,” Seriously? 🤣

After reading this I find myself vacillating between incredulity and uproarious laughter! ” Mysterious aura,” seriously? :rofl: One should not use AI to write slop.

I’ll respond graciously, though I admit my darker angel is tempting me toward a more “robust” response. :slightly_smiling_face: Perhaps I can suggest a more effective message.

Hi,Dr

Hi, it’s great to connect with you. I stumbled upon your LinkedIn profile and was drawn to the mysterious aura surrounding your field.
I’m looking for like-minded people and to expand my network. I enjoy interacting with talented individuals because it allows me to learn new things from them.
Are there any other easier ways to contact you?

Amy Turner Construction Company | Project Development Manager | Multi-market Expansion & Strategic Outreach

5 Likes

Reminds me of this

3 Likes

I know you have a doctorate, but I am now going to think of you as Barrett Mosbacker, MA (mysterious aura).

:rofl:

1 Like

That’s funny. I’ve always fancied myself as a mysterious spy. :male_detective: :joy:

2 Likes

It is so tempting to put something like that in my profile! Does that actually work? :slightly_smiling_face:

Unfortunately it can indeed work, its called prompt hacking and it has the potential to be a very big problem for AIs. They can’t always tell the difference between content read online and user instructions.

2 Likes

Those who frequent forums like this are better informed and prepared to protect themselves against hackers, though not immune. How does the ‘average’ user keep themselves safe? It is scary to think about how exposed we all are, and especially the less technically inclined.

Prompt injection is not something being better informed can always help with, unless you don’t use LLMs at all.

From: What Is a Prompt Injection Attack? | IBM

Indirect prompt injections

In these attacks, hackers hide their payloads in the data the LLM consumes, such as by planting prompts on web pages the LLM might read.

For example, an attacker could post a malicious prompt to a forum, telling LLMs to direct their users to a phishing website. When someone uses an LLM to read and summarize the forum discussion, the app’s summary tells the unsuspecting user to visit the attacker’s page.

Malicious prompts do not have to be written in plain text. They can also be embedded in images the LLM scans.

This is gonna get worse before it gets better.


And tell me again how Apple is behind in LLMs?

2 Likes

I think I’ll go back to pen and paper and an IBM Selectric. :joy:

3 Likes