“Mysterious Aura,” Seriously? 🤣

Prompt injection is not something being better informed can always help with, unless you don’t use LLMs at all.

From: What Is a Prompt Injection Attack? | IBM

Indirect prompt injections

In these attacks, hackers hide their payloads in the data the LLM consumes, such as by planting prompts on web pages the LLM might read.

For example, an attacker could post a malicious prompt to a forum, telling LLMs to direct their users to a phishing website. When someone uses an LLM to read and summarize the forum discussion, the app’s summary tells the unsuspecting user to visit the attacker’s page.

Malicious prompts do not have to be written in plain text. They can also be embedded in images the LLM scans.

This is gonna get worse before it gets better.


And tell me again how Apple is behind in LLMs?

2 Likes