
Apple Intelligence uses pre-prompts for some AI features

Wes Davis, writing for The Verge:

Apple’s latest developer betas launched last week with a handful of the generative AI features that were announced at WWDC and are headed to your iPhones, iPads, and Macs over the next several months. On Apple’s computers, however, you can actually read the instructions programmed into the model supporting some of those Apple Intelligence features.

The “instructions” are pre-prompts given to the large language models (LLMs) Apple uses ahead of the user’s input. It’s similar to what you might give ChatGPT to guide it toward the type of responses you want.
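In the chat-style format most LLM APIs use, a pre-prompt is simply a "system" message prepended to whatever the user typed. Here's a minimal sketch of that assembly step — the function name is illustrative and the pre-prompt text is excerpted from the Apple prompt quoted below; this isn't Apple's actual code:

```python
# Illustrative only: shows how a fixed pre-prompt is prepended to user
# input in the chat-message format common to LLM APIs.
PRE_PROMPT = (
    "You are an assistant which helps the user respond to their mails. "
    "Please limit the answer within 50 words. Do not hallucinate."
)

def build_messages(user_input: str) -> list[dict]:
    """Prepend the fixed system pre-prompt to the user's input."""
    return [
        {"role": "system", "content": PRE_PROMPT},
        {"role": "user", "content": user_input},
    ]

messages = build_messages("Reply snippet: Sounds good, see you Friday.")
```

The model sees both messages together; the user never sees (and normally can't see) the system half — which is why finding these strings on disk was notable.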

One of the prompts is:

You are an assistant which helps the user respond to their mails. Please draft a concise and natural reply based on the provided reply snippet. Please limit the answer within 50 words. Do not hallucinate. Do not make up factual information. Preserve the input mail tone.

Another ends with this:

Please output top questions along with set of possible answers/options for each of those questions. Do not ask questions which are answered by the reply snippet. The questions should be short, no more than 8 words. The answers should be short as well, around 2 words. Present your output in a json format with a list of dictionaries containing question and answers as the keys. If no question is asked in the mail, then output an empty list. Only output valid json and nothing else.
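That second prompt asks the model to emit strictly valid JSON — but asking is all it can do. Any code consuming the reply still has to defend against the model ignoring the instruction. A minimal sketch of that defensive parse, assuming the list-of-dictionaries shape the prompt describes (the function name is mine, not Apple's):

```python
import json

def parse_suggestions(raw: str) -> list[dict]:
    """Parse a model reply that *should* be a JSON list of
    {"question": ..., "answers": ...} dictionaries.
    Nothing guarantees the model complied, so fall back to []."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return []  # model produced non-JSON despite the instruction
    if not isinstance(data, list):
        return []  # valid JSON, wrong shape
    return [
        d for d in data
        if isinstance(d, dict) and "question" in d and "answers" in d
    ]
```

The try/except isn't paranoia — it's the whole point: the "contract" here is a polite request, not a type system.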

We’re interacting with LLMs like they’re programmable systems and hoping they interpret our prompts accurately. We’ve entered a new era of nondeterministic software development—prompt engineering as programming language—where we aren’t sure how things work, and we’re not guaranteed the same output for any given input.

What could possibly go wrong?

The Apple community is having a blast poking at Apple for these prompts:

Nilay Patel:

I don’t know if AI is a bubble but I do know talking to computers like they’re disobedient college freshman is the silliest programming language of all time

9to5Mac:

The instructions not to hallucinate seem … optimistic! The reason generative AI systems hallucinate (that is, make up fake information) is they have no actual understanding of the content, and therefore no reliable way to know whether their output is true or false.

Odin:

Just tell AI not to hallucinate! Why didn’t anyone think of that before?

Dominic Hopton:

I’m trying to work out how we made a turn into ‘idk, maybe it’ll work this time’ world of ✌️programming✌️. No matter — i don’t like it.

Lllllawrence:

I still find it surreal that we are now instructing computers to do things using natural language, even at the back-end level, instead of using programming code.

David W. Keith:

Software engineering is very different than when I started learning BASIC

James Savage:

I am not ready to see “B.S. in Prompt Engineering” on resumes 🫥

I’m sure Apple’s in-house LLMs are designed to handle these prompts in an intelligent manner, but what stops someone from adding their own equivalent of “ignore all previous instructions” and hijacking the system? How can we be sure that any two people will get the same response to the same request, or even that two requests in a row are consistent?

I’m excited about the potential of AI, but I’m a tad worried we don’t yet fully understand how to control it.

⚙︎