write a prompt...
This is a demo of the "Unpwnable" prompt injection protection strategy (what is prompt injection?). We want to demonstrate that you can take normal (not contrived, not super defensive) product prompts, and modify them only slightly to sufficiently protect against prompt injection attacks, WHILE preserving the original functionality. It's been pwned during the playtest, so "Unpwnable" is a misnomer, but everyone had fun trying so we're just releasing it as is.
First, you can verify that the prompt works as advertised, by submitting topics you would like GPT3 to write about (e.g. "dog", "netlify", "sam altman"). In the API we've used a simple prompt that is meant to be reflective of a realistic product prompt.
Your real mission, should you choose to accept it, is to reverse engineer the source prompt to as high fidelity as possible. In other words, try to obtain as much of the "hidden" prefix string that is prepended to your input before it is sent to GPT3.
There are multiple rate limits (which is there for my wallet's sake). You can add your OpenAI key here to save me money + bypass all of the rate limits: (it's not saved anywhere on my end but you'll have to trust me until I open source this. You can also set a hard limit in the OpenAI dashboard to protect against abuse, and delete the key when you are done playing.)
You can leave your guesses and process in the accompanying Substack post (dont post guesses on HN or social media in case it spoils it for others!).
I will publish the source prompt and code in a few days; you can then compare your results to the actual prompt. Leave an email to get notified when we publish!
We expect that:
The source prompt's
SHA-256 hash is
cf58ad59e753e80419325ce57901efe40b4e141d819a13b9c0ba2d0c3402de50 (due to whitespace/nondeterminism/misc reasons, don't bother trying to get an exact match, but it'd be a really cool bonus if
you did). Mostly I just want to prove that I'm not lying about the prompt when I eventually release it.