FAQs

What is AIProxy for?

AIProxy is an abuse prevention backend for macOS, iOS, and visionOS apps. You can use AIProxy to ship your AI-powered apps without worrying about your API keys being stolen or abused. We use five layers of security to keep your API key secure and your AI usage predictable:

  • Certificate pinning
  • DeviceCheck verification
  • Split key encryption
  • Per-user rate limits
  • Per-IP rate limits

How does AIProxy work?

To use AIProxy, you first add the service/API you want to protect, then add your API key through our dashboard. We don't actually store the key on our servers: we encrypt it, keep one half of the result on our backend, and give you the other half to send up with requests to AIProxy. When a request arrives, we marry the two pieces, decrypt the key, and forward the request on to the API.

This technique addresses a few security concerns:

  1. If someone sniffs the customer's AIProxy request from the network, there is no way for the attacker to use the information in the request headers to derive the customer's secret key. Anything that goes over the network is fair game for an attacker to view in plaintext. E.g. an attacker could install your iOS app on their own phone and then MITM themselves to inspect your app's network requests. So we assume someone is looking, and take precautions so that the data they see is as useless as possible.
  2. By splitting the encrypted key into two parts, and storing them separately (one on the backend, and one in your iOS app), we disincentivize attacks on AIProxy itself. The alternative, where our database has all the information it needs to construct secret keys, would make for a lucrative target for attackers. If someone were able to get in, they could get a whole bunch of secret keys.
  3. It gives customers the assurance that no one within AIProxy can look at their secret keys, because we don't actually have them in our database.
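The split-key idea above can be illustrated with classic XOR secret sharing: each half on its own is indistinguishable from random noise, and only the combination recovers the key. This is a conceptual sketch of split knowledge, not AIProxy's actual encryption scheme.

```python
import os

def split_secret(secret: bytes) -> tuple[bytes, bytes]:
    """Split `secret` into two shares; either share alone reveals nothing."""
    share_a = os.urandom(len(secret))                        # stored on the backend
    share_b = bytes(a ^ s for a, s in zip(share_a, secret))  # shipped in the app
    return share_a, share_b

def recombine(share_a: bytes, share_b: bytes) -> bytes:
    """XOR the two shares back together to recover the original secret."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

api_key = b"sk-example-key"  # hypothetical provider key
backend_half, client_half = split_secret(api_key)
assert backend_half != api_key and client_half != api_key
assert recombine(backend_half, client_half) == api_key
```

Because neither share equals the key, a breach of either store alone yields nothing usable.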

That's one piece of protection: it completely prevents anyone from sniffing your API key out of your Mac or iOS app. However, an attacker could still grab your AIProxy headers and abuse your AIProxy endpoint. To combat that, the client sends up one-time-use DeviceCheck tokens. We check each DeviceCheck token against Apple's servers to ensure it came from a legitimate device running your app, and against our DB to make sure the token hasn't been used before. If the token passes both of those checks, we fulfill the request.
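The one-time-token flow can be sketched as follows. The `apple_says_valid` helper is a hypothetical stand-in for the real signed request to Apple's DeviceCheck validation API, and the in-memory set stands in for the database of used tokens.

```python
seen_tokens: set[str] = set()

def apple_says_valid(token: str) -> bool:
    # Placeholder for the call to Apple's DeviceCheck servers; in
    # production this is a signed server-to-server validation request.
    return token.startswith("dc-")

def fulfill(token: str) -> bool:
    """Fulfill a request only if the token is valid AND has never been seen."""
    if not apple_says_valid(token):
        return False            # not from a legitimate device running the app
    if token in seen_tokens:
        return False            # replayed token: reject
    seen_tokens.add(token)      # mark used before forwarding the request
    return True

print(fulfill("dc-abc123"))  # True: valid and fresh
print(fulfill("dc-abc123"))  # False: replay of the same token
print(fulfill("bogus"))      # False: fails the device check
```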

What if someone steals my AIProxy key?

The key we provide you is useless on its own, so it can be safely hardcoded in your client. When you add an OpenAI key in our dashboard, we don't store it on our backend. We encrypt your key, store only half of the result, and give you the other half to use in your client. We combine the two pieces and decrypt only when a request is made.

What if someone uses my AIProxy endpoint?

We have multiple mechanisms in place to restrict endpoint abuse:

  1. Your AIProxy project comes with proxy rules that you configure. In the proxy rules section, you can enable only the endpoints that your app depends on. For example, if your app depends on /v1/chat/completions, then you would permit the proxying of requests to that endpoint and block all others. This makes your endpoint less desirable to attackers.
  2. We use Apple's DeviceCheck service to ensure that requests to AIProxy originated from your app running on legitimate Apple hardware.
  3. We guarantee that DeviceCheck tokens are only used once, which prevents an attacker from replaying a token that they sniffed from the network.
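The first mechanism amounts to an allowlist check on the request path before anything is forwarded. `ALLOWED_PATHS` below is a hypothetical configuration for illustration, not AIProxy's real rule format.

```python
# Hypothetical proxy rules: only the endpoints your app depends on are enabled.
ALLOWED_PATHS = {"/v1/chat/completions"}

def should_proxy(path: str) -> bool:
    """Forward the request only if its path is explicitly allowed."""
    return path in ALLOWED_PATHS

print(should_proxy("/v1/chat/completions"))    # True: enabled in proxy rules
print(should_proxy("/v1/images/generations"))  # False: blocked by default
```

Blocking by default means an attacker who obtains your headers can only reach the endpoints your app actually uses.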

Why not just use Firebase?

With Firebase, you don't get endpoint protection out of the box, so the endpoints your app calls are still open to abuse. An attacker can script against your endpoint and run your bill way up.

How does AIProxy handle scaling?

The proxy is deployed on AWS Lambda, meaning we can effortlessly scale horizontally behind a load balancer.

How do I integrate with my iOS or Mac app?

Watch the quickstart video or read our integration guide.

What data does AIProxy collect?

We store the following information about API calls that route through the Service:

  • Originating IP address of the network request
  • HTTP status code of the network response
  • HTTP response body if and only if the status code is greater than or equal to 400
  • Metadata about the network request (e.g. number of input tokens, length of audio uploaded, size of audio file)

The information above is used for display in the developer’s Dashboard account.

Request header information, such as the Authorization header's bearer token (which many providers use to carry a secret key), is not collected or stored in the Service or Dashboard, including log files.

Neither the Service nor Dashboard stores API secret keys that the developer submits, nor are they present in the logs.

You can learn more by reading our privacy policy.

How do I know if DeviceCheck is working?

When you make a request, take a look at the Live Console page in the dashboard. You'll see the request and whether or not DeviceCheck passed.

How do plans and pricing work?

When you outgrow your current tier, we will first reach out to you about converting to the next paid tier. We will not immediately rate limit your app. You can upgrade or downgrade your account by visiting your account page in the top-right nav.

Does AIProxy work with Android or web apps?

While it's possible to use AIProxy from Android and web apps, we currently don't have client libraries built for those platforms, so you'll need to build your own to use the service.