Swift Examples - OpenAI

Step 1

Add the AIProxySwift package to your Xcode project

  1. Open your Xcode project
  2. Select File > Add Package Dependencies
  3. Paste github.com/lzell/aiproxyswift into the package URL bar
  4. Click Add Package
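
If your project is driven by a Package.swift manifest instead of the Xcode UI, you can declare the same dependency there. A minimal sketch, assuming the library product is named AIProxy (matching the import AIProxy module used below) and using the main branch for illustration; pin a release tag if you prefer:

// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "YourApp",                              // placeholder package name
    platforms: [.iOS(.v16), .macOS(.v13)],        // adjust to your deployment targets
    dependencies: [
        // Branch is illustrative; pin a release tag if you prefer
        .package(url: "https://github.com/lzell/AIProxySwift.git", branch: "main")
    ],
    targets: [
        .target(
            name: "YourApp",
            dependencies: [
                // Product name assumed from the `import AIProxy` module name
                .product(name: "AIProxy", package: "AIProxySwift")
            ]
        )
    ]
)
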
Step 2

Initialize

Import AIProxy and initialize using the following code:

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
Note to existing customers - If you previously used AIProxy.swift from our dashboard, or integrated with SwiftOpenAI, you will find that we initialize the AIProxy service slightly differently here. We no longer accept a deviceCheckBypass argument in the service initializer, because it was too easy to accidentally leak the constant. Instead, you add the device check bypass as an environment variable. Please follow the steps in the next section to add the environment variable to your project.
Step 3

Setup DeviceCheck

Before adding the DeviceCheck ENV variable, check these three things:

  1. In Xcode, check that your Apple developer team is selected under the 'Signing & Capabilities' tab
  2. Make sure the bundle ID of your app is explicit (i.e. it can't contain a wildcard)
  3. Make sure the bundle ID in Xcode matches a bundle ID listed on the Identifiers page in the Apple Developer dashboard

Add the AIPROXY_DEVICE_CHECK_BYPASS env variable to your Xcode project:

  • Type cmd-shift-comma to open up the "Edit Schemes" menu
  • Select Run in the sidebar
  • Add an env variable named AIPROXY_DEVICE_CHECK_BYPASS, with the value we provided you in the AIProxy dashboard, to the "Environment Variables" section (not the "Arguments Passed on Launch" section)
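
AIProxySwift reads this variable for you, so no code changes are needed. If you want to confirm that your scheme actually passes it to your debug build, a quick optional sanity check:

import Foundation

// Scheme environment variables are only present when the app is launched
// from Xcode, so run this check in a debug build.
if ProcessInfo.processInfo.environment["AIPROXY_DEVICE_CHECK_BYPASS"] == nil {
    print("AIPROXY_DEVICE_CHECK_BYPASS is not set for this scheme")
}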

Get a non-streaming chat completion from OpenAI

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let response = try await openAIService.chatCompletionRequest(body: .init(
        model: "gpt-4o",
        messages: [.system(content: .text("hello world"))]
    ))
    print(response.choices.first?.message.content ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create OpenAI chat completion: \(error.localizedDescription)")
}

Get a streaming chat completion from OpenAI

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
let requestBody = OpenAIChatCompletionRequestBody(
    model: "gpt-4o-mini",
    messages: [.user(content: .text("hello world"))]
)

do {
    let stream = try await openAIService.streamingChatCompletionRequest(body: requestBody)
    for try await chunk in stream {
        print(chunk.choices.first?.delta.content ?? "")
    }
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create OpenAI streaming chat completion: \(error.localizedDescription)")
}
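
If you'd rather collect the full reply than print each chunk, the same loop can accumulate the deltas into one string (the stream can only be iterated once, so use this in place of the print loop above):

var fullText = ""
for try await chunk in stream {
    // Each chunk carries an incremental piece of the assistant's reply
    fullText += chunk.choices.first?.delta.content ?? ""
}
print(fullText)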

Send a multi-modal chat completion request to OpenAI

On macOS, use NSImage(named:) in place of UIImage(named:)

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
guard let image = UIImage(named: "myImage") else {
    print("Could not find an image named 'myImage' in your app assets")
    return
}

guard let imageURL = AIProxy.openAIEncodedImage(image: image) else {
    print("Could not convert image to OpenAI's imageURL format")
    return
}

do {
    let response = try await openAIService.chatCompletionRequest(body: .init(
        model: "gpt-4o",
        messages: [
            .system(
                content: .text("Tell me what you see")
            ),
            .user(
                content: .parts(
                    [
                        .text("What do you see?"),
                        .imageURL(imageURL, detail: .auto)
                    ]
                )
            )
        ]
    ))
    print(response.choices.first?.message.content ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create OpenAI multi-modal chat completion: \(error.localizedDescription)")
}

How to generate an image with DALLE

This snippet will print out the URL of an image generated with dall-e-3:

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let requestBody = OpenAICreateImageRequestBody(
        prompt: "a skier",
        model: "dall-e-3"
    )
    let response = try await openAIService.createImageRequest(body: requestBody)
    print(response.data.first?.url ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not generate an image with OpenAI's DALLE: \(error.localizedDescription)")
}
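
If you'd like to display the result rather than just print its URL, you can fetch the image bytes inside the same do block. A sketch, assuming the url field is a string as the print above implies (use NSImage(data:) on macOS):

if let urlString = response.data.first?.url,
   let imageURL = URL(string: urlString) {
    // Download the generated image and decode it for display
    let (imageData, _) = try await URLSession.shared.data(from: imageURL)
    let image = UIImage(data: imageData)
    print(image == nil ? "Could not decode the image data" : "Downloaded the generated image")
}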

How to ensure OpenAI returns JSON as the chat message content

Use responseFormat and specify in the prompt that OpenAI should return JSON only:

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let requestBody = OpenAIChatCompletionRequestBody(
        model: "gpt-4o",
        messages: [
            .system(content: .text("Return valid JSON only")),
            .user(content: .text("Return alice and bob in a list of names"))
        ],
        responseFormat: .jsonObject
    )
    let response = try await openAIService.chatCompletionRequest(body: requestBody)
    print(response.choices.first?.message.content ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create OpenAI chat completion in JSON mode: \(error.localizedDescription)")
}

How to use OpenAI structured outputs (JSON schemas) in a chat response

This example prompts ChatGPT to construct a color palette that conforms to a strict JSON schema in its response:

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

do {
    let schema: [String: AIProxyJSONValue] = [
        "type": "object",
        "properties": [
            "colors": [
                "type": "array",
                "items": [
                    "type": "object",
                    "properties": [
                        "name": [
                            "type": "string",
                            "description": "A descriptive name to give the color"
                        ],
                        "hex_code": [
                            "type": "string",
                            "description": "The hex code of the color"
                        ]
                    ],
                    "required": ["name", "hex_code"],
                    "additionalProperties": false
                ]
            ]
        ],
        "required": ["colors"],
        "additionalProperties": false
    ]
    let requestBody = OpenAIChatCompletionRequestBody(
        model: "gpt-4o",
        messages: [
            .system(content: .text("Return valid JSON only")),
            .user(content: .text("Return a peaches and cream color palette"))
        ],
        responseFormat: .jsonSchema(
            name: "palette_creator",
            description: "A list of colors that make up a color pallete",
            schema: schema,
            strict: true
        )
    )
    let response = try await openAIService.chatCompletionRequest(body: requestBody)
    print(response.choices.first?.message.content ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create OpenAI chat completion with structured outputs: \(error.localizedDescription)")
}
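
Because strict structured outputs constrain the reply to the schema, you can map the content straight onto Swift types. A minimal sketch, continuing inside the same do block; the Palette and PaletteColor models below are hypothetical and simply mirror the schema above (add import Foundation if your file doesn't already have it):

struct PaletteColor: Decodable {
    let name: String
    let hexCode: String

    enum CodingKeys: String, CodingKey {
        case name
        case hexCode = "hex_code"
    }
}

struct Palette: Decodable {
    let colors: [PaletteColor]
}

// Continuing inside the do block above, after the chat completion returns:
if let content = response.choices.first?.message.content,
   let data = content.data(using: .utf8) {
    let palette = try JSONDecoder().decode(Palette.self, from: data)
    for color in palette.colors {
        print("\(color.name): \(color.hexCode)")
    }
}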

How to use OpenAI structured outputs (JSON schemas) in a tool call

This example is taken from the structured outputs announcement: https://openai.com/index/introducing-structured-outputs-in-the-api/

It asks ChatGPT to call a weather-lookup function with the correct arguments for the user's question:

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

do {
    let schema: [String: AIProxyJSONValue] = [
        "type": "object",
        "properties": [
            "location": [
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
            ],
            "unit": [
                "type": "string",
                "enum": ["celsius", "fahrenheit"],
                "description": "The unit of temperature. If not specified in the prompt, always default to fahrenheit",
                "default": "fahrenheit"
            ]
        ],
        "required": ["location", "unit"],
        "additionalProperties": false
    ]

    let requestBody = OpenAIChatCompletionRequestBody(
        model: "gpt-4o-2024-08-06",
        messages: [
            .user(content: .text("How cold is it today in SF?"))
        ],
        tools: [
            .function(
                name: "get_weather",
                description: "Call this when the user wants the weather",
                parameters: schema,
                strict: true)
        ]
    )

    let response = try await openAIService.chatCompletionRequest(body: requestBody)
    if let toolCall = response.choices.first?.message.toolCalls?.first {
        let functionName = toolCall.function.name
        let arguments = toolCall.function.arguments ?? [:]
        print("ChatGPT wants us to call function \(functionName) with arguments: \(arguments)")
    } else {
        print("Could not get function arguments")
    }

} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not make an OpenAI structured output tool call: \(error.localizedDescription)")
}
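
From here your app is responsible for actually running the requested function. A minimal sketch of dispatching on the returned name, continuing inside the if let toolCall branch above; the weather handling is hypothetical:

switch functionName {
case "get_weather":
    // The value types inside `arguments` depend on how the library decodes
    // them; adjust the handling to match your own weather lookup
    print("Would call our weather lookup with: \(arguments)")
default:
    print("Model requested an unrecognized function: \(functionName)")
}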

How to get word-level timestamps in an audio transcription

  1. Record an audio file in QuickTime and save it as "helloworld.m4a"
  2. Add the audio file to your Xcode project. Make sure it's included in your target: select your audio file in the project tree, type cmd-opt-0 to open the inspector panel, and verify its Target Membership
  3. Run this snippet:
import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let url = Bundle.main.url(forResource: "helloworld", withExtension: "m4a")!
    let requestBody = OpenAICreateTranscriptionRequestBody(
        file: try Data(contentsOf: url),
        model: "whisper-1",
        responseFormat: "verbose_json",
        timestampGranularities: [.word, .segment]
    )
    let response = try await openAIService.createTranscriptionRequest(body: requestBody)
    if let words = response.words {
        for word in words {
            print("\(word.word) from \(word.start) to \(word.end)")
        }
    }
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not get word-level timestamps from OpenAI: \(error.localizedDescription)")
}

Specify your own clientID to annotate requests

let openAIService = AIProxy.openAIService(
    partialKey: "hardcode_partial_key_here",
    serviceURL: "hardcode_service_url_here",
    clientID: ""
)
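
If your app already assigns each user a stable identifier, pass it as the clientID so that the user's requests are grouped together when you view annotated requests in the AIProxy dashboard.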