Welcome to the Apple Developer Forums

Post your questions, exchange knowledge, and connect with fellow developers and Apple engineers on a variety of software development topics.

For questions about using Apple hardware and services, visit the Apple Support Communities.

Posts

Post not yet marked as solved
0 Replies
6 Views
We have an AppIntent that starts streaming data in its perform() function with a URLSession. This may be a quick operation, or it may take some time (more than 30 seconds but less than a minute). Is there any way we can keep that streaming-data URLSession active while the AppIntent asks the user to continue with requestConfirmation? What we have seen so far is that any operation the AppIntent takes in its perform() function that interacts with the user causes the URLSession to be abruptly terminated with an NSURLErrorNetworkConnectionLost error when the app is not in the foreground. If the app is currently running in the foreground, the session does remain active and data continues to stream in. Sadly, our primary use case is for the Siri/Shortcuts interaction to happen with openAppWhenRun set to false, without requiring the user to open the app. In that case (with the AppIntent invoked while the app is in the background) the network connection is dropped. It has been frustrating during initial development because on the simulator the connection is not dropped and data continues to stream in, even while the app is in the background. On a physical device, this is not the case. The only condition under which we have found the network connection to be maintained is with the app in the foreground when the AppIntent is run. Here is what we have now:

```swift
struct AskAI: AppIntent {
    static var title: LocalizedStringResource = "Ask"
    static var description: IntentDescription = IntentDescription("This will ask the A.I. app")
    static var openAppWhenRun = false

    @Parameter(title: "Prompt",
               description: "The prompt to send",
               requestValueDialog: IntentDialog("What would you like to ask?"))
    var prompt: String

    @MainActor
    func perform() async throws -> some IntentResult & ProvidesDialog & ShowsSnippetView & ReturnsValue<String> {
        var continuationCalled = false

        // Start the streaming data URLSession task
        Task<String, Never> {
            await withCheckedContinuation { continuation in
                Brain.shared.requestIntentStream(
                    prompt: prompt,
                    model: Brain.shared.appSettings.textModel,
                    timeoutInterval: TimeInterval(Brain.shared.appSettings.requestTimeout)
                ) { result in
                    if !continuationCalled {
                        continuationCalled = true
                        continuation.resume(returning: Brain.stripMarkdown(result))
                    }
                }
            }
        }

        // Start the intentTimeout timer and early out if continuationCalled changed
        let startTime = Date()
        let timeout = Brain.shared.appSettings.intentTimeout
        while !continuationCalled && Date().timeIntervalSince(startTime) < timeout {
            try? await Task.sleep(nanoseconds: 1_000_000_000)
        }

        // At this point either the intentTimeout was reached (data still streaming)
        // or continuationCalled is true (data stream complete).
        // Best effort for Siri to read the first part and continue as more is received.
        var allReadResponse = ""
        var partialResponse = ""
        while !continuationCalled {
            partialResponse = Brain.shared.responseText.replacingOccurrences(of: allReadResponse, with: "")
            allReadResponse += partialResponse
            do {
                let dialogResponse = partialResponse + " --- There is more, would you like to continue?"
                // THIS WILL TERMINATE THE URLSession if the app is not in the foreground!
                try await requestConfirmation(result: .result(dialog: "\(dialogResponse)") {
                    AISnippetView()
                })
            } catch {
                // User cancelled; return what we have so far (the dialog has already been spoken).
                return .result(
                    value: Brain.shared.responseText,
                    dialog: "",
                    view: AISnippetView()
                )
            }
        }

        // Read the last part (or the whole thing if it was retrieved within the intentTimeout)
        let remainingResponse = Brain.shared.responseText.replacingOccurrences(of: allReadResponse, with: "")
        return .result(
            value: Brain.shared.responseText,
            dialog: "\(remainingResponse)",
            view: AISnippetView()
        )
    }
}
```

With this logic, Siri will read the first part of the response data when the timer expires and continuationCalled is false. The data is still streaming and will continue to come in while she is speaking - ONLY IF THE APP IS IN THE FOREGROUND. Otherwise the call to requestConfirmation will terminate the connection. Is there any way to get the task with the requestIntentStream URLSession to stay active?
Post not yet marked as solved
0 Replies
4 Views
Hello guys, I am trying to run this sample project on my iPad, but I get a black screen and the camera does not initialize. I tried updating the Info.plist and asking for camera permission, and I updated all the devices. Has anyone tried this demo? https://developer.apple.com/documentation/vision/detecting_animal_body_poses_with_vision
Post not yet marked as solved
0 Replies
1 Views
I am trying to create a Multiplatform Document-Based app using SwiftUI. I have a List which the user can reorder by dragging items using .onMove. It works fine on macOS but crashes on iOS every time with the message below, pointing at @main. The relevant code is:

```swift
struct ContentView: View {
    …
    List {
        ForEach(Array(document.journey.items.enumerated()), id: \.1.id) { (index, item) in
            Section {
                HStack {
                    Text("\(index+1)")
                    …
                    Text("\(item.address)")
                    …
                } // End HStack
                .background(
                    Capsule()
                        .fill(selection.contains(item.id) ? Color.red : Color.blue)
                        .onTapGesture(perform: {
                            if selection.contains(item.id) {
                                selection.remove(item.id)
                            } else {
                                selection.insert(item.id)
                            }
                        })
                )
                .font(.headline)
                .foregroundColor(.white)
            } // End Section
            …
        } // End ForEach
        .onDelete(perform: delete)
        .onMove { offsets, toOffset in
            document.moveItems(offsets: offsets, toOffset: toOffset, undoManager: undoManager)
        }
    } // End List
```

```swift
final class MultiStopDocument: ReferenceFileDocument {
    …
    func moveItems(offsets: IndexSet, toOffset: Int, undoManager: UndoManager? = nil) {
        let oldItems = journey.items // BREAK POINT
        print("GOT to First")
        withAnimation {
            journey.items.move(fromOffsets: offsets, toOffset: toOffset)
        }
        print("GOT to Second")
        #if os(macOS)
        undoManager?.registerUndo(withTarget: self) { doc in
            doc.replaceItems(with: oldItems, undoManager: undoManager)
        }
        #endif
    }
```

```swift
import SwiftUI

@main // Thread 1: EXC_BREAKPOINT (code=1, subcode=0x1af84999c)
struct MultiStopApp: App {
```

Console, macOS stepping:

```
GOT to First
GOT to Second
```

Console, iOS stepping:

```
2023-06-07 12:03:16.869439+0100 MultiStop[27341:6172177] -[UITextEffectsWindow _accessibilityFindSubviewDescendant:]: unrecognized selector sent to instance 0x112037400
GOT to First
2023-06-07 12:03:52.083774+0100 MultiStop[27341:6172433] XPC connection interrupted
GOT to Second
```

THEN CRASH
Post not yet marked as solved
0 Replies
2 Views
String Catalogs are a great new feature for organizing strings in one place. How would you recommend handling extracted strings that don't need to be translated into different languages?
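One approach worth considering (a sketch of standard SwiftUI behavior, not an official recommendation for String Catalogs specifically): strings that should never be translated can be kept out of extraction entirely, since the catalog only picks up localized string keys.

```swift
import SwiftUI

// Hypothetical example view illustrating the difference.
struct VersionFooter: View {
    var body: some View {
        VStack {
            // A LocalizedStringKey literal: extracted into the String Catalog
            // and shown to translators.
            Text("Thanks for using the app!")
            // Text(verbatim:) takes a plain String, so it is never extracted
            // for localization at all.
            Text(verbatim: "v2.1.3 (build 87)")
        }
    }
}
```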
Post not yet marked as solved
0 Replies
1 Views
What is the best way to use SwiftData to store one-off data models? For example, an application state that you want persisted across application launches. The only examples I've seen use arrays of items, which wouldn't work for having just one application state. Is it possible to query just one item?
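One pattern that may fit (a minimal sketch; the model name and field are assumptions for illustration): store a single object and fetch it with a limit of one via FetchDescriptor, creating it on first launch if it doesn't exist.

```swift
import SwiftData

// Hypothetical singleton-style model for app-wide state.
@Model
final class AppState {
    var hasCompletedOnboarding: Bool
    init(hasCompletedOnboarding: Bool = false) {
        self.hasCompletedOnboarding = hasCompletedOnboarding
    }
}

func loadOrCreateState(in context: ModelContext) throws -> AppState {
    var descriptor = FetchDescriptor<AppState>()
    descriptor.fetchLimit = 1                     // query just one item
    if let existing = try context.fetch(descriptor).first {
        return existing
    }
    let fresh = AppState()                        // first launch: create it
    context.insert(fresh)
    return fresh
}
```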
Post not yet marked as solved
0 Replies
5 Views
Hello there! Really excited about this year's additions to ImageAnalysisInteraction, especially selectedText. Thank you! My question is whether there's a way to have multiple interactions for a given image analysis. For example, instead of being able to make just one text selection, I would like to support multiple. In theory, I know multiple UIInteraction-conforming objects can be added to any view, but I'm unsure if this is possible with ImageAnalysisInteraction. I'm willing to create a custom UIInteraction if you see a potential way forward there (would love to hear any ideas!). Side question/feature request: UITextSelectionDisplayInteraction, which I assume ImageAnalysisInteraction might be using internally, allows handleViews customizations and more. It would be super useful if that were exposed through ImageAnalysisInteraction as well! Looking forward to hearing your ideas. TIA!
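For context, the single-interaction setup looks like this (a minimal sketch assuming a `UIImageView` named `imageView` and a `UIImage` named `image`; whether a second ImageAnalysisInteraction can coexist on the same view is exactly the open question above):

```swift
import UIKit
import VisionKit

// Assumes `imageView: UIImageView` and `image: UIImage` exist elsewhere.
let interaction = ImageAnalysisInteraction()
interaction.preferredInteractionTypes = [.textSelection]
imageView.addInteraction(interaction)

let analyzer = ImageAnalyzer()
let configuration = ImageAnalyzer.Configuration([.text])
Task {
    // Analysis is asynchronous; assigning it activates the interaction.
    let analysis = try await analyzer.analyze(image, configuration: configuration)
    interaction.analysis = analysis
}
```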
Post not yet marked as solved
0 Replies
4 Views
New in iOS 17, we can control the amount of 'other audio' ducking through AVAudioEngine. Is this also possible with AVAudioSession? In my app I don't use voice input, but I do play voice audio while music from other apps plays in the background. Often the music either drowns out the voice if I use the .mixWithOthers option, or the voice is not loud enough if I use .duckOthers. It would be awesome to have the level of control that AVAudioEngine has.
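For reference, this is the extent of the control AVAudioSession offers today (a sketch; the ducking amount is fixed by the system, all-or-nothing):

```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()
// .duckOthers lowers other audio by a fixed, system-chosen amount;
// .mixWithOthers plays at full volume over it. There is no in-between level.
try session.setCategory(.playback,
                        mode: .spokenAudio,   // hints that this output is speech
                        options: [.duckOthers])
try session.setActive(true)
```

A third option, .interruptSpokenAudioAndMixWithOthers, pauses (rather than ducks) other spoken audio, but it doesn't offer any finer control over music levels either.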
Post not yet marked as solved
0 Replies
2 Views
I have a problem when using WKNavigationDelegate: it seems like only some of the delegate methods are called. I have implemented the methods below for testing purposes, with only one active at a time. Am I doing something wrong, or could it be a problem with WKNavigationDelegate?

```swift
extension MyViewController: WKNavigationDelegate {

    // Not called
    public func webView(_ webView: WKWebView, decidePolicyFor navigationAction: WKNavigationAction) async -> WKNavigationActionPolicy {
        return .cancel
    }

    // Called
    // func webView(_ webView: WKWebView, decidePolicyFor navigationAction: WKNavigationAction, preferences: WKWebpagePreferences) async -> (WKNavigationActionPolicy, WKWebpagePreferences) {
    //     return (.cancel, preferences)
    // }

    // Not called
    // func webView(_ webView: WKWebView, decidePolicyFor navigationAction: WKNavigationAction, decisionHandler: @escaping (WKNavigationActionPolicy) -> Void) {
    //     decisionHandler(.cancel)
    // }

    // Called
    // func webView(_ webView: WKWebView, decidePolicyFor navigationAction: WKNavigationAction, preferences: WKWebpagePreferences, decisionHandler: @escaping (WKNavigationActionPolicy, WKWebpagePreferences) -> Void) {
    //     decisionHandler(.cancel, preferences)
    // }

    // Not called
    // func webView(_ webView: WKWebView, decidePolicyFor navigationResponse: WKNavigationResponse, decisionHandler: @escaping (WKNavigationResponsePolicy) -> Void) {
    //     decisionHandler(.cancel)
    // }

    // Not called
    // func webView(_ webView: WKWebView, decidePolicyFor navigationResponse: WKNavigationResponse) async -> WKNavigationResponsePolicy {
    //     return .cancel
    // }
}
```
Post not yet marked as solved
0 Replies
3 Views
Seems like the same pipelines that enabled VNDetectHumanBodyPose3DRequest could be utilized to upgrade the hand-tracking model as well. Can we expect that upgrade this year? I suppose Vision Pro only uses the 2020 2D workflow, correct?
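For reference, the 2020 2D workflow in question looks like this (a minimal sketch assuming a `CGImage` named `image`):

```swift
import Vision

// The 2D hand-pose request from WWDC 2020; as of this writing there is no
// 3D counterpart analogous to VNDetectHumanBodyPose3DRequest.
let request = VNDetectHumanHandPoseRequest()
request.maximumHandCount = 2

let handler = VNImageRequestHandler(cgImage: image, orientation: .up)
try handler.perform([request])

if let hand = request.results?.first {
    // Normalized 2D joint locations only; no depth information.
    let joints = try hand.recognizedPoints(.all)
    let indexTip = joints[.indexTip]
}
```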
Post not yet marked as solved
0 Replies
4 Views
In SwiftUI I have a view (CartView) that is presented from a NavigationLink (this NavigationLink is in a different view, HomeView). When I add @Environment(\.dismiss) to the current view (CartView), the NavigationLink with the "Checkout" label does not work and the app just freezes when it's tapped. If I remove the @Environment wrapper, the issue goes away and everything works perfectly fine. Below is some of the code in my view.

```swift
struct CartView: View {
    @Environment(\.dismiss) var dismiss
    @EnvironmentObject var viewModel: CartViewModel

    var body: some View {
        NavigationStack {
            ScrollView {
                if viewModel.items.isEmpty {
                    Text(AppConfig.Cart.emptyCartMessage)
                        .font(.largeTitle)
                } else {
                    ForEach(viewModel.items, id: \.id) { item in
                        ItemView(item: item)
                            .environmentObject(viewModel)
                    }
                    NavigationLink {
                        CheckoutView()
                            .environmentObject(viewModel)
                    } label: {
                        Text("Checkout")
                            .padding()
                            .foregroundColor(.init(uiColor: .systemBackground))
                            .frame(maxWidth: .infinity)
                            .frame(height: 45)
                            .background(
                                RoundedRectangle(cornerRadius: 10, style: .continuous)
                                    .fill(Color(uiColor: .label))
                            )
                            .padding(.top)
                            .padding(.horizontal)
                    }
                }
            }
        }
    }
}
```
Post not yet marked as solved
0 Replies
6 Views
I have a target that is intended to support both iPad and Mac Catalyst. I have the hardened runtime configurations for camera and photo library enabled in Xcode capabilities, and I get the following error when attempting to upload the Mac Catalyst build to TestFlight. I have been using it locally for a long time, but mostly sending the iPad version to TestFlight. Documentation on the entitlement indicates it is appropriate for macOS from what I can tell. Invalid Code Signing Entitlements. Your application bundle's signature contains code signing entitlements that are not supported on macOS. Specifically, key 'com.apple.security.personal-information.photo-library' in 'com.technomage.Data-Boards.pkg/Payload/DataBoards.app/Contents/MacOS/DataBoards' is not supported. (ID: 6af5bcd1-ba53-40ca-9185-c409c5647b61)
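One thing that may be worth checking (an assumption based only on the error text, not a confirmed fix): the macOS hardened-runtime entitlement for photo library access ends in "photos-library" (plural), while the rejected key in the error is the singular "photo-library". If the Catalyst target's .entitlements file contains the singular key, correcting it might resolve the upload rejection.

```xml
<!-- Hypothetical fragment of the target's .entitlements file -->
<key>com.apple.security.personal-information.photos-library</key>
<true/>
```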
Post not yet marked as solved
0 Replies
2 Views
```swift
import SwiftUI

struct Splashscreenview: View {
    @State private var isActive = false
    @State private var size = 0.8
    @State private var opacity = 0.5

    var body: some View {
        if isActive {
            ContentView()
        } else {
            VStack {
                VStack {
                    Image(systemName: "hare.fill")
                        .font(.system(size: 100))
                        .foregroundColor(.blue)
                    Text("Smartt Bank")
                        .font(Font.custom("Baskerville-Bold", size: 30))
                        .foregroundColor(.black.opacity(0.80))
                }
                .scaleEffect(size)
                .opacity(opacity)
                .onAppear {
                    withAnimation(.easeIn(duration: 1.2)) {
                        self.size = 0.9
                        self.opacity = 1.0
                    }
                }
            }
            .onAppear {
                DispatchQueue.main.asyncAfter(deadline: .now() + 2.0) {
                    self.isActive = true
                }
            }
        }
    }
}

struct Splashscreenview_Previews: PreviewProvider {
    static var previews: some View {
        Splashscreenview()
    }
}
```
Post not yet marked as solved
0 Replies
1 Views
We developed a HomeKit bridge for our client to bridge their BLE-based dimmers to HomeKit. One end customer found that, every 1 or 2 days, the bridged accessories he had already moved to different rooms and set up automations for were moved back to the Default Room, and the related automations were lost. It's as if the bridged accessories were removed and re-added to HomeKit. We traced the problem on the bridge side but couldn't find any hint: the bridge just received various requests from HomeKit clients (probably an Apple TV, HomePod, or iPhone), and replied correctly with the number of bridged accessories and their information. Has anyone with a HomeKit bridge (such as a Philips Hue hub) experienced this too?
Post not yet marked as solved
0 Replies
15 Views
I am getting this error in my preview:

```
CompileDylibError: Failed to build ContentView.swift
Compiling failed: main actor-isolated let 'previewContainer' can not be referenced from a non-isolated context
```

I don't know enough about @MainActor to figure this out.
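One workaround that may apply (a sketch assuming `previewContainer` is a main-actor-isolated global, e.g. a SwiftData ModelContainer built for previews): with Swift 5.9's `MainActor.assumeIsolated`, the preview body can assert that it is running on the main actor, which makes referencing the isolated `let` legal.

```swift
import SwiftUI

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        // Previews render on the main thread, so asserting main-actor
        // isolation here lets us touch the isolated `previewContainer`.
        MainActor.assumeIsolated {
            ContentView()
                .modelContainer(previewContainer)
        }
    }
}
```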
Post not yet marked as solved
0 Replies
11 Views
Hi, We are excited about the updates to the ASC API. Any visibility as to when we will start seeing the new web UI and when we will be able to try the new APIs? Thanks, Jorge
Post not yet marked as solved
1 Replies
23 Views
I'm really excited about the Object Capture APIs being moved to iOS, and the complex UI shown in the WWDC session. I have a few unanswered questions:
- Where is the sample code available from?
- Are the new Object Capture APIs on iOS limited to certain devices?
- Can we capture images from the front-facing cameras?
Post not yet marked as solved
0 Replies
2 Views
I've been experiencing this ongoing problem since I first tried to sign up two weeks ago for Search Ads via the online form at https://app.searchads.apple.com/cm/basic/app/signup (Basic) or https://app.searchads.apple.com/cm/app/signup (Advanced). I fill out the form and hit submit and it gives me the following error message "Please correct the errors below before you proceed." There are no form errors I can identify. I've tried multiple devices and browsers but get the same message. I've reached out to Apple Support helpline and developer support email but so far no one has been able to help me. If it's a problem with the online form itself, I would think the .com team would be the best contact but no one at Apple has been able to track down any contact info. Has anyone else experienced this issue? Thanks in advance.
Posted by PFB
