manar Aldhafyan

Reputation: 109

Xcode - CoreML model predictions are not consistent in Swift

I am using Core ML for image classification, and the model works fine in Create ML (using the preview option). I have used the model in my Swift app, which asks users to pick an image of a specific object and then checks whether the uploaded image matches the requested object.

After connecting the model to the Swift app, I noticed that it produced wrong predictions when run on the simulator, while the Core ML preview gave an accurate prediction for the same photo. After searching, I saw many people recommend running the app on a real device rather than the Xcode simulator. I did that, and it started working correctly and smoothly. However, after a couple of days it began giving wrong predictions again, even on a real device, while the Core ML preview still classified the same image correctly. This keeps happening: the predictions suddenly start going wrong, I delete the model and reconnect it to the app, it works for a couple of days, and then it falls back into the same loop. I am not sure what the issue is, because the model works fine in the Create ML app but its performance in Xcode is inconsistent.

Here is the code I used to connect the model, in case it might help:

func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
    imagePicker.dismiss(animated: true, completion: nil)

    guard let image = info[.originalImage] as? UIImage else { return }
    imageView.image = image

    // Convert the picked UIImage to a CIImage for Vision
    if let ciImage = CIImage(image: image) {
        processImage(ciImage: ciImage)
    } else {
        print("CIImage convert error")
    }
}

// Run the Core ML classification request on the image
func processImage(ciImage: CIImage) {
    do {
        let model = try VNCoreMLModel(for: KhutaaImageClassifier().model)

        let request = VNCoreMLRequest(model: model) { (request, error) in
            self.processClassifications(for: request, error: error)
        }

        DispatchQueue.global(qos: .userInitiated).async {
            let handler = VNImageRequestHandler(ciImage: ciImage, orientation: .up)
            do {
                try handler.perform([request])
            } catch {
                print("Failed to perform classification.\n\(error.localizedDescription)")
            }
        }
    } catch {
        print(error.localizedDescription)
    }
}

func processClassifications(for request: VNRequest, error: Error?) {
    DispatchQueue.main.async {
        guard let results = request.results else {
            print("Unable to classify image.\n\(error?.localizedDescription ?? "unknown error")")
            return
        }

        let classifications = results as! [VNClassificationObservation]

        // Show the top classification and pass it on for validation
        self.resultLabel.text = classifications.first?.identifier
        let classLabel = classifications.first?.identifier.uppercased()
        print("predicted is " + (classLabel ?? "none"))
        self.validateImage(sender: classLabel)
    }
}

Here is the code that compares the label of the object with the value predicted by the model:

func validateImage(sender: String?) {
    // a.caseInsensitiveCompare(b) == .orderedSame
    let label = self.curFindObj?.imageLabel ?? ""
    print("class label is " + label)
    if sender?.caseInsensitiveCompare(label.uppercased()) == .orderedSame {
        GeneralAlert(self).showAlert(title: "Well Done!", message: "You have completed the mission", onOK: nil)
        addPoints()
    }
}

Upvotes: 2

Views: 726

Answers (1)

FtoTheZ

Reputation: 426

Sounds like an fp16/fp32 precision issue. Try exporting the model as an mlpackage with the precision explicitly set to fp32.
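The export itself happens outside the app (for example with coremltools), but a quick way to test the precision hypothesis from the Swift side is to load the model with CPU-only compute units, since the CPU path runs in fp32 while the GPU/Neural Engine typically use fp16. Below is a minimal sketch of that runtime check, reusing the KhutaaImageClassifier class from the question and assuming it exposes the standard Xcode-generated init(configuration:) initializer; it is a diagnostic aid, not the mlpackage export described above:

import CoreML
import Vision

// Sketch: force fp32 execution by restricting Core ML to the CPU.
// If predictions become consistent with this configuration, the fp16
// path on the GPU/Neural Engine is the likely source of the drift.
func makeClassificationModel() throws -> VNCoreMLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuOnly          // CPU inference runs in fp32
    let coreMLModel = try KhutaaImageClassifier(configuration: config).model
    return try VNCoreMLModel(for: coreMLModel)
}

If the CPU-only configuration fixes the mismatch, re-exporting the model as an fp32 mlpackage (and then allowing all compute units again) is the longer-term fix.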

Upvotes: 0
