CoreML image model only recognizes the classes it was trained on; how do I handle images outside the data set?

I have a trained Core ML image model that is supposed to recognize the front and back of checks. The problem is that when the model is shown an image of a giraffe, it still only tries to decide whether the image is the front of a check or the back of a check (with the confidence levels totaling 100%). Ideally, the model would not answer "what is the likelihood this is the back of a check vs. the front of a check" but instead "is this the back of a check?".
Asked by asterisk12

1 Answer
If your model is only trained on images of checks, you can only give it images of checks. If you give it any other image, it will still assume it is a check, because checks are the only thing it knows about.
To build a model that can also detect "not a check", add a third category and train the model with images of all kinds of objects that are not checks; see the sketch below.
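For example, with Create ML on macOS you can retrain from labeled folders, where each subdirectory name becomes a class label. This is only a minimal sketch under assumed folder names and paths (check_front, check_back, other); the Create ML app in Xcode achieves the same thing without code.

```swift
// Minimal Create ML retraining sketch (runs on macOS, not on device).
// Assumed layout you would create yourself:
//   TrainingData/check_front/...  TrainingData/check_back/...  TrainingData/other/...
// The "other" folder holds a broad mix of non-check photos (animals, documents, scenes).
import CreateML
import Foundation

let trainingDir = URL(fileURLWithPath: "/path/to/TrainingData")  // hypothetical path

// Each subdirectory becomes a label, so the third folder gives the model an
// explicit "other" class to pick instead of being forced into front vs. back.
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir))

// Export the retrained model so it can be added to the Xcode project.
try classifier.write(to: URL(fileURLWithPath: "/path/to/CheckClassifier.mlmodel"))
```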
Alternatively, you can use some form of out-of-distribution (OOD) detection to verify that the input image is similar to the sort of data the model was trained on, but that is not something you can easily do with Core ML.
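Once the model has an "other" class, the app side can treat that label (or any low-confidence result) as "not a check". A minimal Vision sketch, assuming the generated model class is named CheckClassifier and the labels from the training sketch above:

```swift
import Vision
import CoreML
import CoreGraphics

// Sketch only. "CheckClassifier" is the hypothetical class Xcode generates from
// the retrained .mlmodel above; labels assumed: "check_front", "check_back", "other".
func classifyCheck(_ image: CGImage) throws -> String {
    let mlModel = try CheckClassifier(configuration: MLModelConfiguration()).model
    let vnModel = try VNCoreMLModel(for: mlModel)

    let request = VNCoreMLRequest(model: vnModel)
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    guard let top = (request.results as? [VNClassificationObservation])?.first else {
        return "other"
    }
    // The softmax confidences still sum to 1 across all labels, so a cutoff on the
    // top label guards against borderline inputs even with an "other" class present.
    return top.confidence > 0.8 ? top.identifier : "other"
}
```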