Unlocking the Potential of Apple’s Latest AI Innovation: Enhancing Siri & Personalizing Your Home Screen
Apple has yet to launch an AI model of its own since the generative AI craze began, but it is quietly working on several AI projects. Just last week, Apple researchers shared a paper unveiling a new language model the company is developing, and insider sources reported that Apple has two AI-powered robots in the works. Now, yet another research paper shows Apple is just getting started.
On Monday, Apple researchers published a research paper that presents Ferret-UI, a new multimodal large language model (MLLM) capable of understanding mobile user interface (UI) screens.
MLLMs differ from standard LLMs in that they go beyond text, understanding other modalities such as images and audio. In this case, Ferret-UI is trained to recognize the different elements of a user’s home screen, such as app icons and small text.
Identifying app screen elements has been challenging for MLLMs in the past because those elements tend to be small. To overcome that issue, according to the paper, the researchers added “any resolution” support on top of Ferret, which lets the model magnify the details on the screen.
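Here's a minimal sketch of what that “any resolution” splitting could look like, assuming the screen is divided into magnified halves by aspect ratio. The function name and exact split scheme are illustrative, not Apple's published code:

```python
from PIL import Image

def split_anyres(screenshot: Image.Image) -> list[Image.Image]:
    """Illustrative 'any resolution' split: small UI elements such as
    icons and fine text survive a fixed-resolution encoder better when
    the screen is also fed in as magnified sub-images."""
    w, h = screenshot.size
    if h >= w:  # portrait: top and bottom halves
        subs = [screenshot.crop((0, 0, w, h // 2)),
                screenshot.crop((0, h // 2, w, h))]
    else:       # landscape: left and right halves
        subs = [screenshot.crop((0, 0, w // 2, h)),
                screenshot.crop((w // 2, 0, w, h))]
    # The full screenshot plus both halves go to the image encoder,
    # so each half is seen at roughly twice the effective detail.
    return [screenshot] + subs

tiles = split_anyres(Image.new("RGB", (390, 844)))  # iPhone-sized dummy screen
```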
Building on that, Apple’s MLLM also has “referring, grounding, and reasoning capabilities,” which, according to the paper, allow Ferret-UI to fully comprehend UI screens and carry out instructions based on their contents, as seen in the image below.
Image: K. You et al.
To see how the model stacks up against other MLLMs, Apple researchers compared Ferret-UI to GPT-4V, OpenAI’s MLLM, on public benchmarks, elementary tasks, and advanced tasks.
Ferret-UI outperformed GPT-4V across nearly all tasks in the elementary category, including icon recognition, OCR, widget classification, find icon, and find widget tasks on iPhone and Android. The only exception was the “find text” task on the iPhone, where GPT-4V slightly outperformed the Ferret models, as seen in the chart below.
Chart: K. You et al.
When it comes to grounding conversations in what is shown on screen, GPT-4V has a slight advantage, outperforming Ferret-UI 93.4% to 91.7%. However, the researchers note that Ferret-UI’s performance is still “noteworthy,” since it generates raw coordinates instead of choosing from the set of pre-defined boxes GPT-4V is given. You can find an example below.
Image: K. You et al.
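To see why raw coordinates are the harder output format, consider a hypothetical grounding query such as “tap the Settings icon.” The boxes and labels below are invented for illustration, and intersection-over-union is simply a standard way to score such predictions, not necessarily the paper's exact protocol:

```python
# Ferret-UI regresses raw pixel coordinates for the referenced element;
# GPT-4V, as evaluated, picks from a list of pre-defined candidate boxes.
ferret_ui_output = {"label": "Settings icon", "box": [812, 1430, 948, 1566]}

candidate_boxes = {"A": [64, 1430, 200, 1566],
                   "B": [812, 1430, 948, 1566],
                   "C": [438, 1430, 574, 1566]}
gpt4v_output = "B"  # multiple choice: select the matching candidate

def iou(a: list[int], b: list[int]) -> float:
    """Intersection-over-union between two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

# Scoring a freely generated box against the ground truth is far less
# forgiving than matching one of a handful of given options.
print(iou(ferret_ui_output["box"], candidate_boxes[gpt4v_output]))  # 1.0
```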
The paper does not address what Apple plans to use the technology for, or whether it will use it at all. Instead, the researchers state more broadly that Ferret-UI’s advanced capabilities have the potential to positively impact UI-related applications.
“The advent of these enhanced capabilities promises substantial advancements for a multitude of downstream UI applications, thereby amplifying the potential benefits afforded by Ferret-UI in this domain,” the researchers wrote.
The ways in which Ferret-UI could improve Siri are evident. Because the model has a thorough understanding of a user’s app screen and knows how to perform certain tasks based on it, Ferret-UI could be used to supercharge Siri to carry out tasks for you.
There’s certainly interest in an assistant that does more than just respond to queries. New AI gadgets such as the Rabbit R1 get plenty of attention for being able to carry out an entire task for you, such as booking a flight or ordering a meal, without you having to instruct them step by step.
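As a thought experiment, a screen-grounded assistant loop might look something like the sketch below. Every name here (capture_screenshot, query_ferret_ui, tap) and the action format are hypothetical stand-ins; Apple has not published any such API:

```python
from typing import Any

# Hypothetical stubs -- Apple has not published an API for driving a
# device with Ferret-UI; these stand in for the platform plumbing.

def capture_screenshot() -> bytes:
    """Stub: grab the current screen as image bytes."""
    return b""

def query_ferret_ui(screen: bytes, instruction: str) -> dict[str, Any]:
    """Stub: ask the model for the next grounded action on this screen."""
    return {"type": "done"}

def tap(x: int, y: int) -> None:
    """Stub: simulate a tap at pixel coordinates (x, y)."""

def run_task(instruction: str, max_steps: int = 10) -> None:
    """Look at the screen, ask the model what to do next, act, repeat."""
    for _ in range(max_steps):
        screen = capture_screenshot()
        action = query_ferret_ui(screen, instruction)
        if action["type"] == "done":
            break
        if action["type"] == "tap":
            # Because the model outputs raw coordinates, no app-specific
            # integration or pre-defined element list is required.
            x, y = action["coordinates"]
            tap(x, y)

run_task("Open Settings and turn on Dark Mode")
```

An assistant wired up this way would not just answer “how do I turn on Dark Mode,” it would go and do it.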