Apple's recent research papers showcase the company's deep dive into artificial intelligence. They indicate that Apple is working on on-device AI technology, including methods for creating animatable avatars and for running large language models directly on an iPhone or iPad.
One of the research projects, titled "LLM in a flash," focuses on efficiently running large language models on devices with limited memory. This method could allow complex AI applications to run smoothly on iPhones and iPads, including a generative-AI-powered Siri that assists with tasks such as text generation and natural language processing.
Another method, called HUGS, creates fully animatable avatars from short video clips captured on an iPhone. The neural rendering framework needs only a few seconds of video footage, and can produce a detailed, animatable avatar in as little as 30 minutes.
These advancements could have significant implications for the iPhone and Vision Pro. They suggest that Apple is making progress in running large language models on smaller, less powerful devices, potentially leading to more accessible generative AI tools and improved performance across applications. The HUGS method could also enhance user experiences in social media, gaming, education, and augmented reality.
While Apple tends to use the term "machine learning" rather than "AI," these research papers indicate a deeper involvement in new AI technology. However, the company has not publicly acknowledged implementing generative AI in its products, and it has yet to officially confirm its work on Apple GPT.