Apple did NOT talk about AI at the event, but included it in several products

At WWDC 2023, Apple showcased significant advances in integrating machine learning into its products and services. While deliberately avoiding the term “artificial intelligence” (AI), the company repeatedly credited “machine learning” (ML) for new features that improve the user experience.

This contrasts with the strategy of its competitors, such as Microsoft and Google, which have placed a strong emphasis on generative AI.

By leading with machine learning rather than AI, Apple keeps the focus on concrete, user-facing improvements instead of on the technology itself.

Transforming autocorrect and dictation in iOS 17

During the unveiling of iOS 17, Craig Federighi, Apple’s Senior Vice President of Software Engineering, introduced major improvements to autocorrect and dictation, powered by on-device machine learning.

Federighi highlighted the adoption of a new “transformer language model,” which he said makes autocorrect more accurate than ever.

Built on the transformer architecture behind recent generative AI systems, the model changes how autocorrect works: by simply pressing the space bar, users can have it complete entire words or even whole sentences.

The model also adapts to the user’s writing style over time, further improving the quality of its suggestions.
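Apple has not published how the model works internally, so the following is only a conceptual sketch of the idea Federighi described: a language model repeatedly predicting the next word so that a whole phrase can be accepted with the space bar. Everything in it, including TransformerLanguageModel and predictNextToken, is a hypothetical placeholder rather than an Apple API.

```swift
// Conceptual sketch only: Apple has not published the autocorrect model or its API.
// `TransformerLanguageModel` and `predictNextToken` are hypothetical placeholders.
protocol TransformerLanguageModel {
    /// Returns the most likely next word given the text typed so far.
    func predictNextToken(context: String) -> String
}

/// Greedily extends the typed text one word at a time, the way an inline
/// completion could be accepted when the user taps the space bar.
func completeSentence(from typedText: String,
                      using model: TransformerLanguageModel,
                      maxTokens: Int = 12) -> String {
    var completion = typedText
    for _ in 0..<maxTokens {
        let next = model.predictNextToken(context: completion)
        if next.isEmpty { break }                         // nothing more to suggest
        completion += next == "." ? next : " " + next
        if next == "." { break }                          // stop at the end of the sentence
    }
    return completion
}
```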

The potential of Apple Silicon and the Neural Engine

Apple has integrated machine learning into its devices thanks to the Neural Engine, a specialized part of the Apple Silicon chips. This unit is designed to accelerate machine learning applications and has been present since the 2017 A11 chip.
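Third-party apps do not program the Neural Engine directly; they go through Core ML, which decides at runtime where each part of a model runs. A minimal sketch, assuming an already compiled Core ML model at some URL, looks like this:

```swift
import CoreML

// Minimal sketch of how an app reaches the Neural Engine: Core ML schedules the
// work, and `.all` lets it use the CPU, GPU, and Neural Engine as available.
// The model URL is a placeholder for any compiled .mlmodelc bundle.
func loadModel(at modelURL: URL) throws -> MLModel {
    let configuration = MLModelConfiguration()
    configuration.computeUnits = .all            // include the Neural Engine
    return try MLModel(contentsOf: modelURL, configuration: configuration)
}
```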

At the event, it was highlighted that dictation in iOS 17 uses a “transformer-based speech recognition model,” which leverages the Neural Engine for even more accurate dictation.
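That specific dictation model is not exposed to developers, but Apple’s public Speech framework offers the same kind of on-device transcription. The sketch below assumes a prerecorded audio file and simply prints the transcript:

```swift
import Speech

// Sketch using the public Speech framework. This is not the private model behind
// iOS 17 dictation, but it shows the same idea: transcription that stays on the device.
func transcribeOnDevice(audioURL: URL) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")) else { return }

    let request = SFSpeechURLRecognitionRequest(url: audioURL)
    if recognizer.supportsOnDeviceRecognition {
        request.requiresOnDeviceRecognition = true   // keep audio off the network
    }

    _ = recognizer.recognitionTask(with: request) { result, _ in
        if let result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}
```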

Improvements to iPad, AirPods, and Apple Watch

Throughout the event, Apple made multiple references to machine learning. For the iPad, it unveiled a new lock screen feature that uses an “advanced machine learning model” to generate additional frames for selected Live Photos.

Furthermore, new machine learning models allow iPadOS to recognize fields in PDF files, so contact information can be auto-filled and forms completed considerably faster.
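Apple did not explain how this field detection works, so the following is only a rough sketch of the kind of on-device building block involved, using the Vision framework’s text recognizer to look for common contact-field labels on a rendered page image. The label list and the pageImage input are assumptions for illustration, not Apple’s implementation:

```swift
import Foundation
import CoreGraphics
import Vision

// Hedged sketch: scans a rendered PDF page image for text that looks like a
// contact-field label. The real iPadOS feature is not documented.
func findContactFieldLabels(in pageImage: CGImage) throws -> [String] {
    let labels = ["Name", "Email", "Phone", "Address"]   // illustrative label list
    var matches: [String] = []

    let textRequest = VNRecognizeTextRequest { req, _ in
        let observations = req.results as? [VNRecognizedTextObservation] ?? []
        for observation in observations {
            guard let text = observation.topCandidates(1).first?.string else { continue }
            if labels.contains(where: { text.localizedCaseInsensitiveContains($0) }) {
                matches.append(text)
            }
        }
    }
    textRequest.recognitionLevel = .accurate   // slower but more precise recognition

    try VNImageRequestHandler(cgImage: pageImage, options: [:]).perform([textRequest])
    return matches
}
```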

AirPods now offer “Adaptive Audio,” which uses machine learning to learn the user’s listening preferences over time and adjust volume accordingly.

The Apple Watch, meanwhile, gains the Smart Stack, a stack of widgets that uses machine learning to surface relevant information at the right time.

Journal: a new app that leverages machine learning

Apple also unveiled a brand-new app called Journal, designed to let iPhone users keep a personal journal. The app encrypts text and images to protect user privacy. Although the feature clearly relies on AI techniques, Apple again chose not to mention the term explicitly.

Using on-device machine learning, the app can offer personalized suggestions to inspire writing, based on information stored on the iPhone such as photos, location, music, and workouts.

Users have complete autonomy to determine which information they want to include and save in their journals.

Vision Pro: an immersive experience created with machine learning

The flagship product unveiled at the event was the Apple Vision Pro, a new headset that provides an immersive augmented reality experience. During the demonstration, Apple revealed that the moving image of the user’s eyes shown on the front of the headset is generated using machine learning.

By performing a facial scan, a digital representation called a “Persona” is generated using an encoder-decoder neural network. This network compresses the facial information collected during the scanning procedure and utilizes it to produce a three-dimensional (3D) model of the user’s face.
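The actual Persona pipeline is not public, but the encoder-decoder structure Apple described can be summarized schematically; every type and function below is a hypothetical placeholder:

```swift
// Purely illustrative sketch of the encoder-decoder idea behind Personas; the
// real pipeline is not public, and all types here are hypothetical placeholders.
struct FaceScan { var depthSamples: [Float] }        // data captured during enrollment
struct LatentCode { var values: [Float] }            // compact learned representation
struct FaceMesh { var vertices: [SIMD3<Float>] }     // renderable 3D face model

protocol PersonaEncoder {
    /// Compresses the enrollment scan into a small latent vector.
    func encode(_ scan: FaceScan) -> LatentCode
}

protocol PersonaDecoder {
    /// Reconstructs a 3D face from the latent code, conditioned on live
    /// expression parameters so the avatar can move in real time.
    func decode(_ code: LatentCode, expression: [Float]) -> FaceMesh
}
```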

The powerful M2 Ultra chip and the future of machine learning at Apple

At the event, Apple unveiled its most powerful Apple Silicon chip to date, the M2 Ultra. This chip features up to 24 CPU cores, 76 GPU cores, and a 32-core Neural Engine, capable of performing 31.6 trillion operations per second.

Apple stressed that this power will be especially useful for training “large transformer models,” demonstrating its interest in AI applications.

AI experts have expressed enthusiasm about the M2 Ultra’s capabilities, chiefly because its unified memory architecture allows much larger AI models to run on a single machine.
