
Any links to online stores should be assumed to be affiliates. The company or PR agency provides all or most review samples. They have no control over my content, and I provide my honest opinion.

MediaTek has announced it will be showcasing on-device generative AI using Llama 2, Meta’s open-source large language model (LLM), at next week’s Mobile World Congress 2024 in Barcelona. The demo will highlight MediaTek’s latest Dimensity 9300 and 8300 system-on-chips (SoCs) running an optimized version of Llama 2 for the first time.

What is a Large Language Model?

A large language model is a type of artificial intelligence system trained on massive volumes of text data to generate human-like writing. Unlike traditional AI models built for specific natural language processing tasks like translation or question answering, LLMs are trained in a “self-supervised” manner simply to predict the next word in a sequence. This deceptively simple objective lets them handle language in a far more general, human-like way.
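The next-word objective can be illustrated with a toy counting model. This is a deliberately tiny sketch, not how Llama 2 works internally (real LLMs learn the prediction with neural networks over billions of parameters), but the task being solved is the same:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, how often each other word follows it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent continuation of `word`, or None if unseen."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

A real LLM replaces the frequency table with a learned neural network, but the training signal, predicting what comes next, is identical.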

Over the past few years, advances in computing power and dataset size have enabled dramatic leaps in LLM capabilities. Models like GPT-3, created by AI research company OpenAI in 2020, showed that LLMs could generate surprisingly coherent essays, stories, code, and more when given a prompt. More recent models like Meta’s Llama 2 and Google’s PaLM have continued pushing the boundaries of what’s possible.

Introducing Llama 2

Llama 2 is Meta’s latest openly available LLM. Released in July 2023, it builds on Meta’s earlier LLaMA model with a larger corpus of publicly available web data. Llama 2 ships in 7-billion-, 13-billion-, and 70-billion-parameter versions; the 7-billion-parameter variant most relevant for mobile is far smaller than leading proprietary models from companies like Google and Anthropic, which are reported to exceed 100 billion parameters. Its open license and compact smaller variants give developers options that closed cloud-only models do not.

Llama 2 demonstrates strong abilities in areas like summarization, question answering, and dialogue across languages. And like all modern LLMs, it can chat about a diverse range of topics while maintaining a conversational flow. Meta is positioning Llama 2 as an accessible model for students, researchers, and companies to build creative applications both quickly and responsibly.

On-Device Generative AI

Until now, leveraging large models like Llama 2 has meant sending requests to powerful cloud servers for processing. Running these models on consumer devices has been largely impractical given their heavy compute, memory, and energy requirements.

However, MediaTek believes its newest Dimensity chipsets open the door to on-device generative AI. Its integrated APU (AI processing unit) and NeuroPilot framework offer hardware-based acceleration tailored for neural networks like Llama 2. Combined with software optimizations, MediaTek claims that the Dimensity 9300 and 8300 will offer seamless Llama 2 experiences directly on smartphones without relying on the cloud.
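MediaTek has not detailed its software optimizations, but a standard technique for fitting LLM weights into phone memory is quantization: storing each weight at reduced precision. A minimal, purely illustrative sketch of symmetric 8-bit quantization follows; the function names are mine, not part of any MediaTek API:

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: represent each float weight as an
    8-bit integer plus one shared scale factor, shrinking storage
    roughly 4x versus 32-bit floats."""
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights at inference time."""
    return [q * scale for q in quantized]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each restored weight lands within half a quantization step of the original
print(max(abs(a - b) for a, b in zip(weights, restored)))
```

Production toolchains apply this per-layer or per-channel and often go below 8 bits, trading a little accuracy for memory and bandwidth savings that accelerators like MediaTek’s APU can exploit.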

On-device processing brings notable advantages:

  • Privacy: Sensitive user data never leaves the device
  • Security: Reduced exposure to hacking of data in transit
  • Reliability: Ability to work offline or with poor connectivity
  • Latency: Faster response times
  • Cost: No cloud compute fees for developers/users

The MediaTek Demo

At Mobile World Congress next week, MediaTek will be showcasing an application using Llama 2 running fully on a reference device powered by the new Dimensity hardware.

The application allows users to provide a longform document like a news article or blog post. Llama 2 then analyzes the text and generates a short social media-friendly summary while preserving key details.
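The demo’s code is not public, but a summarizer like this typically wraps the user’s document in an instruction prompt before handing it to the model. A hypothetical sketch; the template wording and the `build_summary_prompt` name are assumptions for illustration:

```python
def build_summary_prompt(document, max_words=40):
    """Wrap a long document in a summarization instruction.
    The exact prompt used in MediaTek's demo is not public;
    this template is purely illustrative."""
    return (
        f"Summarize the following article in at most {max_words} words, "
        "keeping names, numbers, and other key details:\n\n"
        f"{document}\n\n"
        "Summary:"
    )

article = "MediaTek will demo Llama 2 running on its Dimensity 9300 chip at MWC."
prompt = build_summary_prompt(article, max_words=30)
# The prompt would then be fed to the on-device model, e.g.:
#   summary = model.generate(prompt)   # hypothetical inference call
print(prompt)
```

The heavy lifting, compressing the article while preserving its key details, happens inside the model; the application layer mostly shapes the instruction and post-processes the output.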

This demonstration of on-device generative AI capabilities highlights what could be possible on next-generation smartphones. If the experience proves seamless, more developers may rapidly build Llama 2 directly into their apps rather than rely on cloud APIs.

The combination of MediaTek’s specialized hardware and Llama 2’s relatively compact, openly available design could also greatly expand access to AI globally, even for users in areas with limited data connectivity. And keeping processing on-device improves privacy and security while cutting cloud costs.

The Road Ahead

MediaTek is not the only chipmaker eyeing on-device AI with LLMs. Qualcomm recently highlighted its own advances on its new Snapdragon 8 Gen 3.

The big open question is whether smartphone power budgets can truly support seamless experiences with massive models, even with acceleration hardware. Heavy workloads may still throttle devices and drain batteries quickly.

Nonetheless, MediaTek’s demo represents an important milestone in bringing more advanced AI capabilities to mobile devices. And Llama 2’s smaller size relative to leading LLMs likely makes it an ideal fit for these initial trials.

If MediaTek can convincingly showcase the technology next week, wider adoption may follow quickly. Developers could tap into Llama 2 to create innovative mobile apps leveraging text generation, summarization, translation, search, and recommendation features normally requiring cloud connectivity. And they can do so across languages, opening doors globally.

Of course, the hallmark of modern LLMs remains their conversational ability. If smartphone users eventually gain access to chat assistants built on models like Llama 2 directly on their devices, it could profoundly expand this human-computer interface. MediaTek’s efforts underscore that the age of on-device generative AI may arrive sooner than we think.
