Tech

SETI, but for LLMs: how an inference solution that's barely a few months old could revolutionize the way models are run

  • Exo supports LLaMA, Mistral, LLaVA, Qwen, and DeepSeek
  • Can run on Linux, macOS, Android, and iOS, but not Windows
  • AI models needing 16GB RAM can run on two 8GB laptops

Running large language models (LLMs) typically requires expensive, high-performance hardware with substantial memory and GPU power. However, Exo software now looks to offer an alternative by enabling distributed artificial intelligence (AI) inference across a network of devices.

The software lets users combine the computing power of multiple computers, smartphones, and even single-board computers (SBCs) like the Raspberry Pi to run models that would otherwise be inaccessible.

