Tech

SETI, but for LLMs: how a solution that's barely a few months old could revolutionize the way inference is done

  • Exo supports LLaMA, Mistral, LLaVA, Qwen, and DeepSeek
  • Can run on Linux, macOS, Android, and iOS, but not Windows
  • AI models needing 16GB RAM can run on two 8GB laptops

Running large language models (LLMs) typically requires expensive, high-performance hardware with substantial memory and GPU power. However, the Exo software offers an alternative by enabling distributed artificial intelligence (AI) inference across a network of everyday devices.

The software lets users pool the computing power of multiple computers, smartphones, and even single-board computers (SBCs) like Raspberry Pis to run models that would otherwise be inaccessible on any one of those devices alone.
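The idea behind pooling memory across devices can be illustrated with a simple sketch. This is not Exo's actual implementation, just a hypothetical partitioning scheme: a model's layers are split into contiguous slices, with each device taking a share proportional to its available RAM. Under that assumption, a model needing 16GB could indeed be split evenly across two 8GB laptops.

```python
# Conceptual sketch only -- NOT exo's real partitioning code.
# Split a model's layers across devices in proportion to each
# device's available memory.

def partition_layers(num_layers, device_memory_gb):
    """Assign each device a contiguous slice of layers,
    sized by its share of the total pooled memory.
    Returns a list of (start, end) layer index pairs."""
    total = sum(device_memory_gb)
    bounds = []
    start = 0
    running = 0.0
    for mem in device_memory_gb:
        running += mem
        end = round(num_layers * running / total)
        bounds.append((start, end))
        start = end
    return bounds

# A 32-layer model on two 8GB laptops: each holds half the layers.
print(partition_layers(32, [8, 8]))    # [(0, 16), (16, 32)]

# Uneven devices get proportionally sized slices.
print(partition_layers(32, [12, 4]))   # [(0, 24), (24, 32)]
```

During inference, each device would then run only its own slice of layers, passing activations to the next device in the chain, so no single machine ever needs to hold the full model in memory.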

