SETI, but for LLMs: how a solution barely a few months old could revolutionize the way inference is done

  • Exo supports LLaMA, Mistral, LLaVA, Qwen, and DeepSeek
  • Can run on Linux, macOS, Android, and iOS, but not Windows
  • AI models needing 16GB RAM can run on two 8GB laptops

Running large language models (LLMs) typically requires expensive, high-performance hardware with substantial memory and GPU power. However, Exo software now looks to offer an alternative by enabling distributed artificial intelligence (AI) inference across a network of devices.

The software lets users combine the computing power of multiple computers, smartphones, and even single-board computers (SBCs) like Raspberry Pis to run models that would otherwise be inaccessible.
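The core idea behind pooling memory across devices can be sketched with a simple partitioning scheme. The code below is an illustrative example only, not Exo's actual API: it splits a model's layers across devices in proportion to each device's available memory, which is how a model needing 16GB of RAM could, in principle, run on two 8GB laptops. The function name and structure are assumptions for illustration.

```python
# Illustrative sketch (not Exo's real API): assign each device a contiguous
# range of a model's layers, proportional to its share of total memory.

def partition_layers(num_layers, device_memory_gb):
    """Return a (start, end) layer range per device, sized by memory share."""
    total = sum(device_memory_gb)
    ranges, start = [], 0
    for i, mem in enumerate(device_memory_gb):
        if i == len(device_memory_gb) - 1:
            end = num_layers  # last device absorbs any rounding remainder
        else:
            end = start + round(num_layers * mem / total)
        ranges.append((start, end))
        start = end
    return ranges

# Two 8GB laptops splitting a 32-layer model:
print(partition_layers(32, [8, 8]))  # [(0, 16), (16, 32)]

# An 8GB laptop paired with a 4GB phone gets an uneven split:
print(partition_layers(32, [8, 4]))  # [(0, 21), (21, 32)]
```

Each device then only holds its assigned slice of the weights, and activations are passed between devices over the network during inference, which is what makes otherwise-unrunnable models fit.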
