AI model development has reached an inflection point: high-performance computing capabilities once reserved for the cloud are moving out to edge devices. It's a refreshing shift compared to the all-consuming nature of large language models (LLMs) and the GPUs needed to run them.
“You’re gonna run out of compute, power, energy and money at some point,” said Zach Shelby, CEO and co-founder of Edge Impulse, a Qualcomm Technologies company. “We want to deploy [generative AI] so broadly. It’s not scalable, right? And then it…
