In summary
- Nvidia launched the Jetson Orin Nano Super, a $249 compact computer with AI performance improved by 70%, reaching 67 TOPS.
- Designed for developers, it can handle computer vision and language models with low power consumption (7-25 watts).
- Ampere architecture and software upgrades make it ideal for robotics, chatbots, and efficient edge applications.
Good news for AI developers and hobbyists: Nvidia just made it much cheaper to build robots, drones, smart cameras, and other AI-powered devices that need a brain. The company’s new Jetson Orin Nano Super, announced Tuesday and available now, packs more processing power than its predecessor while costing half as much, $249.
The palm-sized computer offers a 70% performance boost, reaching 67 trillion operations per second for AI tasks. This represents a significant jump from previous models, especially for powering chatbots, computer vision and robotics applications.
“This is the new Jetson Nano Super. Almost 70 trillion operations per second, 25 watts and $249,” Nvidia CEO Jensen Huang said in an official video filmed in his kitchen. “It runs everything the HGX does, even runs LLMs.”
Memory bandwidth also received a significant upgrade, increasing to 102 gigabytes per second, 50% faster than the previous generation Jetson. This improvement means the device can handle more complex AI models and process data from up to four cameras simultaneously.
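For developers wondering what a multi-camera pipeline looks like in practice, here is a minimal sketch using OpenCV to poll several feeds in one process. The camera indices and the single-pass loop are illustrative assumptions, not Nvidia-provided settings.

```python
# Hypothetical sketch: poll several camera feeds on a Jetson-class device.
# Camera indices are assumptions; adjust to the hardware actually attached.
import cv2

CAMERA_INDICES = [0, 1, 2, 3]  # up to four cameras

def open_cameras(indices):
    caps = []
    for idx in indices:
        cap = cv2.VideoCapture(idx)
        if cap.isOpened():
            caps.append(cap)
        else:
            cap.release()
    return caps

def main():
    caps = open_cameras(CAMERA_INDICES)
    try:
        for i, cap in enumerate(caps):
            ok, frame = cap.read()
            if not ok:
                continue
            # In a real pipeline, hand the frame to a detection model here.
            print(f"camera {i}: got frame with shape {frame.shape}")
    finally:
        for cap in caps:
            cap.release()

if __name__ == "__main__":
    main()
```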
The device comes with Nvidia’s Ampere GPU architecture and a 6-core ARM processor, allowing it to run multiple AI applications at once. That lets developers build more varied capabilities, such as small models for robots that handle environment mapping, object recognition, and voice commands while using little processing power.
Current Jetson Orin Nano owners aren’t left out either: Nvidia is also releasing software updates that boost the efficiency of its earlier Jetson AI hardware.
The numbers behind Nvidia’s new Jetson Orin Nano Super tell an interesting story. With just 1,024 CUDA cores, it seems modest compared to the RTX 2060’s 1,920 cores, the RTX 3060’s 3,584, or the RTX 4060’s 3,072. But the raw core count doesn’t tell the whole story.
While gaming GPUs like the RTX series consume between 115 and 170 watts of power, the Jetson consumes just 7 to 25 watts. That’s about one-seventh the power consumption of an RTX 4060—the most efficient of the group.
The memory bandwidth numbers paint a similar picture. The Jetson’s 102 GB/s might seem unimpressive next to the 300+ GB/s of RTX cards, but it’s specifically optimized for AI workloads at the edge, where efficient data handling matters more than raw throughput.
That said, the real magic happens in AI performance. The device produces 67 TOPS (trillion operations per second) for AI tasks—a difficult number to directly compare to the TFLOPS of RTX cards, since they measure different types of operations.
But in practical terms, the Jetson can handle tasks like running local AI chatbots, processing multiple camera feeds, and controlling robots, all simultaneously on a power budget that could barely spin a gaming GPU’s cooling fan, putting it roughly on par with an RTX 2060 at a fraction of the cost and power consumption.
Its 8GB of shared memory may not sound like much, but it makes the device more capable than a regular RTX 2060 at running local AI models like Flux or Stable Diffusion, which can throw an “out of memory” error on that GPU or have to offload part of the work to system RAM, slowing inference, which is basically the AI’s thinking process.
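To illustrate that memory trade-off, here is a hedged sketch using Hugging Face’s diffusers library to run Stable Diffusion with idle sub-models parked in system RAM instead of GPU memory. The checkpoint name and prompt are assumptions for illustration, and offloading trades speed for a lower memory footprint.

```python
# Hedged sketch: run Stable Diffusion in limited GPU memory by offloading
# idle sub-models to system RAM (slower, but avoids out-of-memory errors).
# The model checkpoint and prompt below are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
)

# Keep only the actively running sub-model on the GPU; park the rest in RAM.
pipe.enable_model_cpu_offload()
pipe.enable_attention_slicing()  # trims peak memory further at some speed cost

image = pipe("a small wheeled robot exploring a warehouse").images[0]
image.save("robot.png")
```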
The Jetson Orin Nano Super also supports a range of models, both large and small, including those with up to 8 billion parameters, such as Llama 3.1. It can generate roughly 18-20 tokens per second when running a quantized version of these models: a little slow, but good enough for many local applications, and an improvement over the previous generation of Jetson AI hardware.
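For readers who want to check that throughput themselves, a minimal sketch follows, assuming llama-cpp-python built with CUDA support and a locally downloaded quantized GGUF of Llama 3.1 8B. The file path and prompt are placeholders, not values from the article.

```python
# Hedged sketch: measure local generation speed with a quantized Llama 3.1 8B
# GGUF via llama-cpp-python (assumed to be built with CUDA support).
# The model path and prompt are placeholders.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.1-8b-instruct-q4_k_m.gguf",  # assumed quantized file
    n_gpu_layers=-1,  # offload all layers to the GPU
    n_ctx=4096,
)

start = time.time()
out = llm("Explain what an edge AI device is in two sentences.", max_tokens=128)
elapsed = time.time() - start

generated = out["usage"]["completion_tokens"]
print(out["choices"][0]["text"].strip())
print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.1f} tok/s")
```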
Given its price and features, the Jetson Orin Nano Super is designed primarily for prototyping and small-scale applications. For advanced users, businesses, or applications that require extensive computational resources, the device’s capabilities may seem limited compared to high-end systems that cost much more and require much more power.
Edited by Andrew Hayward