Nvidia brings the AI hype to the holidays with a tiny new development board called the Jetson Orin Nano Super. Nvidia cheekily refers to the board as the "most affordable generative AI supercomputer," though that may be stretching it quite a bit.
This dev kit is a system-on-module, similar to the Raspberry Pi Compute Module, which sits on top of a reference carrier board providing power and I/O. Like the Raspberry Pi, Nvidia's tiny dev kit is aimed at hobbyists and developers who want to experiment with generative AI.
Nvidia's Jetson Orin Nano Super Development Kit is similar to a Raspberry Pi Compute Module: a compute board containing the SoC attaches to a carrier board providing I/O and power. Click to enlarge.
The Orin Nano is powered by six Arm Cortex-A78AE cores alongside an Nvidia GPU based on its older Ampere architecture, with 1,024 CUDA cores and 32 Tensor cores.
As far as we can tell, the hardware is identical to the original Jetson Orin Nano; Nvidia says the board now ships with upgraded software that unlocks additional performance. The price has also been cut from the original $499 to $249.
At 67 TOPS, the Jetson Orin Nano Super boasts more AI grunt than the NPUs in Intel, AMD, or Qualcomm AI PCs. It also packs 8GB of LPDDR5 RAM capable of delivering 102GB/s of memory bandwidth. Nvidia claims these specs represent a 70 percent increase in performance and a 50 percent increase in memory bandwidth over its predecessor.
This bandwidth boost is especially important for those who want to play around with the large language models (LLMs) that power modern AI chatbots. We estimate the dev kit should be able to generate tokens at a rate of 18-20 per second running a 4-bit quantized version of Meta's 8-billion-parameter Llama 3 model.
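You can sanity-check that estimate yourself with a little napkin math. Token generation is typically memory-bandwidth bound, so a rough ceiling is the memory bandwidth divided by the model's size in bytes, since every token requires streaming the full set of weights from memory. Here's a minimal sketch of that calculation (the function name and figures are our own; real-world throughput lands below the ceiling thanks to compute overheads and the KV cache):

```python
# Napkin math: estimate LLM token-generation rate from memory bandwidth.
# Assumes decoding is memory-bandwidth bound, i.e. every generated token
# requires reading all model weights from RAM once.

def tokens_per_second(bandwidth_gbps: float, params_billion: float,
                      bytes_per_param: float) -> float:
    """Theoretical ceiling: bandwidth divided by model size in GB."""
    model_size_gb = params_billion * bytes_per_param
    return bandwidth_gbps / model_size_gb

# Jetson Orin Nano Super: 102 GB/s of bandwidth.
# Llama 3 8B quantized to 4 bits = 0.5 bytes per parameter.
ceiling = tokens_per_second(102, 8, 0.5)
print(f"~{ceiling:.1f} tokens/s ceiling")
```

That works out to roughly 25 tokens per second as an upper bound, which is consistent with our 18-20 token/s real-world estimate once overheads are accounted for.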
For more information on how TOPS, bandwidth, and memory capacity relate to model performance, check out our guide.
The dev kit's carrier board features the usual connectivity you'd expect from an SBC, including gigabit Ethernet, DisplayPort, four USB 3.2 Type-A ports, and USB-C, along with a pair of M.2 slots with M and E keys.
- Are you buying a PC to run local AI? How deep is Nvidia's CUDA moat?
- The European Commission has told Nvidia to take a closer look at its purchase of Run.ai.
- An introduction to speculative decoding for LLM performance
You might think Arm cores would be problematic in terms of software support, but that isn't the case. Nvidia has supported GPUs on Arm processors for years; its most sophisticated designs, the GH200 and GB200, pair its accelerators with its custom Arm-based Grace processor. You can expect broad support for the GPU giant's software suite, including Nvidia's Isaac, Metropolis, and Holoscan platforms. The dev kit also supports up to four cameras, which can be used for robotics and vision-processing workloads.
Nvidia will also be releasing a software update for its older Jetson Orin NX and Nano system-on-modules, which should boost GenAI performance by 1.7x, so if you already own an Orin Nano, you shouldn't miss out too much. ®