NVIDIA Jetson Thor and Namla: x8 Multimodal AI Inference at the Edge with Advantech

Rabah Guedrez - CEO

Published on June 23, 2025


Edge AI is reshaping the way intelligent applications are deployed, enabling real-time decision-making directly where data is generated—on the edge. From smart cities to industrial automation and autonomous systems, organizations are increasingly turning to compact yet powerful computing platforms to handle complex AI workloads outside traditional data centers.

At the forefront of this transformation is the NVIDIA Jetson platform, a family of edge AI modules built to bring accelerated computing to resource-constrained environments. Known for their small form factor, energy efficiency, and AI capabilities, Jetson modules have become a cornerstone in robotics, computer vision, and industrial use cases.

Among these, the NVIDIA Jetson Thor represents a major leap forward. Delivering up to 2,070 TOPS of AI performance, Jetson Thor is designed for rugged industrial deployments and demanding edge workloads. It enables enterprise IT leaders and solution providers to deploy sophisticated AI-driven applications with unprecedented speed and efficiency, revolutionizing sectors such as smart infrastructure, robotics, autonomous vehicles, and real-time inspection.

Jetson Thor: Powerful and Rugged
Performance Comparison Across Jetson Modules

Jetson Thor achieves a massive leap in performance, delivering roughly 8× the compute capability of its predecessors in a similar form factor. Here's a quick look:

| Jetson Series | Reference | AI Performance (TOPS) | CUDA Cores | Tensor Cores |
|---|---|---|---|---|
| Jetson Orin Nano Series | Orin Nano 4GB | 20 | 512 | 16 |
| Jetson Orin Nano Series | Orin Nano 4GB (Super) | 34 | 512 | 16 |
| Jetson Orin Nano Series | Orin Nano 8GB | 40 | 512 | 16 |
| Jetson Orin Nano Series | Orin Nano 8GB (Super) | 67 | 512 | 16 |
| Jetson Orin NX Series | Orin NX 8GB | 70 | 1024 | 32 |
| Jetson Orin NX Series | Orin NX 8GB (Super) | 117 | 1024 | 32 |
| Jetson Orin NX Series | Orin NX 16GB | 100 | 1024 | 32 |
| Jetson Orin NX Series | Orin NX 16GB (Super) | 157 | 1024 | 32 |
| Jetson AGX Orin Series | AGX Orin 32GB | 200 | 1792 | 56 |
| Jetson AGX Orin Series | AGX Orin 64GB | 275 | 2048 | 64 |
| Jetson Thor | Jetson Thor | 2070 | 8192 | 192 |

Jetson Thor: Unleashing Multimodal AI at the Edge

Jetson Thor’s 2,070 TOPS FP4 throughput delivers an ~8× leap in AI performance over AGX Orin, enabling edge devices to run advanced multimodal inference workloads such as vision-language models (VLMs), large language models (LLMs), and high-speed sensor fusion—entirely on-device, with no reliance on cloud infrastructure.

It’s not just about TOPS—Jetson Thor also introduces a massive increase in GPU parallelism with up to 8,192 CUDA cores, compared to just 2,048 in AGX Orin. This 4× jump in CUDA cores significantly boosts performance for deep learning workloads, especially for transformer-based models and convolution-heavy tasks like 3D vision and semantic segmentation.

With 128 GB of LPDDR5X memory, Thor supports high-throughput inference for large models like LLaVA, SAM, or quantized 70B parameter LLMs. Its architecture supports TensorRT-LLM, Transformer Engine, and accelerated KV cache operations—making it ideal for both real-time LLM and VLM inferencing at the edge.
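
To make this concrete, here is a minimal sketch of on-device vision-language inference, assuming a quantized VLM has already been deployed on the Jetson module and is served behind an OpenAI-compatible chat-completions HTTP endpoint (for example, by a local TensorRT-LLM-based server). The endpoint URL, port, and model name are placeholders for illustration, not part of any specific product.

```python
import base64
import requests

# Hypothetical local endpoint: assumes a VLM is served on the Jetson device
# behind an OpenAI-compatible chat-completions API (URL and model are placeholders).
ENDPOINT = "http://localhost:8000/v1/chat/completions"
MODEL = "llava-quantized"  # placeholder name for a locally deployed VLM

def describe_frame(image_path: str, question: str) -> str:
    """Send one camera frame plus a question to the on-device VLM."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    payload = {
        "model": MODEL,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
        "max_tokens": 128,
    }
    resp = requests.post(ENDPOINT, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(describe_frame("frame.jpg", "Is there a forklift blocking the aisle?"))
```

Because both the model and the data stay on the module, the same request path works whether the device is connected to the cloud or fully disconnected.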

Built in a rugged, compact form factor, Jetson Thor is engineered for demanding deployments—featuring 100 GbE networking, CAN, USB 3.2, GMSL2 camera support, and wide voltage/temperature tolerance. Paired with platforms like Advantech’s MIC-743, Thor brings server-class multimodal AI capabilities to factories, mobile robots, smart intersections, and autonomous infrastructure.
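
As an illustration of the camera-facing side of such a deployment, the sketch below grabs frames from a CSI or GMSL2 camera on a Jetson module through a GStreamer pipeline in OpenCV. It assumes OpenCV was built with GStreamer support and that the camera driver exposes the sensor through the standard nvarguscamerasrc element (many GMSL2 deserializer setups do; others present a V4L2 device instead). Sensor ID, resolution, and framerate are example values.

```python
import cv2

# Example GStreamer pipeline for a CSI/GMSL2 camera on Jetson.
# Assumes OpenCV built with GStreamer support; sensor-id, resolution
# and framerate are illustrative values.
PIPELINE = (
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink drop=1"
)

def main() -> None:
    cap = cv2.VideoCapture(PIPELINE, cv2.CAP_GSTREAMER)
    if not cap.isOpened():
        raise RuntimeError("Could not open camera pipeline")
    try:
        for _ in range(100):  # grab a short burst of frames
            ok, frame = cap.read()
            if not ok:
                break
            # Hand the BGR frame to the local inference stack here.
            print("frame:", frame.shape)
    finally:
        cap.release()

if __name__ == "__main__":
    main()
```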

From object detection to natural language interaction, Jetson Thor transforms the edge into an intelligent, responsive compute layer, ready to run LLMs and VLMs with low latency, no cloud offload, and full autonomy.

Jetson Thor stands out with 2,070 TOPS, offering roughly 8-10× the AI compute of the Jetson AGX Orin series.

Kubernetes-Native Orchestration by Namla

Namla enhances Jetson Thor's capabilities by offering a Kubernetes-native edge AI orchestration platform. Enterprises gain:

  • Zero-touch Provisioning: Rapid deployment and management of distributed edge devices without manual intervention, significantly reducing setup time and operational complexity.

  • Real-time Observability: Complete visibility into device health, GPU usage, network performance, and AI model efficiency, enabling proactive management and minimizing downtime.

  • Secure MLOps: Efficient management and deployment of AI models across multiple edge nodes securely and consistently, ensuring robust and reliable AI operations.

  • Embedded SD-WAN: High-speed networking capabilities, integrated with Namla's embedded SD-WAN, enhance clustering performance, ensuring secure, efficient, and reliable connectivity between distributed edge devices and the cloud.
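
To give a feel for what "Kubernetes-native" means in practice, here is a minimal sketch that submits a GPU-enabled inference workload to an edge node using the standard Kubernetes Python client. The container image, namespace, and node label are placeholders, and the sketch shows plain Kubernetes mechanics rather than Namla's own APIs.

```python
from kubernetes import client, config

# Placeholders: image, namespace and node label are illustrative values,
# not Namla- or Advantech-specific identifiers.
NAMESPACE = "edge-ai"
IMAGE = "registry.example.com/vlm-inference:latest"

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "vlm-inference"},
    "spec": {
        "replicas": 1,
        "selector": {"matchLabels": {"app": "vlm-inference"}},
        "template": {
            "metadata": {"labels": {"app": "vlm-inference"}},
            "spec": {
                # Pin the pod to Jetson-class nodes via a label (placeholder).
                "nodeSelector": {"device-class": "jetson-thor"},
                "containers": [{
                    "name": "inference",
                    "image": IMAGE,
                    # Request the GPU through the NVIDIA device plugin.
                    "resources": {"limits": {"nvidia.com/gpu": 1}},
                }],
            },
        },
    },
}

def main() -> None:
    config.load_kube_config()  # or load_incluster_config() when running in a pod
    apps = client.AppsV1Api()
    apps.create_namespaced_deployment(namespace=NAMESPACE, body=deployment)
    print("Deployment submitted")

if __name__ == "__main__":
    main()
```

In a managed fleet the same declarative object can be applied across many sites at once; the sketch simply shows the underlying Kubernetes primitive that such orchestration builds on.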

Namla continues its commitment to enabling Edge AI for its customers. Through its partnership with NVIDIA, Namla brought official support for JetPack 6 (as highlighted in a recent NVIDIA blog article). Now, with NVIDIA Jetson Thor, Namla further advances its edge orchestration capabilities.

Strategic Partnership with Advantech

Namla continues its strategic partnership with Advantech to seamlessly integrate Jetson Thor with advanced orchestration capabilities. Advantech's rugged Jetson Thor lineup, notably the MIC-743, perfectly complements Namla’s platform, providing enterprises with a ready-to-deploy, fully integrated edge AI solution. This collaboration streamlines the path from evaluation to deployment, empowering enterprises to rapidly scale their edge AI initiatives.

Namla's CEO and co-founder at GTC Paris, holding a Jetson Thor signed by NVIDIA's CEO

Real-World Impact
  • Robotics: Enhanced autonomy with real-time AI vision, navigation, object detection, and sensor integration, making robotic systems smarter and more responsive in dynamic environments.

  • Smart Cities: Real-time analytics and responsiveness for traffic management, safety monitoring, infrastructure monitoring, predictive maintenance, and intelligent public safety solutions, significantly improving urban life quality.

  • Industrial Automation: On-site advanced quality inspection, predictive maintenance, and real-time analytics, boosting productivity and safety while reducing operational costs.

Namla’s Edge AI Commitment

The combined strength of NVIDIA Jetson Thor, Namla’s Kubernetes-native orchestration, and Advantech’s hardware accelerates edge AI adoption. Enterprises can now confidently deploy powerful, scalable, and secure edge AI solutions, bringing cloud-like agility directly to the edge and driving innovation across industries.

 

Resources
March 03, 2025
Deploy AI on Nvidia Jetson with Namla & Edge Impulse
Edge Impulse is a leading platform for developing and deploying machine learning models on edge devices. It enables developers to train, optimize, and deploy models on resource-constrained hardware such as microcontrollers, single-board computers, and GPUs. This report outlines the steps to deploy Edge Impulse models on various edge devices using the Namla platform.
July 29, 2024
Leveraging Kubernetes for Private LLMs: Lessons from the Edge
Seeing how our clients use our Edge to Cloud Orchestration Platform for their distributed edge infrastructure inspired me to write this blog. In this post, I’ll show how you can build a private ChatGPT with open-source LLMs at the edge of your network, give it access to your data, and use it across your devices from anywhere, even while on vacation, without privacy concerns. This guide will help you leverage the power of edge computing to create a seamless, personalized AI experience that stays with you wherever you go.