Deploy AI on Nvidia Jetson with Namla & Edge Impulse

Safa Khadraoui, ML Engineer

Published on March 3, 2025


Edge Impulse is a leading platform for developing and deploying machine learning models on edge devices. It enables developers to train, optimize, and deploy models on resource-constrained hardware such as microcontrollers, single-board computers, and GPUs. This post outlines the steps to deploy Edge Impulse models on edge devices using the Namla platform.

1. Prerequisites

Before deploying an Edge Impulse model, ensure you have the following:

  • An Edge Impulse account
  • A trained and optimized model in the Edge Impulse platform
  • A compatible edge device (e.g., x86, Raspberry Pi, or NVIDIA Jetson)
  • A Namla Platform account
  • Access to a Namla-supported edge device

2. Steps for Deploying YOLOv5 with Edge Impulse

In this blog post, we show how to deploy YOLOv5, but any compatible model can be deployed the same way.

Step 1: Export YOLOv5 Model

Since we are using a pretrained YOLOv5 model, we first export it to a supported format (ONNX in our case):

git clone https://github.com/ultralytics/yolov5.git 
cd yolov5 
pip install -r requirements.txt
python export.py --weights yolov5s.pt --img 640 --batch 1 --include onnx

This generates a yolov5s.onnx file for inference.
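Before uploading, it can help to double-check the input geometry the exported model expects. The sketch below is a hedged illustration (not part of the Ultralytics tooling) that reproduces the scale-and-pad arithmetic of YOLOv5's letterbox preprocessing, which fits an arbitrary frame into the 640×640 input while preserving aspect ratio:

```python
def letterbox_params(src_w, src_h, dst=640):
    """Compute the resize scale and per-side padding that fit a
    src_w x src_h frame into a dst x dst model input."""
    scale = min(dst / src_w, dst / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_w, pad_h = dst - new_w, dst - new_h
    # Padding is split evenly between the two sides of each axis.
    return (scale,
            (pad_w // 2, pad_w - pad_w // 2),   # left, right
            (pad_h // 2, pad_h - pad_h // 2))   # top, bottom

# A 1280x720 camera frame is scaled by 0.5 to 640x360,
# then padded with 140 px on top and bottom.
scale, lr, tb = letterbox_params(1280, 720)
print(scale, lr, tb)  # → 0.5 (0, 0) (140, 140)
```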

Step 2: Upload the model to your Edge Impulse account
In Edge Impulse Studio, open your project and upload the yolov5s.onnx file using the Bring Your Own Model (BYOM) upload flow.

After you’ve uploaded the model, you will be prompted to choose a deployment method. Choose “Docker container (NVIDIA Jetson Orin - JetPack 6.0)” (pick the JetPack version compatible with your device).

Step 3: Deploy Edge Impulse using Namla

To deploy Edge Impulse on Namla, log in to your account, go to the Applications section, and create a new application:


Fill in the form and click Next:

Paste your Kubernetes manifest or build a new one using the easy manifest creator. You can also use this ready-to-use manifest provided by Namla, which deploys the Edge Impulse inference container:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference-container
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inference-container
  template:
    metadata:
      labels:
        app: inference-container
    spec:
      containers:
        - name: inference-container
          image: public.ecr.aws/g7a8t7v6/inference-container-jetson-orin-6-0:aa5c24954939fbe5e525b0bcac0959ec6017baf7
          ports:
            - containerPort: 1337
          args:
            - --api-key
            - ei_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
            - --run-http-server
            - "1337"
            - --impulse-id
            - "2"
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - axiomtek-orin-nx-jp6-id68
---
apiVersion: v1
kind: Service
metadata:
  name: inference-service
spec:
  type: NodePort
  selector:
    app: inference-container
  ports:
    - protocol: TCP
      port: 1337
      targetPort: 1337
      nodePort: 31337
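Before applying this manifest, you will at minimum want to substitute your own Edge Impulse API key (the `--api-key` argument) and the hostname of your target node (the `nodeAffinity` value). A small, hedged helper for doing this with only the Python standard library (the function and placeholder names are illustrative, not part of Namla's tooling):

```python
from string import Template

# Minimal fragment of the deployment manifest, with the two
# deployment-specific values turned into template placeholders.
MANIFEST_TEMPLATE = Template("""\
          args:
            - --api-key
            - $api_key
            - --run-http-server
            - "1337"
            - --impulse-id
            - "2"
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - $node
""")

def render_manifest(api_key: str, node: str) -> str:
    """Fill in the project API key and the target node hostname."""
    return MANIFEST_TEMPLATE.substitute(api_key=api_key, node=node)

print(render_manifest("ei_your_key_here", "axiomtek-orin-nx-jp6-id68"))
```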

Click Next, and schedule the application on the device you want to deploy to. In our case, we are using an NVIDIA Jetson running JetPack 6. Then click Deploy and let Namla do the work for you.


Once your application is successfully deployed, you can reach the Edge Impulse HTTP endpoint at http://localhost:1337 on the device (or, via the NodePort service above, at port 31337 on the device's IP) and test the model you uploaded earlier.
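You can also exercise the endpoint programmatically. The sketch below builds a JSON request carrying a raw features array and POSTs it to the inference server; the `/api/features` path is an assumption based on Edge Impulse's documented Docker-container API, so check it against your container version:

```python
import json
from urllib import request

def build_inference_request(features,
                            url="http://localhost:1337/api/features"):
    """Build a POST request carrying a raw features array as JSON."""
    body = json.dumps({"features": features}).encode("utf-8")
    return request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_inference_request([0.1, 0.2, 0.3])
print(req.get_method(), req.full_url)

# Sending it requires the container to be running on the device:
# with request.urlopen(req) as resp:
#     print(json.load(resp))
```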

Namla offers an integrated platform for deploying edge AI infrastructure, managing devices, and orchestrating applications while ensuring edge-to-cloud connectivity and security. This collaboration with NVIDIA will help enable businesses in various sectors, such as retail, manufacturing, healthcare, oil & gas, and video analytics, to scale up their deployment of NVIDIA Jetson systems-on-module and streamline the implementation of edge AI applications using NVIDIA Metropolis microservices.