Introduction

Welcome! Here you’ll find some useful pages and notes on using the meta-tegra layer in your project. See the linked pages to have a look around! You may want to start with this page for information on which branches support which platforms and L4T releases.

If you are new to the platform and don’t have a specific set of layers you plan to use with meta-tegra, or if you are just interested in trying meta-tegra for the first time as quickly as possible, we highly recommend checking out tegra-demo-distro as a starting point. See the README there for instructions to build one of several demo images which demonstrate the capabilities of meta-tegra and companion layers.

For real-time communication, the OE4T project uses https://gitter.im/OE4T/community

Ignore any historical references to Slack, as it is no longer available.

See the thread at https://github.com/OE4T/meta-tegra/discussions/515 for monthly meeting schedules.

Finally, see this page for information about contributing to the project as well as information about testing and test coverage. This project is built and supported by volunteers and we greatly appreciate your participation.

OpenEmbedded/Yocto BSP layer for NVIDIA Jetson Modules

Jetson Linux release: R36.5.0 JetPack release: 6.2.2

Boards supported:

  • Jetson AGX Orin development kit
  • Jetson Orin NX 16GB (p3767-0000) in Xavier NX (p3509) carrier
  • Jetson Orin NX 16GB (p3767-0000) in Orin Nano (p3768) carrier
  • Jetson Orin Nano development kit
  • Jetson AGX Orin Industrial 64GB (P3701-0008) in Orin AGX (P3737) carrier

This layer depends on: URI: git://git.openembedded.org/openembedded-core branch: master LAYERSERIES_COMPAT: whinlatter

CUDA toolchain compatibility note

CUDA 12.6 supports up through gcc 13.2 only, so recipes are included for adding the gcc 13.2 toolchain to the build for CUDA use, and cuda.bbclass has been updated to pass the g++ 13 compiler to nvcc for CUDA code compilation.

Getting Help

For general build issues or questions about getting started with your build setup please use the Discussions tab of the meta-tegra repository:

  • Use the Ideas category for anything you’d like to see included in meta-tegra, Wiki content, or the tegra-demo-distro.
  • Use the Q&A category for questions about how to build or modify your Tegra target based on the content here.
  • Use the “Show and Tell” category for any projects you’d like to share which are related to meta-tegra.
  • Use the General channel for anything that doesn’t fit well into the categories above, and which doesn’t relate to a build or runtime issue with Tegra Yocto builds.

Reporting Issues

Use the Issues tab in meta-tegra for reporting build or runtime issues with Tegra Yocto build targets. When reporting issues, please include as much information about your environment as you can: for example, the target hardware you are building for, branch/version information, etc. Please fill in the provided bug template when reporting issues.

We are required to provide an e-mail address, but please use GitHub as described above, instead of sending e-mail to oe4t-questions@madison.systems.

Contributing

Please see CONTRIBUTING.md for information on submitting patches to the maintainers.

Contributions are welcome!

Currently maintained branches

Last update: 28 Feb 2026

The OE4T demo distro has corresponding branches to demonstrate full builds for the Jetson platforms supported by this layer.

Branches are named for the OE-Core branch name each one tracks; see this page for Yocto Project releases and branches.

For Jetson Linux (L4T) releases:

  • Our master branch, and other master- prefixed branches, track the latest available Jetson Linux releases and OE-Core master.
  • Branches corresponding to regular (non-LTS) OE-Core release branches track the latest Jetson Linux release at the time the branch is created.
  • Branches corresponding to long-term support (LTS) OE-Core release branches are kept up to date with the Jetson Linux releases. When there is a significant Jetson Linux upgrade, an additional LTS branch is created for the older release series.

Active branches:

Deprecated branches that receive less attention:

Older branches, no longer actively maintained:

  • styhead - L4T R36.4.0/JetPack 6.1 for AGX Orin/Orin NX/Orin Nano
  • nanbield - L4T R35.4.1/JetPack 5.1.2 for AGX Xavier/Xavier NX/AGX Orin/Orin NX/Orin Nano
  • mickledore - L4T R35.4.1/JetPack 5.1.2 for AGX Xavier/Xavier NX/AGX Orin/Orin NX/Orin Nano
  • langdale - L4T R35.2.1/JetPack 5.1 for AGX Xavier/Xavier NX/AGX Orin/Orin NX
  • honister - L4T R32.6.1/JetPack 4.6 for TX1/TX2/TX2-NX/Xavier/Xavier-NX/Nano/Nano-2GB
  • hardknott - L4T R32.5.2/JetPack 4.5.1 for TX1/TX2/TX2-NX/Xavier/Xavier-NX/Nano/Nano-2GB
  • gatesgarth - L4T R32.4.4/JetPack 4.4.1 for TX1/TX2/Xavier/Xavier-NX/Nano/Nano-2GB
  • dunfell-l4t-r32.6.1 - L4T R32.6.1/JetPack 4.6 for TX1/TX2/TX2-NX/Xavier/Xavier-NX/Nano/Nano-2GB
  • dunfell-l4t-r32.5.0 - L4T R32.5.2/JetPack 4.5.1 for TX1/TX2/TX2-NX/Xavier/Xavier-NX/Nano/Nano-2GB
  • dunfell-l4t-r32.4.3 - L4T R32.4.3/JetPack 4.4 for TX1/TX2/Xavier/Xavier-NX/Nano
  • dunfell-l4t-r32.4.2 - L4T R32.4.2/JetPack 4.4DP for TX1/TX2/Xavier/Xavier-NX/Nano
  • dunfell-l4t-r32.3.1 - L4T R32.3.1/JetPack 4.3 for TX1/TX2/Xavier/Nano
  • dunfell - L4T R32.7.4/JetPack 4.6.4 for TX1/TX2/TX2-NX/Xavier/Xavier-NX/Nano/Nano-2GB
  • zeus-l4t-r32.3.1 - L4T R32.3.1/JetPack 4.3 for TX1/TX2/Xavier/Nano
  • zeus - L4T R32.2.3/JetPack 4.2.3 for TX1/TX2/Xavier/Nano
  • warrior - L4T R32.1/JetPack 4.2 for TX1/TX2/Xavier/Nano, L4T R21.7 for TK1
  • warrior-l4t-r32.2 - L4T R32.2/JetPack 4.2.1 for TX1/TX2/Xavier/Nano, L4T R21.7 for TK1
  • thud - L4T R28.2.1 for TX1 and TX2, L4T R21.7 for TK1
  • thud-l4t-r28.3 - L4T R28.3 for TX1/TX2, L4T R21.7 for TK1
  • thud-l4t-r32.1 - L4T R32.1 for TX1/TX2, L4T R21.7 for TK1 (not fully tested)
  • thud-l4t-r32.3.1 - L4T R32.3.1/JetPack 4.3 for TX1/TX2/Xavier/Nano (not fully tested)
  • sumo - L4T R28.4.0 for TX1 and TX2, L4T R21.7 for TK1
  • rocko - L4T R28.1 for TX1 and TX2, L4T R21.6 for TK1
    • rocko-l4t-r28.2 - updates TX1 and TX2 to L4T R28.2.1, TK1 to L4T R21.7
  • pyro - L4T R24.2.1 for TX1, R27.1 for TX2
    • pyro-l4t-r24.2.2 - updates TX1 to L4T R24.2.2
    • pyro-l4t-r28.1 - updates TX1 and TX2 to L4T R28.1
  • morty - L4T R24.2.1 for TX1, R21.5 for TK1, no TX2 support
  • krogoth - L4T R24.1 for TX1
  • jethro - used for initial development, very out of date

Work-in-progress branches: any branch prefixed with wip- is work in progress, and can radically change or be deleted at any time.

This total-beginner’s guide will walk you through the process of flashing a newly-generated image to your Jetson development kit! The instructions here are for branches based off L4T R32.4.3 and later. (For earlier releases, click the revisions count, under the title, to go back to an earlier revision of the page.)

Initrd Flashing

For branches based off L4T R35.1.0 and later (master, kirkstone, and langdale), and the kirkstone-l4t-r32.7.x branch, an alternative flashing process (called “initrd flashing”) is available, which supports flashing to a rootfs (APP partition) on an external storage device. See this page for more information.

The table below outlines the flashing mechanism(s) supported for each target root filesystem storage type on all recent branches (kirkstone-l4t-r32.7.x and later).

| Target Rootfs Storage | Flashing method |
| --- | --- |
| on-board eMMC | doflash.sh or initrd-flash |
| SDCard | doflash.sh or initrd-flash; dosdcard.sh may be used for subsequent programming after initial bootloader programming with doflash.sh or initrd-flash |
| NVMe | initrd-flash |
| M.2 drive or SATA drive | initrd-flash |

Prerequisites

Before you get started, you’ll need the following:

  • A suitable USB cable. For most Jetsons, this is a type A to micro-B cable, but for the AGX Xavier and AGX Orin dev kits, you’ll need a USB-C cable (or a USB-C to type A cable, if your development host does not have USB-C ports). As NVIDIA mentions in their documentation, it’s important to use a good-quality cable for successful flashing.

  • A free USB port on your development machine. The flashing tools work best if you can connect directly to a port on your system, rather than using a USB hub.

  • For L4T R32.5.0 and later, you must have the dtc command in your PATH, since the NVIDIA tools use that command when preparing the boot files for some of the Jetsons. On Ubuntu systems, that command is provided by the device-tree-compiler package.

  • For L4T R35 and later, you must have the GNU cpp command in your PATH (and not the LLVM/Clang cpp, see #1959).
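The host-tool requirements above can be checked with a short shell snippet (a sketch; it only verifies that the commands are in PATH, not whether cpp is the GNU version):

```shell
# Check that the host tools required for flashing (dtc, cpp) are in PATH.
for cmd in dtc cpp; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: missing"
  fi
done
```

On Ubuntu, a missing dtc can be installed via the device-tree-compiler package, as noted above; for cpp, run cpp --version to confirm it is GNU rather than LLVM/Clang.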

While not required, a serial console connection is very useful, particularly with troubleshooting flashing problems, since the bootloaders only write messages to the serial console.

Please note, also, that flashing typically does not work from a virtual machine. You should be running the flashing tools directly on a Linux host.

For SDcard-based development kits

If you have a Jetson Nano or Jetson Xavier NX development kit, you’ll need a good-quality MicroSDHC/SDXC card, preferably 16GB or larger. Higher-speed cards (at least UHS-I) are preferred, particularly if you plan to program the SDcard through an SDcard reader/writer on your development host. The reader/writer should be high-speed also, and connected through a high-speed I/O interface (e.g., USB 3.1).

Programming an SDcard in a reader/writer attached to your host is also faster (much faster) if you have the bmaptool command in your PATH. On Ubuntu systems, that command is provided by the bmap-tools package. (But note that bmaptool requires sudo.)

The Jetson AGX Xavier development kit also supports booting from a MicroSD card instead of the on-board eMMC, with some limitations.

Avoiding sudo

You can avoid using sudo during the flashing/SDcard writing process (except for using bmaptool, as noted above) by adding yourself to suitable groups and installing a udev rules file to give yourself access to the Jetsons via USB. The following instructions are for Ubuntu; other distros may have other groups or require additional setup.

  • For SDcard writing, add yourself to group disk.
  • For USB flashing, add yourself to group plugdev.

You can use this script to install the udev rules that grant the plugdev group write access to the Jetson devices when they are connected in recovery mode to your development host.

Note that after changing your group membership and/or udev rules, you may need to reboot your development host for the changes to take effect. It’s worth this extra setup, though, to eliminate the need for root access.
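As a quick sketch (using the Ubuntu group names from the bullets above), you can check your current group membership before making any changes:

```shell
# Report whether the current user is already in the groups needed for
# sudo-less SDcard writing (disk) and USB flashing (plugdev).
for g in disk plugdev; do
  if id -nG | tr ' ' '\n' | grep -qx "$g"; then
    echo "$g: ok"
  else
    echo "$g: missing (add with: sudo usermod -a -G $g \$USER)"
  fi
done
```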

Building a tegraflash package

All of the Jetson machine configurations add a tegraflash image type by default, which generates a compressed tarball containing all of the files, tools, and scripts for flashing the device and/or creating a fully-populated SDcard. If you’ve successfully run a bitbake build of an image, you should see a file called

<image-type>-${MACHINE}.tegraflash.tar.gz

or, in more recent branches,

<image-type>-${MACHINE}.rootfs.tegraflash.tar.<compression>

in the directory $BUILDDIR/tmp/deploy/images/${MACHINE}, where <compression> is either gz or zst, depending on the branch you are using (zstd replaced gzip as the default compression method in Feb 2025).
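For illustration, the newer-branch artifact name can be composed from these pieces (the image type and MACHINE below are assumed examples; substitute your own values):

```shell
# Hypothetical example values; substitute your actual image type and MACHINE.
IMAGE=core-image-minimal
MACHINE=jetson-orin-nano-devkit
COMP=zst   # gz on branches that predate the switch to zstd
echo "${IMAGE}-${MACHINE}.rootfs.tegraflash.tar.${COMP}"
# prints: core-image-minimal-jetson-orin-nano-devkit.rootfs.tegraflash.tar.zst
```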

Using an SDcard with the Jetson AGX Xavier

By default, the tegraflash package for the AGX Xavier is set up for flashing the on-board eMMC. If you want to boot your Xavier off an SDcard instead, you should add the following to your build configuration (e.g., in $BUILDDIR/conf/local.conf):

  TEGRA_ROOTFS_AND_KERNEL_ON_SDCARD = "1"
  ROOTFSPART_SIZE = "15032385536"

The ROOTFSPART_SIZE setting is for a 16GB SDcard; adjust the size as needed for a larger or smaller card.
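As a sanity check on where that number comes from: 15032385536 bytes is exactly 14 GiB, which leaves headroom on a nominal 16GB (16 * 10^9 byte) card for the other partitions:

```shell
# 14 GiB in bytes; matches the ROOTFSPART_SIZE value shown above.
echo $((14 * 1024 * 1024 * 1024))
# prints: 15032385536
```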

With these settings in place, the resulting tegraflash package supports flashing the bootloader files to the on-board eMMC, with the kernel, device tree, and rootfs placed on the SDcard. Note that this is only supported for the Jetson AGX Xavier, and that SDcard booting does not support the bootloader redundancy features.

With this configuration, there will be two scripts in the tegraflash package: dosdcard.sh for writing the SDcard, and doflash.sh for flashing the bootloader partitions to the eMMC. Run the dosdcard.sh script to format and write the SDcard on your development host, insert the SDcard into the slot on the AGX Xavier dev kit, then use the doflash.sh script to flash the bootloader partitions. (Unlike for Xavier NX devices, you must perform these steps separately.)

Unpacking the tegraflash package

To flash your Jetson, or create an SDcard image, create an empty directory and use the tar command to unpack the tegraflash package into it:

  $ mkdir ~/tegraflash
  $ cd ~/tegraflash
  $ tar -x -f $BUILDDIR/tmp/deploy/images/${MACHINE}/<image-type>-${MACHINE}.tegraflash.tar.gz

Be sure to use the tar command from a terminal window. Some users have reported issues with incorrect results when extracting files using GUI-based tools.

Setting up for flashing

  1. Start with your Jetson powered off. (NVIDIA recommends connecting hardware only while the device is powered off.)
  2. Connect the USB cable from your Jetson to your development host.
  3. Insert an SDcard into the slot on the module, if needed.
  4. Power on the Jetson and put it into recovery mode.

For SDcard-based Jetsons (Nano and Xavier NX), you have the option of programming the SDcard contents either during USB flashing or separately using an SDcard reader/writer on your development host. If you program the SDcard separately, perform that step first and insert the already-programmed card into the slot on the module in step 3 above. (When using an SDcard with the AGX Xavier, you must pre-program the SDcard first.)

To verify that the device is in recovery mode and that the USB cable is connected properly, use the following command:

  $ lsusb -d 0955:
  Bus 001 Device 006: ID 0955:7c18 NVIDIA Corp. T186 [TX2 Tegra Parker] recovery mode

If you don’t see your Jetson listed, double-check the cable and try the recovery mode sequence again.

Recovery mode jumpers and buttons

The different Jetson development kits have different mechanisms for entering recovery mode.

Jetson TX1 and TX2 development kits

Press and hold the REC (“recovery”) button, press and release the RST (“reset”) button. Continue to hold the REC button for 3-4 seconds, then release. [[images/TX1-TX2-Devkit-RecoveryMode-Button.jpg|alt=TX1-TX2 buttons]]

Jetson AGX Xavier development kit

Press and hold the center button, and press and release the reset button (on the right). [[images/AGX-Xavier-RecoveryMode-Button.jpg|alt=AGX Xavier buttons]]

Jetson Orin development kit

Press and hold the center button. Then plug in the power supply. Release the center button. Note that it can take 10-15 seconds for the device to fully enter recovery mode and export its serial console after power up.

All Jetson Nano and Xavier NX development kits

Connect a jumper between the 3rd and 4th pins from the right-hand side of the “button header” underneath the back of the module (FRC and GND; see the labeling on the underside of the carrier board). The module will power up in recovery mode automatically. [[images/Nano-NX-RecoveryMode-Jumper.jpg|alt=Nano-Xavier pins]]

For the older Jetson Nano rev A02 carrier boards, the FRC pin is in the 8-pin header next to the module, beside the MIPI-CSI camera interface. The pins are labeled on the underside of the carrier board.

Writing an SDcard

If you want to program the SDcard contents directly onto the card from your development host:

  1. Insert the card into the reader/writer on your host.
  2. Carefully determine the device name for the card. Using the wrong device name could destroy your host’s filesystems.
  3. Run the dosdcard.sh script to program the card.

Here is an example, for a system where /dev/sda is the device name of the card:

$ ./dosdcard.sh /dev/sda

Remember to use sudo, if needed. The script will ask you to confirm before writing (which you can skip by adding -y to the command above).

Creating an SDcard image

You can also create an SDcard image file that can later be written to one or more cards:

$ ./dosdcard.sh <filename>

The resulting file will be quite large, and writing the image can take a long time.

SPI flash on SDcard-based kits

The SDcard-based development kits store some (in some cases, all) of the bootloader content on a SPI flash device on the Jetson module. You must ensure that the bootloader content in this flash device is compatible with the layout on the SDcard you create, since the early-stage boot data is programmed with the locations/sizes of SDcard-resident partitions, and cannot read the GPT partition table at runtime. To do this, you must perform a USB flash to program the SPI flash at least once on your development kit, by following the steps in the next section.

Once the SPI flash has been programmed correctly, you should be able to update just by writing new SDcard images unless you make changes in your build that affect one of the boot-related partitions residing in the SPI flash, or change the flash layout XML in a way that alters the location/size of one of the SDcard-resident boot partitions (if there are any).

Flashing the Jetson

Once everything is set up, use the doflash.sh script to program the Jetson:

$ ./doflash.sh

Remember to use sudo to invoke the script, if needed. If successful, the Jetson will be rebooted into your just-built image automatically after flashing is complete.

For SDcard-based development kits, you can program just the boot partitions in the SPI flash with:

$ ./doflash.sh --spi-only

You should insert your programmed SDcard in the slot on the Jetson before performing this step, so when the Jetson reboots after the flashing process completes, it will boot into your image.

Automating Unpack and Flash Steps

If desired, you can use this script to automate unpacking the tegraflash package and running the ./doflash.sh script.

Issues during flashing

If you run sudo ./doflash.sh and flashing starts but then hangs at a step like:

[   1.7586 ] Flashing the device
[   1.7611 ] tegradevflash --pt flash.xml.bin --storageinfo storage_info.bin --create
[   1.7636 ] Cboot version 00.01.0000
[   1.7659 ] Writing partition GPT with gpt.bin
[   1.7666 ] [................................................] 100%
[   1.7707 ] Writing partition PT with flash.xml.bin
[  15.9892 ] [................................................] 100%
[  15.9937 ] Writing partition NVC with nvtboot.bin.encrypt
[  16.2433 ] [................................................] 100%
[  16.2569 ] Writing partition NVC_R with nvtboot.bin.encrypt
[  26.2706 ] [................................................] 100%
[  26.2877 ] Writing partition VER_b with jetson-nano-qspi-sd_bootblob_ver.txt
[  36.3103 ] [................................................] 100%
[  36.3202 ] Writing partition VER with jetson-nano-qspi-sd_bootblob_ver.txt
[  36.5833 ] [................................................] 100%
[  36.5927 ] Writing partition APP with test-image.ext4.img
[  36.8548 ] [................................................] 100%

or if it fails with output like the following:

[   1.9394 ] 00000007: Written less bytes than expected
[  21.7219 ] 
Error: Return value 7
Command tegradevflash --pt flash.xml.bin --storageinfo storage_info.bin --create

It’s helpful to connect a serial console, which in the above case will print something like:

[0020.161] device_write_gpt: Erasing boot device spiflash0
[0039.824] Erasing Storage Device
[0039.827] Writing protective mbr
[0039.833] Error in command_complete 18003 int_status
[0039.840] Error in command_complete 18003 int_status
[0039.847] Error in command_complete 18003 int_status
[0039.852] sending the command failed 0xffffffec in sdmmc_send_command at 109
[0039.859] switch command send failed 0xffffffec in sdmmc_send_switch_command at 470
[0039.866] switch cmd send failed 0xffffffec in sdmmc_select_access_region at 1301
[0039.876] Error in command_complete 18001 int_status
[0039.883] Error in command_complete 18001 int_status
[0039.890] Error in command_complete 18001 int_status
[0039.895] sending the command failed 0xffffffec in sdmmc_send_command at 109
[0039.902] setting block length failed 0xffffffec in sdmmc_block_io at 945
[0039.909] block I/O failed 0xffffffec in sdmmc_io at 1215
[0039.914] block write failed 0xffffffec in sdmmc_bdev_write_block at 178
[0039.921] device_write_gpt: failed to write protective mbr
[0039.926] Number of bytes written -20
[0039.930] Written less bytes than expected with error 0x7
[0039.935] Write command failed for GPT partition

Things to try:

  • The USB cable must be plugged directly into the host PC; don’t use a USB hub, or issues like those described above will appear.
  • Verify the USB cable quality (try another cable).
  • Power the device off and on, then try flashing again.

General Tegraflash Troubleshooting

See Tegraflash-Troubleshooting

Notes on extending support for flashing Jetson devices that boot from external storage media (NVMe, USB).

Last update: 25 Jul 2025

This is currently supported on branches based off JetPack 5/L4T R35 or later, and kirkstone-l4t-r32.7.x. For R32.7.x, there is support for T210 (TX1/Nano) as well as T186 (TX2) and T194 (Xavier) targets.

Prerequisites

Beyond the normal host tools required for building and normal flashing, you should also have these commands available on your build host:

  • sgdisk (from the gdisk/gptfdisk package)
  • udisksctl (part of the udisks2 package)

You should disable automatic mounting of removable media in your desktop settings. On recent Ubuntu (GNOME), go to Settings -> Removable Media, and check the box next to “Never prompt or start programs on media insertion.” You may also need to update the /org/gnome/desktop/media-handling/automount setting via dconf. Check the setting with:

$ dconf read /org/gnome/desktop/media-handling/automount

If it reports true, set it with:

$ dconf write /org/gnome/desktop/media-handling/automount false

For Ubuntu 24.04, use gsettings, and also disable automount-open:

$ gsettings set org.gnome.desktop.media-handling automount false
$ gsettings set org.gnome.desktop.media-handling automount-open false

If the bmaptool command is available, it will be used for writing to the storage device, which speeds up writes but (currently) requires root privileges (the scripts will automatically use sudo to invoke it when needed).

No additional host changes should be required.

Your image needs to include a device-tree with usb2-0 in otg mode - as here.

Avoiding Sudo

Note: sudo access will be needed when writing the disks using bmap-tools. The method below avoids sudo while mounting/unmounting the flash package and related block devices.

For running the initrd-flash script without sudo, the host changes mentioned in the “Avoiding sudo” section on the Flashing the Jetson Dev Kit wiki page still apply.

In addition, to avoid prompts for authentication at several points in the process, you need to configure polkit appropriately. On Ubuntu 22.04 this can be accomplished by running the following script snippet as root (e.g., via sudo):

cat << EOF > /var/lib/polkit-1/localauthority/50-local.d/com.github.oe4t.pkla
[Allow Mounting for Disk Group]
Identity=unix-group:disk
Action=org.freedesktop.udisks2.filesystem-mount
ResultAny=yes

[Allow Power Off Drive for Disk Group]
Identity=unix-group:disk
Action=org.freedesktop.udisks2.power-off-drive
ResultAny=yes
EOF
chmod 644 /var/lib/polkit-1/localauthority/50-local.d/com.github.oe4t.pkla
systemctl restart polkit

Build configuration

No configuration is required if you just want to use initrd flashing and still keep your rootfs on the Jetson’s internal storage device. You only need to add a configuration setting if you want to configure your system to have its rootfs (APP partition) on an external storage device. To do that, add a line to your local.conf such as:

TNSPEC_BOOTDEV:jetson-xavier-nx-devkit-emmc = "nvme0n1p1"

  • If trying this out with a different Jetson device, use the MACHINE name for the override in the above.
  • If trying USB storage instead of NVMe, use sda1 as the boot device instead of nvme0n1p1.

Flashing after build

  1. Put the Jetson device into recovery mode and connect it to your host via the USB OTG port.
  2. Unpack the tegraflash tarball into an empty directory.
  3. cd to that directory and run ./initrd-flash to start the flashing process.

The script:

  1. Uses the RCM boot feature to download a special initrd and kernel that sets up the device as a USB mass storage gadget.
  2. Waits for the USB storage device to appear on the host, then copies in the bootloader files and a command sequence for the target that instructs it to start the boot device update, and tells it which storage device(s) should be exported to the host for writing.
  3. Uses the make-sdcard script to write to storage device(s). This happens in parallel with the target’s programming of the boot device.
  4. Waits for the target to export another storage device to report its final status and the logs generated on the target. The script copies the device logs into a subdirectory. When finished, it releases the storage device, and the target reboots automatically.

Note: add your Linux user to the disk group to avoid needing sudo to run the initrd-flash script.

Re-flashing just the rootfs storage device

The initrd-flash script has a --skip-bootloader option for skipping the programming of the boot partitions, so you can re-flash just the rootfs storage device. You should only use this option if you have already programmed the boot partitions once with the versions you’re using for your current build.

Possible future enhancements

  • Develop the kernel/initrd used here into a more general “recovery” image, and/or apply it to cross-version OTA updates, although those use cases will probably require something a bit different and more customization.
  • See if something could be done to automate setup when using LUKS encryption. Direct formatting and partition writing from the host isn’t really an option there. A hybrid approach (formatting and cryptsetup done on the device, then exporting the encrypted partitions via USB) should be workable.

How it works

  • The helper scripts now support an --external-device option that passes appropriate options to tegraflash.py (needed since one of the BCTs appears to include information about the external storage device for the boot chain to work), and an --rcm-boot option to allow direct download/execution of a kernel+initrd image.
  • The SDcard-related support in the nvflashxmlparse and make-sdcard scripts was generalized to distinguish between the ‘boot’ device and any ‘rootfs’ device.
  • The tegra-flash-init recipe was added to install a minimal init script for the flashing kernel, which sets up a USB mass storage gadget for the device to be flashed. The serial number advertised by the gadget is the unique chip id (ECID) of the Tegra SoC.
  • The initrd-flash script and the flashing kernel/initrd are added to the tegraflash package to drive the process. The ECID (unique ID) of the SoC is extracted during initial RCM contact and used to locate the correct /dev/sdX device for the partition writing.
  • A find-jetson-usb script has been added to wait for the appearance of the Jetson (in recovery mode) on the host USB bus.
  • The tegraflash package generator in image_types_tegra.bbclass exports additional settings (e.g., the TNSPEC_BOOTDEV setting) in the file .env.initrd-flash for use by the initrd-flash script.
  • The tegra-bootfiles recipe populates an external flash layout (XML) file in addition to the main (internal storage) flash layout file. The default layouts from the L4T kit are modified, if required, to ensure that the boot and kernel partitions are present in the correct layout (with no duplicates) when TNSPEC_BOOTDEV is set for using external storage.

Notes

  • RCM booting on T194 platforms bypasses the UEFI bootloader, directly loading the kernel from nvtboot. This means that the kernel/initrd does not have access to any EFI variables. UEFI is used in the RCM boot chain on T234 platforms.
  • On Xavier NX dev kits (SDcard-based), you must still have an SDcard installed in the slot even if you are booting off an external drive. The SDcard must not have an esp or APP partition on it. You must manually reformat the SDcard, as the flashing process will not do that for you. For all other Jetsons with internal eMMC storage, the eMMC will be erased as part of the flashing process (and re-partitioned/re-populated for those platforms that store some of the bootloader binaries in the eMMC).
  • Based on readings of some NVIDIA dev forum posts, A/B updates in JetPack 5.0 do not work properly in all cases when booting off an external drive. That is supposed to be fixed in JetPack 5.1.
  • Depending on your device’s configuration (e.g., having multiple storage devices attached), you may need to manually configure the boot order in the UEFI bootloader by hitting ESC when UEFI starts, and then selecting Boot Maintenance Manager, then Boot Options, then Change Boot Order. This is a limitation in JetPack 5.0 that is supposed to be fixed in JetPack 5.1.
  • If you use a custom flash layout for your builds, note that there are some limitations on the composition of your flash layout file(s) due to how the bootloaders and the NVIDIA tools work. For example, you cannot use a SPI flash-only layout for internal storage, since the BUP payload generator expects to be able to create a payload containing the kernel/kernel DTB. The generator will fail during the build, since those partitions are not present in the SPI flash. You also cannot use a single flash layout that includes only the boot partitions (in, for example, SPI flash on AGX Orin and Xavier NX) and the external storage device (nvme). The tools that generate the MB1 BCT and/or MB2 BCT will error out because those bootloaders cannot access external storage. Hopefully NVIDIA will resolve these limitations in a future release.

Comparison with stock L4T initrd flashing

  • OE builds are per-machine, so much of the additional scripting to handle different targets during the flashing process can be omitted.
  • With OE builds, TNSPEC_BOOTDEV selection is performed at build time. Switching back and forth between external rootfs and internal storage should be done with different builds.
  • Stock L4T provides its initrd in prebuilt form, which requires disassembling and reassembling the initrd in the flashing scripts. With OE, we can build the flashing initrd directly.
  • Stock L4T requires customizing the external drive’s flash layout to specify the exact size of the storage device, in sectors. That’s not required with OE builds, which do not use NVIDIA’s flashing tools to partition the external drive.
  • Stock L4T inserts udev rules on the host during flashing and does some network setup to talk to the device. The process implemented for OE builds does not use any networking and does not require any udev rules changes during the flashing process. You also don’t have to be root to perform initrd-based flashing for OE builds, if you have followed the instructions here. (However, the bmaptool copy command used in the make-sdcard script does need root access for its setup, and the script will run it under sudo for you).

Limitations on using an external drive for the rootfs

  • On Jetson TX2 devices, the bootloaders do not have support for loading the kernel from an external drive. The kernel, initrd, and device tree must reside on the eMMC (along with some of the boot partitions).
  • Other Jetsons that boot directly from the eMMC (TX1, Nano-eMMC, Xavier NX-eMMC, AGX Xavier) also need to have some of the boot partitions in the main part of the eMMC.
  • With Jetsons running JetPack 5/L4T R35.1.0, you may need to manually interrupt the UEFI bootloader to adjust the boot order to favor the external drive. Even then, UEFI may attempt a PXE (network) boot first. (This appears to be fixed with JetPack 5.1/L4T R35.2.1.)

Known issues

  • On an AGX Orin configured to use an external drive for the rootfs (NVMe), once it has been flashed using initrd-flash, the RCM boot of the initrd-flash kernel stops working; the NVMe-resident OS is booted instead. This happens with the stock L4T initrd flashing tools also. To work around the problem, clear the partition table on the NVMe drive (e.g., using sgdisk /dev/nvme0n1 --clear) before resetting the Orin into recovery mode to start the re-flashing process.
  • On T210 platforms (TX1/Nano), if you use the normal doflash.sh script, boot binaries will get overwritten (due to the way the NVIDIA flashing tools work), and that will cause an “FDT_ERR_BADMAGIC” error if you later try to run initrd-flash. The error is minor, and probably won’t cause any real issues with the flashing/booting process. To be safe, though, you should not mix normal and initrd-based flashing in the same tegraflash directory.

Customizing External Storage Size

Beginning with JetPack 5.1.2 (r35.4.1) (and this commit), the TEGRA_EXTERNAL_DEVICE_SECTORS variable is used to customize the total size of the device containing the root filesystem (as well as all other partitions in PARTITION_LAYOUT_EXTERNAL). The default value of this variable assumes a device that is at least 64GB in size.

You may increase your root filesystem size to around 30 GB, leaving space for two root filesystem partitions (to support A/B redundancy) plus additional partitions, by defining ROOTFSPART_SIZE to a 4K-aligned value in bytes, using a setting like ROOTFSPART_SIZE = "30032384000" in your local.conf.

If you have an external device larger than 64GB and would like to use it for a larger root filesystem, then in addition to modifying ROOTFSPART_SIZE you will also need to adjust TEGRA_EXTERNAL_DEVICE_SECTORS to specify a larger size in sectors. For instance, to specify a ~60 GB rootfs on a 128 GB flash drive, use ROOTFSPART_SIZE = "60064768000" and TEGRA_EXTERNAL_DEVICE_SECTORS = "250000000".
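
The sizing rules above can be sanity-checked with plain shell arithmetic before editing local.conf. A minimal sketch, assuming 512-byte sectors and using the ~60 GB / 128 GB example values from this page:

```shell
#!/bin/sh
# Sanity-check candidate values before putting them in local.conf.
# ROOTFSPART_SIZE must be 4K-aligned, and two copies of the rootfs
# (for A/B redundancy) must fit on the device with room to spare
# for the other partitions in the external layout.
ROOTFSPART_SIZE=60064768000
TEGRA_EXTERNAL_DEVICE_SECTORS=250000000
DEVICE_BYTES=$((TEGRA_EXTERNAL_DEVICE_SECTORS * 512))

[ $((ROOTFSPART_SIZE % 4096)) -eq 0 ] || {
    echo "ROOTFSPART_SIZE is not 4K-aligned" >&2; exit 1; }
[ $((2 * ROOTFSPART_SIZE)) -lt "$DEVICE_BYTES" ] || {
    echo "two rootfs copies do not fit on the device" >&2; exit 1; }

echo "ok: 2 x rootfs uses $((200 * ROOTFSPART_SIZE / DEVICE_BYTES))% of the device"
```

For the example values this reports that the two rootfs copies use 93% of the device, leaving a few gigabytes for the remaining partitions in PARTITION_LAYOUT_EXTERNAL.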

General Tegraflash Troubleshooting

See Tegraflash-Troubleshooting

OE4T Contributor Guide

See the CONTRIBUTING.md file for details about contributing to this repository.

In addition to code and documentation contributions we greatly appreciate help in the form of testing.

Please see the Release and Validation sheet for a list of current test coverage and test cases. Request edit access on this sheet if you’d like to help contribute.

CONTRIBUTING

Thank you for contributing to the OE4T project! Your contributions are greatly appreciated!

Submitting Code Changes

The OE4T project repositories follow the OpenEmbedded Guidelines. Please review these when proposing your Pull Request. A few highlights and additional requirements:

  • Please submit issues or pull requests through GitHub. Only rebase and squash commits are used for PRs, so if you have a PR that is outstanding for a long time, please keep your branch up to date by rebasing your changes rather than merging.
  • Group commits based on their functionality and the components changed. For the first line, use something like component: Short Summary to describe your change, where component refers to the specific software component being changed.
  • Please try to make incremental changes with multiple commits, rather than “big bang” single commits with changes spread across multiple components.
  • Add a Signed-off-by: line to your commit, using git commit -s or a pre-commit hook like the one set up with this script, using your real name and e-mail address (no anonymous contributions, please). This indicates that you have the right to submit the patch per the Developer’s Certificate of Origin in the next section.
  • Target the master branch for pull requests unless your change is specific to an earlier branch.

Developer’s Certificate of Origin

By making a contribution to this project, I certify that:

  1. The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or
  2. The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or
  3. The contribution was provided directly to me by some other person who certified (1), (2) or (3) and I have not modified it.
  4. I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved.

(Adapted from the Linux kernel’s certificate of origin.)

Submitting Documentation Changes

Documentation is served as an mdbook based on the content in the docs directory. Please open a PR for documentation changes on the relevant branch. Please target the master branch for docs changes unless your changes are specific to older branches. Documentation content in older branches is based on a snapshot at branch time and may be out of date.

Instructions for r35

See https://github.com/OE4T/tegra-demo-distro/discussions/310#discussioncomment-10534547

  1. Grab the pinmux spreadsheet, configure the pins the way you need, then generate the new files: https://developer.nvidia.com/downloads/jetson-orin-nx-and-orin-nano-series-pinmux-config-template.
  2. This will give you three new dtsi files. Match them up with the existing files for your machine and merge in the changes you need (e.g., with a diff tool such as meld). The relevant recipe is tegra-bootfiles.
  3. Add these packages to your build: libgpiod, libgpiod-tools, libgpiod-dev.
  4. On the target, run gpioinfo and grep for the GPIO you want. In this case it was GPIO3_PCC.00.
  5. Take the controller number (0 or 1 in this case) and the line number; you should then be able to run gpioset -c 1 12=1 to set the pin, where -c 1 selects the controller and 12 is the line number.

A good reference: https://docs.nvidia.com/jetson/archives/r35.3.1/DeveloperGuide/text/HR/JetsonModuleAdaptationAndBringUp/JetsonOrinNxNanoSeries.html#generating-the-pinmux-dtsi-files

Jetpack 4 instructions for Controlling the pin states on the Jetson TX2 SoM

There are two ways:

  • Through bootloader configuration.
  • Through using the virtual /sys filesystem in userspace.

Pin settings in bootloader configuration

Summary

You need to do the following:

  1. Download a Microsoft Excel sheet(!) containing some macros(!!) and the L4T (“Linux For Tegra”) package from Nvidia’s download center. Note: for this you need an Nvidia developer account.
  2. In the Excel sheet, select the desired pin configuration using cell dropdown menus. Use the embedded macro to write out some device tree files.
  3. Use a Python script which comes with L4T to convert the device tree files into a format the bootloader can understand.
  4. Embed the bootloader configuration in the Yocto source tree.

Detailed steps

As an example, the following guide walks you through reconfiguring pin A9 from its default state (output GND) to input with weak pull-up.

  1. The MS Excel part:
    1. Visit the Nvidia developer download center and search for Jetson TX2 Series Pinmux. Here’s a direct link for 1.08. Download and run it with macros enabled.
    2. On the second sheet you’ll find the configuration for pin A9 on (at the time of writing) line 246. Cells in columns AR and AS define it as output grounding the signal. Change these cells to Input and Int PU.
    3. At the very top of the sheet, click the button labeled Generate DT file. Some dialogs will pop up asking for details that affect the output filenames.
  2. The Python part:
    1. Go to the Nvidia developer download center and search for Jetson Linux Driver Package (L4T). Follow the link to the L4T x.y.z Release Page. (For example, here’s the one for R32.4.3.) There, you should find a link labeled L4T Driver Package (BSP) leading to some tarball named similar to Tegra186_Linux_Rx.y.z_aarch64.tbz2. (Again, as an example here’s the one for R32.4.3.). Uncompress it and change to Linux_for_Tegra/kernel/pinmux/t186/ inside.
    2. Run the pinmux-dts2cfg.py in the following way:
    python pinmux-dts2cfg.py \
        --pinmux \
        addr_info.txt \
        gpio_addr_info.txt \
        por_val.txt \
        --mandatory_pinmux_file mandatory_pinmux.txt \
        /path/to/your/excel-created/tegra18x-jetson-tx2-config-template-*-pinmux.dtsi \
        /path/to/your/excel-created/tegra18x-jetson-tx2-config-template-*-gpio-*.dtsi \
        1.0 \
        > /tmp/new.cfg
    
    If it throws errors, it might be related to this.
  3. Add a patch in your distro layer reflecting the pin settings in your /tmp/new.cfg created above.

Controlling/reading the pin state from userspace

You can control/read the pin value from the virtual /sys filesystem but not the pull up/down state.

Software-wise, the GPIOs have different names than on the schematic. Nvidia doesn’t make it easy to go from the schematic name (like A9) to the /sys name (like gpio488); user-contributed forum posts explain the mapping better than anything Nvidia has come up with so far.

Having found the /sys name for your pin, you can take the following snippets as an example:

The following snippet sets the gpio to output-low.

# GPIO488 is A9 on the SoM
pin=488
echo $pin > /sys/class/gpio/export
echo out  > /sys/class/gpio/gpio$pin/direction
echo 0    > /sys/class/gpio/gpio$pin/value

The following snippet sets the pin to input and reads its logical state:

# GPIO488 is A9 on the SoM
pin=488
echo $pin > /sys/class/gpio/export
echo in   > /sys/class/gpio/gpio$pin/direction
cat         /sys/class/gpio/gpio$pin/value

The meta-tegra layer includes MACHINE definitions for NVIDIA’s Jetson development kits. If you are developing a custom device using one of the Jetson modules with, for example, a custom carrier board, or you just want to modify the default boot-time configuration (pinmux, etc.) for an existing development kit as a separate MACHINE in your own metadata layer, you may need to supply a MACHINE-specific file for your builds.

IMPORTANT: For any custom carrier board/hardware design, make sure you consult the appropriate Platform Adaptation and Bring-Up Guide document available from the NVIDIA Developer Download site to get all the details on how to customize the pinmux configuration and other low-level hardware configuration settings. Failing to provide the correct settings could damage your device.

Boot-time hardware configuration and boot flash programming is particularly complicated for Jetson modules, and varies substantially between models. Consult a recent version of the L4T Driver Package Documentation, particularly the “BSP Customization” and “Bootloader” chapters, for background information. As mentioned above, the Platform Adaptation documentation is also a good reference.

NOTE: Due to restrictions in the implementation of bootloader update payloads, the length of your custom MACHINE name should be 31 characters or less.
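
The 31-character limit is easy to check up front with shell string length; a minimal sketch (the MACHINE name below is a made-up example):

```shell
#!/bin/sh
# Verify a candidate MACHINE name fits within the 31-character limit
# imposed by the bootloader update payload format.
# "my-custom-orin-board" is a hypothetical example name.
MACHINE="my-custom-orin-board"
if [ "${#MACHINE}" -gt 31 ]; then
    echo "MACHINE name '${MACHINE}' is too long (${#MACHINE} > 31)" >&2
    exit 1
fi
echo "MACHINE name '${MACHINE}' is ${#MACHINE} characters: ok"
```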

Jetson-TX1

No additional build-time files are necessary for MACHINEs based on the Jetson-TX1 module. All customizations can be done in the device tree and/or U-Boot. You’ll need to point your build at your customized kernel and/or U-Boot repository and set variables in the machine .conf file for your custom device.

Jetson-Nano

In the warrior and zeus branches, the only MACHINE-specific build-time file for Jetson-Nano is the SDCard layout file used by recipes-bsp/sdcard-layout/sdcard-layout_1.0.bb. If you modify the partition layout for the SDCard, you’ll need to supply a copy of the sdcard-layout.in file that matches the SDCard partitions you define in your customized version of the flash_l4t_t210_spi_sd_p3448.xml file from the L4T BSP.

Starting with the zeus-l4t-r32.3.1 branch, full support for all revisions and SKUs of the Jetson Nano module was added, and the SDcard layout file was eliminated. To modify your partition layout, you need only provide a customized copy of the flash_l4t_t210_spi_sd_p3448.xml (for 0000 SKUs) or flash_l4t_t210_emmc_p3448.xml (for 0002 SKUs) file. Different module revisions (FABs) use different device tree files, so you may need to have multiple device tree source files to account for module variants in your custom device/carrier.

Jetson-TX2 and Jetson-TX2i

For the Jetson-TX2 family, there are several boot-time configuration files that are machine-specific. Be sure to follow the Platform Adaptation Guide documentation carefully so all of the necessary customizations for the BPMP device tree and the MB1 .cfg files for the pinmux, PMIC, PMC, boot ROM, and other on-module hardware get created properly. The basic steps are filling in the pinmux spreadsheet and generating the dtsi fragments, then converting those fragments to cfg files using the L4T pinmux-dts2cfg.py script.

The recipes-bsp/tegra-binaries/tegra-flashvars_<bsp-version>.bb recipe installs a file called flashvars that identifies the boot-time configuration files that need to be processed by the tegra186-flash-helper script for feeding into NVIDIA’s flashing tools. With older OE4T branches, you need to supply a customized copy of the flashvars file in your BSP layer. With the latest branches, the flashvars file gets generated automatically from the variables listed in TEGRA_FLASHVARS. Check the recipe in meta-tegra to confirm which method you need to follow.

The files listed in your flashvars file must be installed into ${datadir}/tegraflash in the build sysroot by another recipe. The simplest method is to create an overlay for the recipes-bsp/tegra-binaries/tegra-bootfiles recipe, as it already extracts the files for the Jetson development kits from the L4T BSP package:

# The fetch task is disabled on this recipe, but we need our files included in the task signature.
CUSTOM_DTSI_DIR := "${THISDIR}/${BPN}"
FILESEXTRAPATHS:prepend := "${CUSTOM_DTSI_DIR}:"

SRC_URI:append:${machine} = "\
    file://tegra19x-${machine}-padvoltage-default.cfg \
    file://tegra19x-${machine}-pinmux.cfg \
    "

# As the fetch task is disabled for this recipe, we access the files directly out of the layer.
do_install:append:${machine}() {
    install -m 0644 ${CUSTOM_DTSI_DIR}/tegra19x-${machine}-padvoltage-default.cfg ${D}${datadir}/tegraflash/
    install -m 0644 ${CUSTOM_DTSI_DIR}/tegra19x-${machine}-pinmux.cfg ${D}${datadir}/tegraflash/
}

The specifics of the configuration files and variables required may vary from version to version of the L4T BSP, so be sure to review any changes when upgrading.

Jetson AGX Xavier

Jetson AGX Xavier systems are similar to Jetson-TX2, but (as of this writing) have only two version-dependent boot-time files - the BPMP device tree and the PMIC configuration. Consult the NVIDIA documentation for customization steps, and see the Jetson-TX2 section above for information on how to integrate your custom files into the build.

Note that AGX Xavier targets handle UEFI variables differently than other platforms. If you plan to use them with JetPack 5 branches, please read https://github.com/OE4T/meta-tegra/pull/1865 and note that you will likely want to define TNSPEC_COMPAT_MACHINE.

Jetson Xavier NX

Jetson Xavier NX systems are similar to Jetson AGX Xavier, but (as of this writing) have no version-dependent boot-time files. Consult the NVIDIA documentation for customization steps, and see the Jetson-TX2 section above for information on how to integrate your customized files into the build.

Jetson Orin

This guide is based on Jetson Linux R35.4.1, so change the bbappend names accordingly if you use a different release. Occurrences of ${machine} should be replaced by your machine name.

Create a new machine config

Create a new machine configuration at conf/machine/${machine}.conf in your layer. For guidance on what it should contain, look at any of the machine configurations in meta-tegra.

Create a new flash config

Create a new flash configuration at recipes-bsp/tegra-binaries/tegra-flashvars/${machine}/flashvars. You can start by copying one of the flashvars files in meta-tegra. To use the newly created flashvars file, create the following recipes-bsp/tegra-binaries/tegra-flashvars_35.4.1.bbappend:

FILESEXTRAPATHS:prepend := "${THISDIR}/${BPN}:"

Add pinmux dtsi files

Generate the pinmux dtsi files with the Nvidia pinmux Excel sheet (or this one for Orin AGX). Rename the resulting files to start with tegra234- (otherwise meta-tegra has issues handling them) and convert the line endings to Unix format using dos2unix. Copy the files to recipes-bsp/tegra-binaries/tegra-flashvars.

NOTE: If you manually rename your generated DTSI files, you may need to modify the #include statement on line 35 of your -pinmux.dtsi file, as it has the original filename for the -gpio-default.dtsi file hardcoded.

Install the files with the following tegra-bootfiles_35.4.1.bbappend:

# Hack: The fetch task is disabled on this recipe, so the following is just for the task signature.
FILESEXTRAPATHS:prepend := "${THISDIR}/${BPN}:"
SRC_URI:append:${machine} = "\
    file://tegra234-${machine}-gpio-default.dtsi \
    file://tegra234-${machine}-padvoltage-default.dtsi \
    file://tegra234-${machine}-pinmux.dtsi \
"

# Hack: As the fetch task is disabled for this recipe, we have to access the files directly.
CUSTOM_DTSI_DIR := "${THISDIR}/${BPN}"
do_install:append:${machine}() {
    install -m 0644 ${CUSTOM_DTSI_DIR}/tegra234-${machine}-gpio-default.dtsi ${D}${datadir}/tegraflash/
    install -m 0644 ${CUSTOM_DTSI_DIR}/tegra234-${machine}-padvoltage-default.dtsi ${D}${datadir}/tegraflash/
    install -m 0644 ${CUSTOM_DTSI_DIR}/tegra234-${machine}-pinmux.dtsi ${D}${datadir}/tegraflash/
}

(Don’t forget to replace ${machine} with your machine name.)

Then modify flashvars to use the files:

  • PINMUX_CONFIG should be set to your tegra234-${machine}-pinmux.dtsi
  • PMC_CONFIG should be set to your tegra234-${machine}-padvoltage-default.dtsi

(Optionally) disable board EEPROM usage

As explained in the Platform Adaptation and Bring-Up Guide by Nvidia, you might want to disable the usage of the board EEPROM. For that, create a copy of the file used in flashvars for MB2BCT_CFG and modify it according to the Nvidia guide. Include this new file in Yocto the same way as explained in Add pinmux dtsi files, and update MB2BCT_CFG in flashvars with the new file name.

Use a custom device tree

See Custom Device Tree and apply the described changes to your ${machine}.conf.

Customizing the kernel

For custom hardware, you’ll probably need to modify the kernel in at least one of the following ways:

  • Custom kernel configuration
  • Custom device tree
  • Adding patches

Starting with the L4T R32.3.1-based branches, you can use the Yocto Linux tools to apply patches and configuration changes during the build, although it may be simpler to fork the linux-tegra-4.9 repository to apply patches, and supply your own defconfig file for the kernel configuration. Having your own fork of the kernel sources should also be easier for creating a custom device tree. (You should also set the KERNEL_DEVICETREE variable in your machine configuration file appropriately.)

Custom MACHINE definitions for existing hardware

If you need to define an alternate MACHINE configuration for an NVIDIA Jetson development kit without altering the boot-time configuration files for hardware initialization, you can have your MACHINE reuse the existing files in meta-tegra. For example, let’s say you want to create tegraflash packages for the Jetson-TX2 development kit for both the default cboot->U-boot->Linux boot sequence as well as for booting directly from cboot to Linux, without U-Boot. In your BSP or distro layer, you could add a machine configuration file called, for example, conf/machine/jetson-tx2-cboot.conf that looks like this:

MACHINEOVERRIDES = "jetson-tx2:${MACHINE}"
require conf/machine/jetson-tx2.conf
PACKAGE_EXTRA_ARCHS_append = " jetson-tx2"
PREFERRED_PROVIDER_virtual/bootloader = "cboot-prebuilt"

This would override the bootloader settings in the default jetson-tx2 configuration to use cboot instead of U-Boot, but otherwise reuse all of the MACHINE-specific packages, files, and settings for the jetson-tx2 MACHINE in meta-tegra.

For Jetson Xavier NX based machine types - jetson-xavier-nx-devkit and jetson-xavier-nx-devkit-emmc, the conf/machine/custom-machine.conf would look like this:

require conf/machine/jetson-xavier-nx-devkit-emmc.conf
MACHINEOVERRIDES = "cuda:tegra:tegra194:xavier-nx:jetson-xavier-nx-devkit-emmc:${MACHINE}"
PACKAGE_EXTRA_ARCHS_append = " jetson-xavier-nx-devkit-emmc"

Custom Device Tree

In many cases it is desirable to avoid forking or patching the kernel sources. The devicetree bbclass can be used to create a custom dtb. There’s an example in tegra-demo-distro documented at Using-device-tree-overlays which accomplishes this for recent branches.

Custom Partitioning

See Redundant-Rootfs-A-B-Partition-Support for suggestions regarding defining partition layout files for your MACHINE.

This page describes one mechanism for enabling disk encryption on meta-tegra, using the notes from Islam Hussein in this thread on matrix.

The encryption happens as a post-process initiated manually after the build.

Yocto changes

  1. Modify your partition xml to set ‘encrypted’ to true on the corresponding partition, as described in the NVIDIA Disk Encryption Documentation.
<partition name="data-partition" type="data" encrypted="true">
  2. Choose a different init script to be used in the initramfs, one which uses luks-srv-app and then disables it entirely to prevent further use. See the code snippet below. For the “context”, refer to the build changes section below.
__l4t_enc_root_dm="l4t_enc_root";
__l4t_enc_root_dm_dev="/dev/mapper/${__l4t_enc_root_dm}"
eval nvluks-srv-app -g -c "<context>" | cryptsetup luksOpen /dev/nvme0n1p${current_rootfs} ${__l4t_enc_root_dm}

Build changes

Add a bash script to be called manually after the Yocto build finishes. The script goes to the build output path, extracts the image into a temporary directory, mounts the rootfs, and opens it. It then creates a LUKS volume. This step cannot be done inside the Yocto build: opening the LUKS volume with cryptsetup requires access to the device mapper, which needs privileged access that the Yocto build does not have.

  • Store the size of the rootfs that is written in the XML; the sizes have to match, and the LUKS volume must be created with that same size.
  • To generate the passphrase you’ll need to run gen_ekb.py.
  • You’ll have to write down a dummy UUID, which is the “context” used in the code snippet above. (The context is used in two places: generating the passphrase to encrypt the rootfs, and generating the passphrase to access it.)
  • One option is to use a generic passphrase which does not depend on the ECID, so the same key will be used for all devices.
GEN_LUKS_PASS_CMD="tools/gen_luks_passphrase.py"
genpass_opt=""
genpass_opt+=" -k tools/ekb.key "
genpass_opt+=" -g "
genpass_opt+=" -c '${__rootfsuuid}' "

GEN_LUKS_PASS_CMD+=" ${genpass_opt}"

truncate --size ${__rootfs_size} ${__rootfs_name}
eval ${GEN_LUKS_PASS_CMD} | sudo cryptsetup \
       --type luks2 \
       -c aes-xts-plain64 \
       -s 256 \
       --uuid "${__rootfsuuid}" \
       luksFormat \
       ${__rootfs_name}
eval ${GEN_LUKS_PASS_CMD} | sudo cryptsetup luksOpen ${__rootfs_name} ${__l4t_enc}
sudo mkfs.ext4 /dev/mapper/${__l4t_enc}
sudo mount /dev/mapper/${__l4t_enc} ${__enc_rootfs_mountpoint}
sudo mount  ${__original_rootfs} ${__rootfs_original_mountpoint}
sudo tar -cf - -C ${__rootfs_original_mountpoint} . | sudo tar -xpf - -C ${__enc_rootfs_mountpoint}
sleep 5
sudo umount ${__enc_rootfs_mountpoint}
sudo cryptsetup luksClose ${__l4t_enc}
sudo umount ${__rootfs_original_mountpoint}

Linux 4.x Kernel Notes

Starting with the 4.4 kernel, NVIDIA maintains separate repositories for some of their hardware-specific drivers and the device tree files. To simplify kernel builds under OE-Core, the linux-tegra recipes for 4.4 and later point to a repository where the files in those separate repositories have been merged back together using git subtrees.

This makes it more difficult to compare the sources used here against the NVIDIA upstream sources, but simplifies the recipe and the management of any patches that might be needed.

Notes on integration of the Jetson-customized NVIDIA container runtime (beta version 0.9.0) with Docker support. See this page for information on how this is integrated with the JetPack SDK.

Supported branches

Support for the container runtime is available on the zeus-l4t-r32.3.1 and later branches.

Layers required

In addition to the OE-Core and meta-tegra layers, you will need the meta-virtualization layer and the meta-oe, meta-networking, and meta-python layers from the meta-openembedded repository.

Configuration

Add virtualization to your DISTRO_FEATURES setting.

Building

  1. To run any containers, add nvidia-docker to your image.
  2. The Docker containers that NVIDIA supplies do not bundle in most of the hardware-specific libraries needed to run them, but expect them to be provided by the underlying host OS, so be sure to include TensorRT (note), CuDNN, and/or VisionWorks, if you expect to be running containers needing those packages.
  3. For containers that use GStreamer, be sure to include the Jetson-specific GStreamer plugins you may need.

NVIDIA DEVNET MIRROR and SDK Manager

JetPack 4.3 content, as well as CUDA host tool support prior to this PR, is not anonymously downloadable from NVIDIA’s servers and requires NVIDIA_DEVNET_MIRROR to be set up with the path to your SDK Manager downloads.

Attempting to build recipes which require host-tool CUDA support will fail with a message like:

ERROR: Nothing PROVIDES 'cuda-binaries-ubuntu1804-native'
cuda-binaries-ubuntu1804-native was skipped: Recipe requires NVIDIA_DEVNET_MIRROR setup

To resolve this, you must use the NVIDIA SDK Manager to download the content to your build host, then add this setting to your build configuration (e.g., in conf/local.conf under your build directory):

NVIDIA_DEVNET_MIRROR = "file://path/to/downloads"

By default, the SDK Manager downloads to a directory called Downloads/nvidia/sdkm_downloads under your $HOME directory, so use that path in the above setting.
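
As a small convenience, you can check that the default download directory exists before building and print the matching local.conf line (the path is simply the SDK Manager default noted above):

```shell
#!/bin/sh
# Check for the SDK Manager default download directory and print the
# corresponding NVIDIA_DEVNET_MIRROR setting for local.conf.
MIRROR_DIR="$HOME/Downloads/nvidia/sdkm_downloads"
if [ -d "$MIRROR_DIR" ]; then
    echo "NVIDIA_DEVNET_MIRROR = \"file://$MIRROR_DIR\""
else
    echo "mirror directory not found: $MIRROR_DIR" >&2
fi
```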

See example in tegra-demo-distro which demonstrates setting the path to the default download directory used by NVIDIA SDK manager.

There may be times when you need to perform the equivalent of a re-flashing of your Jetson-based device without being able to use the normal flashing process via USB. This is possible, although there are some risks, and it requires careful setup and testing.

Possible applications:

  • You need to alter the layout of the partitions in the Jetson’s eMMC storage.

  • You need to update a Jetson running software based off an older version of the L4T BSP to a newer version that requires a modified layout of the eMMC and/or SPI flash (for Jetsons that have a SPI flash boot device).

  • You just need the equivalent of a full “factory reset” that restores the device to a pristine state.

This page walks through a basic example of how to do this, using tools and scripts that you can modify/adapt as needed. The example uses a Jetson-TX2 development kit target; it has also been tested with Xavier and Nano development kits.

Overview of the process

The goal here is to perform the equivalent of a USB “tegraflash” on the running device. What that entails is: erasing/reformatting the storage devices on the Jetson module and writing the correct boot code/data, kernel, rootfs, etc. so that on reboot, the device successfully boots into the image.

To do this under Linux, we can’t be running in a rootfs that is mounted in the on-module storage. If your device supports external storage that is bootable, you could use that, or you could run the process entirely from an initial RAM disk loaded with the Linux kernel. The following example uses the latter approach.

Ingredients

  • The tegra-sysinstall repo contains the scripts that execute the overall process.
  • The tegra-boot-tools repo contains the tools for writing the boot partitions.
  • Example recipes for creating the initramfs image for the TX2 running an old L4T R32.1-based build are here.
  • The new image, based on L4T R32.5.0, is built from this test distro.

Key considerations

  • The flash layout from the new image build is used to generate configuration files that the tools use for correctly re-partitioning the storage devices. To ensure that the bootloaders and Linux agree on the eMMC partition layout, the primary GPT must be at least 16,896 bytes (33 512-byte sectors). (This is the case with the stock flash layouts for all recent releases of L4T.)

  • The partition_table file generated by the sysinstall-partition-layout recipe from the R32.5.0-based build must be copied into the metadata for the warrior/R32.1-based build, since that file will be part of the warrior-based initramfs.

  • The tegra-bootloader-update tool uses a BUP payload as the source of the contents for all of the boot partitions. The stock L4T BUP payload generator does not include all of the boot partition contents. Recent commits into meta-tegra include patches for the generator to include the missing pieces for TX2 and Xavier platforms. Update 10 Jun 2021: The additions to the BUP payload turned out to be incompatible with the stock L4T nv_upgrade_engine bootloader update program on the TX2 and were reworked here to create an alternate payload that contains the full complement of boot partitions for TX2-based platforms.

  • The tegra-sysinstall script expects the new rootfs image to be in tarball form, and does not perform any authentication or sanity checking on the image, so it is only usable for development purposes and should not be used in production.
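
The 33-sector minimum in the first consideration above follows from the standard GPT layout, assuming the usual 128 partition entries of 128 bytes each; a quick arithmetic check:

```shell
#!/bin/sh
# Primary GPT size: one 512-byte header sector plus 32 sectors of
# partition entries (128 entries x 128 bytes = 16384 bytes).
ENTRY_BYTES=$((128 * 128))
GPT_SECTORS=$((1 + ENTRY_BYTES / 512))
echo "primary GPT: ${GPT_SECTORS} sectors = $((GPT_SECTORS * 512)) bytes"
```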

Build process

  • The R32.5.0-based build includes tegra-bup-payload, which installs a BUP payload in /opt/ota_package and pulls in the bootloader update tool. The demo-image-egl image was used for this example. Note that it has IMAGE_FSTYPES set to include building a tar.gz tarball for the rootfs.

  • The sysinstall-upgrader-initramfs recipe in the warrior/R32.1-based tree builds a BUP payload containing the kernel and initrd suitable for installing with nv_update_engine on a system running an R32.1-based image. (Note that for platforms using U-Boot, installing the initrd would require a different process.)

  • The core-image-base image from the warrior/R32.1 tree was used as the starting point for the example.

Process steps

  1. Start by flashing the R32.5.0-based image directly on the TX2. Use sgdisk /dev/mmcblk0 --print to display the partition table, and save that output so you can compare the results against the partition table created later during the installation process.

  2. Boot the core-image-base image from the warrior/R32.1-based distro on the TX2.

  3. Because the filesystem size is not expanded out to the full APP partition size in this build, use mkfs.ext4 to format the UDA partition, and mount that at /mnt.

  4. rmdir /opt/ota_package, then ln -sn /mnt /opt/ota_package to provide space for the BUP payload.

  5. Use wget to download the sysinstall-upgrader-initramfs-jetson-tx2.bup-payload built in the warrior/R32.1-based build tree as /opt/ota_package/bl_update_payload.

  6. Use nv_update_engine --enable-ab, then nv_update_engine --install no-reboot to install the BUP payload. If successful, reboot.

  7. The kernel command line in the initramfs image doesn’t have console= set, so be patient while the image loads (takes about a minute or so) - there is no kernel output during the boot.

  8. mkdir /var/extra, as this directory is needed as a mount point during the installation.

  9. mkdir /installer and use wget or curl to download the demo-image-egl-jetson-tx2-devkit.tar.gz tarball from the R32.5.0-based build, naming it /installer/image.tar.gz.

  10. Run tegra-sysinstall to start the installation process. After it reformats the eMMC, the script displays the new partition table. Verify that the partition start and end sectors match the ones displayed in step 1 (after flashing the R32.5.0 image directly). If there is a mismatch, the device will probably not boot properly.
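The partition-table comparison in step 10 can also be scripted rather than eyeballed. Below is a minimal sketch, assuming the sgdisk --print output from step 1 was saved to a file; compare_ptables is an illustrative name, not part of the meta-tegra tooling:

```shell
# Hedged sketch: compare the sgdisk --print output saved in step 1 against
# the table printed during installation, looking only at the partition
# number, start sector, and end sector columns (headers are ignored).
compare_ptables() {
    saved_rows=$(awk '$1 ~ /^[0-9]+$/ {print $1, $2, $3}' "$1")
    current_rows=$(awk '$1 ~ /^[0-9]+$/ {print $1, $2, $3}' "$2")
    if [ "$saved_rows" = "$current_rows" ]; then
        echo "MATCH"
    else
        echo "MISMATCH"
        return 1
    fi
}
```

Usage would be something like `compare_ptables saved-step1.txt current.txt`, with the file names as placeholders.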

This section could be customized for a specific delivery mechanism. For instance, instead of using wget to download the BUP payload, the package could be delivered through your preferred update mechanism. If using an A/B update scheme like the one used for tegrademo-mender, it should be possible to use the filesystem in the new boot partition to host the BUP payload and image content.

Installation steps

These are the steps performed by tegra-sysinstall:

  1. The sgdisk command (from the gptfdisk package) is used to zap the GPT partition table and create all of the partitions on the eMMC, based on the configuration file at /usr/share/tegra-sysinstall/partition_table.

  2. The APP, APP_b, DATA, LOGS, and EXTRA partitions are formatted using mkfs.ext4.

  3. The EXTRA partition is mounted at /var/extra for use as temporary storage.

  4. The rootfs tarball is unpacked into the APP partition, then into the APP_b partition.

  5. The boot partitions are initialized by chrooting into the just-installed APP partition to run tegra-bootloader-update --initialize using the BUP payload and /usr/share/tegra-boot-tools/boot-partitions.conf configuration file from the just-installed rootfs.

Once the above steps are complete, the device can be rebooted, and should boot into the R32.5.0-based image.

(Please note that the tegra-sysinstall scripts were developed to test support for secure boot combined with LUKS encrypted filesystems and programming a unique machine ID in the odm_reserved fuses, so there are several functions in the scripts that can be ignored/skipped for testing the installation process.)
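As an illustration, the slot-population part of step 4 can be sketched as a small helper; populate_slots is a hypothetical name, and it assumes the slot filesystems are already formatted and mounted at the paths given:

```shell
# Illustrative sketch of step 4: unpack the same rootfs tarball into each
# A/B slot mount point passed as an argument.
populate_slots() {
    tarball="$1"
    shift
    for slot in "$@"; do
        tar -xzf "$tarball" -C "$slot" || return 1
    done
}
```

A call such as `populate_slots /installer/image.tar.gz /path/to/APP /path/to/APP_b` (paths illustrative) populates both slots with identical content.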

Things to watch out for

  • If the initramfs with the installation tools is too large for cboot to handle properly (it has some compiled-in limits on the amount of memory it can reserve for the initial RAM disk), you’ll see data abort errors on the serial console.

  • If the BUP payload is missing any of the boot-related contents, the device will fail to boot when rebooting after the installation process is complete - one of the early-stage bootloaders will report errors on the serial console, and the device should go into USB recovery mode.

  • The above can also happen if there is a mismatch in the starting offsets and/or sizes of any of the boot partitions in the eMMC and the expected offsets that got built into the boot control tables for the bootloader during BUP generation. It’s important that the /usr/share/tegra-sysinstall/partition_table configuration file in the initramfs gets correctly generated from the same flash layout XML file that you are using for the image you are upgrading to.

  • Any power interruption or other event that resets or reboots the device, or otherwise interrupts the reflashing process, will render the device unbootable. Since the process can take several minutes (depending on the specific hardware, size of the image being installed, etc.), use of this process should be managed carefully.

  • Full BUP support in meta-tegra, covering multiple module revisions in a single payload, was added with the update to L4T R32.3.1. If you are currently running builds based on an older version of L4T, you may run into boot issues after installing the upgrader BUP payload on some TX2 modules. Adjusting the TEGRA_FAB setting in your build configuration to match the actual FAB revision of the module(s) you’re using should help with this.

Video walkthrough

See the OE4T May 2021 meeting video and notes for initial discussion and walkthrough of the content discussed here.

Testing

PPS GPIO Support on Jetson TX1/TX2

I thought I would add this here in the event someone else is searching for how to add a PPS input to TX1/TX2 systems. Hours of reading and searching yielded nothing beyond the fact that NVIDIA doesn’t support it on the dev kits and provides no further information. I hope someone can take this and use it for what they need, whether on commercial carriers or even on the dev kit board. Maybe this is fairly common knowledge to those who work in device trees all the time, but as a newcomer to ARM and device trees, I would have found a page like this extremely valuable.

My setup is a TX1 on the Astro carrier from ConnectTech. I’m using the pyro-r24.2.2 branch of meta-tegra and the pyro branch for poky/meta-openembedded.

I requested the DTS files for the ASG001 (Astro carrier) from ConnectTech and created my own machine layer, using the jetson-tx1 machine from meta-tegra as a starting point. This utilizes the 3.10 kernel.

To enable PPS support, I added the following block immediately below the gpio@6000d000 section of mono-tegra210-jetson-tx1-CTI-ASG001.dts:

        pps {
                gpios = <&{/gpio@6000d000} 187 0>;

                compatible = "pps-gpio";
                status = "okay";
        };

This only added PPS support to the device tree; however, the 3.10 kernel doesn’t support PPS GPIO clients in the device tree, so that support needed to be added by manually applying this patch to the source (I applied it in the tmp/work-shared kernel source git repo and created a patch that I used in my linux-tegra bbappend): https://github.com/beagleboard/meta-beagleboard/blob/master/common-bsp/recipes-kernel/linux/linux-mainline-3.8/pps/0003-pps-gpio-add-device-tree-binding-and-support.patch

For later releases (it appears as early as R27.1), PPS GPIO support for device trees is present in the linux-tegra kernel, so the only requirement is adding the pps block to the DTS.

Finally, ensure that CONFIG_PPS and CONFIG_PPS_CLIENT_GPIO are enabled in your kernel configuration (I copied the defconfig, modified it, and added a do_configure_prepend() to my bbappend).

do_configure_prepend() {
        cp ${WORKDIR}/defconfig-cti ${WORKDIR}/defconfig
}
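The defconfig change itself just enables the two options mentioned above:

```
CONFIG_PPS=y
CONFIG_PPS_CLIENT_GPIO=y
```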

At that point, building a typical image (I use core-image-full-cmdline; I take it others will work the same way) gives a functional PPS input into the kernel.

As of 08 Dec 2023, this feature is supported in the kirkstone, mickledore, nanbield, and master branches.

As of the latest JetPack 5 R35.x releases, NVIDIA provides partition layouts which support Root File System Redundancy: bootloader slots and rootfs slots are paired, so that at boot the root filesystem partition matching the selected bootloader slot (a or b) is chosen automatically.

When paired with the UEFI capsule update feature, a redundant root filesystem supports switching the root filesystem, kernel, and kernel dtb to match the updated bootloader slot. Combined with an update tool that can update the kernel, dtb, and rootfs partitions (swupdate, RAUC, Mender, or others), a capsule update can therefore also switch to an updated rootfs through the redundant rootfs feature.

If you have the available root filesystem space to support redundant rootfs, using a redundant partition layout at the outset of your project might give you the option to support updates later without a repartition (or tegraflash) of the device.

Selecting Redundant Root Filesystem Partition Layout

By default, both the stock NVIDIA-provided JetPack image and OE4T images use the non-redundant partition layouts.

To use NVIDIA’s redundant partition layouts and automatically apply the necessary dtb changes performed by NVIDIA’s flash.sh script, on branches which include https://github.com/OE4T/meta-tegra/pull/1428 you simply need to set USE_REDUNDANT_FLASH_LAYOUT_DEFAULT = "1" in your distro configuration, custom MACHINE configuration, or local.conf. This is currently supported for most targets; see the notes below for limitations.

This configuration is set as the default for all supported targets when building with tegra-demo-distro.

Testing Root Filesystem A/B Slot Switching

See the sequence in https://github.com/OE4T/meta-tegra/pull/1428 to validate root slot and boot slot switching.

Setting Up a Custom MACHINE

Use these variables to set up a MACHINE or distro with support for redundant flash layouts:

  • USE_REDUNDANT_FLASH_LAYOUT_DEFAULT - Set to "1" in your distro layer to use redundant flash layouts for any supported MACHINEs. Set to "0" to use the default non-redundant layouts from NVIDIA when using tegra-demo-distro ("1" is the default for master branch builds of tegra-demo-distro).
  • ROOTFSPART_SIZE_DEFAULT - Set with the size of the root filesystem partition when using the default (non-redundant) flash layout. This size will be automatically divided by 2 when USE_REDUNDANT_FLASH_LAYOUT is selected.
  • PARTITION_LAYOUT_TEMPLATE_DEFAULT - Set to the partition layout to use with the default (non-external, non-redundant) flash layout, for instance custom_layout.xml. Either provide a custom_layout_rootfs_ab.xml file or define PARTITION_LAYOUT_TEMPLATE_REDUNDANT with your redundant file.
  • PARTITION_LAYOUT_TEMPLATE_DEFAULT_SUPPORTS_REDUNDANT - Set to "1" if no PARTITION_LAYOUT_TEMPLATE_REDUNDANT is required for this MACHINE (and the same template is used for redundant or non redundant builds).
  • PARTITION_LAYOUT_EXTERNAL_DEFAULT - Set with the default partition layout when using an external device (sdcard or NVMe) for rootfs partition storage, for instance custom_external_layout.xml. Either provide a custom_external_layout_rootfs_ab.xml file or define PARTITION_LAYOUT_EXTERNAL_REDUNDANT with your redundant file.
  • HAS_REDUNDANT_PARTITION_LAYOUT_EXTERNAL - Set to "0" if your MACHINE does not support a PARTITION_LAYOUT_EXTERNAL_REDUNDANT and therefore does not support USE_REDUNDANT_FLASH_LAYOUT_DEFAULT.
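Taken together, a hypothetical MACHINE (or distro) configuration fragment using these variables might look like the following; all file names and the partition size are placeholders:

```
# Hypothetical example only; adjust names and sizes for your hardware.
USE_REDUNDANT_FLASH_LAYOUT_DEFAULT = "1"
ROOTFSPART_SIZE_DEFAULT = "16106127360"
PARTITION_LAYOUT_TEMPLATE_DEFAULT = "custom_layout.xml"
PARTITION_LAYOUT_TEMPLATE_REDUNDANT = "custom_layout_rootfs_ab.xml"
PARTITION_LAYOUT_EXTERNAL_DEFAULT = "custom_external_layout.xml"
PARTITION_LAYOUT_EXTERNAL_REDUNDANT = "custom_external_layout_rootfs_ab.xml"
```

Note that when the redundant layout is selected, the build divides ROOTFSPART_SIZE_DEFAULT by 2 to size each slot.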

Overriding BSP Layer Changes

Use ROOTFSPART_SIZE, PARTITION_LAYOUT_EXTERNAL and PARTITION_LAYOUT_TEMPLATE as done before changes in https://github.com/OE4T/meta-tegra/pull/1428, to provide your own implementation outside the BSP layer and ignore the setting of USE_REDUNDANT_FLASH_LAYOUT.

Limitations

NVIDIA does not provide a redundant flash layout for flash_l4t_external.xml. Any targets which use flash_l4t_external.xml (as of https://github.com/OE4T/meta-tegra/pull/1295, the Orin NX 16GB in the P3509 carrier, the Orin NX 16GB in the P3768 carrier, and the Orin Nano 4GB in the P3768 carrier) use HAS_REDUNDANT_PARTITION_LAYOUT_EXTERNAL ?= "0" and therefore do not support the USE_REDUNDANT_FLASH_LAYOUT feature described here. Alternatively, override USE_REDUNDANT_FLASH_LAYOUT = "1" and set PARTITION_LAYOUT_EXTERNAL_DEFAULT ?= "flash_l4t_nvme.xml" or your custom external layout, but be aware of issue https://github.com/OE4T/meta-tegra/discussions/1286.

SPI support on 40 pin header - Jetson Nano devkit

To enable SPI support on the Jetson Nano, please use this patch. The patch covers the Jetson Nano (eMMC and SDcard versions) only.

After applying the patch, SPI devices are available at /dev/spidev0.0 and /dev/spidev0.1 (as generic spidev devices). You can use the spidev_test tool with the MOSI/MISO pins shorted together to check that communication is working as expected.

Note: some extension boards with SPI chips may not work due to the level shifters assembled on the 40 pin header. Please refer to 40 pin header considerations for more details.

Jetson secure boot support in L4T R35.2.1 implements a different chain of trust from what was present in the L4T R32 releases:

  • The Trusty secure OS has been replaced by OP-TEE, which allows for dynamic loading of trusted applications (TAs) from the non-secure world. TAs must be signed, and the public key used for checking the signature is compiled into the OP-TEE OS.
  • The cboot bootloader has been replaced by UEFI, which uses its own set of keys for validating signatures on binaries that it loads (Linux kernel, EFI applications, and EFI capsules).

NOTE: NVIDIA made some changes to the UEFI bootloader in L4T R35.5.0 that require that an “authentication key” be programmed into the Encrypted Key Block on secured devices. If you are updating your secured device from an earlier R35.x release to R35.5.0, you must update the EKB on the device with the added key. See this developer forum thread for more information.

Getting started

Start by reading the Secure Boot section of the Jetson Linux Developer’s Guide.

The sections below cover specifics of how secure boot and signing are implemented for OE/Yocto builds with meta-tegra.

Bootloader signing

Setting fuses for secure boot

Follow the instructions in the NVIDIA documentation for generating keys and burning secure boot fuses for your Jetson device. Be warned that burning the fuses is a one-time operation, so be extremely careful. You could render your Jetson permanently unbootable if something goes wrong during the fuse burning process.

Build-time bootloader signing

If you have the bootloader signing and encryption key files available, you can add the following setting to your local.conf to create signed boot images and BUP packages:

TEGRA_SIGNING_ARGS = "-u /path/to/pkc-signing-key.pem -v /path/to/sbk.key --user_key /path/to/user.key"

These arguments parallel the ones used with the L4T flash.sh script for signing:

  • The -u option takes the path name of the RSA private key for PKC signing.
  • The -v option takes the path name of the SBK key used for encrypting the binaries loaded at boot time.
  • The --user_key option takes the path name of the encryption key you create for use with the NVIDIA sample OP-TEE TAs.

Note that with R35.2.1, the --user_key encryption key is used only for the XUSB firmware. Starting with R35.3.1, the user encryption key is not used for any of the boot firmware.

Build-time bootloader signing will be performed on the boot-related files in the tegraflash package for flashing, as well as the entries in any bootloader update payloads (BUPs).

Post-build signing

You can elect to perform bootloader signing outside of the build process by adding the -u, -v, and --user_key options when running the doflash.sh or initrd-flash script during flashing of your tegraflash package. For BUP generation, add those options when running the generate_bup_payload.sh script to have the bootloader components signed.
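For example, when flashing a package that was not signed at build time, the invocation might look like this (the key paths are placeholders):

```
$ ./doflash.sh -u /path/to/pkc-signing-key.pem -v /path/to/sbk.key --user_key /path/to/user.key
```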

UEFI Secure Boot

To enable UEFI secure boot support, start by generating the PK, KEK, and DB keys and related configuration files, as described in the UEFI Secure Boot section of the Jetson Linux documentation.

It should be noted that UEFI boot is not compatible with the legacy secure boot supported on Tegra devices.

Build-time UEFI signing

During the build, signing of the EFI launcher app, the kernel, and device tree files is performed automatically when the following settings are present in your build configuration:

TEGRA_UEFI_DB_KEY = "/path/to/db.key"
TEGRA_UEFI_DB_CERT = "/path/to/db.crt"

Both settings must be present, and must point to one of the DB keys you generated (you do not need the PK or KEK keys).

Post-build UEFI signing

Post-build UEFI signing is not currently supported.

Enrolling UEFI keys at build time

To enable UEFI secure boot, the PK, KEK, and DB keys you generated must be “enrolled” at boot time. On Jetson platforms, this is done by adding the needed key enrollment variable settings to the bootloader’s device tree via the UefiDefaultSecurityKeys.dts file you generated when creating the keys and configuration files. For meta-tegra builds, you can supply this file by adding a bbappend for the tegra-uefi-keys-dtb.bb recipe in one of your own metadata layers. In the commands below, substitute MY_LAYER with the path to your layer and MY_UEFI_KEYS_DIR with the path to the uefi_keys directory you set up when following the instructions linked above:

export MY_LAYER=tegra-demo-distro/layers/meta-tegrademo
export MY_UEFI_KEYS_DIR=~/uefi_keys/
mkdir -p ${MY_LAYER}/recipes-bsp/uefi
cat > ${MY_LAYER}/recipes-bsp/uefi/tegra-uefi-keys-dtb.bbappend <<'EOF'
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
EOF
mkdir -p ${MY_LAYER}/recipes-bsp/uefi/files
cp ${MY_UEFI_KEYS_DIR}/UefiDefaultSecurityKeys.dts ${MY_LAYER}/recipes-bsp/uefi/files/
# The copy below is optional; it is only needed if you plan to update your keys with a capsule update.
cp ${MY_UEFI_KEYS_DIR}/UefiUpdateSecurityKeys.dts ${MY_LAYER}/recipes-bsp/uefi/files/

Enrolling UEFI keys at runtime

The Jetson Linux documentation describes the process for enrolling UEFI keys and enabling UEFI secure boot at runtime. You will need to add some packages to your image build to make the necessary commands available. As of this writing, runtime enrollment has not been tested.

OP-TEE Trusted Application signing

OP-TEE provides a mechanism for loading TAs from the “Rich Execution Environment” (REE, another term for the normal, non-secure OS), which must be signed with a key that is known to the OP-TEE OS. Read the OP-TEE documentation on TAs for more information.

By default, a development/test key from the upstream OP-TEE source is compiled in; this configuration should not be used in any production device, since the key is publicly available. You should generate a suitable RSA keypair as described in the OP-TEE documentation. For build-time signing, add a bbappend for the optee-os recipe in one of your layers; it should resemble the following:

FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
SRC_URI += "file://optee-signing-key.pem"
EXTRA_OEMAKE += "TA_SIGN_KEY=${WORKDIR}/optee-signing-key.pem"

Post-build signing of TAs is more difficult, since external TAs are generally packaged and installed into the root filesystem as part of the build. For that approach, though, you would include the public key file in the optee-os bbappend and set TA_PUBLIC_KEY instead of TA_SIGN_KEY. The OP-TEE makefiles will sign TAs with a dummy private key, but the public key you specify will be compiled into the secure OS. You will have to work out how to re-sign the TAs with your actual private key before they are used.

Using the NVIDIA built-in sample TAs

To make use of the encryption/decryption functions NVIDIA provides by default with their OP-TEE implementation, you will need to supply an “Encrypted Keyblob” (EKB) that corresponds to the KEK/K2 fuses you have burned on your Jetson device. Instructions for generating an EKB are in this section of the Jetson Linux documentation. See the note at the top of this page for information about changes in L4T R35.5.0 that require the re-generation of the EKB.

The tegra-bootfiles recipe installs the default EKB from the L4T kit. Add a bbappend for that recipe to replace the default with the custom EKB for your device.

Generating a Custom EKB

Before replacing the default EKB in your Yocto build, you must generate a custom one that matches the OEM_K1 fuse burned on your Jetson device. To do this, you need the gen_ekb.py script from the NVIDIA OP-TEE sample code base (for the hwkey-agent sample). You can find that script either in the L4T public sources tarball or on NVIDIA’s git server (making sure you choose the branch for the L4T version you are targeting).

Example:

python3 gen_ekb.py -chip t234 \
    -oem_k1_key oem_k1.key \
    -in_sym_key2 sym2_t234.key \
    -in_auth_key auth_t234.key \
    -out eks_t234.img

where

  • oem_k1.key is the OEM_K1 key stored in the OEM_K1 fuse.
  • sym2_t234.key is the disk encryption key.
  • auth_t234.key is the UEFI variable authentication key.
  • eks_t234.img is the generated EKB image to be flashed to the EKS partition of the device.

Kernel encryption is not currently supported in meta-tegra, so do not provide the UEFI payload encryption key (using -in_sym_key).

Secure Boot Support

Bootloader signing is supported for all Jetson targets for which secure boot is available (consult the L4T documentation). Support was added in the zeus branch for tegra186 (Jetson-TX2), and extended to the other SoC types in the dunfell-l4t-r32.4.3 branch.

Note that with L4T R35.2.1 and later, the secure boot sequence has changed. See this page for more information.

Setting fuses for secure boot

To enable secure boot on your device, follow the instructions in the L4T BSP documentation and the README included in the L4T Secure Boot package that can be downloaded here.

Caveats

  • The odmfuse.sh script in some L4T releases has a bug that causes fusing to fail on Jetson-TX2 devices; see issue #193 for an explanation and patch.
  • The L4T bootloader for tegra210 (TX1/Nano) has a bug that always disables secure boot during fuse burning in versions of L4T prior to R32.4.4. See this NVIDIA Developer Forum post for more information, and patched copies of the bootloader with a fix.
  • NVIDIA does not support secure boot on SDcard-based developer kits (Jetson Nano/Nano-2GB and Jetson Xavier NX). You may render your developer kit permanently unbootable if you attempt to burn the secure boot fuses.
  • The tools and scripts in L4T for secure boot support do not appear to be very well tested from release to release, and occasionally regressions get introduced that break fuse burning for some of the Jetson platforms, so be very careful when updating to a new release of the BSP.

Enabling boot image and BUP signing during the build

If you have the signing and (optional) encryption key files available, you can add the following setting to your local.conf to create signed boot images and BUP packages:

TEGRA_SIGNING_ARGS = "-u /path/to/signing-key.pem -v /path/to/encryption-key"

The additional arguments will be passed through to the flash-helper script and all files will be signed (and boot files will be encrypted, if the -v option is provided) during the build. The doflash.sh script in the resulting tegraflash package will flash the signed files to the devices. This is similar to the flashcmd.txt script you would get if you used the L4T flash.sh script with the --no-flash option as mentioned in the NVIDIA secure boot documentation.

Kernel and DTB encryption

Starting with L4T R32.5.0, cboot on tegra186 (TX2) and tegra194 (Xavier) platforms expects the kernel (boot.img) and kernel device tree to be encrypted as well as signed. This encryption is performed by a service in Trusty and uses a different encryption key than the one used for encrypting the bootloaders. See the L4T documentation for information on setting this up.

If you have set up kernel/DTB encryption on your device, add --user_key /path/to/kernel-encryption-key to TEGRA_SIGNING_ARGS. If you do not go through the extra steps of setting up a kernel encryption key, an all-zeros key will be used by default.
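Putting this together with the bootloader signing options above, the full local.conf setting would look something like the following, with all key paths as placeholders:

```
TEGRA_SIGNING_ARGS = "-u /path/to/signing-key.pem -v /path/to/encryption-key --user_key /path/to/kernel-encryption-key"
```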

Manual signing

If you prefer not to have the signing occur during your build, you can manually add the necessary arguments to your invocation of doflash.sh after unpacking the tegraflash package. For example:

$ BOARDID=<boardid> FAB=<fab> BOARDSKU=<boardsku> BOARDREV=<boardrev> ./doflash.sh -u /path/to/signing-key.pem -v /path/to/encryption-key

The environment variable settings you need on the command will vary from target to target; consult the “Signing and Flashing Boot Files” section of the L4T BSP documentation for the specifics.

With recent branches, BUP generation can also be performed manually. The tegraflash package includes a generate_bup_payload.sh script that can be run with the same -u (and, if applicable, -v) options to generate a BUP payload.

Using a code signing server

If you prefer not to have your signing/encryption keys local to your development host, you can override the tegraflash_custom_sign_pkg and tegraflash_custom_sign_bup functions in image_types_tegra.bbclass to package up the files in the current working directory, send them to be signed, then unpack the results back into the current directory. Everything needed to perform the signing, except for the keys, will be present in the package sent to the server. An example implementation of a code signing server is available here.
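As a rough sketch of the shape such an override can take (the sign-client command and its options below are purely hypothetical placeholders for whatever transport your signing server uses):

```
tegraflash_custom_sign_pkg() {
    # Bundle everything needed for signing (the keys stay on the server)
    tar -cf to-sign.tar --exclude=to-sign.tar .
    # Hypothetical client that sends the bundle off and retrieves the result
    sign-client --input to-sign.tar --output signed.tar
    # Unpack the signed files back into the current working directory
    tar -xf signed.tar
    rm -f to-sign.tar signed.tar
}
```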

Tegra Specific Gstreamer Plugins

Originally, the machine configurations set MACHINE_GSTREAMER_1_0_PLUGIN to include the gstreamer1.0-plugins-tegra package, which is the base set of binary-only gstreamer plugins provided with L4T. In more recent releases, this has been changed to point to gstreamer1.0-omx-tegra instead (via the now-current MACHINE_HWCODECS variable) to make it easier to build multimedia-ready images.

Note that since the OpenMAX plugins package is flagged as commercially licensed, it is also whitelisted in the machine configuration with:

LICENSE_FLAGS_WHITELIST_append = " commercial_gstreamer1.0-omx-tegra"

Update 2020-09-17

Starting with the branches using L4T R32.4.3 (dunfell-l4t-r32.4.3 and later), the commercially-licensed flag was removed from the OpenMAX plugin recipe, as the sources are available and do not appear to contain any encumbered code.

This page includes some guidance on how to resolve or work around issues with device flashing using the tegraflash package built by the Yocto build.

General Troubleshooting Tips/Suggestions

  1. Make sure you are using the correct flashing operation for your device/target storage. See the table here for guidance.
    • If your target can support either method, try the alternate method as a troubleshooting step.
  2. Try swapping USB cables/ensure you are using a high quality cable.
  3. Try power cycling the device/entering tegraflash mode from power on rather than reboot.
  4. Try running as root (or via sudo) rather than a user account, especially if any error messages mention permissions.
  5. Switch to an alternative USB host controller, as several people have noticed issues with particular controllers. See this issue for instance.
    • If you are using a USB 3.0 add-in card, switch to the one connected to the motherboard.
    • Try a USB 2.0 port if you have no other USB 3.0 controllers.
  6. Note any failures in logs for the respective flashing method
    • Start with the console log.
    • Connect the serial console on the target device if possible.
    • For initrd-flash steps, consult the host and device logs which are output at the end of the flash process.
  7. Suspect issues with the partition table, especially if you’ve modified it or increased the sizes of partitions
    • Obscure errors like cp: cannot stat 'signed/*': No such file or directory typically mean you’ve got some problem with your custom partition table and/or target storage device size. See this issue for example.
  8. Attempt to reproduce with a devkit and a similar setup from tegra-demo-distro.
  9. Use hardware recovery mode entry rather than reboot force-recovery
    • See instructions at Flashing-the-Jetson-Dev-Kit for putting the device in recovery mode.
    • Although it’s possible to use reboot force-recovery, note the issues here which can occur in some scenarios. Using hardware recovery is typically a safer option if you are experiencing issues with tegraflash.
  10. Check whether the power-saving TLP package is installed and running (it is often installed on notebooks/laptops to save battery power), as it disturbs the flashing process. Use sudo apt remove tlp and reboot your host computer to remove it before flashing.
  11. Use the command line to extract the tegraflash.tar.gz image file. When extracting with a GUI app, the esp.img file can become corrupted. Use a command line such as tar -xf your-image.tegraflash.tar.gz, then follow the normal flashing procedure with the doflash.sh script.

Update: 10 Feb 2025

In the master branch:

  • The image type for tegraflash packages has been changed to tegraflash.tar.
  • The zip format for tegraflash packages has been removed. Zip packages do not work well with Linux sparse files, which are used for the EXT4 filesystem images we include in the package.
  • The default for IMAGE_FSTYPES is now set to tegraflash.tar.zst, using zstd compression on the package, which provides good compression with much faster compression and decompression times than gzip. You can override this in your build configuration, if needed.

Update: 27 May 2020

As of 27 May 2020, the image_types_tegraflash.bbclass and the helper scripts have been enhanced in the branches that support L4T R32.3.1 and later (zeus-l4t-r32.3.1, dunfell, dunfell-l4t-r32.4.2, and master). The sections below describe these updates.

Compressed-tar instead of zip for packaging

The venerable zip archive format has worked well enough over the years, but the zip tools are quite old and don’t have support for modern features like parallelism and sparse files. Switching to using a compressed tarball for tegraflash packages substantially speeds up build times and preserves sparse-file attributes for EXT4 filesystem images, resulting in much smaller (actual size vs. apparent size) packages.

In the zeus-l4t-r32.3.1 and dunfell branches, the default packaging remains zip. In dunfell-l4t-r32.4.2 and master, the default packaging has been changed to tar. You can set the TEGRAFLASH_PACKAGE_FORMAT variable in your build configuration to select the package format you want to use. Note, however, that the zip format is deprecated and support for it will likely be removed in a future release.

Use of bmaptool for SDcard creation

If you have the bmaptool package installed on your development host, the make-sdcard script will use it in place of dd to copy the EXT4 filesystem into the APP partition of an SDcard, which (when combined with the tar packaging mentioned above) results in much faster SDcard writing.

To take advantage of this, make sure bmaptool is available on your PATH and specify the device name of your SDcard writer when running dosdcard.sh. For example:

$ ./dosdcard.sh /dev/sda

The device name will be passed through to the underlying make-sdcard script. (If you run into permissions problems, you may need to use sudo.)

BUP payload generation

If you need to create BUP payloads outside of your bitbake builds, the tegraflash package now includes all of the files needed to do so, including a script to create the payload (similar to the l4t_generate_soc_bup.sh script in L4T):

$ ./generate_bup_payload.sh

You can pass the -u and/or -v options to this script to specify the public and/or private keys for signing the payload contents if your devices are fused for secure boot, and they will be passed through to each invocation of the flash helper script.

USB Device Mode Support

On the zeus and later branches (for L4T R32.2.3 and later), the l4t-usb-device-mode recipe is available to set up USB gadgets on a Jetson device for network and serial TTY access. The setup is similar to what’s provided in the L4T/JetPack BSP, except:

  • the scripts in the BSP under /opt/nvidia/l4t-usb-device-mode have been replaced by a combination of systemd, udev, and libusbgx configuration files;
  • the USB device identifier uses the Linux Foundation vendor ID; and
  • no mass storage gadget is created

Note that as of this writing, support for creating both an ECM gadget and an RNDIS gadget is provided, but the RNDIS gadget has not been tested.

Prerequisites

  1. You must have the meta-oe layer from meta-openembedded in your build for the libusbgx recipe.
  2. You must use systemd, and include udev and networkd support in its configuration (both of which are on by default in OE-Core zeus).

Network configuration

The systemd-networkd configuration files provided automatically create an l4tbr0 bridge device that combines the usb0 ECM interface and the rndis0 RNDIS interface. The bridge is assigned the IP address 192.168.55.1 and runs a DHCP server to serve the address 192.168.55.100 to the host side of the USB connection.
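As a rough sketch, the address and DHCP-server pieces of that configuration look something like the following systemd-networkd file (the file name and contents here are illustrative, not the recipe's actual files, which also attach the usb0/rndis0 interfaces to the bridge):

```ini
# Illustrative sketch only -- not the recipe's actual file.
# e.g. /etc/systemd/network/l4tbr0.network
[Match]
Name=l4tbr0

[Network]
Address=192.168.55.1/24
DHCPServer=yes

[DHCPServer]
# Offset 100 into 192.168.55.0/24 with a pool size of one
# serves exactly 192.168.55.100 to the connected host.
PoolOffset=100
PoolSize=1
```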

Serial port configuration

The serial port is called /dev/ttyGS0 on the device, and a udev rule automatically starts serial-getty on the device when it is created. If the connecting host is running Linux, the corresponding serial TTY will be /dev/ttyACM0 (or another /dev/ttyACMx device if there are multiple such devices on your host system).

Using device mode support

To use device mode support, just include l4t-usb-device-mode in your image.

Using cboot as Bootloader

[Applicable to L4T R32.1.0 and later]

For Jetson AGX Xavier, NVIDIA provides only cboot as the bootloader, so there is no U-Boot recipe for that platform. For Jetson TX2, the default configuration uses both - cboot loads U-Boot, which then loads the Linux kernel. You can, however, use just cboot as the bootloader by setting

PREFERRED_PROVIDER_virtual/bootloader = "cboot-prebuilt"

in your build configuration. If you do this, cboot directly loads the Linux kernel and initial ramdisk from the kernel (or kernel_b) partition, and the kernel image is not added to the root filesystem.

For branches with L4T R32.4.3 and later (dunfell-l4t-r32.4.3, gatesgarth and later branches), cboot is now built from sources by default, rather than using the prebuilt copy that comes with the L4T kit, so you should specify cboot-t18x instead of cboot-prebuilt for the PREFERRED_PROVIDER setting.

Note that in L4T R32.2.x, cboot has issues if the kernel or the initrd is too large, at least on TX2 platforms, causing kernel panics at boot time. With L4T R32.3.1, the kernel size limitation appears to be resolved, but if you use a separate initrd (instead of building it into the kernel as an initramfs), there is still a limit of just a few megabytes on its size (the relevant definitions (for the TX2) are probably in bootloader/partner/t18x/common/include/soc/t186/tegrabl_sdram_usage.h in the cboot sources). If you plan to customize your kernel to build in more drivers, rather than leaving them as loadable modules, or if you need to build more functionality into your initial ram filesystem, use R32.3.1 and bundle the initramfs into your kernel.

Building cboot from sources

NVIDIA has, from time to time, made cboot source code available. For Jetson AGX Xavier platforms, the most recent source release was with L4T R32.2.3, published in the L4T public_sources archive. This copy of cboot was removed from L4T R32.3.1. For L4T R32.4.2, cboot sources have been published again (for Xavier platforms only) as a separate download. For L4T R32.4.3 and R32.4.4, cboot sources are available for both TX2 and Xavier platforms.

Older releases (R28.x for TX2, R31.1 for Xavier) were restricted downloads. You must use your Developer Network login credentials to download the source package from the appropriate L4T page on NVIDIA’s website and store that tarball on your build host. The NVIDIA_DEVNET_MIRROR variable is used to locate the sources; see the recipes for more details on naming.

To use cboot built from source in your pre-R32.4.3 builds, set

PREFERRED_PROVIDER_virtual/bootloader = "cboot"

For R32.4.3 and later, the default is to build cboot from source, and the recipe names changed to be cboot-t18x for Jetson TX2 platforms and cboot-t19x for Jetson Xavier platforms.

PACKAGECONFIG for cboot builds

In branches with L4T R32.4.3 and later, you can control the inclusion of some cboot features by modifying the PACKAGECONFIG setting for the cboot recipe for your target device. All features are enabled by default, to match the stock L4T settings.

For Jetson-TX2 (tegra186/t18x) platforms, the following PACKAGECONFIG options are available:

| PACKAGECONFIG option | Description |
| --- | --- |
| display | cboot initializes the display; can be disabled for headless targets |
| recovery | enables booting the recovery kernel and rootfs (not currently populated in L4T) |

For Xavier (tegra194/t19x) platforms, the following PACKAGECONFIG options are available:

| PACKAGECONFIG option | Description |
| --- | --- |
| bootdev-select | enables booting from devices other than the built-in eMMC or SATA interfaces |
| display | cboot initializes the display; can be disabled for headless targets |
| ethernet | enables booting over the Ethernet interface |
| extlinux | enables cboot’s half-baked support for using an extlinux.conf file |
| recovery | enables booting the recovery kernel and rootfs (not currently populated in L4T) |
| shell | enables the countdown pause during boot to break into the cboot “shell” |

Note that removing the bootdev-select option has no effect on builds for the Xavier NX development kit; the recipe always enables that option for that target, since it is required for booting from the SDcard.
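As an illustration, a build configuration that trims cboot features might contain something like the following (hedged sketch: use the recipe name matching your platform, and note these branches predate the ":remove" operator, so the older underscore override syntax is shown):

```conf
# Illustrative local.conf fragment: build a headless TX2 cboot
# without display support (syntax for dunfell-era branches).
PACKAGECONFIG_remove_pn-cboot-t18x = "display"
```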

Jetson TX1/Nano platforms

While NVIDIA does ship a pre-built version of cboot for the tegra210 platforms (TX1 and Nano), they do not provide source code. U-Boot is the user-modifiable bootloader for those platforms.

Using device tree overlays

For many L4T/Jetson Linux releases, NVIDIA has provided a mechanism (the jetson-io scripts) for applying device tree overlays (.dtbo files) dynamically at runtime. For OE/Yocto-based builds, device trees are built from sources, so runtime application of DTB overlays is less of an issue. The meta-tegra layer does provide some mechanisms for applying DTB overlays, through some build-time variable settings.

Build-time application of overlays

This mechanism is supported in the branches based on L4T R32.6.x through R35.x only. Overlays are applied to the device tree during the kernel build, directly modifying your kernel DTB. (For L4T R36 and later, the NVIDIA device trees are no longer provided in the kernel source tree.)

Locating overlays

The exact list of overlays supplied by NVIDIA varies by target platform. You can find them by building the kernel recipe (virtual/kernel or linux-tegra) and examining its output under ${BUILDDIR}/tmp/work/${MACHINE}/linux-tegra.

Applying overlays

Set the KERNEL_DEVICETREE_APPLY_OVERLAYS variable to a blank-separated list of .dtbo file names to have those overlays applied during the kernel build. You can do this in your machine configuration file or, for example, in the local.conf file in your build workspace.

Example

For example, to configure a Jetson Xavier NX development kit for IMX477 and IMX219 cameras, you would add the following line to your $BUILDDIR/conf/local.conf file:

    KERNEL_DEVICETREE_APPLY_OVERLAYS:jetson-xavier-nx-devkit = "tegra194-p3668-all-p3509-0000-camera-imx477-imx219.dtbo"

Other possible use cases

For U-Boot-based Jetsons (only supported on a subset of Jetson modules with L4T R32.x), the .dtbo files will get populated into the /boot directory in the rootfs, and you could modify the /boot/extlinux/extlinux.conf file to add an FDTOVERLAY line to have one or more overlays applied at boot time. Unfortunately, OE-Core’s support for generating extlinux.conf content does not include support for FDTOVERLAY lines, so to make such a change you would have to work out a way to rewrite that file in a bbappend.
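For reference, such a modified extlinux.conf label might look something like this (a hedged sketch only: the file names are placeholders, and the exact overlay keyword spelling depends on your U-Boot version's extlinux parser):

```conf
LABEL primary
      MENU LABEL primary kernel
      LINUX /boot/Image
      FDT /boot/my-base.dtb
      # Hypothetical overlay file name
      FDTOVERLAY /boot/my-overlay.dtbo
      APPEND ${cbootargs} root=/dev/mmcblk0p1 rw rootwait
```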

For out-of-tree device trees

For L4T R36.x, the nvidia-kernel-oot recipe is the default device tree provider for the Jetson platforms. You can also set the PREFERRED_PROVIDER_virtual/dtb variable to point to a recipe for providing your own customized device tree. To apply overlays to these device trees, add fdtoverlay invocations to the compilation step via a bbappend (for nvidia-kernel-oot) or in your custom recipe.
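The invocation to add would be along these lines (a sketch only; fdtoverlay ships with the dtc package, and the file names here are placeholders):

```shell
# Illustrative: apply one or more overlays to a base DTB.
fdtoverlay -i my-base.dtb -o my-combined.dtb my-overlay.dtbo
```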

Example out-of-tree devicetree in tegra-demo-distro

See the tegra-demo-distro example at meta-tegrademo/recipes-bsp/tegrademo-devicetree which shows how to modify a base devicetree from nvidia-kernel-oot to one specific to your hardware platform. This simple example just adds a single “compatible” line to your base devicetree. To use this example:

  1. Determine which devicetree is currently in use. One way to do this is to run bitbake -e <your image> and look at the value of KERNEL_DEVICETREE.
  2. Determine whether there’s an existing devicetree in meta-tegrademo/recipes-bsp/tegrademo-devicetree which uses your existing devicetree as a base. Current examples are:
  • tegra234-p3768-0000+p3767-0005-oe4t.dts: jetson-orin-nano-devkit or jetson-orin-nano-devkit-nvme builds on a p3768 (Orin Nano Devboard) carrier
  • tegra234-p3768-0000+p3767-0000-oe4t.dts: NVIDIA Jetson Orin NX 16GB in a p3768 (Orin Nano Devboard) carrier
  • tegra234-p3737-0000+p3701-0000-oe4t.dts: jetson-agx-orin-devkit
  3. If there’s not an existing devicetree built from your base KERNEL_DEVICETREE, follow the examples to add one to SRC_URI and to the repo.
  4. Modify your MACHINE conf or local.conf to specify your dtb provider and KERNEL_DEVICETREE using something like this:
PREFERRED_PROVIDER_virtual/dtb = "tegrademo-devicetree"
KERNEL_DEVICETREE:jetson-orin-nano-devkit-nvme = "tegra234-p3768-0000+p3767-0005-oe4t.dtb"
KERNEL_DEVICETREE:jetson-orin-nano-devkit = "tegra234-p3768-0000+p3767-0005-oe4t.dtb"

The KERNEL_DEVICETREE line overrides the setting for your MACHINE, referencing the devicetree filename with .dtb in place of .dts.

  5. Build, flash, and boot the board, then cat /sys/firmware/devicetree/base/compatible to see the compatible string configured in the devicetree. You should see a string which starts with “oe4t”, as shown here for the Orin Nano:
root@jetson-orin-nano-devkit-nvme:~# cat /sys/firmware/devicetree/base/compatible
oe4t,p3768-0000+p3767-0005+tegrademonvidia,p3768-0000+p3767-0005-supernvidia,p3767-0005nvidia,tegra234
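The entries run together because the compatible property is a list of NUL-separated strings. A small sketch (the function name is illustrative) that makes the separators visible:

```shell
# The compatible property stores NUL-separated strings; translate the
# NULs to newlines so each entry prints on its own line.
show_compatible() { tr '\0' '\n' < "$1"; }

# On the target:
# show_compatible /sys/firmware/devicetree/base/compatible
```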

Runtime application of overlays in SPI Flash

This mechanism is supported in branches based on L4T R35.x and later. Overlays are appended to the kernel DTB by the NVIDIA flashing/signing tools, and are applied by the UEFI bootloader at runtime. Overlays are stored in SPI flash and are only updated on capsule update or tegraflash.

Locating overlays

The exact list of overlays supplied by NVIDIA varies by target platform. You can find them on R35.x-based branches by building the kernel recipe (virtual/kernel or linux-tegra) and examining its output under ${BUILDDIR}/tmp/work/${MACHINE}/linux-tegra. For R36.x-based branches, device trees are built as part of the nvidia-kernel-oot recipe.

Applying overlays

Append your additional overlays to the TEGRA_PLUGIN_MANAGER_OVERLAYS variable, which consists of a blank-separated list of .dtbo file names. You can do this in your machine configuration file or, for example, in the local.conf file in your build workspace. That variable is set by the layer to include overlays that NVIDIA requires for its platforms, so be sure to append to it, rather than overwriting it.

Example

For example, to configure the pins on the 40-pin expansion header of the Jetson Orin Nano development kit, you would add the following line to your $BUILDDIR/conf/local.conf file:

    TEGRA_PLUGIN_MANAGER_OVERLAYS:append:jetson-orin-nano-devkit = " tegra234-p3767-0000+p3509-a02-hdr40.dtbo"

Runtime application of overlays in the rootfs partition

With https://github.com/OE4T/meta-tegra/pull/1968, support is available to apply overlays in the rootfs partition using the OVERLAYS extlinux.conf option. This means you can link overlays to a rootfs slot and store/update them there instead of in the SPI flash.

Only overlays which modify the kernel DTB are supported, since the overlay application happens late in the boot sequence.

See this section of the extlinux.conf wiki page for details about configuring OVERLAYS in extlinux.conf.

Using gcc7 from the Contrib Layer

Starting with the warrior branch, meta-tegra includes a contrib layer with user-contributed recipes for optional inclusion in your builds. The layer includes recipes for gcc7 that you can use for compatibility with CUDA 10.0.

Configuring your builds for GCC 7

Follow the steps below to switch to GCC 7:

  1. Use bitbake-layers add-layer to add the meta-tegra/contrib layer to your project in build/conf/bblayers.conf.
  2. Select GCC version in your build/conf/local.conf and use the required configuration like this:
GCCVERSION = "7.%"
require contrib/conf/include/gcc-compat.conf

Troubleshooting

Older GCC versions, such as GCC 7, do NOT support -fmacro-prefix-map. As a result, with the default settings, building newer releases of the Yocto Project (for example, warrior) with an older GCC version may produce errors like “cannot compute suffix of object files”. To fix this, add the following lines to your build/conf/local.conf:

# GCC 7 doesn't support fmacro-prefix-map, results in "error: cannot compute suffix of object files: cannot compile"
DEBUG_PREFIX_MAP_remove = "-fmacro-prefix-map=${WORKDIR}=/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}"

NOTE: This configuration is already applied in contrib/conf/include/gcc-compat.conf, so no further action is needed if you have already required that file in build/conf/local.conf.

See Also

Update 16-Dec-2021: The master branch has support for restricting the use of the older gcc toolchain just for CUDA compilations, and the meta-tegra main layer includes the recipes to support this. You no longer need to use an older toolchain for building everything, and the recipes for the older toolchains have been dropped from the contrib layer. See #867 for more information.

For honister and earlier branches

With the JetPack 4.4 Developer Preview release (L4T R32.4.2), NVIDIA updated CUDA support for the Jetson platforms to CUDA 10.2, which is compatible with GCC 8. On the dunfell-l4t-r32.4.2 and master branches, the contrib layer in this repository has been updated to include recipes for the gcc 8 toolchain, imported from the OE-Core warrior branch. If you intend to build packages that use CUDA, you should configure your build to use GCC 8.

If you have previously configured your builds for GCC 7 when using an earlier version of meta-tegra with an older L4T/JetPack release, you can retain those settings and continue to use GCC 7, as builds should be compatible with either version of the toolchain.

Configuring your builds for GCC 8

Follow the steps below to switch to GCC 8:

  1. Use bitbake-layers add-layer to add the meta-tegra/contrib layer to your project in build/conf/bblayers.conf.
  2. Select GCC version in your build/conf/local.conf and use the required configuration like this:
GCCVERSION = "8.%"

or

GCCVERSION_aarch64 = "8.%"

if you have other platforms (with other CPU architectures) in your build setup that require the latest toolchain provided by OE-Core.

Overview

As mentioned in the README, OE-Core removed gcc7 support starting with the warrior release. However, CUDA 10 does not support gcc8. This means you need to pull in another layer (or other changes) that provides the gcc7 toolchain in order to support CUDA 10.0.

Fortunately, adding gcc7 does not require a lot of work if you use the meta-linaro project. See the tested instructions below.

Instructions for warrior branch

  1. Add the meta-linaro repository as a submodule in your project by cloning it and checking out the appropriate branch (warrior).
  2. Use bitbake-layers add-layer to add the meta-linaro/meta-linaro-toolchain layer to your project in build/conf/bblayers.conf. You can add just the meta-linaro-toolchain folder and not the entire meta-linaro layer.
  3. Reference the GCC version in your build/conf/local.conf like this:
GCCVERSION = "linaro-7.%"
  4. Add these lines to your build/conf/local.conf to prevent errors like “cannot compute suffix of object files” caused by the missing -fmacro-prefix-map support in GCC 7 (the default setting on the warrior branch uses that option):
# GCC 7 doesn't support fmacro-prefix-map, results in "error: cannot compute suffix of object files: cannot compile"
# Change the value from bitbake.conf DEBUG_PREFIX_MAP to remove -fmacro-prefix-map
DEBUG_PREFIX_MAP = "-fdebug-prefix-map=${WORKDIR}=/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR} \
                    -fdebug-prefix-map=${STAGING_DIR_HOST}= \
                    -fdebug-prefix-map=${STAGING_DIR_NATIVE}= \
                    "
  5. For recipes which fail during the configuration stage with messages like this:
cc1: error: -Werror=missing-attributes: no option -Wmissing-attributes
cc1: error: -Werror=missing-attributes: no option -Wmissing-attributes

Add a .bbappend to your layer which removes the unsupported missing-attributes flag from the respective CPPFLAGS for the host and target compiles. For instance, to resolve this for libxcrypt, you can add recipes-core/libxcrypt/libxcrypt.bbappend to your layer with this content:

# For GCC7 support
TARGET_CPPFLAGS = "-I${STAGING_DIR_TARGET}${includedir}"
CPPFLAGS_append_class-nativesdk = ""

Note that the libxcrypt recipe in OE-Core’s warrior branch was updated in September 2019 (for Yocto Project 2.7.2) to remove the compiler option that causes this error with older compilers.

Wayland Weston Support on Jetson Platforms

Support for Wayland/Weston has been adapted from the open-source libraries and patches that NVIDIA has published, rather than using the binary-only libraries packaged into the L4T BSP.

DRM/KMS support

Starting with L4T R32.2.x, DRM/KMS support in the BSP is provided through a combination of a custom libdrm.so shared library and the tegra-udrm kernel module. The library intercepts some DRM API calls; any APIs it does not handle directly are passed through to the standard implementation of libdrm.

Builds that include weston will also include a configuration file (via the tegra-udrm-probeconf recipe) that loads the tegra-udrm module with the parameter modeset=1. This enables KMS support in the L4T-specific libdrm library. If your build includes a different Wayland-based compositor, you may also need to include this configuration file.
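For reference, a modprobe configuration fragment along these lines is what enables the mode-setting parameter (a sketch: the actual file name and location installed by tegra-udrm-probeconf may differ):

```conf
# Illustrative modprobe.d fragment; the actual file installed by
# tegra-udrm-probeconf may differ in name and location.
options tegra-udrm modeset=1
```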

(Earlier versions of L4T used a different custom libdrm implementation that had no KMS support and was not ABI-compatible with the standard libdrm implementation.)

Mesa build changes

The Mesa build has been changed to enable libglvnd support, which creates the necessary vendor plugins of the EGL and GLX libraries and packages them as libegl-mesa and libgl-mesa.

xserver-xorg changes

The xserver-xorg build has also been changed to disable DRI and KMS support on Tegra platforms.

libglvnd

Starting with L4T R32.1, the BSP uses libglvnd rather than including pre-built copies of the OpenGL/EGL/GLES libraries.

egl-wayland

The egl-wayland extension is built from source, with an additional patch to correct an issue with detecting Wayland displays and surfaces. The recipe also installs the needed JSON file so that the extension can be found at runtime.

weston-eglstream

NVIDIA’s patches for supporting Weston using the EGLStream/EGLDevice backend are maintained in this repository. As of L4T R32.2.x, no additional Tegra-specific patches are required.

The --use-egldevice option gets added to the command line when starting Weston to activate this support.

Note that support for the EGLStream backend was dropped in Weston 10 in favor of using GBM. We supply a backend for libgbm that uses NVIDIA’s libnvgbm.so to manage GBM objects, and we still patch Weston to support the EGLStream protocol for Wayland clients.

XWayland

XWayland appears to work, but hardware-accelerated OpenGL (through the libGLX_nvidia provider) is not available.

Testing

The following tests are performed:

  1. Verify that core-image-weston builds.
  2. Verify that weston starts at boot time.
  3. Verify that weston sample programs, such as weston-simple-egl, display appropriate output.
  4. Verify that the nveglglessink gstreamer plugin works with the winsys=wayland parameter by running a gstreamer pipeline to display an H.264 video. Note that the DISPLAY environment variable must not be set, per the NVIDIA documentation.
  5. Verify that the l4t-graphics-demos applications work.

Troubleshooting

The following commands work on a Jetson TX2 and probably others:

Turn off HDMI:

echo -1 > /sys/kernel/debug/tegra_hdmi/hotplug
echo 4 > /sys/class/graphics/fb0/blank

(Source)

Turn on HDMI:

echo 1 > /sys/kernel/debug/tegra_hdmi/hotplug
echo 0 > /sys/class/graphics/fb0/blank

(Source)

Reading HDMI connection state:

/sys/devices/virtual/switch/hdmi/state is 0 when disconnected and 1 when connected. (Source)

extlinux.conf Support

While not enabled by default (except on the Jetsons that use the U-Boot bootloader), you can use the L4T extlinux.conf support in your builds.

For L4T R35.x and later

In the kirkstone and later branches based on the L4T R35.x and later series of releases, set UBOOT_EXTLINUX = "1" to configure the build to use an extlinux.conf file. (As of 14 Apr 2024, "1" is now the default setting in the master branch.)

See the comments in l4t-extlinux-config.bbclass for additional configuration settings you can use.

UBOOT_EXTLINUX_FDT

The UBOOT_EXTLINUX_FDT setting must be set to exactly UBOOT_EXTLINUX_FDT = "/boot/${DTBFILE}" before https://github.com/OE4T/meta-tegra/pull/1968, or to any dtb file name without a full path (like UBOOT_EXTLINUX_FDT = "${DTBFILE}") after that PR and its backports.

When set, this adds a devicetree entry to the extlinux.conf file. This setting is useful for easy testing of devicetree changes in the kernel, and for supporting devicetree transitions on slot switch without a capsule update. Note that when UBOOT_EXTLINUX or UBOOT_EXTLINUX_FDT is not set, the kernel-dtb partitions defined in the root filesystem are ignored and the kernel's devicetree is taken from the devicetree appended to the UEFI image, which is therefore only updated when the UEFI image is changed via tegraflash or capsule update.

efivar -p --name 781e084c-a330-417c-b678-38e696380cb9-L4TDefaultBootMode should return a value of 1 when using this feature. For additional context see this thread in element.

UBOOT_EXTLINUX_FDTOVERLAYS

The PR at https://github.com/OE4T/meta-tegra/pull/1968 adds support for specifying a list of overlays in your extlinux.conf file. These overlays are also stored on the rootfs and applied to the kernel DTB at boot time after root slot selection.

This feature is only supported when UBOOT_EXTLINUX_FDT is specified.

To use, specify

UBOOT_EXTLINUX_FDT = "${DTBFILE}"
UBOOT_EXTLINUX_FDTOVERLAYS = "my-overlay.dtbo"

Where "my-overlay.dtbo" is an overlay built using the mechanisms specific to your branch implementation (or potentially one provided by NVIDIA). See Using-device-tree-overlays for more details. Note that since the overlay is applied only to the kernel DTB, this mechanism cannot be used to make any changes to the UEFI DTB.

Caveats

  • The upstream UEFI bootloader does not implement this; it was tacked on by NVIDIA in their L4TLauncher EFI application.
  • The ext4 filesystem implementation that NVIDIA provides in their bootloader may have some bugs/limitations that could prevent it from reading the extlinux.conf or other files in your root filesystem. Using newer ext4 features, or non-ext4 filesystems for your root filesystem, could lead to boot failures.
  • The extlinux.conf syntax supported in L4TLauncher is not the same as U-Boot’s, and the parsing code isn’t the most robust/forgiving, so be careful about any modifications you may want to make, to avoid boot failures.

For L4T R32.x

In L4T R32.x:

  • The TX1/Nano platforms use U-Boot by default, so no changes are required to use extlinux.conf files.
  • The TX2 platform defaults to using U-Boot which supports extlinux.conf. TX2 builds can be configured to use cboot without U-Boot, and the TX2 cboot implementation does not support extlinux.conf.
  • The Xavier platforms have a different cboot code base which (unlike the TX2 implementation) does have some support for extlinux.conf files. The rest of this page covers the Xavier implementation.

Configuring Xavier extlinux.conf support

Add the cboot-extlinux package to your image to enable booting your Xavier device with the kernel loaded from /boot in the rootfs instead of from a separate partition. This is only available in the kirkstone-l4t-r32.7.x branch (as of this writing).

Use with caution. Not recommended for production use.

Notes

The cboot bootloader on the Xavier (t194) platforms has support for loading the kernel, initial ramdisk, and device tree from files in the rootfs, rather than the kernel partition. The stock L4T BSP has supported this for several releases, installing the kernel image and initrd into /boot and a /boot/extlinux/extlinux.conf file that cboot uses to locate the files. This can simplify kernel development by eliminating the need to reflash the device to boot with updated kernels.

To implement this in meta-tegra, the cboot-extlinux recipe has been added. Adding cboot-extlinux to your image will include the necessary files – kernel, initrd (if not bundled), and optionally the device tree, along with the extlinux.conf file and signatures for the files that are expected to be signed – in your rootfs.

When extlinux support in cboot is enabled (which it is by default), cboot will first try to mount the rootfs to locate the extlinux.conf file. The rootfs is either marked as such with a partition GUID (see below) or is assumed to be the first partition on the boot medium (SDcard, eMMC, or external device). cboot then tries to open /boot/extlinux/extlinux.conf on that filesystem. If successful, it parses the configuration, then attempts to load the kernel, initrd, and/or device tree based on the path names in the file. For elements that are not configured in that file (or all of them, if the file does not exist), cboot falls back to loading them from partitions on the device (kernel for the kernel+initrd, kernel-dtb for the device tree).

extlinux.conf file format

The format of the configuration file is a subset of the format used in the distro boot feature of U-Boot. The cboot-extlinux-config.bbclass file implements the cboot-specific configuration subset; see the comments in that file for more information.

WARNING Modifying the extlinux.conf file incorrectly will often result in cboot crashes, making your device unbootable. Use caution when making any changes to the file.

Adding the device tree

By default, the cboot-extlinux recipe installs the default kernel image and initrd (if configured to be separate from the kernel), but not the device tree, to align with the default stock L4T setup. Set UBOOT_EXTLINUX_FDT = "/boot/${DTBFILE}" in either a bbappend or in your local.conf to include the device tree.

Incompatible with A/B redundancy

Using cboot-extlinux for loading the kernel is not compatible with the A/B redundancy mechanism - the kernel will always be loaded from the A rootfs partition.

It may be possible to fix this by assigning a unique partition GUID to each of the two rootfs partitions, and creating cboot options files (cbo.dtb files) to configure the rootfs GUIDs - one to be loaded into the CPUBL-CFG partition, and the other into CPUBL-CFG_b. However, that would conflict with the normal bootloader update mechanism, since BUP payloads don’t distinguish between the A and B slot for their content. Some extra mechanism would be needed to keep the two CPUBL-CFG partitions synchronized with the corresponding rootfs partition GUIDs.

Filesystem restrictions

This has only been tested with ext4-formatted root filesystems, and bugs found in cboot’s ext4 implementation have been patched to make this work. Other filesystem types are unlikely to work. Also, you should use the cboot-t19x recipe that builds cboot from source to get the required patches (this is the default).

Applying PREEMPT-RT Real-Time Kernel Patches

Dynamically apply the RT patches to Kirkstone

  1. Add a bbappend to the kernel recipe that runs an additional step prior to the do_patch stage:

recipes-kernel/linux/linux-tegra_4.9.bbappend

do_patch:prepend() {
    oldwd=$PWD
    cd ${S}/scripts
    ./rt-patch.sh apply-patches
    cd ${S}
    git add . && git commit -m 'Apply PREEMPT_RT patches'
    cd $oldwd
}

Dynamically apply the RT patches to Dunfell

  1. Add a bbappend to the kernel recipe that runs an additional step prior to the do_patch stage:

recipes-kernel/linux/linux-tegra_4.9.bbappend

do_patch_prepend() {
    oldwd=$PWD
    cd ${S}/scripts
    ./rt-patch.sh apply-patches
    cd ${S}
    git add . && git commit -m 'Apply PREEMPT_RT patches'
    cd $oldwd
}

How to use the Real-Time Kernel by switching to a patched branch

  1. Find which git hash your meta-tegra checkout is currently using:
user@pc:~/meta-tegra$ cat recipes-kernel/linux/linux-tegra_4.9.bb | grep -n SRCREV -A5 -B5
13-
14-LINUX_VERSION_EXTENSION ?= "-l4t-r${@'.'.join(d.getVar('L4T_VERSION').split('.')[:2])}"
15-SCMVERSION ??= "y"
16-
17-SRCBRANCH = "patches${LINUX_VERSION_EXTENSION}"
18:SRCREV = "0be1a57448010ae60505acf4e2153638455cee7c"
19-KBRANCH = "${SRCBRANCH}"
20-SRC_REPO = "github.com/OE4T/linux-tegra-4.9"
21-KERNEL_REPO = "${SRC_REPO}"
22-SRC_URI = "git://${KERNEL_REPO};name=machine;branch=${KBRANCH} \
23-	   ${@'file://localversion_auto.cfg' if d.getVar('SCMVERSION') == 'y' else ''} \
  2. Fork https://github.com/OE4T/linux-tegra-4.9
  3. git clone https://github.com/you/linux-tegra-4.9
  4. cd linux-tegra-4.9
  5. git checkout -b patches-l4t-r32.4-rt patches-l4t-r32.4 (or specify the git hash from the SRCREV setting in the recipe)
  6. cd scripts
  7. ./rt-patch.sh apply-patches
  8. cd ..
  9. git add .
  10. git commit -m "Applied PREEMPT RT Patch to kernel"
  11. git push --set-upstream origin patches-l4t-r32.4-rt
  12. git log to find the commit hash for your rt-patched kernel source
  13. Now go back to your meta-tegra folder
  14. touch recipes-kernel/linux/linux-tegra_4.9.bbappend
  15. Put the following in your linux-tegra_4.9.bbappend:
SRCBRANCH = "patches${LINUX_VERSION_EXTENSION}-rt"
SRCREV = "##YOUR_HASH###"
KBRANCH = "${SRCBRANCH}"
SRC_REPO = "##YOUR_FORK###"
KERNEL_REPO = "${SRC_REPO}"
SRC_URI = "git://${KERNEL_REPO};name=machine;branch=${KBRANCH};protocol=ssh \
           ${@'file://localversion_auto.cfg' if d.getVar('SCMVERSION') == 'y' else ''} \
"

Documentation Workflow

This project uses mdBook to generate documentation, with GitHub Actions for automated builds and GitHub Pages for hosting.

Repository Layout

Documentation source files live alongside the Yocto BSP layer content:

meta-tegra/
├── book.toml                      # mdBook configuration
├── docs/                          # Documentation source (markdown)
│   ├── SUMMARY.md                 # Table of contents for mdBook
│   ├── README.md                  # Introduction / landing page
│   ├── *.md                       # Documentation pages
│   └── mdbook/                    # Custom mdBook assets
│       ├── css/custom.css         # Version dropdown styling
│       └── js/version-dropdown.js # Version switching logic
└── .github/workflows/
    └── mdbook-versioned.yml       # CI/CD workflow

The book.toml in the repository root configures mdBook. The src setting points to the docs/ directory, and custom CSS and JavaScript are loaded for the version dropdown:

[book]
title = "OE4T Meta Tegra"
authors = ["Matt Madison", "Dan Walkes"]
language = "en"
src = "docs"

[output.html]
additional-css = ["docs/mdbook/css/custom.css"]
additional-js = ["docs/mdbook/js/version-dropdown.js"]

Multi-Version Support

Each tracked branch gets its own independent copy of the documentation on GitHub Pages. The list of published versions is controlled by a versions.json file in the GitHub Pages content repository (OE4T/oe4t.github.io).

Adding Pages

All documentation pages are Markdown files in the docs/ directory. To add a new page:

  1. Create a new .md file in docs/.
  2. Add an entry for it in docs/SUMMARY.md. The SUMMARY file defines the table of contents and sidebar navigation. Pages not listed in SUMMARY.md will not appear in the built documentation.
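For example, an entry for a hypothetical new page might look like:

```markdown
<!-- Hypothetical SUMMARY.md entry; the file name is illustrative -->
- [My New Page](MyNewPage.md)
```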

Page Editing Tips

  • Please ensure any embedded links to other documentation files are done with relative paths. For example, use [Link to another page in docs](OtherPageName.md) instead of [Link to another page in docs](https://github.com/OE4T/meta-tegra/blob/master/docs/OtherPageName.md)
  • You can use the trick at this stackoverflow post to add images to your markdown file without the need to check images into the repo.

Preview Locally

To preview the documentation locally, install mdBook and run from the repository root:

mdbook serve

This starts a local web server with live reloading as you edit files.
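Assuming a Rust toolchain is available, a typical local workflow looks like this (mdBook serves at http://localhost:3000 by default):

```
cargo install mdbook     # or download a prebuilt binary from the mdBook releases
mdbook serve --open      # build the book, serve it locally, and open a browser
```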

Build and Deploy

The GitHub Actions workflow (.github/workflows/mdbook-versioned.yml) triggers on pushes to tracked branches:

  1. Build — runs mdbook build inside a peaceiris/mdbook container, producing output in a per-branch directory.
  2. Deploy — pushes the built HTML to a subdirectory in the main branch of the external GitHub Pages content repository (OE4T/oe4t.github.io) using peaceiris/actions-gh-pages.

Each branch deploys to its own directory, resulting in a structure like this in OE4T/oe4t.github.io:

<repo-root>/
├── index.html          # redirects to ./master/
├── versions.json       # lists available versions for the dropdown
├── master/             # docs built from the master branch
└── scarthgap/          # docs built from the scarthgap branch

The workflow can also be triggered manually via workflow_dispatch from the GitHub Actions UI.

Deployment credentials

The deploy step requires an SSH deploy key stored as a repository secret:

  • OE4T_GITHUB_DEPLOY_KEY

If the secret is missing (common in forks), the workflow emits the warning “The repository secret must contain the OE4T_GITHUB_DEPLOY_KEY to run this step.” and skips the deploy step without failing the workflow.

Version Dropdown

A custom JavaScript file (docs/mdbook/js/version-dropdown.js) adds a version selector dropdown to the mdBook navigation bar. It fetches versions.json from the site root to populate the list, and when a different version is selected it navigates to the same page path under the new version’s directory.
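The path rewriting this implies can be sketched as follows. This is an illustrative sketch only, not the actual contents of version-dropdown.js; the function name and implementation are assumptions:

```javascript
// Hypothetical sketch of the version-switching logic: each published
// version lives in its own top-level directory (/master/, /scarthgap/, ...),
// so switching versions means swapping only the leading path segment
// while keeping the rest of the page path intact.
function switchVersionPath(pathname, newVersion) {
  const parts = pathname.split("/").filter((p) => p.length > 0);
  parts[0] = newVersion; // first segment is the version directory
  return "/" + parts.join("/");
}
```

For example, selecting “scarthgap” while viewing /master/some-page.html would navigate to /scarthgap/some-page.html.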

The versions.json file is not auto-generated; it is maintained manually in the GitHub Pages content repository (OE4T/oe4t.github.io, main branch), giving explicit control over which versions appear in the dropdown.

Adding a New Version

To add documentation for a new branch (e.g., kirkstone):

  1. Add the branch name to the on.push.branches list in .github/workflows/mdbook-versioned.yml.
  2. Push content to that branch. The workflow will automatically build and deploy to a new directory in OE4T/oe4t.github.io.
  3. Update versions.json in OE4T/oe4t.github.io (on the main branch) to include the new entry so it appears in the version dropdown.
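Step 1 amounts to a small change to the workflow's trigger list. The fragment below assumes the usual GitHub Actions layout; the actual contents of mdbook-versioned.yml may differ:

```yaml
# .github/workflows/mdbook-versioned.yml (illustrative fragment)
on:
  workflow_dispatch:
  push:
    branches:
      - master
      - scarthgap
      - kirkstone   # new branch added in step 1
```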

Overview

This page hosts release note links for previous releases of NVIDIA L4T/meta-tegra. If you are viewing this page in the mdBook, please use the “master” branch version of the documentation so you are referencing the most recent source.

JetPack 7 / R38.x

JetPack 6 / R36.x

JetPack 5 / R35.x

Legacy releases

As of 23 Feb 2026, the master-l4t-r38.4.x branch supports JetPack 7.1/L4T R38.4.0.

Changes from L4T R38.2.x/JetPack 7.0

This release introduces support for the Jetson T4000 module. Like R38.2.x, only Thor targets are supported. Machine configurations for the Orin family remain in the tree, but are not usable and not supported.

See the release notes in the NVIDIA documentation for this release for information on new and updated features:

BSP changes

  • The BSP supports the Jetson T4000, but a machine configuration for it is not yet present in the layer.
  • Note that L4T R38.4.x does not support Orin hardware. Machine definitions have not been removed from the layer, but they should not be used.

Flashing process changes

Flashing in Jetson Linux for Thor targets is performed through a “unified” flashing process that is hidden under the L4T initrd/kernel flashing scripts.

For meta-tegra builds, after unpacking your tegraflash tarball, connect to your Thor device via USB and run initrd-flash. The script generates and signs the binary artifacts, stages them to a unified-flash workspace, and runs the unified flashing scripts to flash the device. Useful options:

  • ./initrd-flash --external-only: re-flash only the external (rootfs) drive
  • ./initrd-flash -k NAME: re-flash just the named partition
  • ./initrd-flash --qspi-only: re-flash only the boot firmware in the QSPI flash
  • ./initrd-flash --debug: enable more verbose logging in the unified flashing tool

See NVIDIA release notes and documentation for more information.

Kernel changes

The Linux kernel is still taken from the Ubuntu “Noble” release, and is based on Linux 6.8.12. The kernel recipe has been changed to use NVIDIA’s GitLab repo directly, instead of the OE4T copy in GitHub.

Please note that the linux-yocto kernel (or any other kernel than the NVIDIA/Ubuntu one) is still not supported on Thor hardware.

JetPack changes

JetPack 7.1 includes only minor JetPack upgrades, for the PVA SDK and for VPI.

DeepStream SDK

No DeepStream SDK update has been issued, and it is unclear whether the existing DS-8.0 SDK is compatible.

As of 12 Nov 2025, the master-l4t-r38.2.x branch supports JetPack 7.0/L4T R38.2.2. Note that R38.2.2 only updates some of the components; the rest remain at 38.2.1.

Changes from L4T R36.4.4/JetPack 6.2.1

This release introduces support for the Jetson AGX Thor development kit, and supports only AGX Thor targets. Machine configurations for the Orin family remain in the tree, but are not usable and not supported.

See the release notes in the NVIDIA documentation for this release for information on new and updated features:

Note that NVIDIA has not provided release notes for R38.2.2.

BSP changes

  • Many changes are present to support Thor hardware. Note that L4T R38.2.x does not support Orin hardware. Machine definitions have not been removed from the layer, but they should not be used.

Flashing process changes

Flashing in Jetson Linux for Thor targets is performed through a “unified” flashing process that is hidden under the L4T initrd/kernel flashing scripts.

For meta-tegra builds, after unpacking your tegraflash tarball, connect to your Thor device via USB and run initrd-flash. The script generates and signs the binary artifacts, stages them to a unified-flash workspace, and runs the unified flashing scripts to flash the device. Useful options:

  • ./initrd-flash --external-only: re-flash only the external (rootfs) drive
  • ./initrd-flash -k NAME: re-flash just the named partition
  • ./initrd-flash --qspi-only: re-flash only the boot firmware in the QSPI flash
  • ./initrd-flash --debug: enable more verbose logging in the unified flashing tool

See NVIDIA release notes and documentation for more information.

Kernel changes

The Linux kernel is now taken from the Ubuntu “Noble” release, and is based on Linux 6.8.12.

JetPack changes

JetPack 7.0 upgrades most of the JetPack content. See the NVIDIA documentation for more information.

CUDA 13, included in JetPack 7.0, supports the use of gcc/g++ 15 as the host toolchain, so the gcc-for-nvcc recipes, which were used for providing an older toolchain for use with nvcc, have been dropped.

DeepStream SDK

DeepStream 8.0 is available. The recipes for DeepStream in the meta-tegra-community layer have been updated.

As of 28 Feb 2026, the master and scarthgap branches support JetPack 6.2.2/L4T R36.5.0.

Changes from L4T R36.4.4/JetPack 6.2.1

See the release notes in the NVIDIA documentation for this release for information on new and updated features:

BSP changes

This release mainly contains bugfixes.

See NVIDIA release notes for information on other changes/improvements.

Kernel changes

The Linux kernel (linux-jammy-nvidia-tegra) was updated to 5.15.185.

The NVIDIA out-of-tree drivers have been patched so they build with the NVIDIA-provided kernel as well as newer linux-yocto kernels (6.6 for scarthgap and 6.18 for master).

JetPack changes

No major updates to the JetPack SDK.

DeepStream SDK

No update to the DeepStream SDK. Version 7.1 remains compatible with JetPack 6.2.2.

As of 10 Aug 2025, the master and scarthgap branches support JetPack 6.2.1/L4T R36.4.4.

Changes from L4T R36.4.3/JetPack 6.2

See the release notes in the NVIDIA documentation for this release for information on new and updated features:

BSP changes

Support has been added for using a Hardware Security Module (HSM) for signing the boot firmware. In the layer, our tegra-flash-helper script has been updated to support an --hsm option, which gets passed through to the NVIDIA signing scripts for this feature.

Otherwise, this release mainly contains bugfixes.

See NVIDIA release notes for information on other changes/improvements.

Kernel changes

The Linux kernel remains at 5.15.148.

JetPack changes

No major updates to the JetPack SDK.

DeepStream SDK

No update to the DeepStream SDK. Version 7.1 remains compatible with JetPack 6.2.1.

As of 02 May 2025, the master, walnascar, and scarthgap branches support JetPack 6.2/L4T R36.4.3.

Changes from L4T R36.4.0/JetPack 6.1

See the release notes in the NVIDIA documentation for this release for information on new and updated features:

BSP changes

  • The main new feature in this release is the addition of “super” power modes for the P3767 series modules (Orin NX and Orin Nano).
  • The JetsonMinimal UEFI build configuration, instead of full UEFI, is now used for RCM booting. (RCM boot is used for initrd-based flashing.)

See NVIDIA release notes for information on other changes/improvements.

Kernel changes

The Linux kernel remains at 5.15.146.

JetPack changes

No major updates to the JetPack SDK.

DeepStream SDK

No update to the DeepStream SDK. Version 7.1 remains compatible with JetPack 6.2.

As of 27 Oct 2024, the master, styhead, and scarthgap branches support JetPack 6.1/L4T R36.4.0.

Changes from L4T R36.3.0/JetPack 6.0

See the release notes in the NVIDIA documentation for this release for information on new and updated features:

BSP changes

  • NVIDIA has added an implementation of Firmware Trusted Platform Module (fTPM) as an OP-TEE TA. A recipe for building this has not yet been added to the layer.
  • The default kernel arguments no longer set net.ifnames=0, so network interfaces will use the newer kernel naming convention.
  • Source code is now provided for the nvipcpipeline and nvunixfd gstreamer plugins. Recipes have been added for these plugins, and they have been removed from the gstreamer1.0-plugins-tegra-binaryonly package.
  • A minimal UEFI configuration is now supported on AGX Orin series modules. See this section in the Jetson Linux Developer Guide for more information. The variable TEGRA_UEFI_MINIMAL can be set to "1" in your build configuration to use this configuration instead of the default.
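Opting in to the minimal UEFI configuration is a one-line addition, shown here for local.conf (a machine or distro configuration file works as well):

```
# local.conf: use the minimal UEFI build instead of the default
TEGRA_UEFI_MINIMAL = "1"
```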

See NVIDIA release notes for information on other changes/improvements.

Kernel changes

The Linux kernel has been updated to 5.15.146.

JetPack changes

Several of the JetPack packages have been updated, including CUDA (to 12.6). See the JetPack release notes for more information.

DeepStream SDK

DeepStream SDK 7.1 is compatible with JetPack 6.1. See the meta-tegra-community work-in-progress branch for the updated recipes.

As of 01 June 2024, the master and scarthgap branches support JetPack 6.0/L4T R36.3.0.

Changes from L4T R35.3.1/JetPack 5.1.1

See the release notes in the NVIDIA documentation for this release for information on new and updated features:

Machine changes

L4T R36.x supports only Jetson Orin modules and development kits. Support for Jetson Xavier modules has been removed.

BSP changes

Updates in this release, as they apply to OE/Yocto builds, are mainly fixes and “improvements” to the existing boot firmware and low-level libraries. This includes new versions of the secure OS (TF-A and OP-TEE) and the UEFI bootloader.

Kernel changes

The 5.10+Android-based kernel has been replaced with a 5.15-based kernel from Ubuntu 22.04. Jetson-specific drivers and device trees have been moved out of the kernel source tree and are now built separately. This change allows for the replacement of the NVIDIA-provided, Ubuntu-derived base kernel with other upstream kernels; see the L4T documentation and release notes for more information.

JetPack changes

Many of the JetPack packages have been updated, including CUDA (to 12.2). See the JetPack release notes for more information.

DeepStream SDK

The DeepStream SDK has been updated to version 7.0.

Other Notes

  • There are several known issues documented in the release notes, which are worth reviewing. In particular, USB connectivity during flashing can sometimes be a problem. Workarounds mentioned in the release notes include swapping cables and/or USB ports on your host PC, and rebooting your host PC if you encounter flashing failures.
  • The NVIDIA-specific userland DRM library (libdrm-nvdc) has been removed in this release.
  • Vulkan support is present, but you must manually include the tegra-libraries-vulkan package in your image to install the necessary configuration file for the Vulkan dispatcher. While NVIDIA now also supports VulkanSC in Jetson Linux, recipes to install those libraries are not yet available.
  • Jetson-specific patches for wayland and weston are no longer needed.

As of 21 Feb 2026, the kirkstone and scarthgap-l4t-r35.x branches support JetPack 5.1.6/L4T R35.6.6.

This is a minor update to the L4T BSP only, with no JetPack changes.

Changes from L4T R35.6.2/JetPack 5.1.5

See the release notes in the NVIDIA documentation for this release for information on changes:

As of 07 Jul 2025, the kirkstone and scarthgap-l4t-r35.x branches support JetPack 5.1.5/L4T R35.6.2.

This is a minor update to the L4T BSP only, with no JetPack changes.

Changes from L4T R35.6.1/JetPack 5.1.5

See the release notes in the NVIDIA documentation for this release for information on changes; this release essentially contains only bug fixes over R35.6.1:

Kernel changes

No kernel update in this release; the exact same kernel sources are used as for R35.6.1. However, the default KERNEL_ARGS setting for all machines has been changed to remove the nospectre_bhb parameter. If you have a custom machine configuration, you may wish to update your KERNEL_ARGS setting accordingly, to enable Spectre-BHB mitigations in the kernel.
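One way to mirror this in a custom machine configuration, assuming your KERNEL_ARGS currently includes the parameter (a sketch, not the exact change made to the stock machine configurations):

```
# Custom machine conf: re-enable Spectre-BHB mitigations by dropping the parameter
KERNEL_ARGS:remove = "nospectre_bhb"
```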

Other notes

While most of the BSP packages have been updated or re-issued with 35.6.2 version numbering, the Jetson Multimedia API .deb packages remain at version 35.6.1 in this release.

As of 18 May 2025, the kirkstone and scarthgap-l4t-r35.x branches support JetPack 5.1.5/L4T R35.6.1.

Changes from L4T R35.6.0/JetPack 5.1.4

See the release notes in the NVIDIA documentation for this release for information on new and updated features:

Machine changes

No new machines in this release. Existing machines for P3767 modules (Orin Nano and NX-based) have been updated to use the new MAXN_SUPER power model configurations, which affect the kernel device tree, BPMP configuration, and NVPMODEL configuration files.

BSP changes

Other updates in this release are mainly fixes and improvements to the existing boot firmware and low-level libraries.

Kernel changes

The upstream base remains at 5.10.216. Updates include some bug fixes and the device tree updates to support MAXN_SUPER power model configurations.

As of 11 Oct 2024, the scarthgap-l4t-r35.x and kirkstone branches support JetPack 5.1.4/L4T R35.6.0.

Changes from L4T R35.5.0/JetPack 5.1.3

See the release notes in the NVIDIA documentation for this release for information on new and updated features:

Machine changes

No new machines in this release.

BSP changes

Updates in this release, as they apply to OE/Yocto builds, are mainly fixes and improvements to the existing boot firmware and low-level libraries.

Kernel changes

The upstream base was updated to 5.10.216.

As of 01 June 2024, the scarthgap-l4t-r35.x and kirkstone branches support JetPack 5.1.3/L4T R35.5.0.

Changes from L4T R35.4.1/JetPack 5.1.2

See the release notes in the NVIDIA documentation for this release for information on new and updated features:

Machine changes

No new machines in this release.

BSP changes

Updates in this release, as they apply to OE/Yocto builds, are mainly fixes and improvements to the existing boot firmware and low-level libraries.

Storage layouts (the flash_*.xml files) have been updated slightly over R35.4.1. If you have a custom storage layout you have derived from an earlier version of L4T, you should review the differences for any adjustments you may need to make.

Kernel changes

The upstream base was updated to 5.10.192.

UEFI changes

For secured devices, UEFI now authenticates its variables. This requires the addition of an authentication key to the EKB, without which your secured device will not boot. This also means that an OTA update from an earlier R35.x release will require that your OTA package also update the EKB, so be warned.

For more information, see this thread on the developer forum.

JetPack changes

  • VPI updated to version 2.4.8

Other Notes

  • While the NVIDIA release notes mention use of the grub loader in place of L4TLauncher for loading the OS from the UEFI bootloader, this has not been implemented or tested in the layer. Likewise for PXE booting.

As of 02 Sep 2023, the master, mickledore, and kirkstone branches support JetPack 5.1.2/L4T R35.4.1.

Changes from L4T R35.3.1/JetPack 5.1.1

See the release notes in the NVIDIA documentation for this release for information on new and updated features:

Machine changes

L4T R35.4.1 adds support for the Jetson AGX Orin Industrial module. The machine configuration jetson-agx-orin-devkit-industrial has been added to the layer to build images for the module when installed in an AGX Orin development kit.

BSP changes

Updates in this release, as they apply to OE/Yocto builds, are mainly fixes and improvements to the existing boot firmware and low-level libraries.

For multimedia support, the deprecated nvbuf_utils library was removed in this release.

Kernel changes

No major updates. The upstream base was updated to 5.10.120.

JetPack changes

  • VPI updated to version 2.3.9
  • Nsight updated to 2023.2

DeepStream SDK

The DeepStream SDK has been updated to version 6.3-1. The recipe in the meta-tegra-community repo has been updated.

Other Notes

  • While the NVIDIA release notes mention use of the grub loader in place of L4TLauncher for loading the OS from the UEFI bootloader, this has not been implemented or tested in the layer. Likewise for PXE booting.

As of 16 Apr 2023, the master, mickledore, and kirkstone branches support JetPack 5.1.1/L4T R35.3.1.

Changes from L4T R35.2.1/JetPack 5.1.0

See the release notes in the NVIDIA documentation for this release for information on new and updated features:

Machine changes

L4T R35.3.1 adds support for the Jetson Orin Nano developer kit. Machine configurations jetson-orin-nano-devkit and jetson-orin-nano-devkit-nvme have been added to the layer to build images for use of the kit with SDcard and NVMe storage, respectively. These configurations should also be usable as a starting point for a custom machine based on one of the Orin Nano production modules.

BSP changes

Other than the new hardware support, no major changes to the layer for the BSP update. The secureboot overlay that NVIDIA issued to fix problems with Orin secure boot support (specifically, using SBKPKC signing + encryption) for R35.2.1 is also applied here.

Kernel changes

There are no major updates to the kernel, which is still based on 5.10.104.

NOTE, however, that if you have a custom machine configuration based on one of the Xavier modules (tegra194), you should update your KERNEL_ARGS setting to add video=efifb:off as a kernel parameter to avoid system crashes at boot time.
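A sketch of the corresponding addition in a custom tegra194 machine configuration (exact placement depends on how your KERNEL_ARGS value is assembled):

```
# Custom tegra194 machine conf: avoid boot-time crashes on R35.3.1
KERNEL_ARGS:append = " video=efifb:off"
```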

JetPack changes

  • New versions of VPI and Nsight.

DeepStream SDK

DeepStream SDK updates usually lag new JetPack releases. We will update this page if/when a new version is released that supports JetPack 5.1.1.

Other Notes

  • The Orin Nano omits the hardware video encoder present in other Jetson models. Don’t try to use the NVIDIA-specific gstreamer plugins for video encoding on that platform; stick to the software-based plugins.

Known Issues

  • The nvpmodel may fail with an error on the first boot after flashing an Orin device. You can clear this problem by using the nvpmodel command to select a power model configuration, then rebooting (changing the power model may prompt you to reboot immediately). NOTE Fixed with PR #1294.
  • Soft reboots on the Orin Nano devkit may fail during OP-TEE startup with a Heap free list corrupted !!! error. Powering off the device for 10 seconds, then powering it back on, clears the problem. NOTE This does not occur when using the supported initrd-flash mechanism for flashing the development kit. Do not try to use the direct flashing mechanism (the doflash.sh script) when flashing an Orin NX or Nano device using an NVMe drive for its rootfs storage.
  • The nvfancontrol daemon on the Orin Nano devkit may raise the fan to the highest speed and log warnings about Failed to open empty file! (Bad address). This should also be fixed with PR #1294.
  • There are still some known issues outstanding with the JetPack 5 integration. See this issues list for the current status. Any help with resolving these issues would be appreciated!

As of 08 Feb 2023, the master, langdale, and kirkstone branches support Jetson Linux R35.2.1 and JetPack 5.1. As of 16 Apr 2023, master and kirkstone have been updated to JetPack 5.1.1/L4T R35.3.1.

Changes from L4T R35.1.0/JetPack 5.0.2

See the release notes in the NVIDIA documentation for this release for information on new and updated features:

Machine changes

L4T R35.2.1 adds support for Jetson Orin NX 16GB production modules. A machine configuration for this module mounted on a Xavier NX development kit carrier board is available in the layer: p3509-a02-p3767-0000.conf.

BSP changes

Many of the BSP components have been updated. The significant new feature in this release is support for UEFI Secure Boot.

UEFI and OP-TEE are now built from source by default, rather than using the pre-built copies from the L4T package.

Kernel changes

No major updates to the kernel, which is still based on 5.10.104.

JetPack changes

  • Minor updates to CUDA packages
  • New versions of cuDNN, TensorRT, and VPI

DeepStream SDK

With JetPack 5.1, the DeepStream SDK has been updated to version 6.2.0-1. The recipes in meta-tegra-community have been updated accordingly for the SDK itself and the Python bindings.

Known Issues

There are some known issues with the JetPack 5 integration. See this issues list for the current status. Any help with resolving these issues would be appreciated!

NVIDIA issued an “overlay” to the L4T R35.2.1 kit to fix problems with Orin secure boot support (specifically, using SBKPKC signing + encryption). That overlay was integrated in the master and kirkstone branches.

As of 18 Sep 2022, the master and kirkstone branches support Jetson Linux R35.1.0 and JetPack 5.0.2 (rev. 1).

Please note that the official name for the BSP from NVIDIA is Jetson Linux, instead of Linux for Tegra. We’ll continue to use “L4T” as an abbreviation.

Changes from L4T R32.7.2/JetPack 4.6.2

This is a major update to L4T and JetPack which adds support for the Jetson AGX Orin modules and removes support for all other Jetson modules except for the Jetson AGX Xavier and Jetson Xavier NX series.

NVIDIA documentation:

Machine changes

The only Jetson modules supported in this release (and future releases) are:

  • Jetson AGX Orin series
  • Jetson AGX Xavier series
  • Jetson Xavier NX series

We also have a machine configuration for the Clara AGX development kit.

BSP changes

This is a major update to the BSP and is not compatible with the R32.x series of releases.

  • The cboot bootloader has been replaced by UEFI. You can use either NVIDIA’s prebuilt copy, or build it from source.
  • Boot logos/splash screens are now built into the UEFI bootloader, rather than being separately loaded. Recipes for custom boot logos have been removed from the layer.
  • The device tree plugin manager, which was a cboot feature, is no longer supported. To dynamically modify the device tree (e.g., for camera configuration), you must configure a device tree overlay instead.
  • Bootloader updating and redundancy behaves differently than with earlier L4T releases. See the L4T documentation for details.
  • The Linux kernel has been updated to 5.10.104. The repository used in the layer for this new kernel is here.
  • The trusty trusted OS has been replaced by OP-TEE.
  • Open-source display driver (AGX Orin series only).
  • The OpenMAX gstreamer plugin (gstreamer1.0-omx-tegra), which has been deprecated since L4T R32.1.0, is no longer supported.
  • Improved support for Wayland/Weston via the EGL-GBM backend interface.
  • Container support now defaults to a smaller set of libraries passed through from the host to the container.

Kernel changes

Besides the upgrade to the Linux 5.10 LTS kernel as a base, the new default kernel configuration for Jetson devices builds far more features and drivers as modules, rather than building them into the kernel itself. If you have recipes that depend on specific kernel features/drivers, you may need to add RRECOMMENDS settings to ensure that the necessary modules get installed into your rootfs image. The kernel configuration in the layer diverges slightly from the stock Jetson Linux configuration by building in a small number of drivers, rather than leaving them as modules.
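For instance, a recipe that depends on a driver now built as a module might pull it in like this; "kernel-module-foo" is a placeholder, not a real module package name:

```
# Example recipe fragment; replace "kernel-module-foo" with the actual
# kernel module package your software depends on
RRECOMMENDS:${PN} += "kernel-module-foo"
```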

JetPack changes

  • CUDA updated to 11.4.
  • New versions of cuDNN, TensorRT, VPI.
  • libvisionworks is no longer supported.

DeepStream SDK

With JetPack 5.0.2 (rev. 1), the DeepStream SDK has been updated to version 6.1.1-1. The recipes in meta-tegra-community have been updated accordingly for the SDK itself and the Python bindings.

Known Issues

There are some known issues with the JetPack 5 integration. See this issues list for the current status. Any help with resolving these issues would be appreciated!

In addition, while NVIDIA supplies sources for the OP-TEE trusted OS, recipes for building OP-TEE from source are not yet ready for merging.

As of 29 Nov 2024, the kirkstone-l4t-r32.7.x branch supports L4T R32.7.6/JetPack 4.6.6.

Changes from L4T R32.7.5/JetPack 4.6.5

NVIDIA documentation:

Content for this release is identical to L4T R32.7.5/JetPack 4.6.5, with the following updates:

  • Security fixes
  • Support for new DRAM part

Note: This is the final release of the L4T R32 series. See this NVIDIA developer forum post for the end-of-life announcement.

As of 30 Jun 2024, the kirkstone-l4t-r32.7.x branch supports L4T R32.7.5/JetPack 4.6.5.

Changes from L4T R32.7.4/JetPack 4.6.4

NVIDIA documentation:

Content for this release is identical to L4T R32.7.4/JetPack 4.6.4, with the following updates:

  • Kernel and driver patches
  • Boot firmware updates to support SKU 3 of the TX2-NX module, as well as recent BOM changes (see PCN references in the release notes)

As of 03 Jul 2023, the kirkstone-l4t-r32.7.x and dunfell branches support L4T R32.7.4/JetPack 4.6.4.

Changes from L4T R32.7.3/JetPack 4.6.3

NVIDIA documentation:

Content for this release is identical to L4T R32.7.3/JetPack 4.6.3, with the following updates:

  • Security fixes: NVIDIA security bulletin
  • Kernel update to v4.9.337 base (the last 4.9 LTS kernel release)
  • Minor updates to Jetson Multimedia API

As of 21 Jan 2023, the kirkstone-l4t-r32.7.x and dunfell branches support L4T R32.7.3/JetPack 4.6.3.

Changes from L4T R32.7.2/JetPack 4.6.2

NVIDIA documentation:

Content for this release is identical to L4T R32.7.2/JetPack 4.6.2, with the following updates:

As of 22 May 2022, the master, kirkstone, and dunfell branches support L4T R32.7.2/JetPack 4.6.2.

Changes from L4T R32.7.1/JetPack 4.6.1

NVIDIA documentation:

Content for this release is identical to L4T R32.7.1/JetPack 4.6.1, with the following updates:

As of 18 Mar 2022, the master branch supports L4T R32.7.1/JetPack 4.6.1. As of 08 Apr 2022, the dunfell branch also supports L4T R32.7.1/JetPack 4.6.1.

Changes from L4T R32.6.1/JetPack 4.6

NVIDIA documentation:

Machine changes

  • R32.7.1 adds support for the 64GB Jetson AGX Xavier and 16GB Jetson Xavier NX modules. These should be supported without modifying any machine configuration files, although (as of this writing) that has not been tested.

Flash layout file changes for t194 (Xavier) platforms

The flash layout XML files for the Jetson AGX Xavier Industrial module have been changed to add a second badpage partition. Since there is no support yet for that module in meta-tegra, this should not affect any current users.

Kernel updates

The security engine driver has been enhanced to provide a hwrng device to the kernel, which can be used with the rng-tools entropy gatherer (for t186/t194 only). Recipes to enable this on TX2 and Xavier platforms have been added to the meta-tegra-community repo.

Firmware loading issue

NVIDIA modified the kernel firmware loader in a way that causes significant delays with the default kernel configuration, which enables the long-deprecated userland helper interface for firmware loading. Disabling CONFIG_FW_LOADER_USER_HELPER and CONFIG_FW_LOADER_USER_HELPER_FALLBACK fixes the delay, and this is done by default in the kernel recipe.

U-Boot updates

The U-Boot in L4T R32.7.1 remains at U-Boot v2020.04, with added patches from NVIDIA. In this layer we continue to track the upstream OE-Core U-Boot recipes in each branch, porting NVIDIA patches to our fork of the upstream U-Boot repository.

Note that unlike NVIDIA, our U-Boot fork uses a separate build configuration for the SDcard-based Nano (sku 0000) and the eMMC production Nano (sku 0002), so patches related to supporting both in the same build have not been applied.

On t210 platforms, U-Boot is now responsible for loading the XUSB controller firmware; it is not loaded by cboot.

cboot updates

cboot on the t186/t194 platforms added a new “unified” A/B redundancy mode that associates a bootloader slot with a rootfs slot. For OE4T, this is equivalent to the “user” redundancy mode that we were already using.

JetPack updates

JetPack 4.6.1 includes the following updates:

  • VPI 1.2.3
  • TensorRT 8.2.1

DeepStream update

The DeepStream SDK has been updated from 6.0.0 to 6.0.1 for compatibility with the updated L4T/JetPack. The DeepStream recipes are in the meta-tegra-community repo.

As of 10 Aug 2021, the dunfell branch supports L4T R32.6.1/JetPack 4.6. Users of dunfell that wish to remain at L4T R32.3.1 should switch to the dunfell-l4t-r32.3.1 branch. As of 11 Aug 2021, the master branch was also updated to R32.6.1.

As of 15 Nov 2021, the dunfell, honister, and master branches have all been updated to R32.6.1 with the updated nvidia-l4t-multimedia and nvidia-l4t-multimedia-utils libraries that were released as part of JetPack 4.6 (rev 2) in October 2021. As of 17 Nov 2021, all JetPack 4.6 (rev 2) updates have been applied to the branches.

Changes from L4T R32.5.x/JetPack 4.5

NVIDIA documentation:

Machine changes

  • R32.6.1 adds support for the Jetson AGX Xavier industrial module. No machine .conf is present in the layer yet for this module.
  • Support for the Jetson AGX Xavier 8GB module has been removed.

Note that NVIDIA has announced that this L4T release will be the last to support the t210 (TX1/Nano) and t186 (TX2) platforms.

Flash layout file changes for t194 (Xavier) platforms

The flash layout XML files for the Xavier platforms in the L4T BSP have been modified to include new attributes (but no location/size changes). In particular, the xusb-fw partition is now marked oem_signed="true", as cboot now performs signature validation on the USB controller firmware.

Kernel updated to 4.9.253

The Linux kernel in R32.6.1 has been updated to a 4.9.253 base.

U-Boot updates

The U-Boot in L4T R32.6.1 remains at U-Boot v2020.04, with added patches from NVIDIA. In this layer we continue to track the upstream OE-Core U-Boot recipes in each branch, porting NVIDIA patches to our fork of the upstream U-Boot repository.

Note that unlike NVIDIA, our U-Boot fork uses a separate build configuration for the SDcard-based Nano (sku 0000) and the eMMC production Nano (sku 0002), so patches related to supporting both in the same build have not been applied.

cboot updates

cboot on the t186/t194 platforms added a new “unified” A/B redundancy mode that associates a bootloader slot with a rootfs slot. For OE4T, this is equivalent to the “user” redundancy mode that we were already using.

Flashing/signing tools updates

  • As mentioned above, the USB controller firmware is now signed on t194 platforms.
  • The PT (Tegra partition table) partition on t210 platforms now gets a 16-byte trailer added.
  • Bootloader update payloads (BUPs) have been changed to include more version information in the header.

Container support

  • The Jetson-specific container runtime (libnvidia-container-tools) was updated to version 0.10, which adds a mechanism for filtering out some of the pass-through mounts.

JetPack updates

JetPack 4.6 includes the following updates:

  • VPI 1.1
  • CUDA 10.2.300
  • TensorRT 8.0.1
  • cuDNN 8.2.1

JetPack 4.6 (rev2) was released in October 2021 with updates to support the DeepStream 6.0 SDK. (The DeepStream 6.0 SDK recipe has been integrated into the meta-tegra-community layer.)

As of 15 Jul 2021, the master branch supports L4T R32.5.2/JetPack 4.5.1 content for all current Jetson platforms.

Changes from L4T R32.5.1

This minor release includes fixes for issues covered in the associated NVIDIA security bulletin.

NVIDIA documentation:

See also the notes on L4T R32.5.1 and L4T R32.5.0.

As of 28 Feb 2021, the master branch supports L4T R32.5.1/JetPack 4.5.1 content for all current Jetson platforms.

Changes from L4T R32.5.0/JetPack 4.5

This minor release mainly adds support for the Jetson TX2-NX module.

NVIDIA documentation:

Jetson TX2-NX Module support

Support for the Jetson TX2-NX module, installed in the P3509 carrier board from a Jetson Xavier NX development kit, is added. The MACHINE name is jetson-xavier-nx-devkit-tx2-nx.

JetPack updates

JetPack 4.5.1 is mostly identical to 4.5.0. VPI (Vision Programming Interface) was updated to version 1.0.15.

SDK Manager no longer required

As of 24 Apr 2021, the host-side (x86-64) CUDA recipes have been updated to use package feeds that NVIDIA has added for them as well. SDK Manager is no longer required.

As of 03 Feb 2021, the master branch supports L4T R32.5.0/JetPack 4.5 content for all current Jetson platforms. This support was then ported to the dunfell-l4t-r32.5.0 branch as of 10 Feb 2021.

Changes from L4T R32.4.4/JetPack 4.4.1

NVIDIA documentation:

Machine name changes

Names of machine configuration files have changed to match the updated naming convention in L4T R32.5.0. The main change is that all machine names now include devkit, to make it clearer that they correspond to developer kits rather than bare modules. Machine names unchanged from the last release are jetson-nano-2gb-devkit, jetson-xavier-nx-devkit, and jetson-xavier-nx-devkit-emmc.

Old name              New name
jetson-tx1            jetson-tx1-devkit
jetson-tx2            jetson-tx2-devkit
jetson-tx2i           jetson-tx2-devkit-tx2i
jetson-tx2-4gb        jetson-tx2-devkit-4gb
jetson-nano-qspi-sd   jetson-nano-devkit
jetson-nano-emmc      jetson-nano-devkit-emmc
jetson-xavier         jetson-agx-xavier-devkit
jetson-xavier-8gb     jetson-agx-xavier-devkit-8gb

Flash layout file changes for Jetson Nano Developer Kit

On the Nano developer kit, all boot-related partitions have been relocated to the QSPI flash in R32.5.0. Only the rootfs partition gets written to the SDcard.

Kernel updated to 4.9.201

The Linux kernel in R32.5.0 has changed from being based on 4.9.140 to 4.9.201.

U-Boot updates

The U-Boot in L4T R32.5.0 changed from a base of U-Boot v2016.07 to U-Boot v2020.04. The meta-tegra layer switched to upstream U-Boot starting with the dunfell-l4t-r32.4.3 branch (and is at v2020.07 in gatesgarth and v2020.10 in master), but relevant patches added by NVIDIA for the L4T release have been ported to our branches.

JetPack updates

Only minor updates were made in JetPack 4.5, with little impact on recipes in meta-tegra.

VPI 1.0

VPI (Vision Programming Interface) went from “developer preview” to general release with version 1.0.12.

SDK Manager no longer required

As of 24 Apr 2021, the host-side (x86-64) CUDA recipes have been updated to use package feeds that NVIDIA has added for them as well. SDK Manager is no longer required.

New Features not implemented in meta-tegra

The following features have been added in L4T R32.5.0/JetPack 4.5 but have not (yet) been implemented in meta-tegra.

A/B rootfs redundancy

R32.5.0 introduced rootfs A/B redundancy for L4T, decoupled from the bootloader A/B redundancy feature (which was only available on TX2 and Xavier platforms).

Encrypted rootfs support

R32.5.0 added a mechanism for setting up the Ubuntu distribution to use LUKS to encrypt the rootfs (on TX2 and Xavier). The implementation is specific to the Ubuntu distro used for L4T/JetPack and the sample Trusty image NVIDIA provides. It was already possible to implement this feature with an OE/Yocto-based distro, so it’s unlikely to be needed in meta-tegra.

As of 30 Oct 2020, the master branch supports L4T R32.4.4/JetPack 4.4.1 content for Jetson TX1, Jetson TX2, Jetson Nano (including the Nano 2GB devkit), Jetson AGX Xavier, and Jetson Xavier NX.

Changes from L4T R32.4.3/JetPack 4.4

NVIDIA documentation:

Flash layout file changes for Tegra210 platforms

NVIDIA has updated their flashing tools in the Tegra210 BSP package so the id= field in the flash layout XML file is used as the partition number in the GPT of the eMMC/SDcard. They have updated their flash layouts to reflect that, with the APP partition appearing at the end of the file with id="1". Bear this in mind if you have customized flash layouts for your TX1 or Nano devices.
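Sketched as a layout-file fragment (illustrative only; real entries carry size, filename, and allocation attributes):

```xml
<!-- APP may now appear last in the XML file, but id="1"
     makes it GPT partition 1 on the eMMC/SDcard. -->
<partition name="APP" id="1" type="data">
  <!-- remaining child elements omitted -->
</partition>
```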

Kernel recipe using new branch

The kernel update into the patches-l4t-r32.4 branch did not merge cleanly, so the linux-tegra_4.9.bb recipe now points to a new oe4t-patches-l4t-r32.4 branch, which has our patches rebased on top of the R32.4.4 release from NVIDIA.

JetPack updates

None of the JetPack packages have changed in 4.4.1, except for a minor update to the Tegra MultiMedia API SDK. The update obsoletes a patch for an issue that was present in the R32.4.3/4.4 version of the SDK.

SDK Manager no longer required

As of 24 Apr 2021, the host-side (x86-64) CUDA recipes have been updated to use package feeds that NVIDIA has added for them as well. SDK Manager is no longer required.

As of 13 Jul 2020, the master and dunfell-l4t-r32.4.3 branches support L4T R32.4.3/JetPack 4.4 (GA) content for Jetson TX1, Jetson TX2, Jetson Nano, Jetson AGX Xavier, and Jetson Xavier NX.

Notable changes from R32.3.1 / JetPack 4.3

NVIDIA documentation:

U-Boot updated to v2020.04

NVIDIA has upstreamed all of their U-Boot changes, so the u-boot-tegra recipe is now based off the upstream U-Boot repository, instead of NVIDIA’s. NVIDIA has not yet created a separate U-Boot configuration for the Nano eMMC (sku 0002) module, so patches have been added for it, as was done for R32.3.1.

Note that with L4T R32.4.3, NVIDIA has defined a region of the eMMC boot1 block (or QSPI flash on platforms that use it) for storing the U-Boot environment block. The u-boot-tegra sources have been changed to use the NVIDIA-defined location and size for that region, which differs from prior versions.

CUDA 10.2

JetPack 4.4 updates CUDA to version 10.2, which is compatible with GCC 8. Recipes for building the GCC 8 toolchain have been added to the meta-tegra/contrib layer.

Fewer SDK Manager downloads required

With NVIDIA now providing direct package feeds for their L4T/JetPack OTA updates, recipes have been updated to use those feeds where possible. Originally, the host-side CUDA toolkit still had to be downloaded using SDK Manager; as of 24 Apr 2021, however, the host-side (x86-64) CUDA recipes have also been updated to use package feeds that NVIDIA added for them, so SDK Manager is no longer required.

Other Notes

CUDA host tools

If you ran the SDK Manager on Ubuntu 16.04 to download the CUDA host-side tools, you should add the following setting to your build configuration:

CUDA_BINARIES_x86-64 = "cuda-binaries-ubuntu1604"

By default, the recipes assume you used Ubuntu 18.04 and reference that version of the CUDA host-side tools.

Kernel defconfig file removed

The kernel (linux-tegra) recipe has been changed to generate the default configuration from the arch/arm64/configs/tegra_defconfig file in the source tree, rather than including the full kernel configuration as a defconfig file. If you have a customized kernel configuration and were overriding the default configuration by supplying your own defconfig file, you will either need to convert your modifications into config fragment files (see the YP Linux Kernel Dev Manual for documentation), or use a .bbappend file to add your defconfig file back into the SRC_URI.
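For example, a small change such as enabling an extra driver can be carried as a config fragment added through a bbappend in your own layer (the file and option names below are hypothetical; the override syntax matches the dunfell-era releases this note applies to):

```bitbake
# linux-tegra_%.bbappend -- hypothetical example
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"

# enable-mydriver.cfg is a kernel config fragment containing only
# the options you change, e.g.:
#   CONFIG_MY_DRIVER=m
SRC_URI += "file://enable-mydriver.cfg"
```

To keep using a full defconfig instead, a similar bbappend can add file://defconfig back into SRC_URI, as described above.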

Tegraflash default packaging change

The tegraflash image type now generates a compressed tarball (.tegraflash.tar.gz) by default instead of a ZIP package (.tegraflash.zip), to better utilize sparse file support. See this page for more information.
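The sparse-file benefit can be seen with a quick experiment (illustrative; the file name is made up and unrelated to an actual build):

```shell
# GNU tar's --sparse (-S) stores holes compactly, which matters for
# large, mostly-empty rootfs images inside the tegraflash package.
cd "$(mktemp -d)"
truncate -s 100M rootfs.img          # 100MB sparse file, ~0 bytes on disk
tar -czSf pkg.tar.gz rootfs.img      # -S: detect and skip the holes
stat -c '%s bytes' pkg.tar.gz        # archive is only a few KB
```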

As of 30 Apr 2020, the master and dunfell-l4t-r32.4.2 branches support L4T R32.4.2/JetPack 4.4 Developer Preview content for Jetson TX1, Jetson TX2, Jetson Nano, and Jetson AGX Xavier. Experimental support for Jetson Xavier NX (eMMC module in Jetson Nano B01 carrier) is also present.

As of 12 Jul 2020, L4T R32.4.3/JetPack 4.4 GA is available on the master branch. R32.4.2/4.4 DP should not be used for new development.

Notable changes from R32.3.1 / JetPack 4.3

U-Boot updated to v2020.04

NVIDIA has upstreamed all of their U-Boot changes, so the u-boot-tegra recipe is now based off the upstream U-Boot repository, instead of NVIDIA’s. NVIDIA has not yet created a separate U-Boot configuration for the Nano eMMC (sku 0002) module, so patches have been added for it, as was done for R32.3.1.

CUDA 10.2

JetPack 4.4 DP updates CUDA to version 10.2, which is compatible with GCC 8. Recipes for building the GCC 8 toolchain have been added to the meta-tegra/contrib layer.

Fewer SDK Manager downloads required

With NVIDIA now providing direct package feeds for their L4T/JetPack OTA updates, recipes have been updated to use those feeds where possible. The host-side CUDA toolkit must still be downloaded using the SDK Manager, as before.

Other Notes

CUDA host tools

If you ran the SDK Manager on Ubuntu 16.04 to download the CUDA host-side tools, you should add the following setting to your build configuration:

CUDA_BINARIES_x86-64 = "cuda-binaries-ubuntu1604"

By default, the recipes assume you used Ubuntu 18.04 and reference that version of the CUDA host-side tools.

Kernel defconfig file removed

The kernel (linux-tegra) recipe has been changed to generate the default configuration from the arch/arm64/configs/tegra_defconfig file in the source tree, rather than including the full kernel configuration as a defconfig file. If you have a customized kernel configuration and were overriding the default configuration by supplying your own defconfig file, you will either need to convert your modifications into config fragment files (see the YP Linux Kernel Dev Manual for documentation), or use a .bbappend file to add your defconfig file back into the SRC_URI.

As of 01 Aug 2021, the dunfell-l4t-r32.3.1 branch was created off the last L4T R32.3.1-based commit into the dunfell branch, for users wishing to continue with the older BSP. The dunfell branch has been updated with L4T R32.6.1/JetPack 4.6.

As of 26 Apr 2020, the dunfell and zeus-l4t-r32.3.1 branches support L4T R32.3.1/JetPack 4.3 content for Jetson TX1, Jetson TX2, Jetson Nano, and Jetson AGX Xavier. (There is also thud-l4t-r32.3.1, but it is not actively maintained.)

Notable changes from R32.2.x

There are several changes in this version of L4T that required updates to Tegra platform support in this layer.

Bootloader update support

Support for bootloader updates has been added to tegra210 (Jetson-TX1 and Jetson Nano) platforms. The tegra186-redundant-boot recipe has been renamed to tegra-redundant-boot, which installs the l4t_payloader_updater_t210 script on tegra210 platforms. Note that bootloader redundancy on tegra210 is different from tegra186/tegra194 (for example, no A/B slots with failover). See the Bootloader chapter of the L4T documentation for details.

With the wider support for bootloader updates and more module variants that may need different boot-time configuration files, BUP payload packages now support all variants for a MACHINE. Also added is a service that runs at boot time to populate the TNSPEC field of the /etc/nv_boot_control.conf file based on the contents of the EEPROM on the module, so the update tools can select the correct files out of the BUP payload for the specific module in the system. This differs from stock L4T, where the configuration file is written into the rootfs after the module’s EEPROM has been read during the flashing process, but should result in the same TNSPEC as would be present after using L4T’s flash.sh.

Tegra Multimedia API

As of L4T R32.3.1, NVIDIA has stopped providing the Tegra Multimedia API kit with the BSP, so if you need the Multimedia API in your builds, you must download the kit to your NVIDIA_DEVNET_MIRROR directory.

The NVIDIA-specific OpenGL extension header files that used to be extracted from the Multimedia API kit are now obtained from the graphics demos source package in the L4T BSP.

Jetson Nano MACHINE rename

The MACHINE name for the original Jetson Nano developer kit (using SPI flash and an SDcard) has been changed from jetson-nano to jetson-nano-qspi-sd. This aligns with NVIDIA’s naming and will make it easier to distinguish between the older kit and the upcoming newer kit based on the 0002 SKU that uses eMMC.

Single flash layout file for Jetson Nano

Builds for jetson-nano-qspi-sd now use only the unified SPI+SDcard flash layout XML file (flash_l4t_t210_spi_sd_p3448.xml), as this layout file has been updated for compatibility with bootloader updates.

Also changed are the workflows for flashing and creating SDcard images for the Nano. The tegraflash.zip package now includes two shell scripts: doflash.sh for flashing via USB (which now flashes both the QSPI flash and an SDcard mounted on the device), and dosdcard.sh for either creating a file containing an SDcard image or writing directly to an SDcard mounted on your development host.

TensorRT packaging change

TensorRT 6.0.1 has a more complicated packaging layout than prior versions, but has the same issue as prior versions where NVIDIA uses the exact same .deb package names for the Xavier-specific packages and the non-Xavier packages. To make it clearer which packages are which, the tensorrt recipe looks for the Xavier-specific packages in ${NVIDIA_DEVNET_MIRROR}/DLA and the non-Xavier packages in ${NVIDIA_DEVNET_MIRROR}/NoDLA. You must move the packages yourself once you have downloaded them using SDK Manager.

Example, for Xavier:

  $ cd ~/Downloads/nvidia/sdkm_downloads
  $ mkdir DLA
  $ mv tensorrt*.deb *libnvinfer*.deb libnv*parsers*.deb uff*.deb graphsurgeon*.deb DLA/

Example, for all other platforms:

  $ cd ~/Downloads/nvidia/sdkm_downloads
  $ mkdir NoDLA
  $ mv tensorrt*.deb *libnvinfer*.deb libnv*parsers*.deb uff*.deb graphsurgeon*.deb NoDLA/

Other notes

The following notes from prior releases also apply.

SDK Manager downloads required

JetPack 4.3 content cannot be downloaded anonymously from NVIDIA’s servers. You must use NVIDIA SDK Manager to download the JetPack 4.3 Debian packages to your build host, then add this setting to your build configuration (e.g., in conf/local.conf under your build directory):

NVIDIA_DEVNET_MIRROR = "file://path/to/downloads"

By default, the SDK Manager downloads to a directory called Downloads/nvidia/sdkm_downloads under your $HOME directory, so use that path in the above setting.
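For example, for a build user named builder with the default download location (the path shown is illustrative):

```bitbake
# conf/local.conf -- example, assuming SDK Manager's default download path
NVIDIA_DEVNET_MIRROR = "file:///home/builder/Downloads/nvidia/sdkm_downloads"
```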

CUDA host tools

If you ran the SDK Manager on Ubuntu 16.04 to download the JetPack packages, you should add the following setting to your build configuration:

CUDA_BINARIES_NATIVE = "cuda-binaries-ubuntu1604-native"

By default, the recipes assume you used Ubuntu 18.04 and reference that version of the CUDA host-side tools.

L4T-R32.2.3-Notes

As of 23 Nov 2019, the master and zeus branches support L4T R32.2.3/JetPack 4.2.3 content for Jetson TX1, Jetson TX2, Jetson Nano, and Jetson AGX Xavier.

Please Note

The JetPack 4.2.3 content cannot be downloaded anonymously from NVIDIA’s servers. You must use NVIDIA SDK Manager to download the JetPack 4.2.3 Debian packages to your build host, then add this setting to your build configuration (e.g., in conf/local.conf under your build directory):

NVIDIA_DEVNET_MIRROR = "file://path/to/downloads"

By default, the SDK Manager downloads to a directory called Downloads/nvidia/sdkm_downloads under your $HOME directory, so use that path in the above setting.

If you are building TensorRT for Jetson AGX Xavier, you must create a subdirectory called P2888 under your ${NVIDIA_DEVNET_MIRROR} directory and copy the Xavier-specific tensorrt deb files there. The TensorRT deb files for Jetson TX1/TX2 should remain in the main ${NVIDIA_DEVNET_MIRROR} directory. NVIDIA uses the exact same names for the two sets of deb packages, even though the content for Xavier is different from the other platforms.
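An example of the required layout, analogous to the DLA/NoDLA split used in later releases (run in a scratch directory here; the .deb names are stand-ins for the actual files SDK Manager downloaded):

```shell
cd "$(mktemp -d)"   # stand-in for ~/Downloads/nvidia/sdkm_downloads
touch tensorrt_5.0.6-1+cuda10.0_arm64.deb \
      libnvinfer5_5.0.6-1+cuda10.0_arm64.deb   # placeholder downloads
mkdir P2888
# Move the Xavier-specific TensorRT debs into P2888; TX1/TX2 debs stay put.
mv tensorrt*.deb *libnvinfer*.deb P2888/
ls P2888
```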

If you ran the SDK Manager on Ubuntu 16.04 to download the JetPack packages, you should add the following setting to your build configuration:

CUDA_BINARIES_NATIVE = "cuda-binaries-ubuntu1604-native"

By default, the recipes assume you used Ubuntu 18.04 and reference that version of the CUDA host-side tools.

The L4T BSP files (driver package, sources for gstreamer plugins, etc.) are accessible without a DevNet account, so if your builds only require the BSP, you don’t have to go through these extra steps.

L4T-R32.2.1-Notes

As of 01 Sep 2019, the master branch supports L4T R32.2.1/JetPack 4.2.2 content for Jetson TX1, Jetson TX2, Jetson Nano, and Jetson AGX Xavier.

Please Note

The JetPack 4.2.2 content cannot be downloaded anonymously from NVIDIA’s servers. You must use NVIDIA SDK Manager to download the JetPack 4.2.2 Debian packages to your build host, then add this setting to your build configuration (e.g., in conf/local.conf under your build directory):

NVIDIA_DEVNET_MIRROR = "file://path/to/downloads"

By default, the SDK Manager downloads to a directory called Downloads/nvidia/sdkm_downloads under your $HOME directory, so use that path in the above setting.

If you are building TensorRT for Jetson AGX Xavier, you must create a subdirectory called P2888 under your ${NVIDIA_DEVNET_MIRROR} directory and copy the Xavier-specific tensorrt deb files there. The TensorRT deb files for Jetson TX1/TX2 should remain in the main ${NVIDIA_DEVNET_MIRROR} directory. NVIDIA uses the exact same names for the two sets of deb packages, even though the content for Xavier is different from the other platforms.

If you ran the SDK Manager on Ubuntu 16.04 to download the JetPack packages, you should add the following setting to your build configuration:

CUDA_BINARIES_NATIVE = "cuda-binaries-ubuntu1604-native"

By default, the recipes assume you used Ubuntu 18.04 and reference that version of the CUDA host-side tools.

The L4T BSP files (driver package, sources for gstreamer plugins, etc.) are accessible without a DevNet account, so if your builds only require the BSP, you don’t have to go through these extra steps.

L4T-R32.2.0-Notes

As of 18 Aug 2019, the master and warrior-l4t-r32.2 branches support L4T R32.2.0/JetPack 4.2.1 content for Jetson TX1, Jetson TX2, Jetson Nano, and Jetson AGX Xavier.

Please Note

The JetPack 4.2.1 content cannot be downloaded anonymously from NVIDIA’s servers. You must use NVIDIA SDK Manager to download the JetPack 4.2.1 Debian packages to your build host, then add this setting to your build configuration (e.g., in conf/local.conf under your build directory):

NVIDIA_DEVNET_MIRROR = "file://path/to/downloads"

By default, the SDK Manager downloads to a directory called Downloads/nvidia/sdkm_downloads under your $HOME directory, so use that path in the above setting.

If you are building TensorRT for Jetson AGX Xavier, you must create a subdirectory called P2888 under your ${NVIDIA_DEVNET_MIRROR} directory and copy the Xavier-specific tensorrt deb files there. The TensorRT deb files for Jetson TX1/TX2 should remain in the main ${NVIDIA_DEVNET_MIRROR} directory. NVIDIA uses the exact same names for the two sets of deb packages, even though the content for Xavier is different from the other platforms.

If you ran the SDK Manager on Ubuntu 16.04 to download the JetPack packages, you should add the following setting to your build configuration:

CUDA_BINARIES_NATIVE = "cuda-binaries-ubuntu1604-native"

By default, the recipes assume you used Ubuntu 18.04 and reference that version of the CUDA host-side tools.

The L4T BSP files (driver package, sources for gstreamer plugins, etc.) are accessible without a DevNet account, so if your builds only require the BSP, you don’t have to go through these extra steps.

L4T-R32.1.0-Notes

As of 13 Apr 2019, the master and warrior branches support L4T R32.1.0/JetPack 4.2 content for Jetson TX1, Jetson TX2, Jetson Nano, and Jetson AGX Xavier.

Please Note

The JetPack 4.2 content cannot be downloaded anonymously from NVIDIA’s servers. You must use NVIDIA SDK Manager to download the JetPack 4.2 Debian packages to your build host, then add this setting to your build configuration (e.g., in conf/local.conf under your build directory):

NVIDIA_DEVNET_MIRROR = "file://path/to/downloads"

By default, the SDK Manager downloads to a directory called Downloads/nvidia/sdkm_downloads under your $HOME directory, so use that path in the above setting.

The L4T BSP files (driver package, sources for gstreamer plugins, etc.) are accessible without a DevNet account, so if your builds only require the BSP, you don’t have to go through these extra steps.

Jetson TX1 Notes

NVIDIA does not support Jetson TX1 with L4T R32.1.0, but it does appear to work with this version of the BSP. For production use of Jetson TX1, you should stick with L4T R28.x BSP releases.

L4T-R28.3-Notes

As of 10 Apr 2019, the thud-l4t-r28.3 branch supports L4T R28.3.0/JetPack 3.3 content for Jetson TX1 and Jetson TX2.

L4T-R28.2-Notes

As of 25 Aug 2018, the rocko-l4t-r28.2, sumo and master branches all support the Jetson TX1 with L4T R28.2 and the Jetson TX2 with L4T R28.2.1.

L4T-R28.1-notes

As of 30 Jul 2017, the pyro-l4t-r28.1 branch supports Jetson TX1 and Jetson TX2 builds using L4T R28.1.

Only minimal testing has been performed so far, but core-image-sato images build, and most functionality tested appears to work OK.

Note that with R28.1, the Jetson TX1 boot sequence has been changed to include cboot in the chain of bootloaders, similar to the Jetson TX2. cboot loads U-Boot from the LNX partition in the eMMC.

meta-tegra Wiki Redirects

If you are reading this page you have been redirected from the legacy OE4T wiki pages.

Content in the wiki has been replaced by the content of the mdbook here.

Documentation titles and markdown page names generally track previous wiki page names.

The master branch content is generally the most up-to-date and is typically the branch to use. In some cases, however, there may be branch-specific content for a wiki page.