OpenEmbedded/Yocto BSP layer for NVIDIA Jetson Modules
Jetson Linux release: R36.4.4
JetPack release: 6.2.1
Boards supported:
- Jetson AGX Orin development kit
- Jetson Orin NX 16GB (p3767-0000) in Xavier NX (p3509) carrier
- Jetson Orin NX 16GB (p3767-0000) in Orin Nano (p3768) carrier
- Jetson Orin Nano development kit
- Jetson AGX Orin Industrial 64GB (P3701-0008) in Orin AGX (P3737) carrier
This layer depends on:
URI: git://git.openembedded.org/openembedded-core
branch: master
LAYERSERIES_COMPAT: whinlatter
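As a quick orientation (a sketch, not part of the upstream README; the checkout path is a placeholder and the machine name is just one of the supported boards listed above), a typical way to enable the layer in an existing OE-Core build is:
$ bitbake-layers add-layer /path/to/meta-tegra
and then select your target in conf/local.conf, for example:
MACHINE = "jetson-agx-orin-devkit"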
CUDA toolchain compatibility note
CUDA 12.6 supports up through gcc 13.2 only, so recipes are included
for adding the gcc 13 toolchain to the build for CUDA use, and cuda.bbclass
has been updated to pass the g++ 13 compiler to nvcc for CUDA code compilation.
Getting Help
For general build issues or questions about getting started with your build setup please use the Discussions tab of the meta-tegra repository:
- Use the Ideas category for anything you’d like to see included in meta-tegra, Wiki content, or the tegra-demo-distro.
- Use the Q&A category for questions about how to build or modify your Tegra target based on the content here.
- Use the “Show and Tell” category for any projects you’d like to share which are related to meta-tegra.
- Use the General category for anything that doesn’t fit well into the categories above, and which doesn’t relate to a build or runtime issue with Tegra Yocto builds.
Reporting Issues
Use the Issues tab in meta-tegra for reporting build or runtime issues with Tegra yocto build targets. When reporting build or runtime issues, please include as much information about your environment as you can. For example, the target hardware you are building for, branch/version information, etc. Please fill in the provided bug template when reporting issues.
We are required to provide an e-mail address, but please use GitHub as described above, instead of sending e-mail to oe4t-questions@madison.systems.
Contributing
Please see CONTRIBUTING.md for information on submitting patches to the maintainers.
Contributions are welcome!
This total-beginner’s guide will walk you through the process of flashing a newly-generated image to your Jetson development kit! The instructions here are for branches based off L4T R32.4.3 and later. (For earlier releases, click the revisions count, under the title, to go back to an earlier revision of the page.)
Initrd Flashing
For branches based off L4T R35.1.0 and later (master, kirkstone, and langdale), and the kirkstone-l4t-r32.7.x branch, an alternative flashing process (called “initrd flashing”) is available, which supports flashing to a rootfs (APP partition) on an external storage device. See this page for more information.
The table below outlines the flashing mechanism(s) supported, depending on the target root filesystem storage, for all recent branches (kirkstone-l4t-r32.7.x and later).
| Target Rootfs Storage | Flashing method |
|---|---|
| on-board eMMC | doflash.sh or initrd-flash |
| SDCard | doflash.sh or initrd-flash. dosdcard.sh may be used for subsequent programming after initial bootloader programming with doflash.sh or initrd-flash. |
| NVMe | initrd-flash |
| M.2 drive or SATA drive | initrd-flash |
Prerequisites
Before you get started, you’ll need the following:
- A suitable USB cable. For most Jetsons, this is a type A to micro-B cable, but for the AGX Xavier and AGX Orin dev kits, you’ll need a USB-C cable (or a USB-C to type A cable, if your development host does not have USB-C ports). As NVIDIA mentions in their documentation, it’s important to use a good-quality cable for successful flashing.
- A free USB port on your development machine. The flashing tools work best if you can connect directly to a port on your system, rather than using a USB hub.
- For L4T R32.5.0 and later, you must have the dtc command in your PATH, since the NVIDIA tools use that command when preparing the boot files for some of the Jetsons. On Ubuntu systems, that command is provided by the device-tree-compiler package.
- For L4T R35 and later, you must have the GNU cpp command in your PATH (and not the LLVM/Clang cpp; see #1959).
While not required, a serial console connection is very useful, particularly when troubleshooting flashing problems, since the bootloaders only write messages to the serial console.
Please note, also, that flashing typically does not work from a virtual machine. You should be running the flashing tools directly on a Linux host.
For SDcard-based development kits
If you have a Jetson Nano or Jetson Xavier NX development kit, you’ll need a good-quality MicroSDHC/SDXC card, preferably 16GB or larger. Higher-speed cards (at least UHS-I) are preferred, particularly if you plan to program the SDcard through an SDcard reader/writer on your development host. The reader/writer should be high-speed also, and connected through a high-speed I/O interface (e.g., USB 3.1).
Programming an SDcard in a reader/writer attached to your host is also faster (much faster) if you have the bmaptool command in your PATH. On Ubuntu systems, that command is provided by the bmap-tools package. (But note that bmaptool requires sudo.)
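If you do have bmaptool available, writing a previously created image to a card looks roughly like this (a sketch; the image filename and /dev/sdX device are placeholders, and bmaptool falls back to a plain full copy if no .bmap file is present):
$ sudo bmaptool copy my-image.sdcard /dev/sdX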
The Jetson AGX Xavier development kit also supports booting from a MicroSD card instead of the on-board eMMC, with some limitations.
Avoiding sudo
You can avoid using sudo during the flashing/SDcard writing process (except for using bmaptool, as noted above)
by adding yourself to suitable groups and installing a udev rules file to give yourself access to the Jetsons via
USB. The following instructions are for Ubuntu; other distros may have other groups or require additional setup.
- For SDcard writing, add yourself to group disk.
- For USB flashing, add yourself to group plugdev. (A sample command covering both groups follows below.)
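A minimal sketch of the corresponding group change on Ubuntu (log out and back in, or reboot, for it to take effect):
$ sudo usermod -a -G disk,plugdev $USER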
You can use this script to
install the udev rules that grant the plugdev group write access to the Jetson devices when they are connected
in recovery mode to your development host.
Note that after changing your group membership and/or udev rules, you may need to reboot your development
host for the changes to take effect. It’s worth this extra setup, though, to eliminate the need for root access.
Building a tegraflash package
All of the Jetson machine configurations add a tegraflash image type by default, which generates a compressed tarball
containing all of the files, tools, and scripts for flashing the device and/or creating a fully-populated SDcard. If you’ve
successfully run a bitbake build of an image, you should see a file called
<image-type>-${MACHINE}.tegraflash.tar.gz
or, in more recent branches,
<image-type>-${MACHINE}.rootfs.tegraflash.tar.<compression>
in the directory $BUILDDIR/tmp/deploy/images/${MACHINE}, where <compression> could be either gz or zst, depending on the branch you are using (zstd replaced gzip as the default compression method in Feb 2025).
Using an SDcard with the Jetson AGX Xavier
By default, the tegraflash package for the AGX Xavier is set up for flashing the on-board eMMC. If you want to
boot your Xavier off an SDcard instead, you should add the following to your build configuration (e.g., in
$BUILDDIR/conf/local.conf):
TEGRA_ROOTFS_AND_KERNEL_ON_SDCARD = "1"
ROOTFSPART_SIZE = "15032385536"
The ROOTFSPART_SIZE setting is for a 16GB SDcard; adjust the size as needed for a larger or smaller card.
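For reference (my own note, not from the original page), the value above is 14 GiB expressed in bytes, which leaves some headroom on a 16GB card; a quick way to compute a different size:
$ echo $(( 14 * 1024 * 1024 * 1024 ))
15032385536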
With these settings in place, the resulting tegraflash package supports flashing the bootloader files to
the on-board eMMC and placing the kernel, device tree, and rootfs on the SDcard. Note that this is only supported
for the Jetson AGX Xavier, and that SDcard booting does not support the bootloader redundancy features.
With this configuration, there will be two scripts in the tegraflash package: dosdcard.sh for writing the
SDcard, and doflash.sh for flashing the bootloader partitions to the eMMC. Run the dosdcard.sh script to
format and write the SDcard on your development host, insert the SDcard into the slot on the AGX Xavier dev kit,
then use the doflash.sh script to flash the bootloader partitions. (Unlike for Xavier NX devices, you must perform
these steps separately.)
Unpacking the tegraflash package
To flash your Jetson, or create an SDcard image, create an empty directory and use the tar command to
unpack the tegraflash package into it:
$ mkdir ~/tegraflash
$ cd ~/tegraflash
$ tar -x -f $BUILDDIR/tmp/deploy/images/${MACHINE}/<image-type>-${MACHINE}.tegraflash.tar.gz
Be sure to use the tar command from a terminal window. Some users have reported issues with incorrect
results when extracting files using GUI-based tools.
Setting up for flashing
- Start with your Jetson powered off. (NVIDIA recommends connecting hardware only while the device is powered off.)
- Connect the USB cable from your Jetson to your development host.
- Insert an SDcard into the slot on the module, if needed.
- Power on the Jetson and put it into recovery mode.
For SDcard-based Jetsons (Nano and Xavier NX), you have the option of programming the SDcard contents either during USB flashing or separately using an SDcard reader/writer on your development host. If you program the SDcard separately, perform that step first and insert the already-programmed card into the slot on the module in step 3 above. (When using an SDcard with the AGX Xavier, you must pre-program the SDcard first.)
To verify that the device is in recovery mode and that the USB cable is connected properly, use the following command:
$ lsusb -d 0955:
Bus 001 Device 006: ID 0955:7c18 NVIDIA Corp. T186 [TX2 Tegra Parker] recovery mode
If you don’t see your Jetson listed, double-check the cable and try the recovery mode sequence again.
Recovery mode jumpers and buttons
The different Jetson development kits have different mechanisms for entering recovery mode.
Jetson TX1 and TX2 development kits
Press and hold the REC (“recovery”) button, press and release the RST (“reset”) button. Continue to hold the REC button for 3-4 seconds, then release. [[images/TX1-TX2-Devkit-RecoveryMode-Button.jpg|alt=TX1-TX2 buttons]]
Jetson AGX Xavier development kit
Press and hold the center button, and press and release the reset button (on the right). [[images/AGX-Xavier-RecoveryMode-Button.jpg|alt=AGX Xavier buttons]]
Jetson Orin development kit
Press and hold the center button. Then plug in the power supply. Release the center button. Note that it can take 10-15 seconds for the device to fully enter recovery mode and export its serial console after power up.
All Jetson Nano, Xavier NX development kits
Connect a jumper between the 3rd and 4th pins from the right hand side of “button header” underneath the back of the module (FRC and GND; see the labeling on the underside of the carrier board). The module will power up in recovery mode automatically. [[images/Nano-NX-RecoveryMode-Jumper.jpg|alt=Nano-Xavier pins]]
For the older Jetson Nano rev A02 carrier boards, the FRC pin is in the 8-pin header next to
the module, beside the MIPI-CSI camera interface. The pins are labeled on the underside of
the carrier board.

Writing an SDcard
If you want to program the SDcard contents directly onto the card from your development host:
- Insert the card into the reader/writer on your host.
- Carefully determine the device name for the card. Using the wrong device name could destroy your host’s filesystems.
- Run the dosdcard.sh script to program the card.
Here is an example, for a system where /dev/sda is the device name of the card:
$ ./dosdcard.sh /dev/sda
Remember to use sudo, if needed. The script will ask you to confirm before writing (which you can
skip by adding -y to the command above).
Creating an SDcard image
You can also create an SDcard image file that can later be written to one or more cards:
$ ./dosdcard.sh <filename>
The resulting file will be quite large, and writing the image can take a long time.
SPI flash on SDcard-based kits
The SDcard-based development kits store some (in some cases, all) of the bootloader content on a SPI flash device on the Jetson module. You must ensure that the bootloader content in this flash device is compatible with the layout on the SDcard you create, since the early-stage boot data is programmed with the locations/sizes of SDcard-resident partitions, and cannot read the GPT partition table at runtime. To do this, you must perform a USB flash to program the SPI flash at least once on your development kit, by following the steps in the next section.
Once the SPI flash has been programmed correctly, you should be able to update just by writing new SDcard images unless you make changes in your build that affect one of the boot-related partitions residing in the SPI flash, or change the flash layout XML in a way that alters the location/size of one of the SDcard-resident boot partitions (if there are any).
Flashing the Jetson
Once everything is set up, use the doflash.sh script to program the Jetson:
$ ./doflash.sh
Remember to use sudo to invoke the script, if needed. If successful, the Jetson will be rebooted
into your just-built image automatically after flashing is complete.
For SDcard-based development kits, you can program just the boot partitions in the SPI flash with:
$ ./doflash.sh --spi-only
You should insert your programmed SDcard in the slot on the Jetson before performing this step, so when the Jetson reboots after the flashing process completes, it will boot into your image.
Automating Unpack and Flash Steps
You can use this script if desired to automate the steps associated with unpacking and running the ./doflash.sh script for tegraflashing.
Issues during flashing
If you run sudo ./doflash.sh and flashing starts but then hangs at some step, like:
[ 1.7586 ] Flashing the device
[ 1.7611 ] tegradevflash --pt flash.xml.bin --storageinfo storage_info.bin --create
[ 1.7636 ] Cboot version 00.01.0000
[ 1.7659 ] Writing partition GPT with gpt.bin
[ 1.7666 ] [................................................] 100%
[ 1.7707 ] Writing partition PT with flash.xml.bin
[ 15.9892 ] [................................................] 100%
[ 15.9937 ] Writing partition NVC with nvtboot.bin.encrypt
[ 16.2433 ] [................................................] 100%
[ 16.2569 ] Writing partition NVC_R with nvtboot.bin.encrypt
[ 26.2706 ] [................................................] 100%
[ 26.2877 ] Writing partition VER_b with jetson-nano-qspi-sd_bootblob_ver.txt
[ 36.3103 ] [................................................] 100%
[ 36.3202 ] Writing partition VER with jetson-nano-qspi-sd_bootblob_ver.txt
[ 36.5833 ] [................................................] 100%
[ 36.5927 ] Writing partition APP with test-image.ext4.img
[ 36.8548 ] [................................................] 100%
or if it fails with output like the following:
[ 1.9394 ] 00000007: Written less bytes than expected
[ 21.7219 ]
Error: Return value 7
Command tegradevflash --pt flash.xml.bin --storageinfo storage_info.bin --create
It’s helpful to connect a serial console, which in the above case will print something like:
[0020.161] device_write_gpt: Erasing boot device spiflash0
[0039.824] Erasing Storage Device
[0039.827] Writing protective mbr
[0039.833] Error in command_complete 18003 int_status
[0039.840] Error in command_complete 18003 int_status
[0039.847] Error in command_complete 18003 int_status
[0039.852] sending the command failed 0xffffffec in sdmmc_send_command at 109
[0039.859] switch command send failed 0xffffffec in sdmmc_send_switch_command at 470
[0039.866] switch cmd send failed 0xffffffec in sdmmc_select_access_region at 1301
[0039.876] Error in command_complete 18001 int_status
[0039.883] Error in command_complete 18001 int_status
[0039.890] Error in command_complete 18001 int_status
[0039.895] sending the command failed 0xffffffec in sdmmc_send_command at 109
[0039.902] setting block length failed 0xffffffec in sdmmc_block_io at 945
[0039.909] block I/O failed 0xffffffec in sdmmc_io at 1215
[0039.914] block write failed 0xffffffec in sdmmc_bdev_write_block at 178
[0039.921] device_write_gpt: failed to write protective mbr
[0039.926] Number of bytes written -20
[0039.930] Written less bytes than expected with error 0x7
[0039.935] Write command failed for GPT partition
Things to try:
- The USB cable must be plugged directly into the PC host (don’t use a USB hub, otherwise issues like those described above will appear).
- Verify the USB cable quality (try another one).
- Power the device off and on, and try flashing again.
General Tegraflash Troubleshooting
See Tegraflash-Troubleshooting
Notes on extending support for flashing Jetson devices that boot from external storage media (NVMe, USB).
Last update: 25 Jul 2025
This is currently supported on branches based off JetPack 5/L4T R35 or later, and kirkstone-l4t-r32.7.x. For R32.7.x, there is support for T210 (TX1/Nano) as well as T186 (TX2) and T194 (Xavier) targets.
Prerequisites
Beyond the normal host tools required for building and normal flashing, you should also have these commands available on your build host:
- sgdisk (from the gdisk/gptfdisk package)
- udisksctl (part of the udisks2 package)
You should disable automatic mounting of removable media in your desktop settings. On recent Ubuntu (GNOME), go to Settings -> Removable Media, and check the box next to “Never prompt or start programs on media insertion.” You may also need to update the /org/gnome/desktop/media-handling/automount setting via dconf. Check the setting with:
$ dconf read /org/gnome/desktop/media-handling/automount
If it reports true, set it with:
$ dconf write /org/gnome/desktop/media-handling/automount false
For Ubuntu 24.04, use gsettings, and also disable automount-open:
$ gsettings set org.gnome.desktop.media-handling automount false
$ gsettings set org.gnome.desktop.media-handling automount-open false
If the bmaptool command is available, it will be used for writing to the storage device, which speeds up writes but (currently) requires root privileges (the scripts will automatically use sudo to invoke it when needed).
No additional host changes should be required.
Your image needs to include a device-tree with usb2-0 in otg mode - as here.
Avoiding Sudo
Note: sudo access will be needed when writing the disks using bmap-tools. The method below avoids sudo while mounting/unmounting the flash package and related block devices.
For running the initrd-flash script without sudo, the host changes mentioned in the “Avoiding sudo” section on the Flashing the Jetson Dev Kit wiki page still apply.
In addition, to avoid prompts for authentication at several points in the process you need to configure polkit appropriately. On Ubuntu 22.04 this can be accomplished with the following script snippet run as root (e.g., via sudo):
cat << EOF > /var/lib/polkit-1/localauthority/50-local.d/com.github.oe4t.pkla
[Allow Mounting for Disk Group]
Identity=unix-group:disk
Action=org.freedesktop.udisks2.filesystem-mount
ResultAny=yes
[Allow Power Off Drive for Disk Group]
Identity=unix-group:disk
Action=org.freedesktop.udisks2.power-off-drive
ResultAny=yes
EOF
chmod 644 /var/lib/polkit-1/localauthority/50-local.d/com.github.oe4t.pkla
systemctl restart polkit
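To sanity-check the setup (my own suggestion, not part of the original instructions), confirm that your user is in the disk group and that the rules file is in place:
$ id -nG | grep -w disk
$ ls -l /var/lib/polkit-1/localauthority/50-local.d/com.github.oe4t.pkla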
Build configuration
No configuration is required if you just want to use initrd flashing and still keep your rootfs on the Jetson’s internal storage device. You only need to add a configuration setting if you want to configure your system to have its rootfs (APP partition) on an external storage device. To do that, add a line to your local.conf such as:
TNSPEC_BOOTDEV:jetson-xavier-nx-devkit-emmc = "nvme0n1p1"
- If trying this out with a different Jetson device, use the MACHINE name for the override in the above (see the example after this list).
- If trying USB storage instead of NVMe, use sda1 as the boot device, instead of nvme0n1p1.
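For example (an illustrative sketch; substitute your own MACHINE name), the equivalent setting for an AGX Orin developer kit booting from NVMe would be:
TNSPEC_BOOTDEV:jetson-agx-orin-devkit = "nvme0n1p1"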
Flashing after build
- Put the Jetson device into recovery mode and connect it to your host via the USB OTG port.
- Unpack the tegraflash tarball into an empty directory.
- cd to that directory and run ./initrd-flash to start the flashing process.
The script:
- Uses the RCM boot feature to download a special initrd and kernel that sets up the device as a USB mass storage gadget.
- Waits for the USB storage device to appear on the host, then copies in the bootloader files and a command sequence for the target that instructs it to start the boot device update, and tells it which storage device(s) should be exported to the host for writing.
- Uses the make-sdcard script to write to the storage device(s). This happens in parallel with the target’s programming of the boot device.
- Waits for the target to export another storage device to report its final status and the logs generated on the target. The script copies the device logs into a subdirectory. When finished, it releases the storage device, and the target reboots automatically.
Note: add your Linux user to the disk group to avoid needing sudo to run the initrd-flash script.
Re-flashing just the rootfs storage device
The initrd-flash script has a --skip-bootloader option for skipping the programming of the boot partitions, so you can re-flash just the rootfs storage device. You should only use this option if you have already programmed the boot partitions once with the versions you’re using for your current build.
Possible future enhancements
- Develop the kernel/initrd used here into a more general “recovery” image, and/or apply it for cross-version OTA updates, although the specific use cases will probably require something a bit different and need more customization.
- See if something could be done to automate setup when using LUKS encryption. Direct formatting and partition writing from the host isn’t really an option there. A hybrid approach (formatting and cryptsetup done on the device, then exporting the encrypted partitions via USB) should be workable.
How it works
- The helper scripts now support an --external-device option that passes appropriate options to tegraflash.py (needed since one of the BCTs appears to include information about the external storage device for the boot chain to work), and an --rcm-boot option to allow direct download/execution of a kernel+initrd image.
- The SDcard-related support in the nvflashxmlparse and make-sdcard scripts was generalized to distinguish between the ‘boot’ device and any ‘rootfs’ device.
- The tegra-flash-init recipe was added to install a minimal init script for the flashing kernel, which sets up a USB mass storage gadget for the device to be flashed. The serial number advertised by the gadget is the unique chip id (ECID) of the Tegra SoC.
- The initrd-flash script and the flashing kernel/initrd are added to the tegraflash package to drive the process. The ECID (unique ID) of the SoC is extracted during initial RCM contact and used to locate the correct /dev/sdX device for the partition writing.
- A find-jetson-usb script has been added to wait for the appearance of the Jetson (in recovery mode) on the host USB bus.
- The tegraflash package generator in image_types_tegra.bbclass exports additional settings (e.g., the TNSPEC_BOOTDEV setting) in the file .env.initrd-flash for use by the initrd-flash script.
- The tegra-bootfiles recipe populates an external flash layout (XML) file in addition to the main (internal storage) flash layout file. The default layout from the L4T kit is modified, if required, to ensure that the boot and kernel partitions are present in the correct layout (with no duplicates) when TNSPEC_BOOTDEV is set for using external storage.
Notes
- RCM booting on T194 platforms bypasses the UEFI bootloader, directly loading the kernel from nvtboot. This means that the kernel/initrd does not have access to any EFI variables. UEFI is used in the RCM boot chain on T234 platforms.
- On Xavier NX dev kits (SDcard-based), you must still have an SDcard installed in the slot even if you are booting off an external drive. The SDcard must not have an esp or APP partition on it. You must manually reformat the SDcard, as the flashing process will not do that for you. For all other Jetsons with internal eMMC storage, the eMMC will be erased as part of the flashing process (and re-partitioned/re-populated for those platforms that store some of the bootloader binaries in the eMMC).
- Based on readings of some NVIDIA dev forum posts, A/B updates in JetPack 5.0 do not work properly in all cases when booting off an external drive. That is supposed to be fixed in JetPack 5.1.
- Depending on your device’s configuration (e.g., having multiple storage devices attached), you may need to manually configure the boot order in the UEFI bootloader by hitting ESC when UEFI starts, and then selecting Boot Maintenance Manager, then Boot Options, then Change Boot Order. This is a limitation in JetPack 5.0 that is supposed to be fixed in JetPack 5.1.
- If you use a custom flash layout for your builds, note that there are some limitations on the composition of your flash layout file(s) due to how the bootloaders and the NVIDIA tools work. For example, you cannot use a SPI flash-only layout for internal storage, since the BUP payload generator expects to be able to create a payload containing the kernel/kernel DTB. The generator will fail during the build, since those partitions are not present in the SPI flash. You also cannot use a single flash layout that includes only the boot partitions (in, for example, SPI flash on AGX Orin and Xavier NX) and the external storage device (nvme). The tools that generate the MB1 BCT and/or MB2 BCT will error out because those bootloaders cannot access external storage. Hopefully NVIDIA will resolve these limitations in a future release.
Comparison with stock L4T initrd flashing
- OE builds are per-machine, so much of the additional scripting to handle different targets during the flashing process can be omitted.
- With OE builds, TNSPEC_BOOTDEV selection is performed at build time. Switching back and forth between external rootfs and internal storage should be done with different builds.
- Stock L4T provides its initrd in prebuilt form, which requires disassembling and reassembling the initrd in the flashing scripts. With OE, we can build the flashing initrd directly.
- Stock L4T requires customizing the external drive’s flash layout to specify the exact size of the storage device, in sectors. That’s not required with OE builds, which do not use NVIDIA’s flashing tools to partition the external drive.
- Stock L4T inserts udev rules on the host during flashing and does some network setup to talk to the device. The process implemented for OE builds does not use any networking and does not require any udev rules changes during the flashing process. You also don’t have to be root to perform initrd-based flashing for OE builds, if you have followed the instructions here. (However, the bmaptool copy command used in the make-sdcard script does need root access for its setup, and the script will run it under sudo for you.)
Limitations on using an external drive for the rootfs
- On Jetson TX2 devices, the bootloaders do not have support for loading the kernel from an external drive. The kernel, initrd, and device tree must reside on the eMMC (along with some of the boot partitions).
- Other Jetsons that boot directly from the eMMC (TX1, Nano-eMMC, Xavier NX-eMMC, AGX Xavier) also need to have some of the boot partitions in the main part of the eMMC.
- With Jetsons running JetPack 5/L4T R35.1.0, you may need to manually interrupt the UEFI bootloader to adjust the boot order to favor the external drive. Even then, UEFI may attempt a PXE (network) boot first. (This appears to be fixed with JetPack 5.1/L4T R35.2.1.)
Known issues
- On an AGX Orin configured to use an external drive for the rootfs (NVMe), once it has been flashed using initrd-flash, the RCM boot of the initrd-flash kernel stops working; the NVMe-resident OS is booted instead. This happens with the stock L4T initrd flashing tools also. To work around the problem, clear the partition table on the NVMe drive (e.g., using sgdisk /dev/nvme0n1 --clear) before resetting the Orin into recovery mode to start the re-flashing process.
- On T210 platforms (TX1/Nano), if you use the normal doflash.sh script, boot binaries will get overwritten (due to the way the NVIDIA flashing tools work), and that will cause an “FDT_ERR_BADMAGIC” error if you later try to run initrd-flash. The error is minor, and probably won’t cause any real issues with the flashing/booting process. To be safe, though, you should not mix normal and initrd-based flashing in the same tegraflash directory.
Customizing External Storage Size
Beginning with Jetpack 5.1.2 (r35.4.1) (and this commit), the TEGRA_EXTERNAL_DEVICE_SECTORS variable is used to customize the total size of the device containing the root filesystem (as well as all other partitions in PARTITION_LAYOUT_EXTERNAL). The default size of this variable assumes a device which is at least 64GB in size.
You may increase your root filesystem size to a value of around 30GB, leaving space for two root filesystem partitions (to support A/B redundancy) and additional partitions by defining ROOTFSPART_SIZE to a 4K aligned value in bytes of ~30 GB using a setting like ROOTFSPART_SIZE = "30032384000" in your local.conf.
If you have an external device larger than 64GB and would like to use this for a larger root filesystem, in addition to modifying ROOTFSPART_SIZE you will also need to adjust TEGRA_EXTERNAL_DEVICE_SECTORS to specify a larger size in sectors. For instance, to specify a ~60 GB rootfs on a 128 GB flash drive, use ROOTFSPART_SIZE = "60064768000" and TEGRA_EXTERNAL_DEVICE_SECTORS = "250000000".
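As a sanity check (my own note, not from the original page), TEGRA_EXTERNAL_DEVICE_SECTORS is a count of 512-byte sectors, so the value for a given drive size can be computed directly:
$ echo $(( 128000000000 / 512 ))
250000000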
General Tegraflash Troubleshooting
See Tegraflash-Troubleshooting
Instructions for r35
See https://github.com/OE4T/tegra-demo-distro/discussions/310#discussioncomment-10534547
- Grab the pinmux spreadsheet, configure the pins the way you need, then generate the new files: https://developer.nvidia.com/downloads/jetson-orin-nx-and-orin-nano-series-pinmux-config-template.
- This will give you three new dtsi files. You need to match them up with your machine and meld them to get the changes you need for the machine. The relevant recipe is tegra-bootfiles.
- Build in these recipes: libgpiod libgpiod-tools libgpiod-dev.
- Back at the command line, run gpioinfo and grep for the GPIO you want. For my case I wanted GPIO3_PCC.00.
- Take the controller name (0 or 1 for me) and the line, and you should now be able to set it with gpioset -c 1 12=1, where -c gives the controller number and 12 is the line number. (A read-back example follows below.)
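Similarly (a sketch using the same libgpiod v2 command-line tools; the controller and line numbers are the example values above), you can read the line back with gpioget:
$ gpioget -c 1 12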
Good reference https://docs.nvidia.com/jetson/archives/r35.3.1/DeveloperGuide/text/HR/JetsonModuleAdaptationAndBringUp/JetsonOrinNxNanoSeries.html#generating-the-pinmux-dtsi-files
Jetpack 4 instructions for Controlling the pin states on the Jetson TX2 SoM
There are two ways:
- Through bootloader configuration.
- Through using the virtual /sys filesystem in userspace.
Pin settings in bootloader configuration
Summary
You need to do the following:
- Download a Microsoft Excel sheet(!) containing some macros(!!) and the L4T (“Linux For Tegra”) package from NVIDIA’s download center. Note: for this you need an NVIDIA developer account.
- In the Excel sheet, select the desired pin configuration using cell dropdown menus. Use the embedded macro to write out some device tree files.
- Use a Python script which comes with L4T to convert the device tree files into a configuration file the bootloader can understand.
- Embed the bootloader configuration in the Yocto source tree.
Detailed steps
As an example, the following guide walks you through reconfiguring pin A9 from its default state (output GND) to input with weak pull-up.
- The MS Excel part:
  - Visit the Nvidia developer download center and search for Jetson TX2 Series Pinmux. Here’s a direct link for 1.08. Download and run it with macros enabled.
  - On the second sheet you’ll find the configuration for pin A9 on (at the time of writing) line 246. Cells in columns AR and AS define it as output grounding the signal. Change these cells to Input and Int PU.
  - At the very top of the sheet, click the button labeled Generate DT file. Some dialogs will pop up asking for details that affect the output filenames.
- The Python part:
  - Go to the Nvidia developer download center and search for Jetson Linux Driver Package (L4T). Follow the link to the L4T x.y.z Release Page. (For example, here’s the one for R32.4.3.) There, you should find a link labeled L4T Driver Package (BSP) leading to a tarball named something like Tegra186_Linux_Rx.y.z_aarch64.tbz2. (Again, as an example, here’s the one for R32.4.3.) Uncompress it and change to Linux_for_Tegra/kernel/pinmux/t186/ inside.
  - Run pinmux-dts2cfg.py in the following way (if it throws errors, it might be related to this):

python pinmux-dts2cfg.py \
    --pinmux \
    addr_info.txt \
    gpio_addr_info.txt \
    por_val.txt \
    --mandatory_pinmux_file mandatory_pinmux.txt \
    /path/to/your/excel-created/tegra18x-jetson-tx2-config-template-*-pinmux.dtsi \
    /path/to/your/excel-created/tegra18x-jetson-tx2-config-template-*-gpio-*.dtsi \
    1.0 \
    > /tmp/new.cfg

- Add a patch in your distro layer reflecting the pin settings in your /tmp/new.cfg created above.
Controlling/reading the pin state from userspace
You can control/read the pin value from the virtual /sys filesystem but not the pull up/down state.
Software-wise, the GPIOs have different names than on the schematic. Nvidia doesn’t make it easy to go from the schematic name (like A9) to the /sys name (like gpio488). The following user-contributed posts explain it better than anything Nvidia has come up with so far:
- https://forums.developer.nvidia.com/t/gpio-doesnt-work/49203/14
- https://forums.developer.nvidia.com/t/gpio-doesnt-work/49203/2
- This post contains the equations in the links above solved for all possible input values.
Having found out the /sys name for your pin, you can take following snippets as an example:
The following snippet sets the gpio to output-low.
# GPIO488 is A9 on the SoM
pin=488
echo $pin > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio$pin/direction
echo 0 > /sys/class/gpio/gpio$pin/value
The following snippet sets the pin to input and reads its logical state:
# GPIO488 is A9 on the SoM
pin=488
echo $pin > /sys/class/gpio/export
echo in > /sys/class/gpio/gpio$pin/direction
cat /sys/class/gpio/gpio$pin/value
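When you are finished with a pin, you can release it again (my own addition for completeness, using the same sysfs interface):
# GPIO488 is A9 on the SoM
pin=488
echo $pin > /sys/class/gpio/unexport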
The meta-tegra layer includes MACHINE definitions for NVIDIA’s Jetson development kits. If you are developing a custom device using one of the Jetson modules with, for example, a custom carrier board, or if you just want to modify the default boot-time configuration (pinmux, etc.) for an existing development kit as a separate MACHINE in your own metadata layer, you may need to supply MACHINE-specific files for your builds.
IMPORTANT: For any custom carrier board/hardware design, make sure you consult the appropriate Platform Adaptation and Bring-Up Guide document available from the NVIDIA Developer Download site to get all the details on how to customize the pinmux configuration and other low-level hardware configuration settings. Failing to provide the correct settings could damage your device.
Boot-time hardware configuration and boot flash programming is particularly complicated for Jetson modules, and varies substantially between models. Consult a recent version of the L4T Driver Package Documentation, particularly the “BSP Customization” and “Bootloader” chapters, for background information. As mentioned above, the Platform Adaptation documentation is also a good reference.
NOTE: Due to restrictions in the implementation of bootloader update payloads, the length of your custom MACHINE name should be 31 characters or less.
Jetson-TX1
No additional build-time files are necessary for MACHINEs based on the Jetson-TX1 module. All customizations can be done in the device tree and/or U-Boot. You’ll need to point your build at your customized kernel and/or U-Boot repository and set variables in the machine .conf file for your custom device.
Jetson-Nano
In the warrior and zeus branches, the only MACHINE-specific build-time file for Jetson-Nano is the SDCard layout file used by recipes-bsp/sdcard-layout/sdcard-layout_1.0.bb. If you modify the partition layout for the SDCard, you’ll need to supply a copy of the sdcard-layout.in file that matches the SDCard partitions you define in your customized version of the flash_l4t_t210_spi_sd_p3448.xml file from the L4T BSP.
Starting with the zeus-l4t-r32.3.1 branch, full support for all revisions and SKUs of the Jetson Nano module was added, and the SDcard layout file was eliminated. To modify your partition layout, you need only provide a customized copy of the flash_l4t_t210_spi_sd_p3448.xml (for 0000 SKUs) or flash_l4t_t210_emmc_p3448.xml (for 0002 SKUs) file. Different module revisions (FABs) use different device tree files, so you may need to have multiple device tree source files to account for module variants in your custom device/carrier.
Jetson-TX2 and Jetson-TX2i
For the Jetson-TX2 family, there are several boot-time configuration files that are machine-specific. Be sure to follow the Platform Adaptation Guide documentation carefully so all of the necessary customizations for the BPMP device tree and the MB1 .cfg files for the pinmux, PMIC, PMC, boot ROM, and other on-module hardware get created properly. The basic steps are filling in the pinmux spreadsheet and generating the dtsi fragments, then converting those fragments to cfg files using the L4T pinmux-dts2cfg.py script.
The recipes-bsp/tegra-binaries/tegra-flashvars_<bsp-version>.bb recipe installs a file called flashvars that identifies the boot-time configuration files that need to be processed by the tegra186-flash-helper script for feeding into NVIDIA’s flashing tools. With older OE4T branches, you need to supply a customized copy of the flashvars file in your BSP layer. With the latest branches, the flashvars file gets generated automatically from the variables listed in TEGRA_FLASHVARS. Check the recipe in meta-tegra to confirm which method you need to follow.
The files listed in your flashvars file must be installed into ${datadir}/tegraflash in the build sysroot by another recipe. The simplest method is to create an overlay for the recipes-bsp/tegra-binaries/tegra-bootfiles recipe, as it already extracts the files for the Jetson development kits from the L4T BSP package:
# The fetch task is disabled on this recipe, but we need our files included in the task signature.
CUSTOM_DTSI_DIR := "${THISDIR}/${BPN}"
FILESEXTRAPATHS:prepend := "${CUSTOM_DTSI_DIR}:"
SRC_URI:append:${machine} = "\
file://tegra19x-${machine}-padvoltage-default.cfg \
file://tegra19x-${machine}-pinmux.cfg \
"
# As the fetch task is disabled for this recipe, we access the files directly out of the layer.
do_install:append:${machine}() {
install -m 0644 ${CUSTOM_DTSI_DIR}/tegra19x-${machine}-padvoltage-default.cfg ${D}${datadir}/tegraflash/
install -m 0644 ${CUSTOM_DTSI_DIR}/tegra19x-${machine}-pinmux.cfg ${D}${datadir}/tegraflash/
}
The specifics of the configuration files and variables required may vary from version to version of the L4T BSP, so be sure to review any changes when upgrading.
Jetson AGX Xavier
Jetson AGX Xavier systems are similar to Jetson-TX2, but (as of this writing) have only two version-dependent boot-time files - the BPMP device tree and the PMIC configuration. Consult the NVIDIA documentation for customization steps, and see the Jetson-TX2 section above for information on how to integrate your custom files into the build.
Note that AGX Xavier targets handle UEFI variables differently than other platforms. If you plan to use them with Jetpack 5 branches, please read https://github.com/OE4T/meta-tegra/pull/1865 and note that you likely will want to define TNSPEC_COMPAT_MACHINE.
Jetson Xavier NX
Jetson Xavier NX systems are similar to Jetson AGX Xavier, but (as of this writing) have no version-dependent boot-time files. Consult the NVIDIA documentation for customization steps, and see the Jetson-TX2 section above for information on how to integrate your customized files into the build.
Jetson Orin
This guide is based on Jetson Linux R35.4.1, so change bbappend names accordingly if you use a different release. Occurrences of ${machine} should be replaced by your machine name.
Create a new machine config
Create a new Machine configuration at conf/machine/${machine}.conf in your layer.
For guidance on what it should contain look at any of the machine configurations in meta-tegra.
Create a new flash config
Create a new flash configuration recipes-bsp/tegra-binaries/tegra-flashvars/${machine}/flashvars. You can start by copying one of the flashvars files in meta-tegra.
To use the newly created flashvars file create the following recipes-bsp/tegra-binaries/tegra-flashvars_35.4.1.bbappend:
FILESEXTRAPATHS:prepend := "${THISDIR}/${BPN}:"
Add pinmux dtsi files
Generate the pinmux dtsi files with the Nvidia pinmux Excel sheet (or this one for Orin AGX).
Rename the resulting files to start with tegra234- (otherwise meta-tegra has issues handling them) and convert line endings to Unix using dos2unix. Copy the files to recipes-bsp/tegra-binaries/tegra-flashvars.
NOTE: If you manually rename your generated DTSI files, you may need to modify the #include statement on line 35 of your -pinmux.dtsi file, as it has the original filename for the -gpio-default.dtsi file hardcoded.
Install the files with following tegra-bootfiles_35.4.1.bbappend:
# Hack: The fetch task is disabled on this recipe, so the following is just for the task signature.
FILESEXTRAPATHS:prepend := "${THISDIR}/${BPN}:"
SRC_URI:append:${machine} = "\
file://tegra234-${machine}-gpio-default.dtsi \
file://tegra234-${machine}-padvoltage-default.dtsi \
file://tegra234-${machine}-pinmux.dtsi \
"
# Hack: As the fetch task is disabled for this recipe, we have to directly access the files.
CUSTOM_DTSI_DIR := "${THISDIR}/${BPN}"
do_install:append:${machine}() {
install -m 0644 ${CUSTOM_DTSI_DIR}/tegra234-${machine}-gpio-default.dtsi ${D}${datadir}/tegraflash/
install -m 0644 ${CUSTOM_DTSI_DIR}/tegra234-${machine}-padvoltage-default.dtsi ${D}${datadir}/tegraflash/
install -m 0644 ${CUSTOM_DTSI_DIR}/tegra234-${machine}-pinmux.dtsi ${D}${datadir}/tegraflash/
}
(Don’t forget to replace ${machine} with your machine name.)
Then modify flashvars to use the files:
- PINMUX_CONFIG should be set to your tegra234-${machine}-pinmux.dtsi
- PMC_CONFIG should be set to your tegra234-${machine}-padvoltage-default.dtsi
(Optionally) disable board EEPROM usage
As explained in the Platform Adaptation and Bring-Up Guide by Nvidia, you might want to disable the usage of the board EEPROM.
For that create a copy of the file used in flashvars for MB2BCT_CFG and modify it according to the Nvidia guide.
Include this new file in Yocto the same way as explained in Add pinmux dtsi files and update MB2BCT_CFG in flashvars with the new file name.
Use a custom device tree
See Custom Device Tree and apply the described changes to your ${machine}.conf.
Customizing the kernel
For custom hardware, you’ll probably need to modify the kernel in at least one of the following ways:
- Custom kernel configuration
- Custom device tree
- Adding patches
Starting with the L4T R32.3.1-based branches, you can use the Yocto Linux tools to apply patches and configuration
changes during the build, although it may be simpler to fork the linux-tegra-4.9 repository to apply patches, and supply your own defconfig file for
the kernel configuration. Having your own fork of the kernel sources also makes it easier to create a custom device tree. (You should also set the KERNEL_DEVICETREE variable in your machine configuration file appropriately.)
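As a rough sketch (illustrative only; the .dtb name is a placeholder, and KERNEL_DEVICETREE is the standard OE-Core variable referred to above), a custom machine configuration using your own device tree might contain:
require conf/machine/jetson-agx-orin-devkit.conf
KERNEL_DEVICETREE = "my-custom-board.dtb"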
Custom MACHINE definitions for existing hardware
If you need to define an alternate MACHINE configuration for an NVIDIA Jetson development kit without altering the boot-time configuration files for hardware initialization, you can have your MACHINE reuse the existing files in meta-tegra. For example, let’s say you want to create tegraflash packages for the Jetson-TX2 development kit for both the default cboot->U-boot->Linux boot sequence as well as for booting directly from cboot to Linux, without U-Boot. In your BSP or distro layer, you could add a machine configuration file called, for example, conf/machine/jetson-tx2-cboot.conf that looks like this:
MACHINEOVERRIDES = "jetson-tx2:${MACHINE}"
require conf/machine/jetson-tx2.conf
PACKAGE_EXTRA_ARCHS_append = " jetson-tx2"
PREFERRED_PROVIDER_virtual/bootloader = "cboot-prebuilt"
This would override the bootloader settings in the default jetson-tx2 configuration to use cboot instead of U-Boot, but otherwise reuse all of the MACHINE-specific packages, files, and settings for the jetson-tx2 MACHINE in meta-tegra.
For Jetson Xavier NX based machine types - jetson-xavier-nx-devkit and jetson-xavier-nx-devkit-emmc, the conf/machine/custom-machine.conf would look like this:
require conf/machine/jetson-xavier-nx-devkit-emmc.conf
MACHINEOVERRIDES = "cuda:tegra:tegra194:xavier-nx:jetson-xavier-nx-devkit-emmc:${MACHINE}"
PACKAGE_EXTRA_ARCHS_append = " jetson-xavier-nx-devkit-emmc"
Custom Device Tree
In many cases it is desirable to avoid forking or patching the kernel sources. The devicetree bbclass can be used to create a custom dtb. There’s an example in tegra-demo-distro documented at Using-device-tree-overlays which accomplishes this for recent branches.
Custom Partitioning
See Redundant-Rootfs-A-B-Partition-Support for suggestions regarding defining partition layout files for your MACHINE.
This page describes one mechanism for enabling disk encryption on meta-tegra, using the notes from Islam Hussein in this thread on matrix.
The encryption happens as a post-process initiated manually after the build.
Yocto changes
- Modify your partition xml to set ‘encrypted’ to true on the corresponding partition, as described in the NVIDIA Disk Encryption Documentation.
<partition name="data-partition" type="data" encrypted="true">
- Choose a different init script to be used in the initramfs which uses luks-srv-app, and disable it totally after that to prevent further use. See the code snippet below. For the “context”, refer to the build changes section below.
__l4t_enc_root_dm="l4t_enc_root";
__l4t_enc_root_dm_dev="/dev/mapper/${__l4t_enc_root_dm}"
eval nvluks-srv-app -g -c "<context>" | cryptsetup luksOpen /dev/nvme0n1p${current_rootfs} ${__l4t_enc_root_dm}
Build changes
Add a bash script to be called manually after finishing the Yocto build. The script goes to the build output path, extracts the rootfs into a temporary directory, mounts it, and opens it. Then it creates a LUKS container (this is why it couldn’t be done inside Yocto: opening the LUKS container with cryptsetup requires access to the device mapper, which needs privileged access that Yocto doesn’t have).
- Store the size of the rootfs as written in the XML; it has to be the same, and then create a LUKS drive of that size.
- To generate the password you’ll need to run gen_ekb.py.
- You’ll have to write down a dummy UUID, which is the “context” used in the code snippet above. (The context is used in two places: generating the passphrase to encrypt the rootfs, and generating the passphrase to access it.)
- One way is to use a generic password which doesn’t need the ECID, so the same key is used for all of your devices.
GEN_LUKS_PASS_CMD="tools/gen_luks_passphrase.py"
genpass_opt=""
genpass_opt+=" -k tools/ekb.key "
genpass_opt+=" -g "
genpass_opt+=" -c '${__rootfsuuid}' "
GEN_LUKS_PASS_CMD+=" ${genpass_opt}"
truncate --size ${__rootfs_size} ${__rootfs_name}
eval ${GEN_LUKS_PASS_CMD} | sudo cryptsetup \
--type luks2 \
-c aes-xts-plain64 \
-s 256 \
--uuid "${__rootfsuuid}" \
luksFormat \
${__rootfs_name}
eval ${GEN_LUKS_PASS_CMD} | sudo cryptsetup luksOpen ${__rootfs_name} ${__l4t_enc}
sudo mkfs.ext4 /dev/mapper/${__l4t_enc}
sudo mount /dev/mapper/${__l4t_enc} ${__enc_rootfs_mountpoint}
sudo mount ${__original_rootfs} ${__rootfs_original_mountpoint}
sudo tar -cf - -C ${__rootfs_original_mountpoint} . | sudo tar -xpf - -C ${__enc_rootfs_mountpoint}
sleep 5
sudo umount ${__enc_rootfs_mountpoint}
sudo cryptsetup luksClose ${__l4t_enc}
sudo umount ${__rootfs_original_mountpoint}
Linux 4.x Kernel Notes
Starting with the 4.4 kernel, NVIDIA maintains separate repositories for some of their hardware-specific drivers and the device tree files. To simplify kernel builds under OE-Core, the linux-tegra recipes for 4.4 and later point to a repository where the files in those separate repositories have been merged back together using git subtrees.
This makes it more difficult to compare the sources used here against the NVIDIA upstream sources, but simplifies the recipe and the management of any patches that might be needed.
Notes on integration of the Jetson-customized NVIDIA container runtime (beta version 0.9.0) with Docker support. See this page for information on how this is integrated with the JetPack SDK.
Supported branches
Support for the container runtime is available on the zeus-l4t-r32.3.1 and later branches.
Layers required
In addition to the OE-Core and meta-tegra layers, you will need the meta-virtualization layer and the meta-oe, meta-networking, and meta-python layers from the meta-openembedded repository.
Configuration
Add virtualization to your DISTRO_FEATURES setting.
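For example, in conf/local.conf (standard OE syntax for recent branches):
DISTRO_FEATURES:append = " virtualization"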
Building
- To run any containers, add nvidia-docker to your image (see the sketch after this list).
- The Docker containers that NVIDIA supplies do not bundle in most of the hardware-specific libraries needed to run them, but expect them to be provided by the underlying host OS, so be sure to include TensorRT (note), CuDNN, and/or VisionWorks, if you expect to be running containers needing those packages.
- For containers that use GStreamer, be sure to include the Jetson-specific GStreamer plugins you may need.
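A minimal sketch of the corresponding image addition (only nvidia-docker is named on this page; add whatever TensorRT/CuDNN/GStreamer packages your containers actually require):
IMAGE_INSTALL:append = " nvidia-docker"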
NVIDIA DEVNET MIRROR and SDK Manager
Jetpack 4.3 content, as well as CUDA host tool support before this PR, was not anonymously downloadable from NVIDIA’s servers and requires an NVIDIA_DEVNET_MIRROR setting with the path to the SDK Manager downloads.
Attempting to build recipes which require host tool CUDA support will fail with a message like:
ERROR: Nothing PROVIDES 'cuda-binaries-ubuntu1804-native'
cuda-binaries-ubuntu1804-native was skipped: Recipe requires NVIDIA_DEVNET_MIRROR setup
To resolve, you must use the NVIDIA SDK manager to download the content to your build host, then add this setting to your build configuration (e.g., in conf/local.conf under your build directory):
NVIDIA_DEVNET_MIRROR = "file://path/to/downloads"
By default, the SDK Manager downloads to a directory called Downloads/nvidia/sdkm_downloads under your $HOME directory, so use that path in the above setting.
See example in tegra-demo-distro which demonstrates setting the path to the default download directory used by NVIDIA SDK manager.
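For example (a sketch; substitute your own home directory path):
NVIDIA_DEVNET_MIRROR = "file:///home/<your-user>/Downloads/nvidia/sdkm_downloads"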
There may be times when you need to perform the equivalent of a re-flashing of your Jetson-based device without being able to use the normal flashing process via USB. This is possible, although there are some risks, and it requires careful setup and testing.
Possible applications:
-
You need to alter the layout of the partitions in the Jetson’s eMMC storage.
-
You need to update a Jetson running software based off an older version of the L4T BSP to a newer version that requires a modified layout of the eMMC and/or SPI flash (for Jetsons that have a SPI flash boot device).
-
You just need the equivalent of a full “factory reset” that restores the device to a pristine state.
This page walks through a basic example of how to do this, using tools and scripts that you can modify/adapt as needed. The example uses a Jetson-TX2 development kit target; it has also been tested with Xavier and Nano development kits.
Overview of the process
The goal here is to perform the equivalent of a USB “tegraflash” on the running device. What that entails is: erasing/reformatting the storage devices on the Jetson module and writing the correct boot code/data, kernel, rootfs, etc. so that on reboot, the device successfully boots into the image.
To do this under Linux, we can’t be running in a rootfs that is mounted in the on-module storage. If your device supports external storage that is bootable, you could use that, or you could run the process entirely from an initial RAM disk loaded with the Linux kernel. The following example uses the latter approach.
Ingredients
- The tegra-sysinstall repo contains the scripts that execute the overall process.
- The tegra-boot-tools repo contains the tools for writing the boot partitions.
- Example recipes for creating the initramfs image for the TX2 running an old L4T R32.1-based build are here.
- The new image, based on L4T R32.5.0, is built from this test distro.
Key considerations
-
The flash layout from the new image build is used to generate configuration files that the tools use for correctly re-partitioning the storage devices. To ensure that the bootloaders and Linux agree on the eMMC partition layout, the primary GPT must be at least 16,896 bytes (33 512-byte sectors). (This is the case with the stock flash layouts for all recent releases of L4T.)
-
The
partition_table file generated by the sysinstall-partition-layout recipe from the R32.5.0-based build must be copied into the metadata for the warrior/R32.1-based build, since that file will be part of the warrior-based initramfs.
The
tegra-bootloader-update tool uses a BUP payload as the source of the contents for all of the boot partitions. The stock L4T BUP payload generator does not include all of the boot partition contents. Recent commits into meta-tegra include patches for the generator to include the missing pieces for TX2 and Xavier platforms. Update 10 Jun 2021: The additions to the BUP payload turned out to be incompatible with the stock L4T nv_upgrade_engine bootloader update program on the TX2 and were reworked here to create an alternate payload that contains the full complement of boot partitions for TX2-based platforms.
The
tegra-sysinstall script expects the new rootfs image to be in tarball form, and does not perform any authentication or sanity checking on the image, so it is only usable for development purposes and should not be used in production.
Build process
-
The R32.5.0-based build includes
tegra-bup-payload, which installs a BUP payload in /opt/ota_package and pulls in the bootloader update tool. The demo-image-egl image was used for this example. Note that it has IMAGE_FSTYPES set to include building a tar.gz tarball for the rootfs.
The sysinstall-upgrader-initramfs recipe in the warrior/R32.1-based tree builds a BUP payload containing the kernel and initrd suitable for installing with
nv_update_engine on a system running an R32.1-based image. (Note that for platforms using U-Boot, installing the initrd would require a different process.)
The
core-image-base image from the warrior/R32.1 tree was used as the starting point for the example.
Process steps
-
Start by flashing the R32.5.0-based image directly on the TX2. Use
sgdisk /dev/mmcblk0 --print to display the partition table, and save that output so you can compare the results against the partition table created later during the installation process.
Boot the
core-image-base image from the warrior/R32.1-based distro on the TX2.
Because the filesystem size is not expanded out to the full APP partition size in this build, use
mkfs.ext4 to format the UDA partition, and mount that at /mnt.
rmdir /opt/ota_package, then ln -sn /mnt /opt/ota_package to provide space for the BUP payload.
Use
wget to download the sysinstall-upgrader-initramfs-jetson-tx2.bup-payload built in the warrior/R32.1-based build tree as /opt/ota_package/bl_update_payload.
Use
nv_update_engine --enable-ab, then nv_update_engine --install no-reboot to install the BUP payload. If successful, reboot.
The kernel command line in the initramfs image doesn’t have
console= set, so be patient while the image loads (takes about a minute or so) - there is no kernel output during the boot.
mkdir /var/extra, as this directory is needed as a mount point during the installation. -
mkdir /installer and use wget or curl to download the demo-image-egl-jetson-tx2-devkit.tar.gz tarball from the R32.5.0-based build, naming it /installer/image.tar.gz.
tegra-sysinstall to start the installation process. After it reformats the eMMC, the script will display the new partition table. Verify that the partition start and end sectors match the ones displayed in step 1 (after flashing the R32.5.0 image directly). If there is a mismatch, the device will probably not boot properly.
This section could be customized for a specific delivery mechanism. For instance,
instead of using wget to download the BUP payload, the package could be delivered through
your preferred update mechanism. If using an A/B update scheme like the one used for `tegrademo-mender`, it should be possible to use the filesystem in the new boot partition to host the BUP payload and image content.
Installation steps
These are the steps performed by `tegra-sysinstall`:
1. The `sgdisk` command (from the `gptfdisk` package) is used to zap the GPT partition table and create all of the partitions on the eMMC, based on the configuration file at `/usr/share/tegra-sysinstall/partition_table`.
2. The APP, APP_b, DATA, LOGS, and EXTRA partitions are formatted using `mkfs.ext4`.
3. The EXTRA partition is mounted at `/var/extra` for use as temporary storage.
4. The rootfs tarball is unpacked into the APP partition, then into the APP_b partition.
5. The boot partitions are initialized by `chroot`ing into the just-installed APP partition to run `tegra-bootloader-update --initialize`, using the BUP payload and the `/usr/share/tegra-boot-tools/boot-partitions.conf` configuration file from the just-installed rootfs. (A rough sketch of this step follows the list.)
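As a rough illustration of that final step (the device nodes, mount points, and the exact `tegra-bootloader-update` invocation here are assumptions, not an excerpt from the `tegra-sysinstall` script):

```sh
# Rough sketch only - not taken verbatim from tegra-sysinstall
mount /dev/mmcblk0p1 /mnt/target                 # just-installed APP partition
mount --bind /dev  /mnt/target/dev
mount --bind /proc /mnt/target/proc
mount --bind /sys  /mnt/target/sys
mkdir -p /mnt/target/opt/ota_package
mount --bind /opt/ota_package /mnt/target/opt/ota_package   # expose the BUP payload inside the chroot
chroot /mnt/target \
    tegra-bootloader-update --initialize /opt/ota_package/bl_update_payload
```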
Once the above steps are complete, the device can be rebooted, and should boot into the R32.5.0-based image.
(Please note that the tegra-sysinstall scripts were developed to test support for
secure boot combined with LUKS encrypted filesystems and programming a unique machine
ID in the odm_reserved fuses, so there are several functions in the scripts that can
be ignored/skipped for testing the installation process.)
Things to watch out for
- If the initramfs with the installation tools is too large for cboot to handle properly (it has some compiled-in limits on the amount of memory it can reserve for the initial RAM disk), you'll see data abort errors on the serial console.
- If the BUP payload is missing any of the boot-related contents, the device will fail to boot when rebooting after the installation process is complete - one of the early-stage bootloaders will report errors on the serial console, and the device should go into USB recovery mode.
- The above can also happen if there is a mismatch between the starting offsets and/or sizes of any of the boot partitions on the eMMC and the expected offsets that were built into the boot control tables for the bootloader during BUP generation. It's important that the `/usr/share/tegra-sysinstall/partition_table` configuration file in the initramfs is correctly generated from the same flash layout XML file that you are using for the image you are upgrading to.
- Any power interruption or other event that causes the device to reset or reboot, or otherwise interrupts the reflashing process, will render the device unbootable. Since the process can take several minutes (depending on the specific hardware, size of the image being installed, etc.), use of this process should be managed carefully.
- Full BUP support in meta-tegra, covering multiple module revisions in a single payload, was added with the update to L4T R32.3.1. If you are currently running builds based off an older version of L4T, you may run into boot issues after installing the upgrader BUP payload on some TX2 modules. Adjusting the `TEGRA_FAB` setting in your build configuration to match the actual FAB revision of the module(s) you're using should help with this.
Video walkthrough
See the OE4T May 2021 meeting video and notes for initial discussion and walkthrough of the content discussed here.
Testing
- Jetson TX2 - upgrade from L4T R32.1-based build to L4T R32.4.3-based build
- Jetson TX2 - upgrade from custom Sumo+L4T R28.1-based (U-Boot) build to L4T R32.4.3-based build
PPS GPIO Support on Jetson TX1/TX2
I thought I would add this here in the event someone else is searching for how to add a PPS input to TX1/TX2 systems. Hours of reading and searching yielded nothing other than the fact that NVIDIA doesn’t support it on the dev kits and they don’t provide any more information. I hope that someone can take this and use it for what they need, whether on commercial carriers or even on the dev kit board – maybe this is fairly common knowledge to those who work in device trees all the time, but for a noob to ARM and device trees, I would have found a page like this extremely valuable.
My setup is a TX1 on the Astro carrier from ConnectTech. I'm using the pyro-r24.2.2 branch of meta-tegra and pyro for poky/meta-openembedded.
I requested the DTS files for the ASG001 (Astro carrier) from ConnectTech and created my own machine layer, using the jetson-tx1 machine from meta-tegra as a starting point. This utilizes the 3.10 kernel.
To enable PPS support, I added the following block immediately below the `gpio@6000d000` section of `mono-tegra210-jetson-tx1-CTI-ASG001.dts`:
pps {
gpios = <&{/gpio@6000d000} 187 0>;
compatible = "pps-gpio";
status = "okay";
};
This only added PPS support to the device tree, however the 3.10 kernel doesn’t support PPS GPIO clients on the device tree, so that support needed to be added by manually applying this patch to the source (I applied it in the tmp/work-shared kernel source git repo and created a patch I used in my linux-tegra bbappend): https://github.com/beagleboard/meta-beagleboard/blob/master/common-bsp/recipes-kernel/linux/linux-mainline-3.8/pps/0003-pps-gpio-add-device-tree-binding-and-support.patch
For later releases (it appears as early as R27.1), PPS GPIO support for device trees is present in the linux-tegra kernel, so the only requirement is adding the pps block to the DTS.
Finally, ensure that CONFIG_PPS and CONFIG_PPS_CLIENT_GPIO are enabled in your kernel configuration (I copied the defconfig, modified it, and added a do_configure_prepend() to my bbappend).
do_configure_prepend() {
cp ${WORKDIR}/defconfig-cti ${WORKDIR}/defconfig
}
At that point, building a typical image (I use core-image-full-cmdline - I expect others will work the same way) gives a functional PPS input in the kernel.
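To verify the PPS input at runtime, one option is to watch for assert events with `ppstest`. This assumes the pps-tools package is included in your image, which is not covered above and is an assumption on my part:

```sh
# Confirm the PPS source registered by the kernel
ls /sys/class/pps/
cat /sys/class/pps/pps0/name

# Watch for PPS assert events (ppstest comes from the pps-tools package)
ppstest /dev/pps0
```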
As of 08 Dec 2023, this feature is supported in the kirkstone, mickledore, nanbield, and master branches.
As of the latest JetPack 5 (L4T R35.x) releases, NVIDIA provides partition layouts which support Root File System Redundancy, whereby bootloader slots and rootfs slots are paired together: the root filesystem partition selected at boot automatically matches the selected bootloader slot. The selected bootloader slot, a or b, selects the corresponding rootfs slot a or b.
When paired with the UEFI capsule update feature, a redundant root filesystem supports switching the root filesystem, kernel, and kernel dtb to match the updated bootloader slot. When paired with an update tool which can update kernel, dtb and rootfs partitions (swupdate, rauc, mender, or others) the process of performing capsule update can also switch to an updated rootfs through the redundant rootfs feature.
If you have the available root filesystem space to support redundant rootfs, using a redundant partition layout at the outset of your project might give you the option to support updates later without a repartition (or tegraflash) of the device.
Selecting Redundant Root Filesystem Partition Layout
By default, both the stock NVIDIA-provided JetPack image and OE4T images use the non-redundant partition layouts.
To use the NVIDIA-provided redundant partition layouts and automatically apply the necessary dtb changes performed by NVIDIA's flash.sh script, on branches which include https://github.com/OE4T/meta-tegra/pull/1428, you simply need to set `USE_REDUNDANT_FLASH_LAYOUT_DEFAULT = "1"` in your distro configuration, custom MACHINE configuration, or local.conf. This is currently supported for most targets. See the notes below for limitations.
This configuration is set as the default for all supported targets when building with tegra-demo-distro.
Testing Root Filesystem A/B Slot Switching
See the sequence in https://github.com/OE4T/meta-tegra/pull/1428 to validate root slot and boot slot switching.
Setting Up a Custom MACHINE
Use these variables to set up a MACHINE or distro with support for redundant flash layouts (a minimal example follows the list):
- `USE_REDUNDANT_FLASH_LAYOUT_DEFAULT` - Set to `"1"` in your distro layer to use redundant flash layouts for any supported MACHINEs. Set to `"0"` to use the default non-redundant layouts from NVIDIA when using tegra-demo-distro (`USE_REDUNDANT_FLASH_LAYOUT_DEFAULT` is the default for master branch builds of tegra-demo-distro).
- `ROOTFSPART_SIZE_DEFAULT` - Set with the size of the root filesystem partition when using the default (non-redundant) flash layout. This size will be automatically divided by 2 when `USE_REDUNDANT_FLASH_LAYOUT` is selected.
- `PARTITION_LAYOUT_TEMPLATE_DEFAULT` - Set with the partition layout to use with the default (non-external, non-redundant) flash layout, for instance `custom_layout.xml`. Either provide a `custom_layout_rootfs_ab.xml` file or define `PARTITION_LAYOUT_TEMPLATE_REDUNDANT` with your redundant file.
- `PARTITION_LAYOUT_TEMPLATE_DEFAULT_SUPPORTS_REDUNDANT` - Set to `"1"` if no `PARTITION_LAYOUT_TEMPLATE_REDUNDANT` is required for this MACHINE (and the same template is used for redundant or non-redundant builds).
- `PARTITION_LAYOUT_EXTERNAL_DEFAULT` - Set with the default partition layout when using an external device (SD card or NVMe) for rootfs partition storage, for instance `custom_external_layout.xml`. Either provide a `custom_external_layout_rootfs_ab.xml` file or define `PARTITION_LAYOUT_EXTERNAL_REDUNDANT` with your redundant file.
- `HAS_REDUNDANT_PARTITION_LAYOUT_EXTERNAL` - Set to `"0"` if your MACHINE does not support a `PARTITION_LAYOUT_EXTERNAL_REDUNDANT` and therefore does not support `USE_REDUNDANT_FLASH_LAYOUT_DEFAULT`.
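As a minimal sketch, a custom MACHINE (or distro) configuration fragment using these variables might look like the following. The file names and rootfs size are purely illustrative and are not taken from any shipped configuration:

```
# Illustrative values only - substitute your own layout templates and rootfs size
USE_REDUNDANT_FLASH_LAYOUT_DEFAULT = "1"
ROOTFSPART_SIZE_DEFAULT = "8589934592"
PARTITION_LAYOUT_TEMPLATE_DEFAULT = "custom_layout.xml"
PARTITION_LAYOUT_TEMPLATE_REDUNDANT = "custom_layout_rootfs_ab.xml"
PARTITION_LAYOUT_EXTERNAL_DEFAULT = "custom_external_layout.xml"
PARTITION_LAYOUT_EXTERNAL_REDUNDANT = "custom_external_layout_rootfs_ab.xml"
```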
Overriding BSP Layer Changes
Use ROOTFSPART_SIZE, PARTITION_LAYOUT_EXTERNAL and PARTITION_LAYOUT_TEMPLATE as done before changes in https://github.com/OE4T/meta-tegra/pull/1428, to provide your own implementation outside the BSP layer and ignore the setting of USE_REDUNDANT_FLASH_LAYOUT.
Limitations
NVIDIA does not provide a redundant flash layout for flash_l4t_external.xml. Any targets which use flash_l4t_external.xml, which as of https://github.com/OE4T/meta-tegra/pull/1295 include Orin NX 16 GB in P3509 carrier, Orin NX 16 GB in P3768 carrier, or Orin Nano 4GB in p3768 carrier use HAS_REDUNDANT_PARTITION_LAYOUT_EXTERNAL ?= "0" and therefore don’t support the USE_REDUNDANT_FLASH_LAYOUT feature described here. Alternatively, override USE_REDUNDANT_FLASH_LAYOUT = "1" and set PARTITION_LAYOUT_EXTERNAL_DEFAULT ?= "flash_l4t_nvme.xml" or your custom external layout, but be aware of issue https://github.com/OE4T/meta-tegra/discussions/1286.
SPI support on 40 pin header - Jetson Nano devkit
To enable SPI support for the Jetson Nano, please use this patch. The patch covers the Jetson Nano (eMMC and SD card versions) only.
After applying the patch, SPI devices are available at `/dev/spidev0.0` and `/dev/spidev0.1` (as generic spidev devices). You can use the `spidev_test` tool and short the MOSI/MISO pins together to test whether communication is working as expected.
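For example, with MOSI and MISO shorted together, a simple loopback test looks something like this (the exact option set of `spidev_test` varies between kernel versions, so treat it as a sketch):

```sh
# Loopback test on the first chip select; with MOSI and MISO shorted, RX should echo TX
spidev_test -D /dev/spidev0.0 -v -p "hello-jetson"
```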
Note: some extension boards with SPI chips may not work due to the level shifters assembled on the 40-pin header. Please refer to 40 pin header considerations for more details.
Jetson secure boot support in L4T R35.2.1 implements a different chain of trust from what was present in the L4T R32 releases:
- The Trusty secure OS has been replaced by OP-TEE, which allows for dynamic loading of trusted applications (TAs) from the non-secure world. TAs must be signed, and the public key used for checking the signature is compiled into the OP-TEE OS.
- The cboot bootloader has been replaced by UEFI, which uses its own set of keys for validating signatures on binaries that it loads (Linux kernel, EFI applications, and EFI capsules).
NOTE NVIDIA made some changes to the UEFI bootloader in L4T R35.5.0 that require that an “authentication key” be programmed into the Encrypted Key Block on secured devices. If you are updating your secured device from an earlier R35.x release to R35.5.0, you must update the EKB on the device with the added key. See this developer forum thread for more information.
Getting started
Start by reading the Secure Boot section of the Jetson Linux Developer’s Guide.
The sections below cover specifics of how secure boot and signing are implemented for OE/Yocto builds with meta-tegra.
Bootloader signing
Setting fuses for secure boot
Follow the instructions in the NVIDIA documentation for generating keys and burning secure boot fuses for your Jetson device. Be warned that burning the fuses is a one-time operation, so be extremely careful. You could render your Jetson permanently unbootable if something goes wrong during the fuse burning process.
Build-time bootloader signing
If you have the bootloader signing and encryption key files available, you can add the following setting to your local.conf to create signed boot images and BUP packages:
TEGRA_SIGNING_ARGS = "-u /path/to/pkc-signing-key.pem -v /path/to/sbk.key --user_key /path/to/user.key"
These arguments parallel the ones used with the L4T flash.sh script for signing:
- The `-u` option takes the path name of the RSA private key for PKC signing.
- The `-v` option takes the path name of the SBK key used for encrypting the binaries loaded at boot time.
- The `--user_key` option takes the path name of the encryption key you create for use with the NVIDIA sample OP-TEE TAs.
Note that with R35.2.1, the --user_key encryption key is used only for the XUSB firmware. Starting with R35.3.1, the user encryption key is not used for any of the boot firmware.
Build-time bootloader signing will be performed on the boot-related files in the tegraflash package for flashing, as well as the entries in any bootloader update payloads (BUPs).
Post-build signing
You can elect to perform bootloader signing outside of the build process by adding the -u, -v, and --user_key options when running the doflash.sh or initrd-flash script during flashing of your tegraflash package. For BUP generation, add those options when running the generate_bup_payload.sh script to have the bootloader components signed.
UEFI Secure Boot
To enable UEFI secure boot support, start by generating the PK, KEK, and DB keys and related configuration files, as described in the UEFI Secure Boot section of the Jetson Linux documentation.
It should be noted that UEFI boot is not compatible with the legacy secure boot supported on Tegra devices.
Build-time UEFI signing
During the build, signing of the EFI launcher app, the kernel, and device tree files is performed automatically when the following settings are present in your build configuration:
TEGRA_UEFI_DB_KEY = "/path/to/db.key"
TEGRA_UEFI_DB_CERT = "/path/to/db.crt"
Both settings must be present, and must point to one of the DB keys you generated (you do not need the PK or KEK keys).
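If you have not yet created the DB key and certificate, the Jetson Linux documentation walks through the generation steps; a typical self-signed pair can be produced with openssl along these lines (the subject name and validity period here are illustrative):

```sh
# Generate a self-signed key/certificate pair suitable for use as the UEFI DB key
openssl req -newkey rsa:2048 -nodes -keyout db.key \
    -new -x509 -sha256 -days 3650 \
    -subj "/CN=my Jetson UEFI db key/" -out db.crt
```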
Post-build UEFI signing
Post-build UEFI signing is not currently supported.
Enrolling UEFI keys at build time
To enable UEFI secure boot, the PK, KEK, and DB keys you generated must be "enrolled" at boot time. On Jetson platforms, this is done by adding the needed key enrollment variable settings to the bootloader's device tree via the UefiDefaultSecurityKeys.dts file you generated when creating the keys and configuration files. For meta-tegra builds, you can supply this file by adding a bbappend for the tegra-uefi-keys-dtb.bb recipe in one of your own metadata layers, substituting MY_LAYER with the path to your layer and MY_UEFI_KEYS_DIR with the path to the uefi_keys directory you set up when following the instructions linked above:
export MY_LAYER=tegra-demo-distro/layers/meta-tegrademo
export MY_UEFI_KEYS_DIR=~/uefi_keys/
mkdir -p ${MY_LAYER}/recipes-bsp/uefi
cat > ${MY_LAYER}/recipes-bsp/uefi/tegra-uefi-keys-dtb.bbappend <<'EOF'
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
EOF
mkdir -p ${MY_LAYER}/recipes-bsp/uefi/files
cp ${MY_UEFI_KEYS_DIR}/UefiDefaultSecurityKeys.dts ${MY_LAYER}/recipes-bsp/uefi/files/
echo "Copy below is optional, only needed if you plan to update your keys with a capsule update"
cp ${MY_UEFI_KEYS_DIR}/UefiUpdateSecurityKeys.dts ${MY_LAYER}/recipes-bsp/uefi/files/
Enrolling UEFI keys at runtime
The Jetson Linux documentation describes the process for enrolling UEFI keys and enabling UEFI secure boot at runtime. You will need to add some packages to your image build to make the necessary commands available. As of this writing, runtime enrollment has not been tested.
OP-TEE Trusted Application signing
OP-TEE provides a mechanism for loading TAs from the "Rich Execution Environment" (REE, another term for the normal, non-secure OS), which must be signed with a key that is known to the OP-TEE OS. Read the OP-TEE documentation on TAs for more information.
By default, a development/test key from the upstream OP-TEE source is compiled in; this configuration should not be used in any production device, since the key is publicly available. You should generate a suitable RSA keypair as described in the OP-TEE documentation. For build-time signing, add a bbappend for the optee-os recipe in one of your layers; it should resemble the following:
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
SRC_URI += "file://optee-signing-key.pem"
EXTRA_OEMAKE += "TA_SIGN_KEY=${WORKDIR}/optee-signing-key.pem"
Post-build signing of TAs is more difficult, since external TAs are generally packaged and installed into the root filesystem as part of the build. For that approach, though, you would include the public key file in the optee-os bbappend, and set TA_PUBLIC_KEY instead of TA_SIGN_KEY. The OP-TEE makefiles will sign TAs with a dummy private key, but the public key you specify will be compiled into the secure OS. You will have to figure out how to re-sign the TAs with your actual private key before they get used.
Using the NVIDIA built-in sample TAs
To make use of the encryption/decryption functions NVIDIA provides by default with their OP-TEE implementation, you will need to supply an “Encrypted Keyblob” (EKB) that corresponds to the KEK/K2 fuses you have burned on your Jetson device. Instructions for generating an EKB are in this section of the Jetson Linux documentation. See the note at the top of this page for information about changes in L4T R35.5.0 that require the re-generation of the EKB.
The tegra-bootfiles recipe installs the default EKB from the L4T kit. Add a bbappend for that recipe to replace the default with the custom EKB for your device.
Generating a Custom EKB
Before replacing the default EKB in your Yocto build, you must generate a custom one that matches the OEM_K1 fuse burned on your Jetson device. To do this, you need the gen_ekb.py script from the NVIDIA OP-TEE sample code base (for the hwkey-agent sample). You can find that script either in the L4T public sources tarball, or on NVIDIA's git server (making sure you choose the branch for the L4T version you are targeting).
Example:
python3 gen_ekb.py -chip t234 \
-oem_k1_key oem_k1.key \
-in_sym_key2 sym2_t234.key \
-in_auth_key auth_t234.key \
-out eks_t234.img
where:
- `oem_k1.key` is the OEM_K1 key stored in the OEM_K1 fuse.
- `sym2_t234.key` is the disk encryption key.
- `auth_t234.key` is the UEFI variable authentication key.
- `eks_t234.img` is the generated EKB image to be flashed to the EKS partition of the device.
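The key files are plain hex strings. If you do not already have the symmetric keys, they can be created with something like the following; this is an assumption based on the gen_ekb.py sample inputs, and the OEM_K1 value must of course match what you actually burned into the fuse rather than being generated randomly:

```sh
# Illustrative only: create random 256-bit keys as hex strings
openssl rand -hex 32 > sym2_t234.key    # disk encryption key
openssl rand -hex 32 > auth_t234.key    # UEFI variable authentication key
# oem_k1.key must contain the same value programmed into the OEM_K1 fuse
```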
Kernel encryption is not currently supported in meta-tegra, so do not provide the UEFI payload encryption key (using -in_sym_key).
Secure Boot Support
Bootloader signing is supported for all Jetson targets for which secure boot is available (consult the L4T documentation). Support was added in the zeus branch for tegra186 (Jetson-TX2), and extended to the other SoC types in the dunfell-l4t-r32.4.3 branch.
Note that with L4T R35.2.1 and later, the secure boot sequence has changed. See this page for more information.
Setting fuses for secure boot
To enable secure boot on your device, follow the instructions in the L4T BSP documentation and the README included in the L4T Secure Boot package that can be downloaded here.
Caveats
- The `odmfuse.sh` script in some L4T releases has a bug that causes fusing to fail on Jetson-TX2 devices; see issue #193 for an explanation and patch.
- The L4T bootloader for tegra210 (TX1/Nano) has a bug that always disables secure boot during fuse burning in versions of L4T prior to R32.4.4. See this NVIDIA Developer Forum post for more information, and patched copies of the bootloader with a fix.
- NVIDIA does not support secure boot on SDcard-based developer kits (Jetson Nano/Nano-2GB and Jetson Xavier NX). You may render your developer kit permanently unbootable if you attempt to burn the secure boot fuses.
- The tools and scripts in L4T for secure boot support do not appear to be very well tested from release to release, and occasionally regressions get introduced that break fuse burning for some of the Jetson platforms, so be very careful when updating to a new release of the BSP.
Enabling boot image and BUP signing during the build
If you have the signing and (optional) encryption key files available, you can add the following setting to your local.conf to create signed boot images and BUP packages:
TEGRA_SIGNING_ARGS = "-u /path/to/signing-key.pem -v /path/to/encryption-key"
The additional arguments will be passed through to the flash-helper script and all files will be signed (and boot files will be encrypted, if the -v option is provided) during the build. The doflash.sh script in the resulting tegraflash package will flash the signed files to the devices. This is similar to the flashcmd.txt script you would get if you used the L4T flash.sh script with the --no-flash option as mentioned in the NVIDIA secure boot documentation.
Kernel and DTB encryption
Starting with L4T R32.5.0, cboot on tegra186 (TX2) and tegra194 (Xavier) platforms expects the kernel (boot.img) and kernel device tree to be encrypted as well as signed. This encryption is performed by a service in Trusty and uses a different encryption key than the one used for encrypting the bootloaders. See the L4T documentation for information on setting this up.
If you have set up kernel/DTB encryption on your device, add --user_key /path/to/kernel-encryption-key to TEGRA_SIGNING_ARGS.
If you do not go through the extra steps of setting up a kernel encryption key, an all-zeros key will be used by default.
Manual signing
If you prefer not to have the signing occur during your build, you can manually add the necessary arguments to your invocation of doflash.sh after unpacking the tegraflash package. For example:
$ BOARDID=<boardid> FAB=<fab> BOARDSKU=<boardsku> BOARDREV=<boardrev> ./doflash.sh -u /path/to/signing-key.pem -v /path/to/encryption-key
The environment variable settings you need on the command will vary from target to target; consult the “Signing and Flashing Boot Files” section of the L4T BSP documentation for the specifics.
With recent branches, BUP generation can also be performed manually. The tegraflash package includes a generate_bup_payload.sh script that can be run with the same -u (and, if applicable, -v) options to generate a BUP payload with signed bootloader components.
Using a code signing server
If you prefer not to have your signing/encryption keys local to your development host, you can override the tegraflash_custom_sign_pkg and tegraflash_custom_sign_bup functions in image_types_tegra.bbclass to package up the files in the current working directory, send them to be signed, then unpack the results back into the current directory. Everything needed to perform the signing, except for the keys, will be present in the package sent to the server. An example implementation of a code signing server is available here.
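As a very rough sketch of what such an override could look like (the signing-service URL and transport here are entirely hypothetical, and the real function arguments and packaging details are defined by image_types_tegra.bbclass):

```sh
# Hypothetical override, e.g. in a distro-specific class or conf file:
# bundle the working directory, send it to a remote signing service, unpack the signed results
tegraflash_custom_sign_pkg() {
    tar -cf to-sign.tar .
    curl --fail -o signed.tar --data-binary @to-sign.tar https://signing.example.com/sign
    tar -xf signed.tar
    rm -f to-sign.tar signed.tar
}
```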
Tegra Specific Gstreamer Plugins
Originally, the machine configurations set MACHINE_GSTREAMER_1_0_PLUGIN to include the gstreamer1.0-plugins-tegra package, which is the base set of binary-only gstreamer plugins that is provided with L4T. In more recent releases, this has been changed to point to gstreamer1.0-omx-tegra instead (and using the now-current MACHINE_HWCODECS variable) to make it easier to build multimedia-ready images.
Note that since the OpenMAX plugins package is flagged as commercially licensed, it is also whitelisted in the machine configuration with:
LICENSE_FLAGS_WHITELIST_append = " commercial_gstreamer1.0-omx-tegra"
Update 2020-09-17
Starting with the branches using L4T R32.4.3 (dunfell-l4t-r32.4.3 and later), the commercially-licensed flag was removed from the OpenMAX plugin recipe, as the sources are available and do not appear to contain any encumbered code.
This page includes some guidance about how to resolve or work around issues with device flashing using the tegraflash package built by the Yocto build.
General Troubleshooting Tips/Suggestions
- Make sure you are using the correct flashing operation for your device/target storage. See the table here for guidance.
- If your target can support either method, try the alternate method as a troubleshooting step.
- Try swapping USB cables/ensure you are using a high quality cable.
- Try power cycling the device/entering tegraflash mode from power on rather than reboot.
- Try running as root (directly or via sudo) rather than a user account, especially if any error messages mention permissions.
- Switch to an alternative USB host controller as several people have noticed issues with these. See this issue for instance.
- If you are using a USB 3.0 add-in card, switch to the one connected to the motherboard.
- Try a USB 2.0 port if you have no other USB 3.0 controllers.
- Note any failures in logs for the respective flashing method
- Start with the console log.
- Connect the serial console on the target device if possible.
- For initrd-flash steps, consult the host and device logs which are output at the end of the flash process.
- Suspect issues with the partition table, especially if you've modified the partition table or increased the sizes of partitions.
  - Obscure errors like `cp: cannot stat 'signed/*': No such file or directory` typically mean you've got some problem with your custom partition table and/or target storage device size. See this issue for example.
- Attempt to reproduce with a devkit and a similar setup from tegra-demo-distro.
- Use hardware recovery mode entry rather than reboot force-recovery
- See instructions at Flashing-the-Jetson-Dev-Kit for putting the device in recovery mode.
- Although it’s possible to use
reboot force-recovery, note the issues here which can occur in some scenarios. Using hardware recovery is typically a safer option if you are experiencing issues with tegraflash.
- Check whether the power-saving TLP package is installed and running (it is often installed on notebooks/laptops to save battery power). This package disturbs the flashing process. Use `sudo apt remove tlp` and reboot your host computer to remove it before flashing.
- Use the command line to extract the tegraflash.tar.gz image file. When extracting with a GUI app, the esp.img file can become corrupted. Use a command line such as `tar -xf your-image.tegraflash.tar.gz`, then follow the normal flashing procedure with the doflash.sh script.
Update: 10 Feb 2025
In the master branch:
- The image type for tegraflash packages has been changed to `tegraflash.tar`.
- The `zip` format for tegraflash packages has been removed. Zip packages do not work well with Linux sparse files, which are used for the EXT4 filesystem images we include in the package.
- The default for `IMAGE_FSTYPES` is now set to `tegraflash.tar.zst`, using zstd compression on the package, which provides good compression with much faster compression and decompression times than gzip. You can override this in your build configuration, if needed.
Update: 27 May 2020
As of 27 May 2020, the image_types_tegraflash.bbclass and the helper scripts have
been enhanced in the branches that support L4T R32.3.1 and later (zeus-l4t-r32.3.1, dunfell,
dunfell-l4t-r32.4.2, and master). The sections below describe these updates.
Compressed-tar instead of zip for packaging
The venerable zip archive format has worked well enough over the years, but the zip tools are quite old and don’t have support for modern features like parallelism and sparse files. Switching to using a compressed tarball for tegraflash packages substantially speeds up build times and preserves sparse-file attributes for EXT4 filesystem images, resulting in much smaller (actual size vs. apparent size) packages.
In the zeus-l4t-r32.3.1 and dunfell branches, the default packaging remains zip.
In dunfell-l4t-r32.4.2 and master, the default packaging has been changed to tar.
You can set the variable TEGRAFLASH_PACKAGE_FORMAT in your build configuration
to select the package format you want to use. Note, however, that the zip format is deprecated
and support for it will likely be removed in a future release.
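For example, to select the tar packaging explicitly, you might add the following to your local.conf (a minimal sketch):

```
TEGRAFLASH_PACKAGE_FORMAT = "tar"
```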
Use of bmaptool for SDcard creation
If you have the bmaptool package installed on your development host, the make-sdcard
script will use it in place of dd to copy the EXT4 filesystem into the APP partition
of an SDcard, which (when combined with the tar packaging mentioned above) results in
much faster SDcard writing.
To take advantage of this, make sure bmaptool is available on your PATH and specify the
device name of your SDcard writer when running dosdcard.sh. For example:
$ ./dosdcard.sh /dev/sda
The device name will be passed through to the underlying make-sdcard script. (If you
run into permissions problems, you may need to use sudo.)
BUP payload generation
If you need to create BUP payloads outside of your bitbake builds, the tegraflash
package now includes all of the files needed to do so, including a script to create
the payload (similar to the l4t_generate_soc_bup.sh script in L4T):
$ ./generate_bup_payload.sh
You can pass the -u and/or -v options to this script to specify the signing and/or encryption
keys for the payload contents if your devices are fused for secure boot, and they
will be passed through to each invocation of the flash helper script.
USB Device Mode Support
On the zeus and later branches (for L4T R32.2.3 and later), the l4t-usb-device-mode recipe is available to set up USB gadgets on a Jetson device for network and serial TTY access. The setup is similar to what’s provided in the L4T/JetPack BSP, except:
- the scripts in the BSP under `/opt/nvidia/l4t-usb-device-mode` have been replaced by a combination of systemd, udev, and `libusbgx` configuration files;
- the USB device identifier uses the Linux Foundation vendor ID; and
- no mass storage gadget is created.
Note that as of this writing, support for creating both an ECM gadget and an RNDIS gadget is provided, but the RNDIS gadget has not been tested.
Prerequisites
- You must have the `meta-oe` layer from meta-openembedded in your build for the `libusbgx` recipe.
- You must use systemd, and include udev and networkd support in its configuration (both of which are on by default in OE-Core zeus).
Network configuration
The systemd-networkd configuration files provided automatically create an l4tbr0 bridge device that combines the usb0 ECM interface and the rndis0 RNDIS interface. The bridge is assigned the IP address 192.168.55.1 and runs a DHCP server to serve the address 192.168.55.100 to the host side of the USB connection.
Serial port configuration
The serial port is called /dev/ttyGS0 on the device, and a udev rule automatically starts serial-getty on the device when it is created. If the connecting host is running Linux, the corresponding serial TTY will be /dev/ttyACM0 (or another /dev/ttyACMx device if there are multiple such devices on your host system).
Using device mode support
To use device mode support, just include l4t-usb-device-mode in your image.
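For example, you might add the package in your local.conf and then reach the device from a Linux host over the gadget interfaces. The colon-style override syntax below is the newer form (older branches such as zeus use IMAGE_INSTALL_append), and the host-side commands assume your image runs an SSH server and that the ACM device enumerates as /dev/ttyACM0:

```sh
# In local.conf (or your image recipe): pull in the USB device mode support
IMAGE_INSTALL:append = " l4t-usb-device-mode"

# On the host, after connecting the USB cable:
ssh root@192.168.55.1              # network access via the gadget bridge
picocom -b 115200 /dev/ttyACM0     # serial console (any terminal program will do)
```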
Using cboot as Bootloader
[Applicable to L4T R32.1.0 and later]
For Jetson AGX Xavier, NVIDIA provides only cboot as the bootloader, so there is no U-Boot recipe for that platform. For Jetson TX2, the default configuration uses both - cboot loads U-Boot, which then loads the Linux kernel. You can, however, use just cboot as the bootloader by setting
PREFERRED_PROVIDER_virtual/bootloader = "cboot-prebuilt"
in your build configuration. If you do this, cboot directly loads the Linux kernel and initial ramdisk from the kernel (or kernel_b) partition, and the kernel image is not added to the root filesystem.
For branches with L4T R32.4.3 and later (dunfell-l4t-r32.4.3, gatesgarth and later branches), cboot is now built from sources by default, rather than using the prebuilt copy that comes with the L4T kit, so you should specify cboot-t18x instead of cboot-prebuilt for the PREFERRED_PROVIDER setting.
Note that in L4T R32.2.x, cboot has issues if the kernel or the initrd is too large, at least on TX2 platforms, causing kernel panics at boot time. With L4T R32.3.1, the kernel size limitation appears to be resolved, but if you use a separate initrd (instead of building it into the kernel as an initramfs), there is still a limit of just a few megabytes on its size (the relevant definitions (for the TX2) are probably in bootloader/partner/t18x/common/include/soc/t186/tegrabl_sdram_usage.h in the cboot sources). If you plan to customize your kernel to build in more drivers, rather than leaving them as loadable modules, or if you need to build more functionality into your initial ram filesystem, use R32.3.1 and bundle the initramfs into your kernel.
Building cboot from sources
NVIDIA has, from time to time, made cboot source code available. For Jetson AGX Xavier platforms, the most recent source release was with L4T R32.2.3, published in the L4T public_sources archive. This copy of cboot was removed from L4T R32.3.1. For L4T R32.4.2, cboot sources have been published again (for Xavier platforms only) as a separate download. For L4T R32.4.3 and R32.4.4, cboot sources are available for both TX2 and Xavier platforms.
Older releases (R28.x for TX2, R31.1 for Xavier) were restricted downloads. You must use your Developer Network login credentials to download the source package from the appropriate L4T page on NVIDIA’s website and store that tarball on your build host. The NVIDIA_DEVNET_MIRROR variable is used to locate the sources; see the recipes for more details on naming.
To use cboot built from source in your pre-R32.4.3 builds, set
PREFERRED_PROVIDER_virtual/bootloader = "cboot"
For R32.4.3 and later, the default is to build cboot from source, and the recipe names changed to be cboot-t18x for Jetson TX2 platforms and cboot-t19x for Jetson Xavier platforms.
PACKAGECONFIG for cboot builds
In branches with L4T R32.4.3 and later, you can control the inclusion of some cboot features by modifying the PACKAGECONFIG setting for the cboot recipe for your target device. All features are enabled by default, to match the stock L4T settings.
For Jetson-TX2 (tegra186/t18x) platforms, the following PACKAGECONFIG options are available:
| PACKAGECONFIG option | Description |
|---|---|
| display | cboot initializes the display; can be disabled for headless targets |
| recovery | enables booting the recovery kernel and rootfs (not currently populated in L4T) |
For Xavier (tegra194/t19x) platforms, the following PACKAGECONFIG options are available:
| PACKAGECONFIG option | Description |
|---|---|
| bootdev-select | enables booting from devices other than the built-in eMMC or SATA interfaces |
| display | cboot initializes the display; can be disabled for headless targets |
| ethernet | enables booting over the Ethernet interface |
| extlinux | enables cboot’s half-baked support for using an extlinux.conf file |
| recovery | enables booting the recovery kernel and rootfs (not currently populated in L4T) |
| shell | enables the countdown pause during boot to break into the cboot “shell” |
Note that removing the bootdev-select option has no effect on builds for the Xavier NX development kit; the recipe always enables that option for that target, since it is required for booting from the SDcard.
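For example, to drop display initialization from cboot for a headless TX2 target, a bbappend along these lines could be used. The recipe name follows the R32.4.3+ naming above, and the underscore-style override shown matches the syntax used on those older branches; this exact bbappend is an illustration, not something shipped in the layer:

```
# cboot-t18x_%.bbappend in your own layer
PACKAGECONFIG_remove = "display"
```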
Jetson TX1/Nano platforms
While NVIDIA does ship a pre-built version of cboot for the tegra210 platforms (TX1 and Nano), they do not provide source code. U-Boot is the user-modifiable bootloader for those platforms.
For many L4T/Jetson Linux releases, NVIDIA has provided a mechanism (the jetson-io scripts) for applying device tree overlays (.dtbo files) dynamically at runtime. For OE/Yocto-based builds, device trees are built from sources, so runtime application of DTB overlays is less of an issue. The meta-tegra layer does provide some mechanisms for applying DTB overlays, through some build-time variable settings.
Build-time application of overlays
This mechanism is supported in the branches based on L4T R32.6.x through R35.x only. Overlays are applied to the device tree during the kernel build, directly modifying your kernel DTB. (For L4T R36 and later, the NVIDIA device trees are no longer provided in the kernel source tree.)
Locating overlays
The exact list of overlays supplied by NVIDIA varies by target platform. You can find them by building the kernel recipe (virtual/kernel or linux-tegra) and examining its output under ${BUILDDIR}/work/tmp/${MACHINE}/linux-tegra.
Applying overlays
Set the KERNEL_DEVICETREE_APPLY_OVERLAYS variable to a blank-separated list of .dtbo file names to have those overlays applied during the kernel build. You can do this in your machine configuration file, or add it, for example, to the local.conf file in your build workspace.
Example
For example, to configure a Jetson Xavier NX development kit for IMX477 and IMX219 cameras, you would add the following line to your $BUILDDIR/conf/local.conf file:
KERNEL_DEVICETREE_APPLY_OVERLAYS:jetson-xavier-nx-devkit = "tegra194-p3668-all-p3509-0000-camera-imx477-imx219.dtbo"
Other possible use cases
For U-Boot-based Jetsons (only supported on a subset of Jetson modules with L4T R32.x), the .dtbo files will get populated into the /boot directory in the rootfs, and you could modify the /boot/extlinux/extlinux.conf file to add an FDTOVERLAY line to have one or more overlays applied at boot time. Unfortunately, OE-Core’s support for generating extlinux.conf content does not include support for FDTOVERLAY lines, so to make such a change you would have to work out a way to rewrite that file in a bbappend.
For out-of-tree device trees
For L4T R36.x, the nvidia-kernel-oot recipe is the default device tree provider for the Jetson platforms. You can also set the PREFERRED_PROVIDER_virtual/dtb variable to point to a recipe for providing your own customized device tree. To apply overlays to these device trees, add fdtoverlay invocations to the compilation step via a bbappend (for nvidia-kernel-oot) or in your custom recipe.
Example out-of-tree devicetree in tegra-demo-distro
See the tegra-demo-distro example at meta-tegrademo/recipes-bsp/tegrademo-devicetree which shows how to modify a base devicetree from nvidia-kernel-oot to one specific to your hardware platform. This simple example just adds a single “compatible” line to your base devicetree. To use this example:
1. Determine which devicetree is currently in use. One way to do this is with `bitbake -e <your image>`, looking at the value of `KERNEL_DEVICETREE`.
2. Determine whether there's an existing devicetree in meta-tegrademo/recipes-bsp/tegrademo-devicetree which uses your existing devicetree as a base. Current examples are:
   - `tegra234-p3768-0000+p3767-0005-oe4t.dts`: `jetson-orin-nano-devkit` or `jetson-orin-nano-devkit-nvme` builds on a p3768 (Orin Nano Devboard) carrier
   - `tegra234-p3768-0000+p3767-0000-oe4t.dts`: NVIDIA Jetson Orin NX 16GB in a p3768 (Orin Nano Devboard) carrier
   - `tegra234-p3737-0000+p3701-0000-oe4t.dts`: `jetson-agx-orin-devkit`
3. If there's not an existing devicetree built from your base `KERNEL_DEVICETREE`, follow the examples to add one to SRC_URI and to the repo.
4. Modify your MACHINE conf or local.conf to specify your dtb provider and `KERNEL_DEVICETREE` using something like this:
PREFERRED_PROVIDER_virtual/dtb = "tegrademo-devicetree"
KERNEL_DEVICETREE:jetson-orin-nano-devkit-nvme = "tegra234-p3768-0000+p3767-0005-oe4t.dtb"
KERNEL_DEVICETREE:jetson-orin-nano-devkit = "tegra234-p3768-0000+p3767-0005-oe4t.dtb"
Where KERNEL_DEVICETREE overrides the setting for your MACHINE, referencing the devicetree filename with *.dtb in the place of *.dts.
5. Build, flash, and boot the board, and cat /sys/firmware/devicetree/base/compatible to see the compatible string printed as configured in the devicetree. You should see a string which starts with "oe4t", as shown here for the Orin Nano:
root@jetson-orin-nano-devkit-nvme:~# cat /sys/firmware/devicetree/base/compatible
oe4t,p3768-0000+p3767-0005+tegrademonvidia,p3768-0000+p3767-0005-supernvidia,p3767-0005nvidia,tegra234
Runtime application of overlays in SPI Flash
This mechanism is supported in branches based on L4T R35.x and later. Overlays are appended to the kernel DTB by the NVIDIA flashing/signing tools, and are applied by the UEFI bootloader at runtime. Overlays are stored in SPI flash and are only updated on capsule update or tegraflash.
Locating overlays
The exact list of overlays supplied by NVIDIA varies by target platform. You can find them on R35.x-based branches by building the kernel recipe (virtual/kernel or linux-tegra) and examining its output under ${BUILDDIR}/work/tmp/${MACHINE}/linux-tegra. For R36.x-based branches, device trees are built as part of the nvidia-kernel-oot recipe.
Applying overlays
Append your additional overlays to the TEGRA_PLUGIN_MANAGER_OVERLAYS variable, which consists of a blank-separated list of .dtbo file names. You can do this in your machine configuration file, or add it, for example, to the local.conf file in your build workspace. That variable is set by the layer to include overlays that NVIDIA requires for its platforms, so be sure to append to it, rather than overwriting it.
Example
For example, to configure the pins on the 40-pin expansion header of the Jetson Orin Nano development kit, you would add the following line to your $BUILDDIR/conf/local.conf file:
TEGRA_PLUGIN_MANAGER_OVERLAYS:append:jetson-orin-nano-devkit = " tegra234-p3767-0000+p3509-a02-hdr40.dtbo"
Runtime application of overlays in the rootfs partition
With https://github.com/OE4T/meta-tegra/pull/1968, support is available to apply overlays in the rootfs partition using the OVERLAYS extlinux.conf option. This means you can tie overlays to a rootfs slot and store/update them there instead of in the SPI flash.
Only overlays which modify the kernel DTB are supported, since the overlay application happens late in the boot sequence.
See this section of the extlinux.conf wiki page for details about configuring OVERLAYS in extlinux.conf.
Using gcc7 from the Contrib Layer
Starting with the warrior branch, meta-tegra includes a contrib layer with user-contributed recipes
for optional inclusion in your builds. The layer includes recipes for gcc7 that you can use for compatibility
with CUDA 10.0.
Configuring your builds for GCC 7
Follow the steps below to switch to GCC 7:
- Use `bitbake-layers add-layer` to add the `meta-tegra/contrib` layer to your project in `build/conf/bblayers.conf`.
- Select the GCC version in your `build/conf/local.conf` and pull in the required configuration like this:
GCCVERSION = "7.%"
require contrib/conf/include/gcc-compat.conf
Troubleshooting
Older GCC versions, such as GCC 7, do NOT support -fmacro-prefix-map. As a result, with the default settings, building newer releases of the Yocto Project (for example, warrior) with an older GCC version may produce errors like "cannot compute suffix of object files". To fix this, add the following lines to your build/conf/local.conf:
# GCC 7 doesn't support fmacro-prefix-map, results in "error: cannot compute suffix of object files: cannot compile"
DEBUG_PREFIX_MAP_remove = "-fmacro-prefix-map=${WORKDIR}=/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR}"
NOTE: This configuration is applied in contrib/conf/include/gcc-compat.conf. No further action is required if you have already required that file in build/conf/local.conf.
See Also
- Working with NVIDIA Tegra BSP and Supporting Latest CUDA Versions, Leon Anavi, Yocto Dev Summit 2019 slides
Update 16-Dec-2021: The master branch has support for restricting the use of the older gcc toolchain just for CUDA compilations, and the meta-tegra main layer includes the recipes to support this. You no longer need to use an older toolchain for building everything, and the recipes for the older toolchains have been dropped from the contrib layer. See #867 for more information.
For honister and earlier branches
With the JetPack 4.4 Developer Preview release (L4T R32.4.2), NVIDIA updated CUDA support
for the Jetson platforms to CUDA 10.2, which is compatible with GCC 8. On the dunfell-l4t-r32.4.2
and master branches, the contrib layer in this repository has been updated to include recipes
for the gcc 8 toolchain, imported from the OE-Core warrior branch. If you intend to build
packages that use CUDA, you should configure your build to use GCC 8.
If you have previously configured your builds for GCC 7 when using an earlier version of meta-tegra with an older L4T/JetPack release, you can retain those settings and continue to use GCC 7, as builds should be compatible with either version of the toolchain.
Configuring your builds for GCC 8
Follow the steps below to switch to GCC 8:
- Use `bitbake-layers add-layer` to add the `meta-tegra/contrib` layer to your project in `build/conf/bblayers.conf`.
- Select the GCC version in your `build/conf/local.conf` using a setting like this:
GCCVERSION = "8.%"
or
GCCVERSION_aarch64 = "8.%"
if you have other platforms (with other CPU architectures) in your build setup that require the latest toolchain provided by OE-Core.
Overview
As mentioned in the README, OE-Core removed gcc7 support starting with the warrior release. However, CUDA 10 does not support gcc8. This means you need to pull in another layer, or other changes, that provide a gcc7 toolchain in order to support CUDA 10.0.
Fortunately, adding gcc7 does not require a lot of work if you use the meta-linaro project. See the tested instructions below.
Instructions for warrior branch
- Add the meta-linaro-toolchain layer as a submodule in your project by cloning the meta-linaro repository and checking out the appropriate branch (warrior).
- Use `bitbake-layers add-layer` to add the meta-linaro/meta-linaro-toolchain layer to your project in `build/conf/bblayers.conf`. You can add just the meta-linaro-toolchain folder and not the entire meta-linaro layer.
- Reference the GCC version in your `build/conf/local.conf` like this:
GCCVERSION = "linaro-7.%"
- Add these lines to your `build/conf/local.conf` to prevent errors like "cannot compute suffix of object files", which result from the missing -fmacro-prefix-map support in GCC 7 and the default settings on the warrior branch:
# GCC 7 doesn't support fmacro-prefix-map, results in "error: cannot compute suffix of object files: cannot compile"
# Change the value from bitbake.conf DEBUG_PREFIX_MAP to remove -fmacro-prefix-map
DEBUG_PREFIX_MAP = "-fdebug-prefix-map=${WORKDIR}=/usr/src/debug/${PN}/${EXTENDPE}${PV}-${PR} \
-fdebug-prefix-map=${STAGING_DIR_HOST}= \
-fdebug-prefix-map=${STAGING_DIR_NATIVE}= \
"
- For recipes which fail during the configuration stage with messages like this:
cc1: error: -Werror=missing-attributes: no option -Wmissing-attributes
cc1: error: -Werror=missing-attributes: no option -Wmissing-attributes
Add a .bbappend to your layer which removes the unsupported missing-attributes flag from the respective CPPFLAGS for the host and target compiles. For instance, to resolve this for libxcrypt you can add a recipes-core/libxcrypt/libxcrypt.bbappend to your layer with this content:
# For GCC7 support
TARGET_CPPFLAGS = "-I${STAGING_DIR_TARGET}${includedir}"
CPPFLAGS_append_class-nativesdk = ""
Note that the libxcrypt recipe in OE-Core’s warrior branch was updated in September 2019 (for Yocto Project 2.7.2) to remove the compiler option that causes this error with older compilers.
Wayland Weston Support on Jetson Platforms
Support for Wayland/Weston has been adapted from the open-source libraries and patches that NVIDIA has published, rather than using the binary-only libraries packaged into the L4T BSP.
DRM/KMS support
Starting with L4T R32.2.x, DRM/KMS support in the BSP is provided through a combination of a custom
libdrm.so shared library and the tegra-udrm kernel module. The library intercepts some DRM API calls;
any APIs it does not handle directly are passed through to the standard implementation of libdrm.
Builds that include weston will also include a configuration file (via the tegra-udrm-probeconf recipe)
that loads the tegra-udrm module with the parameter modeset=1. This enables KMS support in the
L4T-specific libdrm library. If your build includes a different Wayland-based compositor, you may also
need to include this configuration file.
(Earlier versions of L4T used a different custom libdrm implementation that had no KMS support and was
not ABI-compatible with the standard libdrm implementation.)
Mesa build changes
The Mesa build has been changed to enable libglvnd support, which creates the necessary vendor plugins of the EGL and GLX libraries and packages them as libegl-mesa and libgl-mesa.
xserver-xorg changes
The xserver-xorg build has also been changed to disable DRI and KMS support on Tegra platforms.
libglvnd
Starting with L4T R32.1, the BSP uses libglvnd rather than including pre-built copies of the OpenGL/EGL/GLES libraries.
egl-wayland
The egl-wayland extension is built from source, with an additional patch to correct an issue with detecting Wayland displays and surfaces. The recipe also installs the needed JSON file so that the extension can be found at runtime.
weston-eglstream
NVIDIA’s patches for supporting Weston using the EGLStream/EGLDevice backend are maintained in this repository. As of L4T R32.2.x, no additional Tegra-specific patches are required.
The --use-egldevice option gets added to the command line when starting Weston to activate this support.
Note that support for the EGLStream backend was dropped in Weston 10 in favor of using GBM. We supply a backend for libgbm that uses
NVIDIA's libnvgbm.so to manage GBM objects, and we still patch Weston to support the EGLStream protocol for Wayland clients.
XWayland
XWayland appears to work, but hardware-accelerated OpenGL (through the libGLX_nvidia provider) is not available.
Testing
The following tests are performed:
- Verify that `core-image-weston` builds.
- Verify that weston starts at boot time.
- Verify that weston sample programs, such as `weston-simple-egl`, display appropriate output.
- Verify that the `nveglglessink` gstreamer plugin works with the `winsys=wayland` parameter by running a gstreamer pipeline to display an H.264 video. Note that the `DISPLAY` environment variable must not be set, per the NVIDIA documentation.
- Verify that the `l4t-graphics-demos` applications work.
Troubleshooting
The following commands work on a Jetson TX2 and probably others:
Turn off HDMI:
echo -1 > /sys/kernel/debug/tegra_hdmi/hotplug
echo 4 > /sys/class/graphics/fb0/blank
(Source)
Turn on HDMI:
echo 1 > /sys/kernel/debug/tegra_hdmi/hotplug
echo 0 > /sys/class/graphics/fb0/blank
(Source)
Reading HDMI connection state:
/sys/devices/virtual/switch/hdmi/state is 0 when disconnected and 1 when connected. (Source)
While not enabled by default (except on the Jetsons that use the U-Boot bootloader), you can use the
L4T extlinux.conf support in your builds.
For L4T R35.x and later
In the kirkstone and later branches based on the L4T R35.x and later series of releases, set UBOOT_EXTLINUX = "1" to configure the build to use an extlinux.conf file. (As of 14 Apr 2024, "1" is now the default setting in the master branch.)
See the comments in l4t-extlinux-config.bbclass for additional configuration settings you can use.
UBOOT_EXTLINUX_FDT
Before https://github.com/OE4T/meta-tegra/pull/1968, the UBOOT_EXTLINUX_FDT setting must be set to exactly UBOOT_EXTLINUX_FDT = "/boot/${DTBFILE}"; after that PR (and its backports), it can be set to any dtb file name without a full path, like UBOOT_EXTLINUX_FDT = "${DTBFILE}".
When set, this adds a devicetree entry in the extlinux.conf file. This setting is useful for easy testing of devicetree changes in the kernel and to support devicetree transitions on slot switch without capsule update. Note that when UBOOT_EXTLINUX or UBOOT_EXTLINUX_FDT is not set, the kernel-dtb partitions defined in the root filesystem are ignored and the devicetree for the kernel is taken from the devicetree which is appended to the uefi image, therefore only updated when the uefi image is changed via tegraflash or capsule update.
efivar -p --name 781e084c-a330-417c-b678-38e696380cb9-L4TDefaultBootMode should return a value of 1 when using this feature. For additional context see this thread in element.
UBOOT_EXTLINUX_FDTOVERLAYS
The PR at https://github.com/OE4T/meta-tegra/pull/1968 adds support for specifying a list of overlays in your extlinux.conf file. These overlays are also stored on the rootfs and applied to the kernel DTB at boot time after root slot selection.
This feature is only supported when UBOOT_EXTLINUX_FDT is specified.
To use, specify
UBOOT_EXTLINUX_FDT = "${DTBFILE}"
UBOOT_EXTLINUX_FDTOVERLAYS = "my-overlay.dtbo"
Where "my-overlay.dtbo" is an overlay built using the mechanisms specific to your branch implementation (or potentially one provided by NVIDIA. See Using-device-tree-overlays for more details. Note that since the overlay only happens to the kernel DTB this mechanism cannot be used to make any changes to the UEFI DTB.
Caveats
- The upstream UEFI bootloader does not implement this; it was tacked on by NVIDIA in their L4TLauncher EFI application.
- The ext4 filesystem implementation that NVIDIA provides in their bootloader may have some bugs/limitations that could prevent it from reading the extlinux.conf or other files in your root filesystem. Using newer ext4 features, or non-ext4 filesystems for your root filesystem, could lead to boot failures.
- The extlinux.conf syntax supported in L4TLauncher is not the same as U-Boot's, and the parsing code isn't the most robust/forgiving, so be careful about any modifications you may want to make, to avoid boot failures.
For L4T R32.x
In L4T R32.x:
- The TX1/Nano platforms use U-Boot by default, so no changes are required to use extlinux.conf files.
- The TX2 platform defaults to using U-Boot, which supports extlinux.conf. TX2 builds can be configured to use cboot without U-Boot, and the TX2 cboot implementation does not support extlinux.conf.
- The Xavier platforms have a different cboot code base which (unlike the TX2 implementation) does have some support for extlinux.conf files. The rest of this page covers the Xavier implementation.
Configuring Xavier extlinux.conf support
Add the cboot-extlinux package to your image to enable booting your Xavier device
with the kernel loaded from /boot in the rootfs instead of from a separate partition.
This is only available in the kirkstone-l4t-r32.7.x branch (as of this writing).
Use with caution. Not recommended for production use.
Notes
The cboot bootloader on the Xavier (t194) platforms has support for loading the kernel, initial ramdisk,
and device tree from files in the rootfs, rather than the kernel partition. The stock L4T BSP has
supported this for several releases, installing the kernel image and initrd into /boot and
a /boot/extlinux/extlinux.conf file that cboot uses to locate the files. This can simplify kernel
development by eliminating the need to reflash the device to boot with updated kernels.
To implement this in meta-tegra, the cboot-extlinux recipe has been added. Adding cboot-extlinux to your
image will include the necessary files – kernel, initrd (if not bundled), and optionally the
device tree, along with the extlinux.conf file and signatures for the files that are expected to
be signed – in your rootfs.
When extlinux support in cboot is enabled (which it is by default), cboot will first try to mount the rootfs to locate
the extlinux.conf file. The rootfs is either marked as such with a partition GUID (see below) or is assumed
to be the first partition on the boot medium (SDcard, eMMC, or external device). cboot then tries to open
/boot/extlinux/extlinux.conf on that filesystem. If successful, it parses the configuration, then attempts
to load the kernel, initrd, and/or device tree based on the path names in the file. For elements that are
not configured in that file (or all of them, if the file does not exist), cboot falls back to loading them
from partitions on the device (kernel for the kernel+initrd, kernel-dtb for the device tree).
extlinux.conf file format
The format of the configuration file is a subset of the format used in the
distro boot
feature of U-Boot. The cboot-extlinux-config.bbclass file implements the cboot-specific configuration subset; see the comments in that
file for more information.
WARNING Modifying the extlinux.conf file incorrectly will often result in
cboot crashes, making your device unbootable. Use caution when making any changes
to the file.
Adding the device tree
By default, the cboot-extlinux recipe installs the default kernel image and initrd
(if configured to be separate from the kernel), but not the device tree, to align
with the default stock L4T setup. Set UBOOT_EXTLINUX_FDT = "/boot/${DTBFILE}" in
either a bbappend or in your local.conf to include the device tree.
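For example, a one-line bbappend (file name illustrative; the same line can go in local.conf) might look like:
# cboot-extlinux_%.bbappend
UBOOT_EXTLINUX_FDT = "/boot/${DTBFILE}"
Here DTBFILE is assumed to be set by the machine configuration, as in standard meta-tegra builds.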
Incompatible with A/B redundancy
Using cboot-extlinux for loading the kernel is not compatible with the A/B
redundancy mechanism - the kernel will always be loaded from the A rootfs partition.
It may be possible to fix this by assigning a unique partition GUID to each of
the two rootfs partitions, and creating cboot options files (cbo.dtb files) to
configure the rootfs GUIDs - one to be loaded into the CPUBL-CFG partition, and
the other into CPUBL-CFG_b. However, that would conflict with the normal bootloader
update mechanism, since BUP payloads don’t distinguish between the A and B slot for their
content. Some extra mechanism would be needed to keep the two CPUBL-CFG partitions
synchronized with the corresponding rootfs partition GUIDs.
Filesystem restrictions
This has only been tested with ext4-formatted root filesystems, and bugs found in
cboot’s ext4 implementation have been patched to make this work. Other filesystem types
are unlikely to work. Also, you should use the cboot-t19x recipe that builds
cboot from source to get the required patches (this is the default).
OE4T Contributor Guide
See the CONTRIBUTING.md file for details.
In addition to code and documentation contributions, we greatly appreciate help in the form of testing.
Please see the Release and Validation sheet for a list of current test coverage and test cases. Request edit access on this sheet if you'd like to help contribute.
Documentation Workflow
This project uses mdBook to generate documentation, with GitHub Actions for automated builds and GitHub Pages for hosting.
Repository Layout
Documentation source files live alongside the Yocto BSP layer content:
meta-tegra/
├── book.toml # mdBook configuration
├── docs/ # Documentation source (markdown)
│ ├── SUMMARY.md # Table of contents for mdBook
│ ├── README.md # Introduction / landing page
│ ├── *.md # Documentation pages
│ └── mdbook/ # Custom mdBook assets
│ ├── css/custom.css # Version dropdown styling
│ └── js/version-dropdown.js # Version switching logic
└── .github/workflows/
└── mdbook-versioned.yml # CI/CD workflow
The book.toml in the repository root configures mdBook. The src setting
points to the docs/ directory, and custom CSS and JavaScript are loaded for
the version dropdown:
[book]
title = "OE4T Meta Tegra"
authors = ["Matt Madison", "Dan Walkes"]
language = "en"
src = "docs"
[output.html]
additional-css = ["docs/mdbook/css/custom.css"]
additional-js = ["docs/mdbook/js/version-dropdown.js"]
Multi-Version Support
Each tracked branch gets its own independent copy of the documentation on GitHub
Pages. The list of published versions is controlled by a versions.json file
in the GitHub Pages content repository (OE4T/oe4t.github.io).
Adding Pages
All documentation pages are Markdown files in the docs/ directory. To add a
new page:
- Create a new .md file in docs/.
- Add an entry for it in docs/SUMMARY.md (as illustrated below). The SUMMARY file defines the table of contents and sidebar navigation. Pages not listed in SUMMARY.md will not appear in the built documentation.
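For instance, a hypothetical NewPage.md (the file name and title here are made up for illustration) would be added to docs/SUMMARY.md with an entry like:
- [Introduction](README.md)
- [My New Page](NewPage.md)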
Page Editing Tips
- Please ensure any embedded links to other documentation files are done with relative paths. For example, use [Link to another page in docs](OtherPageName.md) instead of [Link to another page in docs](https://github.com/OE4T/meta-tegra/blob/master/docs/OtherPageName.md).
- You can use the trick at this stackoverflow post to add images to your markdown file without the need to check images into the repo.
Preview Locally
To preview the documentation locally, install mdBook and run the following from the repository root:
mdbook serve
This starts a local web server with live reloading as you edit files.
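If mdBook is not already installed, one common way to get it (an assumption; other installation methods are covered in the mdBook documentation) is via cargo:
cargo install mdbook
mdbook serve --open
The --open flag simply opens the rendered book in your default browser.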
Build and Deploy
The GitHub Actions workflow (.github/workflows/mdbook-versioned.yml) triggers
on pushes to tracked branches:
- Build: runs mdbook build inside a peaceiris/mdbook container, producing output in a per-branch directory.
- Deploy: pushes the built HTML to a subdirectory in the main branch of the external GitHub Pages content repository (OE4T/oe4t.github.io) using peaceiris/actions-gh-pages.
Each branch deploys to its own directory, resulting in a structure like this in OE4T/oe4t.github.io:
<repo-root>/
├── index.html # redirects to ./master/
├── versions.json # lists available versions for the dropdown
├── master/ # docs built from the master branch
└── scarthgap/ # docs built from the scarthgap branch
The workflow can also be triggered manually via workflow_dispatch from the
GitHub Actions UI.
Deployment credentials
The deploy step requires an SSH deploy key stored as a repository secret:
OE4T_GITHUB_DEPLOY_KEY
If the secret is missing (common in forks), the workflow emits a warning ("The repository secret must contain the OE4T_GITHUB_DEPLOY_KEY to run this step.") and skips the deploy step without failing the workflow.
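As a rough sketch only (refer to .github/workflows/mdbook-versioned.yml for the authoritative content; the step name, publish directory, and destination below are assumptions), an external-repository deploy step wired to that secret typically looks something like:
- name: Deploy
  uses: peaceiris/actions-gh-pages@v3
  with:
    deploy_key: ${{ secrets.OE4T_GITHUB_DEPLOY_KEY }}
    external_repository: OE4T/oe4t.github.io
    publish_branch: main
    publish_dir: ./book
    destination_dir: ${{ github.ref_name }}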
Version Dropdown
A custom JavaScript file (docs/mdbook/js/version-dropdown.js) adds a version
selector dropdown to the mdBook navigation bar. It fetches versions.json from
the site root to populate the list, and when a different version is selected it
navigates to the same page path under the new version’s directory.
The versions.json file is not auto-generated; it is maintained manually in the GitHub Pages content
repository (OE4T/oe4t.github.io, main branch), giving explicit control over which versions appear
in the dropdown.
Adding a New Version
To add documentation for a new branch (e.g., kirkstone):
- Add the branch name to the on.push.branches list in .github/workflows/mdbook-versioned.yml (see the sketch below).
- Push content to that branch. The workflow will automatically build and deploy to a new directory in OE4T/oe4t.github.io.
- Update versions.json in OE4T/oe4t.github.io (on the main branch) to include the new entry so it appears in the version dropdown.
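The trigger section of the workflow then looks roughly like the sketch below (master and scarthgap are the currently tracked branches, kirkstone is the example being added; the exact file contents may differ):
on:
  push:
    branches:
      - master
      - scarthgap
      - kirkstone
  workflow_dispatch: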