Overview


This tutorial guides you from the initial test_board reference design for the TE0820 SoM to a custom extensible Vitis platform, and then shows how to implement and run a basic vector addition (VADD) example, the Vitis-AI 3.0 test_dpu_trd example (ResNet50) on the DPU, and a vehicle classification example on the DPU with video input from a USB camera.

Key Features


  • AMD Vitis 2022.2
  • Vitis AI 3.0
  • Vitis custom extensible platform
  • PetaLinux 2022.2 extended with Vitis AI 3.0 libraries
  • Vector addition
  • ResNet50 on DPU
  • Vehicle classification on DPU with video input from USB camera

Requirements


Type             | Name                                          | Version                                  | Note
HW               | TE0820 module                                 | --                                       | --
HW               | TE0706-03 test board or TE0701-06 test board  | --                                       | --
Cable            | Diverse cables: USB, power, ...               | --                                       | --
Virtual machine  | Oracle, VMware or MS WSL                      | --                                       | optional
OS               | Linux                                         | --                                       | Xilinx-supported OS running on a VM or natively
Reference design | TE0820-test_board-vivado_2022.2-build_2_*.zip | build 2 or higher to match Vivado 2022.2 | --
SW               | Vitis                                         | 2022.2                                   | Tutorial was created and tested with the listed tool versions
SW               | Vivado                                        | 2022.2                                   | --
SW               | PetaLinux                                     | 2022.2                                   | --
SW               | Putty                                         | --                                       | --
Repo             | Vitis-AI                                      | 3.0                                      | GitHub - Xilinx/Vitis-AI at 3.0, https://xilinx.github.io/Vitis-AI/3.0/html/index.html


Prepare Development Environment

Virtual Machine


The presented extensible platform has been created on a Windows 10 Pro PC:
Windows 10 Pro, ver. 21H2, OS build 19044.1889
VMware Workstation 16 Player (Version 16.2.4 build-20089737)
Ubuntu 20.04 LTS Desktop 64-bit PC (AMD64)
https://linuxconfig.org/Ubuntu-20-04-download


Vitis/Vivado 2022.2 and the creation of the extensible platform from the ZIP archive have also been tested on:
Windows 11 Pro PC (upgrade from Windows 10 Pro, ver. 21H2, OS build 19044.1889)
VMware Workstation 16 Player (Version 16.2.4 build-20089737)
Ubuntu 20.04 LTS Desktop 64-bit PC (AMD64)
https://linuxconfig.org/Ubuntu-20-04-download

Linux OS


The only supported operating systems are selected Linux distributions. You will need either a native or a virtual PC running a supported Linux distribution.

Create a new VM with a Linux OS supported by the Vitis 2022.2 tools.

Use English as the OS language for your Linux system. The keyboard layout can be any language.
Other OS languages may cause errors in the PetaLinux build process.

Set Language


In Ubuntu 20.04, open a terminal and type the command:

$ locale

The language is OK if the command response starts with:

LANG=en_US.UTF-8
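If the language differs, one common way to switch the locale to en_US.UTF-8 on Ubuntu (a generic Ubuntu step, not specific to this tutorial) is:

$ sudo locale-gen en_US.UTF-8
$ sudo update-locale LANG=en_US.UTF-8

Log out and log in again for the change to take effect.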

Set Bash as the Default Shell in Ubuntu


In Ubuntu, set bash as the default shell (/bin/sh).

$ sudo dpkg-reconfigure dash


Select: No

Use of the bash shell is required by the Xilinx tools.

The default system shell (/bin/sh) of the standard Ubuntu 20.04 LTS installation is dash.
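You can verify which shell /bin/sh points to with:

$ ls -l /bin/sh

After the reconfiguration it should point to bash instead of dash.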

 

Install OpenCL Client Drivers


On Ubuntu, install OpenCL Installable Client Driver Loader by executing:

$ sudo apt-get install ocl-icd-libopencl1
$ sudo apt-get install opencl-headers
$ sudo apt-get install ocl-icd-opencl-dev

Software Installation


Vitis 2022.2


Download Vitis

Download the Vitis tools installer from: https://www.xilinx.com/support/download.html

Install Vitis

If Vitis 2022.2 is not installed, follow installation steps described in:

https://docs.xilinx.com/r/en-US/ug1393-vitis-application-acceleration/Vitis-Software-Platform-Installation

After a successful installation of Vitis 2022.2 and Vivado 2022.2 into the /tools/Xilinx directory, a confirmation message is displayed with a prompt to run the installLibs.sh script.

Script location:
/tools/Xilinx/Vitis/2022.2/scripts/installLibs.sh

In the Ubuntu terminal, change directory to /tools/Xilinx/Vitis/2022.2/scripts and run the script with sudo privileges:

$ sudo ./installLibs.sh

The command installs a number of packages required by the Vitis 2022.2 tools, based on the actual OS version of your Ubuntu system.

Install License Supporting Vivado

In Ubuntu terminal, source paths to Vivado tools by executing

$ source /tools/Xilinx/Vitis/2022.2/settings64.sh

Execute Vivado License Manager:

$ vlm

From vlm, log in to your Xilinx account using a web browser.

In the web browser, generate a Vitis 2022.2 license. Select the Linux target.

Download the Xilinx license file and copy it into a directory of your choice, e.g.:
~/License/vitis_2022_2/Xilinx.lic

In vlm, select Load License -> Copy License

Putty


The Putty terminal can be used as an Ethernet-connected terminal. Putty supports keyboard, mouse and X11 forwarding for Zynq UltraScale+ applications designed for an X11 desktop GUI.

In Ubuntu terminal, execute:

$ sudo apt install putty



To test the installation, execute putty application from Ubuntu terminal by:

$ putty &

Exit from putty.

Petalinux 2022.2


Download PetaLinux

Download the PetaLinux tools installer from: https://www.xilinx.com/support/download/index.html/content/xilinx/en/downloadNav/embedded-design-tools.html

Install Required Libraries

Install PetaLinux 2022.2 following the guideline described in:
PetaLinux KICKstart - Public Docs - Trenz Electronic Wiki (trenz-electronic.de)


Before the PetaLinux installation, check the UG1144 chapter "PetaLinux Tools Installation Requirements" and install missing tools/libraries with the help of the script plnx-env-setup.sh attached to Xilinx Answer Record 73296 - PetaLinux: How to install the required packages for the PetaLinux Build Host?
https://www.xilinx.com/support/answers/73296.html

Use this page to download the script plnx-env-setup.sh.

The script detects whether the host OS is an Ubuntu, RHEL, or CentOS Linux distribution and then automatically installs all of the required packages for the PetaLinux build host.

The script requires root privileges. The script does not install the PetaLinux tools. Command to run the script:

$ sudo ./plnx-env-setup.sh

Update the package index and install the additional required libraries:

$ sudo apt-get update
$ sudo apt-get install iproute2 gawk python3 python build-essential gcc git make net-tools libncurses5-dev tftpd zlib1g-dev libssl-dev flex bison libselinux1 gnupg wget git-core diffstat chrpath socat xterm autoconf libtool tar unzip texinfo zlib1g-dev gcc-multilib automake zlib1g:i386 screen pax gzip cpio python3-pip python3-pexpect xz-utils debianutils iputils-ping python3-git python3-jinja2 libegl1-mesa libsdl1.2-dev pylint3 -y
Install Petalinux

Follow the directions in the "Installing the PetaLinux Tool" section of UG1144:
https://www.xilinx.com/support/documentation/sw_manuals/xilinx2020_1/ug1144-petalinux-tools-reference-guide.pdf

Do not install PetaLinux from a shared folder; copy the installer into your home directory.

$ mkdir -p ~/petalinux/2022.2



Copy petalinux-v2022.2-final-installer.run into ~/petalinux/2022.2.
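If needed, make the copied installer executable and change into the installation directory first (standard shell steps, not specific to this tutorial):

$ cd ~/petalinux/2022.2
$ chmod +x petalinux-v2022.2-final-installer.run

Then run the installer: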

$ ./petalinux-v2022.2-final-installer.run

Source environment

$ source ~/petalinux/2022.2/settings.sh

Prepare Reference Design for Extensible Custom Platform


Update Vivado Project for Extensible Platform


The Trenz Electronic scripts allow some settings to be changed via environment variables, depending on the OS used and the PC performance.

To improve performance on a multicore CPU, add the global environment variable on line 64:
export TE_RUNNING_JOBS=10

to /etc/bash.bashrc, or locally to design_basic_settings.sh

For other variables see also:

Project Delivery - Xilinx devices#EnvironmentVariables

In Ubuntu terminal, source paths to Vitis and Vivado tools by

$ source /tools/Xilinx/Vitis/2022.2/settings64.sh

Download the TE0820 test_board Linux design file (see the reference design download link in the chapter Requirements) with pre-built files to

 ~/Downloads/TE0820-test_board-vivado_2022.2-build_2_20230622121437.zip

This TE0820 test_board ZIP file contains bring-up scripts for the creation of PetaLinux for a range of modules in a zipped directory named “test_board”.

Unzip the file to directory:
~/work/te0820_84_240
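For example, from the Ubuntu terminal (assuming the ZIP file is located in ~/Downloads as above):

$ mkdir -p ~/work/te0820_84_240
$ unzip ~/Downloads/TE0820-test_board-vivado_2022.2-build_2_20230622121437.zip -d ~/work/te0820_84_240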

All supported modules are identified in file: ~/work/te0820_84_240/test_board/board_files/TE0820_board_files.csv

We will select module 84 with the name TE0820-05-4DE21MA, with the device xczu4ev-sfvc784-1-e, on the TE0706-03 carrier board. We will use the default clock of 240 MHz.
That is why we name the package te0820_84_240 and propose to unzip the TE0820 test_board Linux design files into the directory:
~/work/te0820_84_240

In Ubuntu terminal, change directory to the test_board directory:

$ cd ~/work/te0820_84_240/test_board

Set up the test_board directory files for a Linux host machine.
In Ubuntu terminal, execute:

$ chmod ugo+rwx ./console/base_sh/*.sh
$ chmod ugo+rwx ./_create_linux_setup.sh
$ ./_create_linux_setup.sh

Select option (0) to open Selection Guide and press Enter

Select variant 24 from the selection guide, press enter and agree selection

Create Vivado Project with option 1

Vivado Project will be generated for the selected variant.

The Selection Guide automatically modifies ./design_basic_settings.sh with the correct variant, so the other provided bash files can also be used later to recreate or reopen the Vivado project.

Instead of using the Selection Guide, the variant can also be selected manually:

Select option (2) to create maximum setup of CMD-Files and exit the script (by typing any key).

This moves the main design bash scripts to the top of the test_board directory. Set these files as executable from the Ubuntu terminal:

$ chmod ugo+rwx *.sh

In text editor, open file
~/work/te0820_84_240/test_board/design_basic_settings.sh

On line 63, change
export PARTNUMBER=LAST_ID
to
export PARTNUMBER=84

To improve performance on a multicore CPU, add on line 64:
export TE_RUNNING_JOBS=10

With this setting, Vivado will use up to 10 parallel logical processor cores
instead of the default of 2 parallel logical processor cores.

Save the modified file.
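As an alternative to the text editor, the same two changes can be applied from the Ubuntu terminal, for example (a sketch; adjust the number of jobs to your CPU):

$ cd ~/work/te0820_84_240/test_board
$ sed -i 's/export PARTNUMBER=LAST_ID/export PARTNUMBER=84/' design_basic_settings.sh
$ echo 'export TE_RUNNING_JOBS=10' >> design_basic_settings.sh

Note that the echo command appends the variable at the end of the file instead of line 64; the effect is the same because the whole file is sourced.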

This modification will guide the Trenz TE0820 test_board Linux design scripts to generate the Vivado HW for module 84 with the name TE0820-05-4DE21MA, with the device xczu4ev-sfvc784-1-e on the TE0706-03 carrier board.

In Ubuntu terminal, change directory to
~/work/te0820_84_240/test_board

Run the following script; it opens the Vivado tool and generates the Trenz Electronic HW project for the TE0820 test_board Linux design, part 84:

$ ./vivado_create_project_guimode.sh


In the Vivado Sources window, click on zusys_wrapper and then on zusys.bd to open the HW diagram in the IP Integrator.

It is possible to display the diagram in a separate window by clicking on the float icon in the upper right corner of the diagram.

Zynq Ultrascale+ block is configured for the Trenz TE0820 test_board Linux Design on the TE0706-03 carrier board.

This is the starting point for the standard PetaLinux system supported by Trenz. Parameters of this system and the compilation steps are described on the Trenz wiki page:

TE0820 Test Board - Public Docs - Trenz Electronic Wiki (trenz-electronic.de)

Follow the steps described on that wiki page if you would like to create a fixed, non-extensible Vitis platform.

The Extensible Vitis platform generation steps are described in next paragraphs.

Create Extensible Vitis platform


To implement the hardware, this tutorial offers two alternatives: Fast Track or Manual Track:

  • Choose the Fast Track to use a Tcl script that performs the same modifications as the Manual Track automatically.
  • Select the Manual Track if you want to see all hardware modifications required for the custom platform.
Fast Track

The block design of the Vivado project must be open for this step. Copy the following Tcl code into the Tcl command console of Vivado:

Tcl script to prepare the extensible Vitis platform
#activate extensible platform
set_property platform.extensible true [current_project]
save_bd_design
 
set_property PFM_NAME [string map {part0 zusys} [string map {trenz.biz trenz} [current_board_part]]] [get_files zusys.bd]
set_property platform.design_intent.embedded {true} [current_project]
set_property platform.design_intent.datacenter {false} [current_project]
set_property platform.design_intent.server_managed {false} [current_project]
set_property platform.design_intent.external_host {false} [current_project]
set_property platform.default_output_type {sd_card} [current_project]
set_property platform.uses_pr {false} [current_project]
save_bd_design
 
startgroup
create_bd_cell -type ip -vlnv xilinx.com:ip:clk_wiz:6.0 clk_wiz_0
endgroup

set_property -dict [list \
  CONFIG.CLKOUT2_JITTER {102.086} \
  CONFIG.CLKOUT2_PHASE_ERROR {87.180} \
  CONFIG.CLKOUT2_REQUESTED_OUT_FREQ {200.000} \
  CONFIG.CLKOUT2_USED {true} \
  CONFIG.CLKOUT3_JITTER {90.074} \
  CONFIG.CLKOUT3_PHASE_ERROR {87.180} \
  CONFIG.CLKOUT3_REQUESTED_OUT_FREQ {400.000} \
  CONFIG.CLKOUT3_USED {true} \
  CONFIG.CLKOUT4_JITTER {98.767} \
  CONFIG.CLKOUT4_PHASE_ERROR {87.180} \
  CONFIG.CLKOUT4_REQUESTED_OUT_FREQ {240.000} \
  CONFIG.CLKOUT4_USED {true} \
  CONFIG.MMCM_CLKOUT1_DIVIDE {6} \
  CONFIG.MMCM_CLKOUT2_DIVIDE {3} \
  CONFIG.MMCM_CLKOUT3_DIVIDE {5} \
  CONFIG.NUM_OUT_CLKS {4} \
  CONFIG.RESET_PORT {resetn} \
  CONFIG.RESET_TYPE {ACTIVE_LOW} \
] [get_bd_cells clk_wiz_0]
connect_bd_net [get_bd_pins clk_wiz_0/resetn] [get_bd_pins zynq_ultra_ps_e_0/pl_resetn0]
connect_bd_net [get_bd_pins clk_wiz_0/clk_in1] [get_bd_pins zynq_ultra_ps_e_0/pl_clk0]

startgroup
create_bd_cell -type ip -vlnv xilinx.com:ip:proc_sys_reset:5.0 proc_sys_reset_1
endgroup

set_property location {3 1192 -667} [get_bd_cells proc_sys_reset_1]
copy_bd_objs /  [get_bd_cells {proc_sys_reset_1}]
set_property location {3 1190 -487} [get_bd_cells proc_sys_reset_2]
copy_bd_objs /  [get_bd_cells {proc_sys_reset_2}]
set_property location {3 1126 -309} [get_bd_cells proc_sys_reset_3]
copy_bd_objs /  [get_bd_cells {proc_sys_reset_3}]
set_property location {3 1148 -136} [get_bd_cells proc_sys_reset_4]
connect_bd_net [get_bd_pins proc_sys_reset_1/slowest_sync_clk] [get_bd_pins clk_wiz_0/clk_out1]
connect_bd_net [get_bd_pins proc_sys_reset_2/slowest_sync_clk] [get_bd_pins clk_wiz_0/clk_out2]
connect_bd_net [get_bd_pins proc_sys_reset_3/slowest_sync_clk] [get_bd_pins clk_wiz_0/clk_out3]
connect_bd_net [get_bd_pins proc_sys_reset_4/slowest_sync_clk] [get_bd_pins clk_wiz_0/clk_out4]

startgroup
connect_bd_net [get_bd_pins proc_sys_reset_4/ext_reset_in] [get_bd_pins zynq_ultra_ps_e_0/pl_resetn0]
connect_bd_net [get_bd_pins zynq_ultra_ps_e_0/pl_resetn0] [get_bd_pins proc_sys_reset_3/ext_reset_in]
connect_bd_net [get_bd_pins zynq_ultra_ps_e_0/pl_resetn0] [get_bd_pins proc_sys_reset_2/ext_reset_in]
connect_bd_net [get_bd_pins zynq_ultra_ps_e_0/pl_resetn0] [get_bd_pins proc_sys_reset_1/ext_reset_in]
endgroup

startgroup
connect_bd_net [get_bd_pins proc_sys_reset_4/dcm_locked] [get_bd_pins clk_wiz_0/locked]
connect_bd_net [get_bd_pins clk_wiz_0/locked] [get_bd_pins proc_sys_reset_2/dcm_locked]
connect_bd_net [get_bd_pins clk_wiz_0/locked] [get_bd_pins proc_sys_reset_1/dcm_locked]
connect_bd_net [get_bd_pins clk_wiz_0/locked] [get_bd_pins proc_sys_reset_3/dcm_locked]
endgroup

set_property PFM.CLOCK {clk_out1 {id "2" is_default "false" proc_sys_reset "/proc_sys_reset_1" status "fixed" freq_hz "100000000"}} [get_bd_cells /clk_wiz_0]
set_property PFM.CLOCK {clk_out1 {id "2" is_default "false" proc_sys_reset "/proc_sys_reset_1" status "fixed" freq_hz "100000000"} clk_out2 {id "3" is_default "false" proc_sys_reset "/proc_sys_reset_2" status "fixed" freq_hz "200000000"}} [get_bd_cells /clk_wiz_0]
set_property PFM.CLOCK {clk_out1 {id "2" is_default "false" proc_sys_reset "/proc_sys_reset_1" status "fixed" freq_hz "100000000"} clk_out2 {id "3" is_default "false" proc_sys_reset "/proc_sys_reset_2" status "fixed" freq_hz "200000000"} clk_out3 {id "4" is_default "false" proc_sys_reset "/proc_sys_reset_3" status "fixed" freq_hz "400000000"}} [get_bd_cells /clk_wiz_0]
set_property PFM.CLOCK {clk_out1 {id "2" is_default "false" proc_sys_reset "/proc_sys_reset_1" status "fixed" freq_hz "100000000"} clk_out2 {id "3" is_default "false" proc_sys_reset "/proc_sys_reset_2" status "fixed" freq_hz "200000000"} clk_out3 {id "4" is_default "false" proc_sys_reset "/proc_sys_reset_3" status "fixed" freq_hz "400000000"} clk_out4 {id "5" is_default "false" proc_sys_reset "/proc_sys_reset_4" status "fixed" freq_hz "240000000"}} [get_bd_cells /clk_wiz_0]
set_property pfm_name zusys [get_files {zusys.bd}]
set_property PFM.CLOCK {clk_out1 {id "1" is_default "false" proc_sys_reset "/proc_sys_reset_1" status "fixed" freq_hz "100000000"} clk_out2 {id "3" is_default "false" proc_sys_reset "/proc_sys_reset_2" status "fixed" freq_hz "200000000"} clk_out3 {id "4" is_default "false" proc_sys_reset "/proc_sys_reset_3" status "fixed" freq_hz "400000000"} clk_out4 {id "5" is_default "false" proc_sys_reset "/proc_sys_reset_4" status "fixed" freq_hz "240000000"}} [get_bd_cells /clk_wiz_0]
set_property PFM.CLOCK {clk_out1 {id "1" is_default "false" proc_sys_reset "/proc_sys_reset_1" status "fixed" freq_hz "100000000"} clk_out2 {id "2" is_default "false" proc_sys_reset "/proc_sys_reset_2" status "fixed" freq_hz "200000000"} clk_out3 {id "4" is_default "false" proc_sys_reset "/proc_sys_reset_3" status "fixed" freq_hz "400000000"} clk_out4 {id "5" is_default "false" proc_sys_reset "/proc_sys_reset_4" status "fixed" freq_hz "240000000"}} [get_bd_cells /clk_wiz_0]
set_property PFM.CLOCK {clk_out1 {id "1" is_default "false" proc_sys_reset "/proc_sys_reset_1" status "fixed" freq_hz "100000000"} clk_out2 {id "2" is_default "false" proc_sys_reset "/proc_sys_reset_2" status "fixed" freq_hz "200000000"} clk_out3 {id "3" is_default "false" proc_sys_reset "/proc_sys_reset_3" status "fixed" freq_hz "400000000"} clk_out4 {id "5" is_default "false" proc_sys_reset "/proc_sys_reset_4" status "fixed" freq_hz "240000000"}} [get_bd_cells /clk_wiz_0]
set_property PFM.CLOCK {clk_out1 {id "1" is_default "false" proc_sys_reset "/proc_sys_reset_1" status "fixed" freq_hz "100000000"} clk_out2 {id "2" is_default "false" proc_sys_reset "/proc_sys_reset_2" status "fixed" freq_hz "200000000"} clk_out3 {id "3" is_default "false" proc_sys_reset "/proc_sys_reset_3" status "fixed" freq_hz "400000000"} clk_out4 {id "4" is_default "false" proc_sys_reset "/proc_sys_reset_4" status "fixed" freq_hz "240000000"}} [get_bd_cells /clk_wiz_0]
set_property PFM.CLOCK {clk_out1 {id "1" is_default "false" proc_sys_reset "/proc_sys_reset_1" status "fixed" freq_hz "100000000"} clk_out2 {id "2" is_default "false" proc_sys_reset "/proc_sys_reset_2" status "fixed" freq_hz "200000000"} clk_out3 {id "3" is_default "false" proc_sys_reset "/proc_sys_reset_3" status "fixed" freq_hz "400000000"} clk_out4 {id "4" is_default "true" proc_sys_reset "/proc_sys_reset_4" status "fixed" freq_hz "240000000"}} [get_bd_cells /clk_wiz_0]
save_bd_design

startgroup
set_property -dict [list \
  CONFIG.PSU__USE__IRQ0 {1} \
  CONFIG.PSU__USE__M_AXI_GP0 {1} \
] [get_bd_cells zynq_ultra_ps_e_0]
endgroup

connect_bd_net [get_bd_pins zynq_ultra_ps_e_0/maxihpm0_fpd_aclk] [get_bd_pins clk_wiz_0/clk_out4]

startgroup
create_bd_cell -type ip -vlnv xilinx.com:ip:axi_intc:4.1 axi_intc_0
endgroup

set_property CONFIG.C_IRQ_CONNECTION {1} [get_bd_cells axi_intc_0]
connect_bd_net [get_bd_pins axi_intc_0/s_axi_aclk] [get_bd_pins clk_wiz_0/clk_out4]
connect_bd_net [get_bd_pins axi_intc_0/s_axi_aresetn] [get_bd_pins proc_sys_reset_4/peripheral_aresetn]
connect_bd_net [get_bd_pins axi_intc_0/irq] [get_bd_pins zynq_ultra_ps_e_0/pl_ps_irq0]

startgroup
set_property CONFIG.PSU__MAXIGP0__DATA_WIDTH {32} [get_bd_cells zynq_ultra_ps_e_0]
endgroup

startgroup
apply_bd_automation -rule xilinx.com:bd_rule:axi4 -config { Clk_master {/clk_wiz_0/clk_out4 (240 MHz)} Clk_slave {/clk_wiz_0/clk_out4 (240 MHz)} Clk_xbar {/clk_wiz_0/clk_out4 (240 MHz)} Master {/zynq_ultra_ps_e_0/M_AXI_HPM0_FPD} Slave {/axi_intc_0/s_axi} ddr_seg {Auto} intc_ip {New AXI Interconnect} master_apm {0}}  [get_bd_intf_pins axi_intc_0/s_axi]
endgroup

disconnect_bd_net /proc_sys_reset_4_peripheral_aresetn [get_bd_pins ps8_0_axi_periph/S00_ARESETN]
disconnect_bd_net /proc_sys_reset_4_peripheral_aresetn [get_bd_pins ps8_0_axi_periph/M00_ARESETN]

startgroup
connect_bd_net [get_bd_pins ps8_0_axi_periph/S00_ARESETN] [get_bd_pins proc_sys_reset_4/interconnect_aresetn]
connect_bd_net [get_bd_pins proc_sys_reset_4/interconnect_aresetn] [get_bd_pins ps8_0_axi_periph/M00_ARESETN]
endgroup

set_property name ps8_0_axi_interconnect_1 [get_bd_cells ps8_0_axi_periph]
set_property name axi_interconnect_1 [get_bd_cells ps8_0_axi_interconnect_1]
set_property PFM.IRQ {intr { id 0 range 32 }} [get_bd_cells /axi_intc_0]
set_property PFM.AXI_PORT {M01_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"}} [get_bd_cells /axi_interconnect_1]
set_property PFM.AXI_PORT {M01_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"} M02_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"}} [get_bd_cells /axi_interconnect_1]
set_property PFM.AXI_PORT {M01_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"} M02_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"} M03_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"}} [get_bd_cells /axi_interconnect_1]
set_property PFM.AXI_PORT {M01_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"} M02_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"} M03_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"} M04_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"}} [get_bd_cells /axi_interconnect_1]
set_property PFM.AXI_PORT {M01_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"} M02_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"} M03_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"} M04_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"} M05_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"}} [get_bd_cells /axi_interconnect_1]
set_property PFM.AXI_PORT {M01_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"} M02_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"} M03_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"} M04_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"} M05_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"} M06_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"}} [get_bd_cells /axi_interconnect_1]
set_property PFM.AXI_PORT {M01_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"} M02_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"} M03_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"} M04_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"} M05_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"} M06_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"} M07_AXI {memport "M_AXI_GP" sptag "" memory "" is_range "false"}} [get_bd_cells /axi_interconnect_1]
set_property PFM.AXI_PORT {M_AXI_HPM1_FPD {memport "M_AXI_GP" sptag "" memory "" is_range "false"}} [get_bd_cells /zynq_ultra_ps_e_0]
set_property PFM.AXI_PORT {M_AXI_HPM1_FPD {memport "M_AXI_GP" sptag "" memory "" is_range "false"} S_AXI_HPC0_FPD {memport "S_AXI_HPC" sptag "" memory "" is_range "false"}} [get_bd_cells /zynq_ultra_ps_e_0]
set_property PFM.AXI_PORT {M_AXI_HPM1_FPD {memport "M_AXI_GP" sptag "" memory "" is_range "false"} S_AXI_HPC0_FPD {memport "S_AXI_HPC" sptag "" memory "" is_range "false"} S_AXI_HPC1_FPD {memport "S_AXI_HPC" sptag "" memory "" is_range "false"}} [get_bd_cells /zynq_ultra_ps_e_0]
set_property PFM.AXI_PORT {M_AXI_HPM1_FPD {memport "M_AXI_GP" sptag "" memory "" is_range "false"} S_AXI_HPC0_FPD {memport "S_AXI_HPC" sptag "" memory "" is_range "false"} S_AXI_HPC1_FPD {memport "S_AXI_HPC" sptag "" memory "" is_range "false"} S_AXI_HP0_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"}} [get_bd_cells /zynq_ultra_ps_e_0]
set_property PFM.AXI_PORT {M_AXI_HPM1_FPD {memport "M_AXI_GP" sptag "" memory "" is_range "false"} S_AXI_HPC0_FPD {memport "S_AXI_HPC" sptag "" memory "" is_range "false"} S_AXI_HPC1_FPD {memport "S_AXI_HPC" sptag "" memory "" is_range "false"} S_AXI_HP0_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP1_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"}} [get_bd_cells /zynq_ultra_ps_e_0]
set_property PFM.AXI_PORT {M_AXI_HPM1_FPD {memport "M_AXI_GP" sptag "" memory "" is_range "false"} S_AXI_HPC0_FPD {memport "S_AXI_HPC" sptag "" memory "" is_range "false"} S_AXI_HPC1_FPD {memport "S_AXI_HPC" sptag "" memory "" is_range "false"} S_AXI_HP0_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP1_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP2_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"}} [get_bd_cells /zynq_ultra_ps_e_0]
set_property PFM.AXI_PORT {M_AXI_HPM1_FPD {memport "M_AXI_GP" sptag "" memory "" is_range "false"} S_AXI_HPC0_FPD {memport "S_AXI_HPC" sptag "" memory "" is_range "false"} S_AXI_HPC1_FPD {memport "S_AXI_HPC" sptag "" memory "" is_range "false"} S_AXI_HP0_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP1_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP2_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP3_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"}} [get_bd_cells /zynq_ultra_ps_e_0]
set_property PFM.AXI_PORT {M_AXI_HPM1_FPD {memport "M_AXI_GP" sptag "" memory "" is_range "false"} S_AXI_HPC0_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HPC1_FPD {memport "S_AXI_HPC" sptag "" memory "" is_range "false"} S_AXI_HP0_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP1_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP2_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP3_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"}} [get_bd_cells /zynq_ultra_ps_e_0]
set_property PFM.AXI_PORT {M_AXI_HPM1_FPD {memport "M_AXI_GP" sptag "" memory "" is_range "false"} S_AXI_HPC0_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HPC1_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP0_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP1_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP2_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP3_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"}} [get_bd_cells /zynq_ultra_ps_e_0]
set_property PFM.AXI_PORT {M_AXI_HPM1_FPD {memport "M_AXI_GP" sptag "" memory "" is_range "false"} S_AXI_HPC0_FPD {memport "S_AXI_HP" sptag "HPC0" memory "" is_range "false"} S_AXI_HPC1_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP0_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP1_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP2_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP3_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"}} [get_bd_cells /zynq_ultra_ps_e_0]
set_property PFM.AXI_PORT {M_AXI_HPM1_FPD {memport "M_AXI_GP" sptag "" memory "" is_range "false"} S_AXI_HPC0_FPD {memport "S_AXI_HP" sptag "HPC0" memory "" is_range "false"} S_AXI_HPC1_FPD {memport "S_AXI_HP" sptag "HPC1" memory "" is_range "false"} S_AXI_HP0_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP1_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP2_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP3_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"}} [get_bd_cells /zynq_ultra_ps_e_0]
set_property PFM.AXI_PORT {M_AXI_HPM1_FPD {memport "M_AXI_GP" sptag "" memory "" is_range "false"} S_AXI_HPC0_FPD {memport "S_AXI_HP" sptag "HPC0" memory "" is_range "false"} S_AXI_HPC1_FPD {memport "S_AXI_HP" sptag "HPC1" memory "" is_range "false"} S_AXI_HP0_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP1_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP2_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP3_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"}} [get_bd_cells /zynq_ultra_ps_e_0]
set_property PFM.AXI_PORT {M_AXI_HPM1_FPD {memport "M_AXI_GP" sptag "" memory "" is_range "false"} S_AXI_HPC0_FPD {memport "S_AXI_HP" sptag "HPC0" memory "" is_range "false"} S_AXI_HPC1_FPD {memport "S_AXI_HP" sptag "HPC1" memory "" is_range "false"} S_AXI_HP0_FPD {memport "S_AXI_HP" sptag "HP0" memory "" is_range "false"} S_AXI_HP1_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP2_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP3_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"}} [get_bd_cells /zynq_ultra_ps_e_0]
set_property PFM.AXI_PORT {M_AXI_HPM1_FPD {memport "M_AXI_GP" sptag "" memory "" is_range "false"} S_AXI_HPC0_FPD {memport "S_AXI_HP" sptag "HPC0" memory "" is_range "false"} S_AXI_HPC1_FPD {memport "S_AXI_HP" sptag "HPC1" memory "" is_range "false"} S_AXI_HP0_FPD {memport "S_AXI_HP" sptag "HP0" memory "" is_range "false"} S_AXI_HP1_FPD {memport "S_AXI_HP" sptag "HP1" memory "" is_range "false"} S_AXI_HP2_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"} S_AXI_HP3_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"}} [get_bd_cells /zynq_ultra_ps_e_0]
set_property PFM.AXI_PORT {M_AXI_HPM1_FPD {memport "M_AXI_GP" sptag "" memory "" is_range "false"} S_AXI_HPC0_FPD {memport "S_AXI_HP" sptag "HPC0" memory "" is_range "false"} S_AXI_HPC1_FPD {memport "S_AXI_HP" sptag "HPC1" memory "" is_range "false"} S_AXI_HP0_FPD {memport "S_AXI_HP" sptag "HP0" memory "" is_range "false"} S_AXI_HP1_FPD {memport "S_AXI_HP" sptag "HP1" memory "" is_range "false"} S_AXI_HP2_FPD {memport "S_AXI_HP" sptag "HP2" memory "" is_range "false"} S_AXI_HP3_FPD {memport "S_AXI_HP" sptag "" memory "" is_range "false"}} [get_bd_cells /zynq_ultra_ps_e_0]
set_property PFM.AXI_PORT {M_AXI_HPM1_FPD {memport "M_AXI_GP" sptag "" memory "" is_range "false"} S_AXI_HPC0_FPD {memport "S_AXI_HP" sptag "HPC0" memory "" is_range "false"} S_AXI_HPC1_FPD {memport "S_AXI_HP" sptag "HPC1" memory "" is_range "false"} S_AXI_HP0_FPD {memport "S_AXI_HP" sptag "HP0" memory "" is_range "false"} S_AXI_HP1_FPD {memport "S_AXI_HP" sptag "HP1" memory "" is_range "false"} S_AXI_HP2_FPD {memport "S_AXI_HP" sptag "HP2" memory "" is_range "false"} S_AXI_HP3_FPD {memport "S_AXI_HP" sptag "HP3" memory "" is_range "false"}} [get_bd_cells /zynq_ultra_ps_e_0]

save_bd_design

# add addresses to unmapped peripherals
assign_bd_address
 
#save
save_bd_design
 
#save project XPR name
global proj_xpr
set proj_xpr [current_project]
append proj_xpr .xpr
 
#close project
close_project
 
# reopen project
open_project $proj_xpr
 
# open block design
open_bd_design [current_project].srcs/sources_1/bd/zusys/zusys.bd
 
#validate
#validate_bd_design

This script modifies the initial platform block design into the extensible platform block design and also defines the Platform Setup configuration.

In Vivado, open the design explorer and the Platform Setup description.
The Fast Track result is identical to the manually performed modifications described in the next sections. In Vivado, save the block design by clicking on the “Save Block Design” icon.

Continue the design path with Validate Design.

Manual Track

In the Vivado project, click on Settings in the Flow Navigator. In the opened Settings window, select General under Project Settings and select “Project is an extensible Vitis platform”. Click OK.

The IP Integrator of a project set up as an extensible Vitis platform has an additional Platform Setup window.

Add multiple clocks and Processor System Reset IPs
In the IP Integrator Diagram window, right-click, select Add IP and add the Clocking Wizard IP clk_wiz_0. Double-click on the IP to open the Re-customize IP window. Select the Output Clocks panel. Enable four clocks with frequencies 100, 200, 400 and 240 MHz.
The 100 MHz clock will serve as a low-speed clock.
The 200 MHz and 400 MHz clocks will serve as clocks for a possible AI engine.
The 240 MHz clock will serve as the default extensible platform clock. By default, Vitis will compile HW IPs with this default clock.

Set reset type from the default Active High to Active Low

Click OK to close the Re-customize IP window.

Connect input resetn of clk_wiz_0 with output pl_resetn0 of zynq_ultra_ps_e_0.
Connect input clk_in1 of clk_wiz_0 with output pl_clk0 of zynq_ultra_ps_e_0.

Add four Processor System Reset blocks and connect one to each generated clock.

Open the Platform Setup window of the IP Integrator to define the clocks. In Settings, select Clock.

In the “Enabled” column, select all four defined clocks clk_out1, clk_out2, clk_out3 and clk_out4 of the clk_wiz_0 block.

In the “ID” column, keep the default clock IDs: 1, 2, 3, 4.

In the “Is Default” column, select clk_out4 (with ID=4) as the default clock. Exactly one clock must be selected as the default clock.

Double-click on the zynq_ultra_ps_e_0 block and enable the M_AXI_HPM0_FPD port. Select a data width of 32 bit. It will be used for the integration of the interrupt controller on a new dedicated AXI subsystem with the 240 MHz clock. This also enables the new input pin maxihpm0_fpd_aclk of zynq_ultra_ps_e_0.

Connect the input pin maxihpm0_fpd_aclk of zynq_ultra_ps_e_0 to the 240 MHz clk_out4 of the clk_wiz_0 IP block.

Add, customize and connect the AXI Interrupt Controller

Add AXI Interrupt Controller IP axi_intc_0.
Double-click on axi_intc_0 to re-customize it.

In the “Processor Interrupt Type and Connection” section, change the “Interrupt Output Connection” from “Bus” to “Single”.

Click on OK to accept these changes.


Connect the interrupt controller clock input s_axi_aclk of axi_intc_0 to the output clk_out4 of clk_wiz_0. This is the default 240 MHz clock of the extensible platform.

Connect the interrupt controller input s_axi_aresetn of axi_intc_0 to the output peripheral_aresetn[0:0] of proc_sys_reset_4. This is the reset block for the default 240 MHz clock of the extensible platform.

Use the Run Connection Automation wizard to connect the AXI Lite interface of the interrupt controller axi_intc_0 to the master interface M_AXI_HPM0_FPD of zynq_ultra_ps_e_0.

In the Run Connection Automation window, click OK.

A new AXI interconnect ps8_0_axi_periph is created. It connects the master interface M_AXI_HPM0_FPD of zynq_ultra_ps_e_0 with the interrupt controller axi_intc_0.


Double-click on zynq_ultra_ps_e_0 to re-customize it and enable the interrupt input pl_ps_irq0[0:0]. Click OK.


Modify the automatically generated reset network of the AXI interconnect ps8_0_axi_periph.

Disconnect the input S00_ARESETN of ps8_0_axi_periph from the network driven by the output peripheral_aresetn[0:0] of the proc_sys_reset_4 block.

Connect the input S00_ARESETN of the ps8_0_axi_periph block with the output interconnect_aresetn[0:0] of the proc_sys_reset_4 block.

Disconnect the input M00_ARESETN of the ps8_0_axi_periph block from the network driven by the output peripheral_aresetn[0:0] of the proc_sys_reset_4 block.

Connect the input M00_ARESETN of ps8_0_axi_periph to the output interconnect_aresetn[0:0] of the proc_sys_reset_4 block.

This modification makes the reset structure of the AXI interconnect ps8_0_axi_periph block identical to the future extensions of this interconnect generated by the Vitis extensible design flow.

Connect the interrupt input pl_ps_irq0[0:0] of zynq_ultra_ps_e_0 block with output irq of axi_intc_0 block.

In Platform Setup, select “Interrupt” and enable intr in the “Enabled” column.

Rename the automatically generated interconnect ps8_0_axi_periph to the new name axi_interconnect_1. This new name will be used in the Platform Setup selection of AXI ports for the extensible platform.

In Platform Setup, select AXI Ports for zynq_ultra_ps_e_0:

Select M_AXI_HPM0_FPD and M_AXI_HPM1_FPD in column “Enabled”.

Select S_AXI_HPC0_FPD and S_AXI_HPC1_FPD in column “Enabled”.

For S_AXI_HPC0_FPD, change S_AXI_HPC to S_AXI_HP in column “Memport”.

For S_AXI_HPC1_FPD, change S_AXI_HPC to S_AXI_HP in column “Memport”.

Select S_AXI_HP0_FPD, S_AXI_HP1_FPD, S_AXI_HP2_FPD, S_AXI_HP3_FPD in column “Enabled”.

Type into the “sptag” column the names for these 6 interfaces so that they can be selected by the v++ configuration during the linking phase: HPC0, HPC1, HP0, HP1, HP2, HP3.

In “Platform Setup”, select AXI Ports for the recently renamed axi_interconnect_1:

Select M01_AXI, M02_AXI, M03_AXI, M04_AXI, M05_AXI, M06_AXI and M07_AXI in column “Enabled”.

Make sure that you are selecting these AXI ports for the 240 MHz AXI interconnect axi_interconnect_1.

Keep all AXI ports of the 100 MHz interconnect axi_interconnect_0 unselected. The AXI interconnect axi_interconnect_0 connects other logic and IPs which are part of the initial design.

The modifications of the default design for the extensible platform are now complete.

In Vivado, save block design by clicking on icon “Save Block Design”.

Continue the design path with Validate Design.

Validate Design


Results of HW creation via Manual Track or Fast Track are identical.

Open the diagram by clicking on zusys.bd if it is not already open.
In the Diagram window, validate the design by clicking on the “Validate Design” icon.

The reported critical message indicates that the input intr[0:0] of axi_intc_0 is not connected. This is expected. The Vitis extensible design flow will connect this input to the interrupt outputs of the generated HW IPs.

 Click OK.

You can generate a PDF of the block diagram by right-clicking anywhere in the diagram window and selecting “Save as PDF File”. Use the offered default file name:
~/work/te0820_84_240/test_board/vivado/zusys.pdf

Compile Created HW and Custom SW with Trenz Scripts


In the Vivado Tcl console, type the following command and execute it by pressing Enter. It will take some time to compile the HW design and to export the corresponding standard XSA package with the included bitstream.

TE::hw_build_design -export_prebuilt

An archive for standard non-extensible system is created:
~/work/te0820_84_240/test_board/vivado/test_board_4ev_1e_2gb.xsa

In the Vivado Tcl console, type the following command and execute it by pressing Enter. It will take some time to compile.

TE::sw_run_vitis -all

After the script controlling SW compilation is finished, the Vitis SDK GUI is opened.

Close the Vitis “Welcome” page.
Compile the two included SW projects.
Standalone custom Vitis platform TE0820-05-4DE21MA has been created and compiled. 

The TE0820-05-4DE21MA Vitis platform includes the Trenz Electronic custom first stage boot loader in the folder zynqmp_fsbl. It includes a SW extension specific to the Trenz module initialisation.

This custom zynqmp_fsbl project has been compiled into the executable file fsbl.elf. It is located in: ~/work/te0820_84_240/test_board/prebuilt/software/4ev_1e_2gb/fsbl.elf

This customised first stage boot loader is needed for the Vitis extensible platform.
We have used the standard Trenz scripts to generate it for later use in the extensible platform.

Exit the opened Vitis SDK project.

In Vivado top menu select File->Close Project to close project. Click OK.

In Vivado top menu select File->Exit to close Vivado. Click OK.

The exported Vitis Extensible Hardware platform named test_board_4ev_1e_2gb.xsa can be found in the vivado folder.

Copy Created Custom First Stage Boot Loader


Up to now, test_board directory has been used for all development.
~/work/te0820_84_240/test_board

Create new folders:
~/work/te0820_84_240/test_board_pfm/pfm/boot
~/work/te0820_84_240/test_board_pfm/pfm/sd_dir

Copy the recently created custom first stage boot loader executable file from
~/work/te0820_84_240/test_board/prebuilt/software/4ev_1e_2gb/fsbl.elf
to
~/work/te0820_84_240/test_board_pfm/pfm/boot/fsbl.elf
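These folders and the copy can be created from the Ubuntu terminal, for example:

$ mkdir -p ~/work/te0820_84_240/test_board_pfm/pfm/boot
$ mkdir -p ~/work/te0820_84_240/test_board_pfm/pfm/sd_dir
$ cp ~/work/te0820_84_240/test_board/prebuilt/software/4ev_1e_2gb/fsbl.elf ~/work/te0820_84_240/test_board_pfm/pfm/boot/fsbl.elf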

Building Platform OS and SDK


Configuration of the Default Trenz Petalinux for the Vitis Extensible Platform


Change directory to the default Trenz Petalinux folder
~/work/te0820_84_240/test_board/os/petalinux

Source the Vitis and PetaLinux scripts to set up the environment for access to the Vitis and PetaLinux tools.

$ source /tools/Xilinx/Vitis/2022.2/settings64.sh
$ source ~/petalinux/2022.2/settings.sh

Configure PetaLinux with the test_board_4ev_1e_2gb.xsa for the extensible design flow by executing:

$ petalinux-config --get-hw-description=~/work/te0820_84_240/test_board/vivado


Select Exit->Yes to close this window.

Customize Root File System, Kernel, Device Tree and U-boot

Download the Vitis-AI 3.0 repository.
In browser, open page:

https://github.com/Xilinx/Vitis-AI/tree/3.0

Click on the green Code button and download the Vitis-AI-3.0.zip file.
Unzip the Vitis-AI-3.0.zip file to the directory ~/Downloads/Vitis-AI.

Copy ~/Downloads/Vitis-AI to ~/work/Vitis-AI-3.0.

Delete Vitis-AI-3.0.zip and clean the trash.

The directory ~/work/Vitis-AI-3.0 now contains the Vitis-AI 3.0 framework.

To install the Vitis-AI 3.0 version of the shared libraries into the rootfs (when generating the system image with PetaLinux), we have to copy the recipes-vitis-ai recipes into the PetaLinux project:

Copy
~/work/Vitis-AI-3.0/src/vai_petalinux_recipes/recipes-vitis-ai

to
~/work/te0820_84_240/test_board/os/petalinux/project-spec/meta-user/

Delete file:
~/work/te0820_84_240/test_board/os/petalinux/project-spec/meta-user/recipes-vitis-ai/vart/vart_3.0_vivado.bb
and keep only the unmodified file:
~/work/te0820_84_240/test_board/os/petalinux/project-spec/meta-user/recipes-vitis-ai/vart/vart_3.0.bb

The file vart_3.0.bb will create the VART libraries for the Vitis design flow with a dependency on XRT.
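The copy and delete steps above can be performed from the Ubuntu terminal, for example (assuming the repository paths described above):

$ cp -r ~/work/Vitis-AI-3.0/src/vai_petalinux_recipes/recipes-vitis-ai ~/work/te0820_84_240/test_board/os/petalinux/project-spec/meta-user/
$ rm ~/work/te0820_84_240/test_board/os/petalinux/project-spec/meta-user/recipes-vitis-ai/vart/vart_3.0_vivado.bb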

In a text editor, modify the user-rootfsconfig file:
~/work/te0820_84_240/test_board/os/petalinux/project-spec/meta-user/conf/user-rootfsconfig

Append these lines:

#Note: Mention Each package in individual line
#These packages will get added into rootfs menu entry
 
CONFIG_startup
CONFIG_webfwu
 
CONFIG_xrt
CONFIG_xrt-dev
CONFIG_zocl
CONFIG_opencl-clhpp-dev
CONFIG_opencl-headers-dev
CONFIG_packagegroup-petalinux-opencv
CONFIG_packagegroup-petalinux-opencv-dev
CONFIG_dnf
CONFIG_e2fsprogs-resize2fs
CONFIG_parted
CONFIG_resize-part
 
CONFIG_packagegroup-petalinux-vitisai
CONFIG_packagegroup-petalinux-self-hosted
CONFIG_cmake
 
CONFIG_packagegroup-petalinux-vitisai-dev
CONFIG_mesa-megadriver
CONFIG_packagegroup-petalinux-x11
CONFIG_packagegroup-petalinux-v4lutils
CONFIG_packagegroup-petalinux-matchbox
 
CONFIG_packagegroup-petalinux-vitis-acceleration
CONFIG_packagegroup-petalinux-vitis-acceleration-dev
 
CONFIG_vitis-ai-library
CONFIG_vitis-ai-library-dev
CONFIG_vitis-ai-library-dbg

xrt, xrt-dev and zocl are required for the Vitis acceleration flow.
dnf is for package management.
parted, e2fsprogs-resize2fs and resize-part can be used for ext4 partition resizing.

Other included packages serve for natively building Vitis AI applications on the target board and for running Vitis-AI demo applications with a GUI.

The last three packages enable the use of the Vitis-AI 3.0 recipes for the installation of the corresponding Vitis-AI 3.0 libraries into the rootfs of PetaLinux.

Enable all required packages in the PetaLinux configuration, with the exception of vitis-ai-library-dev and vitis-ai-library-dbg. From the Ubuntu terminal:

$ petalinux-config -c rootfs

Select all user packages by typing “y”, with the exception of vitis-ai-library-dev and vitis-ai-library-dbg. All selected packages will be marked with an asterisk.

Only vitis-ai-library-dev and vitis-ai-library-dbg will stay marked as unselected: [ ].

Still in the rootfs configuration window, go back to the root menu by selecting Exit once.

Enable OpenSSH and Disable Dropbear


Dropbear is the default SSH tool in the Vitis base embedded platform. If OpenSSH is used to replace Dropbear, the system can achieve a faster data transmission speed over SSH. Vitis extensible platform applications may use the remote display feature, and using OpenSSH can improve the display experience.

Go to Image Features.
Disable ssh-server-dropbear and enable ssh-server-openssh and click Exit once.

Go to Filesystem Packages->misc->packagegroup-core-ssh-dropbear and disable packagegroup-core-ssh-dropbear.

Go back to the Filesystem Packages level by selecting Exit twice.

Go to console->network->openssh and enable openssh, openssh-sftp-server, openssh-sshd, openssh-scp.

Go back to the root level by selecting Exit four times.

Enable Package Management


The package management feature allows the board to install and upgrade software packages on the fly.

In the rootfs configuration, go to Image Features and enable the package-management and debug_tweaks options.
Click OK, Exit twice and select Yes to save the changes.

Disable CPU IDLE in Kernel Config


CPU idle causes processors to go into the idle state (WFI) when a processor is not in use. When JTAG is connected, the hardware server on the host machine talks to the processor regularly. If it talks to a processor in the idle state, the system will hang because of incomplete AXI transactions.

It is therefore recommended to disable the CPU idle feature during the project development phase.

It can be re-enabled after the design is complete to save power in final products.

Launch kernel config:

$ petalinux-config -c kernel

Ensure the following items are TURNED OFF by entering 'n' in the [ ] menu selection:

CPU Power Management->CPU Idle->CPU idle PM support

CPU Power Management->CPU Frequency scaling->CPU Frequency scaling

Exit and Yes to Save changes.

Add EXT4 rootfs Support


Let PetaLinux generate EXT4 rootfs. In terminal, execute:

$ petalinux-config

Go to Image Packaging Configuration.
Enter Root File System Type.

Select Root File System Type: EXT4.

Change the “Device node” of the SD device from the default value
/dev/mmcblk0p2
to the new value required for the TE0820 modules on the TE0706 carrier board:
/dev/mmcblk1p2

Exit and Yes to save changes.

Let Linux Use EXT4 rootfs During Boot


The setting of which rootfs to use during boot is controlled by bootargs. We change the bootargs settings to allow Linux to boot from the EXT4 partition.

In terminal, execute:

$ petalinux-config

Change DTG settings->Kernel Bootargs->generate boot args automatically to NO.

Update User Set Kernel Bootargs to:
earlycon console=ttyPS0,115200 clk_ignore_unused root=/dev/mmcblk1p2 rw rootwait cma=512M

Click OK, Exit three times and Save.

Build PetaLinux Image


In terminal, build the PetaLinux project by executing:

$ petalinux-build

The PetaLinux image files will be generated in the directory:
~/work/te0820_84_240/test_board/os/petalinux/images/linux

Generation of PetaLinux takes some time and requires Ethernet connection and sufficient free disk space.

Create Petalinux SDK 


The SDK is used by the Vitis tool to cross-compile applications for the newly created platform.

In terminal, execute:

$ petalinux-build --sdk

The generated sysroot package sdk.sh will be located in directory
~/work/te0820_84_240/test_board/os/petalinux/images/linux
 
Generation of the SDK package takes some time and requires sufficient free disk space.
The time needed for these two steps also depends on the number of allocated processor cores.

Copy Files for Extensible Platform


Copy these four files:

Files: bl31.elf, pmufw.elf, system.dtb, u-boot-dtb.elf
From:  ~/work/te0820_84_240/test_board/os/petalinux/images/linux
To:    ~/work/te0820_84_240/test_board_pfm/pfm/boot

Rename the copied file u-boot-dtb.elf to u-boot.elf
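The copy and rename steps can be done from the Ubuntu terminal, for example:

$ cd ~/work/te0820_84_240/test_board/os/petalinux/images/linux
$ cp bl31.elf pmufw.elf system.dtb u-boot-dtb.elf ~/work/te0820_84_240/test_board_pfm/pfm/boot/
$ mv ~/work/te0820_84_240/test_board_pfm/pfm/boot/u-boot-dtb.elf ~/work/te0820_84_240/test_board_pfm/pfm/boot/u-boot.elf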

The directory
~/work/te0820_84_240/test_board_pfm/pfm/boot
contains these five files:

  1. bl31.elf
  2. fsbl.elf
  3. pmufw.elf
  4. system.dtb
  5. u-boot.elf

Copy files:

Files: boot.scr, system.dtb
From:  ~/work/te0820_84_240/test_board/os/petalinux/images/linux
To:    ~/work/te0820_84_240/test_board_pfm/pfm/sd_dir

Copy file:

File: init.sh
From: ~/work/te0820_84_240/test_board/misc/sd
To:   ~/work/te0820_84_240/test_board_pfm/pfm/sd_dir
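The two copy steps can be done from the Ubuntu terminal, for example:

$ cp ~/work/te0820_84_240/test_board/os/petalinux/images/linux/boot.scr ~/work/te0820_84_240/test_board/os/petalinux/images/linux/system.dtb ~/work/te0820_84_240/test_board_pfm/pfm/sd_dir/
$ cp ~/work/te0820_84_240/test_board/misc/sd/init.sh ~/work/te0820_84_240/test_board_pfm/pfm/sd_dir/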


init.sh is a placeholder for user-defined bash code to be executed after boot:

#!/bin/sh
normal="\e[39m"
lightred="\e[91m"
lightgreen="\e[92m"
green="\e[32m"
yellow="\e[33m"
cyan="\e[36m"
red="\e[31m"
magenta="\e[95m"

echo -ne $lightred
echo Load SD Init Script
echo -ne $cyan
echo User bash Code can be inserted here and put init.sh on SD
echo -ne $normal

Create Extensible Platform zip File


Create a new directory tree:
~/work/te0820_84_240_move/test_board/os/petalinux/images
~/work/te0820_84_240_move/test_board/vivado
~/work/te0820_84_240_move/test_board_pfm/pfm/boot
~/work/te0820_84_240_move/test_board_pfm/pfm/sd_dir

Copy the following files:

Files: all
From:  ~/work/te0820_84_240/test_board/os/petalinux/images
To:    ~/work/te0820_84_240_move/test_board/os/petalinux/images

Files: all
From:  ~/work/te0820_84_240/test_board_pfm/pfm/boot
To:    ~/work/te0820_84_240_move/test_board_pfm/pfm/boot

Files: all
From:  ~/work/te0820_84_240/test_board_pfm/pfm/sd_dir
To:    ~/work/te0820_84_240_move/test_board_pfm/pfm/sd_dir

File:  test_board_4ev_1e_2gb.xsa
From:  ~/work/te0820_84_240/test_board/vivado/test_board_4ev_1e_2gb.xsa
To:    ~/work/te0820_84_240_move/test_board/vivado/test_board_4ev_1e_2gb.xsa

Zip the directory
~/work/te0820_84_240_move
into ZIP archive:
~/work/te0820_84_240_move.zip
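One possible command sequence for these steps in the Ubuntu terminal (a sketch following the paths above; install the zip utility with sudo apt install zip if it is not present):

$ mkdir -p ~/work/te0820_84_240_move/test_board/os/petalinux
$ mkdir -p ~/work/te0820_84_240_move/test_board/vivado
$ mkdir -p ~/work/te0820_84_240_move/test_board_pfm/pfm
$ cp -r ~/work/te0820_84_240/test_board/os/petalinux/images ~/work/te0820_84_240_move/test_board/os/petalinux/
$ cp -r ~/work/te0820_84_240/test_board_pfm/pfm/boot ~/work/te0820_84_240_move/test_board_pfm/pfm/
$ cp -r ~/work/te0820_84_240/test_board_pfm/pfm/sd_dir ~/work/te0820_84_240_move/test_board_pfm/pfm/
$ cp ~/work/te0820_84_240/test_board/vivado/test_board_4ev_1e_2gb.xsa ~/work/te0820_84_240_move/test_board/vivado/
$ cd ~/work
$ zip -r te0820_84_240_move.zip te0820_84_240_move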

The archive te0820_84_240_move.zip can be used to create the extensible platform on the same or on another PC with installed Ubuntu 20.04 and Vitis tools, with or without installed PetaLinux. The archive includes all needed components, including the Xilinx XRT library and the script sdk.sh used for generation of the sysroot.

The archive has a size of approximately 3.6 GB and is valid for the initially selected module (84).
This is the TE0820 HW module with the xczu4ev-sfvc784-1-e device and 2 GB of memory.
The extensible Vitis platform will have the default clock of 240 MHz.

Move the te0820_84_240_move.zip file to a PC disk drive.

Delete:
~/work/te0820_84_240_move
~/work/te0820_84_240_move.zip
Clean the Ubuntu Trash.

Generation of SYSROOT


This part of the development can be a direct continuation of the previous PetaLinux configuration and compilation steps.

Alternatively, it is also possible to perform all following steps on an Ubuntu 20.04 machine without installed PetaLinux. Only the Ubuntu 20.04 and Vitis/Vivado installation is needed.
All required files created in PetaLinux for the specific module (84) are present in the archive te0820_84_240_move.zip.
In this case, unzip the archive to the directory:
~/work/te0820_84_240_move
and copy all content of its directories to
~/work/te0820_84_240
Delete the te0820_84_240_move.zip file and the ~/work/te0820_84_240_move directory to save filesystem space.
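For example, if the archive was copied back to ~/work, the unpack, copy and cleanup steps can be done as:

$ cd ~/work
$ unzip te0820_84_240_move.zip
$ cp -r ~/work/te0820_84_240_move/* ~/work/te0820_84_240/
$ rm -rf ~/work/te0820_84_240_move ~/work/te0820_84_240_move.zip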

In Ubuntu terminal, change the working directory to:
~/work/te0820_84_240/test_board/os/petalinux/images/linux

In the Ubuntu terminal, execute the script enabling access to the Vitis 2022.2 tools.
Execution of the script setting up the PetaLinux environment is not necessary:

$ source /tools/Xilinx/Vitis/2022.2/settings64.sh

In Ubuntu terminal, execute script

$ ./sdk.sh -d ~/work/te0820_84_240/test_board_pfm

SYSROOT directories and files for PC and for Zynq Ultrascale+  will be created in:
~/work/te0820_84_240/test_board_pfm/sysroots/x86_64-petalinux-linux
~/work/te0820_84_240/test_board_pfm/sysroots/cortexa72-cortexa53-xilinx-linux

Once created, do not move these sysroot directories (due to some internally created paths).

Generation of Extensible Platform for Vitis


In Ubuntu terminal, change the working directory to:
~/work/te0820_84_240/test_board_pfm

Start the Vitis tool by executing

$ vitis &

In Vitis “Launcher”, set the workspace for the extensible platform compilation:
~/work/te0820_84_240/test_board_pfm

Click on “Launch” to launch Vitis

Close Welcome page.

In Vitis, select in the main menu: File -> New -> Platform Project

Type name of the extensible platform:  te0820_84_240_pfm. Click Next.

Choose as the hardware specification for the platform the file:
~/work/te0820_84_240/test_board/vivado/test_board_4ev_1e_2gb.xsa

In “Software specification” select: linux
In “Boot Components” unselect Generate boot components
(these components have been already generated by Vivado and PetaLinux design flow)

New window te0820_84_240_pfm is opened.

Click on linux on psu_cortexa53 to open the window Domain: linux_domain.

In “Description”: write xrt  

In “Bif File”, find and select the pre-defined option: Generate Bif.

In “Boot Components Directory” select:
~/work/te0820_84_240/test_board_pfm/pfm/boot

In “FAT32 Partition Directory” select:
~/work/te0820_84_240/test_board_pfm/pfm/sd_dir

In Vitis IDE “Explorer” section, click on te0820_84_240_pfm to highlight it.

Right-click on the highlighted te0820_84_240_pfm and select Build Project in the opened submenu. The platform is compiled in a few seconds.
Close the Vitis tool by selecting File -> Exit.

The Vitis extensible platform te0820_84_240_pfm has been created in the directory:
~/work/te0820_84_240/test_board_pfm/te0820_84_240_pfm/export/te0820_84_240_pfm

Platform Usage


Test 1: Read Platform Info


With the Vitis environment set up, the platforminfo tool can report the XPFM platform information.

$ platforminfo ~/work/te0820_84_240/test_board_pfm/te0820_84_240_pfm/export/te0820_84_240_pfm/te0820_84_240_pfm.xpfm
Detailed listing from platforminfo utility
==========================
Basic Platform Information
==========================
Platform:           te0820_84_240_pfm
File:               /home/devel/work/te0820_84_240/test_board_pfm/te0820_84_240_pfm/export/te0820_84_240_pfm/te0820_84_240_pfm.xpfm
Description:        
te0820_84_240_pfm
    

=====================================
Hardware Platform (Shell) Information
=====================================
Vendor:                           vendor
Board:                            zusys
Name:                             zusys
Version:                          1.0
Generated Version:                2022.2
Hardware:                         1
Software Emulation:               1
Hardware Emulation:               0
Hardware Emulation Platform:      0
FPGA Family:                      zynquplus
FPGA Device:                      xczu4ev
Board Vendor:                     trenz.biz
Board Name:                       trenz.biz:te0820_4ev_1e:2.0
Board Part:                       xczu4ev-sfvc784-1-e

=================
Clock Information
=================
  Default Clock Index: 4
  Clock Index:         1
    Frequency:         100.000000
  Clock Index:         2
    Frequency:         200.000000
  Clock Index:         3
    Frequency:         400.000000
  Clock Index:         4
    Frequency:         240.000000

==================
Memory Information
==================
  Bus SP Tag: HP0
  Bus SP Tag: HP1
  Bus SP Tag: HP2
  Bus SP Tag: HP3
  Bus SP Tag: HPC0
  Bus SP Tag: HPC1

=============================
Software Platform Information
=============================
Number of Runtimes:            1
Default System Configuration:  te0820_84_240_pfm
System Configurations:
  System Config Name:                      te0820_84_240_pfm
  System Config Description:               te0820_84_240_pfm
  System Config Default Processor Group:   linux_domain
  System Config Default Boot Image:        standard
  System Config Is QEMU Supported:         1
  System Config Processor Groups:
    Processor Group Name:      linux on psu_cortexa53
    Processor Group CPU Type:  cortex-a53
    Processor Group OS Name:   linux
  System Config Boot Images:
    Boot Image Name:           standard
    Boot Image Type:           
    Boot Image BIF:            te0820_84_240_pfm/boot/linux.bif
    Boot Image Data:           te0820_84_240_pfm/linux_domain/image
    Boot Image Boot Mode:      sd
    Boot Image RootFileSystem: 
    Boot Image Mount Path:     /mnt
    Boot Image Read Me:        te0820_84_240_pfm/boot/generic.readme
    Boot Image QEMU Args:      te0820_84_240_pfm/qemu/pmu_args.txt:te0820_84_240_pfm/qemu/qemu_args.txt
    Boot Image QEMU Boot:      
    Boot Image QEMU Dev Tree:  
Supported Runtimes:
  Runtime: OpenCL



Test 2: Run Vector Addition Example


Create a new directory test_board_test_vadd to test the Vitis extensible flow example “vector addition”:
~/work/te0820_84_240/test_board_test_vadd

Current directory structure:
~/work/te0820_84_240/test_board
~/work/te0820_84_240/test_board_pfm
~/work/te0820_84_240/test_board_test_vadd

Change working directory:

$ cd ~/work/te0820_84_240/test_board_test_vadd

In Ubuntu terminal, start Vitis by:

$ vitis &

In Vitis IDE Launcher, select your working directory
~/work/te0820_84_240/test_board_test_vadd
Click on Launch to launch Vitis.

Select File -> New -> Application project. Click Next.

Skip welcome page if shown.

Click on “+ Add” icon and select the custom extensible platform te0820_84_240_pfm[custom] in the directory:
~/work/te0820_84_240/test_board_pfm/te0820_84_240_pfm/export/te0820_84_240_pfm

We can see the available PL clocks and frequencies.

PL4 with the 240 MHz clock has been set as the default in the platform creation process.


 Click Next.
In “Application Project Details” window type into Application project name: test_vadd
Click Next.
In “Domain window” type (or select by browse):
“Sysroot path”:
~/work/te0820_84_240/test_board_pfm/sysroots/cortexa72-cortexa53-xilinx-linux
“Root FS”:
~/work/te0820_84_240/test_board/os/petalinux/images/linux/rootfs.ext4
“Kernel Image”:
~/work/te0820_84_240/test_board/os/petalinux/images/linux/Image
Click Next.

In “Templates window”, if not done before, update “Vitis IDE Examples” and “Vitis IDE Libraries”.

Select Host Examples
In “Find”, type: “vector add” to search for the “Vector Addition” example.

Select: “Vector Addition”
Click Finish
New project template is created.

In test_vadd window menu “Active build configuration” switch from “SW Emulation” to “Hardware”.

In “Explorer” section of Vitis IDE, click on:  test_vadd_system[te0820_84_240_pfm] to select it.

Right Click on:  test_vadd_system[te0820_84_240_pfm] and select in the opened sub-menu:
Build project

Vitis will compile:
In the test_vadd_kernels subproject, the krnl_vadd kernel is compiled from C++ SW source code into an HDL HW IP.
In the test_vadd_system_hw_link subproject, the krnl_vadd HW IP is linked together with te0820_84_240_pfm into a new, extended HW design. The new accelerator (krnl_vadd) will run on the default 240 MHz clock. This step can take some time.
In the test_vadd subproject, the vadd.cpp application example is compiled.
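
In the background, the Vitis IDE drives the v++ compiler and linker. A roughly equivalent command-line flow is sketched below; the kernel source file name and the .xpfm file name are assumptions based on this tutorial's directory layout, and the exact commands generated by the IDE can be inspected in the build logs:

$ # compile the C++ kernel into a Xilinx object file (.xo) for the custom platform (sketch)
$ v++ -c -t hw --platform ~/work/te0820_84_240/test_board_pfm/te0820_84_240_pfm/export/te0820_84_240_pfm/te0820_84_240_pfm.xpfm -k krnl_vadd -o krnl_vadd.xo src/krnl_vadd.cpp
$ # link the kernel with the platform into the krnl_vadd.xclbin used by the application (sketch)
$ v++ -l -t hw --platform ~/work/te0820_84_240/test_board_pfm/te0820_84_240_pfm/export/te0820_84_240_pfm/te0820_84_240_pfm.xpfm -o krnl_vadd.xclbin krnl_vadd.xo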


Extended HW

Run Compiled test_vadd Example Application


The sd_card.img file is the output of the compilation and packaging by Vitis. It is located in the directory:
~/work/te0820_84_240/test_board_test_vadd/test_vadd_system/Hardware/package/sd_card.img

Write the SD card image from the sd_card.img file to an SD card.

On a Windows 10 Pro (or Windows 11 Pro) PC, install the program Win32DiskImager for this task. Win32 Disk Imager can write a raw disk image to removable devices.
https://win32diskimager.org/
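
If you prefer to write the image from the Ubuntu PC or VM instead of Windows, the standard dd utility can be used. The device name /dev/sdX below is only a placeholder; check the real name of your SD card reader with lsblk first, because dd overwrites the selected device completely:

$ lsblk
$ sudo dd if=~/work/te0820_84_240/test_board_test_vadd/test_vadd_system/Hardware/package/sd_card.img of=/dev/sdX bs=4M status=progress conv=fsync
$ sync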

Insert the SD card into the TE0706-03 carrier board.

Connect the PC USB terminal cable (115200 bps) to the TE0706-03 carrier board.

Connect Ethernet cable to the TE0706-03 carrier board.

Power on the TE0706-03 carrier board.

On the PC, find the COM port number assigned to the USB terminal. On Windows 10, use the Device Manager.

On the PC, open a serial line terminal on the assigned COM port. Speed: 115200 bps.
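
If the USB terminal cable is connected to the Ubuntu PC or VM instead, the serial console can be opened for example with GNU screen; the device name /dev/ttyUSB0 is an assumption, check dmesg or ls /dev/ttyUSB* after plugging in the cable:

$ sudo apt-get install screen
$ sudo screen /dev/ttyUSB0 115200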

On the TE0706-03, press the reset button to start the system. The USB terminal starts to display boot information.

In PC terminal, type:

sh-5.0# cd /media/sd-mmcblk1p1/
sh-5.0# ./test_vadd krnl_vadd.xclbin

The application test_vadd should run with this output:

sh-5.0# cd /media/sd-mmcblk1p1/
sh-5.0# ./test_vadd krnl_vadd.xclbin
INFO: Reading krnl_vadd.xclbin
Loading: 'krnl_vadd.xclbin'
Trying to program device[0]: edge
Device[0]: program successful!
TEST PASSED
sh-5.0#

The Vitis application has been compiled to HW and evaluated on the custom system
with the extensible custom te0820_84_240_pfm platform.

In PC terminal type:

# halt

The system is halted. Messages related to the halt of the system can be seen on the USB terminal.

The SD card can now be safely removed from the TE0706-03 carrier board.

The TE0706-03 carrier board can be disconnected from power.


 

Full listing of the PC USB PetaLinux console after the above operations are performed:
--------------------------------------------------------------------------------
TE0820 TE_XFsbl_HookPsuInit_Custom
Configure PLL: SI5338-B
Si5338 Init Registers Write.
Si5338 Init Complete
PLL Status Register 218:0x8
USB Reset Complete
ETH Reset Complete

--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Xilinx Zynq MP First Stage Boot Loader (TE modified)
Release 2022.2   Aug 29 2023  -  17:42:55
Device Name: XCZU4EV

--------------------------------------------------------------------------------
TE0820 TE_XFsbl_BoardInit_Custom

--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
TE0820 TE_XFsbl_HookAfterBSDownload_Custom

--------------------------------------------------------------------------------
NOTICE:  BL31: v2.6(release):xlnx_rebase_v2.6_2022.1_update3-18-g0897efd45
NOTICE:  BL31: Built : 03:55:03, Sep  9 2022


U-Boot 2022.01 (Sep 20 2022 - 06:35:33 +0000)TE0820

CPU:   ZynqMP
Silicon: v3
Board: Xilinx ZynqMP
DRAM:  2 GiB
PMUFW:  v1.1
PMUFW no permission to change config object
EL Level:       EL2
Chip ID:        zu4ev
NAND:  0 MiB
MMC:   mmc@ff160000: 0, mmc@ff170000: 1
Loading Environment from nowhere... OK
In:    serial
Out:   serial
Err:   serial
Bootmode: SD_MODE1
Reset reason:   EXTERNAL
Net:   FEC: can't find phy-handle

ZYNQ GEM: ff0e0000, mdio bus ff0e0000, phyaddr 1, interface rgmii-id

Error: ethernet@ff0e0000 address not set.
No ethernet found.

scanning bus for devices...
starting USB...
Bus usb@fe200000: Register 2000440 NbrPorts 2
Starting the controller
USB XHCI 1.00
scanning bus usb@fe200000 for devices... 1 USB Device(s) found
       scanning usb for storage devices... 0 Storage Device(s) found
Hit any key to stop autoboot:  0
switch to partitions #0, OK
mmc1 is current device
Scanning mmc 1:1...
Found U-Boot script /boot.scr
2777 bytes read in 15 ms (180.7 KiB/s)
## Executing script at 20000000
Trying to load boot images from mmc1
21457408 bytes read in 1654 ms (12.4 MiB/s)
41563 bytes read in 17 ms (2.3 MiB/s)
## Flattened Device Tree blob at 00100000
   Booting using the fdt blob at 0x100000
FEC: can't find phy-handle

ZYNQ GEM: ff0e0000, mdio bus ff0e0000, phyaddr 1, interface rgmii-id

Error: ethernet@ff0e0000 address not set.
FEC: can't find phy-handle

ZYNQ GEM: ff0e0000, mdio bus ff0e0000, phyaddr 1, interface rgmii-id

Error: ethernet@ff0e0000 address not set.
   Loading Device Tree to 000000007bbee000, end 000000007bbfb25a ... OK

Starting kernel ...

[    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
[    0.000000] Linux version 5.15.36-xilinx-v2022.2 (oe-user@oe-host) (aarch64-xilinx-linux-gcc (GCC) 11.2.0, GNU ld (GNU Binutils) 2.37.20210721) #1 SMP Mon Oct 3 07:50:07 UTC 2022
[    0.000000] Machine model: xlnx,zynqmp
[    0.000000] earlycon: cdns0 at MMIO 0x00000000ff000000 (options '115200n8')
[    0.000000] printk: bootconsole [cdns0] enabled
[    0.000000] efi: UEFI not found.
[    0.000000] Zone ranges:
[    0.000000]   DMA32    [mem 0x0000000000000000-0x000000007fefffff]
[    0.000000]   Normal   empty
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x0000000000000000-0x000000007fefffff]
[    0.000000] Initmem setup node 0 [mem 0x0000000000000000-0x000000007fefffff]
[    0.000000] On node 0, zone DMA32: 256 pages in unavailable ranges
[    0.000000] cma: Reserved 512 MiB at 0x000000005b800000
[    0.000000] psci: probing for conduit method from DT.
[    0.000000] psci: PSCIv1.1 detected in firmware.
[    0.000000] psci: Using standard PSCI v0.2 function IDs
[    0.000000] psci: MIGRATE_INFO_TYPE not supported.
[    0.000000] psci: SMC Calling Convention v1.2
[    0.000000] percpu: Embedded 18 pages/cpu s34328 r8192 d31208 u73728
[    0.000000] Detected VIPT I-cache on CPU0
[    0.000000] CPU features: detected: ARM erratum 845719
[    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 515844
[    0.000000] Kernel command line: earlycon console=ttyPS0,115200 clk_ignore_unused root=/dev/mmcblk1p2 rw rootwait cma=512M
[    0.000000] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
[    0.000000] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[    0.000000] mem auto-init: stack:off, heap alloc:off, heap free:off
[    0.000000] Memory: 1509816K/2096128K available (13824K kernel code, 986K rwdata, 3896K rodata, 2112K init, 573K bss, 62024K reserved, 524288K cma-reserved)
[    0.000000] rcu: Hierarchical RCU implementation.
[    0.000000] rcu:     RCU event tracing is enabled.
[    0.000000] rcu:     RCU restricting CPUs from NR_CPUS=16 to nr_cpu_ids=4.
[    0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
[    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
[    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
[    0.000000] GIC: Adjusting CPU interface base to 0x00000000f902f000
[    0.000000] Root IRQ handler: gic_handle_irq
[    0.000000] GIC: Using split EOI/Deactivate mode
[    0.000000] random: get_random_bytes called from start_kernel+0x474/0x6d8 with crng_init=0
[    0.000000] arch_timer: cp15 timer(s) running at 33.33MHz (phys).
[    0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x7b00c47c0, max_idle_ns: 440795202120 ns
[    0.000000] sched_clock: 56 bits at 33MHz, resolution 30ns, wraps every 2199023255541ns
[    0.008296] Console: colour dummy device 80x25
[    0.012394] Calibrating delay loop (skipped), value calculated using timer frequency.. 66.66 BogoMIPS (lpj=133333)
[    0.022665] pid_max: default: 32768 minimum: 301
[    0.027459] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
[    0.034608] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
[    0.043470] rcu: Hierarchical SRCU implementation.
[    0.047379] EFI services will not be available.
[    0.051792] smp: Bringing up secondary CPUs ...
[    0.056522] Detected VIPT I-cache on CPU1
[    0.056562] CPU1: Booted secondary processor 0x0000000001 [0x410fd034]
[    0.056961] Detected VIPT I-cache on CPU2
[    0.056985] CPU2: Booted secondary processor 0x0000000002 [0x410fd034]
[    0.057361] Detected VIPT I-cache on CPU3
[    0.057384] CPU3: Booted secondary processor 0x0000000003 [0x410fd034]
[    0.057428] smp: Brought up 1 node, 4 CPUs
[    0.091605] SMP: Total of 4 processors activated.
[    0.096277] CPU features: detected: 32-bit EL0 Support
[    0.101381] CPU features: detected: CRC32 instructions
[    0.106520] CPU: All CPU(s) started at EL2
[    0.110562] alternatives: patching kernel code
[    0.115890] devtmpfs: initialized
[    0.122631] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
[    0.127950] futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
[    0.147761] pinctrl core: initialized pinctrl subsystem
[    0.148207] DMI not present or invalid.
[    0.151411] NET: Registered PF_NETLINK/PF_ROUTE protocol family
[    0.158138] DMA: preallocated 256 KiB GFP_KERNEL pool for atomic allocations
[    0.164166] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
[    0.171969] audit: initializing netlink subsys (disabled)
[    0.177362] audit: type=2000 audit(0.116:1): state=initialized audit_enabled=0 res=1
[    0.177727] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
[    0.191785] ASID allocator initialised with 65536 entries
[    0.197198] Serial: AMBA PL011 UART driver
[    0.219032] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
[    0.220090] HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
[    0.226760] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
[    0.233415] HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
[    1.309266] cryptd: max_cpu_qlen set to 1000
[    1.333274] DRBG: Continuing without Jitter RNG
[    1.436879] raid6: neonx8   gen()  2116 MB/s
[    1.504929] raid6: neonx8   xor()  1581 MB/s
[    1.573000] raid6: neonx4   gen()  2169 MB/s
[    1.641053] raid6: neonx4   xor()  1551 MB/s
[    1.709124] raid6: neonx2   gen()  2069 MB/s
[    1.777179] raid6: neonx2   xor()  1428 MB/s
[    1.845254] raid6: neonx1   gen()  1763 MB/s
[    1.913304] raid6: neonx1   xor()  1207 MB/s
[    1.981382] raid6: int64x8  gen()  1354 MB/s
[    2.049424] raid6: int64x8  xor()   773 MB/s
[    2.117503] raid6: int64x4  gen()  1597 MB/s
[    2.185551] raid6: int64x4  xor()   848 MB/s
[    2.253626] raid6: int64x2  gen()  1397 MB/s
[    2.321672] raid6: int64x2  xor()   747 MB/s
[    2.389748] raid6: int64x1  gen()  1033 MB/s
[    2.457795] raid6: int64x1  xor()   517 MB/s
[    2.457833] raid6: using algorithm neonx4 gen() 2169 MB/s
[    2.461787] raid6: .... xor() 1551 MB/s, rmw enabled
[    2.466723] raid6: using neon recovery algorithm
[    2.471762] iommu: Default domain type: Translated
[    2.476154] iommu: DMA domain TLB invalidation policy: strict mode
[    2.482591] SCSI subsystem initialized
[    2.486236] usbcore: registered new interface driver usbfs
[    2.491576] usbcore: registered new interface driver hub
[    2.496848] usbcore: registered new device driver usb
[    2.501907] mc: Linux media interface: v0.10
[    2.506097] videodev: Linux video capture interface: v2.00
[    2.511562] pps_core: LinuxPPS API ver. 1 registered
[    2.516461] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[    2.525552] PTP clock support registered
[    2.529454] EDAC MC: Ver: 3.0.0
[    2.532824] zynqmp-ipi-mbox mailbox@ff990400: Registered ZynqMP IPI mbox with TX/RX channels.
[    2.541215] FPGA manager framework
[    2.544510] Advanced Linux Sound Architecture Driver Initialized.
[    2.550768] Bluetooth: Core ver 2.22
[    2.554015] NET: Registered PF_BLUETOOTH protocol family
[    2.559279] Bluetooth: HCI device and connection manager initialized
[    2.565596] Bluetooth: HCI socket layer initialized
[    2.570438] Bluetooth: L2CAP socket layer initialized
[    2.575460] Bluetooth: SCO socket layer initialized
[    2.580623] clocksource: Switched to clocksource arch_sys_counter
[    2.586484] VFS: Disk quotas dquot_6.6.0
[    2.590282] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    2.601392] NET: Registered PF_INET protocol family
[    2.602012] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
[    2.610184] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
[    2.617777] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
[    2.625727] TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
[    2.633084] TCP: Hash tables configured (established 16384 bind 16384)
[    2.639446] UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
[    2.646076] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
[    2.653233] NET: Registered PF_UNIX/PF_LOCAL protocol family
[    2.659038] RPC: Registered named UNIX socket transport module.
[    2.664645] RPC: Registered udp transport module.
[    2.669310] RPC: Registered tcp transport module.
[    2.673977] RPC: Registered tcp NFSv4.1 backchannel transport module.
[    2.680383] PCI: CLS 0 bytes, default 64
[    2.684592] armv8-pmu pmu: hw perfevents: no interrupt-affinity property, guessing.
[    2.692048] hw perfevents: enabled with armv8_pmuv3 PMU driver, 7 counters available
[    2.725184] Initialise system trusted keyrings
[    2.725318] workingset: timestamp_bits=46 max_order=19 bucket_order=0
[    2.731101] NFS: Registering the id_resolver key type
[    2.735429] Key type id_resolver registered
[    2.739571] Key type id_legacy registered
[    2.743566] nfs4filelayout_init: NFSv4 File Layout Driver Registering...
[    2.750214] nfs4flexfilelayout_init: NFSv4 Flexfile Layout Driver Registering...
[    2.757577] jffs2: version 2.2. (NAND) (SUMMARY)  © 2001-2006 Red Hat, Inc.
[    2.801444] NET: Registered PF_ALG protocol family
[    2.801490] xor: measuring software checksum speed
[    2.809522]    8regs           :  2363 MB/sec
[    2.813191]    32regs          :  2799 MB/sec
[    2.818263]    arm64_neon      :  2308 MB/sec
[    2.818323] xor: using function: 32regs (2799 MB/sec)
[    2.823347] Key type asymmetric registered
[    2.827411] Asymmetric key parser 'x509' registered
[    2.832291] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 244)
[    2.839609] io scheduler mq-deadline registered
[    2.844106] io scheduler kyber registered
[    2.848350] irq-xilinx: mismatch in kind-of-intr param
[    2.853192] irq-xilinx: /amba_pl@0/interrupt-controller@a0000000: num_irq=32, sw_irq=0, edge=0x1
[    2.890448] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
[    2.892199] Serial: AMBA driver
[    2.895062] cacheinfo: Unable to detect cache hierarchy for CPU 0
[    2.904765] brd: module loaded
[    2.908245] loop: module loaded
[    2.909107] mtdoops: mtd device (mtddev=name/number) must be supplied
[    2.915669] tun: Universal TUN/TAP device driver, 1.6
[    2.917982] CAN device driver interface
[    2.922331] usbcore: registered new interface driver asix
[    2.927104] usbcore: registered new interface driver ax88179_178a
[    2.933147] usbcore: registered new interface driver cdc_ether
[    2.938939] usbcore: registered new interface driver net1080
[    2.944559] usbcore: registered new interface driver cdc_subset
[    2.950441] usbcore: registered new interface driver zaurus
[    2.955985] usbcore: registered new interface driver cdc_ncm
[    2.962230] usbcore: registered new interface driver uas
[    2.966891] usbcore: registered new interface driver usb-storage
[    2.973459] rtc_zynqmp ffa60000.rtc: registered as rtc0
[    2.978031] rtc_zynqmp ffa60000.rtc: setting system clock to 2023-08-30T10:52:34 UTC (1693392754)
[    2.986890] i2c_dev: i2c /dev entries driver
[    2.992711] usbcore: registered new interface driver uvcvideo
[    2.997197] Bluetooth: HCI UART driver ver 2.3
[    3.001203] Bluetooth: HCI UART protocol H4 registered
[    3.006303] Bluetooth: HCI UART protocol BCSP registered
[    3.011593] Bluetooth: HCI UART protocol LL registered
[    3.016683] Bluetooth: HCI UART protocol ATH3K registered
[    3.022060] Bluetooth: HCI UART protocol Three-wire (H5) registered
[    3.028305] Bluetooth: HCI UART protocol Intel registered
[    3.033653] Bluetooth: HCI UART protocol QCA registered
[    3.038853] usbcore: registered new interface driver bcm203x
[    3.044471] usbcore: registered new interface driver bpa10x
[    3.050010] usbcore: registered new interface driver bfusb
[    3.055457] usbcore: registered new interface driver btusb
[    3.060917] usbcore: registered new interface driver ath3k
[    3.066412] EDAC MC: ECC not enabled
[    3.069999] EDAC DEVICE0: Giving out device to module edac controller cache_err: DEV edac (POLLED)
[    3.078943] EDAC DEVICE1: Giving out device to module zynqmp-ocm-edac controller zynqmp_ocm: DEV ff960000.memory-controller (INTERRUPT)
[    3.091211] sdhci: Secure Digital Host Controller Interface driver
[    3.097042] sdhci: Copyright(c) Pierre Ossman
[    3.101366] sdhci-pltfm: SDHCI platform and OF driver helper
[    3.107287] ledtrig-cpu: registered to indicate activity on CPUs
[    3.113049] SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
[    3.119421] zynqmp_firmware_probe Platform Management API v1.1
[    3.125157] zynqmp_firmware_probe Trustzone version v1.0
[    3.157446] securefw securefw: securefw probed
[    3.157572] zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes: The zynqmp-aes driver shall be deprecated in 2022.2 and removed in 2023.1
[    3.168417] alg: No test for xilinx-zynqmp-aes (zynqmp-aes)
[    3.173840] zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes: AES Successfully Registered
[    3.181887] zynqmp-keccak-384 firmware:zynqmp-firmware:sha384: The zynqmp-sha-deprecated driver shall be deprecated in 2022.2 and removed in 2023.1 release
[    3.195686] alg: No test for xilinx-keccak-384 (zynqmp-keccak-384)
[    3.201942] alg: No test for xilinx-zynqmp-rsa (zynqmp-rsa)
[    3.207445] usbcore: registered new interface driver usbhid
[    3.212832] usbhid: USB HID core driver
[    3.219612] ARM CCI_400_r1 PMU driver probed
[    3.220218] fpga_manager fpga0: Xilinx ZynqMP FPGA Manager registered
[    3.227661] usbcore: registered new interface driver snd-usb-audio
[    3.234169] pktgen: Packet Generator for packet performance testing. Version: 2.75
[    3.241718] Initializing XFRM netlink socket
[    3.245256] NET: Registered PF_INET6 protocol family
[    3.250570] Segment Routing with IPv6
[    3.253763] In-situ OAM (IOAM) with IPv6
[    3.257703] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver
[    3.263841] NET: Registered PF_PACKET protocol family
[    3.268551] NET: Registered PF_KEY protocol family
[    3.273307] can: controller area network core
[    3.277646] NET: Registered PF_CAN protocol family
[    3.282382] can: raw protocol
[    3.285319] can: broadcast manager protocol
[    3.289474] can: netlink gateway - max_hops=1
[    3.293867] Bluetooth: RFCOMM TTY layer initialized
[    3.298646] Bluetooth: RFCOMM socket layer initialized
[    3.303752] Bluetooth: RFCOMM ver 1.11
[    3.307464] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
[    3.312736] Bluetooth: BNEP filters: protocol multicast
[    3.317929] Bluetooth: BNEP socket layer initialized
[    3.322856] Bluetooth: HIDP (Human Interface Emulation) ver 1.2
[    3.328740] Bluetooth: HIDP socket layer initialized
[    3.333697] 8021q: 802.1Q VLAN Support v1.8
[    3.337913] 9pnet: Installing 9P2000 support
[    3.342077] Key type dns_resolver registered
[    3.346423] registered taskstats version 1
[    3.350371] Loading compiled-in X.509 certificates
[    3.356258] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
[    3.370332] ff000000.serial: ttyPS0 at MMIO 0xff000000 (irq = 60, base_baud = 6249999) is a xuartps
[    3.379364] printk: console [ttyPS0] enabled
[    3.379364] printk: console [ttyPS0] enabled
[    3.383656] printk: bootconsole [cdns0] disabled
[    3.383656] printk: bootconsole [cdns0] disabled
[    3.392872] of-fpga-region fpga-full: FPGA Region probed
[    3.404044] xilinx-zynqmp-dma fd500000.dma-controller: ZynqMP DMA driver Probe success
[    3.412131] xilinx-zynqmp-dma fd510000.dma-controller: ZynqMP DMA driver Probe success
[    3.420212] xilinx-zynqmp-dma fd520000.dma-controller: ZynqMP DMA driver Probe success
[    3.428284] xilinx-zynqmp-dma fd530000.dma-controller: ZynqMP DMA driver Probe success
[    3.436369] xilinx-zynqmp-dma fd540000.dma-controller: ZynqMP DMA driver Probe success
[    3.444449] xilinx-zynqmp-dma fd550000.dma-controller: ZynqMP DMA driver Probe success
[    3.452544] xilinx-zynqmp-dma fd560000.dma-controller: ZynqMP DMA driver Probe success
[    3.460625] xilinx-zynqmp-dma fd570000.dma-controller: ZynqMP DMA driver Probe success
[    3.468777] xilinx-zynqmp-dma ffa80000.dma-controller: ZynqMP DMA driver Probe success
[    3.476852] xilinx-zynqmp-dma ffa90000.dma-controller: ZynqMP DMA driver Probe success
[    3.484939] xilinx-zynqmp-dma ffaa0000.dma-controller: ZynqMP DMA driver Probe success
[    3.493014] xilinx-zynqmp-dma ffab0000.dma-controller: ZynqMP DMA driver Probe success
[    3.501102] xilinx-zynqmp-dma ffac0000.dma-controller: ZynqMP DMA driver Probe success
[    3.509186] xilinx-zynqmp-dma ffad0000.dma-controller: ZynqMP DMA driver Probe success
[    3.517267] xilinx-zynqmp-dma ffae0000.dma-controller: ZynqMP DMA driver Probe success
[    3.525347] xilinx-zynqmp-dma ffaf0000.dma-controller: ZynqMP DMA driver Probe success
[    3.534010] spi-nor spi0.0: mt25qu512a (131072 Kbytes)
[    3.539171] 4 fixed-partitions partitions found on MTD device spi0.0
[    3.545516] Creating 4 MTD partitions on "spi0.0":
[    3.550301] 0x000000000000-0x000000a00000 : "qspi-boot"
[    3.556337] 0x000000a00000-0x000002a00000 : "qspi-kernel"
[    3.562449] 0x000002a00000-0x000002a40000 : "qspi-bootenv"
[    3.568634] 0x000002a40000-0x000002ac0000 : "bootscr"
[    3.569545] zynqmp_pll_disable() clock disable failed for dpll_int, ret = -13
[    3.580943] xilinx-axipmon ffa00000.perf-monitor: Probed Xilinx APM
[    3.587462] xilinx-axipmon fd0b0000.perf-monitor: Probed Xilinx APM
[    3.593954] xilinx-axipmon fd490000.perf-monitor: Probed Xilinx APM
[    3.600434] xilinx-axipmon ffa10000.perf-monitor: Probed Xilinx APM
[    3.627785] xhci-hcd xhci-hcd.1.auto: xHCI Host Controller
[    3.633276] xhci-hcd xhci-hcd.1.auto: new USB bus registered, assigned bus number 1
[    3.641036] xhci-hcd xhci-hcd.1.auto: hcc params 0x0238f625 hci version 0x100 quirks 0x0000000002010090
[    3.650442] xhci-hcd xhci-hcd.1.auto: irq 65, io mem 0xfe200000
[    3.656572] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.15
[    3.664839] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    3.672055] usb usb1: Product: xHCI Host Controller
[    3.676932] usb usb1: Manufacturer: Linux 5.15.36-xilinx-v2022.2 xhci-hcd
[    3.683711] usb usb1: SerialNumber: xhci-hcd.1.auto
[    3.688876] hub 1-0:1.0: USB hub found
[    3.692647] hub 1-0:1.0: 1 port detected
[    3.696762] xhci-hcd xhci-hcd.1.auto: xHCI Host Controller
[    3.702250] xhci-hcd xhci-hcd.1.auto: new USB bus registered, assigned bus number 2
[    3.709906] xhci-hcd xhci-hcd.1.auto: Host supports USB 3.0 SuperSpeed
[    3.716467] usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
[    3.724705] usb usb2: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 5.15
[    3.732972] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    3.740191] usb usb2: Product: xHCI Host Controller
[    3.745063] usb usb2: Manufacturer: Linux 5.15.36-xilinx-v2022.2 xhci-hcd
[    3.751849] usb usb2: SerialNumber: xhci-hcd.1.auto
[    3.756975] hub 2-0:1.0: USB hub found
[    3.760739] hub 2-0:1.0: 1 port detected
[    3.765533] at24 0-0050: supply vcc not found, using dummy regulator
[    3.772174] at24 0-0050: 256 byte 24aa025 EEPROM, writable, 1 bytes/write
[    3.778995] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq 40
[    3.785228] cdns-wdt fd4d0000.watchdog: Xilinx Watchdog Timer with timeout 60s
[    3.792675] cdns-wdt ff150000.watchdog: Xilinx Watchdog Timer with timeout 10s
[    3.801091] macb ff0e0000.ethernet: Not enabling partial store and forward
[    3.809653] macb ff0e0000.ethernet eth0: Cadence GEM rev 0x50070106 at 0xff0e0000 irq 38 (68:27:19:a7:6e:ca)
[    3.822165] of_cfs_init
[    3.824645] of_cfs_init: OK
[    3.827545] clk: Not disabling unused clocks
[    3.832043] ALSA device list:
[    3.832631] mmc0: SDHCI controller on ff160000.mmc [ff160000.mmc] using ADMA 64-bit
[    3.834668] mmc1: SDHCI controller on ff170000.mmc [ff170000.mmc] using ADMA 64-bit
[    3.835001]   No soundcards found.
[    3.854187] Waiting for root device /dev/mmcblk1p2...
[    3.906127] mmc1: new high speed SDHC card at address 1388
[    3.911989] mmcblk1: mmc1:1388 USD00 14.7 GiB
[    3.915010] mmc0: new high speed MMC card at address 0001
[    3.918048]  mmcblk1: p1 p2
[    3.922158] mmcblk0: mmc0:0001 IS008G 7.28 GiB
[    3.930695] mmcblk0boot0: mmc0:0001 IS008G 4.00 MiB
[    3.936633] mmcblk0boot1: mmc0:0001 IS008G 4.00 MiB
[    3.942372] mmcblk0rpmb: mmc0:0001 IS008G 4.00 MiB, chardev (241:0)
[    3.963717] EXT4-fs (mmcblk1p2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
[    3.973499] VFS: Mounted root (ext4 filesystem) on device 179:2.
[    3.980218] devtmpfs: mounted
[    3.983760] Freeing unused kernel memory: 2112K
[    3.988344] Run /sbin/init as init process
[    4.208829] random: fast init done
[    4.738784] systemd[1]: systemd 249.7+ running in system mode (+PAM -AUDIT -SELINUX -APPARMOR +IMA -SMACK +SECCOMP -GCRYPT -GNUTLS -OPENSSL +ACL +BLKID -CURL -ELFUTILS -FIDO2 -IDN2 -IDN -IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -BZIP2 -LZ4 -XZ -ZLIB +ZSTD +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=hybrid)
[    4.769082] systemd[1]: Detected architecture arm64.

Welcome to PetaLinux 2022.2_release_S10071807 (honister)!

[    4.813323] systemd[1]: Hostname set to <trenz>.
[    4.977497] systemd-sysv-generator[241]: SysV service '/etc/init.d/urandom' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[    5.019484] systemd-sysv-generator[241]: SysV service '/etc/init.d/sendsigs' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[    5.043453] systemd-sysv-generator[241]: SysV service '/etc/init.d/busybox-httpd' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[    5.068141] systemd-sysv-generator[241]: SysV service '/etc/init.d/umountfs' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[    5.092072] systemd-sysv-generator[241]: SysV service '/etc/init.d/umountnfs.sh' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[    5.116344] systemd-sysv-generator[241]: SysV service '/etc/init.d/halt' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[    5.139943] systemd-sysv-generator[241]: SysV service '/etc/init.d/save-rtc.sh' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[    5.164966] systemd-sysv-generator[241]: SysV service '/etc/init.d/rng-tools' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[    5.189789] systemd-sysv-generator[241]: SysV service '/etc/init.d/reboot' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[    5.213694] systemd-sysv-generator[241]: SysV service '/etc/init.d/single' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[    5.238167] systemd-sysv-generator[241]: SysV service '/etc/init.d/nfsserver' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[    5.263637] systemd-sysv-generator[241]: SysV service '/etc/init.d/inetd.busybox' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[    5.288027] systemd-sysv-generator[241]: SysV service '/etc/init.d/watchdog-init' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[    5.313156] systemd-sysv-generator[241]: SysV service '/etc/init.d/sshd' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[    5.337651] systemd-sysv-generator[241]: SysV service '/etc/init.d/nfscommon' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[    5.817681] systemd[1]: Queued start job for default target Graphical Interface.
[    5.825937] random: systemd: uninitialized urandom read (16 bytes read)
[    5.862478] systemd[1]: Created slice Slice /system/getty.
[  OK  ] Created slice Slice /system/getty.
[    5.884747] random: systemd: uninitialized urandom read (16 bytes read)
[    5.892675] systemd[1]: Created slice Slice /system/modprobe.
[  OK  ] Created slice Slice /system/modprobe.
[    5.912722] random: systemd: uninitialized urandom read (16 bytes read)
[    5.920557] systemd[1]: Created slice Slice /system/serial-getty.
[  OK  ] Created slice Slice /system/serial-getty.
[    5.941760] systemd[1]: Created slice User and Session Slice.
[  OK  ] Created slice User and Session Slice.
[    5.964897] systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
[  OK  ] Started Dispatch Password …ts to Console Directory Watch.
[    5.988831] systemd[1]: Started Forward Password Requests to Wall Directory Watch.
[  OK  ] Started Forward Password R…uests to Wall Directory Watch.
[    6.012901] systemd[1]: Reached target Path Units.
[  OK  ] Reached target Path Units.
[    6.028734] systemd[1]: Reached target Remote File Systems.
[  OK  ] Reached target Remote File Systems.
[    6.048714] systemd[1]: Reached target Slice Units.
[  OK  ] Reached target Slice Units.
[    6.064739] systemd[1]: Reached target Swaps.
[  OK  ] Reached target Swaps.
[    6.089507] systemd[1]: Listening on RPCbind Server Activation Socket.
[  OK  ] Listening on RPCbind Server Activation Socket.
[    6.112727] systemd[1]: Reached target RPC Port Mapper.
[  OK  ] Reached target RPC Port Mapper.
[    6.135682] systemd[1]: Listening on Syslog Socket.
[  OK  ] Listening on Syslog Socket.
[    6.152862] systemd[1]: Listening on initctl Compatibility Named Pipe.
[  OK  ] Listening on initctl Compatibility Named Pipe.
[    6.177198] systemd[1]: Listening on Journal Audit Socket.
[  OK  ] Listening on Journal Audit Socket.
[    6.196939] systemd[1]: Listening on Journal Socket (/dev/log).
[  OK  ] Listening on Journal Socket (/dev/log).
[    6.217006] systemd[1]: Listening on Journal Socket.
[  OK  ] Listening on Journal Socket.
[    6.233190] systemd[1]: Listening on Network Service Netlink Socket.
[  OK  ] Listening on Network Service Netlink Socket.
[    6.257063] systemd[1]: Listening on udev Control Socket.
[  OK  ] Listening on udev Control Socket.
[    6.276929] systemd[1]: Listening on udev Kernel Socket.
[  OK  ] Listening on udev Kernel Socket.
[    6.296948] systemd[1]: Listening on User Database Manager Socket.
[  OK  ] Listening on User Database Manager Socket.
[    6.323278] systemd[1]: Mounting Huge Pages File System...
         Mounting Huge Pages File System...
[    6.343318] systemd[1]: Mounting POSIX Message Queue File System...
         Mounting POSIX Message Queue File System...
[    6.367401] systemd[1]: Mounting Kernel Debug File System...
         Mounting Kernel Debug File System...
[    6.385052] systemd[1]: Condition check resulted in Kernel Trace File System being skipped.
[    6.397304] systemd[1]: Mounting Temporary Directory /tmp...
         Mounting Temporary Directory /tmp...
[    6.414111] systemd[1]: Condition check resulted in Create List of Static Device Nodes being skipped.
[    6.426398] systemd[1]: Starting Load Kernel Module configfs...
         Starting Load Kernel Module configfs...
[    6.447915] systemd[1]: Starting Load Kernel Module drm...
         Starting Load Kernel Module drm...
[    6.467731] systemd[1]: Starting Load Kernel Module fuse...
         Starting Load Kernel Module fuse...
[    6.487791] systemd[1]: Starting RPC Bind...
         Starting RPC Bind...
[    6.500833] systemd[1]: Condition check resulted in File System Check on Root Device being skipped.
[    6.540171] systemd[1]: Starting Load Kernel Modules...
         Starting Load Kernel Modules...
[    6.559571] systemd[1]: Starting Remount Root and Kernel File Systems...
[    6.563773] dmaproxy: loading out-of-tree module taints kernel.
         Starting Remount Root and Kernel File Systems    6.573840] EXT4-fs (mmcblk1p2): re-mounted. Opts: (null). Quota mode: none.
0m...
[    6.599729] systemd[1]: Starting Coldplug All udev Devices...
         Starting Coldplug All udev Devices...
[    6.622603] systemd[1]: Started RPC Bind.
[  OK  ] Started RPC Bind.
[    6.637078] systemd[1]: Mounted Huge Pages File System.
[  OK  ] Mounted Huge Pages File System.
[    6.657003] systemd[1]: Mounted POSIX Message Queue File System.
[  OK  ] Mounted POSIX Message Queue File System.
[    6.681041] systemd[1]: Mounted Kernel Debug File System.
[  OK  ] Mounted Kernel Debug File System.
[    6.701131] systemd[1]: Mounted Temporary Directory /tmp.
[  OK  ] Mounted Temporary Directory /tmp.
[    6.721581] systemd[1]: modprobe@configfs.service: Deactivated successfully.
[    6.729934] systemd[1]: Finished Load Kernel Module configfs.
[  OK  ] Finished Load Kernel Module configfs.
[    6.753627] systemd[1]: modprobe@drm.service: Deactivated successfully.
[    6.761491] systemd[1]: Finished Load Kernel Module drm.
[  OK  ] Finished Load Kernel Module drm.
[    6.785682] systemd[1]: modprobe@fuse.service: Deactivated successfully.
[    6.793848] systemd[1]: Finished Load Kernel Module fuse.
[  OK  ] Finished Load Kernel Module fuse.
[    6.818376] systemd[1]: Finished Load Kernel Modules.
[  OK  ] Finished Load Kernel Modules.
[    6.834113] systemd[1]: Finished Remount Root and Kernel File Systems.
[  OK  ] Finished Remount Root and Kernel File Systems.
[    6.861525] systemd[1]: Mounting NFSD configuration filesystem...
         Mounting NFSD configuration filesystem...
[    6.877185] systemd[1]: Condition check resulted in FUSE Control File System being skipped.
[    6.888570] systemd[1]: Mounting Kernel Configuration File System...
         Mounting Kernel Configuration File System...
[    6.914640] systemd[1]: Condition check resulted in Rebuild Hardware Database being skipped.
[    6.923266] systemd[1]: Condition check resulted in Platform Persistent Storage Archival being skipped.
[    6.935773] systemd[1]: Starting Apply Kernel Variables...
         Starting Apply Kernel Variables...
[    6.952975] systemd[1]: Condition check resulted in Create System Users being skipped.
         Starting Create Static Device Nodes in /dev...
[    6.984846] systemd[1]: Failed to mount NFSD configuration filesystem.
[FAILED] Failed to mount NFSD configuration filesystem.
See 'systemctl status proc-fs-nfsd.mount' for details.
[DEPEND] Dependency failed for NFS Mount Daemon.
[DEPEND] Dependency failed for NFS server and services.
[  OK  ] Mounted Kernel Configuration File System.
[  OK  ] Finished Apply Kernel Variables.
[  OK  ] Finished Create Static Device Nodes in /dev.
[  OK  ] Reached target Preparation for Local File Systems.
         Mounting /var/volatile...
[  OK  ] Started Entropy Daemon based on the HAVEGE algorithm.
         Starting Journal Service...
         Starting Rule-based Manage…for Device Events and Files...
[  OK  ] Mounted /var/volatile.
[  OK  ] Finished Coldplug All udev Devices.
         Starting Wait for udev To …plete Device Initialization...
         Starting Load/Save Random Seed...
[  OK  ] Reached target Local File Systems.
[  OK  ] Started Journal Service.
         Starting Flush Journal to Persistent Storage...
[  OK  ] Finished Flush Journal to Persistent Storage.
         Starting Create Volatile Files and Directories...
[  OK  ] Finished Create Volatile Files and Directories.
[  OK  ] Started Rule-based Manager for Device Events and Files.
         Starting Network Time Synchronization...
         Starting Record System Boot/Shutdown in UTMP...
[  OK  ] Finished Record System Boot/Shutdown in UTMP.
[    7.757431] zocl-drm amba_pl@0:zyxclmm_drm: IRQ index 32 not found
[  OK  ] Started Network Time Synchronization.
[  OK  ] Reached target System Time Set.
[  OK  ] Listening on Load/Save RF …itch Status /dev/rfkill Watch.
[  OK  ] Finished Load/Save Random Seed.
[  OK  ] Finished Wait for udev To Complete Device Initialization.
[  OK  ] Created slice Slice /system/systemd-fsck.
[  OK  ] Started Hardware RNG Entropy Gatherer Daemon.
[  OK  ] Reached target System Initialization.
[  OK  ] Started Daily Cleanup of Temporary Directories.
[  OK  ] Reached target Timer Units.
[  OK  ] Listening on D-Bus System Message Bus Socket.
         Starting sshd.socket...
         Starting File System Check on /dev/mmcblk1p1...
[  OK  ] Listening on sshd.socket.
[  OK  ] Finished File System Check on /dev/mmcblk1p1.
[  OK  ] Reached target Socket Units.
[  OK  ] Reached target Basic System.
         Mounting /run/media/mmcblk1p1...
[  OK  ] Started Kernel Logging Service.
[  OK  ] Started System Logging Service.
[  OK  ] Started D-Bus System Message Bus.
         Starting IPv6 Packet Filtering Framework...
         Starting IPv4 Packet Filtering Framework...
         Starting rng-tools.service...
         Starting Resets System Activity Logs...
         Starting User Login Management...
[  OK  ] Started Xserver startup without a display manager.
         Starting OpenSSH Key Generation...
[  OK  ] Finished IPv6 Packet Filtering Framework.
[  OK  ] Finished IPv4 Packet Filtering Framework.
[  OK  ] Started rng-tools.service.
[  OK  ] Finished Resets System Activity Logs.
[  OK  ] Reached target Preparation for Network.
         Starting LSB: NFS support for both client and server...
         Starting Network Configuration...
[  OK  ] Mounted /run/media/mmcblk1p1.
[  OK  ] Finished OpenSSH Key Generation.
[  OK  ] Started LSB: NFS support for both client and server.
         Starting busybox-httpd.service...
         Starting inetd.busybox.service...
         Starting LSB: Kernel NFS server support...
[  OK  ] Started busybox-httpd.service.
[  OK  ] Started inetd.busybox.service.
[FAILED] Failed to start LSB: Kernel NFS server support.
See 'systemctl status nfsserver.service' for details.
[  OK  ] Started User Login Management.
[  OK  ] Started Network Configuration.
         Starting Network Name Resolution...
[  OK  ] Started Network Name Resolution.
[  OK  ] Reached target Network.
[  OK  ] Reached target Host and Network Name Lookups.
[  OK  ] Started NFS status monitor for NFSv2/3 locking..
         Starting Permit User Sessions...
         Starting Target Communication Framework agent...
[  OK  ] Started Xinetd A Powerful Replacement For Inetd.
[  OK  ] Finished Permit User Sessions.
[  OK  ] Started Getty on tty1.
[  OK  ] Started Serial Getty on ttyPS0.
[  OK  ] Reached target Login Prompts.
[  OK  ] Started Target Communication Framework agent.
[  OK  ] Reached target Multi-User System.
[  OK  ] Reached target Graphical Interface.
         Starting Record Runlevel Change in UTMP...
[  OK  ] Finished Record Runlevel Change in UTMP.

PetaLinux 2022.2_release_S10071807 trenz ttyPS0

trenz login: root (automatic login)

Init Start
Run init.sh from SD card
Load SD Init Script
User bash Code can be insered here and put init.sh on SD
Init End
root@trenz:~# cd /run/media/mmcblk1p1/
root@trenz:/run/media/mmcblk1p1# ./test_vadd krnl_vadd.xclbin
INFO: Reading krnl_vadd.xclbin
Loading: 'krnl_vadd.xclbin'
Trying to program device[0]: edge
Device[0]: program successful!
TEST PASSED
root@trenz:/run/media/mmcblk1p1# ifconfig
eth0      Link encap:Ethernet  HWaddr 68:27:19:A7:6E:CA
          inet addr:192.168.13.184  Bcast:192.168.13.255  Mask:255.255.255.0
          inet6 addr: fe80::6a27:19ff:fea7:6eca/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:80 errors:0 dropped:0 overruns:0 frame:0
          TX packets:53 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:13303 (12.9 KiB)  TX bytes:8950 (8.7 KiB)
          Interrupt:38

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:106 errors:0 dropped:0 overruns:0 frame:0
          TX packets:106 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:7680 (7.5 KiB)  TX bytes:7680 (7.5 KiB)

root@trenz:/run/media/mmcblk1p1# halt
         Stopping Session c1 of User root...
[  OK  ] Removed slice Slice /system/modprobe.
[  OK  ] Stopped target Graphical Interface.
[  OK  ] Stopped target Multi-User System.
[  OK  ] Stopped target Login Prompts.
[  OK  ] Stopped target Host and Network Name Lookups.
[  OK  ] Stopped target System Time Set.
[  OK  ] Stopped target Timer Units.
[  OK  ] Stopped Daily Cleanup of Temporary Directories.
[  OK  ] Closed Load/Save RF Kill Switch Status /dev/rfkill Watch.
         Stopping busybox-httpd.service...
         Stopping Kernel Logging Service...
         Stopping System Logging Service...
         Stopping Getty on tty1...
         Stopping inetd.busybox.service...
         Stopping Serial Getty on ttyPS0...
[  OK  ] Stopped Resets System Activity Logs.
         Stopping Load/Save Random Seed...
         Stopping Target Communication Framework agent...
         Stopping Xinetd A Powerful Replacement For Inetd...
[  OK  ] Stopped OpenSSH Key Generation.
[  OK  ] Stopped Kernel Logging Service.
[  OK  ] Stopped System Logging Service.
[  OK  ] Stopped Xinetd A Powerful Replacement For Inetd.
[  OK  ] Stopped Serial Getty on ttyPS0.
[  OK  ] Stopped Target Communication Framework agent.
[  OK  ] Stopped Getty on tty1.
[  OK  ] Stopped busybox-httpd.service.
[  OK  ] Stopped inetd.busybox.service.
[  OK  ] Stopped Load/Save Random Seed.
[  OK  ] Stopped Session c1 of User root.
[  OK  ] Removed slice Slice /system/getty.
[  OK  ] Removed slice Slice /system/serial-getty.
         Stopping LSB: NFS support for both client and server...
         Stopping User Login Management...
         Stopping User Manager for UID 0...
[  OK  ] Stopped LSB: NFS support for both client and server.
[  OK  ] Stopped User Manager for UID 0.
[  OK  ] Stopped target RPC Port Mapper.
         Stopping rng-tools.service...
         Stopping User Runtime Directory /run/user/0...
[  OK  ] Unmounted /run/user/0.
[  OK  ] Stopped Hardware RNG Entropy Gatherer Daemon.
[  OK  ] Stopped User Login Management.
[  OK  ] Stopped rng-tools.service.
[  OK  ] Stopped User Runtime Directory /run/user/0.
[  OK  ] Removed slice User Slice of UID 0.
         Stopping D-Bus System Message Bus...
         Stopping Permit User Sessions...
[  OK  ] Stopped D-Bus System Message Bus.
[  OK  ] Stopped Permit User Sessions.
[  OK  ] Stopped target Network.
[  OK  ] Stopped target Remote File Systems.
         Stopping Network Name Resolution...
[  OK  ] Stopped Network Name Resolution.
         Stopping Network Configuration...
[  OK  ] Stopped Network Configuration.
[  OK  ] Stopped target Preparation for Network.
[  OK  ] Stopped IPv6 Packet Filtering Framework.
[  OK  ] Stopped IPv4 Packet Filtering Framework.
[  OK  ] Stopped target Basic System.
[  OK  ] Stopped target Path Units.
[  OK  ] Stopped Dispatch Password …ts to Console Directory Watch.
[  OK  ] Stopped Forward Password R…uests to Wall Directory Watch.
[  OK  ] Stopped target Slice Units.
[  OK  ] Removed slice User and Session Slice.
[  OK  ] Stopped target Socket Units.
[  OK  ] Closed D-Bus System Message Bus Socket.
[  OK  ] Closed sshd.socket.
[  OK  ] Stopped target System Initialization.
[  OK  ] Closed Syslog Socket.
[  OK  ] Closed Network Service Netlink Socket.
[  OK  ] Stopped Apply Kernel Variables.
[  OK  ] Stopped Load Kernel Modules.
         Stopping Network Time Synchronization...
         Stopping Record System Boot/Shutdown in UTMP...
[  OK  ] Stopped Network Time Synchronization.
[  OK  ] Stopped Record System Boot/Shutdown in UTMP.
[  OK  ] Stopped Create Volatile Files and Directories.
[  OK  ] Stopped target Local File Systems.
         Unmounting /run/media/mmcblk1p1...
         Unmounting Temporary Directory /tmp...
         Unmounting /var/volatile...
[  OK  ] Unmounted /run/media/mmcblk1p1.
[  OK  ] Unmounted Temporary Directory /tmp.
[  OK  ] Unmounted /var/volatile.
[  OK  ] Stopped target Swaps.
[  OK  ] Reached target Unmount All Filesystems.
[  OK  ] Stopped File System Check on /dev/mmcblk1p1.
[  OK  ] Removed slice Slice /system/systemd-fsck.
[  OK  ] Stopped target Preparation for Local File Systems.
[  OK  ] Stopped Remount Root and Kernel File Systems.
[  OK  ] Stopped Create Static Device Nodes in /dev.
[  OK  ] Reached target System Shutdown.
[  OK  ] Reached target Late Shutdown Services.
         Starting System Halt...
[  317.764371] reboot: System halted

The TE0706-03 test_board can be connected via Ethernet to the X11 terminal running on your Ubuntu PC with the PuTTY application.

Find the Ethernet IP address of your board with the ifconfig command in the PetaLinux terminal.
In the Ubuntu PC OS, open the PuTTY application.
In PuTTY, set the Ethernet IP of your board.
In PuTTY, select the checkbox SSH->X11->Enable X11 forwarding.

Use the Ubuntu PC mouse and keyboard. In PuTTY, open the PetaLinux terminal and log in as:
user: root, password: root
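
Alternatively, the same X11 forwarding can be opened from an Ubuntu PC terminal with plain ssh; the IP address below is the one reported by ifconfig in the listing above, replace it with the address of your board:

$ ssh -X root@192.168.13.184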

In the opened PetaLinux terminal, start the X11 desktop manager x-session-manager by typing:
root@Trenz:~# x-session-manager &

Click on the “Terminal” icon (a Unicode-capable rxvt).

The terminal opens as an X11 graphic window. In the X11 terminal, use the Ubuntu PC keyboard and type:

sh-5.0# cd /media/sd-mmcblk1p1/
sh-5.0# ./test_vadd krnl_vadd.xclbin

The application test_vadd should run with this output:

sh-5.0# cd /media/sd-mmcblk1p1/
sh-5.0# ./test_vadd krnl_vadd.xclbin
INFO: Reading krnl_vadd.xclbin
Loading: 'krnl_vadd.xclbin'
Trying to program device[0]: edge
Device[0]: program successful!
TEST PASSED
sh-5.0#

The test_board has been running the PetaLinux OS and drives a simple version of an X11 GUI on the Ubuntu desktop. The application test_vadd has been started from the X11 rxvt terminal emulator.

Close the rxvt terminal emulator by clicking the “x” icon (in the upper right corner) or by typing:

sh-5.0# exit

In X11, click the “Shutdown” icon to safely close PetaLinux running on the test board.

The system on the test board is halted. Messages related to the halt of the system can be seen on the PC USB terminal.

The SD card can now be safely removed from the test_board.
Close the PC USB terminal application.
The TE0706-03 test_board can now be disconnected from power.

Test 3: Vitis-AI-3.0 Demo


This test implements a simple Vitis AI 3.0 demo to verify the DPU integration into our custom extensible platform. The tutorial follows the Xilinx Vitis tutorial for the ZCU104 board with the fixes and customizations required for our case.

We have to install the correct Vitis project with the DPU instance from this repository:

https://github.com/Xilinx/Vitis-AI/tree/3.0/dpu

The page contains a table with supported targets. Use the line of this table dedicated to the DPUCZDX8G DPU for MPSoC and Kria K26 devices.

It is the download link for the programmable logic based DPU, targeting general purpose CNN inference with full support for the Vitis AI ModelZoo. It supports either the Vitis or Vivado flow on 16nm Zynq® UltraScale+™ platforms.

Click on the Download link in the column: Reference Design

This will result in the download of the file:

~/Downloads/DPUCZDX8V_VAI_v3.0.tar.gz

It contains the directory
~/Downloads/DPUCZDX8V_VAI_v3.0

Copy this directory to the directory:
~/work/DPUCZDX8V_VAI_v3.0

It contains the HDL code for the DPU and also the source and project files to test the DPU with the AI resnet50 inference example.
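
A possible command sequence for the extraction and the copy is sketched below; it assumes the archive was downloaded to ~/Downloads as described above:

$ cd ~/Downloads
$ tar -xzf DPUCZDX8V_VAI_v3.0.tar.gz
$ cp -r ~/Downloads/DPUCZDX8V_VAI_v3.0 ~/work/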

Create and Build Vitis Design


Create a new directory test_board_dpu_trd to test the Vitis extensible flow example “dpu trd”:
~/work/te0820_84_240/test_board_dpu_trd

Current directory structure:
~/work/te0820_84_240/test_board
~/work/te0820_84_240/test_board_pfm
~/work/te0820_84_240/test_board_test_vadd
~/work/te0820_84_240/test_board_dpu_trd

Change working directory:

$ cd ~/work/te0820_84_240/test_board_dpu_trd

In Ubuntu terminal, start Vitis by:

$ vitis &

In Vitis IDE Launcher, select your working directory
~/work/te0820_84_240/test_board_dpu_trd
Click on Launch to start Vitis.

Add Vitis-AI Repository to Vitis

Open menu Window → Preferences

Go to Library Repository tab

Add Vitis-AI by clicking the Add button and fill in the form as shown below. Use the absolute path to your home folder in the field "Location":



Click Apply and Close.

Field "Location" says that the Vitis-AI repository from github has been cloned into ~/work/DPUCZDX8V_VAI_v3.0 folder, already in the stage of Petalinux configuration. Use the absolute path to your home directory. It depends on the user name. The user name in the figure is "devel". Replace it by your user name.

The correctly added library appears in Libraries:

Open menu Xilinx → Libraries...

You can find the just-added Vitis-AI library there, marked as "Installed".

Create a Vitis-AI Design for our te0820_84_240 custom platform

Select File -> New -> Application project. Click Next.

Skip welcome page if it is shown.

Click on “+ Add” icon and select the custom extensible platform te0820_84_240_pfm[custom] in the directory:
~/work/te0820_84_240/test_board_pfm/te0820_84_240_pfm/export/te0820_84_240_pfm

We can see available PL clocks and frequencies.

The PL4 clock with the 240 MHz frequency has been set as default in the platform creation process.


Click Next.
In “Application Project Details” window type into Application project name: dpu_trd
Click Next.
In “Domain window” type (or select by browse):
“Sysroot path”:
~/work/te0820_84_240/test_board_pfm/sysroots/cortexa72-cortexa53-xilinx-linux
“Root FS”:
~/work/te0820_84_240/test_board/os/petalinux/images/linux/rootfs.ext4
“Kernel Image”:
~/work/te0820_84_240/test_board/os/petalinux/images/linux/Image
Click Next.

In “Templates window”, if not done before, update “Vitis IDE Examples” and “Vitis IDE Libraries”.

In “Find”, type: “dpu” to search for the “DPU Kernel (RTL Kernel)” example.

Select: “DPU Kernel (RTL Kernel)”

Click Finish
New project template is created.

In dpu_trd window menu “Active build configuration” switch from “SW Emulation” to “Hardware”.

The file dpu_conf.vh, located in the dpu_trd_kernels/src/prj/Vitis directory, contains the DPU configuration.

Open the file dpu_conf.vh and change line 37 from:

`define URAM_DISABLE 

to 

`define URAM_ENABLE 

and save the modified file.

This modification is necessary for successful implementation of the DPU on the ZU4EV device of the TE0820 module, with the internal memories of the DPU implemented in URAMs.

Go to dpu_trd_system_hw_link and double click on dpu_trd_system_hw_link.prj.

Remove the sfm_xrt_top kernel from the binary container by right-clicking on it and choosing Remove.

Reduce the number of DPU kernels to one.

Configure connection of DPU kernels

On the same tab, right-click on dpu and choose Edit V++ Options.

Click the "..." button on the line V++ Configuration Settings and modify the configuration as follows:

[clock]
freqHz=200000000:DPUCZDX8G_1.aclk
freqHz=400000000:DPUCZDX8G_1.ap_clk_2
 
[connectivity]
sp=DPUCZDX8G_1.M_AXI_GP0:HPC0
sp=DPUCZDX8G_1.M_AXI_HP0:HP0
sp=DPUCZDX8G_1.M_AXI_HP2:HP1
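
These entries use the v++ configuration file syntax. When building from the command line, the same content could be stored in a configuration file (for example dpu_link.cfg, a name chosen here only for illustration) and passed to the v++ link step with --config, roughly as follows:

$ v++ -l -t hw --platform <path to te0820_84_240_pfm.xpfm> --config dpu_link.cfg -o dpu.xclbin <DPU kernel .xo file>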

Update packaging to add dependencies into SD Card

Create a new folder img in your project in dpu_trd/src/app.

Download the image from the provided link and place it in the newly created folder dpu_trd/src/app/img.

Double click dpu_trd_system.sprj

Click "..." button on Packaging options

Enter "--package.sd_dir=../../test_dpu_trd/src/app"

Click OK.

Build DPU_TRD application

In “Explorer” section of Vitis IDE, click on:  dpu_trd_system[te0820_84_240_pfm] to select it.

Right Click on:  dpu_trd_system[te0820_84_240_pfm] and select in the opened sub-menu:
Build project.

Created extended HW with DPU:

Run DPU_TRD on Board

Write sd_card.img to the SD card using an SD card reader.

The sd_card.img file is the output of the compilation and packaging by Vitis. It is located in the directory:
~/work/te0820_84_240/test_board_dpu_trd/dpu_trd_system/Hardware/package/

On a Windows 10 (or Windows 11) PC, install the program Win32DiskImager for this task. Win32 Disk Imager can write a raw disk image to removable devices.
https://win32diskimager.org/

Boot the board and open a terminal on the board, either through the serial console connection, by opening an Ethernet connection to the ssh server on the board, or by opening a terminal directly in the window manager on the board. Continue in the terminal of the embedded board.

A detailed guide on how to run the embedded board and connect to it can be found in the section Run Compiled test_vadd Example Application above.

Check ext4 partition size by:

root@Trenz:~# cd /
root@Trenz:~# df .
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/root               564048    398340    122364  77% /

Resize partition

root@Trenz:~# resize-part /dev/mmcblk1p2
/dev/mmcblk1p2
Warning: Partition /dev/mmcblk1p2 is being used. Are you sure you want to continue?
parted: invalid token: 100%
Yes/No? yes
End?  [2147MB]? 100%
Information: You may need to update /etc/fstab.
 
resize2fs 1.45.3 (14-Jul-2019)
Filesystem at /dev/mmcblk1p2 is mounted on /media/sd-mmcblk1p2; o[   72.751329] EXT4-fs (mmcblk1p2): resizing filesystem from 154804 to 1695488 blocks
n-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
[   75.325525] EXT4-fs (mmcblk1p2): resized filesystem to 1695488
The filesystem on /dev/mmcblk1p2 is now 1695488 (4k) blocks long.

Check the ext4 partition size again; you should see:

root@Trenz:~# df . -h
Filesystem                Size      Used Available Use% Mounted on
/dev/root                 6.1G    390.8M      5.4G   7% /

The available size will differ according to your SD card size.

The next figures present:

  • Extension of the ext4 disk size in the X11 terminal.
  • The mc (Midnight Commander) application on the ARM board - initial file structure.
  • The mc (Midnight Commander) application on the ARM board - file structure after the copy of files.
  • The PC WinSCP application used for secure Ethernet copy of the tested image bellpeppe-994958.JPEG from the PC to the ARM board.
  • Execution of the ResNet50 example.



Copy dependencies to home folder:

# Libraries
root@Trenz:~# cp -r /run/media/mmcblk1p1/app/samples/ ~
# Model
root@Trenz:~# cp /run/media/mmcblk1p1/app/model/resnet50.xmodel ~
# Host app
root@Trenz:~# cp /run/media/mmcblk1p1/dpu_trd ~
# Images to test
root@Trenz:~# cp -r /run/media/mmcblk1p1/app/img ~

Run the resnet50 application from the /home/root folder; you can observe that "bell pepper" receives the highest score.

root@Trenz:~# env XLNX_VART_FIRMWARE=/run/media/mmcblk1p1/dpu.xclbin samples/bin/resnet50 img/bellpeppe-994958.JPEG
score[945]  =  0.992235     text: bell pepper,
score[941]  =  0.00315807   text: acorn squash,
score[943]  =  0.00191546   text: cucumber, cuke,
score[939]  =  0.000904801  text: zucchini, courgette,
score[949]  =  0.00054879   text: strawberry,
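
Instead of prefixing each command with env, the firmware path can also be exported once per shell session; a minimal sketch producing the same result:

root@Trenz:~# export XLNX_VART_FIRMWARE=/run/media/mmcblk1p1/dpu.xclbin
root@Trenz:~# samples/bin/resnet50 img/bellpeppe-994958.JPEG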

The TE0706-03 carrier with the TE0820 module is running the PetaLinux OS and drives a simple X11 GUI on a monitor connected via DisplayPort. The dpu_trd application computes the HW-accelerated AI inference of the ResNet50 network on the DPU.

ResNet50 is trained to recognise 1000 different object classes in images. The test board application reads the input image and calls the DPU, which implements the ResNet50 network. The "bell pepper" object is recognised with high probability.

The bellpeppe-994958.JPEG image is displayed from a file located on the Ubuntu PC, together with the PetaLinux terminal forwarded to X11 via PuTTY.

The terminal demonstrates the execution of the resnet50 application cross-compiled in the test_dpu_trd project.

On-board compilation of the Vitis AI 3.0 demo test_dpu_trd

The C++ SW code of the test_dpu_trd application can be compiled directly on the test board.

The result of the on-board compilation is the application executable file a.out.

The compiled binary application a.out provides identical results to the binary application resnet50 from the test_dpu_trd project compiled in Vitis in the PC Ubuntu Vitis AI 3.0 environment.

root@Trenz:~# cd samples
root@Trenz:~# ./build.sh
Opencv4                   OpenCV - Open Source Computer Vision Library
root@Trenz:~# cd ..
  
root@Trenz:~# env XLNX_VART_FIRMWARE=/run/media/mmcblk1p1/dpu.xclbin samples/a.out img/bellpeppe-994958.JPEG
score[945]  =  0.992235     text: bell pepper,
score[941]  =  0.00315807   text: acorn squash,
score[943]  =  0.00191546   text: cucumber, cuke,
score[939]  =  0.000904801  text: zucchini, courgette,
score[949]  =  0.00054879   text: strawberry,

Only the C++ SW part of the application can be compiled on the test board. The HW acceleration part (the DPU kernel) has to be compiled in the Vitis AI 3.0 framework on the Ubuntu PC. It is already present in the SD card image packed by the Vitis 2022.2 extensible design flow.

Additional Vitis extensible flow demos

Additional Vitis extensible flow demos can be compiled in Vitis 2022.2 and packed for the SD card. Demos (like the test_vadd demo) can be executed on the test board.

The starting point for exploration of the Vitis extensible flow is the Vitis Accel Examples repository (the project templates are already available in the Vitis 2022.2 tool):
GitHub - Xilinx/Vitis_Accel_Examples at 2022.2

Additional Vitis AI 3.0 demos

Additional demos from the Vitis AI 3.0 library can be compiled and executed on the test board with the identical DPU HW.

The starting point for exploration of these Vitis AI 3.0 examples is this Xilinx web page:

https://xilinx.github.io/Vitis-AI/3.0/html/index.html

Vitis AI 3.0 demos work in several modes:

  • From an image stored in a file, with output as text to the console or as an image displayed on the X11 desktop.
  • From a sequence of images stored in several files, with output as text to the console or as images displayed on the X11 desktop.
  • From USB 2/3 web camera video input, with output as video displayed on the X11 remote desktop.

Support image and video file archives

Download the AI 3.0 support archive with images:

https://www.xilinx.com/bin/public/openDownload?filename=vitis_ai_library_r3.0.0_images.tar.gz

Download the AI 3.0 support archive with videos:

https://www.xilinx.com/bin/public/openDownload?filename=vitis_ai_library_r3.0.0_video.tar.gz

Unzip and untar the content to a directory of your choice, for example into these directories (sizes 2.3 GB, 872.9 MB and 54.3 MB):
~/Downloads/apps  
~/Downloads/samples
~/Downloads/samples_onnx
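
A possible way to unpack both support archives on the Ubuntu PC, assuming they were saved in ~/Downloads under their original file names:

$ cd ~/Downloads
$ tar -xzvf vitis_ai_library_r3.0.0_images.tar.gz
$ tar -xzvf vitis_ai_library_r3.0.0_video.tar.gz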

These large packages provide support material for the AI 3.0 examples. The next section of this tutorial demonstrates the use of the AI 3.0 examples on the vehicleclassification example. The vehicleclassification example will be downloaded to the evaluation board, compiled on the evaluation board, and executed with image input from a file or video input from a USB camera.

Vehicleclassification example

Copy the support material for the AI 3.0 vehicleclassification example from

 ~/Downloads/samples/vehicleclassification
to
~/work/Vitis-AI-3.0/examples/vai_library/samples/vehicleclassification

Zip the directory into the file

~/work/Vitis-AI-3.0/examples/vai_library/samples/vehicleclassification.zip

Copy vehicleclassification.zip to the target board SD card home directory:

~/vehicleclassification.zip
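
A possible command sequence on the Ubuntu PC for the zip and copy steps above; the board address 192.168.1.100 is only a placeholder and the copy assumes the ssh server on the board is reachable:

$ cd ~/work/Vitis-AI-3.0/examples/vai_library/samples
$ zip -r vehicleclassification.zip vehicleclassification
$ scp vehicleclassification.zip root@192.168.1.100:~/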

Download DPU models for the vehicleclassification example

The vehicleclassification example will require precompiled models for the DPU.

The link to the archive with the precompiled model files for classification of the make of the car can be found in the model.yaml file located in:

~/work/Vitis-AI-3.0/model-zoo/model-list/pt_vehicle-make-classification_VMMR_224_224_3.64G_3.0/model.yaml

Link from the make related model.yaml:

 https://www.xilinx.com/bin/public/openDownload?filename=vehicle_make_resnet18_pt-zcu102_zcu104_kv260-r3.0.0.tar.gz

The link to the archive with the precompiled model files for classification of the type of the car can be found in the model.yaml file located in:

~/work/Vitis-AI-3.0/model-zoo/model-list/pt_vehicle-type-classification_CarBodyStyle_224_224_3.64G_3.0/model.yaml

Link from the type related model.yaml:

 https://www.xilinx.com/bin/public/openDownload?filename=vehicle_make_resnet18_pt-zcu102_zcu104_kv260-r3.0.0.tar.gz

Copy both model archives for the AI 3.0 vehicleclassification example from the PC Ubuntu directory

 ~/Downloads/samples/vehicleclassification
to the board directory
~/
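
The archives can be fetched on the Ubuntu PC with wget and copied to the board with scp; a sketch under the assumption that the second URL is taken from the type-related model.yaml and that the board is reachable at the placeholder address 192.168.1.100:

$ cd ~/Downloads/samples/vehicleclassification
$ wget -O vehicle_make_resnet18_pt-zcu102_zcu104_kv260-r3.0.0.tar.gz "https://www.xilinx.com/bin/public/openDownload?filename=vehicle_make_resnet18_pt-zcu102_zcu104_kv260-r3.0.0.tar.gz"
$ wget -O <type-model-archive>.tar.gz "<URL from the type-related model.yaml>"
$ scp vehicle_make_resnet18_pt-zcu102_zcu104_kv260-r3.0.0.tar.gz root@192.168.1.100:~/
$ scp <type-model-archive>.tar.gz root@192.168.1.100:~/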

On the board, unzip the file

~/vehicleclassification.zip
to get the directory with the project files:
~/vehicleclassification

On the board, unpack (untar) both model archives to create these model directories:

~/vehicle_make_resnet18_pt
~/vehicle_make_resnet18_pt_acc
~/vehicle_type_resnet18_pt
~/vehicle_type_resnet18_pt_acc
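
On the board, the archives can be unpacked for example as follows; the archive file names are assumptions derived from the download links in the respective model.yaml files:

root@Trenz:~# tar -xzvf vehicle_make_resnet18_pt-zcu102_zcu104_kv260-r3.0.0.tar.gz
root@Trenz:~# tar -xzvf vehicle_type_resnet18_pt-zcu102_zcu104_kv260-r3.0.0.tar.gz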

On the board, copy the make-related model files:
~/vehicle_make_resnet18_pt/vehicle_make_resnet18_pt.prototxt
~/vehicle_make_resnet18_pt/vehicle_make_resnet18_pt.xmodel
to
~/vehicleclassification/vehicle_make_resnet18_pt.prototxt
~/vehicleclassification/vehicle_make_resnet18_pt.xmodel

On the board, copy the type-related model files:
~/vehicle_type_resnet18_pt/vehicle_type_resnet18_pt.prototxt
~/vehicle_type_resnet18_pt/vehicle_type_resnet18_pt.xmodel
to
~/vehicleclassification/vehicle_type_resnet18_pt.prototxt
~/vehicleclassification/vehicle_type_resnet18_pt.xmodel
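
A possible command sequence for these copy operations on the board:

root@Trenz:~# cp ~/vehicle_make_resnet18_pt/vehicle_make_resnet18_pt.prototxt ~/vehicleclassification/
root@Trenz:~# cp ~/vehicle_make_resnet18_pt/vehicle_make_resnet18_pt.xmodel ~/vehicleclassification/
root@Trenz:~# cp ~/vehicle_type_resnet18_pt/vehicle_type_resnet18_pt.prototxt ~/vehicleclassification/
root@Trenz:~# cp ~/vehicle_type_resnet18_pt/vehicle_type_resnet18_pt.xmodel ~/vehicleclassification/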

The Midnight Commander utility mc can also be used to perform these tasks.

Compile vehicleclassification example

On the board, change directory to:

~/vehicleclassification

Compile vehicleclassification examples by:

root@Trenz:~# chmod 777 build.sh
root@Trenz:~# ./build.sh

The compilation on the target board will take some time to finish. These executable binaries are created:

~/vehicleclassification/test_jpeg_vehicleclassification
~/vehicleclassification/test_performance_vehicleclassification
~/vehicleclassification/test_video_vehicleclassification
~/vehicleclassification/test_accuracy_vehicleclassification

Execute test_jpeg_vehicleclassification for detection of the make of the car by command:

root@Trenz:~# env XLNX_VART_FIRMWARE=/run/media/mmcblk1p1/dpu.xclbin ./test_jpeg_vehicleclassification vehicle_make_resnet18_pt.xmodel sample_vehicleclassification.jpg 

Execute test_jpeg_vehicleclassification for detection of the type of the car by command:

root@Trenz:~# env XLNX_VART_FIRMWARE=/run/media/mmcblk1p1/dpu.xclbin ./test_jpeg_vehicleclassification vehicle_type_resnet18_pt.xmodel sample_vehicleclassification.jpg 

Execute test_performance_vehicleclassification for detection of the make of the car by command:

root@Trenz:~# env XLNX_VART_FIRMWARE=/run/media/mmcblk1p1/dpu.xclbin ./test_performance_vehicleclassification vehicle_make_resnet18_pt.xmodel ./test_performance_vehicleclassification.list -s 60 -t 2

Execute test_performance_vehicleclassification for detection of the type of the car by command:

root@Trenz:~# env XLNX_VART_FIRMWARE=/run/media/mmcblk1p1/dpu.xclbin ./test_performance_vehicleclassification vehicle_type_resnet18_pt.xmodel ./test_performance_vehicleclassification.list -s 60 -t 2

Connect a USB camera to the board.

Execute test_video_vehicleclassification for detection of the make of the car from video input by command:

root@Trenz:~# env XLNX_VART_FIRMWARE=/run/media/mmcblk1p1/dpu.xclbin ./test_video_vehicleclassification vehicle_make_resnet18_pt.xmodel 0 -t 1

Execute test_video_vehicleclassification for detection of the type of the car from video input by command:

root@Trenz:~# env XLNX_VART_FIRMWARE=/run/media/mmcblk1p1/dpu.xclbin ./test_video_vehicleclassification vehicle_type_resnet18_pt.xmodel 0 -t 1

Parameter -s 60 requests the performance test to run for 60 seconds.
Parameter 0 selects USB camera 0 as the video input.
Parameter -t N requests execution with N threads (-t 1 for a single thread, -t 2 for two threads).


This photo documents test_video_vehicleclassification with the DPU model vehicle_make_resnet18_pt. The USB camera video input is processed at 20 FPS. The make of the car is displayed on the X11 desktop together with the input video and the assigned probability of the make of the car.


This is a more detailed X11 desktop screenshot of the running test_video_vehicleclassification application with the DPU model vehicle_make_resnet18_pt and USB camera video input.

The performance of the test_performance_vehicleclassification application is 167 FPS.

A large set of Vitis AI 3.0 demos with precompiled models for the DPU can be executed on the TE0820 test board in a similar fashion as described for the vehicleclassification demo. The demos can be compiled directly on the test board using the SD card created by the test_dpu_trd example.

Additional Vitis demos - The Vitis AI 3.0 Model Zoo

See:

Vitis AI Model Zoo — Vitis™ AI 3.0 documentation (xilinx.github.io)

This page includes links to a downloadable spreadsheet and an online table that incorporate key data about the supported AI 3.0 Model Zoo models. The spreadsheet and table include comprehensive information about all models, including links to the original papers and datasets, the source framework, input size, computational cost (GOPs), and float and quantized accuracy.








App. A: Change History and Legal Notices


Document Change History

To get the content of an older revision, go to "Change History" of this page and select the older document revision number.

Date | Document Revision | Authors | Description


-- | -- | all | initial release


-- | -- | all | Document change history.
