@page mmapi_build Building and Running
You can run the samples on Jetson without rebuilding them. However, if you
modify those samples, you must rebuild them before running them.
For information on building the samples on a host Linux PC (x86),
see @ref cross_platform_support.
Build and run the samples by following the procedures in this document:
<!-- Note to writers: The sample Markdown files all reference the first
3 steps below. If you change/renumber these steps, you may also need to
update the sample MDs. -->
1. [Export environment variables.](#step1)
2. [Use JetPack to install these programs:](#step2)
   - NVIDIA<sup>®</sup> CUDA<sup>®</sup>
   - OpenCV4Tegra
   - cuDNN
   - NVIDIA<sup>®</sup> TensorRT<sup>™</sup>, previously known as GIE
3. [Create symbolic links.](#step3)
4. [Optionally, set up cross-compiler support.](#step4)
5. [Build and run the samples.](#step5)
## Step 1: Export environment variables ##
* Export the X display with the following command:
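The export typically looks like the following sketch, which assumes the default local X display `:0` (adjust the value if your display differs):

```shell
# Point windowed samples at the local X server (assumes display :0)
export DISPLAY=:0
echo "$DISPLAY"
```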
## Step 2: Use JetPack to install CUDA/OpenCV4Tegra/cuDNN/TensorRT ##
If you have already installed these libraries, you can skip the following steps.
1. Download JetPack from the following website:
2. Run the installation script on the host machine with the following
   commands:

       $ chmod +x ./JetPack-L4T-<version>-linux-x64.run
       $ ./JetPack-L4T-<version>-linux-x64.run
3. Select "Jetson TX2 Development Kit (64bit) and Ubuntu host".
4. Select "custom" and click "clear action".
5. Select "CUDA Toolkit", "OpenCV", "cuDNN Package", and "TensorRT",
   and then install.
6. For installation details, see the `_installer` folder.
## Step 3: Create symbolic links ##
* Create symbolic links with the following commands:
      $ cd /usr/lib/aarch64-linux-gnu
      $ sudo ln -sf tegra-egl/libEGL.so.1 libEGL.so
      $ sudo ln -sf tegra-egl/libGLESv2.so.2 libGLESv2.so
      $ sudo ln -sf libv4l2.so.0 libv4l2.so
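The `-s` flag creates a symbolic (rather than hard) link, and `-f` replaces any link that already exists, so the commands above are safe to re-run. The pattern can be sanity-checked in a scratch directory with hypothetical file names:

```shell
# Demonstrate the ln -sf pattern in a throwaway directory
tmp=$(mktemp -d)
cd "$tmp"
touch libEGL.so.1             # stand-in for the real library file
ln -sf libEGL.so.1 libEGL.so  # -s: symbolic link, -f: replace if present
readlink libEGL.so            # prints the link target: libEGL.so.1
```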
## Step 4: Set up cross-compiler support (Optional) ##
* If you want to cross-compile the samples on a host Linux PC (x86),
  see @ref cross_platform_support.
## Step 5: Build and run the samples ##
* Build and run each sample as described in its documentation.
| Directory Location Relative to ll_samples/samples | Description |
|---------------------------------------------------|-------------|
| @ref l4t_mm_video_decode_cuda | Decodes H.264/H.265 video from a local file and then shares the YUV buffer with CUDA to draw a black box in the left corner. | <!-- l4t_mm_video_decode_cuda.md -->
| @ref l4t_mm_video_cuda_enc_group | Uses CUDA to draw a black box in the YUV buffer and then feeds it to the video encoder to generate an H.264/H.265 video file. | <!-- l4t_mm_video_cuda_enc_guide.md -->
| @ref l4t_mm_vid_decode_trt | Uses simple TensorRT calls to save the bounding box info to a file. | <!-- l4t_mm_video_dec_tensorrt.md -->
| @ref l4t_mm_jpeg_encode | Uses `libjpeg-8b` APIs to encode JPEG images from software-allocated buffers. | <!-- l4t_mm_jpeg_encode.md -->
| @ref l4t_mm_jpeg_decode | Uses `libjpeg-8b` APIs to decode a JPEG image from software-allocated buffers. | <!-- l4t_mm_jpeg_decode.md -->
| @ref nvvid_scal_col_group | Uses `V4L2` APIs to perform video format conversion and video scaling. | <!-- l4t_mm_vid_scal_col_fmt_conv.md -->
| @ref l4t_mm_08_video_decode_drm | Uses the NVIDIA<sup>®</sup> Tegra<sup>®</sup> Direct Rendering Manager (DRM) to render a video stream or UI. | <!-- l4t_mm_08_video_decode_drm.md -->
| @ref l4t_mm_jpeg_capture_group | Simultaneously uses the Libargus API to preview the camera stream and `libjpeg-8b` APIs to encode JPEG images. | <!-- l4t_mm_camera_jpeg_capture.md -->
| @ref l4t_mm_camera_recording | Gets the real-time camera stream from the Libargus API and feeds it into the video encoder to generate H.264/H.265 video files. | <!-- l4t_mm_camera_recording.md -->
| @ref l4t_mm_v4l2_cam_cuda_group | Captures images from a V4L2 camera and shares the stream with CUDA engines to draw a black box in the upper left corner. | <!-- l4t_mm_v4l2_cam_cuda.md -->
| @ref nvvid_13_multi_camera_group | Captures frames from multiple cameras and composites them into one frame. |
| @ref nvvid_backend_group | Performs intelligent video analytics on four concurrent video streams, with decoding on the on-chip decoders, video scaling on the on-chip scaler, and GPU compute. | <!-- l4t_mm_backend.md -->
| @ref l4t_mm_camcap_tensorrt_multichannel_group | Performs independent processing on four different resolutions of video capture coming directly from the camera. | <!-- l4t_mm_camcap_tensorrt_and_multi_enc.md -->
| @ref l4t_mm_v4l2cuda_group | Uses V4L2 image capturing with CUDA format conversion. | <!-- l4t_mm_v4l2cuda.md -->
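As a sketch, building and running one sample on the target typically follows the pattern below. The directory name `00_video_decode` and the binary name are illustrative assumptions; consult each sample's own page for its actual location and arguments.

```shell
# Hypothetical walkthrough: build a sample in its own directory, then run it
cd ll_samples/samples/00_video_decode   # illustrative sample directory
make                                    # each sample ships its own Makefile
./video_decode <args>                   # see the sample's page for arguments
```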
| Tool Name | Description | Directory Location |
|-------------|-------------|--------------------|
| @ref l4t_mm_caffe_to_tensorRT_group | TBD | tools/ConvertCaffeToTrtModel | <!-- l4t_mm_caffe_to_tensorrt_guide.md -->
@if view_outside_MMAPI_reference
For details on each sample's structure and the APIs they use, see the
"Sample Applications" chapter of <em>L4T Multimedia API
Reference</em>. You can get this reference from the NVIDIA Embedded
Developer Zone.
@else
For details on each sample's structure and the APIs they use, see
@ref l4t_mm_test_group in this reference.
@endif