L4T Multimedia API Reference

28.1 Release

multimedia_api/ll_samples/docs/l4t_mm_backend.md
Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved.
<!-- Sample is in the backend dir. -->

@page nvvid_backend_group Backend
@{

 - [Overview](#overview)
 - [Building and Running](#build_and_run)
 - [Flow](#flow)
 - [Command Line Options](#options)
 - [Key Structure and Classes](#key)

- - - - - - - - - - - - - - -
<a name="overview">
## Overview ##

This application implements a typical appliance performing intelligent video analytics.
Application areas include public safety, smart cities, and autonomous machines. This example demonstrates
four concurrent video streams going through a decoding process using the on-chip decoders, video scaling
using the on-chip scaler, and GPU compute. For simplicity of demonstration, only one of the channels uses
NVIDIA<sup>&reg;</sup> TensorRT<sup>&trade;</sup> to perform object identification
and generate a bounding box around each identified object. This sample also uses video converter functions
for various format conversions, and it uses EGLImage to demonstrate buffer sharing and image display.

In this sample, object detection is limited to identifying cars in video streams
of 960 x 540 resolution, running at up to 14 FPS. The network is based on
GoogleNet. The inference is performed on a frame-by-frame basis, and no object
tracking is involved. Note that this network is intended as an example that
shows how to use TensorRT to quickly build a compute pipeline. The sample
includes a trained GoogleNet network, trained with the NVIDIA Deep Learning GPU
Training System (DIGITS) on roughly 3000 frames taken from an elevation of
5-10 feet. Varying levels of detection accuracy are expected depending on the
video samples fed in. Because this sample is locked to half-HD resolution at
under 10 FPS, video feeds with a frame rate higher than the inference rate will
show stuttering during playback.

This sample does not require a camera.

<a name="build_and_run">
## Building and Running ##

#### Prerequisites ####
* You have followed Steps 1-3 in @ref mmapi_build.
* You have installed:
  * CUDA Toolkit
  * OpenCV4Tegra
* Optionally, you have installed TensorRT (previously known as the GPU Inference Engine (GIE)).

### To build
1. If you want to run the sample without TensorRT, set the following in the Makefile:

       ENABLETRT := 0

   By default, TensorRT is enabled.

2. Enter:

       $ cd backend
       $ make

### To run
* Enter:

      $ ./backend 1 ../../data/Video/sample_outdoor_car_1080p_10fps.h264 H264 \
      --trt-deployfile ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.prototxt \
      --trt-modelfile ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.caffemodel \
      --trt-forcefp32 0 --trt-proc-interval 1 -fps 10

  @note The TensorRT batch size can be configured on the third line of the following file:

      ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.prototxt

  The valid values are 1 (default), 2, or 4.

. <!-- leave this period alone. It forces Doxygen to stop indenting. -->

### To quit
* Enter `q`.

### To view command-line options
* Enter:

      $ cd backend
      $ ./backend -h

<a name="flow">
- - - - - - - - - - - - - - -
## Flow ##

The following image shows the movement of data through the sample when TensorRT
is not enabled.

![ ](l4t_mm_backend_one.jpg)

The following image shows data flow details for the channel that uses TensorRT.

![ ](l4t_mm_backend_two.jpg)

`NvEGLImageFromFd` is an NVIDIA API that returns an `EGLImage` pointer from the
file descriptor of a buffer allocated via the Tegra buffer mechanism. TensorRT then
uses the `EGLImage` buffer to render the bounding box onto the image.

### X11 Details
For X11 technical details, see:

http://www.x.org/docs/X11/xlib.pdf

- - - - - - - - - - - - - - -
<a name="key">
## Key Structure and Classes ##

The `context_t` structure (backend/v4l2_backend_test.h) manages all resources in the sample application.

|Element|Description|
|-------|-----------|
|[NvVideoDecoder](classNvVideoDecoder.html)|Contains all video decoding-related elements and functions.|
|[NvVideoConverter](classNvVideoConverter.html)|Contains elements and functions for video format conversion.|
|NvEglRenderer|Contains all EGL display rendering-related functions.|
|EGLImageKHR|The EGLImage used for CUDA processing. This type comes from the EGL open source graphics library.|

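The exact contents of `context_t` are defined in the sample source. As a rough illustration of how the elements above fit together, a hypothetical per-channel bundle might look like the following sketch (the type and field names are placeholders, not copied from v4l2_backend_test.h):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include "NvVideoDecoder.h"
    #include "NvVideoConverter.h"
    #include "NvEglRenderer.h"

    // Hypothetical per-channel resource bundle; the real context_t carries
    // additional bookkeeping (queues, threads, TensorRT state, and so on).
    typedef struct
    {
        NvVideoDecoder   *dec;        // on-chip video decoding
        NvVideoConverter *conv;       // scaling and format conversion
        NvEglRenderer    *renderer;   // EGL display output
        EGLImageKHR       egl_image;  // frame shared with CUDA processing
    } channel_resources_t;
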
### %NvVideoDecoder ###

The [NvVideoDecoder](classNvVideoDecoder.html) class creates a new V4L2 Video Decoder.
The following table describes the key %NvVideoDecoder members that this sample uses.

|Member|Description|
|-------------|---|
|NvV4l2Element::output_plane |Holds the V4L2 output plane.|
|NvV4l2Element::capture_plane |Holds the V4L2 capture plane.|
|NvVideoDecoder::createVideoDecoder |Static function that creates a video decoder object.|
|NvV4l2Element::subscribeEvent |Subscribes to a V4L2 event.|
|NvVideoDecoder::setExtControls |Sets external controls on the V4L2 device.|
|NvVideoDecoder::setOutputPlaneFormat |Sets the output plane format.|
|NvVideoDecoder::setCapturePlaneFormat |Sets the capture plane format.|
|NvV4l2Element::getControl |Gets the value of a control setting.|
|NvV4l2Element::dqEvent |Dequeues an event reported by the V4L2 device.|
|NvV4l2Element::isInError |Checks whether the element is in an error state.|

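The following minimal sketch shows how these decoder members fit together. It is illustrative only: error handling is omitted, and the pixel formats, buffer size, and resolution are placeholder assumptions rather than values taken from the sample.

    #include "NvVideoDecoder.h"   // pulls in the V4L2 and NVIDIA extension headers

    // Illustrative decoder bring-up; capture-plane buffer setup is omitted.
    static NvVideoDecoder *setup_decoder()
    {
        NvVideoDecoder *dec = NvVideoDecoder::createVideoDecoder("dec0");

        // Defer capture-plane configuration until the stream resolution is known.
        dec->subscribeEvent(V4L2_EVENT_RESOLUTION_CHANGE, 0, 0);

        // The output plane receives the compressed H.264 bitstream;
        // 2 MiB per buffer is an assumed chunk size.
        dec->setOutputPlaneFormat(V4L2_PIX_FMT_H264, 2 * 1024 * 1024);

        // ... queue bitstream buffers, then wait for the resolution event ...
        struct v4l2_event ev;
        dec->dqEvent(ev, 1000 /* max_wait_ms */);

        // Placeholder dimensions; the sample queries the real ones from V4L2.
        dec->setCapturePlaneFormat(V4L2_PIX_FMT_NV12M, 1280, 720);

        return dec->isInError() ? NULL : dec;
    }
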
### %NvVideoConverter ###

The NvVideoConverter class packages all video
conversion-related elements and functions. It performs color space conversion,
scaling, and conversion between hardware buffer memory and software buffer
memory. The following table describes the key %NvVideoConverter members that
this sample uses.

|Member|Description|
|-------------|---|
|NvV4l2Element::output_plane |Holds the output plane.|
|NvV4l2Element::capture_plane |Holds the capture plane.|
|NvVideoConverter::waitForIdle |Waits until all buffers queued on the output plane are converted and dequeued from the capture plane. This is a blocking call.|
|NvVideoConverter::setCapturePlaneFormat |Sets the format on the converter capture plane.|
|NvVideoConverter::setOutputPlaneFormat |Sets the format on the converter output plane.|

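The sketch below shows one way these members can be used to configure a scaling and format-conversion pass. The formats, resolutions, and layouts are placeholder assumptions, and the converter is assumed to be created with `NvVideoConverter::createVideoConverter`, analogous to the decoder.

    #include "NvVideoConverter.h"

    // Illustrative converter configuration for one channel.
    static void configure_converter(NvVideoConverter *conv)
    {
        // Output plane: what the converter consumes
        // (decoded NV12 frames, block-linear layout).
        conv->setOutputPlaneFormat(V4L2_PIX_FMT_NV12M, 1280, 720,
                                   V4L2_NV_BUFFER_LAYOUT_BLOCKLINEAR);

        // Capture plane: what the converter produces
        // (scaled pitch-linear RGBA suitable for EGLImage/CUDA access).
        conv->setCapturePlaneFormat(V4L2_PIX_FMT_ABGR32, 960, 540,
                                    V4L2_NV_BUFFER_LAYOUT_PITCH);
    }

    // Before tearing the pipeline down, block until every queued buffer
    // has been converted and dequeued:
    //     conv->waitForIdle(2000 /* max_wait_ms */);
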
`NvVideoDecoder` and `NvVideoConverter` contain two key elements:
`output_plane` and `capture_plane`. These objects are instances of the
NvV4l2ElementPlane class.

### %NvV4l2ElementPlane ###

[NvV4l2ElementPlane](group__l4t_mm__nvv4lelementplane__group.html) creates an [NvV4l2Element](classNvV4l2Element.html) plane.
The following table describes the key %NvV4l2ElementPlane members used in this
sample. `v4l2_buf` is a local variable inside the NvV4l2ElementPlane::dqThreadCallback
function, so its scope is limited to the callback. If other functions of the
sample must access this buffer, it must be copied inside the callback function
first.

|Member |Description|
|-------------------|---|
|NvV4l2ElementPlane::setupPlane |Sets up the plane of the V4L2 element.|
|NvV4l2ElementPlane::deinitPlane |Destroys the plane of the V4L2 element.|
|NvV4l2ElementPlane::setStreamStatus |Starts or stops the stream.|
|NvV4l2ElementPlane::setDQThreadCallback |Sets the callback function of the dequeue buffer thread.|
|NvV4l2ElementPlane::startDQThread |Starts the dequeue buffer thread.|
|NvV4l2ElementPlane::stopDQThread |Stops the dequeue buffer thread.|
|NvV4l2ElementPlane::qBuffer |Queues a V4L2 buffer on the plane.|
|NvV4l2ElementPlane::dqBuffer |Dequeues a V4L2 buffer from the plane.|
|NvV4l2ElementPlane::getNumBuffers |Gets the number of V4L2 buffers on the plane.|
|NvV4l2ElementPlane::getNumQueuedBuffers |Gets the number of V4L2 buffers currently queued on the plane.|
|NvV4l2ElementPlane::getNthBuffer |Gets the \c %NvBuffer object at index N.|

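As an illustration of the dequeue-thread pattern described above, the following sketch wires a callback to a decoder capture plane. The callback signature matches `dqThreadCallback`; the buffer count, memory type, and frame-processing step are placeholder assumptions.

    #include "NvVideoDecoder.h"

    // Called by the DQ thread each time a filled capture buffer is dequeued.
    static bool capture_callback(struct v4l2_buffer *v4l2_buf, NvBuffer *buffer,
                                 NvBuffer *shared_buffer, void *data)
    {
        // v4l2_buf is only valid inside this callback; copy anything that
        // other parts of the application need before returning.
        // process_frame(buffer);                    // hypothetical helper

        // Return the buffer to the plane so decoding can continue.
        NvVideoDecoder *dec = static_cast<NvVideoDecoder *>(data);
        dec->capture_plane.qBuffer(*v4l2_buf, NULL);
        return true;                                 // false stops the DQ thread
    }

    static void start_capture(NvVideoDecoder *dec)
    {
        // MMAP memory, 10 buffers, mapped by the element (counts are placeholders).
        dec->capture_plane.setupPlane(V4L2_MEMORY_MMAP, 10, true, false);
        dec->capture_plane.setStreamStatus(true);

        dec->capture_plane.setDQThreadCallback(capture_callback);
        dec->capture_plane.startDQThread(dec);       // 'dec' is passed as 'data'

        // The sample then queues all empty capture buffers before frames arrive.
    }
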
### %TRT_Context ###

The TRT_Context class provides a
series of interfaces to load a Caffe model and perform inference. The following
table describes the key %TRT_Context members used in this sample.

|%TRT_Context|Description|
|-----------|-----------|
|TRT_Context::destroyTrtContext |Destroys the TRT_Context.|
|TRT_Context::getNumTrtInstances |Gets the number of TRT_Context instances.|
|TRT_Context::doInference |Interface for inference after the TensorRT model is loaded.|

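The following sketch shows the shape of an inference call with these members. Creating the TRT_Context and loading the Caffe model happen through interfaces not listed above, so the context is assumed to be already initialized; the header name, input pointer, and box-drawing step are placeholders.

    #include <queue>
    #include <vector>
    #include <opencv2/core/core.hpp>
    #include "trt_inference.h"        // assumed header that declares TRT_Context

    // Run one inference pass on a frame already converted to the network's
    // expected layout and collect the detected bounding boxes.
    static void run_inference(TRT_Context &trt_ctx, float *input)
    {
        std::queue< std::vector<cv::Rect> > rect_queue;

        trt_ctx.doInference(&rect_queue, input);

        while (!rect_queue.empty())
        {
            // draw_boxes(rect_queue.front());       // hypothetical rendering step
            rect_queue.pop();
        }

        // On shutdown the sample releases TensorRT resources:
        //     trt_ctx.destroyTrtContext();
    }
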
### Functions to Create/Destroy EGLImage ###

The sample uses two global functions to create and destroy an EGLImage from a `dmabuf`
file descriptor. These functions are defined in nvbuf_utils.h.

|Global Function|Description|
|---------------|-----------|
|NvEGLImageFromFd() |Creates an EGLImage from a dmabuf fd.|
|NvDestroyEGLImage() |Destroys the EGLImage.|

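A minimal sketch of the create/use/destroy cycle follows. Obtaining the EGL display and the `dmabuf` file descriptor, as well as the CUDA processing itself, are outside the scope of this sketch.

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include "nvbuf_utils.h"

    // Wrap the hardware buffer behind dmabuf_fd in an EGLImage, hand it to the
    // GPU processing step, then release the EGLImage again.
    static void process_frame(EGLDisplay egl_display, int dmabuf_fd)
    {
        EGLImageKHR egl_image = NvEGLImageFromFd(egl_display, dmabuf_fd);
        if (egl_image == NULL)
            return;

        // ... CUDA/TensorRT work on the image would happen here ...

        NvDestroyEGLImage(egl_display, egl_image);
    }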