WARNING : Object detection uses a lot of CPU power. Shinobi's detection is built on OpenCV; if you can build OpenCV with CUDA support you will get far greater performance.
OpenCV is the base engine used for detection. Most object detection software uses OpenCV or something based on it.
sudo apt update && sudo apt install git libopencv-dev python-opencv openalpr openalpr-daemon openalpr-utils libopenalpr-dev libcairo2-dev libjpeg-dev libpango1.0-dev libgif-dev build-essential g++
WARNING : Only do this if you have a Nvidia graphics chip.
Disable the nouveau driver by creating a blacklist file.
sudo nano /etc/modprobe.d/blacklist-nouveau.conf
Place these lines in the blank file.
blacklist nouveau
options nouveau modeset=0
Rebuild the initramfs, then reboot.
sudo update-initramfs -u
sudo reboot
Download CUDA 8.0 from Nvidia's website.
CUDA cannot be installed while the display manager (X server) is running, so stop it first.
sudo service lightdm stop
sudo chmod +x cuda_8.0.44_linux-run
sudo ./cuda_8.0.44_linux-run
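Once the toolkit is installed, CUDA's bin and lib64 directories need to be on PATH and LD_LIBRARY_PATH so the OpenCV build can find nvcc. A minimal sketch, assuming the installer's default prefix of /usr/local/cuda-8.0 (written to a temp file here so you can review it before appending to ~/.bashrc):

```shell
# Sketch: export lines for the CUDA 8.0 toolchain. The prefix
# /usr/local/cuda-8.0 is the installer default; adjust if you changed it.
profile=/tmp/cuda-env.sh
cat > "$profile" <<'EOF'
export PATH=/usr/local/cuda-8.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH
EOF
cat "$profile"   # review, then append to ~/.bashrc and open a new shell
```

After these lines are in ~/.bashrc and a new shell is opened, nvcc --version should report the CUDA 8.0 release.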
Download and install cuDNN from Nvidia's website.
Install Example : sudo dpkg -i ./libcudnn6_6.0.21-1+cuda8.0_amd64.deb
sudo apt-get update && sudo apt-get install git libopencv-dev build-essential checkinstall cmake pkg-config yasm libtiff5-dev libjpeg-dev libjasper-dev libavcodec-dev libavformat-dev libswscale-dev libdc1394-22-dev libxine2-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev libv4l-dev python-dev python-numpy libtbb-dev libqt4-dev libgtk2.0-dev libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev x264 v4l-utils libpango1.0-dev libgif-dev g++ openalpr openalpr-daemon openalpr-utils libopenalpr-dev
Download the source for OpenCV
git clone https://github.com/opencv/opencv.git -b 2.4 opencv
Build the OpenCV makefiles from the source files. Create a separate build directory first; cmake is run from inside it, with the trailing .. pointing back at the source tree.
cd opencv
mkdir build
cd build
cmake -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_NVCUVID=ON -D BUILD_DOCS=ON -D WITH_XINE=ON -D WITH_CUDA=ON -D WITH_OPENGL=ON -D WITH_TBB=ON -D WITH_OPENNI=ON -D BUILD_EXAMPLES=ON -D WITH_OPENCL=ON -D CMAKE_BUILD_TYPE=RELEASE -D OPENCV_EXTRA_MODULES_PATH=../modules/opencv_contrib-master/modules/ -D ENABLE_FAST_MATH=1 -D CUDA_FAST_MATH=1 -D WITH_CUBLAS=1 ..
When cmake finishes, you should see something like this near the end of its output.
-- NVIDIA CUDA
-- Use CUFFT: YES
-- Use CUBLAS: YES
-- USE NVCUVID: YES
-- NVIDIA GPU arch: 20 21 30 35
-- NVIDIA PTX archs: 30
-- Use fast math: YES
-- Tiny gpu module: NO
Install OpenCV. This is the boring part; the compile can take a long time.
make -j$(nproc)
sudo make install
sudo ldconfig
NOTE : If you are currently the root user, switch back to a regular user before the next steps.
Navigate to your Shinobi directory, where you downloaded the Shinobi files, and install the Node.js wrappers needed to run the plugin.
Example : cd /home/Shinobi
sudo npm install opencv [email protected] moment
Setup the configuration file for the plugin.
cp plugins/opencv/conf.sample.json plugins/opencv/conf.json
OPTIONAL : Modify the conf.json to match your current listening port. Default is 8080.
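Changing the port can be done with a quick sed edit. This is only a sketch: the field names below are assumptions for illustration, the sample is written to /tmp, and the real file is plugins/opencv/conf.json; check conf.sample.json for the authoritative keys.

```shell
# Illustrative sample written to /tmp; in practice edit
# plugins/opencv/conf.json. Field names are assumed, not authoritative.
cat > /tmp/conf.json <<'EOF'
{
  "plug": "OpenCV",
  "host": "localhost",
  "port": 8080,
  "key": "change-this-key"
}
EOF
# Point the plugin at a Shinobi instance listening on 8090 instead of 8080.
sed -i 's/"port": 8080/"port": 8090/' /tmp/conf.json
grep '"port"' /tmp/conf.json
```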
Check that your main configuration file, for the main Shinobi app, has a Plugin Key set. If one is not set the plugin will not work. You may review conf.sample.json to see the default setup.
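A quick way to confirm a Plugin Key is present is to grep the main conf.json. The "pluginKeys" shape below is an assumption for illustration only; compare it against conf.sample.json in your install.

```shell
# Illustrative sample written to /tmp; in practice grep Shinobi's own
# conf.json. The "pluginKeys" structure shown here is assumed.
cat > /tmp/shinobi-conf.json <<'EOF'
{
  "port": 8080,
  "pluginKeys": { "OpenCV": "change-this-to-a-long-random-string" }
}
EOF
if grep -q '"pluginKeys"' /tmp/shinobi-conf.json; then
  echo "Plugin Key block present"
else
  echo "Plugin Key missing - the plugin will not connect"
fi
```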
Return to the root user.
Start the OpenCV plugin
pm2 start plugins/opencv/shinobi-opencv.js
When complete you will see Detector : OpenCV Connected in your Monitor Settings. Shinobi does not need to be restarted unless you modified the main configuration file.
Important settings to note are as follows.
Monitor Settings > Detector > Send Frames : Enabling this will push frames to your detection plugin, in this case OpenCV, for analysis.
Monitor Settings > Detector > Detect Objects : Enabling this will reveal a list of usable cascades. If you do not have any cascades, a link will be provided in the dashboard, or you can visit the public repository on GitHub.
Not everyone uses motion detection or object tracking, and the required libraries can be bothersome to install depending on the OS. For example, if shinobi-opencv.js were a built-in feature, everyone would be required to install OpenCV just to use basic features in Shinobi. Keeping it a plugin also allows us to swap it with a custom one or run it on another machine entirely. Sharing the work between multiple machines can be a great way to optimize performance.