You need the following Simdify® modules to complete all exercises involving the Raspberry Pi 4: Simdify® Free Edition, Simdify® Video Module, and Simdify® Raspberry Pi 4 Export Module.
This exercise teaches you how to create a new Simdify Shader document, implement a GLSL shader module that performs edge detection, and then visualize the results with a fragment shader. Your GPU must support GLSL #version 300 es to complete this exercise. This section assumes you have completed the previous exercise and that you know the RTSP address of the video stream from your Raspberry Pi 4 video camera.
The application displays a splash screen, and then the application desktop appears. The main menu is composed of three items: File, Desktop, and Help. These options provide access to commands relevant to the current context (at this point, an empty file). The application desktop changes when you create a new file or load a file from disk.
The software displays a wizard that allows you to specify the parameters of your new shader.
User Edge Detection
When the document is saved, the file name will be User Edge Detection.box.
We won't set a value for Video Pixel Format; leave it at its current value.
The application creates a new shader document and the main menu options change. In the main application window you can see the hierarchy on the left, the rendered shader with geometry in the middle, and the property sheet on the right.
If the shader compiled successfully, you should see a road in a canyon in the center of the worksheet.
For future reference, you can open the folder where this file is saved using File » Open Containing Folder (when nothing is selected).
This expands the graph so that you can see all the nodes.
NOTE: You can hover over each node icon in the graph for a description of the node and its function.
That covers the basic information about the graph.
Notice that you can see the GLSL version, profile, and source code locations. Many nodes, though not all, display useful information when you hover over them. We can see that the shader uses GLSL #version 300 es, as we requested when we created the document.
For example: rtsp://192.168.1.30:8554/video
The software displays a dialog that allows you to edit the node's properties.
Note that Video Source Path/URL accepts either RTSP video streams or files on disk. In this case we are setting it to an RTSP video stream, which for this exercise originates from your Raspberry Pi 4.
You won't see a change immediately. You still need to play the video.
Playback will start in a few seconds, and you'll see the output from your Raspberry Pi 4 video camera on the geometry in the Layout application. Your video won't match the screenshot below, but you will see video playing. If your Raspberry Pi 4 video does not appear in the Layout application, verify again that you can see the video stream using VLC.
The video playback stops. Now we'll write a shader!
The application displays a dialog that allows you to create a new include file.
SPA_UserSobelEdges.glsl
The software saves the file to disk and copies the file path to the Windows® clipboard. This means you can press CTRL + V to paste the path into a file-open dialog and open the file without searching for it.
The file SPA_UserSobelEdges.glsl opens and you'll notice that it's empty.
#ifndef SPA_USER_SOBEL_EDGES
#define SPA_USER_SOBEL_EDGES
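// 3x3 Sobel kernel weights. Only the first row ( -1, -2, -1 ) is referenced
// below; because length() discards sign, these act as 1-2-1 smoothing weights
// across each sampled column or row.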
const int KERNEL_SIZE = 9;
float u_kernel[KERNEL_SIZE] = float[KERNEL_SIZE](
    -1.0, -2.0, -1.0,
     0.0,  0.0,  0.0,
     1.0,  2.0,  1.0
);
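// Samples the YUV pixel at offset ( _x, _y ) from the current luma coordinate.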
#define SAMPLE_PIXEL( _x, _y ) ( SPA_VideoSamplePixel( image, luma_coord + ivec2( _x, _y ), u_coord, v_coord ) )
// Include load/store functionality for specific sampler/image types.
#if SPA_GL_VENDOR != SPA_GL_INTEL && __VERSION__ >= 330
vec4 SPA_Sobel( layout( r8ui ) uimage2D image, vec2 coords )
#else
vec4 SPA_Sobel( usampler2D image, vec2 coords )
#endif
{
    ivec2 luma_coord;
    ivec2 u_coord;
    ivec2 v_coord;
    SPA_VideoGetYuvCoords( image, coords, luma_coord, u_coord, v_coord );
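    // Horizontal gradient: 1-2-1 weighted left sample column minus the right column.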
    float dx = ( length( vec3( SAMPLE_PIXEL( -1, -1 ) ) / 255.0 * u_kernel[0] +
                         vec3( SAMPLE_PIXEL( -1,  0 ) ) / 255.0 * u_kernel[1] +
                         vec3( SAMPLE_PIXEL( -1, +1 ) ) / 255.0 * u_kernel[2] ) -
                 length( vec3( SAMPLE_PIXEL( +1, -1 ) ) / 255.0 * u_kernel[0] +
                         vec3( SAMPLE_PIXEL( +1,  0 ) ) / 255.0 * u_kernel[1] +
                         vec3( SAMPLE_PIXEL( +1, +1 ) ) / 255.0 * u_kernel[2] ) );
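    // Vertical gradient: 1-2-1 weighted top sample row minus the bottom row.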
    float dy = ( length( vec3( SAMPLE_PIXEL( -1, -1 ) ) / 255.0 * u_kernel[0] +
                         vec3( SAMPLE_PIXEL(  0, -1 ) ) / 255.0 * u_kernel[1] +
                         vec3( SAMPLE_PIXEL( +1, -1 ) ) / 255.0 * u_kernel[2] ) -
                 length( vec3( SAMPLE_PIXEL( -1, +1 ) ) / 255.0 * u_kernel[0] +
                         vec3( SAMPLE_PIXEL(  0, +1 ) ) / 255.0 * u_kernel[1] +
                         vec3( SAMPLE_PIXEL( +1, +1 ) ) / 255.0 * u_kernel[2] ) );
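    // Edge strength is the gradient magnitude, returned as a grayscale color.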
    float val = length( vec2( dx, dy ) );
    return vec4( val, val, val, 1.0 );
}
#endif // !SPA_USER_SOBEL_EDGES
This is a fairly basic filter, but it will work very well for this example.
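If you want hard, binary edges, an optional variation is to threshold the edge strength inside SPA_Sobel before returning it. The following is a minimal sketch, not part of this exercise, and the 0.5 cutoff is an arbitrary value you can tune for your scene:
    float val = length( vec2( dx, dy ) );
    // step() returns 1.0 when val exceeds the cutoff, producing white edge
    // pixels on a black background.
    val = step( 0.5, val );
    return vec4( val, val, val, 1.0 );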
This displays a dialog that allows you to select GLSL shader source code (and any include files). The file path of the source item you select will be copied to the Windows® clipboard so you can open it in a text editor. Select user_edge_detection_fragment_shader.glsl and click OK.
The file user_edge_detection_fragment_shader.glsl opens. Your fragment shader looks like this:
// #version 300
// The version number is automatically injected by the application.
// It is included above for reference purposes only.
#include <SPA_Version.glsl>
precision highp int;
precision highp float;
precision highp usampler2D;
#include <SPA_Constants.glsl>
#include <SPA_Video.glsl>
in vec2 fs_texcoord;
uniform usampler2D src_video_plane0;
out vec4 fragColor;
void main(void)
{
    ivec2 luma_coord;
    ivec2 u_coord;
    ivec2 v_coord;
    ivec2 video_dimension;
    video_dimension = SPA_VideoSizeYuvFormat( src_video_plane0 );
    SPA_VideoGetYuvCoords( src_video_plane0, fs_texcoord, luma_coord, u_coord, v_coord );
    uvec3 yuv = SPA_VideoSamplePixel( src_video_plane0, luma_coord, u_coord, v_coord );
    vec3 rgb = SPA_YUVToRGB( yuv );
    fragColor = vec4( rgb.r, rgb.g, rgb.b, 1.0 );
}
Edit the shader so that it matches the following:
// #version 300
// The version number is automatically injected by the application.
// It is included above for reference purposes only.
#include <SPA_Version.glsl>
precision highp int;
precision highp float;
precision highp usampler2D;
#include <SPA_Constants.glsl>
#include <SPA_Video.glsl>
#include <Modules\SPA_UserSobelEdges.glsl>
in vec2 fs_texcoord;
uniform usampler2D src_video_plane0;
out vec4 fragColor;
void main(void)
{
    ivec2 luma_coord;
    ivec2 u_coord;
    ivec2 v_coord;
    ivec2 video_dimension;
    video_dimension = SPA_VideoSizeYuvFormat( src_video_plane0 );
    SPA_VideoGetYuvCoords( src_video_plane0, fs_texcoord, luma_coord, u_coord, v_coord );
    uvec3 yuv = SPA_VideoSamplePixel( src_video_plane0, luma_coord, u_coord, v_coord );
    vec3 rgb = SPA_YUVToRGB( yuv );
    fragColor = SPA_Sobel( src_video_plane0, fs_texcoord );
}
Note that the shader imports the GLSL module we just implemented:
#include <Modules\SPA_UserSobelEdges.glsl>
We've also replaced the assignment to fragColor with a call to SPA_Sobel( ... ).
fragColor = SPA_Sobel( src_video_plane0, fs_texcoord );
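As an optional variation, because rgb is still computed in main(), you could overlay the edges on the original video instead of replacing it. This is a sketch, not part of this exercise, and the 0.75 blend factor is an arbitrary value:
    // Blend white edges over the original video. clamp() keeps strong gradients
    // from pushing the mix factor past 1.0.
    float edge = clamp( SPA_Sobel( src_video_plane0, fs_texcoord ).r, 0.0, 1.0 );
    fragColor = vec4( mix( rgb, vec3( 1.0 ), edge * 0.75 ), 1.0 );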
Now we need to verify that the edge detection shader works.
Playback will start in a few seconds and you'll see the output from your Raspberry Pi 4 video camera showing the results of the Sobel edge detection on the video, instead of the raw video you saw before.
If you don't see video with edge detection, please contact support@scenomics.com for assistance.
The video playback pauses.
This exercise is complete. Return to tutorials or proceed to the next exercise in this series.