Tutorial - Creating SQs from Vanilla Python Functions
To break this tutorial into manageable chunks, we start by developing the camera pipeline. The camera is mounted above the intersection and should output the coordinates of the car. For ease of the tutorial, we've provided raw camera footage so you won't have to acquire all the necessary hardware and set up the entire project.
The first step in our application is to write a function that sets up the camera and grabs 5 frames of the stored camera stream. We see an example below in camera_sampler:
def camera_sampler(trigger):
    import sys, time
    sys.path.insert(0, '/content/ticktalkpython/libraries')
    import camera_recognition
    global sq_state
    if sq_state.get('camera', None) is None:
        # Setup our various camera settings
        camera_specifications = camera_recognition.Settings()
        camera_specifications.darknetPath = '/content/darknet/'
        camera_specifications.useCamera = False
        camera_specifications.inputFilename = '/content/yolofiles/cav1/live_test_output.avi'
        camera_specifications.camTimeFile = '/content/yolofiles/cav1/cam_output.txt'
        camera_specifications.cameraHeight = .2
        camera_specifications.cameraAdjustmentAngle = 0.0
        camera_specifications.fps = 60
        camera_specifications.width = 1280
        camera_specifications.height = 720
        camera_specifications.flip = 2
        sq_state['camera'] = camera_recognition.Camera(camera_specifications)
    # Package up 5 frames so that we can parse them
    output_package = []
    for idx in range(5):
        frame_read, camera_timestamp = sq_state['camera'].takeCameraFrame()
        output_package.append([frame_read, camera_timestamp])
    return [output_package, time.time()]
Here, the function camera_sampler is written in Python without any direct mention of TTPython constructs. The simplest way to include this Python function in TTPython is to apply the @SQify function decorator to camera_sampler, as shown below.
@SQify
def camera_sampler(trigger):
    ...
The @SQify decorator transforms a "well-behaved" Python function into a SQ that can then be instantiated one or more times in a TTPython graph; we cannot transform arbitrary Python into a valid SQ. The SQ defines the basic unit of computation in TTPython. The graph execution of our program is transparent to the programmer: they do not need to worry about converting data into tokens to communicate between SQs. When a SQ has all the tokens it needs to run, it provides the values in those tokens as arguments to the wrapped function.
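To make the firing rule concrete, here is a minimal sketch of the idea in plain Python. This is our own emulation for illustration, not TTPython's actual scheduler: a node buffers incoming tokens and calls its function only once every input has one.

```python
class SQNode:
    """Hypothetical stand-in for a SQ: fires once all inputs hold a token."""

    def __init__(self, fn, n_inputs):
        self.fn = fn
        self.n_inputs = n_inputs
        self.tokens = {}          # input index -> token value

    def deliver(self, idx, value):
        self.tokens[idx] = value
        if len(self.tokens) == self.n_inputs:    # all tokens present: fire
            args = [self.tokens[i] for i in range(self.n_inputs)]
            self.tokens.clear()
            return self.fn(*args)                # token values become arguments
        return None                              # still waiting on inputs

add = SQNode(lambda a, b: a + b, n_inputs=2)
assert add.deliver(0, 1) is None    # one token only: the node does not fire
assert add.deliver(1, 2) == 3       # second token arrives: fires with (1, 2)
```

The programmer never writes this bookkeeping; it is what the runtime does on their behalf.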
Our SQs may still need to keep persistent state, so we use the global variable sq_state to store state between multiple executions of a SQ. Note that this does not share the same semantics as Python's global keyword: each SQ has its own notion of sq_state, and any call will only access the local SQ's version of persistent state. All data in TTPython follows pass-by-value semantics.
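The behavior can be sketched in plain Python as follows. The make_sq helper here is a hypothetical stand-in of ours (not TTPython internals) that gives each SQ instance its own private state dict, mirroring the lazy-init pattern used in camera_sampler.

```python
def make_sq(fn):
    sq_state = {}                  # each SQ instance gets its own state dict
    def run(*args):
        return fn(sq_state, *args)
    return run

def counter(sq_state, trigger):
    # Lazy-init pattern from the tutorial: set up once, reuse afterwards
    if sq_state.get('count', None) is None:
        sq_state['count'] = 0
    sq_state['count'] += 1
    return sq_state['count']

sq_a = make_sq(counter)
sq_b = make_sq(counter)
assert sq_a(None) == 1
assert sq_a(None) == 2    # state persists across calls of the same SQ...
assert sq_b(None) == 1    # ...but is not shared between SQ instances
```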
The wrapped function should not contain any mention of TTPython constructs. TTPython treats the SQified function as a black box and vice versa, so any attempt to use TTPython constructs inside it will fail.
NOTE: @SQify limits the expressiveness of the Python functions it decorates. *args is not allowed in function definitions, as our graph requires a statically known number of input arguments. Keyword arguments are allowed in function calls if they are also named in the function definition.
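The arity restriction can be illustrated with a small sketch. The check_sq_signature decorator below is our own illustrative stand-in, not TTPython's actual validation; it simply rejects definitions whose input count cannot be known statically.

```python
import inspect

def check_sq_signature(fn):
    """Illustrative check: reject *args, whose arity is not statically known."""
    for p in inspect.signature(fn).parameters.values():
        if p.kind == inspect.Parameter.VAR_POSITIONAL:
            raise TypeError(f"{fn.__name__}: *args is not allowed in a SQ")
    return fn

@check_sq_signature
def ok(trigger, frames):      # fixed arity: accepted
    return frames

rejected = False
try:
    @check_sq_signature
    def bad(*samples):        # variable arity: rejected at decoration time
        return samples
except TypeError:
    rejected = True

assert ok(1, [2]) == [2]
assert rejected
```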
We can now write a basic "Hello World" program to call camera_sampler once! We have informed TTPython how to include our sensing function as a SQ in our graph. Now, we need to define the graph structure for our program. To do so, we use another function decorator: @GRAPHify. We've also included some more functions to process the data generated by camera_sampler.
@SQify
def camera_sampler(trigger):
    import sys, time
    sys.path.insert(0, '/content/ticktalkpython/libraries')
    import camera_recognition
    global sq_state
    if sq_state.get('camera', None) is None:
        # Setup our various camera settings
        camera_specifications = camera_recognition.Settings()
        camera_specifications.darknetPath = '/content/darknet/'
        camera_specifications.useCamera = False
        camera_specifications.inputFilename = '/content/yolofiles/cav1/live_test_output.avi'
        camera_specifications.camTimeFile = '/content/yolofiles/cav1/cam_output.txt'
        camera_specifications.cameraHeight = .2
        camera_specifications.cameraAdjustmentAngle = 0.0
        camera_specifications.fps = 60
        camera_specifications.width = 1280
        camera_specifications.height = 720
        camera_specifications.flip = 2
        sq_state['camera'] = camera_recognition.Camera(camera_specifications)
    # Package up 5 frames so that we can parse them
    output_package = []
    for idx in range(5):
        frame_read, camera_timestamp = sq_state['camera'].takeCameraFrame()
        output_package.append([frame_read, camera_timestamp])
    return output_package
@SQify
def process_camera(cam_sample):
    import sys, time
    sys.path.insert(0, '/content/ticktalkpython/libraries')
    import camera_recognition
    global sq_state
    for each in cam_sample:
        camera_frame = each[0]
        camera_timestamp = each[1]
        if sq_state.get('camera_recognition', None) is None:
            # Setup our various camera settings
            camera_specifications = camera_recognition.Settings()
            camera_specifications.darknetPath = '/content/darknet/'
            camera_specifications.useCamera = False
            camera_specifications.inputFilename = '/content/yolofiles/cav1/live_test_output.avi'
            camera_specifications.camTimeFile = '/content/yolofiles/cav1/cam_output.txt'
            camera_specifications.cameraHeight = .2
            camera_specifications.cameraAdjustmentAngle = 0.0
            camera_specifications.fps = 60
            camera_specifications.width = 1280
            camera_specifications.height = 720
            camera_specifications.flip = 2
            sq_state['camera_recognition'] = camera_recognition.ProcessCamera(camera_specifications)
        coordinates, processed_timestamp = sq_state['camera_recognition'].processCameraFrame(camera_frame, camera_timestamp)
    return coordinates
@GRAPHify
def example_1_test(trigger):
    with TTClock.root() as root_clock:
        cam_sample = camera_sampler(trigger)
        processed_camera = process_camera(cam_sample)
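The dataflow that example_1_test describes can be previewed in plain Python with stub functions. The stubs below are our own placeholders for the real camera code (they do not use the camera_recognition library); they only show the pipeline shape, with the trigger feeding camera_sampler and its output feeding process_camera.

```python
def camera_sampler_stub(trigger):
    # Pretend to grab 5 [frame, timestamp] pairs, as camera_sampler does
    return [[f"frame{i}", float(i)] for i in range(5)]

def process_camera_stub(cam_sample):
    # Pretend to run detection on each frame, keeping the last result,
    # mirroring the loop structure of process_camera
    coordinates = None
    for frame, ts in cam_sample:
        coordinates = (len(frame), ts)
    return coordinates

# The graph wiring: trigger -> camera_sampler -> process_camera
cam_sample = camera_sampler_stub(trigger=True)
processed = process_camera_stub(cam_sample)
assert len(cam_sample) == 5
assert processed == (6, 4.0)
```

In the real program, TTPython builds this wiring from the decorated function body and moves the data between SQs as tokens.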
The function decorator @GRAPHify uses the decorated function as the main program for the graph. It defines the connections between the SQs created from the functions above. @GRAPHify requires that the underlying wrapped function take at least one argument, as this argument acts as the start of the graph's execution. You can change the value given to these parameters, but we'll ignore this for the purposes of this tutorial. Furthermore, any function called within @GRAPHify needs to have been decorated with @SQify. This can be observed in the function process_camera: since this function is used inside @GRAPHify to process the camera frames obtained from camera_sampler, we need to SQify the process_camera function. All TTPython abstractions work directly under the @GRAPHify decorator, while the programmer can reason about the correctness of the plain Python inside these SQs without any TTPython constructs.
Now that we understand how to insert SQs into TTPython with @SQify and run them through @GRAPHify, take a look at the CAVExamples.ipynb file and run Steps 2 and 3 to see how to compile and run a basic TTPython program.