# 9 – Developer’s Corner

This page is for problems we ran into as well as justifications for the methods and modules we used.

**Pinhole camera model and the Omnidirectional model**

Argus was originally developed around the Pinhole model for undistorting pixel coordinates, i.e. removing lens distortion. We made this choice because OpenCV readily supports solving the distortion equations of the Pinhole model in Python, and for most shooting modes the images undistorted well over the entire frame. The Omnidirectional (or Fisheye) model is supported by OpenCV 2.4.11, but is not currently wrapped in Python.

What we found, however, was that wide-angle shooting modes, at least on the GoPro Hero 3s and 4s, still showed lens distortion toward the outer edges of the frame even after undistorting with the models we developed in Argus. We initially suspected this was due to setting the tangential distortion coefficients (called t1 and t2) as well as the sixth-order radial distortion coefficient (called k3) to zero; estimating these coefficients had never been successful using OpenCV algorithms. We therefore tried estimating them once more, both with the Argus/OpenCV routine and with Sparse Bundle Adjustment, but alas, came up short. This necessitated another model for undistorting wide-angle shooting modes, i.e. the Omnidirectional or Fisheye model.

We were able to find a Matlab package called Ocam Calib and a revision of it called Ocam Calib Urban (Scaramuzza et al. (2006) and Urban et al. (2015)). Using these tools we were able to calibrate wide-angle shooting modes to root mean squared pixel errors of less than one. All shooting modes whose names contain ‘(Fisheye)’ in Argus DWarp use the Omnidirectional model. Argus supports these shooting modes by first undistorting your videos with DWarp and then continuing through the workflow assuming that the distortion profile is now zero.
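To make the role of the coefficients concrete, here is a minimal NumPy sketch of the Pinhole (Brown-Conrady) distortion model discussed above, with purely illustrative coefficient values. Setting k3, t1 (here `p1`), and t2 (here `p2`) to zero simply drops their terms; the inverse is found by the same fixed-point iteration OpenCV uses internally. This is a sketch of the model, not Argus's actual implementation:

```python
import numpy as np

def distort(xy, k1, k2, p1, p2, k3=0.0):
    # Apply the pinhole (Brown-Conrady) distortion model to
    # normalized image coordinates (x, y).
    x, y = xy
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.array([xd, yd])

def undistort(xy_d, k1, k2, p1, p2, k3=0.0, iters=20):
    # Invert the distortion by fixed-point iteration: repeatedly
    # subtract the tangential part and divide out the radial part.
    x, y = xy_d
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
        dx = 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
        dy = p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
        x = (xy_d[0] - dx) / radial
        y = (xy_d[1] - dy) / radial
    return np.array([x, y])
```

For mild distortion the iteration converges quickly; for strong fisheye distortion at the frame edges it can fail to recover the true point, which is one way to see why a different model is needed there.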

**Pyglet and Clicker**

Clicker uses the Python library Pyglet, which lets users program in OpenGL from Python. Clicker originally used the module Pygame, but we ran into many challenges trying to make Pygame a good fit for Clicker's purposes. Pygame does not support multiple windows, so the original beta of Clicker communicated across multiple processes via the disk drive, which was a nightmare in Python. In addition, we found Pyglet far faster and more tractable thanks to the speed advantages of drawing in OpenGL.

**Calibrating odd GoPro shooting modes**

Even employing both the Pinhole and Omnidirectional camera models to undistort pixel coordinates, we found some shooting modes on the GoPro Hero 4 that could not be calibrated, namely the ‘Superview’ modes. According to GoPro, these modes stretch the pixels on the left and right edges of the image, so no standard model will calibrate them well. Alternatively, one could use an arbitrary brute-force method, i.e. filming a grid and then fitting some non-linear polynomial to the relationship between observed and ideal grid points. That said, for the purposes of 3D calibration we recommend avoiding these shooting modes, as to our understanding they don't actually offer any increase in FOV.
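The brute-force grid idea could be sketched as follows: build a design matrix of 2D monomials from the observed grid points and least-squares fit a polynomial map to the ideal points. This is an illustration of the idea only (Argus does not implement it), and the polynomial order is an arbitrary choice:

```python
import numpy as np

def fit_poly_map(observed, ideal, order=3):
    # Fit coefficients of a 2D polynomial (monomials x^i * y^j with
    # i + j <= order) mapping observed grid points to ideal ones,
    # one coefficient column per output coordinate.
    x, y = observed[:, 0], observed[:, 1]
    cols = [x**i * y**j for i in range(order + 1)
                        for j in range(order + 1 - i)]
    A = np.column_stack(cols)
    coefs, *_ = np.linalg.lstsq(A, ideal, rcond=None)
    return coefs

def apply_poly_map(coefs, pts, order=3):
    # Evaluate the fitted polynomial map at new points.
    x, y = pts[:, 0], pts[:, 1]
    cols = [x**i * y**j for i in range(order + 1)
                        for j in range(order + 1 - i)]
    return np.column_stack(cols) @ coefs
```

Because the fit is arbitrary rather than derived from a physical camera model, it can interpolate a filmed grid well but says nothing about how the lens behaves between calibrations, which is why we do not rely on it.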

Dear Argus developers and community,

I’ve been calibrating a GoPro setup, doing things by hand. Although I never actually installed Argus, I’ve been using the Argus code. The open source code, and the documentation on this website, have been immensely useful for my project. Kudos and thank you!

Being new to the subject, I wanted to get a feel for our setup. So I tested how accurate a “hipshot calibration” (with the corners of a box) would be.

To quantify accuracy, I implemented a probabilistic model with the pymc3 library. Being primed on probabilistic programming, I do stuff like that all the time. All it does is replace the “numpy.linalg.lstsq” in “solve_dlt”, but the replacement allows one to estimate how “sure” a given calibration is about an arbitrary point in space.
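For context, the deterministic step being replaced is the standard 11-parameter direct linear transformation solve. Here is a generic NumPy sketch of that formulation (not Argus's exact code): two linear equations per calibration point, solved with ordinary least squares; the probabilistic variant swaps the `lstsq` call for a Bayesian regression so each coefficient gets a posterior rather than a point estimate:

```python
import numpy as np

def solve_dlt(xyz, uv):
    # Standard 11-parameter DLT: for each 3D point (X, Y, Z) observed
    # at pixel (u, v), two rows of a linear system in the coefficients L.
    n = xyz.shape[0]
    A = np.zeros((2 * n, 11))
    b = np.zeros(2 * n)
    for i, ((X, Y, Z), (u, v)) in enumerate(zip(xyz, uv)):
        A[2 * i]     = [X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z]
        A[2 * i + 1] = [0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z]
        b[2 * i], b[2 * i + 1] = u, v
    L, *_ = np.linalg.lstsq(A, b, rcond=None)
    return L  # the 11 DLT coefficients

def project(L, xyz):
    # Reproject 3D points to pixel coordinates with DLT coefficients L.
    X, Y, Z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    den = L[8] * X + L[9] * Y + L[10] * Z + 1
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / den
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / den
    return np.column_stack([u, v])
```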

I cannot judge how “innovative” or useful that would be for others. Maybe you have an opinion or feedback. Thank you in advance!

My attempts are documented here:

http://mielke-bio.info/falk/camera_calibration

and maybe they are at least useful for other users who are up to tinkering.

Kind regards,

Falk Mielke

University of Antwerp

Dear Falk,

Thanks for investigating this and very clearly documenting your work on your web page. Argus has some partially implemented routines that attempt a “brute force” solution to the reconstruction uncertainty problem through resampling, but at first glance your method looks much more elegant – I will ask one of the lab developers to look into adding it to Argus. The next step in our internal workflows would usually be to use a smoothing spline with the appropriate weights and error tolerance to find the smoothest possible trajectory that passes through the 95% CI of all the points in the time series.
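The smoothing-spline step described above could look roughly like this with SciPy's `UnivariateSpline`, where the per-point uncertainties supply the weights and the smoothing factor `s` sets the error tolerance. The data and uncertainty values here are synthetic stand-ins, not output from Argus or the probabilistic calibration:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic 1D trajectory with hypothetical per-frame sigmas standing
# in for the reconstruction uncertainties of each point.
t = np.linspace(0, 1, 50)
x_true = np.sin(2 * np.pi * t)
sigma = np.full_like(t, 0.05)
rng = np.random.default_rng(1)
x_obs = x_true + rng.normal(0, sigma)

# Weight each point by 1/sigma and pick s ~ number of points, so the
# spline is the smoothest curve whose weighted squared residuals stay
# within roughly the per-point error budget.
spline = UnivariateSpline(t, x_obs, w=1.0 / sigma, s=len(t))
x_smooth = spline(t)
```

Widening the weights to reflect a 95% CI (e.g. 1.96 sigma) relaxes the tolerance band the spline must pass through, which trades fidelity to the raw points for additional smoothness.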

Cheers,

Ty Hedrick