5 – Contact
Thank you for using Argus! Argus was developed by Dylan Ray, Dennis Evangelista and Brandon Jackson at the University of North Carolina at Chapel Hill.
Special thanks to:
Dr. Tyson Hedrick
Dr. Manolis I. A. Lourakis
Jorge Garza for the use of his art as the icon of Argus
My mom
–Dylan Ray
Argus team contact info:
- Brandon Jackson, jacksonbe3@longwood.edu
- Tyson L. Hedrick, thedrick@bio.unc.edu
- Dennis J. Evangelista, devangel77b@gmail.com
- Dylan D. Ray
Dear Argus team,
Just a quick question: how does calibration work for zoom lenses? Do we have to do the calibration at various focal lengths throughout the range of the lens, or is one calibration sufficient?
Best
Simon
Hi Simon,
I apologize for the late response. I had not thought to check this site often for troubleshooting, and if in the future you could email ddray1993@gmail.com or thedrick@bio.unc.edu we will get back to you sooner. Since the focal length of the camera changes with zoom lenses, I would advise doing a separate calibration for any zoom setting you use.
Best,
Dylan
Hi Dylan,
thanks for the information. Eventually I decided to use fixed focals for my project, in order to avoid any such issues and reach higher image quality.
Best
Simon
Dear Dr. Tyson Hedrick
I have some questions I would like to ask you regarding Argus, while you look into what went wrong with the Anaconda installation that I emailed you about earlier.
1. Is Argus similar to your program DLTdv5/6? Does it do the same thing, or is Argus more advanced compared to those two?
2. Can Argus process data in real time, or do videos need to be recorded first and then run through Argus for post-processing?
3. Can Argus automatically track multiple simple markers, such as those made with black paint or with stickers that contrast strongly with the background?
If you wouldn’t mind answering these questions, it would be really helpful. I’m doing a literature study on the possibility of software that can help with 3D reconstruction for post-processing, to diagnose a person’s level of disability without requiring expensive tools such as those from Vicon, so we can verify that an accessible 3D technique is possible for all clinicians.
Regards,
Pongkrit
Dear Pongkrit,
1. Argus is generally similar to the MATLAB DLTdv* programs, but provides some additional features for working with consumer-market cameras such as the GoPro series. It’s also written in Python and therefore available for free and does not require a costly license. Python is also higher performance than MATLAB for some operations, including reading and displaying compressed video.
2. Argus is not real-time, it needs saved video files.
3. Simultaneous multi-tracking is not implemented in Argus at this time. The MATLAB DLTdv5 attempts this, but the user interface is cumbersome and the whole operation not usually very effective. If you need simultaneous tracking of many similar markers you’re probably better served to code a small custom Python routine on your own or look for other software.
FYI, I’m still looking into your install issues. The hurricane here in North Carolina has been a bit disruptive so many things are slightly behind schedule.
Cheers,
Ty Hedrick
Dear Dr. Hedrick,
Thank you so much, sir. I am still trying to install Argus; I believe you may want to update the installation guideline, or check the file links in the Anaconda installation guideline, sir.
Please be safe, sir.
Regards,
Pongkrit
As a recap, Pongkrit encountered some problems with the Windows/Anaconda install recipe; we debugged the problem and I’ve updated the install recipe.
Hi,
1. Argus accomplishes the same task as DLTdv5, but its marking program, Clicker, is more basic and has far fewer features.
2. Yes, videos are needed, unless you already have pixel locations marked via some other method.
3. No. The auto-tracker is very basic and only tracks one thing at a time. I would recommend writing your own program if you want to track multiple objects in parallel.
Best,
Dylan
Hello. So I’ve been using the Argus program for my Masters project, and I’ve run into a bit of a snag. I didn’t realize my two cameras were recording at different frame rates (one at 30 fps and the other at 60 fps). 30 fps is still usable for my project, and I was wondering if you had a suggestion for reducing the 60 fps footage to 30 fps so that it can be used in conjunction with the other footage? Any advice would be helpful.
Hi Patrick,
Argus doesn’t directly support videos of different frame rates, so you have a few options. The most direct one is to reduce the frame rate of the 60 fps video. This can be done with the ffmpeg command line tool, see the Stackexchange discussion at https://stackoverflow.com/questions/11004137/re-sampling-h264-video-to-reduce-frame-rate-while-maintaining-high-image-quality for some tips.
Cheers,
Ty Hedrick
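As an editorial note, the frame-rate reduction described above can be sketched as a single ffmpeg invocation. The filenames here are placeholders; the fps filter re-encodes the video stream at the new rate, while any audio stream is copied through unchanged.

```shell
# Convert a 60 fps recording to 30 fps (input60.mp4 / output30.mp4 are
# placeholder names). -filter:v fps=30 resamples the video stream;
# -c:a copy passes the audio through without re-encoding.
ffmpeg -i input60.mp4 -filter:v fps=30 -c:a copy output30.mp4
```

Since the video stream must be re-encoded, it may also be worth setting an explicit quality option (for example a low CRF value for H.264) to limit compression loss, as discussed in the linked Stackexchange thread.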
Hi!
I have a problem with argus_gui.
When I start Argus by double-clicking, it doesn’t start, and when I start it with “python3 argus_win.py” in PowerShell, an error about argus_gui appears:
The ‘argus-gui==2.1.dev3’ distribution was not found and is required by the application.
I installed argus_gui as-is.
OK, the error message you’re getting is unfamiliar but might indicate that Argus isn’t correctly or fully installed. Try testing that by starting Python in a terminal or command-line context and then trying:
>>> import argus_gui as ag
If that works, Argus is at least installed. If it doesn’t work there will probably be a more informative error message.
–Ty
Dear Argus development team,
I hope this message finds you well 🙂
I’d like to ask a question about using Argus through the command line: will I get the same undistorted videos if I input my camera’s coefficients using --omni/--coefficients as I would if I were to use --camera and --mode to obtain said coefficients?
Cheers, thanks for making such an amazing software, and all the best!
Hi, do you have any recommendations when setting up the three cameras for 3dof tracking? I have three action cameras I will be mounting to a 20×20 foot frame to track inside of. Would it be better to have the cameras fairly close to one another, or further apart? I initially thought it made sense to have them as close to perpendicular to one another as possible, but after going through the calibration documentation it seems like you want them fairly close to be able to have the wand visible to all cameras at the same time. Does it not matter as long as the cameras are on epipolar lines?
Hi Pawel,
Cameras closer together are easier to calibrate; cameras further apart give you better 3D resolution. Also avoid the following conditions: 1) having all 3 cameras lie along a line in 3D space; spread them out so their positions define a plane instead. This ensures that the epipolar lines will cross one another, simplifying video analysis. 2) Avoid having two of the cameras look directly at one another; cameras positioned this way can’t independently contribute to measuring a 3D location.
Without knowing more about what you’re trying to record within your 20 x 20 x ? frame, I’d recommend putting 2 cameras in corners looking toward the middle, and the 3rd camera at the middle of the opposite side, also looking toward the center.
Cheers,
Ty Hedrick