Integration Testing

August 16, 2013 - 3 minute read -
testing gsoc

Completed Things

  • Added Audio Sources: Now the API allows adding audio sources. These are added through the TestSources class. Audio sources behave much like video sources: they are added using the new_test_audio function and expose similar properties. Every audio source has two important properties: the frequency and the wave. The wave takes integral values from 0 to 12.
  • Beginning with integration testing: The integration tests I have done so far are:
    • Server: starting and stopping the server multiple times.
    • Input Sources: adding video and audio input sources to the server using the API. I tried adding a huge number of video sources, and I also connected a preview to view the output. On my machine, with more than 50 sources, the output began to hang a bit, with slight stops in the video. These hangs become much more severe as the number of sources is increased; at around 100, the video was almost frozen. Audio sources were also added.
    • Controller: I am currently implementing this part. So far I have been checking whether the establish_connection method, which makes a DBus connection to the server, is working. Referring to the code at https://github.com/hyades/gst-switch/blob/python-api/python-api/tests/integrationtests/test_controller.py#L14 - I tried starting the server once and then repeatedly establishing a new connection. I set the NUM value to 3, which was the maximum value possible without any exceptions being raised in the code. With NUM=4, the exception is: ConnectionError: unix:abstract=gstswitch (Error receiving data: Connection reset by peer)
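The wave and frequency constraints described above can be sketched as a tiny model. Note this is a hypothetical stand-in for illustration, not the real gst-switch TestSources class, whose actual signature may differ:

```python
class TestAudioSource:
    """Hypothetical model of a test audio source's two key properties:
    wave (an integer from 0 to 12) and frequency (a positive number).
    Not the real gst-switch class -- just the constraints it enforces."""

    def __init__(self, frequency=440, wave=0):
        if not (isinstance(wave, int) and 0 <= wave <= 12):
            raise ValueError("wave must be an integer between 0 and 12")
        if frequency <= 0:
            raise ValueError("frequency must be positive")
        self.frequency = frequency
        self.wave = wave


src = TestAudioSource(frequency=220, wave=4)
print(src.frequency, src.wave)  # 220 4
```

In the real API the analogous call would go through new_test_audio on the server's TestSources instance.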

Other things going on

To proceed with integration tests, I need a method that can compare a video's key frames with pre-defined ones. This is done to ensure that methods like switch and change PIP work as they should. For extracting key frames, I am using ffmpeg. Then I use scipy for calculating the zero norm between images.
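As a sketch of the comparison step, assuming the extracted frames have been loaded as NumPy pixel arrays (function and variable names here are my own, for illustration), the zero "norm" is just the count of differing pixels, and the Manhattan norm gives the total magnitude of the difference:

```python
import numpy as np


def frame_difference(a, b):
    """Compare two frames given as equal-shaped arrays of pixel values.
    Returns (manhattan, zero_norm): total absolute pixel difference,
    and the number of pixels that differ at all."""
    diff = a.astype(int) - b.astype(int)   # signed, to avoid uint8 wraparound
    manhattan = np.abs(diff).sum()
    zero_norm = np.count_nonzero(diff)
    return manhattan, zero_norm


identical = np.full((4, 4), 128)   # tiny stand-in for a grayscale frame
shifted = identical.copy()
shifted[0, 0] += 10                # perturb a single pixel

print(frame_difference(identical, identical))  # (0, 0)
print(frame_difference(identical, shifted))    # (10, 1)
```

A pair of frames would count as "the same" when the zero norm is below some small threshold, to tolerate compression noise.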

References:

There are two methods for extracting frames using ffmpeg. The first is the one in the first link: extract the key frames directly. The problem here is that even for two videos generated by the same code, the number of key frames is not the same; I observed that some extra black frames were added, which makes it very difficult to compare the two sets of key frames. The second method is to take a frame every few seconds. This works on the assumption that any change I bring about in the video happens over an interval longer than this sampling period.
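Both extraction methods map to short ffmpeg invocations. The helpers below only build the command lines; the filter syntax (select on pict_type, and the fps filter) is standard ffmpeg, while the helper names and file names are my own for illustration:

```python
def keyframe_cmd(video, out_pattern):
    """Method one: extract only the key frames (I-frames)."""
    return ["ffmpeg", "-i", video,
            "-vf", r"select=eq(pict_type\,I)",
            "-vsync", "vfr", out_pattern]


def interval_cmd(video, out_pattern, seconds=5):
    """Method two: grab one frame every `seconds` seconds."""
    return ["ffmpeg", "-i", video,
            "-vf", "fps=1/%d" % seconds, out_pattern]


print(" ".join(interval_cmd("output.webm", "frame%03d.png")))
```

Either command list can then be run with subprocess.check_call, and the resulting image files fed to the frame-comparison step.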
