This article is written in English to give teams around the world an insight into our system.
Our software framework, called Amun-Ra, consumes the robot and ball positions provided by SSL-Vision, which are then processed by our AI to generate movement commands for our robots. The framework, which runs on a standard computer beside the playing field, is available as open source on GitHub.
SSL-Vision does not synchronize its cameras, nor is its output synchronized in any way. Therefore, instead of relying on SSL-Vision packets as a trigger signal, the main thread runs at a fixed frequency of 100 Hz. This ensures a constant period for the control loop.
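The fixed-period main loop described above can be sketched as follows. This is an illustrative Python sketch, not the framework's actual implementation; the function name and parameters are invented. The key idea is to sleep until a fixed deadline rather than waiting for vision packets:

```python
import time

def run_control_loop(process, period=0.01, steps=100):
    """Call `process` at a fixed frequency (100 Hz for period=0.01).

    Instead of using incoming SSL-Vision packets as a trigger, the loop
    wakes up on a fixed schedule, so the control period stays constant
    even when vision frames arrive irregularly.
    """
    next_deadline = time.monotonic()
    for _ in range(steps):
        process()
        next_deadline += period
        # Sleep until the next deadline; skip sleeping if we overran it.
        remaining = next_deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
```

Scheduling against an absolute deadline (rather than sleeping for a fixed duration after each iteration) prevents the processing time of each step from accumulating as drift.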
Estimates of the robots' and ball's positions and velocities have to be made at each time-step. This estimation is realized using a Kalman filter. To correctly update the filter with measurements from SSL-Vision, the age of this data has to be approximated: the time at which a vision packet is received is decremented by the execution time of SSL-Vision, which is included in the packet. The network latency can be neglected, as it is usually well below 1 ms. Ideally, the shutter time of the camera and the latency of the FireWire connection would be taken into account as well.
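The timestamp correction and the measurement update can be illustrated as follows. This is a deliberately minimal sketch, assuming a scalar filter on a single position component; the function names are invented, and the real filter tracks positions and velocities jointly:

```python
def measurement_timestamp(receive_time, vision_processing_time):
    """Approximate when a vision frame was actually captured.

    SSL-Vision includes its own execution time in each packet; subtracting
    it from the local receive time estimates the capture instant. Network
    latency (usually well below 1 ms) is neglected here, as are the camera
    shutter time and the transfer latency.
    """
    return receive_time - vision_processing_time


def kalman_update(x, p, z, r):
    """Scalar Kalman measurement update.

    x, p: prior estimate and its variance; z, r: measurement and its
    variance. Returns the posterior estimate and variance.
    """
    k = p / (p + r)                       # Kalman gain
    return x + k * (z - x), (1.0 - k) * p
```

With an equally uncertain prior and measurement (`p == r`), the update lands halfway between the prediction and the measurement, which matches the usual intuition for the Kalman gain.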
There is also simulation functionality built into Amun. The simulator, which is based on the Bullet physics engine, provides a rudimentary physical simulation of the playing field with the robots and the ball. It is vital for the development of our AI, as it enables testing small changes or even completely new ideas without risking damage to the real robots.
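To give a flavor of what one fixed-step update in such a rudimentary simulation looks like, here is a toy Python sketch of a ball decelerating under rolling friction. It is purely illustrative: the real simulator uses Bullet, and the friction constant here is made up:

```python
def step_ball(pos, vel, dt=0.01, friction=0.5):
    """Advance a rolling ball by one fixed time-step.

    pos, vel: (x, y) tuples in metres and metres per second.
    friction: deceleration in m/s^2 (illustrative value only).
    """
    speed = (vel[0] ** 2 + vel[1] ** 2) ** 0.5
    if speed <= friction * dt:
        return pos, (0.0, 0.0)  # the ball has come to rest
    scale = (speed - friction * dt) / speed
    new_vel = (vel[0] * scale, vel[1] * scale)
    new_pos = (pos[0] + new_vel[0] * dt, pos[1] + new_vel[1] * dt)
    return new_pos, new_vel
```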
In addition, the framework provides the following features:
- Team configuration
- Visualization of the vision data
- Visualization of the AI output
- Debugging of the AI
- Generation of referee instructions, without the need for an external RefBox
- Recording of an entire game, including any AI decisions
- Control of the simulation