Several example clients written in Python are provided with the MDK. You can find these at
~/mdk/bin/shared. Generally, they will work with any of the development Profiles (though some features may be missing in some configurations).
General control examples
Each of these examples illustrates how to use part of the interface of the robot (or simulator), exchanging data and, in some cases, performing some simple processing on that data.
- Receive, extract and display various sensor (or control) topics. Not exhaustive, but a good example of a minimal client for some straightforward sensors and actuators.
- Receive audio data from the microphones, and send audio to the speaker.
- Manage streaming of audio to the speaker, keeping the buffer topped up to ensure continuous playback (see also client_audio, which uses much the same approach).
- Receive images from the cameras, do some processing using OpenCV, and either display them or record them to video files.
- This client is used by our development team for testing various elements of the robot—a support engineer may ask you to run it to perform diagnostics. You may also find it useful to explore the client to see how those aspects of the robot are controlled.
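The audio examples above exchange raw sample blocks with the robot. As a rough sketch of the kind of processing such a client might do, the snippet below unpacks a block of samples and computes its RMS level; it assumes signed 16-bit little-endian samples, which is a common format but should be checked against the MDK documentation for the actual stream format.

```python
import math
import struct

def rms_level(block: bytes) -> float:
    """RMS amplitude of a block of signed 16-bit little-endian samples."""
    n = len(block) // 2
    if n == 0:
        return 0.0
    samples = struct.unpack("<%dh" % n, block[: n * 2])
    return math.sqrt(sum(s * s for s in samples) / n)

# synthetic block of four samples, standing in for a microphone message payload
block = struct.pack("<4h", 0, 1000, 0, -1000)
print(round(rms_level(block), 1))  # → 707.1
```

A real client would receive such blocks from the microphone topic and could use a level measure like this to, say, drive a simple sound-activity display.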
Special purpose examples
- This is by far the most comprehensive interfacing example, showing how to access all of the upstream signals and how to command all of the downstream signals. However, it is quite complex, so you may prefer to start by looking at one of the simpler examples.
- This example interacts with the demo controller, which must therefore be running. Run without arguments to see the options.
- This client allows live control of the configuration of the demo controller. It is still in development and is not currently documented.
Kinematic chain examples
- Simple use of the KC model to transform locations and directions between frames of the robot body model. This is not a live example, and does not connect to the robot.
- This more complex client illustrates how to use the KC and camera models to map a pixel location in image space all the way through the system to a location in the WORLD frame. It uses the ROS topics sensors/kinematic_joints and sensors/body_vel to configure the mapping. Since it is live, the output will update if you move the robot manually (or command it to move). The camera model is used to reverse the distortion introduced by the lenses of MiRo's cameras, and the KC model is then used to map from HEAD space to WORLD space.
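The KC model's job of mapping between body frames amounts to chaining per-link transforms. As a purely illustrative sketch (the frame names follow the text, but the geometry and numeric values here are invented, not MiRo's real kinematics), a point in a head-like frame can be mapped to WORLD using homogeneous 4x4 transforms:

```python
import numpy as np

def make_transform(yaw, translation):
    """4x4 homogeneous transform: rotate about z by `yaw`, then translate."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = translation
    return T

# invented example poses, standing in for what the KC model derives from
# sensors/kinematic_joints and sensors/body_vel
T_world_body = make_transform(np.pi / 2, [1.0, 0.0, 0.0])  # body pose in WORLD
T_body_head = make_transform(0.0, [0.0, 0.0, 0.2])         # head offset on body

p_head = np.array([0.1, 0.0, 0.0, 1.0])   # point in HEAD frame (homogeneous)
p_world = T_world_body @ T_body_head @ p_head
print(p_world[:3])  # location of the same point in the WORLD frame
```

Chaining the matrices in this order (WORLD←BODY, then BODY←HEAD) is exactly the composition the KC model performs internally; the live client simply refreshes the joint-dependent transforms as new sensor messages arrive.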