Robotics and Evolutionary Computing

Google Scholar

You can also find out about my published research on Google Scholar, which provides links to the papers and their abstracts, together with the latest citation counts and index values.

Overview

While I was working on my PhD I attended a workshop in Cairns given by the famous Rodney Brooks, in which he introduced his work on behaviour-based robotics. I was immediately fascinated, and after completing my PhD I set about purchasing some robots of my own and experimenting with Brooks’ ideas. These robots became the RoboCoasters, a robot soccer team that represented Griffith University in the FIRA robot soccer world cups in Rockhampton, Beijing and Seoul in the early 2000s. The team reached the quarter-finals in Seoul, where they were defeated by the eventual champions from Germany.

The picture on the right shows our first three robots. They would chase an orange golf ball across a ping-pong table, watched over by a giant Sony video camera (things were not very sophisticated in those days).


My main work on these robots was the development of a robust vision system to recognise the shapes situated on top of the robots (see the picture above and the FIRA 2002 paper). However, my most enjoyable moment was the success of my implementation of one of Braitenberg’s simple vehicle algorithms to control each robot’s ball-chasing behaviour. Before doing this I had written pages of code to control the robot motors based on mathematical projections of where I wanted each robot to be in the next few seconds. In contrast, my implementation of Braitenberg’s algorithm used about four lines of code.

All it did was make the amount of power sent to each of the two wheel motors proportional to the angle between the direction the robot was facing and the position of the ball. So if the ball were 45 degrees to the right, the left wheel would get power proportional to x + 45 and the right wheel would get power proportional to x - 45, where x is the current desired speed of the robot. This would make the robot turn slightly toward the ball in that particular time frame (bearing in mind this command would be rerun 50 times per second). Once the robot was pointing toward the ball, the angle would be zero and each wheel would get the same power, so it would go straight ahead. The only other condition was that x itself was inversely proportional to the absolute value of the ball’s angle (i.e. the closer the angle gets to zero, regardless of whether the ball is to the left or right, the greater x becomes and the faster the robot moves). Given these simple rules, the robot immediately started to smoothly and effectively turn and chase the ball, almost as if it had a mind of its own.
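
The rule is simple enough to sketch in a few lines. The following is an illustrative reconstruction, not the original RoboCoaster code: the function and constant names are hypothetical, and the exact scaling of the base speed is a guess (the description above only says it was inversely related to the absolute ball angle).

```python
# Illustrative sketch of the Braitenberg-style ball-chasing rule described
# above; the names and scaling constants are hypothetical, not the original code.

MAX_SPEED = 100.0  # hypothetical full-power value for a wheel motor


def chase_step(ball_angle_deg: float) -> tuple[float, float]:
    """One control step (rerun roughly 50 times per second).

    ball_angle_deg is the angle between the robot's heading and the ball,
    positive when the ball is to the right. Returns (left, right) wheel power.
    """
    # Base speed x grows as the ball lines up with the heading; the exact
    # inverse relationship to |angle| is an assumption.
    x = MAX_SPEED / (1.0 + abs(ball_angle_deg) / 45.0)

    # Steering: a ball 45 degrees to the right gives the left wheel x + 45
    # and the right wheel x - 45, turning the robot toward the ball.
    left_power = x + ball_angle_deg
    right_power = x - ball_angle_deg
    return left_power, right_power
```

Run 50 times per second with the ball angle taken from the overhead camera, this rule produces exactly the behaviour described: the robot turns until the angle shrinks toward zero, speeds up as it lines up, and then drives straight at the ball.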

I also branched out from my focus on pure constraint satisfaction by experimenting with evolutionary algorithms, although these were still aimed at generating new constraint satisfaction algorithms (e.g. see CEC 2004). This approach emerged from an experience I had with a local search algorithm that was outperforming all my other prototypes, and that I thought was working according to my very clever design. But when I was writing up a paper describing the algorithm, I found I had made a coding error, and the only reason the algorithm was doing so well was because of that error. I came to see that most of the successful refinements I had made to my algorithms were primarily a matter of trial and error rather than deep insight. So while the core ideas – like constraint weighting, or Braitenberg’s vehicle control algorithm – were a matter of insight, the implementation details that produced the best results could not be foreseen. Hence, I came to think it would be better to automate this process of trial and error rather than exhaustively trying out all the possible algorithms by hand. This idea was taken up by one of my PhD students, Stuart Bain, who developed a genetic programming approach to generate and test new constraint satisfaction algorithms, producing the four papers below and culminating in the publication of his PhD thesis.
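
To make the generate-and-test idea concrete, here is a minimal, hypothetical sketch of an evolutionary loop searching over candidate variable-ordering heuristics. It is not the genetic programming system described in the Bain et al. papers below: the feature names are assumptions, and the placeholder fitness function stands in for running a real constraint solver on benchmark problems.

```python
# Toy evolutionary loop for discovering constraint-solving heuristics.
# NOT the Bain et al. genetic programming system: candidates here are just
# weight vectors over hypothetical variable features, and the fitness
# function is a stand-in for running a solver on benchmark problems.

import random

FEATURES = ["domain_size", "degree", "weighted_conflicts"]  # hypothetical


def random_heuristic():
    """A candidate heuristic: one weight per feature; a solver would pick
    the variable minimising the weighted feature sum."""
    return [random.uniform(-1.0, 1.0) for _ in FEATURES]


def mutate(weights, sigma=0.2):
    """Perturb each weight with small Gaussian noise."""
    return [w + random.gauss(0.0, sigma) for w in weights]


def fitness(weights):
    """Placeholder fitness. A real system would run a constraint solver on
    benchmark problems with this heuristic and score it by, say, negated
    search nodes or runtime. Here we reward closeness to an arbitrary
    'good' weighting so the loop runs end to end."""
    target = [0.8, -0.5, 0.3]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))


def evolve(generations=100, pop_size=20, elite=5):
    """Keep the best candidates each generation and refill the population
    with mutated copies of them."""
    population = [random_heuristic() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:elite]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - elite)]
    return max(population, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print("best heuristic weights:", [round(w, 2) for w in best])
```

The systems in the papers below evolved richer algorithmic structures than fixed weight vectors, but the basic generate, evaluate, select and vary loop is the same.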

2005

Bain, S., Thornton, J. R. & Sattar, A. (2005). Evolving Variable-Ordering Heuristics for Constrained Optimisation. CP 2005: 11th International Conference on Principles and Practice of Constraint Programming. Lecture Notes in Computer Science, 3709, 732-736, Springer, ISSN 0302-9743.

Bain, S., Thornton, J. R. & Sattar, A. (2005). A Comparison of Evolutionary Methods for the Discovery of Local Search Heuristics. AI 2005: 18th Australian Joint Conference on Artificial Intelligence. Lecture Notes in Computer Science, 3809, 1068-1074, Springer, ISSN 0302-9743.

2004

Bain, S., Thornton, J. R. & Sattar, A. (2004). Methods of Automatic Algorithm Generation. PRICAI 2004: 8th Pacific Rim International Conference on Artificial Intelligence. Lecture Notes in Computer Science, 3157, 144-153, Springer, ISSN 0302-9743.

Bain, S., Thornton, J. R. & Sattar, A. (2004). Evolving Algorithms for Constraint Satisfaction. In: CEC 2004: Proceedings of the 2004 Congress on Evolutionary Computation, IEEE, pp. 265-272.

2003

Leonard, J., Treffner, P., & Thornton, J. R. (2003). Tau Guidance for Mobile Soccer Robots. Studies in Perception and Action VII, 169-172, Psychology Press, ISBN 978-0805848052.

2002

Ferreira Jr., V., Thornton, J. R. & Leonard, J. (2002). A Subsumption Architecture for Robotic Soccer. In: Proceedings of the 2002 FIRA Robot World Congress, KAIST (Korea Advanced Institute of Science and Technology), pp. 648-654.

Thornton, J. R., Leonard, J., Wiseby, R. & Lee, Y. (2002). Shape Recognition and Enhanced Control Systems for Robot Soccer. In: Proceedings of the 2002 FIRA Robot World Congress, KAIST (Korea Advanced Institute of Science and Technology), pp. 670-675.