My old post-doc and postgrad webspace is offline, so this is a place to collect some info on my publications and previous research. I hope to add some of my current and newer work too, as it becomes public. My work has mostly been in AI, and more specifically agent-based systems. Currently I work in applied space robotics, with a dash of machine learning and computer vision.
Publications & Invited Talks
P. Rendell, I. Wallace, M. Woods, and D. Long, ‘PMOPS: The Planetary Mission On-Board Planner and Scheduler’, in Proc. 14th Symposium on Advanced Space Technologies in Robotics and Automation, 2017.
I. Wallace, M. Woods, ‘Exploring and Exploiting Large Field Trial Datasets for Perception and Simulation’, in Proc. 14th Symposium on Advanced Space Technologies in Robotics and Automation, 2017.
I. Wallace, N. Read, M. Woods, ‘LabelMars.net: Driving Next-Generation Science Autonomy with Large High Quality Dataset Collection’, in Proc. 14th Symposium on Advanced Space Technologies in Robotics and Automation, 2017.
Iain Wallace, S. P. Schwenzer, M. Woods, N. Read, S. Wright, K. Waumsley, L. Joudrier, LabelMars.net: Crowd-Sourcing an Extremely Large High Quality Martian Image Dataset, 48th Lunar and Planetary Science Conference, LPSC 2017
Iain Wallace, Invited book review of Planetary Rovers: Robotic Exploration of the Solar System, The Aeronautical Journal January 2017, Royal Aeronautical Society.
Iain Wallace, Invited Talk: Mars Rovers to Inspection Robots: GPUs for Applied Machine Intelligence and Visualisation, GPU Technology Conference 2016, Amsterdam
Iain Wallace, Mark Woods, MASTER: A Mobile Autonomous Scientist for Terrestrial and Extra-Terrestrial Research, 13th Symposium on Advanced Space Technologies in Robotics and Automation, ASTRA 2015
Mark Woods, Andy Shaw, Iain Wallace, Mateusz Malinowski, The Chameleon Field Trial: Toward Efficient, Terrain Sensitive Navigation, 13th Symposium on Advanced Space Technologies in Robotics and Automation, ASTRA 2015
Mark Woods, Andy Shaw, Iain Wallace, Mateusz Malinowski and Philip Rendell, Demonstrating Autonomous Mars Rover Science Operations in the Atacama Desert, 13th Symposium on Advanced Space Technologies in Robotics and Automation, ASTRA 2015
Mark Woods, Andy Shaw, Iain Wallace, Mateusz Malinowski and Philip Rendell, Demonstrating Autonomous Mars Rover Science Operations in the Atacama Desert, 12th International Symposium on Artificial Intelligence, Robotics and Automation in Space – i-SAIRAS 2014
Mark Woods, Andy Shaw, Iain Wallace, Mateusz Malinowski and Philip Rendell, Simulating Remote Mars Rover Operations in the Atacama Desert for Future ESA Missions, 13th International Conference on Space Operations 2014
Iain Wallace and Michael Rovatsos, A Computational Framework for Practical Social Reasoning, Computational Intelligence, doi: 10.1111/coin.12014, 2013
R. Aylett, M. Kriegel, I. Wallace, E. Marquez Segura, J. Mecurio, S. Nylander, P. Vargas, "Do I remember you? Memory and identity in multiple embodiments," in Proc. IEEE RO-MAN 2013, pp. 143-148, 26-29 Aug. 2013
Iain Wallace, Michael Kriegel and Ruth Aylett, Migrating Artificial Companions, Demo & Paper, In Proceedings of the Eleventh International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2012)
Iain Wallace, Ph.D. Thesis, Social Reasoning in Multi-Agent Systems with the Expectation-Strategy-Behaviour Framework, 2010
Iain Wallace and Michael Rovatsos. Executing specifications of social reasoning agents. In Wiebe van der Hoek, Gal A. Kaminka, Yves Lesperance, Michael Luck, and Sandip Sen, editors, Proceedings of The Eighth International Workshop on Declarative Agent Languages and Technologies, 2010.
Iain Wallace and Michael Rovatsos. Bounded Social Reasoning in the ESB Framework. In Proceedings of the Eighth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2009), pages 1097-1104, 2009.
Iain Wallace, An Expectation Framework for Agent Social Reasoning, European Agent Systems Summer School, selected student paper, 2007.
A. Bonenfant, Z. Chen, K. Hammond, G. Michaelson, A. Wallace, I. Wallace, Towards resource-certified software: a formal cost model for time and its application to an image-processing example, Proceedings of the 2007 ACM Symposium on Applied Computing.
Z. Chen, Z. Husz, I. Wallace, A. Wallace, Video Object Tracking Based on a Chamfer Distance Transform, IEEE International Conference on Image Processing (ICIP) 2007
I. Wallace, A mean-shift tracker: implementations in C++ and Hume, Technical Report HW-MACS-TR-0035, Heriot-Watt Technical Report Series 2005
Iain Wallace, A Hybrid Architecture for Robotic Soccer, MEng thesis, University of York, 2006.
Social Reasoning in Multi-Agent Systems with the Expectation-Strategy-Behaviour (ESB) Framework
The aim of this section is to give a brief overview of my PhD work, for someone not familiar with the multi-agent systems research area. Of course it's impossible to cover 4 years of hard work in depth within a few paragraphs, so all I hope to do here is give you a (very) informal intuition as to what it's about, and pointers to further information and publications. The "ultimate" reference for my PhD work is probably the most recent journal paper, representing the ideas distilled and refined.
What’s an agent then?
Good question. To keep it simple, it's an autonomous AI entity. It could be some software, a robot, a character in a computer game, or a webserver load balancing with others to show you a page. The key concept is that it has some goals to achieve, but autonomy in how they're achieved.
What’s social reasoning then?
It's how an agent reasons about its interactions with other agents – as opposed to practical reasoning, which is usually defined as how an agent reasons about interactions with its environment.
What’s your basic idea?
Well, the wordy-science version would be my core thesis itself:
“It is possible to separate social reasoning from practical reasoning, allowing for a generic specification of social reasoning schemes. This will allow for the development of generic algorithms for bounded execution of agent reasoning and analysis of designs.”
But it's easy to explain the basic gist of it in a simpler way. Traditionally, agent (symbolic) reasoning is implemented/modelled as one (practical) reasoner which handles all the agent reasoning. I separate out the social reasoning about interaction, and propose a generic method to specify, model and implement it. Basically, ESB allows you to specify the properties of the agent's social reasoning, and provides algorithms to generate and execute a model of that reasoning. There are many benefits to this: principally bounding the complexity of reasoning, allowing easier implementation of theoretical reasoning methods, and allowing existing methods to be combined for different tasks. For example, you might want to implement an agent that reasons using both norms and the principles of Joint Intentions.
How’s it work?
Well, it’s a simple idea really, though it can be a bit complicated to get your head round initially. I think the best summary I’ve got is this poster I made for the AAMAS’09 conference. The idea is that agents hold expectations (about others) that they can verify with a test, and update their reasoning based on this result. Behaviours can then condition on expectations to control the agent, and various strategies can be used to bound or shape the reasoning process. The cunning part is that the reasoning process can be captured in a graph structure, allowing all sorts of clever strategies based on graph- or game-theoretic principles, and easy implementation through model-checking.
What did you implement?
Well, I implemented the ESB reasoner, best described in my DALT'10 paper. An easier place to start is probably with the slides for the talk I gave there, though. It's basically a combination of Jason as a BDI interpreter, extended to support an ESB reasoner that uses NuSMV to model-check behaviour conditions on the graphs of expectations. Tech-wise, Jason is written in Java, as are my extensions (plus some AgentSpeak(L) interpreted by Jason).
To evaluate (full details, as ever, in the thesis) I implemented several examples of social reasoning from the literature: principally Joint Intentions (JI), and a model of normative reasoning. As an example of the way ESB can ease agent design through the modularity of social reasoning specifications, I took the generic JI example and extended it to a (simple) multi-agent robotic soccer scenario. There's a little video of this below. It's not much to look at – the clever parts are all in the agents' mental states, and it's a simple grid-world they inhabit. What you're seeing is one agent negotiating commitment to a joint plan with two others: one moves up each side of the pitch, the first passes to one of them, and that agent then shoots for a goal.
A Hybrid Architecture for Robotic Soccer
This section gives a brief overview of my MEng project, which was to create an agent control architecture for a team of cooperative agents. It was implemented in a (simulated) robotic soccer environment, as it provided a few nice benefits. It’s a pretty simple application (robots just move or kick), a simple world, limited agents and there’s a lot of existing work.
If you want more info than this brief overview, can I recommend you start with a short slideshow I made, or for more detail and experimental results, see my MEng thesis. Or maybe you just want to scroll right down and watch the video ;-)
What’s the basic idea?
Simple reactive behaviours defined by percept/action pairs can be used to create "intelligent" behaviour, but the large sets of rules needed for complex behaviour cause difficulties in design, implementation and coordination.
Classical planning can be good for coordination, using pre- and post-conditions for actions, and solving more complex problems. But planning is expensive, and cannot easily deal with the unexpected.
The solution presented in my thesis is to combine the best of both approaches: a global team planner acting at a high level, with combinations of reactive behaviours that robustly achieve the post-conditions of plan actions through their emergent properties.
- Plan actions map to simple sets of behaviours.
- A plan is a sequence of behaviour changes for each robot.
- Like a football coach planning a match – an overall plan, but players carry it out reacting to opposition.
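The mapping described above can be sketched roughly like this. Again, the behaviour sets, percepts and post-conditions here are invented for illustration – the thesis implementation differs – but the control loop captures the idea: the plan only advances when an action's post-condition is observed, and otherwise the current step's reactive rules drive the robot.

```python
BEHAVIOUR_SETS = {
    # plan action -> reactive rules: percept predicate -> low-level action
    "pass_from_corner": [
        (lambda p: not p["has_ball"], "move_to_ball"),
        (lambda p: p["has_ball"], "kick_to_teammate"),
    ],
    "shoot_for_goal": [
        (lambda p: not p["facing_goal"], "turn_to_goal"),
        (lambda p: p["facing_goal"], "kick_at_goal"),
    ],
}

# Post-condition that tells the planner a plan action has succeeded.
POSTCONDITIONS = {
    "pass_from_corner": lambda p: p["teammate_has_ball"],
    "shoot_for_goal": lambda p: p["scored"],
}

def step(plan, percepts):
    """One control cycle: pop completed plan actions, then fire the first
    reactive behaviour of the current action whose percepts match."""
    while plan and POSTCONDITIONS[plan[0]](percepts):
        plan.pop(0)                      # post-condition met: next step
    if not plan:
        return None                      # plan complete
    for condition, action in BEHAVIOUR_SETS[plan[0]]:
        if condition(percepts):
            return action
    return None

plan = ["pass_from_corner", "shoot_for_goal"]
percepts = {"has_ball": True, "teammate_has_ball": False,
            "facing_goal": False, "scored": False}
print(step(plan, percepts))   # -> 'kick_to_teammate'
```

Note the division of labour: the planner never sees percepts directly, and the behaviours never see the plan – they just reliably bring about the post-conditions the planner relies on.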
Here's a clip from a simple example running in the simulator. What you're seeing here is a top-level plan with two steps – pass from the corner, and then an agent with the ball shoots to score. Agents doing nothing try to maintain clear space around them, so as to be free to receive a pass and score. This is the "corner kick FSM" described in my thesis.