The Research Page
Papers, Books and Manuscripts
Google Scholar
You can also find my published research on Google Scholar, which, as well as providing links to the papers and their abstracts, gives the latest citation counts and index values.
2021
Thornton, J. (2021). The Questioning of Intelligence: A phenomenological exploration of what it means to be intelligent. FUBText: Brighton, 511 pp. ISBN 978-1-8384787-0-4
Back Cover: The Questioning of Intelligence is an inquiry by intelligence of intelligence. It is a questioning of the ground on which we understand ourselves and our capacity for intelligent thought and action. Our means of inquiry is the way of phenomenology, the way of entering into the immediacy of being conscious, now. It is from here we can start to investigate the philosophical and scientific inheritance that has formed the collective understanding we currently have of our place in the universe. In questioning this inheritance we are asking after the source from out of which it has emerged, the same source that is manifesting our experience of being alive and conscious now. According to the scientific materialism of our age, this manifestation of experience is no more than an effect of microphysical events occurring in our nervous systems, events that themselves have been determined by an inexorably mechanistic process of physical evolution. It is this materialistic presupposition that stands in the way of recognising the essential form of our natural, innate intelligence. For it is unintelligible to think that a system of purely mechanistic calculations could produce the experience of meaning that is the hallmark of human consciousness. Seeing this is not a matter of argument or proof; it is a matter of direct phenomenological insight. It is on the basis of such insight that we look again at the meaning of the findings of contemporary science. For once we put aside this collective materialism, our science reveals an entirely new dimension of significance, where the meaning of our being conscious and intelligent is reflected back in the forms of processes that science has already discovered. From here, perhaps, we can even start to intuit the action of a universal intentionality that expresses itself through these processes, including the processes of our own human consciousness.
Newton, M. A. H., Polash, M. M. A., Pham, D. N., Thornton, J. R., Su, K. & Sattar, A. (2021, In Press). Evaluating Logic Gate Constraints in Local Search for Structured Satisfiability Problems. Artificial Intelligence Review, Springer, ISSN 0269-2821.
Abstract: Conjunctive normal forms (CNF) of structured satisfiability problems contain logic gate patterns, so Boolean circuits (BC) can largely be recovered from them, restoring structural information that is lost in the CNF. However, it is not known which logic gates are useful for local search on BCs, which gates in particular help local search the most, or why. In this article, we empirically show that exploitation of xor, xnor, eq, and not gates is a key factor behind the performance of local search algorithms using single variable flips when adapted to logic gate constraints. Moreover, controlled experiments and investigations into the variables selected for flipping further elucidate these findings. To reach these conclusions, we adapted the AdaptNovelty+ and CCAnr algorithms, from two prominent families of local search algorithms for satisfiability, to cope with logic gate-based constraint models. We performed our experiments using a large set of benchmark instances from SATLib, SAT2014, and SAT2020. We also present techniques to eliminate cycles among logic gates detected from CNF and to propagate equivalence of variables statically through the logic gate dependency relationships.
2018
Cowley, B., Thornton, J. R., Main, L. & Sattar, A. (2018). Precision without Precisions: Handling uncertainty with a single predictive model. In: ALIFE 2018, Proceedings of the 2018 Conference on Artificial Life, Tokyo, Japan, MIT Press, pp. 129-136.
Abstract: The predictive processing theory of cognition and neural encoding dictates that hierarchical regions in the neocortex learn and encode predictive hypotheses of current and future stimuli. To better handle uncertainty these regions must also learn, infer, and encode the precision of stimuli. In this treatment we investigate the potential of handling uncertainty within a single learned predictive model. We exploit the rich predictive models formed by the learning of temporal sequences within a Hierarchical Temporal Memory (HTM) framework, a cortically inspired connectionist system of self-organizing predictive cells. We weight a cell’s feedforward response by the degree of its own temporal expectations of a response. We test this model on data that has been saturated with temporal or spatial noise, and show significant improvements over traditional HTM systems. In addition we perform an experiment based on the Posner cuing task and show that the system displays phenomena consistent with attention and biased competition. We conclude that the observed effects are similar to those of precision encoding.
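The weighting scheme at the heart of this model can be illustrated with a minimal sketch (illustrative only: the function name and the simple multiplicative rule below are assumptions, not the published implementation). Each cell's feedforward overlap is scaled by its own temporal expectation, so strongly predicted cells respond more reliably under noisy input:

    import numpy as np

    def precision_weighted_response(overlap, expectation, floor=0.1):
        """Scale each cell's feedforward overlap by its own temporal expectation.

        overlap     : raw feedforward overlap scores, one per cell
        expectation : values in [0, 1] giving how strongly each cell predicted itself active
        floor       : minimum weight, so unexpected cells are attenuated rather than silenced
        """
        weights = floor + (1.0 - floor) * expectation   # map expectation onto [floor, 1]
        return overlap * weights

    # Toy example: two cells with equal overlap; the strongly predicted cell dominates.
    overlap = np.array([8.0, 8.0])
    expectation = np.array([0.9, 0.1])
    print(precision_weighted_response(overlap, expectation))   # [7.28  1.52]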
2017
Cowley, B., Thornton, J. R., Main, L. & Sattar, A. (2017). Dynamic Thresholds for Self-Organizing Predictive Cells. In: ECAL 2017, Proceedings of the 14th European Conference on Artificial Life, Lyon, France, MIT Press, pp. 114-121.
Abstract: It has become increasingly popular to view the brain as a prediction machine. This view has informed a number of theories of brain function, the most prominent being predictive processing, where generative hypotheses are iteratively updated by error signals. In this treatment we take a lower-level approach by examining the hierarchical temporal memory framework, which views individual pyramidal cells as the primary predictive unit of a self-organizing networked sequence learning system. Within this computational framework, the cell behaviour is constrained by a number of parameters which are static and shared across all cells. To further increase the adaptability of the cells, we shift away from this paradigm by introducing the concept of dynamic thresholds. This allows for the activation threshold (the amount of activity on a distal dendrite needed to form a prediction) to be adjusted continuously and individually for each cell. As a metric we use the prior, or unconditional, probability of activity on the proximal dendrites. Our experiments show that using this metric for dynamic thresholds can improve the predictive capabilities of the system in a number of domains, including anomaly detection, where we achieve state-of-the-art results on the Numenta Anomaly Benchmark.
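A minimal sketch of the dynamic threshold idea follows, under stated assumptions: the class name, the exponential moving average used to estimate the prior, and the mapping from prior to threshold are illustrative choices only, not the algorithm described in the paper.

    class DynamicThresholdCell:
        """A predictive cell whose distal activation threshold adapts to its own
        prior (unconditional) probability of proximal activity."""

        def __init__(self, base_threshold=12, min_threshold=4, decay=0.99):
            self.base_threshold = base_threshold   # the shared static threshold in standard HTM
            self.min_threshold = min_threshold
            self.decay = decay
            self.prior = 0.0                       # running estimate of P(proximal activity)

        def observe_proximal(self, active):
            # Exponential moving average of how often the cell's column is active.
            self.prior = self.decay * self.prior + (1.0 - self.decay) * float(active)

        def threshold(self):
            # Illustrative mapping: rarely active cells may predict from weaker distal
            # evidence. Whether the prior should raise or lower the threshold is a
            # design choice here, not taken from the paper.
            return max(self.min_threshold, int(round(self.base_threshold * self.prior)))

        def predicts(self, distal_activity):
            return distal_activity >= self.threshold()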
Cowley, B. & Thornton, J. R. (2017). Feedback Modulated Attention within a Predictive Framework. ACALCI 2017, Artificial Life and Computational Intelligence, 3rd Australasian Conference. Lecture Notes in Computer Science, 10142, 61-73. Springer, ISSN 0302-9743.
Abstract: Attention is both ubiquitous throughout and key to our cognitive experience. It has been shown to filter out mundane stimuli, while simultaneously communicating specific stimuli from the lowest levels of perception through to the highest levels of cognition. In this paper we present a connectionist system with mechanisms that produce both exogenous (bottom-up) and endogenous (top-down) attention. The foundational algorithm of our system is the Temporal Pooler (TP), a neocortically inspired algorithm that learns and predicts temporal sequences. We make a number of modifications to the Temporal Pooler and place it in a framework which is inspired by predictive coding. We use a novel technique in which feedback connections elicit endogenous attention by disrupting the learned representations of attended sequences. Our experiments show that this approach successfully filters attended stimuli and suppresses unattended stimuli.
Main, L. & Thornton, J. R. (2017). Stable Sparse Encoding for Predictive Processing. In: SSCI 2017, Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence, Hawaii, USA, IEEE, pp. 1-8.
Abstract: Hierarchical Predictive Coding systems that adopt prediction as their primary goal are heavily reliant on the stable sparse coding of sensory input. Furthermore, such systems require their spatial coding function to be adaptive and able to reform to reflect changes within the environment. These properties of stability and adaptiveness should emerge naturally from the spatial coding system and not be reliant on additional control mechanisms. Hierarchical Temporal Memory is a cortically inspired model that encapsulates both sparse coding and temporal processing functions. We present an investigation into the stability and adaptiveness of three alternative versions of the spatial pooling (SP) function. Our results show that two of these SP algorithms are able to form stable sparse distributed representations of audio input, while still remaining adaptive to changes within the input data.
2015
Kneller, A. & Thornton, J. R. (2015). Distal Dendrite Feedback in Hierarchical Temporal Memory. In: IJCNN 2015, Proceedings of the 2015 International Joint Conference on Neural Networks, Killarney, Ireland, IEEE, pp. 385-392.
Abstract: Recent theories have proposed that the unifying principle of brain function is the minimisation of variational free energy and that this is best achieved using a hierarchical predictive coding (HPC) framework. Hierarchical Temporal Memory (HTM) is a model of neocortical function that fits within the free energy framework but does not implement predictive coding. Recent work has attempted to integrate predictive coding and hierarchical message passing into the existing suite of HTM Cortical Learning Algorithms (CLA) producing a PC-CLA hybrid. In this paper we examine for the first time how such hierarchical message passing can be implemented in a pure HTM framework using distal dendrite structures that are already implemented in the CLA temporal pooler. We show this approach outperforms the more simplistic proximal dendrite structures used in the PC-CLA hybrid and also that the new CLA hierarchy is effective for anomaly detection and image reconstruction problems that are beyond the reach of the existing single-level CLA framework.
Main, L. & Thornton, J. R. (2015). A Cortically Inspired Model for Bioacoustics Recognition. ICONIP 2015: 22nd International Conference on Neural Information Processing, Istanbul, Turkey, Lecture Notes in Computer Science, 9492, 348-355, Springer, ISSN 0302-9743.
Abstract: Wavelet transforms have shown superior performance in bioacoustic recognition tasks compared to the more commonly used Mel-Frequency Cepstral Coefficients, and offer the ability to more closely model the frequency response behaviour of the basilar membrane within the cochlea. In this paper we evaluate a gammatone wavelet as a preprocessor for the Hierarchical Temporal Memory (HTM) model of the neocortex, as part of the broader development of a biologically motivated approach to sound recognition. Specifically, we implement a gammatone/equivalent rectangular bandwidth wavelet transform and apply it, in conjunction with the HTM’s spatial pooler, to recognise frog calls, bird songs and insect sounds. We demonstrate the improved performance of wavelets for feature detection and the potential viability of using HTM for bioacoustic recognition. Our classification accuracies of 99.5% in detecting insect sounds and 96.3% in detecting frog calls are significant improvements on results previously published for the same datasets.
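For readers unfamiliar with this style of preprocessing, the sketch below builds a gammatone kernel using the standard Glasberg and Moore equivalent rectangular bandwidth formula and the textbook gammatone impulse response; the wavelet implementation used in the paper may differ in its exact parameterisation.

    import numpy as np

    def erb_bandwidth(fc_hz):
        """Glasberg & Moore equivalent rectangular bandwidth (Hz) at centre frequency fc."""
        return 24.7 * (4.37 * fc_hz / 1000.0 + 1.0)

    def gammatone_kernel(fc_hz, fs_hz, duration=0.05, order=4):
        """Gammatone impulse response: t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*fc*t)."""
        t = np.arange(0.0, duration, 1.0 / fs_hz)
        b = 1.019 * erb_bandwidth(fc_hz)          # common bandwidth scaling for a 4th-order filter
        g = t ** (order - 1) * np.exp(-2.0 * np.pi * b * t) * np.cos(2.0 * np.pi * fc_hz * t)
        return g / np.max(np.abs(g))              # normalise peak amplitude

    kernel = gammatone_kernel(fc_hz=1000.0, fs_hz=16000.0)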
Thornton, J. R. (2015). The Transcendence of Computational Intelligence. PhD Confirmation Document, 141 pp. Completed under the supervision of John Mandalios, School of Humanities, Griffith University.
Following on from the introduction, Chapters 2-5 are all intended for inclusion in the final thesis. They cover Part One described above, with Chapter 2 providing a method of access, and Chapters 3-5 examining the work of Descartes, Schopenhauer and Husserl in relation to the material presented in Chapter 2. Finally, the appendices present two conference papers and one workshop paper that I have written and presented during my candidature. These papers cover various aspects of the questions I intend to address in Parts Two and Three, including the relevance of Heidegger to contemporary cognitive neuroscience (Appendix A), whether the activity of the human brain is causally closed under laws that determine the local low-level functioning of neural populations (Appendix B) and how contemporary models of neocortical functioning can be mapped onto Husserl’s phenomenological account of temporal consciousness (Appendix C).
2014
Cai, S., Luo, C., Thornton, J. R. & Su, K. (2014). Tailoring Local Search for Partial MaxSAT. In: AAAI 2014, Proceedings of the 28th Conference on Artificial Intelligence, Quebec, Canada, MIT Press, pp. 2623-2629.
Abstract: Partial MaxSAT (PMS) is a generalization of SAT and MaxSAT. Many real-world problems can be encoded into PMS in a more natural and compact way than into SAT or MaxSAT. In this paper, we propose new ideas for local search for PMS, which mainly rely on the distinction between hard and soft clauses. We use these ideas to develop a local search PMS algorithm called Dist. Experimental results on PMS benchmarks from MaxSAT Evaluation 2013 show that Dist significantly outperforms state-of-the-art PMS algorithms, including both local search algorithms and complete ones, on random and crafted benchmarks. For the industrial benchmark, Dist dramatically outperforms previous local search algorithms and is comparable with complete algorithms.
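The hard/soft distinction that drives this approach can be made concrete with a small sketch (the function names, the lexicographic hard-then-soft scoring and the random tie breaking are illustrative assumptions, not Dist's actual heuristics):

    import random

    def pms_score(clauses, assignment):
        """Score an assignment for Partial MaxSAT: hard violations always dominate soft cost.

        clauses    : list of (literals, is_hard), literals being signed variable ids
        assignment : dict mapping variable id -> bool
        Returns (unsat_hard, unsat_soft); tuples compare lexicographically, so
        satisfying hard clauses takes strict priority over soft clauses.
        """
        unsat_hard, unsat_soft = 0, 0
        for literals, is_hard in clauses:
            satisfied = any(assignment[abs(l)] == (l > 0) for l in literals)
            if not satisfied:
                if is_hard:
                    unsat_hard += 1
                else:
                    unsat_soft += 1
        return unsat_hard, unsat_soft

    def best_flip(clauses, assignment, variables):
        """Pick the flip giving the lexicographically best (hard, soft) score."""
        scored = []
        for v in variables:
            assignment[v] = not assignment[v]
            scored.append((pms_score(clauses, assignment), v))
            assignment[v] = not assignment[v]
        best = min(score for score, _ in scored)
        return random.choice([v for score, v in scored if score == best])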
Cowley, B., Kneller, A. & Thornton, J. R. (2014). Cortically-Inspired Overcomplete Feature Learning for Colour Images. PRICAI 2014: Trends in Artificial Intelligence, 13th Pacific Rim International Conference on Artificial Intelligence. Lecture Notes in Computer Science, 8862, 720-732, Springer, ISSN 0302-9743.
Abstract: The Hierarchical Temporal Memory (HTM) framework is a deep learning system inspired by the functioning of the human neocortex. In this paper we investigate the feasibility of this framework by evaluating the performance of one component, the spatial pooler. Using a recently developed implementation, the augmented spatial pooler (ASP), as a single layer feature detector, we test its performance using a standard image classification pipeline. The main contributions of the paper are the implementation and evaluation of modifications to ASP that enable it to form overcomplete representations of the input and to form connections with multiple data channels. Our results show that these modifications significantly improve the utility of ASP, making its performance competitive with more traditional feature detectors such as sparse restricted Boltzmann machines and sparse auto-encoders.
Thornton, J. R. (2014). Hierarchical Temporal Intentionality. Abstract in: ASSC 18: Handbook of the 18th Conference of the Association for the Scientific Study of Consciousness, Brisbane, Australia, University of Queensland, pp. 41-42.
Abstract: In recent years, a more unified understanding of the functioning of the neocortex has emerged. This understanding sees the neocortex as a hierarchically structured Bayesian prediction machine that perceives and acts according to a delicate interaction between direct inputs from the body and environment, and feedback within the brain concerning what it expects those inputs to be. This hierarchical predictive coding model provides an elegant account of how attention, perception, cognition and action can be understood as different aspects of a single process that aims to minimise prediction errors. Nevertheless, predictive coding models are not immediately concerned with predicting the future, but rather with predicting what is to happen now. As such, the predictive coding paradigm leaves the temporal horizons of experience unexplained. These horizons were first clearly identified in Husserl’s investigations of the unified tripartite structure of temporal consciousness. Several recent attempts have been made to explain how such a tripartite structure could be realised within current understandings of neocortical processing, but, as yet, none have been convincing. In this paper I introduce Jeff Hawkins’ model of neocortical processing that extends hierarchical predictive coding by proposing that the entire neocortex is engaged in sequence learning. This hierarchical temporal memory (HTM) model provides a coherent mapping between processes occurring in the brain and the structures of temporal consciousness. The paper also provides a phenomenological examination and reinterpretation of the meaning of the HTM model. This re-interpretation takes both consciousness and neocortical functioning to be fundamentally structured in terms of intentionality.
2013
Thornton, J. R. & Srbic, A. (2013). Spatial Pooling for Greyscale Images. International Journal of Machine Learning and Cybernetics. 4(3), 207-216, Springer, ISSN 1868-8071.
Abstract: It is a widely held view in contemporary computational neuroscience that the brain responds to sensory input by producing sparse distributed representations. In this paper we investigate a brain-inspired spatial pooling algorithm that produces sparse distributed representations of spatial images by modelling the formation of proximal dendrites associated with neocortical minicolumns. In this approach, distributed representations are formed out of a competitive process of inter-column inhibition and subsequent learning. Specifically, we evaluate the performance of a recently proposed binary spatial pooling algorithm on a well-known benchmark of greyscale natural images. In the process, we augment the algorithm to handle greyscale images, and to produce better quality encodings of binary images. We also show that the augmented algorithm produces superior population and lifetime kurtosis measures in comparison to a number of well-known coding schemes and explain how the augmented coding scheme can be used to produce high-fidelity reconstructions of greyscale input.
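The population and lifetime kurtosis measures referred to here are the standard sparseness statistics from the sparse coding literature; a minimal sketch (with randomly generated responses standing in for real encodings) is given below.

    import numpy as np

    def excess_kurtosis(x, axis):
        """Standard excess kurtosis: E[((x - mu) / sigma)^4] - 3."""
        mu = x.mean(axis=axis, keepdims=True)
        sigma = x.std(axis=axis, keepdims=True)
        return np.mean(((x - mu) / sigma) ** 4, axis=axis) - 3.0

    # responses[i, j] = activity of unit j in response to image i (random stand-in data).
    responses = np.random.rand(1000, 256) ** 8   # heavy-tailed, i.e. sparse-like

    population_kurtosis = excess_kurtosis(responses, axis=1).mean()   # across units, per image
    lifetime_kurtosis = excess_kurtosis(responses, axis=0).mean()     # across images, per unit
    print(population_kurtosis, lifetime_kurtosis)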
2012
Ramachandran, R., Wang, K., Wang, J. & Thornton, J. R. (2012). Probabilistic Reasoning in DL-Lite. PRICAI 2012: 12th Pacific Rim International Conference on Artificial Intelligence, Lecture Notes in Computer Science. 7458, 480-491, Springer, ISSN 0302-9743.
Abstract: The problem of extending description logics with uncertainty has received significant attention in recent years. In this paper, we investigate a probabilistic extension of DL-Lite, a family of tractable description logics. We first present a new probabilistic semantics for terminological knowledge bases based on the notion of types. The semantics proposed is not capable of handling assertional knowledge. In order to reason with both terminological and assertional probabilistic knowledge, we propose a probabilistic semantics based on a finite semantics for DL-Lite called features. This approach enables us to infer new information from the existing knowledge base by drawing on the inherent relation between a probabilistic TBox and a probabilistic ABox.
Thornton, J. R., Main L. & Srbic, A. (2012). Fixed Frame Temporal Pooling. AI 2012: 25th Australasian Joint Conference on Artificial Intelligence, Lecture Notes in Computer Science, 7691, 707-718, Springer, ISSN 0302-9743.
Abstract: Applications of unsupervised learning techniques to action recognition have proved highly competitive in comparison to supervised and hand-crafted approaches, despite not being designed to handle image processing problems. Many of these techniques are either based on biological models of cognition or have responses that correlate to those observed in biological systems. In this study we apply (for the first time) an adaptation of the latest hierarchical temporal memory (HTM) cortical learning algorithms (CLAs) to the problem of action recognition. These HTM algorithms are both unsupervised and represent one of the most complete high-level syntheses available of the current neuroscientific understanding of the functioning of neocortex.

Specifically, we extend the latest HTM work on augmented spatial pooling to produce a fixed frame temporal pooler (FFTP). This pooler is evaluated on the well-known KTH action recognition data set and in comparison with the best performing unsupervised learning algorithm for bag-of-features classification in the area: independent subspace analysis (ISA). Our results show FFTP comes within 2% of ISA’s performance and outperforms other comparable techniques on this data set. We take these results to be promising, given the preliminary nature of the research and that the FFTP algorithm is only a partial implementation of the proposed HTM architecture.
Thornton, J. R. (2012). The Phenomenological Negation of the Causal Closure of the Physical. Abstract in: AAP 2012: Proceedings of the 2012 Conference of the Australasian Association for Philosophy, Wollongong, Australia, Wollongong University, p. 22.
Abstract: The central argument of the paper is a response to David Chalmers’ account of the paradox of phenomenal judgment. It proceeds as follows: in order to deploy pure phenomenal concepts I must already understand that there is such a thing as pure phenomenal quality. Such understanding requires that I can consciously demonstrate the being of a pure phenomenal quality. Without such a ‘seeing’ demonstration I have no basis on which to distinguish a pure phenomenal concept from a relational phenomenal concept, even though I can passively experience pure phenomenal quality as the content of the pre-existing pre-reflective phenomenal concepts that I inherit from my language community. My conscious ‘seeing’ of pure phenomenal quality depends on the being of something that is not physical, i.e. it is the immediate experience of an ideal property associated with certain physical states instantiated in my brain. This experience, according to causal closure, cannot be the independent cause of any physical event. And yet, my consciousness of pure phenomenal quality is the cause of the formation of physical structures in my brain that enable me to distinguish and speak of the being of pure phenomenal qualities. Therefore the causal closure of the physical is false.
Thornton, J. R. (2012). The Phenomenological Negation of Objective Physicalism. Unpublished philosophical paper. 35 pp. Completed under the supervision of Deborah Brown at the School of History, Philosophy, Religion and Classics, University of Queensland, Brisbane.
Abstract: An outstanding task for contemporary philosophy of mind and cognitive science is to evaluate the significance and relevance of the phenomenological tradition for the study of mind and consciousness. At the moment, mainstream philosophy of mind stands in relative ignorance of the original work of the key phenomenologists, content either to dismiss phenomenology as muddled and self-contradictory, or to understand it in terms of the interpretations of a few famous contemporaries. Insofar as phenomenology is accepted into the analytic debate, it is seen as playing a supporting role, generating the empirical data of accurately observed experience that then acts as a test of adequacy for analytic theory. The revolutionary aspects of phenomenology, the transcendentalism of Husserl, the destruction of traditional ontology by Heidegger, are placed firmly on the other side of the continental divide.
In this paper, I intend to show that authentic phenomenology stands in a fundamental opposition to the assumptions and practices of contemporary analytic philosophy of mind. Phenomenology is not simply a useful tool for generating observations of subjective experience; it is a way of engaging in philosophical enquiry that moves in an opposite direction to analytic thought. Unless this is seen clearly, the pronouncements of the phenomenologists will be misinterpreted and inappropriately rejected. To understand phenomenology, one has to enter into phenomenological enquiry ‘feet first.’ It is not a matter of a theoretical understanding. One of the aims of the phenomenological method is to uncover the pre-theoretical ground of theoretical understanding. Arriving at such a pre-theoretical ground means suspending one’s theoretical perspective. And what it means to suspend one’s theoretical perspective is itself a phenomenological question.
The point is that a positive theoretical account of the phenomenological method is not going to communicate what it means to uncover the pre-theoretical ground of experience. One has to proceed on the basis of experience and understand on the basis of negation. These principles can only be clarified through actual demonstration. This is analogous to what Mary learned on seeing colour for the first time. My plan is to illustrate the phenomenological method through a phenomenological analysis of the general framework within which contemporary philosophy of mind operates. I shall term this general framework objective physicalism. What exactly is meant by objective physicalism will become clearer as we proceed.
2011
Sotomayor, M., Wang, K., Shen, Y. & Thornton, J. R. (2011). Probabilistic Multi-context Systems. JIST 2011: Joint International Semantic Technology Conference. Lecture Notes in Computer Science, 7185, 366-375, Springer, ISSN 0302-9743.
Abstract: The concept of contexts is widely used in artificial intelligence. Several recent attempts have been made to formalize multi-context systems (MCS) for ontology applications. However, these approaches are unable to handle probabilistic knowledge. This paper introduces a formal framework for representing and reasoning about uncertainty in multi-context systems (called p-MCS). Some important properties of p-MCS are presented and an algorithm for computing the semantics is developed. Examples are also used to demonstrate the suitability of p-MCS.
Thornton, J. R., Srbic, A., Main, L. & Chitsaz, M. (2011). Augmented Spatial Pooling. AI 2011: 24th Australasian Joint Conference on Artificial Intelligence, Lecture Notes in Computer Science. 7106, 261-270, Springer, ISSN 0302-9743.
Abstract: It is a widely held view in contemporary computational neuroscience that the brain responds to sensory input by producing sparse distributed representations. In this paper we investigate a brain-inspired spatial pooling algorithm that produces sparse distributed representations of spatial images by modelling the formation of proximal dendrites associated with neocortical minicolumns. In this approach, distributed representations are formed out of a competitive process of inter-column inhibition and subsequent learning. Specifically, we evaluate the performance of a recently proposed binary spatial pooling algorithm on a well-known benchmark of greyscale natural images. In the process, we augment the algorithm to handle greyscale images, and to produce better quality encodings of binary images. We also show that the augmented algorithm produces superior population and lifetime kurtosis measures in comparison to a number of well-known coding schemes and explain how the augmented coding scheme can be used to produce high-fidelity reconstructions of greyscale input.
Thornton, J. R. (2011). The Consciousness Test. Unpublished philosophical paper. 25 pp. Completed under the supervision of Bruin Christensen and David Chalmers at the School of Philosophy, Australian National University, Canberra.
Abstract: In 1994, Todd Moody argued that a zombie community would not develop in the same way as a human community, because their lack of consciousness would subtly alter their speech behaviour – particularly their philosophical talk of phenomenal consciousness. Subsequently, even those who accept the conceptual possibility of zombies have baulked at the idea that zombies could behave differently to their human counterparts. One reason must be that to accept such a difference comes at a high price: the denial of the causal closure of the physical. In particular, David Chalmers has given a detailed account of how this ‘paradox of phenomenal judgment’ can be answered. For Chalmers, the paradox is how a human being, whose behaviour is entirely determined by interactions between microphysical entities that obey physical laws, can come to make correct judgments about phenomenal experiences. His answer is given in a detailed analysis of the formation of direct phenomenal beliefs. This analysis explains how, although we form direct phenomenal concepts that possess genuine phenomenal content, the role of such concepts in the formation of beliefs and the subsequent production of behaviour can be explained in entirely functional, and hence physical terms.
In this paper I shall argue that even if we accept Chalmers’ account, there are still capacities of human concept formation and judgment that remain unexplained. For example, consider Moody’s community of zombies, and the situation of their never having had any direct or indirect contact with any conscious entity. I suggest that one concept such a community would lack, would be the concept of a zombie, i.e. the concept of an entity physically identical to themselves that lacks phenomenal consciousness. I shall argue that the reason for this lack is that zombies, by definition, lack the direct knowledge of being conscious required to form such a concept.
2010
Thornton, J. R. & Christensen, C. B. (2010). An Essential Difference: Wheeler and Heidegger on the relationship between science and philosophy. Presented at: Reconstructing the Cognitive World: A workshop with Michael Wheeler, Goethe University Frankfurt am Main, February 3-4, 2010.
Introduction: Michael Wheeler, in his book Reconstructing the Cognitive World, analyses the development of embodied-embedded cognitive science in the light of underlying philosophical differences about the constitution of human agency. On one side he sees orthodox computational cognitive science as holding to Cartesian conceptions of an abstract, disembodied reason deliberating over de-contextualised representations of the world. On the other side, he sees modern-day embodied-embedded cognitive scientists going beyond such Cartesianism to embrace concepts of human agency more in keeping with Heidegger’s account of Dasein in Being and Time. By bringing to light and criticising the Cartesian assumptions of the computationalists and by pointing out and clarifying the connections between embodied-embedded cognitive science and Heidegger’s philosophy, Wheeler aims to lay the “foundations of a genuinely non-Cartesian cognitive science.”
Along the way, Wheeler argues that Heidegger is a scientific realist who holds that modern science provides genuinely objective epistemic access to independently real entities. He takes this to show that Heidegger would not object to the incorporation of his account of Dasein into the broad framework of contemporary cognitive-scientific explanation. According to Wheeler, such explanation is:
. . . a species of empirical explanation in which the ultimate goal is to map out the subagential elements (e.g., the neural states and mechanisms, or the functionally identified psychological subsystems) whose organization, operation, and interaction make it intelligible to us how it is that unmysterious causal processes (such as those realized in brains) can give rise to psychological phenomena that are genuinely constitutive of agency and cognition.
Within this framework, cognitive scientists necessarily make assumptions about what the relevant psychological phenomena are and how they are constitutive of agency and cognition. It is in the formulation of these assumptions that Wheeler sees the hidden hand of Descartes and the potential for a new Heideggerian reconstruction of the area. For Wheeler, the major philosophical task is to clarify these assumptions, bringing into question existing orthodox views and providing a philosophical underpinning for newer embodied-embedded conceptions of agency. Evidently, this task commits Wheeler to endorsing, at least in general terms, the same framework and thus the same assumptions that underlie what he defines as the goal of cognitive-scientific explanation. Wheeler supposes that this presents no problems for the claim that much from Heidegger’s philosophy can be incorporated into cognitive science. To support this, Wheeler adduces a number of passages from Being and Time in which Heidegger appears to endorse Wheeler’s views on philosophy’s role in identifying and clarifying the constitutive assumptions of individual sciences. And so Wheeler concludes that how he conceives such philosophical clarification is basically how Heidegger conceives it.
In this paper we shall argue that this is not correct and that the key reason for this turns on the issue of naturalism. According to Wheeler any serious attempt to achieve the defining goal of cognitive science must accept a commitment to naturalism. Yet the commitment Wheeler appears to have to naturalism (as we shall see, it is not quite clear what this commitment is) and certainly the unquestioned character of it, separates him from Heidegger. In consequence, the actual affinities between Heidegger and Wheeler are limited because Wheeler accepts as given something which, at least on any strong reading of it, Heidegger is concerned to dislodge. This difference entails significant divergence in how each sees the relation of philosophy to science.
2009
Thornton, J. R., Faichney, J., Blumenstein, M., Nguyen, V. & Hine, T. (2009). Offline Cursive Character Recognition: A state-of-the-art comparison. In: IGS 2009: Proceedings of the 14th Conference of the International Graphonomics Society, Dijon, France, pp. 148-151.
Abstract: Recent research has demonstrated the superiority of SVM-based approaches for offline cursive character recognition. In particular, Camastra’s 2007 study showed SVM to be better than alternative LVQ and MLP approaches on the large C-Cube data set. Subsequent work has applied hierarchical vector quantization (HVQ) with temporal pooling to the same data set, improving on LVQ and MLP but still not reaching SVM recognition rates.
In the current paper, we revisit Camastra’s SVM study in order to explore the effects of using an alternative modified direction feature (MDF) vector representation, and to compare the performance of an RBF-based approach against both SVM and HVQ. Our results show that SVMs still have the better performance, but that much depends on the feature sets employed. Surprisingly, the use of more sophisticated MDF feature vectors produced the poorest results on this data set despite their success on signature verification problems.
2008
Pham, D. N., Thornton, J. R., Gretton, C. & Sattar, A. (2008). Combining Adaptive and Dynamic Local Search for Satisfiability. Journal on Satisfiability, Boolean Modeling and Computation. 4, 149-172, The SAT Association, ISSN 1574-0617.
Abstract: In this paper we describe a stochastic local search (SLS) procedure for finding models of satisfiable propositional formulae. This new algorithm, gNovelty+, draws on the features of two other WalkSAT family algorithms: AdaptNovelty+ and G2WSAT, while also successfully employing a hybrid clause weighting heuristic based on the features of two dynamic local search (DLS) algorithms: PAWS and (R)SAPS. gNovelty+ was a Gold Medal winner in the random category of the 2007 SAT competition. In this paper we present a detailed description of the algorithm and extend the SAT competition results via an empirical study of the effects of problem structure, parameter tuning and resolution preprocessors on the performance of gNovelty+. The study compares gNovelty+ with three of the most representative WalkSAT-based solvers: AdaptG2WSAT0, G2WSAT and AdaptNovelty+; and two of the most representative DLS solvers: RSAPS and PAWS. Our new results augment the SAT competition results and show that gNovelty+ is also highly competitive in the domain of solving structured satisfiability problems in comparison with other SLS techniques.
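The clause weighting component mentioned here can be sketched in the style of additive (PAWS-like) weighting with probabilistic smoothing; this is a simplified illustration of the general mechanism, not gNovelty+'s exact update scheme.

    import random

    def update_clause_weights(weights, unsat_clauses, weighted_sat_clauses, smooth_prob=0.05):
        """One PAWS-style weight update at a local minimum.

        Every currently unsatisfied clause gains a unit of weight, steering the
        search away from repeatedly violated clauses; with a small probability the
        weights of satisfied clauses carrying extra weight are decayed back towards
        one, preventing unbounded weight growth.
        """
        for c in unsat_clauses:
            weights[c] += 1
        if random.random() < smooth_prob:
            for c in weighted_sat_clauses:
                if weights[c] > 1:
                    weights[c] -= 1
        return weights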
Pham, D. N., Thornton, J. R. & Sattar, A. (2008). Efficiently Exploiting Dependencies in Local Search for SAT. In AAAI 2008: Proceedings of the 23rd Conference on Artificial Intelligence, MIT Press, pp. 1476-1478.
Abstract: We propose a new local search platform that splits a CNF formula into three sub-components: i) a minimal dependency lattice (representing the core connections between logic gates), ii) a conjunction of equivalence clauses, and iii) the remaining clauses. We also adopt a new hierarchical cost function that focuses on solving the core components of the problem first. We then show experimentally that our platform not only significantly outperforms existing local search approaches but is also competitive with modern systematic solvers on highly structured problems.
Pham, D. N., Thornton, J. R. & Sattar, A. (2008). Modelling and Solving Temporal Reasoning as Propositional Satisfiability. Artificial Intelligence. 172(15), 1752-1782, ISSN 0004-3702.
Abstract: Representing and reasoning about time dependent information is a key research issue in many areas of computer science and artificial intelligence. One of the best known and widely used formalisms for representing interval-based qualitative temporal information is Allen’s interval algebra (IA). The fundamental reasoning task in IA is to find a scenario that is consistent with the given information. This problem is in general NP-complete.
In this paper, we investigate how an interval-based representation, or IA network, can be encoded into a propositional formula of Boolean variables and/or predicates in decidable theories. Our task is to discover whether satisfying such a formula can be more efficient than finding a consistent scenario for the original problem. There are two basic approaches to modelling an IA network: one represents the relations between intervals as variables and the other represents the end-points of each interval as variables. By combining these two approaches with three different Boolean satisfiability (SAT) encoding schemes, we produced six encoding schemes for converting IA to SAT. In addition, we also showed how IA networks can be formulated into satisfiability modulo theories (SMT) formulae based on the quantifier-free integer difference logic (QF-IDL). These encodings were empirically studied using randomly generated IA problems of sizes ranging from 20 to 100 nodes. A general conclusion we draw from these experimental results is that encoding IA into SAT produces better results than existing approaches. More specifically, we show that the new point-based 1-D support SAT encoding of IA produces consistently better results than the other alternatives considered. In comparison with the six different SAT encodings, the SMT encoding came fourth after the point-based and interval-based 1-D support schemes and the point-based direct scheme. Further, we observe that the phase transition region maps directly from the IA encoding to each SAT or SMT encoding, but, surprisingly, the location of the hard region varies according to the encoding scheme. Our results also show a fixed performance ranking order over the various encoding schemes.
Thornton, J. R. & Pham, D. N. (2008). Using Cost Distributions to Guide Weight Decay in Local Search for SAT. PRICAI 2008: 10th Pacific Rim International Conference on Artificial Intelligence. Lecture Notes in Computer Science, 5351, 405-416, Springer, ISSN 0302-9743.
Abstract: Although clause weighting local search algorithms have produced some of the best results on a range of challenging satisfiability (SAT) benchmarks, this performance is dependent on the careful hand-tuning of sensitive parameters. When such hand-tuning is not possible, clause weighting algorithms are generally outperformed by self-tuning WalkSAT-based algorithms such as AdaptNovelty+ and AdaptG2WSAT. In this paper we investigate tuning the weight decay parameter of two clause weighting algorithms using the statistical properties of cost distributions that are dynamically accumulated as the search progresses. This method selects a parameter setting both according to the speed of descent in the cost space and according to the shape of the accumulated cost distribution, where we take the shape to be a predictor of future performance. In a wide ranging empirical study we show that this automated approach to parameter tuning can outperform the default settings for two state-of-the-art algorithms that employ clause weighting (PAWS and gNovelty+). We also show that these self-tuning algorithms are competitive with three of the best-known self-tuning SAT local search techniques: RSAPS, AdaptNovelty+ and AdaptG2WSAT.
Thornton, J. R., Faichney, J., Blumenstein, M. & Hine, T. (2008). Character Recognition using Hierarchical Vector Quantization and Temporal Pooling. AI 2008: 21st Australasian Joint Conference on Artificial Intelligence. Lecture Notes in Computer Science, 5360, 562-572, Springer, ISSN 0302-9743.
Abstract: In recent years, there has been a cross-fertilization of ideas between computational neuroscience models of the operation of the neocortex and artificial intelligence models of machine learning. Much of this work has focussed on the mammalian visual cortex, treating it as a hierarchically-structured pattern recognition machine that exploits statistical regularities in retinal input. It has further been proposed that the neocortex represents sensory information probabilistically, using some form of Bayesian inference to disambiguate noisy data.
In the current paper, we focus on a particular model of the neocortex developed by Hawkins, known as hierarchical temporal memory (HTM). Our aim is to evaluate an important and recently implemented aspect of this model, namely its ability to represent temporal sequences of input within a hierarchically structured vector quantization algorithm. We test this temporal pooling feature of HTM on a benchmark of cursive hand-writing recognition problems and compare it to a current state-of-the-art support vector machine implementation. We also examine whether two preprocessing techniques can enhance the temporal pooling algorithm’s performance. Our results show that a relatively simple temporal pooling approach can produce recognition rates that approach the current state-of-the-art without the need for extensive tuning of parameters. We also show that temporal pooling performance is surprisingly unaffected by the use of preprocessing techniques.
2007
Ishtaiwi, A., Thornton, J. R. & Sattar, A. (2007). Weight Redistribution for Unweighted MAX-SAT. AI 2007: Advances in Artificial Intelligence, 20th Australian Joint Conference on Artificial Intelligence, Lecture Notes in Computer Science, 4830, 687-693, Springer, ISSN 0302-9743.
Abstract: Many real-world problems are over-constrained and require search techniques adapted to optimising cost functions rather than searching for consistency. This makes the MAX-SAT problem an important area of research for the satisfiability (SAT) community. In this study we perform an empirical analysis of several of the best performing SAT local search techniques in the domain of unweighted MAX-SAT. In particular, we test two of the most recently developed SAT clause weight redistribution algorithms, DDFW and DDFW+, against three more well-known techniques (RSAPS, AdaptNovelty+ and PAWS). Based on an empirical study across a range of previously studied problems we conclude that DDFW is the most promising algorithm in terms of robust average performance.
Orgun, M. A. & Thornton, J. (Eds.) (2007). AI 2007: Advances in Artificial Intelligence. Lecture Notes in Artificial Intelligence, 4830, Heidelberg: Springer, ISBN 978-3-540-76926-2.
Preface: This volume contains the papers presented at AI 2007: The 20th Australian Joint Conference on Artificial Intelligence held during December 2–6, 2007 on the Gold Coast, Queensland, Australia. AI 2007 attracted 194 submissions (full papers) from 34 countries. The review process was held in two stages. In the first stage, the submissions were assessed for their relevance and readability by the Senior Program Committee members. Those submissions that passed the first stage were then reviewed by at least three Program Committee members and independent reviewers. After extensive discussions, the Committee decided to accept 60 regular papers (acceptance rate of 31%) and 44 short papers (acceptance rate of 22.7%). Two regular papers and four short papers were subsequently withdrawn and are not included in the proceedings. AI 2007 featured invited talks from four internationally distinguished researchers, namely, Patrick Doherty, Norman Foo, Richard Hartley and Robert Hecht-Nielsen. They shared their insights and work with us and their contributions to AI 2007 were greatly appreciated. AI 2007 also featured workshops on integrating AI and data-mining, semantic biomedicine and ontology. The short papers were presented in an interactive poster session and contributed to a stimulating conference.
We would like to thank the Conference Co-chairs, Abdul Sattar and Vladimir Estivill-Castro of Griffith University for their guidance, and the local Organizing Co-chairs Michael Blumenstein and Guido Governatori for making sure that the conference ran smoothly. Special thanks go to Natalie Dunstan and Vicky Wheeler for supporting the Committees so effectively. We also would like to thank the following organizations for their generous sponsorship of AI 2007: Griffith University, National ICT Australia, the University of Queensland, Queensland University of Technology, Bond University, the Australian Computer Society and the Gold Coast City Council.
Pham, D. N., Thornton, J. R., Gretton, C. & Sattar, A. (2007). Advances in Local Search for Satisfiability. AI 2007: 20th Australian Joint Conference on Artificial Intelligence. Lecture Notes in Computer Science, 4830, 213-222, Springer, ISSN 0302-9743.
Abstract: In this paper we describe a stochastic local search (SLS) procedure for finding satisfying models of satisfiable propositional formulae. This new algorithm, gNovelty+, draws on the features of two other WalkSAT family algorithms: R+AdaptNovelty+ and G2WSAT, while also successfully employing a dynamic local search (DLS) clause weighting heuristic to further improve performance. gNovelty+ was a Gold Medal winner in the random category of the 2007 SAT competition. In this paper we present a detailed description of the algorithm and extend the SAT competition results via an empirical study of the effects of problem structure and parameter tuning on the performance of gNovelty+. The study also compares gNovelty+ with two of the most representative WalkSAT-based solvers: G2WSAT, AdaptNovelty+; and two of the most representative DLS solvers: RSAPS and PAWS. Our new results augment the SAT competition results and show that gNovelty+ is also highly competitive in the domain of solving structured satisfiability problems in comparison with other SLS techniques.
Pham, D. N., Thornton, J. R. & Sattar, A. (2007). Building Structure into Local Search for SAT. In: IJCAI 2007, Proceedings of the 20th International Joint Conference on Artificial Intelligence, pp. 2359-2364. Winner of Distinguished Paper Award.
Abstract: Local search procedures for solving satisfiability problems have attracted considerable attention since the development of GSAT in 1992. However, recent work indicates that for many real-world problems, complete search methods have the advantage, because modern heuristics are able to effectively exploit problem structure. Indeed, to develop a local search technique that can effectively deal with variable dependencies has been an open challenge since 1997.
In this paper we show that local search techniques can effectively exploit information about problem structure producing significant improvements in performance on structured problem instances. Building on the earlier work of Ostrowski et al. we describe how information about variable dependencies can be built into a local search, so that only independent variables are considered for flipping. The cost effect of a flip is then dynamically calculated using a dependency lattice that models dependent variables using gates (specifically and, or and equivalence gates). The experimental study on hard structured benchmark problems demonstrates that our new approach significantly outperforms the previously reported best local search techniques.
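The dependency idea can be illustrated with a small sketch: only variables that are not the output of any gate are flipped, and the values of dependent variables are recomputed by propagation through the gates. The data structure below is a simplification for illustration, not the dependency lattice described in the paper.

    # A gate maps an output variable to (gate type, input variables). Gates are
    # assumed to be listed in topological order, so a single pass suffices.
    def propagate(gates, values):
        """Recompute dependent variables from the independent ones."""
        for out, (kind, inputs) in gates.items():
            ins = [values[v] for v in inputs]
            if kind == "and":
                values[out] = all(ins)
            elif kind == "or":
                values[out] = any(ins)
            elif kind == "equiv":            # equivalence gate: true iff all inputs agree
                values[out] = len(set(ins)) == 1
        return values

    # Example: z is determined by x and y, so a local search flips only x and y.
    gates = {"z": ("and", ["x", "y"])}
    print(propagate(gates, {"x": True, "y": False}))   # {'x': True, 'y': False, 'z': False}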
Thornton, J. (2007). The Impersonal Knowledge of Conscious Experience: A philosophical investigation. Unpublished philosophical manuscript. 81 pp.
Preface: My motivation in writing this manuscript is to fundamentally challenge the prevailing wisdom of the philosophy of mind and artificial intelligence communities that conscious experience is created or caused by the operation of physical neurons in a physical brain. Because this idea remains unchallenged, it is now widely treated as an obvious truth, a fact beyond dispute. And so it is disseminated into the larger human community, through the media, and through scientific education, without further reflection.
As I shall demonstrate, this naturalistic scientific conception of consciousness is little more than an assertion of faith or belief, which remains unsupported by factual evidence. To see how this has come about, and how the reality of consciousness remains beyond the grasp of contemporary scientific and philosophical thought, requires that we each, individually, investigate this issue in our own conscious experience.
My aim is therefore not to give a detailed criticism of the particular arguments and positions of contemporary thinkers but to present a methodology for the investigation of the mind. The basic thrust of this methodology is to attain an impersonal first-person observational standpoint and to show how such a standpoint can act as the foundation for an authentic philosophy of mind. The idea is to proceed by practical demonstration and not by conceptual argumentation. As such, this work flies in the face of current practice, and there is much in it that will appear provocative and even nonsensical to someone schooled in the existing literature.
Therefore I have to ask my prospective readers to suspend their judgement, as far as is possible, until they have absorbed what has been described in the following pages. For this is a practical work, and asks you to test what is proposed by actual observation and not according to the thoughts and opinions of others.
I should make it clear that this approach to the investigation of consciousness has not been plucked out of thin air. It owes much to the phenomenology of Edmund Husserl, although it goes somewhat further in identifying thought as the agency of the personal viewpoint. Behind that lies the philosophy of Arthur Schopenhauer and my quite separate experience of the work of various twentieth century spiritual teachers. Of these the most profound inspiration has been the life and work of Barry Long (1926-2003). In particular, it is Barry Long’s recognition of the fundamental distinction between thought and observation that lies at the core of the current work. Not that you will find a spiritual teaching in these pages. This is a work of philosophy and does not assume the existence of any spiritual realm. Quite the reverse. What is asked for here is only that we remain within the empirical realm of our direct, first-person experience.
Thornton, J. (2007). The Foundations of Computing and the Information Technology Age: A historical, sociological and philosophical enquiry. Pearson Education Australia, 286 pp. ISBN 978-0-7339-8848-6
Back Cover: The Foundations of Computing and the Information Technology Age is a book both for undergraduate computing students and for anyone seeking a deeper understanding of technology in the modern world. Dispensing with simplistic explanations, the book first considers the evolution of the computer from the origins of number to the development of the microprocessor. Along the way we meet the early pioneers of mechanical calculation, including Pascal and Leibniz, the groundbreaking work of Charles Babbage and his Difference Engines and the drama of the wartime code-breakers at Bletchley Park.
But this is not just a historical text. It provides an introduction to the theory of computation, showing how Alan Turing’s concept of a universal Turing machine helped form the foundations of modern computer science. Theory then becomes practice as the book explores the von Neumann architecture and shows how simple switching circuits can be used to construct a general purpose computer.
The basic theme running throughout this discussion is that the foundations of computing and the information technology age lie in the scientific turn of mind taken by our entire civilisation. From this perspective, the book traces how information technology has been used to restructure the economic and social life of the developed world and enquires into the ultimate direction and purpose of this process of globalisation. The reader is then drawn to consider how our technical, materialistic understanding has ignored the underlying reality from which all technology emerges: human consciousness.
Finally, the book argues that this inability to acknowledge the central reality of consciousness has caused modern civilisation to enter into an unbalanced pattern of development, where we increasingly understand ourselves as biological machines that must be adapted to the latest technology, rather than as the creative intelligence that technology was originally supposed to serve.
2006
Ishtaiwi, A., Thornton, J. R., Anbulagan, Sattar, A. & Pham, D. N. (2006). Adaptive Clause Weight Redistribution. CP 2006: 12th International Conference on the Principles and Practice of Constraint Programming. Lecture Notes in Computer Science, 4204, 229-243, Springer, ISSN 0302-9743.
Abstract: In recent years, dynamic local search (DLS) clause weighting algorithms have emerged as the local search state-of-the-art for solving propositional satisfiability problems. However, most DLS algorithms require the tuning of domain dependent parameters before their performance becomes competitive. Where manual parameter tuning is impractical, various mechanisms have been developed that automatically adjust a parameter value during the search. To date, the most effective adaptive clause weighting algorithm is RSAPS. However, RSAPS is unable to convincingly outperform the best non-weighting adaptive algorithm AdaptNovelty+, even though manually tuned clause weighting algorithms can routinely outperform the Novelty+ heuristic on which AdaptNovelty+ is based.
In this study we introduce R+DDFW+, an enhanced version of the DDFW clause weighting algorithm developed in 2005, which not only adapts the total amount of weight according to the degree of stagnation in the search, but also incorporates the latest resolution-based pre-processing approach used by the winner of the 2005 SAT competition (R+AdaptNovelty+). In an empirical study we show R+DDFW+ improves on DDFW and outperforms the other leading adaptive (R+AdaptNovelty+, R+RSAPS) and non-adaptive (R+G2WSAT) local search solvers over a range of random and structured benchmark problems.
Pham, D. N., Thornton, J. R. & Sattar, A. (2006). Towards an Efficient SAT Encoding for Temporal Reasoning. CP 2006: 12th International Conference on the Principles and Practice of Constraint Programming. Lecture Notes in Computer Science, 4204, 421-436, Springer, ISSN 0302-9743.
Abstract: In this paper, we investigate how an IA network can be effectively encoded into the SAT domain. We propose two basic approaches to modelling an IA network as a CSP: one represents the relations between intervals as variables and the other represents the relations between end-points of intervals as variables. By combining these two approaches with three different SAT encoding schemes, we produced six encoding schemes for converting IA to SAT. These encodings were empirically studied using randomly generated IA problems of sizes ranging from 20 to 100 nodes. A general conclusion we draw from these experimental results is that encoding IA into SAT produces better results than existing approaches. Further, we observe that the phase transition region maps directly from the IA encoding to each SAT encoding, but, surprisingly, the location of the hard region varies according to the encoding scheme. Our results also show a fixed performance ranking order over the various encoding schemes.
Thornton, J. R., Gustafsson, T., Blumenstein, M. & Hine, T. (2006). Robust Character Recognition using a Hierarchical Bayesian Network. AI 2006: 19th Australian Joint Conference on Artificial Intelligence. Lecture Notes in Computer Science, 4304, 1259-1264, Springer, ISSN 0302-9743.
Abstract: There is increasing evidence to suggest that the neocortex of the mammalian brain does not consist of a collection of specialised and dedicated cortical architectures, but instead possesses a fairly uniform, hierarchically organised structure. As Mountcastle has observed [1], this uniformity implies that the same general computational processes are performed across the entire neocortex, even though different regions are known to play different functional roles. Building on this evidence, Hawkins has proposed a top-down model of neocortical operation [2], taking it to be a kind of pattern recognition machine, storing invariant representations of neural input sequences in hierarchical memory structures that both predict sensory input and control behaviour. The first partial proof of concept of Hawkins’ model was recently developed using a hierarchically organised Bayesian network that was tested on a simple pattern recognition problem [3]. In the current study we extend Hawkins’ work by comparing the performance of a backpropagation neural network with our own implementation of a hierarchical Bayesian network in the well-studied domain of character recognition. The results show that even a simplistic implementation of Hawkins’ model can produce recognition rates that exceed a standard neural network approach. Such results create a strong case for the further investigation and development of Hawkins’ neocortically-inspired approach to building intelligent systems.
2005
Bain, S., Thornton, J. R. & Sattar, A. (2005). Evolving Variable-Ordering Heuristics for Constrained Optimisation. CP 2005: 11th International Conference on the Principles and Practice of Constraint Programming. Lecture Notes in Computer Science, 3709, 732-736, Springer, ISSN 0302-9743.
Abstract: In this paper we present and evaluate an evolutionary approach for learning new constraint satisfaction algorithms, specifically for MAX-SAT optimisation problems. Our approach offers two significant advantages over existing methods: it allows the evolution of more complex combinations of heuristics; and it can identify fruitful synergies among heuristics. Using four different classes of MAX-SAT problems, we experimentally demonstrate that algorithms evolved with this method exhibit superior performance in comparison to general purpose methods.
Bain, S., Thornton, J. R. & Sattar, A. (2005). A Comparison of Evolutionary Methods for the Discovery of Local Search Heuristics. AI 2005: 18th Australian Joint Conference on Artificial Intelligence. Lecture Notes in Computer Science, 3809, 1068-1074, Springer, ISSN 0302-9743.
Abstract: Methods of adaptive constraint satisfaction have recently become of interest to overcome the limitations imposed on “black-box” search algorithms by the no free lunch theorems. Two methods that each use an evolutionary algorithm to adapt to particular classes of problem are the CLASS system of Fukunaga and the evolutionary constraint algorithm work of Bain et al. We directly compare these methods, demonstrating that although special purpose methods can learn excellent algorithms, on average standard evolutionary operators perform even better, and are less susceptible to the problems of bloat and redundancy.
Ferreira Jr., V. & Thornton, J. R. (2005). Tie Breaking in Clause Weighting Local Search for SAT. AI 2005: 18th Australian Joint Conference on Artificial Intelligence. Lecture Notes in Computer Science, 3809, 70-81, Springer, ISSN 0302-9743.
Abstract: Clause weighting local search methods are widely used for satisfiability testing. A feature of particular importance for such methods is the scheme used to maintain the clause weight distribution relevant to different areas of the search landscape. Existing methods periodically adjust clause weights either multiplicatively or additively. Tie breaking strategies are used whenever a method’s evaluation function encounters more than one optimal candidate flip, with the dominant approach being to break such ties randomly. Although this is acceptable for multiplicative methods as they rarely encounter such situations, additive methods encounter significantly more tie breaking scenarios in their landscapes, and therefore a more refined tie breaking strategy is of much greater relevance. This paper proposes a new way of handling the tie breaking situations frequently encountered in the landscapes of additive constraint weighting local search methods. We demonstrate through an empirical study that when this idea is used to modify the purely random tie breaking strategy of a state-of-the-art solver, the modified method significantly outperforms the existing one on a range of benchmarks, especially when we consider the encodings of large and structured problems.
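The difference between purely random tie breaking and a more refined strategy can be sketched as follows. The age-based rule shown is one plausible refinement, assumed for illustration rather than taken from the paper.

    import random

    def pick_flip(candidates, scores, last_flipped, strategy="age"):
        """Choose among equally good candidate flips (illustrative sketch).

        candidates   : list of variable indices whose flip is being considered
        scores       : dict variable -> evaluation-function change (lower is better)
        last_flipped : dict variable -> iteration at which it was last flipped
        """
        best = min(scores[v] for v in candidates)
        tied = [v for v in candidates if scores[v] == best]
        if len(tied) == 1 or strategy == "random":
            return random.choice(tied)           # the dominant approach: break ties randomly
        # Refined alternative: prefer the least recently flipped of the tied variables.
        return min(tied, key=lambda v: last_flipped.get(v, -1))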
Ishtaiwi, A., Thornton, J. R., Sattar, A. & Pham, D. N. (2005). Neighbourhood Clause Weight Redistribution in Local Search for SAT. CP 2005: 11th International Conference on the Principles and Practice of Constraint Programming. Lecture Notes in Computer Science, 3709, 772-776, Springer, ISSN 0302-9743.
Pham, D. N., Thornton, J. R., Sattar, A. & Ishtaiwi, A. (2005). SAT-based versus CSP-based Constraint Weighting for Satisfiability. In AAAI 2005: Proceedings of the 20th National Conference on Artificial Intelligence, MIT Press, pp. 455-460.
Abstract: Recent research has focused on bridging the gap between the satisfiability (SAT) and constraint satisfaction problem (CSP) formalisms. One approach has been to develop a many-valued SAT formula (MV-SAT) as an intermediate paradigm between SAT and CSP, and then to translate existing highly efficient SAT solvers to the MV-SAT domain. Experimental results have shown this approach can achieve significant improvements in performance compared with the traditional SAT and CSP approaches.
In this paper, we follow a different route, developing SAT solvers that can automatically recognise CSP structure hidden in SAT encodings. This allows us to look more closely at how constraint weighting can be implemented in the SAT and CSP domains. Our experimental results show that a SAT-based approach to handling weights, together with a CSP-based approach to variable instantiation, is superior to other combinations of SAT and CSP-based approaches. A further experiment on the round robin scheduling problem indicates that this many-valued constraint weighting approach outperforms other state-of-the-art solvers.
Pham, D. N., Thornton, J. R. & Sattar, A. (2005). Modelling and Solving Temporal Reasoning as Satisfiability. Proceedings of the 4th International Workshop on Modelling and Reformulating Constraint Satisfaction Problems, Sitges, Spain. pp. 117-131.
Abstract: Recent research has shown that it is often preferable to encode real-world problems as propositional satisfiability (SAT) problems, and then solve using general purpose solvers. In this way the efficiencies of SAT technology can be exploited, and the development of specialised solution techniques can be avoided. However, in the interval algebra (IA) domain of temporal reasoning, the state-of-the-art still involves the use of specialised techniques that exploit the particular structure of interval relations.
In this paper we investigate the feasibility of representing and solving IA problems as SAT problems. We firstly introduce two methods of representing IA as a constraint satisfaction problem (CSP), and then use three SAT-encoding schemes to produce six different IA to SAT representations. In an empirical study, we examine the performance of existing SAT local and complete search solvers on these SAT representations, and perform a comparison with solvers that operate on native IA representations. Our results show that the best performance over a range of algorithms is produced using a support SAT encoding of a point algebra-based CSP. The results also show that a state-of-the-art complete SAT solver (zChaff) can solve these instances significantly faster than existing IA solvers working on equivalent native IA representations.
Thornton, J. R. (2005). Clause Weighting Local Search for SAT. Journal of Automated Reasoning. 35(1-3), 97-142, Springer, ISSN 0168-7433.
Abstract: This paper investigates the necessary features of an effective clause weighting local search algorithm for propositional satisfiability testing. Using the recent history of clause weighting as evidence, we suggest that the best current algorithms have each discovered the same basic framework, i.e. to increase weights on false clauses in local minima and then to periodically normalize these weights using a decay mechanism.
Within this framework, we identify two basic classes of algorithm according to whether clause weight updates are performed additively or multiplicatively. Using one of the best recently developed multiplicative algorithms (SAPS) and our own pure additive weighting scheme (PAWS), an experimental study was constructed to isolate the effects of multiplicative in comparison to additive weighting, while controlling other key features of the two approaches, namely the use of pure versus flat random moves, deterministic versus probabilistic weight smoothing and multiple versus single inclusion of literals in the local search neighbourhood. In addition, we examined the effects of adding a threshold feature to multiplicative weighting that makes it indifferent to similar cost moves.
As a result of this investigation, we show that additive weighting can outperform multiplicative weighting on a range of difficult problems, while requiring considerably less effort in terms of parameter tuning. Our examination of the differences between SAPS and PAWS suggests that additive weighting does benefit from the random flat move and deterministic smoothing heuristics, whereas multiplicative weighting would benefit from a deterministic/probabilistic smoothing switch parameter that is set according to the problem instance. We further show that adding a threshold to multiplicative weighting produces a general deterioration in performance, contradicting our earlier conjecture that additive weighting has better performance due to having a larger selection of possible moves. This leads us to explain differences in performance as being mainly caused by the greater emphasis of additive weighting on penalizing clauses with relatively less weight.
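The shared framework identified above (increase weights on false clauses in local minima, then periodically normalise them with a decay mechanism) can be sketched as follows. The parameter values and the decay rule are placeholders, and the code is a simplified illustration rather than SAPS or PAWS themselves.

    import random

    def weighted_local_search(clauses, n_vars, max_flips=100_000,
                              weight_inc=1, decay=0.95, decay_period=50):
        """Minimal clause weighting local search skeleton (illustrative only)."""
        assign = [random.choice([False, True]) for _ in range(n_vars + 1)]   # index 0 unused
        weights = [1.0] * len(clauses)

        def satisfied(clause):
            return any((lit > 0) == assign[abs(lit)] for lit in clause)

        def weighted_cost():
            return sum(weights[i] for i, cl in enumerate(clauses) if not satisfied(cl))

        increases = 0
        for _ in range(max_flips):
            false_ids = [i for i, cl in enumerate(clauses) if not satisfied(cl)]
            if not false_ids:
                return assign                                   # every clause satisfied
            best_var, best_cost = None, weighted_cost()
            for var in {abs(lit) for i in false_ids for lit in clauses[i]}:
                assign[var] = not assign[var]                   # trial flip
                cost = weighted_cost()
                assign[var] = not assign[var]                   # undo
                if cost < best_cost:
                    best_var, best_cost = var, cost
            if best_var is not None:
                assign[best_var] = not assign[best_var]         # take the best improving flip
            else:
                for i in false_ids:                             # local minimum: punish false clauses
                    weights[i] += weight_inc
                increases += 1
                if increases % decay_period == 0:               # periodic normalisation / decay
                    weights = [1.0 + (w - 1.0) * decay for w in weights]
        return None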
2004
Bain, S., Thornton, J. R. & Sattar, A. (2004). Methods of Automatic Algorithm Generation. PRICAI 2004: 8th Pacific Rim International Conference on Artificial Intelligence. Lecture Notes in Computer Science, 3157, 144-153, Springer, ISSN 0302-9743.
Abstract: Many methods have been proposed to automatically generate algorithms for solving constraint satisfaction problems. The aim of these methods has been to overcome the difficulties associated with matching algorithms to specific constraint satisfaction problems. This paper examines three methods of generating algorithms: a randomised search, a beam search and an evolutionary method. The evolutionary method is shown to have considerably more flexibility than existing alternatives, being able to discover entirely new heuristics and to exploit synergies between heuristics.
Bain, S., Thornton, J. R. & Sattar, A. (2004). Evolving Algorithms for Constraint Satisfaction. In: CEC 2004: Proceedings of the 2004 Congress on Evolutionary Computation, IEEE, pp. 265-272.
Abstract: This paper proposes a framework for automatically evolving constraint satisfaction algorithms using genetic programming. The aim is to overcome the difficulties associated with matching algorithms to specific constraint satisfaction problems. A representation is introduced that is suitable for genetic programming and that can handle both complete and local search heuristics. In addition, the representation is shown to have considerably more flexibility than existing alternatives, being able to discover entirely new heuristics and to exploit synergies between heuristics. In a preliminary empirical study, it is shown that the new framework is capable of evolving algorithms for solving the well-studied problem of boolean satisfiability testing.
Beaumont, M., Thornton, J. R., Sattar, A. & Maher, M. (2004). Solving Over-constrained Temporal Reasoning Problems using Local Search. PRICAI 2004: 8th Pacific Rim International Conference on Artificial Intelligence, Lecture Notes in Computer Science, 3157, 134-143, Springer, ISSN 0302-9743.
Abstract: Temporal reasoning is an important task in many areas of computer science including planning, scheduling, temporal databases and instruction optimisation for compilers. Given a knowledge-base consisting of temporal relations, the main reasoning problem is to determine whether the knowledge-base is satisfiable, i.e., is there a scenario which is consistent with the information provided. However, many real world problems are over-constrained (i.e. unsatisfiable). To date, there has been little research aimed at solving over-constrained temporal reasoning problems. Recently, we developed standard backtracking algorithms to compute partial scenarios, in the spirit of Freuder and Wallace’s notion of partial satisfaction. While these algorithms were capable of obtaining optimal partial solutions, they were viable only for small problem sizes.
In this paper, we apply local search methods to overcome the deficiencies of the standard approach to solving over-constrained temporal reasoning problems. Inspired by our recent success in efficiently handling reasonably large satisfiable temporal reasoning problems using local search, we have developed two new local search algorithms using a random restart strategy and a tabu search. Further, we extend our previous constraint weighting algorithm to handle over-constrained problems. An empirical study of these new algorithms was performed using randomly generated under- and over-constrained temporal reasoning problems. We conclude that 1) local search significantly outperforms standard backtracking approaches on over-constrained temporal reasoning problems; and 2) the random restart strategy and tabu search have a superior performance to constraint weighting for the over-constrained problems. We also conjecture that the poorer performance of constraint weighting is due to distortions of non-zero global minima caused by the weighting process.
Ferreira Jr., V. & Thornton, J. R. (2004). Longer-Term Memory in Clause Weighting Local Search for SAT. In AI 2004: 17th Australian Joint Conference on Artificial Intelligence, Lecture Notes in Computer Science, 3339, 730-741, Springer, ISSN 0302-9743.
Abstract: This paper presents a comparative study between a state-of-the-art clause weighting local search method for satisfiability testing and a variant modified to obtain longer-term memory from a global measure of clause perturbation. We present empirical evidence indicating that by learning which clauses are hardest to satisfy, the modified method can offer significant performance improvements for a well-known range of satisfiable problems. We conclude that our method’s ability to learn, and consequently to offer performance improvement, can be attributed to its ability to obtain information from a global measure of hardness, rather than from the contextual perspective exploited in previous works.
Stantic, B., Khanna, S. & Thornton, J. R. (2004). An Efficient Method for Indexing Now-relative Bitemporal Data. In ADC 2004: Proceedings of the 15th Australasian Database Conference, Australian Computer Society, pp. 113-122.
Abstract: Most modern database applications involve a significant amount of time dependent data, and a substantial proportion of this data is now-relative, i.e. current at the present time. While a lot of work has focussed on indexing temporal data in general, only a little has addressed the indexing of now-relative data, which is a natural and meaningful part of every temporal database as well as being the focus of most queries. This paper proposes a logical query transformation that relies on the POINT representation of current time and the geometric features of spatial access methods. The logical query transformation enables off-the-shelf spatial indexes to be used. We empirically show that this method is very efficient on now-relative bitemporal data, outperforming the straightforward maximum-timestamp approach by over a factor of 20, both in the number of disk accesses and in CPU usage.
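A toy illustration of the underlying representational difference, assuming a single transaction-time dimension and ignoring the actual index structures: under the maximum-timestamp convention a now-relative tuple carries an artificial end time, whereas under a point-based view only the start point is stored and the query condition is adjusted instead. This is not the paper's index or transformation, just the basic idea.

    from datetime import datetime

    MAX_TIME = datetime.max      # sentinel used under the maximum-timestamp convention

    def live_max_timestamp(tx_start, tx_end, query_time):
        """Tuple is live at query_time; now-relative tuples carry tx_end = MAX_TIME."""
        return tx_start <= query_time < tx_end

    def live_point_representation(tx_start, tx_end, query_time):
        """Now-relative tuples store no end at all (tx_end is None): they are points
        on the transaction-start line, and the query condition is transformed
        rather than materialising an artificial end timestamp in the data."""
        if tx_end is None:
            return tx_start <= query_time
        return tx_start <= query_time < tx_end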
Thornton, J. R., Pham, D. N., Bain, S. & Ferreira Jr., V. (2004). Additive versus Multiplicative Clause Weighting for SAT. In AAAI 2004: Proceedings of the 19th National Conference on Artificial Intelligence, The MIT Press, pp. 191-196.
Abstract: This paper examines the relative performance of additive and multiplicative clause weighting schemes for propositional satisfiability testing. Starting with one of the most recently developed multiplicative algorithms (SAPS), an experimental study was constructed to isolate the effects of multiplicative in comparison to additive weighting, while controlling other key features of the two approaches, namely the use of random versus flat moves, deterministic versus probabilistic weight smoothing and multiple versus single inclusion of literals in the local search neighborhood.
As a result of this investigation we developed a pure additive weighting scheme (PAWS) which can outperform multiplicative weighting on a range of difficult problems, while requiring considerably less effort in terms of parameter tuning. We conclude that additive weighting shows better scaling properties because it makes less distinction between costs and so considers a larger domain of possible moves.
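In simplified form, the two families of weight update being compared look roughly as follows. The increment, scaling and smoothing constants are placeholders rather than the tuned SAPS or PAWS values.

    def additive_update(weights, false_clause_ids, inc=1, max_weight=20):
        """Additive scheme (simplified): add a constant to each false clause,
        and pull all weights back down once they drift too high."""
        for i in false_clause_ids:
            weights[i] += inc
        if max(weights) > max_weight:
            for i, w in enumerate(weights):
                weights[i] = max(1, w - inc)

    def multiplicative_update(weights, false_clause_ids, alpha=1.3, rho=0.8, smooth=False):
        """Multiplicative scheme (simplified): scale false-clause weights up, and
        periodically smooth all weights back towards their mean."""
        for i in false_clause_ids:
            weights[i] *= alpha
        if smooth:
            mean = sum(weights) / len(weights)
            for i, w in enumerate(weights):
                weights[i] = rho * w + (1 - rho) * mean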
Thornton, J. R., Beaumont, M., Sattar, A. & Maher, M. (2004). A Local Search Approach to Modelling and Solving Interval Algebra Problems. Journal of Logic and Computation, 14(1), 93-112, Oxford University Press, ISSN 0955-792X.
Abstract: Local search techniques have attracted considerable interest in the artificial intelligence community since the development of GSAT and the min-conflicts heuristic for solving propositional satisfiability (SAT) problems and binary constraint satisfaction problems (CSPs) respectively. Newer techniques, such as the discrete Lagrangian method (DLM), have significantly improved on GSAT and can also be applied to general constraint satisfaction and optimisation. However, local search has yet to be successfully employed in solving temporal constraint satisfaction problems (TCSPs).
In this paper we argue that current formalisms for representing TCSPs are inappropriate for a local search approach, and we propose an alternative CSP-based end-point ordering model for temporal reasoning. In particular we look at modelling and solving problems formulated using Allen’s interval algebra (IA) and propose a new constraint weighting algorithm derived from DLM. Using a set of randomly generated IA problems, we show that our local search outperforms existing consistency-enforcing algorithms on those problems that the existing techniques find most difficult.
Zhou, L., Thornton, J. R. & Sattar, A. (2004). Dynamic Agent-Ordering and Nogood-Repairing in Distributed Constraint Satisfaction Problems. In FLAIRS 2004: Proceedings of the 17th International Florida Artificial Intelligence Research Society Conference, AAAI Press, pp. 20-25.
Abstract: The distributed constraint satisfaction problem (CSP) is a general formalization used to represent problems in distributed multi-agent systems. To deal with realistic problems, multiple local variables may be required within each autonomous agent. A number of heuristics have been developed for solving such multiple local variable problems. However, these approaches do not always guarantee agent independence and rely on relatively inefficient search mechanisms.
In this paper, we are interested in increasing search efficiency for distributed CSPs. To this end we present a new algorithm using unsatisfied constraint densities to dynamically determine agent ordering during the search. Variables having a total order relationship only exist in the local agent. The independence of agents is guaranteed and agents without neighboring relationships can run concurrently and asynchronously. As a result of using nogoods to guarantee completeness, we developed a new technique called nogood repairing, which greatly reduces the number of nogoods stored and communication costs during the search, leading to further efficiency gains. In an empirical study, we show our new approach outperforms an equivalent static ordering algorithm and a current state-of-the-art technique in terms of execution time, memory usage and communication cost.
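The ordering idea, computing an unsatisfied-constraint density per agent and letting the densest agents move first, might be expressed along these lines. The agent and constraint structures are invented for illustration.

    def agent_priority_order(agents, is_satisfied):
        """Order agents by their current unsatisfied-constraint density (sketch).

        agents       : dict agent_id -> list of constraints involving that agent
        is_satisfied : callable constraint -> bool under the current assignment
        """
        def density(agent_id):
            constraints = agents[agent_id]
            unsatisfied = sum(1 for c in constraints if not is_satisfied(c))
            return unsatisfied / len(constraints) if constraints else 0.0
        # Higher density first: these agents most urgently need to revise their local variables.
        return sorted(agents, key=density, reverse=True)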
2003
Anbulagan, Thornton, J. R., & Sattar, A. (2003). Dynamic Variable Filtering for Hard Random 3-SAT Problems. AI 2003: 16th Australian Joint Conference on Artificial Intelligence, Lecture Notes in Computer Science, 2903, 100-111, Springer, ISSN 0302-9743.
Abstract: The DPL (Davis-Putnam-Logemann-Loveland) procedure is one of the most effective methods for solving SAT problems. It is well known that its efficiency depends on the choice of the branching rule. One of the main improvements of this decision procedure has been the development of better branching variable selection through the use of unit propagation look-ahead (UPLA) heuristics (e.g., [13]). UPLA heuristics perform one of the following actions during two propagations of a free variable at each search tree node: detecting a contradiction earlier, simplifying the formula, or weighing the branching variable candidates. UPLA heuristics can be considered as polynomial time reasoning techniques. In this paper, we propose a new branching rule which does more reasoning to narrow the search tree of hard random 3-SAT problems. We term this reasoning technique the dynamic variable filtering heuristic. In our empirical study we develop four different variants of the DPL procedure: two (ssc34 and ssc355) based on this new heuristic and another two (Satz215-0 and Satz215sT) based on static variable filtering heuristics. ssc355 additionally integrates the Neighborhood Variable Ordering (NVO) heuristic into ssc34. We then compare the best known versions of the state-of-the-art Satz DPL procedure (Satz215) with each of our four procedures. Our results suggest that improved branching rules can further enhance the performance of DPL procedures. On hard random 3-SAT problems, our best procedure (ssc355) outperforms Satz215 by an order of magnitude in terms of the number of branching nodes in the search tree. While the run-times for dynamic variable filtering are still uncompetitive with Satz215, we have yet to explore the benefits that can be gained from avoiding redundant propagations, and we can still improve the performance of the NVO heuristic. A further interesting property of dynamic filtering is that all backtracking can be eliminated during the DPL unit rule process. This property can also be explored in our future work for improving DPL procedure efficiency.
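The unit propagation look-ahead step that the filtering heuristic builds on can be sketched as below. This is a single-pass illustration with a deliberately naive propagator, not the Satz or ssc implementations.

    def unit_propagate(clauses, var, value):
        """Tiny CNF unit propagation (sketch): returns (conflict, simplified clauses)."""
        assignment = {var: value}
        changed = True
        while changed:
            changed, simplified = False, []
            for clause in clauses:
                lits, is_sat = [], False
                for lit in clause:
                    if abs(lit) in assignment:
                        if assignment[abs(lit)] == (lit > 0):
                            is_sat = True
                            break
                    else:
                        lits.append(lit)
                if is_sat:
                    continue
                if not lits:
                    return True, []                      # empty clause: contradiction
                if len(lits) == 1:                       # unit clause forces its literal
                    assignment[abs(lits[0])] = lits[0] > 0
                    changed = True
                simplified.append(lits)
            clauses = simplified
        return False, clauses

    def upla_filter(clauses, free_vars):
        """Unit propagation look-ahead over the free variables (illustrative sketch)."""
        fixed, candidates = {}, []
        for var in free_vars:
            conflict_t, f_t = unit_propagate(clauses, var, True)
            conflict_f, f_f = unit_propagate(clauses, var, False)
            if conflict_t and conflict_f:
                return "backtrack", None, None           # contradiction detected early
            if conflict_t:
                fixed[var], clauses = False, f_f         # only the False branch survives
            elif conflict_f:
                fixed[var], clauses = True, f_t
            else:
                candidates.append(var)                   # remains a candidate for branching
        return "branch", fixed, candidates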
Leonard, J., Treffner, P., & Thornton, J. R. (2003). Tau Guidance for Mobile Soccer Robots. Studies in Perception and Action VII, 169-172, Psychology Press, ISBN 978-0805848052.
Abstract: Traditional approaches to mobile robot guidance have utilised an internal model of the environment constructed from sensor data in order to plan a course of action. Although this approach has been challenged by behaviour-based robotics (e.g., Brooks, 1991), the creation of “smart perceptual instruments” that attempt to directly couple perception and action has been seriously addressed by the ecological robotics paradigm founded upon Gibson’s ecological optics (Duchon et al., 1995). We explore the possibility of using tau information for guidance of a mobile robot. More specifically, an attempt to implement some of the navigation techniques using tau as outlined by Lee (1998) was undertaken. Problems encountered during implementation and possible solutions are given. Although Lee introduced the concept of using tau (τ) for the guidance of movement (Lee, 1998) and although tau is usually said to represent the time to closure of a gap, this is not entirely true. A negative tau represents the time to closure of the gap, while a positive tau represents the time since the closure of the gap (i.e., the gap is opening). A gap can represent a distance, an angle, a difference in force or any other gap. The information on time to closure provided by a sensor can therefore be used to control the closure of a gap and thus to guide movement. Lee also introduced the idea of guidance through tau coupling in which the ratio of taus of different gaps is kept invariant.
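The quantities involved are simple to state: the tau of a gap is the gap divided by its rate of change, and tau coupling keeps tau_x = k * tau_y as the gaps close. A minimal sketch, leaving out the robot's sensing and actuation:

    def tau(gap, gap_rate):
        """Tau of a gap: gap size divided by its rate of change.
        Negative while the gap is closing (time to closure), positive once it is opening."""
        return gap / gap_rate if gap_rate != 0 else float("inf")

    def coupled_gap_rate(gap_x, gap_y, gap_y_rate, k):
        """Tau coupling: choose the rate of gap x so that tau_x = k * tau_y."""
        tau_y = tau(gap_y, gap_y_rate)
        return gap_x / (k * tau_y)   # the rate that keeps the two taus in constant ratio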
Pullan, W., Zhao, L., & Thornton, J. R. (2003). Estimating Problem Metrics for SAT Clause Weighting Local Search. AI 2003: 16th Australian Joint Conference on Artificial Intelligence, Lecture Notes in Computer Science, 2903, 137-149, Springer, ISSN 0302-9743.
Stantic, B., Thornton, J. R., & Sattar, A. (2003). A Novel Approach to Model NOW in Temporal Databases. In TIME-ICTL 2003: Proceedings of the 10th International Symposium on Temporal Representation and Reasoning / 4th International Conference on Temporal Logic, IEEE, pp. 174-181.
Abstract: In bitemporal databases, current facts and transaction states are modelled using a special value to represent the current time (such as a minimum or maximum timestamp or NULL). Previous studies indicate that the choice of value for now (i.e. the current time) significantly influences the efficiency of accessing bitemporal data. This paper introduces a new approach to represent now, in which current tuples and facts are represented as points on the transaction time and valid time line respectively. This allows us to exploit the computational advantages of point-based query languages. Via an empirical study, we demonstrate that our new approach to representing now offers considerable performance benefits over existing techniques for accessing bitemporal data.
Zhou, L., Thornton, J. R., & Sattar, A. (2003). Dynamic Agent Ordering in Distributed Constraint Satisfaction Problems. AI 2003: 16th Australian Joint Conference on Artificial Intelligence, Lecture Notes in Computer Science, 2903, 427-439, Springer, ISSN 0302-9743.
Abstract: The distributed constraint satisfaction problem (CSP) is a general formalisation used to represent problems in distributed multiagent systems. To deal with realistic problems, multiple local variables may be required within each autonomous agent. A number of heuristics have been developed for solving such multiple local variable problems. However, these approaches do not always guarantee agent independence and the size of problem that can be solved is fairly limited. In this paper, we are interested in increasing search efficiency for distributed CSPs. To this end we present a new algorithm using unsatisfied constraint densities to dynamically determine agent ordering during the search. The independence of agents is guaranteed and agents without neighbouring relationships can run concurrently and asynchronously. As a result of using a backtracking technique to solve the local problem, we have been able to reduce the number of nogoods stored during the search, leading to further efficiency gains. In an empirical study, we show our new approach outperforms an equivalent static ordering algorithm and a current state-of-the-art technique both in terms of execution time and memory usage.
2002
Ferreira Jr., V., Thornton, J. R. & Leonard, J. (2002). A Subsumption Architecture for Robotic Soccer. In: Proceedings of the 2002 FIRA Robot World Congress, KAIST (Korea Advanced Institute of Science and Technology), pp. 648-654.
Abstract: The purpose of this paper is to report on our investigation into the feasibility of implementing a pure subsumption architecture for controlling our team of Mirosot Robot Soccer players, the RoboCoasters. This extends previous work on subsumption by considering a highly dynamic and competitive environment where speed and accuracy are paramount. The major finding of our study is that a rigid adherence to the incremental development concept of subsumption has proved impractical when applied to our single architecture approach to robotic soccer. This finding has led us to propose two extensions to subsumption that simplify the development process and address the need for continual improvement in lower level behaviours. These are: (a) the creation of a chief behaviour as a permanent top layer for the architecture; and (b) the development of modules that can be changed within a layer without changing the overall layer’s structure.
Kravchuk, O., Pullan, W., Thornton, J. R. & Sattar, A. (2002). An Investigation of Variable Relationships in 3-SAT Problems. AI 2002: 15th Australian Joint Conference on Artificial Intelligence, Lecture Notes in Computer Science, 2557, 579-590, Springer, ISSN 0302-9743.
Abstract: To date, several types of structure for finite Constraint Satisfaction Problems have been investigated with the goal of either improving the performance of problem solvers or allowing efficient problem solvers to be identified. Our aim is to extend the work in this area by performing a structural analysis in terms of variable connectivity for 3-SAT problems. Initially structure is defined in terms of the compactness of variable connectivity for a problem. Using an easily calculable statistic developed to measure this compactness, a test was then created for identifying 3-SAT problems as either compact, loose or unstructured (or uniform). A problem generator was constructed for generating 3-SAT problems with varying degrees of structure. Using problems from this problem generator and existing problems from SATLIB, we investigated the effects of this type of structure on satisfiability and solvability of 3-SAT problems. For the same problem length, it is demonstrated that satisfiability and solvability are different for structured and uniform problems generated by the problem generator.
Thornton, J. R., Bain, S., Sattar, A. & Pham, D. (2002). A Two Level Local Search for MAX-SAT Problems with Hard and Soft Constraints. AI 2002: 15th Australian Joint Conference on Artificial Intelligence, Lecture Notes in Computer Science, 2557, 603-614, Springer, ISSN 0302-9743.
Abstract: Local search techniques have attracted considerable interest in the AI community since the development of GSAT for solving large propositional SAT problems. Newer SAT techniques, such as the Discrete Lagrangian Method (DLM), have further improved on GSAT and can also be applied to general constraint satisfaction and optimisation. However, little work has applied local search to MAX-SAT problems with hard and soft constraints. As many real-world problems are best represented by hard (mandatory) and soft (desirable) constraints, the development of effective local search heuristics for this domain is of significant practical importance.
This paper extends previous work on dynamic constraint weighting by introducing a two-level heuristic that switches search strategy according to whether a current solution contains unsatisfied hard constraints. Using constraint weighting techniques derived from DLM to satisfy hard constraints, we apply a Tabu search to optimise the soft constraint violations. These two heuristics are further combined with a dynamic hard constraint multiplier that changes the relative importance of the hard constraints during the search. We empirically evaluate this new algorithm using a set of randomly generated 3-SAT problems of various sizes and difficulty, and in comparison with various state-of-the-art SAT techniques. The results indicate that our dynamic, two-level heuristic offers significant performance benefits over the standard SAT approaches.
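The switching logic described above could look roughly like this. The move representation, the tabu bookkeeping and the hard-constraint multiplier are simplified placeholders rather than the algorithm as published.

    def two_level_step(current, moves, hard, soft, weights, tabu, iteration,
                       hard_multiplier=10, tabu_tenure=10):
        """One move of a two-level hard/soft local search (illustrative sketch).

        current    : the current assignment
        moves      : list of (move_id, candidate_assignment) pairs
        hard, soft : constraints as callables assignment -> bool (True = satisfied)
        weights    : dict constraint -> weight, grown elsewhere when the search stagnates
        tabu       : dict move_id -> iteration until which that move is forbidden
        """
        def cost(assignment):
            h = sum(weights.get(c, 1) for c in hard if not c(assignment))
            s = sum(1 for c in soft if not c(assignment))
            return hard_multiplier * h + s          # hard violations dominate the evaluation

        if any(not c(current) for c in hard):
            # Level 1: hard constraints violated -> weighted descent driven by hard cost.
            return min(moves, key=lambda m: cost(m[1]))
        # Level 2: all hard constraints hold -> tabu search over the soft violations.
        allowed = [m for m in moves if tabu.get(m[0], 0) <= iteration] or moves
        move_id, state = min(allowed, key=lambda m: cost(m[1]))
        tabu[move_id] = iteration + tabu_tenure
        return move_id, state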
Thornton, J. R., Beaumont, M., Sattar, A. & Maher, M. (2002). Applying Local Search to Temporal Reasoning. In: TIME 2002: 9th International Symposium on Temporal Reasoning and Representation, IEEE, pp. 94-99, ISSN 1530-1311.
Abstract: Local search techniques have attracted considerable interest in the Artificial Intelligence (AI) community since the development of GSAT [9] and the min-conflicts heuristic [5] for solving large propositional satisfiability (SAT) problems and binary Constraint Satisfaction Problems (CSPs) respectively. Newer SAT techniques, such as the Discrete Lagrangian Method (DLM) [10], have significantly improved on GSAT and can also be applied to general constraint satisfaction and optimisation. However, local search has yet to be successfully employed in solving Temporal Constraint Satisfaction Problems (TCSPs).
In this paper we argue that current formalisms for representing TCSPs are inappropriate for a local search approach, and we propose an alternative CSP-based endpoint ordering model for temporal reasoning. In particular we look at modelling and solving problems formulated using Allen’s interval algebra (IA) [1] and propose a new constraint weighting algorithm derived from DLM. Using a set of randomly generated IA problems, we show that our local search outperforms Nebel’s backtracking algorithm [6] on larger and more difficult consistent problems.
Thornton, J. R., Leonard, J., Wiseby, R. & Lee, Y. (2002). Shape Recognition and Enhanced Control Systems for Robot Soccer. In: Proceedings of the 2002 FIRA Robot World Congress, KAIST (Korea Advanced Institute of Science and Technology), pp. 670-675.
Abstract: In this paper, we report on the current developments of our Mirosot Robot Soccer team, the RoboCoasters, in preparation for the FIRA Robot World Cup 2002 in Korea. We describe a number of enhancements to both the vision systems and the control system. Vision enhancements include a unique shape recognition system, expanded from recognition of three shapes to five shapes for the middle league competition. Control enhancements include improvements to our goalkeeper and attacker’s behaviour, and the introduction of role swapping between the robots. Finally, we outline our proposed future research for the team.
Thornton, J. R., Pullan, W. & Terry, J. (2002). Towards Fewer Parameters for Clause Weighting SAT Algorithms. AI 2002: 15th Australian Joint Conference on Artificial Intelligence, Lecture Notes in Computer Science, 2557, 603-614, Springer, ISSN 0302-9743.
Abstract: Considerable progress has recently been made in using clause weighting algorithms such as DLM and SDF to solve SAT benchmark problems. While these algorithms have outperformed earlier stochastic techniques on many larger problems, this improvement has been bought at the cost of extra parameters and the complexity of fine tuning these parameters to obtain optimal run-time performance.
This paper examines the use of parameters, specifically in relation to DLM, to identify underlying features in clause weighting that can be used to eliminate or predict workable parameter settings. To this end we propose and empirically evaluate a simplified clause weighting algorithm that replaces the tabu list and flat moves parameter used in DLM. From this we show that our simplified clause weighting algorithm is competitive with DLM on the four categories of SAT problem for which DLM has already been optimised.
2001
Beaumont, M., Sattar, A., Maher, M., & Thornton, J. R. (2001). Solving Overconstrained Temporal Reasoning Problems. AI 2001: 14th Australian Joint Conference on Artificial Intelligence, Lecture Notes in Computer Science, 2256, 603-614, Springer, ISSN 0302-9743.
Abstract: Representing and reasoning with temporal information is an essential part of many tasks in AI, such as scheduling, planning and natural language processing. Two influential frameworks for representing temporal information are interval algebra and point algebra. Given a knowledge base consisting of temporal relations, the main reasoning problem is to determine whether this knowledge base is satisfiable, i.e. whether there is a scenario which is consistent with the information provided. However, when a given set of temporal relations is unsatisfiable, no further reasoning is performed. We argue that many real world problems are inherently overconstrained, and that we cannot simply ignore them; we must address them. This paper investigates approaches for handling overconstrainedness in temporal reasoning. We adapt a well-studied notion of partial satisfaction to define partial scenarios, i.e. optimal partial solutions. We propose two reasoning procedures for computing an optimal partial solution to a problem, or a complete solution if one exists.
2000
Nagarajan, S., Goodwin, S., Sattar, A. & Thornton, J. R. (2000). On Dual Encodings for Non-Binary Constraint Satisfaction Problems. CP 2000: 6th International Conference on the Principles and Practice of Constraint Programming. Lecture Notes in Computer Science, 1894, 531-536, Springer, ISSN 0302-9743.
Abstract: In [Walsh and Stergiou, 1999] enforcing arc consistency (AC) in the dual encoding was shown to strictly dominate enforcing AC on the hidden or GAC on the original problem. We introduce a dual encoding that requires only a small subset of the original constraints to be stored in extension, while the remaining constraints can be stored intensionally. In this paper we present a theoretical comparison between the pruning achieved by enforcing AC on this dual encoding, versus enforcing GAC and dual arc consistency on the standard encoding. We show how the covering based encoding retains the dominance over enforcing GAC on the original problem, while using less space than the existing dual encoding.
Thornton, J. R. (2000). Constraint Weighting Local Search for Constraint Satisfaction. PhD Thesis, School of Computing and Information Technology, Griffith University, Brisbane, Australia, 160 pp.
Abstract: One of the challenges for the constraint satisfaction community has been to develop an automated approach to solving Constraint Satisfaction Problems (CSPs) rather than creating specific algorithms for specific problems. Much of this work has concentrated on the development and improvement of general purpose backtracking techniques. However, the success of relatively simple local search techniques on larger satisfiability problems [Selman et al. 1992] and CSPs such as the n-queens [Minton et al. 1992] has caused interest in applying local search to constraint satisfaction. In this thesis we look at the usefulness of constraint weighting as a local search technique for constraint satisfaction. The work is based on the clause weighting ideas of Selman and Kautz [1993] and Morris [1993] and applies, evaluates and extends these ideas from the satisfiability domain to the more general domain of CSPs. Specifically, the contributions of the thesis are:
- The introduction of a local search taxonomy. We examine the various better known local search techniques and recognise four basic strategies: restart, randomness, memory and weighting.
- The extension of the CSP modelling framework. In order to represent and efficiently solve more realistic problems we extend the CSP modelling framework to include array-based domains and array-based domain use constraints.
- The empirical evaluation of constraint weighting. We compare the performance of three constraint weighting strategies on a range of CSP and satisfiability problems and with several other local search techniques. We find that no one technique dominates in all problem domains.
- The characterisation of constraint weighting performance. Based on our empirical study we identify the weighting behaviours and problem features that favour constraint weighting. We conclude weighting does better on structured problems where the algorithm can recognise a harder sub-group of constraints.
- The extension of constraint weighting. We introduce an efficient arc weighting algorithm that additionally weights connections between constraints that are simultaneously violated at a local minimum. This algorithm is empirically shown to outperform standard constraint weighting on a range of CSPs and within a general constraint solving system. Also we look at combining constraint weighting with other local search heuristics and find that these hybrid techniques can do well on problems where the parent algorithms are evenly matched.
- The application of constraint weighting to over constrained domains. Our empirical work suggests constraint weighting does well for problems with distinctions between constraint groups. This led us to investigate solving real-world over constrained problems with hard and soft constraint groups and to introduce two dynamic constraint weighting heuristics that maintain a distinction between hard and soft constraint groups while still adding weights to violated constraints in a local minimum. In an empirical study, the dynamic schemes are shown to outperform other fixed weighting and non-weighting systems on a range of real world problems. In addition, the performance of weighting is shown to degrade less severely when soft constraints are added to the system, suggesting constraint weighting is especially applicable to realistic, hard and soft constraint problems.
1999
Thornton, J. R., & Sattar, A. (1999). On the Behaviour and Application of Constraint Weighting. CP 1999: 5th International Conference on the Principles and Practice of Constraint Programming, Lecture Notes in Computer Science, 1713, 446-460, Springer, ISSN 0302-9743.
Abstract: In this paper we compare the performance of three constraint weighting schemes with one of the latest and fastest WSAT heuristics: rnovelty. We extend previous results from satisfiability testing by looking at the broader domain of constraint satisfaction and test for differences in performance using randomly generated problems and problems based on realistic situations and assumptions. We find constraint weighting produces fairly consistent behaviour within problem domains, and is more influenced by the number and interconnectedness of constraints than the realism or randomness of a problem. We conclude that constraint weighting is better suited to smaller structured problems, where it can clearly distinguish between different constraint groups.
1998
Thornton, J. R., & Sattar, A. (1998). Using Arc Weights to Improve Iterative Repair. In AAAI 1998: 15th National Conference on Artificial Intelligence, MIT Press, pp. 367-372.
Abstract: One of the surprising findings from the study of CNF satisfiability in the 1990s has been the success of iterative repair techniques, and in particular of weighted iterative repair. However, attempts to improve weighted iterative repair have either produced marginal benefits or rely on domain specific heuristics. This paper introduces a new extension of constraint weighting called Arc Weighting Iterative Repair, that is applicable outside the CNF domain and can significantly improve the performance of constraint weighting. The new weighting strategy extends constraint weighting by additionally weighting the connections or arcs between constraints. These arc weights represent increased knowledge of the search space and can be used to guide the search more efficiently. The main aim of the research is to develop an arc weighting algorithm that creates more benefit than overhead in reducing moves in the search space. Initial empirical tests indicate the algorithm does reduce search steps and times for a selection of CNF and CSP problems.
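The bookkeeping added by arc weighting can be sketched in a few lines: alongside the usual constraint weights, a weight is kept for each pair of constraints violated together at a local minimum. The constraint identifiers below are hypothetical and the evaluation function that consumes the arc weights is omitted.

    from collections import defaultdict
    from itertools import combinations

    def update_weights_at_minimum(violated, constraint_weights, arc_weights):
        """Constraint weighting plus arc weighting at a local minimum (illustrative sketch).

        violated           : constraints unsatisfied at the current local minimum
        constraint_weights : dict constraint -> weight
        arc_weights        : dict frozenset({c1, c2}) -> weight on the arc between them
        """
        for c in violated:
            constraint_weights[c] += 1                 # ordinary constraint weighting
        for c1, c2 in combinations(violated, 2):
            arc_weights[frozenset((c1, c2))] += 1      # weight the arc joining co-violated constraints

    # Typical setup (hypothetical constraint identifiers):
    constraint_weights = defaultdict(int)
    arc_weights = defaultdict(int)
    update_weights_at_minimum(["c1", "c3", "c7"], constraint_weights, arc_weights)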
Thornton, J. R., & Sattar, A. (1998). Dynamic Constraint Weighting for Over Constrained Problems. PRICAI 1998: 5th Pacific Rim International Conference on Artificial Intelligence, Lecture Notes in Computer Science, 1531, 377-388, Springer, ISSN 0302-9743.
Abstract: Many real-world constraint satisfaction problems (CSPs) can be over-constrained but contain a set of mandatory or hard constraints that have to be satisfied for a solution to be acceptable. Recent research has shown that constraint weighting local search algorithms can be very effective in solving a variety of CSPs. However, little work has been done in applying such algorithms to over-constrained problems with hard constraints. The difficulty has been finding a weighting scheme that can weight unsatisfied constraints and still maintain the distinction between the mandatory and non-mandatory constraints. This paper presents a new weighting strategy that simulates the transformation of an over constrained problem with mandatory constraints into an equivalent problem where all constraints have equal importance, but the hard constraints have been repeated. In addition, two dynamic constraint weighting schemes are introduced that alter the number of simulated hard constraint repetitions according to feedback received during the search. The dynamic constraint weighting algorithms are compared with two algorithms that maintain a fixed number of hard constraint repetitions, using a test bed of over-constrained timetabling and nurse rostering problems. The results show the dynamic strategies outperform both fixed repetition approaches.
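The simulated-repetition idea can be captured by an evaluation function in which every hard constraint counts a variable number of times. The feedback rule shown for adjusting that number is only a guess at the flavour of the dynamic schemes, not the published ones.

    def weighted_cost(assignment, hard, soft, weights, repetitions):
        """Cost where every hard constraint is simulated as `repetitions` copies (sketch)."""
        h = sum(weights[c] for c in hard if not c(assignment))
        s = sum(weights[c] for c in soft if not c(assignment))
        return repetitions * h + s

    def adjust_repetitions(repetitions, hard_satisfied, lower=2, upper=1000):
        """Illustrative feedback rule: relax the multiplier while the hard constraints
        stay satisfied, tighten it as soon as one is violated."""
        if hard_satisfied:
            return max(lower, repetitions - 1)
        return min(upper, repetitions * 2)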
1997
Thornton, J. R., & Sattar, A. (1997). Applied Partial Constraint Satisfaction Using Weighted Iterative Repair. AI 1997: 10th Australian Joint Conference on Artificial Intelligence, Lecture Notes in Computer Science, 1342. Springer, ISSN 0302-9743.
Abstract: Many real-world constraint satisfaction problems (CSPs) can be over-constrained or too large to solve using a standard constructive / backtracking approach. Instead, faster heuristic techniques have been proposed that perform a partial search of all possible solutions using an iterative repair or hill-climbing approach. The main problem with such approaches is that they can become stuck in local minima. Consequently, various strategies or meta-heuristics have been developed to escape from local minima. This paper investigates the application of one such meta-heuristic, weighted iterative repair, to solving a real-world problem of scheduling nurses at an Australian hospital. Weighted iterative repair has already proved successful in solving various binary CSPs. The current research extends this work by looking at a non-binary problem formulation, and partial constraint satisfaction involving hard and soft constraints. This has led to the development of a soft constraint heuristic to improve the level of soft constraint optimisation and an extension of the original weighted iterative repair that avoids certain forms of cyclic behaviour. It is also demonstrated that weighted iterative repair can learn from repeatedly solving the same problem, and that restarting the algorithm on the same problem can result in faster execution times. The overall results show that weighted iterative repair finds better quality solutions than a standard iterative repair, whilst approaching near optimal solutions in less time than an alternative integer programming approach.
Thornton, J. R., & Sattar, A. (1997). Nurse Rostering and Integer Programming Revisited. In: ICCIMA 1997: Proceedings of the International Conference on Computational Intelligence and Multimedia Applications, Griffith University, pp. 49-58.
Abstract: Design and development of robust and reliable scheduling algorithms has been an active research area in Computer Science and Artificial Intelligence. Given that the general problem is computationally intractable, many heuristic-based techniques have been developed, while other approaches have used optimising techniques for specific and limited problem domains. In this paper, we consider a large, real world scheduling problem of nurse rostering at the Gold Coast Hospital, and we propose an optimising integer programming-based approach to solving the problem. The study extends previous work in the area by looking at a more complex problem domain and by introducing a mathematical model that is capable of capturing the important details of the domain without becoming unrealistically large. We also describe a problem decomposition heuristic to effectively manage the computational resources. Schedule quality and staff allocation quality measures are introduced to evaluate the automated nurse schedules. Finally, we conclude that an integer programming-based approach to nurse rostering is feasible for realistic problem sizes and is sufficiently flexible to handle overconstrained problems with competing goals.
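To make the flavour of such a model concrete, here is a deliberately tiny set-partitioning sketch written with the PuLP library. The library choice, the data and the cost figures are illustrative assumptions, not the paper's model: each binary variable selects one pre-generated weekly schedule per nurse, subject to per-shift coverage.

    from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

    # Hypothetical data: two nurses, two candidate schedules each, given as the
    # set of shifts covered plus a preference cost.
    schedules = {
        ("alice", 0): ({"mon_am", "tue_am"}, 1),
        ("alice", 1): ({"mon_pm", "wed_am"}, 3),
        ("bob", 0):   ({"mon_pm", "wed_am"}, 2),
        ("bob", 1):   ({"mon_am", "tue_am"}, 1),
    }
    demand = {"mon_am": 1, "mon_pm": 1, "tue_am": 1, "wed_am": 1}

    model = LpProblem("nurse_rostering_sketch", LpMinimize)
    x = {key: LpVariable(f"x_{key[0]}_{key[1]}", cat=LpBinary) for key in schedules}

    # Objective: minimise the total preference cost of the chosen schedules.
    model += lpSum(cost * x[key] for key, (_, cost) in schedules.items())

    # Each nurse is assigned exactly one schedule.
    for nurse in {n for n, _ in schedules}:
        model += lpSum(x[key] for key in schedules if key[0] == nurse) == 1

    # Each shift must be covered by at least the demanded number of nurses.
    for shift, need in demand.items():
        model += lpSum(x[key] for key, (shifts, _) in schedules.items() if shift in shifts) >= need

    model.solve()
    chosen = [key for key in schedules if x[key].value() == 1]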
1996
Thornton, J. R., & Sattar, A. (1996). An Integer Programming-Based Nurse Rostering System. In ASIAN 1996: 2nd Asian Computing Science Conference, Lecture Notes in Computer Science, 1179, 357-358, Springer, ISSN 0302-9743.
Abstract: This paper considers a real-world rostering problem at the Gold Coast Hospital, Queensland. Nurse rostering is a constraint satisfaction problem (CSP) [3]. The task is to find a consistent allocation of shift values, for a group of nurses, over a fixed period of time, that satisfies a set of rostering constraints. These constraints include i) acceptable shift combinations (or schedules) for individual nurses, and ii) acceptable overall staffing levels for each shift. Previous work in the area has looked at optimally solving small or simplified problems [5] or using non-optimal heuristics to solve larger, more realistic problems [1]. The current research uses an optimising integer programming (IP) approach to solve a large rostering problem within reasonable time and computing resource bounds. The paper shows how the nurse rostering problem can be simplified by combining different modelling and decomposition techniques. Finally, the research demonstrates an IP model can capture “difficult” constraints and remain flexible in changing circumstances.
1995
Thornton, J. R. (1995). An Enhanced Cyclic Descent Algorithm for Nurse Rostering. Honours Thesis, Faculty of Engineering and Applied Science, Griffith University, Gold Coast, Australia, 145 pp.
Abstract: The study introduces an enhanced cyclic descent algorithm for nurse rostering. The algorithm is compared to four other rostering algorithms and to manually generated roster solutions obtained from the Gold Coast Hospital. Three criteria are developed with which the roster generation methods are assessed: these are roster schedule quality, roster shift allocation quality and execution time. A statistical analysis shows that the enhanced cyclic descent algorithm has the best overall performance. An integer linear programming algorithm and an enhanced simulated annealing algorithm are also shown to perform well with smaller problems.