The same process can be used to build parse trees or Abstract Syntax Trees (ASTs), which can later be translated into executable code.
In preparation for evolving ASTs into software solutions, in this post we’ll explore how problem solving could be modeled as a generalized Turing Machine.
Defining a Turing Machine
Earlier, we defined problem solving as the search for one or more sequences of instructions or services which will transform an initial context into a target context.
To generalize the Turing machine (TM), we'll expand the tape into multiple (variable) dimensions, with each dimension associated with a specific property, a specific reader head, and, optionally, a writer head. We'll associate the current tape vector with the context; reading from and writing to the context are then similar to the getter and setter methods used in Object Oriented Programming.
Writing on the multidimensional tape would be equivalent to setting one or more properties for objects in the context. Another useful analogy would be to consider the reader heads as sensors and the writer heads as actuators.
In this case, the Turing machine starts to resemble a robot, and it becomes important to consider that each getter and setter method has a cost, equivalent to the energy spent to operate it on each use, as well as a duration.
For such a robot, we could observe that any operation of the machine can be decomposed into sequences of basic getters and setters and each operation will have an overall duration and cost.
Furthermore, the robotic metaphor suggests that the machine will be practically restricted to the activities which can be performed within the currently available energy budget, that certain activities must replenish that budget, and that some form of planning will help make better use of the available resources to reach a specific objective (defined as a vector in the multidimensional context/tape).
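The metered context described above can be sketched in code. The class below is a minimal illustration, not an implementation from the article: property names, costs, and durations are all invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """A sketch of the multidimensional tape: a property vector plus an
    energy budget, with metered reader/writer heads (sensors/actuators)."""
    properties: dict = field(default_factory=dict)
    energy: float = 100.0   # remaining energy budget
    elapsed: float = 0.0    # total duration of operations so far

    def read(self, key, cost=1.0, duration=0.1):
        """A reader head / sensor: reading a dimension spends energy and time."""
        if cost > self.energy:
            raise RuntimeError("energy budget exhausted")
        self.energy -= cost
        self.elapsed += duration
        return self.properties.get(key)

    def write(self, key, value, cost=2.0, duration=0.2):
        """A writer head / actuator: writing is typically costlier than reading."""
        if cost > self.energy:
            raise RuntimeError("energy budget exhausted")
        self.energy -= cost
        self.elapsed += duration
        self.properties[key] = value

ctx = Context()
ctx.write("position", (3, 4))
assert ctx.read("position") == (3, 4)
```

Any higher-level operation of the machine then decomposes into sequences of these metered reads and writes, and its overall cost and duration are simply the accumulated `energy` drain and `elapsed` time.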
Turing had to reduce his automatic machine to operate with a limited number of mathematical symbols and operators, as well as the simplest representation of data. Also, it was enough for him to consider just one head for the machine, with the capability to read and write one cell on the tape.
Updating Our Turing Machine to the Modern Day
Nowadays, we can consider building Turing machines with a large number of instructions and huge numbers of combinations of instructions which could be stored in databases.
We could even consider internet-scale code databases where these combinations of instructions – we'll call them microservices (or functions/methods grouped in APIs, web services, etc.) – can be indexed and ranked according to input and output dimensions, cost, duration/performance, and so on.
Also, in a modern Turing machine implementation, the number of reader/writer heads, or sensors/actuators, could be quite high, comparable to the number of sensors and actuators in living organisms: millions of external and internal sensors and hundreds of muscles.
Today we don’t have to limit ourselves to reading only one character (0 or 1) from the tape. Using a graph query language like Cypher, we could read or search for very complex patterns in the current graph context, then write or apply a complex transformation in the form of a microservice, thus jumping to another graph context – hopefully one closer to the target.
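One "rule" of such a machine can be pictured as a pair of Cypher scripts: a read pattern that probes the current graph context, and a write transformation that fires when the pattern matches. The snippet below is a sketch only; the labels, relationship types, and properties (`Frame`, `SUMMARIZES`, `rgb`) are invented for illustration.

```python
# A hypothetical read pattern: find frames that have RGB data but no summary yet.
READ_PATTERN = """
MATCH (f:Frame)
WHERE f.rgb IS NOT NULL AND NOT (f)<-[:SUMMARIZES]-()
RETURN f
"""

# A hypothetical write transformation: create a summary node for one such frame,
# moving the graph context one step closer to the target context.
WRITE_TRANSFORMATION = """
MATCH (f:Frame) WHERE id(f) = $frame_id
CREATE (s:Summary {rgb: f.rgb})-[:SUMMARIZES]->(f)
"""

def applicable(read_result):
    """The machine fires the transformation only when the read pattern matched."""
    return len(read_result) > 0
```

Reading is then pattern matching over the whole context graph rather than scanning a single cell, and writing is a graph transformation rather than printing a single symbol.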
As such, the rules of our generalized Turing machine will be microservices selected from a market and annotated with one or more Cypher queries which provide the set of inputs for each microservice.
The Cypher queries can be understood as the equivalent of the documentation for APIs or web services which describe when and how to use them.
The advantage of annotating microservices with Cypher scripts is that we can automate, and significantly speed up, the mental work of reading documentation and figuring out how to apply a service to a problem context. Furthermore, placing such microservices in a market offers the option of having complex Cypher queries and services written only once and then reused for a small fee, for the benefit of the author or owner.
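A market of annotated microservices could look something like the toy registry below. Everything here is an assumption for illustration: the service names, the Cypher annotations, and the cost/duration figures are invented, and ranking by cost then duration is just one plausible policy.

```python
from dataclasses import dataclass

@dataclass
class Microservice:
    name: str
    annotation: str   # Cypher query describing the input pattern the service needs
    cost: float       # energy/fee to run the service once
    duration: float   # time taken per run

# A hypothetical market of two services for building a pyramid graph.
market = [
    Microservice("link_parents",
                 "MATCH (c:Cell) WHERE NOT (c)<-[:CHILD]-() RETURN c",
                 cost=5.0, duration=0.5),
    Microservice("average_rgb",
                 "MATCH (p:Cell)-[:CHILD]->(c:Cell) RETURN p, collect(c)",
                 cost=2.0, duration=0.1),
]

def rank(candidates):
    """Rank applicable services by cost, then duration, the kind of indexing
    an internet-scale code database could provide."""
    return sorted(candidates, key=lambda m: (m.cost, m.duration))

best = rank(market)[0]
```

A planner searching for a solution path would run each candidate's annotation query against the current context graph to test applicability, then pick among the applicable services using rankings like this one.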
Example: A Video Recognition Project
As this article series progresses, we’ll continue to explore how living organisms and businesses can be modeled as generalized Turing machines or theorem provers, and how markets of annotated microservices could speed up the search for innovative solutions in business and technology.
For interested readers, we propose the following exercise: identify a set of parse tree generation microservices, similar to those used in the previous post, which could be combined into an abstract syntax tree code solution for an AI system.
Imagine that you are a software engineer in a Facebook team working on a video recognition project, and your task is to build a data structure to store the delta of video frames and their summaries in a pyramid graph as in the picture below.
You decide to code the pyramid in Neo4j, where each node averages in its properties the RGB values of nine nodes on the level below. The image above depicts the top three levels of such a graph data structure.
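The averaging rule can be sketched without a database: each parent summarizes a 3x3 block of children, so each level is a 3x-downsampled version of the one below. Plain Python lists stand in for Neo4j nodes here, and the grid layout is an assumption about how the nine children of each parent are arranged.

```python
def build_parent_level(level):
    """level: a 2D grid of (r, g, b) tuples whose side lengths are multiples of 3.
    Returns the next level up, where each node averages a 3x3 block of children."""
    h, w = len(level), len(level[0])
    parents = []
    for i in range(0, h, 3):
        row = []
        for j in range(0, w, 3):
            # Gather the nine child nodes under this parent.
            block = [level[i + di][j + dj] for di in range(3) for dj in range(3)]
            # Average each RGB channel across the nine children.
            avg = tuple(sum(channel) / 9 for channel in zip(*block))
            row.append(avg)
        parents.append(row)
    return parents

base = [[(9, 9, 9)] * 3 for _ in range(3)]  # one uniform 3x3 block of children
top = build_parent_level(base)              # -> [[(9.0, 9.0, 9.0)]]
```

Applying `build_parent_level` repeatedly to a frame (or to the delta between frames) yields the successive levels of the pyramid; in the Neo4j version, the same averaging would be stored in node properties and the block membership in relationships.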
Homework: What Microservices Would You Need?
Although the graphs of sequences of pyramids generated from video streams are fascinating (try to imagine how you would classify them), for simplicity you should focus only on the piece of code that generates the relationships in the graph.
What microservices (API functions, etc.) would you need to generate and combine into a parse tree for wiring the pyramid relationships? And what Cypher queries should be associated with these microservices to provide the proper input and output in the problem context?
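As one possible starting point for the exercise (not the article's solution), a single wiring microservice could carry a Cypher annotation like the one below. The node label (`Cell`), the grid-coordinate properties (`level`, `row`, `col`), and the relationship type (`CHILD`) are all assumptions.

```python
# Hypothetical annotation for a pyramid-wiring microservice: connect each
# parent cell on $level to the 3x3 block of child cells on the level below,
# using integer division of the child's grid coordinates to find its parent.
WIRE_PYRAMID = """
MATCH (p:Cell {level: $level}), (c:Cell {level: $level - 1})
WHERE c.row / 3 = p.row AND c.col / 3 = p.col
MERGE (p)-[:CHILD]->(c)
"""
```

Separate microservices might then handle creating the cells of each level and averaging the RGB properties up the `CHILD` relationships.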
If you decide to code the solution, try to time your programming effort, save intermediary steps in Git, visualize intermediate parse trees with ANTLR, and finally animate the whole abstract syntax tree evolution with KeyLines.
Otherwise, just wait for the next post in this series to see how a parse tree of Cypher-annotated microservices can be used to build the pyramid graph.