This is the first look at the Brain Simulator III. I’ve just made the GitHub repository public so you can be among the first to try it out.
https://github.com/FutureAIGuru/BrainSimIII
How You Can Participate
First Steps: Please download the code and build it with Microsoft Visual Studio Community Edition (available free HERE). Open the Test1.xml network file and try out the four available dialog functions. Report issues by leaving comments below. Then attend the Online Meeting in January.
What’s there?
The Universal Knowledge Store (UKS) and its hierarchical display dialog show how knowledge can be represented as a graph of nodes connected by edges: “Things” connected by “Relationships” in UKS parlance. The provided UKS demonstrates how data is represented, stored, and queried. Updated UKS content can be saved to XML files for easy transfer.
The “Add Statement” dialog lets you add new information to the UKS. Enter a relationship with a source, relationship type, and target. If the nodes do not exist, they will be created. A “?” button in the lower right of each dialog displays help on how to use it.
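To make the Thing/Relationship idea concrete, here is a minimal Python sketch. It is illustrative only: the names and classes below are made up for the example and are not the actual UKS code in the repository.

```python
# Toy sketch of Things and Relationships (illustrative names only,
# not the actual UKS classes in the repository).

class Thing:
    def __init__(self, label):
        self.label = label
        self.relationships = []          # outgoing edges from this Thing

class Relationship:
    def __init__(self, source, rel_type, target):
        self.source, self.rel_type, self.target = source, rel_type, target

class KnowledgeStore:
    def __init__(self):
        self.things = {}                 # label -> Thing

    def get_or_add(self, label):
        # As in the Add Statement dialog: missing nodes are created on the fly.
        if label not in self.things:
            self.things[label] = Thing(label)
        return self.things[label]

    def add_statement(self, source, rel_type, target):
        s, r, t = (self.get_or_add(x) for x in (source, rel_type, target))
        rel = Relationship(s, r, t)
        s.relationships.append(rel)
        return rel

store = KnowledgeStore()
store.add_statement("Fido", "is-a", "dog")
store.add_statement("dog", "has", "tail")
```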
The “Query” dialog demonstrates how UKS content can be retrieved. The query process handles the following (a short sketch follows the list):
- Inheritance
- Exceptions (to inherited knowledge)
- Conditionals
- Sequences of information
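Continuing the toy sketch above, inheritance and exceptions might be resolved roughly like this: a query first checks statements on the Thing itself, so a local statement acts as an exception that overrides anything inherited, and only then climbs the is-a chain. Again, the function below is an illustration, not the actual UKS query code.

```python
def query_attribute(store, label, rel_type_label):
    """Walk the is-a chain; local statements override inherited ones."""
    thing = store.things.get(label)
    while thing is not None:
        # Statements on the Thing itself win (this is how exceptions work here).
        for rel in thing.relationships:
            if rel.rel_type.label == rel_type_label:
                return rel.target.label
        # Otherwise inherit from the parent class, if there is one.
        parents = [r.target for r in thing.relationships
                   if r.rel_type.label == "is-a"]
        thing = parents[0] if parents else None
    return None

store.add_statement("bird", "can", "fly")
store.add_statement("penguin", "is-a", "bird")
store.add_statement("penguin", "can", "not-fly")   # exception to the inherited "fly"
store.add_statement("Tweety", "is-a", "bird")
store.add_statement("Opus", "is-a", "penguin")

print(query_attribute(store, "Tweety", "can"))     # fly      (inherited from bird)
print(query_attribute(store, "Opus", "can"))       # not-fly  (the exception wins)
```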
The “Clauses” dialog allows Relationships to connect to other Relationships, vastly increasing the expressive power of the store. For example, the “IF” clause type allows one Relationship to be conditional on the truth of another.
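In the same toy form, a clause is just an edge from one Relationship to another, so a statement can be made conditional on a second statement. This is only a sketch of the idea; the clause list is attached ad hoc here.

```python
# Toy sketch of a clause: a Relationship that points at another Relationship.

class Clause:
    def __init__(self, clause_type, condition):
        self.clause_type = clause_type     # e.g. "IF"
        self.condition = condition         # the Relationship it depends on

it_rains  = store.add_statement("weather", "is", "raining")
grass_wet = store.add_statement("grass", "is", "wet")

# "The grass is wet IF the weather is raining."
grass_wet.clauses = [Clause("IF", it_rains)]
```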
Still to Come
Leave a comment about what we should work on next. Ideas include:
- An “Event” system which uses clauses to store information on how a situation and an action can lead to a new result. A Common Sense system could use this to decide the best thing to do or say next.
- A “Retention Learning” system which stores all of its sensations in the UKS and immediately prunes away information which proves to be irrelevant or erroneous. This leaves a UKS with useful information.
- A “Verbal” system which isolates words and phrases (sequences) from the abstract Things they mean. This would allow for ambiguity, idioms and redundancy in the input and output language. Any abstract Thing could be referenced by any number of word and phrase Things and a confidence system would sort out the most likely meanings.
- A “Vision” system which would store images as sequences of graphic primitives, perform recognition on visual input, and learn new abstract physical objects.
- A “Mental Model” which would keep track of surrounding objects to allow for an understanding of three-dimensionality and object persistence.
I’d like to hear your experience with this new software.
Fascinating and hopeful. It may be successful in the short-term. In the long run, it risks becoming “brittle” because human understanding is creative, not archival. This was the essence of Chomsky’s argument about language. It is not possible to list all or even most of the variants for a term, situation, or object, because humans are creative. In vision, for example, representation of basic invariants is a good step for flexibility if visual recognition is elemental and bottom-up. If visual recognition is actually Gestalty (in terms of affordances, per J.J. Gibson), with optional but not always necessary downward-branching discrimination, then a catalog of visual elements may not help much in the long run. A complementary approach might be to focus on embodied cognition, which is pre-conceptual, tacit, and nonintellectual. A creative way would have to be found to represent that. Then you’d need a “motivational engine” to define intentional action from the prevailing Gestalt. –Bill Adams
We all risk becoming brittle in the long run 🙂
I think that, generally, the more code that gets written and the more specific data is needed, the less likely the system is to work in the long run.
The key will be to have a system which can (for example) learn its own set of visual primitives and build up a catalog of known physical objects, in such a way that the same mechanism will also recognize phonemes and build up vocabularies of words and phrases.
I think it can be done.
@Bill Adams
Regarding “A complementary approach might be to focus on embodied cognition, which is pre-conceptual, tacit, and nonintellectual.”
Agreed. If combined, truly embodied cognition and prelinguistic concepts can complement each other. The concepts can provide the “skeleton/frame” for embodied (sensor-based) training. Human/animal cognition is not a clean slate; the body (broadly understood) provides the framework for the way it moves, its experiences, and its conceptualizations.
Best regards,
Wes Raykowski
User William.W wrote to me saying…
Hello again,
I downloaded Brain Simulator III and it seems like a great first step.
I wish to help you work on it as a programmer, so if you have any ideas I could work on, please let me know!
One idea I have is to fine-tune a large language model to provide responses in the correct format. Once this is done, we should be able to add thousands if not millions of nodes and edges into the knowledge store in a much faster and cheaper way than using people.
Thank you,
~William
This is an excellent idea. I started down this road with a module named ModuleOnlineInfo which is in the repository (in a prototype state). It can scrape data from a number of different online sources but none of them are consistent enough to be useful. The existing module could help you get started.
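Roughly, I imagine the shape of it as something like the sketch below: have the model emit one triple per line in a fixed format and parse that into Add Statement calls. This is only a sketch; call_llm is a placeholder for whatever fine-tuned model or API you end up using, and the prompt format is just a guess.

```python
def call_llm(prompt):
    # Placeholder: swap in whatever fine-tuned model or API is actually used.
    raise NotImplementedError

PROMPT = ("List facts about {topic}, one per line, formatted exactly as: "
          "source | relationship | target")

def import_triples(store, topic):
    text = call_llm(PROMPT.format(topic=topic))
    for line in text.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:                 # silently skip malformed lines
            store.add_statement(*parts)
```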
Let me know how it goes.
Charles
Hi Charles,
At the risk of repeating myself, attributing features is only one approach to knowing and recognizing. For instance, we can understand chairs by recognizing features like a seat, backrest, and four legs, or we can comprehend dogs through features like a head, torso, four legs, and a tail. This represents an additive way of understanding things.
However, we also apprehend chairs not just by their listed features but by how they influence the deformation of our bodies and the extent to which our bodies are affected. For this to occur, the body must be deformable, and the sensors must be on the inside of the body. This level of understanding is more complex than merely listing features.
Wes Raykowski
I started down this road by simulating touch in BrainSimII. I used it to improve recognition and I like your whole idea of identifying things based on their influence on the self and/or other things.
Hi again,
It may sound like philosophical pondering, but it is not. Firstly, humans are multicellular organisms (huge collections of cells evolved to move together), and they think just like one. The primary purpose of cognition involves protecting the broadly understood integrity of the collection.
Contrary to common thinking, the sensory systems monitor the inside of the body and not the environment outside of it — simply because all sensory receptors are embedded inside the collection. This means that the external environment is known only from the effects it has on the multicellular body.
Note that, for that, one needs to have a flexible (multicellular) body. Just like proprioception, which involves large-scale deformations (think of sitting or moving), vision too involves bodily deformation of individual sensory cells (retinal molecules).
Each receptor has a built-in valence, ranging from zero deformation to excessive deformation. This, in turn, means that sensations provide more than just zero/one information. Even at these early stages of cognition, sensations integrate a lot of information, including intensity, extent, and whether the interaction is of no significance or highly damaging to collective integrity.
There are two interesting outcomes of this view: first, humans know the external world indirectly in terms of the effect the world has on multicellular integrity; second, sensory maps in this situation are essential for monitoring the total effect on the body.
This raises several questions related to the workings of Sallie: Does she have a flexible body (in the sense of being a multicellular being protecting its collective integrity)? Are sensory endings/receptors on the inside or outside of the body? Adding a couple of sensors here and there does not represent human thinking. I am not saying that this approach is not viable or useful for production lines (e.g., Tesla bot) or home kitchens. I think it is very useful, but it does not represent human intelligence.
Do you have alternative views on how intelligence could be approached in machine systems?
Wes Raykowski
Thanks for this, Wes…
First, I agree with you that AIs will never be identical to human thought because the sensory system(s) will necessarily be radically different. If we rely on cameras, for example, we lose the retina’s property of resolution that varies across the visual field, which likely has a lot to do with attention. Also, we are not likely to have human-level touch sensors.
On the other hand, common sense is about interacting with the environment and making reasonable decisions to achieve goals (the goals would be radically different from humans’). I think that the model we are using, an internal mental model to which virtually every input contributes and on which every decision is based, will create a reasonable facsimile.
Charles
Subject: Re: “AIs will never be identical to human thought because the sensory system(s) will necessarily be radically different.”
Dear Charles,
AIs do not necessarily require organism-like bodies, as long as their cognition is properly emulated; in other words, as long as the input data for AIs is structured in a way that mimics human cognition. While listing properties, attributing them to objects, considering inheritance, exceptions, clauses, cause and effect, etc., are all crucial aspects, they might not be sufficient. My research into cognition, extending beyond language, suggests the significant importance of sensory products as the most basic mechanism for cognition.
Sensory/cognitive products can be easily mistaken for mathematical products derived from them. I define products as unique associations between intensity-like experiences and the extent of such an experience before they are recognized as a particular pattern (PhD thesis, 2013). This is important as it implies that the experience of objects can vary in terms of patterns of their properties without losing identity – as long as they are represented by the same product. I often use the notion of monetary value to illustrate its nature. For example, the value of $12 can be represented with 12 one-dollar coins, as well as 6 two-dollar coins, but also with one ten-dollar bill and one two-dollar coin, and so on. Even though they are different, all those expressions have the same value.
The significance of such expressions is manifold. They combine intensity (e.g., value of money, intensity of a color, importance of a fact, etc.) with the extent of its experience, vertical with horizontal, lower scale with the higher one, private with public, etc. They can be used at the level of a phrase (e.g., a red apple with an average intensity of its surface), used to express sentences (e.g., the snow melted overnight to virtually nothing), as well as in narratives (e.g., “The road to hell is paved with good intentions”). To simulate sensory products, one needs the sense of intensity (or its difference) and extent of a property attributed to the object.
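In toy computational terms, the money illustration amounts to something like the fragment below: just the same arithmetic written out, not a model of cognition, with the piece names made up for the example.

```python
# The same arithmetic as the money example: different decompositions of
# (intensity, extent) pieces carry the same overall product/value.

def product(pieces):
    # Each piece pairs an intensity (a denomination) with an extent (a count).
    return sum(intensity * extent for intensity, extent in pieces)

twelve_one_dollar_coins = [(1, 12)]
six_two_dollar_coins    = [(2, 6)]
ten_bill_and_two_coin   = [(10, 1), (2, 1)]

assert (product(twelve_one_dollar_coins)
        == product(six_two_dollar_coins)
        == product(ten_bill_and_two_coin)
        == 12)                              # same value, different patterns
```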
I believe the current model relies on lists/tables of attributes and is missing the notion of sensory products. Am I right? What are your thoughts on the proposed approach?
Best regards,
Wes Raykowski
Addressing only your first point. It is inconceivable to me that one could ever understand that reality exists without ever having experienced it. With various senses, you learn that objects are solid, fall when dropped, etc. and can be affected by your own actions, including that their appearance changes as you move through the environment. This type of experience requires some sort of robotic body.
Once the experience of reality has been learned by an AI, the knowledge can be transferred to an AI without a body and would still be remembered and useful. That is, you still understand vision when your eyes are closed.
I agree with you that my list of necessary components may well not be sufficient, but we’ll know a lot more about the missing components after implementing the existing list.
Also, see my answer to Phil Goetz below.
I worked with AI systems using semantic network KR for many years. The representational problem is very deep; I recommend you read the publications of the SNePS project if you’re determined to take this route. Many tricky issues aren’t apparent until you spend years trying to enter knowledge by hand into the network.
But I don’t recommend you take this route, because the Cyc project spent 40 years and many millions of dollars doing this, and never had any remarkable results. Symbols cannot be atomic in artificial intelligences, because the fragility / flexibility problems that afflict all symbolic AI lie mostly in the fact that an atomic symbol relies on external propositions to embody the knowledge about that symbol. That is, if your network uses the word “bird”, every time you use the word, you need to traverse the entire network of knowledge about different types of birds, colloquial uses, pragmatics about when one should and should not count a dead bird as a bird, and on and on and on, every time you use the word, just to know whether it applies. And symbolic representation is ultimately too rigid, and incapable of learning incrementally with a fine enough grain, to do this. And attempts to automate symbolic learning have never worked.
You really should read up on the Cyc project, whose charter back in 1989 was also to add common sense to AI, and explain why you wouldn’t be better off just using Cyc. Last I checked, which was about 25 years ago, you could download the public-domain Cyc engine and dataset for free.
Agre & Chapman advocated the embodied symbolic-reasoner approach in 1987 with Pengi. Rodney Brooks built lots of cool little robots which used a hard-coded symbolic reasoning system around the same time. I used the Pengi approach in the late 1990s at Zoesis, and it did very well in a video game; but basically it was just symbolic reasoning plus quasi-indexicals. Very helpful in making control programs short, but nothing that would provide any leaps in capabilities. Brooks’ approach resembles the wiring of insect brains, and works well at replicating simple insect behavior, but nobody knows how to integrate it with a complex insect brain like that of a honeybee. Merging the reactive with the symbolic might be another worthy research project.
The area that most needs work is on how to use symbols which have distributed (neural) representations. This has been the case for 30 years. See my 2000 paper “A neuronal basis for the fan effect” (https://www.sciencedirect.com/science/article/abs/pii/S0364021399000245#! ) for an outline of an approach, which is pretty similar to Google’s word2vec approach.
Thank you for your thoughtful contribution.
The existing BrainSim version can mislead many into thinking I am a proponent of the symbolic. Well, only sort-of.
I’ve looked at Cyc and hope to learn more as they have gone more-or-less proprietary. I am convinced that any system attempting common sense which uses language as its basis is doomed as we can see lots of common sense in young children (and animals) without language. Worse, systems which are text-based overlook all the nuance of spoken (or sung) language.
Finally, the idea of Cyc where you might store enough examples of common sense that true common sense might magically emerge is equally flawed. Common sense requires understanding.
I see BrainSim and the UKS as an underlying storage mechanism which could handle the actuality of understanding. In addition to the simple examples in the videos so far, there are also little agents which scan the UKS looking for commonality, generating automatic subclassing and attribute bubbling, for example. If many Things share common attributes, perhaps they are members of a yet-to-be-named subclass of Things. If many dogs are observed to share specific attributes, perhaps these attributes are attributes of the dog, not of the individuals. With these, and real-world input devices, the system can learn that objects which look like dogs are likely to bark.
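As a rough illustration of attribute bubbling, reusing the toy Python classes from the sketches earlier in the post (the actual agents in the repository are more involved, and the threshold here is arbitrary):

```python
from collections import Counter

def bubble_attributes(store, class_label, threshold=0.8):
    """If most children of a class share an attribute, assert it on the class."""
    parent = store.things[class_label]
    children = [t for t in store.things.values()
                if any(r.rel_type.label == "is-a" and r.target is parent
                       for r in t.relationships)]
    if not children:
        return
    counts = Counter()
    for child in children:
        attrs = {(r.rel_type.label, r.target.label)
                 for r in child.relationships if r.rel_type.label != "is-a"}
        counts.update(attrs)                 # count each attribute once per child
    for (rel_type, target), n in counts.items():
        if n / len(children) >= threshold:
            # Bubble the shared attribute up to the class itself,
            # e.g. many barking dogs => "dog can bark".
            store.add_statement(class_label, rel_type, target)
```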
This, coupled with the “event” system (see above) could enable the system to learn about cause-and-effect and the passage of time. The verbal system which I prototyped makes the underlying knowledge fully abstract and independent of the language used to express it (or used in learning it).
I hope you can join next week’s meeting as we decide what to migrate next to the public BrainSim repository.