
Brain Simulator III Sneak Peek!

December 19, 2023

This is the first look at Brain Simulator III. I’ve just made the GitHub repository public, so you can be among the first to try it out.

https://github.com/FutureAIGuru/BrainSimIII

How You Can Participate

First Steps: Please download the code and build it using Microsoft Visual Studio Community Edition (available free HERE). Open the Test1.xml network file and try out the four available dialog functions. Report issues by leaving comments below. Then attend the Online Meeting in January.

What’s there?

The Universal Knowledge Store (UKS) and its hierarchical display dialog show how knowledge can be represented as a graph of nodes connected by edges—“Things” connected by “Relationships” in UKS parlance. The sample UKS content demonstrates how data is represented, stored, and queried. Updated UKS content can be saved to XML files for easy transfer.
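The UKS itself is C# code in the repository; purely as an illustration of the idea (with invented names, not the actual API), the Thing-and-Relationship structure might be sketched in Python like this:

```python
# Illustrative sketch of the UKS data model: Things (nodes) joined by
# Relationships (edges). Names are invented for this example; see the
# C# sources in the repository for the real types.

class Thing:
    def __init__(self, label):
        self.label = label
        self.relationships = []      # outgoing edges from this Thing

class Relationship:
    def __init__(self, source, rel_type, target):
        self.source = source         # a Thing
        self.rel_type = rel_type     # also a Thing, e.g. "is-a" or "has"
        self.target = target         # a Thing

# "Fido is-a dog" as one edge between two nodes:
fido, is_a, dog = Thing("Fido"), Thing("is-a"), Thing("dog")
fido.relationships.append(Relationship(fido, is_a, dog))
```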

The “Add Statement” dialog lets you add new information to the UKS. Enter a relationship with a source, relationship type, and target; if the named Things do not exist, they are created. A “?” in the lower right of each dialog displays help on how to use it.
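Continuing the toy sketch above, the create-missing-nodes behavior might be modeled like this (again illustrative, not the real C# API):

```python
# Continuing the sketch: a store that auto-creates Things on demand,
# mirroring the "Add Statement" behavior described above.

class UKS:
    def __init__(self):
        self.things = {}                         # label -> Thing

    def get_or_add_thing(self, label):
        if label not in self.things:             # create the node if absent
            self.things[label] = Thing(label)
        return self.things[label]

    def add_statement(self, source, rel_type, target):
        s = self.get_or_add_thing(source)
        r = self.get_or_add_thing(rel_type)
        t = self.get_or_add_thing(target)
        rel = Relationship(s, r, t)
        s.relationships.append(rel)
        return rel

uks = UKS()
uks.add_statement("Fido", "is-a", "dog")         # creates all three Things
uks.add_statement("dog", "has", "4 legs")
```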

The “Query” dialog demonstrates how the UKS content can be retrieved. The query process handles:

  • Inheritance
  • Exception (in knowledge)
  • Conditionals
  • Sequences of information
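Of these, inheritance and exceptions are the easiest to illustrate. Continuing the toy sketch, a query that walks “is-a” edges upward and lets more specific facts override inherited ones might look like:

```python
def query(uks, source_label, rel_type_label):
    """Walk 'is-a' edges upward, breadth-first; the most specific Things
    that answer the query win, a crude form of exception handling."""
    frontier, visited, results = [uks.things[source_label]], set(), []
    while frontier:
        thing = frontier.pop(0)
        if thing.label in visited:
            continue
        visited.add(thing.label)
        for rel in thing.relationships:
            if rel.rel_type.label == rel_type_label:
                results.append(rel.target.label)
            if rel.rel_type.label == "is-a":
                frontier.append(rel.target)
        if results:            # stop at the most specific level that answers
            return results
    return results

print(query(uks, "Fido", "has"))      # ['4 legs'], inherited from dog
uks.add_statement("penguin", "is-a", "bird")
uks.add_statement("bird", "can", "fly")
uks.add_statement("penguin", "can", "swim")
print(query(uks, "penguin", "can"))   # ['swim'] overrides the inherited 'fly'
```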

The “Clauses” dialog allows Relationships to connect to other Relationships, vastly increasing the representational power. For example, the “IF” clause type allows one relationship to be conditional on the truth of another.
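In the same illustrative spirit, a clause might be modeled as an edge attached to another edge:

```python
# Continuing the sketch: a clause attaches one Relationship to another.
# With an "IF" clause, the first relationship holds only when the second
# one does (illustrative; the real UKS clause machinery is richer).

class Clause:
    def __init__(self, clause_type, relationship):
        self.clause_type = clause_type      # e.g. "IF"
        self.relationship = relationship    # the condition it depends on

# "Fido plays-in yard IF weather is sunny"
plays = uks.add_statement("Fido", "plays-in", "yard")
sunny = uks.add_statement("weather", "is", "sunny")
plays.clauses = [Clause("IF", sunny)]       # attach the condition to the edge
```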

Still to Come

Leave a comment about what we should work on next.  Ideas include:

  • An “Event” system which uses clauses to store information on how a situation and an action can lead to a new result. A Common Sense system could use this to decide the best thing to do or say next. (A toy sketch of this idea follows the list.)
  • A “Retention Learning” system which stores all of its sensations in the UKS and immediately prunes away information which proves to be irrelevant or erroneous. This leaves a UKS with useful information.
  • A “Verbal” system which isolates words and phrases (sequences) from the abstract Things they mean. This would allow for ambiguity, idioms and redundancy in the input and output language. Any abstract Thing could be referenced by any number of word and phrase Things and a confidence system would sort out the most likely meanings.
  • A “Vision” system which would store images as sequences of graphic primitives, perform recognition on visual input, and learn new abstract physical objects.
  • A “Mental Model” which would keep track of surrounding objects to allow for an understanding of three-dimensionality and object persistence.
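These are all speculative, but to make the “Event” idea concrete: glossing over the clause-based storage, the basic situation/action/result lookup could be as simple as this toy sketch (all names invented):

```python
# Purely speculative sketch of the proposed "Event" idea: a situation plus
# an action leading to an expected result, so an agent can look up what an
# action is likely to do. All names here are invented for illustration.

events = []   # each event: (situation, action) -> result

def add_event(situation, action, result):
    events.append({"situation": situation, "action": action, "result": result})

def expected_result(situation, action):
    """Return the stored result for this situation/action pair, if any."""
    for e in events:
        if e["situation"] == situation and e["action"] == action:
            return e["result"]
    return None

add_event("door is closed", "turn handle", "door is open")
print(expected_result("door is closed", "turn handle"))  # -> door is open
```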
Comments
BILL ADAMS
11 months ago

Fascinating and hopeful. It may be successful in the short term. In the long run, it risks becoming “brittle” because human understanding is creative, not archival. This was the essence of Chomsky’s argument about language. It is not possible to list all or even most of the variants for a term, situation, or object, because humans are creative. In vision, for example, representation of basic invariants is a good step for flexibility if visual recognition is elemental and bottom-up. If visual recognition is actually Gestalty (in terms of affordances, per J.J. Gibson), with optional but not always necessary downward-branching discrimination, then a catalog of visual elements may not help much in the long run. A complementary approach might be to focus on embodied cognition, which is pre-conceptual, tacit, and nonintellectual. A creative way would have to be found to represent that. Then you’d need a “motivational engine” to define intentional action from the prevailing Gestalt. –Bill Adams

Wes Raykowski
11 months ago
Reply to  BILL ADAMS

@Bill Adams

Regarding “A complementary approach might be to focus on embodied cognition, which is pre-conceptual, tacit, and nonintellectual.”

Agreed. Truly embodied cognition and prelinguistic concepts can complement each other when combined. The concepts can provide the “skeleton/frame” for embodied (sensor-based) training. Human/animal cognition is not a blank slate; the body (broadly understood) provides the framework for the way it moves, its experiences, and its conceptualizations.

Best regards,
Wes Raykowski

Wes Raykowski
11 months ago
Reply to  Charles Simon

Hi Charles,
At the risk of repeating myself, attributing features is only one approach to knowing and recognizing. For instance, we can understand chairs by recognizing features like a seat, backrest, and four legs, or we can comprehend dogs through features like a head, torso, four legs, and a tail. This represents an additive way of understanding things.
However, we also apprehend chairs not just by their listed features but by how they influence the deformation of our bodies and the extent to which our bodies are affected. For this to occur, the body must be deformable, and the sensors must be on the inside of the body. This level of understanding is more complex than merely listing features.
Wes Raykowski

Wes Raykowski
11 months ago
Reply to  Charles Simon

Hi again,
It may sound like philosophical pondering, but it is not. Firstly, humans are multicellular organisms (huge collections of cells evolved to move together), and they think just like one. The primary purpose of cognition involves protecting the broadly understood integrity of the collection.

Contrary to common thinking, the sensory systems monitor the inside of the body and not the environment outside of it — simply because all sensory receptors are embedded inside the collection. This means that the external environment is known only from the effects it has on the multicellular body.

Note that, for that, one needs to have a flexible (multicellular) body. Just like proprioception, which involves large-scale deformations (think of sitting or moving), vision too involves bodily deformation of individual sensory cells (their retinal molecules).

Each receptor has a built-in valence ranging from zero deformation to excessive deformation. This, in turn, means that sensations provide more than just zero/one information. Even at these early stages of cognition, sensations integrate a lot of information, including intensity, extent, and whether the interaction is of no significance or highly damaging to collective integrity.

There are two interesting outcomes of this view: first, humans know the external world indirectly in terms of the effect the world has on multicellular integrity; second, sensory maps in this situation are essential for monitoring the total effect on the body.

This raises several questions related to the workings of Sallie: Does she have a flexible body (in the sense of being a multicellular being protecting its collective integrity)? Are sensory endings/receptors on the inside or outside of the body? Adding a couple of sensors here and there does not represent human thinking. I am not saying that this approach is not viable or useful for production lines (e.g., Tesla bot) or home kitchens. I think it is very useful, but it does not represent human intelligence.

Do you have alternative views on how intelligence could be approached in machine systems?
Wes Raykowski

Wes Raykowski
11 months ago
Reply to  Charles Simon

Subject: Re: “AIs will never be identical to human thought because the sensory system(s) will necessarily be radically different.”

Dear Charles,

AIs do not necessarily require organism-like bodies, as long as their cognition is properly emulated; in other words, as long as the input data for AIs is structured in a way that mimics human cognition. While listing properties, attributing them to objects, considering inheritance, exceptions, clauses, cause and effect, etc., are all crucial aspects, they might not be sufficient. My research into cognition, extending beyond language, suggests the significant importance of sensory products as the most basic mechanism for cognition.

Sensory/cognitive products can be easily mistaken for mathematical products derived from them. I define products as unique associations between intensity-like experiences and the extent of such an experience before they are recognized as a particular pattern (PhD thesis, 2013). This is important as it implies that the experience of objects can vary in terms of patterns of their properties without losing identity – as long as they are represented by the same product. I often use the notion of monetary value to illustrate its nature. For example, the value of $12 can be represented with 12 one-dollar coins, as well as 6 two-dollar coins, but also with one ten-dollar bill and one two-dollar coin, and so on. Even though they are different, all those expressions have the same value.
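A minimal numeric sketch of that invariance, taking “product” in the straightforward arithmetic sense of intensity (denomination) times extent (count):

```python
# Toy illustration of the "same value, different decompositions" point:
# each decomposition below realizes the same total value of $12.
decompositions = [
    [(1, 12)],           # twelve one-dollar coins
    [(2, 6)],            # six two-dollar coins
    [(10, 1), (2, 1)],   # one ten-dollar bill plus one two-dollar coin
]
for d in decompositions:
    total = sum(denomination * count for denomination, count in d)
    assert total == 12   # identity preserved across different patterns
```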

The significance of such expressions is manifold. They combine intensity (e.g., value of money, intensity of a color, importance of a fact, etc.) with the extent of its experience, vertical with horizontal, lower scale with the higher one, private with public, etc. They can be used at the level of a phrase (e.g., a red apple with an average intensity of its surface), used to express sentences (e.g., the snow melted overnight to virtually nothing), as well as in narratives (e.g., “The road to hell is paved with good intentions”). To simulate sensory products, one needs the sense of intensity (or its difference) and extent of a property attributed to the object.

I believe the current model relies on lists/tables of attributes and is missing the notion of sensory products. Am I right? What are your thoughts on the proposed approach?

Best regards,

Wes Raykowski

Phil Goetz
11 months ago

I worked with AI systems using semantic network KR for many years. The representational problem is very deep; I recommend you read the publications of the SNePS project if you’re determined to take this route. Many tricky issues aren’t apparent until you spend years trying to enter knowledge by hand into the network.

But I don’t recommend you take this route, because the Cyc project spent 40 years and many millions of dollars doing this, and never had any remarkable results. Symbols cannot be atomic in artificial intelligences, because the fragility/flexibility problems that afflict all symbolic AI lie mostly in the fact that an atomic symbol relies on external propositions to embody the knowledge about that symbol. That is, if your network uses the word “bird”, then every time you use the word you need to traverse the entire network of knowledge about different types of birds, colloquial uses, pragmatics about when one should and should not count a dead bird as a bird, and on and on, just to know whether it applies. And symbolic representation is ultimately too rigid, and incapable of learning incrementally at a fine enough grain, to do this. And attempts to automate symbolic learning have never worked.

You really should read up on the Cyc project, whose charter back in 1989 was also to add common sense to AI, and explain why you wouldn’t be better off just using Cyc. Last I checked, which was about 25 years ago, you could download the public-domain Cyc engine and dataset for free.

Agre & Chapman advocated the embodied symbolic-reasoner approach in 1987 with Pengi. Rodney Brooks built lots of cool little robots which used a hard-coded symbolic reasoning system around the same time. I used the Pengi approach in the late 1990s at Zoesis, and it did very well in a video game; but basically it was just symbolic reasoning plus quasi-indexicals. Very helpful in making control programs short, but nothing that would provide any leaps in capabilities. Brooks’ approach resembles the wiring of insect brains, and works well at replicating simple insect behavior, but nobody knows how to integrate it with a complex insect brain like that of a honeybee. Merging the reactive with the symbolic might be another worthy research project.

The area that most needs work is how to use symbols which have distributed (neural) representations. This has been the case for 30 years. See my 2000 paper “A neuronal basis for the fan effect” (https://www.sciencedirect.com/science/article/abs/pii/S0364021399000245) for an outline of an approach, which is pretty similar to Google’s word2vec approach.
