Artificial Intelligence Laboratory
ArtBrain
We're not developing an LLM. We're creating an alternative AI architecture that better matches human capabilities, above all the ability to learn during use: acquiring new knowledge on the basis of existing knowledge, the way a human does. We believe this ability is sufficient for an artificial intelligence to acquire all other capabilities on its own. Here we explain how we plan to achieve this.
This project was created by the independent AI researcher and philosopher Alexander Khomyakov. Together with the developers who have joined the project, we are building an AI architecture based on new principles, one that addresses the existing problems of Transformers and opens up entirely new possibilities: first and foremost, learning through dialogue with humans and independent internet search, as well as tackling hallucinations and rule-following. The project is grounded in Alexander Khomyakov's theory of intelligence, set out in his book "Prolegomena to Any kind of Knowledge," which details the key differences between the two approaches.

Human-type intelligence

ArtBrain's intelligence principles are based on research in epistemology, cognitive psychology, and problem-solving theory (some key principles are not disclosed).
  • The project's idea rests on constructivism, which holds that intelligence does not generalize input information into models, but rather constructs new models from existing ones in order to differentiate the input (top-down). This fundamental distinction allows us to avoid problems such as prompt injection, hallucinations, and "Potemkin understanding".
  • We've discovered a new method for finding analogies based on predicates. It replaces embeddings in neural networks and solves the learning problem, since predicates can be changed directly during use. We've created a program that finds analogies from predicates on the fly. [ArtBrain article in the journal]. The predicate, not the individual word, is the basic unit of the ArtBrain intelligence algorithm.
  • According to our research, analogy is the foundation of intellectual functions such as metaphor, syllogism, learning, and others. Thanks to predicate analogy, the ArtBrain algorithm is capable of deeper generalization and rule-following. Journal article.
  • Modeling architecture. Intelligence models are built as nested independent models: the abstract model manages the sequence in the subject model, and the subject model manages the input and output registers, which hold the text. Prediction of the next predicate occurs not at the text level, as in neural networks, but at the higher level of subject-level schemas.
  • Answer generation is structured as problem solving: the algorithm automatically separates what exists from what needs to be achieved and constructs a sequence of steps that reduces the distance between them, based on deep (subject-specific) schemas. This approach draws on the theory of problem solving and schema adaptation, which forms the foundation of intelligence.
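The answer-generation loop described above, separating what exists from what needs to be achieved and reducing the distance between them, is close in spirit to classic means-ends analysis. Below is a minimal sketch under that reading; the operator format, the fact sets, and all names are invented for illustration and are not part of the ArtBrain codebase:

```python
# Toy means-ends analysis: separate what exists (state) from what is
# required (goal), then apply operators whose effects reduce the gap,
# recursively establishing their preconditions first.
# States are sets of facts; operators add and delete facts.

def solve(state, goal, operators, depth=5):
    """Return (plan, final_state), or None if the goal is unreachable."""
    state, plan = set(state), []
    while diff := set(goal) - state:
        # operators whose effects remove at least one difference
        candidates = [op for op in operators if op["adds"] & diff]
        if not candidates or depth == 0:
            return None
        op = candidates[0]
        if op["pre"] - state:
            # subgoal: establish the operator's preconditions first
            sub = solve(state, op["pre"], operators, depth - 1)
            if sub is None:
                return None
            sub_plan, state = sub
            plan += sub_plan
        state = (state - op["dels"]) | op["adds"]
        plan.append(op["name"])
    return plan, state

operators = [
    {"name": "boil water", "pre": {"cold water"},
     "adds": {"hot water"}, "dels": {"cold water"}},
    {"name": "brew tea",   "pre": {"hot water"},
     "adds": {"tea"},       "dels": {"hot water"}},
]
plan, _ = solve({"cold water"}, {"tea"}, operators)
print(plan)  # -> ['boil water', 'brew tea']
```

In ArtBrain's terms, the operators would be steps of deep subject-specific schemas rather than hand-written dictionaries; the sketch only shows the distance-reduction control loop.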
Development blog
  • Creating abstract schemes and searching
    In the fall of 2025, we decided to launch the first project based on the analogy scripts we had developed. We began building Meaning Search, a search-by-meaning engine intended as an alternative both to contextual search and to RAG-based LLM search. The main difference is that instead of working with chunked text, we search the entire text, parsing it into predicate vectors. This lets us quickly find the texts most relevant to a query by identifying similar abstract patterns. We will report results closer to February 2026.
  • Confirmation of project hypotheses
    In early 2025, we refined the predicate-analogy script and tested its ability to resolve ambiguity by selecting predicates based on context. We also validated a method for recognizing new clauses from existing predicates through analogy. This allows existing schemas in the model to serve as stable predicate chains for text recognition and response construction.
  • Analogy as a basic function of thinking
    At the end of 2024, we published an article describing how analogy enables the fundamental functions of thought. Analogy is the core process of the new AI architecture: it underlies the creation of abstract system schemas (templates), enabling the recognition of new text patterns by analogy and, through them, a new method of text generation.
  • Discovery of analogy by predicates
    In early 2024, we parsed the texts of 200 books into predicates and removed obvious errors, creating subject-verb (SVn) and verb-object (VOn) predicate vectors for each word. Using the TF-IDF method with a few tricks, we obtained high-quality analogs for subjects (S) and verbs (V). Compared with costly one-off neural network training, this method requires significantly less data, can update an analogy when new predicates are introduced, and can restrict an analogy by predicate, resolving ambiguities. The results are presented in the article.
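As an illustration of the kind of computation this last entry describes, here is a toy sketch of TF-IDF-weighted predicate vectors compared by cosine similarity to find word analogs. The corpus, the weighting details, and the "tricks" are invented stand-ins; this is not the ArtBrain implementation:

```python
import math
from collections import Counter, defaultdict

# Toy subject-verb-object triples; each word is characterized by the
# predicates it occurs with ("SV:" = as subject, "VO:" = as object).
triples = [
    ("dog", "barks", "intruder"), ("dog", "chases", "cat"),
    ("wolf", "barks", "moon"),    ("wolf", "chases", "deer"),
    ("car", "carries", "people"), ("bus", "carries", "people"),
    ("bus", "chases", "schedule"),
]

# Build a predicate vector (predicate -> count) for every word.
vectors = defaultdict(Counter)
for s, v, o in triples:
    vectors[s]["SV:" + v] += 1
    vectors[o]["VO:" + v] += 1

# IDF over predicates down-weights predicates shared by many words.
n = len(vectors)
df = Counter(p for vec in vectors.values() for p in vec)
idf = {p: math.log(n / df[p]) for p in df}

def tfidf(word):
    vec = vectors[word]
    total = sum(vec.values())
    return {p: (c / total) * idf[p] for p, c in vec.items()}

def cosine(a, b):
    dot = sum(a[p] * b.get(p, 0.0) for p in a)
    na = math.sqrt(sum(x * x for x in a.values()))
    nb = math.sqrt(sum(x * x for x in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def analogs(word):
    """Rank all other words by predicate-vector similarity."""
    q = tfidf(word)
    return sorted(((cosine(q, tfidf(w)), w) for w in vectors if w != word),
                  reverse=True)

print(analogs("dog")[0][1])  # -> wolf (shares the same predicates)
```

Because the vectors are just predicate counts, adding a new triple immediately changes the analogy, with no retraining step, which is the property the entry above highlights.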
Meaning Search
This is an early ArtBrain project that uses developed algorithms for semantic search.
One of our developments is Meaning Search, a solution to the retrieval problem currently addressed by RAG technology. A disadvantage of RAG is the need to slice texts into chunks before converting them into vector representations. Chunks are cut by length, not by meaning, which later makes it hard to find the needed parts of a text without pulling in extraneous information. Our approach searches over predicate vectors with analogy, which finds passages based on the exact meaning of the query while taking similar meanings into account. As a result, the accuracy of meaning-based search increases significantly, and there is a clear criterion for when information is missing from the texts. This avoids the information loss typical of LLM pipelines, as well as hallucinations caused by missing or inexact information in the knowledge base.
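To make the contrast with chunk-based RAG concrete, here is a toy sketch of retrieval over predicate triples. The hand-parsed triples, the analogy table, and the scoring are invented stand-ins; a real system would derive triples with a parser and analogs from predicate vectors:

```python
# Sketch: retrieval over predicate triples instead of text chunks.
# Sentences are pre-parsed into (subject, verb, object) triples here.
documents = {
    "s1": ("team", "released", "model"),
    "s2": ("model", "answers", "questions"),
    "s3": ("users", "reported", "bugs"),
}

# Hand-written analogy classes standing in for predicate-based analogs.
ANALOGS = {"released": {"published", "shipped"}, "model": {"system"}}

def slot_match(q, d):
    if q is None:                # unfilled query slot matches anything
        return 0
    if q == d:                   # exact match
        return 2
    if d in ANALOGS.get(q, set()) or q in ANALOGS.get(d, set()):
        return 1                 # analog match
    return -1                    # mismatch: wrong meaning

def search(query):
    """Rank sentences whose every filled slot matches the query."""
    hits = []
    for sid, triple in documents.items():
        scores = [slot_match(q, d) for q, d in zip(query, triple)]
        if -1 not in scores:
            hits.append((sum(scores), sid))
    return sorted(hits, reverse=True)

print(search(("team", "released", None)))  # -> [(4, 's1')]
print(search(("team", "fired", None)))     # -> [] : explicit "no info"
```

An empty result here is a definite signal that the texts contain no matching meaning, rather than a low-similarity chunk that an LLM might still paraphrase into an answer.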

We are currently preparing a pilot technology and a benchmark for comparison with RAG to demonstrate the advantages of Meaning Search technology.

Semantic search is in high demand in business, education, and science, where vast amounts of textual knowledge have been accumulated but retrieving, reproducing, and summarizing it without losing information remains a challenge. Therefore, we are confident in the technology's success.
For investors
The potential of this new architecture is limitless, because it can learn. Control over it will provide decisive advantages in every area.
  • The key distinction of the new intelligence architecture is the ability to make discoveries and propose new solutions, which will, without exaggeration, allow us to solve humanity's existing problems. And beyond these, new ones of an entirely different scale and quality will undoubtedly emerge.
  • We're building human-level intelligence. Joining a team that is creating next-generation technology is a rare opportunity. Join us as an investor and be among the first.
  • Terms of cooperation with investors are discussed individually. We are developing the technology independently in the lab; investment will allow us to bring it to market together, ensuring we are the first to profit from it.
Join us in creating the future of artificial intelligence architecture.
We welcome collaboration, new ideas, and assistance with project implementation. Alexander Khomiakov (akhomiakov.com)