Since I started considering neuro-symbolic systems for a project, I have become deeply aware of their enormous potential and of the ethical challenges such systems pose. I can only make sense of these quandaries, and calm my anxieties, through traditional philosophies. Here is an engineer’s point of view.
To be clear, this article will discuss neuro-symbolic systems, which combine generative AI and logic, and note their resemblance to meditation practice in the Yogic tradition. Such systems’ power is scary and poses some serious ethical challenges. We discuss how traditional philosophies offer a new perspective on these ethical problems: enforcing ethics in the AI context will take more than hard engineering.
Neuro-Symbolic Systems
Psychologists discuss two aspects of human decision-making and behavior: intuition and reasoning. Daniel Kahneman, for instance, in “Thinking, Fast and Slow,” describes these two aspects of our behavior as System 1 and System 2. The first system quickly generates candidate solutions, or intuitions, while the second either simply accepts an intuition or activates a slower reasoning process that searches through memory to judge the intuition’s validity.
Most of us intuitively know that our intuition is only so reliable, and if the stakes are high, we had better enlist our reasoning abilities to make a good decision. When a problem sounds hard, we automatically switch to the reasoning of System 2, but questions with a tempting intuitive answer can trick us, as in the famous example:
If five machines make five widgets in five minutes, how long does it take 100 machines to make 100 widgets? The intuitive answer, 100 minutes, is wrong: each machine makes one widget in five minutes, so 100 machines make 100 widgets in five minutes. Yet even when you have seen this before, the number 100 still pops into your head! Is our intuition any different from ChatGPT?
So, the most reliable way to solve a problem is to guess an answer through intuition but check that intuitive guess with reasoning. This is precisely the human behavior that neuro-symbolic systems (NSS) aim to mimic. A generative AI model, such as an LLM, creates candidate answers, or “intuitions,” and plays the part of System 1. Then there is the logical or symbolic component, whose job is to evaluate the validity of this intuition. AlphaProof and AlphaGeometry, which can even solve Mathematics Olympiad problems, are the best-known examples of NSS.
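The core loop is easy to sketch. Below is a minimal, hypothetical propose-and-verify loop in Python; generate_candidate and verify are placeholder functions standing in for the generative and symbolic components, not the actual interfaces of AlphaProof or AlphaGeometry.

```python
import random

def generate_candidate(problem: str) -> int:
    """Stand-in for a generative model (System 1): a quick, noisy guess."""
    return random.randint(0, 10)

def verify(problem: str, candidate: int) -> bool:
    """Stand-in for the symbolic component (System 2): an exact check."""
    return candidate * candidate == 49   # toy problem: find x with x^2 = 49

def solve(problem: str, budget: int = 1000):
    for _ in range(budget):
        guess = generate_candidate(problem)
        if verify(problem, guess):
            return guess                  # the intuition survives the reasoning check
    return None                           # no verified answer within the budget

print(solve("find x such that x^2 = 49"))
```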

Classical understanding
Long before the advent of modern psychology, the roles of intuition and reasoning were well considered in most systems of meditation. The Sanskrit words Mana (मन) and Buddhi (बुद्धि) bear a close resemblance to what we have, following Kahneman, called System 1 and System 2. Mana represents sensory and emotional impulses, and the job of a meditator is to apply Buddhi, an equanimous (and often compassionate) awareness capable of reasoning, to these impulses. These concepts guide one’s meditation practice toward becoming a more enlightened and, of course, more ethical person. It is with this analogy that we will appeal to traditional ethics and apply them to general neuro-symbolic systems.
But before considering ethics, let’s examine the neuro-symbolic system’s generative AI and symbolic reasoning components a bit more carefully.
Generative AI as intuition
As generative AI models like ChatGPT have become commonplace, it is no surprise to anyone that they can serve as a substitute for intuition. After all, they can generate candidate answers or images for a given prompt, regardless of the correctness or validity of the result. For a while, the internet was happily busy cataloguing ChatGPT’s hallucinations. So one would recommend that any AI, or any human for that matter, take such answers with a pinch of salt and validate them. Before moving on to the validation-related problems, let us recall the two classical facets of generative modeling.
- A generative model needs a dataset X, and it tries to learn the probability distribution P(X) of this data. Generation is merely sampling from this learned probability distribution.
- The process of learning the distribution P(X) depends on the choice of an optimization objective!
So, the final performance depends on the data and the choice of optimization objective, a pattern common to most modern learning problems. There is also the effect of the modeling strategy (say, an encoder-decoder versus an adversarial approach), which we will set aside here.
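To make the two facets concrete, here is a toy illustration in Python, assuming scikit-learn is available: the dataset X is a small 1-D sample, “learning P(X)” is fitting a Gaussian mixture by maximizing the likelihood (the optimization objective), and “generation” is sampling from the fitted model. Real generative models use far richer model families and objectives; this only mirrors the shape of the process.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# The dataset X: two clusters of points on the real line.
X = np.concatenate([rng.normal(-2.0, 0.5, 500),
                    rng.normal(3.0, 1.0, 500)]).reshape(-1, 1)

# "Learning P(X)": the mixture parameters are fit by maximizing the data
# likelihood, which plays the role of the optimization objective here.
model = GaussianMixture(n_components=2, random_state=0).fit(X)

# "Generation": sampling new points from the learned distribution.
samples, _ = model.sample(5)
print(samples.ravel())
```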
Although the ethics of data choice and optimization have been widely debated, we will address them here in the special setup of neuro-symbolic systems.
Logic, Confirmation bias, and Gödel
The equivalent of System 2 in neuro-symbolic systems like AlphaProof is usually called the symbolic reasoning component. Such a component requires well-formulated and consistent axioms: a set of rules and facts at its core from which logical deduction can reach conclusions. For instance, when the generative AI suggests the steps of a solution to a math problem, the symbolic reasoning component checks that each step is consistent with the axioms. In this scenario, the axioms are the set of truths.
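As a minimal sketch of what “checking steps against axioms” can mean, consider a toy checker in Python: the axioms are a handful of facts and Horn-style rules, and a proposed derivation is accepted only if every step is licensed by a rule whose premises are already established. This is purely illustrative and not the verifier used in AlphaProof.

```python
facts = {"A", "B"}                       # axioms taken as ground truths
rules = [({"A", "B"}, "C"),              # from A and B, conclude C
         ({"C"}, "D")]                   # from C, conclude D

def check_derivation(steps):
    """Accept the derivation only if every step is licensed by some rule."""
    known = set(facts)
    for step in steps:
        licensed = any(premises <= known and conclusion == step
                       for premises, conclusion in rules)
        if not licensed:
            return False                 # a step that no axiom or rule justifies
        known.add(step)
    return True

print(check_derivation(["C", "D"]))      # True: a valid chain of reasoning
print(check_derivation(["D"]))           # False: D asserted before C is known
```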
Choices of truth
Choosing the axioms for a symbolic reasoning component amounts to choosing the truths, the starting point for all further reasoning and argument in the neuro-symbolic system. However, a set of axioms is often a matter of design choice. And this choice of truth does matter.
One philosophical caution about the choice of axioms comes from confirmation and belief bias: “When you’re overly confident in your assumptions, you might accept arguments or conclusions that align with your assumptions as valid.”
In mathematics, fixing a set of axioms is a necessary evil. In fact, mathematicians are extremely fussy about picking their axioms, because even small changes can have huge effects. It is worthwhile to review a classic example.
Example: Non-Euclidean geometry
As an illustrative example, consider the parallel axiom in Euclidean geometry, which says, “Given a line and a point not on the line, there is exactly one line through the point parallel to the given line.” From our everyday experience of the geometry of a plane, this looks pretty reasonable and self-evident.

However, we can replace this axiom with the requirement that no parallel line can be drawn through the point and get the geometry on a sphere. To the uninitiated, the other extreme version of the parallel axiom is more surprising: we can demand that there are infinitely many parallel lines passing through the point. In this case, we get hyperbolic geometry.

[Hyperbolic geometry: the geometry inside a disc (the Poincaré disc model), where the “lines” (geodesics, i.e., shortest paths) are either diameters through the center or circular arcs that meet the boundary of the disc at right angles.]
The example shows how a simple change in assumptions has significant consequences.
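One well-known consequence, stated here in LaTeX for concreteness, is that the angle sum of a triangle with interior angles α, β, γ behaves differently under each version of the parallel axiom.

```latex
\[
\alpha + \beta + \gamma \;
\begin{cases}
> \pi & \text{spherical geometry (no parallels)} \\
= \pi & \text{Euclidean geometry (exactly one parallel)} \\
< \pi & \text{hyperbolic geometry (infinitely many parallels)}
\end{cases}
\]
```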
So, choosing axioms for a symbolic reasoning component is delicate, even when ethical considerations are absent. However, designing an ethical neuro-symbolic system requires a choice of axioms that can also guarantee safety and fairness. But let’s up the game a little bit and see how these systems can manifest in the future.
Next-Gen Neuro-Symbolic Systems, or a Proposal for an AGI
Many LLMs like ChatGPT and Copilot can generate plausible code for a problem, assuming the prompt provides a reasonable description. However, such tools currently cannot, all by themselves, integrate, debug, and validate that the right problem has been solved by designing suitable tests. The reason is that each problem has its own structure and comes with its own assumptions. To verify the “intuitive code” generated by an LLM, we need a more flexible logical component that can work with different axioms depending on the problem at hand.
So, let us consider the next step. What if an LLM or some other AI generates a set of axioms or logic specific to the input problem? In the context of geometric problems, for instance, we would want a generative AI to output the appropriate axioms of geometry, so that the logical reasoning component works with a parallel axiom allowing zero, one, or infinitely many parallel lines, depending on whether the problem arises from spherical, Euclidean, or hyperbolic geometry.
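Here is a speculative sketch of that idea in Python. The function classify_geometry is a hypothetical stand-in for the proposed axiom-generating AI; in this toy version it simply keys off words in the problem statement, and the reasoning component would then load the matching parallel axiom before checking any steps.

```python
PARALLEL_AXIOMS = {
    "euclidean":  "through a point not on a line there is exactly one parallel",
    "spherical":  "through a point not on a line there are no parallels",
    "hyperbolic": "through a point not on a line there are infinitely many parallels",
}

def classify_geometry(problem: str) -> str:
    """Hypothetical stand-in: in the proposal, a generative AI would make this call."""
    if "sphere" in problem or "great circle" in problem:
        return "spherical"
    if "saddle" in problem or "hyperbolic" in problem:
        return "hyperbolic"
    return "euclidean"

def axioms_for(problem: str) -> list:
    base = ["two distinct points determine a line"]   # axioms shared by all three geometries
    return base + [PARALLEL_AXIOMS[classify_geometry(problem)]]

print(axioms_for("triangles drawn on the surface of a sphere"))
```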
Let us leave the technical description of such a generative AI aside and merely assume that we could start training such a system. At first glance, this seems like a natural progression. We, too, work under different sets of assumptions in different circumstances. In fact, we can be comfortable applying contradictory sets of assumptions to different situations (hypocrisy is unavoidable and necessary).
Secondly, logicians would happily point out that Gödel’s Incompleteness Theorems imply that no single consistent, effectively axiomatized system containing basic arithmetic can prove every true statement, which pushes us toward an adaptive set of axioms if we want to tackle diverse problems and reach different kinds of conclusions.

This version of neuro-symbolic systems, with a logical component dynamically generated by an LLM, can get closer to some form of AGI. However, there is still some way to go before such a speculation can come true. In the meantime, we can discuss the implications.
Ethics
Even if we set aside the above speculation that neuro-symbolic systems can lead to an AGI, their current applications already raise serious ethical concerns.
We can now return to the comparison between neuro-symbolic systems and the meditation traditions mentioned before. Let us start with a few implications in the reverse direction, applying our sharper understanding of AI and neuro-symbolic systems to meditation.
- Ensure that the generative AI gets clean, unbiased data. Applying this principle to meditation, we notice it is already part of the deal: in Yoga, the practice of pratyahara stipulates strict discipline over which sensory influences we expose ourselves to.
- The right choice of optimization or loss function: in the meditation traditions, this appears as the right, or ethical, choice of motivation, since motivation dictates the direction of learning. We will discuss this in detail below.
- Well-chosen logical axioms, with clearly marked limitations, selected depending on the situation: this is the hard problem of deciding Dharma and is well beyond the scope of this article.
The above list shows that there are certainly parallels between meditation, at least the philosophy of Yoga, and neuro-symbolic systems. So, we should also be able to reverse the direction and apply some Yoga epistemology to AI.
As an example, let us consider the problem of choosing the appropriate objective for optimization. The parallel concept is indeed the problem of choosing the right motivation. Not surprisingly, this is a well-studied topic.
Yogic tradition describes three kinds of motivation: Sattva, the motivation of curiosity, learning, and knowledge; Rajas, the motivation of chivalry, justice, and courage; and Tamas, the darker motivations of greed, lust, and the like. So far, we have only managed to encode the last type, the darker Tamasic motivations, in Reinforcement Learning.
Curiosity and knowledge are natural motivations for meditators. So, by analogy, we could optimize a neuro-symbolic system to grow a consistent knowledge base. What makes a piece of information “curious” often depends on how surprising it is. In particular, this suggests that when we add new data or axioms to the logical reasoning component, we should prefer candidates that are furthest from the current knowledge base.
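A hedged sketch of that “curiosity as surprise” heuristic, in Python: score each candidate by its distance to the nearest item in the current knowledge base and pick the most novel one. The embed function here is a made-up toy; in practice the embedding would come from a trained model.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy stand-in for a learned text embedding (hash-seeded, stable within a run)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=8)

def novelty(candidate: str, knowledge_base: list) -> float:
    """Distance from the candidate to its nearest neighbour in the knowledge base."""
    c = embed(candidate)
    return min(np.linalg.norm(c - embed(k)) for k in knowledge_base)

knowledge_base = ["the sum of angles in a Euclidean triangle is 180 degrees",
                  "two points determine a line"]
candidates = ["a slightly reworded fact about triangles",
              "a statement about hyperbolic parallels"]

# Pick the most "surprising" candidate, i.e., the one furthest from what we already know.
print(max(candidates, key=lambda c: novelty(c, knowledge_base)))
```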
There are many directions that could be explored, but I will stop here. If you find any interesting comparisons with your own philosophy or meditation practice, please share them in the comments below. Let me know whether you agree or disagree with the possibility that neuro-symbolic systems can lead to an AGI.
