Logic: The Intelligence in Artificial Intelligence

Major Blog Series: Part II

Logic seems reasonable most of the time; sometimes, though, we need to think beyond its limits. Humans have been using logic intuitively for as long as they have been thinking. Since it forms the basis of our thought processes, there is a closely knit connection between logic and Artificial Intelligence. Put precisely, “logic” is what an artificial brain uses to recognize patterns, deduce from them, and make sense of novel ideas.

What makes a human brain exceptional is its ability to build subjective opinions depending on the context and gravity of the situation. An AI-driven brain, on the other hand, makes decisions based on a limited set of constraints that are either directly fed to it or have been derived by it over time. This set of constraints is what forms logic. To define it formally:

“Logic defines a framework for inference and correct reasoning.”

It all started around 500 BC with Philosophical Logic, which deals with reasoning in our natural languages. We use existing facts to derive conclusions that should, undoubtedly, be true. However, there’s a twist in the tale. Here’s an example:

  • Human cells are invisible to the naked eye.
  • Humans are made up of human cells.
  • Therefore, humans are invisible to the naked eye.

That’s a logical fallacy! Specifically, it is the fallacy of composition: a property of the parts does not automatically transfer to the whole.

Overcoming such logical fallacies is one of the major issues in building AI systems, because computers cannot reason beyond the concrete facts they are given.
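To see how a machine could stumble into the syllogism above, here is a deliberately unsound toy reasoner (entirely my own illustration, not a real inference engine) that blindly transfers a property from parts to the whole:

```python
# Toy knowledge base: (subject, property) -> truth, plus a "made_of" relation.
facts = {
    ("human_cell", "invisible_to_naked_eye"): True,
    ("human", "made_of"): "human_cell",
}

def naive_infer(subject, prop):
    """Unsound rule: if X is made of Y and Y has a property, assume X has it too."""
    if facts.get((subject, prop)):
        return True
    part = facts.get((subject, "made_of"))
    if part is not None:
        return naive_infer(part, prop)  # fallacy of composition, mechanized
    return False

# The reasoner happily "proves" the absurd conclusion:
print(naive_infer("human", "invisible_to_naked_eye"))  # True
```

The bug is not in the code but in the rule itself, which is exactly the kind of flawed premise a purely fact-driven system cannot detect on its own.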

Moving ahead to 1854, Boolean Logic was introduced. Boolean logic models everything as true/1 or false/0, with nothing in between. For instance, famous puzzles like the river-crossing problem and the n-queens problem can be modeled and solved using boolean logic. Though we could find solutions by plain hit-and-trial, that would take exponential time, because in the worst case we would end up exploring every possible path. It therefore becomes important to use more efficient methods.

If an assignment of values satisfies a given set of constraints, it is added to the set of possible solutions, and the problem is said to be satisfiable; otherwise the problem is marked unsat, i.e., no assignment satisfies all the constraints at once. To avoid enumerating every possible set of values, we train our systems to prune the search space using heuristics. Another example of the applications of Boolean Logic can be found here, where I proposed a potential way to implement a GoogleMaps-like service.
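To make the pruning idea concrete, here is a minimal sketch (my own illustration, not from any linked post) of the n-queens problem solved by backtracking: instead of testing all n**n placements by hit-and-trial, the search abandons a partial placement the moment it violates a constraint.

```python
def solve_n_queens(n):
    """Return one placement as a list: queens[r] = column of the queen in row r,
    or None if the instance is unsatisfiable."""
    queens = []

    def safe(col):
        row = len(queens)
        for r, c in enumerate(queens):
            if c == col or abs(c - col) == abs(r - row):
                return False  # same column or same diagonal as an earlier queen
        return True

    def backtrack():
        if len(queens) == n:
            return True                # all rows filled: satisfiable
        for col in range(n):
            if safe(col):              # prune: extend only consistent partial placements
                queens.append(col)
                if backtrack():
                    return True
                queens.pop()           # undo and try the next column
        return False                   # no column works: this branch is a dead end

    return queens if backtrack() else None

print(solve_n_queens(8))  # [0, 4, 7, 5, 2, 6, 1, 3]
print(solve_n_queens(3))  # None: three queens on a 3x3 board is unsat
```

The `safe` check is the set of constraints; returning `None` corresponds to the problem being unsat.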

Though Boolean Logic holds immense power, sometimes we cannot model a problem simply in terms of binary values. For example, if the temperature of water can only be “hot” or “cold”, how do we ask for “warm” water? More precision is needed to represent all the predicates associated with such problems. Hence, where Boolean Logic fails, Fuzzy Logic comes to the rescue! Fuzzy Logic is a more flexible technique: it allows partial truths, in contrast to the mutually exclusive true and false of Boolean Logic. It brings the machine one step closer to human reasoning capabilities, leading to better results.

Now a machine can quantify its results as degrees of truth between 0 and 1. This is a huge advantage over conventional boolean variables with their two strict extremes.
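As a toy illustration (the function names and temperature thresholds are my own assumptions, not a standard), fuzzy membership functions let a single temperature belong to “cold”, “warm”, and “hot” to different degrees at once:

```python
def cold(t):
    # fully cold at or below 0 C, not cold at all at or above 20 C
    return max(0.0, min(1.0, (20 - t) / 20))

def warm(t):
    # triangular membership: zero outside (20 C, 50 C), peaking at 35 C
    return max(0.0, min((t - 20) / 15, (50 - t) / 15))

def hot(t):
    # not hot at or below 50 C, fully hot at or above 70 C
    return max(0.0, min(1.0, (t - 50) / 20))

for t in (10, 35, 60):
    print(t, round(cold(t), 2), round(warm(t), 2), round(hot(t), 2))
# 10 C is half cold, 35 C is fully warm, 60 C is half hot
```

A request for “warm” water then simply means maximizing the `warm` membership, something boolean variables cannot express.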

Logic forms a small yet important part of the vast domain of Artificial Intelligence. In fact, logic gives rise to an entire programming paradigm, Logic Programming, which is used to solve complex problems. Prolog (PROgramming in LOGic), a declarative language commonly used in AI, is based on this paradigm. A Prolog program consists of a set of facts stating what is known to be true, and a set of rules for checking whether a given proposition follows from those facts.
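Since this post's examples are not in real Prolog, here is a toy encoding of the idea in Python (entirely my own sketch): facts plus one rule, `grandparent(X, Z) :- parent(X, Y), parent(Y, Z).`, evaluated by a single forward-chaining pass.

```python
# Known facts, written as Prolog-style strings.
facts = {"parent(tom, bob)", "parent(bob, ann)"}

def apply_grandparent_rule(kb):
    """If parent(X, Y) and parent(Y, Z) both hold, derive grandparent(X, Z)."""
    derived = set(kb)
    # strip "parent(" and ")" to get the two arguments of each parent/2 fact
    parents = [f[7:-1].split(", ") for f in kb if f.startswith("parent(")]
    for x, y1 in parents:
        for y2, z in parents:
            if y1 == y2:                       # the rule's shared variable Y
                derived.add(f"grandparent({x}, {z})")
    return derived

kb = apply_grandparent_rule(facts)
print("grandparent(tom, ann)" in kb)  # True
```

A real Prolog engine answers such queries by resolution and unification rather than this brute-force join, but the declarative flavour, facts plus rules instead of step-by-step instructions, is the same.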

Logic, however, cannot completely define AI. A purely logical system is limited to the facts fed to it, and those facts alone cannot teach it a genuinely new concept or situation. This is referred to as the Closed World Assumption: anything not known to be true is assumed to be false, so inferences can be drawn only from known facts. Hence, logic can be thought of as a necessary but not sufficient component of AI.
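The Closed World Assumption fits in a few lines (a toy of my own, not a real system): a query absent from the knowledge base is simply treated as false, with no way to say “unknown”.

```python
known_facts = {"bird(tweety)", "flies(tweety)"}

def holds(query, kb):
    # CWA: membership is truth; absence from the KB means false, never "unknown"
    return query in kb

print(holds("flies(tweety)", known_facts))  # True
print(holds("flies(polly)", known_facts))   # False, merely because it was never stated
```

Nothing about polly is actually known, yet the system confidently answers False, which is exactly why such a system cannot learn a new concept on its own.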

We also encounter situations where following logic strictly leads to errors, some of which turn out to be really funny! For example, a robot passport checker in New Zealand rejected an Asian man’s application, claiming that his eyes were closed in the picture. Oddly enough, that’s a true story.

This happens because artificial machines lack the ability to think beyond their scope. There are many other cases where AI goes wrong: misinterpreted voice commands, Google misidentifying a person in a photo and filing it in someone else’s album, chatbots misunderstanding conversations, and so on.

AI also struggles to perceive visual information accurately. Research in this area draws heavily on neuroanatomy, the study of the structure of the brain and nervous system, because many of the brain’s mysteries are yet to unfold. Before building an artificial system that can comprehend visuals, we need to understand how humans do it. Facts alone may not be sufficient, as the human brain is influenced by our surroundings as well as the subconscious mind.

Despite these shortcomings, AI continues to be a dominant force in computing and Computer Vision. As discussed in my first article in this series, it’s difficult to imagine today’s life without AI. When talking about the potential of Artificial Intelligence, it’s important to mention GANs (Generative Adversarial Networks). These are AI models consisting of two parts: a generator, which creates samples, and a discriminator, which distinguishes the generator’s samples from real-world ones. The two are trained against each other, and in this way a GAN learns features by itself. I would not like to skip this chance to mention:

https://thispersondoesnotexist.com/

The pictures on this page are generated by StyleGAN2. These seemingly real-looking humans do not exist in real life. The photographs don’t capture real people; they are hypothetical faces created through learned features, and it is almost impossible to see the same picture twice. The same holds for the other categories of photos (e.g., cats, horses, etc.) on similar pages.

Another breakthrough is Sophia, the first humanoid robot to officially receive citizenship of a country. This social machine is not solely an outcome of Artificial Intelligence; such technology involves continuous, dedicated effort from almost every scientific field one can imagine. One of the systems behind it is OpenCog, a powerful AI framework designed for general reasoning. Being social means being trained to interact with people and respond to the surroundings with human-like expressions such as happiness and anger. But have you ever thought about the number of possible situations such a robot can encounter? It is impossible to explicitly feed in all the facts, queries, and answers. So what makes it that smart? The answer is AI itself: every time it is put into a new environment, it tries to learn from it, so that the next time a similar situation arises, its response improves. Of course, it can just as easily learn from the wrong environments, which could cause major problems for mankind. That is another issue that needs to be addressed.

Artificial Intelligence is really smart, and it’s getting smarter every day. However, despite having come a long way, several issues still need to be addressed. The journey ahead will involve AI building itself up, and that is only possible with our constant effort and monitoring of its learning. The takeaway remains the same: “artificial intelligence has no existence without real intelligence”.

Thanks a lot for reading! Hope this blog added to your existing knowledge :)

References:

  1. Satisfiability Checking — a course by Dr. Erika Ábrahám (RWTH Aachen University)
  2. Paradigms of Programming — a course by Dr. Manas Thakur (Indian Institute of Technology Mandi)
