We humans used to consider ourselves the centre of the universe until Nicolaus Copernicus showed otherwise. Galileo is said to have observed that you cannot teach a man anything; you can only help him find it within himself. Da Vinci devoted much of his life to studying sciences such as anatomy, physics, and chemistry so that he could express himself better through his immortal paintings. Friedrich Nietzsche, in his writing, envisioned a superhuman and set every person the goal of becoming an Übermensch. In recent decades, innate human potential has found new ways to express itself through artificial intelligence.
In 1945, Alan Turing completed his report on a proposed electronic calculator, which was by every definition a full-fledged design for a stored-program computer. In 1950 he published "Computing Machinery and Intelligence", which gave a clear sense of a coming era of machine intelligence on a large scale. He is thus regarded as a founder of both computer science and artificial intelligence; in fact, anyone tapping on a keyboard today is working on an incarnation of the Turing machine. Soon after, Marvin Minsky and Dean Edmonds built SNARC, the Stochastic Neural Analog Reinforcement Calculator, a very early neural network in which about three thousand vacuum tubes were used to simulate around forty neurons. The term "artificial intelligence" itself was coined by John McCarthy in the proposal for the Dartmouth workshop. Around the same time, Arthur Samuel created one of the first programs with the ability to learn.
A few years later, Frank Rosenblatt created the perceptron, a neural network capable of recognising patterns. The New York Times famously described the discovery as the dawn of a remarkable machine that would be able to walk, talk, see, write, and even be conscious of its own existence. John McCarthy devised LISP, a language long used in AI research, and Arthur Samuel coined the term "machine learning". Oliver Selfridge, in his publication, explained how computers could recognise patterns on their own. A General Motors factory in New Jersey installed the first industrial robot, named Unimate. ELIZA, an interactive program developed by Joseph Weizenbaum, was capable of carrying on simple conversations in natural language, and understanding natural language became easier once Terry Winograd developed the SHRDLU program. Around the 1970s, "backpropagation", a learning algorithm for multi-layer neural networks, began to create waves, and WABOT, the first human-like robot with limbs, vision, and dialogue capabilities, was developed in Japan.
Since the advent of AI, two opposing philosophies have competed: symbolism and connectionism. In recent years connectionism has emerged victorious in the form of deep learning, but it has its own restraints and boundaries. In the symbolic approach, intelligent systems are built on explicit conceptualisation through words and numbers, while in the connectionist approach neural networks trained on massive data give rise to intelligent behaviour. In a neural network, millions of artificial neurons are interconnected, and their plasticity lets them learn from each new piece of information they are fed. But such networks are hard to program in a clear, explicit way, and this is the basic impediment that defines their boundaries, because in the real world we cannot rely on ambiguity. In the symbolic approach, symbols carry discrete semantics that are readable for humans too. The problem arises when such systems face real-world situations for which they have no predetermined program: real life varies and is replete with intricate events of many kinds. The toy sketch below makes the contrast concrete.
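As a rough illustration (a hypothetical toy example in Python, not drawn from any real system), the first function below encodes knowledge as an explicit, human-readable rule, while the second learns its behaviour from labelled data by adjusting the weights of a single perceptron.

```python
# Symbolic approach: knowledge is written down as explicit, human-readable rules.
def symbolic_is_spam(email_text: str) -> bool:
    """Hand-crafted rule: flag messages containing known spam phrases."""
    spam_phrases = ["win a prize", "free money", "act now"]
    return any(phrase in email_text.lower() for phrase in spam_phrases)


# Connectionist approach: behaviour is learned from data by nudging weights.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Train a single perceptron; weights start at zero and shift on each error."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            predicted = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = y - predicted
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias


# Example: the perceptron learns a logical OR from four labelled samples.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 1]
learned_weights, learned_bias = train_perceptron(X, y)
```

The rule is transparent but brittle; the perceptron adapts to its data but offers no human-readable explanation of what it has learned, which is exactly the trade-off described above.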
Until around 1979 the symbolic approach was dominant, but it was later realised that perceptrons and expert systems did not scale, and the decline of the symbolic approach began. Soon afterwards, academia and industry started paying more attention to the connectionist approach. Geoffrey Hinton's research, with David Rumelhart and Ronald Williams, popularised backpropagation and revolutionised neural networks. More recently, faster processors and the availability of massive datasets have revitalised neural networks, and companies have adopted this approach for the better.
In a nutshell, connectionism is good at recognising patterns in the data it is fed and can make reliable predictions, but it struggles to reason or to grasp deep-seated semantics and human pragmatics. A hybrid of both approaches will probably serve for a while, until we find a better way to combine reasoning with accurate prediction. To build commercially viable products, tech companies already draw on both approaches, for example in driverless cars: cognition modules handle the environment, perception, planning, and real-time prediction, and symbolic frameworks are layered on top to validate every action, as sketched below. It is fair to say that the neural network lets a system see, while the symbolic approach lets it reason pragmatically. Human neurons form connections to memories and we often think in images, so a better mix of the two may reap real benefits in the near future.
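The following sketch is a deliberately simplified, hypothetical Python illustration of that idea: a stand-in for a learned perception module proposes what the car "sees", a naive planner suggests an action, and an explicit symbolic rule layer validates or overrides it. The module and class names are illustrative assumptions, not any vendor's actual stack.

```python
from dataclasses import dataclass


@dataclass
class Perception:
    obstacle_ahead: bool
    distance_m: float
    traffic_light: str  # "red", "amber", or "green"


def neural_perception(camera_frame) -> Perception:
    """Stand-in for a trained neural network; here it just returns a fixed reading."""
    return Perception(obstacle_ahead=True, distance_m=12.0, traffic_light="green")


def plan_action(p: Perception) -> str:
    """Naive planner: keep cruising unless perception suggests otherwise."""
    return "brake" if p.obstacle_ahead and p.distance_m < 10 else "cruise"


def symbolic_validator(p: Perception, action: str) -> str:
    """Explicit, human-readable safety rules that can override the learned stack."""
    if p.traffic_light == "red" and action != "brake":
        return "brake"              # rule: never run a red light
    if p.obstacle_ahead and p.distance_m < 5:
        return "emergency_stop"     # rule: hard stop when dangerously close
    return action


if __name__ == "__main__":
    perception = neural_perception(camera_frame=None)
    proposed = plan_action(perception)
    final = symbolic_validator(perception, proposed)
    print(f"proposed={proposed}, validated={final}")
```

The appeal of such a split is that the learned part may stay opaque, while the rules that can veto it remain small, explicit, and auditable.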
In the contemporary era, Siri, Google Assistant, and Alexa are popular bots, while Boston Dynamics' humanoid robots are the new fascination. Ultimately, human imagination and technology are paving the way towards the singularity: a world in which human societies learn to live in peace, with a collective feeling of harmony and oneness, and in a sustainable way, of course.
***
Are you currently seeking a reliable tech partner? Look no further: simply Contact Us here or drop a few lines at [email protected], and we will reach out to you quickly. The initial consultation at DevCrew is absolutely free, and we are ready to transform your ingenious idea into a highly performant solution right away.