Artificial Intelligence is the simulation of human intelligence processes by machines and computer systems, a concept that ranges from process automation to advanced robotics. Here are several moments that have defined its development.
Alan Turing’s computable numbers and the foundations of computing
The origin of artificial intelligence is closely tied to computation. During the 1930s and 1940s, the first works appeared that touched in some way on the principles of artificial intelligence, although the term AI did not yet exist. Alan Turing’s 1936 paper “On Computable Numbers” established the theoretical basis of computer science and formalized the concept of the algorithm.
Z3, the first fully functional programmable digital computer
In 1941, German engineer Konrad Zuse built the Z3, which experts regard as the first fully functional programmable digital computer; it was electromechanical, built from relays rather than electronic components. Zuse also designed Plankalkül, considered the first high-level programming language. In 1943, Warren McCulloch and Walter Pitts presented their model of the artificial neuron, widely considered the first work in history specifically addressing the field of artificial intelligence.
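The McCulloch-Pitts idea can be illustrated with a few lines of code: a unit sums its binary inputs and “fires” if the sum reaches a threshold. This is a minimal sketch for intuition only; the function name and threshold values are illustrative, not the notation of the 1943 paper.

```python
def mcculloch_pitts_neuron(inputs, threshold):
    """Fire (return 1) if the sum of binary inputs meets the threshold, else 0."""
    return 1 if sum(inputs) >= threshold else 0

# With the right threshold, a single unit computes simple logic gates.
# AND gate: fires only when both inputs are active (threshold = 2).
print(mcculloch_pitts_neuron([1, 1], threshold=2))  # 1
print(mcculloch_pitts_neuron([1, 0], threshold=2))  # 0

# OR gate: fires when at least one input is active (threshold = 1).
print(mcculloch_pitts_neuron([0, 1], threshold=1))  # 1
```

McCulloch and Pitts showed that networks of such threshold units can compute any logical function, which is why the model is seen as a precursor of today’s neural networks.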
The meeting that shaped the name and trends of AI
In 1956, leading researchers in this incipient area met at Dartmouth College. Among them was John McCarthy, a prominent mathematician and professor who gave the new science the name Artificial Intelligence and later created the LISP language. From this important meeting, two trends in the study of artificial intelligence began to be distinguished: connectionism and knowledge engineering.
The 1970s: between programming languages and prototypes
Between the 1960s and 1970s, it became clear that many of the techniques proposed by the pioneers of AI failed to take heuristic knowledge into account. In the early 1970s, a very important step was taken with the appearance of the initial version of the logic programming language PROLOG. Around the same time, several disease-diagnosis systems and other important functional prototypes were designed; their success in the fields for which they were built attracted the attention of investors to the area.
The advent of the Database concept
The so-called industrial diffusion stage began in the 1980s, as prototypes left laboratories and universities to enter the world of production and the market. This period also saw the appearance of the concept of the intelligent database, a renewing idea that goes beyond the simple storage of information. Well-known database systems such as Oracle, DB2, and SQL Server date from this era, while NoSQL approaches were later conceived for big-data management challenges and to increase organizational efficiency.
Robotic applications, blockchain, and the cloud
In recent decades, robotics, neural networks, pattern recognition, natural language processing, and machine learning have received special attention in the development of AI, opening a virtually unlimited perspective on the topic. A world in which robotic applications, blockchain, and cloud services gain more and more prominence begins to take shape.
The ethical responsibility debate
The almost limitless prospects raised by the development of artificial intelligence have eventually opened a debate on ethical responsibility: a technology of such complexity and impact must be managed under solid human values. The debate has focused on issues such as prudence, reliability, and accountability, along with human responsibility for AI and the necessary limits on the autonomy of these systems.
Stay tuned! Visit our blog; we post new articles every week.