Ada Lovelace

Written by Catherine Muñoz

The history of artificial intelligence is as remarkable as its current results. It begins in an uncommon way in the mid-19th century, with a woman named Augusta Ada Byron, better known as Ada Lovelace, the daughter of the well-known poet Lord Byron. Lovelace was an Englishwoman whose fascination with mathematics led her to encounter several mechanical inventions of her time first-hand and to view them from a revolutionary perspective.[1]

Ada Lovelace collaborated with the scientist and mathematician Charles Babbage on the direction, purpose, and aims of his Analytical Engine[2], which in the end was never built. As a result of this work she is considered the creator of the first algorithm as such, and she has been called the first computer “programmer”.[3]

In 1843, as part of her work as an assistant, Ada translated a paper by Luigi Menabrea, an Italian engineer, about Babbage’s Analytical Engine. Along with that translation she prepared extensive notes of her own, incorporating an analysis of the functional nature of the machine. The translation itself was eventually pushed into the background, and the notes became the centerpiece: a remarkably early understanding of the foundations of computing and artificial intelligence from a mathematical and philosophical perspective, opening up a number of possibilities for a machine invented exclusively for numerical calculation.[4]

Ultimately, what Ada discerned was the capacity a machine could have to process, in a consistent and logical way, not only numbers but also other kinds of abstract data, such as musical notes.

Ada is also said to be the first person to write an algorithm: in the note labeled G [5] of that same document, she detailed a logical sequence that allows the calculation of Bernoulli numbers.[6]
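To make the object of Note G concrete, here is a minimal modern sketch, in Python, of computing Bernoulli numbers. It uses the standard recurrence from the defining identity of the Bernoulli numbers, not Lovelace’s actual table of operations for the engine:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return B_0..B_n as exact fractions, using the recurrence
    sum_{k=0}^{m} C(m+1, k) * B_k = 0, which holds for every m >= 1."""
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))  # solve the recurrence for B_m
    return B

print(bernoulli(8))
# [1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30]
```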

Almost 100 years passed between Ada Lovelace’s deep analyses and the moment when one of the most important scientists of the 20th century, Alan Turing, was able to develop the scientific and logical foundations that made the progress of computer science and AI possible.[7] Like everyone else in this story, Turing was a remarkable man, far ahead of his time.

In a paper published in 1937[8] entitled “On Computable Numbers, with an Application to the Entscheidungsproblem”[9], Turing gave new definitions of computable numbers and a description of a universal machine, finally showing that the Entscheidungsproblem cannot be solved. Like Ada Lovelace’s analysis, this paper and later ones combine logic with philosophy and mathematics to arrive at his well-known theories.

“Calculator” at that time was the name given to people whose job was to carry out mathematical computations. Turing based his machine on that role and, as Nils J. Nilsson explains[10], its operation was very simple. The machine is made of few parts: an infinite tape divided into cells, each of which holds a printed 1 or 0; a head that reads and writes those symbols; and a logical unit that can change its own state in order to read or write a 1 or 0, move the tape one cell to the left or right, or end the operation.

In principle, each machine computed a single function; Turing then devised a way to encode the description of any machine in the input data itself, creating the so-called Universal Turing Machine, which is the basis of current computers.
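As a concrete illustration (a sketch of the general idea, not Turing’s own notation), the following Python snippet simulates a single-function machine of this kind; the state names and the example transition table are invented for the example:

```python
def run(transitions, tape, state="start", max_steps=100):
    """Simulate a simple Turing machine: an unbounded tape of 0s and 1s,
    a read/write head, and a table mapping (state, symbol) to
    (next state, symbol to write, head move)."""
    cells = dict(enumerate(tape))  # sparse stand-in for the infinite tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, 0)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return [cells[i] for i in sorted(cells)]

# Example machine: invert bits from left to right, halting after the first 0.
flipper = {
    ("start", 1): ("start", 0, "R"),
    ("start", 0): ("halt", 1, "R"),
}
print(run(flipper, [1, 1, 1, 0]))  # [0, 0, 0, 1]
```

A universal machine takes this one step further: the transition table itself is encoded on the tape, so one fixed machine can imitate any other.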

With the advent of the Second World War, this invention and its creator turned to intelligence and national security. Turing was recruited by the Government Code and Cypher School based at Bletchley Park, where he made important contributions to codebreaking.

Then, in 1950, Turing published in the journal Mind his paper “Computing Machinery and Intelligence”.[11] The paper opens as follows: “I propose to consider the question, ‘Can machines think?’”

In that paper, Turing proposed the so-called “Turing Test”, whose purpose was to determine whether a computer could “think”. In simple terms, the test requires a machine to hold a conversation, through text, with a human being. If, after five minutes, the human is convinced of talking to another human being, the machine is said to have passed the test. In John McCarthy’s words: “He argued [Turing] that if the machine could successfully pretend to be human to a knowledgeable observer then you certainly should consider it intelligent.”[12]

[Image: statue of Alan Turing at Bletchley Park, commissioned from Stephen Kettle by the late Sidney E. Frank; photo © Gerald Massey, www.geograph.org.uk]

So far we have seen the development of AI from the standpoint of mathematics and logic. After that, two well-defined approaches to the study and development of AI became clearly distinct. On one side are those who based their work on logic and mathematics, with philosophical and epistemological resources, corresponding to symbolic or classical AI; on the other are those who based their studies on biology, known as connectionist AI, or cybernetics.

Cybernetics was defined by Wiener as “the science of control and communication, in the animal and the machine”.[13]

The preceding definition was taken from one of the works of W. Ross Ashby, an English psychiatrist who published a theory of adaptive behavior in animals and machines.[14]

In parallel with the development of symbolic or classical AI, the neurophysiologist Warren S. McCulloch and the logician Walter Pitts published in 1943 a paper called “A Logical Calculus of the Ideas Immanent in Nervous Activity”[15], in which they proposed an artificial neuron model drawn from an explanation of how the human nervous system operates.

According to Nils J. Nilsson[16], the McCulloch-Pitts neuron is a mathematical abstraction of a brain cell, with inputs corresponding to dendrites and an output value of 0 or 1 as an analogue of the axon; these “neurons” can be connected to one another to form networks. In this respect, Nilsson says: “Some neural elements are excitatory — their outputs contribute to “firing” any neural elements to which they are connected. Others are inhibitory — their outputs contribute to inhibiting the firing of neural elements to which they are connected. If the sum of the excitatory inputs less the sum of the inhibitory inputs impinging on a neural element is greater than a certain “threshold,” that neural element fires, sending its output of 1 to all of the neural elements to which it is connected”.[17]
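Nilsson’s description translates almost directly into code. The following Python sketch (with illustrative names of its own) implements a single McCulloch-Pitts unit under that threshold rule:

```python
def mp_unit(excitatory, inhibitory, threshold):
    """A McCulloch-Pitts unit: fires (returns 1) when the sum of its
    excitatory inputs minus the sum of its inhibitory inputs exceeds
    the threshold; otherwise it stays silent (returns 0)."""
    net = sum(excitatory) - sum(inhibitory)
    return 1 if net > threshold else 0

# With threshold 1 and no inhibition, a two-input unit computes logical AND:
print(mp_unit([1, 1], [], 1))  # 1: fires only when both inputs are active
print(mp_unit([1, 0], [], 1))  # 0: a net input of 1 does not exceed the threshold
```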

In sum, mathematics, logic, and biology have all been present in the development of AI, joined by psychology and the cognitive sciences, which have been essential to the development of learning in artificial neurons.

The psychologist Frank Rosenblatt developed the Perceptron in 1957, the first artificial neural network capable of learning, which recognizes patterns by means of a binary classifier.[18] Later, David Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams described backpropagation, a method for training multi-layer networks by propagating errors backward through them.
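As a rough sketch of the learning rule (not Rosenblatt’s original implementation, which was built in hardware), a perceptron fits in a few lines of Python: the weights are nudged only when the binary classifier gets an example wrong.

```python
def train_perceptron(samples, labels, epochs=10, lr=1.0):
    """Learn weights and a bias for binary labels in {0, 1}
    using Rosenblatt's error-correction rule."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            predicted = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = target - predicted  # -1, 0, or +1
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Learning the logical AND function, which is linearly separable:
xs = [(0, 0), (0, 1), (1, 0), (1, 1)]
ys = [0, 0, 0, 1]
w, b = train_perceptron(xs, ys)
print([1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in xs])
# [0, 0, 0, 1]
```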

The expression “Artificial Intelligence” was first coined in 1956, at the Dartmouth Conference organized by John McCarthy and Marvin Minsky, among others. This conference began a new era in the development of AI.[19]

Government agencies, such as the U.S. Defense Advanced Research Projects Agency (DARPA), invested heavily in artificial intelligence research during the Cold War. Then, between 1974 and 1980, the so-called “AI Winter” took place, with a reduction in the interest in, and financing of, these technologies.

However, from the early 1990s there was explosive growth in AI technology, driven mainly by the rise of Big Data and supercomputers…

And here we are now. Nobody knows how this story will continue; we hope the field will recover the essence reflected in Ada Lovelace’s work.

[1] Luigia Carlucci Aiello, “The Multifaceted Impact of Ada Lovelace in the Digital Age”, Artificial Intelligence 235 (2016): 58–62, doi:10.1016/j.artint.2016.02.003.

[2] Margaret A. Boden, AI: Its Nature and Future, 1st ed. (Oxford: Oxford University Press, 2016), 1–28.

[3] Suw Charman-Anderson, “Ada Lovelace: Victorian Computing Visionary”, Ada User Journal 36, no. 1 (2015).

[4] J. G. Llaurado, “Ada, the Enchantress of Numbers”, International Journal of Bio-Medical Computing 32, no. 1 (1993): 79–80, doi:10.1016/0020-7101(93)90008-t.

[5] L. F. Menabrea and A. Lovelace, “Sketch of the Analytical Engine Invented by Charles Babbage” (1842), 49–59.

[6] Ronald L. Graham et al., “Concrete Mathematics: A Foundation for Computer Science”, Computers in Physics 3, no. 5 (1989): 106–107.

[7] Margaret A. Boden, AI: Its Nature and Future, 1st ed. (Oxford: Oxford University Press, 2016), 1–28.

[8] A. M. Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem”, Proceedings of the London Mathematical Society s2-42, no. 1 (1937): 230–265, doi:10.1112/plms/s2-42.1.230.

[9] The Entscheidungsproblem (mathematics, logic) is the decision problem of finding a way to decide whether a formula is true or provable within a given system. “Entscheidungsproblem”, YourDictionary.com, accessed 14 March 2019, https://www.yourdictionary.com/entscheidungsproblem.

[10] Nils J. Nilsson, The Quest for Artificial Intelligence (Cambridge: Cambridge University Press, 2009), e-book, 57.

[11] A. M. Turing, “Computing Machinery and Intelligence”, Mind 59, no. 236 (1950): 433–460, doi:10.1093/mind/lix.236.433.

[12] John McCarthy, “What Is Artificial Intelligence?”, Stanford University, 1998, http://www-formal.stanford.edu/jmc/whatisai/whatisai.html.

[13] W. Ross Ashby and J. R. Pierce, “An Introduction to Cybernetics”, Physics Today 10, no. 7 (1957): 34–36, doi:10.1063/1.3060436.

[14] W. Ross Ashby, “Principles of the Self-Organizing Dynamic System”, The Journal of General Psychology 37, no. 2 (1947): 125–128, doi:10.1080/00221309.1947.9918144.

[15] Warren S. McCulloch and Walter Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity”, Bulletin of Mathematical Biology 52, no. 1–2 (1990): 99–115, doi:10.1016/s0092-8240(05)80006-0.

[16] Nils J. Nilsson, The Quest for Artificial Intelligence (Cambridge: Cambridge University Press, 2009), e-book, 34–35.

[17] Ibid.

[18] S.I. Gallant, “Perceptron-Based Learning Algorithms”, IEEE Transactions on Neural Networks 1, no. 2 (1990): 179–191, doi:10.1109/72.80230.

[19] James Moor, “The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years”, AI Magazine 27, no. 4 (2006): 87.