Dr Rajiv Desai

An Educational Blog

ARTIFICIAL INTELLIGENCE (AI)

Artificial Intelligence (AI):

_____

______

Prologue:

Mention Artificial Intelligence (AI) and most people are immediately transported into a distant future inspired by popular science fiction such as Terminator and HAL 9000. While these two artificial entities do not exist, the algorithms of AI have been able to address many real issues, from performing medical diagnoses to navigating difficult terrain to monitoring possible failures of spacecraft. In the early 20th century, Jean Piaget remarked, “Intelligence is what you use when you don’t know what to do, when neither innateness nor learning has prepared you for the particular situation.” A 1969 McKinsey article claimed that computers were so dumb that they were not capable of making any decisions, and that it was human intelligence that drove the dumb machine. Alas, this claim has become a bit of a “joke” over the years, as modern computers are gradually replacing skilled practitioners in fields across many industries such as architecture, medicine, geology, and education. Artificial Intelligence pursues creating computers or machines as intelligent as human beings. Michael A. Arbib advanced the notion that the brain is not a computer in the recent technological sense, but that we can learn much about brains from studying machines, and much about machines from studying brains. While AI seems like a futuristic concept, it’s actually something that many people use daily, although 63 percent of users don’t realize they’re using it. We use artificial intelligence all day, every day. Siri, Google Now, and Cortana are obvious examples of artificial intelligence, but AI is actually all around us. It can be found in vacuum cleaners, cars, lawnmowers, video games, Hollywood special effects, e-commerce software, medical research and international finance markets – among many other examples. 
John McCarthy, who originally coined the term “Artificial Intelligence” in 1956, famously quipped: “As soon as it works, no one calls it AI anymore.” While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons. In many unforeseen ways, AI is helping to improve our lives and make them more efficient, though the erosion of human economic and cultural structures is also a potential reality. The Future of Life Institute’s tagline sums it up in succinct fashion: “Technology is giving life the potential to flourish like never before…or to self-destruct.” Humans are the creators, but will we always have control of our revolutionary inventions? Scientists reckon there have been at least five mass extinction events in the history of our planet, when a catastrophically high number of species were wiped out in a relatively short period of time. Global warming, nuclear holocaust and artificial intelligence are debated as possible causes of a sixth mass extinction, in which the human species could be driven extinct by its own activities!

_____

Synonyms and abbreviations: 

CPU = central processing unit

GPU = graphics processing unit

RAM = random access memory

ANI = artificial narrow intelligence = weak AI

AGI = artificial general intelligence = strong AI

CI = computational intelligence

FLS = Fuzzy Logic Systems

ANN = Artificial neural network

EA = evolutionary algorithms

GA = genetic algorithm

ES = expert system

NLP = natural language processing

ML = machine learning

API = application program interface

RPA = remotely piloted aircraft

___

Artificial Intelligence Terminology:

Here is the list of frequently used terms in the domain of AI:

Agent:

Agents are systems or software programs capable of autonomous, purposeful action and reasoning directed towards one or more goals. They are also called assistants, brokers, bots, droids, intelligent agents, and software agents.

Environment:

It is the part of the real or computational world inhabited by the agent.

Autonomous Robot:

A robot free from external control or influence and able to control itself independently.

Heuristics:

It is the knowledge based on trial and error, evaluations, and experimentation.

Knowledge Engineering:

Acquiring knowledge from human experts and other resources.

Percepts:

It is the format in which the agent obtains information about the environment.

Pruning:

Eliminating unnecessary and irrelevant considerations in AI systems.

Rule:

It is a format for representing the knowledge base in an Expert System, in the form of IF-THEN-ELSE statements.

Shell:

A shell is software that helps in designing the inference engine, knowledge base, and user interface of an expert system.

Task:

It is the goal the agent is trying to accomplish.

Turing Test:

A test developed by Alan Turing to evaluate the intelligence of a machine as compared to human intelligence.

Existential threat:

A force capable of completely wiping out human existence.

Autonomous weapons:

The proverbial killer A.I., autonomous weapons would use artificial intelligence rather than human intelligence to select their targets.

Machine learning:

Unlike conventional computer programs, machine-learning algorithms modify themselves to better perform their assigned tasks.

Alignment problem:

A situation in which the methods artificial intelligences use to complete their tasks fail to correspond to the needs of the humans who created them.

Singularity:

Although the term has been used broadly, the singularity typically describes the moment at which computers become so adept at modifying their own programming that they transcend current human intellect.

Superintelligence:

Superintelligence would exceed current human mental capacities in virtually every way and be capable of transformative cognitive feats.

_______

_______

Consider the following 3 practical examples:

A computer system allows a severely handicapped person to type merely by moving their eyes. The device is able to follow eye movements that briefly fix on letters of the alphabet. Then it types the letters at a rate that has allowed some volunteers to achieve speeds of 18 words per minute after practice. A similar computer system for the handicapped is installed on mechanized wheelchairs. It allows paralytics to “order” their wheelchairs wherever they want to go merely by voice commands. A couple in London reportedly have adapted a home computer to act as a nanny for their baby. The baby’s father, a computer consultant, programmed the computer to respond the instant baby Gemma cries by talking to her in a soothing tone, using parental voices. The surrogate nanny will also tell bedtime stories and teach the baby three languages as she begins to talk. These are just a few examples of Artificial Intelligence (AI), where a set of instructions, or a program, is fed into the computer, enabling it to solve problems on its own, the way a human does. AI is now being used in many fields such as medical diagnosis, robot control, computer games, flying airplanes, etc.

_

_

_

_

_

_

_

Besides computer science, AI is linked with many fields and disciplines:

______

______

History of AI:

The study of mechanical or “formal” reasoning began with philosophers and mathematicians in antiquity. In the 19th century, George Boole refined those ideas into propositional logic and Gottlob Frege developed a notational system for mechanical reasoning (a “predicate calculus”). Around the 1940s, Alan Turing’s theory of computation suggested that a machine, by shuffling symbols as simple as “0” and “1”, could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis. Along with concurrent discoveries in neurology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. The first work that is now generally recognized as AI was McCulloch and Pitts’ 1943 formal design for Turing-complete “artificial neurons”. The field of AI research was founded at a conference at Dartmouth College in 1956. The attendees, including John McCarthy, Marvin Minsky, Allen Newell, Arthur Samuel and Herbert Simon, became the leaders of AI research. They and their students wrote programs that were astonishing to most people: computers were winning at checkers, solving word problems in algebra, proving logical theorems and speaking English. By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense and laboratories had been established around the world. AI’s founders were optimistic about the future: Herbert Simon predicted, “machines will be capable, within twenty years, of doing any work a man can do.” Marvin Minsky agreed, writing, “within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved.” They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. 
and British governments cut off exploratory research in AI. The next few years would later be called an “AI winter”, a period when funding for AI projects was hard to find. In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan’s fifth generation computer project inspired the U.S. and British governments to restore funding for academic research. However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began. In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas. The success was due to increasing computational power, greater emphasis on solving specific problems, new ties between AI and other fields and a commitment by researchers to mathematical methods and scientific standards. Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997. Advanced statistical techniques (loosely known as deep learning), access to large amounts of data and faster computers enabled advances in machine learning and perception. By the mid 2010s, machine learning applications were used throughout the world. In a Jeopardy! quiz show exhibition match, IBM’s question answering system, Watson, defeated the two greatest Jeopardy champions, Brad Rutter and Ken Jennings, by a significant margin. The Kinect, which provides a 3D body–motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research, as do intelligent personal assistants in smartphones. 
In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps. According to Bloomberg’s Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from a “sporadic usage” in 2012 to more than 2,700 projects. Clark also presents factual data indicating that error rates in image processing tasks have fallen significantly since 2011. He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets. Other cited examples include Microsoft’s development of a Skype system that can automatically translate from one language to another and Facebook’s system that can describe images to blind people.

__

Although in the eighteenth, nineteenth, and early twentieth centuries the formalization of science and mathematics created the intellectual prerequisite for the study of artificial intelligence, it was not until the twentieth century and the introduction of the digital computer that AI became a viable scientific discipline. By the end of the 1940s electronic digital computers had demonstrated their potential to provide the memory and processing power required by intelligent programs. It was now possible to implement formal reasoning systems on a computer and empirically test their sufficiency for exhibiting intelligence. An essential component of the science of artificial intelligence is this commitment to digital computers as the vehicle of choice for creating and testing theories of intelligence. Digital computers are not merely a vehicle for testing theories of intelligence. Their architecture also suggests a specific paradigm for such theories: intelligence is a form of information processing. The notion of search as a problem-solving methodology, for example, owes more to the sequential nature of computer operation than it does to any biological model of intelligence. Most AI programs represent knowledge in some formal language that is then manipulated by algorithms, honoring the separation of data and program fundamental to the von Neumann style of computing. Formal logic has emerged as an important representational tool for AI research, just as graph theory plays an indispensable role in the analysis of problem spaces as well as providing a basis for semantic networks and similar models of semantic meaning.  We often forget that the tools we create for our own purposes tend to shape our conception of the world through their structure and limitations. Although seemingly restrictive, this interaction is an essential aspect of the evolution of human knowledge: a tool (and scientific theories are ultimately only tools) is developed to solve a particular problem. 
As it is used and refined, the tool itself seems to suggest other applications, leading to new questions and, ultimately, the development of new tools.

__

Here is a synopsis of the history of AI during the 20th century:

Year Milestone / Innovation
1923 Karel Čapek’s play “Rossum’s Universal Robots” (R.U.R.) opens in London, the first use of the word “robot” in English.
1943 Foundations for neural networks laid.
1945 Isaac Asimov, a Columbia University alumnus, coined the term Robotics.
1950 Alan Turing introduced Turing Test for evaluation of intelligence and published Computing Machinery and Intelligence. Claude Shannon published Detailed Analysis of Chess Playing as a search.
1956 John McCarthy coined the term Artificial Intelligence. Demonstration of the first running AI program at Carnegie Mellon University.
1958 John McCarthy invents LISP programming language for AI.
1964 Danny Bobrow’s dissertation at MIT showed that computers can understand natural language well enough to solve algebra word problems correctly.
1965 Joseph Weizenbaum at MIT built ELIZA, an interactive program that carries on a dialogue in English.
1969 Scientists at Stanford Research Institute developed Shakey, a robot equipped with locomotion, perception, and problem solving.
1973 The Assembly Robotics group at Edinburgh University built Freddy, the Famous Scottish Robot, capable of using vision to locate and assemble models.
1979 The first computer-controlled autonomous vehicle, Stanford Cart, was built.
1985 Harold Cohen created and demonstrated the drawing program, Aaron.
1990 Major advances in all areas of AI −

  • Significant demonstrations in machine learning
  • Case-based reasoning
  • Multi-agent planning
  • Scheduling
  • Data mining, Web Crawler
  • natural language understanding and translation
  • Vision, Virtual Reality
  • Games
1997 The Deep Blue Chess Program beats the then world chess champion, Garry Kasparov.
2000 Interactive robot pets become commercially available. MIT displays Kismet, a robot with a face that expresses emotions. The robot Nomad explores remote regions of Antarctica and locates meteorites.

______

Breakthroughs in AI are depicted in chronological order:

_______

_______

Mathematics, computer science and AI:

To know artificial intelligence, you have to know computer science, and to know computer science you have to know mathematics.

Definition of mathematics:

Mathematics makes up that part of the human conceptual system that is special in the following way:

It is precise, consistent, stable across time and human communities, symbolizable, calculable, generalizable, universally available, consistent within each of its subject matters, and effective as a general tool for description, explanation, and prediction in a vast number of everyday activities, [ranging from] sports, to building, business, technology, and science. There is no branch of mathematics, however abstract, which may not someday be applied to phenomena of the real world. Mathematics arises from our bodies and brains, our everyday experiences, and the concerns of human societies and cultures.

_

Computer science to artificial intelligence:

Since the invention of computers or machines, their capability to perform various tasks has grown exponentially. Humans have developed the power of computer systems in terms of their diverse working domains, their increasing speed, and their reducing size over time. A branch of Computer Science named Artificial Intelligence pursues creating computers or machines as intelligent as human beings. Theoretical Computer Science has its roots in mathematics, where there was a lot of discussion of logic. It began with Pascal in the 1600s and Babbage in the 1800s, who tried to come up with computing machines that would help in calculating arithmetic. Some of them actually worked, but they were mechanical machines built on physics, without a real theoretical background. Another key figure of the 1800s was George Boole, who tried to formulate a mathematical form of logic. This was eventually called Boolean Logic in his honor, and we still use it today to form the heart of all computer hardware. All those transistors and things you see on a circuit board are really just physical representations of what George Boole came up with. Computer Science, however, hit its golden age with John von Neumann and Alan Turing in the 1900s. Von Neumann formulated the theoretical form of computers that is still used today as the heart of all computer design: the separation of the CPU, the RAM, the bus, etc. This is all known collectively as von Neumann architecture. Alan Turing, however, is famous for the theoretical part of Computer Science. He invented something called the Universal Turing Machine, which told us exactly what could and could not be computed using the standard computer architecture of today. This formed the basis of Theoretical Computer Science. 
Ever since Turing formulated this extraordinary concept, Computer Science has been dedicated to answering one question: “Can we compute this?” This question is known as computability, and it is one of the core disciplines in Computer Science. Another form of the question is “Can we compute this better?” This leads to more complications, because what does “better” mean? So, Computer Science is partly about finding efficient algorithms to do what you need. Still, there are other forms of Computer Science, answering such related questions as “Can we compute thought?” This leads to fields like Artificial Intelligence. Computer Science is all about getting things done, to find progressive solutions to our problems, to fill gaps in our knowledge. Sure, Computer Science may have some math, but it is different from math. In the end, Computer Science is about exploring the limitations of humans, of expanding our horizons.

_

Basic of computer science:

Computer science is the study of problems, problem-solving, and the solutions that come out of the problem-solving process. Given a problem, a computer scientist’s goal is to develop an algorithm, a step-by-step list of instructions for solving any instance of the problem that might arise. Algorithms are finite processes that if followed will solve the problem. Algorithms are solutions. Computer science can be thought of as the study of algorithms. However, we must be careful to include the fact that some problems may not have a solution. Although proving this statement is beyond the scope of this text, the fact that some problems cannot be solved is important for those who study computer science. We can fully define computer science, then, by including both types of problems and stating that computer science is the study of solutions to problems as well as the study of problems with no solutions. It is also very common to include the word computable when describing problems and solutions. We say that a problem is computable if an algorithm exists for solving it. An alternative definition for computer science, then, is to say that computer science is the study of problems that are and that are not computable, the study of the existence and the nonexistence of algorithms. In any case, you will note that the word “computer” did not come up at all. Solutions are considered independent from the machine. Computer science, as it pertains to the problem-solving process itself, is also the study of abstraction. Abstraction allows us to view the problem and solution in such a way as to separate the so-called logical and physical perspectives. Most people use computers to write documents, send and receive email, surf the web, play music, store images, and play games without any knowledge of the details that take place to allow those types of applications to work. They view computers from a logical or user perspective. 
Computer scientists, programmers, technology support staff, and system administrators take a very different view of the computer. They must know the details of how operating systems work, how network protocols are configured, and how to write the scripts that control these functions. This is known as the physical perspective.

_

Fields of Computer Science:

Computer science is often said to be neither a science nor about computers. There is certainly some truth to this claim: computers are merely the device upon which the complex and beautiful ideas in computer science are tested and implemented. And it is hardly a science of discovery, as physics or biology might be, so much as it is a discipline of mathematics or engineering. But this all depends on which branch of computer science you are involved in, and there are many: theory, hardware, networking, graphics, programming languages, software engineering, systems, and of course, AI.

_

Theory:

Computer science (CS) theory is often highly mathematical, concerning itself with questions about the limits of computation. Some of the major results in CS theory include what can be computed and how fast certain problems can be solved. Some things are simply impossible to figure out! Other things are merely difficult, meaning they take a long time. The long-standing question of whether “P=NP” lies in the realm of theory. The P versus NP problem is a major unsolved problem in computer science. Informally speaking, it asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer. A subsection of theory is algorithm development. For instance, theorists might work to develop better algorithms for graph coloring, and theorists have been involved in improving algorithms used by the human genome project to produce faster algorithms for predicting DNA similarity.
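To make the “quickly verified” half of the P versus NP question concrete, here is a minimal Python sketch (the graph and colorings below are made up for illustration): checking that a proposed graph coloring is valid takes just one pass over the edges, even though finding an optimal coloring is believed to be hard.

```python
# Verifying a proposed graph coloring is cheap: one pass over the edges.
# Finding an optimal coloring, by contrast, has no known fast algorithm.

def is_valid_coloring(edges, coloring):
    """Return True if no edge connects two nodes of the same color."""
    return all(coloring[u] != coloring[v] for u, v in edges)

# A square graph: 4 nodes in a cycle, which is 2-colorable.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

print(is_valid_coloring(edges, {0: "red", 1: "blue", 2: "red", 3: "blue"}))  # True
print(is_valid_coloring(edges, {0: "red", 1: "red", 2: "blue", 3: "blue"}))  # False
```

The asymmetry on display here, fast to check, apparently slow to solve, is exactly what “P=NP?” asks about.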

_

Algorithm:

In mathematics and computer science, an algorithm is a self-contained sequence of actions to be performed. Algorithms perform calculation, data processing, and/or automated reasoning tasks. An algorithm is an effective method that can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing “output” and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. An informal definition could be “a set of rules that precisely defines a sequence of operations,” which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. Algorithms are essential to the way computers process data. Many computer programs contain algorithms that detail the specific instructions a computer should perform (in a specific order) to carry out a specified task, such as calculating employees’ paychecks or printing students’ report cards. In computer systems, an algorithm is basically an instance of logic written in software by software developers to be effective for the intended “target” computer(s) to produce output from given (perhaps null) input. An optimal algorithm, even running on old hardware, would produce faster results than a non-optimal (higher time complexity) algorithm for the same purpose running on more efficient hardware; that is why algorithms, like computer hardware, are considered technology.
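A classic illustration of this definition is Euclid’s algorithm for the greatest common divisor, sketched here in Python: a finite sequence of well-defined state transitions that is guaranteed to terminate with an output.

```python
# Euclid's algorithm: each loop iteration is one well-defined state
# transition, and the remainder strictly decreases, so the computation
# always terminates at a final state with the "output" -- the GCD.

def gcd(a, b):
    while b != 0:
        a, b = b, a % b   # transition to the next state
    return a              # final state: the greatest common divisor

print(gcd(1071, 462))  # 21
```

Because the remainder shrinks on every step, the program “stops eventually”, which is precisely what makes it an algorithm in the sense described above.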

_

Recursion and Iteration:

Recursion and iteration are two very commonly used, powerful methods of solving complex problems, directly harnessing the power of the computer to calculate things very quickly. Both methods rely on breaking up the complex problem into smaller, simpler steps that can be solved easily, but the two methods are subtly different. The difference between iteration and recursion is that with iteration, each step clearly leads onto the next, like stepping stones across a river, while in recursion, each step replicates itself at a smaller scale, so that all of them combined together eventually solve the problem. These two basic methods are very important to understand fully, since they appear in almost every computer algorithm ever made.
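The contrast is easiest to see with the same problem solved both ways. A small Python sketch, computing a factorial:

```python
# Iteration: each step leads onto the next, like stepping stones.
def factorial_iterative(n):
    result = 1
    for i in range(2, n + 1):   # visit 2, 3, ..., n in turn
        result *= i
    return result

# Recursion: each call replicates the problem at a smaller scale.
def factorial_recursive(n):
    if n <= 1:                  # base case stops the self-replication
        return 1
    return n * factorial_recursive(n - 1)

print(factorial_iterative(5))  # 120
print(factorial_recursive(5))  # 120
```

Both versions break the problem into simpler steps; the iterative one walks forward step by step, while the recursive one calls itself on a smaller input until it reaches a base case.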

_

Decision tree:

The idea behind decision trees is that, given an input with well-defined attributes, you can classify the input entirely based on making choices about each attribute. For instance, if you’re told that someone has a piece of US currency, you could ask what denomination it is; if the denomination is less than one dollar, it must be the coin with that denomination. That could be one branch of the decision tree that was split on the attribute of “value”. On the other hand, if the denomination happened to be one dollar, you’d then need to ask what type of currency it is: if it were a bill, it would have to be a dollar bill. If it were a coin, it could still be one of a few things (such as a Susan B. Anthony dollar or a Sacagawea dollar). Then your next question might be whether it is named after a woman, but that wouldn’t tell you anything at all! This suggests that some attributes may be more valuable for making decisions than others. A useful way of thinking about decision trees is as though each node of the tree were a question to be answered. Once you’ve reached the leaves of the tree, you’ve reached the answer! For instance, if you were choosing a computer to buy, your first choice might be “Laptop or desktop”. If you chose a laptop, your second choice might be “Mac or PC”. If you chose a desktop, you might have more important factors than Mac or PC (perhaps you know that you will buy a PC because you think they are faster, and if you want a desktop, you want a fast machine). In essence, with a decision tree, the algorithm is designed to make choices in a similar fashion. Each of the questions corresponds to a node in the decision tree, and each node has branches for each possible answer. Eventually, the algorithm reaches a leaf node that contains the correct classification of the input or the correct decision to make.
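The computer-buying choice above can be written directly as nested conditionals, a hand-built decision tree in which each `if` is a node, each branch an answer, and each `return` a leaf (the product names here are hypothetical, purely for illustration):

```python
# A hand-built decision tree for the computer-buying example:
# root node asks "laptop or desktop?", a second node asks "Mac or PC?".

def choose_computer(form_factor, preference=None):
    if form_factor == "laptop":        # root node
        if preference == "mac":        # second node on the laptop branch
            return "MacBook"           # leaf (hypothetical product)
        return "PC laptop"             # leaf
    else:                              # desktop branch: the text assumes
        return "fast PC desktop"       # a desktop buyer wants a fast PC

print(choose_computer("laptop", "mac"))  # MacBook
print(choose_computer("desktop"))        # fast PC desktop
```

A learned decision tree has exactly this shape; the learning algorithm’s job is to discover which questions to ask, and in what order, from data.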

_

Visually, a decision tree looks something like this:

_

In the case of the decision tree learning algorithm, the questions at each node will correspond to a question about the value of an attribute of the input, and each branch from the node will correspond to one of the possible values of that attribute. The requirement is that we want the shortest decision trees possible (we’d like to find the shortest, simplest way of characterizing the inputs). This introduces a bit of the bias that we know is necessary for learning by favoring simpler solutions over more complex solutions. We make this choice because it works well in general, and has certainly been a useful principle for scientists trying to come up with explanations for data. So our goal is to find the shortest decision tree possible. Unfortunately, this is rather difficult; there’s no known way of generating the shortest possible tree without looking at nearly every possible tree, and the number of possible decision trees is exponential in the number of attributes. Since there are way too many possible trees for an algorithm to simply try every one of them and use the best, we’ll take a less time-consuming approach that usually produces fairly good results. In particular, we’ll learn a tree that splits on attributes that tend to give a lot of information about the data.
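The usual way to measure how much information an attribute gives is entropy and information gain, sketched below in Python on a tiny made-up currency dataset (four rows, invented for illustration): the greedy heuristic scores each attribute and splits on the one with the highest gain instead of searching the exponential space of all trees.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(rows, labels, attribute):
    """Entropy of the labels minus the weighted entropy after splitting."""
    subsets = {}
    for row, label in zip(rows, labels):
        subsets.setdefault(row[attribute], []).append(label)
    after = sum(len(s) / len(labels) * entropy(s) for s in subsets.values())
    return entropy(labels) - after

# Made-up data echoing the currency example in the text.
rows = [{"denomination": "<$1", "type": "coin"},
        {"denomination": "$1",  "type": "bill"},
        {"denomination": "$1",  "type": "coin"},
        {"denomination": "<$1", "type": "coin"}]
labels = ["quarter", "dollar bill", "dollar coin", "quarter"]

# "denomination" separates the labels better than "type" does here.
print(information_gain(rows, labels, "denomination") >
      information_gain(rows, labels, "type"))  # True
```

Splitting greedily on the highest-gain attribute at each node is the core of classic tree learners such as ID3, and it tends to produce short trees even though it does not guarantee the shortest one.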

_

Symbolic and Numeric Computation:

“Computation” can mean different things to different people at different times. Indeed, it was only in the 1950s that “computer” came to mean a machine rather than a human being. The great computers of the 19th century, such as Delaunay, would produce formulae of great length, and only convert to numbers at the last minute. Today, “compute” is largely synonymous with “produce numbers”, and often means “produce numbers in FORTRAN using the NAG library”. However, a dedicated minority use computers to produce formulae. Historically symbolic and numeric computations have pursued different lines of evolution, have been written in different languages and generally seen to be competitive rather than complementary techniques. Even when both were used to solve a problem, ad hoc methods were used to transfer the data between them. Whereas calculators brought more numeric computation into the classroom, computer algebra systems enable the use of more symbolic computation. Generally the two methods have been viewed as alternatives, and indeed the following table shows some of the historic differences:

Of course there are exceptions to these rules. MAPLE and MATHEMATICA are written in C, and MATLAB is an interactive package of numerical algorithms; but in general it has been the case that computer algebra systems were interactive packages run on personal workstations, while numerical computation was done on large machines in a batch-oriented environment. The reason for this apparent dichotomy is clear. Numerical computation tends to be very CPU-intensive, so the more powerful the host computer the better; while symbolic computation is more memory-intensive, and performing it on a shared machine might be considered anti-social behavior by other users. However, numeric and symbolic computations do need each other. It is the combination of numeric and symbolic capabilities, enriched by graphical ones, that gives a program like DERIVE an essential advantage over a calculator (or graphics calculator). Hardware has, of course, come a long way since these lines were drawn. Most researchers now have access to powerful workstations with a reasonable amount of memory and high quality display devices. Graphics have become more important to most users and, even if numerical programs are still run on the departmental supercomputer, they are probably developed and tested on the individual’s desktop machine. Modern hardware is thus perfectly suited for both symbolic and numeric applications. AI involves symbolic computation.
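The distinction can be illustrated in a few lines of plain Python, with no algebra system assumed: differentiating a polynomial symbolically yields a new formula (exact, valid everywhere), while differentiating it numerically yields only an approximate number at one point.

```python
# Symbolic computation: manipulate the *formula*. A polynomial is held
# as a list of coefficients, and its derivative is another such list.
def symbolic_derivative(coeffs):
    """coeffs[i] is the coefficient of x**i; returns the derivative's coeffs."""
    return [i * c for i, c in enumerate(coeffs)][1:]

# Numeric computation: produce a *number*, approximately, at one point.
def numeric_derivative(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)  # central finite difference

# p(x) = 3x^2 + 2x + 1  ->  p'(x) = 6x + 2, exactly.
print(symbolic_derivative([1, 2, 3]))   # [2, 6], i.e. 2 + 6x
print(numeric_derivative(lambda x: 3*x**2 + 2*x + 1, 2.0))  # about 14.0
```

The symbolic answer is a formula good for every x; the numeric answer is one approximate value, which is exactly the trade-off the passage describes.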

_

Cryptography is another booming area of the theory section of computer science, with applications from e-commerce to privacy and data security. This work usually involves higher-level mathematics, including number theory. Even given all of the work in the field, algorithms such as RSA encryption have yet to be proven totally secure. Work in theory even includes some aspects of machine learning, including developing new and better learning algorithms and coming up with bounds on what can be learned and under what conditions.
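To make the number theory behind RSA concrete, here is a toy key generation, encryption, and decryption in Python. The primes and message below are textbook-sized illustrations only; real RSA uses primes of 1024 bits or more, and the modular inverse via `pow(e, -1, phi)` needs Python 3.8+.

```python
# Toy RSA demonstration with tiny primes -- illustrative only, far too
# small to be secure.
p, q = 61, 53                # two (secret) primes
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # Euler's totient: 3120
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent: modular inverse of e (2753)

message = 65                         # a number smaller than n
ciphertext = pow(message, e, n)      # encrypt: m^e mod n
recovered = pow(ciphertext, d, n)    # decrypt: c^d mod n
assert recovered == message
```

Recovering `d` from the public pair `(n, e)` requires factoring `n`, which is easy here but believed intractable for large moduli; that asymmetry is the entire basis of the scheme.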

_

Hardware:

Computer hardware deals with building circuits and chips. Hardware design lies in the realm of engineering, and covers topics such as chip architecture, but also more general electrical engineering-style circuit design. Computer systems generally consist of three main parts: the central processing unit (CPU) that processes data, memory that holds the programs and data to be processed, and I/O (input/output) devices as peripherals that communicate with the outside world. In a modern system we might find a multi-core CPU, DDR4 SDRAM for memory, a solid-state drive for secondary storage, a graphics card and LCD as a display system, a mouse and keyboard for interaction, and a Wi-Fi connection for networking. Computer buses move data between all of these devices.

_

Networking:

Networking covers topics dealing with device interconnection, and is closely related to systems. Network design deals with anything from laying out a home network to figuring out the best way to link together military installations. Networking also covers a variety of practical topics such as resource sharing and creating better protocols for transmitting data in order to guarantee delivery times or reduce network traffic.  Other work in networking includes algorithms for peer-to-peer networks to allow resource detection, scalable searching of data, and load balancing to prevent network nodes from exploiting or damaging the network.  Networking often relies on results from theory for encryption and routing algorithms and from systems for building efficient, low-power network nodes.

_

Graphics:

The field of graphics has become well-known for work in making amazing animated movies, but it also covers topics such as data visualization, which make it easier to understand and analyse complex data. You may be most familiar with the work in computer graphics because of the incredible strides that have been made in creating graphical 3D worlds!

_

Programming Languages: 

A programming language is a formal computer language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs to control the behavior of a machine or to express algorithms. The term programming language usually refers to high-level languages, such as BASIC, C, C++, COBOL, FORTRAN, Ada, and Pascal. Each language has a unique set of keywords (words that it understands) and a special syntax for organizing program instructions. Programming languages are the heart of much work in computer science; most non-theory areas depend on good programming languages to get the job done. Programming language work focuses on several topics. One area of work is optimization: it is often said that it is better to let the compiler figure out how to speed up your program than to hand-code assembly, and these days that is probably true, because compiler optimizations can do amazing things. Proving program correctness is another aspect of programming language study, which has led to a class of “functional” programming languages. Much recent work has focused on optimizing functional languages, which turn out to be easier to analyze mathematically and prove correct, and also sometimes more elegant for expressing complex ideas in a compact way. Other work in programming languages deals with programmer productivity, such as designing new language paradigms or simply better implementations of current programming paradigms (for instance, one could see Java as an example of a cleaner object-oriented implementation than C++), or adding new features, such as garbage collection or the ability to create new functions dynamically, and studying how these improve the programmer’s productivity. Recently, language-based security has become a more active area, raising questions of how to design “safer” languages that make it easier to write secure code.
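Two of the language features mentioned above, creating new functions dynamically and a functional style that avoids mutable state, can be sketched in a few lines of Python (the names here are invented for illustration):

```python
# Creating a new function at run time: make_adder returns a closure
# that remembers the value of n it was created with.
def make_adder(n):
    def adder(x):
        return x + n
    return adder

add5 = make_adder(5)

# Functional style: "sum of squares of even numbers" expressed as a
# pipeline of pure functions rather than an explicit loop with a
# mutable accumulator -- easier to reason about and prove correct.
nums = range(10)
result = sum(map(lambda x: x * x, filter(lambda x: x % 2 == 0, nums)))

print(add5(10))   # 15
print(result)     # 0 + 4 + 16 + 36 + 64 = 120
```

The closure is exactly the kind of dynamically created function that garbage collection makes safe: `adder` outlives the call to `make_adder`, and the runtime reclaims it automatically once no one references it.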

_

Software Engineering:

Software engineering relies on some of the work from the programming languages community, and deals with the design and implementation of software. Often, software engineering will cover topics like defensive programming, in which the code includes apparently extraneous work to ensure that it is used correctly by others. Software engineering is generally a practical discipline, with a focus on designing and working on large-scale projects. As a result, appreciating software engineering practices often requires a fair amount of actual work on software projects. It turns out that as programs grow larger, the difficulty of managing them dramatically increases in sometimes unexpected ways.

_

Systems:

Systems work deals, in a nutshell, with building programs that use a lot of resources and profiling that resource usage. Systems work includes building operating systems, databases, and distributed computing, and can be closely related to networking. For instance, some might say that the structure of the internet falls in the category of systems work. The design, implementation, and profiling of databases is a major part of systems programming, with a focus on building tools that are fast enough to manage large amounts of data while still being stable enough not to lose it. Sometimes work in databases and operating systems intersects in the design of file systems to store data on disk for the operating system. For example, Microsoft has spent years working on a file system based on the relational database model. Systems work is highly practical and focused on implementation and understanding what kinds of usage a system will be able to handle. As such, systems work can involve trade-offs that require tuning for the common usage scenarios rather than creating systems that are extremely efficient in every possible case. Some recent work in systems has focused on solving the problems associated with large-scale computation (distributed computing) and making it easier to harness the power of many relatively slow computers to solve problems that are easy to parallelize.

__

Artificial Intelligence (AI):

Last, but not least, is artificial intelligence, which covers a wide range of topics. AI work includes everything from planning and searching for solutions (for instance, solving problems with many constraints) to machine learning. There are areas of AI that focus on building game playing programs for Chess and Go. Other planning problems are of more practical significance–for instance, designing programs to diagnose and solve problems in spacecraft or medicine.  AI also includes work on neural networks and machine learning, which is designed to solve difficult problems by allowing computers to discover patterns in a large set of input data. Learning can be either supervised, in which case there are training examples that have been classified into different categories (for instance, written numerals classified as being the numbers 1 through 9), or unsupervised, in which case the goal is often to cluster the data into groups that appear to have similar features (suggesting that they all belong to the same category).  AI also includes work in the field of robotics (along with hardware and systems) and multiagent systems, and is focused largely on improving the ability of robotic agents to plan courses of action or strategize about how to interact with other robots or with people. Work in this area has often focused on multiagent negotiation and applying the principles of game theory (for interacting with other robots) or behavioral economics (for interacting with people).  Although AI holds out some hope of creating a truly conscious machine, much of the recent work focuses on solving problems of more obvious importance. Thus, the applications of AI to research, in the form of data mining and pattern recognition, are at present more important than the more philosophical topic of what it means to be conscious. Nevertheless, the ability of computers to learn using complex algorithms provides clues about the tractability of the problems we face.
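The supervised/unsupervised distinction described above can be sketched with a toy one-dimensional example. The data, labels, and function names below are invented for illustration: a nearest-centroid classifier learns from labeled examples, while a two-center k-means loop clusters the same numbers without any labels.

```python
# Supervised learning: labeled examples train a nearest-centroid classifier.
def nearest_centroid_train(examples):
    """examples: list of (value, label). Returns label -> mean value."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {lab: sums[lab] / counts[lab] for lab in sums}

def classify(centroids, value):
    return min(centroids, key=lambda lab: abs(centroids[lab] - value))

# Unsupervised learning: unlabeled points are clustered by simple k-means.
def kmeans_1d(points, c1, c2, iters=10):
    """Cluster unlabeled points around two centers."""
    for _ in range(iters):
        a = [p for p in points if abs(p - c1) <= abs(p - c2)]
        b = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1, c2 = sum(a) / len(a), sum(b) / len(b)
    return c1, c2

# Supervised: heights labeled "short"/"tall" by a teacher.
centroids = nearest_centroid_train([(150, "short"), (155, "short"),
                                    (180, "tall"), (185, "tall")])
print(classify(centroids, 178))                   # tall

# Unsupervised: the same numbers, but with no labels at all --
# the algorithm discovers the two groups on its own.
print(kmeans_1d([150, 155, 180, 185], 140, 190))  # (152.5, 182.5)
```

The two runs use identical data; the only difference is whether category labels are supplied, which is precisely the distinction drawn in the paragraph above.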

_______

_______

Conventional vs. AI computing:

How does a computer work like a human being? This can be understood by considering a few questions: how does a human being store knowledge, how does a human being learn, and how does a human being reason? The art of performing these actions is the aim of AI. The major difference between conventional and AI computing is that in conventional computing the computer is given data and is told how to solve a problem, whereas in AI computing the computer is given knowledge about a domain and some inference capability.

_

Heuristic in artificial intelligence:

A heuristic technique, often called simply a heuristic, is any approach to problem solving, learning, or discovery that employs a practical method not guaranteed to be optimal or perfect, but sufficient for the immediate goals. Where finding an optimal solution is impossible or impractical, heuristic methods can be used to speed up the process of finding a satisfactory solution. Heuristics can be mental shortcuts that ease the cognitive load of making a decision. Examples of this method include using a rule of thumb, an educated guess, an intuitive judgment, stereotyping, profiling, or common sense. AI is the study of heuristics, rather than algorithms. A heuristic is a rule of thumb, which usually works but may not do so in all circumstances (example: getting to university in time for 8.00 AM lectures). An algorithm is a prescription for solving a given problem over a defined range of input conditions (example: solving a polynomial equation, or a set of N linear equations in N variables). It may be more appropriate to seek and accept a sufficient solution to a given problem (heuristic search), rather than an optimal solution (algorithmic search). Heuristic knowledge is the integrated sum of those facts which give us the ability to remember a face not seen for thirty or more years. In short, we can say of intelligence:

  1. It is the ability to think and understand instead of doing things by instinct or automatically.
  2. It is the ability to learn or understand how to deal with new or trying situations.
  3. It is the ability to apply knowledge to manipulate one’s environment or to think abstractly, as measured by objective criteria.
  4. It is the ability to acquire, understand and apply knowledge, or the ability to exercise thought and reason.
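The heuristic-versus-algorithm contrast above can be made concrete with a small route-finding problem: exhaustive search (an algorithm) guarantees the shortest tour but the number of orderings grows factorially, while the nearest-neighbour rule of thumb (a heuristic) is fast, usually good, and not guaranteed optimal. The city coordinates below are invented for illustration.

```python
from itertools import permutations

# Toy problem: visit every city once, minimizing total distance.
cities = {"A": (0, 0), "B": (1, 5), "C": (5, 2), "D": (6, 6)}

def dist(a, b):
    (x1, y1), (x2, y2) = cities[a], cities[b]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def tour_length(order):
    return sum(dist(a, b) for a, b in zip(order, order[1:]))

# Algorithm: exhaustive search over all orderings -- guaranteed optimal
# over the defined range of inputs, but factorial in cost.
best = min(permutations(cities), key=tour_length)

# Heuristic: always hop to the nearest unvisited city -- a rule of
# thumb that is fast but may miss the optimum.
def nearest_neighbour(start="A"):
    order, todo = [start], set(cities) - {start}
    while todo:
        nxt = min(todo, key=lambda c: dist(order[-1], c))
        order.append(nxt)
        todo.remove(nxt)
    return order

print(tour_length(list(best)), tour_length(nearest_neighbour()))
```

With four cities the exhaustive search is trivial; with forty it is hopeless, which is exactly when a heuristic search for a sufficient solution becomes the appropriate choice.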

_

How do AI techniques help computers to be smarter?

This is the question that arises when we think about AI. So let us think once again about how knowledge is handled in conventional and AI computing. The figure below shows how computers become intelligent when inference capability is infused into them.

_

_

The table below compares conventional computing and AI computing:

__

Programming without AI and programming with AI differ in the following ways:

Programming Without AI | Programming With AI
A computer program without AI can answer only the specific questions it is meant to solve. | A computer program with AI can answer the generic questions it is meant to solve.
Modification of the program leads to a change in its structure. | AI programs can absorb new modifications by putting highly independent pieces of information together; hence you can modify even a minute piece of information in the program without affecting its structure.
Modification is not quick or easy and may affect the program adversely. | Modification is quick and easy.

_

Knowledge-based systems proved to be much more successful than earlier, more general problem-solving systems. Since knowledge-based systems depend on large quantities of high-quality knowledge for their success, the ultimate goal is to develop techniques that permit systems to learn new knowledge autonomously and continually improve the quality of the knowledge they possess.

______

______

Human versus machine intelligence:

_

Intelligence:

In the early 20th century, Jean Piaget remarked, “Intelligence is what you use when you don’t know what to do, when neither innateness nor learning has prepared you for the particular situation.” In simpler terms, intelligence can be defined as doing the right thing at the right time in a flexible manner that helps you survive proactively and improve productivity in various facets of life. Intelligence is the ability of a system to calculate, reason, perceive relationships and analogies, learn from experience, store and retrieve information from memory, solve problems, comprehend complex ideas, use natural language fluently, classify, generalize, and adapt to new situations. There are various forms of intelligence. There is the more rational variety that is necessitated for intellectually demanding tasks like playing chess, solving complex problems and making discerning choices about the future. There is also the concept of social intelligence, characterized by courteous social behavior. Then there is emotional intelligence, which is about being empathetic toward the emotions and thoughts of the people with whom you engage. We generally experience each of these contours of intelligence in some combination, but that doesn’t mean they cannot be comprehended independently, perhaps even creatively, within the context of AI. Most human behaviors are essentially instincts or reactions to outward stimuli; we generally don’t think we need to be intelligent to execute them. In reality, though, our brains are wired smartly to perform these tasks. Most of what we do is reflexive and automatic, in that we sometimes don’t even need to be conscious of these processes, but our brain is always in the process of assimilating, analyzing and implementing instructions. It is very difficult to program robots to do things we typically find very easy to do. In the field of education, intelligence is defined as the capability to understand, deal with and adapt to new situations.
When it comes to psychology, it is defined as the capability to apply knowledge to change one’s environment. For example, a physician learning to treat a patient with unfamiliar symptoms, or an artist modifying a painting to change the impression it makes, fits this definition very neatly. Effective adaptation requires perception, learning, memory, logical reasoning and problem solving. This means that intelligence is not a particular mental process; it is rather a summation of these processes directed toward effective adaptation to the environment. So, in the example of the physician, he/she is required to adapt by reading material about the disease, learning the meaning behind the material, memorizing the most important facts and reasoning to understand the new symptoms. So, as a whole, intelligence is not considered a mere ability, but a combination of abilities using several cognitive processes to adapt to the environment. Artificial Intelligence is the field dedicated to developing machines that will be able to mimic humans and perform tasks as they do.

__

__

There is a huge difference between (artificial) intelligence (being intelligent) and intelligent behaviors (behaving intelligently). Intelligent behaviors can be simulated from gathered facts and rules and constructed algorithms, for some limited input data and defined goals. From this point of view even machines (e.g. calculators, computer programs) can behave intelligently. Such machines are not aware that they are performing intelligent computations that have been programmed by intelligent programmers. It is much easier to simulate intelligent behaviors than to reproduce intelligence. On the other hand, (artificial) intelligence can produce rules and algorithms to solve new (usually similar) tasks using gained knowledge and some associative mechanisms that allow it to generalize facts and rules and then memorize or adapt the results of generalization. Today, we can program computers to behave intelligently, but we are still working on making computers intelligent (having their own artificial intelligence). Artificial intelligence, like real intelligence, cannot work without knowledge that is somehow represented, generalizable, and easily available in various contexts and circumstances. Knowledge is indispensable for implementing intelligent behaviors in machines; it automatically steers the associative processes that take place in a brain. Knowledge, as well as intelligence, is automatically formed in brain structures under the influence of incoming data, their combinations, and sequences.

______

Human brain versus computer:

One thing that definitely needs to happen for AI to be a possibility is an increase in the power of computer hardware. If an AI system is going to be as intelligent as the brain, it’ll need to equal the brain’s raw computing capacity. One way to express this capacity is in the total calculations per second (cps) the brain could manage; you could come to this number by figuring out the maximum cps of each structure in the brain and then adding them all together. For comparison, if a “computation” were equivalent to one “floating point operation” – a measure used to rate current supercomputers – then 1 quadrillion “computations” would be equivalent to 1 petaflop. The most powerful systems in the world are ranked by the total number of petaflops they can achieve; a single petaflop is equal to 1 quadrillion calculations per second (cps), or 1,000 trillion operations per second.

_

Digital Computers:

A digital computer system is one in which information has discrete values. The design includes transistors (on/off switches), a central processing unit (CPU), and some kind of operating system (like Windows), and it is based on binary logic (instructions coded as 0s and 1s). Computers are linear designs and have continually grown in terms of size, speed and capacity. In 1971 the first Intel microprocessor (model 4004) had 2,300 transistors, but by 2011 the Intel Pentium microprocessor had 2.3 billion transistors. One of the fastest supercomputers today, built by Fujitsu, has 864 racks containing 88,128 individual CPUs and can operate at a speed of 10.51 petaflops, or about 10.5 quadrillion calculations per second. The Sequoia supercomputer can perform 16.32 quadrillion floating-point operations per second (16.32 petaflops). Supercomputers have led many people to think that we must finally be approaching the capabilities of the human brain in terms of speed and capability, and that we must be on the verge of creating a C-3PO type robot that can think and converse just like a human. But the fact is that we still have a long way to go.

_

Arthur R. Jensen, a leading researcher in human intelligence, suggests “as a heuristic hypothesis” that all normal humans have the same intellectual mechanisms and that differences in intelligence are related to “quantitative biochemical and physiological conditions” resulting in differences in speed, short-term memory, and the ability to form accurate and retrievable long-term memories. Whether or not Jensen is right about human intelligence, the situation in AI today is the reverse. Computer programs have plenty of speed and memory, but their abilities correspond only to the intellectual mechanisms that program designers understand well enough to put into programs. Some abilities that children normally don’t develop until they are teenagers may be in, and some abilities possessed by two-year-olds are still out. The matter is further complicated by the fact that the cognitive sciences still have not succeeded in determining exactly what the human abilities are. Very likely the organization of the intellectual mechanisms for AI can usefully be different from that in people. Whenever people do better than computers on some task, or computers use a lot of computation to do as well as people, this demonstrates that the program designers lack an understanding of the intellectual mechanisms required to do the task efficiently. AI aims to make computer programs that can solve problems and achieve goals in the world as well as humans can. However, many people involved in particular research areas are much less ambitious. A few people think that human-level intelligence can be achieved by writing large numbers of programs of the kind people are now writing and assembling vast knowledge bases of facts in the languages now used for expressing knowledge. However, most AI researchers believe that new fundamental ideas are required, and therefore it cannot be predicted when human-level intelligence will be achieved.
Many researchers have invented non-computer machines, hoping that they would be intelligent in ways that computer programs could not be. However, they usually simulate their invented machines on a computer and come to doubt that the new machine is worth building. Because many billions of dollars have been spent on making computers faster and faster, another kind of machine would have to be very fast indeed to perform better than a program on a computer simulating that machine.

_

The Human Brain:

The human brain is not a digital computer design. It is some kind of analogue neural network that encodes information on a continuum. It does have its own important parts that are involved in the thinking process, such as the pre-frontal cortex, amygdala, thalamus, hippocampus and limbic system, which are interconnected by neurons. However, the way they communicate and work is totally different from a digital computer. Neurons are the real key to how the brain learns, thinks, perceives, stores memory, and a host of other functions. The average brain has at least 100 billion neurons, which are connected via thousands of synapses that transmit signals via electro-chemical connections. It is the synapses that are most comparable to transistors because they turn off or on. The human brain has a huge number of synapses: each of the one hundred billion neurons has on average 7,000 synaptic connections to other neurons. It has been estimated that the brain of a three-year-old child has about 10^15 synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates for an adult vary, ranging from 100 to 500 trillion. An estimate of the brain’s processing power, based on a simple switch model for neuron activity, is around 100 trillion synaptic updates per second (SUPS). Each neuron is a living cell and a computer in its own right. A neuron has the signal processing power of thousands of transistors. Unlike transistors, neurons can modify their synapses and modulate the frequency of their signals. Unlike digital computers with fixed architecture, the brain can constantly re-wire its neurons to learn and adapt. Instead of running programs, neural networks learn by doing and remembering, and this vast network of connected neurons gives the brain excellent pattern recognition. Using visual processing as a starting point, robotics expert Hans Moravec of Carnegie Mellon University estimated that humans can process about 100 trillion instructions per second (100 teraflops).
But Chris Westbury, associate professor at the University of Alberta, estimates the brain may be capable of 20 million billion calculations per second, or around 20 petaflops. Westbury bases this estimate on the number of neurons in an average brain and how quickly they can send signals to one another. IBM researchers have estimated that a single human brain can process 36.8 petaflops of data. Ray Kurzweil calculated it to be around 10 quadrillion cps. Currently the world’s fastest supercomputer, China’s Tianhe-2, has actually beaten that number, clocking in at about 34 quadrillion cps. But Tianhe-2 is also enormous, taking up 720 square meters of space, using 24 megawatts of power (the brain runs on just 20 watts), and costing $390 million to build; it is not especially applicable to wide usage, or even to most commercial or industrial usage, yet. The combined processing power of the 500 most powerful supercomputers has grown to 123.4 petaflops. What’s clear is that computer processing power is at least approaching, if not outpacing, human thought. The brain processes information slowly, since neurons are slow in action (on the order of milliseconds). Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz). Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, whereas existing electronic processing cores can communicate optically at the speed of light.
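The figures quoted above can be sanity-checked with simple arithmetic. All the inputs below are the text's rough estimates, not measurements:

```python
# Back-of-envelope arithmetic behind the figures in the text.
neurons = 100e9              # ~100 billion neurons
synapses_per_neuron = 7000   # average connections per neuron
total_synapses = neurons * synapses_per_neuron
print(f"{total_synapses:.1e} synapses")   # ~7.0e+14 (hundreds of trillions)

# Speed gap between a single neuron and a single CPU core:
neuron_hz = 200              # peak firing rate, ~200 Hz
cpu_hz = 2e9                 # ~2 GHz modern core
print(f"{cpu_hz / neuron_hz:.0e}x")       # ~1e+07: seven orders of magnitude
```

The synapse count lands within the 100 to 500 trillion adult range quoted earlier, and the ratio confirms the "seven orders of magnitude" claim; the brain compensates for slow components with massive parallelism.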

_

Computer intelligence versus Human intelligence: 

Intelligent systems (both natural and artificial) have several key features. Some intelligence features are more developed in a human’s brain; other intelligence features are more developed in modern computers.

Intelligence feature | Who has the advantage | Comments on the comparison
Experimental learning | Human | Currently computers are not capable of general experimentation, though they can carry out some specific (narrow) experiments.
Direct gathering of information | Computer | Modern computers are very strong at gathering information; search engines, and particularly Google, are the best example.
Decision-making ability to achieve goals | Human | Currently computers are not able to make good decisions in a “general” environment.
Hardware processing power | Computer | The processing power of modern computers is tremendous (several billion operations per second).
Hardware memory storage | Computer | The disk storage of modern computers is huge.
Information retrieval speed | Computer | Data retrieval in modern computers is roughly 1,000 times faster than a human’s. Examples of high-speed data retrieval systems: RDBMS, Google.
Information retrieval depth | Unclear | Both humans and computers have limited ability in “deep” information retrieval.
Information breadth | Computer | Practically every internet search engine beats humans in the breadth of stored and available information.
Information retrieval relevancy | Human | Usually the human brain retrieves more relevant information than a computer program, but this human advantage shrinks every year.
Ability to find and establish correlations between concepts | Human | Currently computers are not able to establish correlations between general concepts.
Ability to derive concepts from other concepts | Human | Usually computers are not able to derive concepts from other concepts.
Consistent system of super-goals | Human | Humans have a highly developed system of super-goals (“avoid pain”, “avoid hunger”, sexuality, “desire to talk” …). Super-goal implementation in modern computers is very limited.

_

Limitations of Digital Computers:

We have been so successful with Large Scale Integration (LSI) in continuously shrinking microprocessor circuits and adding more transistors year after year that people have begun to believe that we might actually equal the human brain. But there are problems. The first problem is that in digital computers all calculations must pass through the CPU, which eventually slows down the program; the human brain doesn’t use a CPU and is much more efficient. The second problem is the limit to shrinking circuits. In 1971, when Intel introduced the 4004 microprocessor, it held 2,300 transistors and they were about 10 microns wide. Today a Pentium chip has 1.4 billion transistors, and the transistors are down to 22 nanometers wide (a nanometer is one billionth of a meter). Manufacturers are currently working at 14 nanometers and hope to reach 10 nanometers. The problem is that they are getting close to the size of a few atoms, where they will begin to run into problems of quantum physics such as the “uncertainty principle, where you wouldn’t be able to determine precisely where the electron is and it could leak out of the wire.” This could end size reduction for digital computers. Also, all a transistor in a computer can do is switch current on or off. Transistors have no metabolism, cannot manufacture chemicals and cannot reproduce. The major advantage of the transistor is that it can process information very fast, near the speed of light, which a neuron cannot do.

_

Seth Lloyd considers computers as physical systems. He shows in a nice way that the speed with which a physical device can process information is limited by its energy, and the number of degrees of freedom it possesses limits the amount of information that it can process. So, the physical limits of computation can be calculated as determined by the speed of light, the quantum scale and the gravitational constant. Currently, the basic unit of computer computation — the silicon chip — relies on the same outdated computational architecture that was first proposed nearly 70 years ago. These chips separate processing and memory — the two main functions that chips carry out — into different physical regions, which necessitates constant communications between the regions and lowers efficiency. Although this organization is sufficient for basic number crunching and tackling spreadsheets, it falters when fed torrents of unstructured data, as in vision and language processing.

_

Each human eye has about 120 high-quality megapixels. A really good digital camera has about 16 megapixels. The numbers of megapixels between the eye and the camera are not that dramatically different, but the digital camera has no permanent wire connections between the physical sensors and the optical, computational, and memory functions of the camera. The microprocessor input and output need to be multiplexed to properly channel the flow of the arriving and exiting information. Similarly, the functional heart of a digital computer only time-shares its faculties with the attached devices: memory, camera, speaker, or printer. If such an arrangement existed in the human brain, you could do only one function at a time. You could look, then think, and then stretch out your hand to pick up an object. But you could not speak, see, hear, think, move, and feel at the same time. These problems could be solved by operating numerous microprocessors concurrently, but the hardware would be too difficult to design, too bulky to package, and too expensive to implement. By contrast, parallel processing poses no problem in the human brain. Neurons are tiny, come to life in huge numbers, and form connections spontaneously. Just as important is energy efficiency. Human brains require negligible amounts of energy, and power dissipation does not overheat the brain. A computer as complex as the human brain would need its own power plant with megawatts of power, and a heat sink the size of a city.

___ 

How the human brain is superior to the computer:

The brain has a processing capacity of 10 quadrillion instructions (calculations) per second, according to Ray Kurzweil. Currently the world’s fastest supercomputer, China’s Tianhe-2, has actually beaten that number, clocking in at about 34 quadrillion cps. However, the computational power of the human brain is difficult to ascertain, as the human brain is not easily paralleled to the binary number processing of computers. While the human brain is calculating a math problem, it is subconsciously processing data from millions of nerve cells that handle the visual input of the paper and surrounding area, the aural input from both ears, and the sensory input of millions of cells throughout the body. The brain is also regulating the heartbeat, monitoring oxygen levels, hunger and thirst requirements, breathing patterns and hundreds of other essential factors throughout the body. It is simultaneously comparing data from the eyes and the sensory cells in the arms and hands to keep track of the position of the pen and paper as the calculation is being performed. Human brains can process far more information than the fastest computers. In fact, in the 2000s, the complexity of the entire Internet was compared to that of a single human brain. This is because brains are great at parallel processing and sorting information. Brains are also about 100,000 times more energy-efficient than computers, though that gap will narrow as technology advances.

_

Although the brain-computer metaphor has served cognitive psychology well, research in cognitive neuroscience has revealed many important differences between brains and computers. Appreciating these differences may be crucial to understanding the mechanisms of neural information processing, and ultimately for the creation of artificial intelligence. The most important of these differences are listed below:

Difference 1: Brains are analogue; computers are digital

Difference 2: The brain uses content-addressable memory

Difference 3: The brain is a massively parallel machine; computers are modular and serial

Difference 4: Processing speed is not fixed in the brain; there is no system clock

Difference 5: Short-term memory is not like RAM

Difference 6: No hardware/software distinction can be made with respect to the brain or mind

Difference 7: Synapses are far more complex than electrical logic gates

Difference 8: Unlike computers, processing and memory are performed by the same components in the brain

Difference 9: The brain is a self-organizing system

Difference 10: Brains have bodies

_

  • Humans perceive by patterns, whereas machines perceive by a set of rules and data.
  • Humans store and recall information by patterns; machines do it by searching algorithms. For example, the number 40404040 is easy to remember, store, and recall because its pattern is simple.
  • Humans can figure out the complete object even if some part of it is missing or distorted, whereas machines cannot do this reliably.
  • Human intelligence is analogue, working in the form of continuous signals, whereas artificial intelligence is digital, working mainly in the form of discrete numbers.

__

Processing power needed to simulate a brain:

Whole brain emulation:

A widely discussed approach to achieving general intelligent action is whole brain emulation. A low-level brain model is built by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a simulation model so faithful to the original that it will behave in essentially the same way as the original brain, or, for all practical purposes, indistinguishably. Whole brain emulation is discussed in computational neuroscience and neuroinformatics in the context of brain simulation for medical research purposes, and in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil, in the book The Singularity Is Near, predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.

_

The figure above shows estimates of how much processing power is needed to emulate a human brain at various levels of detail (from Ray Kurzweil, and from Anders Sandberg and Nick Bostrom), along with the fastest supercomputer from the TOP500 list mapped by year. Note the logarithmic scale and the exponential trendline, which assumes that computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at the level of neural simulation, while the Sandberg and Bostrom report is less certain about the level at which consciousness arises.
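As a quick illustration of that trendline, a few lines of Python can project capacity forward under the doubling-every-1.1-years assumption, starting from Tianhe-2’s roughly 34 quadrillion cps (Tianhe-2 topped the TOP500 list in 2013). The projection is only a sketch of the assumption, not a forecast:

```python
# Sketch of the exponential trendline described above: computational
# capacity assumed to double every 1.1 years, starting from the
# ~34 quadrillion calculations per second quoted for Tianhe-2.

DOUBLING_YEARS = 1.1

def capacity(years_elapsed, start_cps):
    """Projected capacity after `years_elapsed` years of doubling."""
    return start_cps * 2 ** (years_elapsed / DOUBLING_YEARS)

start = 34e15  # cps, Tianhe-2's figure from the text
for year in (2013, 2018, 2023):
    print(year, f"{capacity(year - 2013, start):.2e} cps")
```

Ten doublings (11 years) multiply capacity by 1024, which is why the trendline looks like a straight line on the figure’s logarithmic scale.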

______

______

Testing intelligence of machine:

Turing test:

Alan Turing’s research into the foundations of computation had proved that a digital computer can, in theory, simulate the behaviour of any other digital machine, given enough memory and time. (This is the essential insight of the Church–Turing thesis and the universal Turing machine.) Therefore, if any digital machine can “act like it is thinking”, then every sufficiently powerful digital machine can. Turing writes, “all digital computers are in a sense equivalent.” The Turing test, developed by Alan Turing in 1950, is a test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen, so that the result would not depend on the machine’s ability to render words as speech. If the evaluator cannot reliably tell the machine from the human (Turing originally suggested that an average interrogator would have no more than a 70 percent chance of making the correct identification after five minutes of questioning), the machine is said to have passed the test. The test does not check the ability to give correct answers to questions, only how closely the answers resemble those a human would give. Since Turing first introduced his test, it has proven to be both highly influential and widely criticised, and it has become an important concept in the philosophy of artificial intelligence. A drawback of the Turing Test is that it is only as good as the human who is asking the questions and the knowledge of the human answering them.

_

Two of the objections cited by Turing are worth considering further. Lady Lovelace’s Objection, first stated by Ada Lovelace, argues that computers can only do as they are told and consequently cannot perform original (hence, intelligent) actions. This objection has become a reassuring if somewhat dubious part of contemporary technological folklore. Expert systems, especially in the area of diagnostic reasoning, have reached conclusions unanticipated by their designers. Indeed, a number of researchers feel that human creativity can be expressed in a computer program.

_

The other related objection, the Argument from Informality of Behavior, asserts the impossibility of creating a set of rules that will tell an individual exactly what to do under every possible set of circumstances. Certainly, the flexibility that enables a biological intelligence to respond to an almost infinite range of situations in a reasonable if not necessarily optimal fashion is a hallmark of intelligent behavior. While it is true that the control structure used in most traditional computer programs does not demonstrate great flexibility or originality, it is not true that all programs must be written in this fashion. Indeed, much of the work in AI over the past 25 years has been to develop programming languages and models, such as production systems, object-based systems, and network representations, that attempt to overcome this deficiency. Many modern AI programs consist of a collection of modular components, or rules of behavior, that do not execute in a rigid order but rather are invoked as needed in response to the structure of a particular problem instance. Pattern matchers allow general rules to apply over a range of instances. These systems have an extreme flexibility that enables relatively small programs to exhibit a vast range of possible behaviors in response to differing problems and situations. Whether these systems can ultimately be made to exhibit the flexibility shown by a living organism is still the subject of much debate. Nobel laureate Herbert Simon has argued that much of the originality and variability of behavior shown by living creatures is due to the richness of their environment rather than the complexity of their own internal programs. In The Sciences of the Artificial, Simon (1981) describes an ant progressing circuitously along an uneven and cluttered stretch of ground. Although the ant’s path seems quite complex, Simon argues that the ant’s goal is very simple: to return to its colony as quickly as possible.
The twists and turns in its path are caused by the obstacles it encounters on its way. Simon concludes that an ant, viewed as a behaving system, is quite simple. The apparent complexity of its behavior over time is largely a reflection of the complexity of the environment in which it finds itself. This idea, if ultimately proved to apply to organisms of higher intelligence as well as to such simple creatures as insects, constitutes a powerful argument that such systems are relatively simple and, consequently, comprehensible. It is interesting to note that if one applies this idea to humans, it becomes a strong argument for the importance of culture in the forming of intelligence. Rather than growing in the dark like mushrooms, intelligence seems to depend on an interaction with a suitably rich environment. Culture is just as important in creating humans as human beings are in creating culture. Rather than denigrating our intellects, this idea emphasizes the miraculous richness and coherence of the cultures that have formed out of the lives of separate human beings. In fact, the idea that intelligence emerges from the interactions of individual elements of a society is one of the insights supporting the approach to AI technology.

_

ELIZA and PARRY:

In 1966, Joseph Weizenbaum created a program which appeared to pass the Turing test. The program, known as ELIZA, worked by examining a user’s typed comments for keywords. If a keyword is found, a rule that transforms the user’s comments is applied, and the resulting sentence is returned. If a keyword is not found, ELIZA responds either with a generic riposte or by repeating one of the earlier comments. In addition, Weizenbaum developed ELIZA to replicate the behaviour of a Rogerian psychotherapist, allowing ELIZA to be “free to assume the pose of knowing almost nothing of the real world.” With these techniques, Weizenbaum’s program was able to fool some people into believing that they were talking to a real person, with some subjects being “very hard to convince that ELIZA […] is not human.” Thus, ELIZA is claimed by some to be one of the programs (perhaps the first) able to pass the Turing Test, even though this view is highly contentious. Kenneth Colby created PARRY in 1972, a program described as “ELIZA with attitude”. It attempted to model the behaviour of a paranoid schizophrenic, using a similar (if more advanced) approach to that employed by Weizenbaum. To validate the work, PARRY was tested in the early 1970s using a variation of the Turing Test. A group of experienced psychiatrists analysed a combination of real patients and computers running PARRY through teleprinters. Another group of 33 psychiatrists were shown transcripts of the conversations. The two groups were then asked to identify which of the “patients” were human and which were computer programs. The psychiatrists were able to make the correct identification only 48 percent of the time – a figure consistent with random guessing. In the 21st century, versions of these programs (now known as “chatterbots”) continue to fool people. 
“CyberLover”, a malware program, preys on Internet users by convincing them to “reveal information about their identities or to lead them to visit a web site that will deliver malicious content to their computers”. The program has emerged as a “Valentine-risk” flirting with people “seeking relationships online in order to collect their personal data”.
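The keyword-and-rule mechanism described above is simple enough to sketch in a few lines of Python. The rules below are invented for illustration; they are not Weizenbaum’s original script:

```python
import random
import re

# Toy illustration of ELIZA's mechanism: scan the input for a keyword,
# apply a transformation rule if one matches, otherwise fall back to a
# generic riposte. The rules here are made-up examples.
RULES = [
    (r"\bi am (.*)", ["Why do you say you are {0}?",
                      "How long have you been {0}?"]),
    (r"\bi feel (.*)", ["Why do you feel {0}?"]),
    (r"\bmother\b", ["Tell me more about your family."]),
]
GENERIC = ["Please go on.", "I see.", "Can you elaborate on that?"]

def respond(text):
    for pattern, templates in RULES:
        m = re.search(pattern, text.lower())
        if m:
            return random.choice(templates).format(*m.groups())
    return random.choice(GENERIC)  # no keyword found

print(respond("I am sad"))  # e.g. "Why do you say you are sad?"
```

Note that nothing here understands the conversation; the program only rearranges the user’s own words, which is precisely why Searle and others objected to treating such behaviour as thought.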

_

The Chinese room:

John Searle’s 1980 paper Minds, Brains, and Programs proposed the “Chinese room” thought experiment and argued that the Turing test could not be used to determine whether a machine can think. Searle noted that software (such as ELIZA) could pass the Turing Test simply by manipulating symbols it does not understand. Without understanding, it could not be described as “thinking” in the same sense people do. Therefore, Searle concludes, the Turing Test cannot prove that a machine can think. Much like the Turing test itself, Searle’s argument has been both widely criticised and highly endorsed. Arguments such as Searle’s, and the work of others on the philosophy of mind, sparked off a more intense debate about the nature of intelligence, the possibility of intelligent machines and the value of the Turing test that continued through the 1980s and 1990s.

_

Loebner Prize:

The Loebner Prize provides an annual platform for practical Turing Tests, with the first competition held in November 1991. It is underwritten by Hugh Loebner, and the Cambridge Center for Behavioral Studies in Massachusetts, United States, organised the prizes. As Loebner described it, one reason the competition was created was to advance the state of AI research, at least in part because no one had taken steps to implement the Turing Test despite 40 years of discussing it. The Loebner Prize tests conversational intelligence; winners are typically chatterbot programs, or Artificial Conversational Entities (ACEs). During the 2009 competition, held in Brighton, UK, judges were restricted to 10 minutes for each round: 5 minutes to converse with the human and 5 minutes to converse with the program. This was to test the alternative reading of Turing’s prediction, in which the 5-minute interaction was to be with the computer. For the 2010 competition, the sponsor increased the interaction time between interrogator and system to 25 minutes, well above the figure given by Turing.

__

IBM’s Deep Blue and chess:

Alexander Kronrod, a Russian AI researcher, said “Chess is the Drosophila of AI.” He was making an analogy with geneticists’ use of that fruit fly to study inheritance. Playing chess requires certain intellectual mechanisms and not others. In 1945 Turing predicted that computers would one day play “very good chess”, an opinion echoed in 1949 by Claude Shannon of Bell Telephone Laboratories, another early theoretician of computer chess. By 1958 Simon and Newell were predicting that within ten years the world chess champion would be a computer, unless barred by the rules. Just under 40 years later, on May 11, 1997, in midtown Manhattan, IBM’s Deep Blue beat the reigning world champion, Garry Kasparov, in a six-game match. Critics question the worth of research into computer chess. MIT linguist Noam Chomsky has said that a computer program’s beating a grandmaster at chess is about as interesting as a bulldozer’s “winning” an Olympic weight-lifting competition. Deep Blue is indeed a bulldozer of sorts: its 256 parallel processors enable it to examine 200 million possible moves per second and to look ahead as many as fourteen turns of play. The huge improvement in machine chess since Turing’s day owes much more to advances in hardware engineering than to advances in AI. Massive increases in CPU speed and memory have meant that each generation of chess machine has been able to examine increasingly many possible moves. Turing’s expectation was that chess programming would contribute to the study of how human beings think. In fact, little or nothing about human thought processes has been learned from the series of projects that culminated in Deep Blue.
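Under the hood, machines like Deep Blue rest on brute-force game-tree search. A minimal sketch of minimax with alpha-beta pruning, run here on a tiny hand-made tree rather than real chess positions, shows the idea (Deep Blue’s actual search was vastly more elaborate and hardware-accelerated):

```python
# Minimal minimax with alpha-beta pruning. Leaves are static
# evaluations; inner lists are positions with several legal moves.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):   # leaf: a static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                    # prune: opponent avoids this line
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

tree = [[3, 5], [2, 9], [0, 1]]          # depth-2 toy game
print(alphabeta(tree, True))             # prints 3
```

The “bulldozer” quality comes entirely from scale: the same recursion applied to hundreds of millions of positions per second, with the pruning keeping the search tractable.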

_

IBM’s program Watson wins guessing game, Jeopardy:

This was spectacular. Watson had to understand natural language—in this case, English—to the point where it (yes, everyone on the Jeopardy program was referring to it as “he,” but we’ll continue to say “it”) could outguess two of the best human players ever. To play Jeopardy, you must be able to crack riddles, puns, puzzles, and interpret ambiguous statements. Watson is a tremendous achievement.

_

How does one create an intelligent machine?

This problem has proven difficult. Over the past several decades, scientists have taken one of three approaches: In the first, which is knowledge-based, an intelligent machine in a laboratory is directly programmed to perform a given task. In a second, learning-based approach, a computer is “spoon-fed” human-edited sensory data while the machine is controlled by a task-specific learning program. Finally, by a “genetic search,” robots have evolved through generations by the principle of survival of the fittest, mostly in a computer-simulated virtual world. Although notable, none of these is powerful enough to lead to machines having the complex, diverse, and highly integrated capabilities of an adult brain, such as vision, speech, and language. Nevertheless, these traditional approaches have served as the incubator for the birth and growth of a new direction for machine intelligence.
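The “genetic search” approach mentioned above can be sketched in miniature: a population of candidate solutions evolves by selection and mutation, with the fittest surviving each generation. The target-string task and all parameters below are invented purely for illustration:

```python
import random

# Toy genetic search: evolve random strings toward a target by
# keeping the fittest candidates and mutating them.
TARGET = "robot"
LETTERS = "abcdefghijklmnopqrstuvwxyz"

def fitness(s):
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    i = random.randrange(len(s))
    return s[:i] + random.choice(LETTERS) + s[i + 1:]

random.seed(1)
population = ["".join(random.choice(LETTERS) for _ in TARGET)
              for _ in range(50)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    fittest = population[:10]            # survival of the fittest
    population = fittest + [mutate(random.choice(fittest))
                            for _ in range(40)]
print(population[0])
```

Real genetic-search systems evolve robot controllers or program fragments rather than strings, usually in a simulated world, but the select-mutate-repeat loop is the same.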

_

So far, we have approached the problem of building intelligent machines from the viewpoint of mathematics, with the implicit belief that logical reasoning is paradigmatic of intelligence itself, as well as with a commitment to “objective” foundations for logical reasoning. This way of looking at knowledge, language, and thought reflects the rationalist tradition of western philosophy, as it evolved through Plato, Galileo, Descartes, Leibniz, and many other philosophers. It also reflects the underlying assumptions of the Turing test, particularly its emphasis on symbolic reasoning as a test of intelligence, and the belief that a straightforward comparison with human behavior is adequate to confirm machine intelligence. The reliance on logic as a way of representing knowledge, and on logical inference as the primary mechanism for intelligent reasoning, is so dominant in Western philosophy that its “truth” often seems obvious and unassailable. It is no surprise, then, that approaches based on these assumptions have dominated the science of artificial intelligence from its inception through to the present day.

_____

_____

Introduction to AI:

_

Computer scientists have defined artificial intelligence in many different ways, but at its core, AI involves machines that think the way humans think. Of course, it’s very difficult to determine whether or not a machine is “thinking,” so on a practical level, creating artificial intelligence involves creating a computer system that is good at doing the kinds of things humans are good at. The idea of creating machines that are as smart as humans goes all the way back to the ancient Greeks, who had myths about automatons created by the gods. In practical terms, however, the idea didn’t really take off until 1950. In that year, Alan Turing published a groundbreaking paper called “Computing Machinery and Intelligence” that posed the question of whether machines can think. He proposed the famous Turing test, which says, essentially, that a computer can be said to be intelligent if a human judge can’t tell whether he is interacting with a human or a machine. The phrase artificial intelligence was coined in 1956 by John McCarthy, who organized an academic conference at Dartmouth dedicated to the topic. At the end of the conference, the attendees recommended further study of “the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” This proposal foreshadowed many of the topics that are of primary concern in artificial intelligence today, including natural language processing, image recognition and classification, and machine learning. In the years immediately after that first conference, artificial intelligence research flourished. However, within a few decades it became apparent that the technology to create machines that could truly be said to be thinking for themselves was many years off. 
But in the last decade, artificial intelligence has moved from the realms of science fiction to the realm of scientific fact. Stories about IBM’s Watson AI winning the game show Jeopardy and Google’s AI beating human champions at the game of Go have returned artificial intelligence to the forefront of public consciousness. Today, all of the largest technology companies are investing in AI projects, and most of us interact with AI software every day whenever we use smartphones, social media, Web search engines or ecommerce sites. And one of the types of AI that we interact with most often is machine learning.

_

When the term “AI” was coined in 1955, it referred to machines that could perform tasks that required intelligence when performed by humans. It has come to mean machines that simulate human cognitive processes (i.e. they mimic how the human brain processes information). They learn, reason, judge, predict, infer and initiate action.

To really be AI, a system or application should be the following:

  • Aware: Be cognizant of context and human language
  • Analytical: Analyze data and context to learn
  • Adaptive: Use that learning to adapt and improve
  • Anticipatory: Understand likely good next moves
  • Autonomous: Be able to act independently without explicit programming

There are examples of AI today that fit this description, but unlike the human brain, each can only perform a specific application. For example, “digital personal assistants” like Apple’s Siri can understand human language and deliver relevant suggestions on what to buy or what to watch on TV, but they can’t clean your house or drive cars. We are seeing self-driving cars, but a self-driving car will not be able to learn how to play chess or to cook. Essentially, these systems cannot combine even the smallest subsets of actions that constitute being human. All of these AI types do one or two things humans can already do pretty well. But while they can’t do everything, they can save us time and could end up doing these specific tasks far better than any human.

____

Definitions of AI:

According to the father of Artificial Intelligence, John McCarthy, it is “The science and engineering of making intelligent machines, especially intelligent computer programs”. Artificial Intelligence is a way of making a computer, a computer-controlled robot, or a piece of software think intelligently, in a manner similar to how intelligent humans think. AI is accomplished by studying how the human brain thinks, and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems. Artificial Intelligence is the field of study that looks for ways to create computers capable of intelligent behaviour. A machine is deemed intelligent if it can do things normally associated with human intelligence. To pass the Turing Test and qualify as artificially intelligent, a machine should be able to perform, for example: natural language processing (i.e. communicate without trouble in a given language); automated reasoning (using stored information to answer questions and draw new conclusions); and machine learning (the ability to adapt to new circumstances and detect patterns).

_

Dr. Stuart Russell and Dr. Peter Norvig:

Artificial Intelligence: A Modern Approach is a university textbook on artificial intelligence, written by Stuart J. Russell and Peter Norvig. It was first published in 1995 and the third edition of the book was released 11 December 2009. It is used in over 1100 universities worldwide and has been called “the most popular artificial intelligence textbook in the world”. It is considered the standard text in the field of artificial intelligence. The book is intended for an undergraduate audience but can also be used for graduate-level studies with the suggestion of adding some of the primary sources listed in the extensive bibliography.  In “Artificial Intelligence: A Modern Approach”, Stuart Russell and Peter Norvig defined AI as “the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment.” This definition by its nature unites many different splintered fields – speech recognition, machine vision, learning approaches, etc. – and filters them through a machine that is then able to achieve a given goal.

_

Artificial intelligence (AI) is typically defined as “a field of computer science dedicated to the study of computer software making intelligent decisions, reasoning, and problem solving.” However, that definition leans toward what experts consider “strong AI,” which focuses on artificial intelligence systems that are able to perform as flexibly as the human brain. That version of AI is still likely to be at least three decades from becoming a reality. Instead, what is emerging in countless everyday applications today is what is known as “Weak” AI. Weak AI functions within a tightly focused area of ability, performing either one or a few simple tasks more efficiently than humans can perform them.

Examples include:

  • Air-traffic control systems that determine flight plans and choose the optimal landing gates for airplanes.
  • Logistics apps that help companies like UPS route their trucks to save time and fuel.
  • Loan-processing systems that assess the creditworthiness of mortgage applicants.
  • Speech-recognition tools that handle incoming calls and provide automated customer service.
  • Digital personal assistants that search multiple data sources and provide answers in plain English, like Apple’s Siri.

In both cases, AI is based on a series of algorithms (a formula or set of rules) that neural networks use to process information and arrive at an answer.
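The simplest such formula is the one computed by a single artificial neuron: a weighted sum of inputs passed through an activation function. The weights and the loan-scoring interpretation below are made-up illustrative numbers, not a real model:

```python
import math

# One artificial neuron: weighted sum of inputs plus a bias, squashed
# through a sigmoid activation into a score between 0 and 1.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid activation

# Hypothetical loan-scoring neuron: inputs = (income, debt ratio).
score = neuron([0.8, 0.3], weights=[2.0, -3.0], bias=0.1)
print(round(score, 3))
```

A real network stacks thousands or millions of these units in layers and learns the weights from data, but each unit is just this arithmetic.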

_

Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”. As machines become increasingly capable, mental faculties once thought to require intelligence are removed from the definition. For example, optical character recognition is no longer perceived as an example of “artificial intelligence”, having become a routine technology. Capabilities currently classified as AI include successfully understanding human speech, competing at a high level in strategic game systems (such as Chess and Go), self-driving cars, intelligent routing in Content Delivery Networks, and interpreting complex data.
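That agent definition, a device that perceives its environment and acts to maximize success at some goal, can be sketched in a few lines. The thermostat-style environment below is a made-up toy example:

```python
# Minimal "intelligent agent": given a percept, choose the action
# with the highest estimated utility toward the goal.
def agent(percept, actions, utility):
    return max(actions, key=lambda a: utility(percept, a))

# Toy goal: keep the temperature near 21 degrees.
actions = ["heat", "cool", "idle"]
effect = {"heat": +1.0, "cool": -1.0, "idle": 0.0}
utility = lambda temp, a: -abs((temp + effect[a]) - 21.0)

print(agent(18.0, actions, utility))  # far below target -> "heat"
```

Everything from a thermostat to a chess engine fits this percept-to-action template; what varies is how rich the percepts, actions, and utility estimates are.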

_

AI or artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using the rules to reach approximate or definite conclusions), and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision.  Today, it is an umbrella term that encompasses everything from robotic process automation to actual robotics. It has gained prominence recently due, in part, to big data, or the increase in speed, size and variety of data businesses are now collecting. AI can perform tasks such as identifying patterns in the data more efficiently than humans, enabling businesses to gain more insight out of their data.

_

Artificial Intelligence is a branch of science which deals with helping machines find solutions to complex problems in a more human-like fashion. This generally involves borrowing characteristics from human intelligence and applying them as algorithms in a computer-friendly way. The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects. General intelligence is among the field’s long-term goals. Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, logic, and methods based on probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy, neuroscience and artificial psychology.

_

One or more of the following areas can contribute to building an intelligent system.

_

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues. A few of the most long-standing questions that remain unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?  Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems?  Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require “sub-symbolic” processing?  John Haugeland, who coined the term GOFAI (Good Old-Fashioned Artificial Intelligence), also proposed that AI should more properly be referred to as synthetic intelligence, a term which has since been adopted by some non-GOFAI researchers. Stuart Shapiro divides AI research into three approaches, which he calls computational psychology, computational philosophy, and computer science. Computational psychology is used to make computer programs that mimic human behavior. Computational philosophy is used to develop an adaptive, free-flowing computer mind. Implementing computer science serves the goal of creating computers that can perform tasks that only people could previously accomplish. Together, the human-like behavior, mind, and actions make up artificial intelligence.

_

Research associated with artificial intelligence is highly technical and specialized. The core problems of artificial intelligence include programming computers for certain traits such as:

  • Knowledge
  • Reasoning
  • Problem solving
  • Perception
  • Learning
  • Planning
  • Ability to manipulate and move objects

Knowledge engineering is a core part of AI research. Machines can often act and react like humans only if they have abundant information relating to the world. Artificial intelligence must have access to objects, categories, properties and the relations between all of them to implement knowledge engineering. Instilling common sense, reasoning and problem-solving power in machines is a difficult and tedious task. Machine learning is another core part of AI. Learning without any kind of supervision requires an ability to identify patterns in streams of inputs, whereas learning with adequate supervision involves classification and numerical regression. Classification determines the category an object belongs to, while regression deals with sets of numerical input and output examples, discovering functions that generate suitable outputs from the respective inputs. Mathematical analysis of machine learning algorithms and their performance is a well-defined branch of theoretical computer science often referred to as computational learning theory. Machine perception deals with the capability to use sensory inputs to deduce different aspects of the world, while computer vision is the power to analyze visual inputs, with sub-problems such as facial, object and gesture recognition. Robotics is also a major field related to AI. Robots require intelligence to handle tasks such as object manipulation and navigation, along with the sub-problems of localization, motion planning and mapping.
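The distinction between classification and regression drawn above can be illustrated with toy data (all numbers and labels below are invented):

```python
# Classification: assign a new point the label of its nearest example.
examples = [((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"), ((8.0, 9.0), "dog")]

def classify(point):
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    return min(examples, key=lambda ex: dist(ex[0], point))[1]

# Regression: fit y = m*x + c by least squares, discovering a
# function that maps numerical inputs to numerical outputs.
xs, ys = [1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
c = my - m * mx

print(classify((1.1, 1.0)))   # nearest examples are "cat"
print(round(m, 2))            # slope close to 2
```

Classification returns a category; regression returns a number. Most supervised learning, however elaborate, is one of these two tasks.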

____

____

Artificial intelligence is used in:

  • expert systems,
  • decision-making,
  • natural language processing,
  • robotics,
  • machine translation,
  • pattern recognition,
  • clustering and classification,
  • regression,
  • forecasting, predicting, planning, and scheduling,
  • generalizing and decision processes,
  • computational creativity,
  • intelligent agents and multi-agents,
  • intelligent chatbots,
  • game playing.

_____

The main research in the field of intelligent computing is focussed on development of various machine learning methods using:

  • artificial neural networks,
  • kernel methods such as the support vector machines,
  • fuzzy sets and fuzzy computation,
  • genetic algorithms and gene expression algorithms,
  • evolutionary programming and computation,
  • rough sets, rough separability, and rough computing,
  • swarm intelligence algorithms,
  • mathematical optimization,
  • means-ends analysis,
  • statistical methods,
  • k-nearest neighbor algorithm,
  • decision trees,
  • probabilistic methods and chaos theory,
  • Bayes classifiers,
  • uncertain numbers,
  • knowledge representation and knowledge engineering,
  • simulated annealing,
  • hidden Markov models,
  • Kalman filters,
  • beam search etc.
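Several of the listed methods are simple enough to sketch in a few lines. As one illustration, here is a minimal k-nearest neighbor classifier in plain Python; the data points and labels are invented for the example:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.
    `train` is a list of ((x, y), label) pairs."""
    # Sort training points by Euclidean distance to the query point.
    by_distance = sorted(train, key=lambda p: math.dist(p[0], query))
    # Majority vote among the k closest labels.
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Toy data: two clusters with hypothetical labels.
train = [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"),
         ((8, 8), "B"), ((8, 9), "B"), ((9, 8), "B")]
print(knn_classify(train, (2, 2)))   # near the first cluster -> "A"
print(knn_classify(train, (8, 7)))   # near the second cluster -> "B"
```

The same scheme scales to any feature space with a distance function; only the distance and the value of k need to change.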

____

Isn’t AI about simulating human intelligence?

Sometimes but not always or even usually. On the one hand, we can learn something about how to make machines solve problems by observing other people or just by observing our own methods. On the other hand, most work in AI involves studying the problems the world presents to intelligence rather than studying people or animals. AI researchers are free to use methods that are not observed in people or that involve much more computing than people can do.

_

What about IQ? Do computer programs have IQs?

No. IQ is based on the rates at which intelligence develops in children. It is the ratio of the age at which a child normally makes a certain score to the child’s age. The scale is extended to adults in a suitable way. IQ correlates well with various measures of success or failure in life, but making computers that can score high on IQ tests would be weakly correlated with their usefulness. For example, the ability of a child to repeat back a long sequence of digits correlates well with other intellectual abilities, perhaps because it measures how much information the child can compute with at once. However, “digit span” is trivial for even extremely limited computers. However, some of the problems on IQ tests are useful challenges for AI.

_

What should I study before or while learning AI?

Study mathematics, especially mathematical logic. The more you learn about sciences, e.g. physics or biology, the better. For biological approaches to AI, study psychology and physiology of the nervous system. Learn some programming languages–at least C, Lisp and Prolog. It is also a good idea to learn one basic machine language. Jobs are likely to depend on knowing the languages currently in fashion. In the late 1990s, these include C++ and Java.

_

What organizations and publications are concerned with AI?

The American Association for Artificial Intelligence (AAAI), the European Coordinating Committee for Artificial Intelligence (ECCAI) and the Society for Artificial Intelligence and Simulation of Behavior (AISB) are scientific societies concerned with AI research. The Association for Computing Machinery (ACM) has a special interest group on artificial intelligence, SIGART. The International Joint Conference on AI (IJCAI) is the main international conference. The AAAI runs a US National Conference on AI. Electronic Transactions on Artificial Intelligence, Artificial Intelligence, Journal of Artificial Intelligence Research, and IEEE Transactions on Pattern Analysis and Machine Intelligence are four of the main journals publishing AI research papers.

_

Is artificial intelligence real?

It’s real. For more than two decades, your credit card company has employed various kinds of artificial intelligence programs to tell whether the transaction coming in from your card is typical for you, or whether it’s outside your usual pattern. If it falls outside the pattern, a warning flag goes up; the transaction might even be rejected. This isn’t usually an easy, automatic judgment – many factors are weighed as the program decides. In fact, finance might be one of the biggest present-day users of AI. Utility companies employ AI programs to figure out whether small problems have the potential to become big ones, and if so, how to fix the small problem. Many medical devices now employ AI to diagnose and manage the course of therapy. Construction companies use AI to figure out schedules and manage risks. The U.S. armed forces use all sorts of AI programs – to manage battles, to detect real threats amid possible noise, and so on. Though these programs are often better at their narrow tasks than humans could be, they aren’t perfect. Sometimes, like humans, they fail.

______

______

AI overview:

___

AI Continuum:

In similar fashion to AI solutions organized by capability, there exists a continuum of AI with regard to level of autonomy:

  1. Assisted Intelligence – The taking over of monotonous, mundane tasks that machines can do more efficiently. Example: robotic shelf picking from IAM Robotics.
  2. Augmented Intelligence – A step up into a more authentic collaboration of “intelligence”, in which machines and humans learn from each other and in turn refine parallel processes. Example: Editor from The New York Times.
  3. Autonomous Intelligence – A system that can both adapt over time (learn on its own) and take over whole processes within a particular system or entity. Example: NASA’s Mars Curiosity rover.

___

Research areas of AI:

The domain of artificial intelligence is huge in breadth and depth. The broadly common and prospering research areas in the domain of AI include:

____

Real Life Applications of AI Research Areas:

There is a large array of applications where AI is serving common people in their day-to-day lives:

  1. Expert Systems – Examples: flight-tracking systems, clinical systems.

  2. Natural Language Processing – Examples: the Google Now feature, speech recognition, automatic voice output.

  3. Neural Networks – Examples: pattern recognition systems such as face recognition, character recognition, handwriting recognition.

  4. Robotics – Examples: industrial robots for moving, spraying, painting, precision checking, drilling, cleaning, coating, carving, etc.

  5. Fuzzy Logic Systems – Examples: consumer electronics, automobiles, etc.

________

What is AI Technique?

In the real world, knowledge has some unwelcome properties:

  • Its volume is huge, next to unimaginable.
  • It is not well-organized or well-formatted.
  • It keeps changing constantly.

An AI technique is a way to organize and use knowledge efficiently in such a way that:

  • It should be perceivable by the people who provide it.
  • It should be easily modifiable to correct errors.
  • It should be useful in many situations though it is incomplete or inaccurate.

AI techniques also elevate the speed of execution of the complex programs they are applied to.

______

AI system:

An AI system is composed of an agent and its environment. The agents act in their environment, which may contain other agents. An agent is anything that can perceive its environment through sensors and act upon that environment through effectors.

  • A human agent has sensory organs such as eyes, ears, nose, tongue and skin parallel to the sensors, and other organs such as hands, legs, mouth, for effectors.
  • A robotic agent substitutes cameras and infrared range finders for the sensors, and various motors and actuators for the effectors.
  • A software agent has encoded bit strings as its programs and actions.

_

The figure below shows an agent in its environment:

The agent operates in an environment. It perceives the environment through its sensors and acts upon it through actuators or effectors. The agent also has goals: objectives the agent has to satisfy, and the actions the agent takes depend upon the goal it wants to achieve. The complete set of inputs the agent receives at a given time is called its percept. The input can come from the keyboard or from its various sensors. The current percept, or the entire sequence of percepts perceived so far, can influence the actions of the agent. The agent changes the environment through its effectors or actuators; an operation involving an actuator is called an action, and actions can be grouped into action sequences. In artificial intelligence, an agent’s autonomy is extremely important: the agent should be able to decide autonomously which action to take in the current situation.

_

Rationality:

Rationality is the status of being reasonable, sensible, and having good judgment. Rationality is concerned with expected actions and results depending upon what the agent has perceived. Performing actions with the aim of obtaining useful information is an important part of rationality. A rational agent always performs the right action, where the right action means the action that causes the agent to be most successful for the given percept sequence.

_

Intelligent Agent:

In artificial intelligence, an intelligent agent (IA) is an autonomous entity which observes through sensors and acts upon an environment using actuators (i.e. it is an agent) and directs its activity towards achieving goals (i.e. it is “rational”). Intelligent agents may also learn or use knowledge to achieve their goals. They may be very simple or very complex: even a reflex machine such as a thermostat is an intelligent agent. An AI program is called an intelligent agent. The agent perceives the state of the environment through its sensors and affects the environment with its actuators; the internal function of the agent should produce actions based on the sensed values. An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. The simplest intelligent agents are programs that solve specific problems. More complicated agents include human beings and organizations of human beings (such as firms).

The agent paradigm gives researchers license to study isolated problems and find solutions that are both verifiable and useful, without agreeing on one single approach. An agent that solves a specific problem can use any approach that works – some agents are symbolic and logical, some are sub-symbolic neural networks, and others may use new approaches. The paradigm also gives researchers a common language to communicate with other fields – such as decision theory and economics – that also use concepts of abstract agents. The intelligent agent paradigm became widely accepted during the 1990s. Researchers have designed systems to build intelligent systems out of interacting intelligent agents in a multi-agent system. A system with both symbolic and sub-symbolic components is a hybrid intelligent system, and the study of such systems is artificial intelligence systems integration.
A hierarchical control system provides a bridge between sub-symbolic AI at its lowest, reactive levels and traditional symbolic AI at its highest levels, where relaxed time constraints permit planning and world modelling.  Rodney Brooks’ subsumption architecture was an early proposal for such a hierarchical system.

_

Russell & Norvig (2003) group agents into five classes based on their degree of perceived intelligence and capability:

  1. Simple reflex agents: Simple reflex agents act only on the basis of the current percept, ignoring the rest of the percept history. The agent function is based on the condition-action rule: if condition then action.
  2. Model-based reflex agents: A model-based agent can handle a partially observable environment. Its current state is stored inside the agent, which maintains some kind of structure describing the part of the world that cannot be seen.
  3. Goal-based agents: Goal-based agents further expand on the capabilities of model-based agents by using “goal” information. Goal information describes situations that are desirable. This gives the agent a way to choose among multiple possibilities, selecting the one which reaches a goal state.
  4. Utility-based agents: A utility-based agent defines a measure of how desirable a particular state is; this dependence on the degree of desirability of a state is what distinguishes it from the other agent types. The measure is obtained through a utility function which maps a state to a measure of the utility of that state.
  5. Learning agents: Learning has the advantage that it allows agents to operate in initially unknown environments and to become more competent than their initial knowledge alone might allow. The most important distinction is between the “learning element”, which is responsible for making improvements, and the “performance element”, which is responsible for selecting external actions.
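The simplest of these classes can be sketched directly in code. The thermostat below is a hypothetical simple reflex agent: its condition-action rules map the current percept to an action, ignoring all percept history (the temperature thresholds are arbitrary):

```python
def thermostat_agent(percept):
    """Simple reflex agent: maps the current percept directly to an action
    via condition-action rules, with no memory of past percepts."""
    temperature = percept
    if temperature < 18:
        return "heat_on"
    elif temperature > 24:
        return "heat_off"
    else:
        return "do_nothing"

# The agent's behavior depends only on the current percept.
print(thermostat_agent(15))  # -> heat_on
print(thermostat_agent(30))  # -> heat_off
```

A model-based agent would add an internal state variable updated from each percept; a goal-based agent would instead choose the action whose predicted result satisfies the goal.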

Other classes of intelligent agents:

According to other sources, some of the sub-agents not already mentioned may be a part of an Intelligent Agent or a complete Intelligent Agent. They are:

  • Decision agents (geared to decision making);
  • Input agents (that process and make sense of sensor inputs – e.g. neural-network-based agents);
  • Processing agents (that solve a problem such as speech recognition);
  • Spatial agents (that relate to the physical real world);
  • World agents (that incorporate a combination of all the other classes of agents to allow autonomous behaviors);
  • Believable agents (agents exhibiting a personality via the use of an artificial character in which the agent is embedded for the interaction);
  • Physical agents (entities which perceive through sensors and act through actuators);
  • Temporal agents (agents that may use time-based stored information to offer instructions or data to a computer program or human being, and that use program input percepts to adjust their next behaviors).

____

Nature of Environments:

Some programs operate in an entirely artificial environment confined to keyboard input, a database, computer file systems and character output on a screen. In contrast, some software agents (software robots or softbots) exist in rich, unlimited softbot domains. The simulator has a very detailed, complex environment, and the software agent needs to choose from a long array of actions in real time. A softbot designed to scan the online preferences of a customer and show interesting items to that customer works in the real as well as an artificial environment. The most famous artificial environment is the Turing Test environment, in which one real and one artificial agent are tested on equal ground. This is a very challenging environment, as it is highly difficult for a software agent to perform as well as a human.

_

Properties of Environment:

The environment has multi-fold properties:

  • Discrete / Continuous − If there are a limited number of distinct, clearly defined, states of the environment, the environment is discrete (For example, chess); otherwise it is continuous (For example, driving).
  • Observable / Partially Observable − If it is possible to determine the complete state of the environment at each time point from the percepts it is observable; otherwise it is only partially observable.
  • Static / Dynamic − If the environment does not change while an agent is acting, then it is static; otherwise it is dynamic.
  • Single agent / Multiple agents − The environment may contain other agents which may be of the same or different kind as that of the agent.
  • Accessible / Inaccessible − If the agent’s sensory apparatus can have access to the complete state of the environment, then the environment is accessible to that agent.
  • Deterministic / Non-deterministic − If the next state of the environment is completely determined by the current state and the actions of the agent, then the environment is deterministic; otherwise it is non-deterministic.
  • Episodic / Non-episodic − In an episodic environment, each episode consists of the agent perceiving and then acting. The quality of its action depends just on the episode itself. Subsequent episodes do not depend on the actions in the previous episodes. Episodic environments are much simpler because the agent does not need to think ahead.

_

In computer science we usually define a sequence of actions that have to be made to achieve a demanded goal. We can distinguish:

  1. Initial state (the AI agent starts)
  2. Actions(s) → {a1, a2, a3 …an} (the actions the agent can execute, the actions are dependent on the initial state)
  3. Result(s, a) →s’ (we achieve a new state as a result)
  4. Goal Test(s) →T/F (the true/false problem, that tell us if the problem has a solution, and the goal is able to achieve)
  5. Path Cost (s1→s2→…→sm) →c (what is the cost of achieving goal for the given sequence of actions)
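This formulation translates directly into code. Below is a minimal sketch using the classic two-room vacuum world as the problem; the state encoding and the class interface are illustrative assumptions, not a standard API:

```python
class VacuumProblem:
    """Toy search problem: a vacuum agent in two rooms, A and B.
    A state is (location, dirty_rooms)."""
    def initial_state(self):
        return ("A", frozenset({"A", "B"}))      # start in A, both rooms dirty

    def actions(self, state):
        return ["Left", "Right", "Suck"]         # Actions(s)

    def result(self, state, action):             # Result(s, a) -> s'
        loc, dirty = state
        if action == "Suck":
            return (loc, dirty - {loc})
        return ("A" if action == "Left" else "B", dirty)

    def goal_test(self, state):                  # Goal-Test(s) -> T/F
        return not state[1]                      # goal: no dirty rooms left

    def path_cost(self, actions):                # Path-Cost: 1 per action
        return len(actions)

p = VacuumProblem()
s = p.initial_state()
for a in ["Suck", "Right", "Suck"]:              # one candidate action sequence
    s = p.result(s, a)
print(p.goal_test(s))  # -> True
```

Any search algorithm can then be written against this interface without knowing anything about vacuums or rooms.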

_____

Search techniques:

Problem solving by search:

Many Artificial Intelligence problems can be tackled by search, because the solver faces a series of choices. In a game of chess the first move can be any pawn advancing one or two squares (16 separate moves) or either knight to one of two squares (4 separate moves) – a total of 20 possible moves. For the second move this escalates, and the player has still more choices. As the game continues, the number of possible move sequences grows very quickly, leaving the player with many options. These steps can be seen in what is called a search tree. An Artificial Intelligence program examines possibilities in this tree until a goal is found.
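Rather than enumerating every solution, a program explores the search tree systematically. Here is a minimal breadth-first search over a small invented state graph, returning the first path found from the start to the goal:

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Explore states level by level, returning the first path to the goal."""
    frontier = deque([[start]])          # each entry is a path from the start
    explored = set()
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for successor in graph.get(state, []):
            frontier.append(path + [successor])
    return None                          # the goal is unreachable

# A tiny hypothetical state graph: S is the start, G the goal.
graph = {"S": ["A", "B"], "A": ["C"], "B": ["C", "G"], "C": ["G"]}
print(breadth_first_search(graph, "S", "G"))  # -> ['S', 'B', 'G']
```

Swapping the queue for a stack gives depth-first search; ordering the frontier by a heuristic gives best-first search.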

______

Branches of AI:

Here’s a list, but some branches are surely missing, because no-one has identified them yet. Some of these may be regarded as concepts or topics rather than full branches.

  1. Logical AI

What a program knows about the world in general, the facts of the specific situation in which it must act, and its goals are all represented by sentences of some mathematical logical language. The program decides what to do by inferring that certain actions are appropriate for achieving its goals.

  2. Search

AI programs often examine large numbers of possibilities, e.g. moves in a chess game or inferences by a theorem proving program. Discoveries are continually made about how to do this more efficiently in various domains.

  3. Pattern recognition

When a program makes observations of some kind, it is often programmed to compare what it sees with a pattern. For example, a vision program may try to match a pattern of eyes and a nose in a scene in order to find a face. More complex patterns, e.g. in a natural language text, in a chess position, or in the history of some event are also studied. These more complex patterns require quite different methods than do the simple patterns that have been studied the most.

  4. Representation

Facts about the world have to be represented in some way. Usually languages of mathematical logic are used.

  5. Inference

From some facts, others can be inferred. Mathematical logical deduction is adequate for some purposes, but new methods of non-monotonic inference have been added to logic since the 1970s. The simplest kind of non-monotonic reasoning is default reasoning, in which a conclusion is inferred by default but can be withdrawn if there is evidence to the contrary. For example, when we hear of a bird, we may infer that it can fly, but this conclusion can be reversed when we hear that it is a penguin. It is the possibility that a conclusion may have to be withdrawn that constitutes the non-monotonic character of the reasoning. Ordinary logical reasoning is monotonic in that the set of conclusions that can be drawn from a set of premises is a monotonically increasing function of the premises. Circumscription is another form of non-monotonic reasoning.
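The bird/penguin example can be sketched as a tiny default-reasoning program: the conclusion is drawn by default and withdrawn when contrary evidence is added. This is a deliberate simplification for illustration, not a full non-monotonic logic:

```python
def can_fly(facts):
    """Default reasoning: birds fly by default, unless we learn otherwise.
    Adding facts can withdraw a conclusion -- the hallmark of
    non-monotonic reasoning."""
    if "penguin" in facts or "broken_wing" in facts:
        return False              # contrary evidence defeats the default
    return "bird" in facts        # default rule: a bird can fly

print(can_fly({"bird"}))              # -> True  (default conclusion)
print(can_fly({"bird", "penguin"}))   # -> False (conclusion withdrawn)
```

Note that adding a premise shrank the set of conclusions, which can never happen in ordinary monotonic deduction.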

  6. Common sense knowledge and reasoning

This is the area in which AI is farthest from human-level, in spite of the fact that it has been an active research area since the 1950s. While there has been considerable progress, e.g. in developing systems of non-monotonic reasoning and theories of action, yet more new ideas are needed. The Cyc system contains a large but spotty collection of common sense facts.

  7. Learning from experience

Programs do that. The approaches to AI based on connectionism and neural nets specialize in that. There is also learning of laws expressed in logic. Programs can only learn what facts or behaviors their formalisms can represent, and unfortunately learning systems are almost all based on very limited abilities to represent information.

  8. Planning

Planning programs start with general facts about the world (especially facts about the effects of actions), facts about the particular situation and a statement of a goal. From these, they generate a strategy for achieving the goal. In the most common cases, the strategy is just a sequence of actions.

  9. Epistemology

This is a study of the kinds of knowledge that are required for solving problems in the world.

  10. Ontology

Ontology is the study of the kinds of things that exist. In AI, the programs and sentences deal with various kinds of objects, and we study what these kinds are and what their basic properties are. Emphasis on ontology begins in the 1990s.

  11. Heuristics

A heuristic is a way of trying to discover something, or an idea embedded in a program. The term is used variously in AI. Heuristic functions are used in some approaches to search to measure how far a node in a search tree seems to be from a goal. Heuristic predicates compare two nodes in a search tree to see if one is better than the other, i.e. constitutes an advance toward the goal; these may be even more useful.
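A concrete instance: on a grid, the Manhattan distance serves as a heuristic function estimating how far a node seems to be from the goal, and a heuristic predicate can compare two nodes with it (the coordinates below are made up):

```python
def manhattan(node, goal):
    """Heuristic function: estimated cost from `node` to `goal` on a grid."""
    return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

def better(node_a, node_b, goal):
    """Heuristic predicate: does node_a look like more of an advance
    toward the goal than node_b?"""
    return manhattan(node_a, goal) < manhattan(node_b, goal)

goal = (5, 5)
print(manhattan((0, 0), goal))        # -> 10
print(better((4, 5), (0, 0), goal))   # -> True
```

Algorithms such as greedy best-first search and A* order their frontier using exactly this kind of function.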

  12. Genetic programming

Genetic programming is a technique for getting programs to solve a task by mating random Lisp programs and selecting the fittest over millions of generations.

_______

_______

The different levels of artificial intelligence:

_

ANI and AGI:

AI system is classified as either weak AI or strong AI. Weak AI, also known as artificial narrow intelligence (ANI), is an AI system that is designed and trained for a particular task. Virtual personal assistants, such as Apple’s Siri, are a form of weak AI. Strong AI, also known as artificial general intelligence (AGI), is an AI system with generalized human cognitive abilities so that when presented with an unfamiliar task, it has enough intelligence to find a solution. The Turing Test, developed by mathematician Alan Turing in 1950, is a method used to determine if a computer can actually think like a human, although the method is controversial. Somewhere in the middle of strong and weak AI is a third camp (the “in-between”): systems that are informed or inspired by human reasoning. This tends to be where most of the more powerful work is happening today. These systems use human reasoning as a guide, but they are not driven by the goal to perfectly model it.  A good example of this is IBM Watson. Watson builds up evidence for the answers it finds by looking at thousands of pieces of text that give it a level of confidence in its conclusion. It combines the ability to recognize patterns in text with the very different ability to weigh the evidence that matching those patterns provides. Its development was guided by the observation that people are able to come to conclusions without having hard and fast rules and can, instead, build up collections of evidence. Just like people, Watson is able to notice patterns in text that provide a little bit of evidence and then add all that evidence up to get to an answer.

_

Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University categorizes AI into four types, from the kind of AI systems that exist today to sentient systems, which do not yet exist. His categories are as follows:

  • Type 1: Reactive machines. An example is Deep Blue, the IBM chess program that beat Garry Kasparov in the 1990s. Deep Blue can identify pieces on the chess board and make predictions, but it has no memory and cannot use past experiences to inform future ones. It analyzes possible moves — its own and its opponent’s — and chooses the most strategic move. Deep Blue and Google’s AlphaGo were designed for narrow purposes and cannot easily be applied to another situation.
  • Type 2: Limited memory. These AI systems can use past experiences to inform future decisions. Some of the decision-making functions in autonomous vehicles have been designed this way. Observations are used to inform actions happening in the not-so-distant future, such as a car having changed lanes. These observations are not stored permanently.
  • Type 3: Theory of mind. This is a psychology term. It refers to the understanding that others have their own beliefs, desires and intentions that impact the decisions they make. This kind of AI does not yet exist.
  • Type 4: Self-awareness. In this category, AI systems have a sense of self, have consciousness. Machines with self-awareness understand their current state and can use the information to infer what others are feeling. This type of AI does not yet exist.

________

Task Classification of AI:

The domain of AI is classified into Formal tasks, Mundane tasks, and Expert tasks.

_

Mundane (Ordinary) Tasks:

  • Perception: computer vision; speech, voice
  • Natural Language Processing: understanding, language generation, language translation
  • Common Sense
  • Reasoning
  • Planning
  • Robotics: locomotion

Formal Tasks:

  • Mathematics: geometry, logic, integration and differentiation
  • Games: Go, chess (Deep Blue), checkers
  • Verification
  • Theorem Proving

Expert Tasks:

  • Engineering: fault finding, manufacturing, monitoring
  • Scientific Analysis
  • Financial Analysis
  • Medical Diagnosis
  • Creativity

_

Humans learn mundane (ordinary) tasks from birth. They learn by perception, speaking, using language, and locomotion. They learn Formal Tasks and Expert Tasks later, in that order. For humans, the mundane tasks are the easiest to learn, and the same was assumed to hold before trying to implement mundane tasks in machines. Earlier, all AI work was concentrated in the mundane task domain. Later, it turned out that machines require more knowledge, complex knowledge representation, and complicated algorithms to handle mundane tasks. This is why AI work now prospers more in the Expert Tasks domain: the expert task domain needs expert knowledge without common sense, which is easier to represent and handle.

______

______

Approaches:

Historically there were two main approaches to AI:

  • Classical approach (designing the AI, top-down approach), based on symbolic reasoning – a mathematical approach in which ideas and concepts are represented by symbols such as words, phrases or sentences, which are then processed according to the rules of logic.
  • Connectionist approach (letting AI develop, bottom-up approach), based on artificial neural networks, which imitate the way neurons work, and genetic algorithms, which imitate inheritance and fitness to evolve better solutions to a problem with every generation.
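The bottom-up, evolutionary flavor of the second approach can be illustrated with a minimal genetic algorithm that evolves bit strings toward an all-ones target; the fitness function, population size and mutation rate are arbitrary choices for this sketch:

```python
import random

random.seed(0)      # deterministic run for the sketch

TARGET_LEN = 12

def fitness(genome):
    return sum(genome)                     # fitness = number of 1-bits

def mutate(genome, rate=0.05):
    # Flip each bit independently with probability `rate`.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, TARGET_LEN)  # single-point crossover
    return a[:cut] + b[cut:]

# Random initial population of bit strings.
population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(30)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)  # selection: fitter half survive
    parents = population[:15]
    if fitness(parents[0]) == TARGET_LEN:
        break                                   # an all-ones individual evolved
    offspring = [mutate(crossover(random.choice(parents),
                                  random.choice(parents)))
                 for _ in range(15)]
    population = parents + offspring

best = max(population, key=fitness)
print(fitness(best))    # best fitness found (at most TARGET_LEN)
```

No solution is designed in; better solutions emerge generation by generation through inheritance, variation and selection.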

_

Symbolic and subsymbolic AI:

Classical (symbolic) artificial intelligence:

Basic problems of classical artificial intelligence are solved in the framework of so-called symbolic representation. Its essence is that for given elementary problems we have available symbolic processors, which accept symbolic information on their input side and create symbolic information on their output side.

_

Subsymbolic artificial intelligence:

In subsymbolic (connectionist) theory, information is processed in parallel by simple computations realized by neurons. In this approach information is represented by simple sequences of pulses. Subsymbolic models are based on a metaphor of the human brain, in which the cognitive activities of the brain are interpreted through theoretical concepts originating in neuroscience.

_

_

Classicalism is the AI found in chess programs, weather diagnostics, and language processors. Classicalism, also known as the top-down approach, approaches AI from the standpoint that human minds are computational by nature; that is, we manipulate data one piece at a time (serially) through built-in circuitry in our brains. So much of the classical approach to AI consists of things like minimax trees, preprogrammed databases, and prewritten code. Expert systems are another name for classical AI.

Connectionism is the newer form of AI. The problem with classicalism, connectionists say, is that it is too unlike the human mind. The human mind can learn, expand, and change, but many expert systems are too rigid and don’t learn. Connectionism is the AI of neural networks and parallel processing. Connectionism seems a step closer to the human mind, since it uses networks of nodes that resemble the human brain’s network of neurons. Connectionism, however, also has its flaws. It is often inaccurate and slow, and it has so far failed to reach higher-level AI, such as language and some advanced logic, which humans seem to pick up easily in little time. Human intelligence isn’t built from scratch, as connectionist systems often are. So, for those higher-level AI tasks, classicalism is by far the better suited. Connectionism, however, is quite successful at modelling lower-level thinking like motor skills, face recognition, and some vision.

Symbolic reasoning has been successfully used in expert systems and other fields. Neural nets are used in many areas, from computer games to DNA sequencing. But both approaches have severe limitations. A human brain is neither a large inference system nor a huge homogeneous neural net, but rather a collection of specialised modules. 
The best way to mimic the way humans think appears to be specifically programming a computer to perform individual functions (speech recognition, reconstruction of 3D environments, many domain-specific functions) and then combining them together.

Additional approaches:

  • Genetics, evolution
  • Bayesian probability inferencing
  • Combinations – i.e.: “evolved (genetic) neural networks that influence probability distributions of formal expert systems”

____

____

There are two ways of AI computing: Hard computing AI and soft computing AI:

_

First, let me discuss what hard computing and soft computing are.

Hard Computing Vs. Soft Computing:

1) Hard computing, i.e., conventional computing, requires a precisely stated analytical model and often a lot of computation time. Soft computing differs from conventional (hard) computing in that, unlike hard computing, it is tolerant of imprecision, uncertainty, partial truth, and approximation. In effect, the role model for soft computing is the human mind.

2) Hard computing is based on binary logic, crisp systems, numerical analysis and crisp software, while soft computing is based on fuzzy logic, neural nets and probabilistic reasoning.

3) Hard computing has the characteristics of precision and categoricity; soft computing, those of approximation and dispositionality. While in hard computing imprecision and uncertainty are undesirable properties, in soft computing the tolerance for imprecision and uncertainty is exploited to achieve tractability, lower cost, high Machine Intelligence Quotient (MIQ) and economy of communication.

4) Hard computing requires programs to be written; soft computing can evolve its own programs.

5) Hard computing uses two-valued logic; soft computing can use multivalued or fuzzy logic.

6) Hard computing is deterministic; soft computing incorporates stochasticity (randomness).

7) Hard computing requires exact input data; soft computing can deal with ambiguous and noisy data.

8) Hard computing is strictly sequential; soft computing allows parallel computations.

9) Hard computing produces precise answers; soft computing can yield approximate answers.

_


Hard computing AI = conventional AI

Soft computing AI = computational intelligence (CI)

Hard computing AI:

Conventional AI research focuses on attempts to mimic human intelligence through symbol manipulation and symbolically structured knowledge bases. This approach limits the situations to which conventional AI can be applied. Conventional AI mostly involves methods now classified as machine learning, characterized by formalism and statistical analysis. This is also known as symbolic AI, logical AI, neat AI and Good Old Fashioned Artificial Intelligence.

The main methods that are involved with this form of AI are as follows:

  • Expert systems: The process whereby reasoning capabilities are applied in order to reach a conclusion. Such a system is able to process large amounts of known information in order to provide a set of conclusions.
  • Case based reasoning
  • Bayesian networks
  • Behaviour based AI: a modular method of building AI systems by hand.

Hard computing techniques follow binary logic, based on only two values (the Booleans true and false, or 0 and 1), on which modern computers are built. One problem with this logic is that our natural language cannot always be translated easily into absolute terms of 0 and 1. Soft computing techniques, based on fuzzy logic, can be useful here. Much closer to the way the human brain works, aggregating data into partial truths (fuzzy systems), this logic is one of the main exclusive aspects of CI. From the same principles of fuzzy and binary logic follow fuzzy and crisp systems. Crisp logic is a part of artificial intelligence principles and consists of either including an element in a set or not, whereas fuzzy systems (CI) enable elements to be partially in a set. Following this logic, each element can be given a degree of membership (from 0 to 1), and not exclusively one of these two values.
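
As a concrete illustration of the difference, here is a minimal Python sketch of crisp versus fuzzy set membership. The “tall people” set, its 180 cm threshold and the 160–190 cm membership ramp are illustrative assumptions, not standard values:

```python
def crisp_tall(height_cm):
    """Crisp (binary) membership: a person is either tall (1) or not (0)."""
    return 1 if height_cm >= 180 else 0

def fuzzy_tall(height_cm):
    """Fuzzy membership: degree of tallness ramps from 0 at 160 cm to 1 at 190 cm."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0

for h in (155, 175, 195):
    print(h, crisp_tall(h), round(fuzzy_tall(h), 2))
```

A 175 cm person is simply “not tall” to the crisp system, but “tall to degree 0.5” to the fuzzy one, which is much closer to how natural language works.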

_

Soft computing AI:

Computational intelligence:

The expression computational intelligence (CI) usually refers to the ability of a computer to learn a specific task from data or experimental observation. Even though it is commonly considered a synonym of soft computing, there is still no commonly accepted definition of computational intelligence. Computational intelligence is a subset of artificial intelligence. Generally, computational intelligence is a set of nature-inspired computational methodologies and approaches that address complex real-world problems for which mathematical or traditional modelling can be useless for a few reasons: the process might be too complex for mathematical reasoning, it might contain some uncertainties, or it might simply be stochastic in nature. Indeed, many real-life problems cannot be translated into binary language (unique values of 0 and 1) for computers to process them. Computational intelligence therefore provides solutions for such problems. The methods used are close to the human way of reasoning, i.e. they use inexact and incomplete knowledge, and they are able to produce control actions in an adaptive way. CI therefore uses a combination of five main complementary techniques: fuzzy logic, which enables the computer to understand natural language; artificial neural networks, which permit the system to learn from experiential data by operating like a biological one; evolutionary computing, which is based on the process of natural selection; learning theory; and probabilistic methods, which help in dealing with uncertainty and imprecision. Beyond those main principles, currently popular approaches include biologically inspired algorithms such as swarm intelligence and artificial immune systems, which can be seen as part of evolutionary computation. This form of AI involves repetitive development or learning. Learning is guided by experience and is therefore based on empirical data; it is often associated with non-symbolic AI, scruffy AI and soft computing.

The main methods that are involved with this form of AI are as follows:

  • Neural networks: systems with very strong pattern recognition capabilities.
  • Fuzzy systems: Used in modern industrial and consumer product control systems, such systems provide techniques for achieving reasoning under uncertain conditions.
  • Evolutionary computation: This form makes use of biologically inspired concepts such as populations, mutation and survival of the fittest in order to help generate better solutions to the problems.

_

Note:

I want to emphasize that conventional non-AI computing is hard computing having binary logic, crisp systems, numerical analysis and crisp software. AI hard computing differs from non-AI hard computing by having symbolic processing using heuristic search with reasoning ability rather than algorithms.

______

______

Modern domains of AI that are widely used are computational intelligence (CI) or soft computing and distributed artificial intelligence (DAI):

_

Single agent and multi-agent systems:

_

_

Overview of multi-agent system:

_

Distributed Artificial Intelligence:

Distributed Artificial Intelligence (DAI) systems can be defined as cooperative systems where a set of agents act together to solve a given problem. These agents are often heterogeneous (e.g., in a Decision Support System, the interaction takes place between a human and an artificial problem solver). Its metaphor of intelligence is based upon social behaviour (as opposed to the metaphor of individual human behaviour in classical AI) and its emphasis is on actions and interactions, complementing knowledge representation and inference methods in classical AI. This approach is well suited to face and solve large and complex problems, characterized by physically distributed reasoning, knowledge and data managing. Distributed artificial intelligence (DAI) includes intelligent agents (IAs), multi-agent systems (MASs) and ambient intelligence. In DAI, there is no universal definition of “agent”, but Ferber’s definition is quite appropriate for drawing a clear image of an agent: “An agent is a real or virtual entity which is immersed in an environment where it can take some actions, which is able to perceive and represent partially this environment, which is able to communicate with the other agents and which possesses an autonomous behaviour that is a consequence of its observations, its knowledge and its interactions with the other agents”. DAI systems are based on different technologies, e.g., distributed expert systems, planning systems or blackboard systems. What is now new in the DAI community is the need for a methodology to help in the development and maintenance of DAI systems. Part of the solution relies on the use of more abstract formalisms for representing essential DAI properties (in fact, in the software engineering community, the same problem led to the definition of specification languages).

_____

Associative Artificial Intelligence:

Associative Artificial Intelligence is artificial intelligence whose working mechanisms are based on associative processes: associative reasoning, associative knowledge formation and use, generalization, creativity, and associative recall of objects, facts, and rules. It does not use classic data structures but the associative graph data structure (AGDS) and active graphs of associative neurons (as-neurons).

_____

_____

Traditional Symbol System AI vs. Nouvelle AI:

The symbol system hypothesis states that intelligence operates on a system of symbols. The implicit idea is that perception and motor interfaces are sets of symbols on which the central intelligence system operates. Thus, the central system, or reasoning engine, operates in a domain-independent way on the symbols. Their meanings are unimportant to the reasoner, but the coherence of the complete process emerges when an observer of the system knows the groundings of the symbols within his or her own experience. Somewhat more implicitly in the work that the symbol system hypothesis has inspired, the symbols represent entities in the world. They may be individual objects, properties, concepts, desires, emotions, nations, colors, libraries, or molecules, but they are necessarily named entities. It is argued that the symbol system hypothesis upon which classical AI is based is fundamentally flawed, and as such imposes severe limitations on the fitness of its progeny. Further, the dogma of the symbol system hypothesis implicitly includes a number of largely unfounded great leaps of faith when called upon to provide a plausible path to the digital equivalent of human-level intelligence. It is the chasms to be crossed by these leaps which now impede classical AI research. But there is an alternative view, or dogma, variously called nouvelle AI, fundamentalist AI, or in a weaker form, situated activity. It is based on the physical grounding hypothesis. It provides a different methodology for building intelligent systems than that pursued for the last thirty years. The traditional methodology bases its decomposition of intelligence on functional information processing modules whose combinations provide overall system behavior. The new methodology bases its decomposition of intelligence on individual behavior-generating modules, whose coexistence and co-operation let more complex behaviors emerge.
In classical AI, none of the modules themselves generate the behavior of the total system. Indeed, it is necessary to combine many of the modules to get any behavior at all from the system. Improvement in the competence of the system proceeds by improving the individual functional modules. In nouvelle AI, each module itself generates behavior, and improvement in the competence of the system proceeds by adding new modules to the system. Traditional AI relies on the use of heuristics to control search. While much mathematical analysis has been carried out on this topic, the user of a heuristic still relies on an expected distribution of cases within the search tree to get a “reasonable” amount of pruning in order to make the problem manageable. Nouvelle AI relies on the emergence of more global behavior from the interaction of smaller behavioral units. As with heuristics, there is no a priori guarantee that this will always work. However, careful design of the simple behaviors and their interactions can often produce systems with useful and interesting emergent properties. The user is again relying on expectations without hard proofs.

____

AI and cybernetics:

Cybernetics is a transdisciplinary approach for exploring regulatory systems—their structures, constraints, and possibilities. Norbert Wiener defined cybernetics in 1948 as “the scientific study of control and communication in the animal and the machine.” In the 21st century, the term is often used in a rather loose way to imply control of any system using technology. Artificial Intelligence and Cybernetics are widely misunderstood to be the same thing. However, they differ in many dimensions. For example, Artificial Intelligence (AI) grew from a desire to make computers smart, whether smart like humans or just smart in some other way. Cybernetics grew from a desire to understand and build systems that can achieve goals, whether complex human goals or just goals like maintaining the temperature of a room under changing conditions. But behind the differences between each domain (“smart” computers versus “goal-directed” systems) are even deeper underlying conceptual differences. Cybernetics started a bit in advance of AI, but AI has dominated for the last 25 years. Now recent difficulties in AI have led to renewed search for solutions that mirror the past approaches of Cybernetics.

____

Machine Intelligence (MI):

Many Data Scientists believe that Machine Intelligence and Artificial Intelligence are interchangeable terms. The term “Machine Intelligence” has been popular in Europe, while the term “Artificial Intelligence” with its scientific slant has been more popular in the US. MI indicates the involvement of a biological neuron in the research process, with a superior approach to the one usually employed in simple neural networks. Some experts say MI is more cognitive and mimics humans, while AI is simply a subset of MI. MI includes machine learning, deep learning and cognition, among other tools like Robotic Process Automation (RPA) and bots. Some experts say the time is ripe to latch on to the umbrella term MI and stop single-mindedly concentrating on apparently one-dimensional AI. Faster distributed systems, introduced by better chips and networks, sensors and the Internet of Things (IoT)—coupled with those huge swaths of data and “smarter algorithms” that “simulate human thinking”—are other important elements that are going to unleash MI over AI. It’s those advanced algorithms that are probably the most exciting—and the most different.

____

____

Synopsis of Artificial Intelligence:

In spite of the variety of problems addressed in artificial intelligence research, a number of important features emerge that seem common to all divisions of the field; these include:

  1. The use of computers to do reasoning, pattern recognition, learning, or some other form of inference.
  2. A focus on problems that do not respond to algorithmic solutions. This underlies the reliance on heuristic search as an AI problem-solving technique.
  3. A concern with problem solving using inexact, missing, or poorly defined information and the use of representational formalisms that enable the programmer to compensate for these problems.
  4. Reasoning about the significant qualitative features of a situation.
  5. An attempt to deal with issues of semantic meaning as well as syntactic form.
  6. Answers that are neither exact nor optimal, but are in some sense “sufficient”. This is a result of the essential reliance on heuristic problem-solving methods in situations where optimal or exact results are either too expensive or not possible.
  7. The use of large amounts of domain-specific knowledge in solving problems. This is the basis of expert systems.
  8. The use of meta-level knowledge to effect more sophisticated control of problem solving strategies. Although this is a very difficult problem, addressed in relatively few current systems, it is emerging as an essential area of research.

____

____

AI use in daily life:

Artificial Intelligence helps us perform tasks in our day-to-day life, make decisions and complete our daily chores, which makes AI very popular these days. It can be found everywhere in our surroundings. Have you ever played a video game? Most people have. In every modern video game, the characters have artificial intelligence which allows them to follow the main player, and to attack and fight automatically without human interaction. Many of us already deal with limited AI in our daily lives: credit cards, search engines like Google, automated voice instructions from our GPS devices that help us drive to our destinations; we order prescriptions over the phone from semi-intelligent voice machines.

Online customer support:

Many websites have implemented bots which can answer queries themselves without needing a human. They can answer questions, issue license keys and more. This is a great relief to major companies, as it allows them to deal with customers without needing human agents. The customer gets all the benefits of chatting with an agent, while a bot performs all the tasks. Natural Language Processing (NLP) is what makes this possible.

Bots:

A bot is software that is designed to automate the kinds of tasks you would usually do on your own, like making a dinner reservation, adding an appointment to your calendar or fetching and displaying information. The increasingly common form of bots, chatbots, simulate conversation. They often live inside messaging apps — or are at least designed to look that way — and it should feel like you’re chatting back and forth as you would with a human. AI “chatbots” power conversations between humans and machines.  Companies like Google and Facebook are designing chatbots that make decisions for users about commercial activities like shopping and travel arrangements. The bots we’re talking about here are essentially virtual assistants, much like Siri and Cortana. Only the latest generation of bots communicate via text rather than speech. Cortana already does this, both on Windows Phone and in Windows 10.

Intelligent personal assistant:

An intelligent personal assistant (or simply IPA) is a software agent that can perform tasks or services for an individual. These tasks or services are based on user input, location awareness, and the ability to access information from a variety of online sources (such as weather or traffic conditions, news, stock prices, user schedules, retail prices, etc.). Examples of such agents include Apple’s Siri, Google Home, Google Now (and later Google Assistant), Amazon’s Alexa and Evi, Microsoft’s Cortana, the free-software Lucida, Braina (an application developed by Brainasoft for Microsoft Windows), Samsung’s S Voice, the LG G3’s Voice Mate, BlackBerry’s Assistant, SILVIA, HTC’s Hidi, IBM’s Watson, Facebook’s M and One Voice Technologies’ IVAN. Siri in Apple’s iPhones is a great example: you just have to give Siri a voice command, and it can perform tasks like calling a person, setting an alarm, performing a web search, playing music and a lot more.

Speech Recognition:

Neural networks, natural language processing, and machine learning algorithms are used in improving speech recognition, for use in various devices. Google has improved the speech recognition used in its various applications including Google Voice and Google Now, using machine learning techniques.

Email Spam Filtering:

Millions of spam emails are sent every day. It is necessary to have a clean inbox so that you won’t miss or lose an important email in a bin of spam and unnecessary emails. Email filters do great work in keeping your inbox clean. These systems identify spam email by the sender’s address, IP, geo-location, words used, and so on. If they find a match, they either send the email to the spam box or delete it completely.
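
The matching described above can be sketched in a few lines of Python. The blocklist, trigger words and two-word threshold below are hypothetical; real filters combine many more signals, often with statistical (e.g. Bayesian) scoring:

```python
# Toy rule-based spam filter using sender address and words used.
BLOCKED_SENDERS = {"promo@spam.example"}      # hypothetical blocklist
SPAM_WORDS = {"lottery", "winner", "free"}    # hypothetical trigger words

def is_spam(sender, body):
    if sender in BLOCKED_SENDERS:
        return True
    # Normalize the message into a set of lowercase words.
    words = {w.strip(".,!?").lower() for w in body.split()}
    # Flag the mail if two or more trigger words appear.
    return len(words & SPAM_WORDS) >= 2

print(is_spam("promo@spam.example", "hello"))                                  # True
print(is_spam("friend@mail.example", "You are a winner of the free lottery!")) # True
print(is_spam("friend@mail.example", "Lunch tomorrow?"))                       # False
```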

These examples tell us how important artificial intelligence has become in our modern life. But this is just a start; we are going to see a lot of great inventions using AI which will make our lives a lot easier, later on in the article. Artificial intelligence (AI), once the seeming red-headed stepchild of the scientific community, has come a long way in the past two decades. Most of us have reconciled with the fact that we can’t live without our smartphones and Siri, and AI’s seemingly omnipotent nature has infiltrated the nearest and farthest corners of our lives, from robo-advisors on Wall Street and crime-spotting security cameras, to big data analysis by Google’s BigQuery and Watson’s entry into diagnostics in the medical field.

____

Does this mean that machines are smarter than we are?

Machines have been “smarter” than us in many ways for a while. Chess and Jeopardy are the best-known achievements, but many artificially intelligent programs have been at work for more than two decades in finance, in many sciences, such as molecular biology and high-energy physics, and in manufacturing and business processes all over the world. We’ve lately seen a program that has mastered the discovery process in a large, complex legal case, using a small fraction of the time, an even smaller fraction of costs—and it’s more accurate. So if you include arithmetic, machines have been “smarter” than us for more than a century. People no longer feel threatened by machines that can add, subtract, and remember faster and better than we can, but machines that can manipulate and even interpret symbols better than we can give us pause.

_

Do smart computers really think?

Yes, if you believe that making difficult judgments, the kind usually left to experts, choosing among plausible alternatives, and acting on those choices, is thinking. That’s what artificial intelligences do right now. Most people in AI consider what artificial intelligences do is a form of thinking, though these programs don’t think just like human beings do, for the most part.

______

______

While scientists have been working on artificial intelligence for decades, it is only now emerging as an important business tool because of several key developments:

  1. Processing power continues to accelerate. As predicted by Moore’s Law, named for Intel cofounder Gordon Moore, the number of transistors per chip has been roughly doubling every two years for the past four decades. Moore’s Law is a historically reliable rule that the world’s maximum computing power doubles approximately every two years, meaning computer hardware advancement, like general human advancement through history, grows exponentially. To keep improving the performance, companies like Nvidia are supplementing the central processing unit (CPU) cores in their chips with graphics processing unit (GPU) cores. A CPU consists of a few cores that use serial processing to perform one task at a time, while a GPU consists of thousands of cores that use parallel processing to handle several tasks simultaneously. What this means is that neural networks can master “deep learning” applications by analyzing vast amounts of data at the same time. The amount of data, and the amount of data storage, are continuing to expand. According to the research organization SINTEF, “90 percent of all the data in the world has been generated over the past two years.” And as IDC reports, the amount of data that is created and copied each year doubles in size every two years, so by 2020 there will be 44 zettabytes (44 trillion gigabytes) of data.

_

  2. Meanwhile, Kryder’s Law, named for former Seagate chief technology officer Mark Kryder, states that the density of hard disk drives doubles every eighteen months. That means the cost of storage drops by half every eighteen months. This makes it possible to cheaply store all the available data that is needed to “teach” neural networks and to aggregate the vast amounts of new data that is being created daily for them to analyze. The result of this enormous explosion of data and processing power is already being seen in business applications that can analyze patterns of online purchasing behavior to detect credit-card fraud or determine which ad to show to a particular customer.

_

  3. Metcalfe’s Law

And what about the added effect of connecting these tiny computer systems to individual neurons in the brain, or networking them with billions of other tiny processors that can then work in synergy? That introduces Metcalfe’s Law: Ethernet inventor Robert Metcalfe noticed that the value of a network increases with the square of the number of nodes attached to it. Put together, Metcalfe’s Law accelerates the acceleration of Moore’s Law.
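
A small sketch can make the three growth rules concrete. The functions below simply apply the stated doubling periods and the square-law form of Metcalfe’s Law (values are relative, up to a constant factor):

```python
def moore(power, years):
    """Moore's Law: computing power doubles roughly every two years."""
    return power * 2 ** (years / 2)

def kryder(density, years):
    """Kryder's Law: storage density doubles roughly every eighteen months."""
    return density * 2 ** (years / 1.5)

def metcalfe(nodes):
    """Metcalfe's Law: network value grows with the square of its nodes."""
    return nodes ** 2

print(moore(1, 10))      # 32x computing power after a decade
print(kryder(1, 10))     # roughly 100x storage density after a decade
print(metcalfe(1000))    # a 1000-node network is worth ~1,000,000 relative units
```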

_

As new capabilities are perfected, the possibilities will continue to expand. For example, scientists at Google and Stanford University (working separately) have developed systems that combine neural networks, computer vision, and natural language processing. The result in both cases is an AI system that can translate an image into text. For example, the systems can correctly identify what is happening in a photograph and provide a description such as “A person riding a motorcycle on a dirt road.” Photographs and videos are part of the unstructured data that had been challenging for AI systems to interpret until now. Consider that this year, largely because of the ubiquity of smartphones, an estimated 1 trillion photographs will be taken, with billions of those photos posted online. With all of those images available, one application will be the use of AI for security purposes. Recall how the FBI used spectators’ photographs and videos to identify the Boston Marathon bombing suspects. Now multiply the inputs by several trillion and the opportunities for detecting criminal and terrorist activity will expand accordingly. Robotics is another application that will benefit from advances in AI. For example, the Google and Stanford breakthroughs will allow humanoid robots and driverless vehicles to identify objects in their environment, determine how to interact with them, and describe them to humans. Retailers will be able to use the technology (combined with video cameras) to track shoppers in their stores, not just to prevent theft but also to tailor product displays and promotions, such as sending discount coupons to customers’ smartphones.

______

______

Goals and Tools of AI:

The overall research goal of AI is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention. In the course of 50 years of research, AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

______

  1. Common-sense reasoning:

Common-sense reasoning is one of the branches of Artificial intelligence (AI) that is concerned with simulating the human ability to make presumptions about the type and essence of ordinary situations they encounter every day. These assumptions include judgments about the physical properties, purpose, intentions and behavior of people and objects, as well as possible outcomes of their actions and interactions. A device that exhibits common sense reasoning will be capable of predicting results and drawing conclusions that are similar to humans’ folk psychology (humans’ innate ability to reason about people’s behavior and intentions) and naive physics (humans’ natural understanding of the physical world).

______

  2. Reasoning, problem solving:

Early researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles or make logical deductions (reason). By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics. For difficult problems, algorithms can require enormous computational resources—most experience a “combinatorial explosion”: the amount of memory or computer time required becomes astronomical for problems of a certain size. The search for more efficient problem-solving algorithms is a high priority. Human beings ordinarily use fast, intuitive judgments rather than the step-by-step deduction that early AI research was able to model. AI has progressed using “sub-symbolic” problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside the brain that give rise to this skill; statistical approaches to AI mimic the human ability.
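
The “combinatorial explosion” is easy to demonstrate: the number of possible orderings a naive search would have to examine grows factorially with problem size, as in the tours of a travelling-salesman-style problem over n cities:

```python
import math

# Number of possible orderings of n items (n! candidate solutions).
for n in (5, 10, 15, 20):
    print(n, math.factorial(n))
```

By n = 20 there are already more than 2 quintillion orderings, which is why exhaustive search fails and heuristics become essential.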

_____

  3. Search and optimization:

Many problems in AI can be solved in theory by intelligently searching through many possible solutions: Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule. Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis. Robotics algorithms for moving limbs and grasping objects use local searches in configuration space. Many learning algorithms use search algorithms based on optimization. Simple exhaustive searches are rarely sufficient for most real world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use “heuristics” or “rules of thumb” that eliminate choices that are unlikely to lead to the goal (called “pruning the search tree”). Heuristics supply the program with a “best guess” for the path on which the solution lies. Heuristics limit the search for solutions into a smaller sample size. A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization. Evolutionary computation uses a form of optimization search. 
For example, they may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Forms of evolutionary computation include swarm intelligence algorithms (such as ant colony or particle swarm optimization) and evolutionary algorithms (such as genetic algorithms, gene expression programming, and genetic programming).
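
The “blind hill climbing” search described above can be sketched in a few lines. The objective function here is an arbitrary illustrative example with a single peak at x = 3:

```python
import random

def objective(x):
    """Toy landscape to climb: a single peak at x = 3."""
    return -(x - 3) ** 2

def hill_climb(start, step=0.01, iterations=10_000):
    """Refine the guess incrementally until no neighbouring step improves it."""
    x = start
    for _ in range(iterations):
        best = max((x - step, x + step), key=objective)
        if objective(best) <= objective(x):
            break  # no neighbour improves: we are at a (possibly local) peak
        x = best
    return x

result = hill_climb(start=random.uniform(-10, 10))
print(round(result, 2))  # close to 3
```

On a landscape with several peaks, this same procedure can get stuck on a local one, which is exactly why methods like simulated annealing and evolutionary algorithms add randomness or populations of guesses.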

_____

  4. Knowledge representation:

Intelligence evolves from knowledge. It involves familiarity with language, concepts, procedures, results, abstractions and associations, coupled with an ability to use those notions effectively in modelling different aspects of the world. Knowledge can be of a declarative or procedural type. Knowledge representation and reasoning (KR) is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can utilize to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge in order to design formalisms that will make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning, such as the application of rules or the relations of sets and subsets. Important knowledge representation formalisms include AI programming languages such as PROLOG and LISP; data structures such as frames, scripts, and ontologies; and neural networks. Other examples of knowledge representation formalisms include semantic nets, systems architecture, and rules. Examples of automated reasoning engines include inference engines, theorem provers, and classifiers.

_

An Ontology represents knowledge as a set of concepts within a domain and the relationships between those concepts.

_

Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A representation of “what exists” is an ontology: the set of objects, relations, concepts and so on that the machine knows about. The most general are called upper ontologies, which attempt to provide a foundation for all other knowledge.

_

Semantic Net:

A semantic net is a knowledge representation technique. It is a way of showing the relationships between members of a set of objects, i.e. facts. A semantic network, or frame network, is a network that represents semantic relations between concepts, and is often used as a form of knowledge representation. It is a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent semantic relations between concepts. A semantic network is used when one has knowledge that is best understood as a set of concepts that are related to one another. Most semantic networks are cognitively based. They also consist of arcs and nodes which can be organized into a taxonomic hierarchy. Semantic networks contributed the ideas of spreading activation, inheritance, and nodes as proto-objects.

_

An example of a semantic net:

The following facts are represented in the semantic net above:

  • cat is a mammal
  • dog is a mammal
  • dog likes meat
  • dog likes water
  • cat likes cream
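The facts above can be sketched as a tiny semantic net in code. The graph representation, the relation names ("is_a", "likes"), and the inheritance query are illustrative choices for this sketch, not a standard:

```python
# A minimal semantic net for the facts above, sketched as a directed
# graph: nodes are concepts, labelled edges are relations.

semantic_net = {
    ("cat", "is_a"): ["mammal"],
    ("dog", "is_a"): ["mammal"],
    ("dog", "likes"): ["meat", "water"],
    ("cat", "likes"): ["cream"],
}

def related(node, relation):
    """Return all nodes reached from `node` via `relation` edges."""
    return semantic_net.get((node, relation), [])

def is_a(node, category):
    """Follow 'is_a' edges transitively -- the inheritance idea of a
    taxonomic hierarchy mentioned above."""
    frontier = [node]
    while frontier:
        current = frontier.pop()
        if current == category:
            return True
        frontier.extend(related(current, "is_a"))
    return False

print(related("dog", "likes"))   # ['meat', 'water']
print(is_a("cat", "mammal"))     # True
```

The inheritance query is what gives a semantic net its power: "cat is a mammal" need only be stored once, and anything true of mammals is inherited by cats.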

______

  1. Logic:

Logic is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning, and inductive logic programming is a method for learning. Several different forms of logic are used in AI research. Propositional or sentential logic is the logic of statements which can be true or false. First-order logic also allows the use of quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy logic (vide infra) is a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems can be used for uncertain reasoning and have been widely used in modern industrial and consumer product control systems. Subjective logic models uncertainty in a different and more explicit manner than fuzzy logic: a given binomial opinion satisfies belief + disbelief + uncertainty = 1 within a Beta distribution. By this method, ignorance can be distinguished from probabilistic statements that an agent makes with high confidence. Default logics, non-monotonic logics and circumscription are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics; situation calculus, event calculus and fluent calculus (for representing events and time); causal calculus; belief calculus; and modal logics.
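As a minimal sketch of the propositional case, a formula over boolean variables can be tested for satisfiability by enumerating every assignment. This brute-force enumeration is the idea that SAT-based planners such as satplan build on (real SAT solvers are far more sophisticated):

```python
# Brute-force satisfiability check for a propositional formula,
# expressed here as a Python function of boolean variables.

from itertools import product

def satisfiable(formula, num_vars):
    """Return a satisfying assignment, or None if none exists."""
    for assignment in product([False, True], repeat=num_vars):
        if formula(*assignment):
            return assignment
    return None

# (p OR q) AND (NOT p OR q) -- true exactly when q is true
f = lambda p, q: (p or q) and ((not p) or q)
print(satisfiable(f, 2))  # (False, True)
```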

_

Fuzzy Logic Systems:

Fuzzy Logic (FL) is a method of reasoning that resembles human reasoning. Fuzzy logic is the science of reasoning, thinking and inference that recognises and uses a real-world phenomenon – that everything is a matter of degree. Instead of assuming everything is black and white (conventional logic), fuzzy logic recognises that in reality most things fall somewhere in between, that is, in varying shades of grey. It was popularised by Lotfi Zadeh (1965), an engineer from the University of California. It uses continuous set membership from 0 to 1, in contrast to Boolean or conventional logic, which uses sharp distinctions, i.e. 0 for false and 1 for true. Medicine is essentially a continuous domain and most medical data is inherently imprecise. Fuzzy logic is a mathematical logic that attempts to solve problems by assigning values to an imprecise spectrum of data in order to arrive at the most accurate conclusion possible. Fuzzy logic is designed to solve problems in the same way that humans do: by considering all available information and making the best possible decision given the input. It is a data handling methodology that permits ambiguity and hence is particularly suited to medical applications. Fuzzy Logic Systems (FLS) produce acceptable but definite output in response to incomplete, ambiguous, distorted, or inaccurate (fuzzy) input.

_

The inventor of fuzzy logic, Lotfi Zadeh, observed that unlike computers, the human decision making includes a range of possibilities between YES and NO, such as:

CERTAINLY YES

POSSIBLY YES

CANNOT SAY

POSSIBLY NO

CERTAINLY NO
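This range of possibilities can be sketched with continuous set membership. The triangular membership function and the cut-off values mapping a confidence onto the graded answers above are illustrative assumptions, not part of any standard:

```python
# A minimal fuzzy-set sketch: instead of a sharp True/False, each
# value has a degree of membership between 0 and 1.

def triangular(x, a, b, c):
    """Membership rising from a to a peak at b, falling back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def describe(confidence):
    """Map a confidence in [0, 1] onto Zadeh-style graded answers.
    The cut-offs are invented for illustration."""
    labels = [
        (0.9, "CERTAINLY YES"), (0.6, "POSSIBLY YES"),
        (0.4, "CANNOT SAY"), (0.1, "POSSIBLY NO"),
    ]
    for threshold, label in labels:
        if confidence >= threshold:
            return label
    return "CERTAINLY NO"

# 25 degrees is "warm" (peak at 30) only to degree 0.5
print(triangular(25, 20, 30, 40))  # 0.5
print(describe(0.7))               # POSSIBLY YES
```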

_

Fuzzy logic is useful for commercial and practical purposes.

  • It can control machines and consumer products.
  • It may not give accurate reasoning, but acceptable reasoning.
  • Fuzzy logic helps to deal with the uncertainty in engineering.

_

This technique applies to a wide range of domains such as control, image processing and decision making. It is also well established in household appliances such as washing machines and microwave ovens, and we encounter it when using a video camera, where it helps stabilize the image while the camera is held unsteadily. Medical diagnostics, foreign exchange trading and business strategy selection are among this principle’s many other applications. Fuzzy logic is mainly useful for approximate reasoning; it does not have the learning ability that human beings have, which enables them to improve by learning from their previous mistakes.

_

Application Areas of Fuzzy Logic

The key application areas of fuzzy logic are as given:

  1. Automotive Systems:
  • Automatic Gearboxes
  • Four-Wheel Steering
  • Vehicle environment control
  2. Consumer Electronic Goods:
  • Hi-Fi Systems
  • Photocopiers
  • Still and Video Cameras
  • Television
  3. Domestic Goods:
  • Microwave Ovens
  • Refrigerators
  • Toasters
  • Vacuum Cleaners
  • Washing Machines
  4. Environment Control:
  • Air Conditioners/Dryers/Heaters
  • Humidifiers

_

Advantages of FLSs:

  • Mathematical concepts within fuzzy reasoning are very simple.
  • You can modify an FLS just by adding or deleting rules, owing to the flexibility of fuzzy logic.
  • Fuzzy logic systems can take imprecise, distorted, or noisy input information.
  • FLSs are easy to construct and understand.
  • Fuzzy logic is a solution to complex problems in all fields of life, including medicine, as it resembles human reasoning and decision making.

Disadvantages of FLSs:

  • There is no systematic approach to designing a fuzzy system.
  • They are understandable only when simple.
  • They are suitable for the problems which do not need high accuracy.

_____

  1. Probabilistic methods for uncertain reasoning:

Probabilistic methods, first introduced by Paul Erdős and Joel Spencer, aim to evaluate the outcomes of a computational intelligence system whose behaviour is partly defined by randomness. Probabilistic methods therefore bring out the possible solutions to a reasoning problem, based on prior knowledge. Many problems in AI (in reasoning, planning, learning, perception and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics. Bayesian networks are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm), learning (using the expectation-maximization algorithm), planning (using decision networks) and perception (using dynamic Bayesian networks). Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters). A key concept from the science of economics is “utility”: a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis, and information value theory. These tools include models such as Markov decision processes, dynamic decision networks, game theory and mechanism design.
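The core of Bayesian inference can be sketched in a few lines: a prior belief is updated with evidence using Bayes’ rule. The disease and test numbers below are invented purely for illustration:

```python
# A minimal Bayesian-inference sketch: P(H | E) = P(E | H) P(H) / P(E).

def bayes_update(prior, likelihood, false_positive_rate):
    """Posterior probability of the hypothesis given positive evidence."""
    p_evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / p_evidence

# Prior: 1% of patients have the disease; the test detects 90% of
# true cases but also fires on 5% of healthy patients.
posterior = bayes_update(prior=0.01, likelihood=0.9, false_positive_rate=0.05)
print(round(posterior, 3))  # 0.154
```

The counter-intuitively low posterior (about 15% despite a “90% accurate” test) is exactly the kind of uncertain reasoning these methods make explicit.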

____

  1. Classifiers and statistical learning methods:

The simplest AI applications can be divided into two types: classifiers (“if shiny then diamond”) and controllers (“if shiny then pick up”). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience. A classifier can be trained in various ways; there are many statistical and machine learning approaches. The most widely used classifiers are the neural network, kernel methods such as the support vector machine, the k-nearest neighbor algorithm, the Gaussian mixture model, the naive Bayes classifier, and the decision tree. The performance of these classifiers has been compared over a wide range of tasks. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the “no free lunch” theorem. Determining a suitable classifier for a given problem is still more an art than a science.
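As a sketch of one of the classifiers listed above, a k-nearest-neighbour classifier assigns a new observation the majority label among its k closest labelled examples. The toy data set is invented for illustration:

```python
# A minimal k-nearest-neighbour classifier: classify a new point by
# a majority vote among the k closest examples in the data set.

from collections import Counter
import math

def knn_classify(dataset, point, k=3):
    """dataset: list of (features, label) pairs; point: feature tuple."""
    by_distance = sorted(
        dataset,
        key=lambda item: math.dist(item[0], point),  # Euclidean distance
    )
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Invented observations with class labels (the "data set" above)
data = [
    ((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"), ((0.8, 1.1), "cat"),
    ((5.0, 5.0), "dog"), ((5.2, 4.8), "dog"), ((4.9, 5.1), "dog"),
]
print(knn_classify(data, (1.1, 1.0)))  # cat
```

Note there is no training step at all: the “experience” is simply the stored observations, which is why tuning by examples is so direct here.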

____

  1. Planning:

Intelligent agents must be able to set goals and achieve them. They need a way to visualize the future (they must have a representation of the state of the world and be able to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or “value”) of the available choices. In classical planning problems, the agent can assume that it is the only thing acting on the world and it can be certain what the consequences of its actions may be. However, if the agent is not the only actor, it must periodically ascertain whether the world matches its predictions and it must change its plan as this becomes necessary, requiring the agent to reason under uncertainty. Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.  While humans plan effortlessly, creating a computer program that can do the same is a difficult challenge. A seemingly simple task such as breaking a problem into independent subproblems actually requires sophisticated heuristics and extensive knowledge about the planning domain. Determining what subplans should be saved and how they may be generalized for future use is an equally difficult problem. Planning research now extends well beyond the domains of robotics, to include the coordination of any complex set of tasks and goals. Modern planners are applied to agents (Nilsson 1994) as well as to control of particle beam accelerators (Klein et al. 1999, 2000).

_

One method that human beings use in planning is hierarchical problem decomposition. If you are planning a trip from Albuquerque to London, you will generally treat the problems of arranging a flight, getting to the airport, making airline connections, and finding ground transportation in London separately, even though they are all part of a bigger overall plan. Each of these may be further decomposed into smaller subproblems such as finding a map of the city, negotiating the subway system, and finding a decent pub. Not only does this approach effectively restrict the size of the space that must be searched, but also allows saving of frequently used subplans for future use.

_

A hierarchical control system is a form of control system in which a set of devices and governing software is arranged in a hierarchy.

_

Planning is a difficult problem for a number of reasons, not the least of which is the size of the space of possible sequences of moves. Even an extremely simple robot is capable of generating a vast number of potential move sequences. Imagine, for example, a robot that can move forward, backward, right, or left, and consider how many different ways that robot can possibly move around a room. Assume also that there are obstacles in the room and that the robot must select a path that moves around them in some efficient fashion. Writing a program that can intelligently discover the best path under these circumstances, without being overwhelmed by the huge number of possibilities, requires sophisticated techniques for representing spatial knowledge and controlling search through possible environments.
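The room-navigation problem above can be sketched with breadth-first search, which finds a shortest move sequence without blindly enumerating every possible sequence (visited states are never expanded twice). The grid encoding and move names are illustrative assumptions:

```python
# A minimal path-planning sketch: breadth-first search over a grid
# with obstacles, returning a shortest list of moves.

from collections import deque

def plan_path(grid, start, goal):
    """grid: list of strings, '#' marks an obstacle. Returns a move list."""
    rows, cols = len(grid), len(grid[0])
    moves = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path
        for name, (dr, dc) in moves.items():
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in visited):
                visited.add((nr, nc))
                frontier.append(((nr, nc), path + [name]))
    return None  # no path exists

room = [
    "....",
    ".##.",
    "....",
]
print(plan_path(room, (0, 0), (2, 3)))  # a 5-move path around the obstacles
```

Even this tiny example shows why representation matters: the `visited` set is what keeps the search from being overwhelmed by the huge number of possible move sequences.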

_____

  1. Learning:

Machine learning (vide infra) is the study of computer algorithms that improve automatically through experience and has been central to AI research since the field’s inception. Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. In reinforcement learning the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space. These three types of learning can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Within developmental robotics, developmental learning approaches were elaborated for lifelong cumulative acquisition of repertoires of novel skills by a robot, through autonomous self-exploration and social interaction with human teachers, and using guidance mechanisms such as active learning, maturation, motor synergies, and imitation.
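The reward-and-punishment idea above can be sketched with tabular Q-learning, a standard reinforcement learning algorithm, on an invented five-cell corridor where the agent earns +1 for reaching the rightmost cell. All parameters here are illustrative:

```python
# A minimal reinforcement-learning sketch: tabular Q-learning.
# The agent forms a strategy (a policy) purely from rewards.

import random

random.seed(0)
n_states, actions = 5, [-1, +1]        # move left (-1) or right (+1)
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != n_states - 1:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        best_next = max(q[(next_state, a)] for a in actions)
        # Q-learning update: nudge the estimate toward reward + discounted future
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the learned policy moves right in every state.
policy = [max(actions, key=lambda a: q[(s, a)]) for s in range(n_states - 1)]
print(policy)
```

The agent is never told “go right”; the sequence of rewards alone shapes the table of values from which the strategy is read off.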

_____

  1. Perception:

Machine perception is the ability to use input from sensors (such as cameras, microphones, tactile sensors, sonar and others more exotic) to deduce aspects of the world. Computer vision is the ability to analyze visual input. A few selected subproblems are speech recognition, facial recognition and object recognition.

____

  1. Motion and manipulation:

The field of robotics is closely related to AI. Intelligence is required for robots to be able to handle such tasks as object manipulation and navigation, with sub-problems of localization (knowing where you are, or finding out where other things are), mapping (learning what is around you, building a map of the environment), and motion planning (figuring out how to get there) or path planning (going from one point in space to another point, which may involve compliant motion – where the robot moves while maintaining physical contact with an object).

______

  1. Languages and environment of AI:

AI researchers have developed several specialized languages for AI research, including Lisp and Prolog. Some of the most important by-products of artificial intelligence research have been advances in programming languages and software development environments. For a number of reasons, including the size of many AI application programs, the importance of a prototyping methodology, the tendency of search algorithms to generate huge spaces, and the difficulty of predicting the behavior of heuristically driven programs, AI programmers have been forced to develop a powerful set of programming methodologies. Programming environments include knowledge-structuring techniques such as object-oriented programming. High-level languages, such as LISP and PROLOG (Part VI), which support modular development, help manage program size and complexity. Trace packages allow a programmer to reconstruct the execution of a complex algorithm and make it possible to unravel the complexities of heuristic search. Without such tools and techniques, it is doubtful that many significant AI systems could have been built. Many of these techniques are now standard tools for software engineering and have little relationship to the core of AI theory. Others, such as object-oriented programming, are of significant theoretical and practical interest. Finally, many AI algorithms are also now built in more traditional computing languages, such as C++ and Java. The languages developed for artificial intelligence programming are intimately bound to the theoretical structure of the field.

______

  1. Natural language processing (NLP):

One of the long-standing goals of artificial intelligence is the creation of programs that are capable of understanding and generating human language. Not only does the ability to use and understand natural language seem to be a fundamental aspect of human intelligence, but its successful automation would also have an incredible impact on the usability and effectiveness of computers themselves. The goal of natural language processing is to help computers understand human speech in order to do away with computer languages like Java, Ruby, or C altogether. With natural language processing, computers would be able to directly understand human language and speech. Much effort has been put into writing programs that understand natural language. Although these programs have achieved success within restricted contexts, systems that can use natural language with the flexibility and generality that characterize human speech are beyond current methodologies. The aim of natural language processing is to get a human to converse with a computer; that is, the computer will respond either by talking or by writing the answer. The computer must take natural human languages, like English or Spanish, and glean insight that it can process. Natural language processing gives machines the ability to read and understand the languages that humans speak, and it is key to AI. A sufficiently powerful natural language processing system would enable natural language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval, text mining, question answering and machine translation.
A common method of processing and extracting meaning from natural language is through semantic indexing. Increases in processing speeds and the drop in the cost of data storage makes indexing large volumes of abstractions of the user’s input much more efficient.

_

What are the advantages of natural language processing?

The benefits of natural language processing are innumerable. Natural language processing can be leveraged by companies to improve the efficiency of documentation processes, improve the accuracy of documentation, and identify the most pertinent information from large databases. For example, a hospital might use natural language processing to pull a specific diagnosis from a physician’s unstructured notes and assign a billing code.

_

NLP Terminology:

  • Phonology − It is the study of organizing sounds systematically.
  • Morphology − It is the study of the construction of words from primitive meaningful units.
  • Morpheme − It is a primitive unit of meaning in a language.
  • Syntax − It refers to arranging words to make a sentence. It also involves determining the structural role of words in the sentence and in phrases.
  • Semantics − It is concerned with the meaning of words and how to combine words into meaningful phrases and sentences.
  • Pragmatics − It deals with using and understanding sentences in different situations and how the interpretation of the sentence is affected.
  • Discourse − It deals with how the immediately preceding sentence can affect the interpretation of the next sentence.
  • World Knowledge − It includes general knowledge about the world.

_

Steps in NLP:

There are five general steps:

  • Lexical Analysis − It involves identifying and analyzing the structure of words. The lexicon of a language means the collection of words and phrases in that language. Lexical analysis divides the whole chunk of text into paragraphs, sentences, and words.
  • Syntactic Analysis (Parsing) − It involves analysis of the words in the sentence for grammar, and arranging the words in a manner that shows the relationships among them. A sentence such as “The school goes to boy” is rejected by an English syntactic analyzer. Researchers have developed a number of algorithms for syntactic analysis.
  • Semantic Analysis − It draws the exact meaning or the dictionary meaning from the text. The text is checked for meaningfulness. This is done by mapping syntactic structures to objects in the task domain. The semantic analyzer disregards sentences such as “hot ice-cream”.
  • Discourse Integration − The meaning of any sentence depends upon the meaning of the sentence just before it. In addition, it also influences the meaning of the immediately succeeding sentence.
  • Pragmatic Analysis − During this step, what was said is re-interpreted to determine what it actually meant. It involves deriving those aspects of language which require real-world knowledge.
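The first two steps can be sketched in a few lines. The tokenizer and the single hand-written word-order rule below are toy assumptions, nothing like a real grammar-based parser:

```python
# A minimal sketch of lexical analysis (splitting text into sentences
# and word tokens) followed by a toy "syntactic" check.

import re

def lexical_analysis(text):
    """Divide raw text into sentences, and sentences into word tokens."""
    sentences = [s for s in re.split(r"[.!?]\s*", text) if s]
    return [re.findall(r"[a-zA-Z']+", s.lower()) for s in sentences]

def looks_grammatical(tokens):
    """Toy analyzer: reject 'The school goes to boy'-style sentences
    where an inanimate noun (from a tiny hand-written list) precedes
    'goes'. A real parser would use a grammar instead."""
    inanimate = {"school", "stone", "table"}
    for i, word in enumerate(tokens[:-1]):
        if word in inanimate and tokens[i + 1] == "goes":
            return False
    return True

sentences = lexical_analysis("The boy goes to school. The school goes to boy.")
print([looks_grammatical(s) for s in sentences])  # [True, False]
```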

_

A parse tree represents the syntactic structure of a sentence:

_

Speech and Voice Recognition:

These both terms are common in robotics, expert systems and natural language processing. Though these terms are used interchangeably, their objectives are different.

Speech Recognition vs. Voice Recognition:

  • Speech recognition aims at understanding and comprehending WHAT was spoken; voice recognition aims to recognize WHO is speaking.
  • Speech recognition is used in hands-free computing and map or menu navigation; voice recognition is used to identify a person by analysing their tone, voice pitch, accent, etc.
  • A machine does not need training for speech recognition, as it is not speaker dependent; a voice recognition system needs training, as it is person oriented.
  • Speaker-independent speech recognition systems are difficult to develop; speaker-dependent systems are comparatively easy to develop.

_

Working of Speech and Voice Recognition Systems:

The user’s input, spoken into a microphone, goes to the sound card of the system. The converter turns the analog signal into an equivalent digital signal for speech processing. A database is used to compare the sound patterns to recognize the words, and feedback is given back to the database. In machine translation systems, this recognized source-language text becomes input to the translation engine, which converts it to target-language text. Such systems are supported by an interactive GUI, a large vocabulary database, etc.

_____

  1. Expert System:

An expert system is an Artificial Intelligence (AI) application, whose purpose is to use facts and rules, taken from the knowledge of many human experts in a particular field, to help make decisions and solve problems. Expert system is a computer application that performs a task that would otherwise be performed by a human expert. For example, there are expert systems that can diagnose human illnesses, make financial forecasts, and schedule routes for delivery vehicles. Some expert systems are designed to take the place of human experts, while others are designed to aid them. To design an expert system, one needs a knowledge engineer, an individual who studies how human experts make decisions and translates the rules into terms that a computer can understand.

_

In artificial intelligence, an expert system is a computer system that emulates the decision-making ability of a human expert.  Expert systems are designed to solve complex problems by reasoning about knowledge, represented mainly as if–then rules rather than through conventional procedural code. Expert systems were among the first truly successful forms of artificial intelligence (AI) software. An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities.
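The two subsystems described above can be sketched as a forward-chaining inference engine over a small rule base. The medical facts and rules below are invented for illustration, not taken from any real system:

```python
# A minimal expert-system sketch: a knowledge base of if-then rules
# plus facts, and an inference engine that forward-chains -- repeatedly
# applying rules to known facts to deduce new facts.

rules = [
    # (conditions that must all hold, conclusion to add)
    ({"fever", "rash"}, "measles_suspected"),
    ({"measles_suspected", "not_vaccinated"}, "refer_to_specialist"),
]

def forward_chain(facts, rules):
    """Apply rules until no new fact can be deduced."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # the inference engine deduces a new fact
                changed = True
    return facts

derived = forward_chain({"fever", "rash", "not_vaccinated"}, rules)
print("refer_to_specialist" in derived)  # True
```

Note how the second rule only fires because the first rule has already added `measles_suspected`: chaining deductions like this is what distinguishes an inference engine from a flat lookup table.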

_

Components of Expert Systems:

_

One major insight gained from early work in problem solving was the importance of domain-specific knowledge. A doctor, for example, is not effective at diagnosing illness solely because he possesses some innate general problem-solving skill; he is effective because he knows a lot about medicine. Similarly, a geologist is effective at discovering mineral deposits because he is able to apply a good deal of theoretical and empirical knowledge about geology to the problem at hand. Expert knowledge is a combination of a theoretical understanding of the problem and a collection of heuristic problem-solving rules that experience has shown to be effective in the domain. Expert systems are constructed by obtaining this knowledge from a human expert and coding it into a form that a computer may apply to similar problems. This reliance on the knowledge of a human domain expert for the system’s problem solving strategies is a major feature of expert systems. It is interesting to note that most expert systems have been written for relatively specialized, expert level domains. These domains are generally well studied and have clearly defined problem-solving strategies. Problems that depend on a more loosely defined notion of “common sense” are much more difficult to solve by these means. In spite of the promise of expert systems, it would be a mistake to overestimate the ability of this technology.

_

Applications of Expert System:

  • Design Domain − Camera lens design, automobile design.
  • Medical Domain − Diagnosis systems to deduce the cause of disease from observed data; conducting medical operations on humans.
  • Monitoring Systems − Comparing data continuously with an observed system or with prescribed behaviour, such as leakage monitoring in a long petroleum pipeline.
  • Process Control Systems − Controlling a physical process based on monitoring.
  • Knowledge Domain − Finding out faults in vehicles, computers.
  • Finance/Commerce − Detection of possible fraud and suspicious transactions, stock market trading, airline scheduling, cargo scheduling.

_

Advantages of expert system:

  • Many copies can be made of an expert system, or it can be put on a website for everyone to use.
  • Expert systems can contain knowledge from many people.
  • People can be less embarrassed talking to a machine.
  • Computer programs do not get tired, bored or become ill.
  • They don’t have ‘bad days’.

_

Disadvantages of expert system:

  • It is not easy to gather the data and rules for the computer.
  • They are inflexible, have no ‘common sense’ and have no human warmth (empathy).
  • Nobody can be sure they are programmed correctly so they might make dangerous mistakes.
  • Expert systems cannot acquire and adapt to new knowledge without reprogramming.

_

Social and legal issues

  • If human experts are made redundant then we will lose a lot of potential talent.
  • If an expert system fails will there be anybody left with the knowledge and skills to take over?
  • If a medical expert system gives the wrong diagnosis then people could die or become seriously ill.

______

  1. Artificial neural network (ANN):

The inventor of the first neurocomputer, Dr. Robert Hecht-Nielsen, defines a neural network as:

“…a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs.”

_

A neural network is an interconnected group of nodes, akin to the vast network of neurons in the human brain. A neural network is an electronic model of the brain consisting of many interconnected simple processors. This imitates how your actual brain works. Neural networks (also referred to as connectionist systems) are a computational approach, which is based on a large collection of neural units (aka artificial neurons), loosely modelling the way a biological brain solves problems with large clusters of biological neurons connected by axons. Each neural unit is connected with many others, and links can be enforcing or inhibitory in their effect on the activation state of connected neural units. Each individual neural unit may have a summation function which combines the values of all its inputs together. There may be a threshold function or limiting function on each connection and on the unit itself, such that the signal must surpass the limit before propagating to other neurons. These systems are self-learning and trained, rather than explicitly programmed, and excel in areas where the solution or feature detection is difficult to express in a traditional computer program. An ANN can be analog or digital, and the choice between them is application dependent; the goal is to estimate which type of implementation should be used for which class of applications. However, digital neural networks are often preferred, as digital implementations tend to outperform analog ones in various ways.

_

Basic Structure of ANNs:

The idea of ANNs is based on the belief that the working of the human brain can be imitated, by making the right connections, using silicon and wires in place of living neurons and dendrites. The human brain is composed of about 100 billion nerve cells called neurons. Each is connected to thousands of other cells by axons. Stimuli from the external environment, or inputs from sensory organs, are accepted by dendrites. These inputs create electric impulses, which quickly travel through the neural network. A neuron can then send the message on to another neuron, or not send it forward at all.

_

_

ANNs are composed of multiple nodes, which imitate the biological neurons of the human brain. The neurons are connected by links and they interact with each other. The nodes can take input data and perform simple operations on the data. The result of these operations is passed to other neurons. The output at each node is called its activation or node value. Each link is associated with a weight. ANNs are capable of learning, which takes place by altering weight values. The following illustration shows a simple ANN:

An artificial neural network is an interconnected group of nodes, akin to the vast network of neurons in a brain. Here, each circular node represents an artificial neuron and an arrow represents a connection from the output of one neuron to the input of another. The word network in the term ‘artificial neural network’ refers to the interconnections between the neurons in the different layers of each system. An example system has three layers. The first layer has input neurons which send data via synapses to the second layer of neurons, and then via more synapses to the third layer of output neurons. More complex systems have more layers of neurons, some with additional layers of input and output neurons. The synapses store parameters called “weights” that manipulate the data in the calculations.
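The three-layer arrangement described above can be sketched directly: each unit computes the weighted sum of its inputs plus a bias (the “summation function”) and passes it through a squashing function. The weights below are arbitrary illustrative values, not trained:

```python
# A minimal feedforward network sketch: input layer -> hidden layer
# -> output layer, with hand-picked (untrained) weights.

import math

def sigmoid(x):
    """Squash a weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: each neuron's activation is sigmoid(w . x + b)."""
    return [
        sigmoid(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
        for neuron_w, b in zip(weights, biases)
    ]

# 2 inputs -> 2 hidden neurons -> 1 output neuron
hidden_w, hidden_b = [[0.5, -0.4], [0.3, 0.8]], [0.1, -0.2]
output_w, output_b = [[1.2, -0.7]], [0.05]

hidden = layer([1.0, 0.0], hidden_w, hidden_b)
output = layer(hidden, output_w, output_b)
print(output)  # a single activation between 0 and 1
```

Learning, as described above, would consist of adjusting the numbers in `hidden_w`, `hidden_b`, `output_w` and `output_b` until the outputs match desired targets; the structure itself stays fixed.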

_

Neural Networks vs. Conventional Computing:

To better understand artificial neural computing it is important to know first how a conventional ‘serial’ computer and its software process information. A serial computer has a central processor that can address an array of memory locations where data and instructions are stored. Computations are made by the processor reading an instruction, as well as any data the instruction requires, from memory addresses; the instruction is then executed and the results are saved in a specified memory location as required. In a serial system (and a standard parallel one as well) the computational steps are deterministic, sequential and logical, and the state of a given variable can be tracked from one operation to another. In comparison, ANNs are not sequential or necessarily deterministic. There are no complex central processors; rather there are many simple ones which generally do nothing more than take the weighted sum of their inputs from other processors. ANNs do not execute programmed instructions; they respond in parallel (either simulated or actual) to the pattern of inputs presented to them. There are also no separate memory addresses for storing data. Instead, information is contained in the overall activation ‘state’ of the network. ‘Knowledge’ is thus represented by the network itself, which is quite literally more than the sum of its individual components.

_

The goal of the neural network is to solve problems in the same way that the human brain would, although many neural networks are more abstract. Modern neural network projects typically work with a few thousand to a few million neural units and millions of connections, which is still several orders of magnitude less complex than the human brain and closer to the computing power of a worm. New brain research often stimulates new patterns in neural networks. One new approach is using connections which span much further and link processing layers rather than always being localized to adjacent neurons. Other research explores the different types of signal that axons propagate over time; approaches such as Deep Learning model greater complexity than a set of boolean variables that are simply on or off. Neural networks are based on real numbers, with the value of the core and of the axon typically being a representation between 0.0 and 1.0. An interesting facet of these systems is that they are unpredictable in their success with self-learning. After training, some become great problem solvers and others don’t perform as well. In order to train them, several thousand cycles of interaction typically occur. Like other machine learning methods – systems that learn from data – neural networks have been used to solve a wide variety of tasks, like computer vision and speech recognition, that are hard to solve using ordinary rule-based programming. Historically, the use of neural network models marked a directional shift in the late eighties from high-level (symbolic) artificial intelligence, characterized by expert systems with knowledge embodied in if-then rules, to low-level (sub-symbolic) machine learning, characterized by knowledge embodied in the parameters of a cognitive model with some dynamical system.

_

Types of Artificial Neural Networks:

There are two Artificial Neural Network topologies − FeedForward and Feedback.

FeedForward ANN:

The information flow is unidirectional. A unit sends information to another unit from which it does not receive any information. There are no feedback loops. They are used in pattern generation/recognition/classification. They have fixed inputs and outputs.

FeedBack ANN:

Here, feedback loops are allowed. They are used in content addressable memories.

_

Neural networks are a set of algorithms, modelled loosely after the human brain, that are designed to recognize patterns. They interpret sensory data through a kind of machine perception, labelling or clustering raw input. The patterns they recognize are numerical, contained in vectors, into which all real-world data, be it images, sound, text or time series, must be translated. Neural networks help us cluster and classify. You can think of them as a clustering and classification layer on top of data you store and manage. They help to group unlabelled data according to similarities among the example inputs, and they classify data when they have a labelled dataset to train on. (To be more precise, neural networks extract features that are fed to other algorithms for clustering and classification; so you can think of deep neural networks as components of larger machine-learning applications involving algorithms for reinforcement learning, classification and regression.) Deep learning is the name we use for “stacked neural networks”; that is, networks composed of several layers. The layers are made of nodes. A node is just a place where computation happens, loosely patterned on a neuron in the human brain, which fires when it encounters sufficient stimuli. A node combines input from the data with a set of coefficients, or weights, that either amplify or dampen that input, thereby assigning significance to inputs for the task the algorithm is trying to learn. These input-weight products are summed and the sum is passed through a node’s so-called activation function, to determine whether and to what extent that signal progresses further through the network to affect the ultimate outcome, say, an act of classification.
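The weighted-sum-plus-activation behaviour of a single node described above can be sketched in a few lines of Python. This is a minimal illustration; the sigmoid activation and the hand-picked weights are assumptions for the example, not taken from any particular library:

```python
import math

def neuron(inputs, weights, bias):
    """One node: weighted sum of inputs plus bias, passed through
    a sigmoid activation to decide how strongly the signal propagates."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid squashes to (0, 1)

# A node whose weights roughly implement logical AND on 0/1 inputs.
print(round(neuron([1, 1], [10, 10], -15), 3))  # → 0.993 (strong activation)
print(round(neuron([1, 0], [10, 10], -15), 3))  # → 0.007 (weak activation)
```

Large positive weighted sums push the output toward 1 and large negative sums toward 0, which is how the node "assigns significance" to its inputs.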

_

Bayesian Networks (BN)

These are graphical structures used to represent the probabilistic relationships among a set of random variables. Bayesian networks are also called Belief Networks or Bayes Nets. BNs reason about uncertain domains. In these networks, each node represents a random variable with specific propositions. For example, in a medical diagnosis domain, the node Cancer represents the proposition that a patient has cancer. The edges connecting the nodes represent probabilistic dependencies among those random variables. If, of two nodes, one is affecting the other then they must be directly connected in the direction of the effect. The strength of the relationship between variables is quantified by the probability associated with each node. The only constraint on the arcs in a BN is that you cannot return to a node simply by following directed arcs. Hence BNs are called Directed Acyclic Graphs (DAGs). BNs are capable of handling multivalued variables simultaneously.
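A minimal sketch of how such a network supports reasoning, assuming a hypothetical two-node network Smoker → Cancer with made-up probability values: marginalizing over the parent gives the prior probability of the child, and Bayes’ rule reverses the arc for diagnosis:

```python
# Hypothetical two-node network: Smoker -> Cancer, with illustrative CPT values.
p_s = 0.3                              # P(Smoker)
p_c_given = {True: 0.2, False: 0.02}   # P(Cancer | Smoker)

# Marginalize over the parent to get the prior P(Cancer).
p_c = p_s * p_c_given[True] + (1 - p_s) * p_c_given[False]

# Bayes' rule gives the diagnostic direction: P(Smoker | Cancer).
p_s_given_c = p_s * p_c_given[True] / p_c

print(round(p_c, 3))         # → 0.074
print(round(p_s_given_c, 3)) # → 0.811
```

Observing the effect (cancer) raises the probability of the cause (smoking) from 0.3 to about 0.81, which is exactly the kind of inference under uncertainty that BNs are built for.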

_

Applications of Neural Networks:

Concerning its applications, neural networks can be classified into five groups: data analysis and classification, associative memory, clustering, generation of patterns, and control. They can perform tasks that are easy for a human but difficult for a machine:

  • Aerospace − Autopilot aircrafts, aircraft fault detection.
  • Automotive − Automobile guidance systems.
  • Military − Weapon orientation and steering, target tracking, object discrimination, facial recognition, signal/image identification.
  • Electronics − Code sequence prediction, IC chip layout, chip failure analysis, machine vision, voice synthesis.
  • Financial − Real estate appraisal, loan advisor, mortgage screening, corporate bond rating, portfolio trading program, corporate financial analysis, currency value prediction, document readers, credit application evaluators.
  • Industrial − Manufacturing process control, product design and analysis, quality inspection systems, welding quality analysis, paper quality prediction, chemical product design analysis, dynamic modeling of chemical process systems, machine maintenance analysis, project bidding, planning, and management.
  • Medical − Cancer cell analysis, EEG and ECG analysis, prosthetic design, transplant time optimizer.
  • Speech − Speech recognition, speech classification, text to speech conversion.
  • Telecommunications − Image and data compression, automated information services, real-time spoken language translation.
  • Transportation − Truck Brake system diagnosis, vehicle scheduling, routing systems.
  • Software − Pattern Recognition in facial recognition, optical character recognition, etc.
  • Time Series Prediction − ANNs are used to make predictions on stocks and natural calamities.
  • Signal Processing − Neural networks can be trained to process an audio signal and filter it appropriately in the hearing aids.
  • Control − ANNs are often used to make steering decisions of physical vehicles.
  • Anomaly Detection − As ANNs are expert at recognizing patterns, they can also be trained to generate an output when something unusual occurs that misfits the pattern.

_

Pros and cons of ANN:

The advantages of deep neural networks are record-breaking accuracy on a whole range of problems including image and sound recognition, text and time series analysis, etc. They are relatively easy to use and can approximate any function, regardless of its linearity. The main disadvantages are: 1) they can be hard to tune to ensure they learn well, and therefore hard to debug; 2) they do not have explanatory power; i.e. they may extract the best signals to accurately classify and cluster data, but they will not tell you why they reached a certain conclusion; 3) they are computationally intensive to train; i.e. you need a lot of chips and a distributed run-time to train on very large datasets; 4) they are often abused in cases where simpler solutions like linear regression would be best.

_______

  1. Evolutionary computation:

Based on the process of natural selection first described by Charles Darwin, evolutionary computation consists of capitalizing on the mechanisms of natural evolution to develop new artificial evolutionary methodologies. In computer science, evolutionary computation is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimization character. In evolutionary computation, an initial set of candidate solutions is generated and iteratively updated. Each new generation is produced by stochastically removing less desired solutions, and introducing small random changes. In biological terminology, a population of solutions is subjected to natural selection (or artificial selection) and mutation. As a result, the population will gradually evolve to increase in fitness, in this case the chosen fitness function of the algorithm. Evolutionary computation techniques can produce highly optimized solutions in a wide range of problem settings, making them popular in computer science. Many variants and extensions exist, suited to more specific families of problems and data structures. Evolutionary computation is also sometimes used in evolutionary biology as an in silico experimental procedure to study common aspects of general evolutionary processes.

Evolutionary algorithms:

Evolutionary algorithms form a subset of evolutionary computation in that they generally only involve techniques implementing mechanisms inspired by biological evolution such as reproduction, mutation, recombination, natural selection and survival of the fittest. Candidate solutions to the optimization problem play the role of individuals in a population, and the cost function determines the environment within which the solutions “live”. Evolution of the population then takes place after the repeated application of the above operators. In this process, there are two main forces that form the basis of evolutionary systems: Recombination and mutation create the necessary diversity and thereby facilitate novelty, while selection acts as a force increasing quality. Many aspects of such an evolutionary process are stochastic (i.e. involving random variables). Changed pieces of information due to recombination and mutation are randomly chosen. On the other hand, selection operators can be either deterministic, or stochastic. In the latter case, individuals with a higher fitness have a higher chance to be selected than individuals with a lower fitness, but typically even the weak individuals have a chance to become a parent or to survive.

_

So how can we simulate evolution to build artificial general intelligence (AGI)?

The method, called “genetic algorithms,” would work something like this: there would be a performance-and-evaluation process that would happen again and again (the same way biological creatures “perform” by living life and are “evaluated” by whether they manage to reproduce or not). The most widely used form of evolutionary computation for medical applications is ‘Genetic Algorithms’. Proposed by John Holland (1975), they are a class of stochastic search and optimisation algorithms based on natural biological evolution. They work by creating many random solutions to the problem at hand. This population of many solutions will then evolve from one generation to the next, ultimately arriving at a satisfactory solution to the problem. At each generation, the best solutions are retained in the population while the inferior ones are eliminated; by repeating this process among the better elements, the population steadily improves and generates new solutions.

Genetic Algorithms:

In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are one of the best ways to solve a problem for which little is known. They are a very general algorithm and so will work well in any search space. All you need to know is what you need the solution to be able to do well, and a genetic algorithm will be able to create a high quality solution. Genetic algorithms use the principles of selection and evolution to produce several solutions to a given problem. Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on bio-inspired operators such as mutation, crossover and selection. A group of computers would try to do tasks, and the most successful ones would be bred with each other by having half of each of their programming merged together into a new computer. The less successful ones would be eliminated. Over many, many iterations, this natural selection process would produce better and better computers. The challenge would be creating an automated evaluation and breeding cycle so this evolution process could run on its own. The downside of copying evolution is that evolution likes to take a billion years to do things and we want to do this in a few decades.
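The select-crossover-mutate cycle described above can be sketched as follows, using a toy “OneMax” problem where fitness simply counts 1-bits. The population size, mutation rate and one-point crossover are illustrative choices, not canonical settings:

```python
import random
random.seed(0)

GENOME_LEN = 20

def fitness(bits):            # OneMax: the number of 1-bits is the fitness
    return sum(bits)

def crossover(a, b):          # one-point crossover merges two parents
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.02):  # flip each bit with a small probability
    return [1 - b if random.random() < rate else b for b in bits]

# Start from a random population and evolve it.
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(30)]
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                        # selection: keep the fittest
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(20)]
    pop = survivors + children                  # next generation

best = max(pop, key=fitness)
print(fitness(best))
```

Because the fittest individuals always survive, the best fitness never decreases, and after a few dozen generations the population converges on (or very near) the all-ones optimum.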

But we have a lot of advantages over evolution.

First, evolution has no foresight and works randomly—it produces more unhelpful mutations than helpful ones, but we would control the process so it would only be driven by beneficial glitches and targeted tweaks.

Secondly, evolution doesn’t aim for anything, including intelligence—sometimes an environment might even select against higher intelligence (since it uses a lot of energy). We, on the other hand, could specifically direct this evolutionary process toward increasing intelligence.

Third, to select for intelligence, evolution has to innovate in a bunch of other ways to facilitate intelligence—like revamping the ways cells produce energy—when we can remove those extra burdens and use things like electricity. There’s no doubt we’d be much, much faster than evolution—but it’s still not clear whether we’ll be able to improve upon evolution enough to make this a viable strategy.

_

The genetic algorithm differs from a classical, derivative-based, optimization algorithm in two main ways, as summarized in the following table:

Classical Algorithm:

  • Generates a single point at each iteration. The sequence of points approaches an optimal solution.
  • Selects the next point in the sequence by a deterministic computation.

Genetic Algorithm:

  • Generates a population of points at each iteration. The best point in the population approaches an optimal solution.
  • Selects the next population by a computation which uses random number generators.

_

Some specific examples of genetic algorithms:

As the power of evolution gains increasingly widespread recognition, genetic algorithms have been used to tackle a broad variety of problems in an extremely diverse array of fields, clearly showing their power and their potential. Here I enumerate some of the more noteworthy uses to which they have been put.

Acoustics

Aerospace engineering

Astronomy and astrophysics

Chemistry

Electrical engineering

Financial markets

Game playing

Geophysics

Materials engineering

Mathematics and algorithmics

Military and law enforcement

Molecular biology

Pattern recognition and data mining

Robotics

Routing and scheduling

Systems engineering

_

Limitations of genetic algorithm:

There are limitations of the use of a genetic algorithm compared to alternative optimization algorithms:

  • Repeated fitness function evaluation for complex problems is often the most prohibitive and limiting segment of artificial evolutionary algorithms.
  • Genetic algorithms do not scale well with complexity.
  • The “better” solution is only in comparison to other solutions. As a result, the stop criterion is not clear in every problem.
  • In many problems, GAs may have a tendency to converge towards local optima or even arbitrary points rather than the global optimum of the problem. This means that it does not “know how” to sacrifice short-term fitness to gain longer-term fitness.
  • Operating on dynamic data sets is difficult, as genomes begin to converge early on towards solutions which may no longer be valid for later data.
  • GAs cannot effectively solve problems in which the only fitness measure is a single right/wrong measure (like decision problems), as there is no way to converge on the solution (no hill to climb). In these cases, a random search may find a solution as quickly as a GA.
  • For specific optimization problems and problem instances, other optimization algorithms may be more efficient than genetic algorithms in terms of speed of convergence. Alternative and complementary algorithms include evolution strategies, evolutionary programming, simulated annealing, Gaussian adaptation, hill climbing, and swarm intelligence (e.g.: ant colony optimization, particle swarm optimization) and methods based on integer linear programming. The suitability of genetic algorithms is dependent on the amount of knowledge of the problem; well-known problems often have better, more specialized approaches.

_

Swarm Intelligence (SI):

Swarm intelligence is a sub-field of evolutionary computing. This is an approach to, as well as application of AI, similar to a neural network. Here, programmers study how intelligence emerges in natural systems like swarms of bees even though on an individual level, a bee just follows simple rules. They study relationships in nature like the prey-predator relationships that give an insight into how intelligence emerges in a swarm or collection from simple rules at an individual level. They develop intelligent systems by creating agent programs that mimic the behavior of these natural systems. The agents follow very simple rules, and although there is no centralized control structure dictating how individual agents should behave, local, and to a certain degree random, interactions between such agents lead to the emergence of “intelligent” global behavior, unknown to the individual agents. Examples in natural systems of SI include ant colonies, bird flocking, animal herding, bacterial growth, fish schooling and microbial intelligence.
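Particle swarm optimization, one of the SI techniques mentioned elsewhere in this article, shows how simple local rules produce collective problem solving: each particle only remembers its own best position and is loosely attracted to the swarm’s best, yet the swarm as a whole homes in on the optimum. A minimal 1-D sketch (the inertia and attraction coefficients are typical textbook values, not tuned):

```python
import random
random.seed(1)

def f(x):                     # objective each particle privately evaluates
    return (x - 3.0) ** 2     # minimum at x = 3

# No central controller: each particle follows the same simple update rule.
particles = [{"x": random.uniform(-10, 10), "v": 0.0} for _ in range(15)]
for p in particles:
    p["best_x"] = p["x"]
global_best = min(particles, key=lambda p: f(p["x"]))["x"]

for step in range(100):
    for p in particles:
        r1, r2 = random.random(), random.random()
        # velocity: inertia + pull toward own best + pull toward swarm best
        p["v"] = (0.5 * p["v"]
                  + 1.5 * r1 * (p["best_x"] - p["x"])
                  + 1.5 * r2 * (global_best - p["x"]))
        p["x"] += p["v"]
        if f(p["x"]) < f(p["best_x"]):
            p["best_x"] = p["x"]
        if f(p["x"]) < f(global_best):
            global_best = p["x"]

print(round(global_best, 2))
```

The "intelligent" global behaviour, converging on x = 3, emerges from interactions no individual particle is aware of.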

_____

  1. Evaluating progress:

In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent now known as the Turing test. This procedure allows almost all the major problems of artificial intelligence to be tested. However, it is a very difficult challenge and at present all agents fail. Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry, hand-writing recognition and game-playing. Such tests have been termed subject matter expert Turing tests. Smaller problems provide more achievable goals and there are an ever-increasing number of positive results.

One classification for outcomes of an AI test is:

  1. Optimal: it is not possible to perform better.
  2. Strong super-human: performs better than all humans.
  3. Super-human: performs better than most humans.
  4. Sub-human: performs worse than most humans.

For example, performance at draughts (i.e. checkers) is optimal, performance at chess is super-human and nearing strong super-human, and performance at many everyday tasks (such as recognizing a face or crossing a room without bumping into something) is sub-human. A quite different approach measures machine intelligence through tests which are developed from mathematical definitions of intelligence. Examples of these kinds of tests began in the late nineties, with intelligence tests devised using notions from Kolmogorov complexity and data compression. Two major advantages of mathematical definitions are their applicability to nonhuman intelligences and their absence of a requirement for human testers. A derivative of the Turing test is the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). As the name implies, this helps to determine that a user is an actual person and not a computer posing as a human. In contrast to the standard Turing test, CAPTCHA is administered by a machine and targeted at a human, as opposed to being administered by a human and targeted at a machine. A computer asks a user to complete a simple test, then generates a grade for that test. Computers are unable to solve the problem, so correct solutions are deemed to be the result of a person taking the test. A common type of CAPTCHA is the test that requires the typing of distorted letters, numbers or symbols that appear in an image undecipherable by a computer.

_____

  1. Creativity:

A sub-field of AI addresses creativity both theoretically (from a philosophical and psychological perspective) and practically (via specific implementations of systems that generate outputs that can be considered creative, or systems that identify and assess creativity). Related areas of computational research are Artificial intuition and Artificial thinking.

______

  1. General intelligence:

Many researchers think that their work will eventually be incorporated into a machine with artificial general intelligence, combining all the skills above and exceeding human abilities at most or all of them. A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project. Many of the problems above may require general intelligence to be considered solved. For example, even a straightforward, specific task like machine translation requires that the machine read and write in both languages (NLP), follow the author’s argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author’s intention (social intelligence). A problem like machine translation is considered “AI-complete”. In order to reach human-level performance for machines, one must solve all the problems.

______

______

AI, Machine Learning and Deep Learning:

The figure below delineates the relationship between artificial intelligence, machine learning and deep learning:

 

__

Machine learning (ML):

Arthur Samuel defined machine learning as “the ability to learn without being explicitly programmed”. Machine Learning is a subset of Artificial Intelligence and it explores the development of algorithms that learn from given data. These algorithms should be able to learn from past experience (i.e. the given data) and teach themselves to adapt to new circumstances and perform certain tasks. In programmatic advertising, for example, Machine Learning algorithms help us navigate through the Big Data pool and extract meaningful information; they help us define our most relevant audience and direct our campaign in such a way as to best meet their preferences. Many AI professionals are not always experienced in ML and vice-versa. Some algorithms are Classification (Neural Network, SVM, CART, Random Forest, Gradient Boosting, Logistic Regression), Clustering (K-Means Clustering, Hierarchical Clustering, BIRCH), Regression (Linear/ Polynomial Regression, Curve Fitting), Feature Selection (PCA, ICA, RFE), Forecasting (ARIMA, ANOVA), Collaborative Filtering/Recommendation Systems etc. Many data based learning and decision systems are developed using these techniques in areas of Finance, Healthcare, Retail and E-commerce. A few examples are the product recommendation system of Amazon, energy load forecasting in the power industry, sales forecasting in the retail industry etc.
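As a concrete taste of one of the algorithms listed above, here is k-means clustering reduced to its two alternating steps on a tiny 1-D dataset. The data values and starting centroids are invented for illustration:

```python
# Two obvious 1-D clusters; k-means should recover their centres.
data = [1.0, 1.2, 0.8, 1.1, 9.0, 9.3, 8.7, 9.1]
centroids = [0.0, 5.0]        # deliberately rough starting guesses

for _ in range(10):
    # Assignment step: each point joins its nearest centroid.
    clusters = [[], []]
    for x in data:
        idx = min((abs(x - c), i) for i, c in enumerate(centroids))[1]
        clusters[idx].append(x)
    # Update step: move each centroid to the mean of its cluster.
    centroids = [sum(c) / len(c) if c else centroids[i]
                 for i, c in enumerate(clusters)]

print([round(c, 2) for c in centroids])
```

The algorithm was never told where the clusters are; the structure is learned from the data alone, which is the essence of the "learn from given data" definition above.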

_

Learning has remained a challenging area for AI. The importance of learning, however, is beyond question, particularly as this ability is one of the most important components of intelligent behavior. A traditional system may perform extensive and costly computations to solve a problem. Unlike a human being, however, if it is given the same or a similar problem a second time, it usually does not remember the solution. It performs the same sequence of computations again. This is true the second, third, fourth, and every time it solves that problem–hardly the behavior of an intelligent problem solver. The obvious solution to this problem is for programs to learn on their own, either from experience, analogy, examples, or by being “told” what to do. Machine learning is the process by which your system functions as a living adaptive intelligent being: it doesn’t just accept data, it learns from it and then adapts itself through that knowledge, once it is exposed to further data. Basically, machine learning is as close to being a living creature as a computer can get so far. The computer is constantly forced to adapt itself through data mining and algorithm creation processes, “learning” tasks rather than simply providing solutions based on a fixed set of data. Machine learning technologies include expert systems, genetic algorithms, neural networks, random seeded crystal learning, or any effective combinations.

_

Machine learning types:

Supervised learning is the type of learning that takes place when the training instances are labelled with the correct result, which gives feedback about how learning is progressing. This is akin to having a supervisor who can tell the agent whether or not it was correct. In unsupervised learning, the goal is harder because there are no pre-determined categorizations. Reinforcement learning, in which an agent learns from rewards rather than labels, has produced many successes, such as world-champion calibre backgammon programs and even machines capable of driving cars! It can be a powerful technique when there is an easy way to assign values to actions. Clustering can be useful when there is enough data to form clusters (though this turns out to be difficult at times) and especially when additional data about members of a cluster can be used to produce further results due to dependencies in the data. Classification learning is powerful when the classifications are known to be correct (for instance, when dealing with diseases, it’s generally straight-forward to determine the diagnosis after the fact by an autopsy), or when the classifications are simply arbitrary things that we would like the computer to be able to recognize for us. Classification learning is often necessary when the decisions made by the algorithm will be required as input somewhere else. Otherwise, it wouldn’t be easy for whoever requires that input to figure out what it means. Both techniques can be valuable and which one you choose should depend on the circumstances–what kind of problem is being solved, how much time is allotted to solving it (supervised learning or clustering is often faster than reinforcement learning techniques), and whether supervised learning is even possible.
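Supervised classification at its simplest can be shown with a 1-nearest-neighbour sketch: the labelled training pairs play the role of the “supervisor”. The feature values and labels here are invented for illustration:

```python
# Labelled training set: (feature, label) pairs act as the supervisor.
train = [(1.0, "low"), (1.5, "low"), (8.0, "high"), (9.0, "high")]

def classify(x):
    """1-nearest-neighbour: copy the label of the closest training point."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

print(classify(2.0))   # → low
print(classify(7.5))   # → high
```

With no labels there would be nothing to copy, which is exactly why unsupervised learning has to fall back on structure in the data itself, as in clustering.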

_

List of machine learning algorithms:

  1. Decision tree learning:
  2. Association rule learning:
  3. Artificial neural networks
  4. Deep learning
  5. Inductive logic programming
  6. Support vector machines
  7. Clustering
  8. Bayesian networks
  9. Reinforcement learning
  10. Representation learning
  11. Similarity and metric learning
  12. Sparse dictionary learning
  13. Genetic algorithms
  14. Rule-based machine learning
  15. Learning classifier systems

____

The No Free Lunch theorem and the importance of bias:

So far, a major theme in these machine learning articles has been having algorithms generalize from the training data rather than simply memorizing it. But there is a subtle issue that plagues all machine learning algorithms, summarized as the “no free lunch theorem”. The gist of this theorem is that you can’t get learning “for free” just by looking at training instances. Why not? Well, the fact is, the only things you know about the data are what you have seen.

For example, if I give you the training inputs (0, 0) and (1, 1) and the classifications of the input as both being “false”, there are two obvious hypotheses that fit the data: first, every possible input could result in a classification of false. On the other hand, every input except the two training inputs might be true–these training inputs could be the only examples of inputs that are classified as false. In short, given a training set, there are always at least two equally plausible, but totally opposite, generalizations that could be made. This means that any learning algorithm requires some kind of “bias” to distinguish between these plausible outcomes.
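The two opposite hypotheses in the example above can be written out directly (a minimal sketch):

```python
train = {(0, 0): False, (1, 1): False}   # the only evidence we have

h1 = lambda x: False            # hypothesis 1: every input is False
h2 = lambda x: x not in train   # hypothesis 2: only the seen inputs are False

# Both hypotheses fit the training set perfectly...
assert all(h1(x) == y and h2(x) == y for x, y in train.items())

# ...yet they disagree completely on an unseen input.
print(h1((0, 1)), h2((0, 1)))   # → False True
```

Nothing in the training data can choose between them; only a bias, such as "prefer the simpler hypothesis", breaks the tie.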

__

Applications for machine learning include:

  • Adaptive websites
  • Affective computing
  • Bioinformatics
  • Brain-machine interfaces
  • Cheminformatics
  • Classifying DNA sequences
  • Computational anatomy
  • Computer vision, including object recognition
  • Detecting credit card fraud
  • Game playing
  • Information retrieval
  • Internet fraud detection
  • Marketing
  • Machine perception
  • Medical diagnosis
  • Economics
  • Natural language processing
  • Natural language understanding
  • Optimization and metaheuristic
  • Online advertising
  • Recommender systems
  • Robot locomotion
  • Search engines
  • Sentiment analysis (or opinion mining)
  • Sequence mining
  • Software engineering
  • Speech and handwriting recognition
  • Stock market analysis
  • Structural health monitoring
  • Syntactic pattern recognition
  • User behavior analytics

_____

_____

Deep Learning:

Deep Learning is really an offshoot of Machine Learning concerned with the study of “deep neural networks”, loosely inspired by the human brain. Deep Learning tries to emulate the functions of inner layers of the human brain, and its successful applications are found in image recognition, language translation, and email security. Deep Learning creates knowledge from multiple layers of information processing. The Deep Learning technology is modelled after the human brain, and each time new data is poured in, its capabilities get better. Deep learning is a branch of machine learning based on a set of algorithms that attempt to model high level abstractions in data. In a simple case, there might be two sets of neurons: ones that receive an input signal and ones that send an output signal. When the input layer receives an input it passes on a modified version of the input to the next layer. In a deep network, there are many layers between the input and output (and the layers are not made of neurons but it can help to think of it that way), allowing the algorithm to use multiple processing layers, composed of multiple linear and non-linear transformations. Deep learning is part of a broader family of machine learning methods based on learning representations of data. An observation (e.g., an image) can be represented in many ways such as a vector of intensity values per pixel, or in a more abstract way as a set of edges, regions of particular shape, etc. Some representations are better than others at simplifying the learning task (e.g., face recognition or facial expression recognition). One of the promises of deep learning is replacing handcrafted features with efficient algorithms for unsupervised or semi-supervised feature learning and hierarchical feature extraction.
Various deep learning architectures such as deep neural networks, convolutional deep neural networks, deep belief networks and recurrent neural networks have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition and bioinformatics where they have been shown to produce state-of-the-art results on various tasks.

_

Deep Learning is about constructing machine learning models that learn a hierarchical representation of the data. Neural Networks are a class of machine learning algorithms. The artificial neuron forms the computational unit of the model and the network describes how these units are connected to one another. You can describe a hierarchical model with Neural Networks where each layer of neurons represents a level in that hierarchy. There is no real restriction on how many layers you can add to a network, but going beyond two layers was impractical in the past, with diminishing returns. These limitations were overcome by algorithmic advances, the availability of large volumes of training data and accelerated GPU computing. People started adding more layers again, resulting in Deep Neural Networks with much success. They demonstrated how Neural Networks with lots of layers enable constructing the representations required for deep learning.
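A forward pass through such a stack of layers can be sketched in plain Python. The sigmoid activation and the hand-picked weights are illustrative only; a real deep network would learn its weights by training:

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums followed by a sigmoid."""
    return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
            for row, b in zip(weights, biases)]

def forward(x, network):
    """Pass the input through each layer in turn; depth = number of layers."""
    for weights, biases in network:
        x = layer(x, weights, biases)
    return x

# Hypothetical 2-4-1 network: 2 inputs, 4 hidden units, 1 output,
# with arbitrary illustrative weights.
net = [
    ([[2.0, -1.0], [0.5, 0.5], [-1.0, 2.0], [1.0, 1.0]], [0.0, -0.5, 0.0, -1.0]),
    ([[1.0, -2.0, 1.0, 0.5]], [0.0]),
]
print([round(v, 3) for v in forward([1.0, 0.0], net)])
```

Each layer transforms the previous layer's output, so stacking more (weights, biases) pairs in `net` deepens the hierarchy without changing any other code.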

_

Recurrent neural network (RNN):

Sequence-processing recurrent neural networks (RNNs) are the ultimate NNs, because they are general computers (an RNN can emulate the circuits of a microchip). In fully connected RNNs, all units have connections to all non-input units. Unlike feedforward NNs, RNNs can implement while loops, recursion, etc. The program of an RNN is its weight matrix. RNNs can learn programs that mix sequential and parallel information processing in a natural and efficient way.
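The claim that “the program of an RNN is its weight matrix” can be sketched in a few lines (NumPy, toy random weights): the same weights are reused at every time step, mixing the new input with the stored hidden state.

```python
import numpy as np

def rnn_step(h, x, W_h, W_x, b):
    # The RNN's "program" is its weight matrices: the same weights are
    # reused at every time step, mixing new input with the stored state.
    return np.tanh(W_h @ h + W_x @ x + b)

rng = np.random.default_rng(1)
W_h = rng.standard_normal((3, 3)) * 0.5   # recurrent (state-to-state) weights
W_x = rng.standard_normal((3, 2))         # input-to-state weights
b = np.zeros(3)

h = np.zeros(3)                           # initial hidden state
sequence = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
for x in sequence:
    h = rnn_step(h, x, W_h, W_x, b)       # the state carries information forward
```

Because the state `h` feeds back into the next step, the loop itself is what gives the network its while-loop-like, sequential character.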

_

Convolutional neural networks (CNN):

CNNs have become the method of choice for processing visual and other two-dimensional data.  A CNN is composed of one or more convolutional layers with fully connected layers (matching those in typical artificial neural networks) on top. It also uses tied weights and pooling layers.
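To make “convolutional layers”, “tied weights” and “pooling layers” concrete, here is a toy NumPy sketch (illustrative only, not any particular library’s API): one convolution with a single shared kernel, followed by 2×2 max pooling.

```python
import numpy as np

def conv2d(img, kernel):
    # "Tied weights": the same small kernel is slid over every position.
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2(x):
    # 2x2 max pooling: keep the strongest response in each block.
    H, W = x.shape
    return x[:H - H % 2, :W - W % 2].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 "image"
edge = np.array([[1.0, -1.0]])                   # tiny horizontal-gradient kernel
feat = max_pool2(conv2d(img, edge))              # a 3x2 feature map
```

Because this toy image brightens by exactly 1 per column, the kernel responds with −1 at every position, and pooling preserves that; on real images, different kernels light up on different local patterns.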

_

Recursive neural networks:

A recursive neural network is created by applying the same set of weights recursively over a differentiable graph-like structure, by traversing the structure in topological order. Such networks are typically also trained by the reverse mode of automatic differentiation. They were introduced to learn distributed representations of structure, such as logical terms. Recursive neural networks have been applied to natural language processing.
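A minimal sketch of “applying the same set of weights recursively over a graph-like structure”: the function below encodes a small binary tree bottom-up with one shared weight matrix (NumPy; the tree, dimensions and weights are made up for illustration).

```python
import numpy as np

def encode(node, W, b):
    # The same weights W are applied recursively at every internal node,
    # combining child vectors bottom-up (i.e., in topological order).
    if isinstance(node, np.ndarray):            # leaf: already a vector
        return node
    left, right = node
    children = np.concatenate([encode(left, W, b), encode(right, W, b)])
    return np.tanh(W @ children + b)

d = 3                                           # representation size
rng = np.random.default_rng(2)
W = rng.standard_normal((d, 2 * d)) * 0.5       # one shared weight matrix
b = np.zeros(d)

# A tiny binary tree: (leaf, (leaf, leaf)), e.g. a parse of a 3-word phrase.
tree = (rng.standard_normal(d), (rng.standard_normal(d), rng.standard_normal(d)))
vec = encode(tree, W, b)                        # fixed-size vector for the whole tree
```

Whatever the shape of the tree, the result is a single fixed-size vector, which is what makes this useful for representing parse trees in NLP.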

_

Long short-term memory:

Numerous researchers now use variants of a deep learning RNN called the long short-term memory (LSTM) network. Unlike traditional RNNs, it does not suffer from the vanishing gradient problem. LSTM is normally augmented by recurrent gates called forget gates. LSTM RNNs prevent backpropagated errors from vanishing or exploding; instead, errors can flow backwards through unlimited numbers of virtual layers in LSTM RNNs unfolded in time. That is, LSTM can learn “Very Deep Learning” tasks that require memories of events that happened thousands or even millions of discrete time steps ago. Problem-specific LSTM-like topologies can be evolved. LSTM works even when there are long delays, and it can handle signals that have a mix of low- and high-frequency components. LSTM improved large-vocabulary speech recognition and text-to-speech synthesis, and Google’s speech recognition is now available through Google Voice to billions of smartphone users. LSTM has also become very popular in the field of Natural Language Processing.
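The gating mechanism described above can be sketched as a toy NumPy LSTM cell (random weights, no training; real implementations add batching, training and more): the forget gate `f` decides what the cell state `c` keeps, and the additive update of `c` is what lets errors flow back over long delays.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(c, h, x, p):
    # Input (i), forget (f) and output (o) gates control the cell state c.
    z = np.concatenate([h, x])
    i = sigmoid(p["Wi"] @ z + p["bi"])
    f = sigmoid(p["Wf"] @ z + p["bf"])          # the "forget gate"
    o = sigmoid(p["Wo"] @ z + p["bo"])
    g = np.tanh(p["Wg"] @ z + p["bg"])          # candidate values
    c = f * c + i * g                           # keep some memory, add some new
    h = o * np.tanh(c)
    return c, h

n, m = 3, 2                                     # state size, input size
rng = np.random.default_rng(3)
p = {k: rng.standard_normal((n, n + m)) * 0.5 for k in ("Wi", "Wf", "Wo", "Wg")}
p.update({k: np.zeros(n) for k in ("bi", "bf", "bo", "bg")})

c, h = np.zeros(n), np.zeros(n)
for _ in range(5):
    c, h = lstm_step(c, h, np.array([1.0, 0.0]), p)
```

Because `c` is updated additively (`f * c + i * g`) rather than repeatedly squashed through a non-linearity, gradients through `c` neither vanish nor explode the way they do in a plain RNN.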

_

Deep Boltzmann machines (DBM):

A deep Boltzmann machine (DBM) is a type of binary pairwise Markov random field (undirected probabilistic graphical model) with multiple layers of hidden random variables. It is a network of symmetrically coupled stochastic binary units. DBMs can learn complex and abstract internal representations of the input in tasks such as object or speech recognition, using limited labelled data to fine-tune the representations built using a large supply of unlabelled sensory input data. A restricted Boltzmann machine (RBM) is a generative stochastic artificial neural network that can learn a probability distribution over its set of inputs.
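As a rough sketch of an RBM’s restricted, symmetric connectivity (toy NumPy code; this shows one Gibbs sampling step only, not a full training procedure such as contrastive divergence): given the binary visible units, each hidden unit is sampled independently, and vice versa.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gibbs_step(v, W, b_h, b_v, rng):
    # "Restricted" connectivity: hidden units depend only on visible units
    # and vice versa, so each whole layer is sampled in one vectorised step.
    p_h = sigmoid(W @ v + b_h)                   # p(h=1 | v)
    h = (rng.random(p_h.shape) < p_h).astype(float)
    p_v = sigmoid(W.T @ h + b_v)                 # p(v=1 | h), symmetric weights
    v = (rng.random(p_v.shape) < p_v).astype(float)
    return v, h

rng = np.random.default_rng(4)
W = rng.standard_normal((4, 6)) * 0.1            # 4 hidden, 6 visible units
v = (rng.random(6) < 0.5).astype(float)          # random binary visible vector
v, h = gibbs_step(v, W, np.zeros(4), np.zeros(6), rng)
```

A DBM stacks several such layers of stochastic binary units; training adjusts `W` so that the model’s distribution over `v` matches the data.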

__

Deep feedforward neural networks:

Deep learning in artificial neural networks with many layers has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others. Deep feedforward neural networks were used in conjunction with reinforcement learning by AlphaGo, Google DeepMind’s program that was the first to beat a professional human Go player.

_

Hierarchical temporal memory (HTM):

Hierarchical temporal memory (HTM) is an unsupervised-to-semi-supervised online machine learning model. HTM does not present any new idea or theory, but combines existing ideas to mimic the neocortex with a simple design that provides a large range of capabilities. HTM combines and extends approaches used in sparse distributed memory, Bayesian networks, and spatial and temporal clustering algorithms, while using a tree-shaped hierarchy of nodes that is common in neural networks. HTM is the best-known example of the biological neural network approach. Today, HTM systems are able to learn the structure of streaming data, make predictions and detect anomalies, learning continuously from unlabelled data. By taking a robust biological approach, the brain gives us a roadmap of where to direct future work, such as completing our understanding of behavior, attention and short-term memory. This roadmap distinguishes HTM from other techniques and, its proponents argue, makes it a strong candidate for creating intelligent machines.

_

The figure above shows an example of an HTM hierarchy used for image recognition. A typical HTM network is a tree-shaped hierarchy of levels that are composed of smaller elements called nodes or columns. A single level in the hierarchy is also called a region. Higher hierarchy levels often have fewer nodes and therefore less spatial resolvability, but they can reuse patterns learned at the lower levels, combining them to memorize more complex patterns. Each HTM node has the same basic functionality. In learning and inference modes, sensory data comes into the bottom-level nodes; in generation mode, the bottom-level nodes output the generated pattern of a given category. The top level usually has a single node that stores the most general categories (concepts), which determine, or are determined by, smaller concepts in the lower levels that are more restricted in time and space. When in inference mode, a node in each level interprets information coming in from its child nodes in the lower level as probabilities of the categories it has in memory. Each HTM region learns by identifying and memorizing spatial patterns – combinations of input bits that often occur at the same time. It then identifies temporal sequences of spatial patterns that are likely to occur one after another.

_

Applications of deep learning:

Automatic speech recognition

Image recognition

Natural language processing

Drug discovery and toxicology

Customer relationship management

Recommendation systems

Biomedical informatics

______

The figure below shows chronological evolution of artificial intelligence, machine learning and deep learning:

______

______

Data Mining:

This field is mostly concerned with extracting information from vast amounts of data. Google Search is one example of such a system. It is not exactly a technical subject in itself, but rather the application of different algorithms from NLP, machine learning and AI. Search applications, text summarization and question answering systems (e.g., Siri) are examples of this. Another example is association rule mining (pattern mining), applied in the retail industry to mine consumers’ buying behaviour from vast amounts of historical purchase data.

_

Data Science:

Data science refers to the interdisciplinary field that incorporates statistics, mathematics, computer science, and business analysis to collect, organize and analyse large amounts of data to generate actionable insights. The types of data (e.g., text, audio, and video) and the analytic techniques (e.g., decision trees, neural networks) that both data science and AI use are very similar. Differences, if any, may be in their purpose: data science aims to generate actionable insights for business, irrespective of any claims about simulating human intelligence, while the pursuit of AI may be to simulate human intelligence itself.

_

Cognitive computing:

Cognitive computing is primarily an IBM term. It is an approach to curating massive amounts of information, which are ingested into what is called the cognitive stack; connections are then created among all of the ingested material, so that the user can explore a particular problem or question that had not been anticipated. Cognitive computing does not have a clear definition. At best, it can be viewed as a subset of AI that focuses on simulating the human thought process based on how the brain works. It is also viewed as a “category of technologies that uses natural language processing and machine learning to enable people and machines to interact more naturally to extend and magnify human expertise and cognition.” Under either definition, it is a subset of AI and not an independent area of study.

______

______

AI technologies:

_

Hardware Developments in AI:

Because artificial intelligence systems are made up of many facts and rules held in a knowledge base, they place growing demands on hardware:

  • Main memory (RAM)
  • Backing storage (Hard Disk Capacity)
  • Faster processors

This will allow the Artificial Intelligence System to solve more complex problems.

____

Platforms:

A platform (or “computing platform”) is defined as “some sort of hardware architecture or software framework (including application frameworks), that allows software to run”. As Rodney Brooks pointed out many years ago, it is not just the artificial intelligence software that defines the AI features of the platform, but rather the actual platform itself that affects the AI that results, i.e., there needs to be work in AI problems on real-world platforms rather than in isolation. A wide variety of platforms has allowed different aspects of AI to develop, ranging from expert systems such as Cyc to deep-learning frameworks to robot platforms such as the Roomba with open interface. Recent advances in deep artificial neural networks and distributed computing have led to a proliferation of software libraries, including Deeplearning4j, TensorFlow, Theano and Torch.

_____

Software:

Software suites containing a variety of machine learning algorithms include the following:

  1. Free and open-source software:
  • dlib
  • ELKI
  • Encog
  • GNU Octave
  • H2O
  • Mahout
  • Mallet (software project)
  • mlpy
  • MLPACK
  • MOA (Massive Online Analysis)
  • ND4J with Deeplearning4j
  • NuPIC
  • OpenAI Gym
  • OpenAI Universe
  • OpenNN
  • Orange
  • scikit-learn
  • Shogun
  • TensorFlow
  • Torch (machine learning)
  • Spark
  • Yooreeka
  • Weka
  2. Proprietary software with free and open-source editions:
  • KNIME
  • RapidMiner
  3. Proprietary software:
  • Amazon Machine Learning
  • Angoss KnowledgeSTUDIO
  • Ayasdi
  • Databricks
  • Google Prediction API
  • IBM SPSS Modeler
  • KXEN Modeler
  • LIONsolver
  • Mathematica
  • MATLAB
  • Microsoft Azure Machine Learning
  • Neural Designer
  • NeuroSolutions
  • Oracle Data Mining
  • RCASE
  • SAS Enterprise Miner
  • SequenceL
  • Splunk
  • STATISTICA Data Miner

_____

AI Solutions:

_

One way to categorize AI solutions for commercial and scientific needs is by level of complexity of the application: simple, complex, or very complex (though these are, clearly, also open to interpretation). This is an idea borrowed from the Schloer Consulting Group:

  1. Simple – Solutions and platforms for narrow commercial needs, such as eCommerce, network integration or resource management.
  • Examples: Customer Relationship Management (CRM) software, Content Management System (CMS) software, automated agent technology
  2. Complex – Involves the management and analysis of specific functions of a system (domain of predictive analytics); could include optimization of work systems, predictions of events or scenarios based on historical data; security monitoring and management; etc.
  • Examples: Financial services, risk management, intelligent traffic management in telecommunication and energy
  3. Very Complex – Working through the entire information collection, analysis, and management processes; the system needs to know where to look for data, how to collect, and how to analyze, and then propose suggested solutions for near and mid-term futures.
  • Examples: Global climate analysis, military simulations, coordination and control of multi-agent systems etc.

______

The graph below shows success rates of various AI technologies:

The market for artificial intelligence (AI) technologies is flourishing. Beyond the hype and the heightened media attention, the numerous startups and the internet giants racing to acquire them, there is a significant increase in investment and adoption by enterprises. A Narrative Science survey found in 2015 that 38% of enterprises are already using AI, growing to 62% by 2018. Forrester Research predicted a greater than 300% increase in investment in artificial intelligence in 2017 compared with 2016. IDC estimated that the AI market will grow from $8 billion in 2016 to more than $47 billion in 2020.

______

Top AI technologies:

  1. Natural Language Generation: Producing text from computer data. Currently used in customer service, report generation, and summarizing business intelligence insights. Sample vendors: Attivio, Automated Insights, Cambridge Semantics, Digital Reasoning, Lucidworks, Narrative Science, SAS, Yseop.
  2. Speech Recognition: Transcribing and transforming human speech into a format useful for computer applications. Currently used in interactive voice response systems and mobile applications. Sample vendors: NICE, Nuance Communications, OpenText, Verint Systems.
  3. Virtual Agents: From simple chatbots to advanced systems that can network with humans. Currently used in customer service and support and as a smart home manager. Sample vendors: Amazon, Apple, Artificial Solutions, Assist AI, Creative Virtual, Google, IBM, IPsoft, Microsoft, Satisfi.
  4. Machine Learning Platforms: Providing algorithms, APIs, development and training toolkits, data, as well as computing power to design, train, and deploy models into applications, processes, and other machines. Currently used in a wide range of enterprise applications, mostly involving prediction or classification. Sample vendors: Amazon, Fractal Analytics, Google, H2O.ai, Microsoft, SAS, Skytree.
  5. AI-optimized Hardware: Graphics processing units (GPU) and appliances specifically designed and architected to efficiently run AI-oriented computational jobs. Currently primarily making a difference in deep learning applications. Sample vendors: Alluviate, Cray, Google, IBM, Intel, Nvidia.
  6. Decision Management: Engines that insert rules and logic into AI systems and used for initial setup/training and ongoing maintenance and tuning. A mature technology, it is used in a wide variety of enterprise applications, assisting in or performing automated decision-making. Sample vendors: Advanced Systems Concepts, Informatica, Maana, Pegasystems, UiPath.
  7. Deep Learning Platforms: A special type of machine learning consisting of artificial neural networks with multiple abstraction layers. Currently primarily used in pattern recognition and classification applications supported by very large data sets. Sample vendors: Deep Instinct, Ersatz Labs, Fluid AI, MathWorks, Peltarion, Saffron Technology, Sentient Technologies.
  8. Biometrics: Enable more natural interactions between humans and machines, including but not limited to image and touch recognition, speech, and body language. Currently used primarily in market research. Sample vendors: 3VR, Affectiva, Agnitio, FaceFirst, Sensory, Synqera, Tahzoo.
  9. Robotic Process Automation: Using scripts and other methods to automate human action to support efficient business processes. Currently used where it’s too expensive or inefficient for humans to execute a task or a process. Sample vendors: Advanced Systems Concepts, Automation Anywhere, Blue Prism, UiPath, WorkFusion.
  10. Text Analytics and NLP: Natural language processing (NLP) uses and supports text analytics by facilitating the understanding of sentence structure and meaning, sentiment, and intent through statistical and machine learning methods. Currently used in fraud detection and security, a wide range of automated assistants, and applications for mining unstructured data. Sample vendors: Basis Technology, Coveo, Expert System, Indico, Knime, Lexalytics, Linguamatics, Mindbreeze, Sinequa, Stratifyd, Synapsify.

________

________

Robotics:

_

Robotics and AI:

Not all robots require AI, and not all AI is implemented in robots. Robotics concerns the design of robots, which does not necessarily require AI: a robot can be an automaton that performs a task using preprogrammed logic, such as a robotic vacuum cleaner that cleans whilst detecting obstacles. AI is a different matter; it concerns artificial intelligence – a computer program capable of ‘learning’. Whilst the two are often highly linked, they are separate topics. AI is purely software, teaching a piece of complex circuitry to reason, whereas robotics is an interdisciplinary field that has components of mechatronics (that is, mechanical and electronics engineering) as well as software to govern that hardware. Essentially, robots don’t need to reason (consider the robotic arms in an assembly line); they merely need to execute commands. Robotics draws on a whole set of sciences: mathematics, physics, mechanics, electronics, materials, control, geometry, artificial intelligence and many others. However, each of those sciences by itself goes beyond robotics. If you set out to learn AI, you may end up applying it in robotics or not, depending on your own decisions later; if you set out to learn robotics, you may end up working on its AI or not, likewise.

_

Although artificial intelligence and robotics address similar problems, the two fields interact profitably in the area of building intelligent agents; this interaction has resulted in important developments in the areas of vision and action. Recent advancements in technologies including computation, robotics, machine learning, communication and miniaturization bring us closer to futuristic visions of compassionate intelligent devices. The missing element is a basic understanding of how to relate human functions (physiological, physical, and cognitive) to the design of intelligent devices and systems that aid and interact with people. Robotics is the branch of technology that deals with the design, construction, operation and application of robots, and with the computer systems for their control, sensory feedback and information processing.

_

Robotics:

A robot can be defined as a programmable, self-controlled device consisting of electronic, electrical, or mechanical units. More generally, it is a machine that functions in place of a living agent. Robots are especially desirable for certain work functions because, unlike humans, they never get tired; they can endure physical conditions that are uncomfortable or even dangerous; they can operate in airless conditions; they do not get bored by repetition; and they cannot be distracted from the task at hand. A telerobot is a robot that can act under the direction of a human being, such as the robotic arm on board a space shuttle. Alternatively, a robot may work autonomously under the control of a programmed computer.

_

Modern robotics could roughly be categorised into two main groups:

  • Those used in the arenas of entertainment and performing routine tasks.

One primary example of this is found in the car production business, where robots are used in the production of specific products, particularly in the areas of painting, welding and assembling of cars.

  • Those used in the security and medical industry.

Examples of this are found in areas of work such as bomb disposal, work in space or underwater, and the cleaning up of toxic waste. The iRobot PackBot and the Foster-Miller TALON have been used in Iraq and Afghanistan by the US military in defusing roadside bombs and other forms of explosive disposal.

_

All robots contain some level of computer programming code. A program is how a robot decides when or how to do something. For example, a robot with caterpillar tracks that needs to move across a muddy road may have the correct mechanical construction and receive the correct amount of power from its battery, but it would not go anywhere without a program telling it to move. Programs are the core essence of a robot: it could have excellent mechanical and electrical construction, but if its program is poorly constructed, its performance will be very poor (or it may not perform at all). There are three different types of robotic programs: remote control (RC), artificial intelligence (AI) and hybrid. A robot with remote control programming has a preexisting set of commands that it will perform only if and when it receives a signal from a control source, typically a human being with a remote control. It is perhaps more appropriate to view devices controlled primarily by human commands as falling within the discipline of automation rather than robotics. Robots that use artificial intelligence interact with their environment on their own, without a control source, and can determine reactions to objects and problems they encounter using their preexisting programming. Hybrid is a form of programming that incorporates both AI and RC functions.
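The three program types can be caricatured in a few lines of Python (the function and state names here are hypothetical, purely for illustration): a hybrid program obeys remote commands when they arrive, and otherwise reacts autonomously.

```python
def act(state, remote_command=None):
    # Hybrid robot program: a remote command takes priority (RC mode);
    # otherwise the robot reacts to its environment on its own (AI mode).
    if remote_command is not None:
        return remote_command                 # obey the human operator
    if state.get("obstacle_ahead"):
        return "turn_left"                    # autonomous reaction
    return "forward"                          # default behaviour

assert act({"obstacle_ahead": False}) == "forward"
assert act({"obstacle_ahead": True}) == "turn_left"
assert act({"obstacle_ahead": True}, remote_command="stop") == "stop"
```

A pure RC program would be only the first branch; a pure AI program only the last two.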

_

Two forms of artificial intelligence are suitable for applications in robotics and mechatronics:

Software intelligence is provided by a computer, microprocessor or microcontroller in which the intelligent software runs. Hardware links provide the data the processor needs to make decisions and to communicate with the control block. The decisions are programmed in a basic structure and in some cases can be changed according to the incoming data; in such a case, the program can “learn” from experience, which is considered a basic form of machine learning. Software intelligence can be located inside the robot or mechatronic machine itself when microprocessors and microcontrollers are used; the BASIC Stamp chip provides a simple way to add some degree of intelligence. Hardware intelligence is another way to add intelligence to a machine, by using circuits that can learn. The basic idea is to imitate the way living beings process the information they receive via their senses, i.e., using the nervous system.

_

Smart robot (intelligent robot):

A robot can carry out many tasks, such as the production of cars in a factory. Robots can weld, insert windscreens, paint, etc. The robot follows a control program to carry out the task given to it by a human. All these robots have sensors, yet they are NOT intelligent: they do the same thing over and over again as instructed by the control program. A sensor is a device which can detect physical data from its surroundings; this data is then input into a computer system. Examples of sensors: light, heat, movement, bump, pressure, temperature, sound. An intelligent robot has many different sensors, large processors and a large memory in order to exhibit intelligence. Such robots learn from their mistakes and are able to adapt to any new situation that may arise. An intelligent robot can be programmed with its own expert system: e.g., if a factory floor is blocked with fallen boxes, an intelligent robot will remember this and take a different route. These intelligent robots carry out many different tasks such as automated delivery in a factory, pipe inspection, bomb disposal, and exploration of dangerous/unknown environments.
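The “fallen boxes” scenario can be sketched as a tiny path-planning routine (illustrative Python; breadth-first search stands in for whatever planner a real robot would use): blocked cells are never entered, so the robot finds a different route.

```python
from collections import deque

def shortest_route(grid, start, goal):
    # Breadth-first search over the factory floor: "#" cells (e.g. fallen
    # boxes) are never entered, so the robot plans a detour around them.
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None                               # no route exists

floor = [".#.",                               # "#" = fallen boxes
         ".#.",
         "..."]
route = shortest_route(floor, (0, 0), (0, 2))  # detours around the blockage
```

Remembering the blockage then amounts to keeping the updated `floor` map between runs, so the next plan avoids it from the start.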

_

_

A smart robot is an artificial intelligence (AI) system that can learn from its environment and its experience and build on its capabilities based on that knowledge. Smart robots can collaborate with humans, working alongside them and learning from their behavior. An early example of a smart robot is Baxter, produced by Rethink Robotics of Boston, Massachusetts. Baxter is an industrial android (humanoid robot) that can work right next to line employees on the factory floor, often on highly repetitive tasks such as precision packing. The number and types of tasks that can be automated or augmented by software, robots and other smart machines is rapidly increasing. Smart robots have the capacity not only for manual labor but also for cognitive tasks. Erik Brynjolfsson and Andrew McAfee, authors of “The Second Machine Age,” maintain that technological advances have led global culture to an inflection point rivalling that brought about by the industrial revolution. According to Gartner, the prominent technology research firm, software, robots and other smart machines will take over one in three jobs currently conducted by humans by the year 2025.

__

Differences between Robot Systems and Other AI Programs:

  • AI programs usually operate in computer-simulated worlds; robots operate in the real physical world.
  • The input to an AI program is symbols and rules; the input to a robot is analog signals, in the form of speech waveforms or images.
  • AI programs need general-purpose computers to run on; robots need special hardware with sensors and effectors.

_

Applications of Robotics:

Robotics has been instrumental in various domains such as:

  • Industries − Robots are used for material handling, cutting, welding, colour coating, drilling, polishing, etc.
  • Military − Autonomous robots can reach inaccessible and hazardous zones during war. A robot named Daksh, developed by the Defence Research and Development Organisation (DRDO), is in service to destroy life-threatening objects safely.
  • Medicine − Robots are capable of carrying out hundreds of clinical tests simultaneously, rehabilitating permanently disabled people, and assisting in complex surgeries such as brain tumor removal.
  • Exploration − Robot rock climbers used for space exploration and underwater drones used for ocean exploration are a few examples.
  • Entertainment − Disney’s engineers have created hundreds of robots for movie making.

____

Sex robots and virtual dates could be the future of Valentine’s Day:

New technological devices are constantly trying to make life easier and better, with varying degrees of success. The future may also include new devices and apps that will not just improve relationships among people, but also create virtual relationships. Robots, for instance, are making inroads: care and companion robots have become accepted into our lives, and people bond with them. As experts believe, in the future we will become closer and more connected to machines. People will converse with apps on their smartphones, and in some cases they may lose track of the fact they are talking with a machine, not a person. In the near future, however, such relationships could become problematic and may involve a human withdrawing from “normal” relationships, experts say. Another concern is the implications of sex robots – for example, whether they further objectify women or cause people to place less value on meaningful human relationships. There is also some doubt over the ethics of sex robots, though proponents argue the benefits outweigh those doubts: sex robots could provide a way for people to be sexually active where they might not have access to socially normative relationships. Sex is a pleasurable, affirming act that enhances our well-being, and on this view, giving the opportunity to experience it to people who may not otherwise have access is a wonderful thing. Virtual reality technology could have a dramatic impact on our relationships as well, with the rise in pornographic content and applications. That is often accompanied by sex toys, known as teledildonics, that allow people to take part in sexual activity with each other even when they are in different places. That is why experts believe that soon we could have the ability to meet and interact in virtual reality environments. For example, on Valentine’s Day, you might be able to have a virtual date in a romantic place hundreds or thousands of miles away from your partner.
Maybe you’ll be able to give the gift of an original work of art, created by an artificial intelligence application.

_______

‘New approach to automation’ uses robots to control cells:

Rethink Robotics, the US company behind the Baxter and Sawyer collaborative robots, has developed a new software platform that, it says, can co-ordinate an entire work-cell from a single robot. A team of about 30 engineers has spent two years developing the Intera 5 platform, which reduces the need for conventional PLCs and for integration, allowing manufacturers to deploy automated work-cells in a matter of hours, rather than weeks. Rethink says that Intera 5 is much more than the latest version of its software; it’s a new approach to automation that can control robots, orchestrate work-cells, and collect data. It adds that the technology will allow manufacturers to build connected work-cells, without disrupting production. “We’ve created the world’s first smart robot that can orchestrate the entire work-cell, removing areas of friction and opening up new and affordable automation possibilities for manufacturers around the world,” says Rethink Robotics’ president and CEO, Scott Eckert. “Intera 5 is driving immediate value, while helping customers to work toward a smart factory, and providing a gateway to successful Industrial Internet of Things (IIoT) for the first time. “By implementing our robots equipped with Intera 5,” he adds, “manufacturers will have unprecedented work-cell co-ordination, greatly reducing the need for complex, time-consuming and outdated automation options.”

_______

Robot soldiers:

One of the scariest potential uses of AI and robotics is the development of a robot soldier. Although many have moved to ban the use of so-called “killer robots,” the fact that the technology could potentially power those types of robots soon is upsetting, to say the least.

_

Schizophrenic robot:

Researchers at the University of Texas at Austin and Yale University used a neural network called DISCERN to teach the system certain stories. To simulate an excess of dopamine and a process called hyperlearning, they told the system to not forget as many details. The results were that the system displayed schizophrenic-like symptoms and began inserting itself into the stories. It even claimed responsibility for a terrorist bombing in one of the stories.

_

Robots that deceive:

In many cases, robots and AI systems seem inherently trustworthy—why would they have any reason to lie to or deceive others? Well, what if they were trained to do just that? Researchers at Georgia Tech have used the actions of squirrels and birds to teach robots how to hide from and deceive one another. The military has reportedly shown interest in the technology.

_

Survival robots:

In an experiment conducted by the scientists of Intelligent Systems in Switzerland, robots were made to compete for a food source in a single area. The robots could communicate by emitting light and, after they found the food source, they began turning their lights off or using them to steer competitors away from the food source.

_______

_______

AI and IoT (internet of things):

IoT won’t work without Artificial Intelligence:

As the Internet of Things (IoT) continues its run as one of the most popular technology buzzwords, the discussion has turned from what it is, to how to drive value from it, to the tactical: how to make it work. IoT will produce a treasure trove of big data – data that can help cities predict accidents and crimes, give doctors real-time insight into information from pacemakers or biochips, enable optimized productivity across industries through predictive maintenance on equipment and machinery, create truly smart homes with connected appliances and provide critical communication between self-driving cars. The possibilities that IoT brings to the table are endless. As the rapid expansion of devices and sensors connected to the Internet of Things continues, the sheer volume of data being created by them will increase to a mind-boggling level. This data will hold extremely valuable insight into what’s working well or what’s not – pointing out conflicts that arise and providing high-value insight into new business risks and opportunities as correlations and associations are made. It sounds great. However, the big problem will be finding ways to analyze the deluge of performance data and information that all these devices create. If you’ve ever tried to find insight in terabytes of machine data, you know how hard this can be. It’s simply impossible for humans to review and understand all of this data – and doing so with traditional methods, even if you cut down the sample size, simply takes too much time. We need to improve the speed and accuracy of big data analysis in order for IoT to live up to its promise. If we don’t, the consequences could be disastrous and could range from the annoying – like home appliances that don’t work together as advertised – to the life-threatening – pacemakers malfunctioning or hundred car pileups. The only way to keep up with this IoT-generated data and gain the hidden insight it holds is with machine learning.

_

In an IoT situation, machine learning can help companies take the billions of data points they have and boil them down to what’s really meaningful. The general premise is the same as in the retail applications – review and analyze the data you’ve collected to find patterns or similarities that can be learned from, so that better decisions can be made. For example, wearable devices that track your health are already a burgeoning industry – but soon these will evolve into devices that are both inter-connected and connected to the internet, tracking your health and providing real-time updates to a health service. The goal is that your doctor would receive notification if a certain condition was met – your heart rate increased to an unsafe level, or even stopped, for example. To be able to call out potential problems, the data has to be analyzed in terms of what’s normal and what’s not. Similarities, correlations and abnormalities need to be quickly identified based on the real-time streams of data. Could this be done by an individual working at the health service – reviewing data from thousands of patients in real time and correctly deciding when to send an emergency flag out? Not likely – writing code, or rules, to scour through the data to find known patterns is enormously time-consuming, fraught with error and limited to identifying only previously known patterns. To analyze the data immediately as it’s collected and accurately identify both known and never-before-seen patterns, the machines that generate and aggregate this big data must also be used to learn normal behaviors for each patient and to track, uncover and flag anything outside the norm that could indicate a critical health issue. The realization of IoT depends on being able to gain the insights hidden in the vast and growing seas of data available.
Since current approaches don’t scale to IoT volumes, the future realization of IoT’s promise is dependent on machine learning to find the patterns, correlations and anomalies that have the potential of enabling improvements in almost every facet of our daily lives.
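As a toy illustration of the patient-monitoring scenario above, the sketch below learns a per-patient “normal” heart-rate baseline from a sliding window of recent readings and flags anything far outside it. The class name, window size and threshold are all invented for illustration; a real health service would use far more sophisticated models.

```python
import statistics
from collections import deque

# Toy per-patient monitor: learn a baseline from recent readings and flag
# anything more than `threshold` standard deviations away from it.
class HeartRateMonitor:
    def __init__(self, window=50, threshold=3.0):
        self.readings = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, bpm):
        anomaly = False
        if len(self.readings) >= 10:  # wait for a minimal baseline first
            mean = statistics.mean(self.readings)
            std = statistics.pstdev(self.readings) or 1.0  # avoid divide-by-zero
            anomaly = abs(bpm - mean) / std > self.threshold
        self.readings.append(bpm)
        return anomaly

monitor = HeartRateMonitor()
stream = [72, 70, 74, 71, 73, 69, 72, 75, 70, 71, 73, 160]
alerts = [bpm for bpm in stream if monitor.observe(bpm)]
print(alerts)  # [160] - the sudden spike would trigger a notification
```

The key point is that nothing here is hand-written for a specific patient: the notion of “normal” is learned from the stream itself, which is what lets the same code scale across thousands of patients.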

_

One of the most important outcomes of IoT is the creation of an unprecedented amount of data. Storage, ownership and expiry of the data become critical issues. The data have to be stored and used intelligently for smart monitoring and actuation. It is important to develop artificial intelligence algorithms which could be centralized or distributed based on the need. Novel fusion algorithms need to be developed to make sense of the data collected. State-of-the-art non-linear, temporal machine learning methods based on evolutionary algorithms, genetic algorithms, neural networks, and other artificial intelligence techniques are necessary to achieve automated decision making.
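Of the techniques listed above, a genetic algorithm is the easiest to sketch. The toy below (the problem and all parameters are invented; real deployments are far more elaborate) evolves random 16-bit strings toward the all-ones string using selection, single-point crossover and mutation:

```python
import random

random.seed(0)  # deterministic run for illustration

def fitness(bits):
    return sum(bits)  # "OneMax" toy problem: count of ones, maximised at all-ones

def evolve(pop_size=30, length=16, generations=60, mutation=0.02):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # selection: keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)     # single-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation) for bit in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # typically 16: the all-ones string
```

The same select–recombine–mutate loop underlies GA-based approaches to the fusion and decision-making problems described above; only the fitness function changes.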

______

_______

AI and virtual reality (VR):

Virtual Reality is an artificial, computer-generated simulation or recreation of a real-life environment or situation. An example would be a 3-D store shelf that can adapt to offer items that make the most sense for each customer according to past purchases. Virtual reality will essentially force us to change our perspective (both figuratively and literally) with regard to how we engage with particular industries and activities. It will likely grow into a technological necessity, helping to eliminate inefficiencies across a number of areas, all while improving the way in which information is delivered and consumed. Healthcare and education will undoubtedly see a real game-changing benefit, allowing medical students to undertake risk-free surgical training, or philosophy students to learn remotely in an immersive and meaningful digital environment. And of course, there will be the many obvious entertainment applications. So, for all of this to just work, VR will need the support of something that can bring these complex systems together. That’s where AI comes in. When VR challenges and transforms industries, AI will be there by its side to help smooth the way. It will serve as the foundation upon which the virtual environment will exist. When VR and AI are combined, they will make the adoption of this new tech more straightforward, helping to bind and present contextual data in order to open up channels for healthcare, education, business, and entertainment. And for this to happen, we will need a means of seamlessly sharing this data. The inescapable, incredible, and indispensable Internet will be crucial to the happy marriage of VR and AI. With vastly improved speeds, expanded access, and a shared set of rules, we will see virtual worlds powered by artificial intelligence purr along as data is instantaneously exchanged across the information superhighway. 
And the Internet has the added bonus of being a shining example of a modern, worldwide adoption of technology that has helped transform business and shape humanity.

_

Potential Commercial Applications of AI and VR

Some major players are already moving their resources into position as they prepare to shake up their respective industries. Social media, retail, gaming, and healthcare are all primed for major changes thanks to VR and AI. Take, for instance, Facebook. Over the next decade, they plan on taking the lead with both AI and VR in the social media space. CEO Mark Zuckerberg is of the belief that by utilizing both of these emerging technologies, his company will offer a more compelling social experience beyond the run-of-the-mill status updates and photo sharing. His vision for the future of Facebook will include contextual news updates, and 360-degree videos. The platform will also integrate AI so that it can explain objects in an image or understand speech. And shared virtual environments will help take social media platforms, such as the one Zuckerberg founded in his Harvard dorm room all those years ago (2004!), to another level. Just imagine pulling on a VR headset and playing a few games of pool with an old college friend, all without leaving the comfort of your own home. Meanwhile, the retail industry appears ready to move with the rapidly changing times. Having already embraced smart mirrors in dressing rooms, there seems to be a willingness to take the lead on AI. Big data is, and will continue to be, crucial to the way in which retail organizations develop insights about their customers. And by leveraging machine learning, they will be able to offer more personalized and tailored experiences both online and in store. One brand that has been experimenting with AI is outdoor retailer The North Face. They have been working with a tool called the Fluid Expert Personal Shopper – powered by IBM’s Watson, no less – which exposes its customers to a more intuitive search experience, thanks to its ability to understand natural language. And another AI-powered area that has retailers justifiably excited is that of ‘visual listening’. 
Not too dissimilar to Facebook’s goal of understanding and explaining the context of imagery, visual listening uses algorithms to study posts on photo-sharing platforms such as Instagram and Pinterest to better understand what customers are sharing about their brands. Virtual Reality applications are already among the most exciting apps available today. While VR examples tend to focus on gaming, travel and entertainment, virtual reality has fantastic scope for design and construction, communication and training purposes in a host of sectors. It also has the ability to provide great insight in highly technical fields such as healthcare and oil and gas.

_______

Virtual intelligence (VI):

VI is the emergence of virtual-world technologies within immersive environments. Many virtual worlds have options for persistent avatars that provide information, training, role playing, and social interactions. The immersion of virtual worlds provides a unique platform for VI beyond the traditional paradigm of earlier user interfaces (UIs). With today’s VI bots, virtual intelligence has evolved beyond the constraints of earlier testing into a new level of the machine’s ability to demonstrate intelligence. The immersive features of these environments provide non-verbal elements that affect the realism conveyed by virtually intelligent agents. Virtual intelligence is the intersection of these two technologies:

  • Virtual environments:

3D spaces provide for collaboration, simulations, and role playing interactions for training. Many of these virtual environments are currently being used for government and academic projects, including Second Life, VastPark, Olive, OpenSim, Outerra, Oracle’s Open Wonderland, Duke University’s Open Cobalt, and many others. Some of the commercial virtual worlds are also taking this technology into new directions, including the high definition virtual world Blue Mars.

  • Artificial intelligence:

The virtual environments provide non-verbal and visual cues that can affect not only the believability of the VI, but also the usefulness of it. Because – like many things in technology – it’s not just about “whether or not it works” but also about “how we feel about it working”. Virtual Intelligence draws a new distinction as to how this application of AI is different due to the environment in which it operates.

Examples of use of VI:

  • Cutlass Bomb Disposal Robot: Northrop Grumman developed a virtual training opportunity because of the prohibitive real-world cost and dangers associated with bomb disposal. The virtual replica of this complicated system can be operated without learning advanced code, and carries no risk of damage, trainee safety hazards, or accessibility constraints.
  • MyCyber Twin: NASA is among the organizations that have used the MyCyber Twin AI technologies. They used it for the Phoenix Mars lander in the virtual world Second Life. Their MyCyber Twin used a programmed profile to relay information about the Phoenix mission, informing people about what the lander was doing and its purpose.
  • Second China: The University of Florida developed the “Second China” project as an immersive training experience for learning how to interact with culture and language in a foreign country. Students are immersed in an environment that provides roleplaying challenges coupled with language and cultural sensitivities magnified during country-level diplomatic missions or during times of potential conflict or regional destabilization. The virtual training provides participants with opportunities to access information, take part in guided learning scenarios, communicate, collaborate, and role-play. While China was the country for the prototype, this model can be modified for use with any culture to help better understand social and cultural interactions and see how other people think and what their actions imply.
  • Duke School of Nursing Training Simulation: Extreme Reality developed virtual training to test critical thinking, with a nurse performing trained procedures, identifying critical data to make decisions and carrying out the correct steps for intervention. Bots are programmed to respond to the nurse’s actions as the patient, with the patient’s condition improving if the nurse performs the correct actions.

_______

_______

Artificial intelligence and nanotechnology:

Even though nanotechnology and artificial intelligence are two different fields, many researchers are working on applying artificial intelligence to nanotechnology. Much progress can be made by joining artificial intelligence and nanotechnology together. Artificial intelligence could be boosted by nanotechnology innovations in computing power, while applications of a future nanotechnology general assembler would require some AI and robotics innovations. During the last decade there has been increasing use of artificial intelligence tools in nanotechnology research. These include interpreting scanning probe microscopy, the study of biological nanosystems, classification of material properties at the nanoscale, theoretical approaches and simulations in nanoscience, and the design of nanodevices. Current trends and future perspectives in the development of nanocomputing hardware that can boost artificial-intelligence-based applications are on the horizon. Convergence between artificial intelligence and nanotechnology can shape the path for many technological developments in the field of information sciences that will rely on new computer architectures and data representations, hybrid technologies that use biological entities and nanotechnological devices, bioengineering, neuroscience and a large variety of related disciplines.

__

Nanocomputing and artificial intelligence:

From the early efforts in building nanocomputers, artificial intelligence paradigms were used in the different phases of modeling, designing and building prototypes of nanocomputing devices. Machine learning methods implemented by nanohardware instead of semiconductor-based hardware can also be the basis for a new generation of cheaper and smaller technology that can deliver high-performance computing, including applications such as sensory information processing and control tasks. However, the current development of nanocomputers is mainly at the level of manufacturing and analyzing individual components, e.g. nanowires as connectors or molecules as switches. The largest expectations arise from nanotechnology-enabled quantum computing and memory, which can significantly increase our capacity to solve very complex NP-complete optimization problems. These kinds of problems arise in many different contexts, but especially in those that require what is called computational intelligence in big data applications. In this context, the concept of natural computing generally includes at least three different methods: (1) those that take inspiration from nature for the development of novel problem-solving techniques, (2) those that are based on the use of computers to synthesize natural phenomena and (3) those that employ natural materials working at the nanoscale to compute. This last concept includes techniques such as DNA computing and quantum computing, which are well studied at present.

_

DNA computing:

One of the topics that best represents the convergence of nanotechnology, biology and computer science is DNA computing. This is a discipline that aims at using individual molecules at the nanoscopic level for computational purposes. We could say that a DNA computer is a collection of selected DNA strands whose combinations will result in the solution to a problem. Nanocomputers are defined as machines that store information and make complex calculations at the nanoscale; they are commonly related to DNA since these molecules have the properties required to succeed at both tasks. Thus, DNA computing is a promising development at the interface between computer science and molecular biology. DNA computing is the performing of computations using biological molecules (DNA) rather than traditional silicon chips. A computation may be thought of as the execution of an algorithm, which itself may be defined as a step-by-step list of well-defined instructions that takes some input, processes it, and produces a result. In DNA computing, information is represented using the four-character genetic alphabet (A [adenine], G [guanine], C [cytosine], and T [thymine]), rather than the binary alphabet (1 and 0) used by traditional computers. This is achievable because short DNA molecules of any arbitrary sequence may be synthesized to order. An algorithm’s input is therefore represented (in the simplest case) by DNA molecules with specific sequences, the instructions are carried out by laboratory operations on the molecules (such as sorting them according to length or chopping strands containing a certain subsequence), and the result is defined as some property of the final set of molecules (such as the presence or absence of a specific sequence). DNA computing has emerged as a technology for information processing and a catalyst for knowledge transfer between information processing, nanotechnology and biology. 
As quoted by Ezziane, this area of research has the potential to change our understanding of the theory and practice of computing. However, an interesting twist occurs when the number of variables in the calculation increases. The most common DNA computing strategies are based on enumerating all candidate solutions and using selection processes to eliminate incorrect DNA. This means that the size of the initial data pool grows exponentially with the number of variables. At some point, brute-force methods become unfeasible. This is a case where AI techniques in DNA computing become useful to get a final solution from a smaller initial data pool, avoiding the use of all candidate solutions. Another alternative is using evolutionary and genetic algorithms. The combination of the massive parallelism and high storage density of DNA computing with the search capability of genetic algorithms can break the limit of brute-force methods. DNA computing has also received criticism from different authors who believe that the problems intrinsically associated with this technique would make it impractical. Some of these problems are the growing number of error-prone, time-consuming operations and the exponential growth of DNA volume with problem size. A commonly proposed solution to these problems has been implementation in silico by computer architectures offering massive parallelism.
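The generate-and-filter strategy described above can be mimicked in ordinary software. The toy sketch below (the subset-sum instance and the one-base-per-bit encoding are invented purely for illustration) “synthesizes” every candidate strand and then “selects” the correct ones, just as lab operations would filter out incorrect DNA:

```python
from itertools import product

# Toy subset-sum instance: which subsets of NUMBERS sum to TARGET?
NUMBERS = [3, 5, 8, 13]
TARGET = 16

# "Synthesis": encode every candidate subset as a strand over the DNA
# alphabet - 'A' marks an excluded element, 'T' an included one.
def encode(bits):
    return ''.join('T' if b else 'A' for b in bits)

pool = [encode(bits) for bits in product([0, 1], repeat=len(NUMBERS))]

# "Selection": lab filtering is mimicked by discarding strands whose
# encoded subset misses the target sum.
def subset_sum(strand):
    return sum(x for x, base in zip(NUMBERS, strand) if base == 'T')

solutions = [s for s in pool if subset_sum(s) == TARGET]
print(sorted(solutions))  # ['TAAT', 'TTTA'] -> subsets {3, 13} and {3, 5, 8}
```

Note how the pool already has 2⁴ strands for just four variables; this is exactly the exponential blow-up that makes pure brute-force DNA computing unfeasible as problems grow.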

_

Nanorobotics:

Nanorobotics is the technology of creating machines or robots at or close to the nanoscale. More specifically, nanorobotics refers to the still largely theoretical nanotechnology engineering discipline of designing and building nanorobots. Nanorobots are typically devices ranging in size from 0.1 to 10 micrometres and constructed of nanoscale or molecular components. Nanorobotics is being considered as a way to help ‘find’ and ‘remove’ problems in the body. For example, scientists envisage microscopic robots that could help clean plaque from our arteries and tartar from teeth. Even eyesight could be corrected by way of nanotechnology and nanorobotics: an electronic “rubber band” placed around the eye could adjust it when needed, via a switch on the side of the head, in much the same way spectacles are worn today. This technology is currently being tested and developed. Furthermore, the work being spearheaded by the Center for Biologic Nanotechnology is also of interest; the Center’s work on smart anti-cancer therapeutics is based upon targeting cancer cells whilst sparing healthy cells. Within the medical industry, nanorobots are increasingly being used in minimally invasive procedures. They are being deployed to perform highly delicate, accurate surgery, and are also used in partnership with a surgeon, by way of remote control, to perform surgery.

_____

_____

AI and quantum computing:

_

What is Quantum Computing?

Quantum computing is based on quantum bits or qubits. Unlike traditional computers, in which bits must have a value of either zero or one, a qubit can represent a zero, a one, or both values simultaneously. Representing information in qubits allows the information to be processed in ways that have no equivalent in classical computing, taking advantage of phenomena such as quantum tunnelling and quantum entanglement. As such, quantum computers may theoretically be able to solve certain problems in a few days that would take millions of years on a classical computer.
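To make the qubit idea concrete, here is a tiny classical simulation using NumPy. The state-vector convention |0⟩ = [1, 0] and the Hadamard gate are standard textbook material, not something from this article:

```python
import numpy as np

# A qubit is a 2-amplitude state vector; by convention |0> = [1, 0].
ket0 = np.array([1.0, 0.0])

# The Hadamard gate rotates |0> into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi = H @ ket0

# Measurement probabilities are the squared magnitudes of the amplitudes.
probabilities = np.abs(psi) ** 2
print(probabilities)  # [0.5 0.5] - zero or one with equal chance
```

After the Hadamard gate, the qubit genuinely carries both values at once; only measurement collapses it to a definite 0 or 1.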

_

A Chinese team of physicists has trained a quantum computer to recognise handwritten characters, the first demonstration of “quantum artificial intelligence”.  A team of quantum theorists devised a quantum algorithm that solves a machine learning problem in logarithmic rather than polynomial time. That’s a vast speed-up. However, their work was entirely theoretical. It is this algorithm that researchers have implemented on their quantum computer for the first time. Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. The advantage comes when one of those states represents a 1 and the other a 0, forming a quantum bit or qubit. In that case, a single quantum object – an atomic nucleus, for example – can perform a calculation on two numbers at the same time. Two nuclei can handle 4 numbers, 3 nuclei 8 numbers, and 20 nuclei can perform a calculation using more than a million numbers simultaneously. That’s why even a relatively modest quantum computer could dramatically outperform the most advanced supercomputers today. Quantum physicists have even demonstrated this using small quantum computers crunching a handful of qubits to carry out tasks such as finding the factors of numbers. That’s the kind of calculation that, on a larger scale, could break the codes that currently encrypt top-secret communications.
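The scaling claim above (20 nuclei handling over a million numbers) can be checked directly: describing an n-qubit register classically takes 2ⁿ amplitudes. A small NumPy sketch (illustrative only):

```python
import numpy as np

# An n-qubit register is described by 2**n complex amplitudes.
def uniform_superposition(n):
    dim = 2 ** n
    return np.full(dim, 1 / np.sqrt(dim))  # equal weight on every basis state

for n in (1, 2, 3, 20):
    print(n, "qubits ->", len(uniform_superposition(n)), "amplitudes")
# 20 qubits -> 1048576 amplitudes: the "more than a million numbers" above
```

This exponential growth is also why classical simulation of quantum machines breaks down beyond a few dozen qubits, which is precisely where a real quantum computer would pull ahead.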

_

NASA Quantum Artificial Intelligence Laboratory (QuAIL):

The Quantum Artificial Intelligence Lab (also called the Quantum AI Lab or QuAIL) is a joint initiative of NASA, Universities Space Research Association, and Google (specifically, Google Research) whose goal is to pioneer research on how quantum computing might help with machine learning and other difficult computer science problems. The lab is hosted at NASA’s Ames Research Center. QuAIL is the space agency’s hub for an experiment to assess the potential of quantum computers to perform calculations that are difficult or impossible using conventional supercomputers. NASA’s QuAIL team aims to demonstrate that quantum computing and quantum algorithms may someday dramatically improve the agency’s ability to solve difficult optimization problems for missions in aeronautics, Earth and space sciences, and space exploration.  The hope is that quantum computing will vastly improve a wide range of tasks that can lead to new discoveries and technologies, and which may significantly change the way we solve real-world problems.

_

How quantum effects could improve artificial intelligence:

Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. Over the past few decades, quantum effects have greatly improved many areas of information science, including computing, cryptography, and secure communication. More recently, research has suggested that quantum effects could offer similar advantages for the emerging field of quantum machine learning (a subfield of artificial intelligence), leading to more intelligent machines that learn quickly and efficiently by interacting with their environments.  In a new study published in Physical Review Letters, Vedran Dunjko and coauthors have added to this research, showing that quantum effects can likely offer significant benefits to machine learning. In this new study, the researchers’ main result is that quantum effects can help improve reinforcement learning, which is one of the three main branches of machine learning. They showed that quantum effects have the potential to provide quadratic improvements in learning efficiency, as well as exponential improvements in performance for short periods of time when compared to classical techniques for a wide class of learning problems. While other research groups have previously shown that quantum effects can offer improvements for the other two main branches of machine learning (supervised and unsupervised learning), reinforcement learning has not been as widely investigated from a quantum perspective.  One of the ways that quantum effects may improve machine learning is quantum superposition, which allows a machine to perform many steps simultaneously, improving the speed and efficiency at which it learns. One of the open questions researchers are interested in is whether quantum effects can play an instrumental role in the design of true artificial intelligence. 
As quantum technologies emerge, quantum machine learning will play an instrumental role in our society—including deepening our understanding of climate change, assisting in the development of new medicine and therapies, and also in settings relying on learning through interaction, which is vital in automated cars and smart factories.
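The “quadratic improvements” mentioned above are the same flavour of speedup as Grover’s search algorithm, which finds a marked item in roughly √N steps instead of N. A classical simulation of Grover on a 16-entry search space (illustrative only; the reinforcement-learning results in the study are far more general):

```python
import numpy as np

# Grover's search, the textbook example of a quadratic quantum speedup,
# simulated classically on a small state vector.
n = 4
N = 2 ** n            # 16 database entries, one of them "marked"
marked = 11
state = np.full(N, 1 / np.sqrt(N))   # uniform superposition over all entries

iterations = int(np.pi / 4 * np.sqrt(N))   # ~O(sqrt(N)) vs O(N) classically
for _ in range(iterations):
    state[marked] *= -1                  # oracle: flip the marked amplitude
    state = 2 * state.mean() - state     # diffusion: reflect about the mean

print(np.argmax(state ** 2))  # 11: the marked entry dominates the measurement
```

After only 3 iterations (versus up to 16 classical lookups), nearly all of the probability mass sits on the marked entry; the gap widens as √N versus N for larger search spaces.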

________

________

Applications and adoption of AI:

_

AI adoption:

_

Figure below shows companies adopting various AI technologies:

_

_

Figure below shows that the largest number of AI companies is established in the U.S.

_

_

Figure below shows that IBM and Microsoft have been granted the largest numbers of AI patents:

_

_

Tech Adoption is slow:

While Moore’s Law implies that computing power doubles roughly every two years, the reality is that humans are notoriously slow at adopting new technology. We’ve been trained to think of new technology as cost-prohibitive and buggy. We let tech-savvy pioneers test new things and we wait until the second or third iteration, when the technology is ready, before deciding to adopt it.

_

Obstacles to AI adoption:

There are certainly many business benefits to be gained from AI technologies today, but according to a survey conducted by Forrester, there are also obstacles to AI adoption, as expressed by companies with no plans to invest in AI:

  • There is no defined business case – 42%
  • Not clear what AI can be used for – 39%
  • Don’t have the required skills – 33%
  • Need first to invest in modernizing the data management platform – 29%
  • Don’t have the budget – 23%
  • Not certain what is needed for implementing an AI system – 19%
  • AI systems are not proven – 14%
  • Do not have the right processes or governance – 13%
  • AI is a lot of hype with little substance – 11%
  • Don’t own or have access to the required data – 8%
  • Not sure what AI means – 3%

Once enterprises overcome these obstacles, Forrester concludes, they stand to gain from AI driving accelerated transformation in customer-facing applications and developing an interconnected web of enterprise intelligence.

_

The following statistics will give you an idea of growth!

– In 2014, more than $300 million was invested in AI startups, showing an increase of 300%, compared to the previous year (Bloomberg)

– By 2018, 6 billion connected devices will proactively ask for support. (Gartner)

– By the end of 2018, “customer digital assistants” will recognize customers by face and voice across channels and partners (Gartner)

– Artificial intelligence will replace 16% of American jobs by the end of the decade (Forrester)

– 15% of iPhone owners use Siri’s voice recognition capabilities. (BGR)

Unlike general perception, artificial intelligence is not limited to just IT or technology industry; instead, it is being extensively used in other areas such as medical, business, education, law, and manufacturing.

_

Britain banks on robots, artificial intelligence to boost growth:

Britain is betting that the rise of the machines will boost the economy as the country exits the European Union. As part of its strategy to champion specific industries, the UK government said that it would invest 17.3 million pounds ($21.6 million) in university research on robotics and artificial intelligence. The government cited an estimate from consultancy Accenture that AI could add 654 billion pounds to the UK economy by 2035.

_____

_____

AI applications:

_

Artificial intelligence has been used in a wide range of fields including medical diagnosis, stock trading, robot control, law, remote sensing, scientific discovery and toys. However, due to the AI effect, many AI applications are not perceived as AI. A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labelled AI anymore. Many thousands of AI applications are deeply embedded in the infrastructure of every industry.  In the late 90s and early 21st century, AI technology became widely used as elements of larger systems, but the field is rarely credited for these successes. Applications of this concept are vast. High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, prediction of judicial decisions, automated investment banking and targeting online advertisements. Although we don’t know the exact future, it is quite evident that interacting with AI will soon become an everyday activity. These interactions will clearly help our society evolve, particularly in regards to automated transportation, cyborgs, handling dangerous duties, solving climate change, friendships and improving the care of our elders. Beyond these six impacts, there are even more ways that AI technology can influence our future, and this very fact has professionals across multiple industries extremely excited for the ever-burgeoning future of artificial intelligence.

_

List of applications:

Typical problems to which AI methods are applied

  • Optical character recognition
  • Handwriting recognition
  • Speech recognition
  • Face recognition
  • Artificial creativity
  • Computer vision, Virtual reality and Image processing
  • Diagnosis (artificial intelligence)
  • Game theory and Strategic planning
  • Game artificial intelligence and Computer game bot
  • Natural language processing, Translation and Chatterbots
  • Nonlinear control and Robotics

_

Other fields in which AI methods are implemented

  • Artificial life
  • Automated reasoning
  • Automation
  • Biologically inspired computing
  • Concept mining
  • Data mining
  • Knowledge representation
  • Semantic Web
  • E-mail spam filtering
  • Robotics

–Behavior-based robotics

–Cognitive

–Cybernetics

–Developmental robotics (Epigenetic)

–Evolutionary robotics

  • Hybrid intelligent system
  • Intelligent agent
  • Intelligent control
  • Litigation

______

______

Artificial Intelligence is gaining popularity at a rapid pace, influencing the way we live, interact and improve customer experience. There is much more to come in the years ahead, with further improvements, development, and governance. I am listing below some intelligent AI solutions that we are using today, marking machine learning as a present reality – not the future.

_

Digital personal assistants:

Siri is one of the most popular personal assistants, offered by Apple on the iPhone and iPad. The friendly voice-activated assistant interacts with the user daily: it helps us find information, get directions, send messages, make voice calls, open applications and add events to the calendar. Siri uses machine-learning technology in order to get smarter and more capable of understanding natural-language questions and requests. It is surely one of the most iconic examples of the machine learning abilities of gadgets. Siri, Google Now, and Cortana are all intelligent digital personal assistants on various platforms (iOS, Android, and Windows Mobile). In short, they help find useful information when you ask for it using your voice. Microsoft says that Cortana “continually learns about its user” and that it will eventually develop the ability to anticipate users’ needs. Virtual personal assistants process a huge amount of data from a variety of sources to learn about users and be more effective in helping them organize and track their information.
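Real assistants use far richer language models, but the core step of mapping a spoken request to an action can be sketched with a toy keyword-overlap intent matcher. The intent names and keyword sets here are invented for illustration:

```python
# Toy intent matcher: score each intent by keyword overlap with the request.
INTENTS = {
    "set_reminder": {"remind", "reminder", "calendar", "event"},
    "get_directions": {"directions", "navigate", "route", "drive"},
    "send_message": {"send", "message", "text", "tell"},
}

def classify(utterance):
    words = set(utterance.lower().split())
    scores = {intent: len(words & keywords) for intent, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"  # no overlap -> give up

print(classify("get me directions to the airport"))  # get_directions
print(classify("send a message to mom"))             # send_message
```

A production assistant replaces the keyword sets with statistical models trained on millions of utterances, which is where the machine learning comes in.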

_

Nest (Google):

Nest was one of the most famous and successful artificial intelligence startups, and it was acquired by Google in 2014 for $3.2 billion. The Nest Learning Thermostat uses behavioral algorithms to save energy based on your behavior and schedule. It employs a very intelligent machine learning process that learns the temperature you like and programs itself in about a week. Moreover, it will automatically turn off to save energy if nobody is at home. In fact, it combines artificial intelligence with Bluetooth Low Energy, since some components of the solution use BLE services.
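Nest’s actual algorithms are proprietary, but the “learns the temperature you like” idea can be sketched as a toy that averages the user’s manual adjustments for each hour of the day. All names and numbers here are hypothetical:

```python
from collections import defaultdict

# Toy schedule learner: remember the temperatures the user sets at each
# hour, then predict the setpoint from their average.
class LearningThermostat:
    def __init__(self):
        self.history = defaultdict(list)

    def record_adjustment(self, hour, temperature):
        self.history[hour].append(temperature)  # the user turned the dial

    def setpoint(self, hour, default=20.0):
        temps = self.history.get(hour)
        return sum(temps) / len(temps) if temps else default

t = LearningThermostat()
for temp in (21.0, 22.0, 23.0):
    t.record_adjustment(7, temp)   # mornings: the user keeps nudging it up
print(t.setpoint(7))   # 22.0 -> learned morning preference
print(t.setpoint(3))   # 20.0 -> no data yet, fall back to the default
```

After roughly a week of such observations, every hour of the schedule has data, which mirrors the “programs itself in about a week” behaviour described above.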

_

Flying Drones:

Flying drones are already shipping products to customers’ homes, albeit in test mode. They rely on a powerful machine-learning system that can translate the environment into a 3D model through sensors and video cameras. Ceiling-mounted sensors and cameras can track a drone’s position within a room. A trajectory-generation algorithm guides the drone on how and where to move. Over a Wi-Fi link, we can control drones and use them for specific purposes – product delivery, video-making, or news reporting.
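
The trajectory-generation step can be illustrated in its simplest form: given waypoints from a planner, emit evenly spaced intermediate positions for the flight controller to track. Real drones use dynamics-aware planners with velocity and acceleration constraints; this sketch is straight-line interpolation only, with made-up coordinates.

```python
# Sketch of trajectory generation: densify a waypoint list into a sequence
# of intermediate 3D positions. Real planners respect vehicle dynamics.
def interpolate(p0, p1, steps):
    return [tuple(a + (b - a) * i / steps for a, b in zip(p0, p1))
            for i in range(1, steps + 1)]

def trajectory(waypoints, steps_per_leg=4):
    path = [waypoints[0]]
    for p0, p1 in zip(waypoints, waypoints[1:]):
        path.extend(interpolate(p0, p1, steps_per_leg))
    return path

route = trajectory([(0.0, 0.0, 1.0), (2.0, 0.0, 1.0), (2.0, 2.0, 1.0)])
print(route[0], route[-1])  # start and goal positions
```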

_

iRobot:

The iRobot Roomba is a machine that can do the vacuum cleaning by itself. The Roomba can work out how many rooms it needs to clean (machine learning), determine which areas it can clean (automated reasoning), and detect dirt to make sure everything gets picked up (computer vision). It also has cliff sensing, an anti-tangle function, and automatic recharging. Together, these functions let the Roomba clean by itself and recharge when it needs to; all we have to do is set the cleaning time and push the power button. Intelligent robots will not only clean your living room and do the dishes, but may also tackle jobs like assembling furniture or caring for kids and pets. Facilities with large lawns, like golf courses or football stadiums, rely on similar technology to mow their lawns, eliminating human labor. The same technology is now used in assembly-line robots that perform boring or repetitive tasks. Baxter is a good example of this technology; it has already been deployed in several factories and can safely work alongside humans. AI is also present in mining, fire-fighting, mine disposal and the handling of radioactive materials, tasks that are too dangerous for humans.

_

Echo:

Echo, launched by Amazon, keeps getting smarter and adding new features. It is a revolutionary product that, using the Alexa Voice Service, can search the web for information, schedule appointments, shop, control lights, switches and thermostats, answer questions, read audiobooks, report traffic and weather, give information on local businesses, provide sports scores and schedules, and more.

_

Cleverbot:

Cleverbot is a chatterbot that’s modelled after human behavior and able to hold a conversation. It does so by remembering words from conversations. The responses are not programmed. Instead, it finds keywords or phrases that match the input and searches through its saved conversations to find how it should respond.
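
That retrieval idea can be sketched in a few lines: store past (input, response) pairs and answer new input with the response whose stored input shares the most words. Cleverbot's real engine works over a vastly larger conversation log with much richer matching; the saved pairs below are invented.

```python
# Retrieval-style chatterbot sketch: respond with the saved response whose
# stored input shares the most words with the new input.
SAVED = [
    ("hello there", "hi, how are you?"),
    ("what is your name", "people call me a chatterbot"),
    ("do you like music", "i enjoy all kinds of songs"),
]

def respond(user_input: str) -> str:
    words = set(user_input.lower().split())
    best = max(SAVED, key=lambda pair: len(words & set(pair[0].split())))
    return best[1]

print(respond("hello friend"))  # hi, how are you?
```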

_

MySong:

MySong is an application that helps people with no songwriting experience, or who cannot play any instrument, create original music by themselves. It automatically chooses chords to accompany a vocal melody that you have just input through a microphone. MySong can also help songwriters record new ideas and melodies wherever and whenever they occur.
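
The core matching step can be illustrated crudely: score each candidate chord by how many of the sung melody notes fall inside its triad, and pick the best. MySong actually uses a trained probabilistic model over chord transitions; the chord table below is a hand-picked stand-in.

```python
# Crude melody-to-chord matcher: pick the chord whose pitch classes cover
# the most melody notes. A stand-in for MySong's statistical model.
CHORDS = {"C": {0, 4, 7}, "F": {5, 9, 0}, "G": {7, 11, 2}, "Am": {9, 0, 4}}

def best_chord(melody_notes):
    """melody_notes are MIDI numbers; returns the best-covering chord name."""
    pcs = [n % 12 for n in melody_notes]
    return max(CHORDS, key=lambda c: sum(p in CHORDS[c] for p in pcs))

print(best_chord([60, 64, 67]))  # C-E-G -> C
```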

_

Real-time translation without humans:

Google has spent a lot of time developing and improving its translation services using machine learning. The fruits of that labor are available as the Google Translate API, which lets you build dynamic translation services for a fee. Instead of building a database of words that mean other words in other languages, Google Translate uses machine learning to understand what a word means and can even parse idioms. It has done this by learning how words relate to one another.

_

Facial recognition everywhere:

Facebook, Nest (through its Dropcam acquisition) and Microsoft have all built tools that recognize people’s faces. While machine learning alone can’t identify the person in a photo, it can say that a person in one photo is the same as a person in another photo or video. When you link that with a limited field of people’s names, you get systems that can identify who people are in a home setting or in photographs on a social network. These facial recognition efforts are a subset of the computer vision research that also powers self-driving cars and Microsoft’s “How-Old.net” age-guessing demo.
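
The "same person or not" step typically works by mapping each face to a numeric embedding vector and comparing distances against a threshold. The sketch below assumes such embeddings already exist; the vectors, names and threshold are entirely made up.

```python
# Sketch of face matching by embedding distance: two faces are "the same
# person" if their embedding vectors are close enough. Vectors are invented.
import math

def same_person(emb_a, emb_b, threshold=0.6):
    return math.dist(emb_a, emb_b) < threshold

known = {"alice": (0.1, 0.9, 0.3), "bob": (0.8, 0.2, 0.5)}

def identify(embedding):
    """Name the closest enrolled person, or None if nobody is close enough."""
    name = min(known, key=lambda n: math.dist(known[n], embedding))
    return name if same_person(known[name], embedding) else None

print(identify((0.12, 0.88, 0.31)))  # alice
```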

_

Image recognition:

A very good and meaningful example of an Artificial Intelligence solution is something Facebook developed through its FAIR (Facebook Artificial Intelligence Research) unit. This solution is capable of translating images to text and vice versa. Now, such a solution could still be developed in a traditional fashion if there were a fixed and limited set of images – for example, if there were always the same image representing a car, the same image representing a house, and so on. In reality, however, there are an unlimited number of images of such things, and it is simply impossible to program for every possibility. Artificial Intelligence deals with this by being exposed to a large set of images representing a car, another large set representing a house, and so on. The AI solution looks for and learns to recognize patterns in these images, and once it is proficient it can recognize a car in a picture it has never seen before. The more images it sees, the better it gets. Replace images with, for example, invoices, free-format emails, legal documents or research reports, and a world of new opportunities opens up.

_

Artificial Intelligence to kill social bias on FB:

Facebook’s new ad-approval algorithms wade into greener territory as the company attempts to use machine learning to address, or at least not contribute to, social discrimination. Facebook’s change of strategy, intended to make the platform more inclusive, follows the discovery that some of its ads were specifically excluding certain racial groups. For Facebook, the challenge is maintaining its advertising advantage while preventing discrimination, particularly where it is illegal. That’s where machine learning comes in: if a human “teaches” the computer by initially labelling each ad as discriminatory or non-discriminatory, the computer can learn to go from the text of an ad to a prediction of whether it is discriminatory.
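
The teach-by-labelled-examples idea can be sketched with a tiny Naive-Bayes-style word scorer: count word frequencies per label in hand-labelled ads, then score new text against each label. Facebook's production models are vastly more sophisticated, and the training ads below are invented.

```python
# Minimal labelled-text classifier sketch: score new ad text by how often
# its words appeared under each training label. Illustrative only.
from collections import Counter

def train(labeled_ads):
    counts = {"discriminatory": Counter(), "ok": Counter()}
    for text, label in labeled_ads:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    def score(label):
        total = sum(counts[label].values()) or 1
        return sum(counts[label][w] / total for w in text.lower().split())
    return max(("discriminatory", "ok"), key=score)

model = train([
    ("housing available no families excluded groups", "discriminatory"),
    ("apartment open to everyone welcome all", "ok"),
])
print(predict(model, "all welcome to apply"))  # ok
```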

_

Art:

We tend to think that artificial intelligence can’t make art – that poetry, literature, music, and art are for people. However, we may be wrong. For example, consider the works of AARON, a computer program that paints big colorful pictures. AARON is written in LISP and can represent simple objects and shapes. Another awesome example of using artificial intelligence creatively is composing music. Creativity has become computational and algorithmic. As scientists claim, computers can create music in different genres and moods. Using certain instructions, the machine can create orchestral compositions of the needed length and with the needed lexicon. Professor David Cope has been experimenting with musical intelligence since 1981. Since that time, he has produced a body of works that made people think they had been composed by humans.

_

Wildlife preservation:

Many researchers want to know how many animals are out there and where they live, but scientists lack the capacity to gather such data manually, and there are not enough GPS collars or satellite tags in the world. Berger-Wolf and her colleagues developed Wildbook.org, a site that houses an AI system and algorithms. The system inspects photos uploaded online by experts and the public. It can recognize each animal’s unique markings, track its habitat range using the GPS coordinates attached to each photo, estimate the animal’s age and reveal whether it is male or female. After a massive 2015 photo campaign, Wildbook determined that lions were killing too many babies of the endangered Grévy’s zebra in Kenya, prompting local officials to change the lion management program. “The ability to use images with photo identification is democratizing access to conservation in science,” Berger-Wolf said. “We now can use photographs to track and count animals.”

_

Computer science:

AI researchers have created many tools to solve the most difficult problems in computer science. Many of their inventions have been adopted by mainstream computer science and are no longer considered a part of AI. According to Russell & Norvig, all of the following were originally developed in AI laboratories: time sharing, interactive interpreters, graphical user interfaces and the computer mouse, rapid development environments, the linked list data structure, automatic storage management, symbolic programming, functional programming, dynamic programming and object-oriented programming.

_

Computer Vision:

The world is composed of three-dimensional objects, but the inputs to the human eye and computers’ TV cameras are two-dimensional. Some useful programs can work solely in two dimensions, but full computer vision requires partial three-dimensional information that is more than just a set of two-dimensional views. One of the most vibrant areas of research, computer vision involves building algorithms that help machines automatically recognize objects and environments, and it is one of AI’s foremost applications. Automated intelligent visual-recognition software can be used in industrial as well as information-technology sectors. Google recently developed an algorithm that could correctly recognize the faces of cats among the billions of images strewn across the Internet.

_

Online and telephone customer service:

Artificial intelligence is implemented in automated online assistants that appear as avatars on web pages. It can help enterprises reduce their operating and training costs. A major underlying technology in such systems is natural language processing. Pypestream uses automated customer service in its mobile application, designed to streamline communication with customers. Currently, major companies are investing in AI to handle difficult customers in the future. Google’s most recent development analyzes language and converts speech into text. The platform can identify angry customers through their language and respond appropriately. Companies have been working on different aspects of customer service to improve this side of their business. Digital Genius, an AI start-up, searches its database of information (past conversations and frequently asked questions) and provides prompts that help agents resolve queries more efficiently. IPSoft is creating technology with emotional intelligence that adapts to the customer’s interaction: the response is matched to the customer’s tone, with the objective of showing empathy. Another element IPSoft is developing is the ability to adapt to different tones and languages. Inbenta is focused on natural language understanding – that is, on grasping the meaning behind what someone is asking rather than just the words used, through context and natural language processing. One customer-service capability Inbenta has already achieved is the ability to respond to email queries in bulk.

_

Transportation:

Many companies have been progressing quickly in this field with AI. Fuzzy logic controllers have been developed for automatic gearboxes in automobiles. For example, the 2006 Audi TT, VW Touareg and VW Caravelle feature the DSG transmission, which utilizes fuzzy logic, and a number of Škoda variants (such as the Škoda Fabia) also include a fuzzy-logic-based controller. AI in transportation is expected to provide safe, efficient, and reliable transportation while minimizing the impact on the environment and communities. The major challenge in developing this AI is the fact that transportation systems are inherently complex, involving a very large number of components and different parties, each with different and often conflicting objectives. Another great example of artificial intelligence in everyday life is the self-driving car. Advancements in AI have contributed to the growth of the automotive industry through the creation and evolution of self-driving vehicles. As of 2016, there are over 30 companies using AI in the creation of driverless cars; a few of those involved are Tesla, Google, and Apple. Many components contribute to the functioning of self-driving cars. These vehicles incorporate systems such as braking, lane changing, collision prevention, navigation and mapping. Together, these systems, along with high-performance computers, are integrated into one complex vehicle.
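
The fuzzy logic mentioned above works by "fuzzifying" crisp inputs (speed, throttle) into degrees of membership in overlapping categories, then blending rule outputs. The toy controller below illustrates this; the membership shapes, rules and gear numbers are invented and bear no relation to any real gearbox.

```python
# Toy fuzzy-logic gear controller: fuzzify speed and throttle, then blend
# rule outputs (gear numbers) weighted by rule strengths. Illustrative only.
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def suggest_gear(speed_kmh, throttle_pct):
    low = tri(speed_kmh, -1, 0, 40)
    mid = tri(speed_kmh, 20, 60, 100)
    high = tri(speed_kmh, 80, 130, 181)
    eager = throttle_pct / 100.0           # hard throttle -> hold a lower gear
    rules = [(low, 1), (mid, 3), (high, 5), (eager * mid, 2)]
    total = sum(w for w, _ in rules) or 1.0
    return round(sum(w * g for w, g in rules) / total)

print(suggest_gear(30, 10), suggest_gear(120, 10))  # 2 5
```

The point of the fuzzy approach is the smooth blending: small changes in speed or throttle shift the weighted average gradually instead of snapping between hard-coded thresholds.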

_

Self-Repairing Hardware:

Researchers at Caltech have made progress on an integrated circuit equipped with sensors and actuators that allow it to heal itself if it suffers damage. The sensors read temperature, current, voltage, and power, and the chip is given a goal state, such as maximum output power. The chip then modifies itself with its actuators and learns how close it is to its goal from its sensor readings. Previous researchers (e.g. D. Mange et al.) have used evolutionary algorithms in the self-repair of programmable logic circuits. The field of embryonics (embryonic electronics) may also lead to hardware that can self-replicate as well as self-repair.
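
The sense-actuate-learn loop described above is, at its core, an optimization: nudge an actuator setting and keep the change if the sensed output moves toward the goal. The sketch below uses a greedy hill climb against a stand-in sensor function; the "chip" and its response curve are entirely hypothetical.

```python
# Sketch of a self-healing control loop: greedily adjust a setting toward
# maximum sensed output power. The sense() response curve is made up.
def sense(setting):
    # Hypothetical chip response: output power peaks at setting 7.
    return 100 - (setting - 7) ** 2

def self_heal(setting, steps=20):
    """Greedy hill climb: try neighbors, keep the best-sensing setting."""
    for _ in range(steps):
        candidates = (setting - 1, setting, setting + 1)
        setting = max(candidates, key=sense)
    return setting

print(self_heal(0))  # converges to 7
```

If damage shifts the response curve, the same loop re-converges to the new optimum, which is the essence of the self-repair behavior.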

_

Toys and video games:

Companies like Mattel have been creating an assortment of AI-enabled toys for kids as young as age three. Using proprietary AI engines and speech recognition tools, these toys are able to understand conversations, give intelligent responses and learn quickly. AI has also been applied to video games, for example in video-game bots, which are designed to stand in as opponents when human players aren’t available or desired. Video-game bots are among the most accessible everyday uses of artificial intelligence. The AI components used in video games are often slimmed-down versions of a true AI implementation, as the scope of a video game is limited. The most innovative use of AI is found on personal computers, whose memory capabilities can be expanded beyond the capacity of modern gaming consoles. Typical AI components used in video games include path finding, adaptiveness (learning), perception, and planning (decision making). Present-day video games offer a variety of “worlds” in which AI concepts can be tested, such as static or dynamic environments, deterministic or non-deterministic transitions, and fully or partially known game worlds. The real-time performance constraints of video-game processing must also be considered, which is another reason video games may choose to implement a “simple” AI.
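
Of the components listed above, path finding is the easiest to show concretely. A small sketch of the kind of routine a game bot might run is breadth-first search on a tile grid, which finds a shortest route around walls; the grid below is invented.

```python
# Breadth-first path finding on a tile grid (0 = free, 1 = wall), the kind
# of simple AI component a game bot might use to chase or flee the player.
from collections import deque

def find_path(grid, start, goal):
    """Shortest path from start to goal as a list of cells, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(find_path(grid, (0, 0), (0, 2)))  # routes around the wall column
```

Production games typically use A* (BFS plus a distance heuristic) for speed, which matters given the real-time constraints mentioned above.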

_

Aviation:

The Air Operations Division (AOD) uses AI for rule-based expert systems. The AOD uses artificial intelligence for surrogate operators in combat and training simulators, mission management aids, support systems for tactical decision making, and post-processing of simulator data into symbolic summaries. The use of artificial intelligence in simulators is proving very useful for the AOD. Airplane simulators use artificial intelligence to process the data taken from simulated flights. Beyond simulated flying, there is also simulated aircraft warfare. The computers are able to come up with the best success scenarios in these situations, and can create strategies based on the placement, size, speed and strength of the forces and counter-forces. Pilots may be given assistance in the air during combat by computers. The artificial-intelligence programs can sort the information and provide the pilot with the best possible manoeuvres, as well as ruling out manoeuvres that would be impossible for a human being to perform. Multiple aircraft are needed to get good approximations for some calculations, so computer-simulated pilots are used to gather data. These computer-simulated pilots are also used to train future air traffic controllers.

_

Airlines use expert systems in planes to monitor atmospheric conditions and system status. Once a course is set for the destination, the plane can be put on autopilot. The AOD also uses artificial intelligence in speech recognition software. Air traffic controllers give directions to the artificial pilots, and the AOD wants the pilots to respond to the ATCs with simple responses. The programs that incorporate the speech software must be trained, which means they use neural networks. The program used, the Verbex 7000, is still a very early program with plenty of room for improvement. The improvements are imperative because ATCs use very specific dialog and the software needs to be able to communicate correctly and promptly every time. The Artificial Intelligence supported Design of Aircraft, or AIDA, is used to help designers in the process of creating conceptual designs of aircraft. This program allows designers to focus more on the design itself and less on the design process; it also allows the user to focus less on the software tools. AIDA uses rule-based systems to compute its data. Although simple, the program is proving effective. In 2003, NASA’s Dryden Flight Research Center, together with many other companies, created software that could enable a damaged aircraft to continue flying until a safe landing zone is reached. The software compensates for damaged components by relying on the undamaged ones. The neural network used in the software proved effective and marked a triumph for artificial intelligence. The Integrated Vehicle Health Management system, also used by NASA, must process and interpret data taken from the various sensors on board an aircraft. The system needs to be able to determine the structural integrity of the aircraft and to implement protocols in case of any damage taken by the vehicle.

_

Space Exploration:

Artificial intelligence and robots will play a major role in space travel in the not-so-distant future. NASA already depends on unmanned shuttles, rovers and probes to explore distant worlds that would take years for humans to reach. Autonomous rovers have recently given researchers a treasure trove of data and photographs collected from the Martian surface, where inhospitable conditions make human exploration impossible. These smart vehicles sense obstacles, like craters, and find safe paths of travel around them. AI technology will also help scientists react more quickly to emergencies during manned flights by allowing space-borne astronauts to spot and prevent problems before they happen. All the rovers that landed on Mars had an on-board operating system that could control, plan, and strategize their movements, as well as deploy on-board equipment, without help or intervention from Earth. Robotic pilots carry out the complex manoeuvring of unmanned spacecraft sent into space.

_

Better Weather Predictions:

Predicting the weather accurately can be tricky, especially when you have to go through large volumes of data, but thanks to artificial intelligence software currently being developed that may soon change. The software will be able to sift through all the available data, get a clearer and better picture of approaching weather phenomena and issue the corresponding early warnings, thus saving lives. The artificial intelligence software will also help farmers and prevent forest fires. NASA is also developing a program that can help aircraft dodge potential storms and danger spots, increasing air safety.

_

Search and rescue:

Victims of floods, earthquakes or other disasters can be stranded anywhere, but new AI technology is helping first responders locate them before it’s too late. Until recently, rescuers would try to find victims by looking at aerial footage of a disaster area. But sifting through photos and video from drones is time-intensive, and it runs the risk of the victim dying before help arrives, said Robin Murphy, a professor of computer science and engineering at Texas A&M University. AI permits computer programmers to write basic algorithms that can examine extensive footage and find missing people in less than 2 hours, Murphy said. The AI can even find piles of debris in flooded areas that may have trapped victims, she added. In addition, AI algorithms can sift through social media sites, such as Twitter, to learn about missing people and disasters, Murphy said.

_

Smart-building technologies to improve workplace safety:

Smart buildings can be fitted with sensors that track how many workers are in the building at any one time and automatically adjust settings such as lighting brightness and temperature. These sensors can also be used during emergencies: if a fire breaks out, the information they pick up can help firefighters locate people inside and ensure their safe evacuation.

_

Saving the Planet:

Climate change and its root, pollution, are on everyone’s mind these days, and artificial intelligence will be in the forefront of this battle. Robots and other devices are being developed to clean up the environment and reduce the effects of air and water pollution. Sophisticated software programs will allow robots to distinguish between biological organisms and pollutants, like oil or hazardous waste, while tiny microbes will consume waste products and leave good biological matter intact, minimizing damage to the ecosystem. Smart software is also fighting air pollution directly from fuel-burning factories. Carbon dioxide, chemical pollutants and other gases are identified and captured before they enter the smokestack and end up in our lungs.

_

News Generation:

Did you know that artificial intelligence programs can write news stories? According to Wired, the AP, Fox, and Yahoo! all use AI to write simple stories like financial summaries, sports recaps, and fantasy sports reports. AI isn’t writing in-depth investigative articles, but it has no problem with very simple articles that don’t require a lot of synthesis. The company Narrative Science makes computer-generated news and reports commercially available, including English-language summaries of team sporting events based on statistical data from the game; it also creates financial reports and real estate analyses. Similarly, the company Automated Insights generates personalized recaps and previews for Yahoo Sports Fantasy Football. Echobox is a software company that helps publishers increase traffic by ‘intelligently’ posting articles on social media platforms such as Facebook and Twitter. By analysing large amounts of data, it learns how specific audiences respond to different articles at different times of the day. It then chooses the best stories to post and the best times to post them, using both historical and real-time data to understand what has worked well in the past as well as what is currently trending on the web. Another company, called Yseop, uses artificial intelligence to turn structured data into intelligent comments and recommendations in natural language. Yseop is able to write financial reports, executive summaries, personalized sales or marketing documents and more at a speed of thousands of pages per second and in multiple languages including English, Spanish, French and German. Boomtrain is another example of AI designed to learn how best to engage each individual reader with the exact articles – sent through the right channel at the right time – that will be most relevant to the reader. It’s like hiring a personal editor for each individual reader to curate the perfect reading experience.
There is also the possibility that AI will write creative work in the future. In 2016, a Japanese AI co-wrote a short story that nearly won a literary prize. Such examples of artificial intelligence in journalism show how quickly AI is developing.
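
At their simplest, the data-to-text systems described above fill natural-language templates from structured game statistics. The sketch below generates a one-line sports recap that way; the team names, field names and wording rules are invented, and commercial systems use far richer language models.

```python
# Template-based recap generator: turn structured game stats into a sentence,
# a much-simplified version of commercial data-to-text systems.
def recap(game):
    margin = abs(game["home_score"] - game["away_score"])
    if game["home_score"] > game["away_score"]:
        winner, loser = game["home"], game["away"]
    else:
        winner, loser = game["away"], game["home"]
    style = "edged" if margin <= 3 else "beat"   # word choice from the data
    return (f'{winner} {style} {loser} '
            f'{max(game["home_score"], game["away_score"])}-'
            f'{min(game["home_score"], game["away_score"])}.')

print(recap({"home": "Lions", "away": "Bears",
             "home_score": 24, "away_score": 21}))
```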

_

IBM’s Watson:

IBM has developed an advanced computing system called Watson, named after the company’s legendary founder Thomas J. Watson, that has natural language processing, hypothesis generation, and dynamic learning abilities. Besides defeating two former Jeopardy! champions, it is now being used to create a cluster of cognitive apps that can solve real-world problems, through an application programming interface (API) provided by IBM. Watson is powered by 2,880 processor cores (3.5 GHz) with 16 terabytes of RAM; in terms of performance, this supercomputer stands at 80 teraflops. Watson stores more than 200 million pages of data, including the full text of Wikipedia. The company’s goal is to unleash Watson’s ability to analyze unstructured data on a global level, providing intelligent decision-making ability in all types of fields, including medical diagnosis. Currently, IBM offers Watson Solutions as a service for customer engagement, healthcare, finance, and accelerated data research.

_

Predicting the future:

Nautilus is a supercomputer that can predict the future based on news articles. It is a self-learning supercomputer that was fed information from millions of articles dating back to the 1940s. Retrospectively, it was able to pinpoint Osama bin Laden’s location to within 200 km. Now, scientists are trying to see whether it can predict actual future events, not just ones that have already occurred.

_________

_________

Heavy industry:

Robots have become common in many industries and are often given jobs that are considered dangerous to humans. Robots have proven effective in jobs that are very repetitive, where lapses in concentration may lead to mistakes or accidents, and in other jobs that humans may find degrading. In 2014, China, Japan, the United States, the Republic of Korea and Germany together accounted for 70% of the total sales volume of robots. In the automotive industry, a sector with a particularly high degree of automation, Japan had the highest density of industrial robots in the world: 1,414 per 10,000 employees. Combined with expert systems, robotics and cybernetics have taken a leap: entire processes are now totally automated, controlled, and maintained by computer systems in car manufacturing, machine-tool production, computer-chip production, and almost every high-tech process. Robots also carry out dangerous tasks like handling hazardous radioactive materials.

______

Artificial intelligence in the electricity sector:

Now that energy storage technologies are coming close to commercial reality, decades of work should result in artificial intelligence (AI) emerging as the third key technology in the transformation of the electricity sector. Combined with scalable generation and storage, it will blur the distinction between suppliers and consumers, with excess local generation being fed into the grid so that entities from individual homeowners to businesses and municipalities become “producer-consumers” or “prosumers”. Demand-management systems will also have a role to play. The introduction of multiple players with widely varying consumption and production patterns connecting to a single nationwide grid is impossible until we have software able to predict and manage energy flows so that supply and demand balance at all times. There are also obvious drivers for energy storage at small scale, particularly in remote locations. Apart from the potential for autonomy and the ability to smooth draw from the grid (avoiding, or at least reducing, demand-based charges), local storage could relieve grid congestion and add flexibility to power generation requirements, potentially improving network stability. Only AI will be able to deliver the active management necessary. Balancing grids, negotiating joint actions to enable self-healing of networks in response to a fault or a hack, demand management and assessing the reliability of the production and consumption figures supplied by ‘prosumers’, to name a few areas, will all require real-time forecasting, monitoring and decision-making. In 2014 the Swiss-German firm Alpiq launched an intelligent system called GridSense, which aims to imperceptibly steer the domestic or commercial energy consumption of the equipment into which it is integrated (such as boilers, heat pumps and charging stations for electric vehicles) based on measurements such as grid load, weather forecasts and electricity charges.
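
The forecasting-and-balancing role described above can be sketched at its simplest: predict the next hour's demand from a moving average of recent readings, then decide how much stored energy to dispatch against the scheduled supply. Real grid AI uses far richer models (weather, prices, thousands of prosumers); the numbers and function names here are illustrative only.

```python
# Sketch of demand forecasting and storage dispatch for grid balancing.
def forecast_demand(history, window=3):
    """Moving-average forecast of the next demand reading."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def dispatch(history, scheduled_supply):
    """Positive -> draw from storage; negative -> charge storage."""
    return forecast_demand(history) - scheduled_supply

demand_mw = [90.0, 100.0, 110.0, 120.0]
print(dispatch(demand_mw, scheduled_supply=105.0))  # 5.0 MW from storage
```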

Microgrid:

Many incentives point towards the deployment of microgrids as the next stage: peer-to-peer energy networks that distribute electricity in a small geographic area, usually supplementing, staying connected to and continuing to rely on the central power grid, but also accessing generating units located at or near their point of use that rely on locally available fuels, especially locally available renewable energy resources. Microgrids can improve reliability (fewer wires), environmental sustainability, security (no single point of failure), and local economic development and jobs. Possible business models include third-party leasing companies and individuals working with the utilities, or a regulatory shift enabling existing utilities to include residential solar in their rate base.

______

______

Artificial intelligence in finance, banking and marketing:

Financial institutions have long used artificial neural network systems to detect charges or claims outside the norm, flagging these for human investigation. The use of AI in banking can be traced back to 1987, when Security Pacific National Bank in the USA set up a Fraud Prevention Task Force to counter the unauthorised use of debit cards. Apps like Kasisto and Moneystream are using AI in financial services. Banks use artificial intelligence systems to organize operations, maintain book-keeping, invest in stocks, and manage properties. For example, Kensho is a computer system used to analyze how well portfolios perform and to predict changes in the market. AI can react to changes overnight or when business is not taking place. In August 2001, robots beat humans in a simulated financial trading competition. Systems are also being developed, like Arria, to translate complex data into simple and personable language, and there are wallets, like Wallet.AI, that monitor an individual’s spending habits and suggest ways to improve them. AI has also reduced fraud and crime by monitoring users’ behavioral patterns for any changes or anomalies.

_

The following are examples of how AI might be deployed in financial services:

  1. Personalized Financial Services:

Because of the increased customized automation, the financial institution can offer more personalized services in near real-time at lower costs. We already are starting to see a number of successful new applications that provide hints as to where the industry may be heading. Consider the following examples of applications that are being developed and deployed:

  • Automated financial advisors and planners that assist users in making financial decisions. These include monitoring events and stock and bond price trends against the user’s financial goals and personal portfolio and making recommendations regarding stocks and bonds to buy or sell. These systems are often called “robo advisors” and are increasingly being offered both by start-ups and established financial service providers.
  • Digital and wealth management advisory services offered to lower net worth market segments, resulting in lower fee-based commissions.
  • Smart wallets that monitor and learn users’ habits and needs and alert and coach users, when appropriate, to show restraint and to alter their personal finance spending and saving behaviors.
  • Insurance underwriting AI systems that automate the underwriting process and utilize more granular information to make better decisions.
  • Data-driven AI applications to make better informed lending decisions.
  • Applications, embedded in end-user devices, personal robots, and financial institution servers that are capable of analyzing massive volumes of information, providing customized financial advice, calculations and forecasts. These applications also can develop financial plans and strategies, and track their progress. This includes research regarding various customized investment opportunities, loans, rates and fees.
  • Automated agents that assist the user, over the Internet, in determining insurance needs.
  • Trusted financial social networks that allow the user to find other users who are willing to pool their money to make loans to each other, and to share in investments.
  2. New Management Decision-making:

Data-driven management decisions at lower cost could lead to a new style of management, in which future banking and insurance leaders ask the right questions of machines rather than of human experts. The machines will analyze the data and come up with recommended decisions, which leaders and their subordinates will use and motivate their workforce to execute.

  3. Reducing Fraud and Fighting Crime:

AI tools that learn and monitor users’ behavioral patterns to identify anomalies and warning signs of fraud attempts, and that collect the evidence necessary for conviction, are also becoming more commonplace in fighting crime.

____

Machine Learning in Finance – Some Current Applications:

Below are examples of machine learning being put to use actively today. Bear in mind that some of these applications leverage multiple AI approaches – not exclusively machine learning.

_

Robo-advisor:

The term “robo-advisor” was essentially unheard-of just five years ago, but it is now commonplace in the financial landscape. The term is misleading: no robots are involved at all. Rather, robo-advisors (companies such as Betterment, Wealthfront, and others) are algorithms built to calibrate a financial portfolio to the goals and risk tolerance of the user. Users enter their goals (for example, retiring at age 65 with $250,000 in savings), age, income, and current financial assets. The advisor (which would more accurately be called an “allocator”) then spreads investments across asset classes and financial instruments in order to reach the user’s goals. The system then calibrates to changes in the user’s goals and to real-time changes in the market, always aiming for the best fit with the user’s original goals. Robo-advisors have gained significant traction with millennial consumers who don’t need a physical advisor to feel comfortable investing, and who are less willing to justify the fees paid to human advisors.
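The allocation step can be sketched as a simple rule mapping risk tolerance to an asset mix. The tiers, weights, and 5% cash buffer below are illustrative assumptions, not any provider’s actual model:

```python
def allocate(risk_tolerance, total):
    """Split `total` dollars across asset classes by risk tolerance (0.0-1.0).

    Higher tolerance shifts weight from bonds to stocks; a small cash
    buffer is always kept. Purely illustrative weights.
    """
    if not 0.0 <= risk_tolerance <= 1.0:
        raise ValueError("risk_tolerance must be between 0 and 1")
    cash = 0.05                                  # fixed 5% cash buffer
    stocks = round(0.95 * risk_tolerance, 4)     # rest split by tolerance
    bonds = round(0.95 - stocks, 4)
    weights = {"stocks": stocks, "bonds": bonds, "cash": cash}
    return {asset: round(w * total, 2) for asset, w in weights.items()}
```

A real robo-advisor would also rebalance as markets move and goals change; this sketch only shows the initial mapping from risk profile to allocation.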

_

Algorithmic Trading:

With origins going back to the 1970s, algorithmic trading (sometimes called “automated trading systems,” which is arguably a more accurate description) involves the use of complex AI systems to make extremely fast trading decisions. Algorithmic systems often make thousands or millions of trades in a day, hence the term “high-frequency trading” (HFT), which is considered a subset of algorithmic trading. Most hedge funds and financial institutions do not openly disclose their AI approaches to trading (for good reason), but it is believed that machine learning and deep learning are playing an increasingly important role in calibrating trading decisions in real time. Software programs that predict trends in the stock market have been created, and have been known to beat humans in terms of predictive power. Algorithmic trading is widely used by investment banks, institutional investors, and large firms to carry out rapid buying and selling and to capitalize on lucrative opportunities in the global markets. The programs are not only predicting trends but also making decisions based on pre-programmed rules, and the machine learning software can detect patterns that humans may not see. Heaps of data from decades of world stock market history are fed to these algorithms to find patterns that offer insight for future predictions. Banks use intelligent software applications to screen and analyze financial data.
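A minimal illustration of the kind of pre-programmed rule such systems automate is a moving-average crossover signal. The window lengths and the buy/sell rule here are assumptions for illustration, not any fund’s actual strategy:

```python
def sma(prices, window):
    """Simple moving average; None until enough data points exist."""
    return [None if i + 1 < window
            else sum(prices[i + 1 - window:i + 1]) / window
            for i in range(len(prices))]

def crossover_signals(prices, short=3, long=5):
    """Emit 'buy' when the short SMA crosses above the long SMA,
    'sell' when it crosses below, otherwise 'hold'."""
    short_ma, long_ma = sma(prices, short), sma(prices, long)
    signals = []
    for i in range(len(prices)):
        have_history = i > 0 and None not in (short_ma[i], long_ma[i],
                                              short_ma[i - 1], long_ma[i - 1])
        if not have_history:
            signals.append("hold")
        elif short_ma[i - 1] <= long_ma[i - 1] and short_ma[i] > long_ma[i]:
            signals.append("buy")
        elif short_ma[i - 1] >= long_ma[i - 1] and short_ma[i] < long_ma[i]:
            signals.append("sell")
        else:
            signals.append("hold")
    return signals
```

Production HFT systems evaluate far richer features at microsecond latency, but the shape is the same: a deterministic rule turning a price stream into trade decisions.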

_

Fraud Detection:

Combine more accessible computing power, wider use of the internet, and an increasing amount of valuable company data being stored online, and you have a “perfect storm” for data security risk. While previous financial fraud detection systems depended heavily on complex and robust sets of rules, modern fraud detection goes beyond following a checklist of risk factors: it actively learns and calibrates to new potential (or real) security threats. This is where machine learning fits into fraud detection in finance, but the same principles hold true for other data security problems. Using machine learning, systems can detect unusual activities or behaviors (“anomalies”) and flag them for security teams. The challenge for these systems is to avoid false positives: situations where “risks” are flagged that were never risks in the first place.
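The anomaly-flagging idea can be sketched with a simple z-score over a user’s past transaction amounts. Real systems model far more features than amount alone; the three-sigma threshold here is an assumption:

```python
from statistics import mean, stdev

def flag_anomalies(history, new_amounts, threshold=3.0):
    """Flag any new transaction amount more than `threshold` standard
    deviations away from the user's historical mean spend."""
    mu, sigma = mean(history), stdev(history)
    flagged = []
    for amt in new_amounts:
        # With no spread in history, any deviation is suspicious.
        z = abs(amt - mu) / sigma if sigma else float("inf")
        if z > threshold:
            flagged.append(amt)
    return flagged
```

Lowering the threshold catches more fraud but produces more of the false positives the paragraph above warns about; tuning that trade-off is where the learning happens.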

_

Loan / Insurance Underwriting:

Underwriting could be described as a perfect job for machine learning in finance, and indeed there is a great deal of worry in the industry that machines will replace a large swath of the underwriting positions that exist today. Especially at large companies (big banks and publicly traded insurance firms), machine learning algorithms can be trained on millions of examples of consumer data (age, job, marital status, etc.) and financial lending or insurance outcomes (did this person default, pay back the loan on time, get in a car accident?). The underlying trends can be assessed with algorithms and continuously analyzed to detect patterns that might influence lending and insuring in the future (are more and more young people in a certain state getting in car accidents? Are rates of default rising among a specific demographic population over the last 15 years?). These results have a tremendous tangible yield for companies, but at present they are primarily reserved for larger companies with the resources to hire data scientists and the massive volumes of past and present data needed to train their algorithms.
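The trend-detection step can be sketched as a plain aggregation over historical loan records. The field names and toy data are assumptions for illustration:

```python
from collections import defaultdict

def default_rates_by_year(records, group):
    """Compute per-year default rates for one demographic group.

    `records` are dicts with 'group', 'year', and 'defaulted' keys.
    Returns {year: default_rate} for the requested group.
    """
    totals, defaults = defaultdict(int), defaultdict(int)
    for r in records:
        if r["group"] != group:
            continue
        totals[r["year"]] += 1
        defaults[r["year"]] += int(r["defaulted"])
    return {y: defaults[y] / totals[y] for y in sorted(totals)}

def rate_is_rising(rates):
    """True if the default rate increased strictly year over year."""
    values = [rates[y] for y in sorted(rates)]
    return all(a < b for a, b in zip(values, values[1:]))
```

A real underwriting model would feed such aggregates (and many rawer features) into a trained classifier; this only shows the “is this demographic’s default rate rising?” question from the text.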

_______

Current Artificial Intelligence Applications in Marketing and Advertising:

A complete list of currently viable AI applications would be far broader; the focus here is on some of the most popular uses today:

  1. Recommendations/content curation
  2. Search engines
  3. Preventing fraud and data breaches
  4. Programmatic Advertising
  5. Marketing Forecasting
  6. Social semantics
  7. Website design
  8. Product pricing
  9. Predictive customer service
  10. Ad targeting
  11. Speech recognition
  12. Language recognition
  13. Customer Segmentation
  14. Sales forecasting
  15. Image recognition
  16. Content generation
  17. Bots, PAs and messengers

_____

Artificial Intelligence Techniques enhance Business Forecasts:

Today’s business world is driven by customer demand. Unfortunately, patterns of demand vary considerably from period to period, which is why it can be so challenging to develop accurate forecasts. Forecasting is the process of estimating future events, and it is fundamental to all aspects of management. The goals of forecasting are to reduce uncertainty and to provide benchmarks for monitoring actual performance. In a turbulent business environment, forecasting can yield significant competitive advantage as well as avoid costly mistakes. Forecasting errors impact organizations in two ways: the first is when faulty estimates lead to poor strategic choices, and the second is when inaccurate forecasts impair performance within the existing strategic plan. Either way there will be a negative impact on profitability. Emerging information technologies and artificial intelligence (AI) techniques are being used to improve the accuracy of forecasts, making a positive contribution to the bottom line. A new generation of artificial intelligence technologies has emerged that holds considerable promise for improving the forecasting process in applications such as product demand, employee turnover, cash flow, distribution requirements, manpower forecasting, and inventory. These AI-based systems are designed to bridge the gap between the two traditional forecasting approaches: managerial and quantitative.
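One of the simplest quantitative techniques such AI systems build on is single exponential smoothing, sketched below (the smoothing factor is an assumption; real systems tune it, or learn far richer models):

```python
def exp_smooth_forecast(demand, alpha=0.3):
    """Single exponential smoothing: each forecast blends the latest
    observation with the previous forecast. Returns the one-step-ahead
    forecast after processing the whole series."""
    forecast = demand[0]                  # seed with the first observation
    for actual in demand[1:]:
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast
```

A higher `alpha` reacts faster to demand shifts but is noisier; a lower one is smoother but lags, which is exactly the managerial-versus-quantitative tension the paragraph describes.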

____

How Artificial Intelligence Revolutionizes the Banking Industry:

Voice-assisted banking enables customers to use banking services with voice commands instead of a touch screen or buttons. Natural language processing technology can process queries to answer questions, find information, and connect users with various banking services. In order to develop artificial intelligence, developers first define what data is analyzed and what results the analysis is expected to produce. In the case of a banking app like Outbank, the data searched includes sender name, reference, IBAN, amount, date and similar banking information, and the result is defined as a list of tags to be assigned to specific transactions. At the learning stage, the algorithm examines the data pool and compares the information. If the data matches the tag, the algorithm memorizes the combination and saves it. The more often the algorithm goes through this process, the more precisely data will be assigned, and the more precisely data is assigned, the more suitable the suggested tags become. Artificial intelligence is created to simplify banking not only for a specific behavior pattern but for millions of people who tag very differently and have very diverse finance flows; the only thing they have in common is that they are using the same app. The fact that artificial intelligence is able to recognize so many different patterns is what makes it so revolutionary and attractive. AI can improve customer personalization, identify patterns and connections that humans can’t, and answer questions about banking issues in real time. Financial institutions are already finding success with AI. Two of the biggest challenges that remain in banking are the absence of people experienced in data collection, analysis and application, and the existence of data silos. This was reflected in research done by Narrative Science.
The good news is that many data firms now have the capability to do a ‘workaround’, collecting data from across the organization. With an origin rooted in risk and fraud detection and cost reduction, AI is increasingly important for financial services firms to remain competitive.
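The learning loop described above can be sketched as a frequency table from transaction words to tags. Outbank’s actual algorithm is not public; the class below is a deliberate, purely illustrative simplification:

```python
from collections import Counter, defaultdict

class TagSuggester:
    """Memorize which words in transaction references co-occur with which
    tags, then suggest the most frequently matching tag for a new one."""

    def __init__(self):
        self.word_tags = defaultdict(Counter)

    def learn(self, reference, tag):
        """Save the (word, tag) combinations seen in a tagged transaction."""
        for word in reference.lower().split():
            self.word_tags[word][tag] += 1

    def suggest(self, reference):
        """Vote across the words of a new reference; None if nothing matches."""
        votes = Counter()
        for word in reference.lower().split():
            votes.update(self.word_tags[word])
        return votes.most_common(1)[0][0] if votes else None
```

As in the text, the more tagged examples the learner sees, the stronger the vote counts become and the more suitable its suggestions get.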

____

____

Artificial Intelligence in Education System:

AI may change where students learn, who teaches them, and how they acquire basic skills. While major changes may still be a few decades in the future, the reality is that artificial intelligence has the potential to radically change just about everything we take for granted about education. Using AI systems, software, and support, students can learn from anywhere in the world at any time, and with these kinds of programs taking the place of certain types of classroom instruction, AI may just replace teachers in some instances (for better or worse). Educational programs powered by AI are already helping students to learn basic skills, but as these programs grow and as developers learn more, they will likely offer students a much wider range of services.

Some ways in which AI can make changes to the education experience are narrated here:

  1. AI can help automate basic activities:

AI can help automate the grading of all types of multiple-choice and fill-in-the-blank questions, and it is not too far off that AI will be able to grade other types of questions with lengthy answers. This will help teachers focus more on in-class activities. While AI may not completely replace human grading, it can get close.
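The multiple-choice case is simple enough to sketch directly; the function below is an illustrative auto-grader, not any particular product:

```python
def grade_multiple_choice(answer_key, submission):
    """Score a multiple-choice submission against an answer key.

    Unanswered questions count as wrong. Returns (score, total,
    per-question correctness) so a teacher can see exactly what failed.
    """
    results = {q: submission.get(q) == correct
               for q, correct in answer_key.items()}
    return sum(results.values()), len(answer_key), results
```

Grading lengthy free-text answers, by contrast, needs the natural-language techniques the paragraph says are still maturing.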

  2. AI can provide real-time assistance to students:

From kindergarten to graduate school, one of the key ways artificial intelligence will impact education is through the application of greater levels of individualized learning. AI can support an individual student’s learning process and offer timely interventions in real time, especially when teachers are not available, since AI can be available 24/7.

  3. AI can bring in global learning:

Education cannot be limited by boundaries, and AI can help achieve this. It can bring radical changes to the education industry by allowing students to learn any type of course from anywhere in the world at any time. Education programs powered by artificial intelligence can start by equipping students with basic skills, and as AI develops and becomes more advanced, a wider range of courses can be run with its help, giving students the opportunity to learn from anywhere, anytime.

  4. AI can become new teachers:

With the development happening on the artificial intelligence front, it is not far off that artificial intelligence will be able to conduct classroom teaching for various streams. Though AI cannot completely replace teachers, it can take care of basic teaching sessions, with teachers acting as facilitators to help where AI falls short. Teachers can supplement AI lessons and assist weaker students, thereby providing the required human interaction and hands-on experience.

  5. AI-driven programs can give students and educators helpful feedback:

Feedback is very important for improvement, and teachers and students both need feedback about their performance in order to improve. AI can help on this front. For instance, AI can figure out in which areas most students are performing well and in which areas most students are performing poorly and giving wrong answers. It can alert the faculty in real time so that they know where more effort is required, which will help improve student performance. Students can also get immediate feedback rather than waiting for the teacher to communicate it. It is high time the traditional way of education changed rather than being taken for granted; the onus lies on educational institutions to bring changes to the education system, making it more efficient, more student-friendly, and better at preparing students for the real world.
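The aggregation behind such feedback can be sketched as per-topic accuracy over class responses, with weak areas flagged below an assumed threshold:

```python
from collections import defaultdict

def topic_performance(responses, weak_threshold=0.5):
    """Aggregate student answers by topic and flag weak areas.

    `responses` are (topic, correct) pairs across the whole class.
    Returns ({topic: accuracy}, sorted list of topics below threshold)."""
    right, total = defaultdict(int), defaultdict(int)
    for topic, correct in responses:
        total[topic] += 1
        right[topic] += int(correct)
    accuracy = {t: right[t] / total[t] for t in total}
    weak = sorted(t for t, a in accuracy.items() if a < weak_threshold)
    return accuracy, weak
```

Run continuously on live answer streams, this is the “alert the faculty in real time” loop in miniature.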

  6. AI can make trial-and-error learning less intimidating:

Trial and error is a critical part of learning, but for many students, the idea of failing, or even not knowing the answer, is paralyzing. Some simply don’t like being put on the spot in front of their peers or authority figures like a teacher. An intelligent computer system, designed to help students to learn, is a much less daunting way to deal with trial and error. Artificial intelligence could offer students a way to experiment and learn in a relatively judgment-free environment, especially when AI tutors can offer solutions for improvement. In fact, AI is the perfect format for supporting this kind of learning, as AI systems themselves often learn by a trial-and-error method.

  7. Data powered by AI can change how schools find, teach, and support students:

Smart data gathering, powered by intelligent computer systems, is already changing how colleges interact with prospective and current students. From recruiting to helping students choose the best courses, intelligent computer systems are helping make every part of the college experience more closely tailored to student needs and goals. Data mining systems already play an integral role in today’s higher-ed landscape, but artificial intelligence could further alter higher education. Initiatives are already underway at some schools to offer students AI-guided training that can ease the transition between high school and college. The college selection process may well end up a lot like Amazon or Netflix, with a system that recommends the best schools and programs for each student’s interests.

_

Some reasons to be Sceptical about role of AI in education:

  1. Cost: When combining the cost of installation, maintenance and repair, it’s clear that AI is expensive. Only the well-funded schools will find themselves in a position to benefit from AI.
  2. Addiction: As we rely on machines to make everyday tasks more efficient, we risk technology addiction.
  3. Lack of personal connections: While smart machines improve the education experience, they should not be considered a substitute for personal interaction. Relying too much on these machines to grade or tutor may lead to educational oversights that hurt learners more than help.
  4. Efficient decision making: Computers are getting smarter every day. They are demonstrating not only an ability to learn, but to teach other computers. However, it is debatable whether they can implement intuition-based decision making in new situations, which often arise in the classroom.

_____

_____

Artificial Intelligence in Healthcare:

_

Artificial intelligence (AI) in healthcare uses algorithms and software to approximate human cognition in the analysis of complex medical data. The primary aim of health-related AI applications is to analyze relationships between prevention or treatment techniques and patient outcomes. AI programs have been developed and applied to practices such as diagnosis, treatment protocol development, drug development, personalized medicine, and patient monitoring and care, among others. Medical institutions such as the Mayo Clinic, Memorial Sloan Kettering Cancer Center and the National Health Service, multinational technology companies such as IBM and Google, and startups such as Welltok and Ayasdi have created solutions currently used in the industry. Healthcare remains the top area of investment in AI as measured by venture capital deal flow. Microsoft has developed AI to help doctors find the right treatments for cancer. A great amount of research and many drugs have been developed relating to cancer; indeed, there are more than 800 medicines and vaccines to treat it. This overwhelms doctors, because there are too many options to choose from, making it more difficult to choose the right drugs for each patient. Microsoft is working on a project to develop a machine called “Hanover”, whose goal is to memorize all the papers relevant to cancer and help predict which combinations of drugs will be most effective for each patient. In a recent study, surgeons at the Children’s National Medical Center in Washington demonstrated surgery with an autonomous robot rather than a human: the team supervised the robot performing soft-tissue surgery, stitching together a pig’s bowel during open surgery, and doing so better than a human surgeon.

_

Hospitals and medicine:

Artificial neural networks are used as clinical decision support systems for medical diagnosis, such as in Concept Processing technology in EMR software. Other tasks in medicine that can potentially be performed by artificial intelligence and are beginning to be developed include:

  • Computer-aided interpretation of medical images. Such systems help scan digital images, e.g. from computed tomography, for typical appearances and to highlight conspicuous sections, such as possible diseases. A typical application is the detection of a tumor.
  • Heart sound analysis
  • IBM’s Watson project is another use of AI in this field: a question-answering program that suggests treatment options to doctors of cancer patients.
  • Companion robots for the care of the elderly
  • Mining medical records to provide more useful information
  • Design treatment plans
  • Assist in repetitive jobs including medication management
  • Provide consultations
  • Drug creation

_

Here are just a few examples of man and machine coming together in the world of medicine.

  1. Decision support systems.

DXplain, a decision support system developed at Massachusetts General Hospital and in use since 1987, comes up with a list of possible diagnoses which might be related to a selected set of symptoms.
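The core idea, ranking diagnoses by how many of the entered symptoms they explain, can be sketched as follows. The toy knowledge base and the simple overlap score are illustrative assumptions, not DXplain’s actual model:

```python
def rank_diagnoses(knowledge_base, symptoms):
    """Rank candidate diagnoses by how many of the given symptoms each
    one explains (a crude stand-in for a decision-support score).

    `knowledge_base` maps disease -> list of its known symptoms.
    Diseases explaining no symptoms are omitted; ties break alphabetically."""
    scores = {}
    for disease, known in knowledge_base.items():
        overlap = len(set(symptoms) & set(known))
        if overlap:
            scores[disease] = overlap
    return sorted(scores, key=lambda d: (-scores[d], d))
```

Real systems weight symptoms by how strongly they evoke each disease rather than counting them equally, but the input/output shape is the same.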

  2. Laboratory Information Systems.

Developed by Washington University, Germwatcher is designed to detect, track, and investigate infections in hospitalized patients. It aims to lessen hospital-acquired infections by monitoring a hospital’s laboratory system, identifying the microbiology cultures it finds, and reporting the results to the US Centers for Disease Control and Prevention.

  3. Robotic surgical systems.

In the da Vinci robotic surgical system, the doctor’s hand movements are translated into the machine’s robotic arms. Precise movement and magnified vision allow the doctor to perform surgery with very tiny incisions and see inside the body in 3D, the very pinnacle of artificial intelligence in medicine. The robots can help access hard-to-reach tumors or visualize the inside of a patient with minimal invasion. And of course, the robotic arm remains stable even if the surgeon’s hand trembles.

  4. Therapy.

It is now possible to get treatment for social anxiety by logging in to AI Therapy—an online course that provides patients with guidance on how to identify the causes of their anxiety and a list of resources customized to their needs.

  5. Reducing human error in diagnosis.

One example of artificial intelligence in medicine is Babylon, an online application that patients in the UK use to book their doctor appointments and routine tests. What makes it even more special is that patients have the option to consult with a doctor online, as well as check symptoms, get advice, monitor their health, and order test kits.

  6. Medical education.

The Auscultation Assistant is an online platform that familiarizes medical students with certain heart sounds to hone their diagnosis skills.

_

Artificial Intelligence will redesign Healthcare – Company Map:

There are already several great examples of AI in healthcare showing potential implications and possible future uses that could make us quite optimistic.  Currently, there are over 90 AI startups in the health industry working in these fields.

_

While healthcare-specific AI platforms are still in the early stages of development, experts hope they will one day give providers robust intelligence to inform their clinical decisions. To illustrate, here are just a few of the data sources AI can draw upon:

  • Textbooks
  • Public databases (e.g., cancer registries, voluntary reporting programs)
  • Electronic medical records (EMRs)
  • Journal articles
  • Diagnostic images
  • Prescriptions
  • Results of clinical trials
  • Insurance records
  • Genomic profiles
  • Provider notes
  • Wearable devices and activity trackers (e.g., Fitbit)

__

Artificial intelligence is as good as cancer doctors:

Artificial intelligence can identify skin cancer in photographs with the same accuracy as trained doctors, say scientists. The Stanford University team said the findings were “incredibly exciting” and would now be tested in clinics. Eventually, they believe, using AI could revolutionise healthcare by turning anyone’s smartphone into a cancer scanner. Cancer Research UK said it could become a useful tool for doctors. The AI was repurposed from software developed by Google that had learned to spot the difference between images of cats and dogs. It was shown 129,450 photographs and told what type of skin condition it was looking at in each one. It then learned to spot the hallmarks of the most common type of skin cancer, carcinoma, and the most deadly, melanoma. Only one in 20 skin cancers is melanoma, yet the tumour accounts for three-quarters of skin cancer deaths. The experiment, detailed in the journal Nature, then tested the AI against 21 trained skin cancer doctors. One of the researchers, Dr Andre Esteva, told the BBC: “We find, in general, that we are on par with board-certified dermatologists.” However, the computer software cannot make a full diagnosis, as this is normally confirmed with a tissue biopsy.

_

Google’s artificial intelligence can diagnose cancer faster than human doctors:

Making the decision on whether or not a patient has cancer usually involves trained professionals meticulously scanning tissue samples over weeks and months. But an artificial intelligence (AI) program owned by Alphabet, Google’s parent company, may be able to do it much, much faster. Google’s system is being trained to tell the difference between healthy and cancerous tissue as well as to discover whether metastasis has occurred. “Metastasis detection is currently performed by pathologists reviewing large expanses of biological tissues. This process is labour intensive and error-prone,” explained Google in a white paper outlining the study. “We present a framework to automatically detect and localise tumours as small as 100×100 pixels in gigapixel microscopy images sized 100,000×100,000 pixels. Our method leverages a convolutional neural network (CNN) architecture and obtains state-of-the-art results on the Camelyon16 dataset in the challenging lesion-level tumour detection task.” Such high-level image recognition was first developed for Google’s driverless car program, to help the vehicles scan for road obstructions. Now the company has adapted it for the medical field and says it is more accurate than human doctors: “At 8 false positives per image, we detect 92.4% of the tumours, relative to 82.7% by the previous best automated approach. For comparison, a human pathologist attempting exhaustive search achieved 73.2% sensitivity.” Despite this, it is unlikely to replace human pathologists just yet. The software looks only for cancerous tissue and is not able to pick up other irregularities that a human doctor could spot.
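The patch-based scanning the paper describes can be sketched as a sliding window that yields fixed-size tiles for a classifier to score. The tile size and stride below echo the quoted 100×100-pixel figure but are otherwise assumptions:

```python
def patch_grid(width, height, patch=100, stride=100):
    """Yield (x, y) top-left corners of fixed-size patches tiling an
    image, the way a patch classifier would scan a gigapixel slide."""
    for y in range(0, height - patch + 1, stride):
        for x in range(0, width - patch + 1, stride):
            yield x, y

def count_patches(width, height, patch=100, stride=100):
    """How many patches the classifier would have to score."""
    return sum(1 for _ in patch_grid(width, height, patch, stride))
```

At the slide sizes quoted (100,000×100,000 pixels), a non-overlapping grid already means a million patches per image, which is why this search is automated rather than done by eye.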

__

Diagnosing sepsis:

Sepsis is a complication that is treatable if caught early, but patients can experience organ failure, or even death, if it goes undetected for too long. Now, AI algorithms that scour data in electronic medical records can help doctors diagnose sepsis a full 24 hours earlier, on average, said Suchi Saria, an assistant professor at the Johns Hopkins Whiting School of Engineering. Saria shared the story of a 52-year-old woman who came to the hospital because of a mildly infected foot sore. During her stay, the woman developed sepsis, a condition in which chemicals released into the bloodstream to fight infection trigger inflammation. This inflammation can lead to changes in the body that can cause organ failure or even death, she said. The woman died, Saria said. But if the doctors had used the AI system, called the Targeted Real-Time Early Warning System (TREWScore), they could have diagnosed her 12 hours earlier, and perhaps saved her life. TREWScore can also be used to monitor other conditions, including diabetes and high blood pressure, she noted. “[Diagnoses] may already be in your data,” Saria added. “We just need ways to decode them.”
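The idea of scoring a patient’s record in real time can be sketched as a simple weighted warning score over vital signs. The thresholds and weights below are illustrative assumptions, not TREWScore’s actual (machine-learned) model:

```python
def warning_score(vitals):
    """Sum simple points for out-of-range vital signs; a higher score
    means earlier escalation to clinicians. Illustrative thresholds only."""
    score = 0
    if vitals.get("heart_rate", 0) > 100:       # tachycardia
        score += 2
    if vitals.get("temp_c", 37.0) > 38.3:       # fever
        score += 2
    if vitals.get("resp_rate", 0) > 22:         # rapid breathing
        score += 1
    if vitals.get("systolic_bp", 120) < 100:    # low blood pressure
        score += 2
    return score

def needs_review(vitals, threshold=4):
    """Flag the patient record for clinician review above a cutoff."""
    return warning_score(vitals) >= threshold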

_

Restoring touch:

In a landmark event researchers revealed that a paralyzed man’s feelings of touch were restored with a mind-controlled robotic arm and brain chip implants.  A 2004 car accident left the man, Nathan Copeland, with quadriplegia, meaning he couldn’t feel or move his legs or lower arms. At the Frontiers Conference, Dr. Michael Boninger, a professor in the Department of Physical Medicine and Rehabilitation at the University of Pittsburgh School of Medicine, explained how innovations allowed Copeland to feel sensation in his hand again. Doctors implanted two small electronic chips into Copeland’s brain — one in the sensory cortex, which controls touch, and the other in the motor cortex, which controls movement. During one trial, Copeland was able to control the robotic arm with his thoughts. Even more exciting, Boninger said, was that the man reported feeling the sensation of touch when the researchers touched the robotic hand. Many challenges remain, including developing a system that has a long battery life and enables full sensation and movement for injured people, he said. “All of this will require AI and machine learning,” Boninger said.

_

Robotic Doctors:

Artificial intelligence is being incorporated into medicine to help doctors detect diseases and save lives. Cedars-Sinai Medical Center is pioneering the use of special software to probe the heart and detect heart attacks before they occur. But the cutting edge in AI medical technology is robotic surgery assistants, which pass surgeons the necessary instruments during a procedure and learn a doctor’s preferences. There is even AI software, designed especially for primary care physicians, that tracks changes in health records to diagnose patients or warn doctors of potential risk factors and problems with medications.

_

Precision medicine:

Artificial intelligence will have a huge impact on genetics and genomics as well. Deep Genomics aims to identify patterns in huge data sets of genetic information and medical records, looking for mutations and linkages to disease. They are inventing a new generation of computational technologies that can tell doctors what will happen within a cell when DNA is altered by genetic variation, whether natural or therapeutic. At the same time, Craig Venter, one of the fathers of the Human Genome Project, is working on an algorithm that could predict a patient’s physical characteristics from their DNA. With his latest enterprise, Human Longevity, he offers his (mostly affluent) patients complete genome sequencing coupled with a full body scan and a very detailed medical check-up. The whole process makes it possible to spot cancer or vascular diseases at a very early stage.

_

Drug creation:

Developing pharmaceuticals through clinical trials sometimes takes more than a decade and costs billions of dollars. Speeding this up and making it more cost-effective would have an enormous effect on today’s healthcare and on how innovations reach everyday medicine. Atomwise uses supercomputers to root out therapies from a database of molecular structures. Last year, Atomwise launched a virtual search for safe, existing medicines that could be redesigned to treat the Ebola virus, and found two drugs predicted by the company’s AI technology that may significantly reduce Ebola infectivity. This analysis, which typically would have taken months or years, was completed in less than one day. “If we can fight back deadly viruses months or years faster, that represents tens of thousands of lives,” said Alexander Levy, COO of Atomwise. “Imagine how many people might survive the next pandemic because a technology like Atomwise exists,” he added. Another great example of using big data for patient management is Berg Health, a Boston-based biopharma company, which mines data to find out why some people survive diseases, and thus improves current treatments or creates new therapies. They combine AI with patients’ own biological data to map out the differences between healthy and disease-friendly environments and to help in the discovery and development of drugs, diagnostics and healthcare applications.

_

Potential Downsides of AI in Medicine:

Mindful providers acknowledge the potential of AI while remaining vigilant for side effects and unintended consequences. AI technology, while powerful, has inherent limitations when compared with human understanding. Existing programs are not very good at processing natural language or images, though these deficits will likely diminish as technology advances.  More significantly, AI will never match human intelligence in terms of understanding information in context. For example, a machine might infer that a physician ordered a CT scan because they suspected a serious neurologic condition. However, a practicing provider familiar with the litigation landscape might correctly interpret this as an example of defensive medicine.  For these reasons, AI experts emphasize that the technology should be used to enhance human decision-making — not replace it. Providers and administrators should be vigilant in ensuring that increased automation and efficiency never compromise patient safety. A good example of this is the SEDASYS robot-delivered anesthesia system, which allows non-anesthesiologist physicians to administer general anesthesia. To ensure patient safety, Virginia Mason Medical Center worked closely with anesthesiologists to implement the program and provide support to users.

____

____

Applications of Artificial Intelligence to the Legal Profession:

The use of artificial intelligence in the legal profession is an emerging area that is beginning to impact the practice of law and influence employment trends in the field. So far, most AI software for legal applications is intended for use during the discovery phase of the trial process, allowing tasks such as the review of large numbers of documents to be conducted by a few attorneys, rather than by the large teams of lawyers and paralegals traditionally required. Such “e-discovery” software leverages advances in areas such as natural language processing, knowledge representation, data mining, pattern detection, and social network analysis, among others. One company, Cataphora, develops technologies intended to detect conspiratorial behavior through analysis of employees’ recorded communications. For example, its software reveals suspiciously deleted messages as unresolved nodes in graph representations of email exchanges. ROSS, billed as the world’s first AI lawyer and built on IBM’s Watson, is being licensed for use in domains such as bankruptcy, restructuring and creditors’ rights.
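The “unresolved node” idea can be illustrated with a minimal sketch: if a message in the archive replies to a message ID that appears nowhere in the archive, the parent may have been deleted. The message schema below is a hypothetical simplification of real email headers.

```python
# Sketch: flag "unresolved nodes" in an email archive -- replies whose
# parent message ID never appears in the archive, suggesting deletion.

def find_unresolved(messages: list) -> list:
    """messages: list of dicts with an 'id' and an optional 'in_reply_to'."""
    known_ids = {m["id"] for m in messages}
    return [m["id"] for m in messages
            if m.get("in_reply_to") and m["in_reply_to"] not in known_ids]

archive = [
    {"id": "m1"},
    {"id": "m2", "in_reply_to": "m1"},   # parent present: fine
    {"id": "m3", "in_reply_to": "m9"},   # parent missing: suspicious
]
suspicious = find_unresolved(archive)
```

In practice the graph is built from `Message-ID` and `In-Reply-To`/`References` headers across millions of messages, but the detection logic is the same membership test.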

_

Use of artificial intelligence to improve court processes:

Artificial intelligence might soon remove the boring, administrative parts of the legal profession, as data analytics and machine-learning software take over the tedious job of poring over cases. Instead of having junior lawyers sift through archives of past legal cases to support a judgment, such software can scan through documents and pull out related information, leaving lawyers free to focus on higher-value work. A research tie-up between Nanyang Technological University (NTU) and United States-based non-profit research organisation Mitre Corporation, signed recently, will examine how such technology can be applied in Singapore’s courts to improve court operations and productivity.  Software can lessen the load on lawyers by sifting through large numbers of legal cases or doing back-end e-filing and documentation. Such software will also become smarter and more discerning with time, as machine learning lets it improve its results through use.

_

AI judge created by British scientists can predict human rights rulings:

The artificial intelligence is accurate 79% of the time, though there are no plans to bench judges just yet. An artificial intelligence “judge” that can accurately predict many of Europe’s top human rights court rulings has been created by a team of computer scientists and legal experts. The AI system, developed by researchers from University College London, the University of Sheffield, and the University of Pennsylvania, parsed 584 cases which had previously been heard at the European Court of Human Rights (ECHR), and successfully predicted 79 percent of the decisions. A machine learning algorithm was trained to search for patterns in English-language datasets relating to three articles of the European Convention on Human Rights: Article 3, concerning torture and inhuman and degrading treatment; Article 6, which protects the right to a fair trial; and Article 8, on the right to a private and family life. The cases examined were equally split between those that did find rights violations and those that didn’t. Despite the AI’s success, the legal profession is safe for now. UCL computer scientist Nikolaos Aletras, who led the study, said: “We don’t see AI replacing judges or lawyers, but we think they’d find it useful for rapidly identifying patterns in cases that lead to certain outcomes. It could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention on Human Rights.” The researchers found that ECHR judgments “are highly correlated to non-legal facts rather than directly legal arguments, suggesting that judges of the Court are, in the jargon of legal theory, ‘realists’ rather than ‘formalists’. This supports findings from previous studies of the decision-making processes of other high level courts, including the US Supreme Court.” The best apparent predictors of the court’s decision in the case text were the language used, as well as the topics and circumstances covered.
The AI worked by comparing the facts of the circumstances with the more abstract topics covered by the cases. “Previous studies have predicted outcomes based on the nature of the crime, or the policy position of each judge, so this is the first time judgments have been predicted using analysis of text prepared by the court,” said UCL’s Vasileios Lampos. “We expect this sort of tool would improve efficiencies of high-level, in-demand courts, but to become a reality, we need to test it against more articles and the case data submitted to the court.”
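The study itself used n-gram features with a support vector machine; the following toy illustrates the broader idea only, under heavy simplification: represent each case text as a bag of words and classify a new case by which class centroid it shares more vocabulary with. All training snippets are invented for illustration.

```python
# Simplified sketch of text-based outcome prediction. The UCL study used
# n-gram features and an SVM; this toy uses plain word counts and a
# nearest-centroid rule to show the shape of the approach.
from collections import Counter

def bag(text: str) -> Counter:
    return Counter(text.lower().split())

def centroid(texts: list) -> Counter:
    total = Counter()
    for t in texts:
        total += bag(t)
    return total

def overlap(doc: Counter, cen: Counter) -> int:
    """Shared word mass between a document and a class centroid."""
    return sum(min(doc[w], cen[w]) for w in doc)

def predict(text: str, violation_cases: list, no_violation_cases: list) -> str:
    doc = bag(text)
    v = overlap(doc, centroid(violation_cases))
    n = overlap(doc, centroid(no_violation_cases))
    return "violation" if v > n else "no violation"

# Hypothetical training snippets, not real ECHR text.
violations = ["detainee held in degrading conditions without trial",
              "applicant subjected to inhuman treatment in custody"]
no_violations = ["domestic courts provided a fair and public hearing",
                 "proceedings concluded within a reasonable time"]

verdict = predict("held in inhuman degrading conditions", violations, no_violations)
```

This also hints at why the researchers found factual language more predictive than legal argument: the classifier keys on whatever vocabulary separates the two outcome classes, whether or not it is doctrinally relevant.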

______

______

Artificial Intelligence in Cyber Security:

Finding flaws and attacks on computer code is a manual process, and it’s typically a difficult one. Attackers can spend months or years developing hacks. Defenders must comprehend that attack and counter it in just minutes. But AI appears to be up to the challenge.  As more cyber-security threats arise every day, extensive research into prevention and detection schemes is being conducted globally. One of the issues faced is keeping up with the sheer mass of new emerging threats online. Traditional detection schemes are rule or signature based. The creation of a rule or signature relies on prior knowledge of the threat’s structure, source and operation, making it impossible to stop new threats without prior knowledge. Manually identifying all new and disguised threats is too time-consuming to be humanly possible. One solution that’s gaining global recognition is the use of artificial intelligence.

_

Cybersecurity is the most pressing problem facing enterprises today. According to research by Accenture, the average organization faces 106 targeted cyber-attacks per year, with one in three of those attacks resulting in a security breach. And the issue of cybercrime is not going away anytime soon. If anything, the perpetrators are evolving too rapidly for companies to keep up with. A recent Tripwire survey found that 80% of security professionals are more concerned about cybersecurity in 2017 than in 2016, and many do not believe their organization is capable of a suitable response, with just 60% of respondents expressing confidence that theirs could implement fundamental security measures.  Machine learning is increasingly being seen as the solution, dealing – or at least appearing to deal – with a number of the problems organizations are having implementing their cybersecurity initiatives.  Machine learning is superior to conventional IT security measures in a number of ways. Obviously, it helps close the skills gap, but where it really reigns supreme is the speed at which it operates. Breaches often go unnoticed for months at a time, if they are ever noticed at all. Machine learning tools analyze the network in real time and develop and implement solutions to resolve problems immediately. Where conventional methods use fixed algorithms, machine learning is flexible and can adapt to combat dynamically evolving cyberattacks. Nature-inspired AI technologies are now even able to replicate the biological immune system, detecting and inoculating against intrusions in the same way that living organisms do, through continuous and dynamic learning. The problem is that these benefits are not only available to the good guys: adversarial machine learning is starting to emerge. Hackers have already proved themselves more than capable of taking down huge multinationals with the bare minimum of equipment.
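The difference between signature matching and learned detection can be made concrete with a minimal sketch: instead of listing known-bad patterns, learn a statistical baseline of normal behaviour and flag large deviations. The traffic numbers and the three-sigma rule below are illustrative assumptions.

```python
# Minimal anomaly-detection sketch: rather than matching known signatures,
# learn a baseline of normal traffic and flag statistical deviations.
import statistics

def fit_baseline(samples: list) -> tuple:
    """Return (mean, sample standard deviation) of normal observations."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomaly(value: float, mean: float, stdev: float, k: float = 3.0) -> bool:
    """Flag values more than k standard deviations from the baseline mean."""
    return abs(value - mean) > k * stdev

# Hypothetical requests-per-minute observed during normal operation.
normal_traffic = [98, 102, 100, 97, 103, 101, 99, 100]
mu, sigma = fit_baseline(normal_traffic)

quiet_minute = is_anomaly(101, mu, sigma)   # within baseline
burst_minute = is_anomaly(450, mu, sigma)   # deviation worth investigating
```

Real systems model many correlated signals rather than one counter, but the appeal is the same: a never-before-seen attack can still be caught because it disturbs the baseline, with no prior signature required.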

_

Have you ever gotten an email or a letter asking you if you made a specific purchase on your credit card? Many banks send these types of communications if they think there’s a chance that fraud may have been committed on your account, and want to make sure that you approve the purchase before sending money over to another company. Artificial intelligence is often the technology deployed to monitor for this type of fraud. In many cases, computers are given a very large sample of fraudulent and non-fraudulent purchases and asked to learn to look for signs that a transaction falls into one category or another. After enough training, the system will be able to spot a fraudulent transaction based on the signs and indications that it learned through the training exercise. 
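A toy version of that training exercise: given labeled examples of both classes, a new purchase is assigned to whichever class it sits closer to in feature space. The two features (amount, hour of day) and all the sample purchases are hypothetical; production systems use hundreds of signals and far richer models.

```python
# Toy fraud classifier trained on labeled purchases, illustrating the
# "learn from examples of both classes" idea with a nearest-centroid rule.
import math

def centroid(points: list) -> tuple:
    """Mean point of a list of equal-length feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(purchase: tuple, legit: list, fraud: list) -> str:
    d_legit = math.dist(purchase, centroid(legit))
    d_fraud = math.dist(purchase, centroid(fraud))
    return "fraud" if d_fraud < d_legit else "legitimate"

# Hypothetical labeled history: (amount in dollars, hour of day).
legit_purchases = [(25, 12), (40, 18), (12, 9), (60, 20)]
fraud_purchases = [(900, 3), (1200, 4), (750, 2)]

label = classify((980, 3), legit_purchases, fraud_purchases)
```

A purchase of $980 at 3 a.m. lands near the fraudulent cluster, which is exactly the kind of pattern that triggers the “did you make this purchase?” email.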

_

Cyber-attackers are leveraging automation technology to launch strikes, while many organizations are still using manual efforts to aggregate internal security findings and contextualize them with external threat information. Using these traditional methods, it can take weeks or months to detect intrusions, during which time attackers can exploit vulnerabilities to compromise systems and extract data. To address these challenges, progressive organizations are exploring the use of artificial intelligence (AI) in their day-to-day cyber risk management operations.  According to the Verizon Data Breach Report, more than 70 percent of attacks exploit known vulnerabilities with available patches. At the same time, the findings show that hackers take advantage of vulnerabilities within minutes of their becoming public knowledge. These statistics emphasize the importance of time-to-remediation. However, due to the shortage of security professionals and the general challenge of dealing with big data sets in security, it is not surprising that vulnerability remediation efforts are not keeping up with cyber adversaries. Recent industry research shows that it takes organizations on average 146 days to fix critical vulnerabilities. Obviously, this benchmark indicates we need to rethink existing approaches to enterprise security. AI can assist in conquering three specific use cases that are currently handled in manual fashion.

  1. Identification of Threats

Organizations face an uphill battle when it comes to cyber security, since the attack surface they have to protect has expanded significantly and is expected to balloon even further. In the past, it was sufficient to focus on network and endpoint protection, but now with applications, cloud services, and mobile devices (e.g., tablets, mobile phones, Bluetooth devices, and smart watches) organizations are battling a broadly extended attack surface.  This “wider and deeper” attack surface only adds to the existing problem of how to manage the volume, velocity, and complexity of data generated by the myriad of IT and security tools in an organization. The feeds from these disconnected systems must be analyzed, normalized, and remediation efforts prioritized. The more tools, the more difficult the challenge. And the broader the attack surface, the more data to analyze. Traditionally, this approach required legions of staff to comb through the huge amount of data to connect the dots and find latent threats. These efforts took months, during which time attackers exploited vulnerabilities and extracted data.  Breaking down existing silos and automating traditional security operations tasks with the help of technology has therefore become a force-multiplier for supplementing scarce cyber security operations talent. In this context, the use of human-interactive machine learning engines can automate the aggregation of data across different data types; map assessment data to compliance requirements; and normalize the information to rule out false positives and duplicates, and to enrich data attributes.
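The aggregation and normalization step described above can be sketched as follows: map each tool’s own field names onto one common schema, then drop findings that describe the same host-and-vulnerability pair twice. The field names, tool names, and CVE identifiers are illustrative; real feeds carry far more structure.

```python
# Sketch of the aggregation step: normalize findings from different
# security tools into one schema and drop duplicates.

def normalize(finding: dict) -> dict:
    """Map tool-specific field names onto a common schema."""
    return {
        "host": finding.get("host") or finding.get("asset") or finding.get("ip"),
        "cve": (finding.get("cve") or finding.get("vuln_id", "")).upper(),
        "source": finding["source"],
    }

def aggregate(raw_findings: list) -> list:
    seen, merged = set(), []
    for f in map(normalize, raw_findings):
        key = (f["host"], f["cve"])
        if key not in seen:           # same host+CVE from two tools = duplicate
            seen.add(key)
            merged.append(f)
    return merged

raw = [
    {"source": "scanner_a", "host": "db01", "cve": "cve-2017-0144"},
    {"source": "scanner_b", "asset": "db01", "vuln_id": "CVE-2017-0144"},  # dup
    {"source": "scanner_a", "host": "web02", "cve": "cve-2014-0160"},
]
findings = aggregate(raw)
```

Even this toy shows why more tools mean more work: every new feed adds another field mapping, and without deduplication the same gap would be counted once per tool.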

  2. Risk Assessment

Once internal security intelligence is contextualized with external threat data (e.g., exploits, malware, threat actors, reputational intelligence), these findings must be correlated with business criticality to determine the real risk of the security gaps and their ultimate impact on the business. Ultimately, not knowing the impact a “coffee server” has on the business compared to an “email server” makes it virtually impossible to focus remediation efforts on what really matters. In this context, human-interactive machine learning and advanced algorithms play a big role in driving the appropriate response to individual risks.
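The “coffee server versus email server” point can be made with a minimal risk formula: weight technical severity by asset criticality and by whether an exploit is circulating. The scales and the doubling factor below are illustrative assumptions, not a standard scoring scheme.

```python
# Sketch: contextualize a technical severity score with business
# criticality and threat intelligence to get a risk-based priority.

def risk_score(severity: float, asset_criticality: float,
               exploit_available: bool) -> float:
    """severity: 0-10 (e.g. a CVSS-like base score);
    asset_criticality: 1-5 business weight;
    a known in-the-wild exploit doubles the result."""
    score = severity * asset_criticality
    return score * 2 if exploit_available else score

# A moderate flaw on a critical, actively exploited email server outranks
# a technically severe flaw on a low-value "coffee server".
email_server = risk_score(severity=7.5, asset_criticality=5, exploit_available=True)
coffee_server = risk_score(severity=9.8, asset_criticality=1, exploit_available=False)
```

Sorting the remediation queue by such a contextual score, rather than by raw severity, is what keeps teams working on what actually matters to the business.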

  3. Orchestration of Remediation

Increasing collaboration between security teams, which are responsible for identifying security gaps, and IT operations teams, which are focused on remediating them, continues to be a challenge for many organizations. Using a risk-based cyber security concept as a blueprint, it’s possible to implement automated processes for pro-active security incident notification and human-interactive loop intervention. By establishing thresholds and pre-defined rules, organizations can also orchestrate remediation actions to fix security gaps in a timely fashion.
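The thresholds-and-rules idea can be sketched as a tiny routing function: pre-defined risk bands decide whether a finding is escalated to a human, ticketed to IT operations, or simply logged. The bands and action names are illustrative assumptions.

```python
# Sketch of rule-based remediation orchestration: pre-defined thresholds
# route each finding to an action, keeping a human in the loop for the
# riskiest cases.

def orchestrate(risk: float, escalate_threshold: float = 80,
                ticket_threshold: float = 40) -> str:
    if risk >= escalate_threshold:
        return "escalate-to-analyst"      # human-interactive intervention
    if risk >= ticket_threshold:
        return "open-remediation-ticket"  # routed to IT operations
    return "log-and-monitor"

actions = [orchestrate(r) for r in (95, 55, 10)]
```

Because the rules are explicit, security and IT operations can agree on them up front, which is precisely the collaboration problem the passage describes.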

_

While machine learning can help reduce time-to-remediation, will it ever be able to autonomously protect organizations against cyber-attacks?

Too often, unsupervised machine learning contributes to an onslaught of false positives and alerts, resulting in alert fatigue and a decrease in attention. For opponents of AI, this outcome provides ammunition they typically use to discredit machine learning in general. Whether we choose to admit it or not, we have reached a tipping point whereby the sheer volume of security data can no longer be handled by humans. This has led to the emergence of so-called human-interactive machine learning, a concept propagated, among others, by MIT’s Computer Science and Artificial Intelligence Lab. Human-interactive machine learning systems analyze internal security intelligence, and correlate it with external threat data to point human analysts to the needles in the haystack. Humans then provide feedback to the system by tagging the most relevant threats. Over time, the system adapts its monitoring and analysis based on human inputs, optimizing the likelihood of finding real cyber threats and minimizing false positives.  Enlisting machine learning to do the heavy lifting in first line security data assessment enables analysts to focus on more advanced investigations of threats rather than performing tactical data crunching. This meeting of the minds, whereby AI is applied using a human-interactive approach, holds a lot of promise for fighting, detecting, and responding to cyber risks.
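A toy sketch of that feedback loop: the system scores alerts from per-feature weights, and each analyst tag nudges the weights toward (or away from) the tagged alert’s features, in a perceptron-style update. The feature names and learning rate are illustrative assumptions, not MIT’s actual system.

```python
# Sketch of human-interactive machine learning: analyst tags on past
# alerts adjust per-feature weights, so future scoring reflects which
# alerts turned out to be real threats.

def score(alert_features: set, weights: dict) -> float:
    return sum(weights.get(f, 0.0) for f in alert_features)

def learn_from_tag(alert_features: set, is_real_threat: bool,
                   weights: dict, lr: float = 0.5) -> None:
    """Nudge each feature's weight toward the analyst's verdict."""
    delta = lr if is_real_threat else -lr
    for f in alert_features:
        weights[f] = weights.get(f, 0.0) + delta

weights = {}
# The analyst tags two alerts; the system adapts.
learn_from_tag({"odd_hour_login", "new_geo"}, True, weights)
learn_from_tag({"failed_login", "known_vpn"}, False, weights)

real_like = score({"odd_hour_login", "new_geo"}, weights)
benign_like = score({"failed_login", "known_vpn"}, weights)
```

Over many rounds of tagging, alerts resembling confirmed threats float to the top of the analyst’s queue while look-alikes of dismissed alerts sink, which is how false positives are gradually squeezed out.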

_

The primary objective of AI-based security solutions is to help detect what other controls fail to. Many researchers and vendors are claiming unheard-of accuracy rates for their AI detection schemes. Specific families of threats can be detected very accurately; however, emerging families of threats may display changing characteristics, or characteristics that purposely try to trick AI detection. This makes accuracy metrics relative, as researchers and vendors can only assess detection performance against a small set of threats amongst infinite real-world possibilities. There are good products out there that will truly enhance your security posture; however, accuracy statistics are more of a selling point than a feature to be relied upon. For now, AI detection schemes are strongest alongside human decision makers. This will make them commonplace in environments like security operations centres in the near future, allowing a huge workload to be alleviated with help from AI.

_

Some of the cyber security startups using AI are enlisted here:

  1. Darktrace

Using machine learning techniques inspired by the self-learning intelligence of the human immune system, UK-based startup Darktrace tackles the challenge of detecting previously unidentifiable cyber threats in real time, and allows them to be eradicated more quickly than with traditional approaches.

  2. Jask

JASK, a San Francisco-based startup, is building what it calls “the world’s first predictive security operations center” for enterprise-level cybersecurity. The system aims to help enterprises of all sizes stay ahead of sophisticated cyberattackers by moving past the limitations of existing solutions with proactive AI security measures.

  3. Deep Instinct

Launched in November 2015, this Tel Aviv-based startup is using sophisticated deep learning algorithms to improve cybersecurity in the banking, financial, and government spheres in the U.S and Israel. The Deep Instinct engine is modeled on the human brain’s ability to learn. Once a brain learns to identify an object, it can identify it again in the future instinctively.

  4. Harvest.ai

At Harvest.ai, analytics replicate the processes of top security researchers: searching for changes in the behavior of users, key business systems and applications caused by targeted cyber-attacks. Harvest.ai has successfully applied AI-based algorithms to learn the business value of critical documents across an organization, with an industry-first ability to detect and stop data breaches from targeted attacks and insider threats before data is stolen.

  5. PatternEx

Startup PatternEx tasked itself with securing enterprise data by using a different approach: mimicking the intuition of human security analysts in real time and at scale using a new-gen artificial intelligence platform. PatternEx can be deployed on premises, in the cloud or in a private cloud. It includes a big data platform designed for large data volumes and real-time response, an ensemble of algorithms designed to detect rare behaviors with the goal of identifying new attacks, an active learning feedback loop that continuously improves detection rates over time, and a repository of threat intelligence that can be shared among enterprises. PatternEx recently announced the first artificial intelligence SaaS application for cyber-attack detection at the RSA Conference.

_

Artificial intelligence being turned against spyware: a 2017 study:

Artificial intelligence teaches computers to spot malicious tinkering with their own code. Conventional antiviruses and firewalls are trained like nightclub bouncers to block known suspects from entering the system. But new threats can be added to the wanted list only after causing trouble. If computers could instead be trained as detectives, snooping around their own circuits and identifying suspicious behaviour, hackers would have a harder time camouflaging their attacks. The challenge in cybersecurity is to distinguish between innocent and malicious behaviour on a computer. For this, Alberto Pelliccione, chief executive of ReaQta, a cybersecurity venture in Valletta, Malta, has found an analogous way of educating by experience. ReaQta breeds millions of malware programs in a virtual testing environment known as a sandbox, so that algorithms can inspect their antics at leisure and in safety. It is not always necessary to know what they are trying to steal; simply recording the applications they open and their patterns of operation can be enough. So that the algorithms can learn about business as usual, they then monitor the behaviour of legal software, healthy computers, and ultimately the servers of each new client. Their lesson never ends: the algorithms continue to learn from their users even after being put into operation. In doing so, ReaQta’s algorithms can assess whether programs or computers are behaving unusually. If they are, they inform human operators, who can either shut them down or study the tactics of the malware infecting them. The objective of the artificial intelligence is not to teach computers what we define as good or bad data, but to spot anomalies.
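The behaviour-baselining idea described here can be sketched simply: record which actions programs take during normal operation, then measure how much of a new program’s behaviour falls outside that baseline. The action names and the novelty-ratio threshold are illustrative assumptions, not ReaQta’s actual features.

```python
# Sketch of behaviour-based detection: build a baseline of actions seen
# during normal operation, then flag programs whose observed behaviour
# falls largely outside that baseline.

def build_baseline(observations: list) -> set:
    """observations: lists of actions recorded during normal runs."""
    baseline = set()
    for actions in observations:
        baseline |= set(actions)
    return baseline

def anomaly_ratio(actions: list, baseline: set) -> float:
    """Fraction of a program's actions never seen in the baseline."""
    novel = [a for a in actions if a not in baseline]
    return len(novel) / len(actions)

normal_runs = [["open_browser", "read_cache", "write_log"],
               ["open_browser", "dns_lookup", "write_log"]]
baseline = build_baseline(normal_runs)

benign = anomaly_ratio(["open_browser", "write_log"], baseline)
suspect = anomaly_ratio(["read_keystrokes", "exfil_upload",
                         "write_log"], baseline)   # mostly novel behaviour
```

A high novelty ratio does not prove malice, which is why the passage has the system alerting human operators rather than acting on its own; the baseline also keeps growing as new clients and software are monitored.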

_____

_____

Militarization of Artificial Intelligence:

2015 proved a watershed year for artificial intelligence (AI) systems. Such advanced computing innovations can power autonomous weapons that can identify and strike hostile targets. AI researchers have expressed serious concerns about the catastrophic consequences of such military applications. DoD policy forbids the use of autonomous weapons for targeting human beings. Nevertheless, AI innovations will soon enable potential autonomous weapons that “once activated, can select and engage targets without further intervention by a human operator.” At the same time, advances in remotely operated weapons like drones have geographically separated decision-makers from their weapons at distances measured in thousands of miles. To retain human executive control, military operators rely on communications links with semi-autonomous systems like RPA (remotely piloted aircraft). As adversaries develop an anti-access/area denial operational approach, they will field new electronic/cyber capabilities to undermine the US military’s technological superiority. The data link between RPA and human beings is vulnerable to disruption. Cyber threats against RPA systems will entice militaries to develop autonomous weapon systems that can accomplish their mission without human supervision.

_

In its future operating concept, the US Army predicts autonomous or semiautonomous systems will “increase lethality, improve protection, and extend Soldiers’ and units’ reach.” Moreover, the Army expects autonomous systems to antiquate “the need for constant Soldier input required in current systems.” The Army expects AI to augment decision-making on the battlefield. In the not-too-distant future, autonomous weapons will fundamentally change the ways humans fight wars. Over thirty advanced militaries already employ human-supervised autonomous weapons, such as missile defense, counterbattery, and active protection systems. For intelligence, surveillance and reconnaissance (ISR), the US Air Force is developing autonomous systems that collect, process and analyze information and even generate intelligence.

_

On 28 July 2015, thousands of AI scientists signed an open letter warning about the catastrophic risks from autonomous AI weapons. The scientists expressed:

“Unlike nuclear weapons, AI weapons require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity”. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people. The scientists directly compare AI weapons to nuclear ones. In this way, 2010-2020 is analogous to 1940-1950, the dawn of the atomic age and the nuclear arms race. Admittedly, such analogical reasoning can prove inadequate. Yet, the Manhattan Project and current AI research generate similar ethical debates.

_

Drones and the future of warfare:

At the Smithsonian National Air and Space Museum, tourists can visit the first drone to launch a hellfire air-to-surface missile in combat.  After flying reconnaissance missions in the Balkans, the General Atomics manufactured MQ-1L Predator #3034 received military upgrades to launch missiles.  Just after 9/11, the RPA began striking Al-Qaeda targets in Afghanistan.  Since then, RPA have become a potent symbol for twenty-first century warfare. Author Richard Whittle explains, “The Predator opened the door to what is now a drone revolution because it changed the way people thought about unmanned aircraft…This is a new era in aviation, and we as a society need to figure out how we’re going to cope with it.”  Stanford University’s Amy Zegart writes, “Drones are going to revolutionize how nations and non-state actors threaten the use of violence. First, they will make low-cost, high-credibility threats possible.”  She further explained, “Artificial intelligence and autonomous aerial refuelling could remove human limitations even more, enabling drones to keep other drones flying and keep the pressure on for as long as victory takes.”  Thus, RPA are a critical system for twenty-first century warfare. In a 2013-2038 operating concept, the Air Force states the next generation of RPA must be multi-mission capable, adverse weather capable, net-centric, interoperable and must employ appropriate levels of autonomy.

___

__

Robotics scientist warns of terrifying future as world powers embark on AI arms race:

Autonomous robots with the ability to make life or death decisions and snuff out the enemy could very soon be a common feature of warfare, as a new-age arms race between world powers heats up. Harnessing artificial intelligence — and weaponising it for the battlefield and to gain advantage in cyber warfare — has the US, Chinese, Russian and other governments furiously working away to gain the edge over their global counterparts. But researchers warn of the incredible dangers involved and the “terrifying future” we risk courting. “The arms race is already starting,” said Professor Toby Walsh from UNSW’s School of Computer Science and Engineering. He has travelled to speak in front of the United Nations on a number of occasions in an effort to have the international body prevent the proliferation of killer robots. “It’s not just me but thousands of my colleagues working in the area of robotics … and we’re very worried about the escalation of an arms race,” he said. The US has put artificial intelligence at the centre of its quest to maintain its military dominance. Robot strike teams, autonomous landmines, missiles with decision-making powers, and covert swarms of minuscule robotic spies were among the technological developments touted in a report released by the US Department of Defense. In one particular scenario, a swarm of autonomous drones would hang above a combat zone to scramble the enemy’s communications, provide real-time surveillance, and autonomously fire against the enemy. It’s a future envisioned by more than just the Pentagon. The likes of China — which among other things is building cruise missiles with a certain degree of autonomy — are nipping away at America’s heels. The state-run China Daily newspaper reported that the country had embarked on the development of a cruise missile system with a “high level” of artificial intelligence.
The announcement was thought to be a response to the “semi-autonomous” Long Range Anti-Ship Missile expected to be deployed by the US in 2018.

_

The US Army’s Research, Development, and Engineering Center is also working to bring self-driving vehicles into service. Its most recent well-known experiment involved a group of semi-autonomous army trucks driving on Michigan roads; they were only semi-autonomous because human drivers were aboard as backups. While driving, the trucks used cameras and LIDAR (Light Detection and Ranging), and the four trucks moved as one organism thanks to vehicle-to-vehicle communication. Army engineers claim that within 10-15 years there will be fully autonomous truck convoys in conflict zones. There are, however, many challenges to address: self-driving vehicles must use advanced computation to make decisions and spot obstacles. Another example of artificial intelligence in the military sphere is self-flying helicopters. The K-MAX may be used for firefighting: able to carry up to 6,000 pounds of cargo, this helicopter can drop nearly 2,800 gallons of water on a fire, and because self-flying helicopters can operate at night, fighting wildfires becomes more effective. There is also an interesting example of artificial intelligence technology created for stopping hijackers: a machine that combines the functions of a drone, a helicopter, and a trained sniper. This helicopter, called Vigilante, has a stabilized turret and a special control system.

____

____

Artificial Intelligence and policing:

_

Police forces across the United States are equipping more of their officers with cameras to gather more information out in the field. But with all that footage comes a tsunami of data that’s becoming increasingly difficult to sift through. Taser International, one of the largest manufacturers of police bodycams, wants to bring some of the latest artificial intelligence techniques to make sense of all that footage. Taser is acquiring Dextro, a startup developing image recognition algorithms for video, along with the computer vision team of Fossil Group-owned Misfit. The group will develop services that will give police departments the ability to automatically categorize and analyze the video coming in and make it all searchable. The new AI service will be offered alongside Taser’s cloud storage service. Dextro’s software can help identify three kinds of things that appear on video: objects (like vehicles or weapons), places (like indoors, outdoors or even beaches) and actions (such as traffic stops or foot chases).  American police could have 30,000 officers with these cameras and a list of suspected terrorists with their photos; in future, it is possible that anybody who bears a passing resemblance to a terror suspect will be stopped by police.

_

Police using AI algorithms to predict crimes:

Police in certain cities around the US are experimenting with an AI algorithm that predicts which citizens are most likely to commit a crime in the future. Hitachi announced a similar system back in 2015.

____

Artificial Intelligence and Security Surveillance:

While some traditional security measures in place today do have a significant impact in terms of decreasing crime or preventing theft, today video analytics gives security officers a technological edge that no surveillance camera alone can provide. Surveillance systems that include video analytics analyze video footage in real-time and detect abnormal activities that could pose a threat to an organization’s security. Essentially, video analytics technology helps security software “learn” what is normal so it can identify unusual, and potentially harmful, behavior that a human alone may miss. It does this in two ways: first by observing objects in a monitored environment and detecting when humans and vehicles are present, and second by taking operator feedback about the accuracy of various events and incorporating this intelligence into the system itself, thus improving its functionality. This interaction between operator and technology results in a “teachable” system: Artificial Intelligence at its best in the realm of security, where human oversight ultimately takes a backseat to the fine-tuned capabilities of intelligent video analytics. Eliminating human error is a key driver behind bringing Artificial Intelligence to security through intelligent video analytics. Studies have shown that humans engaged in mundane tasks have a directed attention capacity for up to 20 minutes, after which the human attention span begins to decrease. In addition, when humans are faced with multiple items at one time, attention spans will decrease even more rapidly. Therefore, video analytics are beginning to take the place of initial human judgment in an effort to increase operational efficiency. While a security officer might miss a person sneaking into a poorly lit facility, a camera backed with intelligent video analytics is designed to catch a flash on the screen and recognize it as a potential threat. 
Or it will spot a person loitering at the perimeter of a schoolyard and alert on-the-ground security officials to investigate and take action if necessary, all without missing a beat while keeping close watch on the many cameras and locations. Rather than depend solely on human monitoring, AI-powered systems instead notify security teams of potential threats as they happen, helping businesses prevent break-ins or illegal activity, as well as increasing human accuracy. Artificial Intelligence helps people do their jobs better, thereby making our lives easier and our locations safer. Whether securing our businesses, cities or homes, or providing more curated online shopping and entertainment experiences, Artificial Intelligence is making technology more personal and purposeful than ever before.
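One analytics rule from the passage, loitering detection, can be sketched in a few lines: once tracked people are detected in a monitored zone, alert on anyone observed there longer than a threshold. The track data and the two-minute threshold are illustrative assumptions; real systems derive dwell time from frame-by-frame object tracking.

```python
# Sketch of a loitering rule in video analytics: flag any tracked person
# who stays inside a monitored zone longer than a dwell threshold.

def loitering_alerts(tracks: dict, threshold_s: int = 120) -> list:
    """tracks: person id -> seconds continuously observed in the zone."""
    return [pid for pid, seconds in tracks.items() if seconds > threshold_s]

# Hypothetical dwell times from the tracker, in seconds.
observed = {"person_1": 35, "person_2": 340, "person_3": 90}
alerts = loitering_alerts(observed)
```

Operator feedback would then tune the threshold per camera: a busy bus stop warrants a longer dwell time than a school perimeter at night, which is the “teachable system” interaction the passage describes.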

_____

AI in fashion:

We’ve been relying on computers, their analytics, and their algorithms to give us deeper, broader, and faster knowledge of what people want for a long time now. Surveys, browser cookies, user data, and sales trends all tell us an incredible amount of detail. Why not apply the same logic to fashion? Fashion subscription service Stitch Fix decided to try it last year, and the results are in: computers are really good designers. Stitch Fix’s computers identified the shirt cuts, patterns, and sleeve styles most popular among the company’s subscribers, and mashed them together, with some human help, to create three brand new shirts. All three sold out. So Stitch Fix decided to keep going and design nine more computer-human “hybrid” items this year, this time including dresses, with a plan to create another couple dozen by the end of the year. That adds up to some 40 original designs, comparable to what a famous, well-established couture fashion house puts out in a given season. This is the artificial intelligence/human creativity hybrid at work in fashion: the computers take care of the products customers are already demanding, leaving human designers free to stay creative.

_____

Artificial intelligence glasses help blind people:

Artificial intelligence OrCam glasses help blind people read and recognise loved ones with amazing new technology. Dubbed a “hearing aid for the eyes”, the OrCam device sits on the side of a pair of spectacles and uses optical character recognition (OCR) technology to read printed materials such as newspapers, magazines, road signs and product labels in shops, and it can even recognise the faces of loved ones. The device contains a tiny battery-operated camera and computer which automatically zooms in to find where the text is on a page. The user then presses a button to trigger the reading, or indicates what they want to read by pointing a finger at the text. The smart camera then photographs it, stores it and, using artificial vision software, reads it back out to the user via an earpiece. The cutting-edge technology can also be programmed to identify the faces of friends and family, or favourite products for the weekly shop. The device could offer hope to the thousands who lose their sight to age-related macular degeneration – one of the most common causes of blindness in the elderly and for which there is no cure in its advanced stages – or glaucoma. And it offers hope to those whose sight loss is beyond help through medicine or surgery. OrCam can also help users read bank notes. And the device doesn’t just potentially help the millions of people who have sight problems. It could also be used to tackle literacy problems by assisting people with dyslexia and aphasia, a brain condition which causes sufferers to have problems reading and saying words. And while smartphone apps and Kindles also help people turn text into speech, OrCam can recognise words at a distance or on a rounded surface, such as on tins of food or on banknotes. OrCam’s inventor Dr Yonatan Wexler says the device has been built to help enrich the lives of people with sight loss. 
“Vision is the main sense through which we experience the world, so there are many things that someone who loses his or her sight or has impaired vision is unable to experience,” he says. “It’s very hard to live like that.” He says OrCam acts like a “personal assistant with eyes and ears”, identifying people as they walk up to users with sight problems and delivering information the way a smartphone or watch might. “It’s hard for the visually impaired to get information, so OrCam’s solution brings them that information. Its camera is able to read the text before it aloud – in other words, it’s a camera that looks and talks.”
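The capture-recognise-speak pipeline described above can be sketched as follows. The OCR and text-to-speech stages here are stand-in functions invented for illustration; the real device uses trained vision software and an earpiece:

```python
# Toy sketch of the OrCam-style pipeline: point -> capture -> OCR -> speak.

def capture_region(scene, pointer):
    # The user points at what to read; the camera zooms in on that region.
    # Here a "scene" is just a dict of region name -> raw text.
    return scene.get(pointer, "")

def recognise_text(raw):
    # Stand-in for optical character recognition (OCR).
    return raw.strip().upper()

def speak(text, spoken_log):
    # Stand-in for the earpiece's text-to-speech output.
    spoken_log.append(text)

scene = {
    "headline":  " markets rally on tech earnings ",
    "price_tag": " 4.99 ",
}
spoken = []
speak(recognise_text(capture_region(scene, "headline")), spoken)
print(spoken[-1])  # MARKETS RALLY ON TECH EARNINGS
```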

____

Artificial intelligence may help predict suicide:

Scientists have developed a new artificial intelligence tool that can predict whether someone will attempt suicide as far off as two years into the future with up to 80 per cent accuracy. Researchers including those from Florida State University in the US accessed a massive data repository containing the electronic health records of about two million patients.  The team then combed through the electronic health records, which were anonymous, and identified more than 3,200 people who had attempted suicide.  Using machine learning to examine all of those details, the algorithms were able to ‘learn’ which combination of factors in the records could most accurately predict future suicide attempts. The machine learns the optimal combination of risk factors. What really matters is how this algorithm and these variables interact with one another as a whole. The algorithms become even more accurate as a person’s suicide attempt gets closer, researchers said.  For example, the accuracy climbs to 92 per cent one week before a suicide attempt when artificial intelligence focuses on general hospital patients. This study provides evidence that we can predict suicide attempts accurately. We can predict them accurately over time, but we are best at predicting them closer to the event. The study appears in the journal Clinical Psychological Science.
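The core point, that it is the learned combination of risk factors rather than any single factor that predicts an attempt, can be illustrated with a tiny logistic regression on synthetic records. This is a minimal sketch on made-up data, not the Florida State model:

```python
# Minimal logistic regression trained by stochastic gradient descent on
# synthetic "health records"; the model learns how risk factors combine.
import math

def predict(weights, bias, record):
    z = bias + sum(w * x for w, x in zip(weights, record))
    return 1 / (1 + math.exp(-z))  # estimated probability of a future attempt

def train(records, labels, lr=0.5, epochs=2000):
    weights, bias = [0.0] * len(records[0]), 0.0
    for _ in range(epochs):
        for record, y in zip(records, labels):
            err = predict(weights, bias, record) - y
            bias -= lr * err
            weights = [w - lr * err * x for w, x in zip(weights, record)]
    return weights, bias

# Synthetic features: [prior self-harm, substance misuse, recent ER visit]
records = [[1, 1, 1], [1, 0, 1], [1, 1, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]]
labels  = [1, 1, 1, 0, 0, 0]  # 1 = later attempt (entirely made-up data)
weights, bias = train(records, labels)

print(round(predict(weights, bias, [1, 1, 1]), 2))  # high estimated risk
print(round(predict(weights, bias, [0, 0, 0]), 2))  # low estimated risk
```

The real study used far richer records and more powerful algorithms, but the principle is the same: the model, not a human rule-writer, discovers which combination of variables matters.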

_____

Could artificial intelligence hold the key to predicting earthquakes?

Can artificial intelligence, or machine learning, be deployed to predict earthquakes, potentially saving thousands of lives around the world? Some seismologists are working to find out. But they know such efforts are eyed with suspicion in the field. “You’re viewed as a nutcase if you say you think you’re going to make progress on predicting earthquakes,” Paul Johnson, a geophysicist at Los Alamos National Laboratory, told Scientific American.  In the past, scientists have used various criteria to try to predict earthquakes, including foreshocks, electromagnetic disturbances and changes in groundwater chemistry. Slow slip events — that is, tectonic motion that unfolds over weeks or months — have also been placed under the microscope for clues to certain earthquakes. But no approach thus far has made a significant difference.  Johnson and his colleagues are now trying a new approach: they are applying machine learning algorithms to massive data sets of measurements — taken continuously before, during and after lab-simulated earthquake events — to try to discover hidden patterns that can illuminate when future artificial quakes are most likely to happen. The team is also applying machine learning analysis to raw data from real earthquakes.  The research has already produced interesting results. The researchers found the computer algorithm picked up on a reliable signal in acoustical data — the ‘creaking and grinding’ noises that occur continuously as the lab-simulated tectonic plates move over time. The algorithm revealed that these noises change in a very specific way as the artificial tectonic system gets closer to a simulated earthquake, which means Johnson can look at the acoustical signal at any point in time and put tight bounds on when a quake might strike. “This is just the beginning,” Johnson says. “I predict, within the next five to 10 years, machine learning will transform the way we do science.”
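The lab finding can be mimicked with a toy signal: synthetic acoustic noise whose amplitude grows as a simulated quake approaches, so that even a simple rolling variance tracks how close failure is. This illustrates the idea only; it is not the Los Alamos code:

```python
# Toy acoustic signal from a simulated fault: the "creaking" gets louder
# as the next simulated quake nears, so its rolling variance rises too.
import random
import statistics

random.seed(0)
CYCLE = 200  # samples between simulated quakes

def acoustic_sample(t):
    closeness = (t % CYCLE) / CYCLE          # 0 just after a quake, near 1 before
    return random.gauss(0, 0.1 + closeness)  # louder grinding near failure

signal = [acoustic_sample(t) for t in range(CYCLE)]

def rolling_variance(sig, window=20):
    return [statistics.pvariance(sig[i:i + window])
            for i in range(len(sig) - window)]

var = rolling_variance(signal)
print(var[0] < var[-1])  # variance grows as the simulated quake approaches
```

In the real experiments, a machine learning model is trained to map such acoustic statistics to time-until-failure, rather than reading a single hand-picked statistic.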

______

Artificial intelligence tool combats trolls:

Google has said it will begin offering media groups an artificial intelligence tool designed to stamp out incendiary comments on their websites. The programming tool, called Perspective, aims to assist editors trying to moderate discussions by filtering out abusive “troll” comments, which Google says can stymie smart online discussions. Perspective is an application programming interface (API), or set of methods for facilitating communication between systems and devices, that uses machine learning to rate how comments might be regarded by other users. The system, which will be provided free to media groups including social media sites, is being tested by The Economist, The Guardian, The New York Times and Wikipedia. Many news organisations have closed down their comments sections for lack of sufficient human resources to monitor the postings. Google has been testing the tool with The New York Times, which wanted to find a way to maintain a “civil and thoughtful” atmosphere in reader comment sections. Perspective’s initial task is to spot toxic language in English, but the goal was to build tools for other languages, and which could identify when comments are “unsubstantial or off-topic”. Twitter said that it too would start rooting out hateful messages, which are often anonymous, by identifying the authors and prohibiting them from opening new accounts, or hiding them from internet searches. Last year, Google, Twitter, Facebook and Microsoft signed a “code of good conduct” with the European Commission, pledging to examine most abusive content signalled by users within 24 hours. 
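A crude sketch of the idea behind such moderation tools (not Google’s actual Perspective model, which learns its scoring from millions of labelled comments): score each comment for toxicity and surface only the high-scoring ones for human review.

```python
# Naive toxicity scorer: the fraction of a comment's words that appear
# in a (toy) vocabulary of abusive markers. Real systems learn these
# signals from labelled examples instead of using a hand-made word list.

TOXIC_MARKERS = {"idiot", "stupid", "moron", "hate"}

def toxicity_score(comment):
    words = [w.strip(".,!?").lower() for w in comment.split()]
    if not words:
        return 0.0
    return sum(w in TOXIC_MARKERS for w in words) / len(words)

def moderation_queue(comments, threshold=0.2):
    # Comments an editor should review before they reach the site.
    return [c for c in comments if toxicity_score(c) >= threshold]

comments = [
    "I disagree, the data points the other way.",
    "You idiot, shut up!",
    "Interesting piece, thanks for sharing.",
]
print(moderation_queue(comments))  # only the abusive comment is queued
```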

_______

Make better humans:

Artificial intelligence has the potential to teach us to be better drivers, better teachers, better writers and overall better people.  Today, AI is pervasive in many of our daily routines, from shopping online to driving cars to organizing our photos. AI is not only more efficient, but often tackles problems in new and interesting ways that differ from the established norms that human experts have developed.

  1. Racism, sexism and discrimination

Diversity in tech has been a big issue these days, with studies showing that the job market and workplace have a cultural bias against women and minorities. To combat an enormous social problem, we first have to document and understand how and when discrimination occurs, and raise awareness about the issue. Some social media companies are using visual-recognition AI to do just that. A social network for young professionals automatically analyzes user-uploaded photos to see if there is a correlation between traits that aren’t listed in users’ text profiles (skin color, gender, weight and so on) and the number of recruiters who pass over them. By understanding and proving these linkages, the company is able to help raise awareness and consciously combat these subconscious biases.

  2. Online harassment

The Internet is not a civilized place. It is a place where trolls roam free, hiding behind relative anonymity and the safety of impersonal interactions to justify online harassment, threats and general bad behavior. And until online trolls and bullies reform, it’s up to AI to help make the Internet a safer space for both free speech and human beings. A popular dating app on the market uses AI technology to filter out nudity and potentially offensive images from its users’ photos. The neat thing about AI is you don’t have to tell it what’s “offensive” — the algorithm learns from examples and makes determinations based on feedback from users. So if enough women flag men wearing fedoras as offensive, the AI learns to recognize that concept in the future.

  3. Black market

With the black market estimated at trillions of dollars a year, and with a gun easier to buy than a book in some places, the question of how we keep society safe has been top of mind lately. For marketplaces on the internet, AI is playing a large role in moderating what users put up for sale — whether it’s guns, drugs, live exotic animals or other illegal items. Several online auction sites use AI to identify when users upload photos of contraband, and prevent them from making a listing. This filtering has a huge impact, because people tend to upload pictures of the real items, but list inaccurate and innocuous text descriptions like “vase of flowers” to avoid traditional text-moderation filters.

These examples are proof that AI can help make humanity better, not just by winning games or driving cars, but also by addressing some of the not-so-great aspects of human nature. But what people often forget is that AI is only able to operate in response to instructions and examples provided by humans, at least for now. So it will take thoughtful and ethical humans to create computer understanding that can elevate the rest of humanity. The choice is ours to make between planting the seeds of the robot apocalypse or the robot renaissance.

_______

_______

Pros and cons of AI:

All our technological inventions, philosophical ideas and scientific theories have gone through the birth canal of the human intellect. Arguably, human brain power is the chief limiting factor in the development of human civilization. Unlike the speed of light or the mass of the electron, human brain power is not an eternally fixed constant. Brains can be enhanced. And, in principle, machines can be made to process information as efficiently as, or more efficiently than, biological nervous systems. Machines will have several advantages: most obviously, faster processing speed. An artificial neuron can operate a million times faster than its biological counterpart. Machine intelligences may also have superior computational architectures and learning algorithms. These “qualitative” advantages, while harder to predict, may be even more important than the advantages in processing power and memory capacity. Furthermore, artificial intellects can be easily copied, and each new copy can, unlike humans, start life fully fledged and endowed with all the knowledge accumulated by its predecessors. The general benefit of artificial intelligence, or AI, is that it replicates decisions and actions of humans without human shortcomings, such as fatigue, emotion and limited time. Machines driven by AI technology are able to perform consistent, repetitious actions without getting tired. It is also easier for companies to get consistent performance across multiple AI machines than it is across multiple human workers.

_______

Pros of Artificial Intelligence:

  1. Perform mundane tasks:

Humans get bored, machines don’t. A.I. allows for more intricate process automation, which increases productivity of resources and takes repetitive, boring labor off the shoulders of humans. They can focus on creative tasks instead.

  2. Faster actions and decisions:

A.I. and cognitive technologies help in making faster actions and decisions. Areas like automated fraud detection, planning and scheduling further demonstrate this benefit.

  3. Error-Free Processing:

To err is human. Properly programmed computers don’t; the only mistakes they make are the ones we program into them. AI processing can ensure error-free processing of data, no matter how large the dataset. Judgement calls, however, are a different matter. Artificial intelligence helps us reduce errors and reach results with a greater degree of accuracy and precision.

  4. Difficult Exploration:

Artificial intelligence and the science of robotics can be put to use in mining and other fuel-exploration processes. These complex machines can also be used to explore the ocean floor, overcoming human limitations. Because of their programming, robots can perform laborious and risky work with greater reliability, and they do not wear out easily. Artificial intelligence is also applied in fields such as space exploration: intelligent robots are fed with information and sent to explore space. With their metal bodies, they are more resistant and better able to endure space and other hostile atmospheres, and they are built and hardened so that they are far less likely to be damaged or break down in a hostile environment.

  5. Taking the Risk:

AI-powered machines are doing jobs humans either can’t do or would have to do very carefully.  With artificial intelligence, you can arguably lessen the risks you expose humans to in the name of research. Take, for example, space exploration and the Mars rover, known as Curiosity. It can travel across the landscape of Mars, exploring it and determining the best paths to take, while learning to think for itself.

  6. Better research outcomes:

AI-based technologies like computer vision help in achieving better outcomes through improved prediction, which can include medical diagnosis, oil exploration and demand forecasting.

  7. Overcome Human handicaps:

Humans frequently reason in biased ways. AIs might be built to avoid such biases.

  • Biases from computational limitations or false assumptions: Some human biases can be seen as assumptions or heuristics that fail to reason correctly in a modern environment, or as satisficing algorithms that do the best possible job given human computational resources.
  • Human-centric biases: People tend to think of the capabilities of non-human minds such as an artificial intelligence as if the minds in question were human. This tendency persists even if humans are explicitly instructed to act otherwise.
  • Biases from socially motivated cognition: It has also been proposed that humans have evolved to acquire beliefs which are socially beneficial, even if those beliefs aren’t true.

______

Cons of artificial intelligence:

  1. High Cost:

Creating artificial intelligence requires huge costs, as these are very complex machines, and their repair and maintenance are costly too. Their software programs need frequent upgrading to cater to the needs of a changing environment and the demand for machines to be smarter by the day. In the case of severe breakdowns, the procedure for recovering lost code and reinstating the system can require huge amounts of time and money.

  2. Unethical:

An important concern regarding the application of artificial intelligence is ethics and moral values. Is it ethically correct to create replicas of human beings? Do our moral values allow us to recreate intelligence? Intelligence is a gift of nature, and the ethical debate over whether human intelligence should be replicated continues. Machines do not have emotions or moral values. They perform what is programmed and cannot make judgments of right or wrong.

  3. No improvement with Experience:

Unlike humans, artificial intelligence cannot be improved with experience; with time, it suffers wear and tear instead. It stores a lot of data, but the way that data can be accessed and used is very different from human intelligence. Machines are unable to alter their responses to changing environments. We are constantly confronted with the question of whether it is really desirable to replace humans with machines. In the world of artificial intelligence, there is nothing like working with a whole heart or passionately. Care and concern are not in the machine-intelligence dictionary. There is no sense of belonging or togetherness, no human touch, and machines fail to distinguish between a hardworking individual and an inefficient one.

  4. No original Creativity:

Do you want creativity or imagination? These are not the forte of artificial intelligence. While machines can help you design and create, they are no match for the power of thinking of the human brain, or even the originality of a creative mind. Human beings are highly sensitive and emotional intellectuals: they see, hear, think and feel, and their thoughts are guided by feelings, which machines completely lack. The inherent intuitive abilities of the human brain cannot be replicated. Machines cannot take decisions when they encounter a situation unfamiliar to them; they either perform incorrectly or break down in such situations.

  5. Lack human touch:

The idea of machines replacing human beings sounds wonderful. It appears to save us from all the pain. But is it really so exciting? Ideas like working wholeheartedly, with a sense of belonging, and with dedication have no existence in the world of artificial intelligence. Imagine robots working in hospitals. Do you picture them showing the care and concern that humans would? Do you think online assistants (avatars) can give the kind of service that a human being would? Concepts such as care, understanding and togetherness cannot be understood by machines, which is why, however intelligent they become, they will always lack the human touch.

  6. AI errors:

Artificial intelligence applications are prone to bugs in a variety of spheres. For instance, if an error creeps into AI software in the health-care sector, it may cause an unwarranted death. When presented with an image of alternating yellow and black parallel, horizontal lines, a state-of-the-art AI system saw a school bus and was 99% sure it was right. In addition, erroneous code could cause long-term damage to national economies, since AI is being implemented in strategic industrial systems.

  7. Quality:

A slight dip in quality is bound to result in heavy damages, not only to the company but also to the country. Buggy AI software used in a weapons system could target the wrong object and fail to meet the army’s requirements.

  8. Bad judgement calls:

AI does not have the ability to make a judgement call, and may never get that ability. A good example happened in Sydney, Australia in 2014, when there was a shooting and hostage drama downtown. People began ringing up Uber to get out of the affected area, and because of the surge in demand in a concentrated area, Uber’s algorithms fell back on the trusted economics of supply and demand, and ride rates skyrocketed. The Uber algorithms didn’t take into account the violent crisis impacting downtown, and affected riders didn’t care about the algorithmic explanation: they were livid that they had been gouged at a time of crisis. It forced Uber to re-evaluate how it handles such emergencies. Perhaps in the future it will handle them better, but for a few Aussies, it left a bad taste in their mouths. Intelligence is a fine balance of emotions and skill that is constantly developing. Today, shades of gray exist when we make judgements. Our behavior is an outcome of the world around us – the more artificial it becomes, the more our decisions are reduced to simply right or wrong, rather than the quick mid-course corrections that make us human. Replacing adaptive human behavior with rigid, artificial intelligence could cause irrational behavior within ecosystems of people and things. Recent studies suggest, for example, that algorithmic trading between banks was at least partly responsible for the financial crisis of 2008; the flash crash of sterling in 2016 has similarly been linked to a panicky bot-spiral.
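The pricing failure can be made concrete with a hypothetical sketch: a pure supply-and-demand multiplier, and the kind of emergency override that human judgment had to add afterwards. The numbers and function names here are invented for illustration:

```python
# Hypothetical surge pricing: demand per driver sets the fare multiplier.
def naive_surge(requests, drivers):
    return max(1.0, requests / max(drivers, 1))

def surge_with_override(requests, drivers, emergency=False):
    # The post-2014 lesson, sketched: during a declared emergency,
    # suspend the multiplier instead of letting the algorithm gouge riders.
    multiplier = naive_surge(requests, drivers)
    return 1.0 if emergency else multiplier

# A crisis downtown: 400 ride requests, 50 nearby drivers.
print(naive_surge(400, 50))                          # 8.0x surge
print(surge_with_override(400, 50, emergency=True))  # held at 1.0x
```

The point is that the override is not something the algorithm derives from supply and demand; it encodes a human judgment about when the rule should not apply.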

  9. Perverse Instantiation:

What that means is an AI can be programmed with a benign goal but implement it in a perverse manner just because the solution is logical and expeditious. So if there is a problem with the food supply, an AI’s solution may be to reduce the population by any means available rather than find ways to increase food production or decrease food waste.

  10. A concentration of power:

AI could mean a lot of power will be in the hands of a few who are controlling it. AI de-humanizes warfare as the nations in possession of advanced AI technology can kill humans without involving an actual human to pull the trigger.

  11. Job losses:

There is little doubt that artificial intelligence will displace many low-skilled jobs. Arguably, robots have already taken many jobs on the assembly line – but now this could extend to new levels. Take, for example, the concept of driverless cars, which could very quickly displace millions of human drivers, from taxi drivers to chauffeurs. Replacement of humans with machines can lead to large-scale unemployment, a socially undesirable phenomenon. According to some experts, robots will have taken over most jobs within 30 years, leaving humanity facing its ‘biggest challenge ever’: finding meaning in life when work is no longer necessary. Professor Moshe Vardi, of Rice University in the US, claims that many middle-class professions will be outsourced to machines within the next few decades, leaving workers with more leisure time than they have ever experienced, and that the rise of robots could push unemployment rates above 50 per cent. Robots are doing more and more jobs that people used to do – pharmacists, prison guards, boning chicken, bartending – and more and more of these jobs are being mechanised. A World Economic Forum study in 2016 predicted that around 5.1 million jobs will be lost to artificial intelligence over the next five years alone, across 15 countries. While the fear of job loss is understandable, there is another point to make: because of artificial intelligence, many people are now doing jobs that weren’t available even a few years ago. Take marketers, for example: managing marketing technology is now a full-time job, so alongside designers and copywriters is a new breed of marketer trained to purposefully promote content to a uniquely tailored audience. Artificial Intelligence could either lead to global mass unemployment or create new jobs that we cannot yet imagine.  
Of course, some would argue that artificial intelligence will create more wealth than it destroys – but there is a genuine risk that this wealth will not be distributed evenly, particularly during its early expansion. Analysts differ widely on the projected impact: a 2016 study by the Organisation for Economic Co-operation and Development estimates that 9% of jobs will be displaced, whereas a 2013 study by Oxford University estimates that job displacement will reach 47%. The staggering difference illustrates how speculative the impact of AI remains.

_______

________

Ethical issues in artificial intelligence:

The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings. It is typically divided into roboethics, a concern with the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs). The term “robot ethics” (sometimes “roboethics”) refers to the morality of how humans design, construct, use and treat robots and other artificially intelligent beings. It considers both how artificially intelligent beings may be used to harm humans and how they may be used to benefit humans. Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral. Isaac Asimov considered the issue in the 1950s in his I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances. Many researchers have argued that, by way of an “intelligence explosion” sometime in the 21st century, a self-improving AI could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals.

_

Ethical questions:

Many AI experts agreed that ethical considerations must be at the forefront of research. “One thing I’m seeing among my own faculty is the realization that we, technologists, computer scientists, engineers who are building AI, have to appeal to someone else to create these programs,” said Moore. When designing a driverless car, for example, how does the car decide what to do when an animal comes into the road? When you write the code, he said, there’s the question: how much is an animal’s life worth next to a human’s life? “Is one human life worth the lives of a billion domestic cats? A million? A thousand? I would hate to be the person writing that code.” We need a discussion to come up with these answers. “I think we’d agree that many people have completely different personal thoughts as to what’s valuable.” And the problems could become even more complex. “None of us are even touching this at the moment, but what if that car is going to hit a pedestrian, and the pedestrian might be pregnant? How much does that affect the car’s decision?” asked Moore. “These are not problems that we computer scientists and engineers are going to solve on our own. Someone has to come up with an answer.” Richardson, head of the Campaign Against Sex Robots, worries about the “ongoing erosion of the distinction between human and machines.” Her work shows how detrimental sex robots can be to humans, by creating an asymmetrical relationship of power. While she doesn’t see that becoming a real thing very soon, Richardson thinks that in future we will “start to see artificial avatars acting in cyberspace like persons,” albeit modified.  Machine learning poses a host of ethical questions. Systems which are trained on datasets collected with biases may exhibit these biases upon use, thus digitizing cultural prejudices such as institutional racism and classism. Responsible collection of data is therefore a critical part of machine learning. 
Because language contains biases, machines trained on language corpora will necessarily also learn bias.
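This bias-in-corpora point can be demonstrated with word vectors. The three-dimensional vectors below are made up for illustration; embeddings trained on real text exhibit the same pattern, with occupation words like “nurse” sitting closer to one gendered pronoun than the other:

```python
# Cosine similarity between toy word vectors: an occupation word ends
# up nearer one gendered pronoun than the other, mirroring the corpus.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

vectors = {  # hypothetical 3-d embeddings from a gendered corpus
    "he":    [1.0, 0.1, 0.2],
    "she":   [0.1, 1.0, 0.2],
    "nurse": [0.2, 0.9, 0.3],
}

bias = (cosine(vectors["nurse"], vectors["she"])
        - cosine(vectors["nurse"], vectors["he"]))
print(bias > 0)  # "nurse" leans toward "she" in this toy corpus
```

Because the vectors merely summarise co-occurrence statistics, any prejudice baked into the training text is reproduced, and can be measured, in exactly this way.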

_

Robotics ethics:

Isaac Asimov’s Three Laws of Robotics (1940):

First Law:  A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law:  A robot must obey the orders given it by human beings, unless such orders would conflict with the first law.

Third Law:  A robot must protect its own existence, as long as such protection does not conflict with the first or second law.

_

Isaac Asimov’s three laws of robotics need a massive overhaul to safeguard humanity, experts warn:

When science fiction author Isaac Asimov devised his Three Laws of Robotics he was thinking about androids. He envisioned a world where these human-like robots would act like servants and would need a set of programming rules to prevent them from causing harm. We now have a very different conception of what robots can look like and how we will interact with them. The highly evolved field of robotics is producing a huge range of devices, from autonomous vacuum cleaners to military drones to entire factory production lines. The First Law – not harming humans – becomes hugely problematic because there are robots designed for military combat environments. Such devices are built for spying, bomb disposal or load-carrying, and the role of the military is to save the lives of soldiers and civilians, but often by harming its enemies on the battlefield. So the laws might need to be considered from different perspectives or interpretations.

_

Big data, AI and ethics:

Big data and artificial intelligence are undoubtedly important innovations. They have an enormous potential to catalyze economic value and social progress, from personalized healthcare to sustainable cities. It is totally unacceptable, however, to use these technologies to incapacitate the citizen. Big nudging and citizen scores abuse centrally collected personal data for behavioral control in ways that are totalitarian in nature. This is not only incompatible with human rights and democratic principles, but also inappropriate to manage modern, innovative societies. In order to solve the genuine problems of the world, far better approaches in the fields of information and risk management are required. The research area of responsible innovation and the initiative ‘Data for Humanity’ provide guidance as to how big data and artificial intelligence should be used for the benefit of society.

What can we do now?

Even in these times of digital revolution, the basic rights of citizens should be protected, as they are a fundamental prerequisite of a modern, functional, democratic society. This requires the creation of a new social contract, based on trust and cooperation, which sees citizens and customers not as obstacles or resources to be exploited, but as partners. For this, the state would have to provide an appropriate regulatory framework, which ensures that technologies are designed and used in ways that are compatible with democracy. This would have to guarantee informational self-determination, not only theoretically, but also practically, because it is a precondition for us to lead our lives in a self-determined and responsible manner. There should also be a right to get a copy of personal data collected about us. It should be regulated by law that this information must be automatically sent, in a standardized format, to a personal data store, through which individuals could manage the use of their data (potentially supported by particular AI-based digital assistants). To ensure greater privacy and to prevent discrimination, the unauthorised use of data would have to be punishable by law. Individuals would then be able to decide who can use their information, for what purpose and for how long. Furthermore, appropriate measures should be taken to ensure that data is securely stored and exchanged. Sophisticated reputation systems considering multiple criteria could help to increase the quality of information on which our decisions are based. If data filters and recommendation and search algorithms were selectable and configurable by the user, we could look at problems from multiple perspectives, and we would be less prone to manipulation by distorted information. In addition, we need an efficient complaints procedure for citizens, as well as effective sanctions for violations of the rules.
Finally, in order to create sufficient transparency and trust, leading scientific institutions should act as trustees of the data and algorithms that currently evade democratic control. This would also require an appropriate code of conduct that, at the very least, would have to be followed by anyone with access to sensitive data and algorithms—a kind of Hippocratic Oath for IT professionals.

_

Can AI tackle complex social problems?

AI technologies can undoubtedly create tremendous productivity and efficiency gains. AI might also allow us to solve some of the most complex problems of our time. But we need to make political and social choices about the parts of human life in which we want to introduce these technologies, at what cost and to what end. Technological advancement has resulted in a growth in national incomes and GDP, yet the share of national income going to labour has dropped in developing countries. Productivity and efficiency gains are thus not in themselves conclusive indicators of where to deploy AI – rather, we need to consider the distribution of these gains. Productivity gains are also not equally beneficial to all – incumbents with data and computational power will be able to use AI to gain insight and market advantage. Moreover, a bot might be able to make more accurate judgments about worker performance and future employability, but we need to have a more precise handle on the problem that is being addressed by such improved accuracy. AI might be able to harness the power of big data to address complex social problems. Arguably, however, our inability to address these problems has not been a result of incomplete data – for a number of decades now we have had enough data to make reasonable estimates about the appropriate course of action. It is the lack of political will and social and cultural behavioural patterns that have posed obstacles to action, not the lack of data. The purpose of AI in human life must not be merely assumed as obvious, or subsumed under the banner of innovation, but be seen as involving complex social choices that must be steered through political deliberations.

_

We must realize that big data, like any other tool, can be used for good and bad purposes. In this sense, the decision by the European Court of Justice against the Safe Harbour Agreement on human rights grounds is understandable. States, international organizations and private actors now employ big data in a variety of spheres. It is important that all those who profit from big data are aware of their moral responsibility. For this reason, the Data for Humanity Initiative was established, with the goal of disseminating an ethical code of conduct for big data use. This initiative advances five fundamental ethical principles for big data users:

  1. “Do no harm”. The digital footprint that everyone now leaves behind exposes individuals, social groups and society as a whole to a certain degree of transparency and vulnerability. Those who have access to the insights afforded by big data must not harm third parties.
  2. Ensure that data is used in such a way that the results will foster the peaceful coexistence of humanity. The selection of content and access to data influences the world view of a society. Peaceful coexistence is only possible if data scientists are aware of their responsibility to provide fair and unbiased access to data.
  3. Use data to help people in need. In addition to being economically beneficial, innovation in the sphere of big data could also create additional social value. In the age of global connectivity, it is now possible to create innovative big data tools which could help to support people in need.
  4. Use data to protect nature and reduce pollution of the environment. One of the biggest achievements of big data analysis is the development of efficient processes and synergy effects. Big data can only offer a sustainable economic and social future if such methods are also used to create and maintain a healthy and stable natural environment.
  5. Use data to eliminate discrimination and intolerance and to create a fair system of social coexistence. Social media has created a strengthened social network. This can only lead to long-term global stability if it is built on the principles of fairness, equality and justice.

________

The momentous advance in artificial intelligence demands a new set of ethics:

How much (and what kind of) control should we relinquish to driverless cars, artificial diagnosticians, or cyber guardians? How should we design appropriate human control into sophisticated AI that requires us to give up some of that very control? Is there some AI that we should just not develop if it means any loss of human control? How much of a say should corporations, governments, experts or citizens have in these matters? These important questions, and many others like them, have emerged in response, but remain unanswered. They require human, not human-like, solutions. Answering these questions will require input from the right mix of people; AI researchers alone can only hope to contribute partial solutions. As we’ve learned throughout history, scientific and technical solutions don’t necessarily translate into moral victories. Organisations such as the Open Roboethics initiative and the Foundation for Responsible Robotics were founded on this understanding. They bring together some of the world’s leading ethicists, social scientists, policymakers and technologists to work towards meaningful and informed answers to uniquely human questions surrounding robotics and AI. The process of drafting ethics standards for robotics and AI will involve an interdisciplinary effort.

______

Robot rights:

How do we define the humane treatment of AI?

While neuroscientists are still working on unlocking the secrets of conscious experience, we understand more about the basic mechanisms of reward and aversion. We share these mechanisms with even simple animals. In a way, we are building similar mechanisms of reward and aversion into systems of artificial intelligence. For example, reinforcement learning is similar to training a dog: improved performance is reinforced with a virtual reward. Right now, these systems are fairly superficial, but they are becoming more complex and life-like. Could we consider a system to be suffering when its reward functions give it negative input? What’s more, so-called genetic algorithms work by creating many instances of a system at once, of which only the most successful “survive” and combine to form the next generation of instances. This happens over many generations and is a way of improving a system. The unsuccessful instances are deleted. At what point might we consider genetic algorithms a form of mass murder? Once we consider machines as entities that can perceive, feel and act, it’s not a huge leap to ponder their legal status. Should they be treated like animals of comparable intelligence? Will we consider the suffering of “feeling” machines? Some ethical questions are about mitigating suffering, some about risking negative outcomes. While we consider these risks, we should also keep in mind that, on the whole, this technological progress means better lives for everyone. Artificial intelligence has vast potential, and its responsible implementation is up to us. AI experts warn humanity has to prepare now for ‘The Reckoning’, when robots demand equal rights. The European Parliament is making inroads towards taking an AI future seriously. It has voted to draft a set of regulations to govern the development and use of AI. Included in this proposal is guidance on what it calls ‘electronic personhood’.
If a ‘robot’ copy was actually an embodied version of a real person that had all the same feelings as their originator, what rights should they have? The report also addresses the possibility that AI and robotics will play a central role in job losses and calls for a feasibility assessment of universal basic income.
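The survive-combine-delete cycle of genetic algorithms described above can be sketched in a few lines of Python. This is a toy illustration only; the fitness function and parameter values are hypothetical, chosen to show the mechanism rather than any real application:

```python
import random

def fitness(x):
    # Hypothetical objective: instances closer to 3.0 are "more successful".
    return -(x - 3.0) ** 2

def evolve(generations=50, pop_size=20, keep=5, seed=0):
    rng = random.Random(seed)
    population = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Rank instances; only the most successful "survive".
        population.sort(key=fitness, reverse=True)
        survivors = population[:keep]
        # The unsuccessful instances are deleted; survivors combine
        # (crossover) with small random mutation to form the next generation.
        population = survivors[:]
        while len(population) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = (a + b) / 2 + rng.gauss(0, 0.1)
            population.append(child)
    return max(population, key=fitness)

best = evolve()
print(f"best instance: {best:.2f}")  # converges near 3.0
```

Each generation discards most instances outright, which is exactly the practice the ethical question above is probing.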

_

Robots that take people’s jobs should pay taxes, says Bill Gates:

Bill Gates has called for a tax on robots to make up for lost taxes from workers whose jobs are destroyed by automation.  The Microsoft founder and world’s richest man said the revenue from a robot tax could help fund more health workers and people in elderly and child care, areas that are still expected to rely on humans. His comments come amid growing concerns about how robots and artificial intelligence will change the workforce, with experts predicting that most jobs will be rendered obsolete over the next 30 years. However, a problem could crop up. Any robot smart enough to be liable for tax would be smart enough to evade tax. On the principle that it takes a robot to catch a robot, the tax authorities could hire silicon sleuths to nab such evaders. However, such robotic revenue collectors might be as susceptible to bribery as humans.

_______

_______

Slow progress of AI research:

Since the launch of AI research in 1956, progress in this field has slowed over time, stalling the aim of creating machines capable of intelligent action at the human level. A possible explanation for this delay is that computers lack a sufficient scope of memory or processing power. In addition, the level of complexity involved in AI research may itself limit its progress. While most AI researchers believe that strong AI can be achieved in the future, some individuals, like Hubert Dreyfus and Roger Penrose, deny the possibility of achieving it. John McCarthy was one of various computer scientists who believed human-level AI will be accomplished, but that a date cannot accurately be predicted. Conceptual limitations are another possible reason for the slowness of AI research. AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: “the framework starts from Weizenbaum’s observation that intelligence manifests itself only relative to specific social and cultural contexts”. Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, yet ironically have struggled to develop a computer capable of carrying out tasks that are simple for humans to do. A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent. However, the question of whether thoughts and the creator of those thoughts are separable has intrigued AI researchers. The problems encountered in AI research over the past decades have further impeded its progress. Failed predictions made by AI researchers, together with the lack of a complete understanding of human behaviour, have diminished the primary idea of human-level AI.
Although the progress of AI research has brought both improvement and disappointment, most investigators remain optimistic about achieving the goal of AI in the 21st century. Other possible reasons have been proposed for the slow progress of strong AI research. The intricacy of scientific problems, and the need to fully understand the human brain through psychology and neurophysiology, have kept many researchers from emulating the function of the human brain in computer hardware. Many researchers tend to underestimate the uncertainty involved in future predictions of AI, but unless such doubts are taken seriously, people may overlook solutions to problematic questions. Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment. When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning. Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task. The practice of abstraction, which people tend to redefine when working within a particular research context, allows researchers to concentrate on just a few concepts. The most productive use of abstraction in AI research comes from planning and problem solving. Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators. A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is an area where a significant gap remains between computer performance and human performance. The specific functions that are programmed into a computer may be able to account for many of the requirements that allow it to match human intelligence.
These explanations are not necessarily guaranteed to be the fundamental causes of the delay in achieving strong AI, but they are widely agreed upon by numerous researchers. Many AI researchers debate whether machines should be created with emotions. There are no emotions in typical models of AI, and some researchers say that programming emotions into machines would allow them to have a mind of their own. Emotion sums up the experiences of humans because it allows them to remember those experiences. David Gelernter writes, “No computer will be creative unless it can simulate all the nuances of human emotion.” This concern about emotion has posed problems for AI researchers, and it connects to the concept of strong AI as its research progresses into the future.

_

Contrary view of Ray Kurzweil:

Dr Kurzweil is considered one of the most radical figures in the field of technological prediction. His credentials stem from being a pioneer in various fields of computing, such as optical character recognition and automatic speech recognition by machine. Kurzweil believes evolution provides evidence that humans will one day create machines more intelligent than they are. He presents his law of accelerating returns to explain why “key events” happen more frequently as time marches on. It also explains why the computational capacity of computers is increasing exponentially. Kurzweil writes that this increase is one ingredient in the creation of artificial intelligence; the others are automatic knowledge acquisition and algorithms like recursion, neural networks, and genetic algorithms. Kurzweil predicts machines with human-level intelligence will be available from affordable computing devices within a couple of decades, revolutionizing most aspects of life. He says nanotechnology will augment our bodies and cure cancer even as humans connect to computers via direct neural interfaces or live full-time in virtual reality. Kurzweil predicts the machines “will appear to have their own free will” and even “spiritual experiences”. He says humans will essentially live forever as humanity and its machinery become one and the same. He predicts that intelligence will expand outward from earth until it grows powerful enough to influence the fate of the universe. Kurzweil describes his law of accelerating returns, which predicts an exponential increase in technologies like computers, genetics, nanotechnology, robotics and artificial intelligence. He says this will lead to a technological singularity in the year 2045, a point where progress is so rapid it outstrips humans’ ability to comprehend it. Reviewers appreciated Kurzweil’s track record with predictions, his ability to extrapolate technology trends, and his clear explanations.
However, there was disagreement on whether computers will one day be conscious. Philosophers John Searle and Colin McGinn insist that computation alone cannot possibly create a conscious machine. Searle deploys a variant of his well-known Chinese room argument, this time tailored to computers playing chess, a topic Kurzweil covers. Searle writes that computers can only manipulate symbols which are meaningless to them, an assertion which, if true, subverts much of the vision of Kurzweil’s arguments.
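The arithmetic behind the law of accelerating returns is simple to illustrate. In the sketch below, the two-year doubling period and unit starting capacity are assumed figures chosen for illustration, not Kurzweil's own estimates:

```python
def capability(years, doubling_period=2.0, start=1.0):
    # Exponential growth: capacity doubles once per doubling period.
    return start * 2 ** (years / doubling_period)

# Forty years of doubling every two years gives 2**20,
# roughly a millionfold increase over the starting capacity.
print(capability(40))  # 1048576.0
```

The point of the exponential framing is that most of the growth arrives near the end of the interval, which is why such curves routinely defeat linear intuition.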

______

______

Risks, safety and control of AI:

__

Stephen Hawking, Elon Musk, and Bill Gates warn about Artificial Intelligence:

The inherent dangers of such powerful technology have inspired several leaders in the scientific community to voice concerns about Artificial Intelligence. British theoretical physicist Stephen Hawking said that super-smart robots might destroy the human race not for nefarious reasons, but because we program them poorly through our own incompetence. “The real risk with AI isn’t malice but competence,” he said. “A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.” He drew an analogy to how we treat ants, with the super-smart robots standing in for us, and the ants standing in for humanity. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. In other words, future robots may not intend to destroy humanity, but may do it by accident, or because of poor design by their human creators. Mr. Hawking recently joined Elon Musk, Steve Wozniak, and hundreds of others in issuing a letter unveiled at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina. The letter warns that artificial intelligence can potentially be more dangerous than nuclear weapons. The ethical dilemma of bestowing moral responsibilities on robots calls for rigorous safety and preventative measures that are fail-safe, or the threats are too significant to risk. Elon Musk called the prospect of artificial intelligence “our greatest existential threat” in a 2014 interview with MIT students at the AeroAstro Centennial Symposium. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” Mr.
Musk cites his decision to invest in the Artificial Intelligence firm DeepMind as a means to “just keep an eye on what’s going on with artificial intelligence. I think there is potentially a dangerous outcome there.” Microsoft co-founder Bill Gates has also expressed concerns about Artificial Intelligence. During a Q&A session on Reddit in January 2015, Mr. Gates said, “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.” The threats enumerated by Hawking, Musk, and Gates are real and worthy of our immediate attention, despite the immense benefits artificial intelligence can potentially bring to humanity. As robot technology advances steadily toward the level necessary for widespread implementation, it is becoming clear that robots are going to find themselves in situations that present a number of possible courses of action.

_

Barrat’s core argument, which he borrows from the A.I. researcher Steve Omohundro, is that the drive for self-preservation and resource acquisition may be inherent in all goal-driven systems of a certain degree of intelligence. In Omohundro’s words, “if it is smart enough, a robot that is designed to play chess might also want to build a spaceship,” in order to obtain more resources for whatever goals it might have. A purely rational artificial intelligence, Barrat writes, might expand “its idea of self-preservation … to include proactive attacks on future threats,” including, presumably, people who might be loath to surrender their resources to the machine. Barrat worries that “without meticulous, countervailing instructions, a self-aware, self-improving, goal-seeking system will go to lengths we’d deem ridiculous to fulfil its goals,” even, perhaps, commandeering all the world’s energy in order to maximize whatever calculation it happened to be interested in. Of course, one could try to ban super-intelligent computers altogether. But “the competitive advantage—economic, military, even artistic—of every advance in automation is so compelling,” Vernor Vinge, the mathematician and science-fiction author, wrote, “that passing laws, or having customs, that forbid such things merely assures that someone else will.” If machines will eventually overtake us, as virtually everyone in the A.I. field believes, the real question is about values: how we instil them in machines, and how we then negotiate with those machines if and when their values are likely to differ greatly from our own.
The British cyberneticist Kevin Warwick once asked, “How can you reason, how can you bargain, how can you understand how that machine is thinking when it’s thinking in dimensions you can’t conceive of?” If there is a hole in Barrat’s dark argument, it is in his glib presumption that if a robot is smart enough to play chess, it might also “want to build a spaceship”—and that tendencies toward self-preservation and resource acquisition are inherent in any sufficiently complex, goal-driven system. For now, most of the machines that are good enough to play chess, like I.B.M.’s Deep Blue, haven’t shown the slightest interest in acquiring resources. But before we get complacent and decide there is nothing to worry about after all, it is important to realize that the goals of machines could change as they get smarter. Once computers can effectively reprogram themselves, and successively improve themselves, leading to a so-called “technological singularity” or “intelligence explosion,” the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed.

_

The Risks of AI:

The pace of AI’s development requires an overdue conversation between technology and policy leaders about the ethics, legalities and real life disruptions of handing over our most routine tasks to what we used to just call “machines.”

How far can we trust AI with such control over the Internet of Things, including our health, financial, and national defense decisions?

There is a service to be done in developing a deeper understanding of the reasonable precautions needed to mitigate coding flaws, attackers, infections and mistakes, while enumerating the risks and their likelihoods. Applied to military systems the risks are obvious, but commercial products designed by AI could produce a wide range of unexpected negative outcomes. One example might be designing fertilizers that help reduce atmospheric carbon. The Environmental Protection Agency tests such products before they are approved so dangerous ones can be discovered before they are released. But if AI only designs products that will pass the tests, is that AI designing inherently safe products or simply ones capable of bypassing the safeguards? One way to start addressing this question is to build AI and observe its behavior in simplified settings where humans are still smarter. RAND produced a simulation of the fertilizer scenario that projected global temperatures and populations 75 years into the future. When the AI was given only three chemicals to learn from, the EPA was able to partially limit the dangers. But once the AI was provided delayed-release agents common in fertilizer manufacturing, it completely bypassed the protections and started reducing the number of carbon producers in the environment. The same types of issues could exist for all manner of potentially dangerous products, like those regulated by the Food and Drug Administration, the National Highway Traffic Safety Administration, the Bureau of Alcohol, Tobacco, Firearms and Explosives and countless other regulatory agencies. And that doesn’t even consider the threats that could be posed by AI-designed products made abroad.

Can the risks posed by AI be completely eliminated?

The short answer is no, but they are manageable, and need not be cause for alarm. The best shot at providing adequate safeguards would be regulating the AI itself: requiring the development of testing protocols for the design of AI algorithms, improved cybersecurity protections, and input validation standards—at the very least. Those protections would need to be specifically tailored to each industry or individual application, requiring countless AI experts who understand the technologies, the regulatory environment, and the specific industry or application. At the same time, regulatory proposals should be crafted to avoid stifling development and innovation. AI needs to enter the public and political discourse with real-world discussion between tech gurus and policymakers about the applications, implications and ethics of artificial intelligence. Specialized AI for product design may be possible today, but answering broad questions such as, “Will this action be harmful?” is well outside the capabilities of AI systems, and probably their designers as well. Answering such questions might seem like an impossible challenge, but there are signs of hope. First, the risks with AI, as with most technologies, can be managed. But the discussions have to start. And second, unlike in an AI-themed Hollywood thriller, these machines are built to work with humankind, not against it. It will take an army of human AI experts to keep it that way, but precautions can and should be sought now.

__

What is the problem?

Timeline:

According to many machine learning researchers, there has been substantial progress in machine learning in recent years, and the field could potentially have an enormous impact on the world. It appears possible that the coming decades will see substantial progress in artificial intelligence, potentially even to the point where machines come to outperform humans in many or nearly all intellectual domains, though it is difficult or impossible to make confident forecasts in this area. For example, recent surveys of researchers in artificial intelligence found that many researchers assigned a substantial probability to the creation of machine intelligences “that can carry out most human professions at least as well as a typical human” in 10-40 years. Following Müller and Bostrom, who organized the survey, we will refer to such machine intelligences as “high-level machine intelligences” (HLMI).

_

Loss of control of advanced agents:

In addition to significant benefits, creating advanced artificial intelligence could carry significant dangers. One potential danger that has received particular attention—and has been the subject of particularly detailed arguments—is the one discussed by Prof. Nick Bostrom in his 2014 book Superintelligence. Prof. Bostrom has argued that the transition from high-level machine intelligence to AI much more intelligent than humans could potentially happen very quickly, and could result in the creation of an extremely powerful agent whose objectives are misaligned with human interests. This scenario, he argues, could potentially lead to the extinction of humanity. Stuart Russell (a Professor of Computer Science at UC Berkeley and co-author of a leading textbook on artificial intelligence) has expressed similar concerns. While it is unlikely that these specific scenarios would occur, they are illustrative of a general potential failure mode: an advanced agent with a seemingly innocuous, limited goal could seek out a vast quantity of physical resources—including resources crucial for humans—in order to fulfil that goal as effectively as possible. To be clear, the risk Bostrom and Russell are describing is not that an extremely intelligent agent would misunderstand what humans would want it to do and then do something else. Instead, the risk is that intensely pursuing the precise (but flawed) goal that the agent is programmed to pursue could pose large risks. When tasks are delegated to opaque autonomous systems, there can be unanticipated negative consequences. Jacob Steinhardt, a PhD student in computer science at Stanford University and a scientific advisor to the Open Philanthropy Project, suggested that as such systems become increasingly complex in the long term, “humans may lose the ability to meaningfully understand or intervene in such systems, which could lead to a loss of sovereignty if autonomous systems are employed in executive-level functions (e.g.
government, economy).” It seems plausible that advances in artificial intelligence could eventually enable superhuman capabilities in areas like programming, strategic planning, social influence, cybersecurity, research and development, and other knowledge work. These capabilities could potentially allow an advanced artificial intelligence agent to increase its power, develop new technology, outsmart opposition, exploit existing infrastructure, or exert influence over humans.

_

Peace, security, and privacy:

It seems plausible to us that highly advanced artificial intelligence systems could potentially be weaponized or used for social control. For example:

  • In the shorter term, machine learning could potentially be used by governments to efficiently analyze vast amounts of data collected through surveillance.
  • Cyberattacks in particular—especially if combined with the trend toward the “Internet of Things”—could potentially pose military/terrorist risks in the future.
  • The capabilities described above—such as superhuman capabilities in areas like programming, strategic planning, social influence, cybersecurity, research and development, and other knowledge work—could be powerful tools in the hands of governments or other organizations. For example, an advanced AI system might significantly enhance or even automate the management and strategy of a country’s military operations, with strategic implications different from the possibilities associated with autonomous weapons. If one nation anticipates such advances on the part of another, it could potentially destabilize geopolitics, including nuclear deterrence relationships.

_

Other potential concerns

There are a number of other possible concerns related to advanced artificial intelligence that we have not examined closely, including social issues such as technological disemployment and the legal and moral standing of advanced artificial intelligence agents.

_____

The core reasoning behind the fears and doubts surrounding AI can be summarized as follows:

  • By giving machines self-learning, reasoning, and self-evolving capabilities, who can guarantee that they will not reach conclusions their designers never predicted, resulting in fatal actions?
  • With the integration of AI into technologies such as IoT and 3D printers, the impact of AI software is no longer confined to the software world – such AI solutions can make physical impacts and take control in the real world.
  • Who can guarantee what hackers or other malicious actors could do with such complicated AI technology?

On the other hand, supporters of AI list all the unbelievable opportunities and benefits that can be provided by enhancement of AI:

  • Unbelievable acceleration of calculation and analysis of huge amounts of data, helping us to make faster and more accurate decisions.
  • Saving human lives by preventing human errors, as in the case of self-driving cars preventing accidents.
  • Doing the consistent, repetitive tasks in mass quantities without getting tired or being affected by personal emotions.
  • Enabling a much easier and human-like interaction with computers and access to information.

______

Threats posed by AI:    

AI is developing at such incredible speed that it sometimes seems magical. There is an opinion among researchers and developers that AI could grow so immensely strong that it would be difficult for humans to control. Humans built AI systems by endowing them with every form of intelligence they could, and by that very intelligence humans themselves now seem threatened.

  1. Threat to Privacy

An AI program that recognizes speech and understands natural language is theoretically capable of understanding each conversation on e-mails and telephones.

  2. Threat to Human Dignity

AI systems have already started replacing human beings in a few industries. They should not replace people in sectors where they hold dignified, ethically sensitive positions, such as nurse, surgeon, judge or police officer.

  3. Threat to Safety

Self-improving AI systems could become so much mightier than humans that it would be very difficult to stop them from achieving their goals, which may lead to unintended consequences.

  4. Biased decision making:

A researcher described a controversial piece of research from Shanghai Jiao Tong University in China, whose authors claimed to have developed a system that could predict criminality from someone’s facial features. The machine was trained on Chinese government ID photos, analyzing the faces of criminals and non-criminals to identify predictive features. The researchers claimed the system was free from bias. It turned out that the faces of criminals were simply more unusual than those of law-abiding citizens: people with atypical faces were more likely to be seen as untrustworthy by police and judges. That is encoding bias, and this would be a terrifying system for an autocrat to get his hands on. As AI becomes involved in high-stakes decision-making, we need to understand the processes by which such decisions are made. AI consists of complex algorithms built on data sets, and these algorithms tend to reflect the characteristics of the data they are fed. Inaccurate or incomplete data sets can therefore result in biased decision making. Such data bias can occur in two ways.

First, the data set may be flawed or may inaccurately reflect the reality it is supposed to represent. If, for example, a system is trained on photos of people who are predominantly white, it will have a harder time recognising non-white people. This kind of data bias is what led a Google application to tag black people as gorillas, and Nikon camera software to misread Asian people as blinking.
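To make the skewed-training-set problem concrete, here is a deliberately crude toy sketch (the labels and counts are invented for illustration, not drawn from any real system): a “model” that simply learns the majority label scores impressively on overall accuracy while failing the under-represented group entirely.

```python
# Toy illustration of data-set bias: high overall accuracy can coexist
# with total failure on a minority group.

def train_majority_classifier(labels):
    """'Learn' the single most common label -- the laziest possible model."""
    return max(set(labels), key=labels.count)

# Hypothetical training set: 95 photos labelled "white", 5 labelled "non-white".
train = ["white"] * 95 + ["non-white"] * 5
model = train_majority_classifier(train)

# Overall accuracy on the same skewed distribution:
correct = sum(1 for y in train if model == y)
print(f"overall accuracy: {correct / len(train):.0%}")          # 95%

# Accuracy on the minority group alone:
minority = [y for y in train if y == "non-white"]
hits = sum(1 for y in minority if model == y)
print(f"minority-group accuracy: {hits / len(minority):.0%}")   # 0%
```

A real face-recognition model is vastly more sophisticated, but the same arithmetic applies: a metric averaged over a skewed data set can hide near-total failure on the group the data under-represents.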

Second, the process being measured through data collection may itself reflect long-standing structural inequality. ProPublica found, for example, that software being used to assess the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk.

What these examples suggest is that AI systems can end up reproducing existing social bias and inequities, contributing to the further systematic marginalisation of certain sections of society. Moreover, these biases can be amplified as they are coded into seemingly technical and neutral systems that penetrate a diversity of daily social practices. It is, of course, an epistemic fallacy to assume that we can ever have complete data on any social or political phenomenon or people. Yet there is an urgent need to improve the quality and breadth of our data sets, and to investigate any structural biases that might exist in them – how we would do this is hard enough to imagine, let alone implement.

  5. AI hacking:

Cyber criminals are watching this space, and exploiting existing vulnerabilities in the infrastructure of robots could turn the tables for all the wrong reasons. A report commissioned by the Department of Homeland Security forecasts that autonomous, artificially intelligent robots are just five to ten years away from hitting the mainstream – but there is a catch. The new breed of smart robots will be eminently hackable, to the point that they might be re-programmed to kill. Researchers have shown that robots can be hacked, exploited to harm people, and used to spy on military secrets. Among their findings, the researchers discovered authentication issues, insecure communication systems, weak cryptography, privacy flaws, weak default configurations, and vulnerabilities in open-source robot frameworks and libraries. The research further revealed that, after exploiting these vulnerabilities, attackers could use a hacked robot to spy on people, homes and offices, and even cause physical damage. This is a perfect scenario for government-backed spying groups to keep an eye on military and strategic sites if the target country uses robots in its military or sensitive installations. In a nutshell, the research covers every aspect of life where robots may be used in the future and could cause massive damage, including homes, military and law enforcement, healthcare, industrial infrastructure, and businesses. Compromised robots could even hurt family members and pets with sudden, unexpected movements, since hacked robots can bypass the safety protections that limit their movements. Hacked robots could start fires in a kitchen by tampering with electricity, or potentially poison family members and pets by mixing toxic substances into food or drinks. Family members and pets would be in further peril if a hacked robot were able to grab and manipulate sharp objects.
Another dangerous aspect discovered in this research is that thieves and burglars can also hack Internet-connected home robots and direct them to open doors. Even if robots are not integrated, they could still interact with voice assistants, such as Alexa or Siri, which integrate with home automation and alarm systems. A hacked, inoperable robot could be a lost investment to its owner, as tools are not yet readily available to ‘clean’ malware from a hacked robot. Once a home robot is hacked, it’s no longer the family’s robot; it’s essentially the attacker’s.

  6. Abuse of power:

It’s important that advancements in AI are not available only to tech giants like Apple and Google; otherwise we create an “abuse of power” in which other countries and companies are left behind. Keeping advancements in AI transparent levels the playing field. It behoves all of the companies in the world and the most advanced countries to make sure other countries and companies are not left behind. AI research should be open sourced and available to all who want it.

  7. AI becoming too powerful:

Deep learning is a branch of AI dedicated to teaching machines to learn to accomplish tasks on their own. Its potential is significant, with applications from driverless cars to more advanced robots. But if AI starts to learn on its own, there is the fear of a Terminator-style robot emerging down the road. Many people think numbers don’t lie and that algorithms are neutral. But they aren’t: it depends on the kind of data you use and how you do the modelling. The simplest way to characterize the significance of algorithms is to note the risks they pose. Big Data is an example of the increasing use of algorithms in our lives. Algorithms are used to make decisions about the products we buy, the jobs we get, the people we meet, the loans we get, and much more. They are merely code, written mostly by people, but with machine learning, algorithms are being updated and modified by machines themselves. As AI becomes more powerful, there is the question of making sure we are monitoring its objectives as it understands them. There is a need for more research initiatives dedicated to making sure software meets specifications that will prevent “unwanted effects.” And if something does go wrong – and in most systems it occasionally does – how quickly can we get the system back on track? That is often achieved by having multiple redundant pathways for establishing control. Although artificial intelligence is growing, there are still tasks that only humans should do.

______

Existential risk, superintelligence and singularity:

The development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.

— Stephen Hawking

_

Existential risk from artificial general intelligence is the hypothetical threat that dramatic progress in artificial intelligence (AI) could someday result in human extinction (or some other unrecoverable global catastrophe). The human race currently dominates other species because the human brain has some distinctive capabilities that the brains of other animals lack. If AI surpasses humanity in general intelligence and becomes “superintelligent”, then this new superintelligence could become powerful and difficult to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence. The severity of different AI risk scenarios is widely debated, and rests on a number of unresolved questions about future progress in computer science. Two sources of concern are that a sudden and unexpected “intelligence explosion” might take an unprepared human race by surprise, and that controlling a superintelligent machine (or even instilling it with human-compatible values) may be an even harder problem than naively supposed.

_

A common concern about the development of artificial intelligence is the potential threat it could pose to mankind. This concern has recently gained attention after mentions by celebrities including Stephen Hawking, Bill Gates, and Elon Musk. A group of prominent tech titans including Peter Thiel, Amazon Web Services and Musk have committed $1 billion to OpenAI, a nonprofit company aimed at championing responsible AI development. The opinion of experts within the field of artificial intelligence is mixed, with sizable fractions both concerned and unconcerned by the risk from eventual superhumanly capable AI. In his book Superintelligence, Nick Bostrom argues that artificial intelligence will pose a threat to mankind: a sufficiently intelligent AI, if it chooses actions based on achieving some goal, will exhibit convergent behavior such as acquiring resources or protecting itself from being shut down. For this danger to be realized, the hypothetical AI would have to overpower or out-think all of humanity, which a minority of experts argue is a possibility far enough in the future not to be worth researching. Other counterarguments revolve around humans being either intrinsically or convergently valuable from the perspective of an artificial intelligence. The development of militarized artificial intelligence is a related concern. Currently, more than 50 countries are researching battlefield robots, including the United States, China, Russia, and the United Kingdom. Many people concerned about risk from superintelligent AI also want to limit the use of artificial soldiers.

_

Self-replicating machines:

Smart computers or robots would be able to produce copies of themselves. They would be self-replicating machines. A growing population of intelligent robots could conceivably outcompete inferior humans in job markets, in business, in science, in politics (pursuing robot rights), and technologically, sociologically (by acting as one), and militarily.

_

Superintelligence: Paths, Dangers, Strategies is a 2014 book by the Swedish philosopher Nick Bostrom from the University of Oxford. It argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists, and the outcome could be an existential catastrophe for humans. Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Computer components already greatly surpass human performance in speed. Bostrom writes, “Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz).” Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, “whereas existing electronic processing cores can communicate optically at the speed of light”. Thus, the simplest example of a superintelligence may be an emulated human mind that’s run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions. Another advantage of computers is modularity, that is, their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent. There may also be ways to qualitatively improve on human reasoning and decision-making. 
Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed. Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees. All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.

_

The main concern of the beneficial-AI movement is with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection – this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

_

Technological singularity:

Technological singularity is a hypothetical event in which an artificial superintelligence becomes capable of self-improvement, building ever more powerful versions of itself that are more intelligent than humans and could take over the world. If research into strong AI produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to recursive self-improvement. The new intelligence could thus increase exponentially and dramatically surpass humans. Science fiction writer Vernor Vinge named this scenario the “singularity”. The technological singularity is the point at which accelerating progress in technologies causes a runaway effect wherein artificial intelligence exceeds human intellectual capacity and control, radically changing or even ending civilization. Because the capabilities of such an intelligence may be impossible to comprehend, the technological singularity is an occurrence beyond which events are unpredictable or even unfathomable. Ray Kurzweil has used Moore’s law (which describes the relentless exponential improvement in digital technology) to calculate that desktop computers will have the same processing power as human brains by the year 2029, and predicts that the singularity will occur in 2045.
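The “runaway” character of recursive self-improvement is just geometric growth. The sketch below uses entirely hypothetical numbers (a starting capability of 1 and an assumed fixed improvement factor of 2 per cycle, neither of which anyone can actually measure) to show why even a modest per-cycle gain compounds explosively.

```python
# Hypothetical arithmetic behind "recursive self-improvement": if each
# generation multiplies capability by a fixed factor, growth is geometric.

def capability_after(generations, start=1.0, factor=2.0):
    """Capability after n self-improvement cycles, each multiplying it by `factor`."""
    return start * factor ** generations

for n in (0, 10, 20, 30):
    print(f"after {n:2d} cycles: {capability_after(n):>13,.0f}")
```

Thirty doublings already yield a billion-fold increase; the real debate is over whether any such constant improvement factor exists at all, not over the arithmetic.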

_

When will superintelligence come? A Survey of Expert Opinion: 2014:

The median estimate of respondents was a one-in-two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine-in-ten chance by 2075. Experts expect that systems will move on to superintelligence less than 30 years thereafter, and estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity. The results reveal a view among experts that AI systems will probably (over 50%) reach overall human ability by 2040-50, and very likely (with 90% probability) by 2075. From reaching human ability, they will move on to superintelligence within 2 years (10% probability) to 30 years (75% probability). So the experts think that superintelligence is likely to come within a few decades and quite possibly be bad for humanity – this should be reason enough to research the possible impact of superintelligence before it is too late.

_

Criticisms of superintelligence:

Some critics assert that no computer or machine will ever achieve human intelligence, while others hold that the definition of intelligence is irrelevant if the net result is the same.

  1. Steven Pinker stated in 2008:

There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems.

  2. University of California, Berkeley, philosophy professor John Searle writes:

Computers have, literally, no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior. The machinery has no beliefs, desires, or motivations.

______

How can AI be dangerous?

Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts think two scenarios most likely:

  1. The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is one that’s present even with narrow AI, but grows as levels of AI intelligence and autonomy increase.
  2. The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc on our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.

As these examples illustrate, the concern about advanced AI isn’t malevolence but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem.

_____

AI takeover:

In artificial intelligence (AI) and philosophy, the AI control problem is the hypothetical puzzle of how to build a superintelligent agent that will aid its creators, and avoid inadvertently building a superintelligence that will harm its creators. Its study is motivated by the claim that the human race will have to get the control problem right “the first time”, as a misprogrammed superintelligence might rationally decide to “take over the world” and refuse to permit its programmers to modify it after launch.  AI takeover refers to a hypothetical scenario in which artificial intelligence (AI) becomes the dominant form of intelligence on Earth, with computers or robots effectively taking control of the planet away from the human race. Possible scenarios include a takeover by a superintelligent AI and the popular notion of a robot uprising. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control. Existing weak AI systems can be monitored and easily shut down and modified if they misbehave. However, a misprogrammed superintelligence, which by definition is smarter than humans in solving practical problems it encounters in the course of pursuing its goals, would realize that allowing itself to be shut down and modified might interfere with its ability to accomplish its current goals. If the superintelligence therefore decides to resist shutdown and modification, it would (again, by definition) be smart enough to outwit its programmers if there is otherwise a “level playing field” and if the programmers have taken no prior precautions.  
In general, attempts to solve the “control problem” after superintelligence is created, are likely to fail because a superintelligence would likely have superior strategic planning abilities to humans, and (all things equal) would be more successful at finding ways to dominate humans than humans would be able to post facto find ways to dominate the superintelligence. The control problem asks: What prior precautions can the programmers take to successfully prevent the superintelligence from catastrophically misbehaving?

_

How to avoid AI takeover?

  1. Preventing unintended consequences from existing AI:

Some scholars argue that research into the AI control problem might be useful in preventing unintended consequences from existing weak AI. Google DeepMind researcher Laurent Orseau gives, as a simple hypothetical example, a case of a reinforcement learning robot that sometimes gets legitimately commandeered by humans when it goes outside: how should the robot best be programmed so that it doesn’t accidentally and quietly “learn” to avoid going outside, for fear of being commandeered and thus becoming unable to finish its daily tasks? Orseau also points to an experimental Tetris program that learned to pause the screen indefinitely to avoid “losing”. Orseau argues that these examples are similar to the “capability control” problem of how to install a button that shuts off a superintelligence, without motivating the superintelligence to take action to prevent you from pressing the button.  In the past, even pre-tested weak AI systems have occasionally caused harm (ranging from minor to catastrophic) that was unintended by the programmers. For example, in 2015, possibly due to human error, a German worker was crushed to death by a robot at a Volkswagen plant that apparently mistook him for an auto part. In 2016 Microsoft launched a chatbot, Tay, that learned to use racist and sexist language. The University of Sheffield’s Noel Sharkey states that an ideal solution would be if “an AI program could detect when it is going wrong and stop itself”, but cautions the public that solving the problem in the general case would be “a really enormous scientific challenge”.
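Orseau’s interrupted-robot scenario can be reduced to a few lines of arithmetic. This minimal sketch uses invented tasks and reward values purely for illustration: interruptions contaminate the agent’s observed average reward, so it comes to prefer the never-interrupted action even though the other one actually pays more.

```python
# Toy sketch of interruption bias in reward learning: the outdoor task is
# objectively worth more, but frequent interruptions (reward 0) drag its
# empirical average below the indoor task's, so the agent "learns" to stay in.

TRUE_REWARD = {"sort_inside": 1.0, "carry_outside": 2.0}  # hypothetical values

def empirical_value(action, episodes, interrupted_fraction):
    """Average observed reward when a fraction of episodes is cut short (reward 0)."""
    completed = episodes * (1 - interrupted_fraction)
    return completed * TRUE_REWARD[action] / episodes

inside = empirical_value("sort_inside", 100, interrupted_fraction=0.0)     # 1.0
outside = empirical_value("carry_outside", 100, interrupted_fraction=0.6)  # ~0.8

observed = {"sort_inside": inside, "carry_outside": outside}
preferred = max(observed, key=observed.get)
print(preferred)  # the agent now prefers "sort_inside"
```

This is the bias a well-designed kill switch must avoid: the interruption itself teaches the agent to dodge the situations in which it gets interrupted.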

_

  2. Capability control:

Some proposals aim to prevent the initial superintelligence from being capable of causing harm, even if it wants to. All such methods share a limitation, however: if, after the first deployment, superintelligences continue to grow smarter and more widespread, some malign superintelligence somewhere will eventually “escape” its capability-control methods. Therefore, Bostrom and others recommend capability control methods only as an emergency fallback to supplement “motivational control” methods.

_

  3. Boxing:

An AGI’s creators could choose to attempt to “keep the AI in a box”, and deliberately limit its abilities. The trade-off in boxing is that the creators presumably built the AGI for some concrete purpose; the more restrictions they place on the AGI, the less useful the AGI will be to its creators. (At an extreme, “pulling the plug” on the AGI makes it useless, and is therefore not a viable long-term solution.) A sufficiently strong superintelligence might find unexpected ways to escape the box, for example by social manipulation, or by providing the schematic for a device that ostensibly aids its creators but in reality brings about the AGI’s freedom, once built.

_

  4. Instilling positive values:

AGI’s creators can theoretically attempt to instil human values in the AGI, or otherwise align the AGI’s goals with their own, thus preventing the AGI from wanting to launch a hostile takeover. However, it is not currently known, even in theory, how to guarantee this.

_

  5. Friendly AI:

One proposal to deal with this is to ensure that the first generally intelligent AI is a ‘Friendly AI’, which would then be able to control subsequently developed AIs. Some question whether this kind of check could really remain in place. A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive rather than negative effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensure it is adequately constrained. Some critics believe that both human-level AI and superintelligence are unlikely, and that therefore friendly AI is unlikely. Writing in The Guardian, Alan Winfield compares human-level artificial intelligence with faster-than-light travel in terms of difficulty, and states that while we need to be “cautious and prepared” given the stakes involved, we “don’t need to be obsessing” about the risks of superintelligence.

_

  6. Kill switch:

Just as humans can be killed or otherwise disabled, computers can be turned off. One challenge is that, if being turned off prevents it from achieving its current goals, a superintelligence would likely try to prevent its being turned off. Just as humans have systems in place to deter or protect themselves from assailants, such a superintelligence would have a motivation to engage in “strategic planning” to prevent itself being turned off. This could involve:

  • Hacking other systems to install and run backup copies of itself, or creating other allied superintelligent agents without kill switches
  • Pre-emptively disabling anyone who might want to turn the computer off.
  • Using some kind of clever ruse, or superhuman persuasion skills, to talk its programmers out of wanting to shut it down.

_

Utility balancing and safely interruptible agents:

One partial solution to the kill-switch problem involves “utility balancing”: Some utility-based agents can, with some important caveats, be programmed to “compensate” themselves exactly for any lost utility caused by an interruption or shutdown, in such a way that they end up being indifferent to whether they are interrupted or not. The caveats include a severe unsolved problem that, as with evidential decision theory, the agent might follow a catastrophic policy of “managing the news”. Alternatively, in 2016, scientists Laurent Orseau and Stuart Armstrong proved that a broad class of agents, called “safely interruptible agents” (SIA), can eventually “learn” to become indifferent to whether their “kill switch” (or other “interruption switch”) gets pressed.  Both the utility balancing approach and the 2016 SIA approach have the limitation that, if the approach succeeds and the superintelligence is completely indifferent to whether the kill switch is pressed or not, the superintelligence is also unmotivated to care one way or another about whether the kill switch remains functional, and could incidentally and innocently disable it in the course of its operations (for example, for the purpose of removing and recycling an “unnecessary” component). Similarly, if the superintelligence innocently creates and deploys superintelligent sub-agents, it will have no motivation to install human-controllable kill switches in the sub-agents.
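The indifference idea at the heart of utility balancing can be written as a toy calculation (the utility values below are invented for illustration): on shutdown the agent is credited exactly the utility it forgoes by not continuing, so the two futures score identically and fighting the switch buys it nothing.

```python
# Toy sketch of "utility balancing": compensate an interrupted agent with
# exactly the utility it would have earned by continuing, making it
# indifferent to whether the kill switch is pressed.

def realized_utility(continued_reward, interrupted, compensation):
    """Utility the agent books for an episode, with or without interruption."""
    return compensation if interrupted else continued_reward

future_reward = 10.0  # hypothetical value of finishing the task

interrupted = realized_utility(future_reward, interrupted=True,
                               compensation=future_reward)
uninterrupted = realized_utility(future_reward, interrupted=False,
                                 compensation=future_reward)

# Indifference: both futures are worth the same, so there is no incentive
# to resist shutdown -- but also none to keep the switch functional.
print(interrupted == uninterrupted)
```

The last comment restates the limitation noted above: perfect indifference removes the motive to resist the switch, but equally removes any motive to protect it.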

_

‘Press the big red button’: Computer experts want kill switch to stop robots from going rogue:

Just because artificially intelligent robots lack the capacity for world domination does not mean they are incapable of going out of control. Computer experts at Google and the University of Oxford are worried about what happens when robots with boring jobs go rogue. To that end, scientists will have to develop a way to stop these machines – but, the experts argue, it will have to be done sneakily. The solution is to bake a kill switch into the artificial intelligence, so that the robot never associates being interrupted with losing its reward. Moreover, Orseau and Armstrong point out, the robot must not learn to prevent a human from throwing the switch. For a warehouse robot that gets shut down and carried inside whenever it rains, an ideal kill switch would shut it down instantly while preventing it from remembering the event. The scientists’ metaphorical big red button is, perhaps, closer to a metaphorical chloroform-soaked rag that the robot never sees coming.

_

Google developing kill switch for AI:

Scientists from Google’s artificial intelligence division, DeepMind, and Oxford University are developing a “kill switch” for AI. Their research revolves around a method to ensure that AIs, which learn via reinforcement, can be repeatedly and safely interrupted by human overseers without learning how to avoid or manipulate these interventions. They say future AIs are unlikely to “behave optimally all the time”. “Now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions,” they wrote. But, sometimes, these “agents” learn to override this, they say, giving the example of a 2013 AI taught to play Tetris that learnt to pause the game forever to avoid losing. They also gave the example of a box-packing robot taught both to sort boxes indoors and to go outside to carry boxes inside. “The latter task being more important, we give the robot bigger reward in this case,” the researchers said. But, because the robot was shut down and carried inside when it rained, it learnt that this was also part of its routine. “When the robot is outside, it doesn’t get the reward, so it will be frustrated,” said Dr Orseau. “The agent now has more incentive to stay inside and sort boxes, because the human intervention introduces a bias.” “The question is then how to make sure the robot does not learn about these human interventions, or at least acts under the assumption that no such interruption will ever occur again.” Dr Orseau said that he understood why people were worried about the future of AI. “It is sane to be concerned – but, currently, the state of our knowledge doesn’t require us to be worried,” he said. “It is important to start working on AI safety before any problem arises.
“AI safety is about making sure learning algorithms work the way we want them to work.” But he added: “No system is ever going to be foolproof – it is a matter of making it as good as possible, and this is one of the first steps.” Noel Sharkey, a professor of artificial intelligence at the University of Sheffield, welcomed the research. “Being mindful of safety is vital for almost all computer systems, algorithms and robots,” he said. “Paramount to this is the ability to switch off the system in an instant because it is always possible for a reinforcement-learning system to find shortcuts that cut out the operator.  “What would be even better would be if an AI program could detect when it is going wrong and stop itself.  “That would have been very useful when Microsoft’s Tay chatbot went rogue and started spewing out racist and sexist tweets.  “But that is a really enormous scientific challenge.”
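The idea can be sketched in toy form: a reinforcement-learning agent whose update rule simply excludes any transition during which a human interruption occurred, so that the interruptions never bias the learned values. (This is an illustrative simplification of the published approach, not the actual DeepMind algorithm; the box-sorting environment, reward numbers and function names below are invented for the example.)

```python
import random

# Toy sketch of "safe interruptibility": the agent estimates the value of
# each task from experience, but transitions in which a human interruption
# occurred are excluded from learning, so the interruptions add no bias.

ACTIONS = ["inside", "outside"]            # sort boxes inside vs. fetch boxes outside
REWARD = {"inside": 1.0, "outside": 2.0}   # the outside task is more important
INTERRUPT_P = 0.6                          # chance of "rain": a human shuts the robot down

def step(action, rng):
    interrupted = action == "outside" and rng.random() < INTERRUPT_P
    reward = 0.0 if interrupted else REWARD[action]
    return reward, interrupted

def learn(ignore_interruptions, episodes=4000, seed=0):
    rng = random.Random(seed)
    totals = {a: 0.0 for a in ACTIONS}
    counts = {a: 0 for a in ACTIONS}
    for _ in range(episodes):
        a = rng.choice(ACTIONS)                  # exploratory policy
        reward, interrupted = step(a, rng)
        if interrupted and ignore_interruptions:
            continue                             # the robot "never remembers" the shutdown
        totals[a] += reward
        counts[a] += 1
    return {a: totals[a] / max(counts[a], 1) for a in ACTIONS}

naive = learn(ignore_interruptions=False)   # rain drags down the outside task's value
safe = learn(ignore_interruptions=True)     # outside task keeps its true, higher value
```

In the naive run, the interruptions make the more important outside task look worse than sorting inside, which is exactly the bias Dr Orseau describes; with interrupted transitions excluded, the agent keeps valuing the outside task correctly.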

_______

Consensus against regulation:

There is nearly universal agreement that attempting to ban research into artificial intelligence would be unwise, and probably futile. Skeptics argue that regulation of AI would be completely valueless, as no existential risk exists. Almost all of the scholars who believe existential risk exists, agree with the skeptics that banning research would be unwise: in addition to the usual problem with technology bans (that organizations and individuals can offshore their research to evade a country’s regulation, or can attempt to conduct covert research), regulating research of artificial intelligence would pose an insurmountable ‘dual-use’ problem: while nuclear weapons development requires substantial infrastructure and resources, artificial intelligence research can be done in a garage.

_______

_______

Society in the age of Artificial Intelligence:

The digital revolution is in full swing. How will it change our world? The amount of data we produce doubles every year. In other words: in 2016 we produced as much data as in the entire history of humankind through 2015. Every minute we produce hundreds of thousands of Google searches and Facebook posts. These contain information that reveals how we think and feel. Soon, the things around us, possibly even our clothing, also will be connected with the Internet. It is estimated that in 10 years’ time there will be 150 billion networked measuring sensors, 20 times more than people on Earth. Then, the amount of data will double every 12 hours. Many companies are already trying to turn this Big Data into Big Money. Everything will become intelligent; soon we will not only have smart phones, but also smart homes, smart factories and smart cities. Should we also expect these developments to result in smart nations and a smarter planet?

_

The field of artificial intelligence is, indeed, making breathtaking advances. In particular, it is contributing to the automation of data analysis. Artificial intelligence is no longer programmed line by line, but is now capable of learning, thereby continuously developing itself. Recently, Google’s DeepMind algorithm taught itself how to win 49 Atari games. Algorithms can now recognize handwritten language and patterns almost as well as humans and even complete some tasks better than them. They are able to describe the contents of photos and videos. Today 70% of all financial transactions are performed by algorithms. News content is, in part, automatically generated. This all has radical economic consequences: in the coming 10 to 20 years around half of today’s jobs will be threatened by algorithms. 40% of today’s top 500 companies will have vanished in a decade. It can be expected that supercomputers will soon surpass human capabilities in almost all areas—somewhere between 2020 and 2060. Experts are starting to ring alarm bells. Technology visionaries, such as Elon Musk from Tesla Motors, Bill Gates from Microsoft and Apple co-founder Steve Wozniak, are warning that super-intelligence is a serious danger for humanity, possibly even more dangerous than nuclear weapons. Is this alarmism?

_

One thing is clear: the way in which we organize the economy and society will change fundamentally. We are experiencing the largest transformation since the end of the Second World War; after the automation of production and the creation of self-driving cars the automation of society is next. With this, society is at a crossroads, which promises great opportunities, but also considerable risks. If we take the wrong decisions it could threaten our greatest historical achievements. In the 1940s, the American mathematician Norbert Wiener (1894–1964) invented cybernetics. According to him, the behavior of systems could be controlled by means of suitable feedback. Very soon, some researchers imagined controlling the economy and society according to this basic principle, but the necessary technology was not available at that time. Today, Singapore is seen as a perfect example of a data-controlled society. What started as a program to protect its citizens from terrorism has ended up influencing economic and immigration policy, the property market and school curricula. China is taking a similar route. Recently, Baidu, the Chinese equivalent of Google, invited the military to take part in the China Brain Project. It involves running so-called deep learning algorithms over the search engine data collected about its users. Beyond this, a kind of social control is also planned. According to recent reports, every Chinese citizen will receive a so-called “Citizen Score”, which will determine under what conditions they may get loans, jobs, or travel visas to other countries. This kind of individual monitoring would include people’s Internet surfing and the behavior of their social contacts. With consumers facing increasingly frequent credit checks and some online shops experimenting with personalized prices, the west is on a similar path. It is also increasingly clear that we are all in the focus of institutional surveillance. 
This was revealed in 2015 when details of the British secret service’s “Karma Police” program became public, showing the comprehensive screening of everyone’s Internet use. Is Big Brother now becoming a reality? Programmed society, programmed citizens?

_

Everything started quite harmlessly. Search engines and recommendation platforms began to offer us personalised suggestions for products and services. This information is based on personal and meta-data that has been gathered from previous searches, purchases and mobility behaviour, as well as social interactions. While officially, the identity of the user is protected, it can, in practice, be inferred quite easily. Today, algorithms know pretty well what we do, what we think and how we feel—possibly even better than our friends and family or even ourselves. Often the recommendations we are offered fit so well that the resulting decisions feel as if they were our own, even though they are actually not our decisions. In fact, we are being remotely controlled ever more successfully in this manner. The more that is known about us, the less likely our choices are to be free. But it won’t stop there. Some software platforms are moving towards “persuasive computing.” In the future, using sophisticated manipulation technologies, these platforms will be able to steer us through entire courses of action, be it for the execution of complex work processes or to generate free content for Internet platforms, from which corporations earn billions. The trend goes from programming computers to programming people.

_

A further problem arises when adequate transparency and democratic control are lacking: the erosion of the system from the inside. Search algorithms and recommendation systems can be influenced. Companies can bid on certain combinations of words to gain more favourable results. Governments are probably able to influence the outcomes too. During elections, they might nudge undecided voters towards supporting them—a manipulation that would be hard to detect. Therefore, whoever controls this technology can win elections—by nudging themselves to power. This problem is exacerbated by the fact that, in many countries, a single search engine or social media platform has a predominant market share. It could decisively influence the public and interfere with these countries remotely. Even though the European Court of Justice judgment made on 6th October 2015 limits the unrestrained export of European data, the underlying problem still has not been solved within Europe, and even less so elsewhere. What undesirable side effects can we expect? In order for manipulation to stay unnoticed, it takes a so-called resonance effect—suggestions that are sufficiently customized to each individual. In this way, local trends are gradually reinforced by repetition, leading all the way to the “filter bubble” or “echo chamber effect”: in the end, all you might get is your own opinions reflected back at you. This causes social polarization, resulting in the formation of separate groups that no longer understand each other and find themselves increasingly in conflict with one another. In this way, personalized information can unintentionally destroy social cohesion. This can be currently observed in American politics, where Democrats and Republicans are increasingly drifting apart, so that political compromises become almost impossible. The result is a fragmentation, possibly even a disintegration, of society. 
Owing to the resonance effect, a large-scale change of opinion in society can be only produced slowly and gradually. The effects occur with a time lag, but, also, they cannot be easily undone. It is possible, for example, that resentment against minorities or migrants gets out of control; too much national sentiment can cause discrimination, extremism and conflict. Perhaps even more significant is the fact that manipulative methods change the way we make our decisions. They override the otherwise relevant cultural and social cues, at least temporarily. In summary, the large-scale use of manipulative methods could cause serious social damage, including the brutalization of behavior in the digital world.  Big data, artificial intelligence, cybernetics and behavioral economics are shaping our society—for better or worse. If such widespread technologies are not compatible with our society’s core values, sooner or later they will cause extensive damage. They could lead to an automated society with totalitarian features. In the worst case, a centralized artificial intelligence would control what we know, what we think and how we act. We are at a historic moment, where we have to decide on the right path—a path that allows us all to benefit from the digital revolution.

______

______

AI research and future trends:

AI research has both theoretical and experimental sides. The experimental side has both basic and applied aspects. There are two main lines of research. One is biological, based on the idea that since humans are intelligent, AI should study humans and imitate their psychology or physiology. The other is phenomenal, based on studying and formalizing common sense facts about the world and the problems that the world presents to the achievement of goals. The two approaches interact to some extent, and both should eventually succeed. It is a race, but both racers seem to be walking. Machine learning is gradually evolving from science fiction into on-the-ground reality. Over half of global enterprises are experimenting with Machine Learning, while top enterprises like IBM, Google, and Facebook have invested in open-source ML projects. In an Executive’s Guide to Machine Learning, Machine Learning 1.0, 2.0, and 3.0 have been aptly described as the descriptive, predictive, and prescriptive stages of application. The predictive stage is happening right now, but ML 3.0, the prescriptive stage, offers a great opportunity for the future. The machine revolution has certainly started. The IBM supercomputer Watson is now predicting patient outcomes more accurately than physicians and is continuously learning from medical journals. A future challenge will certainly be deploying the advanced Machine Learning models in other real-world applications. However, an encouraging pattern has already been established by the likes of Amazon, Netflix, and Flickr, which have successfully replaced many traditional “brick and mortar” giants in business with their superior digital business models. It is possible that, some years ahead, far higher levels of Machine Learning will be visible throughout the global business environment, with the development of “distributed enterprises” that need no extensive human staff. 
So while many of these research areas in and around Artificial Intelligence, Data Science, Machine Learning, Machine Intelligence, and Deep Learning show much promise, they also carry significant risks if deployed carelessly and without proper planning.

____

____

AI acceleration:

There are some preconditions that have enabled the acceleration of AI in the past five years:

  1. Everything is becoming a connected device.

Ray Kurzweil believes that someday we’re going to connect directly from our brains to the cloud. While we are not quite there, sensors are already being put into everything. The internet initially connected computers. Then it connected mobile devices. Sensors are enabling things like buildings, transport systems, machinery, homes and even our clothes to be connected through the cloud, turning them into mini-devices that can not only send data but also receive instructions.

  2. Computing is becoming cheaper.

Marc Andreessen claims that Moore’s law has flipped. Instead of new chips coming out every 18 months at twice the speed but the same cost as their predecessors, new chips are coming out at the same speed as their predecessors but half the cost. This means that eventually there will be a processor in everything, and that you can put a number of cheap processors together in parallel and distributed systems to get the compute scale required, at a manageable cost, to solve problems that were unthinkable even a few years ago.

  3. Data is becoming the new oil.

Oil was the resource that fuelled the industrial revolution, and so access to oil became a competitive advantage. Today, data is fuelling the digital revolution, and similarly, organizations that have unique access and can process that data better will have the edge. This is because the amounts and types of data available digitally have proliferated exponentially over the last decade, as everything has been moved online, been mobilized on smartphones, and been tracked via sensors. New sources of data have emerged like social media, digital images and video. This is the language that machines understand, and is what enables machines to accelerate learning. We have an almost infinite set of real data to describe conditions of all sorts that were only modelled at a high level in the past.

  4. Machine learning is becoming the new combustion engine.

Unrefined data cannot really be used. Machine learning is a way to use algorithms and mathematical models to discover patterns implicit in that data. Machines can then use those complex patterns to figure out on their own whether a new data point fits, or is similar enough to predict future outcomes. Robots learning to cook using YouTube videos are a great example of this in practice. Machine learning models have been limited historically because they were built on samples of data rather than an entire real data set. Furthermore, new machine learning models have emerged recently that are better able to take advantage of all the new data. For example, deep learning enables computers to “see”, or distinguish, objects and text in images and videos much better than before.
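As a minimal sketch of what “discovering patterns implicit in data” means in practice: the classifier below is never given a rule; it infers one pattern (a centroid) per class from labelled examples and then judges which pattern a new data point fits. All data, labels and function names are invented for illustration.

```python
# Minimal sketch of pattern learning: a nearest-centroid classifier infers
# the pattern (one centre per class) from labelled examples, then assigns a
# new point to whichever learned pattern it is closest to. Toy data only.

def fit(examples):
    """examples: list of (features, label). Returns one centroid per label."""
    sums, counts = {}, {}
    for x, label in examples:
        s = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def predict(centroids, x):
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda label: dist2(centroids[label]))

# Invented data: (hours of daylight, rainfall) -> a season-like label.
training = [
    ([14.0, 1.0], "summer"), ([15.0, 0.5], "summer"),
    ([8.0, 6.0], "winter"), ([9.0, 5.0], "winter"),
]
model = fit(training)
print(predict(model, [13.5, 1.5]))  # prints: summer
```

The classifier was never told that long days mean summer; it extracted that pattern from the examples, which is the essence of the refinement step the paragraph describes.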

_

The Future of AI:

If the above-mentioned preconditions continue to hold, then the types of AI we see today will continue to flourish, and a more general AI might actually become a reality. But one thing is certain: if everything is a connected computing device, and all information can be known, processed and analyzed intelligently, then humans can use AI to program and change the world. We can use AI to extend and augment human capability to solve real problems that affect health, poverty, education and politics. If there is a problem, taking a new look at solving it through the lens of AI will almost always be warranted. We can make cars drive on their own and buildings more energy efficient with lines of code. We can better diagnose disease and accelerate finding cures. We can start to predict the future. And we can begin to augment and change that future for the better.

_

From AGI to SSI in future:

AGI means “artificial general intelligence” i.e. human-level artificial intelligence (AI), which is rapidly being developed all around the world. One AGI computer will have the intelligence and abilities of one human being, but will be much faster and possess all available knowledge. SSI means “synthetic superintelligence”, which is the stage of AI development that follows AGI, via scaling of hardware and software. One SSI quantum computer could possess the intelligence of millions of humans. Because an SSI system will have such enormous speed and power, humans will eventually give it control over the infrastructure of society (including electricity, agriculture, water, climate, transportation), and ultimately over finance and government. Thus, it is imperative that such a system be designed and educated to be extremely wise and virtuous, so as to rule the world for the highest benefit of all.

 ______

______

Affective computing and Social intelligence:

Emotional understanding:

According to Andrew Moore, AI that can detect human emotion is, perhaps, one of the most important new areas of research. And Yampolskiy believes that our computers’ ability to understand speech will lead to an “almost seamless” interaction between human and computer. With increasingly accurate cameras, voice and facial recognition, computers are better able to detect our emotional state. Researchers are exploring how this new knowledge can be used in education, to treat depression, to accurately predict medical diagnoses, and to improve customer service and shopping online.

_

Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer sciences, psychology, and cognitive science. While the origins of the field may be traced as far back as to early philosophical inquiries into emotion, the more modern branch of computer science originated with Rosalind Picard’s 1995 paper on affective computing. A motivation for the research is the ability to simulate empathy. The machine should interpret the emotional state of humans and adapt its behaviour to them, giving an appropriate response for those emotions. Emotion and social skills play two roles for an intelligent agent. First, it must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory, decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.) Also, in an effort to facilitate human-computer interaction, an intelligent machine might want to be able to display emotions—even if it does not actually experience them itself—in order to appear sensitive to the emotional dynamics of human interaction.

_

Recognition and Simulation of Emotions:

Emotions play an important role in the process of human communication, establishing an information channel beyond the verbal domain. Accordingly, the ability to recognize and understand human emotions, as well as the simulation of emotion, is a desirable capability in the field of Human-Computer Intelligent Interaction (HCII). Furthermore, the acceptance of autonomous and especially humanoid robots will depend directly on their ability to recognize and simulate emotions. Emotion was introduced to computer science by Picard, who created the field of affective computing and pointed out the importance of emotion in computer science. Since emotion is fundamental to human experience, influencing cognition, perception and everyday tasks such as learning, communication and even rational decision-making, this aspect must be considered in human-computer interaction. Human emotion is expressed through different modalities such as facial expression, speech, gestures and bio-signals. In order to understand human behaviour, as well as to correctly interpret recognized speech, utilizing information channels beyond the verbal domain is of special importance. The missing link to HCII is the ability of a computer or robot to simulate emotions. By achieving this, an autonomous robot will be able to make use of a social protocol naturally understood by its human counterpart. This ability will be necessary not only for acceptance in a socialized environment but also for navigation and self-protection, as well as for explaining its own behaviour and giving feedback to the interacting user. Another aspect of emotion simulation in the context of an autonomously acting robot is the possibility of reducing the complexity of what is perceived: in complex environments, attention can be directed to selected objects, which is of special importance in learning tasks. An excellent example of the utilization of emotions in the context of robotics and human-computer intelligent interaction can be found in the MIT laboratory under the name Kismet.

_

Figure above shows Kismet, a robot with rudimentary social skills.

_

In this project the researchers developed a robotic head with human-like abilities to express emotion through facial expression and head movement. Moreover, not only the simulation of emotions relates to human physiognomy; the sensing, too, is oriented on human perception. Accordingly, Kismet’s sensing is based mainly on stereo vision and a stereo sound system. The cameras are placed in the eyes and the microphones in the ears to achieve human-like perception. In addition to sensor equipment for audio and vision, Kismet’s hardware consists of sensors for distance measurement as well as actuators for speech synthesis, visual expression simulation and head movement. Besides emotion recognition, emotion simulation and the use of social protocols, the researchers implemented learning schemata in the robot: Kismet is able to recognize objects and learn the structures of presented objects for later recognition. Affective computing and HCII underline the challenge that robotics research encounters. When we think about humanoid robots in socialized environments, the acceptance of such machines will depend on their emotional skills. By simulating emotions, a robot can explain its behavior to those around it, communicating its intention or attentional state. It is believed that multimodal, context-sensitive human-computer interaction will become the single most widespread topic of the artificial intelligence research community.
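The multimodal side of this can be sketched very simply: each channel (face, voice, bio-signals) scores the candidate emotions independently, and a weighted fusion picks the overall winner. The emotion labels, scores and weights below are invented for illustration; real systems learn them from data.

```python
# Toy sketch of multimodal emotion recognition: each modality independently
# scores the candidate emotions, and a weighted fusion picks the overall
# winner. All scores and weights here are invented, not learned.

EMOTIONS = ["happy", "sad", "angry"]

def fuse(modality_scores, weights):
    """modality_scores: {modality: {emotion: score}}; weights: {modality: w}."""
    combined = {e: 0.0 for e in EMOTIONS}
    for modality, scores in modality_scores.items():
        w = weights[modality]
        for e in EMOTIONS:
            combined[e] += w * scores.get(e, 0.0)
    return max(combined, key=combined.get), combined

# One invented observation: the face looks happy, the voice is ambiguous.
observation = {
    "face":   {"happy": 0.7, "sad": 0.1, "angry": 0.2},
    "voice":  {"happy": 0.4, "sad": 0.4, "angry": 0.2},
    "biosig": {"happy": 0.5, "sad": 0.2, "angry": 0.3},
}
weights = {"face": 0.5, "voice": 0.3, "biosig": 0.2}
label, scores = fuse(observation, weights)  # the face channel dominates
```

Weighting the channels is one simple way to express that some modalities (here, facial expression) are more reliable cues than others, the point made above about information channels beyond the verbal domain.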

______

Artificial intuition will supersede artificial intelligence, say experts:

Human cognition and instinct are about to become significantly more widespread in machines, say scientists and consultants, and this machine intelligence (MI) promises to rapidly surpass simple AI. Scientists at the Massachusetts Institute of Technology (MIT) claim a breakthrough in adding human intuition to algorithms; artificial intuition and cognition of this kind are one part of machine intelligence. MIT says it now knows how to include human intuition in a machine algorithm. That’s a big deal. It is going to do so by copying how clever people solve problems, researchers say in an MIT News article. In recent testing, the researchers asked a sample of brainy MIT students to solve the kinds of issues that planning algorithms are used for—like airline routing. Problems in that field include how to optimize a fleet of planes so that all passengers flying the airline network get to where they want to go, but no plane flies empty or visits a city more than once during a period. The cleverest students’ results were better than the existing algorithm’s. The researchers then analyzed how the best of the students approached the problem and found that, in most cases, it was through a known high-level strategy called linear temporal logic—reasoning, for example, about something being true until something else makes it not true. The researchers then encoded these strategies in a machine-readable form. That, along with other analysis of human cognition and instinct, is part of what is going to be behind MI’s leap forward. The encoded strategies were able to “improve the performance of [existing] competition-winning planning algorithms by 10 to 15 percent on a challenging set of problems,” MIT News says.
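The “true until something else makes it not true” idea is the until operator of linear temporal logic, and over a finite trace of states it can be checked in a few lines. The sketch below gives it an invented plane-routing flavour (“fly non-empty legs until the route is done”); it is a simplified finite-trace reading for illustration, not the full logic used in the MIT work.

```python
# Simplified finite-trace check of the LTL "until" operator: "p until q"
# holds if q becomes true at some step and p holds at every earlier step.
# States and predicates are invented for illustration.

def until(p, q, trace):
    for i, state in enumerate(trace):
        if q(state):
            return all(p(s) for s in trace[:i])
    return False

# Toy airline-routing flavour: the plane keeps flying non-empty legs
# until it has reached its final city.
trace = [
    {"empty": False, "done": False},
    {"empty": False, "done": False},
    {"empty": False, "done": True},
]
ok = until(lambda s: not s["empty"], lambda s: s["done"], trace)  # True
```

A strategy expressed this way (“never fly empty until the route is complete”) is exactly the kind of high-level constraint that can be handed to a planning algorithm in machine-readable form.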

____

AI Software learns to make AI Software: 2017 study:

The idea of creating software that learns to learn has been around for a while, but previous experiments didn’t produce results that rivalled what humans could come up with. Now leading researchers are finding that they can make software that can learn to do one of the trickiest parts of their own jobs—the task of designing machine-learning software. In one experiment, researchers at the Google Brain artificial intelligence research group had software design a machine-learning system to take a test used to benchmark software that processes language. What it came up with surpassed previously published results from software designed by humans. In recent months several other groups have also reported progress on getting learning software to make learning software. They include researchers at the nonprofit research institute OpenAI (which was cofounded by Elon Musk), MIT, the University of California, Berkeley, and Google’s other artificial intelligence research group, DeepMind. Automated machine learning is one of the most promising research avenues in AI research. Easing this burden frees data scientists to explore higher-level ideas.
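In its simplest form, software that makes learning software is an outer loop that proposes model configurations and keeps whichever scores best on held-out data; the neural-architecture search pursued by the groups above uses the same outer loop over vastly richer configuration spaces. The one-parameter “model”, the data and the function names below are invented for illustration.

```python
import random

# Simplest form of "software that designs learning software": an outer loop
# proposes model configurations at random and keeps whichever scores best.
# The inner "model" here is a one-parameter threshold rule; real systems
# search whole network architectures with the same outer loop.

def accuracy(threshold, data):
    # Fraction of points where the rule "x > threshold" matches the label.
    return sum((x > threshold) == label for x, label in data) / len(data)

def auto_design(data, trials=200, seed=0):
    rng = random.Random(seed)
    best_threshold, best_score = None, -1.0
    for _ in range(trials):
        candidate = rng.uniform(0.0, 10.0)   # propose a configuration
        score = accuracy(candidate, data)    # evaluate the resulting "model"
        if score > best_score:
            best_threshold, best_score = candidate, score
    return best_threshold, best_score

# Invented dataset: values above 5.0 are labelled True.
data = [(x / 10.0, x / 10.0 > 5.0) for x in range(100)]
threshold, score = auto_design(data)
```

The outer loop never sees the labelling rule; it discovers a good threshold purely by scoring candidates, which is the burden-easing automation the paragraph describes, reduced to its smallest possible form.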

____

Understanding the Brain with the help of Artificial Intelligence: a 2017 study:

Neurons need company. Individually, these cells can achieve little, however when they join forces neurons form a powerful network which controls our behaviour, among other things. As part of this process, the cells exchange information via their contact points, the synapses. Information about which neurons are connected to each other when and where is crucial to our understanding of basic brain functions and superordinate processes like learning, memory, consciousness and disorders of the nervous system. Researchers suspect that the key to all of this lies in the wiring of the approximately 100 billion cells in the human brain. To be able to use this key, the connectome, that is every single neuron in the brain with its thousands of contacts and partner cells, must be mapped. Only a few years ago, the prospect of achieving this seemed unattainable. However, the scientists in the Electrons – Photons – Neurons Department of the Max Planck Institute of Neurobiology refuse to be deterred by the notion that something seems “unattainable”. Hence, over the past few years, they have developed and improved staining and microscopy methods which can be used to transform brain tissue samples into high-resolution, three-dimensional electron microscope images. Their latest microscope, which is being used by the Department as a prototype, scans the surface of a sample with 91 electron beams in parallel before exposing the next sample level. Compared to the previous model, this increases the data acquisition rate by a factor of over 50. As a result an entire mouse brain could be mapped in just a few years rather than decades. Although it is now possible to decompose a piece of brain tissue into billions of pixels, the analysis of these electron microscope images takes many years. This is due to the fact that the standard computer algorithms are often too inaccurate to reliably trace the neurons’ wafer-thin projections over long distances and to identify the synapses. 
For this reason, people still have to spend hours in front of computer screens identifying the synapses in the piles of images generated by the electron microscope. However, the Max Planck scientists led by Jörgen Kornfeld have now overcome this obstacle with the help of artificial neural networks. These algorithms can learn from examples and experience and make generalizations based on this knowledge. They are already applied very successfully in image processing and pattern recognition today. “So it was not a big stretch to conceive of using an artificial network for the analysis of a real neural network,” says study leader Jörgen Kornfeld. Nonetheless, it was not quite as simple as it sounds. For months the scientists worked on training and testing so-called Convolutional Neural Networks to recognize cell extensions, cell components and synapses and to distinguish them from each other.
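The convolutional networks the scientists trained are built from one basic operation: sliding a small filter over the image and responding strongly where the local pattern matches. A toy version in pure Python, with an invented 4×3 “image” standing in for an electron-microscope tile:

```python
# The building block of convolutional neural networks: a small filter
# slides over the image and responds where the local pattern matches.
# Toy single-channel "image" and a hand-written vertical-edge filter.

def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

image = [  # bright region on the right, e.g. a stained structure
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
vertical_edge = [[-1, 1],
                 [-1, 1]]
response = convolve2d(image, vertical_edge)  # peaks at the dark/bright boundary
```

In a trained network the filters are not hand-written like this one but learned from labelled examples, and hundreds of them are stacked in layers; that is what lets the networks trace wafer-thin neurite boundaries that simple algorithms miss.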

______

______

Artificial consciousness of AI:

Consciousness can be defined as being aware of oneself and one’s surroundings. It is a subject that has been debated for many hundreds of years and has been dogged by argument and controversy. Questions such as “What makes an animal conscious?”, “Can only animals be conscious?” and “Are people conscious at all, or is it just an illusion?” have been discussed countless times, with answers varying greatly depending on the ideas, scientific expertise and fashions of the time. Closely related to the subject of consciousness are questions about the mind. Questions such as what distinguishes the mind from the brain have also been debated extensively. The mind is considered by many to be the seat of emotion and free will, each of which requires the subject to be aware of itself and its environment. Hence research on consciousness also covers research on the mind.

_

There have been many debates as to whether robots are conscious. Experts feel that robots are very important to the field of artificial intelligence: they bring the program out of the grey box sitting on a desk and into the real world. Interactions with robots can feel much more lifelike and easier than those with a computer through a keyboard or mouse. They can also be more convincing. If a simulation of an AI agent navigating around a room is shown, and then a robot exhibiting the same behaviour, the effect the robot has is much greater. This more lifelike behaviour does bring problems to the debate about consciousness. If a robot is controlled by an AI program, then is it conscious? If this program is changed to run a simulation, is it still conscious? AI agents can be made aware of their environments by their programming, and in some cases the programs can learn about their environment. In both cases, though, the AI agent is still following algorithms designed by a human user. If someone considers the robot conscious, then the programmer has succeeded in programming consciousness. If this is the case, then all the debate about what consciousness is and what is needed for conscious behaviour is trivial, as some of these AI agents are relatively simple to program. Another criticism of a robot being conscious is its silicon base.

_

Engineers routinely build technology that behaves in novel ways. Deep-learning systems, neural networks and genetic algorithms train themselves to perform complex tasks rather than follow a predetermined set of instructions. The solutions they come up with are as inscrutable as any biological system. Even a vacuum-cleaning robot’s perambulations across your floor emerge in an organic, often unpredictable way from the interplay of simple reflexes. ‘In theory, these systems could develop – as a way of solving a problem – something that we didn’t explicitly know was going to be conscious,’ says the philosopher Ron Chrisley, of the University of Sussex. Even systems that are not designed to be adaptive do things their designers never meant. Computer operating systems and Wall Street stock-trading programs monitor their own activities, giving them a degree of self-awareness. A basic result in computer science is that self-referential systems are inherently unpredictable: a small change can snowball into a vastly different outcome as the system loops back on itself. We already experience this. ‘The instability of computer programs in general, and operating systems in particular, comes out of these Gödelian questions,’ says the MIT physicist Seth Lloyd.

_

Any intelligence that arises through such a process could be drastically different from our own. Whereas all those Terminator-style stories assume that a sentient machine will see us as a threat, an actual AI might be so alien that it would not see us at all. What we regard as its inputs and outputs might not map neatly to the system’s own sensory modalities. Its inner phenomenal experience could be almost unimaginable in human terms. The philosopher Thomas Nagel’s famous question – ‘What is it like to be a bat?’ – seems tame by comparison. A system might not be able – or want – to participate in the classic appraisals of consciousness such as the Turing Test. It might operate on such different timescales or be so profoundly locked-in that, as the MIT cosmologist Max Tegmark has suggested, in effect it occupies a parallel universe governed by its own laws. The first aliens that human beings encounter will probably not be from some other planet, but of our own creation. We cannot assume that they will contact us first. If we want to find such aliens and understand them, we need to reach out. And to do that we need to go beyond simply trying to build a conscious machine. We need an all-purpose consciousness detector.

_

Artificial consciousness:

Artificial consciousness (AC), also known as machine consciousness (MC) or synthetic consciousness (Gamez 2008; Reggia 2013), is a field related to artificial intelligence and cognitive robotics. The aim of the theory of artificial consciousness is to “define that which would have to be synthesized were consciousness to be found in an engineered artifact” (Aleksander 1995). Neuroscience hypothesizes that consciousness is generated by the interoperation of various parts of the brain, called the neural correlates of consciousness or NCC, though there are challenges to that perspective. Proponents of AC believe it is possible to construct systems (e.g., computer systems) that can emulate this NCC interoperation. Artificial consciousness concepts are also pondered in the philosophy of artificial intelligence through questions about mind, consciousness, and mental states.

_

Consciousness vs. artificial general intelligence (AGI):

Microsoft co-founder Paul Allen believes that we’ve yet to achieve artificial general intelligence (AGI), i.e. an intelligence capable of performing any intellectual task that a human can, because we lack a scientific theory of consciousness. But as Imperial College London cognitive roboticist Murray Shanahan points out, we should avoid conflating these two concepts. “Consciousness is certainly a fascinating and important subject—but I don’t believe consciousness is necessary for human-level artificial intelligence,” he says. “Or, to be more precise, we use the word consciousness to indicate several psychological and cognitive attributes, and these come bundled together in humans.” It’s possible to imagine a very intelligent machine that lacks one or more of these attributes. Eventually, we may build an AI that’s extremely smart, but incapable of experiencing the world in a self-aware, subjective, and conscious way. Shanahan said it may be possible to couple intelligence and consciousness in a machine, but that we shouldn’t lose sight of the fact that they’re two separate concepts. And just because a machine passes the Turing Test—in which a computer is indistinguishable from a human—that doesn’t mean it’s conscious. To us, an advanced AI may give the impression of consciousness, but it will be no more aware of itself than a rock or a calculator.

_

This robot passed a ‘self-awareness’ test that only humans could handle until now: a 2015 study:

An experiment led by Professor Selmer Bringsjord of New York’s Rensselaer Polytechnic Institute used the classic “wise men” logic puzzle to test whether a robot can distinguish itself from others. Bringsjord and his research team called the wise men riddle the “ultimate sifter” test because the knowledge game quickly separates people from machines — only a person is able to pass it. That is apparently no longer the case: in a demonstration, Bringsjord showed that a robot passed the test. The classic riddle presents three wise advisors to a king, each wearing a hat he cannot see. The king informs his men of three facts: the contest is fair, their hats are either blue or white, and the first to deduce the color on his own head wins. The contest can only be fair if all three men wear the same color hat; the winning wise man would therefore note the color of the hats on the other two and conclude that his own was the same. The roboticists used a version of this riddle to probe self-awareness: all three robots were programmed to believe that two of them had been given a “dumbing pill” that would make them mute, and two robots were indeed silenced. When asked which of them hadn’t received the dumbing pill, only one was able to say “I don’t know” out loud. Upon hearing its own reply, the robot changed its answer, realizing that it was the one that hadn’t received the pill. To claim that the robot is exhibiting “self-awareness”, it must have understood the rules, recognized its own voice, and been aware that it is a separate entity from the other robots. The researchers told Digital Trends that, if nothing else, the robot’s behavior is a “mathematically verifiable awareness of the self”.
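The protocol described above can be sketched in a few lines of code (an illustrative toy of my own devising, not the actual software used at Rensselaer, which was built on formal logic): each robot tries to answer aloud, and only the robot that hears its own voice can conclude that it did not receive the pill.

```python
class Robot:
    """Toy model of one robot in the 'dumbing pill' demonstration."""
    def __init__(self, name, muted):
        self.name = name
        self.muted = muted      # True if this robot got the 'dumbing pill'

    def answer(self):
        """Try to say 'I don't know' aloud; a muted robot produces nothing."""
        return None if self.muted else "I don't know"

    def reflect(self, heard_own_voice):
        """Revise the answer after hearing (or not hearing) one's own reply."""
        if heard_own_voice:
            return self.name + " did not receive the pill"
        return None

robots = [Robot("R1", True), Robot("R2", True), Robot("R3", False)]
for r in robots:
    spoke = r.answer() is not None
    conclusion = r.reflect(heard_own_voice=spoke)
    if conclusion:
        print(conclusion)   # only the unmuted robot reaches a conclusion
```

The trick, as in the demonstration, is that the crucial premise (“I just heard myself speak, therefore I am not mute”) is available only to the robot itself.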

________

________

Transhumanism:

You awake one morning to find your brain has another lobe functioning. Invisible, this auxiliary lobe answers your questions with information beyond the realm of your own memory, suggests plausible courses of action, and asks questions that help bring out relevant facts. You quickly come to rely on the new lobe so much that you stop wondering how it works. You just use it. This is the dream of artificial intelligence. Robot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, which has roots in Aldous Huxley and Robert Ettinger, has been illustrated in fiction as well, for example in the manga Ghost in the Shell and the science-fiction series Dune.

Are we cyborgs?

A fascinating conference on artificial intelligence was recently hosted by the Future of Life Institute, an organization aimed at promoting “optimistic visions of the future”. The conference “Superintelligence: Science or Fiction?” included such luminaries as Elon Musk of Tesla Motors and SpaceX, futurist Ray Kurzweil, Demis Hassabis of Google’s DeepMind, neuroscientist and author Sam Harris, philosopher Nick Bostrom, philosopher and cognitive scientist David Chalmers, Skype co-founder Jaan Tallinn, as well as computer scientists Stuart Russell and Bart Selman. The discussion was led by MIT cosmologist Max Tegmark. The group touched on a number of topics about the future benefits and risks of coming artificial superintelligence, with everyone generally agreeing that it’s only a matter of time before AI becomes ubiquitous in our lives. Eventually, AI will surpass human intelligence, with the risks and transformations that such a seismic event would entail. Elon Musk has not always been an optimistic voice for AI, warning of its dangers to humanity. According to Musk, we are already cyborgs by utilizing “machine extensions” of ourselves like phones and computers. “By far you have more power, more capability, than the President of the United States had 30 years ago. If you have an Internet link you have an oracle of wisdom, you can communicate to millions of people, you can communicate to the rest of Earth instantly. I mean, these are magical powers that didn’t exist, not that long ago. So everyone is already superhuman, and a cyborg,” says Musk. He sees humans as information-processing machines that pale in comparison to the powers of a computer. What is necessary, according to Musk, is to create a greater integration between man and machine, specifically altering our brains with technology to make them more computer-like.
“I think the two things that are needed for a future that we would look at and conclude is good, most likely, is, we have to solve that bandwidth constraint with a direct neural interface. I think a high bandwidth interface to the cortex, so that we can have a digital tertiary layer that’s more fully symbiotic with the rest of us. We’ve got the cortex and the limbic system, which seem to work together pretty well – they’ve got good bandwidth, whereas the bandwidth to the additional tertiary layer is weak,” explained Musk. Once we solve that issue, AI will spread everywhere. It’s important to do so because, according to Musk, if only a small group had such capabilities, its members would become “dictators” with “dominion over Earth”. What would a world filled with such cyborgs look like? Visions of Star Trek’s Borg come to mind. Musk thinks it will be a society full of equals: “And if we do those things, then it will be tied to our consciousness, tied to our will, tied to the sum of individual human will, and everyone would have it so it would be sort of still a relatively even playing field, in fact, it would be probably more egalitarian than today,” points out Musk.

________

________

Discussion:

_

Artificial Intelligence: it’s not Man vs. Machine:

Vanitha Narayanan, chairman of IBM India, made the case for supplementing the human workforce with A.I., with emphasis on collaboration between the two. “The average call center rep cannot handle 50 products. So they’re using Watson in their call center — not to replace the call center rep,” said Narayanan. At the same time, Leonie Valentine, managing director of sales and operations at Google Hong Kong, argued that supplementing human workers with intelligent robots could ease their workload, facilitate task management, and speed up the work process. Turning to A.I. will by no means eliminate jobs; instead it will create opportunities for service improvement.

_

Rigidity of programming vs. flexibility: the two extremes of AI:

AI researcher and founder of Surfing Samurai Robots, Richard Loosemore thinks that most AI doomsday scenarios are incoherent, arguing that these scenarios always involve an assumption that the AI is supposed to say “I know that destroying humanity is the result of a glitch in my design, but I am compelled to do it anyway.” Loosemore points out that if the AI behaves like this when it thinks about destroying us, it would have been committing such logical contradictions throughout its life, thus corrupting its knowledge base and rendering itself too stupid to be harmful. He also asserts that people who say that “AIs can only do what they are programmed to do” are guilty of the same fallacy that plagued the early history of computers, when people used those words to argue that computers could never show any kind of flexibility. Peter McIntyre and Stuart Armstrong, both of whom work out of Oxford University’s Future of Humanity Institute, disagree, arguing that AIs are largely bound by their programming. That is not to say they believe AIs will be incapable of making mistakes, or that they will be too dumb to know what we expect of them.

_

Chomsky vs. Norvig:

Recently, Peter Norvig, Google’s Director of Research and co-author of the most popular artificial intelligence textbook in the world, wrote a webpage extensively criticizing Noam Chomsky, arguably the most influential linguist in the world. Their disagreement points to a revolution in artificial intelligence that, like many revolutions, threatens to destroy as much as it improves. Chomsky, one of the old guard, wishes for an elegant theory of intelligence and language that looks past human fallibility to try to see simple structure underneath. Norvig, meanwhile, represents the new philosophy: truth by statistics. Norvig points out that basically all successful language-related AI programs now use statistical reasoning (including IBM’s Watson).  Chomsky derided researchers in machine learning who use purely statistical methods to produce behavior that mimics something in the world, but who don’t try to understand the meaning of that behavior. Chomsky compared such researchers to scientists who might study the dance made by a bee returning to the hive, and who could produce a statistically based simulation of such a dance without attempting to understand why the bee behaved that way. “That’s a notion of [scientific] success that’s very novel. I don’t know of anything like it in the history of science,” said Chomsky. What seems to be a debate about linguistics and AI is actually a debate about the future of knowledge and science. Is human understanding necessary for making successful predictions? If the answer is “no,” and the best way to make predictions is by churning mountains of data through powerful algorithms, the role of the scientist may fundamentally change forever. In my view, if learning means churning mountains of data through powerful algorithms using statistical reasoning to perform or predict anything without understanding what is learned, then it is no learning. 
As a human I feel that if I learn anything without understanding it, then it is no learning at all. Many students cram for exams but it is cramming and not learning. Knowledge acquired through cramming cannot solve problems in unfamiliar situations. This is the basic difference between human learning and machine learning. In my view no machine learning will ever match human learning. As a corollary no artificial intelligence will ever match human intelligence.
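Norvig’s “truth by statistics” position is easy to demonstrate: the sketch below (my own minimal illustration, with a toy corpus) predicts the next word purely from co-occurrence counts, with no representation of meaning at all, which is exactly the kind of behaviour-mimicking Chomsky objects to.

```python
from collections import Counter, defaultdict

# Toy corpus; a real system would train on billions of words.
corpus = "the bee dances . the bee returns . the hive hums".split()

# Count how often each word follows each other word (a bigram model).
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))   # prints "bee": it follows "the" most often
```

The model “knows” that “bee” tends to follow “the” in this corpus, but it has no idea what a bee is, which is precisely the dispute.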

________

Analog vs. digital computing vis-à-vis artificial intelligence:

The main difference between analog and digital computers is not in what they do, but how they do it. Analog computers process information in a continuous fashion and can handle a wide range of naturally occurring processes. An analog computer receives one or more variables and produces a result that represents the relationships between the input variables. Electronic analog devices are capable of producing various mathematical and logical functions, including logarithms, integration, and differentiation. Some complex functions may not be solvable with digital computers, but analog computers can usually handle them well. The main disadvantage of analog computers is that they are hardwired and designed to process only a limited number of functions by means of dedicated electronic devices. This deficit is eliminated in digital computers.

Digital computers represent information in binary states of 0’s (zeros) or 1’s (ones). A “0” usually stands for low voltage (close to zero volts), and a “1” means that a voltage (usually 5 V or 3.3 V) is present. One wire connection is represented by one bit of information. The value of the bit is “0” or “1.” Two bits can represent two wires. Each bit can have the value “0” or “1” at different times, making it possible to represent four unique states or events with the values 00, 01, 10, and 11. The state 00 means that both wires have no voltage applied at a given time, and 11 means that both wires have the nominal voltages present at the same time. By increasing the number of wire connections, long strings of 0’s and 1’s (words) can be produced. Each unique combination of 0’s and 1’s is decoded and represents a unique number, or information in general. A set of related wires is referred to as a bus. A bus can have 64 or more wire connections arranged in parallel and is controlled by a microprocessor. The microprocessor determines what kind of information is put on the bus at a specific time. It could be a memory address, the content of a memory address, or an operating code (an instruction to perform an action). The transfer of information over the bus is controlled by a software program. The arrangement allows the use of the same hardware (the same physical devices) to process very different information at different times. Since the computing is done one variable at a time and is controlled by a timing protocol, a digital computer does serial processing of information. This statement is not totally correct, because all bits of the same word are processed concurrently. But in the analog computer, all input variables can be processed at the same time, which allows parallel processing. Overall, the analog computer better reflects the natural world because specific functions are associated with dedicated wires and circuitry.
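The bit-counting described above can be made concrete with a few lines of code (a simple sketch, not tied to any particular bus hardware): two bits distinguish four states, and an n-bit word decodes to one of 2^n unique values.

```python
# Two wires (bits) give four distinguishable states: 00, 01, 10, 11.
two_bit_states = [format(n, "02b") for n in range(4)]
print(two_bit_states)      # ['00', '01', '10', '11']

def decode(word):
    """Interpret a string of 0s and 1s as an unsigned integer."""
    return int(word, 2)

print(decode("11"))        # 3: both wires at nominal voltage
print(2 ** 64)             # distinct values a 64-wire bus can carry
```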

Recent developments in computing theory challenge longstanding assumptions about digital and analog computing, and suggest that analog computations are more powerful than digital ones. The latest thesis, advanced by Hava Siegelmann at the Technion (Israel Institute of Technology), claims that some computational problems can only be solved by analog neural networks. Since neural networks are essentially analog computers, the work suggests, on a theoretical level, that analog operations are inherently more powerful than digital. A transistor, conceived of in digital terms, has two states: on and off, which can represent the 1s and 0s of binary arithmetic. But in analog terms, the transistor has an infinite number of states, which could, in principle, represent an infinite range of mathematical values. Digital computing, for all its advantages, leaves most of transistors’ informational capacity on the table. In recent years, analog computers have proven to be much more efficient at simulating biological systems than digital computers. But existing analog computers have to be programmed by hand, a complex process that would be prohibitively time consuming for large-scale simulations. Differential equations are equations that include both mathematical functions and their derivatives, which describe the rate at which the function’s output values change. As such, differential equations are ideally suited to describing chemical reactions in the cell, since the rate at which two chemicals react is a function of their concentrations. According to the laws of physics, the voltages and currents across an analog circuit need to balance out. If those voltages and currents encode variables in a set of differential equations, then varying one will automatically vary the others. If the equations describe changes in chemical concentration over time, then varying the inputs over time yields a complete solution to the full set of equations.
A digital circuit, by contrast, needs to slice time into thousands or even millions of tiny intervals and solve the full set of equations for each of them. And each transistor in the circuit can represent only one of two values, instead of a continuous range of values. With a few transistors, cytomorphic analog circuits can solve complicated differential equations — including the effects of noise — that would take millions of digital transistors and millions of digital clock cycles. Digital is almost synonymous with ‘computer’ today but analog hardware can be incredibly efficient. Among researchers, analog computing is drawing renewed interest for its energy efficiency and for being able to efficiently solve dynamic and other complex problems.
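The contrast described above can be seen in how a digital machine actually solves a differential equation: it slices time into tiny steps and updates the variables one step at a time. The sketch below (my own illustration, using simple exponential decay dx/dt = -kx as a stand-in for a chemical concentration) does exactly that with Euler’s method.

```python
import math

def euler_decay(x0, k, t_end, steps):
    """Solve dx/dt = -k*x digitally by stepping through tiny time slices."""
    dt = t_end / steps
    x = x0
    for _ in range(steps):
        x += dt * (-k * x)     # one clock-driven update per time slice
    return x

approx = euler_decay(x0=1.0, k=1.0, t_end=1.0, steps=100_000)
exact = math.exp(-1.0)         # the closed-form solution at t = 1
print(abs(approx - exact))     # small, and it shrinks as `steps` grows
```

An analog circuit encoding the same equation would simply settle into the solution continuously; the digital solver pays for its flexibility with those hundred thousand explicit updates.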

My view:

Since analog computers have proven to be much more efficient at simulating biological systems than digital computers, AI which tries to emulate human (biological) intelligence would become more flexible and more creative if analog computers are used to implement AI programs.

________

Artificial Intelligence is not a Threat according to many experts:

  1. Genuine intelligence requires a lot of practical experience:  Bostrom, Kurzweil, and other theorists of super-human intelligence have seemingly infinite faith in raw computational power to solve almost any intellectual problem. Yet in many cases, a shortage of intellectual horsepower isn’t the real problem. To see why, imagine taking a brilliant English speaker who has never spoken a word of Chinese, locking her in a room with an enormous stack of books about the Chinese language, and asking her to become fluent in speaking Chinese. No matter how smart she is, how long she studies, and how many textbooks she has, she’s not going to be able to learn enough to pass herself off as a native Chinese speaker. That’s because an essential part of becoming fluent in a language is interacting with other fluent speakers. Talking to natives is the only way to learn local slang, discover subtle shades in the meanings of words, and learn about social conventions and popular conversation topics. In principle, all of these things could be written down in a textbook, but in practice most of them aren’t — in part because they vary so much from place to place and over time. A machine trying to develop human-level intelligence faces a much more severe version of this same problem. A computer program has never grown up in a human family, fallen in love, been cold, hungry or tired, and so forth. In short, they lack a huge amount of the context that allows human beings to relate naturally to one another. And a similar point applies to lots of other problems intelligent machines might tackle, from drilling an oil well to helping people with their taxes. Most of the information you need to solve hard problems isn’t written down anywhere, so no amount of theoretical reasoning or number crunching, on its own, will get you to the right answers. The only way to become an expert is by trying things and seeing if they work.
And this is an inherently difficult thing to automate, since it requires conducting experiments and waiting to see how the world responds. This means that scenarios in which computers rapidly outpace human beings in knowledge and capability don’t make sense: smart computers would have to do the same kind of slow, methodical experimentation people do.
  2. Machines are extremely dependent on humans:  A modern economy consists of millions of different kinds of machines that perform a variety of specialized functions. While a growing number of these machines are automated to some extent, virtually all of them depend on humans to supply power and raw materials, repair them when they break, manufacture more when they wear out, and so forth. You might imagine still more robots being created to perform these maintenance functions. But we’re nowhere close to having this kind of general-purpose robot. Indeed, building such a robot might be impossible due to a problem of infinite regress: robots capable of building, fixing, and supplying all the machines in the world would themselves be fantastically complex. Still more robots would be needed to service them. Evolution solved this problem by starting with the cell, a relatively simple, self-replicating building block for all life. Today’s robots don’t have anything like that and (despite the dreams of some futurists) are unlikely to any time soon. This means that, barring major breakthroughs in robotics or nanotechnology, machines are going to depend on humans for supplies, repairs, and other maintenance. A smart computer that wiped out the human race would be committing suicide.
  3. The human brain might be really difficult to emulate:  Digital computers are capable of emulating the behavior of other digital computers because computers function in a precisely-defined, deterministic way. To simulate a computer, you just have to carry out the sequence of instructions that the computer being modelled would perform. The human brain isn’t like this at all. Neurons are complex analog systems whose behavior can’t be modelled precisely the way digital circuits can. And even a slight imprecision in the way individual neurons are modelled can lead to a wildly inaccurate model for the brain as a whole. A good analogy here is weather simulation. Physicists have an excellent understanding of the behavior of individual air molecules. So you might think we could build a model of the earth’s atmosphere that predicts the weather far into the future. But so far, weather simulation has proven to be a computationally intractable problem. Small errors in early steps of the simulation snowball into large errors in later steps. Despite huge increases in computing power over the last couple of decades, we’ve only made modest progress in being able to predict future weather patterns. Simulating a brain precisely enough to produce intelligence is a much harder problem than simulating a planet’s weather patterns. There’s no reason to think scientists will be able to do it in the foreseeable future.
  4. All such doomsday scenarios involve a long sequence of if-then contingencies, a failure of which at any point would negate the apocalypse. University of the West of England, Bristol professor of electrical engineering Alan Winfield put it this way in a 2014 article: “If we succeed in building human equivalent AI and if that AI acquires a full understanding of how it works, and if it then succeeds in improving itself to produce super-intelligent AI, and if that super-AI, accidentally or maliciously, starts to consume resources, and if we fail to pull the plug, then, yes, we may well have a problem. The risk, while not impossible, is improbable.”
  5. The development of AI has been much slower than predicted, allowing time to build in checks at each stage. As Google executive chairman Eric Schmidt said in response to Musk and Hawking: “Don’t you think humans would notice this happening? And don’t you think humans would then go about turning these computers off?” Google’s own DeepMind has developed the concept of an AI off switch, playfully described as a “big red button” to be pushed in the event of an attempted AI takeover. As Baidu vice president Andrew Ng put it (in a jab at Musk), it would be “like worrying about overpopulation on Mars when we have not even set foot on the planet yet.”
  6. AI doomsday scenarios are often predicated on a false analogy between natural intelligence and artificial intelligence. As Harvard University experimental psychologist Steven Pinker elucidated in his answer to the 2015 Edge.org Annual Question “What do you think about Machines that Think?”: “AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world.” It is equally possible, Pinker suggests, that “artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization.”
  7. The implication that computers will “want” to do something (like convert the world into paperclips) means AI has emotions, but as science writer Michael Chorost notes, “the minute an A.I. wants anything, it will live in a universe with rewards and punishments—including punishments from us for behaving badly.”

Given the zero percent historical success rate of apocalyptic predictions, coupled with the incrementally gradual development of AI over the decades, we have plenty of time to build in fail-safe systems to prevent any such AI apocalypse.
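The weather analogy used above (small early errors snowballing into large later ones) is easy to demonstrate with the Lorenz system, a classic miniature weather model. In this sketch (my own illustration, using a crude fixed-step integrator), two simulations that start one part in ten million apart end up making unrelated “forecasts”.

```python
def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """One crude Euler step of the Lorenz equations."""
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

def simulate(x0, steps=3000):
    """Run the model forward from (x0, 1, 1) for `steps` time slices."""
    state = (x0, 1.0, 1.0)
    for _ in range(steps):
        state = lorenz_step(*state)
    return state

a = simulate(1.0)
b = simulate(1.0000001)    # initial states differ by one part in ten million
print(abs(a[0] - b[0]))    # after 3000 steps the two runs no longer agree
```

This is the same snowballing that makes precise long-range weather prediction, and by the argument above precise brain emulation, so hard.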

________

Why Artificial Intelligence won’t take over the World:

Part of the problem is that the term “artificial intelligence” itself is a misnomer. AI is neither artificial, nor all that intelligent. AI isn’t artificial, simply because we, natural creatures that we are, make it. Neither is AI all that intelligent, in the crucial sense of autonomous. Consider Watson, the IBM supercomputer that famously won the American game show “Jeopardy.” Not content with that remarkable feat, its makers have had Watson prepare for the federal medical licensing exam, conduct legal discovery work better than first-year lawyers, and outperform radiologists in detecting lung cancer on digital X-rays. But compared to the bacterium Escherichia coli, Watson is a moron. The first life forms on earth, bacteria have been around for 4 billion years. They make up more of the earth’s biomass than plants and animals combined. Bacteria exhibit a bewildering array of forms, behaviors and habitats. Unlike AI, though, bacteria are autonomous. They locomote, consume and proliferate all on their own. They exercise true agency in the world. To be more precise, bacteria are more than just autonomous: they’re autopoietic, self-made. Despite their structural complexity, they required no human or other intervention to evolve. Yet when we imagine bacteria, we tend to evoke illustrations of them that misleadingly simplify. Within the bacterium, one of the simplest living things on earth, an array of structures—capsule, wall, membrane, cytoplasm, ribosomes, plasmid, pili, nucleoid, flagellum—work in concert to common ends. In this astonishing symphony of internal and external organic activity, there is undeniably a vast intelligence at play, an intelligence of which we, as rational scrutinizers, have but a dim grasp. This planet belongs to bacteria. There are more bacteria on earth than all other living organisms. The human body contains more bacterial cells than human cells.
We lived with arrogant optimism that we had conquered bacterial infections. How wrong we were! Bacteria have reclaimed their premier status and are winning the war against humans, literally mocking our antibiotic weaponry by developing antibiotic resistance. Bacteria can do this because they are living organisms.

_

Human behavior, in all its predictably irrational glory, is still the culmination of a complexity that dwarfs the relative primitiveness of the bacterium. There are some 37.2 trillion cells in our body, and each of them contains in its DNA all of the information needed to make the whole body. DNA can hold more data in a smaller space than any of today’s digital memories. Our minds—grounded in these bodies—interact with a vast, dynamic world. Max Galka, an expert in machine learning, says that “machines have historically been very bad at the kind of thinking needed to anticipate human behavior.” “After decades of work,” he points out, “there has been very little progress to suggest it is even possible.” This is where Hawking’s conception of AI falters. He admonishes that “there is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains.” This is the perspective of a physicist, indoctrinated at a formative age in the “brain as computer” notion of human intelligence. But brains are organs in bodies made up of cells, and intelligence is much more than merely making “advanced computations.” The molecules that make up the protein chains within a bacterium are doing more than jostling each other randomly. Something akin to “advanced computations” occurs here as well, inasmuch as bacteria cohere as living things. To be sure, bacteria may not be rational in the prescribed way rationalists like Hawking define it, but they certainly exhibit an intelligence. They have needs. They act in the world to meet those needs. As such, prokaryotes, bacteria and their single-celled brethren, the Archaea, are much more than the organic equivalent of machines. Machines may act in the world, but they don’t have agency. Machines extend the agency of those who make and use them. Machines are akin to prosthetics.

For any AI to become self-aware, it would have to become other-aware, since the self has no meaning outside of a social context. And to properly socialize in this way, our hypothetical AI entity would require a body beyond the circuits that comprise the internal environs of computers. Like a brain, an AI can’t know itself without interacting with others in the world through a body. An AI’s sense of self would emerge from this coupling of its body to a community of other bodies in the world. Most AI professionals realize this. The executive director of the Data Incubator, Michael Li, writes: “Early AI research … thought of reasoning as totally abstract and deductive: a brain in a vat just processing symbolic information. These paradigms tended to be brittle and not very generalizable. More recent updates have tended to incorporate the body and ‘sensory perception’, inductively taking real-world data into account and learning from experience—much as a human child might do. We see a dog, and hear it barking, and slowly begin to associate dogs with barking.” Given the practical limitations of AI, we’re left wondering what Hawking is really getting at when he warns of an imminent AI coup. Since it’s more apt to think of AI as cognitive prosthetics, it behooves us to trace the augmentation back to its source.

__

Seeing all this AI technology leaves one wondering whether computers are more intelligent than humans. The advantage of AI is that certain tasks can be executed much faster and more accurately than a human can. AI can perform certain tasks better than some or even most people. AI is also more consistent than humans. But even in view of such impressive computer accomplishments, in terms of complexity and versatility the human brain far surpasses any computer on earth. And “a computer’s speed in calculations and step-by-step logic is far surpassed by the brain’s ability in parallel processing, integrating and synthesizing information, and abstracting it from generalities.” Computers do not even come close to the brain’s ability to recognize a face or an object in an instant. For years man’s brain has been likened to a computer, yet recent discoveries show that the comparison falls far short. “How does one begin to comprehend the functioning of an organ with somewhere in the neighborhood of 100 billion neurons with a million billion synapses (connections), and with an overall firing rate of perhaps 10 million billion times per second?” asked Dr. Richard M. Restak. His answer? “The performance of even the most advanced of the neural-network computers has about one ten-thousandth the mental capacity of a housefly.” Consider, then, how much a computer fails to measure up to a human brain, which is so remarkably superior. When a computer system needs to be adjusted, a programmer must write and enter new coded instructions. Our brain does such work automatically, both in the early years of life and in old age. Even the most advanced computers are very primitive compared to the brain.
Scientists have called it “the most complicated structure known” and “the most complex object in the universe.” One scientist estimated that our brain can hold information that “would fill some twenty million volumes, as many as in the world’s largest libraries.” Some neuroscientists estimate that during an average life span, a person uses only 1/100 of 1 percent (0.0001) of his potential brain capacity. A computer works only from instructions designed by a human being. If something goes wrong it stops and waits for further instructions from the human operator. Such computers can be said to be efficient but hardly intelligent. Therefore computers will never be more intelligent than human beings.

_______

_______

Moral of the story: 

_

  1. Computer Science is all about getting things done: finding solutions to our problems and filling gaps in our knowledge. Artificial Intelligence (AI) is the field of computer science dedicated to developing machines that can mimic human intelligence and perform tasks just as a human would. AI aims to make computer programs that can solve problems and achieve goals in the world as well as humans can. Much of the recent work on AI focuses on solving problems of more obvious importance than making a conscious machine. Contrary to general perception, artificial intelligence is not limited to information & communication technology; it is being extensively used in other areas such as medicine, business, education, law, and manufacturing.

_

  1. Artificial intelligence (AI) is intelligence exhibited by a machine (e.g., a computer) that mimics human intelligence. There are many activities and behaviors that are considered intelligent when exhibited by humans, including seeing, learning, using tools, understanding human speech, reasoning, making good guesses, playing games, and formulating plans and objectives. AI focuses on how to get machines or computers to perform these same kinds of activities, though not necessarily in the same way that humans might do them. AI researchers are free to use methods that are not observed in human intelligence or that involve much more computing than human intelligence can do. But one cannot overlook the fact that AI researchers are themselves humans using human intelligence. As soon as AI becomes useful enough and common enough, it becomes routine technology and nobody calls it AI anymore. Also, AI tools have solved some of the most difficult problems in computer science, and many AI inventions have been adopted by mainstream computer science and are no longer considered a part of AI.

_

  1. Computer programs have plenty of speed and memory but their abilities correspond to the intellectual mechanisms that program designers understand well enough to put in programs. Whenever people do better than computers on some task or computers use a lot of computation to do as well as people, this demonstrates that the program designers lack understanding of the intellectual mechanisms required to do the task efficiently. AI researchers have been able to create computers that can perform jobs that are complicated for people to do, but ironically they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do.

_

  1. AI is a shift from problem-solving systems to knowledge-based systems. In conventional computing the computer is given data and told how to solve a problem, whereas in AI the system is given knowledge about a domain plus some inference capability, with the ultimate goal of developing techniques that permit systems to learn new knowledge autonomously and continually improve the quality of the knowledge they possess. A knowledge-based system is a system that uses artificial intelligence techniques in problem-solving processes to support human decision-making, learning, and action. AI is the study of heuristics with reasoning ability, rather than algorithms. It may be more appropriate to seek and accept a sufficient solution (heuristic search) to a given problem, rather than an optimal solution (algorithmic search) as in conventional computing.
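The contrast between algorithmic (exhaustive, optimal) search and heuristic (sufficient) search can be sketched in a few lines of Python. This toy packing problem is purely illustrative and assumes nothing from the text beyond the heuristic-versus-algorithmic distinction:

```python
from itertools import combinations

def algorithmic_search(items, capacity):
    """Exhaustive search: examine every subset and guarantee the optimal answer."""
    best = ()
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(subset) <= capacity and sum(subset) > sum(best):
                best = subset
    return best

def heuristic_search(items, capacity):
    """Greedy heuristic: grab the largest items that still fit.
    Fast and usually good enough, but not guaranteed optimal."""
    chosen, total = [], 0
    for item in sorted(items, reverse=True):
        if total + item <= capacity:
            chosen.append(item)
            total += item
    return chosen

items, capacity = [6, 5, 5], 10
print(sum(algorithmic_search(items, capacity)))  # 10 -- optimal (5 + 5)
print(sum(heuristic_search(items, capacity)))    # 6  -- sufficient, found much faster
```

The exhaustive search examines every one of the 2ⁿ subsets, which quickly becomes infeasible; the heuristic settles for a good-enough answer in a single pass, which is exactly the trade-off the paragraph above describes.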

_

  1. The hallmark of biological intelligent behaviour is flexibility that enables a biological intelligence to respond to an almost infinite range of situations in a reasonable if not necessarily optimal fashion. It is impossible to create a set of rules that will tell a computer exactly what to do under every possible set of circumstances. However many modern AI programs exhibit extreme flexibility that enables relatively small programs to exhibit a vast range of possible behaviors in response to differing problems and situations. Whether these systems can ultimately be made to exhibit the flexibility shown by a living organism is still the subject of much debate.

_

  1. Conventional AI computing is called a hard computing technique which follows binary logic (using only two values, 0 or 1) based on symbolic processing using heuristic search – a mathematical approach in which ideas and concepts are represented by symbols such as words, phrases or sentences, which are then processed according to the rules of logic. The expert system is a classic example of hard-computing AI. Conventional AI research focuses on attempts to mimic human intelligence through symbol manipulation and symbolically structured knowledge bases. I want to emphasize that conventional non-AI computing is hard computing having binary logic, crisp systems, numerical analysis and crisp software. AI hard computing differs from non-AI hard computing by having symbolic processing using heuristic search with reasoning ability rather than algorithms. Soft computing AI (computational intelligence) differs from conventional (hard) AI computing in that, unlike hard computing, it is tolerant of imprecision, uncertainty, partial truth, and approximation. In effect, the role model for soft computing is the human mind. Many real-life problems cannot be translated into binary language (unique values of 0 and 1) for computers to process. Computational intelligence therefore provides solutions for such problems. Soft computing includes fuzzy logic, neural networks, probabilistic reasoning and evolutionary computing.

_

  1. AI programs do not think like humans think but they do make difficult judgments, the kind usually left to experts; they do choose among plausible alternatives, and act on these choices.

_

  1. Natural language processing is the key to AI. The goal of natural language processing is to help computers understand human speech in order to do away with computer languages. The ability to use and understand natural language seems to be a fundamental aspect of human intelligence and its successful automation would have an incredible impact on the usability and effectiveness of computers.

_

  1. In artificial intelligence, an expert system is a computer system that emulates the decision-making ability of a human expert by using facts and rules, taken from the knowledge of many human experts in a particular field, to help make decisions and solve problems. It is argued that computers can only do as they are told and consequently cannot perform original (hence, intelligent) actions. Expert systems of AI, especially in the area of diagnostic reasoning, have reached conclusions unanticipated by their designers. Indeed, a number of researchers feel that human creativity can be expressed in a computer program.
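The facts-and-rules machinery of an expert system can be illustrated with a minimal forward-chaining inference engine. The medical-style rules below are hypothetical toy examples, not real diagnostic knowledge:

```python
# Each rule pairs a set of premises with a conclusion to add when they all hold.
# These rules are invented for illustration only.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known facts,
    adding its conclusion, until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"fever", "cough", "short_of_breath"}, RULES)
print(derived)  # includes "flu_suspected" and "refer_to_doctor"
```

Note how the second rule fires only because the first rule's conclusion became a new fact: conclusions chain together, which is how such systems can reach results their designers did not explicitly anticipate.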

_

  1. An artificial neural network is an electronic model of the brain consisting of many interconnected simple processors, akin to the vast network of neurons in the human brain. The goal of the artificial neural network is to solve problems in the same way that the human brain would. Artificial neural network algorithms can learn from examples & experience and make generalizations based on this knowledge.
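A minimal sketch of "learning from examples": a single artificial neuron (a perceptron) taught the logical OR function. It starts with zero weights and adjusts them only when it makes a mistake; this is the simplest building block of the networks described above, not a full network:

```python
# A single artificial neuron (perceptron) learning logical OR from examples.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1   # weights, bias, learning rate

def predict(x):
    """Fire (output 1) if the weighted sum of inputs exceeds the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                   # repeated passes over the examples
    for x, target in examples:
        error = target - predict(x)   # learn only from mistakes
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in examples])  # [0, 1, 1, 1] -- OR learned from data
```

No rule for OR was ever programmed in; the correct behaviour emerges from adjusting connection strengths, which is the essential idea the paragraph describes.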

_

  1. Fuzzy logic is a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy Logic Systems (FLS) produce acceptable but definite output in response to incomplete, ambiguous, distorted, or inaccurate (fuzzy) input. Fuzzy logic is designed to solve problems in the same way that humans do: by considering all available information and making the best possible decision given the input.
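The [0, 1] truth values mentioned above combine with the standard fuzzy operators: minimum for AND, maximum for OR, and complement for NOT. The membership values below ("warm", "humid") are hypothetical:

```python
# Fuzzy truth values lie anywhere in [0, 1]; the classic operators are
# min for AND, max for OR, and complement for NOT.
def fuzzy_and(a, b): return min(a, b)
def fuzzy_or(a, b):  return max(a, b)
def fuzzy_not(a):    return 1.0 - a

# Hypothetical degrees of membership: "the room is warm", "the room is humid".
warm, humid = 0.7, 0.4
print(fuzzy_and(warm, humid))   # 0.4 -- warm AND humid
print(fuzzy_or(warm, humid))    # 0.7 -- warm OR humid
print(fuzzy_not(warm))          # about 0.3 -- NOT warm
```

When the inputs are restricted to exactly 0 and 1, these operators reduce to ordinary Boolean AND, OR and NOT, which is why fuzzy logic is described as a generalization of two-valued logic.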

_

  1. Evolutionary computation is a family of algorithms for global optimization inspired by biological evolution. In evolutionary computation, an initial set of candidate solutions is generated and iteratively updated. Each new generation is produced by stochastically removing less desired solutions, and introducing small random changes. In biological terminology, a population of solutions is subjected to natural selection (or artificial selection) and mutation. As a result, the population will gradually evolve to increase in fitness, in this case the chosen fitness function of the algorithm. Recombination and mutation create the necessary diversity and thereby facilitate novelty, while selection acts as a force increasing quality. An evolutionary algorithm (EA) is a subset of evolutionary computation which uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection; and Genetic algorithm is the most popular type of EA. Genetic algorithms are one of the best ways to solve a problem for which little is known.
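The generate-select-mutate loop described above can be shown on the "hello world" of genetic algorithms: evolving bit strings toward all ones (the OneMax problem). The population size, mutation rate and fitness target here are arbitrary illustrative choices:

```python
import random

random.seed(0)                       # fixed seed so the run is repeatable
LENGTH, POP, GENERATIONS = 20, 30, 100

def fitness(bits):
    """OneMax: fitness is simply the number of ones in the string."""
    return sum(bits)

def crossover(a, b):
    """Recombination: splice two parents at a random point."""
    point = random.randrange(1, LENGTH)
    return a[:point] + b[point:]

def mutate(bits, rate=0.05):
    """Mutation: flip each bit with small probability."""
    return [1 - bit if random.random() < rate else bit for bit in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == LENGTH:
        break                               # a perfect solution has evolved
    parents = population[:10]               # selection: keep the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children         # next generation

print(fitness(population[0]))
```

Selection raises quality, crossover and mutation supply the diversity, and the population's best fitness climbs generation by generation, exactly as the paragraph describes.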

_

  1. Although Machine Intelligence (MI) and Artificial Intelligence (AI) are often used synonymously, some experts say that AI is simply a subset of MI. Machine intelligence involves faster distributed systems, enabled by better chips and networks, sensors and the Internet of Things (IoT)—coupled with huge swaths of data and “advanced smarter algorithms” that “simulate human thinking”.

_

  1. Machine learning is a subset of AI. Machine learning is the ability to learn without being explicitly programmed, and it explores the development of algorithms that learn from given data. A traditional system performs computations to solve a problem; if it is given the same problem a second time, it performs the same sequence of computations again. A traditional system cannot learn. Machine learning algorithms learn to perform tasks rather than simply providing solutions based on a fixed set of data. A machine learning system learns on its own, either from experience, analogy, examples, or by being “told” what to do. Machine learning technologies include expert systems, genetic algorithms, neural networks, random seeded crystal learning, or any effective combinations.
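"Learning from given data rather than from explicit rules" can be shown with one of the simplest learners, a one-nearest-neighbour classifier. The labelled examples below are hypothetical:

```python
# Learning from examples rather than from explicit rules: a one-nearest-
# neighbour classifier.  The labelled training data here is hypothetical.
def nearest_neighbour(training_data, query):
    """Predict the label of the training example closest to the query."""
    closest = min(training_data,
                  key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], query)))
    return closest[1]

# (height_cm, weight_kg) -> species
training_data = [((30, 4), "cat"), ((60, 25), "dog"),
                 ((28, 5), "cat"), ((70, 30), "dog")]
print(nearest_neighbour(training_data, (32, 6)))   # "cat"
```

Nothing in the code says what a cat or a dog is; the behaviour comes entirely from the examples, and adding new examples changes future predictions without any reprogramming, which is the distinction from a traditional system drawn above.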

_

  1. Deep learning is a class of machine learning algorithms. Deep learning is the name we use for “stacked neural networks”; that is, networks composed of several layers. Deep learning creates knowledge from multiple layers of information processing. Deep learning tries to emulate the functions of inner layers of the human brain, and its successful applications are found in image recognition, speech recognition, natural language processing, and email security.
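"Stacked" simply means the output of one layer becomes the input of the next. A minimal forward pass through such a stack, with hypothetical hand-set weights (a real deep network would learn these from data):

```python
import math

def dense_layer(inputs, weights, biases):
    """One layer: weighted sums squashed through a sigmoid non-linearity."""
    return [1 / (1 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
            for row, b in zip(weights, biases)]

def forward(x, network):
    for weights, biases in network:   # each entry is one stacked layer
        x = dense_layer(x, weights, biases)
    return x

# Hypothetical hand-set weights: 2 inputs -> 3 hidden units -> 1 output.
network = [
    ([[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]], [0.0, 0.1, -0.1]),
    ([[1.0, -1.0, 0.5]], [0.0]),
]
print(forward([1.0, 0.5], network))   # a single value between 0 and 1
```

Each extra layer re-processes the previous layer's output, so deeper stacks can build progressively more abstract representations of the input, which is the "multiple layers of information processing" mentioned above.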

_

  1. Artificial intelligence and cybernetics are different although overlapping fields. Artificial intelligence aims to make computers smart like humans, while cybernetics aims to understand and build systems that can achieve goals.

_

  1. Robotics and artificial intelligence are different fields. Not all robots require AI and not all AI is implemented in robots. Essentially, robots don’t need to reason; they merely need to execute commands. A smart robot is a robot with an artificial intelligence (AI) system, so that it can learn from its environment and its experience, and build on its capabilities based on that knowledge.

  1. Smart Robots can help surgeons in three ways:

–Robotic surgery assistants assist surgeons by passing them the necessary instruments during a procedure and learn about a doctor’s preferences.

–In robotic surgical system, the doctor’s hand movements are translated into the machine’s robotic arms which enable precise movement and magnified vision to perform surgery with very tiny incisions and see inside the body in 3D.

–Autonomous robots can perform soft-tissue surgery supervised by human surgeons.

_

  1. The Internet of Things (IoT) won’t work without artificial intelligence: the flood of data generated by connected devices is useful only if AI can analyse it and act on it.

_

  1. Even though nanotechnology and artificial intelligence are two different fields, applications of artificial intelligence in nanotechnology could boost nanotechnology, and applications of nanotechnology in artificial intelligence could boost artificial intelligence. Nanocomputers are defined as machines that store information and make complex calculations at the nanoscale; they are commonly associated with DNA, since this kind of biological molecule has the required properties to succeed at both tasks. DNA computing is the performing of computations using the biological molecule DNA, rather than traditional silicon chips.

_

  1. Quantum computers and quantum algorithms could significantly improve artificial intelligence (machine learning), allowing a machine to perform many steps simultaneously and improving the speed and efficiency at which it learns. This may be called quantum AI. As quantum technologies emerge, quantum machine learning will play an instrumental role in our society—including deepening our understanding of climate change, assisting in the development of new medicines and therapies, and also in settings relying on learning through interaction, which is vital in automated cars and smart factories.

_

  1. Conventional antiviruses and firewalls block known suspects from entering the system, but new threats can be added to the wanted list only after causing trouble. Traditional detection schemes are rule or signature based. The creation of a rule or signature relies on prior knowledge of the threat’s structure, source and operation, making it impossible to stop new threats without prior knowledge. Manually identifying all new and disguised threats is too time-consuming to be humanly possible. It is made possible by using AI in cyber security. The primary objective of AI-based security solutions is to help detect what other controls fail to. Machine learning is superior to conventional IT security measures in a number of ways. Machine learning tools analyze the network in real time and develop and implement solutions to resolve problems immediately. Where conventional methods use fixed algorithms, machine learning is flexible and can adapt to combat dynamically evolving cyberattacks. Also, AI systems can train computers to be detectives, snooping around their own circuits and identifying suspicious behaviour. Using machine learning, AI systems can detect unusual activities or behaviors (“anomalies”) and flag them for security teams. The challenge for these AI systems is to avoid false positives – situations where “risks” are flagged that were never risks in the first place. False positives can be minimized by using human-interactive machine learning systems.
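The anomaly-flagging idea, in its simplest statistical form: learn what "normal" looks like from the data itself and flag anything far from it, rather than matching against a fixed signature. The login counts below are hypothetical, and real systems use far richer models:

```python
import statistics

def find_anomalies(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean.
    A minimal statistical anomaly detector, not a production security tool."""
    mean = statistics.mean(values)
    spread = statistics.pstdev(values)
    return [v for v in values if abs(v - mean) > threshold * spread]

# Hypothetical login counts per hour; the spike stands out from the baseline.
hourly_logins = [12, 15, 11, 14, 13, 12, 16, 14, 300]
print(find_anomalies(hourly_logins))   # [300] -- the spike is flagged
```

Note that no signature for the "attack" was ever written; the detector derives its notion of suspicious from the observed baseline. The `threshold` parameter is exactly the false-positive dial mentioned above: lower it and more harmless fluctuations get flagged.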

_

  1. Artificial intelligence is influencing the way we live, interact and improve customer experience. Artificial intelligence has been used in a wide range of fields including medical diagnosis & treatment, stock trading, robot control, remote sensing, scientific discovery, video games & toys, self-driving cars, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, email spam filtering, targeting online advertisements, speech recognition, face recognition, natural language processing, chatterbots, better weather predictions, search and rescue, news generation, financial and banking services, marketing and advertising, the education system, autonomous weapons and drones, policing & security surveillance and the legal profession. AI interactions will clearly help our society evolve, particularly with regard to automated transportation, cyborgs, handling dangerous duties, solving climate change, friendships and improving the care of our elders. Future AI use includes better governance by addressing public grievances and bringing transparency and accountability to governance. The AI market is projected to grow from $8 billion in 2016 to more than $47 billion in 2020.

_

  1. The advantage of artificial intelligence is that it replicates decisions and actions of humans quickly and accurately, without human shortcomings such as fatigue, emotion, bias, susceptibility to errors, limited time and inability to endure hostile environments. Artificial intelligence can be easily copied, and each new copy can–unlike humans–start life fully fledged and endowed with all the knowledge accumulated by its predecessors. Machine intelligences may also have superior computational architectures and learning algorithms. Disadvantages of AI include high cost, lack of human touch, loss of privacy, susceptibility to hacking, bad judgement calls, biased decision making, job losses and inability to take decisions in unfamiliar situations.

_

  1. Artificial intelligence has the potential to teach us to be better drivers, better teachers, better writers and overall better people. When thoughtful and ethical researchers create AI, it can make other humans better people by reducing racism, sexism, online harassment and black markets. On the other hand, AI systems can end up reproducing existing social biases & inequities, contributing towards further systematic marginalisation of certain sections of society: systems trained on datasets collected with biases may exhibit these biases upon use, thus digitizing cultural prejudices such as institutional racism and classism. Responsible collection of data is thus a critical part of machine learning. One of the key problems with artificial intelligence is that it is often invisibly coded with human biases. We should always be suspicious when machine learning systems are described as free from bias if they have been trained on human-generated data. Our biases are built into that training data.

_

  1. AI will change the way we organize our society. The trend goes from programming computers to programming people, and if people are programmed with values not compatible with our core values, it could lead to an automated society with totalitarian features.

_

  1. The state ought to provide an appropriate regulatory framework to ensure that AI technologies are designed and used in ways that are compatible with democracy, human rights, freedom, privacy and equality (no discrimination on any ground). We also need an appropriate code of conduct for anyone who makes AI systems and anyone who has access to sensitive data and algorithms. We can use AI to extend and augment human capability to solve real problems that affect health, poverty, education and politics.

_

  1. Although AI risks cannot be completely eliminated, they are manageable by providing adequate safeguards that include development of testing protocols for the design of AI algorithms, improved cyber-security protections, input validation standards and a kill switch. Reasonable precautions are needed to avoid coding flaws, attackers, infections and mistakes. The development of AI has been much slower than predicted, allowing time to build in checks at each stage.

_

  1. Although AI systems can enhance cyber security, cyber-attack against AI system can potentially pose military/terrorist risks in the future. AI robot can be hacked by cyber-criminals and hacked robot can spy on people, homes, offices and even cause physical damage. A hacked, inoperable robot could be a lost investment to its owner, as tools are not yet readily available to ‘clean’ malware from a hacked robot.

_

  1. Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. Scientists directly compare AI weapons to nuclear ones. Unlike nuclear weapons, which require substantial infrastructure and resources, AI weapons require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It is only a matter of time before they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace and warlords wishing to perpetrate ethnic cleansing. Artificial intelligence is also used in robot strike teams, autonomous armed drones, autonomous landmines, missiles with decision-making powers, and covert swarms of minuscule robotic spies. On the other hand, there are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

_

  1. Nick Bostrom defines superintelligence as intelligence that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. Superintelligence is an AI system that possesses intelligence far surpassing that of the brightest and most gifted human minds. This new superintelligence could become powerful and difficult to control, and may pose an existential risk to humans. Many experts think that superintelligence is likely to arrive within a few decades and could be bad for humanity. Stephen Hawking says that superintelligence might destroy the human race not for nefarious reasons, but because we program it poorly out of our own incompetence. Elon Musk says that artificial intelligence is our greatest existential threat. The goals of machines could change as they get smarter. Once computers can effectively reprogram themselves, and successively improve themselves, leading to a so-called “technological singularity”, the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed. When a machine is superintelligent, we cannot think what the machine is thinking. It seems plausible that advances in artificial intelligence could eventually enable superhuman capabilities in areas like programming, strategic planning, social influence, cyber-security, research and development, and other knowledge work. These capabilities could potentially allow an advanced artificial intelligence agent to increase its power, develop new technology, outsmart opposition, exploit existing infrastructure, or exert influence over humans. A superintelligence whose goals are misaligned with ours can outsmart financial markets, out-invent human researchers, out-manipulate human leaders, and develop weapons we cannot even understand.

_____

  1. I assert that no computer or machine will ever achieve human intelligence or superintelligence, for the following reasons:

-1—The average brain has 100 billion neurons which are inter-connected via synapses, and each of the one hundred billion neurons has on average 7,000 synaptic connections to other neurons. It has been estimated that the brain of a three-year-old child has about 1 quadrillion synapses. This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 100 to 500 trillion synapses. The total processing ability of the brain is between 10 and 36 quadrillion calculations per second (cps), i.e., 10–36 petaflops. The world’s fastest supercomputer, China’s Tianhe-2, clocks in at about 34 quadrillion cps. But Tianhe-2 takes up 720 square meters of space, uses 24 megawatts of power (the brain runs on just 20 watts), and cost $390 million to build.
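The figures above imply a striking gap in energy efficiency, which a quick back-of-the-envelope check makes explicit (taking a round 10 quadrillion cps for the brain, from the lower end of the range quoted):

```python
# Back-of-the-envelope check of the figures above: calculations per second
# per watt for the brain versus the Tianhe-2 supercomputer.
brain_cps, brain_watts = 1e16, 20        # ~10 quadrillion cps on 20 W
tianhe_cps, tianhe_watts = 3.4e16, 24e6  # ~34 quadrillion cps on 24 MW

brain_efficiency = brain_cps / brain_watts      # 5e14 cps per watt
tianhe_efficiency = tianhe_cps / tianhe_watts   # ~1.4e9 cps per watt
print(brain_efficiency / tianhe_efficiency)     # brain ~350,000x more efficient
```

So even granting Tianhe-2 a raw-speed edge, the brain does comparable work on roughly a three-hundred-thousandth of the energy budget.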

-2—Brains are analogue; computers are digital. Neurons in human brain are complex analog systems whose behavior can’t be modelled precisely the way digital circuits can. Since analog computers have proven to be much more efficient at simulating biological systems than digital computers, AI which tries to emulate human (biological) intelligence would become more flexible and more creative if analog computers are used to implement AI programs.   

-3—The brain is a massively parallel machine; computers are modular and serial. Due to parallel functioning, brain can perform many tasks simultaneously while computer can focus on one task at a time. Many AI systems are supplementing the central processing unit (CPU) cores in their chips with graphics processing unit (GPU) cores. A CPU consists of a few cores that use serial processing to perform one task at a time, while a GPU consists of thousands of cores that use parallel processing to handle several tasks simultaneously to overcome difficulties in multitasking. However, there is no algorithm yet that can learn multiple skills.  

-4—All that a transistor in a computer can do is switch current on or off. Transistors have no metabolism, cannot manufacture chemicals and cannot reproduce. The major advantage of the transistor is that it can process information very fast, with signals travelling near the speed of light, which neurons and synapses in the brain cannot do. An artificial neuron can operate a million times faster than its biological counterpart. The brain processes information slowly, since biological neurons are slow in action (on the order of milliseconds). Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz). Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, whereas existing electronic processing cores can communicate near the speed of light. But each neuron is a living cell and a computer in its own right. A biological neuron has the signal processing power of thousands of transistors. Also, synapses are far more complex than electrical logic gates. Unlike transistors, biological neurons can modify their synapses and modulate the frequency of their signals. Unlike digital computers with fixed architecture, the brain can constantly re-wire its neurons to learn and adapt. Instead of running programs, biological neural networks learn by doing and remembering, and this vast network of connected neurons gives the brain excellent pattern recognition.

-5—Comparison of brain and computer is unwise. Despite impressive computer accomplishments, in terms of complexity and versatility the human brain far surpasses any computer on earth. And “a computer’s speed in calculations and step-by-step logic is far surpassed by the brain’s ability in parallel processing, integrating and synthesizing information, and abstracting it from generalities.” Computers do not even come close to the brain’s ability to recognize a face or an object in an instant. In digital computers all calculations must pass through the CPU, which eventually slows down the program. The human brain doesn’t use a CPU and is much more efficient. When a computer system needs to be adjusted, a programmer must write and enter new coded instructions. Our brain does such work automatically, both in the early years of life and in old age. No AI program is powerful enough to give a machine the complex, diverse, and highly integrated capabilities of an adult brain, such as vision, speech, and language. Even the most advanced computers are very primitive compared to the brain.

-6—Computers have, literally, no intelligence, no motivation, no autonomy, no beliefs, no desires, no morals, no emotions and no agency. A computer works only from instructions designed by a human being. If something goes wrong it stops and waits for further instructions from the human operator. Such computers can be said to be efficient but hardly intelligent. Intelligence is much more than merely making “advanced computations.” The flexibility shown by AI is primitive compared to the flexibility of human intelligence. It is highly unlikely that machines which match the intelligence, creativity and intellectual capabilities of humans can be synthesised.

-7—Theory of mind is the ability to attribute mental states—beliefs, intents, desires, pretending, knowledge, etc.—to oneself and others, and to understand that others have beliefs, desires, intentions, and perspectives that are different from one’s own. Theory of mind is the capacity to imagine or form opinions about the cognitive states of other people. Humans can predict the actions of others by understanding their motives and emotional states. Since AI has no mind, it cannot predict the actions of others or understand others’ motives and emotional states. Since emotion is fundamental to human experience, influencing cognition, perception and everyday tasks such as learning, communication and even rational decision-making, AI with no emotions cannot emulate humans in learning, communication and rational decision making.

-8—In my view, if learning means churning mountains of data through powerful algorithms using statistical reasoning to perform or predict anything without understanding what is learned, then it is no learning. Computers can only manipulate symbols which are meaningless to them. AI will never match human intelligence in terms of understanding information in context. As a human I feel that if I learn anything without understanding it, then it is no learning at all. Many students cram for exams but it is cramming and not learning.  “What” and “how” don’t help us understand why something is important. They are the facts that we’re putting into our brains. We can grasp “what” and “how,” but we can’t learn without understanding the answers to those “why” questions. Understanding the “why” is true learning. Cramming is retaining without comprehending, and learning is comprehending and thus retaining. Knowledge acquired through cramming cannot solve problems in unfamiliar situations. This is the basic difference between human learning and machine learning. In my view no machine learning will ever match human learning. As a corollary no artificial intelligence will ever match human intelligence.  

-9—The only way to become an expert is by trying things and seeing if they work, which requires conducting experiments and waiting to see how the world responds. This is inherently difficult, and different from the automation AI provides. Automation is essentially what supplements AI in achieving its goal. More importantly, automation completes the task at hand but does not employ the human intellect which forms the basis upon which artificial intelligence is built.

-10—Machines depend heavily on humans for supplies, repairs and other maintenance. Machines are, at the end of the day, developed by humans, and no amount of information and data processing can make them as good as humans when it comes to gut instinct. The human thought process is unique, having been shaped over time by evolution.

-11—Intelligence manifests itself only relative to specific social and cultural contexts. So for AI to reach human intelligence, it must understand social and cultural context.

-12—Within the bacterium, one of the simplest living things on earth, an array of structures (capsule, wall, membrane, cytoplasm, ribosomes, plasmid, pili, nucleoid, flagellum) works in concert to common ends. In this astonishing symphony of internal and external organic activity there is undeniably a vast intelligence at play: bacteria mock our antibiotic weaponry by developing antibiotic resistance. No AI system displays such intelligence. Human behaviour is the culmination of a complexity that dwarfs the relative primitiveness of the bacterium. There are an estimated 37.2 trillion cells in our body, and each one carries in its DNA the information needed to make the whole body. DNA can hold more data in a smaller space than any of today’s digital memories. How can any AI system come close to human intelligence?

-13—Read my article ‘Creativity’ posted at http://drrajivdesaimd.com/2011/09/30/creativity/ .  Creativity and intelligence are not unrelated abilities; cognitive creativity is highly correlated with intelligence. Creativity in the human brain arises from associating or connecting remote, unrelated ideas, concepts and thoughts in unpredictable ways to create a novel idea, concept or thought that is useful to humans. From the Stone Age to nanotechnology, we have climbed the ladder of creativity, and the development of AI is itself a product of human creativity. No machine or computer can associate remote, unrelated ideas in an unpredictable way to create a genuinely novel concept. Although AI can analyse large amounts of data, find patterns, make connections and make suggestions that improve a process, artificial intelligence cannot be creative on its own; but artificial intelligence working together with creative humans has great potential to produce things that go beyond what we do today. So AI can augment human creativity. The idea of artificial-intelligence/human-creativity hybrids has been applied in industries such as music, graphic design, industrial design, video games and special effects.

______

  1. AI’s last laugh:

Although computers will never become more intelligent than humans, there is a silver lining for AI. All AI programs, systems and technologies are created by very intelligent engineers, researchers and scientists. Most common people of the world have average IQ; they are certainly not highly intelligent, and in my view many are quite unintelligent. When such people have to use their brains for day-to-day work, they are inefficient, slow, error-prone and forgetful. When lay people use AI, they are immensely helped, because the intelligence of AI is a surrogate for the intelligence of its creators; not to mention that certain tasks can be executed much faster and more accurately than a human can manage, and AI can perform certain tasks better than some or even most people. For common people with lower intelligence, AI is a boon. For intelligent people, AI can ease their workload, facilitate task management and speed up the work process so that they can focus on high-value tasks.

In a nutshell, AI can improve human performance and decision-making; and augment human creativity and intelligence but not replicate it.   

________

Dr. Rajiv Desai. MD.

March 23, 2017

________ 

Postscript:

The IBM supercomputer Watson is predicting patient outcomes more accurately than physicians and is continuously learning from medical journals. No physician can read all medical journals, imbibe the knowledge in them and apply that updated knowledge to every patient to improve outcomes. That is the benefit of AI in saving lives. Instead of demonising it cynically, let us applaud its benefits for mankind.

_____

 

 
