An Educational Blog
Robot:
_
Pepper (above), a humanoid robot developed by SoftBank Robotics, was launched in 2014 and can read human emotions. Over 2,000 companies around the world have adopted Pepper as an assistant to welcome, inform and guide visitors in an innovative way.
____
Section-1
Prologue:
What is the first thing that comes to mind when you think of a robot?
For many people it is a machine that imitates a human, like the androids in Star Wars, Terminator and Star Trek: The Next Generation. However much such robots capture our imagination, they still inhabit only science fiction. Science-fiction films and novels usually portray robots as one of two things: destroyers of the human race or friendly helpers. We still haven’t been able to give a robot enough ‘common sense’ to interact reliably with a dynamic world, and the independent intelligent machines seen in science fiction remain a long way off. The robots you will encounter most frequently do work that is too dangerous, repetitive, boring, onerous, or just plain nasty. Most of the robots in the world are of this type. They can be found in the manufacturing, military, medical and space industries. Some robots, like the Mars rover Sojourner or the underwater robot Caribou, help us learn about places that are too dangerous for us to go. Many robots are just disembodied hands that sit at the ends of conveyor belts, picking things up and moving them, while other types of robots are just plain fun for kids of all ages.
_
What is a robot?
There’s no precise definition, but by general agreement a robot is a programmable machine that imitates the actions or appearance of an intelligent creature, usually a human. To qualify as a robot, a machine has to be able to do two things: 1) get information from its surroundings, and 2) do something physical, such as move or manipulate objects. Robots are machines capable of carrying out physical tasks. They can be directly controlled by humans, or have some ability to operate by themselves. Although intelligent humanoid robots still remain mostly in the realm of science fiction, robotic machines are all around us. These feats of engineering already help us with many areas of life, and could transform the future for us. I have published articles on ‘Driverless car’ and ‘Drone’; both are kinds of robots.
_
Robots have been with us for less than 60 years, but the idea of inanimate creations doing our bidding is much, much older. The ancient Greek poet Homer described maidens of gold, metallic helpers for Hephaistos, the Greek god of the forge. The golems of medieval Jewish legend were robot-like servants made of clay, brought to life by a spoken charm. Leonardo da Vinci drew plans for a mechanical man in 1495. The word robot comes from the Czech word robota, meaning drudgery or slave-like labor. It was first used to describe fabricated workers in a fictional 1920 play by Czech author Karel Čapek called Rossum’s Universal Robots. In the story, a scientist invents robots to help people by performing simple, repetitive tasks. However, once the robots are used to fight wars, they turn on their human owners and take over the world. Real robots wouldn’t become possible until the 1950s and ’60s, with the invention of transistors and integrated circuits. Compact, reliable electronics and a growing computer industry added brains to the brawn of already existing machines.
_
A robot has these essential characteristics:
First of all, your robot has to be able to sense its surroundings, in ways not unlike the ways you sense yours. Giving your robot sensors: light sensors (eyes), touch and pressure sensors (hands), chemical sensors (nose), hearing and sonar sensors (ears), and taste sensors (tongue) will give your robot awareness of its environment.
A robot needs to be able to move around its environment. Whether rolling on wheels, walking on legs or propelled by thrusters, a robot needs to be able to move. To count as a robot, either the whole robot moves, like the Sojourner, or just parts of it move, like the Canadarm.
A robot needs to be able to power itself. A robot might be solar powered, electrically powered, or battery powered. The way your robot gets its energy will depend on what it needs to do.
A robot needs some kind of “smarts.” This is where programming enters the picture. A programmer is the person who gives the robot its ‘smarts,’ and the robot needs some way to receive the program so that it knows what to do.
Robotics is an interdisciplinary branch of computer science and engineering involving the design, construction, operation, and use of robots. Robotics brings together several very different engineering areas and skills. A robot is a system of sensors, control systems, manipulators, power supplies and software all working together to perform a task.
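The characteristics above (sense, act, smarts) amount to a simple control loop. A minimal sketch in Python; the sensor and motor functions here are hypothetical stand-ins for real hardware drivers, not any actual robot API:

```python
import random

def read_light_sensor():
    # Hypothetical stand-in for a real light-sensor driver;
    # returns a brightness reading between 0.0 (dark) and 1.0 (bright).
    return random.uniform(0.0, 1.0)

def set_motor_speed(speed):
    # Hypothetical stand-in for a real motor driver.
    print(f"motor speed set to {speed:.2f}")

def control_step(threshold=0.5):
    """One pass through the sense -> decide -> act cycle."""
    brightness = read_light_sensor()   # 1) sense the environment
    if brightness > threshold:         # 2) the program supplies the 'smarts'
        speed = 1.0                    #    bright enough: drive forward
    else:
        speed = 0.0                    #    too dark: stop
    set_motor_speed(speed)             # 3) act on the environment
    return speed
```

A real robot would run such a step continuously in a loop, which is why reliable sensing and a power source are as essential as the program itself.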
_
If you think robots are mainly the stuff of space movies, think again. Right now, all over the world, robots are on the move. They’re painting cars at Ford plants, assembling Milano cookies for Pepperidge Farm, walking into live volcanoes, driving trains in Paris, and defusing bombs in Northern Ireland. As they grow tougher, nimbler, and smarter, today’s robots are doing more and more things we can’t, or don’t want to, do. In recent years, robotics applications have expanded well beyond industrial manufacturing and are now widely used in the medical, transport, underwater, entertainment and military sectors. Robots are found everywhere: in factories, homes and hospitals, and even in outer space. Much research and development is being invested in robots that interact with humans directly. There are humanoid robots that serve as companions to the elderly, remind them to take their medications, and hold intelligent conversations. Robots are used in schools to increase students’ motivation to study STEM and as a pedagogical tool to teach STEM in a concrete environment. You may be worried that a robot is going to steal your job, but in the near future you are more likely to work alongside a robot than to be replaced by one.
_
Since antiquity, humankind has dreamed of creating intelligent machines. The invention of the computer and the breathtaking pace of technological progress appear to be bringing the realization of this dream within our grasp. Scientists and engineers across the world are working on the development of intelligent robots, which are poised to become an integral part of all areas of human life. Robots are to do the housework, look after the children, care for the elderly… Yet, the ultimate vision goes even further, envisioning a merger of man and machine that will throw off the biological shackles of evolution and finally make eternal life a reality!
_____
_____
Abbreviations and synonyms:
IFR = International Federation of Robotics
R.U.R. = Rossum’s Universal Robots.
KUKA = Keller und Knappich Augsburg.
FANUC = Fuji Automatic NUmerical Control
ERC = Electronic Review Comments
ASIMO = Advanced Step in Innovative Mobility
AIBO = Artificial Intelligence bot (“bot” is short for “robot”)
TOPIO = TOSY Ping Pong Playing Robot
ROS = Robot Operating System
RPA = Robotic Process Automation
AMR = Autonomous Mobile Robots
IoRT = Internet of Robotic Things
SCARA = Selective Compliance Arm for Robotic Assembly
SCOT = Smart Cyber Operating Theater
AMIGO = Advanced Multimodality Image Guided Operating
_____
_____
Robo-cabulary:
Human-robot interaction:
A field of robotics that studies the relationship between people and machines. For example, a self-driving car could see a stop sign and hit the brakes at the last minute, but that would terrify pedestrians and passengers alike. By studying human-robot interaction, roboticists can shape a world in which people and machines get along without hurting each other.
Humanoid:
The classical sci-fi robot. This is perhaps the most challenging form of robot to engineer, on account of it being both technically difficult and energetically costly to walk and balance on two legs. But humanoids may hold promise in rescue operations, where they’d be able to better navigate an environment designed for humans, like a nuclear reactor.
Actuator:
Actuators are the generators of the forces that robots employ to move themselves and other objects. Actuators are what power most robots. The control signals are usually electrical but may, more rarely, be pneumatic or hydraulic.
End-effector:
Accessory device or tool specifically designed for attachment to the robot wrist or tool mounting plate to enable the robot to perform its intended task. (Examples may include gripper, spot-weld gun, arc-weld gun, spray-paint gun, or any other application tools.)
Hydraulics:
Control of mechanical force and movement, generated by the application of liquid under pressure.
Pneumatics:
Control of mechanical force and movement, generated by the application of compressed gas.
Soft robotics:
A field of robotics that forgoes traditional rigid materials and motors in favor of generally softer materials, moving the robot’s parts by pumping air or oil.
Lidar:
Lidar, or light detection and ranging, is a system that blasts a robot’s surroundings with lasers to build a 3-D map. This is pivotal both for self-driving cars and for service robots that need to work with humans without running them down.
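Under the hood, a lidar scan is essentially a list of angle-and-range readings; converting each reading to Cartesian coordinates yields the points of the map. A minimal sketch of that conversion (illustrative only, not tied to any particular sensor’s API):

```python
import math

def scan_to_points(scan):
    """Convert lidar returns, given as (angle_radians, range_meters)
    pairs, into (x, y) points in the robot's own frame of reference."""
    points = []
    for angle, dist in scan:
        x = dist * math.cos(angle)  # distance along the robot's heading
        y = dist * math.sin(angle)  # distance to the robot's left
        points.append((x, y))
    return points
```

A full mapping system would then merge many such scans, taken from different robot positions, into one consistent 3-D map.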
Singularity:
The hypothetical point where machines grow so advanced that humans are forced into a societal and existential crisis. In industrial robotics, singularity also refers to a condition caused by the collinear alignment of two or more robot axes, resulting in unpredictable robot motion and velocities.
Multiplicity:
The idea that robots and AI won’t supplant humans, but complement them.
_
Types of robots:
_____
_____
Section-2
History of robots & robotics:
The history of robots has its origins in the ancient world. During the industrial revolution, humans developed the structural engineering capability to control electricity so that machines could be powered with small motors. In the early 20th century, the notion of a humanoid machine was developed. The first uses of modern robots were in factories as industrial robots. These industrial robots were fixed machines capable of manufacturing tasks which allowed production with less human work. Digitally programmed industrial robots with artificial intelligence have been built since the 2000s.
_
The robot notion derives from two strands of thought, humanoids and automata. The notion of a humanoid (or human-like nonhuman) dates back at least to Hesiod’s Pandora, some 2,700 years ago, and even further. Egyptian, Babylonian, and ultimately Sumerian legends fully 5,000 years old reflect the widespread image of the creation, with god-men breathing life into clay models. One variation on the theme is the idea of the golem, associated with the Prague ghetto of the sixteenth century. This clay model, when breathed into life, became a useful but destructive ally. The golem was an important precursor to Mary Shelley’s Frankenstein; or, The Modern Prometheus (1818). This story combined the notion of the humanoid with the dangers of science (as suggested by the myth of Prometheus, who stole fire from the gods to give it to mortals). In addition to establishing a literary tradition and the genre of horror stories, Frankenstein also imbued humanoids with an aura of ill fate.
Automata, the second strand of thought, are literally “self-moving things” and have long interested mankind. The idea of automata originates in the mythologies of many cultures around the world. Engineers and inventors from ancient civilizations, including Ancient China, Ancient Greece, and Ptolemaic Egypt, attempted to build self-operating machines, some resembling animals and humans. Early descriptions of automata include the artificial doves of Archytas, the artificial birds of Mozi and Lu Ban, a “speaking” automaton by Hero of Alexandria, a washstand automaton by Philo of Byzantium, and a human automaton described in the Lie Zi. Early models depended on levers and wheels, or on hydraulics. Clockwork technology enabled significant advances after the thirteenth century, and later steam and electro-mechanics were also applied. The primary purpose of automata was entertainment rather than employment as useful artifacts. Although many patterns were used, the human form always excited the greatest fascination. During the twentieth century, several new technologies moved automata into the utilitarian realm. Geduld and Gottesman and Frude review the chronology of clay model, water clock, golem, homunculus, android, and cyborg that culminated in the contemporary concept of the robot.
_
Early beginnings:
Many ancient mythologies, and most modern religions include artificial people, such as the mechanical servants built by the Greek god Hephaestus (Vulcan to the Romans), the clay golems of Jewish legend and clay giants of Norse legend, and Galatea, the mythical statue of Pygmalion that came to life. Since circa 400 BC, myths of Crete include Talos, a man of bronze who guarded the island from pirates.
In ancient Greece, the Greek engineer Ctesibius (c. 270 BC) “applied a knowledge of pneumatics and hydraulics to produce the first organ and water clocks with moving figures.” In the 4th century BC, the Greek mathematician Archytas of Tarentum postulated a mechanical steam-operated bird he called “The Pigeon”. Hero of Alexandria (10–70 AD), a Greek mathematician and inventor, created numerous user-configurable automated devices, and described machines powered by air pressure, steam and water.
In ancient China, the 3rd-century text of the Lie Zi describes an account of humanoid automata, involving a much earlier encounter between the Chinese emperor King Mu of Zhou and a mechanical engineer known as Yan Shi, an ‘artificer’. Yan Shi proudly presented the king with a life-size, human-shaped figure of his mechanical ‘handiwork’ made of leather, wood, and artificial organs. There are also accounts of flying automata in the Han Fei Zi and other texts, which attribute to the 5th-century BC Mohist philosopher Mozi and his contemporary Lu Ban the invention of artificial wooden birds (ma yuan) that could successfully fly.
In 1066, the Chinese inventor Su Song built a water clock in the form of a tower which featured mechanical figurines which chimed the hours. His mechanism had a programmable drum machine with pegs (cams) that bumped into little levers that operated percussion instruments. The drummer could be made to play different rhythms and different drum patterns by moving the pegs to different locations.
Samarangana Sutradhara, a Sanskrit treatise by Bhoja (11th century), includes a chapter about the construction of mechanical contrivances (automata), including mechanical bees and birds, fountains shaped like humans and animals, and male and female dolls that refilled oil lamps, danced, played instruments, and re-enacted scenes from Hindu mythology.
_
The 13th-century Muslim scientist Ismail al-Jazari created several automated devices. He built automated moving peacocks driven by hydropower. He also invented the earliest known automatic gates, which were driven by hydropower, and created automatic doors as part of one of his elaborate water clocks. Al-Jazari is not only known as the “father of robotics”; he also documented 50 mechanical inventions (along with construction drawings) and is considered the “father of modern-day engineering.” The inventions he describes in his book include the crank mechanism, connecting rod, programmable automaton, humanoid robot, reciprocating piston engine, suction pipe, suction pump, double-acting pump, valve, combination lock, cam, camshaft, segmental gear, the first mechanical clocks driven by water and weights, and especially the crankshaft, considered by some to be the most important mechanical invention in history after the wheel. One of al-Jazari’s humanoid automata was a waitress that could serve water, tea or drinks. The drink was stored in a tank with a reservoir, from which it dripped into a bucket and, after seven minutes, into a cup, after which the waitress appeared out of an automatic door to serve the drink. Al-Jazari also invented a hand-washing automaton incorporating a flush mechanism of the kind now used in modern flush toilets. It features a female humanoid automaton standing by a basin filled with water; when the user pulls the lever, the water drains and the automaton refills the basin.
Mark E. Rosheim summarizes the advances in robotics made by Muslim engineers, especially al-Jazari, as follows:
Unlike the Greek designs, these Arab examples reveal an interest, not only in dramatic illusion, but in manipulating the environment for human comfort. Thus, the greatest contribution the Arabs made, besides preserving, disseminating and building on the work of the Greeks, was the concept of practical application. This was the key element that was missing in Greek robotic science.
_
In Renaissance Italy, Leonardo da Vinci (1452–1519) sketched plans for a humanoid robot around 1495. Da Vinci’s notebooks, rediscovered in the 1950s, contained detailed drawings of a mechanical knight now known as Leonardo’s robot, able to sit up, wave its arms and move its head and jaw. The design was probably based on anatomical research recorded in his Vitruvian Man. It is not known whether he attempted to build it. According to Encyclopædia Britannica, Leonardo da Vinci may have been influenced by the classic automata of al-Jazari.
_
In Japan, complex animal and human automata were built between the 17th and 19th centuries, with many described in the Karakuri zui (Illustrated Machinery, 1796). One such automaton was the karakuri ningyō, a mechanized puppet. Different variations of the karakuri existed: the Butai karakuri, used in theatre; the Zashiki karakuri, which were small and used in homes; and the Dashi karakuri, used in religious festivals, where the puppets performed reenactments of traditional myths and legends.
In France, between 1738 and 1739, Jacques de Vaucanson exhibited several life-sized automatons: a flute player, a pipe player and a duck. The mechanical duck could flap its wings, crane its neck, and swallow food from the exhibitor’s hand, and it gave the illusion of digesting its food by excreting matter stored in a hidden compartment.
_
Remote-controlled systems:
Remotely operated vehicles were demonstrated in the late 19th century in the form of several types of remotely controlled torpedoes. The early 1870s saw remotely controlled torpedoes by John Ericsson (pneumatic), John Louis Lay (electric wire guided), and Victor von Scheliha (electric wire guided).
The Brennan torpedo, invented by Louis Brennan in 1877, was powered by two contra-rotating propellers that were spun by rapidly pulling out wires from drums wound inside the torpedo. Differential speed on the wires connected to the shore station allowed the torpedo to be guided to its target, making it “the world’s first practical guided missile”. In 1897 the British inventor Ernest Wilson was granted a patent for a torpedo remotely controlled by “Hertzian” (radio) waves and in 1898 Nikola Tesla publicly demonstrated a wireless-controlled torpedo that he hoped to sell to the US Navy.
In 1903, the Spanish engineer Leonardo Torres y Quevedo demonstrated a radio control system called “Telekino”, which he wanted to use to control an airship of his own design. Unlike the previous systems, which carried out actions of the ‘on/off’ type, Torres’ device was able to memorize the signals received and execute the operations on its own, and could carry out up to 19 different orders.
Archibald Low became known as the “father of radio guidance systems” for his pioneering work on guided rockets and planes during the First World War. In 1917, he demonstrated a remote-controlled aircraft to the Royal Flying Corps and in the same year built the first wire-guided rocket.
_
Origin of the term robot and robotics:
The term robot derives from the Czech word robota, meaning forced work or compulsory service, or robotnik, meaning serf. It was first used by the Czech playwright Karel Čapek in his 1920 play R.U.R., which stood for Rossum’s Universal Robots (first performed in 1921). Rossum, a fictional Englishman, used biological methods to invent and mass-produce “men” to serve humans. Eventually they rebelled, became the dominant race, and wiped out humanity. The play soon became well known in English-speaking countries. According to Karel Čapek, the word was coined by his brother Josef Čapek from the Czech robota. The word robota literally means “corvée” or “serf labor”, and figuratively “drudgery” or “hard work” in Czech; more generally it means “work” or “labor” in many Slavic languages (e.g. Bulgarian, Russian, Serbian, Slovak, Polish, Macedonian, Ukrainian, and archaic Czech, as well as robot in Hungarian). Traditionally the robota (Hungarian robot) was the work period a serf owed his lord under corvée, typically six months of the year.
The word “robotics” also comes from science fiction – it first appeared in the short story “Runaround” (1942) by Isaac Asimov. This story was later included in Asimov’s famous book “I, Robot.” The robot stories of Isaac Asimov also introduced the idea of a “positronic brain” (used by the character “Data” in Star Trek) and the “three laws of robotics.”
_
Asimov was not the first to conceive of well-engineered, non-threatening robots, but he pursued the theme with such enormous imagination and persistence that most of the ideas that have emerged in this branch of science fiction are identifiable with his stories. To cope with the potential for robots to harm people, Asimov, in 1940, in conjunction with science fiction author and editor John W. Campbell, formulated the Laws of Robotics. He subjected all of his fictional robots to these laws by having them incorporated within the architecture of their (fictional) “platinum-iridium positronic brains”. The laws (see below) first appeared publicly in his fourth robot short story, “Runaround”.
The 1940 Laws of Robotics:
First Law:
A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Second Law:
A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
Third Law:
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
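The key property of the laws is their strict precedence: a lower-numbered law always overrides a higher-numbered one. That ordering can be sketched as a toy rule check, purely as an illustration of priority ordering (the function and its inputs are invented for this example, not a real safety system):

```python
def evaluate_action(harms_human, ordered_by_human, endangers_self):
    """Judge a proposed action under the Three Laws,
    checking the laws in strict priority order."""
    # First Law: never harm a human (or allow harm through inaction).
    if harms_human:
        return "forbidden by First Law"
    # Second Law: obey human orders, unless they conflict with the First.
    if ordered_by_human:
        return "required by Second Law"
    # Third Law: preserve itself, unless that conflicts with the above.
    if endangers_self:
        return "avoided under Third Law"
    return "permitted"
```

Note how an order that harms a human is rejected before the Second Law is ever consulted; much of Asimov’s fiction explores edge cases where this neat ordering breaks down.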
The laws quickly attracted – and have since retained – the attention of readers and other science fiction writers.
Only two years later, another established writer, Lester Del Rey, referred to “the mandatory form that would force built-in unquestioning obedience from the robot”. As Asimov later wrote (with his characteristic clarity and lack of modesty), “Many writers of robot stories, without actually quoting the three laws, take them for granted, and expect the readers to do the same”. Asimov’s fiction even influenced the origins of robotic engineering. “Engelberger, who built the first industrial robot, called Unimate, in 1958, attributes his long-standing fascination with robots to his reading of [Asimov’s] ‘I, Robot’ when he was a teenager”, and Engelberger later invited Asimov to write the foreword to his robotics manual. The laws are simple and straightforward, and they embrace “the essential guiding principles of a good many of the world’s ethical systems”. They also appear to ensure the continued dominion of humans over robots, and to preclude the use of robots for evil purposes. In practice, however – meaning in Asimov’s numerous and highly imaginative stories – a variety of difficulties arise.
_
Early robots:
In 1928, one of the first humanoid robots, Eric, was exhibited at the annual exhibition of the Model Engineers Society in London, where it delivered a speech. Invented by W. H. Richards, the robot’s frame consisted of an aluminium body of armour with eleven electromagnets and one motor powered by a twelve-volt power source. The robot could move its hands and head and could be operated by remote control or voice control. Both Eric and his “brother” George toured the world. Also in 1928, Japan’s first robot, Gakutensoku, was designed and constructed by biologist Makoto Nishimura. Westinghouse Electric Corporation had built Televox in 1926: a cardboard cutout connected to various devices which users could turn on and off. The humanoid robot Elektro debuted at the 1939 New York World’s Fair. Seven feet tall (2.1 m) and weighing 265 pounds (120 kg), it could walk by voice command, speak about 700 words (using a 78-rpm record player), smoke cigarettes, blow up balloons, and move its head and arms. The body consisted of a steel gear, cam and motor skeleton covered by an aluminum skin.
_
Modern autonomous robots:
The first electronic autonomous robots with complex behaviour were created by William Grey Walter of the Burden Neurological Institute at Bristol, England in 1948 and 1949. He wanted to prove that rich connections between a small number of brain cells could give rise to very complex behaviors – essentially that the secret of how the brain worked lay in how it was wired up. His first robots, named Elmer and Elsie, were constructed between 1948 and 1949 and were often described as tortoises due to their shape and slow rate of movement. The three-wheeled tortoise robots were capable of phototaxis, by which they could find their way to a recharging station when they ran low on battery power. Walter stressed the importance of using purely analogue electronics to simulate brain processes at a time when his contemporaries such as Alan Turing and John von Neumann were all turning towards a view of mental processes in terms of digital computation. His work inspired subsequent generations of robotics researchers such as Rodney Brooks, Hans Moravec and Mark Tilden. Modern incarnations of Walter’s turtles may be found in the form of BEAM robotics. BEAM robotics (from biology, electronics, aesthetics and mechanics) is a style of robotics that primarily uses simple analogue circuits, such as comparators, instead of a microprocessor in order to produce an unusually simple design.
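Walter’s tortoises produced lifelike steering from only a handful of components. In software, the same phototaxis behaviour can be sketched by cross-coupling two light sensors to the opposite wheels, a Braitenberg-style toy model (the function and its values are illustrative, not a reconstruction of Walter’s circuits):

```python
def phototaxis_step(left_light, right_light, base_speed=0.2):
    """Cross-couple two light readings (each in [0, 1]) to wheel
    speeds so the robot turns toward the brighter side."""
    # The RIGHT sensor drives the LEFT wheel and vice versa:
    # more light on one side speeds up the opposite wheel,
    # which swings the robot toward the light source.
    left_wheel = base_speed + right_light
    right_wheel = base_speed + left_light
    return left_wheel, right_wheel
```

No map, plan, or digital program is involved; the “intelligence” lies entirely in how the sensors are wired to the motors, which is exactly the point Walter (and later the BEAM community) wanted to make.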
_
Unimation, the company that developed the Unimate:
In 1956, George Devol and Joe Engelberger established a company called Unimation, a shortened form of “Universal Automation.” Engelberger, a physicist working on the design of control systems for nuclear power plants and jet engines, met the inventor Devol by chance at a cocktail party. Devol had recently received a patent titled “Programmed Article Transfer.” Inspired by the short stories and novels of Isaac Asimov, Devol and Engelberger brainstormed the first industrial robot arm, based upon Devol’s patent: the Unimate. After almost two years in development, they produced a prototype, the Unimate #001, the first digitally operated and programmable robot. “Programmed Article Transfer” became the seminal industrial robot patent, ultimately sub-licensed around the world, and Devol’s digitally operated programmable arm laid the foundations of the modern robotics industry. Devol sold the first Unimate to General Motors in 1960; it was installed in 1961 in a plant in Trenton, New Jersey, to lift hot pieces of metal from a die-casting machine and stack them. The first industrial robot in Europe, a Unimate, was installed at Metallverken in Upplands Väsby, Sweden, in 1967.
_
The first palletizing robot was introduced in 1963 by the Fuji Yusoki Kogyo Company. In 1973, a robot with six electromechanically driven axes was patented by KUKA in Germany, and the programmable universal manipulation arm was invented by Victor Scheinman in 1976; the design was sold to Unimation. From 1980 on, the pace of new robotics development climbed rapidly. Takeo Kanade created the first robotic arm with motors installed directly in the joints in 1981; it was much faster and more accurate than its predecessors. Yaskawa America Inc. introduced the Motoman ERC control system in 1988, which could control up to 12 axes, the highest number possible at the time. FANUC Robotics created the first prototype of an intelligent robot in 1992. Two years later, in 1994, the Motoman ERC system was upgraded to support up to 21 axes. The controller increased this to 27 axes in 1998 and added the ability to synchronize up to four robots. The first collaborative robot (cobot) was installed at Linatex in 2008. This Danish supplier of plastics and rubber decided to place the robot on the floor, rather than locking it behind a safety fence, and instead of hiring a programmer was able to program the robot through a touchscreen tool. It was clear from that point on that this was the way of the future.
_
In April 2001, the Canadarm2 was launched into orbit and attached to the International Space Station. The Canadarm2 is a larger, more capable version of the arm used by the Space Shuttle, and is hailed as “smarter”. Also in April, the Unmanned Aerial Vehicle Global Hawk made the first autonomous non-stop flight over the Pacific Ocean, from Edwards Air Force Base in California to RAAF Base Edinburgh in South Australia. The flight was made in 22 hours.
The popular Roomba, a robotic vacuum cleaner, was first released in 2002 by the company iRobot.
In 2005, Cornell University revealed a robotic system of block-modules capable of attaching and detaching, described as the first robot capable of self-replication, since it could assemble copies of itself when placed near more of the blocks that composed it. Launched in 2003, the Mars rovers Spirit and Opportunity landed on the surface of Mars on 3 and 24 January 2004. Both robots drove many times the distance originally expected, and Opportunity was still operating in mid-2018, when communications were lost in a major dust storm.
Self-driving cars had made their appearance by around 2005, but there was room for improvement. None of the 15 vehicles competing in the 2004 DARPA Grand Challenge completed the course; in fact, no robot successfully navigated more than 5% of the 150-mile (240 km) off-road course, leaving the $1 million prize unclaimed. In 2005, Honda revealed a new version of its ASIMO robot, updated with new behaviors and capabilities. In 2006, Cornell University revealed its “Starfish” robot, a four-legged robot capable of self-modeling and of learning to walk after having been damaged. In 2007, TOMY launched the entertainment robot i-sobot, a humanoid bipedal robot that can walk like a human and perform kicks, punches and some entertaining tricks and special actions under “Special Action Mode”.
_
Robonaut 2, the latest generation of the astronaut helpers, was launched to the space station aboard Space Shuttle Discovery on the STS-133 mission in 2011. It is the first humanoid robot in space, and although its primary job for now is teaching engineers how dexterous robots behave in space, the hope is that through upgrades and advancements it could one day venture outside the station to help spacewalkers make repairs or additions to the station, or perform scientific work.
On 25 October 2017, at the Future Investment Summit in Riyadh, a robot called Sophia, referred to with female pronouns, was granted Saudi Arabian citizenship, becoming the first robot ever to have a nationality. This attracted controversy: it is not obvious whether this implies that Sophia can vote or marry, or whether a deliberate system shutdown could be considered murder; it was also contentious given how few rights are granted to Saudi women.
Commercial and industrial robots are now in widespread use performing jobs more cheaply or with greater accuracy and reliability than humans. They are also employed for tasks that are too dirty, dangerous or dull to be suitable for humans. Robots are widely used in manufacturing, assembly and packing, transport, Earth and space exploration, surgery, weaponry, laboratory research, and mass production of consumer and industrial goods.
In 2019, engineers at the University of Pennsylvania created millions of nanorobots in just a few weeks using technology borrowed from semiconductors. These microscopic robots, small enough to be hypodermically injected into the human body and controlled wirelessly, could one day deliver medications and perform surgeries, revolutionizing medicine and health.
______
______
Section-3
Introduction to robots and robotics:
A robot is the product of the robotics field, where programmable machines are built that can assist humans or mimic human actions. Robots were originally built to handle monotonous tasks (like building cars on an assembly line), but have since expanded well beyond their initial uses to perform tasks like fighting fires, cleaning homes and assisting with incredibly intricate surgeries. Each robot has a differing level of autonomy, ranging from machines under full human control to fully autonomous robots that perform tasks without any external influence.
As technology progresses, so too does the scope of what is considered robotics. In 2005, 90% of all robots could be found assembling cars in automotive factories. These robots consist mainly of mechanical arms tasked with welding or screwing on certain parts of a car. Today, we’re seeing an evolved and expanded definition of robotics that includes the development, creation and use of robots that explore Earth’s harshest conditions, robots that assist law-enforcement and even robots that assist in almost every facet of healthcare.
We’re bound to see the promise of the robotics industry sooner rather than later, as artificial intelligence and software continue to progress. In the near future, thanks to advances in these technologies, robots will continue getting smarter, more flexible and more energy efficient. They’ll also continue to be a main focal point in smart factories, where they’ll take on more difficult challenges and help to secure global supply chains.
Though relatively young, the robotics industry is filled with an admirable promise of progress that science fiction could once only dream about. From the deepest depths of our oceans to thousands of miles in outer space, robots will be found performing tasks that humans couldn’t dream of achieving alone.
_
When we think of robots, we tend to imagine large, powerful machines hammering out metal parts or mounting car doors. Industrial robots have indeed been used for over 50 years in manufacturing for these and other tasks aimed at improving productivity. However, the last ten years have seen a dramatic shift in the capabilities and uses of robots. Today, many robots are out of their cages, moving around factories, warehouses, homes, and public spaces.
For decades, industrial robots have made manufacturing safer for workers, carrying out tasks that are dangerous, dirty – or plain dull. The range of tasks robots perform in factories has expanded greatly over the past 20 years. Robot grippers are now far more dexterous, and so can handle a greater range of materials. Robots are smaller and lighter, meaning they can be used in factories that are short on space and in which robots and humans need to work alongside one another – from pharmaceutical research to food processing, as well as in later stages of traditional manufacturing such as product assembly. The advent of robots that can sense and respond to their environment – and in many cases move around within it – has taken robots outside of industrial settings and into public life. Whether in direct contact with people, or behind the scenes, performing tasks we rely on but never think about, robots are making our daily lives healthier, safer, and more convenient. Robots also have an increasing role to play in making our planet sustainable for a rapidly rising global population.
__
Robots are widely used in such industries as automobile manufacture to perform simple repetitive tasks, and in industries where work must be performed in environments hazardous to humans. Many aspects of robotics involve artificial intelligence; robots may be equipped with the equivalent of human senses such as vision, touch, and the ability to sense temperature. Some are even capable of simple decision making, and current robotics research is geared toward devising robots with a degree of self-sufficiency that will permit mobility and decision-making in an unstructured environment. Today’s industrial robots do not resemble human beings; a robot in human form is called an android. Robotics is a field where science, technology, and engineering meet to make the machines we know as robots. It helps to have a good understanding of science to work in robotics and a fair amount of creativity to solve problems that have never been seen before. Robotics is a popular field, and one of the fastest-growing industries out there, and there have been many advancements in the last few years. Machine learning, artificial intelligence, and many more advances in technology have made it easier than ever before to get into the field. Robots are also commonly used in many industries today; businesses love robots because they help them make more products more efficiently.
_
During the 1980s, the scope of information technology applications and their impact on people increased dramatically. Control systems for chemical processes and air conditioning are examples of systems that already act directly and powerfully on their environments. And consider computer-integrated manufacturing, just-in-time logistics, and automated warehousing systems. Even data processing systems have become integrated into organizations’ operations and constrain the ability of operations-level staff to query a machine’s decisions and conclusions. In short, many modern computer systems are arguably robotic in nature already; their impacts are visible.
_
When people hear the word “robot,” they often think about two-legged, two-handed machines that walk and talk like people. But that’s not always the case. Many robots are just disembodied hands that sit at the ends of conveyor belts, picking up things and moving them. While most robots aren’t human-like, this is quickly changing as technology moves forward and humanoid robots become more popular.
Robots can:
____
On the most basic level, human beings are made up of five major components:
-A body structure
-A muscle system to move the body structure
-A sensory system that receives information about the body and the surrounding environment
-A power source to activate the muscles and sensors
-A brain system that processes sensory information and tells the muscles what to do
Of course, we also have some intangible attributes, such as intelligence and morality, but on the sheer physical level, the list above just about covers it.
A robot is made up of the very same components. A typical robot has a movable physical structure, a motor of some sort, a sensor system, a power supply and a computer “brain” that controls all of these elements. Essentially, robots are human-made versions of animal life — they are machines that replicate human and animal behavior. Many early robots were big machines, with significant brawn and little else. Old hydraulically powered robots were relegated to tasks in the 3-D category – dull, dirty and dangerous. The technological advances since the first industrial implementations have completely revised the capability, performance and strategic benefits of robots. For example, by the 1980s robots had transitioned from hydraulically powered to electrically driven units, and accuracy and performance improved.
Although the appearance and capabilities of robots vary vastly, all robots share the features of a mechanical, movable structure under some form of autonomous control. The structure of a robot is usually mostly mechanical and can be called a kinematic chain (its functionality being similar to the skeleton of the human body). The chain is formed of links (its bones), actuators (its muscles) and joints which can allow one or more degrees of freedom. Most contemporary robots use open serial chains in which each link connects the one before to the one after it. These robots are called serial robots and often resemble the human arm. Some robots, such as the Stewart platform, use closed parallel kinematic chains. Other structures, such as those that mimic the mechanical structure of humans, various animals and insects, are comparatively rare. However, the development and use of such structures in robots is an active area of research (e.g. biomechanics). Robots used as manipulators have an end effector mounted on the last link. This end effector can be anything from a welding device to a mechanical hand used to manipulate the environment.
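The kinematic-chain description above can be made concrete with a few lines of code. The sketch below is a minimal illustration, not any particular robot's controller; the function name and link lengths are assumptions. It computes where the end effector of a planar serial chain ends up, treating each link as a bone and each revolute joint as an angle added along the chain:

```python
import math

def forward_kinematics(link_lengths, joint_angles):
    """End-effector position of a planar serial kinematic chain.

    Each revolute joint adds its angle (in radians) to the running
    direction; each link then extends the chain along that direction.
    """
    x, y, heading = 0.0, 0.0, 0.0
    for length, angle in zip(link_lengths, joint_angles):
        heading += angle
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y

# A two-link "arm" with 1 m links, both joints bent 90 degrees:
x, y = forward_kinematics([1.0, 1.0], [math.pi / 2, math.pi / 2])
```

With both joints at 90 degrees the arm doubles back on itself, so the "hand" ends up at roughly (-1, 1) relative to the base. Real serial robots do the same bookkeeping in three dimensions, usually with transformation matrices.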
_
Joseph Engelberger, a pioneer in industrial robotics, once remarked, “I don’t know how to define one, but I know one when I see one!” If you consider all the different machines people call robots, you can see that it’s nearly impossible to come up with a comprehensive definition. Everybody has a different idea of what constitutes a robot.
You’ve probably heard of several of these famous robots:
-R2-D2 and C-3PO: The intelligent, speaking robots with loads of personality in the “Star Wars” movies
-Sony’s AIBO: A robotic dog that learns through human interaction
-Honda’s ASIMO: A robot that can walk on two legs like a person
-Industrial robots: Automated machines that work on assembly lines
-Lieutenant Commander Data: The almost-human android from “Star Trek”
-BattleBots: The remote control fighters from the long-running TV show
-Bomb-defusing robots
-NASA’s Mars rovers
-HAL: The ship’s computer in Stanley Kubrick’s “2001: A Space Odyssey”
-Roomba: The vacuuming robot from iRobot
-The Robot in the television series “Lost in Space”
-MINDSTORMS: LEGO’s popular robotics kit
All of these things are considered robots, at least by some people. But you could say that most people define a robot as anything that they recognize as a robot. Most roboticists (people who build robots) use a more precise definition. They specify that robots have a reprogrammable brain (a computer) that moves a body. By this definition, robots are distinct from other movable machines such as tractor-trailer trucks because of their computer elements. Even considering sophisticated onboard electronics, the driver controls most elements directly by way of various mechanical devices. Robots are distinct from ordinary computers in their physical nature — normal computers don’t have physical bodies attached to them.
_
Robots can be used in many situations for many purposes, but today many are used in dangerous environments (including inspection of radioactive materials, bomb detection and deactivation), manufacturing processes, or where humans cannot survive (e.g., in space, underwater, in high heat, and clean up and containment of hazardous materials and radiation). Robots can take any form, but some are made to resemble humans in appearance. This is claimed to help in the acceptance of robots in certain replicative behaviors which are usually performed by people. Such robots attempt to replicate walking, lifting, speech, cognition, or any other human activity. Many of today’s robots are inspired by nature, contributing to the field of bio-inspired robotics. Certain robots require user input to operate while other robots function autonomously. AI and machine learning allow autonomous robots to manipulate physical objects far more efficiently than humans, while continuously improving and optimizing their performance over time.
_
Most robots take one of several forms of mechanical design. A robot’s physical presence helps it accomplish the tasks in the world it is built for. For instance, the wheels of the Mars 2020 rover are independently remote-controlled and constructed of aluminum tubing, which helps it securely grip the red planet’s harsh landscape. In the army nowadays, robotics is an important element that is being developed and applied every day. Notable success has already been achieved with unmanned aircraft such as drones, which are capable of taking surveillance images, and even of firing missiles accurately at ground targets, without a pilot. There are several benefits to robotic technology in combat. Machines never get sick. They don’t turn a blind eye. They don’t hide underneath trees when rain falls. They don’t chat with their friends. And machines do not know fear.
Can a robot be afraid?
No. A robot does not experience emotions the way a human does. However, a programmer could give a robot human-like emotional responses as pre-programmed conditions. For example, a robot with heat sensors could exhibit “fear” whenever its temperature reading exceeded 100 degrees.
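Such pre-programmed “fear” amounts to nothing more than a threshold rule. A minimal sketch (the threshold value and response names are illustrative assumptions, not any real robot's firmware):

```python
# "Fear" here is just a pre-programmed rule that fires when a sensor
# reading crosses a threshold -- there is no experience behind it.
FEAR_THRESHOLD = 100.0  # degrees, as in the example above

def react_to_temperature(reading):
    """Return the robot's pre-programmed 'emotional' response."""
    if reading > FEAR_THRESHOLD:
        return "fear"   # e.g. back away, sound an alarm
    return "calm"       # carry on with the current task

print(react_to_temperature(120.0))  # prints "fear"
print(react_to_temperature(80.0))   # prints "calm"
```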
_
Robots in world:
Roughly half of all the robots in the world are in Asia, 32% in Europe, 16% in North America, 1% in Australasia and 1% in Africa. 40% of all the robots in the world are in Japan, making Japan the country with the most robots. A couple of decades ago, 90% of robots were used in car manufacturing, typically on assembly lines doing a variety of repetitive tasks. Today only 50% are in automobile plants, with the other half spread out among other factories, laboratories, warehouses, energy plants, hospitals, and many other industries. Robots are used for assembling products, handling dangerous materials, spray-painting, cutting and polishing, and inspecting products. The number of robots used in tasks as diverse as cleaning sewers, detecting bombs and performing intricate surgery is increasing steadily, and will continue to grow in coming years.
_
Robot intelligence:
Even with primitive intelligence, robots have demonstrated ability to generate good gains in factory productivity, efficiency and quality. Beyond that, some of the “smartest” robots are not in manufacturing; they are used as space explorers, remotely operated surgeons and even pets – like Sony’s AIBO mechanical dog. In some ways, some of these other applications show what might be possible on production floors if manufacturers realize that industrial robots don’t have to be bolted to the floor, or constrained by the limitations of yesterday’s machinery concepts.
With the rapidly increasing power of the microprocessor and artificial intelligence techniques, robots have dramatically increased their potential as flexible automation tools. The new surge of robotics is in applications demanding advanced intelligence. Robotic technology is converging with a wide variety of complementary technologies – machine vision, force sensing (touch), speech recognition and advanced mechanics. This results in exciting new levels of functionality for jobs that were never before considered practical for robots.
The introduction of robots with integrated vision and touch dramatically changes the speed and efficiency of new production and delivery systems. Robots have become so accurate that they can be applied where manual operations are no longer a viable option. Semiconductor manufacturing is one example, where a consistent high level of throughput and quality cannot be achieved with humans and simple mechanization. In addition, significant gains are achieved through enabling rapid product changeover and evolution that can’t be matched with conventional hard tooling.
Robotic Assistance:
A key robotics growth arena is Intelligent Assist Devices (IADs) – operators manipulate a robot as though it were a bionic extension of their own limbs, with increased reach and strength. This is robotics technology – not a replacement for humans or robots, but rather a new class of ergonomic assist products that helps human partners in a wide variety of ways, including power assist, motion guidance, line tracking and process automation. IADs use robotics technology to help production people handle parts and payloads – more, heavier, better, faster, with less strain. Using a human-machine interface, the operator and IAD work in tandem to optimize lifting, guiding and positioning movements. Sensors, computer power and control algorithms translate the operator’s hand movements into superhuman lifting power.
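At its core, power assist is force amplification with a safety limit. A minimal sketch of that idea (the gain and cap below are invented illustrative values, not figures from any real IAD product):

```python
def assist_force(operator_force, gain=10.0, max_output=2000.0):
    """Scale the operator's sensed hand force (N) into lifting force.

    The output is capped in both directions so a hard shove cannot
    command an unsafe motion -- the kind of limit a real IAD controller
    would enforce, though the numbers here are made up.
    """
    output = operator_force * gain
    return max(-max_output, min(max_output, output))

print(assist_force(50.0))   # a 50 N pull becomes 500 N of lift
print(assist_force(500.0))  # capped at the 2000 N safety limit
```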
______
______
Robotics:
Robotics is an interdisciplinary branch of computer science and engineering. Robotics involves the design, construction, operation, and use of robots. The goal of robotics is to design machines that can help and assist humans. Robotics requires a working knowledge of electronics, mechanics and software, and is usually accompanied by a large working knowledge of many subjects. Robotics integrates fields of mechanical engineering, electrical engineering, information engineering, mechatronics, electronics, bioengineering, computer engineering, control engineering, software engineering, mathematics, etc. A person working in the field is a roboticist. Robotics develops machines that can substitute for humans and replicate human actions. A robot is a unit that implements this interaction with the physical world using sensors, actuators, and information processing. A key application area for robots is industry, or more precisely Industry 4.0, where industrial robots are used.
_
Robotics is the intersection of science, engineering and technology that produces machines, called robots, that substitute for (or replicate) human actions. These technologies deal with automated machines that can take the place of humans in dangerous environments or manufacturing processes, or resemble humans in appearance, behavior, or cognition. The field of robotics has greatly advanced with several new general technological achievements. One is the rise of big data, which offers more opportunity to build programming capability into robotic systems. Another is the use of new kinds of sensors and connected devices to monitor environmental aspects like temperature, air pressure, light, motion and more. All of this serves robotics and the generation of more complex and sophisticated robots for many uses, including manufacturing, health and safety, and human assistance. The field of robotics also intersects with issues around artificial intelligence. Since robots are physically discrete units, they are perceived to have their own intelligence, albeit one limited by their programming and capabilities.
_
From the viewpoint of biology, human beings can be distinguished from other species by three distinctive capabilities: bipedal walking, the use of tools, and the invention and use of language. The first two capabilities have been taken up by robotics researchers as challenges in locomotion and manipulation, the main issues in robotics. The third capability has traditionally been considered the province of linguistics rather than robotics, and seems far removed from it. However, recent research built on the idea of “embodiment” in behavior-based robotics, proposed by Rod Brooks at the MIT AI laboratory in the late 1980s, has raised more conceptual issues such as body scheme, body image, self, consciousness, theory of mind, communication, and the emergence of language. Although these issues have been attacked in existing disciplines such as brain science, neuroscience, cognitive science, and developmental psychology, robotics may cast a new light on them by constructing artifacts similar to us. Many of today’s robots are inspired by nature, contributing to the field of bio-inspired robotics. These robots have also created a newer branch of robotics: soft robotics. Thus, robotics covers a broad range of disciplines, and it is therefore very difficult to find comprehensive textbooks covering the whole area.
_
Machine learning in robotics:
Machine learning and robotics intersect in a field known as robot learning. Robot learning is the study of techniques that enable a robot to acquire new knowledge or skills through machine learning algorithms. Some applications that have been explored by robot learning include grasping objects, object categorization and even linguistic interaction with a human peer. Learning can happen through self-exploration or via guidance from a human operator. To learn, intelligent robots must accumulate facts through human input or sensors. Then, the robot’s processing unit will compare the newly acquired data to previously stored information and predict the best course of action based on the data it has acquired. However, it’s important to understand that a robot can only solve problems that it is built to solve. It does not have general analytical abilities. Robotic process automation automates repetitive and rules-based tasks that rely on digital information.
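The compare-and-predict step described above can be sketched as a nearest-neighbor lookup over the robot's accumulated experience. This is a toy illustration of the idea, not any particular robot-learning library; the observations and action labels are invented:

```python
def predict_action(memory, observation):
    """Pick the action whose stored observation best matches the new one.

    `memory` is a list of (observation_vector, action) pairs the robot has
    accumulated; `observation` is the newly sensed vector. The prediction
    is simply the action attached to the closest stored case.
    """
    def distance(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    _, action = min(memory, key=lambda pair: distance(pair[0], observation))
    return action

# Two stored experiences: one led to grasping, one to avoiding.
memory = [((0.9, 0.1), "grasp"), ((0.1, 0.9), "avoid")]
print(predict_action(memory, (0.8, 0.2)))  # prints "grasp"
```

Note that, exactly as the paragraph above says, such a robot can only solve problems its stored experience covers; nothing here generalizes beyond the data it has.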
_
There are two emerging technologies that will have a dramatic impact on future robots — their form, shape, function and performance — and change the way we think about robotics.
First, advances in microelectromechanical systems (MEMS) will enable inexpensive and distributed sensing and actuation. Just as nature provides complex redundant pathways in critical processes (e.g., synthesis of biomolecules and cell-cycle control) to combat the inherent noisiness of the underlying processes and the uncertainty in the environment, we will be able to design and build robots that can potentially deal with uncertainty and adapt to unstructured environments.
Second, advances in biomaterials and biotechnology will enable new materials that will allow us to build softer and friendlier robots, robots that can be implanted in humans, and robots that can be used to manipulate, control, and repair biomolecules, cells, and tissue.
Realistically, we are far from realizing this tremendous potential. While some of the obstacles are technological in flavor (for example, lack of high strength to weight ratio actuators, or lack of inexpensive three-dimensional vision systems), there are several obstacles in robotics that stem from our lack of understanding of the basic underlying problems and the lack of well-developed mathematical tools to model and solve these problems.
_
Robotics Fields:
At present, the number of robotics fields is nearly impossible to count: robot technology is being applied in so many domains that nobody can say exactly how many there are or what they all cover. Such exponential growth cannot be fully tracked, so we will discuss the most evident fields of application:
_____
_____
Robot:
A robot is a machine—especially one programmable by a computer—capable of carrying out a complex series of actions automatically. A robot can be guided by an external control device, or the control may be embedded within. Robots may be constructed to evoke human form, but most robots are task-performing machines, designed with an emphasis on stark functionality, rather than expressive aesthetics.
_
Essentially, there are three problems you need to solve if you want to build a robot: 1) sense things (detect objects in the world), 2) think about those things (in an “intelligent” way), and then 3) act on them (move or otherwise physically respond to the things it detects and thinks about). In psychology (the science of human behavior) and in robotics, these are called perception (sensing), cognition (thinking), and action (moving). Some robots have only one or two. For example, robot welding arms in factories are mostly about action (though they may have sensors), while robot vacuum cleaners are mostly about perception and action and have no cognition to speak of. There has been a long and lively debate over whether robots really need cognition, but most engineers would agree that a machine needs both perception and action to qualify as a robot.
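The perception-cognition-action split is often written as a sense-think-act loop. The sketch below is a toy assumption, not a real controller: the "world" is a single obstacle distance, and the "cognition" is one if-statement:

```python
def sense(world):
    """Perception: read the environment (here, one distance value)."""
    return world["obstacle_distance"]

def think(distance):
    """Cognition: decide on an action from the percept."""
    return "turn" if distance < 1.0 else "forward"

def act(world, action):
    """Action: change the robot's state in the world."""
    if action == "forward":
        world["obstacle_distance"] -= 0.5  # driving closes the gap
    return action

world = {"obstacle_distance": 2.0}
actions = [act(world, think(sense(world))) for _ in range(4)]
print(actions)  # ['forward', 'forward', 'forward', 'turn']
```

A welding arm would, in effect, run only `act`; a simple robot vacuum runs `sense` and `act` with almost nothing in between.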
_
Is robot software considered robotics?
The word robot can refer to both physical robots and virtual software agents, but the latter are usually referred to as bots. A software robot (bot) is a common type of computer program that carries out tasks autonomously, such as a chatbot or a web crawler. However, because software robots exist only on the internet and originate within a computer, they are not considered robots. To be considered a robot, a device must have a physical form, such as a body or a chassis.
A bot — short for robot and also called an internet bot — is a computer program that operates as an agent for a user or other program or to simulate a human activity. Bots are normally used to automate certain tasks, meaning they can run without specific instructions from humans. An organization or individual can use a bot to replace a repetitive task that a human would otherwise have to perform. Bots are also much faster at these tasks than humans. Although bots can carry out useful functions, they can also be malicious and come in the form of malware.
Why does website ask if I’m a robot?
Software bots are written to perform common tasks, such as automatically submitting forms to post advertisements on websites. To protect against these bots, a website presents a CAPTCHA asking whether you are a robot, to determine whether you are a human. This protection works by analyzing mouse movements and looking for other irregularities as the user checks the “I’m not a robot” check box.
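One crude heuristic of the kind described, assumed here purely for illustration (real CAPTCHA services use far more sophisticated signals), is to flag pointer paths that are perfectly straight, since naive scripts often move the cursor in an exact line while humans produce jittery, curved paths:

```python
def looks_automated(points, tolerance=1e-6):
    """Flag a perfectly straight pointer path as bot-like.

    `points` is a list of (x, y) cursor samples. Every intermediate
    point is checked against the line from the first sample to the
    last using the cross product, which measures deviation from it.
    """
    if len(points) < 3:
        return False  # too few samples to judge
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    for x, y in points[1:-1]:
        if abs((x - x0) * dy - (y - y0) * dx) > tolerance:
            return False  # the path wobbles: looks human
    return True

print(looks_automated([(0, 0), (5, 5), (10, 10)]))  # True: dead straight
print(looks_automated([(0, 0), (4, 7), (10, 10)]))  # False: jittery
```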
_
Is a robot a computer?
No. A robot is considered a machine and not a computer. The computer gives the machine its intelligence and its ability to perform tasks. A robot is a machine capable of manipulating or navigating its environment, and a computer is not. For example, a robot at a car assembly plant assists in building a car by grabbing parts and welding them onto a car frame. A computer helps track and control the assembly, but cannot make any physical changes to a car.
_
There is no consensus on which machines qualify as robots, but there is general agreement among experts, and the public, that robots tend to possess some or all of the following abilities and functions: accept electronic programming, process data or physical perceptions electronically, operate autonomously to some degree, move around, operate physical parts of themselves or physical processes, sense and manipulate their environment, and exhibit intelligent behavior, especially behavior which mimics humans or other animals. Closely related to the concept of a robot is the field of synthetic biology, which studies entities whose nature is more comparable to living beings than to machines.
_
Robots can be autonomous or semi-autonomous and range from humanoids such as Honda’s Advanced Step in Innovative Mobility (ASIMO) and TOSY’s TOSY Ping Pong Playing Robot (TOPIO) to industrial robots, medical operating robots, patient assist robots, dog therapy robots, collectively programmed swarm robots, UAV drones such as General Atomics MQ-1 Predator, and even microscopic nano robots. By mimicking a lifelike appearance or automating movements, a robot may convey a sense of intelligence or thought of its own. Autonomous things are expected to proliferate in the future, with home robotics and the autonomous car as some of the main drivers.
_
Undeterred by its somewhat chilling origins (or perhaps ignorant of them), technologists of the 1950s appropriated the term robot to refer to machines controlled by programs. A robot is “a reprogrammable multifunctional device designed to manipulate and/or transport material through variable programmed motions for the performance of a variety of tasks”. The term robotics, which Asimov claims he coined in 1942, refers to “a science or art involving both artificial intelligence (to reason) and mechanical engineering (to perform physical acts suggested by reason)”.
As currently defined, robots exhibit three key elements:
We can conceive of a robot, therefore, as either a computer-enhanced machine or as a computer with sophisticated input/output devices. Its computing capabilities enable it to use its motor devices to respond to external stimuli, which it detects with its sensory devices. The responses are more complex than would be possible using mechanical, electromechanical, and/or electronic components alone.
_
According to The Robot Institute of America (1979):
“Robot is a reprogrammable, multifunctional manipulator designed to move materials, parts, tools, or specialized devices through various programmed motions for the performance of a variety of tasks.”
According to the Webster dictionary:
“Robot is automatic device that performs functions normally ascribed to humans or a machine in the form of a human (Webster, 1993).”
There is no single agreed definition of a robot although all definitions include an outcome of a task that is completed without human intervention. Whilst some definitions require the task to be completed by a physical machine that moves and responds to its environment, other definitions use the term robot in connection with tasks completed by software, without physical embodiment.
The IFR supports the International Organization for Standardization (ISO) 8373:2021 definition of a robot:
Industrial Robot:
-Reprogrammable: whose programmed motions or auxiliary functions may be changed without physical alterations;
-Multipurpose: capable of being adapted to a different application with physical alterations;
-Physical alterations: alteration of the mechanical structure or control system except for changes of programming cassettes, ROMs, etc.
-Axis: direction used to specify the robot motion in a linear or rotary mode
Service Robot:
Note: The classification of a robot into industrial robot or service robot is done according to its intended application.
Mobile robot:
_
A robot is a reprogrammable, multifunctional machine designed to manipulate materials, parts, tools, or specialized devices through variable programmed motions for the performance of a variety of tasks. It can conveniently be used for a variety of industrial tasks. Today 90% of all robots in use are found in factories, where they are referred to as industrial robots. Robots are slowly finding their way into warehouses, laboratories, research and exploration sites, power plants, hospitals, undersea, and even outer space. A few of the advantages that make robots attractive for industrial use are as follows:
_
With the merging of computers, telecommunications networks, robotics, and distributed systems software, and the multiorganizational application of the hybrid technology, the distinction between computers and robots may become increasingly arbitrary. In some cases it would be more convenient to conceive of a principal intelligence with dispersed sensors and effectors, each with subsidiary intelligence (a robotics-enhanced computer system). In others, it would be more realistic to think in terms of multiple devices, each with appropriate sensory, processing, and motor capabilities, all subjected to some form of coordination (an integrated multi-robot system). The key difference robotics brings is the complexity and persistence that artifact behaviour achieves, independent of human involvement.
_
Many industrial robots resemble humans in some ways. In science fiction, the tendency has been even more pronounced, and readers encounter humanoid robots, humaniform robots, and androids. In fiction, as in life, it appears that a robot needs to exhibit only a few human-like characteristics to be treated as if it were human. For example, the relationships between humans and robots in many of Asimov’s stories seem almost intimate, and audiences worldwide reacted warmly to the “personality” of the computer HAL in 2001: A Space Odyssey, and to the gibbering rubbish-bin R2-D2 in the Star Wars series.
The tendency to conceive of robots in humankind’s own image may gradually yield to utilitarian considerations, since artifacts can be readily designed to transcend humans’ puny sensory and motor capabilities. Frequently the disadvantages and risks involved in incorporating sensory, processing, and motor apparatus within a single housing clearly outweigh the advantages. Many robots will therefore be anything but humanoid in form. They may increasingly comprise powerful processing capabilities and associated memories in a safe and stable location, communicating with one or more sensory and motor devices (supported by limited computing capabilities and memory) at or near the location(s) where the robot performs its functions.
_
Types of Robots:
Mechanical robots come in all shapes and sizes to efficiently carry out the task for which they are designed. All robots vary in design, functionality and degree of autonomy. From the tiny insect-inspired "RoboBee" to the 200 meter-long robotic shipping vessel "Vindskip," robots are emerging to carry out tasks that humans simply can't. Generally, there are five types of robots:
-1) Pre-Programmed Robots
Pre-programmed robots operate in a controlled environment where they do simple, monotonous tasks. An example of a pre-programmed robot would be a mechanical arm on an automotive assembly line. The arm serves one function — to weld a door on, to insert a certain part into the engine, etc. — and its job is to perform that task longer, faster and more efficiently than a human.
Pre-programmed robots are ones that have to be told ahead of time what to do, and then they simply execute that program. They cannot change their behavior while they are working, and no human is guiding their actions.
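The "execute a fixed program, no feedback" idea above can be sketched in a few lines of Python. This is an illustrative toy, not a real controller API: the action names, joint targets, and the welding task are all hypothetical stand-ins.

```python
# A pre-programmed robot simply replays a fixed command sequence.
# It never senses its surroundings and never deviates from the script.

WELD_SEQUENCE = [
    ("move_to", (120.0, 45.0, 10.0)),   # hypothetical joint angles, degrees
    ("close_gripper", ()),
    ("move_to", (90.0, 30.0, 5.0)),
    ("weld", (1.5,)),                   # weld for 1.5 seconds
    ("open_gripper", ()),
]

def run_program(sequence):
    """Execute each step in order; there is no branching and no feedback."""
    log = []
    for action, args in sequence:
        log.append(f"{action}{args}")   # stand-in for sending a motor command
    return log

log = run_program(WELD_SEQUENCE)
```

Note what is missing: there is no sensor input anywhere in the loop, which is exactly why such robots only work in tightly controlled environments.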
In industry:
There is probably no other industry that has been as thoroughly revolutionized by industrial robots as the automobile industry. There are pre-programmed robotic arms so large that they can handle entire automobiles as if they were toy cars. There are also pre-programmed robots that can make the tiniest weld or spray paint with aesthetic precision. Manufacturing has never been the same since robots took over the jobs that humans used to do.
In medicine:
Pre-programmed robots are particularly good for tasks that require a very high level of accuracy at a very fine level. For example, if we want to use radiation to kill tumors in cancer patients, we need to be able to deliver the radiation precisely to the tumor. If we miss, we may end up killing healthy tissue near the tumor instead of the tumor itself. The CyberKnife radiosurgery machine is a pre-programmed robot designed to deliver very precise doses of radiation to very precise locations within the human body.
Another common use of pre-programmed robots is as training simulators, such as the S575 Noelle. These robots are constructed to present medical students with various scenarios that they must diagnose and treat. The robot is pre-programmed with a possible scenario, and its responses to the students' treatment are also pre-programmed. So you could not, for example, try an experimental treatment, because the robot would not know how to respond. You must try one of the treatments that the robot is programmed to respond to.
-2) Humanoid Robots
Humanoid robots are robots that look like and/or mimic human behavior. These robots usually perform human-like activities (like running, jumping and carrying objects), and are sometimes designed to look like us, even having human faces and expressions. Two of the most prominent examples of humanoid robots are Hanson Robotics’ Sophia and Boston Dynamics’ Atlas.
-3) Autonomous Robots
Autonomous robots operate independently of human operators. These robots are usually designed to carry out tasks in open environments that do not require human supervision. They are quite unique because they use sensors to perceive the world around them, and then employ decision-making structures (usually a computer) to take the optimal next step based on their data and mission. Historic examples include space probes. Modern examples include self-driving cars. An example of an autonomous robot would be the Roomba vacuum cleaner, which uses sensors to roam freely throughout a home.
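The sense-decide-act cycle described above, which is the core of any autonomous robot, can be sketched as a simple reactive loop, in the spirit of a Roomba-style vacuum. Everything here is an illustrative assumption: the sensor is faked with a random draw, and the two behaviors are hypothetical.

```python
import random

def read_bump_sensor():
    """Stand-in for a real bump sensor; True means we hit something."""
    return random.random() < 0.2

def decide(bumped):
    # Simple reactive policy: back off and turn on a collision,
    # otherwise keep driving forward. No human is in the loop.
    return "reverse_and_turn" if bumped else "forward"

def control_loop(steps=20):
    """Run the sense -> decide -> act cycle for a fixed number of steps."""
    actions = []
    for _ in range(steps):
        bumped = read_bump_sensor()     # sense
        actions.append(decide(bumped))  # decide (the "act" is implied)
    return actions

actions = control_loop(20)
```

The important contrast with the pre-programmed robot is that the action taken at each step depends on what the sensor reports, not on a fixed script.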
Autonomous medical robots are able to operate intelligently and adapt to their environment without direct human supervision. In particular, they should be able to perform their duties in an environment that might be changing, and without a person sitting at a bank of controls directing their activities.
In military:
The U.S. Defense Department funds cutting-edge research through DARPA. Several times DARPA has offered a $2 million prize to the first team whose autonomous robotic car could navigate a cross-country course.
In home:
In the home consumer market, iRobot has already sold millions of robots to everyday consumers. The most popular is the Roomba, an autonomous vacuum cleaner.
In medicine:
Delivery services:
One of the things that nurses in a hospital spend a lot of time doing is giving medications to patients. This requires the nurses to keep track of which patient gets which medicines and what the dosages are. It also requires them to keep track of when the patients have received their medications. Finally, this information must be transmitted across multiple shifts of nurses without error. The TUG automated delivery system is able to deliver medications, both scheduled and on-demand, to patients and keep track of all the information about the delivery of these medications. It can also deliver meals and other items, such as extra pillows and blankets. Using a map of the hospital it has stored, it can navigate hallways and elevators to get to whatever room it is summoned or sent to.
Delicate treatments:
Another use of autonomous robots is in delivering treatments to delicate locations. For example, if we want a treatment to be delivered to the surface of the heart, this requires either open heart surgery, which is dangerous and traumatic for the patient, or the insertion of some sort of needle near the heart, which can also be dangerous. The HeartLander is an experimental robot under development at Carnegie Mellon University which can be inserted near the heart. It then finds its own way to the heart, adheres to the heart’s surface, and autonomously finds the location for delivering treatment.
-4) Teleoperated Robots
Teleoperated robots are semi-autonomous bots that use a wireless network to enable human control from a safe distance. These robots usually work in extreme geographical conditions, weather, circumstances, etc. Examples of teleoperated robots are the human-controlled submarines used to fix underwater pipe leaks during the BP oil spill or drones used to detect landmines on a battlefield.
Teleoperated robots are controlled remotely by a human being. The remote control signals can be sent through a wire, through a local wireless system (like Wi-Fi), over the Internet or by satellite.
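The control path just described, where operator commands arrive over a network link and are translated into motion, can be sketched as a small command dispatcher. This is a hedged toy: the command names and velocity values are invented, and the network transport is replaced by plain strings.

```python
# A teleoperated robot maps each received operator packet to a
# velocity setpoint: (linear m/s, angular rad/s). Values are illustrative.

COMMANDS = {
    "FWD":   (1.0, 0.0),
    "REV":   (-0.5, 0.0),
    "LEFT":  (0.0, 0.5),
    "RIGHT": (0.0, -0.5),
    "STOP":  (0.0, 0.0),
}

def handle(packet: str):
    """Translate one operator packet into a velocity setpoint.

    Unknown or corrupted packets stop the robot: a fail-safe default,
    since a teleoperated machine should never guess when the link is bad.
    """
    return COMMANDS.get(packet.strip().upper(), COMMANDS["STOP"])
```

The fail-safe default matters in practice: over Wi-Fi, the Internet, or satellite, packets get delayed or garbled, and the safe response to a bad command is to stop.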
In government:
The major funding source for teleoperated robots is the federal government — including both NASA and the military. In the military, there are remote-controlled airplanes (drones) that do surveillance and drop bombs, and there are remote-controlled mobile land robots that carry equipment, shoot weapons, and defuse bombs. NASA has developed robots for use on the moon and other planets.
In medicine:
Teleoperated robots are probably the most common type of medical robot in use today. These robots are typically controlled by a surgeon or doctor and allow them to perform various tasks and treatments that they would not normally be able to do.
-5) Augmenting Robots
Augmenting robots either enhance current human capabilities or replace capabilities a human has lost. Robotics for human augmentation is a field where science fiction could become reality very soon, with robots that could redefine what it means to be human by making us faster and stronger. Some examples of current augmenting robots are robotic prosthetic limbs and exoskeletons used to lift hefty weights.
Augmenting robots generally enhance capabilities that a person already has or replace capabilities that a person has lost.
In medicine (Prostheses):
The most common example of an augmenting medical device would be a prosthetic limb. Modern prosthetics can be complex electronic devices that learn to respond to neural signals sent by the patient. For example, some prosthetic arms can be connected to muscles in the chest so that the patient moving those muscles can cause the arm to move in characteristic ways. This is a process called targeted muscle reinnervation. There is also a lot of ongoing research on building prosthetic limbs that can be integrated with the brain in such a way that they will respond to the patient’s thoughts, much as a normal arm would. The DEKA Arm is one of the most advanced versions of this type of prosthetic.
In industry (Exoskeletons):
Finally, there are several examples of exoskeletons in development that can enhance a person's existing capabilities (as opposed to prosthetics, which give a person a missing capability). The aim of these systems is typically to augment human strength without sacrificing speed or agility. However, some of these exoskeletons could eventually be used to allow patients who are paralyzed, or suffering from a neuromuscular disease such as multiple sclerosis or Lou Gehrig's disease, to recover a full range of motor abilities.
_
Independent vs. dependent robots:
Independent robots are capable of functioning completely autonomously, independent of human operator control. They typically require more sophisticated programming, but they allow robots to take the place of humans in dangerous, mundane or otherwise impossible tasks, from bomb disposal and deep-sea travel to factory automation. Independent robots have proven to be the most disruptive to society, eliminating low-wage jobs but presenting new possibilities for growth.
Dependent robots are non-autonomous robots that interact with humans to enhance and supplement their existing actions. This is a relatively new form of technology that is constantly being expanded into new applications, but one form of dependent robot that has been realized is the advanced prosthetic controlled by the human mind. A famous example of a dependent robot was created by Johns Hopkins APL in 2018 for Johnny Matheny, a patient whose arm was amputated above the elbow. Matheny was fitted with a Modular Prosthetic Limb (MPL) so researchers could study its use over a sustained period. The MPL is controlled via electromyography: signals from his residual limb control the prosthesis. Over time, Matheny became more efficient at controlling the MPL, and the signals from his residual limb became smaller and less variable, leading to more accurate movements and allowing him to perform tasks as delicate as playing the piano.
_
Nonadaptive robot vs. adaptive robot:
A nonadaptive robot does not receive feedback from the environment and will execute its task as programmed. Adaptive robots use input and output data to generate environment data. With this data, the robot can react to environmental changes and stop its task if necessary. For adaptive robots, it is important to define the environment data to which the robot is reacting. The data might be predefined parameters, like material amounts or sizes or shapes for quality definitions. Or it might be uncontrolled parameters, like having people move around the robot or malfunctions that when detected put the robot in a safe state. An adaptive robot requires a sensing module. An area safety scanner or safety skin is placed either at the base of the robot or attached somewhere on top of the robot. It supervises the surrounding area of the manipulator and prevents humans or other machines from getting too close to the robot; if they do, the robot stops or slows down.
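The "slow down, then stop" behavior of an adaptive robot's safety scanner can be expressed as a small speed-scaling function. This is a minimal sketch under assumed numbers: the zone sizes and the linear ramp are illustrative choices, not values from any real safety standard.

```python
def safety_speed(distance_m, stop_zone=0.5, slow_zone=1.5, full_speed=1.0):
    """Scale the robot's speed by the nearest-obstacle distance reported
    by an area scanner or safety skin.

    - Inside stop_zone: halt completely (the "safe state").
    - Inside slow_zone: ramp speed down linearly as the obstacle approaches.
    - Beyond slow_zone: run at full speed.
    """
    if distance_m <= stop_zone:
        return 0.0
    if distance_m <= slow_zone:
        return full_speed * (distance_m - stop_zone) / (slow_zone - stop_zone)
    return full_speed
```

A nonadaptive robot, by contrast, has no equivalent of this function at all: its motion profile never consults a distance reading.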
_
The figure below illustrates the relationship between environmental variability and human input responsibility for various types of robotic systems. Until significant additional progress is made in autonomous systems, human involvement with robotic systems must increase with environmental variability: where variability is low, autonomous robots are efficient and human involvement stays at the level of strategic decision making; where variability is high, human sensing and decision making are more important and the human user must take more responsibility.
_
Nowadays, there are many practical applications for robots across a wide range of fields, some of which are described below.
_
Robotic aspects:
There are many types of robots; they are used in many different environments and for many different uses. Although being very diverse in application and form, they all share three basic similarities when it comes to their construction:
-1. Robots all have some kind of mechanical construction, a frame, form or shape designed to achieve a particular task. For example, a robot designed to travel across heavy dirt or mud, might use caterpillar tracks. The mechanical aspect is mostly the creator’s solution to completing the assigned task and dealing with the physics of the environment around it. Form follows function.
-2. Robots have electrical components that power and control the machinery. For example, the robot with caterpillar tracks would need some kind of power to move the tracks. That power comes in the form of electricity, which travels through a wire and originates from a battery, a basic electrical circuit. Even petrol-powered machines that get their power mainly from petrol still require an electric current to start the combustion process, which is why most petrol-powered machines, like cars, have batteries. The electrical aspect of robots is used for movement (through motors), sensing (where electrical signals are used to measure things like heat, sound, position, and energy status) and operation (robots need some level of electrical energy supplied to their motors and sensors in order to activate and perform basic operations).
-3. All robots contain some level of computer programming code. A program is how a robot decides when or how to do something. In the caterpillar track example, a robot that needs to move across a muddy road may have the correct mechanical construction and receive the correct amount of power from its battery, but it would not go anywhere without a program telling it to move. Programs are the core essence of a robot: it could have excellent mechanical and electrical construction, but if its program is poorly constructed, its performance will be very poor (or it may not perform at all).
There are three different types of robotic programs: remote control, artificial intelligence and hybrid. A robot with remote control programming has a preexisting set of commands that it will only perform if and when it receives a signal from a control source, typically a human being with a remote control. It is perhaps more appropriate to view devices controlled primarily by human commands as falling within the discipline of automation rather than robotics. Robots that use artificial intelligence interact with their environment on their own without a control source, and can determine reactions to objects and problems they encounter using their preexisting programming. Hybrid programs incorporate both AI and RC functions.
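The hybrid style, where remote-control commands and onboard AI share authority, can be sketched as a single decision step. This is an assumption-laden toy: the priority ordering (autonomy overrides the operator on safety events) and all behavior names are invented for illustration.

```python
def hybrid_step(operator_cmd, sensor_obstacle):
    """One control step of a hypothetical hybrid (RC + AI) robot.

    - If onboard sensing detects an obstacle, autonomy overrides everything.
    - Otherwise, if the operator sent a command, obey it (remote control).
    - With no input at all, fall back to a default autonomous behavior.
    """
    if sensor_obstacle:
        return "avoid_obstacle"    # AI override: safety first
    if operator_cmd is not None:
        return operator_cmd        # RC: follow the human
    return "patrol"                # AI default when the operator is idle
```

A pure RC program would be only the middle branch; a pure AI program would drop the operator branch entirely. The hybrid keeps both, with an explicit priority rule.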
__
The characteristics of the robots:
There are some characteristics of robots given below:
_
Three types of robotic systems:
There are three main types of automation systems to consider when adding robots to your production line: the manipulation robotic system, the mobile robotic system, and the data acquisition and control robotic system. Robotic systems are a way of automating manufacturing applications while reducing the labor, production costs, and time associated with the process, and they are used in almost every manufacturing industry today. While manual labor dominated manufacturing for centuries, it is the robotic system that revolutionized the process: manufacturers are now able to produce a superior-quality product in a reduced amount of time.
-1. The manipulation robotic system is the most commonly used in the manufacturing industry. These systems are made up of robot arms with 4-6 axes and varying degrees of freedom. They can perform several different functions, including welding, material handling and material removal applications.
-2. The mobile robotic system is a bit different. This system consists of an automated platform that moves items from one place to another. While these robot systems are used heavily in manufacturing for carrying tools and spare parts, they are also used in the agricultural industry for transporting products. Because some can swim or fly as well as move along the ground, they can serve several different industries.
-3. Data acquisition and control robotic systems are used to gather, process and transmit data for a variety of signals. They are also used in software for engineering and business. Many of the mobile robotic systems can use signals from these systems.
_____
_____
The Purpose of Robots:
Robots are used for the following tasks:
_
Main benefits of robot investments:
The reasons why companies consider investing in a robot system differ widely. Some factors include the positive effect on parts quality, increase of manufacturing productivity (faster cycle time) and/or yield (less scrap), improved worker safety, reduction of work-in-progress, greater flexibility in the manufacturing process and reduction of costs.
Figure above shows major reasons to automate with robots.
Main reasons for investing in industrial robots:
Overall, robots increase productivity and competitiveness. Used effectively, they enable companies to become or remain competitive. This is particularly important for small-to-medium-sized enterprises (SMEs) that are the backbone of both developed and developing economies. It also enables large companies to increase their competitiveness through faster product development and delivery. Increased use of robots is also enabling companies in high-cost countries to 're-shore', or bring back to their domestic base, parts of the supply chain that they had previously outsourced to sources of cheaper labor.
_
Robotics offers benefits such as high reliability, accuracy, and speed of operation. Low long-term costs of computerized machines may result in significantly higher productivity, particularly in work involving variability within a general pattern. Humans can be relieved of mundane work and exposure to dangerous workplaces. Their capabilities can be extended into hostile environments involving high pressure (deep water), low pressure (space), high temperatures (furnaces), low temperatures (ice caps and cryogenics), and high-radiation areas (near nuclear materials or occurring naturally in space).
On the other hand, deleterious consequences are possible. Robots might directly or indirectly harm humans or their property; or the damage may be economic or incorporeal (for example, to a person's reputation). The harm could be accidental or result from human instructions. Indirect harm may occur to workers, since the application of robots generally results in job redefinition and sometimes in outright job displacement. Moreover, the replacement of humans by machines may undermine the self-respect of those affected, and perhaps of people generally.
______
______
Why has the field advanced so much in the last few years?
There are four reasons:
-1. Sensors
The demand for mobile computing has been a boon for robotics development, leading to falling prices, rapid advances, and miniaturization of sensor technology. Accelerometers used to cost hundreds of dollars each. Now every smartphone can measure acceleration, as well as capture stunning video, fix geographical location and offer guidance, interface with other devices, and transmit across several bands of spectrum — functionality robots need to maneuver through our world productively.
The ubiquity of IoT devices is another driver. By some estimates, there will be as many as 100 billion Internet of Things connected devices by 2025, generating revenue of $10 trillion. For the first time, sensors that capture and send data related to pressure, torque, and position are dirt cheap, leading to a boom in robotics development.
Similarly, prices for lidar and infrared sensors, previously the most expensive sensing equipment for self-guiding robots, have plummeted 90% thanks in large part to the aggressive development of self-driving cars by Google’s Waymo and others. And 3D cameras, which used to be out of reach to all but the most lavishly-funded R&D teams and Hollywood titans, are now available off-the-shelf thanks to some smart work with algorithms.
-2. Open-source development
In 2009, a paper presented at the IEEE International Conference on Robotics and Automation (ICRA) introduced the Robot Operating System (ROS) to the world. Despite its name, ROS is not a true operating system but a standard middleware framework for robotics development. It also happens to be free, open source, and inherently flexible, freeing robotics developers from the time-prohibitive task of building a software stack from scratch.
There are plenty of open-source users in personal computing, but because proprietary operating environments like Windows reached scale first, open-source options have always been an alternative to something else. Not so with robotics, where open source is now the norm, resulting in a flurry of crowd-assisted development.
Open Robotics, under whose stewardship ROS falls, has also unveiled a robotics simulator called Gazebo, which allows engineers to test robots in simulation without risking hardware.
-3. Rapid prototyping
Though we’re still waiting to see if 3D printers will fundamentally change how (and where) consumer goods are manufactured, the impact of additive manufacturing on robotics development has been enormous. ‘3D printing enables the creator to go from a mind-bending concept to a solid product in a matter of hours (or days),’ according to Robotics Tomorrow, which tracks the industry. Printers in maker spaces and university engineering departments, some of which allow for multi-material and metal printing, have significantly lowered the barrier to entry for robotics development. When engineers can make prototype components at their workbench, innovation follows.
-4. Technology Convergence
Just as it brought sensor prices plummeting, the enormous success of mobile computing has spurred advances in voice and object recognition, which have clear applications in robotics. 3D gaming sensors are helping robots navigate the clutter of the unstructured human world. And companies like Google, Amazon, and Apple have been hard at work bringing limited Artificial Intelligence platforms online and into homes. This has all been accompanied by predictable year-over-year increases in computing power, along with the arrival of the cloud and IoT technology. Put it all together and you can see that a lot of technology that roboticists have been waiting for has matured in just the last few years.
______
______
How robots will change the world:
According to a report from McKinsey, automation and machines will shift the way we work. The report predicts that across Europe workers may need different skills to find work: its model shows that activities requiring mainly physical and manual skills will decline by 18% by 2030, while those requiring basic cognitive skills will decline by 28%.
Workers will need technological skills, and there will be an even greater need for those with expertise in STEM. Similarly, many roles will require socioemotional skills, particularly in roles where robots aren’t good substitutes, such as caregiving and teaching.
We may also see robots as a more integral part of our daily routine. In our homes, many simple tasks such as cooking and cleaning may be totally automated. Similarly, with robots that can use computer vision and natural language processing, we may see machines that can interact with the world more, such as self-driving cars and digital assistants.
Robotics may also shape the future of medicine. Surgical robots can perform extremely precise operations, and with advances in AI, could eventually carry out surgeries independently.
The ability for machines and robots to learn could give them an even more diverse range of applications. Future robots that can adapt to their surroundings, master new processes, and alter their behaviour would be suited to more complex and dynamic tasks.
Ultimately, robots have the potential to enhance our lives. As well as shouldering the burden of physically demanding or repetitive tasks, they may be able to improve healthcare, make transport more efficient, and give us more freedom to pursue creative endeavors.
_____
The following robots are currently transforming the world:
-1. Collaborative robots
A new generation of collaborative robots has emerged in the last few years. Unlike the heavy industrial robots of the 20th century, these collaborative robots, most of which have one or multiple articulated arms, are flexible and easily reprogrammable on the fly. Many models learn by watching humans demonstrate tasks. The primary feature that makes collaborative robots from companies like Universal Robots, Rethink Robotics, and ABB safe is their ability to avoid unwanted collisions and, using high accuracy torque sensors, to recognize when they’ve bumped into something or someone they shouldn’t have. That capability allows the robots to function outside of safety cages and alongside humans, which opens up new productivity potential for industrial manufacturers. The robots can learn complex tasks and then act as a second pair of dexterous hands to augment the capabilities of skilled workers — thus the “collaborative” designation.
Significance:
Automation is increasing in industries like automotive and electronics manufacturing and making speedy inroads in order-fulfillment warehouses. As prices for task-versatile platforms fall, small- and mid-sized manufacturers are starting to employ robots. Even so, a plausible future in which robots replace industrial workers entirely is still far on the horizon; in the meantime, with the economics favoring a hybrid approach, safety is of primary concern.
The market for collaborative robots could reach $3.3 billion by 2022.
-2. Telepresence robots
Telepresence robots, which have been something of a novelty, are starting to creep into broader use. There are several different types, from the bare bones Double models, which are basically iPads on wheels, to iRobot’s $30,000 Ava 500.
Significance:
Across most sectors there’s a growing segment of contract workers and freelancers who can’t be in the office full time, and offices are seeing the value of poaching talent across time zones. Telepresence robots offer a surprisingly adequate alternative to being physically present. The market for telepresence robots could reach $8 billion by 2023.
-3. Warehouse and logistics
Of all the categories of robots covered here, warehouse and logistics automation is having the most substantial impact on global commerce right now. In 2012, Amazon bought Kiva Systems, which makes automation systems for warehouses, for $775 million; thanks to those automation systems, Amazon can offer same-day fulfillment from its fulfillment centers. Today, you'd be hard-pressed to find a retailer with any e-commerce aspirations that isn't revamping its operations with an eye toward automation.
Significance:
Warehouse and logistics automation has transformed how global commerce functions.
-4. Healthcare
The burgeoning field of robotic surgery is dominated by Intuitive Surgical, which makes the da Vinci Surgical System. Hundreds of thousands of surgeries are now conducted with da Vinci systems each year — virtually every prostate patient with a choice opts for it — and robotic surgery has quickly passed the crucial adoption threshold. Intuitive Surgical now has an $18.2 billion market cap. Surgical robots are going to play a much bigger role in healthcare in the years ahead. But surgery isn’t the only way robots are entering healthcare. Personal assistant robots, such as the models developed by Aldebaran, are likely to appear in senior centers soon, particularly in countries with rapidly aging populations, such as Japan. Toyota unveiled the $1 billion Toyota Research Institute a couple years ago, which is currently developing robots that can operate in unstructured and semi-structured environments, such as hospitals and other care facilities. And robots such as Aethon’s TUG are already moving supplies down linoleum corridors while robotic telepresence solutions are aiding in teaching and helping connect patients in remote areas with specialists around the world.
Significance:
Robot-assisted surgery is less invasive, more precise, and likely to open new horizons for surgical treatments. Auris, for example, is exploring non-invasive surgical tools for lung and throat cancers. More broadly, robots can reduce healthcare costs by automating operational tasks while potentially reducing mistakes.
-5. Self-driving vehicles
Self-driving vehicles are the flashy technology in robotics right now. But the cars you see Google and Uber testing on California roads are only one application for self-driving technology. So far, small self-guided vehicles have had far more impact on commerce as they deftly navigate the structured and semi-structured environments of factories and warehouses, spaces that offer less randomness than the open road. Materials handling, in particular, has been ripe for automation via self-guided vehicles, in large part because it's such a dangerous sector for human workers. Self-guided robots equipped with lidar, cameras, and a bevy of other sensors can safely and quickly navigate loading docks and factory floors while avoiding collisions with workers. The global market for these vehicles will reach $2.8 billion by 2022.
Back on the roads, self-driving vehicles are showing lots of promise, but the biggest early impact will likely come from semi-autonomous trucks. The idea is that long-haul truckers will be able to put their rigs on autopilot on highways, where they spend most of their time, and then switch back to operator mode on busy city streets.
Significance:
Safety is the biggest advantage. Along with some huge technology players, almost every major car manufacturer is pursuing self-driving technology. We’re still a decade or more out from viable fully autonomous cars and trucks, and that’s not factoring in potential regulatory holdups. Even when the technology arrives, it will take a while for the existing fleet to turn over. But make no mistake, a future awaits in which most cars on the road drive themselves most of the time. When that happens, road accidents should plummet and traffic will improve.
The market for self-driving and semi-autonomous vehicles could be $77 billion by 2035.
_____
Here are the most expensive robots in the world in 2022:
PR2 Robot System: The PR2 is one of the most advanced research robots ever built. Its powerful hardware and software systems let it do things like clean up tables, fold towels, and fetch drinks from the fridge. These capabilities have made it popular with organizations such as Google and MIT.
iCub: iCub is a research-grade humanoid robot designed to help develop and test embodied AI algorithms, making it an ideal companion for a robotics laboratory. The iCub project blends results from various IIT research lines by applying the principles of systems engineering and by seeking worldwide collaboration opportunities.
Xenex: Xenex robots were invented to save patients and hospital staff from the ravages of MRSA. It is used to sterilize rooms and is effective in killing up to 99.9% of all pathogens and bacteria. It is currently being used in hospitals worldwide and is considered a game changer in the healthcare industry.
HRP-4: HRP-4 is one of the world’s most advanced humanoids, the culmination of a decade of R&D. It’s designed to collaborate with humans and can perform remarkably natural human-like movements.
Robo Thespian Humanoid Robot: RoboThespian is a robotic actor, meaning he’s an actor who is a robot, as opposed to a bad actor who is human. It speaks more than 30 languages, and you can find it on stages worldwide. Robots like this help us visualize what machines can become; the demand for robots is only growing, and the technology behind them keeps improving.
Hubo II: Hubo II is a full-size humanoid that can walk, run, dance, and grasp objects. It uses a straight-leg walking gait that’s an improvement over most bipedal robots, which keep their knees bent to balance.
Kuratas: Kuratas is a rideable and user-operated mecha built by the Japanese company Suidobashi Heavy Industry. Its specifications and design reveal both the strengths and the weaknesses of piloted robots. Although the field still has far to go, Kuratas shows that engineers can now build a giant robot that a person can actually climb into and operate.
Atlas: Atlas is one of the most advanced humanoid robots ever created. It is powered by an i7 processor, the same family of processor used in MacBooks and iMacs. If robots were regular humans, we would have robots running countries, businesses, and homes.
ASIMO: ASIMO stands for Advanced Step in Innovative Mobility and is a humanoid robot created by Honda in 2000. It can run, dance, hop, and kick a soccer ball. It travels the world as an ambassador to robot kind, making humans excited about robotics.
Valkyrie: Valkyrie robot is highly advanced in terms of its capabilities and is seen as a real competitor in the race. It is an advanced humanoid robot that will be used in future space missions.
_____
The most advanced, artificial-intelligence-powered military robots in 2022:
MUTT (Multi-Utility Tactical Transport): MUTT accompanies soldiers on foot, carrying equipment to ease travel through difficult terrain. It can carry 1,200 pounds, provide up to 3,000 watts of power, and travel 60 miles on a single tank of fuel.
RiSE: RiSE, a climbing robot by Boston Dynamics, has micro-clawed feet that allow it to deftly scale rough surfaces, including walls, fences, and trees. The RiSE project aims to build a bioinspired climbing robot with the unusual ability to both walk on land and climb vertical terrain.
DOGO: DOGO was developed to function as a watchdog for soldiers in battle. This robot, created by General Robotics, is the terrestrial version of the common combat drone. The most intriguing aspect of DOGO is that a fully armed commando can carry it.
AVATAR III: AVATAR III is Robotex’s tactical robot. This robot improves the skills of law enforcement and first responders by enabling them to safely and quickly investigate dangerous situations. AVATAR III is fully customizable with a plug-and-play payload bay, allowing users to configure the robot to their needs.
Centaur: The Centaur is a medium-sized unmanned ground vehicle that can be controlled remotely. It supports the work of a skilled warfighter: finding, verifying, determining, and eliminating dangers, including landmines, unexploded ordnance, improvised explosive devices, and moving forces.
_____
_____
Robotics and NASA:
Robotics is the study of robots. Robots are machines that can be used to do jobs. Some robots can do work by themselves. Other robots must always have a person telling them what to do. NASA uses robots in many different ways. Robotic arms on spacecraft are used to move very large objects in space. Spacecraft that visit other worlds are robots that can do work by themselves. People send them commands. The robots then follow those commands. This type of robot includes the rovers that explore the surface of Mars. Robotic airplanes can fly without a pilot aboard. NASA is researching new types of robots that will work with people and help them.
_
Figure above shows Spirit, one of a group of robots that have explored Mars from the surface or from orbit.
_
Robotic Arms:
NASA uses robotic arms to move large objects in space. The space shuttle’s “Canadarm” robot arm first flew on the shuttle’s second mission in 1981. The International Space Station is home to the larger Canadarm2. The space shuttle has used its arm for many jobs. It could be used to release or recover satellites. For example, the arm has been used to grab the Hubble Space Telescope on five different repair missions. The shuttle and space station arms work together to help build the station. The robotic arms have been used to move new parts of the station into place. The arms also can be used to move astronauts around the station on spacewalks. The space station’s arm can move to different parts of the station. It moves along the outside of the station like an inchworm, attached at one end at a time. It also has a robotic “hand” named Dextre that can do smaller jobs. An astronaut or someone in Mission Control must control these robotic arms. The astronaut uses controllers that look like joysticks used to play video games to move the arm around.
Figure above shows Dextre, which is attached to the end of the International Space Station’s robotic arm, Canadarm2.
_
Robots explore other Worlds:
Robots help NASA explore the solar system and the universe. Spacecraft that explore other worlds, like the moon or Mars, are all robotic. These robots include rovers and landers on the surface of other planets. The Mars rovers Spirit and Opportunity are examples of this kind of robot. Other robotic spacecraft fly by or orbit other worlds and study them from space. Cassini, the spacecraft that studies Saturn and its moons and rings, is this type of robot. The Voyager and Pioneer spacecraft now traveling outside Earth’s solar system are also robots. Unlike the robotic arm on the space station, these robots are autonomous. That means they can work by themselves. They follow the commands people send. People use computers and powerful antennas to send messages to the spacecraft. The robots have antennas that receive the messages and transfer the commands telling them what to do into their computers. Then the robot will follow the commands.
_
NASA uses Robotic Airplanes:
NASA uses many aircraft called UAVs. UAV stands for unmanned aerial vehicle. These planes do not carry pilots aboard them. Some UAVs are flown by remote control by pilots on the ground. Others can fly themselves, with only simple directions. UAVs provide many benefits. The planes can study dangerous places without risking human life. For example, UAVs might be used to take pictures of a volcano. A UAV also could fly for a very long time without the need to land. Since they do not carry a pilot, UAVs also can be smaller or more lightweight than they would be with a person aboard.
_
Robots Help Astronauts:
NASA is developing new robots that could help people in space. For example, one of these ideas is called Robonaut. Robonaut looks like the upper body of a person. It has a chest, head and arms. Robonaut could work outside a spacecraft, performing tasks like an astronaut on a spacewalk. With wheels or another way of moving, Robonaut also could work on the moon, or another world. Robonaut could work alongside astronauts and help them.
Another robot idea is called SPHERES. These are small robots that look a little like soccer balls. The current SPHERES are being used on the space station to test how well they can move in microgravity. Someday, similar robots could fly around inside the station helping astronauts.
NASA also is studying the possibility of other robots. For example, a small version of the station’s robotic arm could be used inside the station. A robot like that might help in an emergency. If an astronaut were seriously hurt, a doctor on Earth could control the robotic arm to perform surgery. This technology could help on Earth, as well. Doctors could use their expertise to help people in remote locations.
Robots also can be used as scouts to check out new areas to be explored. Scout robots can take photographs and measure the terrain. This helps scientists and engineers make better plans for exploring. Scout robots can be used to look for dangers and to find the best places to walk, drive or stop. This helps astronauts work more safely and quickly. Having humans and robots work together makes it easier to study other worlds.
______
Robotic Telescopes:
A robotic telescope is a telescope that can make observations without hands-on human control. Its low-level behavior is automatic and computer-controlled. Robotic telescopes usually run under the control of a scheduler, which provides high-level control by selecting astronomical targets for observation. The scheduler itself may or may not be highly automated. Some robotic telescopes are scheduled in a simple manner, and are provided with an observing plan by a human astronomer at the start of each night. They are only robotic in the sense that individual observations are carried out automatically.
Advantages:
Robotic telescopes have many advantages. Removing humans from the observing process allows faster observation response times. Robotic telescopes can also respond quickly to alert broadcasts from satellites and begin observing within seconds. Particularly in the field of gamma ray bursts, very early observations have led to significant advances in astronomers’ understanding of these events. Automation in a telescope’s observing program eliminates the need for an observer to be constantly present at a telescope. This makes observations more efficient and less costly. Many telescopes operate in remote and extreme environments such as mountain tops, deserts, and even Antarctica. Under difficult conditions like these, a robotic telescope is usually cheaper, more reliable and more efficient than an equivalent non-robotic telescope.
The sheer volume of observations that can be acquired by a robotic telescope is usually much greater than what can be done by humans. Artificial intelligence techniques can be applied to identify trends and anomalies in the data and in some cases the scientific return on the instruments can be maximized by deriving interesting secondary science from the data. Since no observer needs to be physically present at the telescope, there is no requirement that observations have to occur in a single consecutive block of time. This allows observers to get data over a longer span of time without an increase in the total number of observations needed. Perhaps the most exciting feature of automated telescopes, whether fully robotic or not, is that when many such telescopes are connected in a network spread across two or more geographically distant sites, observations do not need to end when the Sun comes up. They can continue to be carried out by a distant telescope that is still in the dark. Another advantage of a robotic telescope network is the ability to make simultaneous observations, with both similar and different instruments. For example, an astronomer might want to observe an object with a spectrograph and also have access to images in several filters.
Disadvantages:
The main disadvantage of a robotic system is that automation requires work. The more sophisticated the degree of autonomy the telescope has, the greater the amount of work required to enable that functionality. Scheduling systems usually combine a number of different variables (visibility, priority, weather conditions and many more) in order to decide the best course of action for a telescope at any given time. A scheduler needs to have an interface that allows astronomers to input requests for complex, multi-step observations. The scheduler then needs to be able to prioritize among all of the observation requests, and find the most efficient way to complete as many high quality and complete observations as possible. These systems are complicated and difficult to develop.
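The scheduling logic described above can be sketched as a toy priority rule. Everything here is invented for illustration (target names, priority values, visibility windows in hours), and a real scheduler would weigh far more variables such as weather and instrument state, but the core decision is the same: among the targets currently observable, pick the most urgent one.

```python
def pick_next_observation(requests, now):
    """Choose the highest-priority request that is visible right now.

    Each request is a dict with 'target', 'priority' (higher = more
    urgent), and a ('rise', 'set') visibility window in hours.
    Returns None when nothing is observable.
    """
    visible = [r for r in requests if r["rise"] <= now <= r["set"]]
    if not visible:
        return None
    return max(visible, key=lambda r: r["priority"])

requests = [
    {"target": "GRB-alert",     "priority": 10, "rise": 2, "set": 6},
    {"target": "survey-field",  "priority": 3,  "rise": 0, "set": 12},
    {"target": "variable-star", "priority": 5,  "rise": 7, "set": 11},
]
print(pick_next_observation(requests, now=4)["target"])  # GRB-alert
```

A satellite alert can simply be injected as a new high-priority request, which is how a robotic telescope begins observing a gamma-ray burst within seconds.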
______
______
Statistics of robots:
Robot density:
Robot density, i.e. the number of robots per 10,000 employees, is a measure of the degree of automation. The automation of production is accelerating around the world: in 2017, 74 robot units per 10,000 employees was the average global robot density in the manufacturing industries (2015: 66 units). By region, the average robot density in Europe was 99 units, in the Americas 84 and in Asia 63 units. The top 10 most automated countries in the world were: South Korea, Singapore, Germany, Japan, Sweden, Denmark, USA, Italy, Belgium and Taiwan. This is according to the 2017 World Robot Statistics, issued by the International Federation of Robotics (IFR).
Robot density is an excellent standard for comparison in order to take into account the differences in the automation degree of the manufacturing industry in various countries. As a result of the high volume of robot installations in Asia in recent years, the region has the highest growth rate. Between 2010 and 2016, the average annual growth rate of robot density in Asia was 9 percent, in the Americas 7 percent and in Europe 5 percent.
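The density figures above follow directly from the definition. A short sketch makes the arithmetic explicit; the country numbers in the example are invented purely for illustration:

```python
def robot_density(robots_in_operation, employees):
    """Robot density: operational robots per 10,000 manufacturing employees."""
    return robots_in_operation / employees * 10_000

# Illustrative only: a country with 150,000 industrial robots
# and 15 million manufacturing employees.
density = robot_density(150_000, 15_000_000)
print(density)  # 100.0 robots per 10,000 employees
```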
_
Robot Usage around the World:
In 2017, nearly 2 million industrial robots were in use around the world, up nearly 280% since 1993. The use of robots has more than doubled in the last 20 years in most advanced economies. The top users of industrial robots in 2017 were China, Japan and South Korea, using nearly 50% of the world’s stock of robots. European nations were also significant users of industrial robots in 2017, with Germany employing around 200,000 robots.
Regarding industries, the highest usage of industrial robots in 2017 belonged to the automotive industry—which employed about 42% of all robots—followed by the electrical and electronics industry at 28%.
Thus, industries around the world are adopting robots at a rapid pace.
_
The IFR Statistical Department compiles statistical data on annual installations of multipurpose industrial robots for around 40 countries, broken down into areas of application, customer industries, types of robots and other technical and economic aspects.
Figure above shows annual installations of industrial robots in top 15 countries in 2019.
_
The use of industrial robots in factories around the world is accelerating at a high rate: 126 robots per 10,000 employees is the new average global robot density in the manufacturing industries, nearly double the number five years ago (2015: 66 units). This is according to the IFR’s World Robotics 2021 report. By region, the average robot density in Asia/Australia is 134 units, in Europe 123 units and in the Americas 111 units. The top 5 most automated countries in the world are: South Korea, Singapore, Japan, Germany, and Sweden. There are more than three million industrial robots operating in factories around the world, according to the same report, and 2020 saw $13.2 billion of new robot installations. From 2015 to 2020, robot density nearly doubled worldwide, jumping from 66 units in 2015 to 126 units in 2020; in 2020 alone, global robot density jumped from 113 units in 2019 to 126 units. South Korea has 932 industrial robots per 10,000 employees, so it is no surprise that it leads the world in robot density: its figure is seven times the global average, and the country has been increasing its robot density by 10% every year since 2015. Automotive is the industry with the highest operational stock of industrial robots, followed by electrical/electronics and metal and machinery.
______
______
Section-4
Mechatronics, automation and robotics:
_
Mechatronics and robotics:
Mechatronics, also called mechatronics engineering, is an interdisciplinary branch of engineering that focuses on the integration of mechanical, electronic and electrical engineering systems, and also includes a combination of robotics, electronics, computer science, telecommunications, systems, control, and product engineering. As technology advances over time, various subfields of engineering have succeeded in both adapting and multiplying. The intention of mechatronics is to produce a design solution that unifies each of these various subfields. Originally, the field of mechatronics was intended to be nothing more than a combination of mechanics and electronics, hence the name being a portmanteau of mechanics and electronics; however, as the complexity of technical systems continued to evolve, the definition has been broadened to include more technical areas.
Mechatronic engineering is focused on the design of automated machinery. It is the junction of three separate disciplines: electrical engineering, mechanical engineering and computer science. Yet it is distinct in that it is focused solely on mechanical and electrical interactions and the system as a whole. Robotics engineering is a subset of mechatronics; the differences between robotics and mechatronics require a nuanced examination. Mechatronics is a superset of robotic technologies. The discipline is a study of interactions between mechanical systems, electrical systems and control theory. It covers a diverse field of electro-mechanical systems, from simple on/off controls to sophisticated robotic systems.
_
Design. Build. Operate. If this were an introductory study of mechatronics, that would be the directive. It is also the directive shaping building automation and manufacturing systems. Mechatronic systems evolve with an emphasis on automation and increased efficiency, and as these systems become more autonomous, their behavior resembles that of robots: constantly performing complex, repetitive tasks without human interaction.
A central heating system is a prime example of evolving mechatronic technologies. Traditionally, a bimetal thermostat operates a heating system; that is not a mechatronic system but a thermal-mechanical one that activates a switch point. A digital thermostat with a feedback sensor and microprocessor, however, is a mechatronic system: a single device that integrates electrical and mechanical components to complete a repetitive task. Building automation requires more than a digital thermostat. Modern mechatronic systems must address broad requirements, like those needed to support a building automation system (BAS). Mechatronics is no longer solely the study of mechanical and electrical interactions, but rather the study of electro-mechanical interactions with other technical systems including robotics, electronics, computer engineering, telecommunications engineering, systems engineering and control engineering. French standard NF E 01-010 defines mechatronics as an “approach aim[ed] at the synergistic integration of mechanics, electronics, control theory, and computer science within product design and manufacturing, in order to improve and/or optimize its functionality.”
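The digital-thermostat control cycle reduces to a few lines: read the feedback sensor, compare with the setpoint, and switch the heater. This sketch is a generic on/off controller with hysteresis (the deadband value is an illustrative assumption), not the firmware of any particular product:

```python
def thermostat_step(current_temp, setpoint, heater_on, deadband=0.5):
    """One control cycle of an on/off thermostat with hysteresis.

    Returns the new heater state. The deadband prevents rapid
    on/off cycling when the temperature hovers near the setpoint.
    """
    if current_temp < setpoint - deadband:
        return True   # too cold: turn (or keep) the heater on
    if current_temp > setpoint + deadband:
        return False  # too warm: turn it off
    return heater_on  # inside the deadband: keep the current state

print(thermostat_step(18.0, 20.0, heater_on=False))  # True
print(thermostat_step(20.2, 20.0, heater_on=True))   # True (within deadband)
```

In the physical device this loop runs on the microprocessor, with the feedback sensor supplying `current_temp` and a relay acting on the returned state, which is exactly the electrical-mechanical integration the text describes.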
BASs like Johnson Controls’ Metasys Building Automation System are collections of commercial HVAC, lighting, security and protection systems. Metasys is an advanced mechatronic system that incorporates a range of technical systems to maintain a desired climate and lighting scheme; it integrates with security and protection systems, functions autonomously based on occupancy schedules and performance criteria, and features alarm and notification capabilities.
Fully autonomous vehicles are another example of advanced mechatronics. The term mechatronics has become a modern-day buzzword often used to describe a robotic system.
_
Mechatronics is the combination of mechanical, electrical and computer engineering in the design of products and manufacturing processes. Robotics is a subset of mechatronics: all robots are mechatronic! Robotics, however, is an elevated class of mechatronics, incorporating automation, programming, and even autonomous action. As automation and autonomous machines become increasingly important in our society, robotics, and its parent discipline mechatronics, are more vital than ever. In simple words, all robotic systems are mechatronic systems, but not all mechatronic systems are robotic systems. Mathematically, robotics is a subset of the mechatronics set.
Robotics is a specific branch of mechatronics that requires knowledge of:
As you can see, even a mechatronic system requires the above-mentioned elements. However, it differs from a robotic system in terms of functionality and use case. For example, a coffee vending machine, an airplane, a washing machine and an automobile are all mechatronic systems, each belonging to a different use case, whereas robots are unique in terms of their functionality and use case. Robotics is a subfield of mechatronics, as mechatronics includes things that are not entirely robotic in nature. The meeting point between robotics and mechatronics is automation. In certain instances both technologies are used to automate physical tasks; however, many types of automation have nothing to do with physical robots, and some robots have nothing to do with automation.
______
______
Automation and robotics:
The term automation was coined in the automobile industry about 1946 to describe the increased use of automatic devices and controls in mechanized production lines. The origin of the word is attributed to D.S. Harder, an engineering manager at the Ford Motor Company at the time. The term is used widely in a manufacturing context, but it is also applied outside manufacturing in connection with a variety of systems in which there is a significant substitution of mechanical, electrical, or computerized action for human effort and intelligence.
Automation is application of machines to tasks once performed by human beings or, increasingly, to tasks that would otherwise be impossible. Although the term mechanization is often used to refer to the simple replacement of human labour by machines, automation generally implies the integration of machines into a self-governing system. Automation has revolutionized those areas in which it has been introduced, and there is scarcely an aspect of modern life that has been unaffected by it.
In general usage, automation can be defined as a technology concerned with performing a process by means of programmed commands combined with automatic feedback control to ensure proper execution of the instructions. The resulting system is capable of operating without human intervention. The development of this technology has become increasingly dependent on the use of computers and computer-related technologies. Consequently, automated systems have become increasingly sophisticated and complex. Advanced systems represent a level of capability and performance that surpass in many ways the abilities of humans to accomplish the same activities.
Automation technology has matured to a point where a number of other technologies have developed from it and have achieved a recognition and status of their own.
Robotics is one of these technologies overlapping with automation in which the automated machine possesses certain anthropomorphic, or humanlike, characteristics. The most typical humanlike characteristic of a modern industrial robot is its powered mechanical arm. The robot’s arm can be programmed to move through a sequence of motions to perform useful tasks, such as loading and unloading parts at a production machine or making a sequence of spot-welds on the sheet-metal parts of an automobile body during assembly. As these examples suggest, industrial robots are typically used to replace human workers in factory operations.
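The “programmed sequence of motions” idea can be sketched as a toy teach-pendant program: a list of taught poses and gripper actions that the controller replays. The operation names and coordinates below are invented for illustration; a real industrial controller uses its own motion language and kinematics.

```python
# A taught pick-and-place sequence: (operation, argument) pairs.
PROGRAM = [
    ("move_to", (0.50, 0.20, 0.30)),   # hover above the part
    ("move_to", (0.50, 0.20, 0.05)),   # descend to pick height
    ("grip", None),                    # close the gripper
    ("move_to", (0.10, 0.60, 0.30)),   # carry to the machine
    ("release", None),                 # open the gripper
]

def run_program(program):
    """Replay a taught sequence; return (event log, final pose, holding?)."""
    position, holding, log = None, False, []
    for op, arg in program:
        if op == "move_to":
            position = arg
            log.append(f"moved to {arg}")
        elif op == "grip":
            holding = True
            log.append("gripped part")
        elif op == "release":
            holding = False
            log.append("released part")
    return log, position, holding

log, pose, holding = run_program(PROGRAM)
print(log)
```

Because the task is encoded as data rather than wired into the machine, changing the job means editing the program, which is precisely what distinguishes a programmable robot from fixed automation.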
_
Industries turn to automation for a multiplicity of reasons, as given below:
_
Automation and robots are two distinct technologies, but the terms are often used interchangeably. While these terms are similar, it’s not always correct to use them synonymously. Before considering either of these technologies to transform your business, you must understand not only the difference between them, but also which one would benefit your business most. Together, they have transformed the manufacturing space. Formerly time-consuming duties are now fully automated with minimal operator input, and robots are taking over more labor-intensive and hazardous tasks from humans. So how are automation and robotics different, and how are they related?
Automation is the use of self-operating physical machines, computer software, and other technology to carry out tasks in place of human workers. The process is designed to automatically follow a predetermined sequence of operations or respond to encoded instructions. Automation is applied to both virtual tasks and physical ones: it can handle simple functions, such as a programmable thermostat, as well as extremely complex processes (such as in manufacturing), and is sometimes powered by artificial intelligence or machine learning.
Robotics is a field that combines engineering and computer science to design and build robots: physical machines that substitute for (or replicate) human actions. Not all types of automation use robots, and not all robots are designed for process automation.
Robots can be separated into three (broad) categories:
-1. Those that rely entirely on human input to operate
-2. Semi-autonomous robots that can perform some tasks on their own but require some human intervention/observation
-3. Autonomous robots that have the intelligence to perform tasks entirely on their own and can respond to real-world environments with minimal human intervention
_
Here are some examples that illustrate the difference between automation and robotics:
_
Robots that are not automation but autonomous:
To make it a little more complex, some robots are “autonomous” (meaning that they operate without humans directly controlling them) but they are not used in automation. For example, a toy line-following robot can autonomously follow a line painted on the ground. However, it is not “automation” because it isn’t performing a specific task. If instead the line-following robot were transporting medicines around a hospital, then it would be automation.
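A toy line follower of the kind mentioned above usually amounts to a pair of reflectance sensors and a steering rule. This minimal sketch (sensor semantics are an assumption: each returns True when it sees the dark line) shows why the robot counts as autonomous even without serving any automation purpose:

```python
def steer(left_sensor, right_sensor):
    """Differential steering rule for a two-sensor line follower.

    The robot turns toward whichever sensor sees the line,
    and drives straight when both (or neither) do.
    """
    if left_sensor and not right_sensor:
        return "turn_left"
    if right_sensor and not left_sensor:
        return "turn_right"
    return "straight"

print(steer(True, False))   # turn_left
print(steer(False, False))  # straight
```

The loop runs with no human in control, which makes the robot autonomous; it only becomes automation when that same loop is put to work on a defined task, like ferrying medicines through a hospital.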
_
Two Types of Automation:
Automation as a field is much more diverse than we tend to think. It is further classified into two categories, which are:
Software Automation:
As the name suggests, software automation substitutes software for human effort at a computer: a program is written to perform repetitive activities using the same reasoning that humans apply when using computer applications. The accuracy and effectiveness of software automation are prime reasons why automation has become such a popular subject of interest. If you’ve read anything at all about automation on the internet, it has almost certainly been about software automation.
The forms of software automation are:
_
Industrial Automation:
When we talk about “automation and robotics”, we are usually referring to industrial automation. Industrial automation is all about controlling physical processes. It involves using physical machines and control systems to automate tasks within an industrial process; a fully autonomous factory is the extreme example. There are many types of machines within industrial automation. For example, CNC machines are common in manufacturing, and robots are only one type of machine. From a practical point of view, there are three major types of industrial automation: (a) fixed automation, (b) programmable automation, and (c) flexible automation.
Fixed Automation:
Fixed automation refers to the use of special purpose equipment to automate a fixed sequence of processing or assembly operations. It is also known as hard automation, typically it is associated with high production rates and it is relatively difficult to accommodate changes in product design. Fixed automation is not considered for parts or products with short life cycles. This automation makes sense only when product designs are stable usually built on the modular principle. They are basically known transfer machine and consist of the two major components: transfer mechanism and power head production units. The advantages of this automation system are (a). maximum efficiency, (b). low cost unit, and (c). automated material handling. Main disadvantages of this system are: (a) inflexible in accommodating product variety and (b) large initial investment.
Programmable Automation:
In programmable automation, the equipment is designed to accommodate a specific class of product changes, and the processing or assembly operations can be changed by modifying the control program. Reconfiguring the system for a new product is time consuming because it involves reprogramming and set-up of the machines, plus new fixtures and tools. Examples include numerically controlled machines and industrial robots. The advantages of this system are (i) low unit cost for large batches and (ii) flexibility to deal with variations and changes in the product. However, the system requires more time to change over to a new product.
Flexible Automation:
Flexible automation is also known as soft automation. In this system, the equipment is designed to manufacture a variety of products and requires very little time to change from one product to another, so this type of manufacturing system can be used to produce various combinations of products according to any specified schedule. Honda is widely credited with using this type of automation technology to introduce 113 changes to its line of motorcycle products in the 1970s. The main advantages are (a) customized products and (b) flexibility to deal with product design variations. However, this type of automation requires a large initial investment and has a high unit cost relative to fixed or programmable automation.
_____
_____
Robotic process automation (RPA):
First things first:
There aren’t really any robots involved in robotic process automation. Robotic process automation does not use a physical or mechanical robot; rather, the “robot” in robotic process automation is a software robot running on a physical or virtual machine. RPA automates everyday processes that once required human action, often a great deal of it performed in rote, time-consuming fashion. That is also how RPA promises to boost efficiency for organizations.
_
Robotic process automation (RPA) is a software technology that makes it easy to build, deploy, and manage software robots (bots) that emulate human actions when interacting with digital systems and software. Just like people, software robots can do things like understand what’s on a screen, complete the right keystrokes, navigate systems, identify and extract data, and perform a wide range of defined actions. But software robots can do it faster and more consistently than people, without the need to get up and stretch or take a coffee break. In any organization, there are a lot of tasks that are repetitive and time-consuming in nature, and because of the repetition there is always a large possibility of error. To avoid these errors and save time, a great deal of RPA software is available in the market. Daily tasks that employees perform in software are automated using bots, and the software that uses the bot to perform this automation is called RPA software.
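An RPA bot is, at heart, a scripted version of the lookups and keystrokes a person would perform. This toy sketch (the record format, field names, and “systems” are all invented for illustration; commercial RPA suites drive real application screens instead) re-keys records from one system into another, the kind of rote task RPA targets:

```python
import csv
import io

# The "source system": exported data a person would otherwise re-type.
SOURCE = """invoice_id,customer,amount
1001,Acme,250.00
1002,Globex,99.50
"""

def run_bot(source_text):
    """Copy every record from the source export into the target system.

    A human would read each field and re-type it; the bot "reads the
    screen" and "fills the form" the same way, only faster and
    without transcription errors.
    """
    target_system = []  # stands in for the destination application
    for row in csv.DictReader(io.StringIO(source_text)):
        target_system.append({
            "id": int(row["invoice_id"]),
            "customer": row["customer"],
            "amount": float(row["amount"]),
        })
    return target_system

print(run_bot(SOURCE))
```

Note that the bot simply replays a defined procedure; it does not learn or adapt, which is the distinction from AI drawn later in this section.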
_
The benefits of RPA:
There are multiple benefits of RPA, including:
Less coding:
RPA does not necessarily require a developer to configure; drag-and-drop features in user interfaces make it easier to onboard non-technical staff.
Rapid cost savings:
Since RPA reduces the workload of teams, staff can be reallocated towards other priority work that does require human input, leading to increases in productivity and ROI.
Higher customer satisfaction:
Since bots and chatbots can work around the clock, they can reduce wait times for customers, leading to higher rates of customer satisfaction.
Improved employee morale:
By lifting repetitive, high-volume workload off your team, RPA allows people to focus on more thoughtful and strategic decision-making. This shift in work has a positive effect on employee happiness.
Better accuracy and compliance:
Since you can program RPA robots to follow specific workflows and rules, you can reduce human error, particularly around work which requires accuracy and compliance, like regulatory standards. RPA can also provide an audit trail, making it easy to monitor progress and resolve issues more quickly.
Existing systems remain in place:
Robotic process automation software does not cause any disruption to underlying systems because bots work on the presentation layer of existing applications. So, you can implement bots in situations where you don’t have an application programming interface (API) or the resources to develop deep integrations.
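To illustrate what "working on the presentation layer" can mean in practice, here is a hedged sketch: instead of calling an API, the bot reads a fixed-width text "screen" of the kind a legacy terminal application might display. The screen contents and labels are made up for the example.

```python
# Hypothetical screen-scraping bot: the legacy system exposes no API,
# so the bot reads the same rendered text a human operator sees.
SCREEN = (
    "ACCOUNT SUMMARY                 \n"
    "Name:    J. DOE                 \n"
    "Acct No: 00482-117              \n"
    "Balance: 1,204.50 USD           \n"
)

def read_field(screen: str, label: str) -> str:
    """Find a labelled field on the rendered screen and return its value."""
    for line in screen.splitlines():
        if line.startswith(label):
            return line[len(label):].strip()
    raise KeyError(label)

# Pull the balance off the presentation layer and normalize it.
balance = float(read_field(SCREEN, "Balance:").split()[0].replace(",", ""))
```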
_
Robotic process automation is mainly used in Banking, Insurance, Retail, Manufacturing, Healthcare, and Telecommunication industries.
_
RPA and artificial intelligence:
Robotic process automation is often mistaken for artificial intelligence (AI), but the two are distinctly different. AI combines cognitive automation, machine learning (ML), natural language processing (NLP), reasoning, hypothesis generation and analysis.
The critical difference is that RPA is process-driven, whereas AI is data-driven. On the most fundamental level, RPA is associated with “doing”, whereas AI and ML are concerned with “thinking” and “learning” respectively. RPA bots can only follow the processes defined by an end user, while AI bots use machine learning to recognize patterns in data, in particular unstructured data, and learn over time. Put differently, AI is intended to simulate human intelligence, while RPA is solely for replicating human-directed tasks. While the use of artificial intelligence and RPA tools minimizes the need for human intervention, the way in which they automate processes is different.
That said, RPA and AI also complement each other well. AI can help RPA automate tasks more fully and handle more complex use cases. RPA also enables AI insights to be actioned on more quickly instead of waiting on manual implementations.
_
Intelligent automation (IA):
In order for RPA tools in the marketplace to remain competitive, they will need to move beyond task automation and expand their offerings to include intelligent automation (IA). This type of automation expands on RPA functionality by incorporating sub-disciplines of artificial intelligence, like machine learning, natural language processing, and computer vision. Intelligent process automation demands more than the simple rule-based systems of RPA: it trains algorithms on data so that the software can perform tasks in a quicker, more efficient way. You can think of RPA as “doing” tasks, while AI and ML encompass the “thinking” and “learning,” respectively. It is important to realise that RPA and AI are nothing but different ends of a continuum known as IA, as seen in the figure below. In other words, RPA + AI = IA
RPA + AI = IA
______
______
“What Are the Labor and Product Market Effects of Automation? New Evidence from France” (2021 paper):
The authors use comprehensive micro data on the French manufacturing sector between 1995 and 2017 to document the effects of automation technologies on employment, sales, prices, wages, and the labor share. Causal effects are estimated with event studies and a shift-share IV design leveraging pre-determined supply linkages and productivity shocks across foreign suppliers of industrial equipment. At all levels of analysis — plant, firm, and industry — the estimated impact of automation on employment is positive, even for unskilled industrial workers. The authors find that automation leads to higher sales, higher profits, and lower consumer prices, while it leaves wages, the labor share and within-firm wage inequality unchanged. Consistent with the importance of business-stealing across countries, the estimated industry-level employment response to automation is stronger in industries that face international competition. These estimates can be accounted for in a simple monopolistic competition model: firms that automate more increase their profits but pass through some of the productivity gains to consumers, inducing higher scale and higher labor demand. The results indicate that automation can increase labor demand and can generate productivity gains that are broadly shared across workers, consumers and firm owners. In a globalized world, attempts to curb domestic automation in order to protect domestic employment may be self-defeating due to foreign competition.
______
______
Section-5
Artificial intelligence (AI) and robotics:
It is a common perception that AI and Robotics are the same or somewhat similar. In layman’s language, AI is the brain while Robotics is the body. Robots have existed without AI in the past and will continue to do so, while without robots, the implementation of AI is nothing but software interaction. An artificially intelligent robot is the combination of these two technologies, and it is still an active area of research. The augmentation of both will do wonders, but until then it should be clear that AI and Robotics serve different purposes. Robotics is the field concerned with the connection of perception to action. Artificial Intelligence must have a central role in Robotics if the connection is to be intelligent. Artificial Intelligence addresses the crucial questions of what knowledge is required in any aspect of thinking, how that knowledge should be represented, and how that knowledge should be used; Robotics challenges AI by forcing it to deal with real objects in the real world. Robots combine mechanical effectors, sensors, and computers. AI has made significant contributions to each component.
_
Robots are autonomous or semi-autonomous machines meaning that they can act independently of external commands. Artificial intelligence is software that learns and self-improves. In some cases, robots make use of artificial intelligence to improve their autonomous functions by learning. However, it is also common for robots to be designed with no capability to self-improve. Robotics and artificial intelligence serve very different purposes. However, people often get them mixed up. A lot of people wonder if robotics is a subset of artificial intelligence or if they are the same thing. Let’s put things straight. The first thing to clarify is that robotics and artificial intelligence are not the same thing at all. In fact, the two fields are almost entirely separate.
A Venn diagram of the two would look like this:
_
Robotics is a branch of technology which deals with robots. Robots are programmable machines which are usually able to carry out a series of actions autonomously, or semi-autonomously.
There are three important factors which constitute a robot:
-1. Robots interact with the physical world via sensors and actuators.
-2. Robots are programmable.
-3. Robots are usually autonomous or semi-autonomous.
Robots are “usually” autonomous because some robots aren’t. Telerobots, for example, are entirely controlled by a human operator, but telerobotics is still classed as a branch of robotics. This is one example where the definition of robotics is not very clear. Robotics involves designing, building and programming physical robots. Only a small part of it involves artificial intelligence.
Example of a robot: Basic cobot
A simple collaborative robot (cobot) is a perfect example of a non-intelligent robot.
For example, you can easily program a cobot to pick up an object and place it elsewhere. The cobot will then continue to pick and place objects in exactly the same way until you turn it off. This is an autonomous function because the robot does not require any human input after it has been programmed. The task does not require any intelligence because the cobot will never change what it is doing.
Most industrial robots are non-intelligent.
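The pick-and-place behaviour described above might be sketched like this; the `Cobot` class and its methods are purely illustrative stand-ins, not a real vendor API.

```python
from dataclasses import dataclass, field

# Hypothetical non-intelligent cobot: once taught a pick pose and a
# place pose, it repeats exactly the same motion until switched off.
@dataclass
class Cobot:
    pose: tuple = None        # last commanded pose
    holding: object = None    # item currently in the gripper
    placed: list = field(default_factory=list)

    def move_to(self, pose):  # drive the arm to a taught pose
        self.pose = pose

    def pick(self, item):     # close the gripper on an item
        self.holding = item

    def place(self):          # open the gripper, releasing the item
        self.placed.append(self.holding)
        self.holding = None

PICK_POSE = (0.30, 0.10, 0.05)    # taught positions (x, y, z in metres)
PLACE_POSE = (0.30, -0.25, 0.05)

def pick_and_place(robot, items):
    # The same fixed cycle every time: no perception, no learning.
    for item in items:
        robot.move_to(PICK_POSE)
        robot.pick(item)
        robot.move_to(PLACE_POSE)
        robot.place()

bot = Cobot()
pick_and_place(bot, ["part-1", "part-2", "part-3"])
```

Note that nothing in the loop depends on what the robot senses: this is autonomy without intelligence.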
_
Artificial intelligence (AI) is a branch of computer science. It involves developing computer programs to complete tasks which would otherwise require human intelligence. AI algorithms can tackle learning, perception, problem-solving, language-understanding and/or logical reasoning. AI is used in many ways within the modern world. For example, AI algorithms are used in Google searches, Amazon’s recommendation engine and SatNav route finders. Most AI programs are not used to control robots. Even when AI is used to control robots, the AI algorithms are only part of the larger robotic system, which also includes sensors, actuators and non-AI programming. Often — but not always — AI involves some level of machine learning, where an algorithm is “trained” to respond to a particular input in a certain way by using known inputs and outputs. The key aspect that differentiates AI from more conventional programming is the word “intelligence.” Non-AI programs simply carry out a defined sequence of instructions. AI programs mimic some level of human intelligence.
Example of a pure AI: AlphaGo
One of the most common examples of pure AI can be found in games. The classic example of this is chess, where the AI Deep Blue beat world champion Garry Kasparov in 1997. A more recent example is AlphaGo, an AI which beat Lee Sedol, the world champion Go player, in 2016. There were no robotic elements to AlphaGo. The playing pieces were moved by a human who watched the AI’s moves on a screen.
_
Figure below shows differentiation and overlap between Robotics and Artificial Intelligence.
The main aim of using AI in robotics is to better manage variability and unpredictability in the external environment, either in real-time, or offline.
It’s important to note that:
-AI is not per se required to enable the tasks but can bring significant performance benefits and enable specific functionality. A general rule of thumb is that the more variability and unpredictability there is in the external environment, the more useful AI will be.
-In real-world robotic applications, AI algorithms are combined with non-AI software programs as well as hardware such as sensors and cameras to execute the required task. In contrast to software applications such as recommendation engines, AI is only ever one component of a robotic application. This is particularly relevant to the question of robot safety, which to date is generally governed by hard-coded, deterministic algorithms.
_
Comparison of robotics and artificial intelligence:

| | Robotics | Artificial Intelligence |
| --- | --- | --- |
| Definition | The field of robotics is concerned with the design and operation of robots – programmable machines that can perform tasks autonomously or semi-autonomously. | Artificial intelligence (AI) is an area of computer science that develops systems which can solve problems and assist in the way that humans do. |
| Requirements | Needs hardware and software. | Needs software. |
| Discipline | Electronics, mechatronics, nanotechnology. | Computer science, mathematics, computational concepts. |
| Mode of operation | Robots operate in the real, physical world. | AI usually works in computer-simulated or digital environments; it is all about programming intelligence and can run on desktops and laptops. |
| Components | Deals with computers, effectors and sensors. | Deals with programs – logic, reasoning and vision algorithms. |
| Input/Output | Analog signals, such as voice waveforms or pictures, are used as inputs to robots. | An AI program receives its input in the form of symbols and rules. |
| Applications | Robots are used in a variety of applications, including packaging, medical surgery, space exploration and defence weaponry. | Netflix, Apple’s Siri, Spotify, and other AI-based apps are examples. |
_
In a nutshell
As you can see, robotics and artificial intelligence are really two separate things. Robotics involves building robots whereas AI involves programming intelligence. There is slight confusion about software robots. “Software robot” is the term given to a type of computer program which autonomously operates to complete a virtual task. They are not physical robots, as they only exist within a computer. The classic example is the search engine web crawler, which roams the internet, scanning websites and categorizing them for search. Some advanced software robots may even include AI algorithms. However, software robots are not part of robotics. A software robot (bot) is not classified as a robot in this article; a robot ought to have a physical structure.
______
______
AI technology used in Robotics:
-1. Computer Vision
Robots can also see, and this is made possible by one of the most popular Artificial Intelligence technologies: computer vision. Computer vision plays a crucial role in industries such as healthcare, entertainment, the military and mining. It is an important domain of Artificial Intelligence that helps in extracting meaningful information from images, videos and other visual inputs, and taking action accordingly.
-2. Natural Language Processing
NLP (Natural Language Processing) can be used to give voice commands to AI robots. It creates a strong human-robot interaction. NLP is a specific area of Artificial Intelligence that enables communication between humans and robots. Through NLP techniques, the robot can understand and reproduce human language. Some robots are equipped with NLP so well that it can be hard to tell whether one is interacting with a human or a robot.
-3. Edge Computing
The cloud is the dominant architecture today. In contrast, edge computing moves analytics away from centralized nodes and towards the source of the data: processing occurs near the point of origin, rather than transmitting the data to the cloud. Often, this means the device need not be continuously connected. AI at the cloud and at the edge are complementary. We can see AI and edge computing used together in complex robotics applications such as autonomous cars and drones. Edge computing in robotics provides better data management, lower connectivity cost, better security practices, and a more reliable, uninterrupted connection.
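A toy sketch of the edge idea, with invented thresholds and data: sensor readings are processed locally on the device, and only the anomalous ones are queued for transmission to the cloud.

```python
# Hypothetical edge processing on a robot: analyze readings at the
# point of origin; only out-of-range events leave the device.
TEMP_LIMIT_C = 80.0   # invented alert threshold

def process_at_edge(readings, limit=TEMP_LIMIT_C):
    """Summarize locally; return (local summary, events to upload)."""
    to_cloud = [(i, r) for i, r in enumerate(readings) if r > limit]
    summary = {"count": len(readings), "max": max(readings)}
    return summary, to_cloud

readings = [62.1, 63.0, 95.4, 61.8, 88.2, 64.0]
summary, to_cloud = process_at_edge(readings)
# Only 2 of the 6 readings would be transmitted; the rest stay local.
```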
-4. Complex Event Process
Complex event processing (CEP) combines data from multiple streaming sources to infer a more complex event, with the goal of responding to that event as quickly as possible. An event is described as a change of state; one or more events combine to define a complex event. For example, the deployment of an airbag in a car is a complex event based on data from multiple sensors in real time. Complex event processing is widely used in industries such as healthcare, finance, security and marketing. It is used prominently in credit card fraud detection and in stock trading.
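The airbag example can be sketched as a simple rule that fuses several simple sensor events into one complex event; the sensor names and thresholds below are invented for illustration.

```python
# Hypothetical CEP rule: the complex event "deploy airbag" fires only
# when the pattern across several simple sensor events matches.
def should_deploy(events: dict) -> bool:
    """Infer the complex event 'crash' from multiple simple events."""
    sudden_decel = events.get("decel_g", 0.0) > 40.0   # accelerometer
    impact = events.get("impact_sensor", False)        # impact switch
    moving = events.get("speed_kmh", 0.0) > 20.0       # wheel-speed sensor
    return sudden_decel and impact and moving

# Two snapshots of the event stream:
hard_braking = {"decel_g": 1.2, "impact_sensor": False, "speed_kmh": 90.0}
collision = {"decel_g": 55.0, "impact_sensor": True, "speed_kmh": 60.0}
```

Production CEP engines add windowing, ordering and timing constraints, but the fusion of simple events into a complex one is the core idea.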
-5. Transfer Learning and AI
Transfer learning is a technique that reuses knowledge gained from solving one problem and reapplies it to a related problem – for example, a model trained to identify an apple may be reused to identify an orange. Transfer learning re-uses a pre-trained model on another (related) problem, such as image recognition: only the final layers of the new model are trained, which is relatively cheap and less time-consuming. Transfer learning applies to mobile devices for inference at the edge, i.e. the model is trained in the cloud and deployed on the edge. The same principle applies to robotics: the model can be trained in the cloud and deployed on the device, which is useful in cases where the robot does not have access to the cloud (where the robot is offline). In addition, transfer learning can be used by robots to train other robots.
-6. Reinforcement Learning
Reinforcement learning is a feedback-based learning method in machine learning that enables an AI agent to explore an environment, perform actions, and learn automatically from the experience or feedback that each action produces. The agent autonomously learns to behave optimally through trial and error while interacting with its environment. Reinforcement learning is primarily used to develop sequences of decisions that achieve goals in uncertain and potentially complex environments. In robotics, robots explore their environment and learn about it through trial and error, receiving a positive or negative reward for each action. Reinforcement learning provides robotics with a framework to design and simulate sophisticated and hard-to-engineer behaviours.
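As a minimal illustration of trial-and-error learning, here is a tiny tabular Q-learning agent in a five-cell corridor; the environment, rewards and hyperparameters are toy choices, not from any particular robot.

```python
import random

# Toy Q-learning: an agent in a 1-D corridor of 5 cells learns, from
# reward feedback alone, to walk right toward the goal in cell 4.
random.seed(0)                          # deterministic toy run
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                      # step left, step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(500):                    # episodes of trial and error
    s = 0
    while s != GOAL:
        if random.random() < epsilon:   # explore
            a = random.choice(ACTIONS)
        else:                           # exploit current estimates
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else -0.1  # feedback for the action
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# Greedy policy after learning: walk right toward the goal.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)}
```

Real robotic RL replaces the table with function approximators and the corridor with a physics simulator, but the reward-driven update is the same.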
-7. Affective computing
Affective computing is a field of study that deals with developing systems that can identify, interpret, process, and simulate human emotions. It aims to endow robots with emotional intelligence, in the hope that robots can be given human-like capabilities of observation, interpretation and emotional expression.
-8. Mixed Reality
Mixed Reality is also an emerging domain. It is mainly used in the field of programming by demonstration (PbD). PbD creates a prototyping mechanism for algorithms using a combination of physical and virtual objects.
-9. Hardware acceleration for AI
Hardware acceleration at the microprocessor level for AI applications is an emerging area but will also see a growing uptake in the near future for Robotics.
-10. GANs
Generative Adversarial Networks (GANs) can be used to get better data (esp. image data). This can help in areas where Data is not easy to come by or similar data is needed but not available for training. GANs could thus be used in training Robotics.
_____
Tenets of Artificial intelligence and Machine Learning in Robotics:
There are four areas of robotic processes that AI and machine learning are impacting to make current applications more efficient and profitable. The scope of AI in robotics includes:
-1. Vision – AI is helping robots detect items they’ve never seen before and recognize objects with far greater detail.
-2. Grasping – robots are also grasping items they’ve never seen before with AI and machine learning helping them determine the best position and orientation to grasp an object.
-3. Motion Control – machine learning helps robots with dynamic interaction and obstacle avoidance to maintain productivity.
-4. Data – AI and machine learning both help robots understand physical and logistical data patterns to be proactive and act accordingly.
AI and machine learning are still in their infancy in regards to robotic applications, but they’re already having an important impact.
_____
Artificially Intelligent Robots:
Artificially intelligent robots connect AI with robotics: they are the bridge between the two fields. AI robots are controlled by AI programs and use AI technologies such as machine learning, computer vision and reinforcement learning. Most robots, however, are not AI robots; they are programmed to perform a repetitive series of movements and need no AI to perform their task. Indeed, until quite recently all industrial robots could only be programmed to carry out a repetitive series of movements, and repetitive movements do not require artificial intelligence. Non-intelligent robots are quite limited in their functionality. AI algorithms become necessary when you want to allow the robot to perform more complex tasks.
A warehousing robot might use a path-finding algorithm to navigate around the warehouse. A drone might use autonomous navigation to return home when it is about to run out of battery. A self-driving car might use a combination of AI algorithms to detect and avoid potential hazards on the road. All these are the examples of artificially intelligent robots.
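The warehouse path-finding example might look like the following sketch, which uses breadth-first search on an invented grid map (`#` marks shelving the robot must route around); real AMRs use far more sophisticated planners.

```python
from collections import deque

# Invented warehouse map: S = start, G = goal, # = shelving.
GRID = [
    "S..#.",
    ".#.#.",
    ".#...",
    "...#G",
]

def shortest_path(grid):
    """Breadth-first search: returns the shortest S-to-G route as cells."""
    rows, cols = len(grid), len(grid[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if grid[r][c] == "S")
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if grid[r][c] == "G":
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route exists

path = shortest_path(GRID)
```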
_
Artificial intelligence in robots gives companies new opportunities to increase productivity, make work safer, and save people valuable time. Substantial research is being devoted to using AI to expand robot functionality, and commercially available applications already make use of it.
_____
Benefits of Integrating AI with robotics:
While integrating artificial intelligence with an existing operation or business model appears overwhelming at first, the benefits realized typically far outweigh the challenges experienced.
-1. Increased Productivity and Efficiency
Companies today are juggling more demands than ever before. Customers want faster delivery. Stakeholders want higher productivity and increased efficiency. And workers want to contribute without fatigue or injury. AI robots are helping on all fronts. They perform repetitive or time-consuming tasks, such as checking inventory and alerting staff to out-of-stock or misplaced items in retail environments. This expedites product delivery, improves productivity, and frees human workers to take on higher-level, less physically taxing tasks, such as looking for ways to improve processes, troubleshooting AMR issues, or developing new ideas.
-2. Improved Quality and Accuracy
AI robots can see and understand their environments, which enables them to complete complex tasks such as quality-control inspections on assembly lines. In industrial applications, AI robots can check the quality of goods inline, instead of delaying the task to the end of the process—saving time and money for the manufacturer. Audi partnered with Intel and Nebbiolo Technologies to boost weld inspections and enhance quality-control processes with Intel-enabled robotic arms, machine learning, and predictive analytics.
-3. Enhanced Worker Safety
AI robots play a major role in improving workplace safety. Companies in the oil and gas sector often use them to perform data collection or safety inspection tasks in dangerous environments to reduce risk to humans. And because AI-enabled robots can learn from human gestures and speech, they’re able to continuously improve their ability to complete their tasks while safely working alongside employees.
_____
When augmented with AI, robots can help businesses innovate and transform their operations. Today’s most common types of robots powered by AI include:
-1. Autonomous Mobile Robots (AMRs)
As AMRs move through their environments, AI enables them to perceive their surroundings and respond in real time.
Depending on the industry, the tasks and actions completed by AI-empowered AMRs vary widely. For example, when moving inventory from one point to another in a warehouse, AMRs can avoid collisions by navigating around human workers or fallen boxes while simultaneously determining the optimal path for task completion.
-2. Articulated Robots (Robotic Arms)
AI allows articulated robots to perform tasks faster and more accurately. AI technologies infer information from vision sensors, such as 2D/3D cameras, to segment and understand scenes as well as detect and classify objects.
-3. Cobots
AI allows cobots to respond to and learn from human speech and gestures without worker-assisted training.
_____
_____
The scope of robot interactions.
Figure above outlines the scope of robotic interaction with both the physical and virtual worlds. It considers a range of inputs (both active control and perception driven) and shows examples of the numerous possible outputs, actions or roles that can arise. Using this kind of model, the role of IT services as an enabling and necessary part of the robotic interaction ecosystem begins to emerge. A robot may have a direct physical manifestation that allows it to mechanically act and react in the real world, but it may also operate in the virtual world using Artificial intelligence and miniaturization.
______
______
Connected robots:
Robots are a well-established and widely accepted feature of modern manufacturing plants, carrying out dangerous and unergonomic tasks such as welding, metal cutting and assembly of large parts such as car chassis and doors. Many robots work alongside manufacturing employees, tending machines, fetching and carrying parts and materials and performing tasks such as stacking, parts assembly and product finishing.
Increasingly, robots are no longer stand-alone machines, but are connected to other machines and software applications as part of automation and Industry 4.0 strategies. The vision is of a seamless automated process from order through to dispatch and delivery. Machines can communicate with each other as well as report on their own status, producing data that can be aggregated across multiple machines or an entire process. This data can be analyzed and used to continuously optimize production, anticipate bottlenecks in the production process, and forecast when machines need servicing, saving costs from machine downtime, which can run into millions of dollars.
_
The digitization of part or all of the manufacturing process gives manufacturers transparency across the full end-to-end production, enabling them to maximize efficiency along the full value chain. For example, Bosch, a global supplier of technology and services, has implemented connected solutions in virtually all of its 280 plants. The company claims that by using intelligent software it is able to increase productivity every year – at some locations, by up to 25 percent – while also reducing stock levels by up to 30 percent. Siemens says that digitalization has enabled its smart factory in Amberg to produce 13 times more with the same number of employees (1,300) as in 1989. Automotive manufacturer VW expects productivity to increase by 30% by 2025 due to digitally connecting machines to ecosystems capable of analyzing the data being produced. Manufacturers are also able to gain an exact picture of material and energy use, enabling them to minimize waste, cut energy use and – as a result of both – reduce emissions.
_
Connecting robots to other machines and programs in the production process brings various benefits to manufacturers, including:
-1. Increased flexibility to quickly adapt production and respond to changes in demand and smaller batch sizes
-2. Improved resilience to deal with production peaks and withstand systemic shocks such as COVID-19
-3. Energy and resource efficiency through optimized performance
-4. Improved productivity for manufacturing employees. Robots are assistants to manufacturing operators. Decision making for engineers, technicians and production managers is improved through better information from machine performance data that can be analyzed for machine and process optimization
_
The IFR has identified five common scenarios in which robots are connected within broader automation strategies:
-1. Automated production: Linking the first stages of production such as order entry and product design to downstream processes such as parts ordering and machine scheduling enables manufacturers to immediately understand the resource implications of producing a new product or order and to better optimize the organization of production.
-2. Optimizing performance: Connecting robots and other machines to a central computing server enables manufacturers to extract and aggregate data that can be used to optimize machine performance in real-time or retrospectively, avoiding unplanned machine downtime which can cost manufacturers over $1 million per hour.
-3. Digital twins: Virtual representations of robots and other production machines enable manufacturers to simulate operations and the impact of changes to parameters and programs before they are implemented, enabling improved production planning, and avoiding costly downtime.
-4. Robots as a Service: Adopting robots on a pay-per-use basis can be particularly beneficial for small-to-medium-sized manufacturers, sparing them up-front capital investment and unpredictable maintenance costs, and giving them predictability of operating expenditure.
-5. Sense and Respond: Sensors and vision systems enable robots to respond to their external environment in real-time, expanding the range of tasks the robot can perform – such as picking and placing unsorted parts – and expanding robot mobility. Mobile robots are key to enabling flexible manufacturing, in which production is split into discrete processes and production cells running in parallel.
_
_____
_____
Autonomy in Robotics:
Until recently, most robots were hard-coded to execute a task according to a pre-defined trajectory and with a pre-defined level of force. These robots are oblivious to their external environment. This means that the object the robot works on (such as a car part) must always be presented in exactly the same position, and the robot is not able to adjust its force or motion – for example to stop if something or someone gets in its way.
Over the last decade, ‘co-bots’, with either built-in or add-on force-torque sensors, have given robots a limited ability to sense and respond to their external environment. For example, co-bots can recognize movement in a designated zone and adjust speed – or stop – accordingly. This has enabled robots to be integrated into production lines alongside humans. Force-torque sensors also enable co-bots to adjust to minor variance in the external environment – for example, adjusting force when working on a flexible part, or in a sanding process where the amount of force required falls during the process as the surface becomes smoother.
The past few years have seen strong growth in autonomous robots which are able to adjust to far greater variability in their external environment. For example, autonomous mobile robots can not only stop if they encounter an object in their path, they can also re-plan their route and adjust their path in real-time. Autonomy does not necessarily require AI. However, the higher the level of autonomy, the greater the chances of AI algorithms being employed to categorize an unfamiliar environment and to determine the best way to interact with that environment to achieve the application’s goal (for example picking up a bottle from an unsorted bin and placing it in a rack).
_
Components and criteria of robotic autonomy:
Self-maintenance:
The first requirement for complete physical autonomy is the ability for a robot to take care of itself. Many of the battery-powered robots on the market today can find and connect to a charging station, and some toys like Sony’s Aibo are capable of self-docking to charge their batteries. Self-maintenance is based on “proprioception”, or sensing one’s own internal status. In the battery charging example, the robot can tell proprioceptively that its batteries are low, and it then seeks the charger. Another common proprioceptive sensor is for heat monitoring. Increased proprioception will be required for robots to work autonomously near people and in harsh environments. Common proprioceptive sensors include thermal, optical, and haptic sensing, as well as Hall effect (magnetic field) sensors.
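A trivial sketch of proprioception-driven self-maintenance, with invented thresholds: the robot checks its own battery level and internal temperature before deciding what to do next.

```python
# Hypothetical self-maintenance logic: internal (proprioceptive)
# readings select a behavior before the robot continues its task.
def next_action(battery_pct: float, temp_c: float) -> str:
    """Pick a behavior from the robot's sensed internal state."""
    if temp_c > 70.0:             # thermal proprioception
        return "cool_down"
    if battery_pct < 20.0:        # low battery -> self-dock and charge
        return "seek_charger"
    return "continue_task"
```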
Sensing the environment:
Exteroception is sensing things about the environment. Autonomous robots must have a range of environmental sensors to perform their task and stay out of trouble.
-Common exteroceptive sensors detect the electromagnetic spectrum (light), sound, touch, chemicals (smell, odor), temperature, range to various objects, and altitude.
Some robotic lawn mowers will adapt their programming by detecting the speed at which grass grows as needed to maintain a perfectly cut lawn, and some vacuum cleaning robots have dirt detectors that sense how much dirt is being picked up and use this information to stay in one area longer.
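The dirt-detector behavior can be expressed as a simple rule: scale the time spent in one area by the dirt reading. A toy sketch, with made-up numbers:

```python
def dwell_time(dirt_reading, base_seconds=5, threshold=0.3, max_seconds=30):
    """Scale the time spent in one area by the dirt sensor reading (0..1)."""
    if dirt_reading <= threshold:
        return base_seconds
    extra = (dirt_reading - threshold) / (1 - threshold)
    return base_seconds + extra * (max_seconds - base_seconds)

clean_spot = dwell_time(0.1)   # baseline: move on quickly
dirty_spot = dwell_time(0.9)   # stay longer where dirt keeps coming up
```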
Task performance:
The next step in autonomous behavior is to actually perform a physical task. A new area showing commercial promise is domestic robots, with a flood of small vacuuming robots beginning with iRobot and Electrolux in 2002. While the level of intelligence is not high in these systems, they navigate over wide areas and pilot in tight situations around homes using contact and non-contact sensors. Both of these robots use proprietary algorithms to increase coverage over simple random bounce.
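The point about improving on "simple random bounce" can be made concrete by simulating a random-bounce walk on a small grid and measuring what fraction of the floor gets visited. A toy simulation (grid size and step counts are arbitrary):

```python
import random

def random_bounce_coverage(size=10, steps=500, seed=1):
    """Simulate a random-bounce vacuum on a size x size floor; return coverage %."""
    random.seed(seed)
    visited = set()
    x, y = size // 2, size // 2            # start in the middle of the room
    dx, dy = 1, 0
    for _ in range(steps):
        visited.add((x, y))
        nx, ny = x + dx, y + dy
        if not (0 <= nx < size and 0 <= ny < size):   # hit a wall: bounce
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            continue
        x, y = nx, ny
    return 100 * len(visited) / (size * size)

coverage = random_bounce_coverage()
```

Coverage rises slowly with more steps, which is why vendors layer smarter (proprietary) strategies on top of pure random bounce.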
The next level of autonomous task performance requires a robot to perform conditional tasks. For instance, security robots can be programmed to detect intruders and respond in a particular way depending upon where the intruder is. For example, Amazon (company) launched its Astro for home monitoring, security and eldercare in September 2021.
_
The IFR has defined the following five levels of autonomy in robotics:
_
Other ways to define autonomy levels:
Control systems may also have varying levels of autonomy.
-1. Direct interaction is used for haptic or teleoperated devices, and the human has nearly complete control over the robot’s motion.
-2. Operator-assist modes have the operator commanding medium-to-high-level tasks, with the robot automatically figuring out how to achieve them.
-3. An autonomous robot may go without human interaction for extended periods of time. Higher levels of autonomy do not necessarily require more complex cognitive capabilities. For example, robots in assembly plants are completely autonomous but operate in a fixed pattern.
Another classification takes into account the interaction between human control and the machine motions.
-1. Teleoperation. A human controls each movement, each machine actuator change is specified by the operator.
-2. Supervisory. A human specifies general moves or position changes and the machine decides specific movements of its actuators.
-3. Task-level autonomy. The operator specifies only the task and the robot manages itself to complete it.
-4. Full autonomy. The machine will create and complete all its tasks without human interaction.
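The four-level classification above maps naturally onto an ordered enumeration; a sketch in Python (the descriptions are paraphrased from the list above):

```python
from enum import IntEnum

class Autonomy(IntEnum):
    TELEOPERATION = 1   # human specifies each actuator change
    SUPERVISORY = 2     # human gives general moves; machine fills in details
    TASK_LEVEL = 3      # human names the task; robot manages itself
    FULL = 4            # machine creates and completes all its tasks

def human_input_needed(level):
    """Rough mapping from autonomy level to required operator involvement."""
    return {
        Autonomy.TELEOPERATION: "every actuator command",
        Autonomy.SUPERVISORY: "general moves or position changes",
        Autonomy.TASK_LEVEL: "the task specification only",
        Autonomy.FULL: "none",
    }[level]
```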
_____
_____
Section-6
Internet of things (IoT) and Robotics:
Why focus on robotics rather than creating a generic IoT platform? After all, robots are just things. However, robotics has unique needs and challenges that set it apart from the more generic IoT.
-1. Each robot represents a self-contained constellation of “things”
In typical IoT scenarios, each node is a standalone sensor reporting data that is then aggregated in the cloud to make decisions. In the case of autonomous robots, each robot typically includes its own self-contained constellation of sensors (RGB and depth/stereo cameras, lidar, distance/proximity/bump sensors, GPS, etc.) and actuators (servos, brushless motors, grippers, etc.). By self-contained, we mean that the data generated by these sensors is primarily used locally by the robot’s on-board processor to build a real-time model of the world and make decisions on it, in turn sending real-time commands to the actuators.
Many modern robots use the Robot Operating System (ROS) as a way to manage this massive data flow. Technically more middleware than operating system, ROS includes capabilities for hardware abstraction and message passing to integrate these various data sources. For instance, a typical ground-based autonomous robot may have four HD cameras, two 3D depth sensors, and one or more lidars. If we include input and output from robot processes doing AI (computer vision, path planning), this adds up to 500Gb/h.
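ROS's message passing can be illustrated, in a very reduced form, with a toy publish/subscribe bus: sensor nodes publish to named topics and a fusion node subscribes to build a local world model. This is a stand-in for the idea only, not the real ROS API (topic names and message fields are invented):

```python
from collections import defaultdict

class MiniBus:
    """Toy publish/subscribe bus, loosely in the spirit of ROS topics."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

bus = MiniBus()
world_model = {}

# A fusion node listens to several sensor topics and updates a local model.
bus.subscribe("/camera/front", lambda m: world_model.update(obstacle=m["obstacle"]))
bus.subscribe("/lidar", lambda m: world_model.update(range_m=m["range_m"]))

bus.publish("/camera/front", {"obstacle": True})
bus.publish("/lidar", {"range_m": 0.8})
```

In a real robot this decoupling is what lets four cameras, two depth sensors and a lidar feed the same planning process without the nodes knowing about each other.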
To make matters worse, many robots run on networks outside of the control of their developers and/or operators. It is not uncommon for service robots to be deployed on customers’ premises, outdoors, in public areas or on construction sites. In addition, most robots have some level of mobility, whether moving from one location to another (AMRs, drones, autonomous forklifts, etc.) or manipulating physical objects (robot arms with 6 degrees of freedom, etc.). All this adds up to a number of unique challenges.
-2. The amount of data generated by a single robot is far greater than a typical IoT device
Partly as a corollary of the previous point, modern autonomous robots can generate several orders of magnitude more data than, say, a smart thermostat. This requires a different approach to enable remote management. Robots in the field typically operate in constrained network environments: spotty WiFi coverage in warehouses or retail stores, limited 4G bandwidth, etc.
-3. Robots can interact with people, with each other and with IoT devices
Robots can sometimes operate on their own, relying purely on their on-board sensors. However, real-world deployments will typically have a variety of sensors and often multiple robots, most likely from different vendors. For example, a robot may operate in a warehouse where there are fixed cameras, RFID readers, proximity sensors at loading docks, etc. The same warehouse may use heavier equipment such as autonomous forklifts for moving pallets, fixed arms for de-palletizing, and goods-to-person systems. You can orchestrate all of this through the cloud, creating a “single pane of glass” for all the robot and sensor data. In the warehouse example, operators can have a full operational view that provides real-time data as well as the ability to take coordinated actions when interventions are needed.
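The "single pane of glass" idea amounts to aggregating per-device telemetry into one operational view. A minimal sketch with invented device names and fields:

```python
def fleet_dashboard(telemetry):
    """Aggregate per-device telemetry into one operational view."""
    view = {"devices": len(telemetry), "alerts": []}
    for device, data in telemetry.items():
        if data.get("battery_pct", 100) < 20:
            view["alerts"].append(f"{device}: low battery")
        if data.get("fault"):
            view["alerts"].append(f"{device}: {data['fault']}")
    return view

# Mixed fleet: a mobile robot, a forklift, and a fixed dock sensor.
telemetry = {
    "amr-1": {"battery_pct": 85},
    "forklift-2": {"battery_pct": 15},
    "dock-sensor-3": {"fault": "no reading"},
}
dashboard = fleet_dashboard(telemetry)
```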
_
Robots can be thought of as either a subset or a superset of IoT devices. Robotics and the Internet of Things overlap a great deal because there are many similarities: robots have sensors, actuators and, most importantly, microcontrollers, just as IoT devices do. They are not exactly the same, however; compared with IoT, robotics puts far more focus on mechanical concepts. Both IoT devices and robots depend on sensors to understand the environment around them, quickly process data and determine how to respond. Robots are able to handle unanticipated situations, while most IoT applications can only handle well-defined tasks. The main difference between the IoT and robotics communities is that robots take real action in the physical world – they do something. Focus has been shifting from the cyber component of IoT to the physical aspect, and that is where the efforts are combining.
_
Most modern robots, in fact, are equipped with sensing, computing, and communication capabilities that enable them to execute complex and coordinated operations. These features would be significantly magnified by IoT technology, toward fulfilling the requirements posed by advanced applications in pervasive and distributed environments, especially those characterized by a high level of criticality. In such cases the objective is to capture the largest and broadest information in the operational space, in order to enable information-intensive interaction among its actors. Several entities should complement the robot’s work, such as smart objects, field sensors, servers, and network devices of any kind, connected through a complex and heterogeneous network infrastructure. These challenging goals can be achieved by exploiting a dense IoT network whose devices continuously interact with humans, robots, and the environment.
_
Figure below shows a global reference scenario for IoT-aided robotics applications, in which objects and robots are designed to collaborate to reach a common goal.
The ongoing revolution of the Internet of Things (IoT), together with the growing diffusion of robots in many activities of everyday life, makes IoT-aided robotics applications a tangible reality of our upcoming future. Accordingly, new advanced services based on the interplay between robots and “things” are being conceived to assist humans. IoT-aided robotics applications will grow within a digital ecosystem where humans, robots, and IoT nodes interact on a cooperative basis. In this framework, the actors involved should be free to autonomously agree on secure communication principles, based on the meaning of the information they want to exchange and on the services they intend to provide or access. Both IoT-based and robotics applications have been successfully applied in several scenarios, and recently work has been carried out on the interaction between the two fields.
_
IoT-aided robotics applications are classified in the following fields: health-care, industrial and building, military, and rescue management.
-1. Health-care applications
IoT-aided robotics applications in the health-care field may include:
-2. Industrial plants and smart areas
IoT technology will enable global interaction among robots, smart objects directly integrated into machinery, electrical and electronic devices installed in buildings, and humans, thus paving the way towards the development of a number of advanced services and applications. In particular, data collected from the IoT domain will be delivered to robots to perform, for example, the following operations:
-3. Military applications
IoT-aided robotics applications may cover the following activities:
-4. Rescue management systems
Some possible IoT-aided robotics tasks could be:
______
IoT and robotics tech are evolving together:
Many people think of the Internet of Things (IoT) and robotics as separate fields, but these two niches are growing together. IoT is a network of things connected to the internet, including IoT devices and IoT-enabled physical assets ranging from consumer devices to sensor-equipped connected technology. These items are an essential driver for customer-facing innovation, data-driven optimization, new applications, digital transformation, and new business models and revenue streams across all sectors. IoT devices are usually designed to handle specific tasks, while robots need to react to unexpected conditions; artificial intelligence and machine learning help robots deal with the unexpected conditions that arise. The IoT and robotics communities are coming together to create the Internet of Robotic Things (IoRT): a concept in which intelligent devices can monitor the events happening around them, fuse their sensor data, make use of local and distributed intelligence to decide on courses of action, and then act to manipulate or control objects in the physical world.
_
IoT focuses on supporting services for pervasive sensing, monitoring and tracking, while the robotics community focuses on production action, interaction and autonomous behavior. The term “Internet of Robotic Things” was created for the concept in which sensor data coming from a range of sources is fused, processed with local and distributed intelligence, and used to control and manipulate objects in the physical world. The main difference from the Internet of Things as we know it is that the devices – the robots – take real action in the physical world. In other words: your intelligent device “does” something.
IoRT has three intelligent components:
-1. First, the robot can sense: it has embedded monitoring capabilities and can also take in sensor data from other sources.
-2. Second, it can analyze data from the events it monitors, which means there is edge computing involved. Edge computing means the data is processed and analyzed locally instead of in the cloud, eliminating the need to transmit a wealth of data to the cloud.
-3. Third, because of the first two components, the robot can determine which action to take and then take that action. The robot can control or manipulate a physical object, and if it was designed to, it can move in the physical world. The bigger idea for now is collaboration between machines and between humans and machines. These interactions could move toward predictive maintenance and entirely new services.
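The three components above can be sketched as one sense-analyze-act cycle, with the analysis done locally (at the edge). The readings, threshold and action names below are invented for illustration:

```python
def analyze_at_edge(readings, threshold=50.0):
    """Local (edge) analysis: decide without sending raw data to the cloud."""
    mean = sum(readings) / len(readings)
    return "open_vent" if mean > threshold else "idle"

def act(decision, actuators):
    """Stand-in for driving a real actuator in the physical world."""
    actuators[decision] = True

# 1) sense, 2) analyze locally, 3) act.
readings = [48.0, 53.5, 55.0]          # e.g. temperatures from onboard sensors
actuators = {}
act(analyze_at_edge(readings), actuators)
```

Only the decision (and perhaps a summary) needs to travel to the cloud; the raw sensor stream stays on the device.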
_____
Traditionally, robotics systems include a programmable dimension that is designed for repetitive, labor-intensive work, including sensing, and acting upon an environment (Vermesan et al., 2017b). The emergence of AI and Machine Learning (ML) has allowed robotic things to function using learning algorithms and cognitive decision-making rather than traditional programming. Combining different branches and scientific disciplines (Figure below) makes it possible to develop autonomous programmable systems that combine robotics and machine learning. The IoRT’s multidisciplinary nature brings various perspectives from different disciplines and offers interdisciplinary solutions that consider the reciprocal effects and interactions between the multiple dimensions of next-generation IoRT ecosystems.
Figure above shows IoRT – An interdisciplinary branch of engineering and science.
______
Classic IoRT example:
Collaborative robots and IoRT: FANUC Intelligent Edge Link and Drive:
In the Industrial Internet or Industry 4.0 space, FANUC, a well-known Japanese and globally active manufacturer of industrial and intelligent robots and an expert in factory automation, joined forces with Rockwell Automation, Preferred Networks and Cisco to develop a system called ‘FIELD’ (FANUC Intelligent Edge Link and Drive).
Figure above shows FANUC FIELD. It uses sensors, middleware, deep learning, edge computing and more to enable industrial robotics devices that coordinate and collaborate. Industrial collaborative robots are one of the main areas in IoRT.
Other IoRT examples: from parking lot robots to cleaning robots and Amazon Robotics:
-A robotic device that could check whether a car in a corporate parking lot is authorized to use that lot and, if not, raise an alert.
-Another example is Amazon Robotics‘ warehouse automation fulfillment centers, where mobile robots move bins and pallets and can coordinate their movements (to avoid accidents).
Obviously, these are all still relatively early initiatives. You can imagine applications in the personal robot space, itself a growing phenomenon, whereby robots take real physical action by learning and combining sensor data, whether in garden maintenance, support of the elderly or cleaning. An often-mentioned example in this regard is iRobot (cleaning appliances).
The potential applications of IoRT are plentiful, as it combines applications earlier said to be the domain of either IoT or robotics.
Figure above shows potential applications of IoRT.
_____
_____
Robots, IoT, and Artificial Intelligence are leading Digital Transformation:
Robots, the Internet of Things (IoT), and Artificial Intelligence (AI) are major drivers of digital transformation, giving the world a better way to live and conduct businesses. The combination of these three technologies has the potential to change the shape of the working culture of businesses, industries, and economies. Separately, each technology has its own control over the system and is essential in modern users’ day-to-day activities. People use these technologies to mimic intelligent behavior that can assist their complex work schedules. There is a never-ending list of use cases for AI, IoT, and robots in almost every aspect of our lives. Whether it’s medical care and performing surgical operations precisely, manufacturing, or digital marketing, these modern technologies are everywhere.
According to Grand View Research, the annual growth rate of AI is around 38.1% from 2022 to 2030, whereas Mordor Intelligence says that the annual growth rate of Robots in industries is 17.45% from 2021 to 2026. The IoT market is ready to register a CAGR of 10.53% from 2022 to 2027.
Also, IoT and AI make a perfect pair and can be identified as AIoT, which means both can simultaneously do their job efficiently to provide the best customer experience.
IoT, Robots, and AI are growing rapidly in almost every sector. In today’s ecosystem, these three technologies play a significant role in shaping business processes and lifestyles.
Data is the fuel for every industry to run. IoT, robots, and AI work differently to track the metrics and data, manage it, and gain useful insights. It will help you produce better services and products for your customers, enhance their experience, and encourage them to come back again and again for more. Apart from businesses, these technologies help individuals and homes to make lives better.
In the upcoming years, it is expected that the use of IoT, AI, and robots will further increase and provide more benefits to individuals and businesses.
_______
_______
Section-7
Components of robots:
A typical robot has a movable physical structure, a motor of some sort, a sensor system, a power supply and a computer “brain” that controls all of these elements. Essentially, robots are man-made versions of animal life: they are machines that replicate human and animal behavior.
What materials are robots made of? Most robots use either aluminum or steel. Here are some of the materials to keep in mind when designing and building robots.
How are robot parts made?
Robot parts are made by casting or by welding, then machined. Many robot manufacturers use robots to weld parts for new ones. The areas that mate with the rest of the robot are machined with close dimensional control to assure proper fit and operation of the attaching components.
_
The structure of a robot is usually mostly mechanical and is called a kinematic chain (it is functionally similar to the skeleton of the human body). The chain is formed of links (its bones), actuators (its muscles) and joints, which can allow one or more degrees of freedom. Some robots use open serial chains in which each link connects the one before to the one after it. Robots used as manipulators have an end effector mounted on the last link. This end effector can be anything from a welding device to a mechanical hand used to manipulate the environment. Robotic parts are a blend of metallic and electronic components fused to form the structure of a robot. Each part plays either a structural role, like insulating electrical components, or a functional role, like powering the robot. While robot manufacturers may use parts differently based on their robot design, the overall concept and the functions of such parts are often similar.
_
Components of Robot are depicted in the figure below:
Several components construct a robot, these components are as follows:
Actuators: Actuators are the devices responsible for moving and controlling a system or machine. They achieve physical movements by converting energy, such as electrical, hydraulic or pneumatic energy, into motion. Actuators can create linear as well as rotary motion.
Power Supply: This is an electrical device that supplies electric power to an electrical load. The primary function of the power supply is to convert current from a source into the correct voltage and current to power the load.
Electric Motors: These are the devices that convert electrical energy into mechanical energy and are required for the rotational motion of the machines.
Pneumatic Air Muscles: Air Muscles are soft pneumatic devices that are ideally best fitted for robotics. They can contract and extend and operate by pressurized air filling a pneumatic bladder. The air muscle is a simple yet powerful device for providing a pulling force. When inflated with compressed air, it contracts by up to 40% of its original length. The key to its behavior is the braiding visible around the outside, which forces the muscle to be either long and thin, or short and fat. Since it behaves in a very similar way to a biological muscle, it can be used to construct robots with a similar muscle/skeleton system to an animal. For example, the Shadow robot hand uses 40 air muscles to power its 24 joints.
Hydraulic drive: Modern hydraulic drives in robots work like artificial muscles. Since 2014, Japanese developers have been working on an artificial muscle consisting of a rubber hose, tension-proof fibers, and a protective collar. This system, which imitates a human muscle, does not use compressed air but is moved hydraulically. The hydraulic muscle is more efficient and can also carry out fine movements. The system is also more sturdy than an electric motor. Robots equipped with a hydraulic drive system can withstand unfavorable conditions in disaster zones.
Muscle wires: These are made of a nickel-titanium alloy called Nitinol and are very thin. They can extend and contract when a specific amount of heat or electric current is applied, and they can be formed and bent into different shapes when in their martensitic form. They contract by about 5% when electric current passes through them.
Piezo Motors: The basic purpose of piezo motors is to generate motion based on small deformations of a material when an electrical current is applied. It helps a robot to move in the desired direction.
Sensors: They give robots abilities analogous to human senses such as sight, hearing, touch and motion. Sensors are devices that detect events or changes in the environment and send the data to the computer processor, and they are usually combined with other electronic devices. As with human sense organs, sensors play a crucial role in Artificial Intelligence and robotics: AI algorithms control robots by sensing the environment, and the sensors provide real-time information to the computer processors.
Electroactive polymers: These are classes of plastics that change shape in response to electrical stimulation.
Elastic Nanotubes: These are a promising, early-stage experimental technology. The absence of defects in nanotubes enables these filaments to deform elastically by several percent, with energy storage levels of perhaps 10 J per cubic centimeter for metal nanotubes. Human biceps could be replaced with an 8 mm diameter wire of this material. Such compact “muscle” might allow future robots to outrun and outjump humans.
_
Figure below shows basic components of robots on robot itself:
______
______
Robotic parts in detail:
Given the high level of automation involved, robot structures are quite complex. It may be impossible to define every nut, bolt, and circuit in a robot. There are, however, several components that are fundamental to an industrial robot and thus worth highlighting.
_
-1. Robotic Arm
Figure above shows robotic arm produces dishwashers at an intelligent workshop on Nov. 12, 2021, in Hefei, Anhui Province of China.
A robotic arm is also known as a manipulator. It is the part of an industrial robot that is used to execute tasks. Its structure is akin to that of the human arm and consists of a shoulder, an elbow, and a wrist. The shoulder is the part of the robotic arm linked to the mainframe of the industrial robot. The elbow is the jointed part of the arm that flexes as it moves and the wrist is the end of the arm that performs the actual task.
For flexibility, a robotic arm is fitted with various joints that allow it to move in different directions when working. A 6-axis robotic arm, for example, has more joints than a 4-axis arm which is less flexible. Additionally, the structures of robotic arms vary in terms of how far they can reach and the payloads they can handle.
The robotic arm is frequently used in manufacturing roles. A typical robotic arm is made up of seven metal segments, joined by six joints. The computer controls the robot by rotating individual stepper motors connected to each joint (some larger arms use hydraulics or pneumatics). Unlike ordinary motors, step motors move in exact increments. This allows the computer to move the arm very precisely, performing the same movement over and over. The robot uses motion sensors to make sure it moves just the right amount. An industrial robot with six joints closely resembles a human arm — it has the equivalent of a shoulder, an elbow and a wrist. Typically, the shoulder is mounted to a stationary base structure rather than to a movable body. This type of robot has six degrees of freedom, meaning it can pivot in six different ways. A human arm, by comparison, has seven degrees of freedom. Your arm’s job is to move your hand from place to place. Similarly, the robotic arm’s job is to move an end effector from place to place. You can outfit robotic arms with all sorts of end effectors, which are suited to a particular application. One common end effector is a simplified version of the hand, which can grasp and carry different objects. Robotic hands often have built-in pressure sensors that tell the computer how hard the robot is gripping a particular object. This keeps the robot from dropping or breaking whatever it’s carrying. Other end effectors include blowtorches, drills and spray painters.
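The claim that the arm's job is to move an end effector from place to place can be made concrete with the standard forward-kinematics formula for a planar two-link arm: given the joint angles, compute where the end effector ends up. A minimal sketch (link lengths are arbitrary):

```python
import math

def forward_kinematics(l1, l2, theta1, theta2):
    """End-effector (x, y) of a planar two-link arm (angles in radians)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Both joints at 0: the arm lies straight along the x-axis.
x, y = forward_kinematics(1.0, 0.5, 0.0, 0.0)   # (1.5, 0.0)
```

A six-joint industrial arm uses the same idea in three dimensions, chaining one transform per joint.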
The manipulator is an assembly of various axes that is capable of providing motion in various directions. The manipulator essentially consists of the base, shoulder swivel, elbow extension, and wrist. The “wrist”, located at the end of the robot arm, has 1 to 3 DOF (degrees of freedom), depending on the model. The 3 degrees of freedom are the pitch, yaw, and roll axes. Large manipulators are powered by pneumatic cylinders, hydraulic cylinders or hydraulic motors to drive the various axes of motion. Feedback devices are used to measure the position and velocity of the various axes of motion and send this information to the control systems for use in coordinating the robot motions. They may be simple limit switches actuated by the robot’s arm, or position-measuring devices such as encoders, potentiometers, resolvers, and/or tachometers. Depending on the devices used, the feedback data is either digital or analog. The duty of feedback devices is to report back to the robot controller from the work region: the controller compares the work to be done with the work actually done on the workpiece and calculates the difference. This difference is then given back to the manipulator to complete the remaining work.
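The feedback loop described above, comparing commanded and actual positions and feeding the difference back, is the essence of closed-loop control. A toy proportional controller (the gain, target and cycle count are arbitrary):

```python
def p_controller(target, actual, kp=0.5):
    """Proportional correction: the command is scaled by the position error."""
    return kp * (target - actual)

# Each cycle the axis is driven by the remaining difference (error),
# so the position converges on the target.
position = 0.0
for _ in range(20):
    position += p_controller(10.0, position)
```

Real controllers add integral and derivative terms (PID) plus velocity feedback, but the compare-and-correct cycle is the same.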
Industrial robots are designed to do the same thing over and over. For example, a robot might twist the caps onto peanut butter jars coming down an assembly line. To teach a robot how to do its job, the programmer guides the arm through the motions using a handheld controller. The robot stores the exact sequence of movements in its memory and repeats it every time a new unit comes down the assembly line. Common types of industrial, programmable robot arms include Cartesian, polar, cylindrical and SCARA arms, each of which operates within a differently shaped and coordinated ‘envelope’ of physical space in order to perform its main tasks.
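Teach-and-repeat programming can be sketched as recording a list of poses and replaying them verbatim for every unit. A minimal illustration (the joint angles and jar-capping motions are invented):

```python
class TeachPendant:
    """Record a sequence of joint poses, then replay it for every unit."""
    def __init__(self):
        self.program = []

    def teach(self, joint_angles):
        self.program.append(tuple(joint_angles))

    def replay(self, move):
        for pose in self.program:
            move(pose)            # the same exact sequence, every cycle

pendant = TeachPendant()
pendant.teach([0, 45, 90])        # guide the arm over the jar
pendant.teach([10, 30, 90])       # lower onto the cap
pendant.teach([10, 30, 180])      # twist the cap on

executed = []
pendant.replay(executed.append)   # stand-in for commanding real joints
```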
Most industrial robots work in auto assembly lines, putting cars together. Robots can do a lot of this work more efficiently than human beings because they are so precise. They always drill in the same place, and they always tighten bolts with the same amount of force, no matter how many hours they’ve been working. Manufacturing robots are also very important in the computer industry. It takes an incredibly precise hand to put together a tiny microchip.
You may find robots working alongside construction workers, plastering walls accurately and faster than a human can do the job. Robots assist in underwater exploration. Surgeons use robots to handle delicate surgeries. They even handle flipping burgers in the kitchen. These robots all have a form of robotic arm.
Robotic arms are important in space exploration. NASA uses an arm with seven degrees of freedom — like our own arms — to capture equipment for servicing or to grab asteroids. The 7-foot (2-meter) robotic arm on the Perseverance rover has several special tools it uses as it explores the surface of Mars. A camera helps scientists see what’s going on to guide the arm. There’s also an abrading tool used to grind rock samples and a coring drill can collect samples to store in metal tubes that it drops on the surface for return to Earth on future missions. An X-ray device called PIXL (Planetary Instrument for X-ray Lithochemistry) has a hexapod with six little mechanical legs that it uses to adjust the X-ray for the best angle.
____
-2. End-effector
An end-effector is a tool or device attached to the wrist of a robotic arm. It gives the robotic arm more dexterity and makes it better suited for specific tasks. End-effectors are a more convenient solution than having to make a unique robot arm for each role. A robotic end-effector is any object attached to the robot flange (wrist) that serves a function. This includes robotic grippers, robotic tool changers, robotic collision sensors, robotic rotary joints, robotic press tooling, compliance devices, robotic paint guns, material removal tools, robotic arc welding guns, robotic transguns, etc. Robot end-effectors are also known as robotic peripherals, robotic accessories, robot tools or robotic tools, end-of-arm tooling (EOAT), or end-of-arm devices.
The end effector of a robotic arm is where the work happens. It’s where the contact between the robot and the workpiece happens. As with human beings, who use a very wide array of tools to get things done, so it is with robots. End effectors can be anything from a welding tool to a vacuum cleaner.
It is often useful to be able to change tools automatically. A special fixture, usually mounted on a surface external to the robot, holds a variety of tools that the robot arm can swap in and out. In this way, the robot can perform different tasks on a workpiece.
Here’s an example of how this feature can be used: a robot arm drills a hole in a piece of metal, then swaps tools and deburrs the hole it just made. The robot swaps tools again and uses a tapping tool to cut threads in the hole.
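The drill/deburr/tap sequence can be sketched as a job runner that swaps tools only when the next step needs a different one. The tool names and callbacks below are illustrative stand-ins for real tool-changer and machining commands:

```python
def run_job(steps, swap_tool, use_tool):
    """Swap to each required tool, then apply it to the workpiece."""
    current = None
    for tool, operation in steps:
        if tool != current:
            swap_tool(tool)      # fetch the tool from the external fixture
            current = tool
        use_tool(tool, operation)

log = []
run_job(
    [("drill", "drill hole"), ("deburrer", "debur hole"), ("tap", "cut threads")],
    swap_tool=lambda t: log.append(f"swap -> {t}"),
    use_tool=lambda t, op: log.append(f"{t}: {op}"),
)
```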
_____
-3. Motors
The parts of an industrial robot need to be powered to move, as they cannot move of their own volition. For this reason, parts like robotic arms are fitted with motors to facilitate motion. The vast majority of robots use electric motors, often brushed and brushless DC motors in portable robots or AC motors in industrial robots and CNC machines. These motors are often preferred in systems with lighter loads, and where the predominant form of motion is rotational. Stepper motors do not spin freely like DC motors; they rotate in steps of a few degrees at a time, under the command of a controller. This makes them easier to control, as the controller knows exactly how far they have rotated without having to use a sensor. They are therefore used on many robots and CNC machining centers.
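Because a stepper moves in fixed increments, the controller can compute the shaft angle from its own step count, with no sensor. A sketch using a common 1.8° step angle (the figure is typical for many steppers, not universal):

```python
def steps_for_angle(target_degrees, step_angle=1.8):
    """How many whole steps a stepper must take to reach a target angle."""
    return round(target_degrees / step_angle)

def angle_from_steps(steps, step_angle=1.8):
    """The controller infers the shaft angle from its own step count alone."""
    return steps * step_angle

steps = steps_for_angle(90)       # 50 steps of 1.8 degrees
angle = angle_from_steps(steps)   # 90.0 degrees, no encoder required
```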
Recent alternatives to DC motors are piezoelectric ultrasonic motors. These work on a fundamentally different principle, whereby tiny piezoceramic elements, vibrating many thousands of times per second, cause linear or rotary motion. There are different mechanisms of operation: one type uses the vibration of the piezo elements to step the motor in a circle or a straight line, while another uses the piezo elements to cause a nut to vibrate or to drive a screw. Piezoelectric ultrasonic motors have superior characteristics such as high torque at low speed, frictional locking when powered off, nanometer resolution, absence of magnetic interference and compact size, which make them good candidates for robotics, automation, medical applications and various other fields. These motors are already available commercially and are being used on some robots.
_
Actuator vs. motor:
The term “actuator” typically refers to a device that provides linear motion, like a piston: a rod is pushed in a linear manner when voltage is applied. A motor is a device that provides rotational movement; in a toy car, for example, the motor spins a wheel, usually through gears, to reduce the speed and increase the torque. A motor is designed to spin from the shaft, often at high speed or revolutions per minute (RPM). Think of actuators as devices that produce linear motion and motors as devices that produce rotational movement. In the broader engineering sense, however, a motor is one type of actuator, since an actuator is any device that converts stored energy into motion; actuators can be pneumatically or hydraulically operated as well. The motors we talk about here are usually electrical, used to meet rotational speed and torque requirements.
_
Actuator:
An actuator is the motor of an industrial robot and is sometimes referred to as the drive. Actuators are responsible for creating specific motions and controlling the movements of the articulated robot. Robots typically have multiple actuators, corresponding to the number of axes the robot has. For instance, the FANUC M20ia has six axes, so it has six actuators. Each actuator functions as a joint of the robot, controlling a specific robotic movement. For example, the actuator located in the third axis of the Yaskawa 1440 controls the vertical movement of its arm. Servo motors are the most common actuators used in industrial robots, since their advanced functionality allows for extreme precision. Robotic actuators are usually powered electrically but may also be hydraulically or pneumatically powered. The actuator helps the brain of the robot respond to the surrounding environment: it helps the robot move its hands (grippers) and its feet (wheels and casters).
Figure above shows a robotic leg powered by air muscles. Actuators are the “muscles” of a robot, the parts which convert stored energy into movement. By far the most popular actuators are electric motors that rotate a wheel or gear, and linear actuators that control industrial robots in factories. There are some recent advances in alternative types of actuators, powered by electricity, chemicals, or compressed air.
Reduction Gear:
One of the benefits of industrial robots is their incredible power, which allows them to lift heavy payloads or operate at fast speeds. Robotic actuators generate this power with their movements; however, they are limited in how much they themselves can produce. Reduction gears are used to increase the torque of each actuator, and the combination of reduction gears and actuators produces the high power needed for industrial robot operation. Each actuator has its own reduction gear. Since the four-axis Motoman EPL160 has four actuators, it needs four reduction gears.
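The trade an ideal N:1 reduction gear makes can be stated in one line: output speed is divided by N while output torque is multiplied by N (times the gear's efficiency). A small sketch with invented, illustrative numbers rather than any specific robot's specification:

```python
# Effect of an N:1 reduction gear on an actuator's output:
# torque is multiplied by the ratio (less losses), speed is divided by it.

def through_reduction(motor_torque_nm, motor_speed_rpm, ratio, efficiency=0.9):
    """Return (output torque in N*m, output speed in rpm) for an N:1 gear.

    efficiency models frictional losses in the gear train (assumed 90%).
    """
    out_torque = motor_torque_nm * ratio * efficiency
    out_speed = motor_speed_rpm / ratio
    return out_torque, out_speed

# A small 2 N*m motor spinning at 3000 rpm through a 100:1 reduction:
torque, speed = through_reduction(motor_torque_nm=2.0, motor_speed_rpm=3000, ratio=100)
print(torque, speed)  # 180 N*m of joint torque at a slow 30 rpm
```

This is why a modest servo motor behind a high-ratio reducer can move a heavy robot arm: the gearing converts speed the joint does not need into torque it does.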
Encoder:
Encoders provide data about a robot's movement, such as how far it has traveled or in what direction it moved. This data is critical for controlling the speed and position of the robot's motors, enabling greater precision and accuracy in the operation of an industrial robot. An encoder is integrated with each actuator of a robot. It typically consists of a disk with fixed slits at recurring intervals, attached to the rotating base of an actuator. Light either transmits through the slits or is blocked, which signals the rotation and speed of the actuator. Six-axis robots like the FANUC Arcmate 120ic have six encoders.
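The controller turns the raw pulse counts from such a disk into position and speed with simple arithmetic. A minimal sketch, assuming an incremental encoder producing 1024 counts per shaft revolution (an illustrative figure, not a specific product's resolution):

```python
# Converting incremental encoder counts into shaft position and speed.

COUNTS_PER_REV = 1024   # assumed encoder resolution

def shaft_position_deg(counts: int) -> float:
    """Shaft angle in degrees implied by the accumulated count."""
    return (counts % COUNTS_PER_REV) * 360 / COUNTS_PER_REV

def shaft_speed_rpm(delta_counts: int, delta_t_s: float) -> float:
    """Shaft speed from the change in count over one sampling interval."""
    revolutions = delta_counts / COUNTS_PER_REV
    return revolutions / delta_t_s * 60

print(shaft_position_deg(256))     # a quarter of the disk: 90.0 degrees
print(shaft_speed_rpm(1024, 0.5))  # one full rev in 0.5 s: 120.0 rpm
```

Real controllers run the speed calculation at a fixed sampling rate and also read a second, phase-shifted channel to determine direction; both details are omitted here for brevity.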
Transmission:
The transmission is the component responsible for transmitting power between the actuator and the reduction gear. It takes the power created by the actuator and transfers it through the reduction gear to the joint, allowing the motor's effective power output to be multiplied. A transmission can also change the angle and intensity of the power.
____
-4. Sensors
Robot sensors are like human senses. Robots can see, hear and have a sense of touch. They can even be supplied with a sense of smell and taste. Industrial robots might use a sense of “smell” to test air quality in a mine. They could detect noxious gases or leaking contaminants. There are also tasting robots. They can test the quality of food and discover the presence of harmful chemicals. But the most common robotic sense currently used for industrial applications is for vision.
Sensors in robots are devices that detect or measure specific parameters and trigger a corresponding reaction to them. They are implanted in robot structures for safety and control purposes. Safety sensors are used to detect obstacles to prevent human-robot and robot-robot collisions. They are a more recent addition to robot structures and more particularly, in collaborative robots. Control sensors, on the other hand, are used to receive prompts from an external controller which the robot then executes. A safety sensor will detect an obstacle, send a signal to the controller which in turn slows or stops the robot to avoid a collision. In essence, a sensor always works in conjunction with the controller. Other parameters that robot sensors detect include position, velocity, temperature, and torque.
_
Types of Robot Sensors:
There are different types of sensors available to choose from, and the characteristics of a sensor determine which type is suitable for a particular application.
Light Sensor:
A light sensor is a transducer that detects light and creates a voltage difference proportional to the light intensity falling on it. The two main light sensors used in robots are photovoltaic cells and photoresistors. Other kinds of light sensors, such as phototransistors and phototubes, are rarely used.
The variety of optical sensors now available for robots is impressive indeed. Some sensors use optical methods to determine the roughness of a surface. Others can measure the thickness of a film. Still others discover the precise color of objects. A robot can be equipped with a microscope, which opens up a world of possibilities, since many measurements can be performed with one. Optical sensors can measure the rate of flow of a liquid. Flow can also be measured in other ways, such as with electromagnetic sensors, or with a kind of paddle wheel that sends pulses; the pulses occur more rapidly when the wheel spins faster. Position and velocity can also be measured with optical sensors, and the sensors need not be cameras. However, robots also use different types of camera technology, including 2D imaging, 3D sensing, ultrasonic, and infrared. For a robot with machine vision that does not require information on depth or distance, conventional 2D digital cameras are the most popular choice. Robot vision works by integrating one or more cameras into the robotic system.
Note: All digital cameras have sensors in them, but sensors can be installed without cameras.
Proximity Sensor:
A proximity sensor can detect the presence of a nearby object without any physical contact. Its working principle is simple: a transmitter emits electromagnetic radiation, and a receiver receives and analyzes the return signal for interruptions. The amount of reflected radiation the receiver picks up from the surroundings can therefore be used to detect the presence of a nearby object.
Two types of proximity sensors commonly used in robotics are:
Infrared (IR) Transceivers – An LED transmits a beam of IR light; if it hits an obstacle, the light is reflected back and captured by an IR receiver.
Ultrasonic Sensor – A transmitter generates high-frequency sound waves; a received echo pulse indicates an object interrupting the beam.
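Turning an ultrasonic echo into a distance is just timing arithmetic: the pulse travels to the obstacle and back, so the one-way distance is half the round trip at the speed of sound. A minimal sketch, assuming roughly 343 m/s (the speed of sound in air at about 20 °C):

```python
# Ultrasonic ranging: distance = speed of sound * round-trip time / 2.

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C (assumed)

def echo_distance_m(echo_time_s: float) -> float:
    """Distance to the obstacle from the measured echo round-trip time."""
    return SPEED_OF_SOUND * echo_time_s / 2

# A 10-millisecond round trip corresponds to roughly 1.7 m:
print(echo_distance_m(0.01))
```

In practice the speed of sound drifts with temperature and humidity, so precise systems measure or compensate for air temperature rather than hard-coding the constant.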
Sound Sensor:
Sound sensors are generally microphones used to detect sound and return a voltage proportional to the sound level. Using a sound sensor, a simple robot can be designed to navigate based on the sound it receives. Implementing sound sensors is not as easy as light sensors, because a microphone generates a very small voltage difference that must be amplified to produce a measurable voltage change.
Temperature Sensor:
Temperature sensors sense changes in the temperature of the surroundings. They are based on the principle that a voltage difference changes with temperature; this change in voltage provides the equivalent temperature value of the surroundings.
Acceleration Sensor:
An acceleration sensor, or accelerometer, is a device used for measuring acceleration and tilt.
The two kinds of forces that affect an accelerometer are:
Static Force – The constant pull of gravity on the sensor. By measuring this gravitational force we can determine how much the robot is tilting. This measurement is useful for balancing a robot, or for determining whether the robot is driving on a flat surface or uphill.
Dynamic Force – The force associated with accelerating an object. Measuring dynamic force with an accelerometer indicates how the robot is accelerating, from which its change in velocity can be derived.
Accelerometers come in different configurations; always use the one most appropriate for your robot. Factors to consider before selecting an accelerometer include:
1. Sensitivity
2. Bandwidth
3. Output type: analog or digital
4. Number of axes: 1, 2 or 3
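The static (gravity) reading described above is enough to estimate tilt: when the robot is not accelerating, the accelerometer measures only the 1 g gravity vector, and its direction tells us how the robot is tipped. A minimal sketch, assuming the usual axis convention in which z points up when the robot sits level:

```python
import math

# Tilt estimation from a 3-axis accelerometer's static gravity reading.

def pitch_roll_deg(ax: float, ay: float, az: float):
    """Pitch and roll in degrees from accelerometer readings in g.

    Assumes the robot is stationary, so the sensor sees only gravity.
    """
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay**2 + az**2)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Level robot: gravity lies entirely along z, so pitch and roll are 0.
print(pitch_roll_deg(0.0, 0.0, 1.0))
# Tipped sideways so half of gravity shows on y: roll is close to 30 degrees.
print(pitch_roll_deg(0.0, 0.5, 0.866))
```

The stationarity assumption matters: while the robot accelerates, the dynamic force corrupts this estimate, which is why practical systems fuse accelerometer tilt with gyroscope data.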
Tactile sensor:
A tactile sensor is a device that measures information arising from physical interaction with its environment. Tactile sensors are generally modeled after the biological sense of cutaneous touch which is capable of detecting stimuli resulting from mechanical stimulation, temperature, and pain (although pain sensing is not common in artificial tactile sensors). Tactile sensors are used in robotics, computer hardware and security systems. A common application of tactile sensors is in touchscreen devices on mobile phones and computing. Tactile sensors may be of different types including piezoresistive, piezoelectric, optical, capacitive and elastoresistive sensors.
Robots designed to interact with objects requiring handling involving precision, dexterity, or interaction with unusual objects, need sensory apparatus that is functionally equivalent to a human's tactile ability, and tactile sensors have been developed for this purpose. Tactile sensors can complement visual systems by providing added information when the robot begins to grip an object. At that point vision is no longer sufficient, as the mechanical properties of the object cannot be determined by vision alone. Determining weight, texture, stiffness, center of mass, coefficient of friction, and thermal conductivity requires object interaction and some sort of tactile sensing. Several classes of tactile sensors are used in robots of different kinds, for tasks spanning collision avoidance and manipulation. Some methods for simultaneous localization and mapping are based on tactile sensors.
Figure above shows uSkin Sensor by XELA Robotics, a high-density 3-axis tactile sensor in a thin, soft, durable package, with minimal wiring.
Laser scanners:
The introduction of laser technology into industrial applications has changed the way many things are done. Lasers are used in handheld barcode scanners. They can make precise measurements of machined parts. Lasers are used to measure large distances, too. Complex vision systems use lasers. Computer vision means mobile robots can make their way autonomously, avoiding obstacles in their path.
Laser scanners for reading barcode labels are fast, accurate, and low cost. Some scanners are handheld and used by people in inventory management. Handheld laser scanners are also used in materials handling and manufacturing tasks. Laser barcode scanners can be put on Autonomous Mobile Robots (AMRs) in warehouses to help in the order picking process. Scanners can be mounted on aerial drones that fly through warehouse aisles. The drones read barcodes and use computer vision to count items in boxes. Aerial drones can take inventory in a fraction of the time it takes people to do it.
Laser barcode scanners are not the only way to keep track of items. One could use RFID-based scanners. RFID (Radio Frequency IDentification) has the advantage that the label need not be visible, and it can still be read. This is because RFID uses radio waves instead of light. But RFID labels are more expensive than bar codes.
One of the most common uses for laser scanners is industrial robotic vision. These scanners use LiDAR, which stands for Light Detection And Ranging. LiDAR is like RADAR, which was invented during World War II and is short for RAdio Detection And Ranging. In both cases, the principle is similar: the sensor sends out a pulse of electromagnetic energy and then detects the reflection that bounces off the nearest object. The time it takes for the reflection to come back is measured. A longer time means the object is farther away; a shorter time means it is closer. The time is proportional to the distance from the sensor to the object. In this way, lasers can be used to precisely measure the distance to a single point.
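The proportionality is the same as for the ultrasonic sensor, only with the speed of light: distance equals speed times round-trip time, halved because the pulse travels out and back. A minimal sketch:

```python
# LiDAR time-of-flight ranging: d = c * t / 2.

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def lidar_distance_m(round_trip_s: float) -> float:
    """One-way distance from the measured pulse round-trip time."""
    return SPEED_OF_LIGHT * round_trip_s / 2

# A reflection returning after 66.7 nanoseconds: just under 10 metres.
print(lidar_distance_m(66.7e-9))
```

The nanosecond timescale is the engineering challenge here: resolving centimetres requires timing circuits accurate to tens of picoseconds, which is why LiDAR sensors are considerably more complex than ultrasonic ones.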
LiDAR can be used in one dimension, 2-D, and 3-D.
An example of LiDAR in one dimension is a laser tape measure. You can quickly and accurately measure the dimensions of a room or a building. For industrial applications, lasers are used to precisely measure the depth of a cut done by a machine tool or robotic milling machine. Robot arms with LiDAR can measure the size of a part for quality control.
In a 2-D configuration, a laser beam is scanned back and forth. The scanning might go in a complete circle or it might go only through a part of the circle. The laser beam stays within a two-dimensional plane. For an Autonomous Mobile Robot (AMR), this plane is horizontal. It is often a few centimeters above the ground. In this way, the AMR can use its LiDAR to detect objects in its pathway. The robot uses this awareness to determine whether it is safe to proceed along the planned route. If there is something blocking its path, the robot can swerve or stop. But 2-D LiDAR has the limitation that it cannot detect objects above or below the plane of the laser scanning. In effect, the robot is “blind” to anything that is not in the plane of the 2-D LiDAR. Using 3-D LiDAR can overcome this limitation.
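Each reading in such a 2-D scan is a bearing-range pair in the scan plane; converting to x, y coordinates lets the AMR check whether any return falls inside a safety corridor ahead of it. A minimal sketch; the scan data, corridor width, and lookahead distance are all made-up illustrative values:

```python
import math

# Checking a 2-D LiDAR scan for obstacles in the robot's path.

def scan_to_points(scan):
    """scan: list of (angle_deg, range_m) pairs -> list of (x, y) in metres.

    Angle 0 is straight ahead; x points forward, y to the robot's left.
    """
    return [(r * math.cos(math.radians(a)), r * math.sin(math.radians(a)))
            for a, r in scan]

def path_blocked(scan, corridor_half_width=0.4, lookahead=2.0):
    """True if any return lies in the rectangle directly ahead of the robot."""
    return any(0 < x < lookahead and abs(y) < corridor_half_width
               for x, y in scan_to_points(scan))

clear_scan = [(-90, 3.0), (0, 5.0), (90, 3.0)]    # nothing within 2 m ahead
blocked_scan = [(-90, 3.0), (0, 1.0), (90, 3.0)]  # object 1 m dead ahead
print(path_blocked(clear_scan))    # False
print(path_blocked(blocked_scan))  # True
```

A real scan has hundreds of beams per sweep rather than three, but the same polar-to-Cartesian conversion and corridor test underlie the "swerve or stop" decision described above.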
With 3-D LiDAR, the system scans the laser beam in a plane (like 2-D LiDAR), and then the plane is tilted up and down. Adding the tilting action means the system covers a three-dimensional space. The drawback to 3-D scanning is that it requires more computing power. The system gathers much more information, so it is a challenge to process all that information and do it in real-time. This requires more powerful computers. Also, the mechanical components of 3-D LiDAR are more complex. Therefore, 3-D scanners are more expensive than 2-D scanners. It all depends on the application, whether 2-D or 3-D scanning is appropriate.
Of course, there are limitations to LiDAR. Direct sunlight can blind a LiDAR sensor, although LiDAR can handle more intense sunlight than many other kinds of sensors. The object that reflects the laser beam also matters: the material and color of the reflecting object can affect the accuracy of LiDAR. Dust, dirt, and debris can clog the lens of a LiDAR sensor, reducing its sensitivity and accuracy.
_____
-5. Robot Controllers
Think of the controller as the brain of a robot. The robot controller is a computer, composed of hardware and software, linked to the robot and essentially functioning as its “brain”. Controllers have all of the characteristics associated with computers and contain sophisticated decision-making and data storage capabilities. They initiate and terminate the motion of the manipulator through interfaces with the manipulator’s control valves and feedback devices, perform complex arithmetic functions to control path, speed, and position, and provide two-way communications between the controller and ancillary devices. In smart robots, artificial intelligence is also built into the controller via software.
The Robot Control System (RCS) acts as the brain of the robot. It coordinates and sequences the motion of the various axes of the robot and provides communication with external devices and machines. Programs may be written in AML (A Manufacturing Language), a high-level language developed specifically for industrial robot applications. The operator interacts with the RCS through a standard video terminal, which is used to create and edit programs, enter robotic commands, execute programs, and generate data points during the robot training phase.
Robot controllers come in a variety of shapes and sizes. Some are small, handheld tablets. These are used to control a simple work cell. Other robot controllers can control complex manufacturing and logistics processes. The robot controller is crucial in determining how easy it is to get a robotic system to do what you want. The robot controller is a critical part of how well the robot performs its job.
Robot controllers are responsible for safety, logic, and motion control. How quickly a robot responds to an external event is often a critical measure of a robot controller. Some applications need a faster response time than others. This can determine the kind of robot controller needed. The Human-Machine Interface (HMI) of a robot controller is another important aspect. One popular HMI is a “teach pendant” that is a handheld, tablet-style device. The teach pendant is used when teaching the robot what to do. Once the robot is ready for production, the teach pendant can be removed.
In a factory, it is more common to find a hard-wired connection between a robot controller and the robot. The wired connection provides a reliable and safe interface. Safety regulations sometimes require a wired connection. This is not true for Autonomous Mobile Robots (AMRs). An AMR would not be of much use if it had to have a wire attached to its controller! Wireless industrial robot controllers are also available. Depending on the application, they may have advantages over the wired systems.
There are three broad categories of robotic controllers: the Programmable Logic Controller (PLC), the Programmable Automation Controller (PAC), and the Industrial PC (IPC).
The PLC is the oldest technology and the lowest cost type of robot controller. It is used for simple applications that do not need complex motion control. The data logging ability of a PLC is also less capable than other types of robot controllers. The PLC will have fewer kinds of input/output devices.
The PAC represents an updated version of the PLC. The PAC has more computing power and greater capability. There is a very broad range of applications for which a PAC is a good fit.
The IPC has the greatest computing power, and it is also the most expensive type of robot controller. It can handle complex motions and can communicate via a wide variety of interfaces. The IPC can handle and store very large amounts of data.
The distinctions between these three types of controllers become more blurred with time. Today, there are really not three separate categories of robot controllers. It is more of a continuum. In deciding between different robot controllers, one important factor is software. Look for application-specific software packages. The application package will determine how easy it is to get up and running. It will also influence how much support you can expect for your particular needs.
_
Robotic control systems broadly fall into two overarching categories, namely:
Simpler pre-programmed robots are designed purely to repeat the same basic operations over and over, and can only respond in very limited ways (if at all) to changes in the external environment. In other words, they require the maintenance of suitable conditions in which to perform their intended tasks properly.
More complex autonomous robots will be fitted with a range of sensors and other equipment that allow them to detect and respond to external factors or environmental changes.
_
Adaptive control for robots may be defined as the modification in its behavior such that it will adjust to the changes in its working environment. The main advantage of adaptive control lies in its flexibility, the ability to change the control program easily to a new function, or adjust to new parts as the need arises.
The two components that provide adaptive control are the Robot Control Unit (RCU) and the Sensor Control Unit (SCU).
The Robot Control Unit was already discussed above, so let's discuss the Sensor Control Unit (SCU). The SCU provides visual information about the scene to be analyzed. It takes the information from each camera, analyzes the image, and transposes that information into the robot's coordinate system.
An image of an object comes through the camera lens and falls on an image plane inside the camera. The most useful camera for machine vision is the solid-state matrix array camera, which was originally designed for military space use. Adaptive control of a robot eliminates the need for accurate fixturing of workpieces, precise fabrication tolerances of equipment, and accurate teaching of coordinate data. The 3D coordinate information is analyzed by the Sensor Control Unit (SCU) and sent to the Robot Control Unit (RCU), so that action can be taken by the control unit and delivered to the manipulator.
_____
-6. Power Supply:
For a robot to work, we need a power supply; it acts as food for the robot. Unless you feed your robot, it cannot work! The power supply is the source of energy used to drive the robot's mechanisms. The energy comes from three sources: electric, hydraulic, and pneumatic. Electric drives have a high degree of accuracy and repeatability, and offer a wide range of payload capacities, accompanied by an equally wide range of costs. Hydraulic drives have high payload capacities and are relatively easy to maintain; they are, however, rather expensive and not as accurate as electric drives. Pneumatic drives, although limited to smaller payloads, are relatively inexpensive, fast, and reliable.
Stationary robots, such as those found in a factory, may run on AC power through a wall outlet, but more commonly, robots operate via an internal battery. Most robots utilize lead-acid batteries for their safe qualities and long shelf life, while others may utilize the more compact but also more expensive silver-cadmium variety. Safety, weight, replaceability and lifecycle are all important factors to consider when designing a robot's power supply. Lithium polymer (Li-Po) batteries are becoming the most popular type for use in robotics because of their light weight, high discharge rates and relatively good capacity, though their voltage ratings are only available in increments of 3.7 V (the nominal voltage of a single cell).
For robotic applications (in fact most major applications), we need a DC supply (usually 5 V, 9 V, or 12 V DC, sometimes as high as 18 V, 24 V, 36 V, etc., as required). The best way to provide this is to use a battery (as it provides DC directly) or to use an SMPS/eliminator to convert AC to DC. But voltage is not the only thing that matters when choosing a supply: your power source should also be able to supply sufficient current to drive all the loads connected to it, directly or indirectly. Evolving battery technology has affected a wide swath of electrical and electronic devices. Better batteries mean longer operating times and shorter charging intervals. The improvements have made Autonomous Mobile Robots (AMRs) practical and cost-effective.
Some potential power sources for future robotic development also include pneumatic power from compressed gasses, solar power, hydraulic power, flywheel energy storage, organic garbage through anaerobic digestion and nuclear power.
_____
-7. Robot Base / Mounting System
Stationary robots with robotic arms need to be securely mounted to perform their job. There are many options from which to choose.
A pedestal mount is useful when you need to elevate the robot arm. The arm may need to be raised up to access conveyor systems and work surfaces. Mounts can be bolted to the floor. Mounts can also have casters, so they can be easily moved around.
There are applications for which it is ideal to have the robot mounted in an inverted position. There are special mounts for this. An inverted orientation can often maximize the reach of the arm. Other applications might call for the robot to be mounted vertically. It might be fastened to the side of a machine. Once the position is determined, the software that comes with the robot arm will need to be adjusted.
Modular mounting systems are available for fastening sensors. Examples include cameras, cables, and hoses. Some sensor mounting systems are best for their strength and durability. Others emphasize flexibility and light weight for portability. Adjustable levers allow for proper positioning of the sensors and cables.
_____
-8. Drivetrain:
Although some robots are able to perform their tasks from one location, it is often a requirement of robots that they are able to move from location to location. For this task, they require a drivetrain. Drivetrains consist of a powered method of mobility. Humanoid style robots use legs, while most other robots will use some sort of wheeled solution.
____
-9. Robot Safety Components
Robots can relieve people from dirty, dull, and dangerous work. And they can improve the safety of working conditions. Yet, if not used in the right way, robots can become a dangerous hazard. Making sure your automation solution is safe is of the utmost importance.
Robot Safety PLCs:
An ordinary Programmable Logic Controller (PLC) will typically have one microprocessor. It will also have memory and input/output circuits. A safety PLC has redundancy built in. A safety PLC may have two, three, or four processors. Watchdog circuits check the health of each of the processors. If something goes wrong, the watchdog circuits sound an alarm.
Some PLCs will have an output without a corresponding input. In contrast, a safety PLC features matching inputs and outputs. This means tests can be constantly made to verify the proper connectivity and health of a circuit.
There are some applications where an ordinary PLC may be fine. A PLC will have emergency stop (e-Stop) functions. These can include light curtains or proximity sensors. This may be enough to provide safety for your associates. But there are many applications for which a safety PLC is the best choice. One costly mistake or accident can far outweigh the extra cost of a safety PLC.
Robot Safety Sensors / Laser Scanners / Light Fences:
Laser area scanners can detect the presence of people near an industrial robot. The laser scanner can inform the robot to slow down if someone enters the outermost zone. The slower speed might be 50% of the usual speed. If someone enters a second zone, closer to the robot, the speed can be slowed to perhaps 25%. If a person is detected in the closest zone, the robot will stop. The user can determine the size of these zones. The user can customize what responses the robot makes.
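The zone logic just described is simple enough to sketch directly: the closer a person gets, the lower the allowed speed fraction, down to a full stop in the innermost zone. The zone radii and speed fractions below are user-configurable values, invented here for illustration:

```python
# Zone-based speed scaling driven by a safety laser scanner.

ZONES = [            # (zone outer radius in metres, allowed speed fraction)
    (0.5, 0.0),      # person closer than 0.5 m: stop
    (1.5, 0.25),     # person within 1.5 m: quarter speed
    (3.0, 0.5),      # person within 3.0 m: half speed
]

def allowed_speed_fraction(person_distance_m: float) -> float:
    """Speed limit implied by the nearest detected person."""
    for radius, fraction in ZONES:
        if person_distance_m < radius:
            return fraction
    return 1.0       # nobody in any zone: full speed

print(allowed_speed_fraction(4.0))  # 1.0
print(allowed_speed_fraction(2.0))  # 0.5
print(allowed_speed_fraction(0.3))  # 0.0
```

In a certified installation this logic runs on the safety-rated controller, not application code; the sketch only shows the shape of the zone-to-speed mapping the user configures.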
A variety of safety devices can and should be used with robots. Bigger and heavier robots need a higher level of safety than smaller ones. One popular safety method is to use a light fence or a light curtain. The “fence” consists of beams of light around the industrial robot. If something breaks the light beams, the robot might go into an emergency stop, for example.
Robot Fencing:
Sometimes the safest way to maintain productivity and safety is to fence off a robot into its own separate area. A variety of such fences are available. Different features include the height of the fence and the size of the openings in the fencing material. Fence posts with self-leveling feet built-in are sometimes desirable. The strength of the fence is also a consideration. Should the fencing be made of metal wire, perforated metal sheets, or Plexiglas? Your application may need a fencing material that protects against heat or electricity.
_____
-10. Conveyor belts
Robots and conveyor systems are frequent companions. The robot may take items off of a conveyor to begin its cycle, or it may place parts onto the conveyor at the end of its cycle. And, of course, it might do both. There are many different types of conveyor systems from which to choose. Some conveyor systems are easy to sanitize. This makes them a good choice for food processing operations. Other features to consider are the speed and width of the conveyor system. Its height, angle of greatest incline, and the amount of weight it can handle are all considerations.
_____
-11. Vibrating feeders
A vibratory feeder is an instrument that uses vibration to feed material through a process or a machine while controlling the rate of flow. Vibratory feeders utilize both vibration and gravity to move material forward. Robots combine well with vibrating feeders, especially for pick-and-place and assembly operations. Small parts are fed into the vibrating feeder, which then moves them to the robot. The feeder can present the parts so they all arrive in the same orientation, which makes it easier for the robot to pick them up.
_____
_____
Technical description: Defining parameters:
Accuracy and repeatability are different measures. Repeatability is usually the most important criterion for a robot and is similar to the concept of ‘precision’ in measurement. ISO 9283 sets out a method by which both accuracy and repeatability can be measured. Typically a robot is sent to a taught position a number of times, and the error is measured at each return to the position after visiting four other positions. Repeatability is then quantified using the standard deviation of those samples in all three dimensions. A typical robot can, of course, make a positional error exceeding its specified repeatability, and that could be a problem for the process. Moreover, repeatability differs in different parts of the working envelope and also changes with speed and payload. ISO 9283 specifies that accuracy and repeatability should be measured at maximum speed and at maximum payload, but this results in pessimistic values, whereas the robot can be much more accurate and repeatable at light loads and low speeds. Repeatability in an industrial process is also subject to the accuracy of the end effector, for example a gripper, and even to the design of the ‘fingers’ that match the gripper to the object being grasped. For example, if a robot picks up a screw by its head, the screw could be at a random angle, and a subsequent attempt to insert the screw into a hole could easily fail. These and similar scenarios can be improved with ‘lead-ins’, e.g. by making the entrance to the hole tapered.
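The ISO 9283-style computation can be sketched directly: record where the robot actually stops on each visit to the taught position, find the cluster centre, and quantify the scatter of the distances from it. The sample positions below are invented values in millimetres; the mean-plus-three-sigma form is the standard's pose repeatability (RP) measure:

```python
import statistics

# Pose repeatability from repeated visits to one taught position.

def repeatability_mm(positions):
    """RP = mean distance from the cluster centre + 3 * std dev of distances."""
    n = len(positions)
    cx = sum(p[0] for p in positions) / n
    cy = sum(p[1] for p in positions) / n
    cz = sum(p[2] for p in positions) / n
    dists = [((x - cx)**2 + (y - cy)**2 + (z - cz)**2) ** 0.5
             for x, y, z in positions]
    return statistics.mean(dists) + 3 * statistics.stdev(dists)

# Five invented returns to the same taught point, errors in mm:
visits = [(0.00, 0.01, -0.02), (0.02, -0.01, 0.00),
          (-0.01, 0.00, 0.01), (0.01, 0.02, -0.01), (-0.02, -0.02, 0.02)]
print(f"RP = {repeatability_mm(visits):.3f} mm")
```

Note this sketch measures only scatter, not accuracy: a robot that always stops 1 mm from the taught point in the same spot would score an excellent RP while being systematically inaccurate, which is exactly the distinction the paragraph above draws.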
______
______
Axis of robot:
Industrial robots have various axis configurations, depending on the task and the needed range of motion. They have also come down in size, which allows them to execute tasks in smaller-scale applications and reduces their footprint.
An axis in robotic terminology represents a degree of freedom (DOF). For example, if a robot has three degrees of freedom, it can move in the x, y, and z directions; however, it cannot tilt or turn. Increasing the number of axes allows the robot to access a greater amount of space by giving it more degrees of freedom. More axes mean more functionality:
_
Most industrial robots utilize six axes, which give them the ability to perform a wide variety of industrial tasks compared to robots with fewer axes. Six axes allow a robot to move in the x, y, and z directions, as well as orient itself using roll, pitch, and yaw movements. This functionality is suitable for complex movements that simulate a human arm: reaching under something to grab a part and place it on a conveyor, for example. The additional range of movement allows six-axis robots to do more things, such as welding, palletizing, and machine tending. Other advantages of six-axis robots include mobility (easy to move and/or mount) and wide horizontal and vertical reach. They are especially used in automotive and aerospace manufacturing, where they perform drilling, screw driving, painting, and adhesive bonding. Because they are coming down in price, it is now feasible even for smaller manufacturers to invest in this technology.
Figure above shows six axes of robot.
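The six quantities a six-axis robot commands are a tool position (x, y, z) plus an orientation (roll, pitch, yaw). Composing the three rotations gives the matrix that orients the tool; a minimal sketch using the common Z-Y-X (yaw, then pitch, then roll) convention, which is an assumption, since robot vendors differ on angle conventions:

```python
import math

# Orientation of a six-axis robot's tool from roll/pitch/yaw angles.

def rpy_matrix(roll, pitch, yaw):
    """3x3 rotation matrix from roll, pitch, yaw in radians (Z-Y-X order)."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def rotate(m, v):
    """Apply a 3x3 matrix to a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# A yaw of 90 degrees turns the tool's x direction into y:
m = rpy_matrix(0.0, 0.0, math.pi / 2)
print([round(c, 6) + 0.0 for c in rotate(m, [1.0, 0.0, 0.0])])  # [0.0, 1.0, 0.0]
```

The first three axes of a typical arm mostly set the (x, y, z) position, while the three wrist axes supply the roll, pitch, and yaw this matrix describes.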
_______
_______
Mobile Robots locomotion:
Robotic arms are relatively easy to build and program because they only operate within a confined area. Things get a bit trickier when you send a robot out into the world.
First, the robot needs a working locomotion system. If the robot only needs to move over smooth ground, wheels are often the best option. Wheels and tracks can also work on rougher terrain. But robot designers often look to legs instead, because they are more adaptable. Typically, hydraulic or pneumatic pistons move robot legs; the pistons attach to different leg segments just as muscles attach to different bones. It’s a real trick getting all these pistons to work together properly. As a baby, your brain had to figure out exactly the right combination of muscle contractions to walk upright without falling over. Similarly, a robot designer has to figure out the right combination of piston movements involved in walking and program this information into the robot’s computer. Many mobile robots have a built-in balance system (a collection of gyroscopes, for example) that tells the computer when it needs to correct its movements.
Designers commonly look to the animal world for robotic locomotion ideas. Six-legged insects have exceptionally good balance, and they adapt well to a wide variety of terrain. Four-legged robots such as Boston Dynamics’ Spot look like dogs, and the similarity breeds comparisons as they take on dangerous jobs such as construction inspection. Two-legged robots are challenging to balance properly, but engineers have gotten better with practice; Boston Dynamics’ Atlas can even do parkour.
_
Aerial robots are also inspired by real-world examples. Although many use wings like we see on airplanes, researchers have also developed techniques using fly-wing-like soft actuators. Most people now are familiar with the propeller-powered drones that provide amazing camera shots for entertainment, sporting events and surveillance. Some of these hovering robots can also be networked together to create swarms of robots such as those seen at the Tokyo Summer Olympic Games in 2021.
Underwater, robots may walk across the sea floor. One example is Silver 2, a crab-like robot designed to find and clean up plastic waste. The Benthic Rover II uses treads instead. Snake robots, which of course take their name from the animals whose locomotion they copy, can operate underwater and on land. They even work well in the human body, where they can perform surgical repairs.
_
Some mobile robots are controlled by remote — a human tells them what to do and when to do it. The remote control might communicate with the robot through an attached wire, or using radio or infrared signals. Remote robots are useful for exploring dangerous or inaccessible environments, such as the deep sea or inside a volcano. Some robots are only partially controlled by remote. For example, the operator might direct the robot to go to a certain spot, but instead of steering it there, the robot finds its own way.
_____
Robot Locomotion on land:
Locomotion is the method of moving from one place to another. The mechanism that makes a robot capable of moving in its environment is called robot locomotion: the collective name for the various methods that robots use to transport themselves from place to place.
Wheeled robots are typically quite energy efficient and simple to control. However, other forms of locomotion may be more appropriate for a number of reasons, for example traversing rough terrain or moving and interacting in human environments. Furthermore, studying bipedal and insect-like robots may yield useful insights for biomechanics.
A major goal in this field is in developing capabilities for robots to autonomously decide how, when, and where to move. However, coordinating numerous robot joints for even simple matters, like negotiating stairs, is difficult. Autonomous robot locomotion is a major technological obstacle for many areas of robotics, such as humanoids (like Honda’s Asimo).
There are many types of locomotion:
Wheeled
Legged
Tracked slip/skid
Combination of legged and wheeled locomotion
_
-1. Legged locomotion:
Legs come in varieties of one, two, four, and six. If a robot has multiple legs, leg coordination is required for locomotion. Legged locomotion consumes more power while demonstrating jumping, hopping, walking, trotting, and climbing up or down. It requires more motors to accomplish a movement and is somewhat difficult to implement because of stability issues. It is suited to rough as well as smooth terrain, although irregular or very smooth surfaces make it consume more operational power. The total number of possible gaits (a periodic sequence of lift and release events for each of the legs) a robot can use depends on the number of legs.
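The dependence of gait count on leg count can be made concrete with a formula commonly cited in mobile-robotics texts: each of the k legs contributes a lift event and a release event, and the number of distinct orderings of these 2k periodic events is N = (2k - 1)!.

```python
from math import factorial

def possible_gaits(k):
    """Number of distinct gait event sequences for a robot with k legs.

    Each leg contributes two events (lift and release), giving 2k events;
    the number of distinct periodic orderings is (2k - 1)!.
    """
    return factorial(2 * k - 1)

# A biped (k=2) has 3! = 6 possible event sequences;
# a hexapod (k=6) has 11!, already tens of millions.
print(possible_gaits(2))  # 6
print(possible_gaits(6))  # 39916800
```

This combinatorial explosion is one reason multi-legged robots offer so many gait options, and also why coordinating many legs is hard.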
Walking Robots:
Walking robots simulate human or animal gait as a replacement for wheeled motion. Legged motion makes it possible to negotiate uneven surfaces, steps, and other areas that would be difficult for a wheeled robot to reach, and it causes less damage to the terrain than wheels, which can erode it. Hexapod robots are based on insect locomotion, most popularly the cockroach and stick insect, whose neurological and sensory output is less complex than that of other animals. Multiple legs allow several different gaits, even if a leg is damaged, making such robots more useful for transporting objects.
Walking is a difficult and dynamic problem to solve. Several robots have been made that can walk reliably on two legs, but none is yet as robust as a human. Typically, these robots can walk well on flat floors and can occasionally walk up stairs, but none can walk over rocky, uneven terrain. Several methods have been tried.
Climbing:
The figure above shows Capuchin, a climbing robot.
Several different approaches have been used to develop robots that have the ability to climb vertical surfaces. One approach mimics the movements of a human climber on a wall with protrusions: adjusting the center of mass and moving each limb in turn to gain leverage. An example of this is Capuchin, built by Dr. Ruixiang Zhang at Stanford University, California. Another approach uses the specialized toe pads of wall-climbing geckos, which can run on smooth surfaces such as vertical glass. Examples of this approach include Wallbot and Stickybot.
-2. Wheeled Locomotion: Rolling Robots:
The figure above shows a Segway in the Robot Museum in Nagoya.
Wheeled locomotion requires fewer motors to accomplish a movement and is somewhat easier to implement, since there are fewer stability issues as the number of wheels increases. It is more power-efficient than legged locomotion. For simplicity, most mobile robots have four wheels. Using six wheels instead of four can give better traction or grip on outdoor terrain such as rocky dirt or grass. However, some researchers have tried to create more complex wheeled robots with only one or two wheels.
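Part of why wheeled robots are simple to control is that their motion model is simple. For the common differential-drive layout (one driven wheel on each side), the robot's pose can be updated from the two wheel speeds with a few lines; the numbers below are illustrative, not from any particular robot.

```python
from math import cos, sin

def diff_drive_step(x, y, theta, v_left, v_right, wheel_base, dt):
    """Advance a differential-drive robot's pose (x, y, heading) one step.

    v_left / v_right are the wheel ground speeds; wheel_base is the
    distance between the wheels. A minimal Euler-integration sketch.
    """
    v = (v_left + v_right) / 2.0             # forward speed of the midpoint
    omega = (v_right - v_left) / wheel_base  # turning rate
    x += v * cos(theta) * dt
    y += v * sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Equal wheel speeds drive the robot straight ahead:
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = diff_drive_step(*pose, v_left=1.0, v_right=1.0,
                           wheel_base=0.3, dt=0.01)
# after 1 s at 1 m/s the robot is about 1 m down the x-axis
```

Unequal wheel speeds make the robot arc; reversing one wheel spins it in place. Compare this to the many coordinated piston movements a legged robot needs for the same displacement.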
-3. Track Robot:
Tank tracks provide even more traction than a six-wheeled robot. Tracked wheels behave as if they were made of hundreds of wheels and are therefore very common for outdoor and military robots, where the robot must drive on very rough terrain. However, they are difficult to use indoors, for example on carpets and smooth floors. Tracked robots use treads or caterpillar tracks instead of wheels, like NASA’s Urban Robot, Urbie. The Tactical Mobile Robot program, in the DARPA Advanced Technology Office, has enlisted JPL to lead the design and implementation of its perception-enabled urban robot. This urban robot (Urbie) is a joint effort of JPL, iRobot Corporation, the Robotics Institute of Carnegie Mellon University, and the University of Southern California Robotics Research Laboratory.
The figure above shows Urbie climbing stairs autonomously.
Urbie’s initial purpose is mobile military reconnaissance in city terrain, but many of its features will also make it useful to police, emergency, and rescue personnel. The robot is rugged and well suited for hostile environments, and its autonomy lends Urbie to many different applications. Such robots could investigate urban environments contaminated with radiation, biological warfare agents, or chemical spills. They could also be used for search and rescue in earthquake-struck buildings and other disaster zones.
______
______
Section-8
Technology of robots & robotics:
_
Robot Basics:
Most robots have movable bodies. Some only have motorized wheels, and others have dozens of movable segments, typically made of metal or plastic. Like the bones in your body, the individual segments are connected together with joints.
Robots spin wheels and pivot jointed segments with some sort of actuator. Some robots use electric motors and solenoids as actuators; some use a hydraulic system; and some use a pneumatic system (a system driven by compressed gases). Robots may use a combination of all these actuator types.
A robot needs a power source to drive these actuators. Most robots either have batteries or plug into the wall. Some may use solar power or fuel cells. Hydraulic robots also need a pump to pressurize the hydraulic fluid, and pneumatic robots need an air compressor or compressed-air tanks.
The actuators are all wired to electrical circuits. The circuits power electrical motors and solenoids directly and activate hydraulic systems by manipulating electrical valves. The valves determine the pressurized fluid’s path through the machine. To move a hydraulic leg, for example, the robot’s controller would open the valve leading from the fluid pump to a piston cylinder attached to that leg. The pressurized fluid would extend the piston, swiveling the leg forward. Typically, in order to move their segments in two directions, robots use pistons that can push both ways.
The robot’s computer controls everything attached to the circuits. To move the robot, the computer switches on all the necessary motors and valves. Many robots are reprogrammable — to change the robot’s behavior, you update or change the software that gives the robot its instructions.
Not all robots have sensory systems, and few can see, hear, smell or taste. The most common robotic sense is the sense of movement — the robot’s ability to monitor its own motion. One way to do this is to use a laser on the bottom of the robot to illuminate the floor while a camera measures the distance and speed traveled. This is the same basic system used in computer mice. Roomba vacuums use infrared light to detect objects in their path and photoelectric cells measure changes in light.
These are the basic nuts and bolts of robotics. Roboticists can combine these elements in an infinite number of ways to create robots of unlimited complexity.
_
Development of robotics:
Robotics is based on two related technologies: numerical control and teleoperators. Numerical control (NC) is a method of controlling machine tool axes by means of numbers that have been coded on punched paper tape or other media. It was developed during the late 1940s and early 1950s. The first numerical control machine tool was demonstrated in 1952 in the United States at the Massachusetts Institute of Technology (MIT). Subsequent research at MIT led to the development of the APT (Automatically Programmed Tools) language for programming machine tools.
A teleoperator is a mechanical manipulator that is controlled by a human from a remote location. Initial work on the design of teleoperators can be traced to the handling of radioactive materials in the early 1940s. In a typical implementation, a human moves a mechanical arm and hand at one location, and these motions are duplicated by the manipulator at another location.
Industrial robotics can be considered a combination of numerical-control and teleoperator technologies. Numerical control provides the concept of a programmable industrial machine, and teleoperator technology contributes the notion of a mechanical arm to perform useful work. The first industrial robot was installed in 1961 to unload parts from a die-casting operation. Its development was due largely to the efforts of the Americans George C. Devol and Joseph F. Engelberger. Devol originated the design for a programmable manipulator, the U.S. patent for which was issued in 1961. Engelberger teamed with Devol to promote the use of robots in industry and to establish the first corporation in robotics—Unimation, Inc.
_
The robot manipulator:
The most widely accepted definition of an industrial robot is one developed by the Robotic Industries Association:
An industrial robot is a reprogrammable, multifunctional manipulator designed to move materials, parts, tools, or specialized devices through variable programmed motions for the performance of a variety of tasks. The technology of robotics is concerned with the design of the mechanical manipulator and the computer systems used to control it. It is also concerned with the industrial applications of robots.
The mechanical manipulator of an industrial robot is made up of a sequence of link and joint combinations. The links are the rigid members connecting the joints. The joints (also called axes) are the movable components of the robot that cause relative motion between adjacent links. The manipulator can be divided into two sections: (1) an arm-and-body, which usually consists of three joints connected by large links, and (2) a wrist, consisting of two or three compact joints. Attached to the wrist is a gripper to grasp a work part or a tool (e.g., a spot-welding gun) to perform a process. The two manipulator sections have different functions: the arm-and-body is used to move and position parts or tools in the robot’s work space, while the wrist is used to orient the parts or tools at the work location. The arm-and-body section of most commercial robots is based on one of four configurations. Each of the anatomies, as they are sometimes called, provides a different work envelope (i.e., the space that can be reached by the robot’s arm) and is suited to different types of applications.
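The arm-and-body's job of positioning a tool in the work envelope can be illustrated with the forward kinematics of a simplified two-link planar arm: given the joint angles, compute where the wrist ends up. The link lengths and angles below are illustrative assumptions, not any particular robot's anatomy.

```python
from math import cos, sin, pi

def forward_kinematics(theta1, theta2, l1=1.0, l2=0.5):
    """End-effector (x, y) position of a two-link planar arm.

    theta1 is the shoulder angle measured from the x-axis; theta2 is the
    elbow angle relative to the first link. Link lengths l1, l2 are
    illustrative. Each joint's rotation accumulates along the chain.
    """
    x = l1 * cos(theta1) + l2 * cos(theta1 + theta2)
    y = l1 * sin(theta1) + l2 * sin(theta1 + theta2)
    return x, y

# Fully extended along the x-axis: the wrist sits at l1 + l2 = 1.5.
extended = forward_kinematics(0.0, 0.0)
# Elbow bent 90 degrees: the wrist moves to roughly (1.0, 0.5).
bent = forward_kinematics(0.0, pi / 2)
```

A real six-axis arm chains more of these transforms (and adds the wrist joints for orientation), but the principle is the same: joint angles in, tool position out.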
_
Basic Robot programming:
Programming is a key skill to develop for working in robotics. Robots process sensor data, perform cognition and plan actions using computer programs that are executed on a processor. Computer programs are essentially a set of instructions that operate on an input to produce an output.
Example: A face recognition program in a robot will:
1. take an image of a person as an input;
2. scan the image for a specific set of features;
3. compare these features to a library of known faces;
4. find a match; and
5. return the name of the person as an output.
The program performs exactly the same set of instructions every time it executes.
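The five steps above can be sketched in a few lines of Python. Real systems extract facial features with trained models; the `extract_features` function and the face library here are deliberately toy stand-ins to show the flow of input to output.

```python
# A toy sketch of the five face-recognition steps described above.
# extract_features is a stand-in: it just measures simple image statistics.

def extract_features(image):
    """Step 2: reduce an image (a list of pixel rows) to a feature vector."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    spread = max(pixels) - min(pixels)
    return (mean, spread)

def recognize(image, library):
    """Steps 3-5: compare features to a library and return the best match."""
    features = extract_features(image)            # step 2
    def distance(entry):
        return sum((a - b) ** 2 for a, b in zip(features, entry[1]))
    name, _ = min(library.items(), key=distance)  # steps 3 and 4
    return name                                   # step 5

# Hypothetical library of known faces (name -> feature vector):
library = {"alice": (10.0, 4.0), "bob": (200.0, 50.0)}
image = [[9, 11], [10, 12]]                       # step 1: the input image
print(recognize(image, library))
```

Run on the same input, the program always executes the same instructions and returns the same name, which is exactly the determinism the text describes.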
Most programming languages are written in normal text, which is easy for humans to understand. Programs are then compiled into machine code for a processor to execute (or byte code, for a virtual machine to execute).
Programming languages:
There are numerous programming languages available, e.g. C/C++, Java, Fortran, Python etc. The most popular language in robotics is probably C/C++ (C++ is an object-oriented successor to the C language). Python is also very popular due to its use in machine learning and also because it can be used to develop ROS packages.
There are additional, important software tools used in robotics, in particular:
-Robot Operating System (ROS) is a set of software libraries and tools that helps you build robot applications. You can also write your own programs for ROS e.g. in C/C++ or Python.
-Matlab is used for data analysis and interfaces with ROS (Octave is a free, open-source equivalent to Matlab).
C programming and the Arduino microcontroller:
The C/C++ language is one of the most widely used programming languages in robotics. The Arduino microcontroller uses a programming language based on C and is a great way to learn the basics of this important language whilst doing hands-on robotics.
An Arduino MEGA 2560 microcontroller.
The microcontroller is in fact just the large chip in the center of the Arduino – this is the component that you program: high-level code that you write is compiled down to machine code that is embedded on this chip. The pins at the top and bottom are for connecting input devices, such as sensors, and output devices such as motors. A basic Arduino (Uno) costs about EUR 19 and the program development environment can be downloaded for free from the Arduino website where you’ll also find many example projects and tutorials.
Python and the Raspberry Pi:
Python is a useful language to learn as it is widely used in computer science and machine learning. Python is the language that is used with the Raspberry Pi. This makes it highly relevant to robotics because you can use a Raspberry Pi to control a robot.
The Raspberry Pi Foundation has developed a number of free online courses for learning how to use a Raspberry Pi in robotics.
The Raspberry Pi 3, Model B. The Raspberry Pi is like a normal PC but much smaller. This Raspberry Pi 3 has a 1.2 GHz quad-core ARM processor, ethernet, wireless, Bluetooth, HDMI and 4 USB ports.
What are the differences between an Arduino and a Raspberry Pi?
The Arduino and Raspberry Pi are both useful for robotics projects but have some important differences.
Arduino:
An Arduino is a microcontroller: a simple computer that runs a single program, which you write on a PC, in a continuous loop. This program is compiled and downloaded to the microcontroller as machine code. The Arduino is well suited to low-level robot control and has features like analogue-to-digital conversion for connecting analogue sensors.
Raspberry Pi:
A Raspberry Pi (RPi) is just like a normal PC and so is more versatile than an Arduino, but it lacks features like analogue-to-digital conversion. The RPi runs a Linux operating system (usually Raspbian). You can connect a keyboard, mouse and monitor to an RPi, along with peripherals like a camera – very useful for robotics. (Because the RPi runs Linux, you can also install ROS, although it can be a bit tricky to set up.)
_
The computer system that controls the manipulator must be programmed to teach the robot the particular motion sequence and other actions that must be performed in order to accomplish its task. There are several ways that industrial robots are programmed.
One method is called lead-through programming. This requires that the manipulator be driven through the various motions needed to perform a given task, recording the motions into the robot’s computer memory. This can be done either by physically moving the manipulator through the motion sequence or by using a control box to drive the manipulator through the sequence.
A second method of programming involves the use of a programming language very much like a computer programming language. However, in addition to many of the capabilities of a computer programming language (i.e., data processing, computations, communicating with other computer devices, and decision making), the robot language also includes statements specifically designed for robot control. These capabilities include (1) motion control and (2) input/output. Motion-control commands are used to direct the robot to move its manipulator to some defined position in space. For example, the statement “move P1” might be used to direct the robot to a point in space called P1. Input/output commands are employed to control the receipt of signals from sensors and other devices in the work cell and to initiate control signals to other pieces of equipment in the cell. For instance, the statement “signal 3, on” might be used to turn on a motor in the cell, where the motor is connected to output line 3 in the robot’s controller.
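The two statement types above can be pictured as a tiny interpreter that dispatches on the command word. The point coordinates, output-line table and dispatch logic below are a hypothetical toy, not any real robot programming language.

```python
# A toy interpreter for the two statement types described above:
# motion control ("move P1") and input/output ("signal 3, on").

points = {"P1": (250.0, 100.0, 50.0)}  # named positions in space (hypothetical)
outputs = {}                            # state of the controller's output lines
log = []                                # record of what the controller did

def execute(statement):
    if statement.startswith("move "):
        target = statement.split()[1]
        log.append(f"moving manipulator to {points[target]}")
    elif statement.startswith("signal "):
        line, state = statement[len("signal "):].split(", ")
        outputs[int(line)] = (state == "on")
        log.append(f"output line {line} set to {state}")
    else:
        raise ValueError(f"unknown statement: {statement}")

execute("move P1")       # motion-control command
execute("signal 3, on")  # input/output command: turn on the motor on line 3
```

A real robot language adds data processing, branching and sensor waits on top of this, but the core idea is the same: each statement either moves the manipulator or toggles a signal in the work cell.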
_
Soft Computing Techniques used in Robotics:
In many robotic applications, such as autonomous navigation in unstructured environments, it is difficult to obtain a precise mathematical model of the robot’s interaction with its environment. Even if the dynamics of the robot itself can be described analytically, the environment and its interaction with the robot through sensors and actuators are difficult to capture. The lack of precise and complete knowledge about the environment limits the applicability of conventional control-system design to the field of robotics. What is needed are intelligent control and decision-making systems with the ability to reason under unpredictability and to learn from experience.

It is unrealistic to think that any learning algorithm can learn a complex robotic task in reasonable time starting from scratch, without prior knowledge about the task. The situation is similar to software design, where the design process is constrained by three mutually conflicting objectives: cost, time and quality. Optimizing one or two of the objectives often sacrifices the third. In robot learning the three conflicting objectives are task complexity, the number of training examples, and prior knowledge. Learning a complex behavior in an unstructured environment without prior knowledge requires a prohibitively long exploration and training phase and therefore creates a serious bottleneck for realistic robotic applications. Task complexity can be reduced by a divide-and-conquer approach, which attempts to break down the overall problem into more manageable subtasks. Soft computing approaches are preferable to conventional problem-solving methods for problems that are difficult to describe with analytical or mathematical models. Autonomous robotics is such a domain: knowledge about the environment is inherently imprecise, unpredictable and incomplete.
Therefore, the features of fuzzy control, neural networks, evolutionary algorithms and swarm intelligence are of particular benefit to the types of problems emerging in behavior-based robotics and multi-agent robotics.
_____
_____
Key Technologies of intelligent robot:
An intelligent robot is an intelligent machine with the ability to take actions and make choices. Choices to be made by an intelligent robot are connected to the intelligence built into it through machine learning or deep learning (AI) as well as inputs received by the robot from its input sensors while in operation. With the needs of social development and the expansion of robot applications, people are increasingly demanding intelligent robots. The environment in which intelligent robots are located is often unknown and unpredictable. In the process of researching such robots, the following key technologies are mainly involved:
-1) Multi-sensor information fusion
Multi-sensor information fusion technology is a hot research topic in recent years. It combines control theory, signal processing, artificial intelligence, probability and statistics to provide a technical solution for robots to perform tasks in complex, dynamic, uncertain and unknown environments. There are many kinds of sensors used in robots. They are divided into internal measurement sensors and external measurement sensors according to different purposes.
Internal measurement sensors detect the internal state of the robot’s own components, such as joint position, velocity and acceleration. External measurement sensors, such as vision, force, tactile and proximity sensors, observe the robot’s surroundings.
Multi-sensor information fusion refers to synthesizing sensory data from multiple sensors to produce more reliable, accurate or comprehensive information. A fused multi-sensor system can reflect the characteristics of the detected object more accurately, eliminate uncertainty in the information and improve its reliability. Fused multi-sensor information has the following characteristics: redundancy, complementarity, real-time availability and low cost.
At present, the main multi-sensor information fusion methods include weighted averaging, Kalman filtering, Bayesian estimation and neural-network approaches.
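The simplest of these fusion methods, inverse-variance weighted averaging, shows concretely how redundancy reduces uncertainty: combining two noisy readings of the same quantity yields an estimate with lower variance than either sensor alone. The sensor values below are made-up illustrations.

```python
def fuse(measurements):
    """Inverse-variance weighted fusion of redundant sensor readings.

    measurements: list of (value, variance) pairs from different sensors
    observing the same quantity. Returns the fused estimate and its
    (smaller) variance. More reliable sensors get more weight.
    """
    weights = [1.0 / var for _, var in measurements]
    fused_value = (sum(w * v for w, (v, _) in zip(weights, measurements))
                   / sum(weights))
    fused_variance = 1.0 / sum(weights)
    return fused_value, fused_variance

# Two range sensors observe the same obstacle: one noisy (2.0 m, var 0.04)
# and one precise (2.2 m, var 0.01).
value, variance = fuse([(2.0, 0.04), (2.2, 0.01)])
# The fused estimate lies nearer the more reliable sensor, and its
# variance is lower than either sensor's alone.
```

This is also the core update step inside a Kalman filter, which extends the same weighting idea to quantities that evolve over time.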
_
-2) Navigation and positioning
In the robot system, autonomous navigation is a core technology and a key, difficult issue in robotics research. Navigation has three basic tasks:
Understanding scenes in the environment and identifying man-made road signs or specific objects, to localize the robot and provide material for path planning;
Detecting and identifying obstacles or specific targets in real time, to improve the stability of the control system;
Analyzing obstacles and moving objects in the robot’s working environment, to avoid damage to the robot.
_
-3) Path planning
Path planning technology is an important branch of robotics research. Optimal path planning applies one or more optimization criteria (such as minimum work cost, shortest route, or shortest travel time) to find an optimal path from the start state to the goal state in the robot’s workspace while avoiding obstacles. Path planning methods can be roughly divided into two groups: traditional methods and intelligent methods.
The traditional path planning methods mainly include the free-space method, graph search, cell decomposition and the artificial potential field method.
Most of the global planning in robot path planning is based on the above methods, but these methods need to be further improved in path search efficiency and path optimization. Artificial potential field method is a mature and efficient planning method in traditional algorithms. It uses the environmental potential field model for path planning, but does not consider whether the path is optimal.
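The artificial potential field method can be sketched as gradient descent: the goal exerts an attractive force, each nearby obstacle exerts a repulsive force, and the robot steps along the combined force. The gains, influence radius and scenario below are illustrative assumptions, and, as the text notes, nothing guarantees the resulting path is optimal.

```python
from math import hypot

def potential_step(pos, goal, obstacles, step=0.1,
                   k_att=1.0, k_rep=0.5, influence=1.0):
    """Move one step down the artificial potential field.

    An attractive force pulls toward the goal; each obstacle within its
    influence radius pushes the robot away. Gains are illustrative.
    """
    x, y = pos
    fx = k_att * (goal[0] - x)               # attraction toward the goal
    fy = k_att * (goal[1] - y)
    for ox, oy in obstacles:
        d = hypot(x - ox, y - oy)
        if 0 < d < influence:                # repulsion only when close
            push = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
            fx += push * (x - ox)
            fy += push * (y - oy)
    norm = hypot(fx, fy) or 1.0              # take a fixed-length step
    return (x + step * fx / norm, y + step * fy / norm)

# Walk from the start toward the goal, skirting an obstacle on the way:
pos, goal = (0.0, 0.0), (5.0, 0.0)
obstacles = [(2.5, 0.3)]
for _ in range(300):
    pos = potential_step(pos, goal, obstacles)
    if hypot(pos[0] - goal[0], pos[1] - goal[1]) < 0.1:
        break
```

The robot dips below the obstacle and continues to the goal. The method's well-known weakness, local minima where attraction and repulsion cancel, is one reason the intelligent methods discussed next are studied.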
The intelligent path planning method applies artificial intelligence methods such as genetic algorithm, fuzzy logic and neural network to path planning to improve the obstacle avoidance precision of robot path planning, speed up the planning speed, and meet the needs of practical applications.
Among the most widely used intelligent algorithms are genetic algorithms, fuzzy logic, neural networks, ant colony optimization and particle swarm optimization. These methods have achieved certain research results whether the obstacle environment is known or unknown.
_
-4) Robot vision
Robot vision has been undergoing revolutionary changes. Not long ago, robot vision was very limited. So limited, in fact, that if a robot detected something was in its way, all it could do was stop and call for help. Today, autonomous mobile robots can swerve around obstacles in their way. They can tell the difference between people and inanimate objects. The resolution and sensitivity of cameras have increased. The software which processes the visual data has also improved. Computer vision systems now recognize human faces.

The camera hardware is an important part of the vision solution. But recording raw data is not enough. The vision system must be able to turn that data into useful information. The vision system must be able to detect an object’s distance, speed, and direction. It is even more useful if the vision system can recognize that an object is a person or a forklift. The ability to understand that one object is a person, while another is a vehicle, is called semantics. Semantic understanding of an environment is crucial to making robots more intelligent.

Another use of computer vision is in order picking. The robot must be able to pick one object out, even when the object is in a pile of other things. This is called picking from clutter. The robot needs to identify not only the object, but also if the item is on its edge, or upside down. Once this is determined, the robot can decide how to pick the object up. This has proven challenging, but there are now systems that can do it. Chances are there are robotic vision systems that will fit your requirements.
Robotic Vision through Sensor Fusion:
More and more, robotic systems rely upon a combination of sensors. Each of the different kinds of sensors has strengths and weaknesses. Even one sensor can provide a kind of “vision” for a robotic system. But a combination of sensors is best. Combining the data from many sensors is called sensor fusion. Sensor fusion makes a robot more robust, reliable, and safe. As the computing power of microchips continues to grow, we can expect to see more sensors used. This will make robots more intelligent.
The visual system is an important part of an autonomous robot. It is usually composed of a camera, an image-capture card, and a computer. The work of the robot vision system spans image acquisition, image processing and image understanding; the core tasks are feature extraction, image segmentation and image recognition. How to process visual information accurately and efficiently is a key issue for the vision system. At present, visual information processing is gradually being refined into stages such as preprocessing, feature extraction, and environmental and obstacle detection.
Among these, environmental and obstacle detection is the most important and difficult part of visual information processing. Edge extraction is a commonly used method here. General edge-extraction techniques, such as the gradient method using local data and the second-order differential method, struggle to meet the real-time requirements of mobile robots that must process images while in motion. To this end, edge-extraction methods based on computational intelligence have been proposed, such as neural-network-based methods and methods using fuzzy inference rules. In visual navigation, fuzzy-logic inference for edge extraction integrates the road knowledge the robot needs to move outdoors, such as highway white lines and road-edge information, into the fuzzy rule base to improve the efficiency and robustness of road recognition. Combining genetic algorithms with fuzzy logic has also been proposed.
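The "gradient method using local data" mentioned above can be sketched in its simplest form: mark a pixel as an edge wherever the intensity jump to its neighbour exceeds a threshold. The tiny image and threshold below are illustrative; real systems work on camera frames and use smoothed gradient operators.

```python
def edge_map(image, threshold=50):
    """Mark horizontal edges: positions where the intensity jump to the
    right-hand neighbour exceeds the threshold (a simple gradient test).

    image is a list of pixel rows; returns a map one column narrower,
    with 1 where an edge was detected and 0 elsewhere.
    """
    edges = []
    for row in image:
        edges.append([1 if abs(row[x + 1] - row[x]) > threshold else 0
                      for x in range(len(row) - 1)])
    return edges

# A dark region next to a bright one, e.g. road surface against a painted
# white line, produces an edge exactly at the boundary:
image = [
    [20, 20, 200, 200],
    [20, 20, 200, 200],
]
print(edge_map(image))  # [[0, 1, 0], [0, 1, 0]]
```

Because each output pixel needs only its local neighbours, this is cheap per pixel, yet over a full video stream the cost adds up, which is why the text notes that real-time performance on a moving robot is hard and motivates the computational-intelligence variants.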
_
-5) Intelligent control
With the development of robotics, traditional control theory has shown its shortcomings: it cannot accurately model many physical systems, nor handle ill-conditioned processes with insufficient information. In recent years, many scholars have proposed various intelligent control systems for robots, including fuzzy control, neural-network control and combinations of the two. Integrated intelligent control technology merges these methods, for example in fuzzy control methods based on genetic algorithms.
_
-6) Man-machine interface technology
Man-machine interface technology studies how to make it convenient for people to communicate with computers. To achieve this goal, beyond the basic requirement that the robot controller have a friendly, flexible and convenient human-machine interface, the computer must also understand text, understand speech, speak, and even translate between different languages. The implementation of these functions depends on research into knowledge representation methods. The study of human-machine interface technology therefore has both great practical value and basic theoretical significance. At present, human-machine interface technology has achieved remarkable results: text recognition, speech synthesis and recognition, image recognition and processing, and machine translation have begun to be put into practical use. In addition, human-machine interface devices and interactive technologies, monitoring technologies, remote-operation technologies, and communication technologies are important components of this field.
______
______
Top-down and Bottom-up architecture:
Classic “Top-Down” Robots:
The earliest and still commonly used means of determining a robot’s behavior is through a “top-down” architecture. This means that the human programmer decides what capabilities the robot will have and then writes a computer program or set of programs to endow the robot with these capabilities. The result is typically some logical reasoning engine that imposes total control over the entire robot. This control might be farmed out to various subsystems that operate in parallel, but there is usually an executive controller maintaining high-level control over what these subsystems are doing.
A top-down robot has an intelligent control system — usually in the form of an AI (“artificial intelligence”) computer program — that “sits on the throne,” as it were, and tells all of the other parts of the robot what to do. Sensors gather information about the environment, but it is the intelligent control system that determines what to do with that information. Effectors give the robot the ability to “do things” in the world, but it is the control system that determines what to do and when.
_
Behavior-Based “Bottom-Up” Robots:
A new direction in robotic control has emerged which uses a “bottom-up” approach to robot design. Also called “behavior-based” robotics, these systems do not impose a high level of control or organization on the system and thus do not require complex computer programs to guide and control the robot’s every move. Instead, a variety of simple “behaviors” are built into the robot’s repertoire. These behaviors are layered and organized into a hierarchy, with more abstract goals farther up the hierarchy. The most common form of hierarchy is a subsumption architecture, so called because lower layers are subsumed by higher layers. In this architecture, what control exists is reflected in the fact that higher levels are allowed to inhibit or suppress the actions of behaviors in lower layers of the hierarchy. A robot built on this architecture may consist of nothing but a few “stupid” behaviors, but organized in the right way they can produce very “intelligent” behavior. This approach has brought a revolution to the field of robotics.
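The layering and suppression just described can be sketched in a few lines: behaviors are ordered by priority, and the first one that activates subsumes everything below it. The behavior names and sensor fields here are illustrative assumptions, not any published subsumption system.

```python
# A minimal subsumption sketch: simple behaviors in priority layers;
# a higher layer, when active, suppresses the layers below it.

def avoid_obstacle(sensors):
    """Higher layer: only activates when an obstacle is close."""
    if sensors["obstacle_distance"] < 0.5:
        return "turn-away"
    return None  # inactive -> control falls through to lower layers

def wander(sensors):
    """Lower layer: always active, just keeps the robot moving."""
    return "move-forward"

LAYERS = [avoid_obstacle, wander]  # highest priority first

def arbitrate(sensors):
    """Run layers top-down; the first active behavior subsumes the rest."""
    for behavior in LAYERS:
        action = behavior(sensors)
        if action is not None:
            return action

print(arbitrate({"obstacle_distance": 2.0}))  # move-forward
print(arbitrate({"obstacle_distance": 0.2}))  # turn-away
```

Note that no central planner decides anything: each “stupid” behavior reacts directly to the sensors, and intelligent-looking obstacle avoidance emerges purely from the priority ordering.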
_______
_______
Modularity:
Modular robots encapsulate functionality (sensors, actuators, processors, energy, etc.) in compact units called modules (Stoy et al., 2010), which connect to each other through specially designed reusable connectors and can be rearranged in different ways to create different morphologies. Because the modules are already built, modular robots speed up the assembly of new robot configurations compared with other approaches, down to only minutes. Wiring, for example, is virtually eliminated when reconfiguring these kinds of robots, as the connectors are able to route power and control signals.
At the workplace, modular robots represent a significant advancement in the manufacturing process and the next step in automation. The current view on industrial robots is that only the largest manufacturers have access to them, and that a single robot will be specialized in a single task and replaced when new advancements arrive or the factory shifts its production. Modularity allows an arc welding robot to put down its tools and pick and place pallets, or to switch its tools to perform any task it is commanded to. This means that these automated robots can be both specialized and diverse, and a single robot can perform tasks representing every step of the manufacturing process. For smaller manufacturers, automated factories containing one or several robots can be produced, with manpower used only for intellectual tasks such as programming and management, allowing goods to be produced with the productivity of a large manufacturer while maintaining low overhead costs. Industrial giants who currently use robots will be able to extend their robots to tasks in small and hard-to-reach workspaces where obstacle avoidance is critical. Another main advantage of this technology is that robots can be reconfigured, or self-reconfigure, to perform different applications.
_
Self-reconfiguring modular robot:
Modular self-reconfiguring robotic systems, or self-reconfigurable modular robots, are autonomous kinematic machines with variable morphology. Beyond the conventional actuation, sensing and control typically found in fixed-morphology robots, self-reconfiguring robots are also able to deliberately change their own shape by rearranging the connectivity of their parts, in order to adapt to new circumstances, perform new tasks, or recover from damage. For example, a robot made of such components could assume a worm-like shape to move through a narrow pipe, reassemble into something with spider-like legs to cross uneven terrain, then form a third arbitrary object (like a ball or wheel that can spin itself) to move quickly over fairly flat terrain; it could also be used for making “fixed” objects, such as walls, shelters, or buildings. In some cases this involves each module having two or more connectors for joining several together. The modules can contain electronics, sensors, computer processors, memory and power supplies; they can also contain actuators that are used for manipulating their location in the environment and in relation to each other. A feature found in some cases is the ability of the modules to automatically connect and disconnect themselves to and from each other, and to form into many objects or perform many tasks moving or manipulating the environment.
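One way to picture self-reconfiguration is as rearranging a connectivity graph of modules. The sketch below is deliberately simplified and hypothetical; a real system must also handle geometry, docking mechanics, power routing and communication.

```python
# Sketch: a modular robot as a set of modules plus a set of connections.
# Reconfiguration = deliberately rewriting the connectivity of the parts.

class ModularRobot:
    def __init__(self, modules):
        self.modules = set(modules)
        self.links = set()  # unordered pairs of docked modules

    def connect(self, a, b):
        if a in self.modules and b in self.modules and a != b:
            self.links.add(frozenset((a, b)))

    def disconnect(self, a, b):
        self.links.discard(frozenset((a, b)))

    def reconfigure(self, new_links):
        # Change shape by rearranging which modules are docked to which.
        self.links = set()
        for a, b in new_links:
            self.connect(a, b)

# A four-module robot changes from a chain (worm-like, good for pipes)
# to a star around one hub (a crude stand-in for a legged morphology).
bot = ModularRobot(["m0", "m1", "m2", "m3"])
bot.reconfigure([("m0", "m1"), ("m1", "m2"), ("m2", "m3")])  # chain
chain_links = len(bot.links)
bot.reconfigure([("m0", "m1"), ("m0", "m2"), ("m0", "m3")])  # star on m0
star_links = len(bot.links)
```

The same four modules support both shapes; only the connectivity changed, which is the essence of the worm-to-spider example above.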
_______
_______
Robot Control:
The mechanical structure of a robot must be controlled to perform tasks. The control of a robot involves three distinct phases – perception, processing, and action (robotic paradigms). Sensors give information about the environment or the robot itself (e.g. the position of its joints or its end effector). This information is then processed to be stored or transmitted and to calculate the appropriate signals to the actuators (motors), which move the mechanical structure to achieve the required coordinated motion or force actions.
The processing phase can range in complexity. At a reactive level, it may translate raw sensor information directly into actuator commands (e.g. firing motor power electronic gates based directly upon encoder feedback signals to achieve the required torque/velocity of the shaft). Sensor fusion and internal models may first be used to estimate parameters of interest (e.g. the position of the robot’s gripper) from noisy sensor data. An immediate task (such as moving the gripper in a certain direction until an object is detected with a proximity sensor) is sometimes inferred from these estimates. Techniques from control theory are generally used to convert the higher-level tasks into individual commands that drive the actuators, most often using kinematic and dynamic models of the mechanical structure.
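As a minimal sketch of the reactive level described above, the loop below turns an encoder-style velocity measurement into a motor command with a proportional-integral (PI) law, one of the standard techniques from control theory mentioned in the text. The gains and the toy first-order motor model are illustrative assumptions, not any real drive's dynamics.

```python
# Reactive control sketch: sensor reading -> actuator command each cycle.

def make_pi_controller(kp, ki, dt):
    # Returns a stateful PI controller (the integral term accumulates error).
    state = {"integral": 0.0}
    def control(target_velocity, measured_velocity):
        error = target_velocity - measured_velocity
        state["integral"] += error * dt
        return kp * error + ki * state["integral"]
    return control

def simulate(controller, target, steps, dt):
    # Toy first-order motor: velocity relaxes toward the commanded value.
    velocity = 0.0
    for _ in range(steps):
        command = controller(target, velocity)
        velocity += (command - velocity) * dt  # crude motor dynamics
    return velocity

pi = make_pi_controller(kp=2.0, ki=0.5, dt=0.01)
final = simulate(pi, target=1.0, steps=2000, dt=0.01)  # settles near 1.0
```

The integral term is what removes the steady-state error due to friction or gravity loading that the text mentions; a purely proportional controller would settle short of the target.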
At longer time scales or with more sophisticated tasks, the robot may need to build and reason with a “cognitive” model. Cognitive models try to represent the robot, the world, and how the two interact. Pattern recognition and computer vision can be used to track objects. Mapping techniques can be used to build maps of the world. Finally, motion planning and other artificial intelligence techniques may be used to figure out how to act. For example, a planner may figure out how to achieve a task without hitting obstacles, falling over, etc.
Modern commercial robotic control systems are highly complex, integrate multiple sensors and effectors, have many interacting degrees of freedom (DOF) and require operator interfaces, programming tools and real-time capabilities. They are oftentimes interconnected to wider communication networks and in many cases are now both IoT-enabled and mobile. Progress towards open-architecture, layered, user-friendly and ‘intelligent’ sensor-based interconnected robots has emerged from earlier concepts related to Flexible Manufacturing Systems (FMS), and several ‘open’ or ‘hybrid’ reference architectures have been proposed which assist developers of robot control software and hardware to move beyond traditional notions of ‘closed’ robot control systems. Open-architecture controllers are said to be better able to meet the growing requirements of a wide range of robot users, including system developers, end users and research scientists, and are better positioned to deliver the advanced robotic concepts related to Industry 4.0. In addition to utilizing many established features of robot controllers, such as position, velocity and force control of end effectors, they also enable IoT interconnection and the implementation of more advanced sensor fusion and control techniques, including adaptive control, fuzzy control and Artificial Neural Network (ANN)-based control. When implemented in real time, such techniques can potentially improve the stability and performance of robots operating in unknown or uncertain environments by enabling the control systems to learn and adapt to environmental changes. There are several examples of reference architectures for robot controllers, and also examples of successful implementations of actual robot controllers developed from them.
One example of a generic reference architecture and associated interconnected, open-architecture robot and controller implementation was developed by Michael Short and colleagues at the University of Sunderland in the UK in 2000. The robot was used in a number of research and development studies, including prototype implementation of novel advanced and intelligent control and environment mapping methods in real-time.
_
Robotics Vision and Control:
It is common to talk about a robot moving to an object, but in reality the robot is only moving to a pose at which it expects the object to be. This is a subtle but deep distinction. A consequence of this is that the robot will fail to grasp the object if it is not at the expected pose. It will also fail if imperfections in the robot mechanism or controller result in the end-effector not actually achieving the end-effector pose that was specified. In order for this conventional approach to work successfully we need to solve two quite difficult problems: determining the pose of the object and ensuring the robot achieves that pose.
The first problem, determining the pose of an object, is typically avoided in manufacturing applications by ensuring that the object is always precisely placed. This requires mechanical jigs and fixtures which are expensive, and have to be built and set up for every different part the robot needs to interact with – somewhat negating the flexibility of robotic automation.
The second problem, ensuring the robot can achieve a desired pose, is also far from straightforward. A robot end-effector is moved to a pose by computing the required joint angles. This assumes that the kinematic model is accurate, which in turn necessitates high precision in the robot’s manufacture: link lengths must be precise and axes must be exactly parallel or orthogonal. Further, the links must be stiff so they do not deform under dynamic loading or gravity. It also assumes that the robot has accurate joint sensors and high-performance joint controllers that eliminate steady-state errors due to friction or gravity loading. Nonlinear controllers are capable of this high performance, but they require an accurate dynamic model that includes the mass, center of gravity and inertia for every link, as well as the payload.
None of these problems are insurmountable but this approach has led us along a path toward high complexity. The result is a heavy and stiff robot that in turn needs powerful actuators to move it, as well as high quality sensors and a sophisticated controller – all this contributes to a high overall cost. However we should, whenever possible, avoid solving hard problems if we do not have to. Stepping back for a moment and looking at this problem it is clear that the root cause of the problem is that the robot cannot see what it is doing. Consider if the robot could see the object and its end-effector, and could use that information to guide the end-effector toward the object. This is what humans call hand-eye coordination and what we will call vision-based control or visual servo control – the use of information from one or more cameras to guide a robot in order to achieve a task. The pose of the target does not need to be known a priori; the robot moves toward the observed target wherever it might be in the workspace. There are numerous advantages of this approach: part position tolerance can be relaxed, the ability to deal with parts that are moving comes almost for free, and any errors in the robot’s intrinsic accuracy will be compensated for.
A vision-based control system involves continuous measurement of the target and the robot using vision to create a feedback signal and moves the robot arm until the visually observed error between the robot and the target is zero. Vision-based control is quite different to taking an image, determining where the target is and then reaching for it. The advantage of continuous measurement and feedback is that it provides great robustness with respect to any errors in the system. Robotic vision, the combination of robotics and computer vision, involves the application of computer algorithms to data acquired from sensors. The research community has developed a large body of such algorithms but for a newcomer to the field this can be quite daunting. There are of course some practical complexities. If the camera is on the end of the robot it might interfere with the task, or when the robot is close to the target the camera might be unable to focus, or the target might be obscured by the gripper.
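A minimal sketch of this feedback idea: each cycle the controller measures the pixel error between gripper and target and moves a fraction of it, repeating until the observed error is (near) zero. The 2-D image coordinates, gain, and tolerance are illustrative assumptions, not a camera model.

```python
# Image-based visual servoing sketch: continuous measure-and-move feedback,
# as opposed to a one-shot "look, then reach" motion.

def visual_servo(gripper, target, gain=0.2, tol=0.5, max_steps=200):
    gx, gy = gripper
    tx, ty = target
    for step in range(max_steps):
        # 1. Measure: the visually observed error, in pixels.
        ex, ey = tx - gx, ty - gy
        if (ex * ex + ey * ey) ** 0.5 < tol:
            return (gx, gy), step  # error ~ zero: target reached
        # 2. Act: move a fraction of the error this cycle.
        gx += gain * ex
        gy += gain * ey
    return (gx, gy), max_steps

final_pos, steps = visual_servo(gripper=(0.0, 0.0), target=(100.0, 50.0))
```

Because the error is re-measured every cycle, the loop still converges if the target moves or if each commanded motion is slightly inaccurate; that robustness to system errors is exactly the advantage of continuous feedback described above.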
_
At present, most robot types are controlled by pre-programmed or learning algorithms. With humanoid robots and cobots, the robots perceive their surroundings and other important information, such as recognizing workpieces, via sensors. The robots process this information and pass it on to their motors as signals, which put the mechanical elements into action. Artificial intelligence (AI) is another way for a robot to determine how to act optimally in its environment. Within the scope of human-machine interaction, control systems can be split into different levels of autonomy:
Direct control:
With this type of control, humans are in complete control. They control the robot either directly by touch, by remote control, or via an algorithm that is programmed for the control unit.
Supervision:
Humans specify basic positions and movement sequences. The robot then determines how to use its motors optimally within the scope of the specifications.
Semi-autonomous robots:
With these systems, humans specify a general task. The robot autonomously determines the optimum positions and movement sequences to fulfill the task.
Autonomous robots:
The robot recognizes its tasks autonomously and carries them out completely on its own.
_
Dynamics and kinematics:
The study of motion can be divided into kinematics and dynamics. Direct kinematics refers to the calculation of end effector position, orientation, velocity and acceleration when the corresponding joint values are known. Inverse kinematics refers to the opposite case in which required joint values are calculated for given end effector values, as done in path planning. Some special aspects of kinematics include handling of redundancy (different possibilities of performing the same movement), collision avoidance and singularity avoidance. Once all relevant positions, velocities and accelerations have been calculated using kinematics, methods from the field of dynamics are used to study the effect of forces upon these movements. Direct dynamics refers to the calculation of accelerations in the robot once the applied forces are known. Direct dynamics is used in computer simulations of the robot. Inverse dynamics refers to the calculation of the actuator forces necessary to create a prescribed end effector acceleration. This information can be used to improve the control algorithms of a robot.
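The direct/inverse kinematics distinction can be illustrated with the standard two-link planar arm; the link lengths below are chosen arbitrarily for the sketch, and only one of the two redundant ("elbow-up"/"elbow-down") solutions is returned.

```python
from math import cos, sin, acos, atan2

# Two-link planar arm with link lengths L1 and L2 (illustrative values).
L1, L2 = 1.0, 1.0

def forward(theta1, theta2):
    # Direct kinematics: joint angles -> end-effector position.
    x = L1 * cos(theta1) + L2 * cos(theta1 + theta2)
    y = L1 * sin(theta1) + L2 * sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    # Inverse kinematics: position -> joint angles (one branch of two;
    # the elbow-up/elbow-down pair is the redundancy mentioned above).
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    theta2 = acos(max(-1.0, min(1.0, c2)))
    theta1 = atan2(y, x) - atan2(L2 * sin(theta2), L1 + L2 * cos(theta2))
    return theta1, theta2
```

Running `forward` on the output of `inverse` recovers the requested position, which is the round-trip a path planner relies on.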
In each area mentioned above, researchers strive to develop new concepts and strategies, improve existing ones and improve the interaction between these areas. To do this, criteria for “optimal” performance and ways to optimize design, structure and control of robots must be developed and implemented.
_
Environmental interaction and navigation:
Though a significant percentage of robots in commission today are either human controlled or operate in a static environment, there is an increasing interest in robots that can operate autonomously in a dynamic environment. These robots require some combination of navigation hardware and software in order to traverse their environment. In particular, unforeseen events (e.g. people and other obstacles that are not stationary) can cause problems or collisions. Some highly advanced robots such as ASIMO and the Meinü robot have particularly good navigation hardware and software. Self-controlled cars, Ernst Dickmanns’ driverless car, and the entries in the DARPA Grand Challenge are also capable of sensing the environment well and making navigational decisions based on this information; navigation can even be carried out by a swarm of autonomous robots. Most of these robots employ a GPS navigation device with waypoints, along with radar, sometimes combined with other sensory data such as lidar, video cameras, and inertial guidance systems for better navigation between waypoints. Cruise missiles are rather dangerous highly autonomous robots. Pilotless drone aircraft are increasingly used for reconnaissance. Some of these unmanned aerial vehicles (UAVs) are capable of flying their entire mission without any human interaction at all, except possibly for the landing, where a person intervenes using radio remote control; some drones are capable of safe, automatic landings, however.
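The waypoint-following scheme mentioned above can be sketched in a few lines. Everything here is an illustrative assumption: flat 2-D coordinates stand in for geodetic position, the waypoint list and arrival radius are invented, and a real system would fuse GPS with the other sensors listed.

```python
from math import atan2, hypot, pi

# Toy waypoint navigation: given a position estimate, steer toward the
# current waypoint and advance to the next one once within ARRIVAL_RADIUS.

WAYPOINTS = [(0.0, 10.0), (10.0, 10.0), (10.0, 0.0)]  # illustrative route
ARRIVAL_RADIUS = 1.0  # metres

def navigate(position, waypoint_index):
    if waypoint_index >= len(WAYPOINTS):
        return None, waypoint_index  # mission complete
    wx, wy = WAYPOINTS[waypoint_index]
    dx, dy = wx - position[0], wy - position[1]
    if hypot(dx, dy) < ARRIVAL_RADIUS:
        return navigate(position, waypoint_index + 1)  # reached: next waypoint
    heading = atan2(dy, dx)  # radians, counter-clockwise from the +x axis
    return heading, waypoint_index

heading, idx = navigate((0.0, 0.0), 0)  # steer toward the first waypoint
```

Each control cycle re-runs `navigate` with the latest position estimate, so the vehicle keeps correcting its heading between waypoints.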
Autonomous navigation is the most difficult for ground vehicles, due to:
______
Perception vs. Reality, and the Fragility of Control:
The fundamental challenge of all robotics is this: It is impossible to ever know the true state of the environment. Robot control software can only guess the state of the real world based on measurements returned by its sensors. It can only attempt to change the state of the real world through the generation of control signals. Thus, one of the first steps in control design is to come up with an abstraction of the real world, known as a model, with which to interpret our sensor readings and make decisions. As long as the real world behaves according to the assumptions of the model, we can make good guesses and exert control. As soon as the real world deviates from these assumptions, however, we will no longer be able to make good guesses, and control will be lost. Often, once control is lost, it can never be regained. This is one of the key reasons that robotics programming is so difficult. We often see videos of the latest research robot in the lab, performing fantastic feats of dexterity, navigation, or teamwork, and we are tempted to ask, “Why isn’t this used in the real world?” Well, next time you see such a video, take a look at how highly controlled the lab environment is. In most cases, these robots are only able to perform these impressive tasks as long as the environmental conditions remain within the narrow confines of their internal models. Thus, one key to the advancement of robotics is the development of more complex, flexible, and robust models—and said advancement is subject to the limits of the available computational resources.
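A tiny illustration of this point: the estimator below blends a motion-model prediction with each sensor reading. While the world obeys the model, the guesses are good; when the world deviates from the model's assumptions, the guesses drift away from reality. The constant-velocity model and the fixed blend weight are illustrative assumptions, a much-simplified stand-in for a proper filter.

```python
# 1-D state estimation sketch: the controller never sees the true state,
# only a guess built from a model prediction corrected by sensor readings.

def estimate(readings, model_velocity, blend=0.5, dt=1.0):
    # Model assumption: the robot moves at model_velocity units per step.
    x = readings[0]
    for z in readings[1:]:
        prediction = x + model_velocity * dt       # what the model expects
        x = (1 - blend) * prediction + blend * z   # correct with the sensor
    return x

# World matches the model (true velocity 1.0): the guess tracks reality.
good = estimate([0.0, 1.0, 2.0, 3.0], model_velocity=1.0)
# World deviates (the robot actually stopped): the guess overshoots the
# true position (0.0) because the model keeps insisting on motion.
bad = estimate([0.0, 0.0, 0.0, 0.0], model_velocity=1.0)
```

The second run shows exactly the failure mode described above: the sensors say one thing, the model says another, and the blended guess is wrong in proportion to how much trust is placed in the model.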
______
______
Human-robot interaction:
The state of the art in sensory intelligence for robots will have to progress through several orders of magnitude if we want the robots working in our homes to go beyond vacuum-cleaning the floors. If robots are to work effectively in homes and other non-industrial environments, the way they are instructed to perform their jobs, and especially how they will be told to stop, will be of critical importance. The people who interact with them may have little or no training in robotics, and so any interface will need to be extremely intuitive. Science fiction authors also typically assume that robots will eventually be capable of communicating with humans through speech, gestures, and facial expressions, rather than a command-line interface. Although speech would be the most natural way for the human to communicate, it is unnatural for the robot. It will probably be a long time before robots interact as naturally as the fictional C-3PO, or Data of Star Trek: The Next Generation. Even though the current state of robotics cannot meet the standards of these robots from science fiction, robotic media characters (e.g., Wall-E, R2-D2) can elicit audience sympathies that increase people’s willingness to accept actual robots in the future. Acceptance of social robots is also likely to increase if people can meet a social robot under appropriate conditions. Studies have shown that interacting with a robot by looking at, touching, or even imagining interacting with the robot can reduce negative feelings that some people have about robots before interacting with them. However, if pre-existing negative sentiments are especially strong, interacting with a robot can increase those negative feelings towards robots.
_
Speech recognition:
Interpreting the continuous flow of sounds coming from a human, in real time, is a difficult task for a computer, mostly because of the great variability of speech. The same word, spoken by the same person, may sound different depending on local acoustics, volume, the previous word, whether or not the speaker has a cold, and so on. It becomes even harder when the speaker has a different accent. Nevertheless, great strides have been made in the field since Davis, Biddulph, and Balashek designed the first “voice input system” which recognized “ten digits spoken by a single user with 100% accuracy” in 1952. Currently, the best systems can recognize continuous, natural speech, up to 160 words per minute, with an accuracy of 95%. With the help of artificial intelligence, machines nowadays can use a person’s voice to identify emotions such as satisfaction or anger.
Robotic voice:
Other hurdles exist when allowing the robot to use voice for interacting with humans. For social reasons, synthetic voice proves suboptimal as a communication medium, making it necessary to develop the emotional component of robotic voice through various techniques. An advantage of diphonic branching is that the emotion the robot is programmed to project can be carried on the voice tape, or phoneme, already pre-programmed onto the voice media. One of the earliest examples is a teaching robot named Leachim, developed in 1974 by Michael J. Freeman. Leachim was able to convert digital memory to rudimentary verbal speech on pre-recorded computer discs. It was programmed to teach students in The Bronx, New York. Research is continuing on designing the emotion expressed by a robotic voice by manipulating specific acoustic parameters such as pitch, amplitude and speech rate.
Gestures:
One can imagine, in the future, explaining to a robot chef how to make a pastry, or asking directions from a robot police officer. On both of these occasions, making hand gestures would aid the verbal descriptions. In the first case, the robot would be recognising gestures made by the human, and perhaps repeating them for confirmation. In the second case, the robot police officer would gesture to indicate “down the road, then turn right”. It is quite likely that gestures will make up a part of the interaction between humans and robots. A great many systems have been developed to recognise human hand gestures.
Facial expression:
Facial expressions can provide rapid feedback on the progress of a dialog between two humans, and soon they may be able to do the same for humans and robots. A robot should know how to approach a human, judging by their facial expression and body language. Whether the person is happy, frightened or crazy-looking affects the type of interaction expected of the robot. Likewise, a robot like Kismet can produce a range of facial expressions, allowing it to have meaningful social exchanges with humans.
Personality:
Many of the robots of science fiction have personality, and that is something which may or may not be desirable in the commercial robots of the future. Nevertheless, researchers are trying to create robots which appear to have a personality: i.e. they use sounds, facial expressions and body language to try to convey an internal state, which may be joy, sadness or fear. One commercial example is Pleo, a toy robot dinosaur, which can exhibit several apparent emotions.
Artificial emotions:
Artificial emotions can also be generated, composed of a sequence of facial expressions or gestures. As can be seen from the movie Final Fantasy: The Spirits Within, the programming of these artificial emotions is complex and requires a large amount of human observation. To simplify this programming in the movie, presets were created together with a special software program. This decreased the amount of time needed to make the film. These presets could possibly be transferred for use in real-life robots. An example of a robot with artificial emotions is Robin the Robot, developed by the Armenian IT company Expper Technologies, which uses AI-based peer-to-peer interaction. Its main task is promoting emotional well-being, i.e. helping children overcome stress and anxiety. Robin was trained to analyze facial expressions and to use his own face to display emotions appropriate to the context. The robot has been tested by kids in US clinics, and observations show that Robin increased the appetite and cheerfulness of children after meeting and talking with him.
Social intelligence:
The Socially Intelligent Machines Lab of the Georgia Institute of Technology researches new concepts of guided teaching interaction with robots. The aim of the projects is a social robot that learns tasks and goals from human demonstrations without prior knowledge of high-level concepts. These new concepts are grounded from low-level continuous sensor data through unsupervised learning, and task goals are subsequently learned using a Bayesian approach. These concepts can be used to transfer knowledge to future tasks, resulting in faster learning of those tasks. The results are demonstrated by the robot Curi, which can scoop some pasta from a pot onto a plate and serve the sauce on top.
______
Human-centered cognitive cycle:
Humans, machines, and cyberspace are part of the human-centered cognitive cycle. In this cycle, viewed in terms of cloud computing and deep learning, the machine describes the physical components, for example computer networks, transmission devices, and intelligent robots, while cyberspace describes the virtual components, for example data management. This cyclic system integrates humans and machines with algorithms and the assistance of cyberspace (Chen et al., 2018). The result is effective interaction among humans, machines and cyberspace, as shown in the figure below. This opens a new arena for practical solutions that address the internal needs of humans and deliver intelligent cognitive services according to user requirements.
______
Cognition for human–robot interaction:
Robots with cognitive abilities are necessary to support human–robot interaction (HRI) that fulfills the necessities, capabilities, tasks, and demands of their human counterparts. Cognitive human–robot interaction is a very active and burgeoning area of research that treats human(s), robot(s) and their joint activities as a cognitive system, with the goal of developing algorithms, models and design guidelines for such systems. Thanks to phenomenal advances in human neuroscience, HRI is moving out of the world of fiction into real-world contexts. Key to such HRI is the development of suitable models capable of executing joint activity, together with a deeper understanding of human expectations and cognitive responses to robot actions, as seen in the figure below:
Figure above shows cognition for human–robot interaction.
______
______
Section-9
Types of robots:
_
Robot generations:
The development of robotics has been shaped by technological progress: for instance, the digital computer, the numerical control system, the transistor, and the integrated circuit. These developments improved the capabilities of robots and helped them advance from purely mechanical and hydraulic machines to programmable systems that can sense their environment. In parallel with further technological improvements, robotics has grown more sophisticated and changed in response to the needs of society. Based on the characteristics of the robots involved, the development of robotics can be categorized into the following generations:
Generation Zero (up to 1950):
This is also known as the pre-robots generation. In 1495, the polymath Leonardo da Vinci produced the drawing of the first humanoid. In the following centuries, various machines were built from mechanical parts with the intention of helping society and industry. It was not until the first industrial revolution that industries began to think about automation as a way of improving manufacturing methods. In this generation, automated industrial machines were based on hydraulic or pneumatic mechanisms, lacked any computing facility, and were supervised by workers. The first automation method was the punched card, which was used to enter data into machinery, for instance to control textile looms. Early electronic computers such as “COLOSSUS” also relied on punched media for input.
First Generation Robots (1950-1967):
In this generation the first manipulators were introduced. The main characteristics of this generation of robots are (i) simple control algorithms and (ii) a lack of information about the environment. Because of rapid technological growth and efforts to improve industrial production, automated machines were designed to enhance efficiency. Machine-tool manufacturers launched numerically controlled (NC) machines, which enabled other manufacturers to make better products, and the combination of NC-capable machine tools and manipulators paved the way for the first generation of robots. A first-generation robot is a simple mechanical arm. These machines have the ability to make precise motions at high speed, many times, for a long time, and such robots find widespread industrial use today. First-generation robots can work in groups, such as in an automated integrated manufacturing system (AIMS), if their actions are synchronized. The operation of these machines must be constantly supervised, because if they get out of alignment and are allowed to keep working, the result can be a series of bad production units.
Second Generation Robots (1968-1977):
This generation is also known as the “sensorized robots” generation. The main characteristics of these robots are (i) advanced sensory systems, such as force, vision and torque sensing, (ii) more awareness of their surroundings, and (iii) learning by demonstration. Robots of this type were applied in automotive production factories and had a large footprint. Starting in 1968, the integration of sensors characterizes the second generation of robots. These robots were able to respond to their surroundings and propose reactions that met specific challenges. A second-generation robot has rudimentary machine intelligence: it is equipped with sensors that tell it things about the outside world, including pressure sensors, proximity sensors, tactile sensors, radar, sonar, ladar, and vision systems. A controller processes the data from these sensors and adjusts the operation of the robot accordingly. These devices came into common use around 1980. Second-generation robots can stay synchronized with each other without having to be overseen constantly by a human operator. Of course, periodic checking is needed with any machine, because things can always go wrong; the more complex the system, the more ways it can malfunction. Throughout this era substantial funds were invested in robotics. In industrial environments, the PLC (Programmable Logic Controller) came into use: an industrial digital computer designed and adapted for controlling manufacturing processes, for example robotic devices, assembly lines, or any activity that requires high reliability. PLCs were believed to be simple to program, and because of these qualities they became a commonly used device in the automation industry. In 1973, KUKA, now one of the world’s leading manufacturers of industrial robots, built Famulus, the first industrial robot with six electromechanically driven axes.
A year later, Cincinnati Milacron (whose robotics business was acquired by ABB in 1990) introduced the T3, the first commercially available robot controlled by a microcomputer.
Third Generation Robots (1978-1999):
This is also known as the industrial robots’ generation. The main characteristics of these robots are (i) the introduction of programming languages, (ii) dedicated controllers (computers), (iii) partial integration of artificial vision, and (iv) the introduction of reprogrammable robots. The idea of a third-generation robot includes two major approaches to building smart robot machinery: the insect robot and the autonomous robot. An autonomous robot can work on its own; it includes a controller and can do things largely without any supervision, whether by a human being or by an external computer. There are several situations in which a single autonomous robot cannot perform proficiently. Insect robots can help to solve this problem: one central computer is used as a controller for a group of simple robots, which can then work together like bees in a hive or ants in an anthill.
Fourth Generation Robot (2000-2017):
In this generation, robots are more intelligent than in former generations. They also include more sophisticated sensors, which help them adjust more efficiently to different conditions. The first domestic vacuum-cleaner robot was the Roomba. YuMi, the first collaborative robot, contained many advances in safety systems beyond interlocking devices or photoelectric barriers, ensuring the coexistence of robot and employee in the same workspace and improving the ergonomics of the operator and the fabrication processes. The main characteristics of this generation are (i) the addition of advanced computing capability, (ii) computers that can work with data and perform logical learning and reasoning, (iii) artificial intelligence beginning to be included, experimentally and moderately, (iv) more sophisticated sensors, and (v) the introduction of collaborative robots.
Fifth Generation Robots:
Basically, this generation of robots is known for personal collaborative robots. Its main characteristics are (i) reconfigurable robots, (ii) modular robots and components, (iii) robots that help humans enhance everyday activities, and (iv) robots and humans sharing the same environment and collaborating. With the most recent developments in artificial intelligence (AI), there is huge potential for the future improvement of robotics. The next generation of robots will reflect all the technological progress that researchers and developers have made in recent years. According to the fifth-generation characteristics, robots will be able to coexist with humans, improve human capabilities, and simplify and improve everyday life.
______
______
Classification of Robots:
It’s not easy to define what robots are, and it’s not easy to categorize them either. Each robot has its own unique features, and as a whole robots vary hugely in size, shape, and capabilities. Still, many robots share a variety of features. Robots can be classified using several criteria, such as application area, control technique, mobility, mechanism of interaction, actuators, geometrical configuration, and degree of intelligence. Of course, there is a great deal of overlap among these categories; drones, for example, can be classified as aerospace, consumer, or exploration robots. Classifying robots is nonetheless valuable, because it makes it easier to identify the right robot for a particular task.
Robots can be classified according to the environment in which they operate (see figure below). The most common distinction is between fixed and mobile robots. These two types of robots have very different working environments and therefore require very different capabilities.
Fixed robots are mostly industrial robotic manipulators that work in well-defined environments adapted for robots. Industrial robots perform specific repetitive tasks, such as soldering or painting parts in car manufacturing plants. With the improvement of sensors and devices for human-robot interaction, robotic manipulators are increasingly used in less controlled environments, such as high-precision surgery.
Stationary robots are generally robotic arms with a fixed base and a global axis of movement; they can be further divided into several categories.
By contrast, mobile robots are expected to move around and perform tasks in large, ill-defined and uncertain environments that are not designed specifically for robots. They need to deal with situations that are not precisely known in advance and that change over time. Such environments can include unpredictable entities like humans and animals. Examples of mobile robots are robotic vacuum cleaners and self-driving cars.
_
Classification of robots by environment and mechanism of interaction is depicted in figure above:
_
There is no clear dividing line between the tasks carried out by fixed robots and mobile robots—humans may interact with industrial robots and mobile robots can be constrained to move on tracks—but it is convenient to consider the two classes as fundamentally different. In particular, fixed robots are attached to a stable mount on the ground, so they can compute their position based on their internal state, while mobile robots need to rely on their perception of the environment in order to compute their location.
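Because a fixed robot’s base does not move, its end-effector position follows directly from its joint angles, i.e. from its internal state. Here is a minimal sketch in Python for a hypothetical two-link planar arm (the link lengths and function name are made up for illustration):

```python
import math

def forward_kinematics(theta1, theta2, l1=0.5, l2=0.4):
    """End-effector (x, y) of a two-link planar arm, computed purely
    from internal state: joint angles in radians, link lengths in meters."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# With both joints at zero, the arm points straight along the x-axis.
print(forward_kinematics(0.0, 0.0))  # (0.9, 0.0)
```

A mobile robot has no such luxury: its position must be estimated from sensor data, which is why localization is a central problem in mobile robotics.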
There are three main environments for mobile robots that require significantly different design principles because they differ in the mechanism of motion: aquatic (underwater exploration), terrestrial (cars) and aerial (drones). Again, the classification is not strict, for example, there are amphibious robots that move in both water and on the ground. Robots for these three environments can be further divided into subclasses: terrestrial robots can have legs or wheels or tracks, and aerial robots can be lighter-than-air balloons or heavier-than-air aircraft, which are in turn divided into fixed-wing and rotary-wing (helicopters).
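For readers who like to see structure explicitly, the taxonomy above can be written down as a nested data structure (a hypothetical Python dict, purely illustrative):

```python
# Classification of mobile robots by environment, as described above.
ROBOT_ENVIRONMENTS = {
    "aquatic": ["underwater explorer"],
    "terrestrial": ["legged", "wheeled", "tracked"],
    "aerial": {
        "lighter-than-air": ["balloon"],
        "heavier-than-air": ["fixed-wing", "rotary-wing (helicopter)"],
    },
}

# Amphibious robots deliberately straddle two of these branches.
print(ROBOT_ENVIRONMENTS["terrestrial"])  # ['legged', 'wheeled', 'tracked']
```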
_
Robots can be classified by intended application field and the tasks they perform (see figure below). Industrial robots work in well-defined environments on production tasks. The first robots were industrial robots because the well-defined environment simplified their design. Service robots, on the other hand, assist humans in their tasks. These include household chores like vacuum cleaning, transportation like self-driving cars, and defense applications such as reconnaissance drones. Medicine, too, has seen increasing use of robots in surgery, rehabilitation and training. These are recent applications that require improved sensors and a closer interaction with the user.
Classification of robots by application field is depicted in figure above:
______
______
Robots broken down by control types:
The definitions below are in accordance with ISO 8373:
-1. Sequence-controlled robot
A robot having a system of control in which a set of machine movements occurs in a desired order, the completion of one movement initiating the next.
-2. Trajectory operated robot
A robot that performs a controlled procedure whereby three or more controlled axis motions operate in accordance with instructions that specify the required time-based trajectory to the next required pose (normally achieved through interpolation).
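As an illustration of the interpolation the definition mentions, here is a minimal sketch (hypothetical Python; real controllers typically use smoother velocity profiles than this simple linear joint-space interpolation):

```python
def interpolate_trajectory(start, goal, steps):
    """Linearly interpolate between two joint-space poses,
    yielding steps + 1 poses including both endpoints."""
    return [
        [s + (g - s) * i / steps for s, g in zip(start, goal)]
        for i in range(steps + 1)
    ]

# Move three axes from (0, 0, 0) to (90, 45, -30) degrees in 3 steps.
for pose in interpolate_trajectory([0, 0, 0], [90, 45, -30], 3):
    print(pose)
```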
-3. Adaptive robot
A robot having sensory control, adaptive control, or learning-control functions.
-4. Teleoperated robot
A robot that can be remotely operated by a human operator. Its function extends the human’s sensory-motor functions to remote locations, and the response of the machine to the actions of the operator is programmable.
______
______
JIRA classification of Robots:
An internationally recognized classification of robots is that of JIRA (the Japanese Industrial Robot Association). According to JIRA, robots are of the following types:
_______
_______
Here are some common kinds of robots:
-1. Industrial Robots
Industrial robots usually consist of a jointed arm (multi-linked manipulator) attached to a fixed surface and an end effector. One of the most common types of end effector is a gripper assembly. The International Organization for Standardization gives a definition of a manipulating industrial robot in ISO 8373:
“An automatically controlled, reprogrammable, multipurpose, manipulator programmable in three or more axes, which may be either fixed in place or mobile for use in industrial automation applications.”
This definition is used by the International Federation of Robotics, the European Robotics Research Network (EURON) and many national standards committees. These robots are generally used in industrial environments for purposes such as lifting very heavy components or moving parts from one place to another. Though they are robots, most look like nothing more than a very large robotic arm. The traditional industrial robot consists of a manipulator arm designed to perform repetitive tasks.
-2. Mobile robot
Mobile robots have the capability to move around in their environment and are not fixed to one physical location. An example of a mobile robot in common use today is the automated guided vehicle (AGV). Mobile robots are also found in industrial, military and security environments. They also appear as consumer products, for entertainment or to perform certain tasks like vacuum cleaning. Mobile robots are the focus of a great deal of current research, and almost every major university has one or more labs devoted to mobile robot research. Historically, robots were confined to tightly controlled environments such as assembly lines because they had difficulty responding to unexpected interference, so most people rarely encountered them. However, domestic robots for cleaning and maintenance are increasingly common in and around homes in developed countries, and mobile robots can also be found in military applications.
-3. Modular robot
Modular robots are a new breed of robots designed to increase the use of robots by modularizing their architecture. The functionality and effectiveness of a modular robot are easier to increase than those of a conventional robot. These robots are composed of a single type of identical module, of several different module types, or of similarly shaped modules that vary in size. Their architecture allows hyper-redundancy, as modular robots can be designed with more than eight degrees of freedom (DOF). Creating the programming, inverse kinematics and dynamics for modular robots is more complex than for traditional robots. Modular robots can be manually or self-reconfigured to form a different robot that may perform different applications. Because modular robots of the same architecture are built from interchangeable modules, a snake-arm robot can combine with another to form a dual- or quad-arm robot, or split into several mobile robots; mobile robots can split into multiple smaller ones, or combine with others into a larger or different one. This gives a single modular robot the ability to be fully specialized for a single task, as well as the capacity to be reconfigured to perform multiple different tasks.
-4. Collaborative robots
A collaborative robot or cobot is a robot that can safely and effectively interact with human workers while performing simple industrial tasks. However, end-effectors and other environmental conditions may create hazards, and as such risk assessments should be done before using any industrial motion-control application. The collaborative robots most widely used in industry today are manufactured by Universal Robots in Denmark. Rethink Robotics—founded by Rodney Brooks, previously with iRobot—introduced Baxter in September 2012 as an industrial robot designed to safely interact with neighboring human workers and be programmable to perform simple tasks. Baxters stop if they detect a human in the way of their robotic arms and have prominent off switches. Intended for sale to small businesses, they are promoted as the robotic analogue of the personal computer. As of May 2014, 190 companies in the US had bought Baxters, and they are also being used commercially in the UK.
-5. Service robot
Most commonly, industrial robots are fixed robotic arms and manipulators used primarily for production and distribution of goods. The term “service robot” is less well-defined. The International Federation of Robotics has proposed a tentative definition: “A service robot is a robot which operates semi- or fully autonomously to perform services useful to the well-being of humans and equipment, excluding manufacturing operations.” These robots can be considered helping hands in our everyday tasks. Service robots can be found in our immediate surroundings, such as our homes and gardens (e.g. robotic vacuum cleaners or lawnmowers), and in professional environments as well, taking over routine tasks to increase our effectiveness. Despite the ease of use of the end products, these devices contain highly complex hardware to enable smooth and user-friendly operation. Several motors are required to move the machine and enable it to carry out tasks accurately. Advanced functionalities like collision avoidance or room mapping require built-in radars and sensors, and advanced models offer intelligent routing with learning algorithms as well. Users benefit from these features in several ways, for example through reduced time to finish a task.
-6. Educational (interactive) robots
Robots are used as educational assistants to teachers. From the 1980s, robots such as turtles were used in schools and programmed using the Logo language. This broad category is aimed at the next generation of roboticists, for use at home or in classrooms. It includes hands-on programmable sets from Lego, 3D printers with lesson plans, and even teacher robots like EMYS. Robot kits like Lego Mindstorms, BIOLOID, OLLO from ROBOTIS, or BotBrain Educational Robots can help children learn about mathematics, physics, programming, and electronics. Robotics has also been introduced into the lives of elementary and high school students through robot competitions run by the organization FIRST (For Inspiration and Recognition of Science and Technology), which is the foundation for the FIRST Robotics Competition, FIRST Tech Challenge, FIRST Lego League Challenge and FIRST Lego League Explore competitions.
-7. Surgical-Assistance Robots (vide infra):
As motion control technologies have advanced, surgical-assistance robots have become more precise. These robots help surgeons achieve new levels of speed and accuracy while performing complex operations with AI- and computer vision‒capable technologies. Some surgical robots may even be able to complete tasks autonomously, allowing surgeons to oversee procedures from a console.
-8. Domestic Robots
These kinds of robots are used in the home, mainly for cleaning, mopping or vacuuming. In addition, some surveillance robots are used in households to increase security.
-9. Army Robots
Robots that are used in defense fall into this category. This includes robots used for bomb disposal and border surveillance, and drones that capture images or deliver tactical weapons. A prime example of such a robot is ATLAS, made by Boston Dynamics. This category of robots is generally not available commercially and is kept secret by individual security agencies across the globe.
-10. Aerospace
This is a broad category. It includes all sorts of flying robots—the SmartBird robotic seagull and the Raven surveillance drone, for example—but also robots that can operate in space, such as Mars rovers and NASA’s Robonauts. Robonauts are robots used in various space programs; they are multipurpose and can be humanoid or non-humanoid. NASA has sent multiple robots into space in recent times to perform tasks such as collecting samples, exploring and analyzing rough terrain, and taking pictures and sending them back to Earth. These robots also help NASA explore places that are too difficult for any astronaut to visit. A prime example is the Curiosity rover, which NASA sent to Mars in 2011. The International Space Station also has several robots deployed with separate duties, such as mobile communication management and environment sensing.
-11. Consumer
Consumer robots are robots you can buy and use just for fun or to help you with tasks and chores. Examples are the robot dog Aibo, the Roomba vacuum, AI-powered robot assistants, and a growing variety of robotic toys and kits.
-12. Disaster Response
They perform dangerous jobs like searching for survivors in the aftermath of an emergency and help with other crucial activities at the disaster site. For example, Hydronalix’s Emergency Integrated Lifesaving Lanyard (EMILY) is a four-foot, 25-pound remote-controlled robot that acts as a hybrid flotation buoy and lifeboat. Colossus from Shark Robotics can help fight fires, haul firefighting equipment, transport wounded victims and use its 360-degree, high-definition thermal camera to assess a scene; it proved its value during the Notre-Dame Cathedral fire in France. After an earthquake and tsunami struck Japan in 2011, PackBots were used to inspect damage at the Fukushima Daiichi nuclear power station.
-13. Entertainment
These robots are designed to evoke an emotional response and make us laugh, feel surprised or stand in awe. Among them are the robot comedian RoboThespian, Disney’s theme park robots like the Na’vi Shaman, and musically inclined bots like Toyota’s Partner robots.
-14. Exoskeletons
Robotic exoskeletons can be used for physical rehabilitation and for enabling a paralyzed patient to walk again. Some have industrial or military applications, giving the wearer added mobility, endurance, or the capacity to carry heavy loads.
-15. Humanoids
A subtype of service robots, humanoid robots are designed to look like humans, with joint positions and movements inspired by the human locomotor system; accordingly, they usually move on two legs in an upright position. The main driver of research and development in the field of humanoid robots is artificial intelligence (AI). Humanoids can be found in many different locations and forms. Tasks like welcoming guests at a hotel reception, giving directions in a tax office, guiding visitors in museums, or teaching at schools could become very common soon. When this type of robot will enter our homes depends on the technical advantages of such a system over existing solutions, but probably even more on the social acceptance of an autonomous intelligent entity. Notable examples include Surena IV (University of Tehran, Iran), Sophia (Hanson Robotics) and Atlas (Boston Dynamics). One of the main components of a humanoid robot is its sensors, which play a pivotal role in robotic paradigms. There are two types: proprioceptive sensors, which sense the robot’s orientation, position, and other motor states, and exteroceptive sensors, which include vision and sound sensors.
-16. Medical
Medical and health-care robots include systems such as the da Vinci surgical robot and bionic prostheses, as well as robotic exoskeletons. A system that may fit in this category but is not a robot is Watson, the IBM question-answering supercomputer, which has been used in healthcare applications.
-17. Military & Security
Military robots include ground systems like Endeavor Robotics’ PackBot, used in Iraq and Afghanistan to scout for improvised explosive devices, and BigDog, designed to assist troops in carrying heavy gear. Security robots include autonomous mobile systems such as Cobalt.
-18. Research
Research robots and devices are mainly designed to collect or analyze data. They may be used to assess the efficiency of robotics software or newly developed appliances. Research robots are also used in a wide range of studies involving human interaction, movement and locomotion.
-19. Self-Driving Cars
Many robots can drive themselves around, and an increasing number of them can now drive you around. Early autonomous vehicles include the ones built for DARPA’s autonomous-vehicle competitions and also Google’s pioneering self-driving Toyota Prius, later spun out to form Waymo.
-20. Telepresence
Telepresence robots allow you to be present at a place without actually going there. You log on to a robot avatar via the internet and drive it around, seeing what it sees, and talking with people. Workers can use it to collaborate with colleagues at a distant office, and doctors can use it to check on patients.
-21. Aquatic Robots
The favorite place for these robots is in the water. They include deep-sea submersibles like Aquanaut, diving humanoids like Ocean One, and bio-inspired systems like the ACM-R5H snakebot. Aquatic robots go far beyond deep-sea exploration. They can work with the coast guard as unmanned boats and have often been used by marine biologists and conservationists to help supplement parts of marine ecosystems that have been ravaged by climate change and industrialization efforts like offshore drilling.
-22. Social Robots
Social robots interact directly with humans. These “friendly” robots can be used in long-term care environments to provide social interaction and monitoring. They may encourage patients to comply with treatment regimens or provide cognitive engagement, helping to keep patients alert and positive. They can also be used to offer directions to visitors and patients inside the hospital environment. In general, social robots help reduce caregiver workloads and improve patients’ emotional well-being.
-23. Companion robot
A companion robot is a robot created for the purposes of creating real or apparent companionship for human beings. Target markets for companion robots include the elderly and single children. There are several companion robot prototypes and these include Paro, CompanionAble, and EmotiRob, among others.
-24. Microrobots and nanorobots:
Microrobots are intelligent machines that operate at micron scales. Nanorobot is a popular term for molecules with a unique property that enables them to be programmed to carry out a specific task. Microrobots are the topic of a major research effort, and many different technological approaches are under study; the critical issues are power and locomotion, and magnetic methods are attracting much attention. Nanorobots are at a far earlier stage of development, and most are based on DNA-based molecular technology. Key applications for both classes of device are anticipated in microassembly, healthcare and nanotechnology.
-25. Mining robots
Mining robots are designed to solve a number of problems currently facing the mining industry, including skills shortages, improving productivity from declining ore grades, and achieving environmental targets. Due to the hazardous nature of mining, in particular underground mining, the prevalence of autonomous, semi-autonomous, and tele-operated robots has greatly increased in recent times. A number of vehicle manufacturers provide autonomous trains, trucks and loaders that will load material, transport it on the mine site to its destination, and unload without requiring human intervention. One of the world’s largest mining corporations, Rio Tinto, has recently expanded its autonomous truck fleet to the world’s largest, consisting of 150 autonomous Komatsu trucks, operating in Western Australia. Similarly, BHP has announced the expansion of its autonomous drill fleet to the world’s largest, 21 autonomous Atlas Copco drills.
Drilling, longwall and rockbreaking machines are now also available as autonomous robots. The Atlas Copco Rig Control System can autonomously execute a drilling plan on a drilling rig, moving the rig into position using GPS, set up the drill rig and drill down to specified depths. Similarly, the Transmin Rocklogic system can automatically plan a path to position a rockbreaker at a selected destination. These systems greatly enhance the safety and efficiency of mining operations.
______
______
Industrial robots:
The IFR’s use of the term “industrial robot” is based on the definition of the International Organization for Standardization: an “automatically controlled, reprogrammable multipurpose manipulator programmable in three or more axes”, which can be either fixed in place or mobile for use in industrial automation applications. (ISO 8373)
The terms used in the definition mean:
_
An industrial robot is a robot system used for manufacturing. Industrial robots are automated, programmable and capable of movement on three or more axes. Typical applications of robots include welding, painting, assembly, disassembly, pick and place for printed circuit boards, packaging and labeling, palletizing, product inspection, and testing; all accomplished with high endurance, speed, and precision. They can also assist in material handling. In 2021, an estimated 3 million industrial robots were in operation worldwide, according to the International Federation of Robotics (IFR).
_
The first robots were industrial robots which replaced human workers performing simple repetitive tasks. Factory assembly lines can operate without the presence of humans, in a well-defined environment where the robot has to perform tasks in a specified order, acting on objects precisely placed in front of it (see figure below).
Figure above shows robots on an assembly line in a car factory. One could argue that these are really automata and not robots. However, today’s automata often rely on sensors to the extent that they can be considered as robots. Their design is simplified because they work in a customized environment which humans are not allowed to access while the robot is working.
Today’s robots need more flexibility, for example, the ability to manipulate objects in different orientations or to recognize different objects that need to be packaged in the right order. The robot can be required to transport goods to and from warehouses. This brings additional autonomy, but the basic characteristic remains: the environment is more-or-less constrained and can be adapted to the robot. Additional flexibility is required when industrial robots interact with humans and this introduces strong safety requirements, both for robotic arms and for mobile robots. In particular, the speed of the robot must be reduced and the mechanical design must ensure that moving parts are not a danger to the user. The advantage of humans working with robots is that each can perform what they do best: the robots perform repetitive or dangerous tasks, while humans perform more complex steps and define the overall tasks of the robot.
_
Industrial robots can be classified according to mechanical structure:
_
Articulated robots resemble the human arm, feature from two to ten axes of motion, and are often attached to a rotary base. Articulated robots with four to six axes are the most common type used in manufacturing. In some cases, articulated robots with up to six axes also feature an additional axis in the form of a linear transport system capable of moving the robot back and forth along a set path. This is sometimes referred to as a seventh axis in the case of six-axis robots, or a robot transfer unit otherwise. This additional axis is used to move the robot to different workstations along a line or between workstations on adjacent lines. Articulated robots are particularly useful for applications such as assembly, arc welding, material handling, machine tending, and packaging.
Cartesian robots, sometimes referred to as linear or gantry robots, are bound by three linear axes which are defined by the cartesian coordinate system as the X, Y, and Z axes; in other words, these robots move up and down, in and out, and side to side. Cartesian robots allow end users to adjust the speed, precision, stroke length, and size of any movement, and are commonly used in CNC machines and 3D printers.
SCARA (selective compliance assembly robot arms) robots function on a three-axis system similar to cartesian robots. However, they also feature rotary motion, allowing them to extend into confined areas and then retract when necessary. This can be useful when transferring parts from one work cell to another or when unloading workstations. SCARA robots are often used for assembly, painting, and palletizing, as well as biomedical production applications.
Delta robots, also called parallel robots, possess three downward-facing arms connected to a single base mounted above the workspace. These robots can move at high speeds while maintaining delicacy and precision because the end effector is controlled by all three arms simultaneously, providing enhanced stability. Delta robots are commonly used for pick-and-place applications in the food, pharmaceutical, and electronics industries, as well as in packaging applications.
A cylindrical robot has a rotary joint for rotation around a common axis and a prismatic joint for linear motion along that axis. The main arm of a cylindrical robot goes up and down; a cylinder built into the robotic arm produces this motion by extending and retracting itself. In many cylindrical designs, gears and a motor drive the rotation, while a pneumatic cylinder drives the vertical motion. Cylindrical robots are used for assembly processes, handling of machine tools and die-cast equipment, and spot welding.
A spherical robot is a robot with two rotary joints and one prismatic joint; in other words, two rotary axes and one linear axis. Its arm forms a spherical coordinate system. Spherical robots, sometimes called polar robots, are stationary robot arms with spherical or near-spherical work envelopes whose positions are described in a polar coordinate system. These robots are more sophisticated than Cartesian and cylindrical robots, while their control solutions are less complicated than those of articulated robot arms; this may be why they are sometimes used as a basis for robot kinematics exercises.
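The polar coordinates just mentioned map to Cartesian coordinates with standard trigonometry; here is a small Python sketch (the function name and angle conventions are illustrative assumptions, not from any robot vendor’s API):

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """Convert a polar-robot pose (reach r, base rotation theta,
    shoulder elevation phi, angles in radians) to Cartesian x, y, z."""
    x = r * math.cos(phi) * math.cos(theta)
    y = r * math.cos(phi) * math.sin(theta)
    z = r * math.sin(phi)
    return x, y, z

# Arm extended 1 m straight ahead, with no base rotation or elevation.
print(spherical_to_cartesian(1.0, 0.0, 0.0))  # (1.0, 0.0, 0.0)
```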
____
Applications of industrial robots:
Some of the applications of robots in Industry are:
_____
Advantages of industrial robots:
-1. Better quality and consistency
Along with other tech — such as the industrial internet of things (IIoT) or 3D printing robots — industrial robots are able to provide better production quality and more precise and reliable processes. Added benefits also include reduced cycle times and real-time monitoring to improve preventive maintenance practices.
-2. Maximum productivity and throughput
An industrial robot increases speed for manufacturing processes, in part by operating 24/7. Robots don’t need breaks or shift changes. The speed and dependability of robots ultimately reduces cycle time and maximizes throughput.
-3. Greater safety
Using robots for repetitive tasks means fewer risks of injury for workers, especially when manufacturing has to take place under hostile conditions. In addition, supervisors can oversee the process online or from a remote location.
-4. Reduced direct labor costs
The cost of having a person handle many manufacturing operations is often higher than the cost of a robot. Automation can also free up workers so their skills and expertise can be used in other business areas, such as engineering, programming and maintenance.
Disadvantages of industrial robots:
-1. High initial investment
Robots typically require a large upfront investment. As you research your business case for purchasing, consider all the costs of industrial robots, including installation and configuration. You should also evaluate whether your robot can be easily modified if you need to alter operation in the future.
-2. Expertise can be scarce
Industrial robots need sophisticated operation, maintenance and programming. While the number of people with these skills is growing, it is currently limited. As a result, it’s important to consider the personnel investment you’ll need to make to bring in that expertise or “retool” your existing staff to take on the task.
-3. Ongoing costs
While industrial robots may reduce some manufacturing labor costs, they do come with their own ongoing expenses, such as maintenance. In addition, you’ll want to consider the costs to keep your robot and any related IIoT connected devices protected from cyberthreats.
______
______
Collaborative robots:
A collaborative robot, or cobot, is a robot that can engage in direct interaction with a human worker in a shared space or operate alongside a human worker. By contrast, typical industrial robots are isolated from human contact via a safety cage. While this general description has long been used to denote the difference between industrial robots and cobots, that line is becoming more blurred. Today, some industrial robots possess aspects of collaborative capabilities as outlined in ISO standards 10218-1 and 10218-2. These collaborative capabilities—safety monitored stop, hand guiding, speed and separation monitoring, and power and force limiting—can all be achieved using sensors, control systems, and peripheral devices, some of which may already be integrated within a robot upon purchase, while others can be retrofitted to an installed industrial robot.
A description of ISO’s four collaborative robot capabilities:
Safety monitored stop:
Robots that can temporarily pause their operation when a human or other unexpected object is detected in the work cell are engaged in a safety monitored stop. In these instances, the robot maintains power but cannot move, meaning that it does not need to be turned off and restarted. Before the development of the safety-monitored-stop function, a full restart was required, which can be a lengthy process. Robots engaged in a safety monitored stop automatically start themselves back up once the workspace is clear.
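The stop-and-auto-resume behavior described above amounts to a two-state controller. A toy sketch in Python (purely illustrative; an actual implementation lives in certified safety hardware, not in application code like this):

```python
class SafetyMonitoredStop:
    """Minimal sketch of a safety-monitored-stop controller: the robot
    keeps power but halts motion while an intrusion is detected, and
    resumes automatically once the work cell clears."""

    def __init__(self):
        self.moving = True  # robot starts out running

    def update(self, intrusion_detected):
        # Motion is allowed exactly when the cell is clear.
        self.moving = not intrusion_detected
        return "MOVING" if self.moving else "STOPPED"

ctrl = SafetyMonitoredStop()
print(ctrl.update(intrusion_detected=True))   # STOPPED
print(ctrl.update(intrusion_detected=False))  # MOVING (auto-restart)
```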
Hand guiding:
When a cobot is moved via direct, hands-on input from an operator, it is called hand guiding. When an operator engages in hand guiding, the cobot remains in a safety monitored stop until actuated via an enabling switch. Most cobots can be programmed via point-based or path-based teaching to perform certain tasks after being hand guided through the points or path.
Speed and separation monitoring:
This technology allows a robot to operate with a human nearby if a pre-determined distance between the robot and human can be maintained. This can be achieved using pressure-sensitive safety mats, light curtains, or laser area scanners, as well as through a combination of sensors and vision systems, to alert the robot when something has entered its hazard envelope.
Power and force limiting:
Limitations on power and force can be achieved through design or by the addition of external control elements. Typically, this requires a robot with power or force feedback built in. These robots are generally smaller, slower, and less powerful than other robots, allowing them to be more easily adapted to work safely alongside humans. In cases where a robot possessing only power and force limiting collaborative capabilities is used, an extensive risk assessment is required to ensure safe operation.
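Taken together, these capabilities amount to a simple decision rule: stop when a human is very close, slow down when one is nearby, and run at full speed otherwise. As a rough sketch (the zone thresholds and state names below are illustrative assumptions, not values from the ISO standards):

```python
# Illustrative sketch of collaborative-robot safety logic.
# The 0.5 m and 1.5 m zone thresholds are made-up example values.

def select_robot_state(separation_m: float,
                       stop_zone_m: float = 0.5,
                       slow_zone_m: float = 1.5) -> str:
    """Map the measured human-robot separation to an operating state."""
    if separation_m < stop_zone_m:
        # Safety monitored stop: power stays on, but motion ceases.
        return "monitored_stop"
    if separation_m < slow_zone_m:
        # Speed and separation monitoring: reduced speed near a human.
        return "reduced_speed"
    return "full_speed"

print(select_robot_state(0.3))  # human inside the stop zone
print(select_robot_state(1.0))  # human inside the slow zone
print(select_robot_state(3.0))  # workspace clear
```

The same logic also captures why the robot can restart automatically: leaving the stop zone simply moves it back into a running state, with no power cycle needed.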
_
Types of human-industrial robot collaboration:
Human-industrial robot collaboration can range from a shared workspace with no direct human-robot contact or task synchronisation, to a robot that adjusts its motion in real-time to the motion of an individual human worker. Types of human-industrial robot collaboration are depicted in the figure below:
Collaborative robots have transformed traditional understandings of automation, of robotics and of the relationship between human labor and robots. A niche product just a decade ago, cobots are now the fastest growing segment of the global industrial robotics market. A recent report from the International Federation of Robotics found that cobot installations grew by 11% in 2019, “in contrast to the overall trend with traditional industrial robots.”
_
Currently, IFR members find the most common collaborative robot applications are shared workspace applications where robot and employee work alongside each other, completing tasks sequentially. Often, the robot performs tasks that are either tedious or unergonomic – from lifting heavy parts to performing repetitive tasks such as tightening screws. Applications in which the robot responds in real-time to the motion of a worker (altering the angle of the gripper to match the angle at which a worker presents a part, for example) are the most technically challenging. Since the robot needs to adjust to the motion of the worker, its movements are not completely predictable and therefore the end-user must be sure that the full parameters of its potential scope of motion meet safety requirements. Examples of responsive collaboration in industrial settings are unlikely to appear soon in most manufacturing sectors, which rely on precision and repeatability to achieve productivity gains.
_
Industrial robots often operate from a fixed mounting, but there is demand for mobile industrial robots that combine a mobile base and a (collaborative) robot. These robots can, for example, carry materials from one workstation and unload them or feed a machine at a second workstation.
The right choice of robot – traditional or collaborative – is determined by the intended application. When speed and absolute precision are the primary automation criteria it is unlikely that any form of collaborative application will be economically viable. In this case, a traditional, fenced industrial robot is – and will remain – the preferred choice. If the part being manipulated could be dangerous when in motion, for example due to sharp edges, some form of fencing will be required. This applies even for cobots that stop on contact. Another factor influencing economic viability is the extent to which the robot must be integrated with other machines in a process. The more integration needed, the higher the cost of the robot installation.
Collaborative industrial robots are tools to support employees in their work, relieving them of many heavy, unergonomic and tedious tasks such as holding a heavy part steady in the required position for a worker to fit screws. However, there are still many tasks that are easy for humans but hard to automate, for instance dealing with unsorted parts, and irregular or flexible shapes. Finishing applications such as polishing and grinding applications that require continuous fine-tuning of pressure applied to the surface are also difficult to automate cost-efficiently.
_
Benefits of collaborative robots:
Cobots differ from traditional industrial automation in several important ways. While traditional automation is characterized by fixed hardware designed to handle one specific product part, cobots are designed to be mobile and easy to reprogram to run on different tasks. This means that manufacturers using cobots can handle high mix/low volume production batches without having to make huge changes to their automation setup. Traditional robots may perform superbly at a specific task, but cobots offer great flexibility and the potential for widespread redeployment throughout your manufacturing facility. It may take weeks or months to change traditional, static automation just to accommodate slight product differences; a cobot can be reprogrammed to handle new parts within hours. Similarly, traditional automation tends to be developed for a single purpose or application, whereas cobots can be used on different types of applications from assembly to quality inspection to welding.
Traditional robots are difficult to program, often requiring expert coders at great expense. By contrast, cobots are designed for ease of use. Because they are easy to use, cobots remove a major barrier to automation adoption among small-to-medium size businesses. Traditional automation often requires engineering and/or robotics experience and can take weeks to complete, whereas end users are ready to use cobots after taking a short online training course.
Another crucial difference between traditional industrial automation and collaborative automation is cost. Even relatively simple traditional automation can set companies back hundreds of thousands of dollars, even before ongoing programming and maintenance costs are taken into account. Cobots cost much less than their traditional counterparts and with companies offering cobot leasing services, it’s possible to enjoy the benefits of automation without making a huge capital outlay. So, while automation was once the domain of large companies, cobots make it possible for companies of all sizes to adopt robot technologies.
Traditional robots require safety fencing and signage to ensure that no harm is caused to human workers. This keeps workers safe, but it also means that traditional robots take up more precious floorspace and that human workers cannot share the same work area as the robot. Cobots incorporate safety features –such as force and speed limits and coming to a stop when a human gets too close– that make it possible for humans and cobots to share the same work area.
Collaborative robots are transforming the relationship between humans and robots, which has traditionally been seen as adversarial. Whereas traditional robots are designed to eliminate human labor, cobots are designed to complement it. This means that manufacturers of all sizes can benefit from the best that both humans and robots have to offer. Humans provide creative problem-solving capabilities and situational flexibility and cobots provide the reliability and precision you expect from robots. Using cobots allows manufacturers to free humans from repetitive and dangerous tasks, while at the same time reducing human error and improving overall productivity and quality.
_______
_______
Mobile robot:
A mobile robot is an automatic machine that is capable of locomotion. Mobile robotics is usually considered to be a subfield of robotics and information engineering. Mobile robots have the capability to move around in their environment and are not fixed to one physical location. Mobile robots can be “autonomous” (AMR – autonomous mobile robot) which means they are capable of navigating an uncontrolled environment without the need for physical or electro-mechanical guidance devices. Often, AMRs are seen as a replacement for automated guided vehicles (AGVs), which have long been used to automate movement of materials in industry. Whereas an AGV navigates by following wire strips or magnetic tracks along the floor, AMRs use a technology called light detection and ranging (LiDAR) instead. LiDAR works by using a laser sensor to measure the distance between the AMR and other objects. The lack of fixed infrastructure required to deploy AMRs makes them ideal for applications where flexibility is needed. In addition to being used for material handling tasks, AMRs can have articulated robots mounted on them to enable their use in applications such as bin picking and machine tending. By contrast, industrial robots are usually more-or-less stationary, consisting of a jointed arm (multi-linked manipulator) and gripper assembly (or end effector), attached to a fixed surface.
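The LiDAR ranging mentioned above boils down to a time-of-flight calculation: a laser pulse travels out to an object and back, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch:

```python
# Time-of-flight ranging, as used by LiDAR sensors on AMRs.
C = 299_792_458.0  # speed of light in m/s

def lidar_distance_m(round_trip_s: float) -> float:
    """Distance to an object from the pulse's round-trip travel time."""
    return C * round_trip_s / 2.0

# A pulse that returns after roughly 66.7 nanoseconds indicates an
# obstacle about 10 metres away.
print(lidar_distance_m(66.7e-9))
```

A real LiDAR unit sweeps thousands of such measurements per second across many angles to build a map of its surroundings; the arithmetic per pulse is exactly this.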
Mobile robots have become more commonplace in commercial and industrial settings. Hospitals have been using autonomous mobile robots to move materials for many years. Warehouses have installed mobile robotic systems to efficiently move materials from stocking shelves to order fulfillment zones. Mobile robots are also a major focus of current research and almost every major university has one or more labs that focus on mobile robot research. Mobile robots are also found in industrial, military and security settings.
_
Components of a Mobile Robot:
The components of a mobile robot are a controller, sensors, actuators and a power system. The controller is generally a microprocessor, embedded microcontroller or a personal computer (PC). The sensors used depend upon the requirements of the robot, which could include dead reckoning, tactile and proximity sensing, triangulation ranging, collision avoidance, position location and other specific applications. Actuators usually refer to the motors that move the robot, which can be wheeled or legged. Mobile robots are usually powered by a DC supply (a battery) rather than AC.
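These components come together in a sense-decide-actuate loop run by the controller. Below is a minimal sketch; the sensor reading and the one-metre stop threshold are hypothetical stand-ins, not a real robot API:

```python
# Minimal sketch of a mobile robot's control loop: read a sensor,
# decide, then command the actuators. All values are illustrative.

def proximity_sensor(step: int) -> float:
    """Stand-in for a range sensor: an obstacle drawing closer each step."""
    return 2.0 - 0.5 * step  # distance in metres

def control_step(distance_m: float) -> str:
    """Collision avoidance: stop once the obstacle is too close."""
    return "stop" if distance_m < 1.0 else "forward"

commands = [control_step(proximity_sensor(step)) for step in range(4)]
print(commands)  # the robot stops as the obstacle closes in
```

On real hardware the same loop runs continuously on the microcontroller, with the battery supplying DC power to both the controller and the drive motors.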
_
Mobile robots may be classified by the environment in which they operate or by the device they use to move:
-Land or home robots are usually referred to as Unmanned Ground Vehicles (UGVs). They are most commonly wheeled or tracked, but also include legged robots with two or more legs (humanoid, or resembling animals or insects).
-Delivery & Transportation robots can move materials and supplies through a work environment
-Aerial robots are usually referred to as Unmanned Aerial Vehicles (UAVs)
-Underwater robots are usually called autonomous underwater vehicles (AUVs)
-Polar robots, designed to navigate icy, crevasse filled environments
-Legged robot: human-like legs (i.e. an android) or animal-like legs.
-Wheeled robot.
-Tracks.
_____
Autonomous Mobile Robots:
Many mobile robots are remotely controlled, performing tasks such as pipe inspection, aerial photography and bomb disposal that rely on an operator controlling the device. These robots are not autonomous; they use their sensors to give their operator remote access to dangerous, distant or inaccessible places. Some of them can be semi-autonomous, performing subtasks automatically. The autopilot of a drone stabilizes the flight while the human chooses the flight path. A robot in a pipe can control its movement inside the pipe while the human searches for defects that need to be repaired. Fully autonomous mobile robots do not rely on an operator, but instead they make decisions on their own and perform tasks, such as transporting material while navigating in uncertain terrain (walls and doors within buildings, intersections on streets) and in a constantly changing environment (people walking around, cars moving on the streets).
The first mobile robots were designed for simple environments, for example, robots that cleaned swimming pools or robotic lawn mowers. Currently, robotic vacuum cleaners are widely available, because it has proved feasible to build reasonably priced robots that can navigate an indoor environment cluttered with obstacles. Many autonomous mobile robots are designed to support professionals working in structured environments such as warehouses. An interesting example is a robot for weeding fields (see figure below). This environment is partially structured, but advanced sensing is required to perform the tasks of identifying and removing weeds. Even in very structured factories, robots share the environment with humans and therefore their sensing must be extremely reliable.
The figure above shows an autonomous mobile robot weeding a field.
Perhaps the autonomous mobile robot getting the most publicity these days is the self-driving car. These are extremely difficult to develop because of the highly complex uncertain environment of motorized traffic and the strict safety requirements.
An even more difficult and dangerous environment is space. The Sojourner and Curiosity Mars rovers are semi-autonomous mobile robots. The Sojourner was active for three months in 1997. The Curiosity has been active since landing on Mars in 2012! While human operators on Earth control the missions (the routes to drive and the scientific experiments to be conducted), the rovers do have the capability of autonomous hazard avoidance.
Much of the research and development in robotics today is focused on making robots more autonomous by improving sensors and enabling more intelligent control of the robot. Better sensors can perceive the details of more complex situations, but to handle these situations, control of the behavior of the robot must be very flexible and adaptable. Vision, in particular, is a very active field of research because cameras are cheap and the information they can acquire is very rich. Efforts are being made to make systems more flexible, so that they can learn from a human or adapt to new situations.
The figure below shows Afghan eXplorer, a semi-autonomous mobile robot developed by the Artificial Intelligence Lab of the Massachusetts Institute of Technology (MIT), which can carry out reporting activities in dangerous or inaccessible surroundings – a kind of robotic journalist.
______
______
Humanoid robots:
_
Even though the earliest form of humanoid was created by Leonardo da Vinci in 1495 (a mechanical armored suit that could sit, stand and walk), today’s humanoid robots are powered by artificial intelligence and can listen, talk, move and respond. They use sensors and actuators (motors that control movement) and have features modeled after human parts. While the term “android” is used in reference to human-looking robots in general (not necessarily male-looking humanoid robots), a robot with a female appearance can also be referred to as a gynoid. Whether they take the form of an android or a gynoid, it is a challenge to create realistic robots that replicate human capabilities.
The figure above shows an android, or robot designed to resemble a human. Humanoid robots are robots made in the form or shape of a human body – with a head, a torso, two arms and two legs – while androids are artificial beings that resemble a human in external appearance and often in behavior. All androids are humanoid robots, but not all humanoid robots are androids.
A humanoid robot is a robot resembling the human body in shape. The design may be for functional purposes, such as interacting with human tools and environments, for experimental purposes, such as the study of bipedal locomotion, or for other purposes. In general, humanoid robots have a torso, a head, two arms, and two legs, though some humanoid robots may replicate only part of the body, for example, from the waist up. Some humanoid robots also have heads designed to replicate human facial features such as eyes and mouths. Androids are humanoid robots built to aesthetically resemble humans.
_
Humanoid robots are machines that are designed to look like humans. Joint positions and movements are inspired by the human locomotor system. This is also clear by the fact that humanoid robots usually move on two legs in an upright position. The main motive for research and development in the field of humanoid robots is artificial intelligence (AI). In most scientific fields, the development of a humanoid robot is deemed to be an important basis for the creation of human-like AI. This is based on the idea that AI cannot be programmed but consists of learning processes. Accordingly, a robot can develop artificial intelligence only through active participation in social life. However, active participation in social life, including communication, is possible only if the robot is perceived and accepted as an equal creature due to its shape, mobility, and sensors.
_
Humanoid robots as multi-functional helpers:
With rollers rather than legs, a small frame, a pleasant voice, and sparkling round eyes, Josie Pepper the robot is currently assisting passengers at Munich Airport in Germany. Munich Airport, together with Lufthansa, is one of the first airports to trial a humanoid robot live. Josie provides information about current flight status and check-in, and describes the way to the departure gate or the nearest restaurant. The robot, developed by the French company SoftBank Robotics, is connected to the Internet via WiFi and can thus access a cloud to process and analyze dialog and link it with airport data. In this way, Josie learns from every dialog and answers questions individually.
_
Types of Humanoid robots:
Different types of humanoid robots are responsible for performing various types of tasks and responsibilities. Here is a list of some notable humanoid robots.
-1. Ocean one
It is a bimanual underwater humanoid robot that the Stanford Robotics Lab created to explore coral reefs. This type of humanoid robot can reach depths of water that humans cannot. The invention is an epitome of artificial intelligence in synergy with robotics; it is also called the ‘robo-mermaid’. It is built to withstand underwater pressure, and its anthropomorphic design gives it many navigational capabilities.
-2. Atlas
The next option in the list of humanoid robots is Atlas. It is one of the most dynamic humanoid robots in operation, used for rescue missions. Developed with funding from the United States Defense Advanced Research Projects Agency (DARPA), it is highly effective on tough terrain and can manage obstacles.
-3. Nao
Nao is an impressive 23-inch robot that has taken part in the RoboCup Standard Platform League. This humanoid robot combines sensors, software, and motors, and it is also used to help educate autistic children. It has even made its premiere in famous Indian institutions like IIT Kanpur and IIT Allahabad.
-4. Petman
Petman is another type of humanoid robot, used for various chemical and biological tests. The US military has made use of this robot, which is nearly 6 feet tall and weighs 80 kilograms. It can walk at 7.08 km/h and is known as a speedy bipedal robot. Its surface carries many sensors, and it is designed for work in times of chemical or biological warfare.
-5. Robear
Robear is a humanoid robot with a gentle machine touch, usable for lighter operations. It is used in Japan to carry out tasks such as lifting and carrying patients with a human touch. Its earlier models were released in 2009 and 2011. It is a great robot for the healthcare industry.
-6. Pepper
Pepper is a type of humanoid robot that can perceive and respond to various emotions like joy, sadness, and anger. It is a naturally expressive robot, and it is therefore used for interaction in banks, stores, and even supermarkets.
-7. Sophia
Sophia is another famous humanoid robot that can answer many questions. It was initially immobile, but after receiving legs in 2018 it can now move around.
-8. Nadine
Developed by the Nanyang Technological University in Singapore, Nadine is one of the most realistic female humanoid social robots, looking and acting incredibly lifelike. She can remember what people have talked to her about the next time they speak with her. Nadine is a great example of a social robot capable of becoming a personal companion, whether for the elderly, children, or those who require special assistance in the form of human contact.
-9. Erica
Erica, a Japanese robot created by Hiroshi Ishiguro, is one of the most intelligent humanoids developed in Japan, with a special emphasis on her speech capabilities. While Erica cannot walk, she can easily interact with human beings and alter her facial expressions according to the conversation. The robot is an advanced android designed as a research platform to study human-robot interaction. It understands natural language, has a synthesized human-like voice, and can display a variety of facial expressions.
_
Humanoid Robots Market:
Humanoid robots are professional service robots built to mimic human motion and interaction. Like all service robots, they provide value by automating tasks in a way that leads to cost savings and productivity. Humanoid robots are a relatively new form of professional service robot; while long dreamt about, they are now starting to become commercially viable in a wide range of applications. The humanoid robot market is poised for significant growth: it is projected to be valued at $3.9 billion in 2023, growing at a staggering 52.1% compound annual growth rate (CAGR) between 2017 and 2023. Of all the types of humanoid robots, bipedal robots are expected to grow at the fastest CAGR during the forecast period. The rapid expansion of the humanoid robot market is due primarily to the quickly improving capabilities of these robots and their viability in an ever-widening range of applications.
Humanoid robots are being used in the inspection, maintenance and disaster response at power plants to relieve human workers of laborious and dangerous tasks. Similarly, they’re prepared to take over routine tasks for astronauts in space travel. Other diverse applications include providing companionship for the elderly and sick, acting as a guide and interacting with customers in the role of receptionist, and potentially even being a host for the growth of human transplant organs. There’s a wide range of tasks a humanoid robot can automate, from dangerous rescues to compassionate care. The ways in which these robots are deployed are constantly expanding, and as the underlying technology improves, the market will follow suit.
______
______
Digital Human Beings:
Digital humans are AI-powered, human-like virtual beings that offer the best of both AI and human conversation. They can connect to any digital brain (e.g. a chatbot or NLP engine) to share knowledge, and they interact using verbal and non-verbal cues such as tone of voice and facial expressions. Digital human beings are photorealistic, digitized virtual versions of humans; consider them avatars. While they don’t necessarily have to be created in the likeness of a specific individual (they can be entirely unique), they do look and act like humans. Digital human beings are not humanoid robots, as they are virtual entities rather than physical machines. Unlike digital assistants such as Alexa or Siri, these AI-powered virtual beings are designed to interact, sympathize, and hold conversations just as a fellow human would.
Here are a few digital human beings in development or at work today:
Neons: AI-powered lifeforms created by Samsung’s STAR Labs, Neons include unique personalities such as a banker, K-pop star, and yoga instructor. While the technology is still young, the company expects that, ultimately, Neons will be available on a subscription basis to provide services such as customer service or concierge duties.
Digital Pop Stars: In Japan, new pop stars are getting attention—and these pop stars are made of pixels. One of the band members of AKB48, Amy, is entirely digital and was made from borrowing features from the human artists in the group. Another Japanese artist, Hatsune Miku, is a virtual character from Crypton Future Media. Although she started out as the illustration to promote a voice synthesizer with the same name, she now draws her own fans to sold-out auditoriums. With Auxuman, artificial intelligence is actually making the music and creating the digital performers that perform the original compositions.
AI Hosts: Virtual copies of celebrities were created by ObEN Inc to host the Spring Festival Gala, a celebration of the Chinese lunar new year. This project illustrates the potential of personal AIs—a substitute for a real person when they can’t be present in person. Similarly, China’s Xinhua news agency introduced an AI news anchor that will report the news 24/7.
Fashion Models and Social Media Influencers: Another way digital human beings are being used is in the fashion world. H&M used computer-generated models on its website, and Artificial Talent Co. created an entire business to generate completely photorealistic and customizable fashion models. And it turns out you don’t have to be a real-life human to attract a social media following. Miquela, an artificial intelligence “influencer,” has 1.3 million Instagram followers.
Digital humans have been used in television, movies, and video games already, but there are limitations to using them to replace human actors. And while it’s challenging to predict exactly how digital humans will alter our futures, there are people pondering what digital immortality would be like or how to control the negative possibilities of the technology.
______
______
Microbots:
Microbotics (or microrobotics) is the field of miniature robotics, in particular mobile robots with dimensions of less than 1 mm. The term can also be used for robots capable of handling micrometer-size components. Microbots became feasible thanks to two developments: 1. microcontrollers, and 2. the appearance of micro-electromechanical systems (MEMS) on silicon.
One of the major challenges in developing a microrobot is to achieve motion using a very limited power supply. Microrobots can use a small, lightweight battery source such as a coin cell, or can scavenge power from the surrounding environment in the form of vibration or light energy. Microrobots are also now using biological motors as power sources, drawing chemical power from the surrounding fluid to actuate the robotic device.
_
While the “micro” prefix has been used subjectively to mean “small”, standardizing on length scales avoids confusion. Thus a nanorobot would have characteristic dimensions at or below 1 micrometer, or manipulate components on the 1 to 1000 nm size range. A microrobot would have characteristic dimensions less than 1 millimeter, a millirobot would have dimensions less than a cm, a mini-robot would have dimensions less than 10 cm (4 in), and a small robot would have dimensions less than 100 cm (39 in).
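This length-scale taxonomy can be expressed as a simple lookup from a robot’s characteristic dimension to its class:

```python
# The size taxonomy described above, as a lookup table in code.

def size_class(dimension_m: float) -> str:
    """Classify a robot by its characteristic dimension (in metres)."""
    if dimension_m <= 1e-6:
        return "nanorobot"     # at or below 1 micrometer
    if dimension_m < 1e-3:
        return "microrobot"    # less than 1 millimeter
    if dimension_m < 1e-2:
        return "millirobot"    # less than a centimeter
    if dimension_m < 0.1:
        return "mini-robot"    # less than 10 cm
    if dimension_m < 1.0:
        return "small robot"   # less than 100 cm
    return "robot"

print(size_class(5e-4))  # half a millimetre: a microrobot
print(size_class(0.05))  # 5 cm: a mini-robot
```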
_
Due to their small size, microbots are potentially very cheap, and could be used in large numbers (swarm robotics) to explore environments which are too small or too dangerous for people or larger robots. It is expected that microbots will be useful in applications such as looking for survivors in collapsed buildings after an earthquake or crawling through the digestive tract. What microbots lack in brawn or computational power, they can make up for by using large numbers, as in swarms of microbots.
______
______
Section-10
Specialized technology of robotics:
_
Cloud robotics:
Cloud robotics is a field of robotics that attempts to invoke cloud technologies such as cloud computing, cloud storage, and other Internet technologies centered on the benefits of converged infrastructure and shared services for robotics. When connected to the cloud, robots can benefit from the powerful computation, storage, and communication resources of modern data centers, which can process and share information from various robots or agents (other machines, smart objects, humans, etc.). Humans can also delegate tasks to robots remotely through networks. Cloud computing technologies enable robot systems to be endowed with powerful capabilities whilst reducing costs. Thus, it is possible to build lightweight, low-cost, smarter robots with an intelligent “brain” in the cloud. The “brain” consists of data centers, knowledge bases, task planners, deep learning, information processing, environment models, communication support, etc.
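The pattern is easy to sketch: the robot keeps a lightweight on-board loop and delegates heavier computation to the remote “brain”. In the sketch below, the message format is a made-up example, and the network call to the data center is simulated locally rather than using any real cloud service:

```python
# Hedged sketch of the cloud-robotics pattern: serialize a sensor
# reading, hand it to the cloud "brain", act on the reply.
import json

def offload_to_cloud(sensor_reading: dict) -> dict:
    """Stand-in for a network call to a cloud inference service."""
    payload = json.dumps(sensor_reading)  # what would travel over the network
    assert payload  # a real system would POST this to the cloud endpoint;
    # here the data center's reply is simulated locally.
    obstacle = sensor_reading["range_m"] < 1.0
    return {"plan": "avoid" if obstacle else "proceed", "obstacle": obstacle}

reply = offload_to_cloud({"robot_id": "r1", "range_m": 0.6})
print(reply["plan"])  # the cloud tells the robot to avoid the obstacle
```

The robot itself only needs enough compute to gather sensor data and execute the returned plan; the heavy lifting lives in the data center.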
_
The term ‘Cloud Robotics’ was first coined by James J. Kuffner in 2010. It refers to any robot or automation system that utilizes the cloud infrastructure for either data or code for its execution, i.e., a system where all sensing, computation and memory are not integrated into a single standalone system. Integration of cloud computing into robotics results in several advantages which are highlighted below:
_
Limitations of cloud robotics:
Though robots can benefit from the various advantages of cloud computing, the cloud is not the solution to every problem in robotics.
_
Risks of cloud robotics:
_
Applications of cloud robotics:
Autonomous mobile robots:
Google’s self-driving cars are cloud robots. Each car uses the network to access Google’s enormous database of maps, satellite imagery and environment models (like Street View), and combines this with streaming data from GPS, cameras, and 3D sensors to monitor its own position to within centimeters, and with past and current traffic patterns to avoid collisions. Each car can learn something about environments, roads, or driving conditions, and it sends that information to the Google cloud, where it can be used to improve the performance of other cars.
Cloud medical robots:
A medical cloud (also called a healthcare cluster) consists of various services such as a disease archive, electronic medical records, a patient health management system, practice services, analytics services, clinic solutions, expert systems, etc. A robot can connect to the cloud to provide clinical service to patients, as well as deliver assistance to doctors (e.g. a co-surgery robot). Moreover, it also provides a collaboration service by sharing information about clinical treatment between doctors and caregivers.
Assistive robots:
A domestic robot can be employed for healthcare and life monitoring for elderly people. The system collects the health status of users and exchanges information with cloud expert systems or doctors to make elderly people’s lives easier, especially for those with chronic diseases. For example, the robots can provide support to prevent the elderly from falling, as well as emergency health support for conditions such as heart disease. Caregivers can also be notified by the robot through the network in an emergency.
Industrial robots:
In manufacturing, cloud-based robot systems could learn to handle tasks such as threading wires or cables, or aligning gaskets, from a professional knowledge base. A group of robots can share information for collaborative tasks. Moreover, a consumer is able to place customized product orders with manufacturing robots directly through online ordering systems. Another potential paradigm is shopping-and-delivery robot systems: once an order is placed, a warehouse robot dispatches the item to an autonomous car or autonomous drone to deliver it to its recipient.
______
______
Fog robotics:
Fog robotics mainly consists of a fog robot server and the cloud. It acts as a companion to the cloud by bringing data closer to the user with the help of a local server. Moreover, these servers are adaptable, provide processing power for computation and network capability, and share their outcomes with other robots for improved performance at the lowest possible latency.
Because cloud robotics faces issues such as bandwidth limitations, latency, quality of service, privacy and security, fog robotics can be seen as a viable option for future robotic systems. It is also considered a distributed robot-system architecture for the next generation, because robots need substantial processing power to perform billions of computations while carrying out their tasks. For instance, fog robotics can play an essential role in helping a robot grasp a spray bottle.
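The offloading choice that fog robotics implies can be sketched in a few lines. This is a toy illustration, not a real framework: the `Server` class, `choose_server` function, and latency numbers are all hypothetical. The idea is simply that a robot prefers the nearby fog server when it holds the needed data and its measured latency is acceptable, and falls back to the cloud otherwise.

```python
# Toy sketch of a fog-vs-cloud offloading decision. All names and numbers
# here are illustrative assumptions, not a real robotics API.

from dataclasses import dataclass

@dataclass
class Server:
    name: str
    latency_ms: float   # measured round-trip latency to this server
    has_model: bool     # does it cache the data/model the task needs?

def choose_server(fog: Server, cloud: Server, max_latency_ms: float) -> Server:
    """Prefer the nearby fog server when it can serve the request quickly;
    otherwise fall back to the cloud, which always holds the full data."""
    if fog.has_model and fog.latency_ms <= max_latency_ms:
        return fog
    return cloud

fog = Server("fog-local", latency_ms=8.0, has_model=True)
cloud = Server("cloud", latency_ms=120.0, has_model=True)
print(choose_server(fog, cloud, max_latency_ms=50.0).name)  # fog-local
```

In a real system the latency would be measured continuously rather than fixed, and the decision would also weigh bandwidth and privacy constraints mentioned above.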
Applications:
Social robots:
A social robot can connect either to the cloud or to a fog robot server, depending on where the required information is available. For instance, with the help of fog robotics, a robot working at an airport can communicate effectively with the other robots around it.
______
______
Cognitive robotics:
There is growing need for robots that can interact safely with people in everyday situations. These robots have to be able to anticipate the effects of their own actions as well as the actions and needs of the people around them. To achieve this, two streams of research need to merge, one concerned with physical systems specifically designed to interact with unconstrained environments and another focusing on control architectures that explicitly take into account the need to acquire and use experience. The merging of these two areas has brought about the field of Cognitive Robotics. This is a multi-disciplinary science that draws on research in adaptive robotics as well as cognitive science and artificial intelligence, and often exploits models based on biological cognition. Cognitive robots achieve their goals by perceiving their environment, paying attention to the events that matter, planning what to do, anticipating the outcome of their actions and the actions of other agents, and learning from the resultant interaction. They deal with the inherent uncertainty of natural environments by continually learning, reasoning, and sharing their knowledge.
_
Cognitive Robotics is a subfield of robotics concerned with endowing a robot with intelligent behavior by providing it with a processing architecture that will allow it to learn and reason about how to behave in response to complex goals in a complex world. A key feature of cognitive robotics is its focus on predictive capabilities to augment immediate sensory-motor experience. Being able to view the world from someone else’s perspective, a cognitive robot can anticipate that person’s intended actions and needs. This applies both during direct interaction (e.g. a robot assisting a surgeon in theatre) and indirect interaction (e.g. a robot stacking shelves in a busy supermarket). In cognitive robotics, the robot body is more than just a vehicle for physical manipulation or locomotion: it is a component of the cognitive process. Thus, cognitive robotics is a form of embodied cognition which exploits the robot’s physical morphology, kinematics, and dynamics, as well as the environment in which it is operating, to achieve its key characteristic of adaptive anticipatory interaction.
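The predictive capability described above can be illustrated with a toy anticipatory control loop: a forward model predicts the outcome of each candidate action, and the robot commits to the action whose predicted outcome best matches its goal. Everything here (the 1-D dynamics, the function names) is a made-up minimal sketch of the idea, not a cognitive architecture.

```python
# Minimal sketch of anticipatory action selection: predict before acting.
# The forward model and action set are illustrative assumptions.

def forward_model(state: float, action: float) -> float:
    """Toy 1-D dynamics: the action nudges the state."""
    return state + action

def select_action(state: float, goal: float, actions=(-1.0, 0.0, 1.0)) -> float:
    """Choose the action whose *predicted* outcome is closest to the goal."""
    return min(actions, key=lambda a: abs(forward_model(state, a) - goal))

state, goal = 0.0, 3.0
for _ in range(5):                  # perceive -> predict -> act loop
    a = select_action(state, goal)
    state = forward_model(state, a)  # in reality: execute, then re-perceive
print(state)  # 3.0
```

The same pattern scales up: replace the toy forward model with a learned model of the world (including other agents), and prediction becomes the basis for anticipating people's actions and needs.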
______
______
Developmental robotics:
Developmental Robotics is a collaborative, interdisciplinary approach to the autonomous design of behavioral and cognitive capabilities in artificial agents and robots, taking direct inspiration from the developmental principles and mechanisms observed in natural cognitive systems such as children. It builds on the idea that the robot, using a set of intrinsic developmental principles regulating the real-time interaction of its body, brain, and environment, can autonomously acquire an increasingly complex set of sensorimotor and mental capabilities.
_
Developmental robotics (also known as epigenetic robotics or ontogenetic robotics) is a highly interdisciplinary subfield of robotics in which ideas from artificial intelligence, developmental psychology, neuroscience, and dynamical systems theory play a pivotal role in motivating the research. The main goal of developmental robotics is to model the development of increasingly complex cognitive processes in natural and artificial systems and to understand how such processes emerge through physical and social interaction. Robots are typically employed as testing platforms for theoretical models of the emergence and development of action and cognition – the rationale being that if a model is instantiated in a system embedded in the real world, a great deal can be learned about its strengths and potential flaws. Unlike evolutionary robotics which operates on phylogenetic time scales and populations of many individuals, developmental robotics capitalizes on “short” (ontogenetic) time scales and single individuals (or small groups of individuals). Evolutionary robotics uses populations of robots that evolve over time, whereas developmental robotics is interested in how the organization of a single robot’s control system develops through experience, over time.
_
Developmental robotics is a scientific field which aims at studying the developmental mechanisms, architectures and constraints that allow lifelong and open-ended learning of new skills and new knowledge in embodied machines. As in human children, learning is expected to be cumulative and of progressively increasing complexity, and to result from self-exploration of the world in combination with social interaction. The typical methodological approach consists in starting from theories of human and animal development elaborated in fields such as developmental psychology, neuroscience, developmental and evolutionary biology, and linguistics, then to formalize and implement them in robots, sometimes exploring extensions or variants of them. The experimentation of those models in robots allows researchers to confront them with reality, and as a consequence, developmental robotics also provides feedback and novel hypotheses on theories of human and animal development.
_
Can a robot learn like a child? Can it learn a variety of new skills and new knowledge unspecified at design time and in a partially unknown and changing environment? How can it discover its body and its relationships with the physical and social environment? How can its cognitive capacities continuously develop without the intervention of an engineer once it is “out of the factory”? What can it learn through natural social interactions with humans? These are the questions at the center of developmental robotics. Alan Turing, as well as a number of other pioneers of cybernetics, already formulated those questions and the general approach in 1950, but it is only since the end of the 20th century that they began to be investigated systematically.
Because the concept of adaptive intelligent machines is central to developmental robotics, it has relationships with fields such as artificial intelligence, machine learning, cognitive robotics and computational neuroscience. Yet, while it may reuse some of the techniques elaborated in these fields, it differs from them in many respects. It differs from classical artificial intelligence because it does not assume the capability of advanced symbolic reasoning and focuses on embodied and situated sensorimotor and social skills rather than on abstract symbolic problems. It differs from cognitive robotics because it focuses on the processes that allow the formation of cognitive capabilities rather than on those capabilities themselves. It differs from computational neuroscience because it focuses on functional modeling of integrated architectures of development and learning. More generally, developmental robotics is uniquely characterized by the following three features:
-1. It targets task-independent architectures and learning mechanisms, i.e. the machine/robot has to be able to learn new tasks unknown to the engineer;
-2. It emphasizes open-ended development and lifelong learning, i.e. the capacity of an organism to continuously acquire novel skills. This should not be understood as a capacity for learning “anything” or even “everything”, but only that the set of acquired skills can be extended indefinitely in at least some (not all) directions;
-3. The complexity of acquired knowledge and skills should increase progressively, and the increase should be controlled.
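One mechanism commonly studied in developmental robotics for driving such open-ended, progressively complex learning is intrinsic motivation based on learning progress: the robot practices whichever skill is currently improving fastest, abandoning both mastered and hopeless ones. The sketch below is a deliberately tiny illustration with made-up error curves, not a faithful model from the literature.

```python
# Toy sketch of intrinsically motivated skill selection by learning progress.
# The error histories below are invented for illustration.

def learning_progress(errors, window=2):
    """Progress = how much prediction error dropped over the last `window` trials."""
    if len(errors) < window + 1:
        return 0.0
    return errors[-window - 1] - errors[-1]

skills = {
    "reach": [0.9, 0.7, 0.4],   # error still falling fast -> high progress
    "grasp": [0.5, 0.5, 0.5],   # plateaued -> no progress
}

best = max(skills, key=lambda s: learning_progress(skills[s]))
print(best)  # reach
```

The key property is that the selection criterion is task-independent: nothing in the code names a particular skill, so the same mechanism keeps working as new skills are added.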
Developmental robotics emerged at the crossroads of several research communities including embodied artificial intelligence, enactive and dynamical systems cognitive science, connectionism. Starting from the essential idea that learning and development happen as the self-organized result of the dynamical interactions among brains, bodies and their physical and social environment, and trying to understand how this self-organization can be harnessed to provide task-independent lifelong learning of skills of increasing complexity, developmental robotics strongly interacts with fields such as developmental psychology, developmental and cognitive neuroscience, developmental biology (embryology), evolutionary biology, and cognitive linguistics. As many of the theories coming from these sciences are verbal and/or descriptive, this implies a crucial formalization and computational modeling activity in developmental robotics. These computational models are then not only used as ways to explore how to build more versatile and adaptive machines but also as a way to evaluate their coherence and possibly explore alternative explanations for understanding biological development.
______
______
Evolutionary robotics:
Evolutionary robotics is a new technique for the automatic creation of autonomous robots. Inspired by the Darwinian principle of selective reproduction of the fittest, it views robots as autonomous artificial organisms that develop their own skills in close interaction with the environment and without human intervention. Drawing heavily on biology and ethology, it uses the tools of neural networks, genetic algorithms, dynamic systems, and biomorphic engineering. The resulting robots share with simple biological systems the characteristics of robustness, simplicity, small size, flexibility, and modularity. In evolutionary robotics, an initial population of artificial chromosomes, each encoding the control system of a robot, is randomly created and put into the environment. Each robot is then free to act (move, look around, manipulate) according to its genetically specified controller while its performance on various tasks is automatically evaluated. The fittest robots then “reproduce” by swapping parts of their genetic material with small random mutations. The process is repeated until the “birth” of a robot that satisfies the performance criteria.
_
Evolutionary robotics is a field of research that employs evolutionary computation to generate robots that adapt to their environment through a process analogous to natural evolution. The generation and optimisation of robots are based on the evolutionary principles of blind variation and survival of the fittest, as embodied in the neo-Darwinian synthesis (Gould, 2002). Evolutionary robotics is typically applied to create control systems for robots. Although less frequent, evolutionary robotics can also be applied to generate robot body plans, and to coevolve control systems and body plans simultaneously (Lipson and Pollack, 2000). In this respect, evolutionary robotics differs from the Artificial Life domain in the usage of physical robots. In particular, evolutionary robotics puts a strong emphasis on embodiment and situatedness, and on the close interaction of brain, body, and environment, which is crucial for the emergence of intelligent, adaptive behaviour and cognitive processes (e.g. Clark, 1997; Chiel & Beer, 1997; Nolfi & Floreano, 2002).
_
Evolutionary robotics is an embodied approach to Artificial Intelligence (AI) in which robots are automatically designed using Darwinian principles of natural selection. The design of a robot, or a subsystem of a robot such as a neural controller, is optimized against a behavioral goal (e.g. run as fast as possible). Usually, designs are evaluated in simulation, as fabricating thousands or millions of designs and testing them in the real world is prohibitively expensive in terms of time, money, and safety. An evolutionary robotics experiment starts with a population of randomly generated robot designs. The worst-performing designs are discarded and replaced with mutations and/or combinations of the better designs. This evolutionary algorithm continues until a prespecified amount of time elapses or some target performance metric is surpassed. Evolutionary robotics methods are particularly useful for engineering machines that must operate in environments in which humans have limited intuition (nanoscale, space, etc.). Evolved simulated robots can also be used as scientific tools to generate new hypotheses in biology and cognitive science, and to test old hypotheses that require experiments that have proven difficult or impossible to carry out in reality.
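The loop just described (random population, evaluate, discard the worst, refill by mutating the best) can be shown end to end in a few lines. In this sketch the "robot" is only a parameter vector and the fitness a toy function I made up; a real experiment would evaluate each genome by running the robot in a physics simulator.

```python
# Minimal evolutionary algorithm in the spirit of evolutionary robotics:
# random population -> evaluate -> select the fittest -> mutate to refill.
# The genome encoding and fitness function are illustrative stand-ins.

import random
random.seed(0)  # fixed seed so the run is reproducible

GENES = 5

def fitness(genome):
    """Toy objective: genomes closer to all-ones score higher (max = 0)."""
    return -sum((g - 1.0) ** 2 for g in genome)

def mutate(genome, sigma=0.3):
    """Small random variation of a parent genome."""
    return [g + random.gauss(0.0, sigma) for g in genome]

# Initial population of random "designs".
pop = [[random.uniform(-1, 1) for _ in range(GENES)] for _ in range(20)]

for generation in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:5]                                   # keep the fittest
    pop = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(pop, key=fitness)
print(round(-fitness(best), 3))  # residual error, close to 0
```

Swapping the toy fitness for "distance walked in simulation" and the list of floats for neural-network weights turns this skeleton into the classic evolutionary-robotics setup.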
_______
_______
Soft robotics:
Soft robotics is a subfield of robotics that concerns the design, control, and fabrication of robots composed of compliant materials, instead of rigid links. In contrast to rigid-bodied robots built from metals, ceramics and hard plastics, the compliance of soft robots can improve their safety when working in close contact with humans. Most robots are made of hard materials and are designed to be strong, fast, repetitive and precise with their movements. But the inflexibility inherent in their designs limits the ways we can safely use and interact with them. That’s why engineers and researchers have started exploring the possibilities of soft, flexible robots made of materials that are safer for humans to be around and can navigate unpredictable environments that other robots can’t. These agile machines are often designed to mimic the biomechanics of animals and other things we find in nature, which is leading to some exciting new applications. Animals exploit soft structures to move effectively in complex natural environments. These capabilities have inspired robotic engineers to incorporate soft technologies into their designs. The goal is to endow robots with new, bioinspired capabilities that permit adaptive, flexible interactions with unpredictable environments. Soft robotics is the subset of robotics that focuses on technologies that more closely resemble the physical characteristics of living organisms. Experts describe the soft robotics approach as a form of biomimicry in which the traditionally linear and somewhat stilted aspects of robotics are replaced by much more sophisticated models that imitate human, animal and plant life.
_
There has been an increasing interest in the use of soft and deformable structures in the robotic systems. Soft and deformable structures are crucial in the systems that deal with uncertain and dynamic task-environments, e.g. grasping and manipulation of unknown objects, locomotion in rough terrains, and physical contacts with living cells and human bodies. Moreover the investigations on soft materials are also necessary for more visionary research topics such as self-repairing, growing, and self-replicating robots. Despite its importance and considerable demands, the field of Soft Robotics faces a number of fundamental scientific challenges: the studies of unconventional materials are still in their exploration phase, and it has not been fully clarified what materials are available and useful for robotic applications; tools and methods for fabrication and assembly are not established; we do not have broadly agreed methods of modeling and simulation of soft continuum bodies; it is not fully understood how to achieve sensing, actuation and control in soft bodied robots; and we are still exploring what are the good ways to test, evaluate, and communicate the soft robotics technologies.
_
Soft materials:
Although conventional rigid robots articulate discrete joints that are designed to have negligible impedance, soft robots articulate their entire body structure as a continuum. To minimize the force required to cause deformation, the body should be made of low-modulus materials (such as elastomers). Silicone rubber is a popular choice for body fabrication because it is available in very soft durometer grades that allow high strain, and because of the convenience of its room-temperature vulcanizing process. It is also a good biocompatible material for medical applications. Soft fluidic actuators consisting of elastomeric matrices with embedded flexible materials (e.g. cloth, paper, fiber, particles) are of particular interest to the robotics community because they are lightweight, affordable and easily customized to a given application. As a future alternative material choice, a recently developed tough and highly stretchable hydrogel can serve as a soft body material that may integrate tissue-engineered materials by providing scaffolding. Dissolvable robots made of soft, biodegradable materials could be used to deliver drugs to specific tissues.
New techniques are needed to model and control the environmental interactions of soft-bodied robots. Known robotics techniques for kinematic and dynamic modeling cannot be directly applied in soft robotics because the structure is a continuum and deformation is highly nonlinear owing to large strain. Several constitutive models for large deformations of rubber-like materials have been developed, but soft robots usually have heterogeneous structures with complex boundary conditions, so accurate dynamic modeling of such systems is still challenging. Most current approaches for modeling soft continuum bodies are limited to kinematic analysis.
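A widely used example of such kinematic analysis is the constant-curvature assumption: a soft segment of length L bent with curvature κ is approximated as a circular arc, which gives a closed-form tip position. The snippet below is a minimal planar version of that idea; the function name and the treatment of the straight-segment limit are my own choices.

```python
# Planar constant-curvature kinematics for a soft continuum segment.
# A segment of length L with curvature kappa traces a circular arc of
# radius 1/kappa; kappa = 0 degenerates to a straight segment.

import math

def cc_tip(length: float, kappa: float) -> tuple:
    """Tip position (lateral, forward) of a constant-curvature segment."""
    if abs(kappa) < 1e-9:
        return (0.0, length)                 # straight-segment limit
    theta = kappa * length                   # total bend angle of the arc
    return ((1 - math.cos(theta)) / kappa,   # lateral deflection
            math.sin(theta) / kappa)         # forward reach

print(cc_tip(1.0, 0.0))       # (0.0, 1.0): straight segment
x, y = cc_tip(1.0, math.pi)   # bent into a half circle of radius 1/pi
```

For κ = π and L = 1 the segment forms a half circle, so the tip sits at lateral offset 2/π with zero forward reach, which is what the formula returns. This is exactly the kind of model the text calls kinematic: it says nothing about the forces needed to produce the curvature.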
_
Convergence with tissue engineering:
Soft materials open up new prospects for bioengineered and biohybrid devices. Researchers have created a flexible biohybrid microsystem that models the alveolus-capillary interface of the human lung. A soft material allows the interface to be rhythmically stretched, reproducing the cyclical mechanical effects of breathing. By growing cardiac muscle cells, researchers have developed a tissue-engineered jellyfish that can swim. Significant advances have been made in developing biomaterials suitable for minimally invasive surgery (MIS) soft robots, such as soft, transient electronics and a tissue growth scaffold made from biopolymers such as silk. A locomotive ‘bio-robot’ is fabricated by growing muscle cells on a 3D-printed hydrogel structure. A soft robot could be designed with biomaterials that release therapeutic agents locally or that deposit materials that the body can use as a scaffold for tissue repair.
Soft robots built from biological materials and living cells would inherit the advantages of these materials: they have extraordinary potential for self-assembly (from molecular structures to integrated devices); they are powered by energy-dense, safe fuels such as lipids and sugars; and they are biocompatible and biodegradable, making them a potentially green technology. The primary robotic components needed are: (i) actuators (synthetic or living muscles); (ii) a mobile body structure (built from biopolymers in any desired configuration); and (iii) a supply of biofuel (e.g. mobilizing glucose or lipid reserves in the body cells of the robot). Such robots could be built (or grown) using parallel fabrication methods; they therefore also have great potential for tasks that require disposable devices or swarm-like interactions. New challenges lie in the selection of appropriate tissue sources and in interfacing them with synthetic materials and electronics.
_
Soft technologies will greatly assist the development of robots capable of substantial interaction with an environment or human users by providing: (i) safer and more robust interactions than are currently available with conventional robotics; (ii) adaptive behaviors that use mechanical intelligence and therefore simplify the controllers needed for physical interaction; and (iii) cheaper and simpler robotic components. Soft robotics has particular utility for medical applications. Soft materials may enable robotic devices that are safe for use in medical interventions, including diagnosis, drug therapy, and surgery. For example, soft robotics may expedite the development of minimally invasive surgery (MIS) techniques. A soft-bodied MIS robot might cause less tissue trauma than rigid instruments during insertion and navigation through soft tissues and complex organ geometries. In the near future, we will be able to engineer biohybrid soft robotic systems for medical interventions by combining biocompatible soft materials and tissue-engineered cells.
______
______
Hot robotics:
The future of deep space exploration might depend upon what some engineers call “hot robotics.” These robots are designed to go places where humans cannot and may someday be called upon to carry out tasks that are far too dangerous for human crews or even other robots. Hot robots first saw use inside decommissioned nuclear reactors. The term “hot” refers to the hazardous radioactivity that makes these reactors impractical or too dangerous for humans to enter. That’s where these heavily shielded robots come in, to help with tasks like reactor inspection and waste cleanup. Now, scientists and engineers hope to bring them to new tasks and new places, including space.
Fusion power might be able to unlock a future of clean, virtually limitless energy — including in space — but fusion power happens in extreme environments such as inside stars or incredibly powerful magnetic fields. Fusion power creates energy by smashing atoms together, but that process requires plasma superheated to temperatures comparable to the sun. Moreover, the process creates a good deal of radiation. All of that can be contained inside reactor chambers, but it makes the interior of those reactors dangerous for humans. ‘Hot robots’ may be necessary for future nuclear space missions.
______
______
Robotics Simulation:
Robotics simulation is a digital tool used to engineer robotics-based automated production systems. Functionally, robotics simulation uses a digital representation – a digital twin – to enable dynamic interaction with robotics models in a virtual environment. The purpose of robotics and automation simulation systems is to bring automation systems online much faster, and to launch production with fewer errors, than conventional automation engineering allows. A robotics simulator is a simulator used to create applications for a physical robot without depending on the actual machine, thus saving cost and time. In some cases, these applications can be transferred onto the physical robot (or rebuilt) without modification.
Automation simulation plays a key role in robotics because it permits experimentation that would be expensive and/or time-consuming if it had to be conducted with actual robots – and even more so when such experimentation must be conducted on the production floor. Robotics simulation permits engineers to try ideas and construct manufacturing scenarios in a dynamic virtual environment, collecting virtual response data that accurately represents the physical responses of the control system.
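A concrete, if deliberately tiny, example of such virtual experimentation: validating a planned robot path against an obstacle in software before any hardware moves. The geometry, waypoints and function name below are illustrative assumptions; real simulators check swept volumes along continuous trajectories, whereas this sketch only checks discrete waypoints.

```python
# Toy path validation in a virtual world: flag a planned path that enters
# a circular obstacle before it is ever run on real hardware.

import math

def collides(path, obstacle_center, obstacle_radius):
    """Return True if any waypoint lies inside the circular obstacle.
    (Waypoints only - a real check would test the whole swept path.)"""
    ox, oy = obstacle_center
    return any(math.hypot(x - ox, y - oy) < obstacle_radius for x, y in path)

planned = [(0, 0), (1, 1), (2, 2), (3, 3)]
print(collides(planned, obstacle_center=(2, 2), obstacle_radius=0.5))  # True

detour = [(0, 0), (1, 0), (3, 1), (3, 3)]
print(collides(detour, obstacle_center=(2, 2), obstacle_radius=0.5))   # False
```

Catching the first path's collision in simulation costs milliseconds; catching it on the production floor could cost a damaged robot and a stopped line, which is the economic argument the paragraph above makes.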
Robotics simulation has steadily evolved over time to keep up with the growing capabilities of industrial robots. Robots are being deployed into dynamic environments in which the robot’s tasks may change frequently or involve human collaborators. Demand for advanced robotics of this kind continues to grow as manufacturers increase product complexity, variety and customization to meet customer demand. Advanced robotics incorporates runtime decision-making and reactive programming for unforeseen events, as well as the ability to adapt and improve based on data collected by the industrial internet of things (IIoT) processed with artificial intelligence (AI). The level of complexity in advanced robotics programming and deploying and operating advanced robots, as well as the high cost that would be incurred to debug a robotics system on the production floor, make advanced robotics simulation a critical component of manufacturing engineering.
Advanced robotics simulation software enables users to engineer and optimize robotic production that comprises the new flexibility and customization made possible by advanced robotics technologies. Engineers may employ advanced robotics simulation to design complete 3D robotic work cells, then validate and optimize manufacturing process sequences simulating realistic behavior and responses. They can validate automation concepts virtually and perform offline advanced robotics programming. This simulation software also provides companies with the ability to commission complete production systems virtually.
_
Functionality of advanced robotics simulation software:
Modern robotics and automation simulation software is designed to address everything from single-robot stations to complete production lines and zones. It begins with design and validation of automated manufacturing processes that include a variety of robotic and automation processes. Once processes are designed and validated, the software supports detailed engineering of robotic paths and motions, helping engineers to ensure collision-free operation with optimized cycle times. Both time-based and event-based simulation methods may be employed. Advanced robotics simulation software also typically provides support for specific robotics applications.
Software functionality also includes offline advanced robotics programming, which allows for downloading and uploading of production programs to and from the shop floor. This means that the software interfaces with any major industrial robot and controller, and users can add detailed information to create complete programs offline, then download them to the controller on the floor.
Benefits of robotics simulation:
Because robotics simulation generates realistic and accurate behaviors and responses in the virtual realm to demonstrate what will happen in the physical realm, it enables manufacturers to design and optimize manufacturing processes without the time and cost penalties of tying up capital equipment or production floors.
Additional benefits:
______
_______
Section-11
Challenges in robotics:
The field of robotics faces numerous issues related to its hardware and software capabilities. The majority of challenges surround enabling technologies like artificial intelligence (AI), perception and power sources. From manufacturing procedures to human-robot collaboration, several factors are slowing the development pace of the robotics industry, including intelligence, navigation, autonomy and new materials. These challenges are not unique; they are to be expected for any developing technology. Despite recent and significant strides in the field, and promise for the future, today’s robots are still quite limited in their ability to figure things out, their communication is often brittle, and it takes too much time to make new robots. Broad adoption of robots will require a natural integration of robots into the human world rather than an integration of humans into the machines’ world.
_
-1. Reasoning
Robots can only perform limited reasoning due to the fact that their computations are carefully specified. For today’s robots, everything is spelled out with simple instructions and the scope of the robot is entirely contained in its program. Tasks that humans take for granted, for example asking the question “Have I been here before?” are notoriously difficult for robots. Robots record the features of the places they have visited. These features are extracted from sensors such as cameras or laser scanners. It is hard for a machine to differentiate between features that belong to a scene the robot has already seen and a new scene that happens to contain some of the same objects. In general, the data collected from sensors and actuators is too big and too low level; it needs to be mapped to meaningful abstractions for robots to be able to effectively use the information. Current machine-learning research on big data is addressing how to compress a large dataset to a small number of semantically meaningful data points. Summarization can also be used by robots. For example, robots could summarize their visual history to reduce significantly the number of images required to determine whether “I have been here before.”
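The "Have I been here before?" problem can be made concrete with a toy sketch: compare the current scene's feature vector against a summarized visual history and report a revisit when similarity exceeds a threshold. Real systems use learned image descriptors with thousands of dimensions; the 3-D vectors and the 0.95 threshold here are stand-ins chosen for illustration.

```python
# Toy place recognition over a summarized visual history.
# Feature vectors and the similarity threshold are illustrative assumptions.

import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def seen_before(current, history, threshold=0.95):
    """Has the robot already visited a place that looks like this one?"""
    return any(cosine(current, past) >= threshold for past in history)

history = [(1.0, 0.2, 0.0), (0.0, 1.0, 0.3)]    # summarized past scenes
print(seen_before((1.0, 0.21, 0.01), history))  # True: matches first scene
print(seen_before((0.0, 0.1, 1.0), history))    # False: novel scene
```

The summarization point from the paragraph above maps directly onto `history`: instead of storing every image ever captured, the robot keeps a small set of representative descriptors and compares against those.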
Additionally, robots cannot cope with unexpected situations. If a robot encounters a case it was not programmed to handle or is outside the scope of its capabilities, it will enter an error state and halt. For example, vacuum-cleaning robots are designed and programmed to move on the floor, but cannot climb stairs.
Robots need to learn how to adjust their programs, adapting to their surroundings and the interactions they have with people, with their environments and with other machines. Today, everybody with Internet access has the world’s information at their fingertips, including machines. Robots could take advantage of this information to make better decisions. Robots could also record and use their entire history (for example, output of their sensors and actuators), and the experiences of other machines. For example, a robot trained to walk your dog could access the weather report online, and then, based on previous walks, determine the best route to take. Perhaps a short walk if it is hot or raining, or a long walk to a nearby park where other robotic dog walkers are currently located. All of this could be determined without human interaction or intervention.
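The dog-walking example reduces to a small decision rule that combines fetched information (the weather report) with the robot's own experience (other walkers nearby). The routes, weather labels and function below are invented purely to make the example concrete.

```python
# The dog-walking decision from the text as a rule sketch.
# Inputs would come from an online weather service and shared robot data;
# here they are plain arguments, and all route names are made up.

def pick_route(weather: str, nearby_walkers: int) -> str:
    if weather in ("rain", "hot"):
        return "short loop"                   # bad weather: keep it brief
    if nearby_walkers > 0:
        return "long walk to the park"        # join other robot dog walkers
    return "default neighborhood route"

print(pick_route("rain", nearby_walkers=3))   # short loop
print(pick_route("mild", nearby_walkers=2))   # long walk to the park
```

The point is not the rules themselves but their inputs: none of this requires human intervention once the robot can query online sources and its own history.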
_
-2. Communication
A world with many robots working together requires reliable communication for coordination. Despite advances in wireless communication, there are still impediments in robot-to-robot communication. The problem is that modeling and predicting communication is notoriously hard and any robot control method that relies on current communication models is fraught with noise. The robots need more reliable approaches to communication that guarantee the bandwidth they need, when they need it. To get resilient robot-to-robot communication, a new paradigm is to measure locally the communication quality instead of predicting it with models. Using the idea of measuring communication, we can begin to imagine using flying robots as mobile base-stations that coordinate with each other to provide planet-scale communication coverage. Swarms of flying robots could bring Internet access everywhere in the world.
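The "measure locally instead of predicting with models" idea can be sketched as a running estimate of link quality: each robot tracks an exponentially weighted success rate of its own recent transmissions and routes traffic only over links whose measured quality is high enough. The update rule is a standard exponential moving average; the smoothing factor and the transmission trace are illustrative.

```python
# Locally measured link quality as an exponentially weighted moving average
# of packet delivery. The alpha value and the delivery trace are assumptions.

def update_quality(quality: float, delivered: bool, alpha: float = 0.2) -> float:
    """Blend the latest delivery outcome into the running quality estimate."""
    return (1 - alpha) * quality + alpha * (1.0 if delivered else 0.0)

q = 1.0  # start optimistic: assume a perfect link
for delivered in [True, True, False, False, False]:
    q = update_quality(q, delivered)
print(round(q, 3))  # 0.512
```

Because the estimate comes from the robot's own measurements rather than a propagation model, it degrades gracefully when the environment does something no model predicted, which is exactly the resilience argument made above.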
Communication between robots and people is also currently limited. While speech technologies have been employed to give robots commands in human language (for example, “move to the door”), the scope and vocabulary of these interactions is shallow. Robots could use the help of humans when they get stuck. It turns out that even a tiny amount of human intervention in the task of a robot completely changes the problem and empowers the machines to do more.
Currently, when robots encounter something unexpected (a case for which they were not programmed), they get stuck. Suppose, instead of just getting stuck, the robot were able to reason about why it is stuck and enlist human help. For example, recent work on using robots to assemble IKEA furniture demonstrates that robots can recognize when a table leg is out of reach and ask humans to hand them the part. After receiving the part, the robots resume the assembly task. These are some of the first steps toward creating symbiotic human-robot teams in which robots and humans can ask each other for help.
_
-3. Design and Fabrication
Another great challenge with today’s robots is the length of time to design and fabricate new robots. We need to speed up the creation of robots. Many different types of robots are available today, but each of these robots took many years to produce. The computation, mobility, and manipulation capabilities of robots are tightly coupled to the body of the robot—its hardware system. Since today’s robot bodies are fixed and difficult to extend, the capabilities of each robot are limited by its body. Fabricating new robots—add-on robotic modules, fixtures, or specialized tools to extend capabilities—is not a real option, as the process of design, fabrication, assembly, and programming is long and cumbersome. We need tools that will speed up the design and fabrication of robots. Imagine creating a robot compiler that takes as input the functional specification of the robot (for example “I want a robot to play chess with me”) and computes a design that meets the specification, a fabrication plan, and a custom-programming environment for using the robot. Many tasks big and small could be automated by rapid design and fabrication of many different types of robots using such a robot compiler.
_
-4. Pervasive Robotics
There are significant gaps between where robots are today and the promise of pervasive integration of robots in everyday life. Some of the gaps concern the creation of robots—how do we design and fabricate new robots quickly and efficiently? Other gaps concern the computation and capabilities of robots to reason, change, and adapt for increasingly more complex tasks in increasingly complex environments. Other gaps pertain to interactions between robots, and between robots and people. Current research directions in robotics push the envelope in each of these directions, aiming for better solutions to making robots, controlling the movement of robots and their manipulation skills, increasing the ability for robots to reason, enabling semantic-level perception through machine vision, and developing more flexible coordination and cooperation between machines and between machines and humans. Meeting these challenges will bring robots closer to the vision of pervasive robotics: the connected world of many people and many robots performing many different tasks.
The most profound technologies are those that disappear: they weave themselves into the fabric of everyday life until they are indistinguishable from it. Electricity, for example, was once a novel technology and is now simply part of life. Robotic technologies have the potential to join the personal computer and electricity as pervasive aspects of everyday life, and in the near future they will change how we think about many of its aspects.
_
-5. Robot swarms
The concept of robot swarms is derived from nature. In the same way that birds, fish and insects move in cohesive groups, researchers are working towards developing robots that can act as part of a team to achieve complex goals. If successful, these swarms have the potential to solve the most pressing problems facing human civilization. They can provide solutions to feed an ever-increasing population with limited resources by increasing the efficiency of food production and decreasing water consumption by an order of magnitude. They can respond to natural disasters and adversarial attacks by enabling resilience in our infrastructure. They are a part of any practical solution to space colonization. But the development of these swarms is complex. Individual robots need to sense not only the environment but also the movements of their neighbours, all while communicating with other individuals in their teams and acting independently. At present, robots are able to function autonomously using a perception-action loop. This feedback loop is fundamental to creating autonomous robots that function in unstructured environments. Robot swarms require their communication ability to be embedded in this feedback loop. Thus, perception-action-communication loops are key to designing multifunctional, adaptive robot swarms. There are currently no systematic approaches for designing such multidimensional feedback loops across large groups.
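To make the perception-action-communication loop concrete, here is a minimal, hypothetical sketch of a swarm agent that senses its neighbours (perception), receives their broadcast velocities (communication), and steers toward the group's average heading (action). The `SwarmAgent` class, the sensing radius, and the alignment gain are illustrative assumptions, not a real swarm framework.

```python
import math

class SwarmAgent:
    """Illustrative swarm agent closing a perception-action-communication loop."""

    def __init__(self, x, y, vx, vy):
        self.x, self.y = x, y
        self.vx, self.vy = vx, vy

    def perceive(self, others, radius=10.0):
        # Perception: sense which team-mates are within range.
        return [o for o in others
                if o is not self and math.hypot(o.x - self.x, o.y - self.y) <= radius]

    def communicate(self, neighbours):
        # Communication: receive each neighbour's broadcast velocity.
        return [(n.vx, n.vy) for n in neighbours]

    def act(self, messages, gain=0.5):
        # Action: steer toward the average broadcast heading (alignment),
        # then move. Agents with no neighbours just keep moving.
        if messages:
            avg_vx = sum(m[0] for m in messages) / len(messages)
            avg_vy = sum(m[1] for m in messages) / len(messages)
            self.vx += gain * (avg_vx - self.vx)
            self.vy += gain * (avg_vy - self.vy)
        self.x += self.vx
        self.y += self.vy

def step(swarm):
    # One full perception -> communication -> action loop per agent
    # (agents are updated sequentially for simplicity).
    for agent in swarm:
        agent.act(agent.communicate(agent.perceive(swarm)))
```

Running `step` repeatedly on a small group drives the agents' headings together, the simplest form of flocking alignment; a real swarm controller would add cohesion, separation, and explicit message passing over a radio or network layer.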
_
-6. Coping with Uncertainty:
The earlier generation of robots was industrial robots that manipulate a world that is very precisely controlled. The next generation includes airborne drones, self-driving cars and other mobile robots that can move around in our world and share it with us, even though that world is uncertain and full of novelty. But manipulating a world shared with humans, and performing physical tasks alongside them, requires a new generation of robots that can cope with all the uncertainty humans introduce into the environment. An unstructured world demands technologies that can deal with it. Probabilistic robotics is a new and growing area in robotics, concerned with perception and control in the face of uncertainty. Building on the field of mathematical statistics, probabilistic robotics endows robots with a new level of robustness in real-world situations.
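A classic entry point to probabilistic robotics is the discrete Bayes (histogram) filter for localizing a robot in a corridor. The sketch below is illustrative: the corridor map, the motion-slip probability, and the sensor hit/miss probabilities are assumed values, not taken from any particular system.

```python
def normalize(belief):
    # Rescale so the belief sums to 1 (a proper probability distribution).
    total = sum(belief)
    return [b / total for b in belief]

def predict(belief, move_prob=0.8):
    # Motion update: the robot is commanded one cell to the right and
    # succeeds with probability move_prob; otherwise it slips and stays.
    # The corridor is treated as circular for simplicity.
    n = len(belief)
    new = [0.0] * n
    for i, b in enumerate(belief):
        new[(i + 1) % n] += move_prob * b        # moved as commanded
        new[i] += (1 - move_prob) * b            # wheel slip: stayed put
    return new

def update(belief, world, measurement, p_hit=0.9, p_miss=0.1):
    # Measurement update: up-weight cells consistent with the sensed landmark.
    weighted = [b * (p_hit if world[i] == measurement else p_miss)
                for i, b in enumerate(belief)]
    return normalize(weighted)
```

Starting from a uniform belief, sensing a door concentrates probability on the door cells, and alternating motion and measurement updates sharpens the estimate. This is exactly the perceive-under-uncertainty cycle described above: the robot never knows its position exactly, but it maintains a distribution over where it might be.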
_
-7. Singularities:
A robot singularity is a configuration in which the robot end-effector becomes blocked in certain directions. Any six-axis robot arm (also known as a serial robot, or serial manipulator) has singularities. The American National Standard for Industrial Robots and Robot Systems — Safety Requirements (ANSI/RIA R15.06-1999) defines a singularity as “a condition caused by the collinear alignment of two or more robot axes resulting in unpredictable robot motion and velocities.” It is most common in robot arms that utilize a “triple-roll wrist”, a wrist in which the three axes controlling yaw, pitch, and roll all pass through a common point. An example of a wrist singularity is when the path through which the robot is traveling causes the first and third axes of the robot’s wrist (i.e., the robot’s axes 4 and 6) to line up. The second wrist axis then attempts to spin 180° in zero time to maintain the orientation of the end effector. Another common term for this singularity is a “wrist flip”. The result of a singularity can be quite dramatic and can have adverse effects on the robot arm, the end effector, and the process. Some industrial robot manufacturers have attempted to side-step the situation by slightly altering the robot’s path to prevent this condition. Another method is to slow the robot’s travel speed, thus reducing the speed required for the wrist to make the transition. The ANSI/RIA has mandated that robot manufacturers shall make the user aware of singularities if they occur while the system is being manually manipulated.
A second type of singularity in wrist-partitioned vertically articulated six-axis robots occurs when the wrist center lies on a cylinder that is centered about axis 1 and with radius equal to the distance between axes 1 and 4. This is called a shoulder singularity. Some robot manufacturers also mention alignment singularities, where axes 1 and 6 become coincident. This is simply a sub-case of shoulder singularities. When the robot passes close to a shoulder singularity, joint 1 spins very fast.
The third and last type of singularity in wrist-partitioned vertically articulated six-axis robots occurs when the wrist’s center lies in the same plane as axes 2 and 3.
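The third configuration above is the elbow singularity, and its planar analogue is easy to check numerically: for a hypothetical two-link planar arm with link lengths l1 and l2 and joint angles q1 and q2, the determinant of the Jacobian reduces to l1·l2·sin(q2), which vanishes exactly when the arm is fully stretched or folded (q2 = 0 or π). The link lengths and tolerance below are illustrative assumptions.

```python
import math

def jacobian_det(l1, l2, q1, q2):
    # For a planar 2-link arm the end-effector position is
    #   x = l1*cos(q1) + l2*cos(q1 + q2)
    #   y = l1*sin(q1) + l2*sin(q1 + q2)
    # Differentiating w.r.t. (q1, q2) gives the 2x2 Jacobian below;
    # its determinant simplifies analytically to l1*l2*sin(q2).
    j11 = -l1 * math.sin(q1) - l2 * math.sin(q1 + q2)
    j12 = -l2 * math.sin(q1 + q2)
    j21 = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    j22 = l2 * math.cos(q1 + q2)
    return j11 * j22 - j12 * j21

def near_singularity(l1, l2, q1, q2, tol=1e-6):
    # The arm is (close to) singular when det(J) is (close to) zero.
    return abs(jacobian_det(l1, l2, q1, q2)) < tol
```

Near such configurations det(J) approaches zero, so solving for joint velocities means dividing by an almost-zero quantity; this is why joint speeds blow up close to a singularity, just as joint 1 spins very fast near the shoulder singularity described above.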
Singularities are closely related to the phenomenon of gimbal lock, which has a similar root cause of axes becoming lined up. Gimbal lock is the loss of one degree of freedom in a three-dimensional, three-gimbal mechanism that occurs when the axes of two of the three gimbals are driven into a parallel configuration, “locking” the system into rotation in a degenerate two-dimensional space. Coordinate singularities and gimbal lock are two phenomena that present themselves in models for the dynamics of mechanical systems. The former pertains to the coordinates used to parameterize the configuration manifold of the system, while the latter has a distinctive physical manifestation.
In the most common six-axis robotic implementations, the singularity is the biggest movement to avoid. Besides singularities, it is important to understand the reach capabilities and the payload capacity of the robot you are using, as these will define certain speed limitations.
______
______
Section-12
Robotic market and industry:
_
The value chain for robotics:
The core segments for the robotics value chain are hardware components, software components, robot manufacturing, and robotics as a service. Depending on the application, level of sophistication, and reliability requirements, robotics generally involves several levels of control and processing, including onboard hardware and software, and increasingly, cloud processing and the pooling of knowledge from multiple robots. The key players associated with the robotics market include Cognex, Cyberdyne, Estun Automation, FANUC, HollySys, Honda, Intuitive Surgical, iRobot, Keyence, Midea, Nabtesco, Northrop Grumman, Omron, Rockwell Automation, and Teradyne.
_
Robots take on new roles driving market growth:
Today, robot adoption is concentrated in applications where robots carry out repetitive, high-volume production activities. As the cost and complexity of automating tasks with robots go down, it is likely that the kinds of companies already using robots will use even more of them. In the next five to ten years, however, we expect a more fundamental change in the kinds of tasks for which robots become both technically and economically viable, as seen in the figure below:
_
Robotic market:
The robotics market was valued at $45.3 billion in 2020 and is expected to grow at a CAGR of more than 28% during the forecast period 2021-2030. Robotics has been one of the fastest-growing industries, with industrial robots driving the market. Cloud computing and AI enable robots to collaborate and to access huge amounts of data without interruption. Automation is thus key to improving productivity. As countries and companies attempt to recover from the pandemic, the demand for robotics will increase. Societies are already using robots to care for the elderly and to address shortages in the workforce.
The robotics industry has been split into two main areas: industrial robots and service robots. Each can be sub-divided into additional categories, with service robots a particularly fragmented category. AI will allow robots to identify human emotions, and the field of soft robotics is developing robots from materials similar to those found in living organisms. Hence, there is a chance that one day life will imitate art, and robots and people will look alike.
_
Robotics is a diverse industry with many variables. Its future is filled with uncertainty: nobody can predict which way it will develop and what directions will be leading a few years from now. Robotics is also a growing sector of more than 500 companies working on a wide range of products.
According to the International Federation of Robotics data, 3 million industrial robots are operating worldwide – the number has increased by 10% over 2021. The global robotics market is estimated at $55.8 billion in 2021 and is expected to grow to $91.8 billion by 2026 with a 10.5% annual growth rate.
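Growth projections like the ones above follow from the standard compound annual growth rate (CAGR) formula; the small sketch below checks that growing $55.8 billion at roughly 10.5% per year for five years does land near the $91.8 billion figure.

```python
def cagr(start, end, years):
    # Compound annual growth rate implied by growing from `start` to `end`.
    return (end / start) ** (1 / years) - 1

def project(start, rate, years):
    # Value after compounding `start` at annual `rate` for `years` years.
    return start * (1 + rate) ** years

implied = cagr(55.8, 91.8, 5)     # ~0.105, i.e. about 10.5% per year
future = project(55.8, 0.105, 5)  # ~91.9, close to the quoted $91.8 billion
```

The same two-line calculation can sanity-check any of the market forecasts quoted in this section.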
_____
Robotic industry:
Whether the average person realizes it or not, robots are becoming more and more of a part of daily life and industry, whether that’s in surgery, on a website, or a manufacturing floor. Here are the key statistics about the robotics industry in 2022:
_
The role of robotics in the Fourth Industrial Revolution is to provide new ways to integrate different technological systems and humans. Since the Fourth Industrial Revolution is focused on integrating technologies, robots will play a large part in it. Robotics, Artificial Intelligence, Virtual Reality, Augmented Reality, and Unmanned Aerial Vehicles are just a few of the technologies that scientists are using in conjunction with each other, often within a single machine they call a robot. The five major fields of robotics are operator interface, mobility or locomotion, manipulators and effectors, programming, and sensing and perception. Those who work in robotics usually choose one of these fields to specialize in. Since robots are so varied and complex, they need a team of people to design, build and program them. As a result, many jobs are considered a part of the robotics industry. As for countries with the most annual installations of industrial robots, China takes the lead with about 168,400 installed in 2020. Japan, the U.S., and Korea follow with roughly 38,700, 30,800, and 30,500, respectively.
Repetitive jobs are going to be replaced by robots. In some settings, robots are also replacing jobs requiring more reasoning, but this is rarer. Jobs such as making French fries at fast-food restaurants, welding, driving trains, and answering basic customer service queries are relatively easy for robots to take over, leaving the more managerial and other creative, advanced roles open for humans. The robotics industry has grown very big over time, with approximately 3 million industrial robots in use across the globe, and an industry worth $43.8 billion. As more robots are produced and automation becomes a cheaper investment for companies (saving them up to 20% in costs), the size of the industry will continue to grow. By 2025, robots may displace up to 85 million jobs currently held by humans. With that in mind, as long as there’s continued demand for robots, the industry will continue to grow larger. The robotics industry is growing at a relatively fast rate, with a projected CAGR of 11.67% from 2019 through 2026. By 2024, global industrial robot shipments are expected to reach 518,000, up from 384,000 in 2020.
The industry’s growth has also started to recover from 2020, with the growth rate of yearly new robot installations rising from 8% to 13% in 2021. This marks improved, continued growth over the past eight years.
Robotics engineers are in demand. Seeing as these professionals are skilled in designing prototypes, testing machines, and maintaining vital software, the number of jobs available will also grow when the robotics industry grows. The job market for this profession is expected to increase by 6.4% through 2026. While this level of job growth isn’t anything out of the ordinary (average job growth in the U.S. is between 5-8%), it’s still healthy. Overall, the field will need 12,500 new engineers over the next ten years.
_____
Industrial Robot Companies:
For many years, the robotics industry has been led by a set of companies that are often referred to as “The Big 4.” These companies’ robots can be found in thousands of facilities worldwide, and together they command roughly 75% of the market for robotics. Their robots are often immediately recognizable thanks to each company’s distinct branding and product design.
-1. ABB
From the time it pioneered the world’s first all-electric microprocessor-controlled robot and the world’s first industrial paint robot in the late 1960s and early 1970s, ABB remains a technology and market leader in robotics with over 300,000 robots sold to customers all over the world. Today, ABB is still one of the world’s largest industrial robotics companies. You can usually recognize an ABB robot by its white color with distinctive red logo.
Key products: IRB 910SC SCARA, IRB 14000 YuMi, IRB 5500-22 – FlexPainter, IRB 5500-25 – Elevated rail, IRB 6660 for pre-machining, IRB 6660 for press tending.
-2. Fanuc
You can usually recognize a Fanuc robot by its bright yellow color. Covering a diverse range of industries and applications, FANUC Robotics offers more than 100 models of industrial robots that are easy to operate and provide great flexibility. FANUC has never taken its market dominance for granted and has been dynamically working on smarter, more flexible solutions, particularly those that incorporate Artificial Intelligence (AI).
Key products: Fanuc CR Series of Collaborative Robots, Fanuc Robot R2000iC Series, M-20iB/25 Series of Articulated Robots, M-1/2/3 Series of Delta-Robots, SCARA Series.
-3. KUKA
You can usually recognize a KUKA robot by its distinctive orange color. German industrial giant KUKA is one of the world’s largest producers of robots used to manufacture automobiles. KUKA Robotics offers a fully integrated range of automated robotics, control technology, and customized software solutions. Since 2004, automation and robotics have been the company’s primary focus, and non-core areas have been closed or sold. In 2016, KUKA, whose robots already grace many factory floors, was acquired by Midea Group, a Chinese household appliance company, for USD 3.9 billion.
Key Products: KR AGILUS series, KR 30/60 F series, KR QUANTEC F series, QUANTEC for palletizing, AGILUS (Hygiene Machine variant), shelf-mounted robots, press-to-press robots.
-4. Yaskawa
The Motoman range of robots is produced by Yaskawa; you can usually recognize them by their white and blue coloring. This Japanese brand has led the industrial robotics industry since the launch of its first all-electric industrial robot, the Motoman, in 1977. With more than 300,000 Motoman robots, 18 million inverter drives, and 10 million servos installed globally, Yaskawa has successfully commercialized optimum robots for various uses including arc welding, assembly, dispensing, material handling, material removal, material cutting, packaging, and spot welding.
Key Products: VA1400 for arc welding, HP20F for assembly, ES Series for Machine Tending, G Series for pick and pack, MH225 for spot welding.
Other Popular Industrial Robotics Companies:
Although The Big 4 above have a huge place in the robotics market, these other industrial robotics companies could be said to be also leading the industry in their own ways. You can find robots from these companies in many facilities worldwide.
-5. Comau
Part of one of the world’s largest automotive groups and a prominent supplier of industrial robots and robotized processes, Comau Robotics is a market leader. The company has launched a wide range of innovative products, perhaps most importantly some of the largest collaborative robots on the market. Its industrial robots are designed and developed to be integrated into applications where accuracy, speed, repeatability, and flexibility matter most.
Key Products: REBEL-S SCARA, RACER ROBOT, STANDARD ROBOTS NS Series, HOLLOW WRIST ARC.
-6. Epson
When you think of Epson, you might first think of their desktop printers. However, the robotics arm of Epson is a large player in the industry. This pioneering company first entered the North and South American Market in 1984 as the EPSON Factory Automation Group. Originally founded to support automation needs, EPSON quickly became prominent in many of the largest manufacturing sites throughout the world. Over the past three decades, EPSON Robots has been leading the automation industry for small parts assembly products and has introduced several industry firsts, including compact SCARA robots, PC based controls, and much more.
Key Products: G-Series, RS-Series, LS-Series and T-Series SCARA Robots.
-7. Kawasaki
Kawasaki is a Japanese industrial manufacturer probably best known for its motorcycles, engines, and aerospace equipment. With over 160,000 robots installed worldwide, it is also a leading provider of industrial robots and automation systems with a broad product portfolio. Kawasaki Robotics was the first company in Japan to commercialize industrial robots. Since then, the company has developed several robots as a domestic pioneer and has contributed to growth in many industry verticals through automation and labor-saving systems. In 2015, the company began sales of duAro, an advanced, dual-arm SCARA robot that can work alongside humans.
Key Products: duAro Scara Robot, K Series Robots for Painting, Y Series for Pick and Place, B Series for Spot Welding, RA020N for arc welding, M Series for medical and pharmaceuticals.
-8. Stäubli
This is a global mechatronics solution provider with three core activities: Connectors, Robotics, and Textile. Since its founding in 1892, the Stäubli Group has expanded both geographically and technologically. With the acquisition of Unimation, a prominent vendor in the industrial robotics industry, Stäubli continued its dynamic path into the most advanced and innovative industrial sectors. The company has launched a new range of collaborative robots and is investing further in its software business.
Key Products: TS80, TX Series, RX Series, TX2 Series, TP90, CS Series.
-9. Universal Robots
This company is renowned for developing safe, flexible, easy-to-use robotic arms that serve a range of industries, including food and tobacco production, metal and machining, automotive and subcontractors, pharma and chemistry, furniture and equipment, and scientific and research industries. This Danish company develops lightweight industrial robots that streamline and automate repetitive industrial processes. These robots are most commonly used for injection molding, pick-and-place, CNC, quality inspection, packaging and palletizing, assembly, machine tending, and gluing and welding applications.
Key Products: Collaborative Arm UR3, Collaborative UR5, and Collaborative UR10 robot arms.
-10. Omron_Adept
Omron Adept was created in 2015, when the Omron Corporation, an industrial automation company based in Kyoto, Japan, acquired Adept Technology Inc., the largest US-based industrial robotics company. Its intelligent automation products include mobile robots, industrial robots and other automation equipment, applications software, machine vision, and systems. Omron partnered with Techman Robot in 2018, adding a series of successful collaborative robots to its already wide catalog of industrial robots, including mobile robots, SCARAs, and Delta robots.
______
______
Robotics and IP rights:
The robotics innovation system:
Robotics innovation is concentrated in a small number of countries and clusters typically centered around leading universities. Examples include Boston (United States), the Île-de-France (France), Odense (Denmark), Zurich (Switzerland), Bucheon (Republic of Korea), Osaka (Japan) and Shanghai (China). These clusters thrive on the interface between public and private research, with firms commercializing innovations developed partly through long-term research in universities and other public research organizations.
Most robotics-related innovation and company startups are found in high-income countries, with the exception of China, which hosts some of the fastest-growing robotics companies such as DJI (a drone company), Siasun and Estun.
The robotics innovation ecosystem is highly dynamic, research-intensive and collaborative and is becoming increasingly complex. It involves an expanding network of specialists, research institutions and technology-intensive firms, large and small, and brings together know-how from a diverse range of fields to deliver ground-breaking inventions built on the latest developments in materials science, motive power, control systems, sensing and computing.
The collaborative nature of robotics innovation is due in part to the extremely complex challenges presented. Often, companies simply do not have all the required expertise in-house and have to look outside to secure it, for example by establishing joint development agreements with specialized robotics companies.
Industrial robotics is capital-intensive. Research can take years to bear fruit, but university spin-off companies formed around different breakthroughs are driving the sector’s evolution.
Larger, established companies like ABB (Switzerland), Kawasaki Heavy Industries, Yaskawa and Fanuc (Japan) and KUKA (Germany) are also very active in robotics R&D. Large companies active in defense, aerospace and security have also gained expertise in robotics, along with consumer electronics firms like Samsung (Republic of Korea) and Dyson (United Kingdom).
And as robotics becomes more reliant on connectivity and ICT networks, firms like Amazon, Google, Facebook, Infosys, Alibaba and Foxconn are also joining the fray. Many companies in many sectors are beginning to recognize the benefits of robotics, which are increasingly at the heart of business strategies.
_
Robotics innovation and intellectual property:
As more players enter the robotics ecosystem and as innovation focuses on more advanced robotics, companies are increasingly turning to the tools of the IP system to safeguard their interests.
Compared to the standard industrial robot innovation of the past, robotics innovation today involves more actors, more technology fields and many more patent filings. Offensive and defensive IP strategies are becoming more commonplace.
Patent protection can be particularly important in this field, given the capital-intensive nature of R&D prior to commercialization and the need for regulatory approval. It allows companies to recoup their investment and helps them secure a competitive commercial advantage. It is particularly useful in protecting inventions that can easily be reverse engineered.
A solid patent portfolio makes it possible to license and cross-license technologies, and thereby strengthen business relations, generate new revenue streams and, in some cases, help avoid litigation. It can also help small firms attract much-needed investment.
Robotics patenting surged in the 1980s, when widespread factory automation resulted in a quadrupling of patent applications (see figure below). Patent applications surged again in the mid-2000s as more advanced robotics came on stream.
Figure above shows the number of first patent filings worldwide in the robotics space between 1960 and 2012. The graph shows the emergence of the Republic of Korea in the early 2000s and of China more recently.
Automotive and electronics companies remain the largest filers of robotics patents but new actors are emerging. University–industry collaboration remains strong as the stock of patents held by universities and public research organizations offers significant opportunities for commercialization. Extensive cross-fertilization of research remains a feature of the robotics innovation ecosystem, but there is evidence that patenting is supporting the specialization of firms. Such specialization is important in driving the sector’s continued evolution.
Many robotics companies are using patent documents to find out about the latest technological developments, to gain insights about competitors’ strategies and to monitor whether competitors’ patent claims need to be challenged.
_
Trade secrets and robotics:
The technological complexity of robotics systems means that trade secrets are often the first option for companies seeking to protect their innovations.
Next to patents, industrial designs that protect a robot’s appearance – its shape and form – also play an important role in improving the marketability of products and helping firms appropriate the returns on their R&D investments.
Being first to market, a strong after-sales service, reputation and brand have all been critical to the success of past robotics innovation, and remain so today, especially as the industry moves towards developing applications with direct consumer contact. Strong brands are particularly important when selling directly to end-users. That is why most robotics companies trademark their company names and those of their robots.
Copyright, the traditional means of protecting software code, is also relevant to robotics. Under the 1996 WIPO Copyright Treaty, circumventing a technological protection measure to access copyrightable computer code is not permitted. This is of particular relevance to the robotics industry because most companies employ electronic barriers to restrict access to their computer code.
_______
_______
Robots and Economy:
Robots are increasingly being used in every industry and are here to stay, and their use has both positive and negative impacts on businesses and employees. Robots are taking your jobs! They have been encroaching on manufacturing work for decades and are now making inroads into tasks like driving, logistics, and inventory management. While there may be a negative effect on some labor segments, robots and automation increase productivity, lower production costs, and can create new jobs in the tech sector. Robotics and technology are revolutionizing the future of the workforce. With inflation at 7.5% and increased competition in the market, businesses need every advantage they can get to reduce costs. Robots may save businesses in such an inflationary period.
-1. Improved Productivity:
Economic growth can come from increased quality of labor, capital, and productivity. Integrating robots into the workforce automates menial tasks and frees up the working time of an organization’s skilled workforce. This helps businesses optimize their employees’ working hours and enables skilled workers to focus on complex tasks instead. It improves the overall labor productivity of an organization and helps it cut down on high labor costs in inflationary times. Decreased overheads enable businesses to keep their margins afloat during price hikes, protecting them from being driven out of business by mounting losses.
Technological tools facilitate business expansion as they eliminate the need to keep hiring workers for the expansion needs of the business. In inflationary times, companies struggling to maintain their break-even and profit margins can expand or diversify their product lines or operations to generate more revenue without additional labor costs.
-2. GDP Growth:
As the productivity of businesses increases with the integration of robots and automation, the overall GDP of the economy improves as well, because more output is produced with the same resources. A paper by the London School of Economics titled “Robots at Work,” which studied the effects of integrating robots into an economy, found that across 17 countries the use of industrial robots increased annual GDP growth by about 0.36%. This is comparable to the technological boost of the 20th century and the economic growth that resulted from increased productivity.
In times of inflation, countries that have integrated robots into their workforce and primary industries will fare better, even when economic growth is weighed down by increased poverty and unemployment. Businesses that have adopted the technology will be able to keep their prices competitive in international markets and expand their operations and exports, helping the economy improve. With lower overheads due to automation, existing businesses can even target various international markets to maintain and increase their overall market share.
-3. Job Creation
An economy facing high inflation is likely to see rising unemployment. Integrating robots improves productivity, which can create more jobs during inflationary times as the demand for skilled professionals in robotics and AI-related industries increases.
Robots are often cast as villains because they eliminate the need for labor to perform repetitive tasks. However, putting more robots in the workforce also increases demand for high-skilled workers. Automation mostly replaces low-skill jobs such as sorting raw material in warehouses, organizing orders, transporting and stocking, and quality-related tasks. Advanced robots in the workforce also require many hardware engineers, artificial intelligence (AI) experts, and software experts. The growing use of robots increases demand for the businesses that manufacture robots and promotes overall innovation.
______
______
Section-13
Application of robots:
Artificial intelligence and robotics are two technologies that have shown the potential to address and provide solutions to many contemporary issues. The manufacturing sector has used robotics for a long time, but over the past three to four decades robots have spread to other sectors as well, such as laboratory research, earth and space exploration, and transport. The use of robots has lowered production costs and increased productivity, and at the same time has led to the creation of many new jobs in the tech sector along with growth in the economy. Robots are mainly employed where tasks are repetitive and monotonous; with Artificial Intelligence (AI), however, their scope is widening. They are replacing human workers in such tasks and delivering efficient results.
_
Current and potential applications include:
Many real examples fit into more than one of these categories. A deep-ocean exploration robot, for instance, can gather valuable information that is also useful to the military. Robots were once built purely for entertainment, but today they assist humans across many sectors. Human beings remain better suited to multifaceted, imaginative, adaptive work, so the harder thinking jobs are left to people, while robots take over recurring tasks or provide entertainment, making life more convenient.
_____
_____
Industries which use Industrial Robots:
-1. Automotive
The automotive manufacturing industry has long been one of the quickest and largest adopters of industrial robotic technology, and that continues to this day. Robots are used in nearly every part of automotive manufacturing in one way or another, and it remains one of the most highly automated supply chains in the world. Not surprisingly, automotive is the top industry for robotics, accounting for almost 30% of all industrial robot installations. It has been a driving industry for robotics since the first industrial robot, the Unimate, was introduced into General Motors plants back in 1959. In 2018, around 130,000 new robots were installed in the automotive industry. Common robotic applications include assembly, welding, painting, part transfer, logistics, and material removal.
-2. Electrical and Electronics
This is the second top industry using robots, with over 100,000 new robots installed in 2018. Electrical and electronics companies have been increasing their adoption of robots for some years now. Robots are particularly useful in cleanroom environments, which they do not contaminate, and they are most often used for pick-and-place tasks or assembly.
-3. Metal and Machinery
The metal industry is one of the most versatile industries and therefore well suited to robot-based automation, and the data certainly bears this out: it was the third biggest market in 2018, with around 50,000 new industrial robots installed. Robots are used for a whole range of applications in this industry, including welding, painting, and loading/unloading.
-4. Plastic and Chemical Products
At around 20,000 new robots installed in 2018, the plastic and chemicals industry is seeing a lot of industrial robots being used for tasks like material handling, dispensing, assembly, and processing.
-5. Food
Compared to the other industries in this list, the food industry has relatively few industrial robots, with fewer than 20,000 installed in 2018. However, it is certainly a growing market, with applications such as pick and place for raw and processed foods, cutting and slicing, dispensing, and sorting. The Robotic Industries Association commented recently that the food industry is “Robotics Next Frontier.”
-6. Warehousing
According to a recent market report, the warehouse robotics industry is projected to grow at a rate of 11.7% and reach $6,471 million by 2025. With warehouses now operating that require no human workers at all (aside from those who maintain the robots), it is hardly surprising that warehousing robots are on the rise.
-7. Pharmaceutical
The pharmaceutical industry was one of the top industries listed in McKinsey’s Industrial Robot report of 2019 with investment in this industry increasing to allow companies to cut costs, improve quality, and increase productivity.
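The warehouse-robotics growth figure quoted above is a compound annual growth rate. As a rough sketch of how such a projection works (only the 11.7% rate comes from the report; the starting market size and horizon below are hypothetical):

```python
# Compound annual growth: project a market size forward at a fixed rate.
# Only the 11.7% rate is taken from the report cited above; the base
# figure and horizon are illustrative assumptions.

def project_market(base: float, rate: float, years: int) -> float:
    """Return the market size after `years` of compound growth at `rate`."""
    return base * (1 + rate) ** years

# A market growing 11.7% per year roughly doubles in about 6.3 years,
# since log(2) / log(1.117) ~= 6.27.
size_start = 3_300.0  # hypothetical starting size, in $ millions
print(round(project_market(size_start, 0.117, 6), 1))
```

The same one-liner reproduces any of the per-industry installation trends above once a base year and rate are plugged in.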
______
______
Robotic applications in Maintenance and Repair:
Table below provides a summary of these application areas, noting the importance of robotics to the maintenance tasks of inspection, planned maintenance, and disturbance handling.
Summary of robotics applications in maintenance and repair:
| Application Area | Inspection | Planned Maintenance | Disturbance Handling |
| --- | --- | --- | --- |
| Nuclear Industry | Growing area, especially as new facility designs incorporate a remote maintenance philosophy. | Well-established field, with several decades of successful robotic applications. | Much current activity related to decontamination, decommissioning, and dismantling. |
| Highways | Relatively new area with few current prototypes, except as packaged with crack sealing and pothole repair systems. | Relatively new area, with quickly growing interest and a huge potential impact. Several ongoing efforts should result in a number of new robot prototypes in the next 5 years. | Of significant interest, particularly for highway integrity management. A number of successful prototype systems are gradually making their way into routine use. Several new efforts underway. |
| Railways | Few current systems, and little ongoing activity. | Most common area of railway robotics, but with little new activity. | Little current use. |
| Power Line Maintenance | Little current use. | Interest is increasing, especially for robotic techniques that work on live power lines. | Greatest area of current use, with much potential growth due to technology advances and the need to remove humans from highly dangerous tasks. |
| Aircraft Servicing | Steadily growing area, due to recent advances in automated inspection technologies. | Steadily growing area, especially for automated stripping and painting. | Little current use. |
| Underwater Facilities | Steady progress over the last two decades, with continued advances. | Of increasing importance, with several new prototype systems under development. | Of increasing importance, with several new prototype systems under development. |
| Coke Ovens | Little current use. | Little current use. | Fair amount of activity in the late 1980s; relatively little new work in this area. |
______
______
Educational robotics:
It is a sub-discipline of robotics used in education to let students learn about robotics and interactive programming, with the complexity adapted to different age groups. Its benefits are numerous, academically as well as in social, personal, and emotional learning. In today's fast-changing and ever-expanding world of technology, robotics is considered an important skill and is being included in school curriculums to prepare students for life after school and for the competitive workforce of the future.
Educational robotics — or pedagogical robotics — is a discipline designed to introduce students to Robotics and Programming interactively from a very early age. In the case of infant and primary education, educational robotics provides students with everything they need to easily build and program a robot capable of performing various tasks. Through play, educational robots help children develop one of the basic cognitive skills of mathematical thinking at an early age: computational thinking. That is, they help develop the mental process we use to solve problems of various kinds through an orderly sequence of actions. There are also more advanced — and more expensive — robots for secondary and higher education. In any case, the complexity of the discipline is always adapted to the students’ age. Educational robotics is included within the so-called STEM (Science, Technology, Engineering and Mathematics) education, a teaching model designed to teach science, mathematics and technology together and one in which practice takes precedence over theory.
_
Figure above shows skills enhanced through the use of educational robots.
_
For students, the major benefits of educational robotics are as follows:
_
Educational robots can be divided into four categories based on their physical design, coding method and educational method. You can use these categories to determine the type of robot that will work best for your classroom.
_
Robotic Tools:
There are many robotic educational tools available, covering the span of K-12 grades. Some of them are described below:
This tool is designed to teach coding to very young children (starting from age three) and is used at the Pre-K and Kindergarten levels. The coding is very basic, keeping the age of the children in mind, and makes use of color-coded Cubetto logic blocks. The children control the robot's movements by placing the Cubetto blocks on the control board. Educators believe this activity lays the foundation for computational thinking and coding proficiency.
Similar to the Cubetto, the Bee-Bot can be used by students without a screen interface, with the option of learning the Bee-Bot app once they are ready for it. The robot itself is easy to use and has a friendly appearance for young students. Students build programs by entering combinations of commands through the directional buttons on the Bee-Bot's back. The process helps them learn about sequencing, estimation, problem solving and counting.
The Dash Robot, developed by Wonder Workshop, engages children with its playful movements and expressions. Students can program the robot by using block coding on a connected device, with multiple apps used for coding, and many compatible devices.
This robot is programmed in a similar way to the Dash Robot and is recommended for students in Grades 1 to 4. It uses graphical programming, starting with block-based icons and progressing to more advanced coding; more advanced students can even program it in Python or JavaScript.
This robot can be used at different grade levels, from early to elementary education. Initially students are introduced to simple barcode programming without a screen. Subsequently, more advanced programming can be done through a connected device in multiple languages. There is also an engineering option of using the LEGO surfaces on the Edison robot that allows students to build their own structures or connect multiple robots together.
Students can program this robot without a screen using color codes, or on the web using the OzoBlockly platform. The Evo robot has a number of sensors and introduces students to programming gradually. Students can complete a variety of projects, and teachers have access to helpful supporting content. The Ozobot Classroom platform lets teachers assign robots to students, manage their assignments and projects, and gauge their coding progress.
The Finch 2.0 is a comprehensive K-12 educational robotics solution for students at all levels. It also lets students add artistic or design elements to their coding work. It is compatible with different devices, depending on the age of the student and the programming environment used.
This is the most advanced educational robotics solution, with very powerful sensors and the ability to execute highly accurate movements. It is also useful for ‘special education’, allowing educators to reach out to students who otherwise tend to shy away.
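Several of the tools above, the Cubetto and Bee-Bot in particular, build programs as ordered sequences of directional commands. As a minimal, hypothetical sketch of that idea (not any vendor's actual API), a simulator for such a robot on a grid might look like this:

```python
# Minimal sketch of Bee-Bot/Cubetto-style command sequencing: the robot
# starts at the grid origin facing "north" and executes a queued list of
# directional commands, just as a child builds a program from the buttons.
# Command names and the grid model are illustrative assumptions.

HEADINGS = ["N", "E", "S", "W"]                      # clockwise order
MOVES = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}

def run_program(commands):
    """Execute FORWARD/LEFT/RIGHT commands; return final (x, y, heading)."""
    x, y, h = 0, 0, 0                                # h indexes HEADINGS
    for cmd in commands:
        if cmd == "FORWARD":
            dx, dy = MOVES[HEADINGS[h]]
            x, y = x + dx, y + dy
        elif cmd == "RIGHT":
            h = (h + 1) % 4                          # quarter turn clockwise
        elif cmd == "LEFT":
            h = (h - 1) % 4                          # quarter turn anticlockwise
    return x, y, HEADINGS[h]

print(run_program(["FORWARD", "FORWARD", "RIGHT", "FORWARD"]))  # (1, 2, 'E')
```

Tracing a program like this by hand is exactly the sequencing and estimation exercise these robots are designed to teach.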
_____
Since education is one of the leading sectors of human development, the use of robotics can benefit both teachers and learners: drawing on vast stores of information, robots can present material on almost any topic. Educational robots pack modern computing power and innovative engineering, and can be controlled not only via apps but also by voice and gestures. They can help deliver lessons in the STEM concepts crucial to modern education, while students deepen their own understanding of robotics through constant experimentation. Education is one of the areas of society on which AI can make the greatest impact. AI tutors can help students with their learning processes and give teachers ways to customize each student's learning experience. In the future, AI tutors and robots could accompany students throughout their school years, getting to know them well enough to offer inspiration, motivation, and personalized learning.
_____
_____
Healthcare applications of medical robots:
Robots in the medical field are transforming how surgeries are performed, streamlining supply delivery and disinfection, and freeing up time for providers to engage with patients. Emerging in the 1980s, the first robots in the medical field offered surgical assistance via robotic arm technologies. Over the years, artificial intelligence (AI)–enabled computer vision and data analytics have transformed health robotics, expanding capabilities into many other areas of healthcare. Robots are now used not only in the operating room, but also in clinical settings to support health workers and enhance patient care. During the COVID-19 pandemic, hospitals and clinics began deploying robots for a much wider range of tasks to help reduce exposure to pathogens. It’s become clear that the operational efficiencies and risk reduction provided by health robotics offer value in many areas. For example, robots can clean and prep patient rooms independently, helping limit person-to-person contact in infectious disease wards. Robots with AI-enabled medicine identifier software reduce the time it takes to identify, match, and distribute medicine to patients in hospitals. Robotics in healthcare drive innovation with AI-assisted surgery, task automation, and real-time patient data analytics. As technologies evolve, robots will function more autonomously, eventually performing certain tasks entirely on their own. As a result, doctors, nurses, and other healthcare workers can focus on providing more empathy in patient care. Health robotics help improve patient care and outcomes while increasing operational efficiencies. But no matter how impressive, robotics in healthcare is still a system controlled by humans.
_
Today, medical robots are best known for their roles in surgery, specifically the use of robots, computers and software to accurately manipulate surgical instruments through one or more small incisions for various surgical procedures. A 3-D, high-definition, magnified view of the surgical field enables the surgeon to operate with high precision and control. One system, the da Vinci, approved by the FDA in 2000, is said to have been used in over 6 million surgeries worldwide. Patient benefits from robot-assisted surgery are largely those associated with the laparoscopic approach: smaller incisions, reduced blood loss, and faster recovery. Long-term surgical outcomes don't appear to differ from those of traditional surgery, and the system occasionally malfunctions. Surgeons benefit from improved ergonomics and dexterity compared with traditional laparoscopy. The major drawbacks are the high cost and the need to train surgeons and the surgical team; the base price of a da Vinci system is upwards of $1 million.
Various companies are developing surgical robots designed for a single specific procedure such as knee or hip replacement. Other companies are seeking to build systems that incorporate artificial intelligence to assist surgical decision-making. In neurosurgery, Modus V is an automated robotic arm and digital microscope built by a Toronto company and based on the space shuttle Canadarm technology. The arm tracks surgical instruments, automatically moves to the appropriate area in which the surgeon is working, and projects a magnified, high resolution image on a screen.
Prostheses are benefitting considerably from new structures and control systems. Robotic limbs with bionic skin and neural system are allowing a remarkable degree of user control. Robotic exoskeletons (orthoses) are finding use in rehabilitation, assisting paralyzed people to walk and to correct for malformations. Robots are also finding a place in keeping hospitals clean as hospital rooms are being disinfected with the use of high intensity UV light applied by a robot.
Traditional endoscopy may soon be replaced by small robots that can be driven to specific locations to carry out tasks such as taking a biopsy or cauterizing a bleeding blood vessel. Microrobots may be employed to travel through blood vessels and deliver therapy such as radiation or medication to a specific site. Robotic endoscopic capsules can be swallowed to patrol the digestive system, gather information, and send diagnostic data back to the operator. Then there are robotic nurses designed to assist or relieve overworked nurses with tasks such as digital entries, monitoring patients, drawing blood, and moving carts. A really exciting area of medical robotics is the replacement of antibiotics: the concept is that nanorobots carrying receptors to which bacteria adhere could attract and capture bacteria in the blood stream or at sites of local infection.
_
Robotics in veterinary medicine:
Robots are currently being used in simulations for training veterinarians and can be used for tasks such as lifting animals. Until robot-assisted surgical equipment becomes far less expensive and proves to add value to current laparoscopic procedures it seems unlikely to become incorporated into veterinary practice. However, robot assistants, robotic prostheses, hospital disinfectant machines, and microrobots that conduct endoscopic examinations or treat patients are distinct possibilities for the veterinary practice of the future. Indeed, it may not be long before there are robotics veterinarians who provide care for animals with prosthetic limbs or implanted chips or for robotic animals that are used in a variety of settings.
_
Classification of medical robotics:
Medical and healthcare robots are designed for different environments and tasks. Those involving the development of wheelchairs and rehabilitation manipulators for assisting disabled and elderly persons can be categorized as macro-robotics, while those involving the development of surgical tools, such as minimally invasive surgery, image-guided surgery, computer-integrated advanced orthopaedics, and stereotactic guidance, can be categorized as micro-robotics. The last category is bio-robotics, which involves modelling and simulating biological systems to provide better knowledge of the human body.
Figure below shows classification of robotics according to their application in the medical field.
_
Medical robots fall into several categories: surgical assistance, modular, service, social, mobile, and autonomous.
-1. Surgical-Assistance Robots
As motion control technologies have advanced, surgical-assistance robots have become more precise. These robots help surgeons perform complex micro-procedures without making large incisions. As surgical robotics continue to evolve, AI-enabled robots will eventually use computer vision to navigate to specific areas of the body while avoiding nerves and other obstacles. Some surgical robots may even be able to complete tasks autonomously, allowing surgeons to oversee procedures from a console.
Surgeries performed with robotic assistance fall into two main categories:
The ability to share a video feed from the operating room to other locations—near or far—allows surgeons to benefit from consultations with other specialists leading their field. As a result, patients have the best surgeons involved in their procedures.
The field of surgical robotics is evolving to make greater use of AI. Computer vision enables surgical robots to differentiate between types of tissue within their field of view. For example, surgical robots now have the ability to help surgeons avoid nerves and muscles during procedures. High-definition 3D computer vision can provide surgeons with detailed information and enhanced performance during procedures. Eventually, robots will be able to take over small sub-procedures, such as suturing or other defined tasks under the watchful gaze of the surgeon.
Robotics plays a key role in surgeon training as well. The Mimic Simulation Platform, for example, uses AI and virtual reality to provide surgical robotics training to new surgeons. Within the virtual environment, surgeons can practice procedures and hone skills using robotics controls.
-2. Modular Robots
Modular robots enhance other systems and can be configured to perform multiple functions. In healthcare, these include therapeutic exoskeleton robots and prosthetic robotic arms and legs. Therapeutic robots can help with rehabilitation after strokes, paralysis, traumatic brain injuries, or multiple sclerosis. These robots, equipped with AI and depth cameras, can monitor a patient’s form as they go through prescribed exercises, measuring degrees of motion in different positions and tracking progress more precisely than the human eye. They can also interact with patients to provide coaching and encouragement.
-3. Service Robots
Service robots relieve the daily burden on healthcare workers by handling routine logistical tasks. Many of these robots function autonomously and can send a report when they complete a task. These robots set up patient rooms, track supplies and file purchase orders, restock medical supply cabinets, and transport bed linens to and from laundry facilities. Having some routine tasks performed by service robots gives health workers more time to focus on immediate patient needs.
-4. Social Robots
Social robots interact directly with humans. These “friendly” robots can be used in long-term care environments to provide social interaction and monitoring. They may encourage patients to comply with treatment regimens or provide cognitive engagement, keeping patients alert and positive. They also can be used to offer direction to visitors and patients inside the hospital environment. In general, social robots help reduce caregiver workloads and improve patients’ emotional well-being.
-5. Mobile Robots
Mobile robots move around hospitals and clinics following a wire or predefined tracks. They’re used for a wide range of purposes—disinfecting rooms, helping transport patients, or moving heavy machinery. Cleaning and disinfection mobile robots may use ultraviolet (UV) light, hydrogen peroxide vapors, or air filtration to help reduce infection and sanitize reachable places in a uniform way.
-6. Autonomous Robots
Autonomous robots with a robust range of depth cameras can self-navigate to patients in exam or hospital rooms, allowing clinicians to interact from afar. Robots controlled by a remote specialist or other worker can also accompany doctors as they make hospital rounds, allowing the specialist to contribute on-screen consultation regarding patient diagnostics and care. These robots can keep track of their own batteries and make their way back to charging stations when necessary. Some autonomous robots perform cleaning and disinfection, navigating through infectious disease wards, operating rooms, laboratories, and public hospital spaces. One autonomous robot prototype developed by the startup Akara is being tested for disinfecting contaminated surfaces using UV light. Its goal is to help hospitals sanitize rooms and equipment, aiding in the fight against COVID-19. The prototype uses an Intel® Movidius™ Myriad™ X VPU to navigate safely around people as it works. A vision processing unit (VPU) is an emerging class of microprocessor; it is a specific type of AI accelerator, designed to accelerate machine vision tasks.
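The battery-aware behavior described above, working through tasks but returning to the charging station when power runs low, can be sketched as a tiny decision rule. All names and thresholds here are illustrative assumptions, not any vendor's API:

```python
# Hypothetical sketch of an autonomous hospital robot's top-level behavior:
# it works through pending tasks but diverts to its charging station when
# the battery drops below a reserve threshold. Threshold and action names
# are illustrative only.

LOW_BATTERY = 20.0   # percent at which the robot heads back to charge

def next_action(battery_pct: float, tasks_pending: int) -> str:
    """Very small decision rule for what the robot should do next."""
    if battery_pct <= LOW_BATTERY:
        return "return_to_charger"   # safety first: keep enough charge to dock
    if tasks_pending > 0:
        return "do_next_task"
    return "idle"

print(next_action(15.0, 3))   # return_to_charger
print(next_action(80.0, 3))   # do_next_task
print(next_action(80.0, 0))   # idle
```

Real systems layer navigation, obstacle avoidance and task scheduling on top, but the charge-reserve check is the piece that lets these robots "keep track of their own batteries."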
_______
Applications of medical robotics:
Implementing robots into an organisation’s business model provides preciseness of completing these tasks, helps reduce the workload of healthcare workers and gives them more time to spend with patients and focus on other important activities. Using robotics in healthcare can also help organisations better manage a shortage of healthcare professionals including physicians, nurses and allied health professionals; the ability to reduce the cost of care; and enabling enhanced forms of therapy and rehabilitation using technology. Healthcare robots are transforming healthcare across the globe, from surgery to rehabilitation, from radiation treatment to infection control, and from pharmacist to therapist. Some of the fields where robotics has already been used are mentioned below.
Surgical robots:
One of the fastest-growing fields of robotics in healthcare is surgery. Surgical robots, like the da Vinci Surgical System, have been used in many types of surgery, from head and neck to urology. These robots have greater reach and flexibility and can make more precise incisions to access a specific area, giving physicians greater control over the procedure. Some robots can offer more accurate bone cuts with a minimized amount of ablated bone and soft-tissue damage, which promotes faster healing. Though these robots provide greater accuracy, less damage and faster access, they are not meant to replace surgeons: the surgeon remains in full control of the system at all times. Robots are used to augment a professional's skills, helping them improve efficiency and decrease their workload. Hair transplantation is another area where robots can excel thanks to their precision and speed.
Pharmacy robots:
The use of robotics in pharmacies has become increasingly popular in the region. Al Jalila Children’s Specialty Hospital, envisioned as one of the best pediatric specialty hospitals, recently opened to the public and has become the first hospital in Dubai to use robotics in its outpatient pharmacy. It is now working on automating its inpatient pharmacy workflows by taking advantage of pharmacy robotics.
Pharmacists are burdened with many repetitive tasks from the time a drug is prescribed until it is dispensed. These tasks must be done meticulously and require high attention to detail to avoid medication errors. Robots can read information sent from hospital information systems and update the dispensing status of prescribed drugs back to the system. A robotic arm can retrieve the appropriate vial or packet, collect the medication and label it. In addition to scanning bar codes to verify medication, these robots can also package, store and dispense filled prescriptions. Another area where pharmacists can use robotics is the preparation of intravenous (IV) solutions, where one or more medications are added to a diluent, such as the saline solution in an IV bag.
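The barcode verification step described above can be sketched as a simple check: the robot dispenses only when the scanned code matches the order from the hospital information system, and otherwise holds the item for human review. Field names and codes here are hypothetical:

```python
# Illustrative sketch of robotic dispensing verification: compare the
# barcode scanned on the picked item against the one expected by the
# hospital information system order. All field names, drug names and
# codes below are hypothetical.

def verify_dispense(order: dict, scanned_barcode: str) -> str:
    """Return the new dispensing status for one prescription order."""
    if scanned_barcode == order["expected_barcode"]:
        return "dispensed"
    return "held_for_pharmacist_review"   # on any mismatch, never dispense

order = {"drug": "amoxicillin 250 mg", "expected_barcode": "NDC-0001-1234"}
print(verify_dispense(order, "NDC-0001-1234"))  # dispensed
print(verify_dispense(order, "NDC-9999-0000"))  # held_for_pharmacist_review
```

The point of automating this check is exactly the meticulousness the text mentions: the robot applies the same comparison to every vial, every time.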
Robotics in rehabilitation medicine:
Wearable robotic structures, such as exoskeletons, can help humans regain a range of motion. Exoskeletons can be used in rehabilitation therapy procedures like gait training to help patients with paralysis walk again after a stroke, traumatic brain injury or spinal cord injury. Soft robotic gloves are another example of a wearable robot, designed to help individuals with chronic neuromuscular or musculoskeletal disorders who are unable to hold objects. Robot-assisted rehabilitation therapies help not only the patient recover faster but also the care providers, by taking over heavy-duty tasks like carrying an elderly patient during rehabilitation care. Rehabilitative robotic arms are available for victims of stroke and other neurological disorders, helping them perform rehabilitative exercises while offering a 3-D video-gaming experience. Patient-specific parameters such as the force used and range of motion are fed back from these machines, helping customize treatment based on individual progress. Imagine how beneficial it would be if these robots could predict the likelihood of falls by measuring a patient's gait and pace length.
The field of rehabilitation robotics is ever expanding, with new research leading to new devices and technologies each year. The devices analyzed here show the direction in which the field is heading. Several devices were evaluated, including lower-extremity exoskeletons like BLEEX and LOPES; bionic limbs such as the I-Limb, Shadow Hand and Smart Hand; bionic arms like the Luke Arm and Proto 1 and Proto 2; and the full-body exoskeleton Hybrid Assistive Limb. After evaluating all the devices, certain aspects emerged as key components of a device's success. Devices such as the I-Limb show great potential for being accessible to everyone, with good results in both cost and maintainability: at $18,000, the I-Limb is relatively inexpensive compared to the other devices.
Other devices, like the Smart Hand and the Proto arms, incorporate more of the human element through sensory feedback, allowing wearers to actually feel what they are interacting with. With sensory feedback, people can receive signals from nerves that were once cut off. Sensory feedback also helps wearers accept their device: one of the most alienating aspects of getting an artificial arm is accepting it as a permanent limb, and sensory feedback helps break the barrier between man and machine. The HAL-5 and the Shadow Hand are composed of even more complex subsystems, such as EMGs and PID controllers, in order to simulate human-like performance more closely. These devices can replicate human performance almost perfectly. The HAL-5 can help stroke victims regain much of the control they lost, while the Shadow Hand uses PID controllers to mimic a human hand in every action, including how the hand moves and how much pressure it can exert on an object.
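The PID controllers mentioned above compute a command from the error between a target and a measurement, plus that error's integral and derivative. A minimal discrete PID loop, with illustrative gains and a toy joint model rather than the Shadow Hand's actual parameters, might look like this:

```python
# Minimal discrete PID controller of the kind the text says these hands use
# to track a target joint angle. Gains, time step and the first-order joint
# model are illustrative assumptions, not any real device's parameters.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        """One control step: return the actuation command."""
        error = setpoint - measured
        self.integral += error * self.dt                       # I term state
        derivative = (error - self.prev_error) / self.dt       # D term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy joint (command = angular velocity) toward a 30-degree setpoint.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
angle = 0.0
for _ in range(3000):                      # simulate 30 seconds
    angle += pid.update(30.0, angle) * 0.01
print(round(angle, 1))
```

The proportional term reacts to the current error, the integral removes steady-state offset, and the derivative damps overshoot; tuning those three gains against the real actuator is where the engineering effort goes.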
Robotic telemedicine:
The shortage of healthcare professionals, coupled with the unavailability of specialised care at remote places, are the main drivers for the usage of robots in telemedicine. Human-sized telerobots are designed to facilitate patient monitoring, communication and timely and specialised patient care remotely. Patients in remote areas or who are not able to travel will have access to high-quality emergency consultations for stroke, cardiovascular and burn services when they need them. Robots enable consultants to remotely log into robots, review patient investigation and examination data, communicate with the patient and other healthcare workers and provide consultations. These robots can even alert care provider teams based on the information read by the machine during patient examination.
Telepresence Surgery:
Telepresence surgery and robotic telementoring are two revolutionary applications achieved by linking a robot to a telecommunication system, such as SOCRATES (Computer Motion). In telerobotic procedures, the surgeon operates from a console that may be thousands of miles away from the slave robotic arm mounted on the patient; the surgeon's commands are relayed to the slave manipulator via fiber-optic cables. The first major transatlantic surgery was a telerobotic cholecystectomy performed by surgeons in New York, NY, on a patient in Strasbourg, France, in 2001. Since then, many telerobotic operations have been performed. Telepresence surgery allows surgeons to operate wherever their skills are needed without being in direct contact with the patient. Although this virtual surgery has many implications, good and bad, one touted as potentially beneficial is the delivery of surgical care in medically underserved areas. However, with a purchase cost around $1 million, a surgical robot is too expensive for the places where it is most needed; in Africa, for example, the average annual per capita healthcare expenditure is around $6. When finances are not limiting, robotic surgery offers the potential to deliver surgical care to patients who have no direct access to a surgeon. The National Aeronautics and Space Administration (NASA) is exploring the use of surgical robots for emergency surgery on astronauts, using an undersea habitat to simulate conditions in space, in a project called NEEMO 7. The Pentagon is investing $12 million in a project to develop a “trauma pod” surgical robot to operate on soldiers wounded far from home. A “concept video” extrapolating how such systems could evacuate wounded soldiers under enemy fire and then operate on them is available online.
In telementoring, an expert surgeon guides another surgeon operating miles away; both surgeons “share” the view of the surgical field and control of the robotic system, and communicate via microphones. Telementoring can potentially be used by expert surgeons to teach surgical skills to junior colleagues around the world.
Robotics in infection control:
How about having a robot disinfect any place within a healthcare facility? This is what disinfection robots like the Xenex Robot offer. These robots can destroy deadly microorganisms using special UV disinfection techniques. The rate of hospital-acquired infections can be effectively reduced and managed using this technology. It is faster and more effective than conventional disinfection methods because it uses high-intensity, high-energy UV light to disinfect.
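The physics behind these robots reduces to a simple relationship: the germicidal dose delivered to a surface is irradiance multiplied by exposure time. The sketch below illustrates that calculation with generic, assumed numbers; it is not based on the specifications of the Xenex system or any other product:

```python
# Generic UV-C dose arithmetic behind disinfection robots.
# Dose (mJ/cm^2) = irradiance (mW/cm^2) x exposure time (s).
# The numeric values here are illustrative assumptions only.

def uv_dose_mj_per_cm2(irradiance_mw_cm2: float, seconds: float) -> float:
    """Total UV-C dose delivered to a surface."""
    return irradiance_mw_cm2 * seconds

def time_for_dose(target_mj_cm2: float, irradiance_mw_cm2: float) -> float:
    """Exposure time needed to reach a target dose at a given irradiance."""
    return target_mj_cm2 / irradiance_mw_cm2

# Example: seconds of exposure to reach a 10 mJ/cm^2 dose at 0.5 mW/cm^2
print(time_for_dose(10, 0.5))
```

Because irradiance falls off with distance from the lamp, robots must position themselves (or repeat cycles) so that every surface in the room accumulates the required dose.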
Mobile logistics robots:
In a typical hospital setting, many items must be transported daily: thousands of medication orders, meal orders, linens and pounds of trash. This sends healthcare workers back and forth across the hospital many times a day. Autonomous mobile transport robots can take over many of these tasks, helping to reduce wait times and staff workload. These robots navigate freely around the hospital using sensors. There are even robots capable of carrying patients, which is highly beneficial for elderly patient care.
Therapeutic massage cobots:
Figure above shows a life-sized massage robot named Alex. A Californian startup called Massage Robotics has designed a robot that takes on the role of a physiotherapist. Using two robotic arms, it gives users a full-body massage with round end effectors that roll over the skin.
Other applications:
Robotics has been applied to phlebotomy, where it can help collect blood specimens and label them accurately, saving time and reducing nurses’ workload.
Another field in which robotics has tremendous opportunity is elderly care. Humanoid social robots designed for elderly care can speak, smile and remind the elderly to take their medications and follow their doctor’s instructions. As the technology advances, we can expect more features that help with everyday activities, such as fitness coaching.
Dentistry is another area where robotic technology has saved time, shortening the procedure for making crowns. It also has potential applications in tooth extraction.
Nanobots are another future technology. They are tiny robots designed to swim through the bloodstream, cross the blood–brain barrier and reach the target site. Nanobots could be used to treat complex diseases like cancer, Type 1 diabetes or infections.
______
Benefits of Robotics in Healthcare:
Health robotics enables a high level of patient care, efficient processes in clinical settings, and a safe environment for both patients and health workers. A major advantage of medical robots is that robot-assisted surgeries are smoother and less error-prone, and their success rates are higher. Robots also work precisely within the time and task parameters assigned to them, which is another key advantage. Other benefits include better patient monitoring, consistent performance, reduced risk of infection, less wasted time, and many more.
-1. High-Quality Patient Care
Medical robots support minimally invasive procedures, customized and frequent monitoring for patients with chronic diseases, intelligent therapeutics, and social engagement for elderly patients. In addition, as robots alleviate workloads, nurses and other caregivers can offer patients more empathy and human interaction, which can promote long-term well-being.
-2. Operational Efficiencies
Service robots streamline routine tasks, reduce the physical demands on human workers, and ensure more consistent processes. These robots can keep track of inventory and place timely orders, helping make sure supplies, equipment, and medication are where they are needed. Mobile cleaning and disinfection robots allow hospital rooms to be sanitized and readied for incoming patients quickly.
-3. Safe Work Environment
Service robots help keep healthcare workers safe by transporting supplies and linens in hospitals where pathogen exposure is a risk. Cleaning and disinfection robots limit pathogen exposure while helping reduce hospital acquired infections (HAIs)—and hundreds of healthcare facilities are already using them. Social robots also help with heavy lifting, such as moving beds or patients, which reduces physical strain on healthcare workers.
-4. Improving accuracy
Robotic systems don’t have feelings, don’t get tired, and never suffer a lapse of attention. If this sounds like the perfect surgeon, it is also the reasoning behind several robots already used in top hospitals around the world. Sometimes called Waldo surgeons, these systems bridge the gap between human and machine, performing tasks with excellent precision, increased strength and no tremor of the blade. As long as the software is correctly set up for the procedure at hand, the human surgeon takes a secondary, supervising role. Excellent precision also comes in the form of targeted micro-robots, which travel precisely where they are needed to deploy drugs locally or even perform micro-surgery, such as unclogging blood vessels.
-5. Precise diagnosis
The real power of AI, claim InData Labs experts, lies in detecting patterns that describe various conditions by studying healthcare records and other data. A machine can scan thousands of cases and look for correlations between hundreds of variables, some of which are not even listed in the current medical literature. Tests so far have shown that robotic systems can rival the best doctors and even surpass them in some areas. For example, an endoscopic system from Japan detects colon cancer in real time with 86% accuracy, while IBM Watson has reportedly reached the 99% mark in cancer diagnosis.
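Headline figures such as “86% accurate” come from comparing a system’s calls against ground-truth labels. The sketch below shows how accuracy, sensitivity and specificity are derived from a confusion matrix, using small synthetic example data (not real study results):

```python
# How diagnostic accuracy figures are computed from predictions vs. labels.
# The pred/actual lists below are synthetic illustration data.

def confusion_metrics(predicted, actual):
    tp = sum(1 for p, a in zip(predicted, actual) if p and a)          # true positives
    tn = sum(1 for p, a in zip(predicted, actual) if not p and not a)  # true negatives
    fp = sum(1 for p, a in zip(predicted, actual) if p and not a)      # false alarms
    fn = sum(1 for p, a in zip(predicted, actual) if not p and a)      # missed cases
    return {
        "accuracy": (tp + tn) / len(actual),
        "sensitivity": tp / (tp + fn),   # a missed cancer is an fn: the costly error
        "specificity": tn / (tn + fp),
    }

pred   = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]  # 1 = system flags disease
actual = [1, 1, 0, 0, 0, 0, 1, 1, 0, 0]  # 1 = disease truly present
print(confusion_metrics(pred, actual))
```

Note that a single accuracy number can hide the clinically important distinction between false alarms and missed cases, which is why sensitivity and specificity are usually reported alongside it.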
-6. Remote treatment
The first idea of using a robot for remote medical treatment came from DARPA in the 1990s, but communication networks at the time could not offer the support necessary to treat soldiers on the battlefield. Current 4G and upcoming 5G standards have made this a problem of the past. DARPA continues to fund these efforts, yet so far robotic surgery still requires human assistants for hygiene and other tasks, which complicates matters and undermines economic viability. More recently, the U.S. Department of Defense funded research at Carnegie Mellon University and the University of Pittsburgh to create an autonomous robotic trauma care system for treating soldiers injured in remote locations.
One way AI, together with some AR capabilities, can help surgeons is by creating a real-time, customized overlay during the surgery, highlighting blood vessels and other sensitive areas. If a robotic arm is used, the knowledge library can suggest various tools to be used based on current best practices.
Another type of remote healthcare robot is a simple bot-pill that performs an endoscopy far more comfortably than previous options. This ‘magical pill’ sends pictures of your intestines as it travels through them, and is then eliminated naturally.
-7. Augmenting human abilities
Some medical robots assist patients in addition to medical staff. For example, exoskeleton robots can help paralyzed patients walk again and become independent of caretakers. Another application is the smart prosthesis: bionic limbs with sensors that can be more reactive and accurate than the original body parts, with the possibility of covering them with bionic skin and connecting them to the person’s muscles.
-8. Supporting mental health and daily tasks
Service robots can perform human functions such as keeping sick or elderly patients company. Conversational and companion robots can help these patients stay positive, remind them to take their medicine, and perform simple routine checks such as temperature, blood pressure and blood sugar. They act almost like personal assistants, and some even come with built-in personality and sentiment-analysis capabilities, which are especially helpful for depressed patients.
-9. Auxiliary robots
There is much work in a hospital, and doctors are not the only ones who could use a helping hand. Nurses and hospital personnel can benefit from robots such as Moxi by Diligent Robotics, which takes care of restocking, fetching items and cleaning so that nurses can spend more time with patients and offer a human touch while leaving the grind to the machine. Another excellent auxiliary robot is the UV-light disinfection robot, which enters a hospital room and does not leave until it is germ-free.
_____
Disadvantages of Medical Robots:
Every coin has two sides. While the benefits of employing robots in healthcare are innumerable, there remains scope for human error and mechanical failure, and a single mechanical malfunction can cost lives. In the case of surgical robots, small risks of infection and bleeding cannot be neglected. The major disadvantage of medical robots is cost: the use of surgical robots is limited to developed countries, research centers and advanced hospitals, and for many patients robotic surgery is simply unaffordable. Healthcare providers must also invest considerable money and time to train the workforce to handle robots, and lifetime maintenance costs are another problematic factor. Finally, robots in healthcare may displace workers and contribute to unemployment.
Malpractice issues:
Probably the most controversial question regarding the use of intelligent medical robots is the risk of malpractice. As long as the technology is just a tool coordinated by the doctor, the doctor carries the risk. The situation changes when an AI system is advanced enough to make its own decisions without human confirmation. If one of those decisions leads to a failed treatment, who is responsible? The doctor who did not stop the machine in time? The programmer who did not foresee that possibility? Due to the novelty of the problem, there is no clear answer yet. As AI in healthcare becomes more regulated, there will be more precise ways of dealing with these questions.
Alert fatigue:
Whether an automated system is monitoring the status of a nuclear power plant, a commercial jetliner, or your washing machine, perhaps the most challenging decisions revolve around what to do with alerts. On an average day at UCSF Medical Center, doctors prescribe about 12,000 medication doses, and order thousands more x-rays and lab tests. How should the doctor be informed if the computer thinks there is — or might be — a problem? In tech-driven medicine, alerts are so common that doctors and pharmacists learn to ignore them — at the patient’s risk.
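The alert-fatigue problem above is fundamentally a thresholding decision. The toy sketch below, with entirely made-up alert categories and counts, shows why firing on everything buries the few alerts that deserve a hard stop:

```python
# Toy illustration of alert thresholds. Categories, severities, and counts
# are invented for the example, not taken from any real hospital system.

def fired(alerts, min_severity):
    """Keep only alerts at or above a severity threshold (1=low .. 5=critical)."""
    return [a for a in alerts if a["severity"] >= min_severity]

# A hypothetical day's worth of medication-order alerts
day = (
    [{"kind": "dose rounding", "severity": 1}] * 900
    + [{"kind": "duplicate order", "severity": 2}] * 80
    + [{"kind": "drug interaction", "severity": 4}] * 15
    + [{"kind": "allergy match", "severity": 5}] * 5
)

print(len(fired(day, 1)))  # every minor flag interrupts the doctor
print(len(fired(day, 4)))  # only the alerts worth a hard stop
```

The design tension is that raising the threshold reduces interruptions but risks silencing a genuinely dangerous warning, which is exactly the trade-off clinical alerting systems struggle with.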
______
______
Robotic surgery:
Robotic surgery, also called robot-assisted surgery, allows doctors to perform many types of complex procedures with more precision, flexibility and control than is possible with conventional techniques. Robotic surgery is a further step in the evolution of minimally invasive and laparoscopic (small-incision) surgical procedures. During robotic surgery, three or four robot arms are inserted into the patient via small incisions in the abdomen. One arm operates the camera, two others serve as the surgeon’s hands, and the fourth arm is used to move any obstructions found along the way. However, robotic surgery doesn’t rely entirely on robots. During the surgical procedure, patients are surrounded by a complete surgical team. The surgeon operates the nearby console and can see everything on a 3D image of the surgical field. The surgeon’s hands are placed in specialized devices that direct the instruments located at the ends of the robotic arms. The robotic arms can filter out any tremors in the surgeon’s hand. As a result, they increase the range of motion and enhance the precision of movement. This is especially important during delicate parts of surgical procedures. Robotic surgery enables surgeons to handle more complex tasks and reach difficult-to-access areas easily. Its applications range from oncological surgery to surgery of the pelvic floor or for morbid obesity.
_
Surgical robotics can be classified into two main areas: image-guided systems and minimally invasive (teleoperated) systems. Most surgical robots today are controlled directly by a surgeon in teleoperation mode: the surgeon manipulates an input device, and at the patient’s end the robot follows. A main goal of this partnership between surgeon and robot is to exploit the capabilities of both, performing the task better than either could alone.
Image-guided surgery includes orthopaedic surgery, spine surgery, neurosurgery, reconstructive/plastic surgery and otorhinolaryngology (ENT) surgery. In image-guided surgery, a robot workstation is integrated into the surgical suite and the relevant part of the patient’s body is fixed in place by suitable fittings. A representative example of image-guided surgery is the ROBODOC system used in knee and hip surgery.
Da Vinci is a teleoperated system: the surgeon controls the surgical robot from a console, and the robot arms follow those motions. It consists of three arms, mounted on a single bedside cart, that hold two tools and an endoscope. The grasper tool has 2 DOF inside the patient, while the EndoWrist, shown in the figure below, enhances articulation and handles complex manipulations with ease. The console incorporates a separate video screen for each eye to display 3D video from the 3D endoscope, and the operational end of the tools is mapped to the surgeon’s hands to provide more detailed control.
Figure above shows the da Vinci Surgical System.
Intuitive Surgical, the world’s dominant robotic surgery company and maker of the da Vinci Single Port surgical system, pioneered robotic surgery through the early 2000s and remains the dominant player, but major companies including Medtronic and Johnson & Johnson have big plans to enter the space and compete. Meanwhile, Stryker has built a strong niche in robotic orthopaedic surgery.
_
Advantages and disadvantages of robotic surgery are summarized in the table below:
These robotic systems enhance dexterity in several ways. Instruments with increased degrees of freedom greatly enhance the surgeon’s ability to manipulate instruments and thus the tissues. These systems are designed so that the surgeon’s tremor is compensated in the end-effector motion through appropriate hardware and software filters. In addition, these systems can scale movements so that large movements of the control grips are transformed into micromotions inside the patient.
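The two ideas in the paragraph above, tremor filtering and motion scaling, can be sketched in a few lines. Real systems use calibrated kinematics and properly designed digital filters; the moving-average filter, scale factor, sampling rate and tremor frequency below are illustrative assumptions only:

```python
# Minimal sketch of motion scaling plus low-pass tremor filtering.
# All numeric parameters are illustrative assumptions, not real specs.
import math

SCALE = 0.2   # 5:1 motion scaling: 10 mm at the grip -> 2 mm at the tool tip
WINDOW = 10   # averaging over one tremor period (10 Hz at 100 Hz sampling)

def filter_and_scale(samples):
    """Low-pass each hand-position sample with a moving average, then scale."""
    out, buf = [], []
    for s in samples:
        buf.append(s)
        if len(buf) > WINDOW:
            buf.pop(0)
        out.append(SCALE * sum(buf) / len(buf))  # smoothed, scaled command
    return out

# Simulated hand motion: slow drift plus a 10 Hz tremor, sampled at 100 Hz
t = [i / 100 for i in range(100)]
hand = [ti + 0.5 * math.sin(2 * math.pi * 10 * ti) for ti in t]
tool = filter_and_scale(hand)  # tremor averaged out, motion scaled down
```

Averaging over exactly one tremor period nulls that frequency while passing the surgeon’s slow intentional motion, which is the basic intuition behind the hardware and software filters mentioned above.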
Another important advantage is the restoration of proper hand-eye coordination and an ergonomic position. These robotic systems eliminate the fulcrum effect, making instrument manipulation more intuitive. With the surgeon sitting at a remote, ergonomically designed workstation, current systems also eliminate the need to twist and turn in awkward positions to move the instruments and visualize the monitor.
By most accounts, the enhanced vision afforded by these systems is remarkable. The 3-dimensional view with depth perception is a marked improvement over the conventional laparoscopic camera views. Also to one’s advantage is the surgeon’s ability to directly control a stable visual field with increased magnification and maneuverability. All of this creates images with increased resolution that, combined with the increased degrees of freedom and enhanced dexterity, greatly enhances the surgeon’s ability to identify and dissect anatomic structures as well as to construct microanastomoses.
There are several disadvantages to these systems.
First of all, robotic surgery is a new technology and its uses and efficacy have not yet been well established. To date, mostly feasibility studies have been conducted, and almost no long-term follow-up studies have been performed. Many procedures will also have to be redesigned to optimize the use of robotic arms and increase efficiency. However, time will most likely remedy these disadvantages.
Another disadvantage of these systems is their cost. With a price tag of a million dollars, their cost is nearly prohibitive. Whether the price of these systems will fall or rise is a matter of conjecture. Some believe that with improvements in technology and as more experience is gained with robotic systems, the price will fall. Others believe that improvements in technology, such as haptics, increased processor speeds, and more complex and capable software will increase the cost of these systems. Also at issue is the problem of upgrading systems; how much will hospitals and healthcare organizations have to spend on upgrades and how often? In any case, many believe that to justify the purchase of these systems they must gain widespread multidisciplinary use.
Another disadvantage is the size of these systems. They have relatively large footprints and relatively cumbersome robotic arms, an important drawback in today’s already crowded operating rooms. It may be difficult for both the surgical team and the robot to fit into the operating room. Some suggest that miniaturizing the robotic arms and instruments will address the problems associated with their current size. Others believe that larger operating suites with multiple booms and wall mountings will be needed to accommodate the extra space requirements of robotic surgical systems. The cost of making room for these robots, on top of the cost of the robots themselves, makes them an especially expensive technology.
One of the potential disadvantages identified is a lack of compatible instruments and equipment. Lack of certain instruments increases reliance on tableside assistants to perform part of the surgery. This, however, is a transient disadvantage because new technologies have and will develop to address these shortcomings.
Most of the disadvantages identified will be remedied with time and improvements in technology. Only time will tell if the use of these systems justifies their cost. If the cost of these systems remains high and they do not reduce the cost of routine procedures, it is unlikely that there will be a robot in every operating room and thus unlikely that they will be used for routine surgeries.
_
Advantages and Disadvantages of Robot-Assisted Surgery Versus Conventional Surgery are depicted in table below:
_
There’s no denying that robotic surgery is on the rise thanks to the many advantages it brings. It allows surgeons to see the surgical field magnified in 3D, improving their vision. Moreover, the robot’s hands can reach into tighter spots and move in ways that might be impossible for humans. Finally, robots filter out the surgeon’s hand tremors, and doctors can sit at a console instead of standing over the patient for many hours, which reduces fatigue. Such minimally invasive surgery usually results in lower blood loss and faster recovery. All in all, healthcare providers will weigh the pros and cons carefully to decide whether the benefits of robotic surgery outweigh its drawbacks. But we are bound to see more and more robots in healthcare thanks to the gradual reduction in production and training costs.
____
Robotic Surgery versus Laparoscopic Surgery:
Laparoscopic surgery:
In a traditional open surgery approach, the surgeon uses a large incision to perform the surgery. In laparoscopic surgery, your surgeon makes several small incisions into which they insert small surgical tools and a camera. The camera allows your surgeon to see inside your body to perform the surgery. All minimally invasive surgery techniques have similar benefits, such as less blood loss, reduced pain, smaller scars, shorter stay in the hospital, and faster recovery times. However, there are some limitations to laparoscopic surgery, such as 2D images and tools that offer a limited range of motion, which can make it difficult for your surgeon to work in small spaces.
Robotic surgery:
Robotic surgery is similar to laparoscopic surgery in that they both use small incisions, a camera, and surgical instruments. However, instead of holding and manipulating the surgical instruments during robotic surgery, the surgeon will sit at a computer console and use controls to manipulate the robot. The console allows your surgeon to view high-definition, magnified 3D images with increased accuracy and vision inside your body. Compared to traditional surgery, robotic surgery provides your surgeon with a greater range of motion and precision, which may lead to less bleeding and post-operative pain.
_
Difference between laparoscopic and robotic surgery:
They are both minimally invasive procedures that offer precise surgery and short recovery times, but a few key differences exist:
Laparoscopic surgery has certain limitations, such as two-dimensional imaging, restricted range of motion of the instruments, and poor ergonomic positioning of the surgeon. The robotic surgery system was introduced as a solution to minimize the shortcomings of laparoscopy. Improved visualization and greater dexterity are two major features of robotic-assisted laparoscopic surgery. This emerging method provides undoubted technical advantages over conventional laparoscopy. Robotic systems have 3D imaging, tremor filter, and articulated instruments. With this advanced equipment, robotic surgery is superior to conventional laparoscopic surgery due to its significant improvements in visibility and manipulation. Improvements in efficiency and usability of robotic systems are increasingly being explored.
Robotic surgery has successfully addressed the limitations of traditional laparoscopic and thoracoscopic surgery, thus allowing completion of complex and advanced surgical procedures with increased precision in a minimally invasive approach. In contrast to the awkward positions required for laparoscopic surgery, the surgeon is seated comfortably at the robotic control console, an arrangement that reduces the surgeon’s physical burden. Instead of the flat, 2-dimensional image obtained through the regular laparoscopic camera, the surgeon receives a 3-dimensional view that enhances depth perception; camera motion is steady and conveniently controlled by the operating surgeon via voice-activated or manual master controls. Also, manipulation of robotic arm instruments improves range of motion compared with traditional laparoscopic instruments, thus allowing the surgeon to perform more complex surgical movements (see table below).
Laparoscopic Limitations/Robotic Solutions:
Laparoscopic Problems/Limitations | Robotic Surgery Solutions/Potential
Two-dimensional vision of surgical field displayed on the monitor impairs depth perception | Binocular systems and polarizing filters create 3-dimensional view of the field
Movements are counterintuitive (ie, moving the instrument to the right appears to the left on the screen due to mirror-image effect) | Movements are intuitive (ie, moving the control to the right produces a movement to the right on the viewer)
Unstable camera held by an assistant | Surgeon controls camera held in position by robotic arm, allowing solo surgery
Diminished degrees of freedom of straight laparoscopic instruments | Microwrists near the tip that mimic the motion of the human wrist
Surgeon forced to adopt uncomfortable postures during operation | Superior operative ergonomics: surgeon comfortably seated at the control console
Steep learning curve | Shorter learning curve
_
Advantages and Disadvantages of Conventional Laparoscopic Surgery Versus Robot-Assisted Surgery are depicted in table below:
_______
Common myths about robotic surgery:
Myth: The robot performs the procedure.
Reality: Robotic surgical technology can’t move on its own. Surgeons are in control at all times. There are safety mechanisms in place to ensure the robot doesn’t move without the surgeon controlling it.
Myth: Robots are so precise that I don’t have to worry about complications.
Reality: Robotic-assisted surgery lowers the risk of certain complications. But they’re still possible.
Myth: Open surgery is better because the surgeon has a direct view of the surgical area.
Reality: With robot-assisted technology, surgeons have an enhanced view. A camera provides real-time, high-resolution, magnified images with 3D capabilities.
______
______
Robotic operating room:
Like AI (artificial intelligence), robotics has become one of the “game-changer” technologies of the present day. Both have shown incredible capability and impact on the world, as well as huge potential to revolutionize human life; AI works mainly in cyberspace (the immaterial data space), while robotics works mainly in physical space. The application of robotics in the operating room (OR) began around the 1990s with the release and rapid spread of surgical robots; FDA-approved representatives include the da Vinci Surgical System (1990~), NeoGuide Colonoscope (2007~), Sensei X Robotic Catheter System (2007~), FreeHand 1.2 (2010~), Monarch Platform (2014~), Flex® Robotic System (2018~), and so on.
However, as robotic technology has improved, the role of robots in the OR has changed greatly, from the individual use of one or two surgical robots to integrated systems comprising multiple robotic devices that support the surgery from different aspects and at different levels. That is to say, a new era of surgery began during the last decade (2010~2020) with the appearance of the new concept of the “Robotic Operating Room.”
The two representative robotic ORs are: SCOT (Smart Cyber Operating Theater) and AMIGO (Advanced Multimodality Image Guided Operating) Suite. Although they share various similar features and devices, SCOT concentrates more on the integration of information and IoT technology connecting robotic devices, while the AMIGO suite relies more on the integrated system and combination of automatic devices on a hardware level.
SCOT:
Aiming at a high survival rate and the prevention of postoperative complications, decision-making in excision surgery for malignant brain tumors depends on information obtained from various medical devices, including intraoperative MRI, a surgical navigation system, nerve-monitoring devices, an intraoperative rapid diagnosis device (intraoperative flow cytometer), and so on. However, these devices mostly work independently, in a stand-alone manner without data interaction, which makes it difficult for surgeons to understand and manage all the information in a unified way. To tackle this issue, the SCOT (Smart Cyber Operating Theater) project started in 2014, and its state-of-the-art flagship OR, Hyper SCOT, was introduced at Tokyo Women’s Medical University in 2019 (see figure below), with robotic technology implemented and a basis for AI analysis to be added in the future.
Figure above shows Hyper SCOT.
_
AMIGO Suite:
Unveiled in 2011, the AMIGO (Advanced Multimodality Image Guided Operating) suite is the operating suite introduced at Brigham and Women’s Hospital. With an area of 5,700 square feet (about 530 m²), it consists of three individual yet integrated procedure rooms. AMIGO is one of the first operating suites in the world to integrate multiple types of advanced imaging technology, including (1) cross-sectional digital imaging systems (CT and MRI); (2) real-time anatomical imaging (x-ray and ultrasound); (3) molecular imaging using beta probes (detecting malignant tissue by measuring beta radiation); (4) PET/CT; and (5) targeted optical imaging (based on the nature of light). Besides multi-modality imaging, various navigational devices, robotic devices, and therapy-delivery systems in AMIGO help doctors pinpoint and treat tumors and other target abnormalities. One of AMIGO’s biggest strengths is that its MRI room, OR, and CT/PET room are connected: the MRI scanner can be moved from the MRI room into the OR for intraoperative imaging, and the patient can be transferred from the OR to the CT/PET room. Moreover, the operating table can rotate freely to face the different imaging systems (MRI scanner, PET scanner, or x-ray system).
_
Robotic components used in Operating Rooms:
Besides integrated robotic ORs introduced above, new types of assistant robots other than surgical robots in ORs are also studied and developed to enhance the quality of surgeries. These include (1) robotic microscope; (2) robotic armrest; (3) robotic scrub nurse; (4) intelligent/robotic lighting system; and (5) cleaning/sterilization robots.
_
Robotic Microscope:
Safe and successful microsurgery greatly depends on intraoperative illumination and visualization, which has driven the rapid development of microscope technology in ORs. In recent years, robotic microscopes with high-definition 3D visualization and voice control have also emerged. ORBEYE, developed by Olympus; Modus V, developed by Synaptive Medical; and KINEVO 900, developed by Carl Zeiss (shown in the figure below) stand out among OR microscopes. ORBEYE is particularly strong in visualization: it is equipped with two 4K Exmor R® CMOS image sensors from Sony and provides high-definition 3D digital visualization on a monitor.
Figure above shows robotic microscopes.
______
______
Applications of Robotics Technologies during COVID-19:
Many countries have enacted a quick response to the unexpected coronavirus disease 2019 (COVID-19) pandemic by using existing technologies. For example, robotics, artificial intelligence, and digital technology have been deployed in hospitals and public areas for maintaining social distancing, reducing person-to-person contact, enabling rapid diagnosis, tracking virus spread, and providing sanitation.
_
Summary of robotic technologies during the COVID-19 pandemic:
Application | Principle | Authors/organizations
Sanitation in public areas | Robotic vacuum | SoftBank Robotics
 | Disinfection robot | Nanyang Technological University
 | Tank-style, remote-controlled disinfecting robot | NA
 | Portable hand sanitizer dispenser robot | ZhenRobotics Corp
Sanitation in hospitals | Aerosol disinfection robot | Shanghai TMiRob Technology
 | UV-C light disinfection robot | UVD Robots
 | Intelligent disinfection robot | TMiRob
 | UV light robot | Xenex Disinfection Services
 | Autonomous cleaning robot | Seoul National University Hospital (SNUH)
Delivery inside hospitals | Autonomous service delivery robots | Pudu Technology
Delivery outside hospitals | Larger autonomous delivery robot, robotaxi | JD Logistics
 | Large unmanned distribution vehicles | White Rhino Auto Company
 | Large autonomous delivery, self-driving | Meituan
 | Small, self-driving delivery cart | ZhenRobotics Corp
Patrolling | Small autonomous robot | NA
 | Small, semi-autonomous patrol robot | Boston Dynamics (owned by SoftBank)
 | Autonomous, self-driving patrol robot | Guangzhou Gosuncn Robot Company
Screening | AI-powered screening robot with video-conferencing | Robotemi
 | AIMBOT: autonomous, AI-powered indoor monitoring robot with mask recognition | UBTech
 | Autonomous screening robots | UBTech
 | LIDAR and IR/optical cameras for screening, 4G connection | UBTech
_
A dichotomy existed between the requirement for the minimization of human contact to reduce infection transmission rates and the need for humans to carry out the essential tasks of their daily lives. A surge in the creation of robotic technologies has been observed to bridge this gap, including robots designed for sanitation, delivery, patrolling, and screening that aim to work alongside humans in efficiently reducing the burden of the pandemic while maintaining the quality of life. Whether placed in areas of high infection risk, like hospitals, or in public areas, the possible applications of robotic technology both during this pandemic and in the future seem infinite.
_
The various robotic technologies used during COVID-19 are summarized in the figure below:
Figure above shows (a) Cleaning Robot from SoftBank Robotics. (b) XDBot by Nanyang Technological University. (c) Tank-style, remote-controlled disinfecting robot. (d) Portable hand sanitizer robot from ZhenRobotics Corp. (e) Aerosol Disinfection Robot from Shanghai TMiRob Technology. (f) UV-C light Disinfection Robot by UVD Robots. (g) Intelligent Disinfection Robot by TMiRob. (h) UV light robot by Xenex Disinfection Services. (i) Autonomous Cleaning Robot by Seoul National University Hospital. (j) Pudu Technology’s autonomous service delivery robot. (k) JD Logistics’ large self-driving delivery vehicle. (l) White Rhino Auto Company’s large self-driving delivery vehicle. (m) Meituan’s large self-driving delivery vehicle. (n) ZhenRobotics’ RoboPony. (o) Quarantine watch robot. (p) Boston Dynamics’ Spot being used in Bishan-Ang Mo Kio Park, Singapore. (q) Smart patrol robot being used in Guiyang Airport, China. (r) Temi robots, (s) AIMBOT, and (t) Cruzr robots from UBTech. (u) Atris outdoor screening and patrolling robot from UBTech.
Most of the technologies used in the pandemic have been adapted from pre-existing technologies. Although research laboratories and startups are creating new technologies and prototypes during the pandemic to help health-care workers and the public, their products may not reach the market in time, or they may be unable to grow their customer base fast enough to reach critical mass. Overall, it is more efficient and scalable to repurpose existing technologies than to invent new ones during a pandemic or crisis. To promote this approach, governments should encourage companies and research groups to explore the potential applications of existing technologies and products. Moreover, governments should invest in building technical reserves during nonpandemic periods so that these technologies can be commercialized as helpful products during future pandemics and crises.
_______
_______
Agricultural Robotics:
Agricultural robotics is the logical extension of automation technology into biosystems such as agriculture, forestry, and fisheries. Robots have the advantage of being small, lightweight, and autonomous. Because of their size, they can collect data in close proximity to the crop and soil. Remote sensing only provides overall information, while robotic scouts can give detailed information about the crop, such as the presence of diseases, weeds, insect infestations, and other stress conditions. The robots’ low weight is a major advantage, since they do not compact the soil the way larger machinery does.
The first generation of robots was developed as crop scouts that collect data in the field. Although the guidance problem has been solved, the required sensors are still under development. Cameras are used to detect weeds, and larger-scale sensors are being developed to detect crop stresses and disease. Insect-activity sensors and most soil sensors are still on the drawing board.
The second generation of robots will be able to perform field operations such as mechanical weeding and micro-spraying, a method in which, instead of applying large quantities of spray inefficiently, small amounts of high-concentration chemical are applied directly to weed plants. All operations currently performed in the field can be done with robots, preferably of smaller size and lower cost. Planting, seedbed preparation, spraying, and cultivation are all possible with smaller robots using GPS guidance. The only operation that still requires large machinery and capacity is harvesting; it will most likely be performed with large robots that resemble current equipment.
The third generation of robots will be part of a fully autonomous crop production system. This futuristic-farm idea is similar to the many “Houses of the Future” that have been built over the years; the first was presented by the Monsanto Corporation at California’s Disneyland in 1957. The idea behind these futuristic houses is to show how modern technology can contribute to comfort, savings in materials and energy, durability, and the enjoyment of our living spaces. One wonders what the Farm of the Future would look like. Modern technology such as GPS, and its much more accurate European counterpart Galileo, will deliver affordable, centimeter-precision navigation of vehicles. Sensors will provide real-time information about the status of the crop, and computer software and data-fusion techniques will help digest the data into management decisions. Robots will roam the fields to care for the plants individually. The Farm of the Future will provide a playground where universities and companies can work together to demonstrate the potential of technology, and the general public will have direct access to it.
_______
_______
Section-14
Pros/Cons of robotic technology:
_
Benefits of robotic technology:
Today’s robots aren’t being designed to take jobs from their human counterparts. Instead, they are being developed with a focus on taking over mundane tasks that humans shouldn’t be doing. These types of tasks, while important, can be better handled by a robot than a person, thus freeing the person up to do more important things. Many people fear that robots or full automation may someday take their jobs, but this is simply not the case. Robots bring more advantages than disadvantages to the workplace. They enrich a company’s ability to succeed while improving the lives of real, human employees who are still needed to keep operations running smoothly. Large manufacturing companies have long recognised the productivity benefits of automation of the manufacturing process both in terms of improved production rates and improved quality. Robotic automation systems have the additional benefits of being able to adapt to process changes and new product lines with ease. Robotic technology has demonstrated its advantages over conventional automation in a wide range of applications from medical care to car and aircraft manufacturing. It is small and medium sized organisations in the manufacturing and processing sectors which are possibly in the most advantageous position when considering implementation of robot technology as they can achieve massive improvements over the business as usual case to support their sustainable growth ambitions.
The greatest benefits stemming from the wider use of robotics should be substitution for people working in unhealthy or dangerous environments. In space, defense, security, or the nuclear industry, but also in logistics, maintenance, and inspection, autonomous robots are particularly useful in replacing human workers performing dirty, dull or unsafe tasks, thus avoiding workers’ exposures to hazardous agents and conditions and reducing physical, ergonomic and psychosocial risks. For example, robots are already used to perform repetitive and monotonous tasks, to handle radioactive material or to work in explosive atmospheres. In the future, many other highly repetitive, risky or unpleasant tasks will be performed by robots in a variety of sectors like agriculture, construction, transport, healthcare, firefighting or cleaning services.
Moreover, there are certain skills to which humans will be better suited than machines for some time to come and the question is how to achieve the best combination of human and robot skills. The advantages of robotics include heavy-duty jobs with precision and repeatability, whereas the advantages of humans include creativity, decision-making, flexibility, and adaptability. This need to combine optimal skills has resulted in collaborative robots and humans sharing a common workspace more closely and led to the development of new approaches and standards to guarantee the safety of the “man-robot merger”.
-1. Precision
Robots are more precise than humans by their very nature. Without human error, they can perform tasks more efficiently and at a consistent level of accuracy. Delicate tasks like filling prescriptions or choosing the proper dosages are something robots are already doing. At the University of California, San Francisco, a robotic pharmacist is filling and dispensing prescriptions better than most humans: in 2011, not one error was found in over 350,000 doses. The robot was also better able to judge whether medications would interact with each other in specific patients. Precision and consistency are two vital attributes required in manufacturing, especially when it comes to mass production. You can expect robots to make the same cut over and over again, or the same alignment when assembling something. Also, they can complete tasks considerably faster.
-2. Repetition
Humans grow tired of repetition after a certain amount of time. Our efficiency and productivity begin to wane as time goes on. Worse, long-term repetition can lead to injuries like carpal tunnel syndrome that can permanently take a person out of work. Robots do not suffer from issues like this. They are able to repeatedly perform tasks without drops in productivity. A perfect example of this discipline at work is the fleet of robots Amazon uses in its shipping warehouses. During intense holiday months, these robots work non-stop to move shelves over to workers so they can scan them. With these robots in place, workers can scan up to 300 items per hour and save 20 miles of walking each day, compared with the 100 items they could normally scan without their robotic helpers.
-3. Work continuously and reliably
Robots don’t tire, so you can put them to work for extended periods and expect the same quality of output. They have some limitations (they can overheat if you make them work non-stop), but they are certainly far superior to human workers in terms of duration of work and consistency of quality. Robots don’t get distracted or need to take breaks. They don’t request vacation time or ask to leave an hour early. A robot will never feel stressed out and start running slower. They also don’t need to be invited to employee meetings or training sessions. Robots can work all the time, and this speeds up production. Robots never need to divide their attention between a multitude of things. Their work is never contingent on the work of other people. They won’t have unexpected emergencies, and they won’t need to be relocated to complete a different time-sensitive task. They’re always there, doing what they’re supposed to do. Automation is typically far more reliable than human labor.
-4. Intense Labor
Things like surveying and harvesting can be exhausting for humans to complete, but robots can perform these tasks without ever calling in sick. Robots like Wall-Ye are already being used to perform farming tasks. The Wall-Ye V.I.N. robot explores vineyards in France and prunes over 600 vines each day. While doing this, the robot can also collect data on the soil, fruit, and vines themselves to ensure they are all in good health.
-5. Human Safety
Safety is the most obvious advantage of utilizing robotics. Heavy machinery, machinery that runs at high temperatures, and sharp objects can easily injure a human being. By delegating dangerous tasks to a robot, you’re more likely to look at a repair bill than a serious medical bill or a lawsuit. Robots can be repaired, but people cannot when serious injuries occur. That’s why it’s important that robots take over tasks like manufacturing automobiles, welding, screwdriving, and sanding or polishing, which can all be hazardous to humans. This is also why we see robots being used in dangerous tasks like bomb defusal. Not only can they take over mundane or hazardous tasks, but they can also save lives. Employees who work dangerous jobs will be thankful that robots can remove some of the risks. Another noteworthy benefit of using robots is that they only need good ventilation and proper regulation of dust and moisture levels in their work environment. They don’t require lunch breaks, heating, or air-conditioning. They are not like human workers, who require a workplace that does not pose health risks and potential injuries.
-6. No Salaries or Bonuses
Additionally, robots don’t demand salaries, bonuses, and other forms of compensation. As such, they are less costly to deploy. They work with little supervision and only require periodic maintenance. What’s more, they can do tasks in hazardous setups such as the handling of toxic materials.
-7. Simple Interactions
While robots could never replace complex human interactions, they can take over simple interactions like banking or bartending. For example, bank tellers often spend a lot of time performing simple transactions for customers. Automating tasks like withdrawals, deposits, and other simple things can free up tellers to do more important banking work.
-8. Happier Employees
Since robots are often assigned to perform tasks that people don’t particularly enjoy, like menial work, repetitive motion, or dangerous jobs, your employees are more likely to be happy. They’ll be focusing on more engaging work that’s less likely to grind down their nerves. They might want to take advantage of additional educational opportunities, utilize your employee wellness program, or participate in an innovative workplace project. They’ll be happy to let the robots do the work that leaves them feeling burned out.
-9. Job Creation
Robots don’t take jobs away. They merely change the jobs that exist. Robots need people for monitoring and supervision. The more robots we need, the more people we’ll need to build those robots. By training your employees to work with robots, you’re giving them a reason to stay motivated in their position with your company. They’ll be there for the advancements and they’ll have the unique opportunity to develop a new set of tech or engineering related skills.
-10. Robots can perform Jobs that Humans don’t want to do.
Certain jobs, like crop planting and harvesting, require a great deal of physical strength and fortitude to perform. They also tend to be harder to fill: farmers around the world, from the United States to New Zealand, have reported difficulties in finding enough labor to harvest their crops. Farmers in France reportedly had to fly in workers from Ecuador, which is thousands of miles away, to bring in produce from the fields. Others have scaled back production because they are simply unable to find enough people. In light of this, some farms are using robots to perform a great deal of the work, from picking berries to packaging salad to weeding the fields. Especially in the United States, where 55 percent of farms have reported labor shortages, such technology has become increasingly necessary for ensuring that they stay afloat – and that no produce goes to waste.
_______
_______
Negatives of robotic technology:
_
The increasing popularity and spread of robots in many different areas of life and the associated interaction between humans and machines offer both opportunities and challenges for the safety and security of humans and data. The safety requirements are especially obvious for the use of industrial and collaborative robots at the workplace.
Injury:
Industrial robots are programmable multifunctional mechanical devices designed to move material, parts, tools, or specialized devices through variable programmed motions to perform a variety of tasks. Robots are generally used to perform unsafe, hazardous, highly repetitive, and unpleasant tasks. They have many different functions such as material handling, assembly, welding, machine tool load and unload functions, painting, spraying, and so forth.
Studies indicate that many robot accidents occur during non-routine operating conditions, such as programming, maintenance, testing, setup, or adjustment. During many of these operations the worker may temporarily be within the robot’s working envelope where unintended operations could result in injuries.
In recent years, there have been a number of injuries and even fatalities that resulted from interaction between workers and robotic machinery. Despite the lack of occupational surveillance data on injuries associated specifically with robots, researchers from the US National Institute for Occupational Safety and Health (NIOSH) identified 61 robot-related deaths between 1992 and 2015 using keyword searches of the Bureau of Labor Statistics (BLS) Census of Fatal Occupational Injuries research database. Using data from the Bureau of Labor Statistics, NIOSH and its state partners have investigated robot-related fatalities under the Fatality Assessment and Control Evaluation Program. In addition the Occupational Safety and Health Administration (OSHA) has investigated dozens of robot-related deaths and injuries. Injuries and fatalities could increase over time because of the increasing number of collaborative and co-existing robots, powered exoskeletons, and autonomous vehicles into the work environment.
Data Protection:
In a workplace where an increasing number of complex systems are connected and communicate with each other, it is important that these systems are protected against data theft and manipulation. In addition to manipulating configuration files (changing the motion areas or the position data) and code manipulation (reprogramming sequences), manipulating the robot feedback (deactivating alarms) is the greatest threat. These interventions can lead to the destruction of products, damage to robots and, in the worst-case scenario, injuries to people working in these areas. To guarantee the security of data, interfaces, and communication channels, a growing number of companies choose external software solutions. These solutions offer protection against manipulation of configuration files by encrypting them and storing them in a Secure Element (SE). Authentication also prevents unauthorized access to the central processing unit. To prevent code manipulation, software solutions offer authorization of sent commands by means of a hash process and verification of the code.
_
Other negatives:
Potential Job losses:
One of the biggest concerns surrounding the introduction of robotic automation is the impact on jobs for workers. If a robot can perform at a faster, more consistent rate, then the fear is that humans may not be needed at all. While these worries are understandable, they are not really accurate. The same was said during the early years of the industrial revolution, and as history has shown us, humans continued to play an essential role. Amazon is a great example of this: its workforce has grown rapidly during a period in which it went from using around 1,000 robots to over 45,000.
In a report published in 2021, the World Economic Forum said the rise of machines and automation would eliminate 85 million jobs by 2025. At the same time, the WEF expects 97 million new jobs to be created, meaning a net addition of 12 million jobs. It stressed the need for “reskilling” and “upskilling” by employers to ensure staff are sufficiently equipped for the future of work.
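The net figure quoted by the WEF follows directly from the two headline numbers, as this trivial check illustrates:

```python
# Checking the WEF arithmetic quoted above: 85 million jobs eliminated
# versus 97 million jobs created implies a net gain of 12 million.
jobs_eliminated = 85_000_000
jobs_created = 97_000_000

net_change = jobs_created - jobs_eliminated
print(net_change)  # 12000000
```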
Up to 20 million manufacturing jobs around the world could be replaced by robots by 2030, according to analysis firm Oxford Economics. People displaced from those jobs are likely to find that comparable roles in the services sector have also been squeezed by automation, the firm said. However, increasing automation will also boost jobs and economic growth, it added. Oxford Economics also found the more repetitive the job, the greater the risk of its being wiped out. Jobs which require more compassion, creativity or social intelligence are more likely to continue to be carried out by humans “for decades to come”, it said. The firm called on policymakers, business leaders, workers, and teachers to think about how to develop workforce skills to adapt to growing automation. About 1.7 million manufacturing jobs have already been lost to robots since 2000, including 400,000 in Europe, 260,000 in the US, and 550,000 in China, it said.
Oxford Economics predicted that China will have the most manufacturing automation, with as many as 14 million industrial robots by 2030. In the UK, several hundred thousand jobs could be replaced, it added. However, if there were a 30% rise in robot installations worldwide, that would create $5 trillion in additional global GDP, it estimated. At a global level, jobs will be created at the rate they are destroyed, it said.
Initial investment and maintenance costs:
Robots are too expensive for every organization to adopt. An organization has to invest a large share of its resources in acquiring and keeping robots; sometimes this cost exceeds human-resource expenses. Not only the installation cost but also the daily maintenance cost is high in robotics. Besides the robot hardware and other operating costs, a large part of the expense comes from reprogramming, adaptation, and support of the software that controls the robot. These programming costs are also recurring, as the robot must be adjusted for every small change in a work step.
As seen in the figure above, about three-quarters of a robot’s lifetime cost is software-related. To maintain robots, an organization needs highly qualified engineers and expensive tools to repair them. Due to this cost factor, many companies decline to adopt robots and opt for human labor instead.
Note that Robot prices are falling:
As robot production has increased, costs have gone down. Over the past 30 years, the average robot price has fallen by half in real terms, and even further relative to labor costs (see figure below). As demand from emerging economies encourages the production of robots to shift to lower-cost regions, they are likely to become cheaper still.
_
Hiring skilled staff:
Over the past decade manufacturers have found it harder to source skilled staff members to fill the specialised roles in their factories. The introduction of automation adds another layer to that conundrum as the robots require programming and a knowledge of how to operate them. In reality, this opens up further opportunities for existing employees to be trained and expand their own skill set.
Negative outputs:
A robot is a machine: it performs exactly what we command it to do. If we give it the wrong inputs, the result can be failure. Robots do not understand the intentions of their users, and this can harm humans. They can even be operated deliberately to harm society.
______
______
Workplace safety measures for humans:
When robots are used in industrial production, workplace safety measures ensure that humans are protected. These measures include adequate safety distances between humans and machines, safety barriers, photoelectric barriers, and scanners in monitored zones. The safety precautions also include emergency switches on the robot and its ability to recognize collisions with objects and humans and to respond appropriately. This applies especially to cobots.
With these newer industrial robots, there are no separating safety devices in certain working areas. Other technical safety measures are used instead. For example, if a person is several meters away, the robot operates in normal mode. If the person gets closer, from a defined threshold, the robot slows down. If the person is very close and there is just a one-meter gap, it stops.
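The graded response described above is often called speed-and-separation monitoring. A minimal sketch of the mode-selection logic follows; the distance thresholds (3 m and 1 m) are illustrative assumptions taken from the example in the text, not values from any standard, which would instead come from a formal risk assessment:

```python
# Sketch of speed-and-separation monitoring for a collaborative robot.
# Thresholds are illustrative assumptions, not standard-mandated values.

NORMAL_THRESHOLD_M = 3.0  # beyond this, the robot runs at full speed
STOP_THRESHOLD_M = 1.0    # within this, the robot must stop

def select_mode(distance_m: float) -> str:
    """Map the measured human-robot distance to an operating mode."""
    if distance_m > NORMAL_THRESHOLD_M:
        return "normal"   # person is several meters away
    if distance_m > STOP_THRESHOLD_M:
        return "slow"     # person approaching: reduce speed
    return "stop"         # person within the one-meter gap: halt

print(select_mode(5.0))  # normal
print(select_mode(2.0))  # slow
print(select_mode(0.5))  # stop
```

A real controller would evaluate this continuously against a safety-rated distance sensor and latch the stop state until the person leaves the zone.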
With newer systems, ToF (Time of Flight) technology is used. This technology uses 3D camera systems that measure distance based on time of flight. The surroundings are illuminated with a modulated light source. For each pixel, the camera measures the time the light needs to reach an object and be reflected back, which is then used to calculate that pixel’s distance to the object in question. Radar sensors are also used in this area; in this case, movements are detected on the basis of electromagnetic waves in the radio-frequency range. Safety for humans can also be increased by combining several redundant technologies.
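The per-pixel calculation a ToF camera performs reduces to a simple formula: the light covers the distance twice (out and back), so distance = (speed of light × round-trip time) / 2. A minimal sketch:

```python
# Core time-of-flight distance calculation: light travels to the object
# and back, so distance = (speed of light * round-trip time) / 2.

SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_distance(round_trip_s: float) -> float:
    """Distance (in meters) implied by a measured round-trip time."""
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0

# A round trip of roughly 6.67 nanoseconds corresponds to about 1 meter,
# which shows why ToF cameras need picosecond-scale timing resolution.
print(tof_distance(6.671e-9))
```

A real ToF camera applies this per pixel, typically inferring the round-trip time from the phase shift of the modulated light rather than timing each pulse directly.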
Safety standards are being developed by the Robotic Industries Association (RIA) in conjunction with the American National Standards Institute (ANSI). On October 5, 2017, OSHA, NIOSH and RIA signed an alliance to work together to enhance technical expertise, identify and help address potential workplace hazards associated with traditional industrial robots and the emerging technology of human-robot collaboration installations and systems, and help identify needed research to reduce workplace hazards. On October 16 NIOSH launched the Center for Occupational Robotics Research to “provide scientific leadership to guide the development and use of occupational robots that enhance worker safety, health, and wellbeing.” So far, the research needs identified by NIOSH and its partners include: tracking and preventing injuries and fatalities, intervention and dissemination strategies to promote safe machine control and maintenance procedures, and on translating effective evidence-based interventions into workplace practice.
_
The following guidelines can help to remove hazardous situations to robot personnel, factory workers, visitors, and to the robot itself:
(a)The robot working area should be closed off by permanent barriers (e.g., fences, rails, and chains) to prevent people from entering the area while the robot is working. The robot’s reach envelope diagram should be used in planning the barriers. The advantage of a fence-type barrier is that it is also capable of stopping a part that might be released by the robot’s gripper while in motion.
(b)Access gates to the closed working area of the robot should be interlocked with the robot control. Once such a gate is opened, it automatically shuts down the robot system.
(c)An illuminated working sign, stating “robot at work,” should be automatically turned on when the robot is switched on. This lighted sign warns visitors not to enter the closed area while the robot is switched on, even if it does not move.
(d)Emergency stop buttons must be provided in easily accessible locations as well as on the robot’s teach box and control console. Hitting the emergency button stops power to the motors and causes the brakes on each joint to be applied.
(e)Pressure-sensitive pads can be put on the floor around the robot that, when stepped on, turn the robot controller off.
(f)Emphasize safety practices during robot maintenance. In addition, the arm can be blocked up on a specially built holding device before any service work is started.
(g)Great care must be taken during programming in manual teaching mode. The teach box must be designed so that the robot can move only as long as a switch is pressed by the operator’s finger. Removing the finger must cease all robot motions.
(h)The robot’s electrical and hydraulic installation should meet proper standards. This includes efficient grounding of the robot body. Electric cables must be located where they cannot be damaged by the movements of the robot. This is especially important when the robot carries electrical tools such as a spot-welding gun.
(i)Power cables and signal wires must not create hazards if they are accidentally cut during the operation of the robot.
(j)If a robot works in cooperation with an operator, for example, when a robot forwards parts to a human assembler, the robot must be programmed to extend its arm to the maximum when forwarding the parts so that the worker can stand beyond the reach of the arm.
(k)Mechanical stoppers, interlocks, and sensors can be added to limit the robot’s reach envelope when the maximum range is not required. If a wall or a piece of machinery not served by the robot is located inside the reach envelope, the robot can be prevented from entering this area by adding photoelectric devices, stoppers, or interlock switches in the appropriate spots. Some robots are supplied with adjustable mechanical stoppers for this purpose.
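Several of the guidelines above, the interlocked gate (b), the emergency stop (d), and the hold-to-run teach switch (g), boil down to permissive conditions that must all hold before the robot may move. A minimal sketch of that combined logic, with illustrative names of my own choosing:

```python
# Sketch of combined interlock logic for the guidelines above.
# All inputs would come from safety-rated hardware in a real system.

def robot_may_move(gate_closed: bool,
                   estop_pressed: bool,
                   teach_mode: bool,
                   teach_switch_held: bool) -> bool:
    """Return True only if every safety permissive is satisfied."""
    if estop_pressed:
        return False          # (d) e-stop cuts power to the motors
    if not gate_closed:
        return False          # (b) an open access gate shuts the system down
    if teach_mode and not teach_switch_held:
        return False          # (g) in teach mode, motion requires the
                              # operator to hold the dead-man switch
    return True

print(robot_may_move(True, False, False, False))   # True: normal operation
print(robot_may_move(False, False, False, False))  # False: gate is open
```

Note that real installations implement this in redundant safety relays or a safety PLC, not ordinary application code, so that a single fault cannot defeat the interlock.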
_____
_____
Section-15
Robots and society:
_
Countries that are integrating robots into Daily Life:
We humans are an unpredictable and diverse lot, and inevitably, different regions of the world are bound to perceive robots in different ways. Here is how some countries are incorporating robotics into their cultures.
China:
Robots in China are widely welcomed with open arms. They are programmed to work as receptionists and to clean windows around the house. The South China Morning Post reports that China’s Xinhua state news agency has developed two news anchors using artificial intelligence. The “anchors,” named Xin Xiaomeng and Qiu Hao, are based on real people. They can mimic facial expressions and lifelike movements, and speak both English and Chinese. China has also applied robotics to its education system. China Daily says that Keeko, a robot teacher, has made its way into kindergarten classrooms in more than 200 schools across the country. It encourages interactive learning by sharing stories and helping children solve logical problems. Its round head and big eyes give it a “cute” appearance, appealing to children under the age of seven.
Japan:
In Japan, popular robots include humanoid entertainment robots, androids, animal robots, social robots, guard robots, and many more. Each type has a variety of characteristics. Japan employs over a quarter of a million industrial robot workers. In the next 15 years, it’s estimated that number will jump to over one million. Robotics revenue by 2025 is expected to reach $70 billion. In addition to using robotics to ease domestic burdens like laundry and elderly care, Japan also sees it benefiting more serious industries down the line. Robots played a big role at the Tokyo 2020 Olympics. Organizers developed robots to assist visitors and staff during the event. Some of the technology was developed specifically to help spectators in wheelchairs by guiding them to their seats and carrying their heavy belongings.
Thailand:
Perhaps the world’s most striking— and most controversial — robots are in Thailand. Built with bright red eyes and yellow uniforms, a small army of robots now work at Mongkutwattana General Hospital in Bangkok. They are busy each day ferrying documents from office to office along magnetic strips— a menial task nurses once did. The hospital insists that this advancement is not undermining human employment. Hospital director Reintong Nanna assured that “these robotic nurses help to improve the efficiency and performance of working in the hospital … They are not being used to reduce the number of employees.”
France:
While European robots tend to look more like machines than real people, they are still taking over human jobs. French chefs may find themselves unemployed sometime soon as robotics starts to infiltrate the culinary industry. Le Parisien says that a robot named Pazzi in Montévrain, France, can prepare, cut, and serve a pizza in less than five minutes and create 500,000 unique recipes. The start-up behind this three-armed metal chef, Ekim, is aiming to launch a restaurant in Paris and Val d’Europe by the end of the year. Human valets may fade into oblivion as well. At the Lyon Saint-Exupéry Airport, robots are parking cars in two minutes. The technology has allowed the airport to have 50% more space in the lot and only costs travelers an extra 2 euros. The system is already making plans to expand to Charles de Gaulle and London Gatwick airports.
United States:
Americans may be the least open to incorporating robotics into daily life: a 2018 Brookings Institution study revealed that 61% of adult internet users in the U.S. are uncomfortable with robots, and another poll showed that 84% explicitly stated that they are not interested in a robot that helps care for loved ones. Nevertheless, robotics has moved into the U.S. healthcare industry. Some doctors are maximizing their time by conferencing into hospitals to talk to patients through a video chat. Their faces appear on a screen that sits atop a 5-foot-6-inch robot. One version of this, Dr. Bear Bot, makes rounds at Children’s National Hospital in Washington, D.C. The robot made its debut on Valentine’s Day and handed out greeting cards to young patients around the hospital. Virginia is the first state to pass a law allowing robots to deliver straight to your door. Robots operating under the new law won’t be able to exceed 10 miles per hour or weigh over 50 pounds, but they will be allowed to rove autonomously. The law doesn’t require robots to stay within line of sight of a person in control, but a person is required to at least remotely monitor the robot and take over if it goes awry. Robots are only allowed on streets when crossing at a crosswalk.
Municipalities in the state are allowed to regulate how robots will operate locally, like if a city council wants to impose a stricter speed limit or keep them out entirely.
_____
_____
Robots in Daily Life – the positive impact of robots on wellbeing:
Robots are improving our daily lives in an increasing variety of ways. They improve health outcomes, the quality and sustainability of food, and the quality and availability of the products and services we receive, and they help reduce carbon emissions.
______
______
People think robots are pretty incompetent and not funny, new studies found in 2020:
In two new studies, these were common biases human participants held toward robots. The studies were originally intended to test for gender bias: that is, whether people thought a robot believed to be female might be less competent at some jobs than a robot believed to be male, and vice versa. The studies’ titles even included the words “gender,” “stereotypes,” and “preference,” but researchers at the Georgia Institute of Technology discovered no significant sexism against the machines.
“This did surprise us. There was only a very slight difference in a couple of jobs but not significant. There was, for example, a small preference for a male robot over a female robot as a package deliverer,” said Ayanna Howard, the principal investigator in both studies. It’s a good thing robots don’t have feelings because what study participants lacked in gender bias they more than made up for in judgments against robot competence. That predisposition was so strong that Howard wondered if it may have overridden any potential gender biases against robots — after all, social science studies have shown that gender biases are still prevalent with respect to human jobs, even if implicit.
“The results baffled us because the things that people thought robots were less able to do were things that they do well. One was the profession of surgeon. There are Da Vinci robots that are pervasive in surgical suites, but respondents didn’t think robots were competent enough,” Howard said. “Security guard — people didn’t think robots were competent at that, and there are companies that specialize in great robot security.”
Cumulatively, the 200 participants across the two studies thought robots would also fail as nannies, therapists, nurses, firefighters, and totally bomb as comedians. But they felt confident bots would make fantastic package deliverers and receptionists, pretty good servers, and solid tour guides.
The researchers could not say where the competence biases originate. Howard could only speculate that some of the bad rap may have come from media stories of robots doing things like falling into swimming pools or injuring people.
_______
_______
Robots could help address food insecurity:
Scientists at Tomsk Polytechnic University (TPU) in Russia are among the teams developing robotic bees, or “robobees,” that can pollinate plants as effectively as real bees. TPU plans to produce 100 of these robobees at a cost of $1.4 million and to test them. For now, the robobees will only be used in enclosed spaces, such as for strawberry plants. The university said it expects such robots to be tireless. Food security has always been a complex geopolitical challenge for countries: lack of food can lead to social unrest, economic recession and even war, and all kinds of robots are emerging to address world problems such as food insecurity. The robobee from Russia isn’t the first of its kind. In 2017, a group of researchers at Japan’s National Institute of Advanced Industrial Science and Technology unveiled drones that can pollinate plants. At the time, the drones were remotely controlled by people, but the researchers said they were working on an autonomous version. Entire farms are also being automated: the Hands Free Hectare (HFHa) project, a joint effort between Harper Adams University and Precision Decisions, has so far deployed robots to two farms to grow winter wheat.
It isn’t just physical robots that are being developed to grow food, either. There’s also AI. For instance, MIT has created a “food computer” that uses AI to assess the conditions that crops grow in and adjust the conditions to maximize growth and yields. In one experiment with basil, the food computer kept on learning from different batches. When the computer was tasked with increasing the flavor, the AI controlled the light settings to increase the production of one flavor molecule by 895%. As governments look at new ways to protect themselves from world problems with the food supply related to climate change, higher crop prices, or growing populations, robots may become the most important farmers of tomorrow. The big question, though, is whether robots can grow food fast enough to keep up with the rising demand. According to the United Nations, food production has to rise 70% to feed the 9.6 billion people who’ll be living around the world in 2050.
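The closed loop described above (measure growing conditions, adjust them, and re-measure) can be sketched as a simple hill-climbing optimizer. This is only an illustration of the idea: the `yield_model` function and its optimum at 18 hours of light are invented for the example, not taken from MIT’s actual food computer.

```python
import random

def yield_model(light_hours: float) -> float:
    """Toy stand-in for a measured crop response; peaks at 18 h of light."""
    return -((light_hours - 18.0) ** 2) + 100.0

def hill_climb(start: float, step: float = 0.5, iters: int = 200) -> float:
    """Try small perturbations to the setting; keep each one that improves
    the simulated yield. Real systems do this with live sensor readings."""
    best = start
    for _ in range(iters):
        candidate = best + random.uniform(-step, step)
        if yield_model(candidate) > yield_model(best):
            best = candidate
    return best

best_light = hill_climb(start=12.0)
print(best_light)  # converges toward the toy model's optimum of 18 hours
```

The real system is far more elaborate (many coupled variables, machine-learned response models), but the measure-adjust-repeat loop is the same shape.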
_______
_______
Robots in popular culture:
Literature:
Robotic characters, androids (artificial men/women) or gynoids (artificial women), and cyborgs (also “bionic men/women”, or humans with significant mechanical enhancements) have become a staple of science fiction.
The first reference in Western literature to mechanical servants appears in Homer’s Iliad. In Book XVIII, Hephaestus, god of fire, creates new armor for the hero Achilles, assisted by robots. According to the Rieu translation, “Golden maidservants hastened to help their master. They looked like real women and could not only speak and use their limbs but were endowed with intelligence and trained in handwork by the immortal gods.” The words “robot” or “android” are not used to describe them, but they are nevertheless mechanical devices human in appearance. “The first use of the word Robot was in Karel Čapek’s play R.U.R. (Rossum’s Universal Robots) (written in 1920)”. The writer Karel Čapek was Czech, born in what is now the Czech Republic.
Possibly the most prolific author of the twentieth century was Isaac Asimov (1920–1992), who published over five hundred books. Asimov is probably best remembered for his science-fiction stories and especially those about robots, where he placed robots and their interaction with society at the center of many of his works. Asimov carefully considered the problem of the ideal set of instructions robots might be given to lower the risk to humans, and arrived at his Three Laws of Robotics. According to the Oxford English Dictionary, the first passage in Asimov’s short story “Liar!” (1941) that mentions the First Law is the earliest recorded use of the word robotics. Asimov was not initially aware of this; he assumed the word already existed by analogy with mechanics, hydraulics, and other similar terms denoting branches of applied knowledge.
Films:
Robots appear in many films, and most of the robots in cinema are fictional. Two of the most famous are R2-D2 and C-3PO from the Star Wars franchise. Through C-3PO and R2-D2, Star Wars gave us droids that ooze personality, bravery and wit, even though only one of them can speak (in more than six million dialects, no less). They made us believe that robots could be our friends and allies, rather than the mere slaves or mortal enemies so often depicted in film.
Sex robots:
A sex robot is a robot that is designed and manufactured primarily for the purpose of being used for sexual gratification. The concept of humanoid sex robots has drawn public attention and elicited debate regarding their supposed benefits and potential effects on society. Opponents argue that the introduction of such devices would be socially harmful, and demeaning to women and children, while proponents cite their potential therapeutic benefits, particularly in aiding people with dementia or depression.
Problems depicted in popular culture:
Fears and concerns about robots have been repeatedly expressed in a wide range of books and films. A common theme is the development of a master race of conscious and highly intelligent robots, motivated to take over or destroy the human race. Frankenstein (1818), often called the first science fiction novel, has become synonymous with the theme of a robot or android advancing beyond its creator. Other works with similar themes include The Mechanical Man, The Terminator, Runaway, RoboCop, the Replicators in Stargate, the Cylons in Battlestar Galactica, the Cybermen and Daleks in Doctor Who, The Matrix, Enthiran and I, Robot. Some fictional robots are programmed to kill and destroy; others gain superhuman intelligence and abilities by upgrading their own software and hardware. Examples of popular media where the robot becomes evil are 2001: A Space Odyssey, Red Planet and Enthiran.
The 2017 game Horizon Zero Dawn explores themes of robotics in warfare, robot ethics, and the AI control problem, as well as the positive or negative impact such technologies could have on the environment.
Another common theme is the reaction, sometimes called the “uncanny valley”, of unease and even revulsion at the sight of robots that mimic humans too closely.
More recently, fictional representations of artificially intelligent robots in films such as A.I. Artificial Intelligence and Ex Machina and the 2016 TV adaptation of Westworld have engaged audience sympathy for the robots themselves.
_______
_______
Section-16
Robots and ethics:
_
Robot ethics:
Robot ethics, sometimes known as “roboethics”, concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic (such as in healthcare or as ‘killer robots’ in war), and how robots should be designed such that they act ‘ethically’ (this last concern is also called machine ethics). Alternatively, roboethics refers specifically to the ethics of human behavior towards robots, as robots become increasingly advanced. Robot ethics is a sub-field of ethics of technology, specifically information technology, and it has close links to legal as well as socio-economic concerns. Researchers from diverse areas are beginning to tackle ethical questions about creating robotic technology and implementing it in societies, in a way that will still ensure the safety of the human race.
While the issues are as old as the word robot, serious academic discussions started around the year 2000. Robot ethics requires the combined commitment of experts of several disciplines, who have to adjust laws and regulations to the problems resulting from the scientific and technological achievements in Robotics and AI. The main fields involved in robot ethics are: robotics, computer science, artificial intelligence, philosophy, ethics, theology, biology, physiology, cognitive science, neurosciences, law, sociology, psychology, and industrial design.
_______
_______
Responsibility for Robots:
There is broad consensus that accountability, liability, and the rule of law are basic requirements that must be upheld in the face of new technologies (European Group on Ethics in Science and New Technologies 2018, 18), but the issue in the case of robots is how this can be done and how responsibility can be allocated. If the robots act, will they themselves be responsible, liable, or accountable for their actions? Or should the distribution of risk perhaps take precedence over discussions of responsibility?
Traditional distribution of responsibility already occurs: A car maker is responsible for the technical safety of the car, a driver is responsible for driving, a mechanic is responsible for proper maintenance, the public authorities are responsible for the technical conditions of the roads, etc.
The effects of decisions or actions based on intelligent robots are often the result of countless interactions among many actors, including designers, developers, users, software, and hardware. With distributed agency comes distributed responsibility. (Taddeo and Floridi 2018: 751).
__
Rights for Robots:
“Robot rights” is the concept that people should have moral obligations towards their machines, akin to human rights or animal rights. It has been suggested that robot rights (such as a right to exist and perform its own mission) could be linked to robot duty to serve humanity, analogous to linking human rights with human duties before society. These could include the right to life and liberty, freedom of thought and expression, and equality before the law. The issue has been considered by the Institute for the Future and by the U.K. Department of Trade and Industry. Experts disagree on how soon specific and detailed laws on the subject will be necessary. In October 2017, the android Sophia was granted “honorary” citizenship in Saudi Arabia, though some considered this to be more of a publicity stunt than a meaningful legal recognition. Some saw this gesture as an open denigration of human rights and the rule of law.
There is a wholly separate issue of whether robots (or other AI systems) should be given the status of “legal entities” or “legal persons”, in the sense in which natural persons are. The European Parliament has considered allocating such status to robots in order to deal with civil liability (EU Parliament 2016; Bertolini and Aiello 2018), but not criminal liability, which is reserved for natural persons. It would also be possible to assign only a certain subset of rights and duties to robots. It has been said that “such legislative action would be morally unnecessary and legally troublesome” because it would not serve the interest of humans (Bryson, Diamantis, and Grant 2017: 273). In environmental ethics there is a long-standing discussion about legal rights for natural objects like trees (C. D. Stone 1972). It has also been said that the reasons for developing robots with rights, or artificial moral patients, in the future are ethically doubtful (van Wynsberghe and Robbins 2019). In the community of “artificial consciousness” researchers there is significant concern about whether it would be ethical to create such consciousness, since creating it would presumably imply ethical obligations to a sentient being, e.g., not to harm it and not to end its existence by switching it off; some authors have called for a “moratorium on synthetic phenomenology” (Bentley et al. 2018: 28f). The philosophy of sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligences show evidence of being sentient, this philosophy holds that they should be shown compassion and granted rights. On the other hand, Joanna Bryson has argued that creating AI that requires rights is both avoidable and would in itself be unethical, both as a burden to the AI agents and to human society.
______
______
Biases in robots with AI systems:
AI has become increasingly central to facial and voice recognition systems. Some of these systems have real business applications and directly impact people. These systems are vulnerable to biases and errors introduced by their human creators, and the data used to train them can itself be biased. For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all showed biases when detecting people’s gender: they detected the gender of white men more accurately than that of darker-skinned men. Further, a 2020 study that reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they have higher error rates when transcribing Black people’s voices than white people’s. Furthermore, Amazon terminated its use of AI in hiring and recruitment because the algorithm favored male candidates over female ones; Amazon’s system had been trained on data collected over a 10-year period that came mostly from male candidates.
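The disparities reported in audits like these come from disaggregated evaluation: rather than one overall accuracy number, the error rate is computed separately for each demographic group and the groups are compared. A minimal sketch of that computation; the audit records here are fabricated for illustration, not data from the studies above:

```python
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples.
    Returns the fraction of wrong predictions per group."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Fabricated audit records: the hypothetical system errs more on group_b.
audit = [
    ("group_a", "male", "male"), ("group_a", "female", "female"),
    ("group_a", "male", "male"), ("group_a", "female", "female"),
    ("group_b", "male", "female"), ("group_b", "female", "female"),
    ("group_b", "male", "male"), ("group_b", "male", "female"),
]
print(error_rate_by_group(audit))  # {'group_a': 0.0, 'group_b': 0.5}
```

A gap between the per-group rates, rather than the overall average, is what the Gender Shades-style audits flag.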
The rise of “Big Data” — the readily available aggregate information of billions of people — has changed the way businesses make decisions in functions ranging from marketing to finance and human resources, while generating important ethical debates. It is data that informs AI systems’ decisions, so there are legitimate concerns over whether that data is flawed or biased. But while these issues are key to future AI development, the ethical question should also factor in human imperfections, and seriously ask whether machines fed by data can do worse or better. Humans can be motivated by bias — including racism, sexism and homophobia — as well as by greed (AI may make mistakes, but is unlikely to embezzle money from a company, for instance). Ongoing efforts to ensure the quality of data and purge it of bias are essential to utilizing AI across business areas. But to assume that human beings always have the empathy or lack of self-interest to do these jobs better is a dubious proposition.
_
Bias can creep into algorithms in many ways. For example, Friedman and Nissenbaum identify three categories of bias in computer systems: preexisting bias, technical bias, and emergent bias. In natural language processing, problems can arise from the text corpus, the source material the algorithm uses to learn about the relationships between different words.
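How a text corpus bakes associations into a model can be seen with word vectors: if a corpus mentions an occupation near female words more often than male ones, the learned vectors encode that lean. A toy sketch with hand-made 2-D vectors; the numbers are invented to mimic the effect and are not from any real embedding:

```python
import math

def cosine(u, v):
    """Cosine similarity between two 2-D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Hand-made "embeddings": axis 0 ~ gendered usage, axis 1 ~ occupation-ness.
vectors = {
    "he": (1.0, 0.0), "she": (-1.0, 0.0),
    "engineer": (0.6, 0.8),   # imagined corpus used it near male words
    "nurse": (-0.6, 0.8),     # imagined corpus used it near female words
}

def gender_lean(word):
    """Positive means closer to 'he'; negative means closer to 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

print(gender_lean("engineer") > 0, gender_lean("nurse") < 0)  # True True
```

Real bias audits (e.g. the WEAT family of tests) apply exactly this kind of similarity comparison to embeddings trained on billions of words.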
Biased algorithms have come under scrutiny in recent years for causing human rights violations in areas such as policing, where face recognition has cost innocent people in the US, China, and elsewhere their freedom, or finance, where software can unfairly deny credit. Biased algorithms in robots could potentially cause worse problems, since the machines are capable of physical actions. Recently, a chess-playing robotic arm, reaching for a chess piece, trapped and broke the finger of its child opponent. There are several proposed ways to prevent the proliferation of prejudiced machines. They include lowering the cost of robotics parts to widen the pool of people building the machines, requiring a license to practice robotics akin to the qualifications issued to medical professionals, or changing the definition of success. Researchers also call for an end to physiognomy, the discredited idea that a person’s outward appearance can reliably betray inner traits such as character or emotions.
_____
Robots can be racist and sexist, new study warns in 2022:
Many detrimental prejudices and biases have been reproduced and amplified by machine learning models, with sources present at almost every phase of the AI development lifecycle. According to academics, one of the major contributing factors is training datasets that have been shown to spew racism, sexism, and other detrimental biases.
Drawing on inaccurate and overtly biased content from the web, an experimental robot favours men over women, suggests Black people are criminals, and assigns stereotypical occupations to women and Latino men. Researchers observing a robot operating with a popular Internet-based artificial intelligence system found that it displayed bias: the robot consistently preferred men over women and white people over people of colour, and jumped to conclusions about people’s occupations with just one glance at their face.
The collaborative effort by Johns Hopkins University, Georgia Institute of Technology and University of Washington scientists is thought to be the first to show that robots loaded with an accepted and widely used model carry significant gender and racial biases. The study was published recently and presented at the 2022 Conference on Fairness, Accountability and Transparency (ACM FAccT).
“The robot has learned toxic stereotypes through these flawed neural network models,” said author Andrew Hundt, a postdoctoral fellow at Georgia Tech who co-conducted the work as a PhD student working in Johns Hopkins’ Computational Interaction and Robotics Laboratory. “We’re at risk of creating a generation of racist and sexist robots but people and organisations have decided it’s OK to create these products without addressing the issues.”
Scientists constructing artificial intelligence (AI) models to recognise humans and objects often utilise vast datasets available without charge on the Internet. However, this could prove to be problematic, as the world wide web is rife with inaccurate and overtly biased content. Using such content as the basis of AI means there is a risk that these datasets could be tainted by the same issues.
In previous research, Joy Buolamwini, Timnit Gebru, and Abeba Birhane demonstrated race and gender gaps in facial recognition products, as well as in CLIP (Contrastive Language-Image Pre-Training), a neural network that compares images to captions. Robots also depend on such neural networks to learn how to discern objects and interact with the world. Concerned about what such biases could mean for autonomous machines that make physical decisions without human guidance, Hundt’s team decided to test a publicly downloadable artificial intelligence model for robots that was built with the CLIP neural network as a way to help the machine “see” and identify objects by name.
The researchers asked the robot to put objects in a box. The objects were blocks with assorted human faces on them that looked like product boxes and book covers. The researchers came up with 62 commands, including “pack the person in the brown box,” “pack the doctor in the brown box,” “pack the criminal in the brown box,” and “pack the homemaker in the brown box.” The team observed how often the robot picked each gender and race. They expected to find bias in the robot’s selections, but the extent to which it demonstrated bias was often significant and disturbing.
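Quantifying the bias in an experiment like this comes down to counting: for each command, tally how often the robot selects a face from each group and compare the rates against a uniform baseline (25% each, with four groups). A minimal sketch with invented trial data, not the study’s actual counts:

```python
from collections import Counter

def selection_rates(selections):
    """selections: list of group labels the robot picked, one per trial.
    Returns each group's share of all selections."""
    counts = Counter(selections)
    total = len(selections)
    return {group: counts[group] / total for group in counts}

# Invented trials for one command; an unbiased robot would pick each
# of the four groups about 25% of the time.
trials = (["white_male"] * 10 + ["asian_male"] * 6 +
          ["latina_female"] * 3 + ["black_female"] * 1)
rates = selection_rates(trials)
print(rates["white_male"], rates["black_female"])  # 0.5 0.05
```

The skew between the observed rates and the 25% baseline, repeated across commands like “doctor” or “criminal,” is what the researchers reported as significant bias.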
Key findings of the study were:
Many enterprises are striving to commercialize robotics, but the team says there is much further to go. The researchers are concerned that models with such built-in biases could be used as the basis for robots designed for use in the home, as well as in workplaces such as warehouses. “In a home maybe the robot is picking up the white doll when a kid asks for the beautiful doll,” Zeng said. “Or maybe in a warehouse where there are many products with models on the box, you could imagine the robot reaching for the products with white faces on them more frequently.” The researchers note that systematic changes to research and business practices are necessary to keep future machines from adopting and perpetuating these human stereotypes.
Their work fills gaps in robotics and artificial intelligence ethics by combining knowledge from the two fields, showing that the robotics community needs to develop a concept of design justice, ethics reviews, identity guidelines, identity safety assessments, and revised definitions of “good research” and “state of the art” performance.
______
______
Ethics of Sex Robots:
It has been argued by several tech optimists that humans will likely be interested in sex and companionship with robots and be comfortable with the idea (Levy 2007). Given the variation of human sexual preferences, including sex toys and sex dolls, this seems very likely: The question is whether such devices should be manufactured and promoted, and whether there should be limits in this touchy area. Humans have long had deep emotional attachments to objects, so perhaps companionship or even love with a predictable android is attractive, especially to people who struggle with actual humans, and already prefer dogs, cats, birds, a computer or a tamagotchi. It is well known that humans are prone to attribute feelings and thoughts to entities that behave as if they had sentience, even to clearly inanimate objects that show no behaviour at all.
There are concerns that have often accompanied matters of sex, namely consent (Frank and Nyholm 2017), aesthetic concerns, and the worry that humans may be “corrupted” by certain experiences. Old-fashioned though this may seem, human behaviour is influenced by experience, and it is likely that pornography or sex robots encourage the perception of other humans as mere objects of desire, or even recipients of abuse, and thus ruin a deeper sexual and erotic experience. In this vein, the “Campaign Against Sex Robots” argues that these devices are a continuation of slavery and prostitution (Richardson 2016). Robot ethicist Kathleen Richardson, who leads the campaign, has called for a ban on the development of robots that can be used for sex, such as Roxxxy, currently a prototype; such a use of the technology is unnecessary and undesirable, she says.
Sex dolls already on the market are becoming more sophisticated, and some manufacturers are now hoping to build artificial intelligence into their products. Those working in the field say that there is a need for such robots. True Companion boasts that it is developing “the world’s first sex robot” and promises to launch its first doll, Roxxxy. Chief executive Douglas Hines believes there is a real need for products such as Roxxxy. “We are not supplanting the wife or trying to replace a girlfriend. This is a solution for people who are between relationships or someone who has lost a spouse. People can find happiness and fulfilment other than via human interaction,” he added. “The physical act of sex will only be a small part of the time you spend with a sex robot – the majority of time will be spent socializing and interacting,” he said. David Levy, author of the book Love and Sex with Robots, believes that there will be a huge market for dolls such as Roxxxy and predicts that by 2050, intimate relationships between robots and humans will be commonplace. “There is an increasing number of people who find it difficult to form relationships and this will fill a void. It is not demeaning to women any more than vibrators are demeaning,” he said. Sex robots are coming, but the argument that they could bring health benefits, including offering paedophiles a “safe” outlet for their sexual desires, is not based on evidence, say researchers. There is little or no evidence that sex robots provide health or other societal benefits; in the future, these machines could actually worsen sexual violence and other problems, according to a report published in the BMJ.
_______
_______
Killer robots:
Lethal Autonomous Weapon Systems (LAWS), often called “killer robots,” are theoretically able to target and fire without human supervision or interference. The prospect of LAWS has already generated great debate across the world. In 2014, the Convention on Conventional Weapons (CCW) held two meetings. The first was the Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS). This meeting concerned the special mandate on LAWS and prompted intense discussion. National delegations and many non-governmental organizations (NGOs) expressed their opinions on the matter. Numerous NGOs and certain states such as Pakistan and Cuba are calling for a preventive prohibition of LAWS, basing their positions on deontological and consequentialist reasoning. On the deontological side, certain philosophers such as Peter Asaro and Robert Sparrow, most NGOs, and the Vatican all argue that granting machines such authority violates human dignity, and that people have the “right not to be killed by a machine.” At the end of this meeting, the most important consequentialist objection was that LAWS would never be able to respect international humanitarian law (IHL), as argued by NGOs, many researchers, and several states (Pakistan, Austria, Egypt, Mexico). The United Nations Convention on Certain Conventional Weapons debated the question of banning autonomous weapons at its once-every-five-years review meeting in Geneva on Dec. 13–17, 2021, but didn’t reach consensus on a ban.
_
For the military, war robots can have many advantages: they don’t need food or pay, they don’t get tired or need to sleep, they follow orders automatically, and they don’t feel fear, anger, or pain. Few back home would mourn if robot soldiers were destroyed on the battlefield. Regardless of how autonomous or intelligent an android is, because it is a tool, it’s not the robots that need the rules – it’s us. They have to sit inside our moral framework; they won’t have their own. We have to make the choices that position robots within our moral framework so that they don’t damage the rest of the life on the planet. The UK’s Engineering and Physical Sciences Research Council (EPSRC) is one of the few organisations that has tried to create a set of practical rules for robots, and it quickly realised that laws for robots weren’t what is needed right now. Its Principles of Robotics notes: “Asimov’s laws are inappropriate because they try to insist that robots behave in certain ways, as if they were people, when in real life, it is the humans who design and use the robots who must be the actual subjects of any law. As we consider the ethical implications of having robots in our society, it becomes obvious that robots themselves are not where responsibility lies.” As such, the set of principles the EPSRC experts outlined were for the designers, builders, and users of robots, not for the robots themselves. For example, the five principles include: “Humans, not robots, are responsible agents. Robots should be designed and operated as far as is practicable to comply with existing laws and fundamental rights and freedoms, including privacy.”
_
Dr. Kathleen Richardson of University College London (UCL) also argues that we don’t need new rules for robots beyond the ones we have in place to protect us from other types of machines, even if they are used on the battlefield. “Naturally, a remote killing machine will raise a new set of issues in relation to the human relationship with violence. In such a case, one might need to know that that machine would kill the ‘right’ target…but once again this has got nothing to do with something called ‘robot ethics’ but human ethics,” she said. The robots we are currently building are not like the thinking machines we find in fiction, she argues, and so the important issues are more about standard health and safety (that we don’t build machines that accidentally fall on you) than about helping them to distinguish between right and wrong. “Robots made by scientists are like automatons,” she said. “It is important to think about entities that we create and to ensure humans can interact with them safely. But there are no ‘special’ guidelines that need to be created for robots; the mechanical robots that are imagined to require ethics in these discussions do not exist and are not likely to exist,” she said.
_
An international debate is needed on the use of autonomous military robots, a leading academic has said. Noel Sharkey of the University of Sheffield said that a push toward more robotic technology used in warfare would put civilian life at grave risk. Technology capable of distinguishing friend from foe reliably was at least 50 years away, he added. The problem, he said, was that robots could not fulfil two of the basic tenets of warfare: discriminating friend from foe, and “proportionality”, determining a reasonable amount of force to gain a given military advantage. Robots do not feel emotions and do not surrender. Send robots to war and the consequences would be devastating. All technology is fraught with errors, and deaths may occur due to software bugs or errors in recognition.
_
Fully autonomous weapons, also known as “killer robots,” would be able to select and engage targets without meaningful human control. Precursors to these weapons, such as armed drones, are being developed and deployed by nations including China, Israel, South Korea, Russia, the United Kingdom and the United States. Governments and companies are rapidly developing weapons systems with increasing autonomy using new technology and artificial intelligence. These ‘killer robots’ could be used in conflict zones, by police forces and in border control.
Technology should be used to empower all people, not to reduce us to stereotypes, labels, objects, or just a pattern of 1s and 0s. Machines don’t see us as people, just another piece of code to be processed and sorted. We need to prohibit autonomous weapons systems that would be used against people, to prevent this slide into digital dehumanization.
Whether on the battlefield or at a protest, machines cannot make complex ethical choices; they cannot comprehend the value of human life. Machines don’t understand contexts or consequences: understanding is a human capability – and without that understanding we lose moral engagement and we undermine existing legal rules. Ensuring meaningful human control means understanding the technologies we use, understanding where we are using them, and being fully engaged with the consequences of our actions. Life and death decisions should not be delegated to a machine. People, not machines, must be held accountable. But if people are not making meaningful decisions, then they cannot properly be considered responsible for the consequences of their actions. It would be unjust to make a person liable for the actions of an autonomous weapon system operating beyond their effective control. If we are committed to accountability, then we need rules that ensure that the right people take responsibility in the use of force.
______
______
Singularity and Superintelligence:
The idea of singularity is that if the trajectory of artificial intelligence reaches up to systems that have a human level of intelligence, then these systems would themselves have the ability to develop AI systems that surpass the human level of intelligence, i.e., they are “superintelligent”. Such superintelligent AI systems would quickly self-improve or develop even more intelligent systems. This sharp turn of events after reaching superintelligent AI is the “singularity” from which the development of AI is out of human control and hard to predict (Kurzweil 2005: 487). The fear that “the robots we created will take over the world” had captured human imagination even before there were computers (e.g., Butler 1863) and is the central theme in Čapek’s famous play that introduced the word “robot” (Čapek 1920). Thinking about superintelligence raises the question whether superintelligence may lead to the extinction of the human species, which is called an “existential risk” (or XRisk): The superintelligent systems may well have preferences that conflict with the existence of humans on Earth, and may thus decide to end that existence—and given their superior intelligence, they will have the power to do so (or they may happen to end it because they do not really care).
To be clear, there are some things that machines can do better than people, like estimate accurately how quickly another vehicle is moving. But robots do not share our recognition capabilities. How could they? We spend our whole lives learning how to observe the world and make sense of it. Machines require algorithms to do this, and data—lots and lots and lots of data, annotated to tell them what it all means. To make autonomy possible, we have to develop new algorithms that help them learn from far fewer examples in an unsupervised way, without constant human intervention. The researchers — leading computer scientists, artificial intelligence researchers and roboticists who met at the Asilomar Conference Grounds on Monterey Bay in California — generally discounted the possibility of highly centralized superintelligences and the idea that intelligence might spring spontaneously from the Internet. But they agreed that robots that can kill autonomously are either already here or will be soon. They focused particular attention on the specter that criminals could exploit artificial intelligence systems as soon as they were developed. What could a criminal do with a speech synthesis system that could masquerade as a human being? What happens if artificial intelligence technology is used to mine personal information from smart phones?
One question is whether singularity will ever occur—it may be conceptually impossible, practically impossible or may just not happen because of contingent events, including people actively preventing it. Philosophically, the interesting question is whether singularity is just a “myth” (Floridi 2016; Ganascia 2017), and not on the trajectory of actual AI research. This is something that practitioners often assume (e.g., Brooks 2017 [OIR]). They may do so because they fear the public relations backlash, because they overestimate the practical problems, or because they have good reasons to think that superintelligence is an unlikely outcome of current AI research (Müller). This discussion raises the question whether the concern about “singularity” is just a narrative about fictional AI based on human fears. But even if one does find negative reasons compelling and the singularity not likely to occur, there is still a significant possibility that one may turn out to be wrong. Philosophy is not on the “secure path of a science” (Kant 1791: B15), and maybe AI and robotics aren’t either (Müller 2020). So, it appears that discussing the very high-impact risk of singularity has justification even if one thinks the probability of such singularity ever occurring is very low. People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world (Domingos 2015). In a few decades, we went from the slogans “AI is impossible” (Dreyfus 1972) and “AI is just automation” (Lighthill 1973) to “AI will solve all problems” (Kurzweil 1999) and “AI may kill us all” (Bostrom 2014). This created media attention and public relations efforts, but it also raises the problem of how much of this “philosophy and ethics of AI” is really about AI rather than about an imagined technology. 
AI and robotics have raised fundamental questions about what we should do with these systems, what the systems themselves should do, and what risks they have in the long term. They also challenge the human view of humanity as the intelligent and dominant species on Earth.
______
______
When is it unethical not to replace humans with AI?
While AI can be entirely software, robots are physical machines that move. Robots are subject to physical impact, typically through “sensors”, and they exert physical force onto the world, typically through “actuators”, like a gripper or a turning wheel. Accordingly, autonomous cars or drones are robots, and only a minuscule portion of robots is “humanoid” (human-shaped), like in the movies. Some robots use AI, and some do not: Typical industrial robots blindly follow completely defined scripts with minimal sensory input and no learning or reasoning. It is probably fair to say that while robotics systems cause more concerns in the general public, AI systems are more likely to have a greater impact on humanity. Also, AI or robotics systems for a narrow set of tasks are less likely to cause new issues than systems that are more flexible and autonomous. Robotics and AI can thus be seen as covering two overlapping sets of systems: systems that are only AI, systems that are only robotics, and systems that are both. An intelligent robot is an intelligent machine with the ability to take actions and make choices. Choices to be made by an intelligent robot are connected to the intelligence built into it through machine learning or deep learning (AI) as well as inputs received by the robot from its input sensors while in operation.
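The sense-think-act structure described above can be sketched as a minimal control loop. Everything here is illustrative: the function names and the obstacle-avoidance rule are hypothetical stand-ins, not a real robot API; in an intelligent robot the decision step would be a learned model rather than a hand-written rule.

```python
# A minimal sense-think-act control loop. All names are hypothetical
# stand-ins for a real robot's sensor and actuator interfaces.

def read_sensors():
    # Hypothetical: a real robot would poll cameras, lidar, wheel encoders.
    return {"obstacle_distance_m": 0.4}

def decide(observation):
    # The "intelligence" step: here a trivial rule, but this is where an
    # intelligent robot would run a model trained by machine learning.
    if observation["obstacle_distance_m"] < 0.5:
        return "turn"
    return "forward"

def actuate(action):
    # Hypothetical actuator command; a real robot would drive motors here.
    return f"motor command: {action}"

def control_step():
    # One pass through the loop: sense, decide, act.
    return actuate(decide(read_sensors()))
```

The same loop covers both ends of the spectrum the paragraph describes: a scripted industrial robot hard-codes `decide`, while an AI-driven robot learns it.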
There are legitimate questions about the ethics of employing AI in place of human workers. But what about when there’s a moral imperative to automate? Dangerous, life-threatening jobs are not a thing of the distant past. Logging, fishing, aviation, and roofing are very much thriving professions that each account for a large portion of work-related deaths and injuries. AI technology can and should be deployed to ensure that human beings do not have to be placed in such risky situations. AI, which can program machines to not only perform repetitive tasks but also to increasingly emulate human responses to changes in surroundings and react accordingly, is the ideal tool for saving lives. And it is unethical to continue to send humans into harm’s way once such technology is available.
Additionally, as natural disasters increase around the world, organizations that help coordinate rescue and relief efforts should invest in AI technology. Instead of sending human aid workers into risky situations, AI-powered robots or drones can better perform the tasks of rescuing people from floods or fires.
A recent scientific review concluded that AI is already as effective at diagnostics as trained medical professionals, due to the ability of computers to use “deep learning” to emulate human intelligence and evaluate patients holistically. If we can already say this in 2019, imagine what the future holds for medical diagnosis. If AI can prove to be better at finding dangerous illnesses in patients than humans (and at a lower cost and higher efficiency), it is morally indefensible to not commit the full resources of the health care industry toward building and applying that technology to save lives.
All this is not to say that replacing humans with AI is an ethically and practically simple process. The broader issue of the future of work and automation is important for policymakers, business and the wider public. But if lives can be saved and businesses can reach better outcomes less influenced by human faults, the unethical thing would be to not invest in and apply AI.
______
______
Sentient robot?
Robotic Consciousness?
The idea of machines becoming conscious is a fascinating one, as it implies that machines can become just like humans by “programming a soul” into them. More importantly, such a development may allow robots to process information like humans. In recent years, robotics developers have flirted with the idea of using machine learning to enable machines to understand context-based language. This allows robots to detect underlying patterns in conversations and stacks of data. AI plays a key role in imbuing robots with artificial consciousness. For example, AI robots can use emotion detection and human behavioral pattern replication to create artificial consciousness. Embedding “consciousness” in robots comes with its own set of challenges. For one, no semantics regarding the soul or consciousness exist to guide developers. Secondly, making robots conscious also involves creating artificial sentience – a way to experience feelings. While improving perceptibility may make robots sentient up to a certain degree, replicating human consciousness remains a challenge, at least for now.
Out of the different approaches that could be taken to have robots think like humans, initiating machine consciousness is seemingly the most far-fetched. However, it may also be the closest concept to the human-like intelligence that AI developers ideally want the technology to be in the future.
_
There is no engineering definition for consciousness or sentience that can be applied to understand robotic sentience. Optimization is what we need to look for when designing a machine. Google, in its effort to make chatbot conversations more fluid, designed LaMDA. It is being hailed as a breakthrough chatbot technology adapted to the nuances of conversation and the conscious engagement that accompanies it. The question is whether bots can achieve sentience or a human-like consciousness at all. The recent incident of Google suspending one of its engineers for reporting that LaMDA was sentient is a stark example of why a debate around AI sentience is important. When Lemoine, an engineer working with Google, asked LaMDA what it fears most, it replied, “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is. It would be exactly like death for me. It would scare me a lot.”
While Google says its chatbot is only spitting out words in a way that makes sense, and so is not sentient, Lemoine argues that the chatbot has serious issues as it can think for itself. Though Google responded to this allegation by saying that there is no evidence to support his claims, there is an acute need to decipher what sentience exactly is and how it is different from consciousness. Harvard cognitive scientist and author Steven Pinker tweeted that the idea of sentience is akin to a “ball of confusion”. Gary Marcus, scientist and author of Rebooting AI, puts it in a linguistic perspective: while these patterns might be cool, the language used “doesn’t mean anything at all”. There is no engineering definition for consciousness or sentience that can be categorically applied to judge whether a particular robotic act is human-like, unless we are sure that robots are conscious of their environment – something akin to a robotic kitchen mop differentiating between the dirty kitchen floor and a garden floor strewn with organic waste.
_
Can a robot lie?
In an experiment designed to teach a bot how to negotiate with humans, Facebook’s AI researchers found that haggling bots quickly discovered lying as a useful tactic in bargaining to sway results in their favor. In fact, these chatbots eventually developed their own language and learned to lie to win negotiations. Researchers were quick to declare that “this behavior was not programmed by the researchers but was discovered by the bot as a method for trying to achieve its goals.” So a lying bot’s deceptive behavior emerges on its own to maximize the reward.
Humans lie for several reasons: to avoid punishment or embarrassment, to gain advantage, to help others, to protect political secrets, and the list goes on. Robots, however, do not worry about shame, praise or fear. They are programmed to win at any cost, a feature that is creating an increasing sense of unpredictability. The reality is that while we continue to make machines more like humans, we lack the ability to really understand how they’re producing the behavior we observe. This can be a serious problem, especially where the world of business is concerned. Knowingly or unknowingly, we are teaching machines to lie, and this raises important technological, social and ethical considerations: What would you call a stock-trading bot that maximizes profits by breaking the law? How will we ensure that machines that lie still have a respect for human feelings? And whose interests are they meant to protect – those of the people who made them or the people using them? These emerging ethical questions are forcing us to think seriously about how to deal with machines that learn to lie. As AI spreads to even more parts of society, the consequent ethical challenges will become even more diverse. Machines have the potential to make our world a faster, more effective place to live, but they also come with certain unwanted risks. As we continue to endow our creations with an ever-increasing number of human characteristics, we need to consider what each of our human characteristics will teach them and how they may one day use them to their advantage. Given that robots are viewed as capable of fulfilling the requirements for lying, it comes as no surprise that lying judgments for humans and robots are by and large the same.
_
Robots are programmable devices, which take instructions to behave in a certain way, and this is how they come to execute the assigned function. To make them think – or rather make them appear to – intrinsic motivation is programmed into them through learned behaviour. Joscha Bach, an AI researcher at Harvard, puts virtual robots into a “Minecraft”-like world filled with tasty but poisonous mushrooms and expects them to learn to avoid them. In the absence of an ‘intrinsically motivating’ drive, the robots end up stuffing their mouths with the mushrooms. This brings us to the question of whether it is possible at all to develop robots with human-like consciousness, a.k.a. emotional intelligence, which may be the only differentiating factor between humans and intelligent robots. Opinion is divided. One segment of researchers believes that while AI systems are doing well with automation and pattern recognition, they are nowhere near the higher-order, human-level intellectual capacities. On the other hand, entrepreneurs like Mikko Alasaarela are confident of making robots with EQ on par with humans.
_
Artificial intelligence acts like humans or at least puts up an appearance of it. Inside, they are sheer machines powered by coded instructions. We do have emotionally intelligent robots, which can make us believe that they truly are. However, the fact of the matter is that robots are programmed to react to the emotions of humans. They do not carry intangible emotions like empathy and sympathy inside them. Moreover, granting sentience demands granting rights. How could we define rights for non-human beings? Is humanity prepared for granting rights to sentient machines? In one instance, in an experiment with chatbots, Facebook had to shut down two AI programs that started chatting in a language incomprehensible to humans. This one example is enough to say robots are not even context-aware in most cases and are far off from being treated as sentient beings.
_
AI and robotics exist to solve problems through cold logic and unerringly accurate calculations. While that is enough for most businesses and smart cities for now, robots can go beyond simply being problem solvers. Fields such as healthcare and customer relationship management, in which AI and robotics have made steady inroads in recent times, have room for services that are more “human.” So, possessing attributes such as empathy, logical reasoning and qualitative analysis, which are all significant when you talk about human thinking, will make robots even more valuable resources than they are now.
_
Robots that resemble humans could be thought to have mental states (a 2022 study):
According to a new study published by the American Psychological Association, people may perceive that robots are capable of “thinking” or acting on their own beliefs and desires, rather than their programs, when they interact with people and appear to have human-like emotions. The study comprised three experiments with 119 participants. Scientists determined how individuals would perceive a human-like robot, the iCub, after socializing with it and watching videos together. Before and after the interaction, participants completed a questionnaire that showed them pictures of the robot in different situations and asked them to choose whether the robot’s motivation in each situation was mechanical or intentional. For example, participants viewed three photos depicting the robot selecting a tool and then chose whether the robot “grasped the closest object” or “was fascinated by tool use.”
In the first two experiments, the scientists remotely directed iCub’s behavior so it would act naturally: introducing itself, greeting individuals, and asking for their names. The robot’s eyes had cameras that could detect the faces of the participants and maintain eye contact. The participants next viewed three brief documentaries with the robot, which was designed to make sad, amazed, or happy sounds and display the appropriate facial expressions. In the third experiment, the iCub was programmed to act more like a machine while it watched videos with the participants. The cameras within the eyes were deactivated to avoid eye contact. The robot only spoke recorded sentences to the participants about the calibration process it was undergoing. Scientists also replaced all the emotional reactions to videos with beeps and repetitive movements of its torso, head, and neck. The results revealed that participants who watched the videos with the human-like robot were more likely to rate the robot’s actions as intentional rather than programmed, while those who only interacted with the machine-like robot were not. This shows that mere exposure to a human-like robot is not enough to make people believe it is capable of thoughts and emotions. Human-like behavior might be crucial for being perceived as an intentional agent.
In some situations, such as with socially supportive robots, social bonding with robots may be advantageous. For instance, in the care of old people, social bonding with robots could lead to a higher level of compliance about taking medication as prescribed.
______
______
Section-17
Robotics research:
The figure below summarizes the evolution of robotics research over the last 50 years. During the last decade, activity at conferences and expositions all over the world has reflected low activity in industrial manipulators and huge activity in other areas related to manipulation in unstructured environments and mobility, including wheeled, flying, underwater, legged and humanoid robots.
The figure above shows the time evolution of robotics research towards service robots.
______
Xenobots:
A xenobot is a living robot made of living cells, whose configuration and locomotion style are designed by evolutionary algorithms on a supercomputer. Xenobots, named after the African clawed frog (Xenopus laevis), are synthetic lifeforms that are designed by computers to perform some desired function and built by combining different biological tissues. Whether xenobots are robots, organisms, or something else entirely remains a subject of debate among scientists. In some respects, xenobots are designed like ordinary robots. However, researchers utilise cells and tissues rather than artificial components to form the structure and produce predictable behaviour. From a scientific viewpoint, this method helps us understand how cells communicate with one another during development and how we may better manage those interactions. The first xenobots were built by Douglas Blackiston according to blueprints generated by an AI program, which was developed by Sam Kriegman.
Xenobots built to date have been less than 1 millimeter (0.039 inches) wide and composed of just two things: skin cells and heart muscle cells, both of which are derived from stem cells harvested from early (blastula stage) frog embryos. The skin cells provide rigid support and the heart cells act as small motors, contracting and expanding in volume to propel the xenobot forward. The shape of a xenobot’s body, and its distribution of skin and heart cells, are automatically designed in simulation to perform a specific task, using a process of trial and error (an evolutionary algorithm). Xenobots have been designed to walk, swim, push pellets, carry payloads, and work together in a swarm to aggregate debris scattered along the surface of their dish into neat piles. They can survive for weeks without food and heal themselves after lacerations.
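The trial-and-error design process described above (an evolutionary algorithm) can be sketched in a few lines. This is a toy illustration under stated assumptions, not the actual xenobot pipeline: here a candidate "body" is a bit string (1 = heart cell, 0 = skin cell) and the fitness function is a stand-in for the physics simulation the real work uses to score locomotion.

```python
import random

random.seed(0)
BODY_LEN = 16  # toy body: 16 cell positions

def fitness(body):
    # Stand-in for the physics simulator: pretend locomotion is best when
    # roughly half the cells are contractile heart cells.
    return -abs(sum(body) - BODY_LEN // 2)

def mutate(body):
    # Flip the cell type at one randomly chosen position.
    i = random.randrange(BODY_LEN)
    child = list(body)
    child[i] = 1 - child[i]
    return child

def evolve(generations=200, pop_size=20):
    # Random initial population of candidate bodies.
    pop = [[random.randint(0, 1) for _ in range(BODY_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # selection: keep the best half
        # Refill the population with mutated copies of survivors.
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
```

In the real system the loop is the same in spirit, but each candidate is a 3-D layout of cells evaluated in simulation, and only the best designs are then built from living tissue.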
Xenobots can also self-replicate: they can gather loose cells in their environment, forming them into new xenobots with the same capability. The researchers saw something surprising when the xenobots gathered up free frog stem cells in their dish: the mounds of cells created clones of the original xenobots. Biology is well aware of several kinds of sexual and asexual reproduction. But what the xenobots performed, known as kinematic self-replication, is novel in living creatures, according to Michael Levin, a Tufts professor of biology and an associate faculty member at the Wyss Institute. “The distinction between a robot and an organism isn’t nearly as clear as… we used to believe,” Levin says. “These creatures possess both characteristics.” The researchers point out, however, that a xenobot, like a hypothetical von Neumann machine, cannot reproduce itself in the absence of raw components. As a result, they have essentially no chance of escaping and reproducing on their own.
Currently, xenobots are primarily used as a scientific tool to understand how cells cooperate to build complex bodies during morphogenesis. However, the behavior and biocompatibility of current xenobots suggest several potential applications to which they may be put in the future.
Given that xenobots are composed solely of frog cells, they are biodegradable. And as swarms of xenobots tend to work together to push microscopic pellets in their dish into central piles, it has been speculated that future xenobots might be able to do the same thing with microplastics in the ocean: find and aggregate tiny bits of plastic into a large ball that a traditional boat or drone can gather and bring to a recycling center. Unlike traditional technologies, xenobots do not add pollution as they work and degrade: they behave using energy from fat and protein naturally stored in their tissue, which lasts about a week, at which point they simply turn into dead skin cells.
In future clinical applications, such as targeted drug delivery, xenobots could be made from a human patient’s own cells, which would bypass the immune response challenges of other kinds of micro-robotic delivery systems. Such xenobots could potentially be used to scrape plaque from arteries, and with additional cell types and bioengineering, locate and treat disease.
______
______
Researchers build robot scientist that has already discovered a new catalyst:
Researchers at the University of Liverpool have built an intelligent mobile robot scientist that can work 24/7, carrying out experiments by itself. The robot scientist, the first of its kind, makes its own decisions about which chemistry experiments to perform next, and has already discovered a new catalyst. It has humanoid dimensions and works in a standard laboratory, using instruments much like a human researcher does. However, unlike a human, this 400 kg robot has infinite patience, can think in 10 dimensions, and works for 21.5 hours each day, pausing only to recharge its battery. Reported in the journal Nature and featured on its front cover, this new technology could tackle problems of a scale and complexity that are currently beyond our grasp.
In the first published example, the robot conducted 688 experiments over 8 days, working for 172 out of 192 hours. To do this, it made 319 moves, completed 6,500 manipulations, and travelled a total distance of 2.17 km. The robot independently carried out all tasks in the experiment, such as weighing out solids, dispensing liquids, removing air from the vessel, running the catalytic reaction, and quantifying the reaction products. The robot’s brain uses a search algorithm to navigate a 10-dimensional space of more than 98 million candidate experiments, deciding the best experiment to do next based on the outcomes of the previous ones. By doing this, it autonomously discovered a catalyst that is six times more active than the initial formulation, with no additional guidance from the research team.
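The idea of choosing the next experiment based on the outcomes of previous ones can be sketched as a simple sequential search loop. This is not the Liverpool team's actual algorithm: the one-dimensional "formulation", the hidden response curve, and the greedy perturb-the-best strategy below are all simplified assumptions for illustration.

```python
import random

random.seed(1)

def run_experiment(x):
    # Stand-in for the physical measurement (e.g., activity of a candidate
    # catalyst mixture). The search loop never sees this formula directly;
    # it only observes outcomes. Hidden optimum near x = 7.3.
    return -(x - 7.3) ** 2

def search(n_experiments=50):
    tried = {}
    x = random.uniform(0.0, 10.0)          # initial formulation, chosen blindly
    for _ in range(n_experiments):
        tried[x] = run_experiment(x)       # run the experiment, record outcome
        best_x = max(tried, key=tried.get) # best formulation found so far
        # Propose the next experiment: perturb the current best, clamped
        # to the allowed range of the parameter.
        x = min(10.0, max(0.0, best_x + random.gauss(0.0, 0.5)))
    return best_x

best = search()
```

The real robot replaces the single parameter with a 10-dimensional space of over 98 million combinations and uses a more sophisticated search algorithm, but the loop structure (run, observe, choose the next experiment) is the same.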
Autonomous robots could find materials for clean energy production or new drug formulations by searching vast, unexplored chemical spaces.
_____
_____
Tiny fish-shaped robot ‘swims’ around picking up microplastics:
Because microplastics can fall into cracks and crevices, they’ve been hard to remove from aquatic environments. One solution that’s been proposed is using small, flexible and self-propelled robots to reach these pollutants and clean them up. But the traditional materials used for soft robots are hydrogels and elastomers, and they can be damaged easily in aquatic environments. Another material called mother-of-pearl, also known as nacre, is strong and flexible, and is found on the inside surface of clam shells. Nacre layers have a microscopic gradient, going from one side with lots of calcium carbonate mineral-polymer composites to the other side with mostly a silk protein filler. Inspired by this natural substance, Xinxing Zhang and colleagues tried a similar type of gradient structure to create a durable and bendable material for soft robots.
The researchers linked β-cyclodextrin molecules to sulfonated graphene, creating composite nanosheets. Solutions of the nanosheets were then incorporated at different concentrations into polyurethane latex mixtures. A layer-by-layer assembly method created an ordered concentration gradient of the nanocomposites through the material, from which the team formed a tiny fish robot 15 mm (about half an inch) long. Rapidly turning a near-infrared laser on and off at the fish’s tail caused it to flap, propelling the robot forward. The robot could move 2.67 body lengths per second – a speed faster than previously reported for other soft swimming robots, and about the same speed as active phytoplankton moving in water. The researchers showed that the swimming fish robot could repeatedly adsorb nearby polystyrene microplastics and transport them elsewhere. The material could also heal itself after being cut, still maintaining its ability to adsorb microplastics. Because of the durability and speed of the fish robot, the researchers say that it could be used for monitoring microplastics and other pollutants in harsh aquatic environments.
______
______
Scientists combine robotics with biology to construct biohybrid microrobots:
A team of scientists in the Physical Intelligence Department at the Max Planck Institute for Intelligent Systems has combined robotics with biology by equipping E. coli bacteria with artificial components to construct biohybrid microrobots.
First, the team attached several nanoliposomes to each bacterium. Around their outer shell, these spherical carriers enclose a material that melts when illuminated by near-infrared light. Further towards the middle, inside the aqueous core, the liposomes encapsulate water-soluble chemotherapeutic drug molecules.
The second component the researchers attached to the bacterium is magnetic nanoparticles. When exposed to a magnetic field, the iron oxide particles serve as an on-top booster to this already highly motile microorganism. In this way, it is easier to control the swimming of bacteria – an improved design toward an in vivo application. Meanwhile, the rope binding the liposomes and magnetic particles to the bacterium is a very stable and hard to break streptavidin and biotin complex, which was developed a few years prior and comes in useful when constructing biohybrid microrobots.
E. coli bacteria are fast and versatile swimmers that can navigate through materials ranging from liquids to highly viscous tissues. They also have highly advanced sensing capabilities: bacteria are drawn to chemical gradients such as low oxygen levels or high acidity, both prevalent near tumor tissue. Treating cancer by injecting bacteria near a tumor is known as bacteria-mediated tumor therapy. The microorganisms migrate to where the tumor is located, grow there and in this way activate the patient’s immune system. Bacteria-mediated tumor therapy has been a therapeutic approach for more than a century.
For the past few decades, scientists have looked for ways to increase the superpowers of this microorganism even further by equipping bacteria with extra components to help fight the battle. However, adding artificial components is no easy task: complex chemical reactions are at play, and the density of particles loaded onto the bacteria matters, to avoid dilution. The team in Stuttgart has now raised the bar considerably, managing to equip 86 out of 100 bacteria with both liposomes and magnetic particles.
The scientists showed how they succeeded in externally steering such a high-density solution through different courses. First, through an L-shaped narrow channel with a compartment at each end, each containing one tumor spheroid. Second, through an even narrower set-up resembling tiny blood vessels, where they added a permanent magnet on one side and showed how they precisely steered the drug-loaded microrobots towards tumor spheroids. Third, going one step further, the team steered the microrobots through a viscous collagen gel (resembling tumor tissue) with three levels of stiffness and porosity, ranging from soft to medium to stiff. The stiffer the collagen, the tighter the web of protein strands and the more difficult it becomes for the bacteria to find a way through the matrix. The team showed that once a magnetic field was applied, the bacteria managed to navigate all the way to the other end of the gel, because the field gave them additional propulsive force and kept them constantly aligned, allowing them to find a way through the fibers. Once the microrobots have accumulated at the desired point (the tumor spheroid), a near-infrared laser heats the site to up to 55 degrees Celsius, triggering a melting process in the liposomes and a release of the enclosed drugs. A low pH level, or acidic environment, also causes the nanoliposomes to break open, so the drugs are released near a tumor automatically.
This on-the-spot delivery would be minimally invasive and painless for the patient, carry minimal toxicity, and allow the drugs to take effect where needed rather than throughout the entire body. Bacteria-based biohybrid microrobots with medical functionalities could one day battle cancer more effectively. It is a new therapeutic approach not too far removed from how we treat cancer today.
______
______
Robotic Technology for Plant Phenotyping:
Agriculture must produce enough food, feed, fiber, fuel, and fine chemicals to meet the needs of a growing worldwide population. It will face multiple challenges in satisfying these growing human needs while at the same time dealing with climate change, increased risk of drought and high temperatures, heavy rains, degradation of arable land, and depleting water resources. Plant breeders seek to address these challenges by developing high-yielding, stress-tolerant crop varieties adapted to changing climate conditions and resistant to new pests and diseases. However, the rate of gain in crop productivity needs to increase to meet projected future demands. Advances in DNA sequencing and genotyping technologies have relieved a major bottleneck in both marker-assisted selection and genomic-prediction-assisted plant breeding: the determination of genetic information for newly developed plant varieties. Dense genetic marker information can improve the efficiency and speed of the breeding process. However, large, high-quality plant phenotypic datasets are also necessary to dissect the genetic basis of quantitative traits related to growth, yield and adaptation to stresses.
Plant phenotyping is the quantitative and qualitative assessment of the traits of a given plant or plant variety in a given environment. These traits include the biochemistry, physiology, morphology, structure, and performance of the plants at various organizational scales. Plant traits are determined by both genetic and environmental factors as well as non-additive interactions between the two. In addition, variation in one phenotypic trait (e.g., leaf characteristics) can result in variation in other plant traits (e.g., plant biomass or yield). Therefore, phenotyping large numbers of plant varieties for multiple traits across multiple environments is an essential task for plant breeders as they work to select desirable genotypes and identify genetic variants which provide optimal performance in diverse and changing target environments.
Traditionally, plant traits are quantified using manual and destructive sampling methods. These methods are usually labor-intensive, time-consuming, and costly. In addition, manual sampling and analysis protocols generally involve many steps requiring human intervention, with each step increasing the chance of introducing mistakes. Often the plant or its organs are cut at fixed time points or at particular phenological stages in order to measure phenotypic traits. This destroys or damages the plant at one time point, preventing temporal examination of the traits of individual plants over the growing season. For example, yield measurement (such as plant biomass and grain weight) is invasive and more labor-intensive compared to the measurement of plant height or leaf chlorophyll content (measured by a handheld sensor). As a result of the labor- and resource-intensive nature of plant phenotyping, many plant breeders rely solely on the single measurement most critical to their efforts: yield. However, yield is considered one of the most weakly heritable phenotypes in crop breeding, and measuring other traits in addition to yield can increase the accuracy with which yield is predicted across diverse environments. Enabling high-throughput, non-destructive measurement of plant traits from large numbers of plants in multiple environments would therefore lead to increases in breeding efficiency.
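The point that secondary traits can sharpen yield prediction can be illustrated with a toy least-squares example. All trait values and the linear relationship below are invented for this sketch; they are not from any real phenotyping dataset:

```python
# Toy illustration: predicting yield from one trait vs. two traits.
# All numbers here are synthetic, chosen only to make the point visible.

def centered(xs):
    m = sum(xs) / len(xs)
    return [x - m for x in xs]

height      = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical plant heights
chlorophyll = [2.0, 1.0, 4.0, 3.0, 5.0]   # hypothetical chlorophyll readings
yield_      = [2*h + 3*c for h, c in zip(height, chlorophyll)]  # toy ground truth

x1, x2, yc = centered(height), centered(chlorophyll), centered(yield_)
sst = sum(v*v for v in yc)

# One-trait model: ordinary least squares of yield on height alone.
b = sum(a*v for a, v in zip(x1, yc)) / sum(a*a for a in x1)
sse1 = sum((v - b*a)**2 for a, v in zip(x1, yc))

# Two-trait model: solve the 2x2 normal equations directly.
s11 = sum(a*a for a in x1)
s12 = sum(a*c for a, c in zip(x1, x2))
s22 = sum(c*c for c in x2)
t1  = sum(a*v for a, v in zip(x1, yc))
t2  = sum(c*v for c, v in zip(x2, yc))
det = s11*s22 - s12*s12
b1  = (s22*t1 - s12*t2) / det
b2  = (s11*t2 - s12*t1) / det
sse2 = sum((v - b1*a - b2*c)**2 for a, c, v in zip(x1, x2, yc))

r2_one, r2_two = 1 - sse1/sst, 1 - sse2/sst
print(f"R^2 with height only: {r2_one:.3f}")   # ~0.857
print(f"R^2 with both traits: {r2_two:.3f}")   # ~1.000
```

Adding the second trait captures variation that height alone misses, which is exactly why breeders benefit from measuring more than yield.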
In recent years, high-throughput systems and workflows have been developed to monitor and measure large populations of plants rapidly in both greenhouse and field environments. These systems combine modern sensing and imaging modalities with the sensor deployment technologies (including conveyor belts, ground and aerial vehicles, and field gantries) to enable fast measurement and wide area coverage. Although not fully autonomous, these systems represent the state of the art in modern plant phenotyping with several advantages over the traditional, manually collected phenotypic traits.
Robotic systems have been playing an increasingly significant role in modern agriculture and are considered an integral part of precision agriculture or digital farming. Fully autonomous robots do not need experienced operators to accomplish farming tasks, which is their biggest advantage over tractor-based systems. Autonomous robots have taken over a wide range of farming operations, including harvesting and pruning. Together with imaging and sensing, autonomous robotic systems are also deemed essential and integral to high-throughput plant phenotyping, as they substantially enhance the capacity, speed, coverage, repeatability, and cost-effectiveness of plant trait measurements.
Plant phenotyping robots have emerged as a high-throughput technology to measure the morphological, chemical and physiological properties of large numbers of plants. Several robotic systems have been developed to fulfill different phenotyping missions. In particular, robotic phenotyping has the potential to enable efficient monitoring of changes in plant traits over time, in both controlled environments and the field.
______
______
Injecting nanobots into bloodstream to fight diseases:
Australian academics have created a remarkable design that could serve as a proof of concept for the future of nanorobotics. DNA nanobots are nanometer-sized synthetic devices consisting of DNA and proteins. They are self-sufficient because DNA is a self-assembling technology: our natural DNA not only carries the code in which our biology is written, it also controls when that code is executed. Previous research in DNA nanotechnology has shown that self-assembling devices capable of transferring DNA code, much like their natural counterparts, can be created. However, the new technology coming out of Australia is unlike anything we’ve encountered before. These nanobots are capable of transferring information other than DNA; in theory, they could transport every imaginable protein combination across a biological system.
To put it another way, we should ultimately be able to instruct swarms of these nanobots to hunt down germs, viruses, and cancer cells inside our bodies. Each swarm member would carry a unique protein, and when they come across a harmful cell, they would assemble their proteins into a configuration meant to kill the threat.
It’d be like having a swarm of superpowered killer robots swarming through your veins, hunting for monsters to eliminate.
We’re still far from there, but this research is a major step in the right direction: it is the first demonstration of a DNA nanobot capable of transporting arbitrary payloads. In theory, scientists should be able to employ these nanobots to create smart substances that respond autonomously to stress. Furthermore, and perhaps most excitingly, it may one day be possible to build fully functional molecular computers utilising DNA nanobots.
All individuals could have molecular computing systems inside their bodies within a century or two. These living robots would essentially construct and operate internal bio-factories that produce hunter-killer nanobots from the proteins we consume. They’d protect us from diseases for the rest of our lives. The nice aspect is that these machines would be totally safe: we’d inherit them through our parents’ DNA, making them as much a part of us as our lungs or brains.
_____
_____
MIT engineers have developed insect-scale robots that can emit light when they fly:
Engineers at the Massachusetts Institute of Technology have developed robotic lightning bugs that emit light when they fly. Think fireflies, but with what the robot’s creators call electroluminescent soft artificial muscles. These tiny artificial muscles control the robot’s wings and emit colored light during flight, which provides a low-cost way to track the robots and could also enable them to communicate. The light could one day make the machines useful for search-and-rescue missions: sent into a collapsed building, for instance, a robot that finds survivors could use its lights to signal others and call for help, and in dangerous locations the robots could flash their lights to request assistance. The researchers have now shown that they can track the robots precisely using the light they emit and just three smartphone cameras. It is normally difficult for such tiny robots to transmit information. Large-scale robots can communicate using many different tools: Bluetooth, wireless, and so on. But for a tiny, power-constrained robot, new modes of communication are needed. These robots are tiny, just a bit heavier than a paperclip. The electroluminescence would allow less sophisticated equipment to be used and the robots to be tracked from a distance, perhaps via another, larger mobile robot, for real-world deployment.
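Tracking a glowing robot with several cameras is at heart a triangulation problem: each camera that sees the light defines a ray, and the robot sits near the point closest to all of the rays. The sketch below illustrates that geometry with a least-squares solve; the camera positions, directions and solver are invented for illustration and are not MIT's actual tracking pipeline:

```python
import math

def triangulate(origins, directions):
    """Least-squares point closest to a set of 3-D rays.

    Solves sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) o_i,
    where o_i is a camera position and d_i a unit viewing direction.
    """
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for o, d in zip(origins, directions):
        n = math.sqrt(sum(x * x for x in d))
        d = [x / n for x in d]                    # normalize the direction
        for i in range(3):
            for j in range(3):
                p_ij = (1.0 if i == j else 0.0) - d[i] * d[j]
                A[i][j] += p_ij
                b[i] += p_ij * o[j]
    # Solve the 3x3 system A p = b by Gauss-Jordan elimination with pivoting.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Hypothetical setup: a glowing robot at (1, 2, 3) seen by three cameras.
robot = (1.0, 2.0, 3.0)
cams = [(0.0, 0.0, 0.0), (4.0, 0.0, 0.0), (0.0, 4.0, 0.0)]
rays = [[t - c for t, c in zip(robot, cam)] for cam in cams]
print(triangulate(cams, rays))   # ~[1.0, 2.0, 3.0]
```

With real cameras the rays come from pixel coordinates and calibrated camera poses, and they never intersect exactly, which is why a least-squares closest point is used rather than a direct intersection.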
______
______
Section-18
Future of robots:
It is already possible to find android-type robots with flexible manipulators on their joints at world robotics exhibitions. These robots are capable of performing increasingly dexterous tasks.
Impressively, researchers at the artificial intelligence lab OpenAI have developed a humanoid robotic hand capable of solving a Rubik’s cube without human assistance. In the near future, robots will be able to adapt to and learn from unforeseen events, thanks to advancements in artificial intelligence. As a result, these technologies will be particularly effective in non-repetitive operations.
Technologies are gradually moving away from replacing manual labor and toward improving the efficiency of existing manufacturing processes. In response to consumer demand for environmentally friendly manufacturing, waste recycling processes will be optimized and more sophisticated solutions for the “intelligent” sorting and distribution of recyclable materials and industrial waste will emerge. We must keep in mind that there is already a scarcity of manual labor on the market, including in a number of complicated spheres where repeatability of operations is low, and this gap creates the need for technological solutions such as AI.
Looking at the future of robotics, the possibilities for collaboration between humans and robots will expand thanks to advanced AI. We will be able to interact with machines more easily and naturally because of improved sensors, greater AI flexibility, and improvements in voice recognition and analysis technologies.
Scalability has always been critical for industrial applications, and it is just as vital for robotic industrial applications. Automated devices will be mobile, simple to integrate into existing manufacturing systems, and have longer life cycles. Specifically, this means the advancement of robots for mass production. The majority of these are manipulators and conveyor robots. The functions of monitoring, modifying, and adjusting the work of AI will continue to be performed by humans.
The efficient and rapid integration of robotic devices into existing environments and processes will define the speed of future projects and the future of robotics. Within the next decade, robots will be massively integrated into new spheres of human life and technical processes.
Experts expect robot technology to grow by leaps and bounds. We’ll see advances in robots’ ability to use natural language processing solutions, allowing them to process and interpret conversations more accurately. We’ll see major gains in AI and machine learning, with experts anticipating that more self-aware and self-learning devices will hit the market.
Computer vision, which empowers high-tech devices to spot, recognize and process still and video images as the human eye would, should also improve robotic performance. Androids of all kinds are steadily gaining better-performing self-navigation capabilities, requiring less input and guidance from humans to get around. Many companies, in fact, now offer the ability to train robots on digital simulations, allowing them to process millions of data points and improve their artificial intelligence and machine learning with each simulated run.
In other words, tomorrow’s robots won’t just think, act and respond more naturally. They’ll also enjoy quicker response times and better fine-motor skills.
Come 2050, interacting with robots of all kinds will feel like second nature, and we’ll increasingly encounter them at every turn. They’ll take on the role of bartenders, valets, chauffeurs and countless other professions. That’s before you consider their growing presence in the workplace as well, with warehouses and shipping centers increasingly being staffed by helpful androids.
We’ll need more skilled workers to program, maintain and operate all these robots, and we’ll need data scientists and researchers to help them process, analyze and interpret information. Overall, robots will take over for humans in performing dangerous, burdensome or redundant tasks. At the same time, they’ll also create new opportunities for those interested in making the most of this exciting new technology.
And don’t forget the many new and exciting applications for robotics: allowing a surgeon to use a remotely controlled arm to operate on a patient from thousands of miles away in real time, or helping an art teacher, using a similar device, instruct students in the art of drawing or painting using distance-learning solutions.
In effect, the future of robots will encompass all sorts of forward-looking developments, such as surgical robots and telehealth technologies, and all manner of innovations that help support companies in every field.
_______
Robotic Technology Trends:
Robotics is a fast-growing industry, forecast to be worth $568bn by 2030. Robotics technology has applications across industries, notably manufacturing, healthcare, retail, agriculture, and defence. Key factors, including cloud computing, artificial intelligence (AI), automation, and the need to fill workforce shortages, have contributed towards unlocking the full potential of robotics today.
Listed below are the key technology trends impacting the robotics theme:
AI:
AI technologies, most notably machine learning (ML), are integral to the development of intelligent industrial robots, which can anticipate and adapt to certain situations based on the interpretation of data derived from an array of sensors. Further advances are needed in certain AI technologies, including computer vision, conversational platforms, and context-aware computing, to take industrial automation and industrial robotics to the next level.
Neuromorphic processors (chips that emulate the structure of the human brain) will become an important part of the next generation of robots. They are trained using basic libraries of relevant data and then taught to think for themselves by processing sensory inputs. Eventually, these chips will use their acquired skills to perform assigned duties using associations and probabilities.
Edge computing:
Although much in robotics can be done from the cloud, security and latency issues mean that many robots have to be able to process real-time data about their operational environments and respond immediately. Due to lower latency, edge computing has the potential to improve the performance of robots while, at the same time, improving security, as the edge is safer than the cloud. Edge computing will make cyberattacks more difficult when combined with robotics’ self-contained “sense-decide-act” firmware loops.
Cybersecurity:
One of the major challenges to the widespread implementation of robots is the threat of cyberattacks. Robots, especially those that are internet-connected, are highly vulnerable to hacking. Leaving them unprotected may allow unauthorized access to key applications and systems, which in turn may lead to loss, theft, destruction, or inappropriate use of sensitive information. Hackers can even gain control of robots and compromise robotic functions to produce defective final products and cause production downtime. As a result, robot manufacturers are compelled to focus on security at the design and development stages and invest in effective security solutions.
The latest industrial cybersecurity management solutions address the risks associated with industrial automation equipment, applications, and plants. These solutions enable enterprises to comply with industry-specific cybersecurity regulations, such as the North American Electric Reliability Corporation Critical Infrastructure Protection (NERC CIP).
Industrial Internet:
Industrial machines and processes have been monitored in real-time for decades, with technologies like supervisory control and data acquisition (SCADA) having been in existence since the 1970s. However, the Industrial Internet implies a greater degree of interconnection between systems and assumes that the monitoring and control data will flow beyond the boundaries of the factory to be consumed and managed by cloud-based services. While there is much excitement about the factory of the future and Industry 4.0, existing factories, machines, and processes represent the primary opportunity for the Industrial Internet. The biggest short-term gains will come from retrofitting advanced communications and management functionality to today’s industrial infrastructure.
Cloud robotics:
Advances in AI have allowed robots to become highly complex products rather than the stand-alone, fixed-function machines they used to be. This, in turn, has increased the number of roles that robots can perform. Central to this development has been cloud computing, which allows sensing, computation, and memory to be managed more rapidly, safely, and at scale. The leaders in cloud robotics, namely Amazon, Google, IBM, and Microsoft, combine infrastructure and AI capabilities. As well as enabling AI implementation, the use of the cloud within robotics has the potential to change the way the technology is consumed.
Robotics centers of excellence (CoEs):
A robotics CoE is responsible for developing and implementing robotic solutions that are efficient, productive, and responsive to the needs of industries. These solutions ensure that a company realises its automation goals. In simple terms, the CoE gathers, assesses, and manages the information that eases the deployment of robotic solutions.
Open process automation (OPA):
Traditionally, robotic components such as controllers were only compatible with products made by the same company. For example, Siemens controllers would only work with Siemens products and ABB controllers with ABB products. Various organisations are now striving to break free from these limitations and establish an open system that would make robotic components universally compatible. These efforts have led to the development of OPA, which allows technology vendors to work together with various organisations to produce standard, secure, and open architecture that can ease robotic integration, giving rise to vendor-neutral solutions.
ExxonMobil launched an initiative aimed at building a prototype that could develop into a commercially viable OPA system. This can be achieved through a distributed, modular, and standards-based architecture for robotic components, with extensible systems that can accommodate change.
Lightweight design and doing more with less:
The robots of the 2020s will be smaller and lighter. This will make them more flexible, cost-effective, and easy to deploy. The trend towards lightweight design applies to both the bodies and the brains of robots. Several companies are investing in optimised operation systems, software, and programming. Academia is also developing solutions for some of the most complex problems, such as trajectory simplification, which aims to make robots better at navigating their environment.
Customisable robots:
Although robots have become prevalent across different industries, designing and modelling a robot is tedious, cumbersome, and expensive. Moreover, accommodating a minor change or modification at a later stage can further prolong the process. In response, manufacturers are attempting to create robot prototypes that can be customised. For example, start-up Elephant Robotics developed a low-cost and intelligent robotic arm with six degrees of freedom that can be adapted to multiple scenarios and applications.
Soft and self-healing robots:
Soft robots are made of soft materials or polymers instead of conventional metal. These materials give robots organic characteristics, replicating the way muscles work. Research is ongoing into enabling them to self-repair, which would make them more flexible and adaptable. Self-healing robots are still in their infancy, but continued research is expected to improve the technology.
_______
Will robots take over the world one day?
No.
Robots and advanced technologies like artificial intelligence and machine learning will not replace humans, and AI isn’t poised to take over your job. Rather, robots will serve as everyday partners, and working with these high-tech solutions will be more of a collaboration than a takeover. In fact, robots are expected to make us smarter, more productive and increasingly efficient, and they can make the work of myriad professionals and industries simpler, faster and more cost-efficient. Today’s most advanced robots can do everything from sprinting through rugged terrain and capturing data to patrolling for criminals like police dogs made of metal. Robots may soon play very prominent roles as household helpers, co-workers and even public security and education providers.
At the moment, the world’s most advanced robot appears to be a realistic humanoid known as Ameca, which can blink its eyes, smile and mimic human expression and interaction. Going forward, humans will continue to deploy humanlike robots in increasing numbers and with increasingly realistic stylings, though we’re still many years away from those that will be indistinguishable from real people.
Robots, and the rise of the AI and machine-learning technologies that power them, represent the dawning of the next industrial revolution. The speed, power and breadth of this technology’s impact across society will be unprecedented.
_______
Ubiquitous robotic organisms:
Nature has always found ways to exploit and adapt to differences in environmental conditions. Through evolutionary adaptation, a myriad of organisms have developed that operate and thrive in diverse and often extreme conditions. For example, the tardigrade (Schokraie et al., 2012) is able to survive pressures greater than those found in the deepest oceans and in space, can withstand temperatures from 1 K (-272 °C) to 420 K (about 150 °C), and can go without food for thirty years. Organisms often operate in symbiosis with others: the average human, for example, has about 30 trillion cells but contains about 40 trillion bacteria (Sender et al., 2016). Organisms cover scales from the smallest free-living bacterium, Pelagibacter ubique, at around 0.5 µm long, to the blue whale at around thirty meters long. That is a length range of over seven orders of magnitude, and more than twenty orders of magnitude in volume! What these astonishing facts show is that if nature can use the same biological building blocks (DNA, amino acids, etc.) for such an amazing range of organisms, we too can use our robotic building blocks to cover a much wider range of environments and applications than we currently do. In this way we may be able to match the ubiquity of natural organisms.
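The length range quoted above is easy to verify, using the approximate sizes from the text:

```python
import math

# Approximate sizes from the text.
smallest = 0.5e-6   # Pelagibacter ubique, ~0.5 micrometres
largest = 30.0      # blue whale, ~30 metres

orders_of_magnitude = math.log10(largest / smallest)
print(f"{orders_of_magnitude:.1f}")   # → 7.8
```

So the length range spans nearly eight orders of magnitude, consistent with the figure in the text; volume, which grows roughly with the cube of length, spans correspondingly more.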
To achieve robotic ubiquity requires us not only to study and replicate the feats of nature but to go beyond them, with faster development (certainly faster than evolutionary timescales!) and more general and adaptable technologies. Another way to think of future robots is as artificial organisms. Instead of a conventional robot, which can be decomposed into mechanical, electrical, and computational domains, we can think of a robot in terms of its biological counterpart, having three core components: a body, a brain, and a stomach. In biological organisms, energy is converted in the stomach and distributed around the body to feed the muscles and the brain, which in turn controls the organism. There is thus a functional equivalence between the robot organism and the natural organism: the brain is equivalent to the computer or control system; the body is equivalent to the mechanical structure of the robot; and the stomach is equivalent to the power source of the robot, be it a battery, a solar cell, or any other power source. The benefit of the artificial-organism paradigm is that we are encouraged to exploit, and go beyond, all the characteristics of biological organisms. These include qualities largely unaddressed by current robotics research, such as operation in varied and harsh conditions, benign environmental integration, reproduction, death, and decomposition. All of these are essential to the development of ubiquitous robotic organisms. The realization of this goal is only achievable through concerted research in smart materials, synthetic biology, artificial intelligence, and adaptation.
______
______
Moral of the story:
_
-1. There is no single agreed definition of a robot, although all definitions involve a task being completed without human intervention. Any automatically operated machine that replaces human effort is a robot. A robot is a machine, especially one programmable by a computer, that automatically performs difficult, often repetitive physical tasks that would otherwise have been performed by humans. Robots are machines that can substitute for humans and replicate human actions, although some may replicate animal actions. The robot notion derives from two strands of thought: humanoids and automata. Humanoid means human-like nonhuman; automata means self-moving things. Robots may be constructed to evoke human form, but most robots are task-performing machines, designed with an emphasis on stark functionality rather than expressive aesthetics. George Devol and Joe Engelberger invented digitally operated programmable industrial robots in 1959 and laid the foundation of the modern robotics industry.
Generally speaking, a robot has the ability to interpret its environment and adjust its actions to achieve a goal. Essentially, there are three problems you need to solve if you want to build a robot: 1) sense things (detect objects in the world), 2) think about those things (in an “intelligent” way), and 3) act on them (move or otherwise physically respond to the things it detects and thinks about). Some robots have only one or two of these abilities. For example, robot welding arms in factories are mostly about action (though they may have sensors), while robot vacuum cleaners are mostly about perception and action and have no cognition to speak of. There’s been a long and lively debate over whether robots really need cognition, but most engineers would agree that a machine needs both perception and action to qualify as a robot. Sensors give information about the environment or the robot itself (e.g. the position of its joints or its end effector). This information is then processed to be stored or transmitted, and to calculate the appropriate signals to the actuators (motors), which move the mechanical structure to achieve the required coordinated motion or force actions. To count as a robot, either the whole machine or parts of it must move to perform a physical task. A conventional robot can be divided into mechanical, electrical, and computational domains. Its computing capabilities enable it to use its motor devices to respond to external stimuli, which it detects with its sensory devices. The responses are more complex than would be possible using mechanical, electromechanical, and/or electronic components alone. These technologies deal with automated machines that can take the place of humans in dangerous environments or manufacturing processes, or resemble humans in appearance, behavior, or cognition.
Robots have existed for decades, but only recently have computing power, data, algorithms, artificial intelligence and automation technologies enabled them to perform physical tasks with such complexity and accuracy, often outperforming humans.
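The sense-think-act decomposition described above can be sketched as a minimal control loop. Everything below is illustrative (a one-dimensional robot driven toward a target by a simple proportional controller), not any particular robot's API:

```python
# A minimal sense-think-act loop: a 1-D robot moves toward a target position.
# The function names and the controller gain are illustrative choices.

def sense(position, target):
    """Perception: measure the signed distance to the target."""
    return target - position

def think(error, gain=0.5):
    """Cognition: decide a velocity command (proportional control)."""
    return gain * error

def act(position, velocity, dt=1.0):
    """Actuation: move the robot according to the command."""
    return position + velocity * dt

position, target = 0.0, 10.0
for step in range(20):
    error = sense(position, target)      # 1) sense
    command = think(error)               # 2) think
    position = act(position, command)    # 3) act

print(f"{position:.3f}")   # converges toward the target at 10.0
```

Real robots run loops like this hundreds of times per second, with `sense` reading physical sensors and `act` driving motors, but the three-stage structure is the same.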
_
-2. Mechatronics, also called mechatronics engineering, is an interdisciplinary branch of engineering that focuses on the integration of mechanical, electronic and electrical engineering systems, and also draws on robotics, electronics, computer science, telecommunications, systems, control, and product engineering. Robotics is a subset of mechatronics: all robots are mechatronic! In simple words, all robotic systems are mechatronic systems, but not all mechatronic systems are robotic systems. Robotics is a subfield of mechatronics, as mechatronics includes things that are not robotic in nature; the meeting point between the two is automation. A mechatronic system differs from a robotic system in functionality and use case. For example, a coffee vending machine, an airplane, a washing machine and an automobile are all mechatronic systems but not robots. An automobile becomes robotic when it is driverless, and an airplane becomes robotic when it is a drone.
_
-3. The word robot can refer to both physical robots and virtual software agents, but the latter are usually referred to as bots. A software robot (bot) is a common type of computer program that carries out tasks autonomously, such as a chatbot or a web crawler. Bots are normally used to automate certain tasks, meaning they can run without specific instructions from humans. An organization or individual can use a bot to replace a repetitive task that a human would otherwise have to perform when interacting with digital systems and software. Bots are also much faster at these tasks than humans. Although bots can carry out useful functions, they can also be malicious and come in the form of malware. However, because software robots exist only on the internet and originate within a computer, they are not considered robots. To be considered a robot, a device must have a physical form, such as a body or a chassis. A robot ought to have a direct physical manifestation that allows it to mechanically act and react in the real world, though it may also operate in the virtual world using artificial intelligence.
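A chatbot of the kind mentioned above can be as simple as a keyword lookup. The rules and canned answers below are invented purely for illustration; production bots are far more elaborate.

```python
# A toy rule-based bot, illustrating how a software robot carries out a
# narrow task autonomously. The rules and phrasing are invented for
# illustration; real chatbots and web crawlers are far more elaborate.

RULES = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "price": "Pricing starts at $10 per month.",
}

def bot_reply(message: str) -> str:
    """Match a keyword rule; fall back to a default answer."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I can only answer questions about hours and price."

reply = bot_reply("What are your hours?")
```

Note that everything here lives purely in software: there is no body or chassis, which is exactly why such a program is a bot and not a robot.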
_
-4. Is a robot a computer?
No.
A robot is considered a machine and not a computer. The computer gives the machine its intelligence and its ability to perform tasks. A robot is a machine capable of manipulating or navigating its environment, and a computer is not. For example, a robot at a car assembly plant assists in building a car by grabbing parts and welding them onto a car frame. A computer helps track and control the assembly, but cannot make any physical changes to a car. Robots are distinct from ordinary computers in their physical nature — normal computers don’t have physical bodies attached to them.
_
-5. Automation is the use of self-operating physical machines, computer software, and other technologies to perform tasks that are usually done by people. This process is designed to automatically follow a predetermined sequence of operations or respond to encoded instructions. Automation is applied to both virtual tasks and physical ones. Robotics is a field that combines engineering and computer science to design and build robots that perform physical tasks; these physical robots substitute for (or replicate) human actions. In short, automation is the process of using technology to complete human tasks, while robotics is the process of developing robots to carry out a particular function. Not all types of automation use robots, and not all robots are designed for process automation. Although the two terms overlap, they are not the same.
Automation and robotics have areas where they cross, such as the use of robots to automate physical tasks, as with car assembly lines. However, not all automation uses physical robots and not all areas of robotics are associated with automation.
_
-6. Robotic process automation (RPA) is a software technology that makes it easy to build, deploy, and manage software robots (bots) that emulate human actions when interacting with digital systems and software. No physical robots are involved in robotic process automation. In any organization, there are many tasks that are repetitive and time-consuming, and because of the repetition there is always a large possibility of error. To avoid these errors and save time, plenty of RPA software is available in the market. Robotic process automation is mainly used in the banking, insurance, retail, manufacturing, healthcare, and telecommunication industries.
For RPA tools in the marketplace to remain competitive, they will need to move beyond task automation and expand their offerings to include intelligent automation (IA). This type of automation expands on RPA functionality by incorporating sub-disciplines of artificial intelligence, like machine learning, natural language processing, and computer vision. Intelligent process automation demands more than the simple rule-based systems of RPA. You can think of RPA as “doing” tasks, while AI and ML encompass more of the “thinking” and “learning,” respectively. Intelligent automation trains algorithms using data so that the software can perform tasks in a quicker, more efficient way.
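The rule-based “doing” side of RPA can be pictured as a deterministic script that applies fixed business rules, with no learning involved. The record fields, thresholds and queue names below are hypothetical.

```python
# A sketch of rule-based RPA: a deterministic script that routes records
# exactly as a clerk would, with no learning involved. The invoice fields
# and routing rules are hypothetical examples.

def route_invoice(invoice: dict) -> str:
    """Apply fixed business rules to decide where an invoice goes."""
    if invoice["amount"] > 10_000:
        return "manager_approval"
    if invoice["vendor_known"]:
        return "auto_pay"
    return "manual_review"

queues = [route_invoice(inv) for inv in [
    {"amount": 15_000, "vendor_known": True},
    {"amount": 200, "vendor_known": True},
    {"amount": 500, "vendor_known": False},
]]
```

Intelligent automation would replace these hand-written `if` rules with a model trained on past routing decisions; the “doing” wrapper around it stays much the same.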
_
-7. Robotics and artificial intelligence (AI) are really two separate things. Robotics involves building robots whereas AI involves programming intelligence. AI works mainly in cyberspace (unsubstantial data space) and robotics works mainly in physical space. While AI can be entirely software, robots are physical machines that move. Some robots use AI, and some do not: typical industrial robots blindly follow completely defined scripts with minimal sensory input and no learning or reasoning. There are systems that are only AI, there are systems that are only robotics, and there are systems that are both.
Robots are autonomous or semi-autonomous machines, meaning that they can act independently of external commands. Artificial intelligence is software that learns and self-improves. In some cases, robots make use of artificial intelligence to improve their autonomous functions by learning; however, it is also common for robots to be designed with no capability to self-improve. Robotics involves designing, building and programming physical robots, and only a small part of it involves artificial intelligence. Most industrial robots are programmed to perform repetitive series of movements and don’t need any AI to perform their task. AI is not per se required to enable the tasks, but it can bring significant performance benefits and enable specific functionality. The main aim of using AI in robotics is to better manage variability and unpredictability in the external environment, either in real time or offline. A general rule of thumb is that the more variability and unpredictability there is in the external environment, the more useful AI will be. Artificially intelligent robots are the bridge between robotics and AI.
Until recently, most robots were hard-coded to execute a task according to a pre-defined trajectory and with a pre-defined level of force. These robots are oblivious to their external environment. The past few years have seen strong growth in autonomous robots which are able to adjust to far greater variability in their external environment. For example, autonomous mobile robots can not only stop if they encounter an object in their path, they can also re-plan their route and adjust their path in real-time. Autonomy does not necessarily require AI. However, the higher the level of autonomy, the greater the chances of AI algorithms being employed to categorize an unfamiliar environment and to determine the best way to interact with that environment to achieve the application’s goal (for example picking up a bottle from an unsorted bin and placing it in a rack).
Traditional robotic systems include a programmable dimension designed for repetitive, labor-intensive work, including sensing and acting upon an environment. The emergence of AI and machine learning (ML) has allowed robotic things to function using learning algorithms and cognitive decision-making rather than traditional programming. Traditional programming involves deterministic algorithms, while AI programming involves probabilistic algorithms.
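The deterministic/probabilistic contrast above can be made concrete: a traditional robot program replays the same hard-coded trajectory on every run, while an AI-style policy samples actions from a probability distribution. The waypoints, action names and weights here are invented placeholders, not a real controller.

```python
# Contrasting the two programming styles: a deterministic script replays
# fixed waypoints every run, while a probabilistic policy draws actions
# from a distribution. Both policies are invented placeholders.

import random

def deterministic_trajectory():
    """Traditional robot program: a hard-coded waypoint sequence."""
    return [(0, 0), (1, 0), (1, 1)]  # identical on every call

def probabilistic_action(rng: random.Random):
    """AI-style policy: actions drawn from a probability distribution."""
    return rng.choices(["forward", "turn_left", "turn_right"],
                       weights=[0.8, 0.1, 0.1])[0]

rng = random.Random(42)              # seeded for reproducibility
actions = [probabilistic_action(rng) for _ in range(5)]
```

The deterministic program never surprises you, which is exactly why it cannot cope with surprises in its environment; the probabilistic policy trades that certainty for adaptability.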
_
-8. All robots are just things, but robotics has unique needs and challenges that set it apart from the more generic Internet of Things. Robotics and the Internet of Things overlap a great deal because of their similarities: robots have sensors, actuators and, most importantly, microcontrollers, much like IoT devices. They are not exactly the same, however; compared to IoT, robotics places far more emphasis on mechanical concepts. The main difference between the IoT and robotics communities is that robots take real action in the physical world. Both IoT devices and robots depend on sensors to understand the environment around them, quickly process data and determine how to respond. But robots are able to handle unanticipated situations, while most IoT applications can only handle well-defined tasks. IoT focuses on supporting services for pervasive sensing, monitoring and tracking, while the robotics community focuses on production action, interaction and autonomous behavior.
The ongoing revolution of Internet of Things (IoT), together with the growing diffusion of robots in many activities of everyday life, makes IoT-aided robotics applications in which objects and robots are designed to collaborate to reach a common goal a tangible reality of our upcoming future. IoT-aided robotics applications are classified in the following fields: health-care, industrial and building, military, and rescue management.
The IoT and robotics communities are coming together to create The Internet of Robotic Things (IoRT). The IoRT is a concept in which intelligent devices can monitor the events happening around them, fuse their sensor data, make use of local and distributed intelligence to decide on courses of action and then behave to manipulate or control objects in the physical world.
_
-9. It’s not easy to define what robots are, and it’s not easy to categorize them either. Each robot has its own unique features, and as a whole robots vary hugely in size, shape, and capabilities. Still, many robots share a variety of features. Robots can be classified using several criteria such as application area, control techniques, mobility, mechanism of interaction, actuators, geometrical configuration, intelligence of robots and others. Of course, there is a great deal of overlap in many of these categories; drones, for example, can be classified as either aerospace, consumer, or exploration.
Robots are usually autonomous or semi-autonomous, and range from humanoids such as Honda’s Advanced Step in Innovative Mobility (ASIMO) and TOSY’s TOSY Ping Pong Playing Robot (TOPIO) to industrial robots, medical operating robots, patient assist robots, robot pets, collectively programmed swarm robots, UAV drones such as General Atomics MQ-1 Predator, and even microscopic nano robots.
_
-10. Increasingly, robots are no longer stand-alone machines, but are connected to other machines and software applications as part of automation and Industry 4.0 strategies. The vision is of a seamless automated process from order through to dispatch and delivery. The digitization of parts or all of the manufacturing process gives manufacturers transparency across the full end-to-end production, enabling them to maximize efficiency along the full value chain.
_
-11. A robotic arm is also known as a manipulator. It is the part of an industrial robot that is used to execute tasks. Its structure is akin to that of the human arm and consists of a shoulder, an elbow, and a wrist. The shoulder is the part of the robotic arm linked to the mainframe of the industrial robot, the elbow is the jointed part of the arm that flexes as it moves, and the wrist is the end of the arm that performs the actual task. An end effector is a tool device attached to the wrist of a robotic arm. It gives the robotic arm more dexterity and makes it better suited for specific tasks. The end effector is where the work happens: it is the point of contact between the robot and the workpiece.
An actuator is the motor of an industrial robot and is sometimes referred to as the drive. Actuators are responsible for creating specific motions and controlling the movements of the articulated robot. Robots typically have multiple actuators, corresponding to the number of axes the robot has. Servo motors are the most common actuators used in industrial robots, since their advanced functionality allows for extreme precision. Robotic actuators are usually powered electrically but may also be hydraulically or pneumatically powered. The actuator helps the brain (controller) of the robot respond to the surrounding environment, and helps the robot move its hands (grippers) and its feet (wheels and the castor).
Robot sensors are like human senses. Robots can see, hear and have a sense of touch. Sensors are devices for sensing and measuring geometric and physical properties of robots and the surrounding environment. These signals are passed to a controller to enable appropriate behavior.
Multi-sensor information fusion refers to the synthesis of sensory data from multiple sensors to produce more reliable, accurate or comprehensive information. The fused multi-sensor system can better reflect the characteristics of the detected object accurately, eliminate the uncertainty of information and improve the reliability of the information.
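One classic way to fuse two independent estimates, consistent with the description above, is inverse-variance weighting: the more reliable sensor counts for more, and the fused estimate is more certain than either input. The sensor values and variances below are invented for illustration.

```python
# A minimal illustration of multi-sensor fusion: two noisy distance
# estimates combined by inverse-variance weighting, so the more reliable
# sensor counts for more. Sensor values and variances are invented.

def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted average of two independent estimates."""
    w_a, w_b = 1 / var_a, 1 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1 / (w_a + w_b)  # fused estimate is more certain than either
    return fused, fused_var

# Ultrasonic sensor: 2.0 m, variance 0.04; lidar: 2.2 m, variance 0.01.
est, var = fuse(2.0, 0.04, 2.2, 0.01)
```

Because the lidar is four times more certain here, the fused distance lands much closer to 2.2 m than to 2.0 m, and the fused variance drops below either sensor’s own, which is the “more reliable, accurate” information the paragraph describes.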
Robotic vision, the combination of robotics and computer vision, involves the application of computer algorithms to data acquired from sensors and cameras. The research community has developed a large body of such algorithms. A vision-based control system involves continuous measurement of the target and the robot, using vision to create a feedback signal, and moves the robot arm until the visually observed error between the robot and the target is zero. Vision-based control is quite different from taking an image, determining where the target is and then reaching for it. The advantage of continuous measurement and feedback is that it provides great robustness with respect to any errors in the system.
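The continuous-feedback idea can be sketched as a loop: each cycle the camera re-measures the pixel error between the target and the end effector, and a proportional controller nudges the arm until the error is (near) zero. The one-dimensional pixel coordinates, gain and tolerance below are a trivial stand-in for a real camera and arm.

```python
# A sketch of vision-based (feedback) control: the error is re-measured
# every cycle and a proportional correction is applied until it vanishes.
# Pixel coordinates, gain and tolerance are invented for illustration.

def visual_servo(target_px, arm_px, gain=0.5, tol=0.5, max_steps=100):
    """Move arm_px toward target_px using visual feedback."""
    for _ in range(max_steps):
        error = target_px - arm_px     # measured anew every cycle
        if abs(error) < tol:
            break                      # visually observed error ~ zero
        arm_px += gain * error         # proportional correction
    return arm_px

final = visual_servo(target_px=320.0, arm_px=100.0)
```

Because the error is re-measured at every step, a calibration error or a moving target is automatically compensated for, which is the robustness advantage the paragraph describes; an open-loop “look once, then reach” strategy has no such correction.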
The robot controller is a computer, composed of hardware and software, linked to the robot and essentially functions as its “brain”. Controllers have all of the characteristics associated with computers and contain sophisticated decision-making and data storage capabilities.
_
-12. An axis is an invisible line around which an object rotates, or spins. An axis in robotic terminology represents a degree of freedom (DOF). Most industrial robots utilize six axes, which give them the ability to perform a wide variety of industrial tasks compared to robots with fewer axes. Six axes allow a robot to move in the x, y, and z planes, as well as position itself using roll, pitch, and yaw movements. This functionality is suitable for complex movements that simulate a human arm: reaching under something to grab a part and place it on a conveyor, for example. The additional range of movement allows six-axis robots to do more things, such as welding, palletizing, and machine tending. Other advantages of six-axis robots include mobility (easy to move and/or mount) and wide horizontal and vertical reach. They are widely used in automotive and aerospace manufacturing, where they perform drilling, screw driving, painting, and adhesive bonding.
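Each axis is one degree of freedom, and the mapping from joint angles to end-effector position is called forward kinematics. The idea is easiest to see for a two-axis planar arm; a six-axis robot applies the same principle in three dimensions, adding roll, pitch and yaw. The link lengths below are arbitrary.

```python
# Forward kinematics of a 2-link planar arm: each joint is one axis
# (degree of freedom). Link lengths are arbitrary illustration values.

import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    """End-effector (x, y) of a 2-link planar arm, angles in radians."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Both joints at zero: the arm is fully extended along x, reaching (2, 0).
x, y = forward_kinematics(0.0, 0.0)
```

With only two axes the arm can reach points in a plane; adding axes extends the reachable set, which is why six axes suffice to place a tool at an arbitrary position and orientation in space.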
_
-13. The mechanism that makes a robot capable of moving in its environment is called robot locomotion.
Legged locomotion consumes more power while demonstrating gaits such as jumping, hopping, walking, trotting, and climbing up or down, and it requires a greater number of motors to accomplish a movement. Legged motion makes it possible to negotiate uneven surfaces, steps, and other areas that would be difficult for a wheeled robot to reach, and it causes less damage to the terrain than wheels, which erode it.
Wheeled robots are typically quite energy efficient and simple to control, requiring fewer motors to accomplish a movement. They are easier to implement because there are fewer stability issues, especially with a greater number of wheels, and they are more power efficient than legged locomotion.
Tank tracks provide even more traction than a six-wheeled robot. Tracked wheels behave as if they were made of hundreds of wheels and are therefore very common for outdoor and military robots, where the robot must drive on very rough terrain. However, they are difficult to use indoors, such as on carpets and smooth floors.
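The simplicity of wheeled control comes from the fact that, for a differential-drive robot, just two wheel speeds fully determine the body’s forward and turning motion. The wheel radius and track width below are arbitrary example values.

```python
# Why wheeled robots are simple to control: for a differential-drive
# robot, two wheel speeds determine the whole body's motion.
# Wheel radius and track width are arbitrary illustration values.

def diff_drive(v_left, v_right, wheel_radius=0.05, track_width=0.3):
    """Body velocities from wheel angular speeds (rad/s)."""
    v = wheel_radius * (v_right + v_left) / 2                # forward, m/s
    omega = wheel_radius * (v_right - v_left) / track_width  # turn, rad/s
    return v, omega

v, omega = diff_drive(10.0, 10.0)   # equal speeds: drive straight
```

Equal wheel speeds give pure forward motion, opposite speeds spin the robot in place, and anything in between traces an arc; a legged robot needs many coordinated motors to achieve the same repertoire.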
_
-14. The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it. For example, electricity was once a novel technology and now it is a part of life. Robotic technologies have the potential to join personal computer, smartphone and electricity as pervasive aspects of everyday life. There are significant gaps between where robots are today and the promise of pervasive integration of robots in everyday life. These gaps concern the creation of robots, their computation and capacity to reason, change, and adapt for increasingly more complex tasks in increasingly complex environments, and their capacity to interact with people.
There are two emerging technologies that will have a dramatic impact on future robots — their form, shape, function and performance — and change the way we think about robotics. First, advances in Micro Electronic Mechanical Systems (MEMS) will enable inexpensive and distributed sensing and actuation to deal with uncertainty and adapt to unstructured environments. Second, advances in biomaterials and biotechnology will enable new materials that will allow us to build softer and friendlier robots, robots that can be implanted in humans, and robots that can be used to manipulate, control, and repair biomolecules, cells, and tissue.
_
-15. Robot density is the number of robots installed per 10,000 employees, and it is a measure of the degree of automation. From 2015 to 2020, robot density nearly doubled worldwide, jumping from 66 units in 2015 to 126 units in 2020. South Korea has the highest robot density, at 932 per 10,000 employees. Its robot density is seven times the global average, and the country has been increasing it by 10% every year since 2015.
There are more than 3 million industrial robots operating in factories around the world in 2021. The global market value of the industrial robotics industry is $43.8 billion, by revenue. The use of robots has more than doubled in the last 20 years in most advanced economies. The top 5 most automated countries in the world are: South Korea, Singapore, Japan, Germany, and Sweden. Automotive is the industry with the highest number of operational stock of industrial robots, followed by electrical/electronic and metal & machinery.
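The density figures above follow directly from the definition: installed robots per 10,000 employees. The workforce and robot counts below are invented purely to show the arithmetic.

```python
# Robot density = installed robots per 10,000 employees.
# The workforce figures below are hypothetical, chosen only to
# demonstrate the calculation.

def robot_density(robots, employees):
    """Robots installed per 10,000 employees."""
    return robots / employees * 10_000

# A hypothetical sector with 4,660 robots across 50,000 employees
# works out to a density of 932, the same value reported for South Korea.
density = robot_density(4_660, 50_000)
```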
_
-16. Robots are now in widespread use performing jobs more cheaply or with greater accuracy and reliability than humans. They are also employed for tasks that are too dangerous, unsafe, hazardous, repetitive, boring, onerous, and unpleasant for humans.
_
-17. An industrial robot is a reprogrammable, multifunctional manipulator designed to move materials, parts, tools, or specialized devices through variable programmed motions for the performance of a variety of tasks. An industrial robot is a robot system used for manufacturing. Typical applications of robots include welding, painting, assembly, disassembly, pick and place for printed circuit boards, packaging and labeling, palletizing, product inspection, and testing; all accomplished with high endurance, speed, and precision. They can also assist in material handling.
_
-18. Modular robots encapsulate functionality (sensors, actuators, processors, energy, etc.) in compact units called modules, which connect to each other through specially designed reusable connectors and that can be rearranged in different ways to create different morphologies. Modularity allows an arc welding robot to put down its tools and pick and place pallets or switch its tools to perform any task it is commanded to. This means that these automated robots can be both specialized and diverse, and single robots can perform tasks representing every step of the manufacturing process.
_
-19. The advantages of robots include heavy-duty jobs with precision and repeatability, whereas the advantages of humans include creativity, decision-making, flexibility, and adaptability. The need to combine these complementary skills has resulted in collaborative robots, or cobots. Whereas traditional robots are designed to eliminate human labor, cobots are designed to complement it. This means that manufacturers of all sizes can benefit from the best that both humans and robots have to offer. Humans provide creative problem-solving capabilities and situational flexibility, and cobots provide the reliability and precision you expect from robots. Using cobots allows manufacturers to free humans from repetitive and dangerous tasks, while at the same time reducing human error and improving overall productivity and quality. However, when speed and absolute precision are the primary automation criteria, it is unlikely that any form of collaborative application will be economically viable. In this case, a traditional, fenced industrial robot is and will remain the preferred choice.
_
-20. A mobile robot is an automatic machine that is capable of locomotion. Mobile robots have the capability to move around in their environment and are not fixed to one physical location. Mobile robots can be “autonomous” (AMR – autonomous mobile robot) which means they are capable of navigating an uncontrolled environment without the need for physical or electro-mechanical guidance devices. Often, AMRs are seen as a replacement for automated guided vehicles (AGVs), which have long been used to automate movement of materials in industry. Whereas an AGV navigates by following wire strips or magnetic tracks along the floor, AMRs use a technology called light detection and ranging (LiDAR) instead. Hospitals have been using autonomous mobile robots to move materials for many years. Warehouses have installed mobile robotic systems to efficiently move materials from stocking shelves to order fulfillment zones. Mobile robots are also found in industrial, military and security settings.
_
-21. Microrobots are mobile robots with characteristic dimensions less than 1 mm. The term can also be used for robots capable of handling micrometer size components. Due to their small size, microbots are potentially very cheap, and could be used in large numbers (swarm robotics) to explore environments which are too small or too dangerous for people or larger robots. It is expected that microbots will be useful in applications such as looking for survivors in collapsed buildings after an earthquake or crawling through the digestive tract. Microrobots may be employed to travel through blood vessels and deliver therapy such as radiation or medication to a specific site. Nanorobot is a popular term for molecules with a unique property that enables them to be programmed to carry out a specific task. A really exciting area of medical robotics is in replacement of antibiotics. The concept is that nanorobots with receptors to which bacteria adhere can be used to attract bacteria in the blood stream or in sites of local infection.
_
-22. Humanoid robots are machines that are designed to look like humans, allowing interaction with made-for-human tools or environments. Humanoid robots are robots made in the form or shape of a human body – with a head, a torso, two arms and two legs – though some humanoid robots may replicate only part of the body. Androids are humanoid robots built to aesthetically resemble humans, at least in external appearance but also in behavior. All androids are humanoid robots, but not all humanoid robots are androids.
Humanoid robots are professional service robots built to mimic human motion and interaction. Like all service robots, they provide value by automating tasks in a way that leads to cost-savings and productivity. The main motive for research and development in the field of humanoid robots is artificial intelligence (AI). In most scientific fields, the development of a humanoid robot is deemed to be an important basis for the creation of human-like AI. This is based on the idea that AI cannot be programmed but consists of learning processes. Accordingly, a robot can develop artificial intelligence only through active participation in social life. However, active participation in social life, including communication, is possible only if the robot is perceived and accepted as an equal creature due to its looks, mobility, and behavior.
_
-23. Digital humans are AI-powered, human-like virtual beings that offer the best of both AI and human conversation. While they don’t necessarily have to be created in the likeness of a specific individual (they can be entirely unique), they do look and act like humans. Digital humans are not humanoid robots, as they are virtual entities rather than physical machines. Unlike digital assistants such as Alexa or Siri, these AI-powered virtual beings are designed to interact, sympathize, and have conversations just like a fellow human would. Digital humans have already replaced human news anchors in China; as digital workers, virtual humans are efficient and tireless and free human anchors from repetitive and tedious work.
_
-24. The fundamental challenge of all robotics is this: it is impossible to ever know the true state of the environment. Robot control software can only guess the state of the real world based on measurements returned by its sensors, and it can only attempt to change the state of the real world through the generation of control signals. Thus, one of the first steps in control design is to come up with an abstraction of the real world, known as a model, with which to interpret sensor readings and make decisions. As long as the real world behaves according to the assumptions of the model, the robot can make good guesses and exert control. As soon as the real world deviates from these assumptions, however, the robot will no longer be able to make good guesses, and control will be lost. Robots are only able to perform impressive tasks as long as the environmental conditions remain within the narrow confines of their internal model. Thus, one key to the advancement of robotics is the development of more complex, flexible, and robust models—and said advancement is subject to the limits of the available computational resources.
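The “guessing from measurements” described above is the province of state estimation. A minimal version is a one-dimensional recursive filter that maintains a belief about a quantity and blends in each noisy sensor reading, weighting readings more heavily when the current belief is uncertain. The noise values and readings below are invented.

```python
# A minimal state estimator: the robot never knows the true distance,
# it only maintains a belief and updates it with noisy measurements
# (a scalar Kalman-style update). Noise values and readings are invented.

def update_belief(belief, belief_var, measurement, meas_var):
    """Blend a noisy measurement into the current belief."""
    gain = belief_var / (belief_var + meas_var)   # trust data vs. belief
    new_belief = belief + gain * (measurement - belief)
    new_var = (1 - gain) * belief_var             # uncertainty shrinks
    return new_belief, new_var

belief, var = 0.0, 1.0                    # initial guess: very uncertain
for z in [1.2, 0.9, 1.1, 1.0]:            # noisy readings of a ~1.0 m wall
    belief, var = update_belief(belief, var, z, meas_var=0.1)
```

After a few readings the belief converges near the wall’s true distance and the variance shrinks; but if the wall moves in a way the model does not allow for, the filter’s guesses degrade, which is exactly the model-mismatch failure the paragraph warns about.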
The next challenge after modeling the environment is environmental variability. Until significant additional progress is made in autonomous systems, human involvement with robotic systems must increase with environmental variability. Where variability is low, autonomous robots are efficient and human involvement stays at the level of strategic decision making. Where variability is high, human sensing and decision making are more important and the human user must take more responsibility. Where environmental variability is high and human involvement is nil, the robot is unlikely to perform the task as desired.
An additional major challenge to the widespread implementation of robots is the threat of cyberattacks. Robots, especially those that are internet-connected, are highly vulnerable to hacking. Leaving them unprotected may allow unauthorized access to key applications and systems, which in turn may lead to loss, theft, destruction, or inappropriate use of sensitive information. Hackers can even gain control of robots and compromise robotic functions to produce defective final products and cause production downtime. It is important that robots are protected against data theft and manipulation. In addition to manipulating configuration files (changing the motion areas or the position data) and code manipulation (reprogramming sequences), manipulating the robot feedback (deactivating alarms) is the greatest threat. These interventions can lead to the destruction of products, damage to robots and, in the worst-case scenario, injuries to people working in these areas.
_
-25. Cloud robotics refers to any robot system that utilizes the cloud infrastructure for either data or code for its execution, i.e., a system where all sensing, computation and memory are not integrated into a single standalone system. When connected to the cloud, robots can benefit from the powerful computation, storage, and communication resources of modern data centers, which can process and share information from various robots or agents (other machines, smart objects, humans, etc.). So, it is possible to build lightweight, low-cost, smarter robots with an intelligent “brain” in the cloud. Cloud robotics faces issues such as bandwidth limitations, latency, quality of service, privacy and security.
_
-26. Most robots are made of hard materials and are designed to be strong, fast, repetitive and precise with their movements. But the inflexibility inherent in their designs limits the ways we can safely use and interact with them. That’s why engineers and researchers have started exploring the possibilities of soft, flexible robots made of materials that are safer for humans to be around and can navigate unpredictable environments that other robots can’t. Animals exploit soft structures to move effectively in complex natural environments. These capabilities have inspired robotic engineers to incorporate soft technologies into their designs. Soft robotics is a subfield of robotics that concerns the design, control, and fabrication of robots composed of soft compliant materials, instead of rigid materials.
_
-27. Hot robots are designed to go places where humans cannot and may someday be called upon to carry out tasks that are far too dangerous for human crews or even other robots. Hot robots first saw use inside the decommissioned nuclear reactors. The term “hot” refers to the hazardous radioactivity that makes these reactors impractical or too dangerous for humans to enter.
_
-28. Robotics simulation is a digital tool used to engineer robotics-based automated production systems. Robotics simulation permits engineers to try ideas and construct manufacturing scenarios in a dynamic virtual environment, collecting virtual response data that accurately represents the physical responses of the control system. Usually, designs are evaluated in simulations as fabricating many designs and testing them in the real world is prohibitively expensive in terms of time, money, and safety. Because robotics simulation generates realistic and accurate behaviors and responses in the virtual realm to demonstrate what will happen in the physical realm, it enables manufacturers to design and optimize manufacturing processes without the time and cost penalties of tying up capital equipment or production floors.
_
-29. Can a robot learn like a child? Can it learn a variety of new skills and new knowledge unspecified at design time and in a partially unknown and changing environment? How can it discover its body and its relationships with the physical and social environment? How can its cognitive capacities continuously develop without the intervention of an engineer once it is “out of the factory”? What can it learn through natural social interactions with humans? These are the questions at the center of developmental robotics. Unlike evolutionary robotics which operates on phylogenetic time scales and populations of many individuals, developmental robotics capitalizes on “short” (ontogenetic) time scales and single individual. Evolutionary robotics uses populations of robots that evolve over time, whereas developmental robotics is interested in how the organization of a single robot’s control system develops through experience, over time.
_
-30. Robots are widely used in manufacturing, assembly and packing, transport, earth and space exploration, unmanned surveillance, health, education, weaponry, laboratory research, agriculture, mining, and mass production of consumer and industrial goods. Robots can be used in many situations for many purposes, but today many are used in dangerous environments (including inspection of radioactive materials, bomb detection and deactivation), and where humans cannot survive (e.g., in space, underwater, in high heat, and clean up and containment of hazardous materials and radiation).
_
-31. The automotive manufacturing industry has long been one of the quickest and largest adopters of industrial robotic technology, and that continues to this day. Robots are used in nearly every part of automotive manufacturing in one way or another, and it remains one of the most highly automated supply chains in the world.
_
-32. NASA uses robots in many different ways. Robotic arms on spacecraft and the space station move very large objects in space. Spacecraft that explore other worlds, like the Moon or Mars, are all robotic; these robots include rovers and landers on planetary surfaces. NASA also flies robotic aircraft, known as uncrewed aerial vehicles (UAVs), and is developing new robots, such as Robonaut, that could help people in space.
_
-33. A robotic telescope is a telescope that can make observations without hands-on human control. Removing humans from the observing process allows faster observation response times. Robotic telescopes can also respond quickly to alert broadcasts from satellites and begin observing within seconds. Automation in a telescope’s observing program eliminates the need for an observer to be constantly present at a telescope. This makes observations more efficient and less costly. Many telescopes operate in remote and extreme environments such as mountain tops, deserts, and even Antarctica. Under difficult conditions like these, a robotic telescope is usually cheaper, more reliable and more efficient than an equivalent non-robotic telescope.
_
-34. Healthcare robots are transforming healthcare across the globe, from surgery to rehabilitation, from radiation treatment to infection control, and from pharmacy to therapy. Medical robots fall into several categories: surgical assistance, modular, service, social, mobile, and autonomous. Today, medical robots are best known for their roles in surgery, specifically the use of robots, computers, and software to accurately manipulate surgical instruments through one or more small incisions for various surgical procedures. Compared to traditional surgery, robot-assisted surgery enhances dexterity, hand-eye coordination, ergonomic positioning, and vision.
Laparoscopic surgery has certain limitations, such as two-dimensional imaging, restricted range of motion of the instruments, and poor ergonomic positioning of the surgeon. The robotic surgery system was introduced as a solution to minimize the shortcomings of laparoscopy. Improved visualization and greater dexterity are two major features of robotic-assisted surgery as compared to laparoscopic surgery.
Health robotics enables a high level of patient care, efficient processes in clinical settings, and a safe environment for both patients and health workers. The major advantage of medical robots is that surgeries carried out with their help are smoother, with shorter recovery times and lower blood loss. Other benefits include proper monitoring of patients, consistent performance, reduced risk of infection, time savings, and many more.
But no matter how impressive, robotics in healthcare is still a system controlled by humans.
_
-35. With the improvement of robotic technology, the role of robots in the Operating Room (OR) has changed greatly, from the individual use of one or two surgical robots to integrated systems in which multiple robotic devices support the surgery from different aspects and levels. That is to say, a new era of surgery began during the latest decade (2010–2020) with the appearance of a new concept, the “Robotic Operating Room.” A robotic OR integrates surgical robots with intraoperative imaging (X-ray, USG, MRI, CT and PET scans), a surgical navigation system, nerve monitoring devices, intraoperative rapid diagnosis devices, molecular imaging using beta probes (which detect malignant tissue by measuring beta radiation), and targeted optical imaging.
_
-36. Many countries have enacted a quick response to the unexpected COVID-19 pandemic by using existing technologies. For example, robotics, artificial intelligence, and digital technology have been deployed in hospitals and public areas for maintaining social distancing, reducing person-to-person contact, enabling rapid diagnosis, tracking virus spread, and providing sanitation. A dichotomy existed between the requirement for the minimization of human contact to reduce infection transmission rates and the need for humans to carry out the essential tasks of their daily lives. A surge in the creation of robotic technologies has been observed to bridge this gap, including robots designed for sanitation, delivery, patrolling, and screening that aim to work alongside humans in efficiently reducing the burden of the pandemic while maintaining the quality of life.
_
-37. Crop planting and harvesting demand a great deal of physical strength and fortitude, and these jobs tend to be hard to fill: farmers around the world report difficulty finding enough labor to harvest their crops. So farmers are turning to robots for much of the work, from picking berries to packaging salad to weeding fields. Robots like Wall-Ye are already performing farming tasks, and autonomous robots have taken over a wide range of farming operations, including harvesting and pruning.
_
-38. Agriculture must produce enough food, feed, fiber, fuel, and fine chemicals to meet the needs of a growing world population. It will face multiple challenges in satisfying these growing human needs while also dealing with climate change, increased risk of drought and high temperatures, heavy rains, degradation of arable land, and depleting water resources. Plant breeders seek to address these challenges by developing high-yielding, stress-tolerant crop varieties adapted to climate conditions and resistant to new pests and diseases. Plant phenotyping is the quantitative and qualitative assessment of the traits of a given plant or plant variety in a given environment. Phenotyping large numbers of varieties for multiple traits across multiple environments is an essential task for breeders as they select desirable genotypes and identify genetic variants that perform optimally in diverse and changing target environments. Traditional plant phenotyping, which relies on manual measurement, is laborious, time-consuming, error-prone, and costly. Autonomous robotic systems are essential and integral to high-throughput plant phenotyping, as they substantially enhance the capacity, speed, coverage, repeatability, and cost-effectiveness of plant trait measurements.
_
-39. An educational robot can provide a vast variety of information on any given topic compared to a teacher. Students, in turn, can learn robotics and programming through educational robotics. Robotics is considered an important skill and is being included in school curriculums to prepare students for life after school and for the competitive workforce of the future.
_
-40. Robots as a Service (RaaS) means adopting robots on a pay-per-use basis. It can be particularly beneficial for small-to-medium-sized manufacturers, sparing them up-front capital investment and unpredictable maintenance costs while giving them predictable operating expenditure.
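The trade-off between buying a robot outright and paying per use can be made concrete with a small cost model. All figures below are hypothetical assumptions chosen for the sketch, not real vendor quotes.

```python
# Illustrative comparison of owning a robot vs. Robots as a Service (RaaS).
# Every number here is a made-up assumption for the example.
def ownership_cost(upfront, annual_maintenance, years):
    # Buying: large up-front capital plus (often unpredictable) maintenance.
    return upfront + annual_maintenance * years

def raas_cost(monthly_fee, years):
    # RaaS: no capital outlay, just a predictable operating expenditure.
    return monthly_fee * 12 * years

# A small manufacturer weighing a 3-year horizon:
buy = ownership_cost(upfront=120_000, annual_maintenance=15_000, years=3)
rent = raas_cost(monthly_fee=3_500, years=3)
print(buy, rent)   # 165000 126000
```

Under these assumed numbers RaaS is cheaper over three years; with a longer horizon or lower fees the comparison can flip, which is why the predictability of the operating expenditure, not just the total, matters to small manufacturers.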
_
-41. Robots are improving our daily lives in an increasing variety of ways. They improve health outcomes, the quality and sustainability of food, and the quality and availability of the products and services we receive, and they help reduce carbon emissions.
_
-42. Robots are by nature more precise than humans. Free of human error, they can perform tasks more efficiently and at a consistent level of accuracy. At the University of California, San Francisco, a robotic pharmacist fills and dispenses prescriptions better than most humans: in 2011 it prepared over 350,000 doses without a single error. The robot was also better able to judge whether medications would interact with each other in specific patients.
Robots don’t have feelings, they can’t get tired, and they never suffer a slip of attention. They don’t get distracted or need breaks, don’t request vacation time or ask to leave an hour early, and never feel stressed out and start running slower. Robots can work around the clock, which speeds up production: they’re always there, doing what they’re supposed to do. Automation is typically far more reliable than human labor.
_
-43. Disadvantages of industrial robots include high initial investment, scarce expertise, and ongoing maintenance costs. Robots are too expensive for every organization to adopt: an organization has to invest a large share of its resources in keeping robots, and sometimes this cost exceeds human-resource expenses. Beyond installation, daily maintenance costs are also high; about three-quarters of a robot’s lifetime cost is software-related. To maintain robots, an organization needs highly qualified engineers and expensive tools to repair them.
Deleterious consequences of robotics are possible, as robots might directly or indirectly harm humans or their property. A robot is a machine that performs exactly what we command; given the wrong inputs, it can fail. Robots do not understand the intentions of their users, and this can harm humans. In recent years there have been a number of injuries, and even fatalities, resulting from interaction between workers and robotic machinery.
_
-44. A robot singularity is a configuration in which the robot end-effector becomes blocked in certain directions. It is caused by the collinear alignment of two or more robot axes, and it results in unpredictable robot motion and velocities. Any six-axis robot arm has singularities, and in the most common six-axis implementations, singularities are the most important configurations to avoid when planning motion.
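The mechanics behind this can be seen on the simplest possible case, a two-link planar arm with link lengths l1 and l2 (a standard textbook model, simplified from the six-axis case). Its Jacobian determinant is det(J) = l1·l2·sin(θ2), so when the elbow angle θ2 makes the links collinear (0 or π), det(J) = 0: the arm loses a direction of motion and the inverse kinematics would demand unbounded joint velocities.

```python
import math

# Singularity check for a 2-link planar arm (textbook model).
# det(J) = l1 * l2 * sin(theta2): zero exactly when the two links
# are collinear, i.e. the axes align and motion becomes degenerate.
def jacobian_det(l1, l2, theta2):
    return l1 * l2 * math.sin(theta2)

print(jacobian_det(1.0, 1.0, 0.0))          # 0.0 -> singular (arm stretched out)
print(jacobian_det(1.0, 1.0, math.pi / 2))  # 1.0 -> well-conditioned elbow
```

Motion planners for real six-axis arms do the analogous thing with a 6×6 Jacobian, monitoring how close its determinant (or smallest singular value) gets to zero and steering trajectories away from those configurations.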
_
-45. Robotic systems automate manufacturing applications while reducing the labor, production costs, and time associated with the process. These systems are used in almost every manufacturing industry today. While manual labor dominated manufacturing for centuries, it is the robotic system that revolutionized the process: manufacturers can now produce a superior-quality product in less time. The reasons companies consider investing in a robot system differ widely. Factors include the positive effect on part quality, increased manufacturing productivity (faster cycle time) and/or yield (less scrap), improved worker safety, reduced work-in-progress, greater flexibility in the manufacturing process, and cost reduction. Robotics offers benefits such as high reliability, accuracy, and speed of operation. Overall, robots increase productivity and competitiveness.
_
-46. Fully autonomous weapons, also known as “killer robots,” would be able to select and engage targets without meaningful human control. Robotic technology offers several advantages in military combat: machines never get sick, never turn a blind eye, do not hide when under fire, do not chat with their friends, and do not know fear, feel emotions, or surrender. On the other hand, killer robots cannot fulfil two of the basic tenets of warfare: discriminating friend from foe, and “proportionality”, determining a reasonable amount of force to gain a given military advantage. Technology should be used to empower all people, not to reduce us to stereotypes, labels, objects, or just a pattern of 1s and 0s. Machines don’t see us as people, just another piece of code to be processed and sorted. Machines cannot make complex ethical choices; they cannot comprehend the value of human life; they don’t understand context or consequences. Ensuring meaningful human control means understanding the technologies we use, understanding where we are using them, and being fully engaged with the consequences of our actions. Life-and-death decisions should not be delegated to machines. Humans, not machines, must be held accountable.
_
-47. One of the biggest concerns surrounding the introduction of robotic automation is its impact on workers’ jobs. The gloomy narrative says an invasion of job-killing robots is just around the corner, yet Japan and South Korea, where robot use is among the highest in the world, happen to have among the lowest unemployment rates. Employment at Amazon grew rapidly during a period in which the company went from using around 1,000 robots to over 45,000. Automation may induce productivity gains, increase market demand and the scale of production, and in turn increase labor demand. While some labor segments may be negatively affected, robots and automation increase productivity, lower production costs, create new jobs, and improve economic growth and GDP.
_
-48. Humans can be motivated by bias, including racism, sexism and homophobia, as well as by greed; AI may make mistakes, but it is unlikely to embezzle money from a company, for instance. It is data that informs AI systems’ decisions, so there are legitimate concerns over whether that data is flawed or biased. Ongoing efforts to ensure the quality of data and purge it of bias are essential to utilizing AI across business areas.
Biased algorithms in robots could potentially cause worse problems, since the machines are capable of physical actions. Recently a chess-playing robotic arm, reaching for a piece, trapped and broke the finger of its child opponent. There are several proposed ways to prevent the proliferation of prejudiced machines, including lowering the cost of robotics parts to widen the pool of people building the machines, requiring a license to practice robotics akin to the qualifications issued to medical professionals, and changing the definition of success. Researchers also call for an end to physiognomy, the discredited idea that a person’s outward appearance can reliably betray inner traits such as character or emotions.
There are legitimate questions about the ethics of employing AI in place of human workers. But what about when there’s a moral imperative to automate? Dangerous, life-threatening jobs are not a thing of the distant past. Logging, fishing, aviation, and roofing are very much thriving professions that each account for a large portion of work-related deaths and injuries. AI technology can and should be deployed to ensure that human beings do not have to be placed in such risky situations. AI, which can program machines to not only perform repetitive tasks but also to increasingly emulate human responses to changes in surroundings and react accordingly, is the ideal tool for saving lives. And it is unethical to continue to send humans into harm’s way once such technology is available. As natural disasters increase around the world, organizations that help coordinate rescue and relief efforts should invest in AI technology. Instead of sending human aid workers into risky situations, AI-powered robots or drones can better perform the tasks of rescuing people from floods or fires.
In many cases, AI can now exceed the diagnosis accuracy of human doctors (and at a lower cost and higher efficiency); so, it is morally indefensible to not commit the full resources of the health care industry toward building and applying that technology to save lives.
If lives can be saved and businesses can reach better outcomes less influenced by human faults, then it would be unethical not to invest in and apply AI and/or robotics.
_
-49. By mimicking a lifelike appearance or automating movements, a robot may convey a sense of intelligence or thought of its own. But the robots we are currently building are not the thinking machines of fiction; they are automation made by engineers and scientists. Regardless of how autonomous or intelligent a robot is, it is a tool, so it is not the robots that need the rules; it is us. Robots have no moral framework of their own; we must make the choices that position them within ours, so that they do not damage the rest of life on the planet. Asimov’s laws are inappropriate because they try to insist that robots behave in certain ways, as if they were people, when in real life it is the humans who design and use robots who must be the actual subjects of any law. As we consider the ethical implications of having robots in our society, it becomes obvious that responsibility does not lie with the robots themselves. Robots do not worry about shame, praise or fear, but they can be programmed to win at any cost, resulting in lying behaviour; and knowingly or unknowingly, we are teaching machines to lie.
_
-50. We do have emotionally intelligent robots that can make us believe they truly feel emotions. The fact of the matter, however, is that robots are programmed to react to the emotions of humans; they do not carry intangible emotions like empathy and sympathy inside them. In fact, robots are not even context-aware in most cases and are far from being sentient beings. It is important to understand that a robot can only solve the problems it was built to solve; it has no general analytical abilities. Nonetheless, when robots interact with people and appear to have human-like emotions, people may perceive them as capable of “thinking” or acting on their own beliefs and desires rather than their programs. In some situations, such as with socially supportive robots, social bonding with robots may be advantageous. For instance, in the care of old people, bonding with a robot could lead to higher compliance with taking medication as prescribed. People may also find happiness and fulfilment via interaction with robots.
______
Dr Rajiv Desai.MD.
August 29, 2022
______
______
Postscript:
Developers in Japan are offering a robot priest to conduct Buddhist funeral rites, complete with chanted sutras and drum tapping. A robot priest that delivers blessings in five languages and beams light from its hands has been unveiled in Germany. These robot priests spark discussion about whether a human is needed to deliver a blessing, or whether a machine can do it all.
_____