Springer handbook of robotics free download

This book is the result of course notes developed over many years for the course. The evolving course notes have been posted on the internet for years to support these classes. The for-purchase version of the book from Cambridge University Press has improved layout and typesetting, updated figures, different pagination (and fewer pages), and more careful copyediting, and it is considered the "official" version of the book.

But the online preprint version of the book has the same chapters, sections, and exercises, and it is quite close in content to the Cambridge-published version. We are posting four versions of the book; all have exactly the same contents and pagination.

They differ only in the sizes of the margins and the size of the print, as manipulated in Adobe Acrobat after LaTeXing. Two of the versions have working hyperlinks for navigating the book on your computer or tablet. To navigate the book using the hyperlinks, click on the hyperlink; to go back to where you came from, choose the button or keystroke appropriate to your pdf reader.

For example, on the Mac with Acrobat or Acrobat Reader, use cmd-left arrow. With Preview on the Mac, use cmd-[. Some readers on other operating systems use alt-left arrow. You can google to see which solution works for your pdf reader.

These files have been compressed to about 7 MB. Let us know if you have any problems reading them. Please note that some versions of the default Mac OS X pdf reader, Preview, are known to have bugs displaying certain images in pdf files. If a figure is not appearing properly, please try a better pdf viewer, like Acrobat Reader.

Videos are made with Northwestern's Lightboard. You can see an excellent collection of robotics videos at the Springer Handbook of Robotics Multimedia Extension. You can download summary slides for classroom teaching, covering much of the material in Chapters 2, 3, 4, 5, 6, 8, 9, 11, and others. In my current class, I [Lynch] do not have time to cover any material from Chapters 7, 10, and 12, nor parts of the material from Chapters 8, 11, and others. These slides are summaries only, leaving out full derivations, and they are used in class after students have watched the videos on their own time.

In my class, students also complete lecture comprehension problems on Coursera before attending the live class. I project the slides in PowerPoint and write on them during class using PowerPoint's "Draw" function and a Wacom One tablet, or I print them out and use a document camera to project as I write on the printouts. Writing on the slides helps with pacing and makes the class more interactive.

In the first part of class I review the material from the videos and reading, perhaps asking questions of the whole class. At the end of most slide decks there are fairly simple conceptual problems for small-group discussion, requiring no computers or difficult calculation.

It is important that morality-based decision-making becomes part of artificial intelligence systems. These systems must be able to evaluate the ethical implications of their possible actions, on several levels, including whether laws would be broken.

Most engineers would probably prefer not to develop systems that could hurt someone. Nevertheless, such harm can be difficult to predict.

We can develop a very effective autonomous driving system that reduces the number of accidents and saves many lives; but if the system takes lives because of unpredictable behavior, that would be socially unacceptable. Nor is it an option to be responsible for creating, or giving regulatory approval to, a system that carries a real risk of severe adverse events.

We see the effect of this in the relatively slow adoption of autonomous cars. Below is first an overview of the ethical challenges we face as more intelligent systems and robots enter our society, followed by the countermeasures that can be taken against technology risks, including machine ethics and designer precautions.

Our society faces a number of potential challenges from future highly intelligent systems, regarding both jobs and technology risks. The fear of losing jobs to machines has existed for decades, but experience shows that the introduction of information technology and automation creates far more jobs than are lost (Economist). Further, many will argue that jobs now are more interesting than the repetitive routine jobs that were common in earlier manufacturing companies.

Artificial intelligence systems and robots help industry provide more cost-efficient production, especially in high-cost countries. Thus, the need for outsourcing and replacing all employees can be reduced. Still, recent reports have argued that in the near future we will see an overall loss of jobs (Schwab and Samans; Frey and Osborne). However, other researchers mistrust these predictions (Acemoglu and Restrepo). Fewer jobs and working hours for employees could tend to benefit a small elite rather than all members of our society.

One proposal to meet this challenge is a universal basic income (Ford). Further, current social security and government services rely on the taxation of human labor; pressure on this system could have major social and political consequences.

If machines do everything for us, life could, in theory, become quite dull. Normally, we expect that automating tasks will result in shorter working hours.

However, what we see is that the distinction between work and leisure gradually becomes less evident, and we can do our jobs from almost anywhere. Mobile phones and wireless broadband give us the opportunity to work around the clock. The pressure to remain competitive leads many today to work more than before, although with less physical effort than in the jobs of the past. Although artificial intelligence contributes to this trend, we can simultaneously hope that automated agents will take over some of our tasks and thus also give us some leisure time.

For hundreds of years, the foundation of our society has been training humans to make things, to function and work in, and to understand our increasingly complex society. However, with the introduction of robots and of information and communication technology, the need for human knowledge and skills has gradually decreased, with robots making products faster and more accurately than humans.

Further, we can seek knowledge and be advised by computers. This lessens our need to train and use our cognitive capabilities of memory, reasoning, decision-making, and so on. This could have a major impact on how we interact with the world around us. It would be hard for humans to take over if the technology fails, and challenging to ensure we get the best solution if we depend only on information available on the web.

The latter is already a challenge today, given the blurred distinction between expert knowledge and alternative sources on the web. Thus, there seems to be a need to keep training humans in the future, to make sure that the technology works in the most effective way and that we retain the competence to make our own judgments about automatic decision-making.

Although mostly remotely controlled today, artificial intelligence is expected to be widely applied in future military unmanned aircraft (drones) and ground robots.

It can save lives in the military forces but can, through miscalculation, kill innocent civilians. Similarly, surveillance cameras are useful for many purposes, but many are skeptical of advanced tracking of people using artificial intelligence.

Almost any technology can be misused and cause severe damage if it gets into the wrong hands. As discussed in the introduction, a number of writers and filmmakers have addressed this issue through dramatic scenes where technology gets out of control. However, the development of technology has so far not led to a global catastrophe.

Nuclear power plants have gotten out of control, but the largest nuclear power plant accidents, at Chernobyl in Ukraine (then part of the Soviet Union) and Fukushima in Japan, were due to human and mechanical failure, not the failure of control systems.

At Chernobyl, the reactor exploded because too many control rods were removed during an experiment. At Fukushima, cooling pumps failed and reactors melted down as a result of the earthquake and subsequent tsunami. The lesson of these disasters must be that it is important for systems to have built-in mechanisms that prevent human error and help predict the risk of mechanical failure to the extent possible.

Looking back, new technology brings many benefits, and the damage it causes often takes a different form than we would first expect. Misuse of technology is always a danger, and it is probably a far greater danger than the technology itself getting out of control.

An example of this is computer software, which today is very useful to us in many ways, while we are also vulnerable to those who abuse the technology to create malicious software in the form of infectious and damaging virus programs.

The Melissa virus, for example, spread through e-mail and led to the failure of the e-mail systems of several large companies, such as Intel and Microsoft, due to overload. Currently, a number of people share their concerns regarding lethal autonomous weapons systems (Lin et al.).

Others argue that such systems could be better than human soldiers in some situations, if they are programmed never to break the agreed laws of war representing the legal requirements and responsibilities of a civilized nation (Arkin et al.). The book Moral Machines, which begins with the somewhat frightening scenario discussed earlier in this article, also contains a thorough review of how artificial moral agents can be implemented (Wallach and Allen). This includes the use of ethical expertise in program development.

It proposes three approaches: formal logical and mathematical ethical reasoning; machine learning methods based on examples of ethical and unethical behavior; and simulation, where you see what happens when different ethical strategies are followed. A relevant example is given in the book. Imagine that you go to a bank to apply for a loan. The bank uses an AI-based system for credit evaluation based on a number of criteria.

If you are rejected, the question arises as to what the reason was. You may come to believe that it is due to your race or skin color rather than your financial situation. The bank can hide behind the claim that the program cannot be analyzed to determine why your loan application was rejected, while at the same time claiming that skin color and race are not among the parameters used.

A system more open to inspection can, however, show that the residential address was crucial in this case: the chosen selection criteria produce effects almost as if unreasonable criteria had been used.
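The value of such an inspectable system can be illustrated with a toy scoring model whose decision decomposes into per-feature contributions, so a rejection can be audited. Everything here (feature names, weights, threshold) is an invented example, not a real credit model:

```python
# Toy, transparent credit scorer: the decision decomposes into
# per-feature contributions, so a rejection can be audited.
# Feature names and weights are invented for illustration.

WEIGHTS = {
    "income": 0.4,
    "debt_ratio": -0.5,
    "residential_address_risk": -0.8,  # a proxy feature worth auditing
}
THRESHOLD = 0.0  # total score >= THRESHOLD means the loan is approved

def score(applicant):
    """Return (approved, per-feature contributions) for inspection."""
    contributions = {f: w * applicant[f] for f, w in WEIGHTS.items()}
    return sum(contributions.values()) >= THRESHOLD, contributions

approved, why = score({"income": 1.0,
                       "debt_ratio": 0.4,
                       "residential_address_risk": 0.9})
most_negative = min(why, key=why.get)  # the feature that hurt the most
print(approved, most_negative)  # False residential_address_risk
```

Auditing `why` reveals that the address-derived feature contributed most to the rejection, exactly the kind of finding the text describes; a black-box model would not expose it.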

It is important to prevent this behavior as much as possible, for example by simulating AI systems to detect possibly unethical actions. However, an important related ethical challenge is determining how to perform the simulation. It is further argued that all software that will replace human evaluation and social functions should adhere to criteria such as accountability, inspectability, robustness to manipulation, and predictability.

All developers should have an inherent desire to create products that deliver the best possible user experience and user safety.

It should be possible to inspect the AI system so that, if it comes up with a strange or incorrect action, we can determine the cause and correct the system so that the same thing does not happen again. The ability to manipulate the system must be restricted, and the system must behave predictably. The complexity and generality of an AI system influence how difficult it is to meet the above criteria.

It is obviously easier and more predictable for a robot to move in a known and limited environment than in new and unfamiliar surroundings. Developers of intelligent and adaptive systems must, in addition to being concerned with ethical issues in how they design systems, try to give the systems themselves the ability to make ethical decisions (Dennis et al.).

This is referred to as computer ethics, where one looks at the possibility of giving the actual machines ethical guidelines. The machines should be able to make ethical decisions using ethical frameworks (Anderson and Anderson). It is argued that ethical issues are too interdisciplinary for programmers alone to explore them.

The book discusses why and how to include an ethical dimension in machines that will act autonomously. A robot assisting an elderly person at home needs clear guidelines for what is acceptable behavior for monitoring and interaction with the user.

Medically important information must be reported, but at the same time, the person must be able to maintain privacy. Maybe video surveillance of the user (by relatives or others) is desirable, but it should be clear to the user when and how it happens. Other work focuses on the importance of providing robots with internal models to make them self-aware, which can lead to enhanced safety and potentially also ethical behavior (Winfield). It could also be advantageous for multiple robots to share parts of their internally modeled behavior with each other (Winfield). The models can be organized in a hierarchical and distributed manner (Demiris and Khadhouri). Several works apply artificial reasoning to verify whether a robotic behavior satisfies a set of predetermined ethical constraints, which have to a large extent been defined by a symbolic representation using logic (Arkin et al.).

However, future systems would probably combine the programmed and machine learning approaches (Deng). While most work on robot ethics is tested by simulation, some work has been implemented on real robots, as in Winfield et al. This represents a contribution toward making robots that are ethical, as well as safe.

Implementing ethical behavior in robots inspired by the simulation theory of cognition has also been proposed (Vanderelst and Winfield), utilizing internal simulations of a set of behavioral alternatives, which allow the robot to simulate actions and predict their consequences. Professor and science fiction writer Isaac Asimov was foresighted in seeing, early on, the need for ethical rules for robot behavior.
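The internal-simulation idea can be sketched as a small loop: for each candidate action, the robot runs its internal world model forward, predicts the consequence, and discards actions whose predicted outcome violates an ethical constraint. The world model, action set, and constraint below are all toy assumptions, not any published system:

```python
# Toy "consequence engine": internally simulate each candidate action
# and keep only those whose predicted outcome is ethically acceptable.
# The world model and actions are invented assumptions.

def simulate(world, action):
    """Toy internal model: predict the world state after an action."""
    state = dict(world)
    if action == "proceed":
        state["robot_pos"] += 1
    # "stop" leaves the state unchanged.
    state["human_harmed"] = state["robot_pos"] == state["human_pos"]
    return state

def acceptable(predicted):
    """Ethical constraint: no predicted harm to a human."""
    return not predicted["human_harmed"]

world = {"robot_pos": 0, "human_pos": 1}
safe_actions = [a for a in ("proceed", "stop")
                if acceptable(simulate(world, a))]
print(safe_actions)  # ['stop'] -- "proceed" predicts a collision
```

The key design point is that the ethical check operates on predicted consequences, not on the actions themselves, which is what distinguishes this approach from a fixed list of forbidden actions.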

Subsequently, his three laws (Asimov) have often been referenced in the science fiction literature and among researchers who discuss robot morality: A robot may not harm a human being, or, through inaction, allow a human being to come to harm. A robot must obey orders given by human beings, except where such orders would conflict with the first law.

A robot must protect its own existence as long as such protection does not conflict with the first or second law. It has later been argued that such simple rules are not enough to prevent robots from causing harm (Lin et al.).
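As a rough illustration of how such rules might be encoded, and of how much they leave unspecified, the three laws can be written as ordered checks over candidate actions. The boolean predictions attached to each action are invented inputs; estimating them reliably is precisely the hard, unsolved part the critics point to:

```python
# Toy encoding of Asimov's three laws as ordered checks over candidate
# actions. The boolean fields on each action are invented predictions;
# a real robot would have to estimate them, which is the hard part.

def permitted(action):
    # First law: may not injure a human, or through inaction allow harm.
    if action["harms_human"] or action["allows_harm_by_inaction"]:
        return False
    # Second law: obey human orders, unless that conflicts with law one
    # (the conflict case is already excluded by the check above).
    if action["disobeys_order"]:
        return False
    # Third law: protect own existence, subordinate to laws one and two.
    if action["destroys_self"]:
        return False
    return True

actions = [
    {"name": "push_human", "harms_human": True,
     "allows_harm_by_inaction": False,
     "disobeys_order": False, "destroys_self": False},
    {"name": "fetch_tool", "harms_human": False,
     "allows_harm_by_inaction": False,
     "disobeys_order": False, "destroys_self": False},
]
allowed = [a["name"] for a in actions if permitted(a)]
print(allowed)  # ['fetch_tool']
```

Even this tiny sketch exposes the ambiguity: nothing in the laws says how to weigh two actions that each allow different harms, which is one reason such simple rules are considered insufficient.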

The term roboethics was introduced by the Italian robot scientist Gian Marco Veruggio (Veruggio and Operto). He saw a need for guidelines for developing robots that contribute to progress in human society and help prevent abuses against humanity. Veruggio argues that ethics are needed for robot designers, manufacturers, and users. We must expect that the robots of the future will be smarter and faster than the people they are meant to obey, which raises questions about safety, ethics, and economics.

How do we ensure that they are not being misused by persons with malicious intent? Is there any chance that the robots themselves, by understanding that they are superior to humans, would try to enslave us?

We are still far from the worst scenarios described in books and movies, yet there is reason to be alert. First, robots are mechanical systems that might unintentionally hurt us. Second, with an effective sensory system, there is a danger that the collected information can be accessed by unauthorized people and made available to others through the Internet.

Today this is a problem related to intrusions on our computers, but future robots may be vulnerable to hacking as well. This would be a challenge for robots that collect a lot of audio and video information from our homes. We would not like to be surrounded by robots unless we are sure that sensor data stay within the robots.

Another problem is that robots could be misused for criminal activities such as burglary. A robot in your own home could be reprogrammed by people with criminal intent, or they might have their own robots carry out the theft. Thus, having a home robot connected to the Internet will place great demands on security mechanisms to prevent abuse. Although we must assume that anyone who develops robots and AI for them has good intentions, it is important that developers also keep possible abuse in mind.

These intelligent systems must be designed so that the robots are friendly and kind, and difficult to abuse for malicious actions in the future. The discussion is natural for several reasons, including that military applications are an important driving force in technology development.

At the same time, military robot technology is not all negative, since it may save lives by replacing human soldiers in danger zones. However, giving robotic military systems too much autonomy increases the risk of misuse, including against civilians. The first international symposium on roboethics was held in Sanremo, Italy.

The EU has funded a research program, ETHICBOTS, where a multidisciplinary team of researchers was to identify and analyze techno-ethical challenges in the integration of human and artificial entities.

The European Robotics Research Network (Euronet) funded the project Euronet Roboethics Atelier, with the goal of developing the first roadmap for roboethics (Veruggio), that is, undertaking a systematic assessment of the ethical issues surrounding robot development.

The focus of this project was on human ethics for designers, manufacturers, and users of robots. Here are some examples of recommendations made by the project participants for commercial robots:.

There must be a password or other keys to avoid inappropriate and illegal use of a robot. Robots should have serial numbers and registration numbers, similar to cars.
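As a sketch of the password recommendation above, a robot could store only a salted PBKDF2 hash of the operator's password and verify attempts in constant time. The function names are illustrative assumptions (not any real robot's API), using only Python's standard library:

```python
# Sketch of the password recommendation: the robot stores only a salted
# PBKDF2 hash of the operator's password and checks attempts in
# constant time. Function names are illustrative, not a real robot API.

import hashlib
import hmac
import os

def enroll(password):
    """Store a random salt and a salted hash, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authorized(attempt, salt, digest):
    """Constant-time comparison guards against timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = enroll("operator-passphrase")
print(authorized("operator-passphrase", salt, digest))  # True
print(authorized("wrong-guess", salt, digest))          # False
```

Storing only the salted hash means that even if the robot's memory is extracted, the operator's password is not directly exposed, which addresses the "sensitive data the robot needs to save" recommendation as well.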

Software and hardware should be used to encrypt and password-protect sensitive data that the robot needs to save. Studies of the ethical and social implications of robotics continue, and books and articles disseminate recent findings (Lin et al.). It is important to include the user in the design process, and several methodologies have been proposed. Value-sensitive design is one, consisting of three phases: conceptual, empirical, and technical investigations accounting for human values.

The investigations are intended to be iterative, allowing the designer to modify the design continuously (Friedman et al.). Others have proposed regulating robots in the real world with the following rules (Boden et al.): Robots are multiuse tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.

Humans, not robots, are responsible agents. Robots should be designed and operated, as far as is practicable, to comply with existing laws and fundamental rights and freedoms, including privacy. Robots are products.


