The assigned article mentioned how incredibly
difficult it was for a robot to autonomously map its surroundings and understand
its position within them. A human being can simply open his/her eyes, look around,
and recognize landmarks and other important indicators of his/her whereabouts. The
extraordinary difficulty a robot faces in completing such a basic human task interested
me and guided my search for articles for this week’s discussion.
The first article is an experimental summary from
the late 1990s titled “A Robot Map Creation Algorithm.” This article provided both
the logic and the experimentation behind an early attempt at constructing an algorithm
that lets a robot build a map of its surroundings and localize itself
within them. The theory behind the process was simple enough and made sense: the
robot would drive to 28 different locations on a map and take a sonar
reading at each one. These sonar readings would essentially serve as
pictures for the robot to study, each with its own unique imprint.
Then, once the robot completed its journey to the 28 stops, it would compile
the sonar readings it had taken into a sonar map. Finally, to test the robot’s
ability to understand its surroundings, the robot would be placed at any one of the
locations, take a sonar reading, compare it to the stored sonar map, and
output which stored location most closely matched its current one. The
author mentioned that one initial problem was the appearance of “stealth” readings,
where the sonar would essentially bounce off angled surfaces and never return
cleanly to the sensor. The robot was programmed to discard these outliers so they
would not interfere with the legitimate sonar readings. In addition, the author
mentioned that there were some errors with the sensor itself and that the reported
distances varied somewhat from the actual distances. However, since the article was
published over ten years ago, I would expect that sensor technology has improved
greatly since then.
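To make the matching step concrete, here is a minimal sketch of how a sonar-map lookup like this could work, with out-of-range “stealth” echoes thrown out before comparison. The eight-beam readings, the maximum-range cutoff, and the nearest-match scoring are my own simplified assumptions for illustration, not the paper’s actual algorithm.

import numpy as np

SONAR_MAX_RANGE = 5.0  # assumed sensor limit in meters (illustrative, not from the article)

def clean_reading(ranges):
    # Drop "stealth" returns: pulses that glance off angled surfaces come back
    # at (or beyond) the sensor's maximum range, so treat those as outliers.
    ranges = np.asarray(ranges, dtype=float)
    return np.where(ranges >= SONAR_MAX_RANGE, np.nan, ranges)

def build_sonar_map(readings_by_location):
    # Compile one cleaned sonar signature per surveyed stop.
    return {loc: clean_reading(r) for loc, r in readings_by_location.items()}

def localize(sonar_map, new_reading):
    # Return the stored stop whose signature best matches the new reading,
    # comparing only the beams that are valid in both signatures.
    query = clean_reading(new_reading)
    best_loc, best_score = None, float("inf")
    for loc, signature in sonar_map.items():
        valid = ~np.isnan(signature) & ~np.isnan(query)
        if not valid.any():
            continue
        score = np.mean((signature[valid] - query[valid]) ** 2)
        if score < best_score:
            best_loc, best_score = loc, score
    return best_loc

# Toy usage with three of the surveyed stops, each seen by an 8-beam sonar ring.
survey = {
    "stop_1": [1.2, 0.9, 5.0, 2.1, 3.3, 0.8, 1.1, 2.4],
    "stop_2": [2.5, 2.6, 2.4, 2.5, 5.0, 2.7, 2.5, 2.6],
    "stop_3": [0.5, 4.0, 3.9, 0.6, 0.5, 4.1, 3.8, 0.6],
}
sonar_map = build_sonar_map(survey)
print(localize(sonar_map, [1.3, 1.0, 5.0, 2.0, 3.2, 0.9, 1.0, 2.5]))  # prints "stop_1"

Comparing only the beams that are valid in both signatures keeps a single stray echo from ruling out an otherwise good match, which is the same idea as the paper’s outlier discarding.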
The second article dealt with the application of
these autonomous vehicles and the possibility of an autonomous highway. It
started out by describing how growing traffic congestion has been an
ongoing problem, one calculated to cost the US economy alone over $50
billion. The past solution of expanding highways and adding
more lanes would be nearly impossible given the density of the areas
these highways run through. However, making traffic flow more efficiently could
solve the problem. The combination of autonomous vehicles, which can be
programmed to drive at a very consistent pace in very close proximity to
other cars, and a complex matrix of computers would allow for more capacity in
the current highway lanes. An increase from 2,000 to 6,000 cars passing per hour
could be expected from this improvement. This increase is only possible
because of the creation of robots. Without the precision and ability for
several cars to drive within a few feet of each other at high speeds, the
current lane capacity would remain fixed. The two-second rule for judging a
safe following distance behind the car ahead of you could now be replaced
with a standard two-foot gap, regardless of the speed the cars are traveling.
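To see where numbers like these come from, here is a rough back-of-the-envelope sketch comparing the two approaches. The 65 mph speed, 15-foot car length, 10-car platoon size, and 400-foot gap between platoons are my own illustrative assumptions, not figures from the article; only the two-foot gap between cars comes from it.

FT_PER_MILE = 5280
CAR_LENGTH_FT = 15.0  # assumed average car length

def capacity_two_second_rule(speed_mph):
    # Vehicles per hour when every driver keeps a two-second gap to the car ahead.
    speed_fps = speed_mph * FT_PER_MILE / 3600
    headway_ft = 2.0 * speed_fps + CAR_LENGTH_FT  # the gap plus the car itself
    return 3600 * speed_fps / headway_ft

def capacity_platooned(speed_mph, platoon_size=10, intra_gap_ft=2.0, inter_gap_ft=400.0):
    # Vehicles per hour when automated cars travel in tight platoons, with a
    # larger safety gap left between one platoon and the next.
    speed_fps = speed_mph * FT_PER_MILE / 3600
    platoon_footprint_ft = (platoon_size * CAR_LENGTH_FT
                            + (platoon_size - 1) * intra_gap_ft
                            + inter_gap_ft)
    return 3600 * platoon_size * speed_fps / platoon_footprint_ft

print(round(capacity_two_second_rule(65)))  # about 1,700 cars per hour
print(round(capacity_platooned(65)))        # about 6,000 cars per hour with these assumptions

With those assumed numbers the platooned lane carries roughly three times as many cars as the two-second-rule lane, which is the same order of improvement the article describes.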
This article really provided a great application of these robotic technologies
and showed how the world as we know it may well change in the next few years!