
ROS 101: Intro to the Robot Operating System


Clearpath Robotics brings us a new tutorial series on ROS!

Since we practically live in the Robot Operating System (ROS), we thought it was time to share some tips on how to get started with ROS. We'll answer questions like: Where do I begin? How do I get started? What terminology should I brush up on? Keep an eye out for this ongoing ROS 101 blog series, which will provide you with a top-to-bottom view of ROS, introducing basic concepts simply, cleanly and at a reasonable pace. This guide is meant as groundwork for new users, which can then be used to jump into the in-depth documentation at wiki.ros.org. If you are totally unfamiliar with ROS, Linux, or both, this is the place for you!

The ROS Cheat Sheet

This ROS Cheat Sheet is filled with tips and tricks to help you get started and to continue using once you’re a true ROS user. This version is written for ROS Hydro Medusa. Download the ROS Cheat Sheet here.

What is ROS?

ROS (Robot Operating System) is a BSD-licensed system for controlling robotic components from a PC. A ROS system is composed of a number of independent nodes, each of which communicates with the other nodes using a publish/subscribe messaging model. For example, a particular sensor's driver might be implemented as a node, which publishes sensor data in a stream of messages. These messages could be consumed by any number of other nodes, including filters, loggers, and higher-level systems such as guidance and pathfinding.
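
To make that concrete, here is a minimal sketch of such a "sensor driver" node, written with rospy (the ROS Python client library). The /temperature topic, the Float32 message type and the fixed reading are all made up for illustration; a real driver would publish its own message type from real hardware.

#!/usr/bin/env python
# Minimal publisher sketch (illustrative only): a fake sensor driver that
# publishes one reading per second on a hypothetical /temperature topic.
import rospy
from std_msgs.msg import Float32

rospy.init_node('fake_temperature_driver')
pub = rospy.Publisher('/temperature', Float32, queue_size=10)
rate = rospy.Rate(1)  # 1 Hz

while not rospy.is_shutdown():
    pub.publish(Float32(data=21.5))  # a made-up reading
    rate.sleep()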

Why ROS?

Note that nodes in ROS do not have to be on the same system (multiple computers) or even of the same architecture! You could have an Arduino publishing messages, a laptop subscribing to them, and an Android phone driving motors. This makes ROS really flexible and adaptable to the needs of the user. ROS is also open source, maintained by many people.

General Concepts

Let's look at the ROS system from a very high-level view. No need to worry about how any of the following works; we will cover that later.

ROS starts with the ROS Master. The Master allows all other ROS pieces of software (Nodes) to find and talk to each other. That way, we do not have to ever specifically state "Send this sensor data to that computer at 127.0.0.1." We can simply tell Node 1 to send messages to Node 2.


Figure 1

How do Nodes do this? By publishing and subscribing to Topics.

Let’s say we have a camera on our Robot. We want to be able to see the images from the camera, both on the Robot itself, and on another laptop.

In our example, we have a Camera Node that takes care of communication with the camera, an Image Processing Node on the robot that processes image data, and an Image Display Node that displays images on a screen. To start with, all Nodes have registered with the Master. Think of the Master as a lookup table where all the nodes go to find where exactly to send messages.


Figure 2

In registering with the ROS Master, the Camera Node states that it will Publish a Topic called /image_data (for example). Both of the other Nodes register that they are Subscribed to the Topic /image_data.

Thus, once the Camera Node receives some data from the Camera, it sends the /image_data message directly to the other two nodes (through what is essentially TCP/IP).


Figure 3
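
The consuming side is just as small. The sketch below (again rospy, again using the made-up /temperature topic from the sketch above) simply logs each message as it arrives; the Image Processing and Image Display Nodes do the same kind of thing with an image message on /image_data.

#!/usr/bin/env python
# Minimal subscriber sketch (illustrative only): logs every message received
# on the hypothetical /temperature topic from the previous sketch.
import rospy
from std_msgs.msg import Float32

def callback(msg):
    rospy.loginfo("Got a reading: %.1f", msg.data)

rospy.init_node('temperature_listener')
rospy.Subscriber('/temperature', Float32, callback)
rospy.spin()  # keep the node alive; callbacks run as messages arrive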

Now you may be thinking, what if I want the Image Processing Node to request data from the Camera Node at a specific time? To do this, ROS implements Services.

A Node can register a specific service with the ROS Master, just as it registers its messages. In the below example, the Image Processing Node first requests /image_data, the Camera Node gathers data from the Camera, and then sends the reply.


Figure 4
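
As a rough illustration of this request/reply pattern (not the camera example itself, and using the AddTwoInts example service type from the rospy_tutorials package rather than an image service), a service server in rospy looks roughly like this:

#!/usr/bin/env python
# Minimal service server sketch (illustrative only), using the AddTwoInts
# example service type from the rospy_tutorials package.
import rospy
from rospy_tutorials.srv import AddTwoInts, AddTwoIntsResponse

def handle_add(req):
    # Runs once per request; the return value is the reply sent to the caller.
    return AddTwoIntsResponse(sum=req.a + req.b)

rospy.init_node('add_two_ints_server')
rospy.Service('add_two_ints', AddTwoInts, handle_add)
rospy.spin()

Calling it from another node blocks until the server replies, which is exactly the "request data at a specific time" behaviour described above:

#!/usr/bin/env python
# Minimal service client sketch (run as a separate node).
import rospy
from rospy_tutorials.srv import AddTwoInts

rospy.init_node('add_two_ints_client')
rospy.wait_for_service('add_two_ints')
add_two_ints = rospy.ServiceProxy('add_two_ints', AddTwoInts)
print(add_two_ints(2, 3).sum)  # prints 5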

We will have another tutorial “ROS 101 – Practical Example” next week.

 

See all the ROS101 tutorials here.

 


ROS 101: A practical example


In the previous ROS 101 post, we provided a quick introduction to ROS to answer questions like What is ROS? and How do I get started? Now that you understand the basics, here's how to apply them in a practical example. Follow along to see how we actually 'do' all of these things…

First, you will need to run Ubuntu, and have ROS installed on it. For your convenience, you can download our easy-to-use image here:

https://s3.amazonaws.com/CPR_PUBLIC/LEARN_ROS/Learn_ROS-disk1.vmdk
https://s3.amazonaws.com/CPR_PUBLIC/LEARN_ROS/Learn_ROS.ovf

Login (username): user
Password: learn

Get VMware Player, and use the virtual disk above. If you don't want to use the provided image, follow the installation tutorial here (after installing Ubuntu 12.04):

http://wiki.ros.org/hydro/Installation/Ubuntu

Throughout the rest of the tutorials, we will be referencing the ROS cheatsheet available here.

Open a new Terminal window (Ctrl + Alt + T).
In the new terminal, type roscore (and press Enter).
This should produce something similar to the below.


ROS Figure 1

What you have just done is start the ROS Master, as we described above. We can now experiment with some ROS commands.
Open a new Terminal, and type in rostopic. This will give you a list of all the options that the rostopic command supports. For now, we are interested in rostopic list.
Type in rostopic list (and press Enter). This should give you a window like the following:


ROS Figure 2

The two entries listed above are ROS's built-in way of reporting and aggregating debug messages in the system. What we want to do is publish and subscribe to a message.
You can open a new Terminal again, or open a new tab in the same terminal window (Ctrl + Shift + T).
In the new Terminal, type in: rostopic pub /hello std_msgs/String "Hello Robot"


ROS Figure 3

Let's break down the parts of that command:

  • rostopic pub – This commands ROS to publish a new topic.
  • /hello – This is the name of the new topic (it can be whatever you want).
  • std_msgs/String – This is the topic type. We want to publish a string topic. In our overview examples above, it was an image data type.
  • "Hello Robot" – This is the actual data contained by the topic, i.e. the message itself.
Going back to the previous Terminal, we can execute rostopic list again.
We now have a new topic listed! We can also echo the topic to see the message: rostopic echo /hello


ROS Figure 4

We have now successfully published a topic with a message, and received that message.
Press Ctrl + C to stop echoing the /hello topic.
We can also look into the node that is publishing the message. Type in rosnode list. You will get a list similar to the one below. (The exact numbers beside the rostopic node may be different.)


ROS Figure 5

Because we asked rostopic to publish the /hello topic for us, ROS went ahead and created a node to do so. We can look into its details by typing rosnode info /rostopic_…..(whatever numbers)
TIP: In ROS, and in Linux in general, whenever you start typing something, you can press the Tab key to auto-complete it. If there is more than one entry, double-tap Tab to get the list. In the above example, all I typed was rosnode info /rost(TAB)


ROS Figure 6

We can get info on our topic the same way, by typing rostopic info /hello.


ROS Figure 7

You will notice that the node listed under “Publishers:” is the same node we requested info about.

Up until now, we have covered the fundamentals of ROS and how to use rostopic and rosnode.

Next time, we will compile a short example program, and try it out.

 

See all the ROS101 tutorials here.

ROS 101: Drive a Husky!


In the previous ROS 101 post, we showed how easy it is to get ROS going inside a virtual machine, publish topics and subscribe to them. If you haven't had a chance to check out all the previous ROS 101 tutorials, you may want to do so before we go on. In this post, we're going to drive a Husky in a virtual environment, and examine how ROS passes topics around.

An updated version of the Learn_ROS disk is available here:

https://s3.amazonaws.com/CPR_PUBLIC/LEARN_ROS/Learn_ROS-disk1.vmdk
https://s3.amazonaws.com/CPR_PUBLIC/LEARN_ROS/Learn_ROS.ovf

Login (username): user
Password: learn

If you just downloaded the updated version above, please skip the next section. If you have an older copy of the image, or are starting from a base install of ROS, please follow the next section.

Updating the Virtual Machine

Open a terminal window (Ctrl + Alt + T), and enter the following:

sudo apt-get update
sudo apt-get install ros-hydro-husky-desktop

Running a virtual Husky

Open a terminal window, and enter:

roslaunch husky_gazebo husky_empty_world.launch

Open another terminal window, and enter:

roslaunch husky_viz view_robot.launch

You should be given two windows, both showing a yellow, rugged robot (the Husky!).


 

The first window shown is Gazebo. This is where we get a realistic simulation of our robot, including wheel slippage, skidding, and inertia. We can add objects to this simulation, such as the cube above, or even entire maps of real places.


The second window is RViz. This tool allows us to see sensor data from a robot, and give it commands (we'll talk about how to do this in a future post). RViz is a simplified visualization, in the interest of speed.

We can now command the robot to go forwards. Open a terminal window, and enter:

rostopic pub /husky/cmd_vel geometry_msgs/Twist -r 100 '[0.5,0,0]' '[0,0,0]'

In the above command, we publish to the /husky/cmd_vel topic, of topic type geometry_msgs/Twist, at a rate of 100 Hz. The data we publish tells the simulated Husky to go forwards at 0.5 m/s, without any rotation. You should see your Husky move forwards. In the Gazebo window, you might notice simulated wheel slip and skidding.
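
If you prefer to see that command expressed as code, here is a rough rospy equivalent (a sketch only; a proper C++ version of this idea is built up later in this series). The two bracketed lists in the command map onto the linear and angular fields of the Twist message:

#!/usr/bin/env python
# Rough rospy equivalent of:
#   rostopic pub /husky/cmd_vel geometry_msgs/Twist -r 100 '[0.5,0,0]' '[0,0,0]'
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('husky_forward')
pub = rospy.Publisher('/husky/cmd_vel', Twist, queue_size=1)
rate = rospy.Rate(100)  # 100 Hz, matching the -r 100 flag

msg = Twist()
msg.linear.x = 0.5   # '[0.5,0,0]' -> linear x, y, z: forwards at 0.5 m/s
msg.angular.z = 0.0  # '[0,0,0]'   -> angular x, y, z: no rotation

while not rospy.is_shutdown():
    pub.publish(msg)
    rate.sleep()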

Using rqt_graph

We can also see the structure of how topics are passed around the system. Leave the publishing window running, and open a terminal window. Type in:

rosrun rqt_graph rqt_graph

This command generates a representation of how the nodes and topics running on the current ROS Master are related. You should get something similar to the following:


 

The highlighted node and arrow show the topic that you are publishing to the simulated Husky. This then updates the Gazebo virtual environment, which takes care of the movement of the joints (wheels) and the physics of the robot. The rqt_graph command is very handy when you are unsure who is publishing to what in ROS. Once you figure out which topic you are interested in, you can see its contents using rostopic echo.

Using tf

In ROS, tf is a special topic that keeps track of coordinate frames, and how they relate to each other. So, our simulated Husky starts at (0, 0, 0) in the world coordinate frame. When the Husky moves, its own coordinate frame changes. Each wheel has a coordinate frame that tracks how it is rotating, and where it is. Generally, anything on the robot that is not fixed in space will have a tf describing it. In the rqt_graph section, you can see that the /tf topic is published to and subscribed from by many different nodes.

One intuitive way to see how the tf topic is structured for a robot is to use the view_frames tool provided by ROS. Open a terminal window. Type in:

rosrun tf2_tools view_frames.py

Wait for this to complete, and then type in:

evince frames.pdf

This will bring up the following image.

Here we can see that all four wheels are referenced to base_link, which is referenced from base_footprint (toe bone connected to the foot bone, the foot bone…). We also see that the odom frame is driving the reference of the whole robot. This means that when the odometry updates (i.e. when you publish to the /cmd_vel topic and the robot moves), the whole robot moves in the odom frame.
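
The same information can be read from code rather than from frames.pdf. The sketch below uses the classic tf listener in rospy to ask, once per second, where base_link currently sits in the odom frame; it assumes the simulated Husky above is still running so that those frames actually exist.

#!/usr/bin/env python
# Sketch: look up the transform from the odom frame to base_link once per second.
# Assumes a robot (e.g. the simulated Husky) is publishing tf.
import rospy
import tf

rospy.init_node('tf_peek')
listener = tf.TransformListener()
rate = rospy.Rate(1)

while not rospy.is_shutdown():
    try:
        (trans, rot) = listener.lookupTransform('odom', 'base_link', rospy.Time(0))
        rospy.loginfo("base_link is at x=%.2f y=%.2f in the odom frame", trans[0], trans[1])
    except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
        pass  # tf data not available yet; try again next cycle
    rate.sleep()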

Tweet to drive a robot!


Happy Birthday!! Clearpath is officially 5 years old, and what better way to celebrate than to let all of our fans drive a robot. No matter where you are in the world, you can experience what it's like to drive Husky – well, a very mini, hacked-together Husky, that is. We've put together 'twit-bot' for your enjoyment, so you can move our bot around from the convenience of your smartphone using Twitter.

Here’s how it works:

Step 1: Mention Clearpath’s Twitter handle (@ClearpathRobots)
Step 2: Hash tag #MoveRobot
Step 3: Write the action you’d like it to take (examples are below)
Step 4: Watch it move on the live feed: http://www.twitch.tv/twitbot_cpr
Step 5: Share with your friends!

How does it move?

This little twit-bot can go just about anywhere and in any direction using the commands below (case insensitive).  The delay between the tweet and the streaming is about 30 seconds:

  • “forward” or “fwd”
  • “backward” or “bck”
  • “right” or “rght”
  • “left” or “ft”
  • “stop” or “stp”

You can also tweet colors to change the colors of the LED lights: blue, red, white, etc.

Of course, there are some hidden key words – easter eggs – in there too that you’ll just have to figure out on your own. I wonder if pop-a-wheelie is on the list?…

If you liked this article, you may also be interested in Clearpath's ROS 101 tutorials.

ROS 101: Drive a Grizzly!


So you have had a taste of driving a virtual Husky in our previous tutorial, but now want to try something a little bigger? How about 2000 lbs bigger?

Read on to learn how to drive a (virtual) Grizzly, Clearpath Robotics' largest and meanest platform. If you are totally new to ROS, be sure to check out our tutorial series starting here, as well as the ROS Cheat Sheet. Here is your next ROS 101 dose.

An updated version of the Learn_ROS disk is available here:

https://s3.amazonaws.com/CPR_PUBLIC/LEARN_ROS/Learn_ROS-disk1.vmdk
https://s3.amazonaws.com/CPR_PUBLIC/LEARN_ROS/Learn_ROS.ovf

Login (username): user
Password: learn

Updating the Virtual Machine

Open a terminal window (Ctrl + Alt + T), and enter the following:

sudo apt-get update
sudo apt-get install ros-hydro-grizzly-simulator 
sudo apt-get install ros-hydro-grizzly-desktop 
sudo apt-get install ros-hydro-grizzly-navigation

Running a virtual Grizzly

Open a terminal window, and enter:

roslaunch grizzly_gazebo grizzly_empty_world.launch

Open another terminal window, and enter:

roslaunch grizzly_viz view_robot.launch

You should be given two windows, both showing a yellow, rugged robot (the Grizzly!). The left one shown is Gazebo. This is where we get a realistic simulation of our robot, including wheel slippage, skidding, and inertia. We can add objects to this simulation, or even entire maps of real places.


Grizzly RUV in simulation

The right window is RViz. This tool allows us to see sensor data from a robot, and give it commands (more on that in a future post). RViz is a simplified visualization, in the interest of speed.


RViz – sensor data

We can now command the robot to go forwards. Open a terminal window, and enter:

rostopic pub /cmd_vel geometry_msgs/Twist -r 100 '[0.5,0,0]' '[0,0,0]'

In the above command, we publish to the cmd_vel topic, of topic type geometry_msgs/Twist, at a rate of 100 Hz. The data we publish tells the simulated Grizzly to go forwards at 0.5 m/s, without any rotation. You should see your Grizzly move forwards. In the Gazebo window, you might also notice simulated wheel slip and skidding. Enjoy and stay tuned for more soon!

 

See all the ROS101 tutorials here.

Grizzly on the Mars Emulation Terrain at NCFRN


by Ryan Gariepy

I recently spent over a week in Montreal talking with – and learning from – the Canadian field robotics community, first at the Computer and Robot Vision (CRV) conference and then at the NCFRN Field Trials. This was my first time attending CRV since I spoke at the 2012 conference in Toronto, and it’s quite inspiring how much robotics is spreading throughout Canada. Keynote speakers were Raff D’Andrea from ETH-Z and Frank Dellaert from Georgia Tech, both of whom had excellent talks. Bonus points to Frank for pointing out factor graphs to me for the first time, and Raff for bringing the best tabletop demo ever!


Testing out new software algorithms on Kingfisher

CRV transitioned right into the opening meetings of the NCFRN Field Trials – or, for those of you who don't like nested acronyms, the Natural Sciences and Engineering Research Council (NSERC) Canadian Field Robotics Network Field Trials. The NCFRN brings together Canada's leading field robotics researchers, companies, and supporting government organizations, and has just marked its second year in existence. The first two days were filled with presentations, seminars, and poster sessions, and from there it was into the eponymous field. Something that was both surprising and inspiring for me was seeing that Clearpath hardware represented every single ground vehicle and surface vessel in the poster session.

Spread from the Mars Emulation Terrain (aka the "Mars Yard") at the Canadian Space Agency in the east to McGill's Macdonald campus in the west, the field trials were quite the experience. Researchers and partners were able to see the algorithms operating live. They were able to collaborate on combining their technologies and, in the most extreme cases, swapped code between robots to test how robust their approaches were. In my opinion, one of the most valuable aspects of the field trials is improving how testing and deploying autonomous vehicles in remote locations gets done. Teams' preparations ranged from being as minimal and lightweight as possible to turning an entire cube van into a remote lab.

This year's field trials also mark an important milestone for Clearpath Robotics: it's the first time we've seen someone make coffee using our robots (at least, the first time we're aware of it). Apparently it's easier to plug a coffee maker into a Grizzly than it is to run an extension cord. I can understand why they were making coffee; this was the ASRL team, which did 26 repeated traverses of nearly a kilometer each based on a single, visually taught route. And, since the ASRL has a thing for setting videos of our robots to music, here's the research set to music.


Taking Husky for a walk in Montreal at NCFRN

I’ll close off by sharing the kind of experience which keeps us going: On the last day of the trials, I could only see six robots left rolling around the Mars Yard. There were three kinds, and all were robots designed and built by the Clearpath team – we’re thrilled to see that our work is having an impact on the robotics community and we can’t wait to see how Year 3 Trials will go next spring!

Clearpath introduces Jackal, their new unmanned ground vehicle


We've been keeping a huge secret and now we're ready to share it… Earlier this year, the Army Research Laboratory (ARL) contracted us to design an affordable, portable research platform for outdoor use. Working with them during the development phase, their team provided feedback for iterating on the platform's performance and capabilities. And now, it's ready to be released to the world. Introducing the newest addition to Clearpath's black and yellow robot family: the Jackal Unmanned Ground Vehicle!

Jackal offers end-to-end integration, including a built-in GPS, IMU, and computer, and a configurable top plate. Engineered for the great outdoors, Jackal's sturdy metal chassis, weatherproof design (IP62) and skid-steer drive enable all-terrain operation, as well as lab experimentation, for a variety of applications.

Read more on Clearpath Robotics Inc.

ROS 101: Creating a publisher node


By Martin Cote

In our previous post, we graduated from driving a Husky to taking on a Grizzly! Now it’s time to get down and dirty with what ROS is really made of: nodes! We will first be creating a workspace to work from, then we will write a simple publisher that will make our virtual Husky drive around randomly. If this is your first time visiting a Clearpath Robotics ROS 101 blog, get started here.

Creating a workspace and package

Before we begin writing a node, we need to create a workspace and a package. Workspaces are simply directories to store all of your packages. First we will need to create a new directory.

mkdir ~/ros101

This created a directory in your home folder, which we will use as a workspace directory. We now need to create a subdirectory in your workspace directory to store all your source code for your packages.

mkdir ~/ros101/src

The last step to creating the workspace will be to initialize the workspace with catkin_init_workspace.

cd ~/ros101/src
catkin_init_workspace

Now that our workspace has been created, we will create a package in the src directory we just created. This is done by navigating to the ~/ros101/src directory (which you should have already done in the last step) and using the catkin_create_pkg command, followed by the name we want for the package, and then by the other packages our package will depend on. This command creates another directory for your new package, and two new configuration files inside that directory with some default settings.

catkin_create_pkg random_husky_driver roscpp std_msgs

You can see that this created CMakeLists.txt and package.xml inside the random_husky_driver directory; this is also where you will store all the source code for your package. The roscpp and std_msgs dependencies were added into CMakeLists.txt and package.xml.

Writing the publisher

As mentioned in our previous post, a publisher publishes messages to a particular topic. For this tutorial, we will be publishing random commands to the /husky/cmd_vel topic to make your Husky visualization drive itself. Start by creating a file in your ~/ros101/src/random_husky_driver directory called random_driver.cpp, and copy the following code.

#include <ros/ros.h>
#include <geometry_msgs/Twist.h>
#include <stdlib.h>
#include <time.h>

int main(int argc, char **argv) {
    //Initializes ROS, and sets up a node
    ros::init(argc, argv, "random_husky_commands");
    ros::NodeHandle nh;

    //Creates the publisher, and tells it to publish
    //to the husky/cmd_vel topic, with a queue size of 100
    ros::Publisher pub=nh.advertise<geometry_msgs::Twist>("husky/cmd_vel", 100);

    //Seeds the random number generator
    srand(time(0));

    //Sets the loop to publish at a rate of 10Hz
    ros::Rate rate(10);

    while(ros::ok()) {
        //Declares the message to be sent
        geometry_msgs::Twist msg;
        //Random linear x value between -2 and 2
        msg.linear.x=4*double(rand())/double(RAND_MAX)-2;
        //Random angular z value between -3 and 3
        msg.angular.z=6*double(rand())/double(RAND_MAX)-3;
        //Publishes the message
        pub.publish(msg);

        //Delays until it is time to send another message
        rate.sleep();
    }
}

Let’s break down this code line by line:

#include <ros/ros.h>
#include <geometry_msgs/Twist.h> 

These lines include the headers that we are going to need. The <ros/ros.h> header is required for ROS functionality, and <geometry_msgs/Twist.h> is added so that we can create a message of that type.

ros::init(argc, argv, "random_husky_commands");
ros::NodeHandle nh;

The first line, ros::init, is used to initialize the ROS node and name it "random_husky_commands", while ros::NodeHandle starts the node.

ros::Publisher pub=nh.advertise<geometry_msgs::Twist>("husky/cmd_vel", 100);

Publishing a message is done using ros::Publisher pub=nh.advertise, followed by the message type that we are going to be sending, in this case geometry_msgs::Twist, and the topic that we are going to be sending it to, which for us is husky/cmd_vel.

The 100 is the message queue size; that is, if you are publishing messages faster than roscpp can send them, up to 100 messages will be saved in the queue to be sent. The larger the queue, the more delay there can be in robot movement if messages are buffered.

Therefore, in a real-life example, you will want a smaller queue for robot movement, where delayed movement commands are undesirable and even dangerous, but dropped messages are acceptable. For sensors, it is recommended to use a larger queue, since some delay is acceptable to ensure no data is lost.

ros::Rate rate(10)
...
rate.sleep()

ROS is able to control the loop frequency using ros::Rate, which dictates how rapidly the loop will run, in Hz. rate.sleep will delay a variable amount of time such that your loop cycles at the desired frequency. This accounts for time consumed by other parts of the loop. All Clearpath robots require a minimum loop rate of 10Hz.

while(ros::ok())

The ros::ok function will return true unless it receives a command to shut down, either through the rosnode kill command or by the user pressing Ctrl-C in a terminal.

geometry_msgs::Twist msg;

This creates the message we are going to send, msg, of the type geometry_msgs::Twist.

msg.linear.x=4*double(rand())/double(RAND_MAX)-2;
msg.angular.z=6*double(rand())/double(RAND_MAX)-3;


These lines calculate the random linear x and angular z values that will be sent to Husky.

pub.publish(msg)

We are finally ready to publish the message! The pub.publish adds msg to the publisher queue to be sent.

Compiling the random Husky driver

Compilation in ROS is handled by the catkin build system. The first step would usually be to set up our package dependencies in CMakeLists.txt and package.xml. However, this has already been done for us when we created the package and specified our dependencies. The next step is to declare our new node as an executable; this is done by adding the following two lines to the CMakeLists.txt file in ~/ros101/src/random_husky_driver:

add_executable(random_driver random_driver.cpp)
target_link_libraries(random_driver ${catkin_LIBRARIES})

The first line creates the executable called random_driver, and directs ROS to its source file. The second line specifies what libraries will be used. Now we need to build our workspace using the catkin_make command in the workspace directory:

cd ~/ros101
catkin_make

Let’s bring up the husky visualization as we did in a previous blog post.

roslaunch husky_gazebo husky_empty_world.launch

The final step is to source your setup.bash file in the workspace you have created. This script allows ROS to find the packages that are contained in your workspace. Don’t forget this process will have to be done on every new terminal instance!

source ~/ros101/devel/setup.bash

It's now time to test it out! Make sure you have an instance of roscore running in a separate terminal, then start the node.

rosrun random_husky_driver random_driver

You should now see Husky drive around! In a new terminal window, we can make sure that our node is publishing to the /husky/cmd_vel topic by echoing all messages on this topic.

rostopic echo /husky/cmd_vel


You should now see a stream of random linear x and angular z values.

See all the ROS101 tutorials here


Building a robotics company

By Ryan Gariepy

As we’ve made more of a name for ourselves within various startup communities, we’re commonly asked how we moved from our beginnings with little resources and no connections to a worldwide concern in robotics, especially with the field exploding as it has. In truth, there is a great deal to be said for “being in the right place at the right time”, but there are a few key simple lessons we’ve learned along the way that others with good ideas might be able to benefit from.

Some relevant background first. McKinsey & Company popularized the “3 Horizons” concept, where an existing company should be doing three things:

  • It should be looking for new opportunities and markets (Horizon 3)
  • It should build businesses from these new opportunities and markets when they become viable (Horizon 2)
  • It should sustain and extend its core businesses (Horizon 1)

One way to frame the goal of a startup is to get a business model from Horizon 3 to Horizon 1 before the money runs out. The big difference between a startup and an established company is that an established company tends to have time and resources on its side. A group of four of us working from a combination of school labs, living rooms, and basements doesn’t.

However, the concept still holds. How does one get from having an Idea to being a Company, where the latter hopefully implies things like having vacation time and salaries?

I’m not going to go into detail here about how one should or should not define or develop a product best, because there are innumerable ways and opinions put forth on that topic in particular, but we’ve found there are three basic stages you’ll probably go through while building out your team and company that correspond very closely to the 3 Horizons. Ideally, you’ll make it through these stages as fast as you can, because making it to the end means you can start all over again…but bigger! First, there’s the Idea.

Phase 1: The Idea

This is why I'm usually asked about this. I run into a person or six at a conference and they have an Idea. Since this is probably a robotics conference of some kind, this Idea has a Prototype already, which is probably good enough for putting up on a crowdfunding site. And, because startup communities in tech hubs the world over are doing their jobs, these people probably have a Market in mind. Our initial work is done. Unfortunately, they want to know what they need next, and the answer is usually "everything". Or, at least, a "Startup".

Phase 2: The Everything

The idea is still important, but now getting it into shape for customers matters just as much. Of equal importance is defining or creating your market, supporting your customers, making sure there's cash in the bank, determining which features release when – and the list goes on. Here lies the difficulty: every new idea needs a startup built with a slightly different recipe.

The common theme I'd like to get across is that once you've decided to move past that first idea, you're on the path of having to build the rest of the Startup, and you're going to be learning a lot along the way. You'll have a lot of different things to start caring about all of a sudden over and above your idea, and you'll also be taking on more financial and personal risk. So, it's best that you stop being a Startup as soon as possible and become a Company. This is one of the key reasons behind the "Lean Startup" approach – startups are risky and unstable, so get from Idea to Company as fast as possible.

Phase 3: Sustainability

Now, you’ve not only incorporated the idea into a sustainable business (Horizon 1), you’re building a team and structure that help you get your other ideas into the world, whether by applying your ideas into similar products and markets (Horizon 2), or by going all the way back to formula with new ideas based on the experience and team you’ve built so far (Horizon 3).

For those of you with ideas, don’t be discouraged by how small the idea might appear in the end once it’s examined as part of the greater whole. It is necessary. However, it’s not sufficient by far, and it’s best to go into starting an endeavour like this with at least a bit of awareness of what’s waiting on the other side.

About the Author

Ryan Gariepy is Co-Founder and CTO of Clearpath Robotics. He drives the development of Clearpath’s autonomous control software modules while guiding the continued expansion of Clearpath’s research platform lines, environmental surveying technology, and custom industrial automation solutions.

NERVE Center: The robot playground

Jackal on the water ramp at the University of Massachusetts Lowell NERVE Center in Boston.

By Meghan Hennessey

What’s the NERVE Center, you ask? It’s the robot testing facility at the University of Massachusetts Lowell and THE place to be when visiting the Boston area. We couldn’t pass up the opportunity to check out the site and test our bots across all NERVE Center courses. 

Kingfisher in the NERVE Center's fording basin.

What to expect at NERVE Center

Since we were in Boston for RoboBusiness 2014, the NERVE Center opened its doors to robot manufacturers who were in town for the conference; we took full advantage of the invitation to run Husky, Jackal and Kingfisher on as many test courses as we could. We drove through the sand and gravel bays and the "Ant Farm" (a series of tunnels filled with obstacles), and ran up and down the wet ramp in simulated rain. We even threw Kingfisher into the mix so it could swim some laps in the water-filled "fording basin." All the course layouts (and additional details about the NERVE Center) can be seen here: http://nerve.uml.edu/test-courses.php

It was a blast to run our robots through the courses, and we're proud to say they passed with flying colors! We owe a huge thanks to the center for allowing us to stop by for some fun on the robot-friendly playground.

Why is NERVE Center so useful?

The NERVE Center was started in 2012 to provide standardized test courses, testing services, and application prototyping for robotic vehicles – it's like it was made for our bots!

With the NERVE Center’s standardized courses, developers can test and validate their robots’ specifications, durability and function. Field robots are often required to meet strict performance requirements, and the NERVE Center offers 3rd party validation for questions such as:

  • How large a gap can the robot cross?
  • How steep of a slope can the robot climb?
  • What kind of obstacles can a robot traverse?

This type of testing is incredibly useful for robot developers as it provides 3rd party validation of the robot's performance and specs. Many of the courses were developed in conjunction with NIST (National Institute of Standards and Technology) and the US Army.

See the robot testing for yourself!

We’ve told you how awesome this robot wonderland is, and now, it’s time to see it for yourself. Watch as the Clearpath robots invade NERVE Center and get tested to the max!

This post originally appeared on the Clearpath Robotics blog.

ROS101: Creating a subscriber using GitHub

By Martin Cote

We previously learned how to write a publisher node to move Husky randomly. BUT: what good is publishing all these messages if no one is there to read them? In this tutorial we'll write a subscriber that reads Husky's position from the odom topic, and graphs its movements. Instead of just copy-pasting code into a text file, we'll pull the required packages from GitHub, a very common practice among developers.

Before we begin, install Git to pull packages from GitHub, and pygame, to provide us with the tools to map out Husky’s movements:

sudo apt-get install git
sudo apt-get install python-pygame

Pulling from GitHub

GitHub is a popular tool among developers due to its use of version control – most ROS software has an associated GitHub repository. Users are able to "pull" files from the GitHub servers, make changes, then "push" these changes back to the server. In this tutorial we will use GitHub to pull the ROS packages that we'll be using. The first step is to make a new directory to pull the packages into:

mkdir ~/catkin_ws/src/ros101
cd ~/catkin_ws/src/ros101
git init

Since we know the URL that hosts the repository, we can easily pull the packages from GitHub. Access the repository using the following command:

git pull https://github.com/mcoteCPR/ROS101.git

That's it! You should see a src and launch folder, as well as a CMakeLists.txt and package.xml in your ros101 folder. You now have the package "ros101", which includes the nodes "random_driver.cpp" and "odom_graph.py".

Writing the subscriber

We’ve already gone through the random_driver C++ code in the last tutorial, so this time we’ll go over the python code for odom_graph.py. This node uses the Pygame library to track Husky’s movement. Pygame is a set of modules intended to create video games in python; however, we’ll focus on the ROS portion of this code. More information on Pygame can be found on their website. The code for the odom_graph node can be found at:

gedit -p ~/catkin_ws/src/ros101/src/odom_graph.py

Let’s take a look at this code line by line:

import rospy
from nav_msgs.msg import Odometry

Much like the C++ publisher code, this includes the rospy library and imports the Odometry message type from nav_msgs.msg. To learn more about a specific message type, you can visit http://docs.ros.org to see its definition; for example, we are using http://docs.ros.org/api/nav_msgs/html/msg/Odometry.html. The next block of code imports the pygame libraries and sets up the initial conditions for our display.

def odomCB(msg):

This is the odometry callback function, which is called every time our subscriber receives a message. The body of this function simply draws a line on our display from the previously recorded coordinates to the coordinates read from the incoming odometry position message. This function will be called continually in our main loop.

def listener():

The following line starts the ROS node; anonymous=True means multiple copies of the same node can run at the same time:

rospy.init_node('odom_graph', anonymous=True)

rospy.Subscriber sets up the node to read messages from the "odom" topic, which are of the type Odometry, and calls the odomCB function when it receives a message:

rospy.Subscriber("odom", Odometry, odomCB)

The last line of this function keeps the node active until it’s shut down:

rospy.spin()
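
Assembled, and with the pygame drawing swapped for a simple log statement so that the sketch stays self-contained, the node looks roughly like this (a simplified sketch, not the actual odom_graph.py from the repository):

#!/usr/bin/env python
# Simplified sketch of the subscriber: logs Husky's x/y position instead of
# drawing it with pygame, as the real odom_graph.py does.
import rospy
from nav_msgs.msg import Odometry

def odomCB(msg):
    # Called every time a message arrives on the odom topic.
    pos = msg.pose.pose.position
    rospy.loginfo("Husky is at x=%.2f y=%.2f", pos.x, pos.y)

def listener():
    rospy.init_node('odom_graph', anonymous=True)
    rospy.Subscriber("odom", Odometry, odomCB)
    rospy.spin()

if __name__ == '__main__':
    listener()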

Putting it all together

Now it's time to test it out! Go ahead and close the odom_graph.py file and build your workspace using the catkin_make command in your workspace directory.

cd ~/catkin_ws
catkin_make

The next step is to launch our Husky simulation to start up ROS and all the Husky-related nodes:

roslaunch husky_gazebo husky_empty_world.launch

In this tutorial we have provided a launch file that will start both the random_driver and odom_graph nodes. The launch file is located in ~/catkin_ws/src/ros101/launch and is called odom_graph_test.launch. If you want to learn more about launch files, check out the launch file article on our support knowledge base. We will now source our workspace and launch both nodes with the launch file in a new terminal window.

source ~/catkin_ws/devel/setup.bash
roslaunch ros101 odom_graph_test.launch


There you have it! Our subscriber is now listening to messages on the odom topic, and graphing out Husky’s path.

 

See all the ROS101 tutorials here

Do more with Udev


By Paul Bovbel

Udev is a device manager for Linux that dynamically creates and removes device nodes for hardware devices. In short, it helps your computer find your robot easily. By default, hardware devices attached to your Linux (Ubuntu) PC will belong to the root user. This means that any programs (e.g. ROS nodes) running as an unprivileged (i.e. not root) user will not be able to access them. On top of that, devices will receive names such as ttyACMx and ttyUSBx arbitrarily, based on the order in which they were plugged in. Luckily, you can solve this, and more, with udev rules.

You probably already have at least one udev rule on your system that solves the naming problem for network devices, and you can take a peek at it in the /etc/udev/rules.d/ folder – it’s probably named 70-persistent-net.rules.

Some driver/software packages will already provide udev rules you can use. Check the /etc/udev/rules.d/ folder to see if there’s anything installed already. If the package is lazy and gives you a udev rule to install yourself, you can do this using:

sudo cp <rule file> /etc/udev/rules.d/

Writing a new udev rule:

If you still need to write your own rule to setup naming and permissions for your device, read on. Rules can get extremely complex, but the one below should cover 99% of use cases for ROS applications. If you’re looking for 99.9%, I suggest you start here. As an example, we will examine the udev rule provided by the urg_node driver in ROS:

SUBSYSTEMS=="usb", KERNEL=="ttyACM[0-9]*", ACTION=="add", ATTRS{idVendor}=="15d1", ATTRS{idProduct}=="0000", MODE="666", PROGRAM="/opt/ros/hydro/lib/urg_node/getID /dev/%k q", SYMLINK+="sensors/hokuyo_%c", GROUP="dialout"

A udev rule is made up of a number of comma-separated tags, as above. The tags are divided into two parts, matching and configuration, although they can be written into the rule in any order (confusingly enough).

Matching:

The matching part lets the udev device manager match the rule to the device you want. The manager will try to match all new devices as they get plugged in, so it’s important that the rule be specific enough to capture only the device you’re looking for, otherwise you’ll end up with a /dev/hokuyo symlink to an IMU. There are many potential matching tags, and the best way to pick the useful ones is to get all the device attributes straight from udev.

Run the following command, inserting the <devpath> such as /dev/ttyACM0:

udevadm info -a -p $(udevadm info -q path -n <devpath>)

You will get a list of all device attributes visible to udev:

looking at device '.../ttyACM0':

KERNEL=="ttyACM0"
SUBSYSTEM=="tty"
DRIVER==""
looking at parent device '...':
KERNELS=="3-3:1.0"
SUBSYSTEMS=="usb"
DRIVERS=="cdc_acm"
ATTRS{bInterfaceClass}=="02"
ATTRS{bInterfaceNumber}=="00"
looking at parent device '...':
...
ATTRS{idVendor}=="0483"
ATTRS{idProduct}=="5740"
...

Each of the device attributes is a potential tag. You can use any of the tags in the first section to filter, along with tags from a parent device. Use regex to make matching more flexible (e.g. [0-9] to match any number, * to match anything at all).

Example:

SUBSYSTEM=="tty", KERNEL=="ttyACM[0-9]*", ATTRS{idVendor}=="0483", ATTRS{idProduct}=="5740", ...

Due to the way udev is designed, you can only pull in the tags from one parent device. So in the above example you can always use the KERNEL and the SUBSYSTEM tag in your udev rule, but you cannot use both DRIVERS and ATTRS{idVendor} to do matching. Usually this is not a problem, since idVendor and idProduct are always present, and identify the majority of device types uniquely.

You should also add the ACTION tag, which is usually “add”, sometimes “remove” if you want to do something when the device is unplugged.

..., ACTION=="add", ...

Configuration:

Now that you have a rule that matches the device you want, you can add several configuration tags:

Tag – Usage
MODE="0666" – Set permissions to allow any user read/write access to the device.
SYMLINK+="hokuyo" – Create a symlink in /dev/ for this device.
RUN+="/bin/echo 'hello world'" – Execute an arbitrary command. (Advanced usage; more here.)

It is good practice to make sure symlinks are unique for each device, so the above is actually poor practice! If you have multiple devices of the same type (e.g. 2 Hokuyos), or if you have multiple devices using a generic USB-to-Serial converter (e.g. FTDI), a basic idVendor and idProduct rule will not properly discriminate between these devices, since udev will map all matching devices to the same symlink. There are several approaches:

Directly through device attributes:

If your device has a unique identifier, such as a serial number, encoded in its attributes, you can painlessly create a unique symlink:

..., SYMLINK+="device_$attr{serial}", ...

 

This will usually not be the case. If a parent device has a serial number, you can use the following trick using environment variables. Create a udev rule for the parent device to store an environment variable:

..., <match parent device>..., ENV{SERIAL_NUMBER}="$attr{serial_number}"

And a rule for the child device that uses the variable in the symlink:

..., <match child device>..., SYMLINK+="device_$env{SERIAL_NUMBER}"

External Program:

If the manufacturer does not expose a unique identifier through the device attributes, you can execute an external command using the PROGRAM tag:

PROGRAM="/bin/device_namer %k", SYMLINK+="%c"

Unlike the RUN tag, which spins off a separate process, this command will be executed before the rule is fully processed, so it must return quickly. The urg_node driver above uses this tag to execute a ROS binary:

PROGRAM="/opt/ros/hydro/lib/urg_node/getID /dev/%k q", SYMLINK+="sensors/hokuyo_%c"

Substitution argument %k refers to the device path relative to /dev/, and %c refers to the output of the PROGRAM tag.

Running a new udev rule:

Once you have sudo cp’ed your rule into the /etc/udev/rules.d/ folder, you can test it with your device. To get udev to recognize your rule, run the following command:

sudo udevadm control --reload-rules && sudo service udev restart && sudo udevadm trigger

You should be able to find a symlink in /dev/ that links to the full device path (e.g. /dev/ttyACM0), and the permissions on the device path should be read/write for all users.

If your permissions aren't being set, or your symlink is not being created in /dev/ as expected, you can try simulating udev's processing of the rule by running the following with the appropriate device path:

udevadm test $(udevadm info -q path -n /dev/ttyACM0)

Things to keep in mind

Check that your rule follows the naming convention – <priority>-<device name>.rules. Technically you can have multiple rules for the same device, and the number determines what order they get executed in. Since we're writing add-on rules, a priority of 99 is safest.

You can have multiple rules in a file separated by newlines. Make sure that each individual rule is on one line.

Check that all tags (matching and configuration) are comma separated.

Check that your rule file has a trailing newline.

Check that your rule is owned by the root user – ll /etc/udev/rules.d/ should say ‘root root’ for the rule file.

The post Do More with Udev appeared first on Clearpath Robotics Inc.

 

ROS, Arduino, and a Husky simulation


By Martin Cote

The Arduino family of microcontrollers has quickly become a go-to board for hobbyists due to its ease of use. Oftentimes roboticists must create communication protocols to allow their embedded hardware to communicate with a computer. One of the most powerful aspects of ROS is that it can communicate with a large variety of hardware. That's where rosserial comes in! Rosserial is a general protocol for sending ROS messages over a serial interface, such as the UART on Arduino. This allows you to easily interface any sensors attached to the Arduino into your ROS environment. Read on to learn more!

This tutorial will get you started by setting up the Arduino IDE and installing the rosserial libraries. Then, we'll test it out by driving the Husky simulation using a neat trick with the Arduino board. It should go without saying that you'll need an Arduino to complete this tutorial. We'll also be using a Husky simulator, so make sure to run through our "Drive a Husky" tutorial if you haven't done so yet.

Setup

The first step is to set up your Arduino environment; begin by installing the Arduino IDE and rosserial using:

sudo apt-get install arduino arduino-core ros-hydro-rosserial ros-hydro-rosserial-arduino

Once you open up the Arduino IDE for the first time, a directory named "sketchbook" will be created in your home directory. Now we need to import the rosserial library into your Arduino IDE. If your sketchbook directory is empty, you may need to create a directory named "libraries" inside of it.

mkdir ~/sketchbook/libraries
cd ~/sketchbook/libraries
rosrun rosserial_arduino make_libraries.py .

Restart your Arduino IDE and you should see ros_lib as part of your libraries!


 

You'll want to ensure that your Ubuntu user is part of the "dialout" group, which grants you access to serial ports. You can check by using

groups "username"

If you don't see "dialout", you can easily add yourself using

sudo gpasswd --add "username" dialout

One last step to make our lives easier will be to create a udev rule so your Arduino is recognized when plugged in and the correct permissions are set. For more information on udev rules, check out our udev article. Begin by plugging in your Arduino.

NOTE: If you are using a virtual machine, you will have to connect the Arduino to the virtual machine after plugging it in.


You can confirm your system is actually connected to Arduino by running

ls -l /dev

Depending on your specific Arduino board, you should see a line with either ttyACM# or ttyUSB#, where # is a number. This will also tell you the current permissions for your Arduino. Chances are you wouldn't be able to upload code to your Arduino at the moment because you don't have sufficient permissions, but we will soon solve that by creating a new udev rule!

The udev rule will require the product and vendor IDs to identify when our Arduino is connected, which you can easily find using

lsusb

You should see a line similar to

Bus 002 Device 005: ID 2341:0043

To confirm this is indeed your Arduino, disconnect it and run the command again, taking note of which entry has disappeared. Remember the ID numbers; in this case, 2341 is the vendor ID and 0043 is the product ID. Now venture over to your udev rules at:

cd /etc/udev/rules.d/

and create our new rules file. The naming convention for rules files follows "##-name.rules"; choose a number that isn't in use!

sudo gedit 97-arduino.rules

Copy the following into your new rules file, replacing #### with your product ID and vendor ID. For more information about what these tags mean, check out our article on udev rules.

SUBSYSTEMS=="usb", ATTRS{idProduct}=="####", ATTRS{idVendor}=="####", SYMLINK+="arduino", MODE="0777"

All that is left is to update your udev rules and reboot your system

udevadm trigger
sudo reboot

You should now see “arduino” as an entry in ls -l /dev with full permissions! (rwxrwxrwx)

Code

We're now set to upload our code to the Arduino! The code is fairly straightforward; however, if you have any difficulties following along, check out our "Creating a publisher" tutorial. Copy the following code into the Arduino IDE and click upload. If your udev rules were set correctly, you should be able to upload without any errors.

If you encounter any errors, verify your Arduino is coming up as "arduino" in ls -l /dev and that the proper permissions are set. You may also have to point the Arduino IDE towards the correct USB port in Tools -> Serial Port.

#include <ArduinoHardware.h>
#include <ros.h>
#include <geometry_msgs/Twist.h>

ros::NodeHandle nh;

geometry_msgs::Twist msg;

ros::Publisher pub("husky/cmd_vel", &msg);

void setup()
{
  nh.initNode();
  nh.advertise(pub);
}

void loop()
{
  //Touching pin 8 commands Husky to reverse; touching pin 4 commands it forwards
  if(digitalRead(8)==1)
    msg.linear.x=-0.25;
  else if (digitalRead(4)==1)
    msg.linear.x=0.25;
  else if (digitalRead(8)==0 && digitalRead(4)==0)
    msg.linear.x=0;

  pub.publish(&msg);
  nh.spinOnce();
}

Driving Husky

Now that Arduino is loaded with our code and publishing velocity commands, we can pass these messages along into our ROS environment. We’ll start by launching a Husky simulation:

roslaunch husky_gazebo husky_empty_world.launch

All that's left is to attach the Arduino to our ROS environment using:

rosrun rosserial_python serial_node.py _port:=/dev/arduino

We're ready to try it out! Go ahead and touch digital pin 4 and you should see Husky drive forwards! Similarly, if you touch digital pin 8, Husky will drive backwards.


This trick is made possible by a phenomenon known as parasitic capacitance, which is usually an unwanted effect in electronics design but serves nicely for the purpose of our example. That being said, this isn't the most reliable method, and is intended to provide a simple example with minimal equipment. If you are having difficulties moving your simulated Husky, try using rostopic echo /husky/cmd_vel to verify that commands are in fact being sent to Husky when you touch the pins.

Be sure to go through the rest of our ROS tutorials on our knowledge base. If you want to learn more about rosserial, be sure to visit the rosserial page of the ROS wiki.

Looking for other Clearpath tutorials? You might like these ones! Or visit www.support.clearpathrobotics.com.

ROS101: Creating your own RQT Dashboard


By Martin Cote

After working in the terminal, Gazebo and RViz, it's time for a change of pace. For this ROS101 tutorial we'll be detailing the basics of creating your own rqt dashboard! A dashboard is a single rqt window with one or more plugins displayed in movable, resizable frames. Dashboards generally consist of a number of plugins that, in combination, provide a suite of UI capabilities for working with robots and robot data.

Dashboards can be populated and configured interactively in an rqt session. A preferred configuration can be saved as a "Perspective," which saves the plugins loaded, their layout, where they are docked, their settings, and last-saved initial parameters (such as what topic we were last plotting). In this tutorial we'll be working with the Husky simulation in ROS Indigo. To install ROS Indigo, please see these instructions, and visit our Husky simulation page to install the Husky simulation.

Getting Started

The first step is to install rqt! We’ll also be installing some common plugins to create our dashboard:

sudo apt-get install ros-indigo-rqt ros-indigo-rqt-common-plugins ros-indigo-rqt-robot-plugins

We can then launch RQT by simply using:

rqt

In the Plugins menu, select each plugin you want to load. You can change the layout by dragging and rescaling each plugin by its title bar or edges.


A practical example

We'll now create an rqt dashboard with a few useful plugins and demonstrate a potential use case for this particular dashboard. For this tutorial we'll be using our virtual Husky to simulate sensor data. Open Husky up in Gazebo using:

roslaunch husky_gazebo husky_playpen.launch

You can minimize Gazebo for now as we set up our rqt dashboard. Begin by opening the following plugins from the Plugins drop-down menu, and resize them as you like:

Rviz
Plot x2
Bag
Robot Steering


Each plugin has its own uses and settings; for more information about a particular plugin, visit the rqt plugins page of the ROS wiki.

When you’re happy with a dashboard configuration, you can save the perspective by selecting Perspectives, Create Perspective, giving it a name, and asking it to clone the current perspective. These perspectives are saved locally and persist between sessions.

To export a perspective for closer management, such as sharing or persisting it to a repository, select Perspectives, Export and give it a name with the filename extension .perspective

Loading a Perspective

A perspective can be loaded interactively in rqt by selecting Perspectives, Import. However, a more useful way is to launch it from the command line, which allows us to wrap it in a script that can be rosrun or roslaunched:

rqt --perspective-file "$(rospack find my_ros_rqt_package)/config/my_dashboard.perspective"

Some plugins allow you to configure options that impact its installation and behavior. For example, the Python Console plugin allows you to choose which console implementation to use. You can access these options for any plugin by selecting the gear icon in its title bar. If no gear icon is present, the plugin has not been configured to provide an options menu.

Rviz: To load Husky into your Rviz plugin, select “open config” from the drop down menu, and navigate to /opt/ros/indigo/share/husky_viz/rviz/view_robot.rviz. You should now see a model of Husky loaded in Rviz! By default, this config file will include the simulated laser, and you can see the object in Husky’s path in the Gazebo environment.

 

3-load-config-rqt-1024x576

3-with-plugins-Gazebo-1024x576

Plot: The Plot plugin is very useful for plotting a particular topic in real time; for this example we will plot the commanded odometry versus the simulated odometry. In the input window at the top right of each Plot plugin, add one of the following topics:

/odometry/filtered/twist/twist/angular/z

and

/husky_velocity_controller/odom/twist/twist/angular/z
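If you would like to sanity-check these values outside of rqt, a minimal rospy sketch along the following lines could print the same angular-z field that the Plot plugin graphs (this assumes /odometry/filtered publishes nav_msgs/Odometry, as it does on a stock Husky simulation):

#!/usr/bin/env python
# echo_yaw_rate.py -- minimal sketch; topic name taken from the tutorial above
import rospy
from nav_msgs.msg import Odometry

def callback(msg):
    # Same field the Plot plugin is graphing: twist/twist/angular/z
    rospy.loginfo("yaw rate: %.3f rad/s", msg.twist.twist.angular.z)

if __name__ == '__main__':
    rospy.init_node('echo_yaw_rate')
    rospy.Subscriber('/odometry/filtered', Odometry, callback)
    rospy.spin()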

Robot Steering: The Robot Steering plugin provides a simple way to manually drive Husky. All that is required is to specify the topic that accepts velocity commands for your robot; for virtual Husky, that topic is /cmd_vel.
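The same topic can also be driven from a script rather than the GUI; a minimal rospy sketch (the velocity value is arbitrary) that commands a gentle turn in place might look like this:

#!/usr/bin/env python
# turn_in_place.py -- minimal sketch; /cmd_vel is the topic named above
import rospy
from geometry_msgs.msg import Twist

if __name__ == '__main__':
    rospy.init_node('turn_in_place')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rate = rospy.Rate(10)   # Husky expects a steady stream of commands
    cmd = Twist()
    cmd.angular.z = 0.5     # rad/s; positive is counter-clockwise
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()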

It’s time to put it together! Try commanding Husky to turn in place using the Robot Steering plugin, and watch your Husky in RViz turn in place while the laser scan updates! You should also see the commanded odometry in one of your plots, while the actual odometry lags slightly behind as it catches up to the desired value.

4-graph-topics-1024x576

Rqt bag: Rosbag is an extremely useful tool for logging, and our support team will often ask for a bag file to take a closer look at your system. It’s possible to record a bag through the terminal, but using rqt is much simpler and more intuitive. Let’s record a bag file of Husky driving around by clicking the record button and selecting the topics you want to record. Once you’re happy with the data recorded, stop the recording.

Playing a bag file back is just as simple. Let’s go ahead and close rqt and Gazebo so ROS is no longer running, then start ROS again with just roscore:

roscore

And open rqt back up and load the ROS bag plugin again:

rqt

This time we are going to open the bag file we just recorded by clicking the second button. You’ll now see all the topics that were recorded, and when messages were sent over each topic. You can take a closer look at a particular topic by right-clicking it and choosing to view its values or plot it.
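The same bag can also be inspected programmatically; here is a minimal sketch using the rosbag Python API (the filename is a placeholder for whatever you just recorded):

#!/usr/bin/env python
# inspect_bag.py -- minimal sketch; replace my_recording.bag with your own file
import rosbag

bag = rosbag.Bag('my_recording.bag')
# List which topics were recorded, their message types, and message counts
for topic, info in bag.get_type_and_topic_info().topics.items():
    print(topic, info.msg_type, info.message_count)
bag.close()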

 

5-Bag-reply-1024x576

For more information regarding rqt, please visit the ROS Wiki page. If you have any questions about this particular tutorial, please don’t hesitate to contact us!

Looking for other Clearpath tutorials? Here’s one you might like! See all the ROS101 tutorials here.

 

MATLAB Robotics System Toolbox and ROS



By Ilia Baranov

Recently, Mathworks released a new toolbox for Matlab. This is exciting for a number of reasons: it includes everything from data analysis to coordination of multiple robots. In today’s post, we explore using this Robotics System Toolbox to connect to both real and virtual Jackal robots.

The toolbox has made a number of improvements since the "beta" version we wrote a tutorial on a while ago. Matlab now supports ROS services and parameters, can analyze rosbag data, and ships with a very robust series of tutorials. It even supports generating code in Matlab Simulink and then running it on a ROS robot, with no extra downloads needed. This should make development of control algorithms faster for robots, and enable fairly detailed testing outside of ROS.

The following video shows a Jackal driving in circles, in both real and virtual space. It also demonstrates a simple lidar-based obstacle avoidance program that wanders around the environment.

To run the obstacle avoidance sample on a Jackal, ensure that the Matlab Robotics System Toolbox is installed, and download this file: Jackal_New.m

Change the first IP address to your Jackal (real or simulated) and the second to your own computer (the one running Matlab).

If you are ready to try analyzing bag data, download our sample file from the run shown in the video, and plot the laser data.

laserAvoid_Real.bag , scan_plot.m

Link to the toolbox:

http://www.mathworks.com/products/robotics/

Looking for other Clearpath tutorials? Check these out.


Clearpath and Christie demo 3D video game with robots


Christie-Clearpath-Hack-Week
By Ryan Gariepy

For our recent “hack week” we teamed up with one of the most innovative visual technology companies in the world, Christie, to make a 3D video game with robots. How did it all come together?

As seems to be the norm, let’s start with the awesome:

 

Inspiration

In late October, a video from MIT was making the rounds at Clearpath. Since we usually have a lot of robots on hand and like doing interesting demos, we reached out to Shayegan Omidshafiei to see if we could get some background information. He provided some insights into the challenges he faced when implementing this, and I was convinced that our team could pull a similar demo together so we could see how it worked in person.

Here’s what MIT’s setup looked like:

(Video: Melanie Gonick, MIT News. Additional footage and computer animations: Shayegan Omidshafiei)

At its most fundamental, this demo needs:

  • A small fleet of robots
  • A way for those robots to know where they are in the world
  • A computer to bring all of their information together and render a scene
  • A method of projecting the scene in a way which is aligned with the same world the robots use

In MIT’s case, they have a combination of iRobot Creates and quadrotors as their robot fleet, a VICON motion capture system for determining robot location, a Windows computer running ROS in a virtual machine as well as projector edge-blending software, and a set of 6 projectors for the lab.

There were three things we wanted to improve on for our demo. First, we wanted to run all-ROS, and use RViz for visualizations. That removes the performance hit from running RViz in a VM, and also means that any visualization plugins we came up with could be used anywhere Clearpath uses ROS. Second, we wanted to avoid using the VICON system. Though we have a VICON system on hand in our lab and are big fans, we were already using it for some long-term navigation characterization at the time, so it wasn’t available to us. Finally, we wanted to make the demo more interactive.

Improvement #1: All-ROS, all the time!

To get this taken care of, we needed a way to either run edge-blending software on Linux, or to use projectors that did the edge blending themselves. Fortunately, something that might be a little known fact to our regular audience is that Christie is about 10 minutes away from Clearpath HQ and they make some of the best digital projectors in the world that, yes, do edge blending and more. A few emails back and forth, and they were in!

For this project, Christie arrived with four Christie HD14K-M 14,000 lumen DLP® projectors and two cameras. The projectors use Christie AutoCal™ software and have Christie Twist™ software embedded right in. Christie rigged the four projectors in a 2 x 2 configuration on the ceiling of our warehouse. The cameras captured what was happening on the floor and sent that information to the Christie AutoCal™ software, which then automatically aligned and blended the four projectors into one giant, seamless, 30-foot projection-mapped digital canvas.

Christie sets up the 3D Projection system. (Photo courtesy of Christie.)

 

Improvement #2: No motion capture

Getting rid of the motion capture system was even easier. We already have localization and mapping software for our robots and the Jackals we had on hand already had LIDARs mounted. It was a relatively simple matter to map out the world we’d operate in and share the map between the two robots.

Now, I will make a slight aside here… Multi-robot operation in ROS is still not what one would call smooth. There are a few good solutions, but none that is clearly "the one" to use. Since all of our work here had to fit into a week, we took the quick way out. We configured all of the robots to talk to a single ROS Master running on the computer connected to the projectors, and used namespaces to ensure the data for each robot stayed tied to that robot. The resulting architecture was as follows:

Augmented-Reality-Blog-Diagram-1

All we had to do to sync the robot position and the projector position was to start training the map from a marked (0,0,0) point on the floor.
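The namespacing itself is mostly a matter of launching each robot’s nodes under a different prefix. As a rough sketch of the idea (the robot names are placeholders, not our actual configuration), a node that publishes to a relative topic name simply inherits whatever namespace it is launched into:

#!/usr/bin/env python
# ns_demo.py -- rough sketch of ROS namespacing; robot names are placeholders
import rospy
from std_msgs.msg import String

if __name__ == '__main__':
    rospy.init_node('status_reporter')
    # Relative name: under ROS_NAMESPACE=jackal1 this resolves to /jackal1/status
    pub = rospy.Publisher('status', String, queue_size=1)
    rate = rospy.Rate(1)
    while not rospy.is_shutdown():
        pub.publish(String(data='alive'))
        rate.sleep()

Launching the same script with ROS_NAMESPACE=jackal1 on one robot and ROS_NAMESPACE=jackal2 on the other keeps each robot’s topics separate on the shared Master.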

Improvement #3: More interactivity

This was the fun part. We had two robots and everyone loves video games, so we wrote a new package that uses Python (via rospy), GDAL, and Shapely to create a real-life PvP game with our Jackals. Each Jackal was controlled by a person and had the usual features we all expect from video games – weapons, recharging shields, hitpoints, and sound effects. All of the data was rendered and projected in real-time along with our robots’ understanding of their environment.
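As a toy illustration of the kind of geometry Shapely handles (this is not the actual game code, which, as noted below, we’re keeping to ourselves), a hit test between a shot and a robot footprint is only a few lines:

# hit_test.py -- toy illustration only; radii and coordinates are made up
from shapely.geometry import Point

def is_hit(shot_xy, robot_xy, robot_radius=0.4):
    # Model the robot footprint as a disc and check whether the shot lands inside it
    footprint = Point(robot_xy).buffer(robot_radius)
    return footprint.contains(Point(shot_xy))

print(is_hit((1.0, 0.2), (1.1, 0.0)))   # True: the shot lands within 0.4 m of the robot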

And, as a final bonus, we used our existing path planning code to create an entire "AI" for the robots. Since the robots already know where they are and how to plan paths, this part was done in literally minutes.

The real question

How do I get one for myself?

Robots: Obviously, we sell these. I’d personally like to see this redone with Grizzlies.

Projectors: I’m sure there are open-source options or other prosumer options similar to how MIT did it, but if you want it done really well, Christie will be happy to help.

Software: There is an experimental RViz branch here which enables four extra output windows from RViz.

The majority of the on-robot software is either standard with the Jackal or slightly modified to accommodate the multi-robot situation (and can also be found in our Jackal github repository). We intend to contribute our RViz plugins back, but they, too, are a little messy. Fortunately, there’s a good general tutorial here on creating new plugins.

The game itself is very messy code, so we’re still keeping it hidden for now. Sorry!
If you’re a large school or a research group, please get in touch directly and we’ll see how we can help.

Happy gaming!

Robots 101: Lasers


mapping
By Ilia Baranov

In this new Robots 101 series, we will be taking a look at how robots work, what makes designing them challenging, and how engineers at Clearpath Robotics tackle these problems. To successfully operate in the real world, robots need to be able to see obstacles around them, and locate themselves. Humans do this mostly through sight, whereas robots can use any number of sensors. Today we will be looking at lasers, and how they contribute to robotic systems.

the-day-the-earth-stood-still-gort3

Overview

When you think of robots and lasers, the first image that comes to mind might come from science fiction: robots using laser weapons. However, almost all robots today use lasers for remote sensing. This means that the robot is able to tell, from a distance, some characteristics of an object, such as its size, reflectivity, or color. When the laser is used to measure distance in an arc around the robot, it is called LIDAR. LIDAR is a portmanteau of Light and Radar: think of the sweeping radar beam shown in the films, but using light.

Function and Concepts

By Mike1024, via Wikimedia Commons

All LIDAR units operate using this basic set of steps:

1. Laser light is emitted from the unit (usually infrared)
2. Laser light hits an object and is scattered
3. Some of the light makes it back to the emitter
4. The sensor measures the distance (more on how later)
5. The sensor turns, and the process begins again

A great visual depiction of that process is shown to the right.

Types of LIDAR sensing

How exactly the laser sensor measures the distance to an object depends on how accurate the data needs to be. Three different methods commonly found on LIDAR sensors are:

Time of flight
The laser pulse is emitted and then received. The time difference is measured, and the distance is simply (speed of light) x (time) / 2, since the light travels to the object and back. This approach is very accurate, but also expensive due to the extremely high-precision clocks needed on the sensor. Thus, it is usually only used on larger systems and at longer ranges; it is rarely used on robots.


Phase difference

In this method, the emitted laser beam is modulated at a specific frequency. By comparing the phase shift of the returned signal at a few different frequencies, the distance is calculated. This is the most common way laser range measurement is done. However, it tends to have limited range, and is less accurate than time of flight. Almost all of our robots use this.

Angle of incidence
The cheapest and easiest way to do laser range-finding is by angle of incidence. Essentially, by knowing at what angle the reflected laser light hits the sensor, we can estimate the distance. However, this method tends to be of low quality. The Neato line of robotic vacuum cleaners uses this technology.
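In code, the first two measurement principles boil down to simple formulas; here is a rough sketch (the numbers are purely illustrative):

# lidar_distance.py -- illustrative formulas only; example numbers are made up
import math

C = 299792458.0  # speed of light, m/s

def tof_distance(round_trip_time_s):
    # Time of flight: the light travels to the target and back, hence the divide-by-two
    return C * round_trip_time_s / 2.0

def phase_distance(phase_shift_rad, modulation_freq_hz):
    # Phase difference: unambiguous only within half a modulation wavelength
    return C * phase_shift_rad / (4.0 * math.pi * modulation_freq_hz)

print(tof_distance(66.7e-9))        # ~10 m for a 66.7 ns round trip
print(phase_distance(1.0, 10e6))    # ~2.4 m for a 1 rad shift on a 10 MHz modulation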

Another thing to note is that so far we have only discussed 2D LIDAR sensors. This means they are able to see a planar slice of the world around them, but not above or below. This has limitations, as the robot is then unable to see obstacles lower or higher than that 2D plane. To fix this issue, either multiple laser sensors can be used, or the laser sensor can be rotated or "nodded" up and down to take multiple scans. The first solution is what the Velodyne line of 3D LIDAR sensors employs, while the second tends to be a hack done by roboticists. The issue with nodding a LIDAR unit is a drastic reduction in refresh rate, from tens of Hz down to 1 Hz or less.
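To make the "planar slice" idea concrete, a 2D scan is just a list of ranges at known angles, so converting it into points on the robot's plane takes only a few lines. A rough sketch, assuming a standard sensor_msgs/LaserScan message:

# scan_to_points.py -- rough sketch; assumes a standard sensor_msgs/LaserScan message
import math

def scan_to_points(scan):
    # Pair each range with its angle; the result is a planar slice of the world
    points = []
    for i, r in enumerate(scan.ranges):
        if scan.range_min < r < scan.range_max:   # discard invalid returns
            theta = scan.angle_min + i * scan.angle_increment
            points.append((r * math.cos(theta), r * math.sin(theta)))
    return points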

Manufacturers and Uses

Some of the manufacturers we tend to use are SICK, Hokuyo, and Velodyne.
This is by no means an exhaustive list, just the ones we use most often.

SICK
Most of our LIDARs are made by SICK. They offer a good combination of price, size, rugged build, and software support. Based in Germany.

Hokuyo
Generally considered cheaper than SICK, especially in larger quantities. Some issues with fragility, and software support is not great. Based in Japan.

Velodyne
Most expensive, used only when the robot MUST see everything in the environment. Based in the US.

Once the data is collected, it can be used for a variety of purposes. For robots, these tend to be:

  • Mapping
    • Use the laser data to find out dimensions and locations of obstacles and rooms. This allows the robot to find its position (localisation) and also report dimensions of objects. The title image of this article shows the Clearpath Robotics Jackal mapping a room.
  • Survey
    • Similar to mapping, however this tends to be outside. The robot collects long range data on geological formations, lakes, etc. This data can then be used to create accurate maps, or plan out mining operations.
  • Obstacle Avoidance
    • Once an area is mapped, the robot can navigate autonomously around it. However, obstacles that were not mapped (for example, squishy, movable humans) need to be avoided.
  • Safety Sensors
    • In cases where the robot is very fast or heavy, the sensors can be configured to automatically cut out motor power if the robot gets too close to people. Usually, this is a completely hardware-based feature.

 

Selection Criteria

A number of criteria are used to select the best sensor for a given application. The saying goes that an engineer is someone who can do for $1 what any fool can do for $2. Selecting the right sensor for the job not only reduces cost, but also ensures that the robot has the most flexible and useful data collection system.

  • Range
    • How far can the laser sensor see? This impacts how fast a robot is able to move, and how well it is able to plan.
  • Light Sensitivity
    • Can the laser work properly outdoors in full sunlight? Can it work with other lasers shining at it?
  • Angular Resolution
    • What is the resolution of the sensor? More angular resolution means more chances of seeing small objects.
  • Field of view
    • What is the field of view? A greater field of view provides more data.
  • Refresh rate
    • How long does it take the sensor to return to the same point? The faster the sensor is, the faster the robot can safely move.
  • Accuracy
    • How much noise is in the readings? How much does it change due to different materials?
  • Size
    • Physical dimensions, but also mounting hardware, connectors, etc
  • Cost
    • What fits the budget?
  • Communication
    • USB? Ethernet? Proprietary communication?
  • Power
    • What voltage and current is needed to make it work?
  • Mechanical (strength, IP rating)
    • Where is the sensor going to work? Outdoors? In a machine shop?
  • Safety
    • E-stops
    • regulation requirements
    • software vs. hardware safety

For example, here is a collected spec sheet for the SICK TiM551.

Range (m): 0.05 – 10 (8 with reflectivity below 10%)
Field of View (degrees): 270
Angular Resolution (degrees): 1
Scanning Speed (Hz): 15
Range Accuracy (mm): ±60
Spot Diameter (mm): 220 at 10 m
Wavelength (nm): 850
Voltage (V): 10 – 28
Power (W, nominal/max): 3
Weight (lb/kg): 0.55/0.25
Durability, 1 (poor) – 5 (great): 4 (it is IP67, metal casing)
Output Interface: Ethernet, USB (non-IP67)
Cost (USD): ~2,000
Light sensitivity: This sensor can only be used indoors.
Other: Can synchronize for multiple sensors. Connector mount rotates nicely.

 


If you liked this article, you may also be interested in:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

 

AgBot Robotic Seeding Challenge powered by $50K grant for Grizzly RUV


agbot_seeding_challenge_2016_clearpath
By Rachel Gould

The 2016 agBOT Robotic Seeding Challenge challenges participants to build unmanned robotic equipment to plant, measure and track multiple crop seeds to improve farming efficiency. The objective of the agBOT challenge is to reduce the harmful chemical by-products and erosion caused by inefficient farming techniques. The challenge will also inspire new solutions to reduce farms’ carbon footprints while increasing production – essential given that the agricultural sector must support the projected global population of nine billion by the year 2050. Clearpath Robotics, in conjunction with airBridge, is proud to offer a $50,000 grant toward the purchase of the Grizzly Robotic Utility Vehicle for teams in the 2016 challenge.

More about agBOT

The 2016 agBOT competition will be held on May 7, 2016. Additional challenges are set for 2017 and 2018. The 2016 competition will be hosted by Gerrish Farms in Rockville, Indiana, where competitors will be challenged to revolutionize the industry by improving precision and efficiency.

Feeding the world’s nine billion people

Although farming has become mechanized, the evolution of agricultural techniques to include unmanned robots provides a unique opportunity. The agricultural sector can increase productivity and sustainability by using intelligent robotics to analyze current farming practices, including fertilization and seedling variety. This is paramount in a world of shrinking farming communities and an ever-growing population.

The bot to get it done!

Grizzly RUV seeding a field.
Grizzly RUV seeding a field.

This year’s agBOT competition requires a robot that can function as an unmanned crop seeder; it must plant two types of seeds over half-mile-long rows. It must also supply real-time data using a mobile tracking antenna and a variety of analytics including down pressure and variety placement. Participating teams are responsible for developing all software, sensors and human-machine control interfaces to control tasks.

This complex list of requirements calls for a flexible, rugged, high-performing solution, which is why we’re excited to partner with airBridge to offer a $50,000 grant for Grizzlies that are used in the competition.

Clearpath_Grizzly_Agbot_Seeding_Challenge
Grizzly RUV in a corn field.

Grizzly is Clearpath’s largest all-terrain battery-operated robot. The mobile research platform offers the performance of a mini-tractor and the precision of a robot, with a max payload of 1320 lbs, a max speed of 12 mph, 8 inches of ground clearance, and 5 V, 12 V, 24 V and 48 V user power. See here for all technical specs.

Ready to participate in the agBOT 2016 challenge? Want to take advantage of this unique Grizzly grant opportunity? Get in touch with one of our unmanned experts.


If you liked this article, you may also be interested in:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

Simulating in MapleSim


maplesim
By Ilia Baranov

Robots are expensive and they usually depend on batteries. What if you want to run an experiment with 100 robots, running for 10 hours? To help answer this question, ROS has built-in support for robot simulation in the form of Gazebo. While this works quite well, it is currently unable to simulate physical properties such as battery behavior, temperature, or surface roughness. If your robotics research depends on accurate models, you may want to consider looking at MapleSim® 2015 – a high-performance physical modeling and simulation tool developed by Maplesoft™.

Below we can see a video of the Grizzly RUV following an open-loop control path over a surface. Elements like the current and voltage provided by the batteries, surface slipperiness, and weight distribution all play a role in where the Grizzly actually ends up. To see what we’re referring to, watch the quick simulation video below:

The MapleSim model features a 200 Ah lead-acid battery pack with a nominal voltage of 48 V, similar to the Type B battery pack used in the robot, to provide electrical power to move the vehicle over uneven terrain. The lead-acid battery used in the model is part of MapleSim’s Battery Component Library. The physical behavior of the battery is described by mathematical expressions curve-fitted to experimental measurements to provide accurate battery voltage and state of charge during operation of the robot.
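As an extremely simplified illustration of the kind of bookkeeping such a battery model performs (coulomb counting only; MapleSim’s library also captures voltage dynamics, temperature effects and so on, and the numbers here are illustrative, not measured):

# soc_sketch.py -- grossly simplified coulomb-counting illustration, not the MapleSim model
CAPACITY_AH = 200.0       # pack capacity quoted above

def update_soc(soc, current_a, dt_s):
    # Subtract the charge drawn over dt from the remaining capacity
    drawn_ah = current_a * dt_s / 3600.0
    return max(0.0, soc - drawn_ah / CAPACITY_AH)

soc = 1.0                  # start fully charged
for _ in range(3600):      # one hour of a steady 40 A draw, in one-second steps
    soc = update_soc(soc, 40.0, 1.0)
print(soc)                 # ~0.80: 40 Ah drawn from a 200 Ah pack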

The interaction forces and moments at the tire-terrain contact points are generated based on a 3D tire model. A 3D mathematical expression is used to describe the terrain surface to allow the tire-terrain contact points to be calculated based on the position of the vehicle. This 3D function is also used to generate the STL graphics of the terrain for animation (see Figure 1).

The model also outputs electric motor torques, speeds, and battery state of charge as shown below:

Results_window-1024x619

Setting up the simulation involves using the Grizzly RUV MapleSim model, available at the bottom of this post. The graphical representation is easy to understand and also quick to modify.

unnamed

 

Maple and MapleSim provide a testing and analysis environment based on virtual prototypes of the model. A number of analyses can be performed:

  • Virtual testing and analysis: an engineer can easily test the operation of the robot for any design in a virtual environment through simulations in MapleSim. Using Maple, different terrain surface conditions and tire force models can be generated to fit different test scenarios. As an example, the plot below shows different battery energy consumption (state of charge) rates under different terrain conditions.

graph

  • Battery optimization: the developed MapleSim model can be coupled with Maple’s powerful optimization toolboxes to determine the optimal battery size and optimize the Battery Management System (BMS) to minimize energy consumption, reduce battery temperature, and increase battery service life.
  • Motor sizing: the robot is equipped with four electric motors that are independently controlled to provide wheel torques and steering maneuvers. The seamless integration with Maple will allow the motor sizing optimization to be performed based on MapleSim simulations.
  • Chassis design and payload distribution: the virtual prototype of the system will allow engineers to easily vary payload locations and distributions and analyze their effects, e.g., roll-over, stability, controllability, etc., on certain tasks.
  • Path planning: using Maple, different terrain surface conditions and tire force models can be generated to fit different test scenarios for path planning.
  • Model-based controller design: the MapleSim model will allow the control strategies to be designed and tested for accuracy before being deployed on a real vehicle.
  • Localization and mapping: the high-fidelity dynamic model of the robot will allow state estimation algorithms, such as Kalman filter and other Bayesian-based filtering algorithms, to be performed at a high accuracy.
  • Optimized code generation: optimized C code can be generated from the MapleSim model for the purpose of implementing control, localization, and path planning strategies.

See here for more information on simulating the Grizzly in MapleSim.

 

3 supply chain trends to watch for in 2016


factory_arm_robot_packing_manipulation

The New Year is upon us and with that comes predictions of what 2016 has in store. Will Automated Guided Vehicles (AGVs) continue to drive materials on the factory floor? What is ‘Industry 4.0’ and when will it take shape? The factory of the future is around the corner, and these three supply chain trends for 2016 are the ones that will take us there.

1. Increased reliance on automation and robots

Industry Week reports that 2016 promises “vastly more automation for concentrating human effort on details and customization.” With warehouses experiencing increased cost pressures and supply chain complexities, operators are seeing just how much technology has advanced and how automation in particular can provide flexible transportation solutions. Solutions such as OTTO, for instance, offer infrastructure-free navigation and virtually limitless flexibility for material transport in factories. With this level of automation, robots can be relied upon to complete simple yet important tasks such as material transport or line-side delivery, while the human workforce will be able to focus its efforts on complex tasks, problem solving and strategy.

2. Realization of the Industry 4.0 vision

Internet_of_Things_Smart_phone_IoT_conveyor_Belt_industry_factory

In Gartner’s top 10 Strategic Technologies for 2016, analysts predict that the Internet of Things will have significant growth and impact over the next year in particular. Automation is only a piece of the puzzle when considering Industry 4.0; these evolutionary ‘smart factories’ will emerge with interconnected, centrally controlled solutions that are starting to become available in the marketplace. These solutions not only communicate with other material transport units (i.e. the fleet) on the factory floor, they also communicate with people by providing perceptive light displays and integrating into the existing WMS to receive dispatched tasks. In an interview with Forbes, Rethink Robotics CPMO Jim Lawson summed things up quite nicely: “We’ll see an aggressive push by sector leaders to accelerate the realization of the Industry 4.0 vision. Specifically, big data analytics combined with advances in cloud-based robotics.”

3. Service chains will become more important than product chains

warehouse_forklift_people_work_supply_chain_management

A supply chain is a system of organizations, people, activities, information, and resources that work together to move a product, whereas a production chain is the series of steps that need to take place to transform raw materials into finished goods. The supply chain trend of service chains becoming more important than product chains is something that will develop over the next 10 years, although it’s already becoming reality. Providing great, reliable products is a standard expectation in the marketplace, whereas service is often perceived as a ‘nice-to-have’ within manufacturing. SupplyChain247 explains, “Increasingly, discerning consumers are demanding more from pre and post-sales service for the goods they buy.” We can apply this statement to technology suppliers as well – as Industry 4.0 takes form, and automation and robots become paramount, providers of those technologies must offer factory operators more than the technology itself. A full solution package will include hardware, software and service to offer ongoing support and true business relationships within the supply chain.

 
