My initial tests of the Pixy2's object recognition were disconcerting. The camera's color-based recognition proved highly light-dependent and prone to false positives.
I had hoped that with more even lighting, and the consistent background of the arena, the recognition would be more consistent.
The Pixy2's sensitivity can be set, and I tried adjusting that from its default value.
This seemed to produce more accurate results without the false positives I had seen previously. The object was still recognized at the maximum 1.5 meter distance of the arena, but the recognition was lost beyond that distance.
In Tidy Up the Toys, the colored blocks start at a distance of 1.1 meters from the robot. At this distance, the object was reliably recognized.
In lower light conditions, the object was still recognized, although sporadic false positives appeared.
There were no false positives with similar objects of different colors.
The Pixy2 can store seven different color signatures. After training the camera on each of the three different colored objects, it was able to recognize all three.
I found that it was necessary to train the Pixy2 in a well lit environment. This seemed to give the camera a better understanding of the objects' shapes, allowing the Pixy2 to still recognize the objects when the lighting was worse.
I also found that different colors required different sensitivities to avoid false positives.
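One way to picture the sensitivity setting is as a tolerance band around the trained color: widen it and dim objects match again, but so do unrelated colors. This is only an illustrative sketch of the idea, not the Pixy2's actual algorithm; the matches_signature helper and the RGB values are my own invention.

```python
def matches_signature(pixel, trained, tolerance):
    """Return True if an (r, g, b) pixel falls within `tolerance`
    of the trained (r, g, b) color on every channel."""
    return all(abs(p - t) <= tolerance for p, t in zip(pixel, trained))

# a "red" signature trained under bright light
red_sig = (200, 40, 40)

# under dim light the same block reflects less light and looks darker
dim_red = (150, 30, 30)

# a tight tolerance loses the object in dim light...
print(matches_signature(dim_red, red_sig, 30))   # False
# ...while a loose tolerance finds it again
print(matches_signature(dim_red, red_sig, 60))   # True
# but the loose tolerance also matches unrelated colors (false positives)
print(matches_signature((150, 90, 40), red_sig, 60))  # True
```

This is why each color needed its own setting: how far a color drifts under lighting changes, and how close it sits to other colors in the scene, differs per object.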
We started off with the idea that we would try to progress with our bot as normally as possible. Unfortunately, the current situation the world is dealing with has brought that to a halt. New Mexico entered another shutdown as of November 16th, 2020. This has reduced the availability of components, slowed shipping, and our team has yet to be able to meet in person. We have continued to meet weekly online to talk through ideas and work on problems together.
Currently, Colin is working on object recognition, and Jimmy is working on voice recognition, which we hope will give the bot an alternative way to be controlled. Joseph is continuing to refine the bot’s armature and grappling claw, and I am looking into new ways to remotely control the bot itself. We are continuing to do this with our personal Dexter robots, as we only have one competition robot. Hopefully we will not have many problems transferring what we have learned with the Dexter bots over to the competition robot.
The Pixy2 uses color to recognize objects, and returns the coordinates of a bounding box around those objects, similar to the TensorFlow algorithms. In the Pi Wars challenges, the toy blocks and fish tank should be distinctly colored objects against a non-colored arena wall, so the Pixy2 seems like a promising option.
The Pixy2 is trained by holding a colored object in front of the camera.
The camera uses a region growing algorithm to find the connected pixels in the image that make up the object in front of the camera. It seems to use the dominant color of the object to decide which pixels are part of the object. The more pixels that the Pixy2 is able to detect, the more accurate its understanding of the object's shape will be.
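A toy version of region growing can make this concrete: starting from a seed pixel, the algorithm repeatedly absorbs neighboring pixels that share the same color, and everything it absorbs becomes the object's blob. This is my own minimal sketch of the general technique, not the Pixy2's actual (proprietary) implementation.

```python
def grow_region(grid, seed):
    """Flood-fill outward from `seed`, collecting connected cells that
    share the seed's color; returns the set of (row, col) cells found."""
    target = grid[seed[0]][seed[1]]
    region, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in region:
            continue
        if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == target:
            region.add((r, c))
            # visit the four neighbors next
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region

# 0 = background, 1 = "red" object pixels
frame = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 1],   # same color but disconnected: not part of the region
]
blob = grow_region(frame, (1, 1))
print(len(blob))  # 4
```

The size of the resulting region is exactly the "more pixels detected, more accurate shape" idea: a bigger connected blob gives the camera more to work with.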
Once the Pixy2 has been "trained" to detect the object, it can then identify the same object in front of it.
This is where the big limitation of the Pixy2 appears. Color is inherently tied to light. The Pixy2 seems to have poor light sensitivity in general, which makes it difficult to use in rooms that aren't well lit.
Even when the object is still clearly visible in the camera's image, it's not always detected. Why? Because the difference in lighting has changed the saturation of the object's color, such that the object no longer matches the color that the Pixy2 was trained to recognize.
Adding more light can cause a similar problem.
Will this make the Pixy2 unusable? Possibly. We plan to run the challenges in a well lit space, with relatively consistent lighting. Hopefully the combination of consistent lighting and consistent arena walls will mitigate these problems.
As PiWars comes closer, CNM HackerSpace has registered and been accepted in the intermediate category. A good portion of the team is new to PiWars, and we have varying degrees of experience in programming and design. To get the team onto a similar level, we have borrowed a few Dexter GoPiGo 3s to learn how various sensors and motors work.
Our first task as a team was to get our Dexter robots built and moving. Once we all had our robots moving, we started attaching sensors and getting our robots moving based on input from those sensors. The first sensor we messed with was the GoPiGo distance sensor. Below is an image of one of our experiments using that sensor.
Once we got the distance sensor working, we added a servo to make the distance sensor turn in both directions so we can see which direction is better to go, as seen below.
At the beginning of Fall Semester 2020, CNM’s PiWars competitive robotics team faces a major hurdle. We are attempting to recreate remotely what has always been done in a social setting. Not only has the current situation made it difficult to meet up and work collectively, but we also needed to figure out how to share one robot between four team members.
The current fix for this is Kerry Bruce, CNM instructor and one of the heads of CNM Hackerspace, supplying us with four different Dexter robots to practice on while we discuss the collective plan for the team bot. Luckily, we will be using an existing robot built for last year’s PiWars competition by the previous team. This year’s team consists of two previous PiWars team members and two newcomers. Jimmy Alexander has the competition robot and will be leading the way on the team’s bot.
Most weekly team discussions have consisted of talks about what to change on the competition bot, dubbed HAL 4.0, to try and polish it up from last year’s iteration. When not talking about what to change on the bot, the team members are also working with the Dexter bots to practice and play with the programming features of certain sensors that we feel will be necessary for HAL 4.0.
As a potentially easier alternative to TensorFlow, we decided to try out the Pixy2 camera. This camera has built-in object recognition, similar to TensorFlow, using color. The PiWars "toys" and "fish tank" can be colored objects, set inside a white arena, so identifying them should be doable.
I attached the Pixy2 PCB to the front of the Dexter robot, and plugged it into the Pi via USB. The LED on the Pixy2 turned on, and I followed the provided instructions to install the Pixy2 driver libraries and dependencies.
I tried running the provided test program, but pixy.init() failed, suggesting that the camera wasn't connected properly.
The provided PixyMon application can be used to auto-detect the USB Pixy2. I tried connecting the Pixy2 to PixyMon on my computer, and it still failed. I then tried swapping out the USB cable, and the camera connected!
How can our robot understand what it's looking at? How can it search for, say, a colored block, or the "fish tank"? One possibility is to use a camera with some sort of object recognition. I currently have a Pi Camera Module connected to my test robot, and I was wondering if there's a simple way to use that to identify objects. I searched around a bit on YouTube, and found some examples of object detection, using neural networks to classify objects in the camera's image frame.
Most of the examples I found used the open source library TensorFlow Lite to run an object classifier. TensorFlow Lite is a less resource-intensive version of the TensorFlow library, and is more suitable for devices like the Pi. This method of object detection supposedly works on any camera with adequate resolution.
But, how do these neural network classifiers work? As I understand it, TensorFlow uses a particular classification algorithm, called a Deep Neural Network object detection algorithm. This is a module that can be trained to search for a particular type of object, and then draw boxes around each instance of that object it sees in the image.
The algorithm is trained by giving it a large collection of training pictures, and another collection of test pictures. For each training picture, you have to manually draw a box around the object in that picture, and feed that information into the algorithm, along with the image. The neural network will then incrementally adjust itself, over and over, to find patterns between the training images. It will then look at the test images, and try to find that same object in those. You also draw identifying boxes in the test images, so the algorithm can see how accurate it was. You keep running the algorithm over the training and test images until its test-image accuracy reaches an adequate level. At that point, the values of the neural network's connections can be saved as a classifier file, and loaded onto the Raspberry Pi for use in the robot.
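The "see how accurate it was" step against the hand-drawn test boxes is usually scored with an overlap measure called intersection-over-union (IoU): a prediction typically counts as correct when its IoU with the true box exceeds a threshold like 0.5. A minimal sketch of that score (the (x1, y1, x2, y2) box format here is my assumption for illustration):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlap area (0 if none)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)        # overlap / union

truth = (10, 10, 50, 50)   # box we drew by hand around the block
pred = (20, 20, 60, 60)    # box the classifier produced
print(round(iou(truth, pred), 3))  # 0.391 -- below a 0.5 threshold, so a miss
```

A score of 1.0 means the predicted box matches the hand-drawn one exactly; 0.0 means they don't overlap at all.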
In our case, if the arena floor and walls are a uniform color that's distinct from the objects, then it should be relatively easy for the classifier to identify the objects. We could place the blocks in various places in the arena, take a bunch of training pictures of them, and then train the classifier.
It looks like there are multiple DNN classifier algorithms that TensorFlow can use. The SSD MobileNet algorithm seems to run faster than the others, and is designed for low-powered devices, so that's probably a good place to start.
Once the classifier file is loaded onto the Pi, actually using it seems fairly simple. You can load it using the TensorFlow Python library, then take an image from the camera, and pass that to the classifier. The classifier will then return an array of box coordinates, drawn around each instance of the object it found in that frame, along with a confidence percentage.
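Before the robot acts on a frame, we would probably want to filter that array by the confidence percentage and keep only the best match. A small sketch of that filtering step; the detection format here (a list of (box, score) tuples) is my own assumption, not TensorFlow's exact return type:

```python
def best_detection(detections, min_score=0.5):
    """Drop detections below a confidence threshold and return the
    highest-scoring survivor, or None if nothing is confident enough."""
    confident = [d for d in detections if d[1] >= min_score]
    return max(confident, key=lambda d: d[1]) if confident else None

detections = [
    ((120, 80, 200, 160), 0.91),   # probably the block we want
    ((300, 40, 330, 70), 0.35),    # low-confidence noise
]
print(best_detection(detections))  # ((120, 80, 200, 160), 0.91)
```

Returning None when nothing clears the threshold gives the robot a clean signal to keep scanning rather than chase a false positive.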
If we want the robot to turn toward an object, we could turn the robot in a circle a few degrees at a time, take a camera picture after each turn, and run the classifier on that picture. If the object is found during the scan, we could keep turning toward it until the robot is facing it directly. Then, we could estimate the distance to the object from the relative size of its bounding box.
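The "keep turning toward it" step reduces to comparing the box's horizontal center to the image's center, and the distance estimate follows from the box appearing narrower the farther away it is. A rough sketch of both ideas; the 640-pixel frame width and the calibration constant are assumed values that would need to be measured on the real camera:

```python
FRAME_WIDTH = 640      # assumed camera horizontal resolution in pixels
CALIBRATION = 32000.0  # assumed: box_width_px * distance_cm, measured once

def steer_direction(box, deadband=20):
    """Return 'left', 'right', or 'centered' depending on where the
    box (x1, y1, x2, y2) sits relative to the frame's center column."""
    box_center = (box[0] + box[2]) / 2
    offset = box_center - FRAME_WIDTH / 2
    if offset < -deadband:
        return "left"
    if offset > deadband:
        return "right"
    return "centered"

def estimate_distance_cm(box):
    """Estimate range from apparent box width (width shrinks ~1/distance)."""
    width = box[2] - box[0]
    return CALIBRATION / width

box = (100, 120, 260, 280)        # object off to the left, 160 px wide
print(steer_direction(box))       # left
print(estimate_distance_cm(box))  # 200.0
```

The deadband keeps the robot from oscillating left and right once the box is nearly centered.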
I'm not sure if we could determine the orientation to the object easily using these tools. For example, I don't know if these classification algorithms could tell us whether the robot is facing the corner of the green box, or its side.
Hello everyone, my name is Joey Ferreri. It is unfortunate that PiWars will not be held in person this year, but we are still excited to participate in PiWars at home. Our team is comprised of both veteran PiWars goers and newbies. This year we will continue iterating on CNM Hackerspace’s HAL robot, which we have incrementally upgraded for the past several years. To get everyone warmed up and prepared to work on the real deal, we’ve been using Dexter Industries’ GoPiGo3 robots for programming and practicing with various sensors. Specifically, we have been orienting ourselves with the distance sensor, the light and color sensor, and the camera, all of which will aid us in autonomous completion of the challenges. We are hoping to acquire the skills to develop our robot into its best iteration yet. Here’s to a great PiWars at home!
Here is the main code for the distance sensor on my Dexter robot:
from easygopigo3 import EasyGoPiGo3
import time

# create an EasyGoPiGo3 object and attach the sensors
gpg = EasyGoPiGo3()
distance = gpg.init_distance_sensor()
# bind a servo to port "SERVO1" so the distance sensor can swivel
servo = gpg.init_servo("SERVO1")

def goOrNo(gpg, distance, servo):
    """Look left and right, then turn toward the side with more room."""
    leftAngle = 150
    rightAngle = 30
    centerAngle = 90

    servo.rotate_servo(leftAngle)
    time.sleep(0.5)                 # let the servo settle before reading
    leftDist = distance.read_mm()

    servo.rotate_servo(rightAngle)
    time.sleep(0.5)
    rightDist = distance.read_mm()

    servo.rotate_servo(centerAngle)  # face the sensor forward again
    time.sleep(0.5)

    if leftDist > rightDist:
        gpg.turn_degrees(-90)        # more room on the left
    else:
        gpg.turn_degrees(90)         # more room on the right

tooClose = 100   # stop when an obstacle is closer than 100 mm
run = True
servo.rotate_servo(90)

while run:
    gpg.forward()
    d = distance.read_mm()
    if d < tooClose:
        gpg.stop()
        gpg.drive_cm(-10)            # back up a predetermined amount
        goOrNo(gpg, distance, servo)
This program allows autonomous operation of the robot. It drives forward until the value from the distance sensor drops below a set threshold, then backs up a predetermined amount and checks its surroundings: it looks left, then right, stores the distance reading from each side, and turns toward whichever side has the greater distance.
By Daniel Brown
This year one of our biggest concerns was maintaining constant power for our robot. In the past we have used rechargeable AA batteries that on several occasions lost too much juice to sustain the robot for the duration of a challenge, or would bounce around in the chassis and randomly disconnect during a Pi-Noon battle... So a big part of our early testing this year was in changing out the power plant for Hal 4K. After testing our prototype 18650 battery setup, we ordered more battery trays and new batteries. We now have 4 ready-to-go battery trays, and 2 spares! We also have a total of 12 18650 batteries now. If Pi-Wars 2020 goes as planned (I’ve been told Pi-Wars NEVER goes as planned), we should not even need to swap out a battery pack, let alone use either of the spares we plan to have with us… Compared to last year, this is a major improvement, considering the team almost lost power mid-challenge in 2019 and had to recharge mid-challenge multiple times. In the process of finalizing our battery setup, I learned how to solder (big thanks to Rob Garner, one of the programming instructors on campus here at CNM and a mentor here in HackerSpace, for the soldering lessons!). Below are some photos of our battery setup mid-completion. They still need shrink wrap, and the wiring will need to be measured, trimmed, and cleaned up. This time next week, we should have enough powaah to jump-start other teams’ robots if need be. ;)
BATTERY TESTING & UPGRADES!
By Daniel Brown
After joining CNM Hackerspace to work on Hal 4K for Pi-Wars, I was disappointed in two things about the Hal 3000: it didn’t handle very well (like an RC car would), and the battery life was miserable. After testing the motors at 100% capacity, the battery life was about 11 minutes. The handling was poor due to a high center of gravity and a majority of the weight being biased toward the front of the body (and above the front wheels). Based on my experience building RC cars and powering high-powered flashlights, I immediately knew that I could be of help in the development of Hal 4K for Pi-Wars 2020.
Hal 3000 wasn’t an easy bot to control, mainly because the Raspberry Pi 3B was mounted high in the body and the battery pack was above the front wheels. This biased the weight toward the front, making the rear of the robot very light and therefore hard to control, especially at speed. Just as with a real car, the lower the center of gravity and the closer to a 50/50 front/rear weight bias, the better the vehicle will handle. With Hal 4K we have mounted the batteries centrally and much lower inside the body. The Raspberry Pi 4 1GB (also a new upgrade for 2020!) is mounted above the batteries in a roughly central location, in an attempt to achieve a nearly perfect 50/50 front/rear weight bias and a much lower center of gravity than Hal 3000 had.
I discovered that the 2019 CNM Hackerspace Pi-Wars team utilized rechargeable Ni-MH (nickel-metal hydride) AA batteries for power. These were used in a configuration of 8 AA batteries, each providing 1.2V and 2000mAh. These batteries were also worn down, after being recharged well over 250 times! After a few minutes of research on alternative power sources, as a team we decided to go with 18650 lithium-ion rechargeable batteries because they pack such a heavy punch in a rather little package. Each 18650 battery provides us with 3.7V (up from 1.2V), 3000mAh (up from 2000mAh), and up to 35 amps of discharge current. In terms of available power and battery life, this is a HUGE upgrade.
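The per-cell jump is easy to quantify by comparing stored energy (volts times amp-hours gives watt-hours), using the figures above:

```python
# old: rechargeable Ni-MH AA cell, 1.2 V at 2000 mAh (2.0 Ah)
aa_wh = 1.2 * 2.0   # 2.4 Wh per cell

# new: 18650 lithium-ion cell, 3.7 V at 3000 mAh (3.0 Ah)
li_wh = 3.7 * 3.0   # 11.1 Wh per cell

print(round(aa_wh, 2), round(li_wh, 2))  # 2.4 11.1
print(round(li_wh / aa_wh, 1))           # 4.6 -- each 18650 stores ~4.6x the energy
```

This ignores differences in discharge curves and real-world efficiency, but it shows why the runtime improvement was so dramatic.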
While this was not a cheap upgrade, we believe this was money well spent. After testing with our new battery setup, we can run the Raspberry Pi 4 and the motors on Hal 4K at 100% for well over an hour with no noticeable drop in voltage and/or robot performance. This also allows us to expand and experiment in the future using new sensors that previously would have affected performance or battery life further.
After we decided on batteries, we had to develop some sort of quick-disconnect system to allow us to swap out battery packs on the fly. Instead of incorporating an on/off switch with a hardwired battery pack (swapping individual batteries instead of a pack), we wired an inline quick-disconnect, similar to an RC car battery pack. Not only does this let us swap battery packs on the fly in under a minute, it also acts as the on/off switch for the robot.
Being in my first year participating in Hackerspace and Pi-Wars, I’ve realized that there is a place for everyone on a robotics team (and in robotics in general!), and I wish I would’ve done this years ago. I don’t know much about computers, coding, and robots, but I still found a spot on the team and was able to help out. With a few people who can code, one or two who know 3D design, and my knowledge of power and RC cars… together we can create something bigger than any one of us is capable of alone.