yc2443@cornell.edu
Yuqian Cao received a B.Eng. degree in Electrical Engineering from Zhejiang University (ZJU) in 2022 and was recognized as an outstanding undergraduate.
Yuqian Cao has many hobbies, such as traveling and skiing, and he is very fond of robots! ECE 5160 is his first course in the systematic study of robotics.
For the first part, I just followed the instructions on the web page. Note that the board type we use is the RedBoard Artemis Nano; choosing the wrong board type will cause the upload to fail. In the following video, I compare the LED blinking with a clock; the unit of the delay() argument is evidently milliseconds, because delay(1000) stays consistent with a clock ticking once per second. I did not need to adjust the baud rate, and the LED blinks normally.
Opening the serial monitor, the Artemis first prints some fixed output. When I send a specific string to the Artemis, the serial monitor shows it immediately replying with the same content, like an echo. This part verifies that the Artemis communicates with the PC in both directions.
Running the analogRead example, the Artemis continuously prints the real-time temperature reading (a raw five-digit value) to the serial monitor. Blowing on or touching the chip changes the value, but only slightly.
Running the corresponding example program, Artemis first outputs some basic information for the FFT operation, and then continuously outputs the real-time FFT result, i.e. the frequency with the highest current energy.
I downloaded a piano simulator on my phone and chose a pleasing A3 note as the subject of my experiment. Running the FFT program just now, I found that the note has a characteristic frequency of 2220 Hz, i.e., 2220 Hz appears in the FFT result only when this note is played. This part of the program is relatively simple, combining the Part I and Part IV programs. Specifically, where the MicrophoneOutput program calls Serial.printf to print its result, I just add a few lines of code to drive the LED. The final result is not bad, see the video below.
// Light the built-in LED whenever the loudest FFT bin matches the
// characteristic frequency of the note (2220 Hz for the tone I used).
if (ui32LoudestFrequency == 2220)
{
  digitalWrite(LED_BUILTIN, HIGH);
}
else
{
  digitalWrite(LED_BUILTIN, LOW);
}
In this lab, BLE (Bluetooth Low Energy) is used for communication between the Artemis (C++ compiled through Arduino) and the laptop (Python running in Jupyter Lab).
The setup for Lab 2 is relatively involved. On the laptop, I created a Python virtual environment and launched Jupyter Lab to run the Python code cell by cell and display the communication. On the Artemis side, the ArduinoBLE library was installed.
After running ble_arduino.ino, I saw from the serial monitor that the MAC address of my Artemis is C0:83:B4:69:AC:3C.
I copied this address into connection.yaml and used the uuid4 function to generate a new UUID, as shown below. The new UUID replaces the demo UUID in several places. With that, the setup for this lab is finished.
The main part of the codebase is demo.ipynb, which runs in Jupyter Lab with the help of the commands provided by ble.py. A successful run of this demo indicates that the hardware and environment are set up correctly. After running the demo, the content shown in Jupyter matches the content in the serial monitor, as shown below. The MAC address of my laptop is also shown in the serial monitor (F4:5C:89:C1:4F:A8).
On the Artemis side, the code below reads the string from the laptop and sends it back with a prefix and suffix added. It also prints the content to the serial monitor with a prefix.
On the Laptop side, call the ECHO command with a specific string. The results of this task are as below.
Use the millis function to get the time of Artemis (in ms), convert it into a string and send it to the laptop.
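On the Artemis side, the core of this is just formatting the clock value (a minimal sketch; the actual send goes through the lab's BLE string characteristic, which I only indicate with a comment):
unsigned long t = millis();              // time since boot, in milliseconds
char msg[32];
snprintf(msg, sizeof(msg), "T:%lu", t);  // e.g. "T:1405545"
// ...then write msg to the BLE TX string characteristic (lab-provided code).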
I designed a "strprocessor" to handle the string from the Artemis: it splits the string on ":" and the second part is the time value without the prefix. The ble.start_notify function registers it as a callback, so it runs whenever a new string arrives.
I designed a new strprocessor to handle a group of five temperature readings with timestamps and make the raw string more readable.
To get 60 temperature readings in 5 seconds, we do not have to send them in groups; we can simply send them one by one, with an 80 ms delay between transmissions.
From timestamp 1405545 to 1410442 (about 4.9 s), 60 temperature readings with timestamps are transferred and processed successfully.
Suppose X% of the RAM is reserved for data to be sent. The total number of bits available is then 384 * 1000 * 8 * X% = 3,072,000 * X%. One message of the specific five-reading format above takes 5 * 16 * 150 = 12,000 bits, so at most 3,072,000 * X% / 12,000 = 256 * X% messages of this type can be stored.
I use a simplified ECHO command to measure the time interval between sending and receiving for messages of 1 B, 2 B, 3 B, …, 120 B, and calculate the data rate as bytes / (2 * round-trip time).
I plotted the results in Excel. Longer packets give a higher byte rate. However, there are a few downward fluctuations for large packets, which may be caused by transmission errors and retransmissions.
I removed the delay in the rapid temperature-read command and used it to send 5000 groups of data. After checking, all the data arrived correctly. BLE communication appears reliable; there must be some acknowledgement and retransmission mechanism built into the protocol.
In this lab, the TOF sensors are set up and used. There are several specific problems that need to be solved during the application.
I put three devices on the I2C bus: two TOF sensors and one IMU.
Running the I2C example code, I got the result below. Only two addresses are identified (0x29 and 0x69), which makes sense because the two TOF sensors share the same default I2C address.
To use two TOF sensors simultaneously, shut one of them down through its XSHUT pin, change the I2C address of the sensor that is still active, and then re-enable the shut-down sensor (which keeps the default address). The sketch is as below.
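In code, the sequence looks roughly like this (a sketch assuming the SparkFun VL53L1X library, with the XSHUT pin number and the new address as placeholders):
#include <Wire.h>
#include "SparkFun_VL53L1X.h"

#define XSHUT_PIN 8        // placeholder: whichever GPIO drives XSHUT of sensor 2

SFEVL53L1X tof1;           // will receive the new address
SFEVL53L1X tof2;           // keeps the default address (0x29)

void setup() {
  Wire.begin();
  pinMode(XSHUT_PIN, OUTPUT);
  digitalWrite(XSHUT_PIN, LOW);   // hold sensor 2 in shutdown
  tof1.begin();
  tof1.setI2CAddress(0x30);       // move sensor 1 off the default address
  digitalWrite(XSHUT_PIN, HIGH);  // wake sensor 2; it keeps the default address
  tof2.begin();
}

void loop() {}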
For the placements of these two sensors, I think putting them on the left-front and right-front is a good idea, especially for the purpose of forward obstacle avoidance.
In practice there are only two distance modes: short (up to 1.3 m) and long (up to 4 m). Considering the size of the robot (car), 1.3 m is enough to guide its movement; long mode might be more appropriate if we wanted to position the car using its distance from the boundary. I use short mode for the following tasks.
I measured distances of 100 mm, 200 mm, 300 mm, …, up to 1300 mm using one TOF sensor under two lighting conditions and plotted the results in Excel. The sensor is generally accurate; a calibration could be applied to remove the systematic offset. The results in the dark are closer to the true distances, probably because there is less ambient-light disturbance.
To use two TOF sensors simultaneously, I set them up with the code below and recorded the outcome as a video.
Slightly modify the loop of the example as below.
In the result, the clock values are printed at roughly 4 ms intervals, while the distance results appear at roughly 100 ms intervals. That means I can get both distance readings about 10 times per second. The limiting factor is most likely the sensor's own ranging time (its timing budget and inter-measurement period) rather than the Artemis.
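The structure of the modified loop is roughly the following (a sketch assuming the SparkFun library and that startRanging() was already called on both sensors in setup()):
void loop() {
  Serial.println(millis());                       // printed every pass, ~4 ms apart
  if (tof1.checkForDataReady() && tof2.checkForDataReady()) {
    int d1 = tof1.getDistance();                  // distance in mm
    int d2 = tof2.getDistance();
    tof1.clearInterrupt();
    tof2.clearInterrupt();
    Serial.print(d1);
    Serial.print(" mm   ");
    Serial.print(d2);
    Serial.println(" mm");                        // appears only ~every 100 ms
  }
}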
Using the framework from Lab 2, I created a new command called TOF in Jupyter Lab. On the Artemis side, I fit the sensor setup and the distance-reading parts into the command handler.
Infrared light is invisible, and the everyday environment contains a lot of it, which inevitably affects an infrared sensor. When designing such a sensor, a specific infrared wavelength is chosen as the medium to resist interference as much as possible; when using it, avoid places where the ambient light is too strong. As seen in the first part, the TOF sensor performs better in dark conditions.
I tried target surfaces of several colors that I had on hand, and the color had little effect on the results. In theory, darker objects absorb more light and return less energy, but in practice I did not notice this effect. For glass-like surfaces, the sensor measured straight through the glass to the object behind it, which is unacceptable. Presumably an ultrasonic distance sensor would avoid this problem to some extent, since mechanical waves reflect differently from electromagnetic ones.
In this lab, the IMU sensor is set up and used. There are several specific problems that need to be solved during the application.
I use a breakout board to connect all of my sensors together on one I2C bus.
Running the IMU example code, the result in the video indicates that I set the system up correctly. (This code was actually run in lecture before.) I added an LED blink at start-up and whenever the IMU is moved suddenly.
AD0_VAL sets the last bit of the IMU's I2C address (it corresponds to the level of the AD0 pin), which I confirmed in the datasheet.
This is very convenient if we want to use two IMUs on the same I2C bus: we do not have to add an extra control wire and set a different address manually, as we did with the TOF sensors in Lab 3.
Using the formulas introduced in the lecture slides, I can compute pitch and roll from AccX, AccY, and AccZ. The code and a demo are shown in the video below. The result is accurate enough that a two-point calibration is not really needed, but the data fluctuates at a high frequency.
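For reference, the core of the calculation looks like this (assuming the SparkFun ICM-20948 library object myICM; accX()/accY()/accZ() return acceleration in mg, and only their ratio matters here):
// Refresh the readings first (myICM.dataReady() / myICM.getAGMT() in the loop),
// then convert the measured gravity vector into pitch and roll in degrees.
float pitch_a = atan2(myICM.accX(), myICM.accZ()) * 180.0 / M_PI;
float roll_a  = atan2(myICM.accY(), myICM.accZ()) * 180.0 / M_PI;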
In the serial monitor, I enabled the timestamp toggle and noticed that there are about 30 sets of data per second. I copied 60 sets of pitch and roll (2 seconds of data) and analyzed them with an FFT in Python (modified from the tutorial provided on our lab webpage).
Pitch and roll have similar FFT results, which makes sense because both come from the same accelerometer data. The noise is mainly concentrated around 400 Hz, with a small amount at other frequencies. In this case, the cut-off frequency of a low-pass filter can be set around 200 Hz to suppress the noise with little impact on the signal we want.
I use the formula from the lecture to calculate pitch, roll, and yaw from the gyroscope. This section uses pitch to illustrate what applies to roll and yaw as well. The code is below (the roll and yaw parts are temporarily commented out).
The gyroscope computes pitch by integration, so it accumulates drift: error that builds up earlier can never be removed, as shown in the video. Changing the delay (and therefore the sample rate) did not solve the problem.
The complementary filter combines the advantages of the accelerometer and the gyroscope. In the complementary filter formula, alpha adjusts the relative weights of the two. I finally tuned alpha to 0.7; the outcome, shown below, has very little jitter and no accumulated drift.
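A sketch of the filter step with alpha = 0.7 (which of the two terms alpha weights, and which gyro axis corresponds to pitch, depend on the exact lecture formula and the mounting, so treat those as assumptions; pitch and last_time are globals):
const float alpha = 0.7;
unsigned long now = micros();
float dt = (now - last_time) / 1.0e6;            // seconds since the last update
last_time = now;

float pitch_a = atan2(myICM.accX(), myICM.accZ()) * 180.0 / M_PI;  // accelerometer estimate
pitch = alpha * (pitch + myICM.gyrY() * dt)      // integrated gyro estimate (deg/s * s)
      + (1.0 - alpha) * pitch_a;                 // pulled back toward the accelerometer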
I successfully combined the IMU and TOF code; the code and the final result are below. The key point is that reading the IMU does not wait for the TOF data. In 5 s there are 315 sets of data (63 Hz), but only 7 of them include TOF readings.
A float is 4 bytes, and a 16-bit integer is 2 bytes. A complete set of values contains 3 floats (P, R, Y) and 3 16-bit integers (T, D0, D1), i.e. 18 bytes. Assuming about 284 kB of the Artemis's RAM is available, roughly 15,778 sets of data can be stored, which corresponds to approximately 250 seconds of data at this rate.
After cutting wires as needed and doing some soldering, the final connections of the system are shown below. I use the smaller (650 mAh) battery for the Artemis and the sensors because they draw far less current than the motors.
I try to control the car with the remote. However, it runs very fast and often bumps into obstacles. It is much more powerful than the toy cars I played with as a kid!
I equipped the car with the Artemis system, sent the IMU command over BLE, and tried to control this crazy car at the same time. The data can be viewed in Jupyter Lab. When the car hits the wall, a sudden change in the data can be observed.
In this lab, the RC car was disassembled and properly rewired so that it is controlled by the Artemis and driven through the motor drivers.
The connections between the Artemis, motor drivers, batteries, and motors are shown below. From the pinout, I chose pins marked with "~" as the signal channels from the Artemis to the motor drivers, because analogWrite can be used on those pins. The small battery powers the Artemis and defines the ground for the Artemis and the signal side of the motor drivers. The big battery powers the motor drivers and defines the ground on their power side. Keeping the signal and power circuits separate helps avoid interference, especially from the power circuit coupling into the signal circuit.
I have done lots of soldering work before and I am confident in my soldering skill and stability. Therefore, I soldered all the lines needed in this lab as my first step. The outcome is as below.
The following video shows the code I use to test the motors and motor drivers. For convenience I hold pin 7 and pin 12 low, and use pin 6 and pin 11 to control the power to the left and right motors. The drivetrain is powerful, so I use small PWM duty cycles, which can be observed on the oscilloscope. One more thing: the left and right sides are not perfectly symmetrical, probably due to the mechanical linkages and the motors themselves, since I trust the consistency of the motor-driver chips. I set duty cycles of 25/255 on the left and 30/255 on the right so that both sides just start from a standstill and maintain roughly the same speed when spinning freely.
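In code, the test boils down to something like this (a sketch using the pin mapping described above):
void setup() {
  pinMode(6, OUTPUT);   pinMode(7, OUTPUT);    // left driver inputs
  pinMode(11, OUTPUT);  pinMode(12, OUTPUT);   // right driver inputs
  analogWrite(7, 0);                           // hold the second input of each side low
  analogWrite(12, 0);
  analogWrite(6, 25);                          // left duty cycle 25/255
  analogWrite(11, 30);                         // right duty cycle 30/255
}

void loop() {}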
Slightly modifying the code as below makes the two motors spin in opposite directions.
I organized the wiring and mounted the TOF sensors as shown below. Since this is not necessarily the final hardware layout, I did not secure the individual devices permanently. The main consideration is not to interfere with the rotation of the wheels.
After further tuning of the parameters, I made the car travel 2 meters in a straight line (the floor bricks are 30 cm each, and the car crosses more than 6.5 of them). My car needs about 5 parts of power on the left for every 7 parts on the right to drive straight, and I will keep this ratio in mind.
After many attempts, I found the minimum PWM value that can maintain motion, as shown below. In the video, the car does not start by itself, but if you give it a gentle push, it is able to keep moving. Note: the motor should not be left energized while stalled for too long; there is a risk of damaging it! I'll keep this set of values in mind as well.
The motor driver is a power electronics circuit. From Prof. Afridi's course, I know that a higher switching frequency matters a lot in power electronics. In this case, if we manually configure the timers to generate a faster PWM signal, the ripple in the voltage delivered to the motor is reduced, which may make the motor motion smoother.
Overcoming static friction requires more force to get moving, while a smaller force is enough to keep the car moving. From my tests (shown in the previous section), the minimum sustaining PWM value is 25/255 on the left and 35/255 on the right. Later, when controlling the motors, the PWM can be reduced down to these values while the car is already in motion.
In this lab, PID closed-loop control is applied to the robot to control its distance to the wall. (I chose Task A: position control.)
Although we can observe the car's behavior, knowing the key values from the sensors and the Artemis is very important for PID tuning. Therefore, I built a new string-processing function in Python, together with data-storage and plotting functions, as shown below.
On the Artemis, the data is sent as packets of time, distance, and PWM. I strip the labels and only separate the fields with "|" to reduce the time spent sending data, so that the control period stays small (we only have one core for both PID control and data sending). The code is shown in the PID-control section below.
From my point of view, proportional control is the core of PID, the integral term eliminates the steady-state error, and the derivative term is added to let the system respond in advance when needed. When tuning the PID parameters, the system should be made stable first, then accurate, and finally fast.
I implemented the PID control with the following code. It is worth noting that the car has a threshold PWM value needed to overcome friction (42 on the left, 35 on the right for my car), so PWM values below these are physically meaningless; I add these thresholds when writing the PWM to the pins. I also cap the PID output at 30 (on the left) to keep the car easy to control.
The code is below, together with the BLE data-sending functions. I finally tuned the proportional gain to 0.7 and the integral gain to 0.1.
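Stripped of the BLE parts, the core of the controller looks roughly like this (a sketch: the 304 mm setpoint is the 1 ft target of Task A, dt is the loop period in seconds, and distance and integral are maintained elsewhere in the loop; the 42/35 deadband values and the cap of 30 are the ones mentioned above):
float err = distance - 304;              // positive when still too far from the wall
integral += err * dt;                    // accumulated error for the I term
float u = 0.7 * err + 0.1 * integral;    // PI output (Kp = 0.7, Ki = 0.1)
u = constrain(u, -30.0, 30.0);           // cap the PID contribution to the PWM

int pwm = (int)fabs(u);
if (u > 0) {                             // too far away: drive forward
  analogWrite(6, pwm + 42);  analogWrite(7, 0);      // +42/+35 deadband offsets
  analogWrite(11, pwm + 35); analogWrite(12, 0);
} else if (u < 0) {                      // overshot: back up
  analogWrite(6, 0);  analogWrite(7, pwm + 42);
  analogWrite(11, 0); analogWrite(12, pwm + 35);
} else {
  analogWrite(6, 0);  analogWrite(7, 0);             // stop
  analogWrite(11, 0); analogWrite(12, 0);
}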
In the short-distance starting test, the car responded as below, and I plotted the data as required. I think the car did a good job.
In the long-distance starting test, the overshoot became larger because of the longer acceleration. The car almost hit the wall, but it decelerated to zero and backed up at the last moment.
The integral term eliminates the steady-state error but also introduces lag, i.e. past error keeps influencing later actions through the integrator. This lag can be reduced by limiting the integrator (anti-windup), and a leading term (the derivative part) can also offset it to some extent. In my lab, I limit the PWM output, which limits the influence of the integrator.
In this lab, the robot builds a 2D map. The robot rotates and scans from 4 points, collecting a set of distances at each. I then merged these data and compared them with the real map.
The robot is required to rotate through one full circle and take 20 distance readings at equal angular intervals. For the rotation, an open-loop strategy is applied.
I drive each step by applying a PWM of 200/255 on the right side (backward) and 170/255 on the left side (forward), to account for the asymmetry of the system. These PWMs are held for 100 ms, then the brakes are applied for 100 ms to make sure the robot is still when measuring the distance. With these values, each step is 18 degrees, and 20 steps make up 360 degrees, as shown in the video below.
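One step of the scan looks roughly like this (a sketch using the same pin mapping as before; driving both inputs of each driver high is one way to brake, and is what I sketch here):
void rotateStep() {
  analogWrite(6, 170);   analogWrite(7, 0);     // left side forward at 170/255
  analogWrite(11, 0);    analogWrite(12, 200);  // right side backward at 200/255
  delay(100);                                   // spin for 100 ms (~18 degrees)
  analogWrite(6, 255);   analogWrite(7, 255);   // both inputs high: brake
  analogWrite(11, 255);  analogWrite(12, 255);
  delay(100);                                   // settle before the TOF measurement
}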
I have to admit that open-loop control may not be the best choice, because the robot only turned precisely when the battery voltage was 3.65-3.7 V. When the voltage is higher, the robot rotates more than one full turn, and vice versa. Besides switching to closed-loop control, I think there are two other ways to make the robot less sensitive to battery voltage: use a lower PWM with a longer on-time to accomplish the same move, or add a voltage-regulator chip after the battery.
Even with the problems above, I monitored the battery voltage to make sure the robot performed well at all four measurement points.
Data from (5, -3) is logged and drawn in the polar coordinate system as below.
Data from (-3, -2) is logged and drawn in the polar coordinate system as below.
Data from (0, 3) is logged and drawn in the polar coordinate system as below.
Data from (5, 3) is logged and drawn in the polar coordinate system as below.
Then I merge these points in the Cartesian coordinate system. I am more familiar with metric units, so I convert everything to millimeters and build the map in millimeters.
The photo of the lab field looks like this.
I drew the actual boundaries of the field on top of the map I obtained by merging the data.
I do the data processing in a Jupyter notebook. Each of my scans starts facing the windows, and the robot makes its first rotation before measuring the first distance, so an angular offset is applied.
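Concretely, the i-th reading d_i taken from a scan point (x_r, y_r) maps to the global point (x_r + d_i * cos(theta_i + theta_offset), y_r + d_i * sin(theta_i + theta_offset)), where theta_i = i * 18 degrees and theta_offset accounts for the fixed starting orientation toward the windows.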
The past website from Ryan helps me a lot. Thanks!
This lab is in simulation and all of the work is done on my computer. Bayes filter is applied for the localization.
Starting from the given framework, we need to implement the Bayes filter. As in the tutorial, the numpy and math modules are used in the code. The whole implementation can be divided into 5 parts.
The state of the robot consists of its position and orientation. The robot can move from one state to another in 3 steps: first rotate toward the new position, then translate the proper distance, and finally rotate to the new orientation. This part works out these three parameters given the old and new states.
This part uses the odometry motion model to return the probability that the robot ends up in a given state, given the previous state, the current state, and the control input.
As shown in line 3 of the algorithm, the first step of the Bayes filter is the prediction. I used a triple loop to iterate through all the possible previous states. It is important to set a minimum probability threshold below which states are ignored. I found that some classmates set this value to 0.0001; I tried it too, but my computer fan spun like crazy, the machine kept heating up and looked like it would die at any moment, and the code got stuck. I changed the value to 0.01 and everything was fine.
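(For reference, this prediction step computes bel_bar(x_t) = Σ p(x_t | u_t, x_{t-1}) · bel(x_{t-1}), summing over every previous state x_{t-1} whose belief is above the threshold.)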
The robot performs a rotation to collect measurements of the current state and stores them.
As shown in line 4 of the algorithm, the belief is updated using the measurements and the calculations above.
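(This is the update step bel(x_t) = η · p(z_t | x_t) · bel_bar(x_t), where η is the normalizer that makes the belief sum to one over all states.)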
Plugging these modules in and running the code, the blue line (Bayes filter) fits the green line (actual trajectory) much better than the red line (odometry).
I also got text output showing the most probable state after each iteration of the Bayes filter, along with its probability.
The past websites from Ryan and Linda help me a lot. Thanks!
In this lab, we implement localization on the real robot: the Bayes filter runs on the computer and determines the robot's position from the data the robot returns.
I ran the simulation to make sure everything is set up correctly, as in Lab 10.
I think my open-loop rotation from Lab 9 works well enough, so I simply modified the code to meet this lab's requirement (18 measurements per full circle).
Based on the given framework, we need to fill in perform_observation_loop to feed the measurements into the localization algorithm. I use the same idea as my previous BLE communication, based on the string processor.
However, although the distance array eventually contains 18 values, the function did not wait for all 18 readings, so some delay had to be added. I tried while/if/break constructs, timers, and even flag variables, but none of them worked because they all blocked my string processor (the notification callback). With help from Anya, I finally found the solution using "asyncio.run(asyncio.sleep(8))", and everything worked after that.
My first test is from (5,-3)
Then from (-3,-2)
From (0,3)
From (5,3)
My algorithm performs well at the last three points and has a relatively large error at the first point. From the map, the measurements taken at (5, -3) include many large distances, and the sensor's reduced accuracy at long range may be the cause of the problem. Also, as described in Lab 9, my open-loop control is highly dependent on the battery voltage, and although I tried my best to adjust it, it is difficult to ensure that the car turns exactly one full revolution in each test. I think the experiment worked well overall, but there is room for improvement.
This is a relatively open-ended lab. The objective is to make the robot pass through 9 designated waypoints in order, and the possible implementations are very diverse.
When I began my work, some classmates were already testing their robots, and talking with them greatly expanded my ideas. Thank you all!
Among the ideas, I am not a big fan of methods that keep sending commands to the robot during the run. In my opinion, the robot should act entirely on its own, because if we are allowed to keep controlling the robot, the most effective approach would simply be to command it directly to move forward/backward or left/right rather than to use a series of algorithms.
Looking at the given path and the lab field, I think it is difficult to use PID control for points 1 through 6 because there is no nearby reference wall (and I think our sensor does not work well at long distances). Points 6 through 9, however, are practically designed for PID control: PID to point 7, turn left 90 degrees, PID to point 8, turn left 90 degrees, and PID to the origin (point 9).
Therefore, my initial idea was to use open-loop control to pass through points 1 to 6 and end with the correct position and orientation, then finish with three PID runs and two open-loop turns.
However, when I finally succeeded in tuning the first stage of open-loop control, I found my position and orientation at point 6 to be very accurate and the whole run almost complete, so I decided to use open-loop control from beginning to end: the segments that were supposed to be PID are actually easier for open-loop control than the earlier segments.
Although I was mentally prepared, debugging the open-loop control was still more complicated than I imagined, mainly because the effect of a given command depends strongly on the battery voltage. Tuning the parameters alone is meaningless; they only make sense together with the voltage. I keep my battery voltage between 4.0 and 4.1 V, where the deviation caused by this range is acceptable, and I measure the voltage before each test to make sure it is in range.
I use modules for forward, backward, left, and right. The parameters are chosen so that "forward" moves the robot forward half a tile, "left" and "right" turn it about 60 degrees, and "back" moves it back a little.
The advantage is obvious: I can call the modules from Python and do not need to plug in the USB cable every time. However, the accuracy is definitely not as high as tuning the raw parameters directly. In other words, I chose an appropriate granularity of operation.
Another advantage is that modules make my tuning more linear. I noticed that delay(200) usually does not produce twice the movement of delay(100), because there are acceleration and deceleration phases. I make each module end with a 100 ms brake to decouple consecutive actions, as sketched right after this paragraph.
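Here is what one such module looks like on the Artemis side (a sketch; the duty cycles are placeholders, since the real values were tuned by hand and depend on the battery voltage):
// Move forward roughly half a tile, then brake so the next module starts from rest.
void forwardHalfTile() {
  analogWrite(6, 120);   analogWrite(7, 0);     // placeholder duty cycles
  analogWrite(11, 140);  analogWrite(12, 0);
  delay(100);                                   // drive phase
  analogWrite(6, 255);   analogWrite(7, 255);   // brake for 100 ms to decouple
  analogWrite(11, 255);  analogWrite(12, 255);  // this move from the next one
  delay(100);
}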
The modules are written as below.
This is how I call them. Using loops is a good idea for readability, but I present here the code that was running at the time.
There were a lot of people at the lab field while I was debugging. Since I was doing fully open-loop control, I marked out a duplicate field with tape in the hallway to save everyone's time as well as my own.
Let's see the results! I am very happy with them. (The delay at point 6 is set to 5 seconds to make the point easy to identify, since no turn is required there.)