Perception Notes 2: ROS Vision - Following a Line

  • How to use OpenCV in ROS
  • How to follow a line
  • How to find different elements based on colour
  • Following multiple paths and making decisions
  • Creating a basic PID for line following

In this chapter, you will learn how to use one of the most basic and most powerful perception tools in ROS: OpenCV.

OpenCV is the most widely used and most complete image-recognition library. With it, you can process images in ways that were not possible before: apply filters, do post-processing, and manipulate images in any way you want.

2.1   How to Use OpenCV in ROS

As you may have guessed, OpenCV is not a ROS library, but it is perfectly integrated into ROS through cv_bridge. This package allows ROS image topics to use the OpenCV image variable format.

For example, OpenCV images use the BGR format, while regular ROS images use the more standard RGB encoding. cv_bridge provides a handy function to convert between them, along with many other functions that transparently transfer images into OpenCV variables.
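As a minimal illustration of what that BGR/RGB difference amounts to (a plain-NumPy sketch, not the cv_bridge API itself), converting between the two formats is just a reversal of the channel axis:

```python
import numpy as np

# The tutorial's yellow, expressed as a 1x1 BGR "image"
bgr = np.array([[[0, 255, 222]]], dtype=np.uint8)

# Reversing the last (channel) axis turns BGR into RGB and vice versa
rgb = bgr[..., ::-1]
# rgb == [[[222, 255, 0]]]
```

cv_bridge performs this channel handling for you (together with the message packing/unpacking) when you ask for a specific `desired_encoding`.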

To use OpenCV, you will use the RGB camera of a TurtleBot robot:

Note that this TurtleBot is in an unfamiliar environment: on the floor there is a path painted in yellow, plus stars of different colours.

You will make the robot follow the yellow line in this environment. To do this, we divide the work into the following stages:

  • Get images from a ROS topic and convert them to OpenCV format
  • Process the images with the OpenCV library to extract the data we want for the task
  • Move the robot along the yellow line, based on the data obtained

2.2 Getting Images from a ROS Topic and Displaying Them with OpenCV

First, create a new package named my_following_line_package. Inside that package, create two folders: one named launch and another named scripts.

cd ~/catkin_ws/src
catkin_create_pkg my_following_line_package rospy cv_bridge image_transport sensor_msgs
cd ~/catkin_ws/
catkin_make
source devel/setup.bash
rospack profile
roscd my_following_line_package
mkdir scripts; cd scripts
# We create an empty file
touch line_follower_basics.py
chmod +x *.py

Get an image from a ROS topic, convert it to OpenCV format, and then visualise it in a GUI window.

See the following example of how to do this:

line_follower_basics.py
#!/usr/bin/env python
import roslib
import sys
import rospy
import cv2
import numpy as np
from cv_bridge import CvBridge, CvBridgeError
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image


class LineFollower(object):

    def __init__(self):
        self.bridge_object = CvBridge()
        self.image_sub = rospy.Subscriber("/camera/rgb/image_raw", Image, self.camera_callback)

    def camera_callback(self, data):
        try:
            # We select bgr8 because it is the OpenCV encoding by default
            cv_image = self.bridge_object.imgmsg_to_cv2(data, desired_encoding="bgr8")
        except CvBridgeError as e:
            print(e)

        cv2.imshow("Image window", cv_image)
        cv2.waitKey(1)


def main():
    rospy.init_node('line_following_node', anonymous=True)
    line_follower_object = LineFollower()
    try:
        rospy.spin()
    except KeyboardInterrupt:
        print("Shutting down")

    cv2.destroyAllWindows()


if __name__ == '__main__':
    main()
self.image_sub = rospy.Subscriber("/camera/rgb/image_raw",Image,self.camera_callback)

The subscriber listens to an image topic, which publishes messages of type sensor_msgs/Image. Execute the following command to see the different fields inside this message type:

rosmsg show sensor_msgs/Image
std_msgs/Header header
  uint32 seq
  time stamp
  string frame_id
uint32 height
uint32 width
string encoding
uint8 is_bigendian
uint32 step
uint8[] data

You can extract the data from some of those fields by executing the following:

rostopic echo -n1 /camera/rgb/image_raw/height
rostopic echo -n1 /camera/rgb/image_raw/width 
rostopic echo -n1 /camera/rgb/image_raw/encoding
rostopic echo -n1 /camera/rgb/image_raw/data

It should give you something similar to what is shown below:

user ~ $ rostopic echo -n1 /camera/rgb/image_raw/height
480
---
user ~ $ rostopic echo -n1 /camera/rgb/image_raw/width
640
---
user ~ $ rostopic echo -n1 /camera/rgb/image_raw/encoding
rgb8
---
user ~ $ rostopic echo -n1 /camera/rgb/image_raw/data
[129, 104, 104, 129, 104,
...
129, 104, 104, 129, 104]

The most important information here is:

  • Height and width: the dimensions of the camera image in pixels. In this case, 480 x 640.
  • Encoding: how these pixels are encoded, that is, what each value in the data array means. In this case it is rgb8, meaning each value is an 8-bit integer representing a red/green/blue colour component.
  • Data: the image data itself.
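The `step` field ties these together: it is the number of bytes per image row. As a sketch (the step value of 1920 is an assumption, i.e. 640 pixels x 3 bytes with no row padding), you can locate any pixel inside the flat data array like this:

```python
# For an rgb8 image, each pixel occupies 3 consecutive bytes in `data`,
# and each row occupies `step` bytes (step >= width * 3 if there is padding).
height, width, step = 480, 640, 1920  # step assumed: 640 * 3, no padding

def pixel_offset(row, col, step, channels=3):
    """Offset of the first byte (red) of pixel (row, col) in the flat data array."""
    return row * step + col * channels

# Red byte of the pixel at row 1, column 2:
offset = pixel_offset(1, 2, step)
# offset == 1926
```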

Thanks to the cv_bridge package, we can easily convert the image data in the sensor message into a format that OpenCV understands. By converting it to OpenCV, we can use the library's functions to process the robot's images.

try:
    # We select bgr8 because it is the OpenCV encoding by default
    cv_image = self.bridge_object.imgmsg_to_cv2(data, desired_encoding="bgr8")
except CvBridgeError as e:
    print(e)

This retrieves the image from the ROS topic and stores it in an OpenCV variable. The variable data contains the ROS message with the image captured by the camera.

cv2.imshow("Image window", cv_image)
cv2.waitKey(1)

This opens a GUI window where you can view the contents of the variable cv_image. It is essential for seeing the effect of the different filters and of the cropping applied to the image afterwards.

cv2.destroyAllWindows()

This command closes all the image windows when the program terminates.

rosrun my_following_line_package line_follower_basics.py

On startup, you may get an error message similar to the one below. If so, don't worry: execute the command again and it should start normally.

(Image window:4285): Gdk-ERROR **: 10:56:23.030: The program 'Image window' received an X Window System error.

The program will generate images similar to the one below:

2.3 Applying Filters to the Image

The raw image is of no use unless you filter it so that only the colour you want to follow is visible. You can also crop away the parts of the image you don't need, so that the program runs faster.

We also need to extract some data from the image in order to move the robot along the line.

Step 1: Get image information and crop the image

Before you start detecting things in the image, consider these two points:

  • One of the most basic pieces of data needed to process an image is its dimensions. Is it 800 x 600? 1200 x 1024? 60 x 60? This is vital for locating the elements detected in the image.
  • The second point is cropping the image. It is important to work with the smallest image size the task allows, because it makes the detection system faster.
# We get the image dimensions and crop the parts of the image we don't need.
# Bear in mind that, because it is an image matrix, the first slice selects the
# rows (start and end) and the second slice selects the columns.
# Select the limits so that they get the line not too close, not too far, and the
# minimum portion possible, to make the process faster.
height, width, channels = cv_image.shape
descentre = 160
rows_to_watch = 20
crop_img = cv_image[int(height/2 + descentre):int(height/2 + descentre + rows_to_watch), 1:width]

Why these values and not others? It depends on the task. In this case, you are interested in a stretch of the line that is neither too far from nor too close to the robot. If you focus on a part of the line that is too far away, the robot won't follow the line; it will just cut across the map. On the other hand, focusing on a part that is too close won't give the robot time to adapt to changes in the line.

It is also important to optimise the size of the cropped region. If it is too large, too much data will be processed, making the program too slow; on the other hand, it must contain enough data to work with. You will have to tune this for your particular image.
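The cropping itself can be sketched with plain NumPy (a stand-in 480 x 640 frame is assumed here): rows and columns are selected in a single slice, rows first, columns second.

```python
import numpy as np

height, width = 480, 640
cv_image = np.zeros((height, width, 3), dtype=np.uint8)  # stand-in for a camera frame

descentre = 160
rows_to_watch = 20
top = height // 2 + descentre          # 400
bottom = top + rows_to_watch           # 420

# One slice: rows 400..419, all columns
crop_img = cv_image[top:bottom, 0:width]
# crop_img.shape == (20, 640, 3)
```

The resulting band is only 20 rows tall, which is why the later processing steps run so much faster on it than on the full frame.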

Step 2: Convert from BGR to HSV

Remember that, for historical reasons, OpenCV uses BGR instead of RGB (when OpenCV was created, some cameras used BGR).

It turns out that neither RGB nor BGR makes it easy to differentiate colours. That is why HSV is used. The idea behind HSV is to separate the colour component from the saturation and brightness. This makes it much easier to recognise the same colour under different lighting conditions, which is a serious problem in image recognition.

For more information, visit https://en.wikipedia.org/wiki/HSL_and_HSV.

If you experiment with the HSV format, you will find it very easy to understand. Our best suggestion is to install Blender on your computer and, after adding a material to the cube, play with the HSV colours. You will see the following:

H: controls the colour (red, blue, yellow, green, and so on)

S: controls the colour saturation (from white --> the most saturated version of the colour selected in H)

V: controls how dark or bright the colour is (from black --> the brightest version of the colour selected in H)

Check this GIF to better understand HSV:

# Convert from BGR to HSV
hsv = cv2.cvtColor(crop_img, cv2.COLOR_BGR2HSV)

# Define the Yellow Colour in HSV
#RGB
#[[[222,255,0]]]
#BGR
#[[[0,255,222]]]
"""
To know which colour to track in HSV, put it in BGR.
Use ColorZilla to get the colour registered by the camera.
>>> yellow = np.uint8([[[B,G,R]]])
>>> hsv_yellow = cv2.cvtColor(yellow, cv2.COLOR_BGR2HSV)
>>> print( hsv_yellow )
[[[ 34 255 255]]]
"""
lower_yellow = np.array([20,100,100])
upper_yellow = np.array([50,255,255])

In the code above, you convert the cropped image (crop_img) to HSV.

Next, you select the HSV colour to track, picking a point from the base of the colour cone. Since HSV values are difficult to produce by hand, it is best to use a colour-picker tool such as ColorZilla to get the RGB encoding of the colour you want to track. In this case, it is the yellow line in the simulation.

Once you have it, convert it to HSV using the sample code given for the Python terminal, or create a small program that uses NumPy as np and cv2.
In the example given, the colour of the line is HSV = [[[34 255 255]]].
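You can sanity-check that value without OpenCV, using only Python's standard colorsys module (a sketch: colorsys works on RGB floats in [0, 1], whereas OpenCV scales H to 0-179 and S, V to 0-255):

```python
import colorsys

r, g, b = 222, 255, 0                      # the line's colour in RGB
h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

# Rescale to OpenCV's conventions: H in [0, 179], S and V in [0, 255]
h_cv, s_cv, v_cv = round(h * 180), round(s * 255), round(v * 255)
# (h_cv, s_cv, v_cv) == (34, 255, 255), matching the value above
```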

Finally, select an upper and a lower bound to define the region of yellow at the base of the cone. The larger the region, the more gradations of the selected colour it admits. How large to make it depends on how your robot detects colour variations in the image and on how important it is not to mix up similar colours.

Extra step: Create an HSV sampler
In Chapter 1, we used hsv_detector.py. Now we will create our own sampler to find the ranges for detecting a particular colour.

roscd my_following_line_package/scripts
touch hsv_detector.py
touch rgb_camera_robot_sensor.py
chmod +x *.py
hsv_detector.py 
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import cv2
import argparse
from operator import xor
from rgb_camera_robot_sensor import RobotRGBSensors
import rospy


def callback(value):
    pass


def setup_trackbars(range_filter):
    cv2.namedWindow("Trackbars", 0)
    for i in ["MIN", "MAX"]:
        v = 0 if i == "MIN" else 255
        for j in range_filter:
            cv2.createTrackbar("%s_%s" % (j, i), "Trackbars", v, 255, callback)


def get_trackbar_values(range_filter):
    values = []
    for i in ["MIN", "MAX"]:
        for j in range_filter:
            v = cv2.getTrackbarPos("%s_%s" % (j, i), "Trackbars")
            values.append(v)
    return values


def main():
    rospy.init_node("hsv_detector", log_level=rospy.DEBUG)
    rospy.logwarn("Starting....")
    sensors_obj = RobotRGBSensors("/camera/rgb/image_raw")
    cv_image = sensors_obj.get_image()
    range_filter = "HSV"
    setup_trackbars(range_filter)

    while not rospy.is_shutdown():
        image = sensors_obj.get_image()
        frame_to_thresh = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
        v1_min, v2_min, v3_min, v1_max, v2_max, v3_max = get_trackbar_values(range_filter)
        thresh = cv2.inRange(frame_to_thresh, (v1_min, v2_min, v3_min), (v1_max, v2_max, v3_max))
        preview = cv2.bitwise_and(image, image, mask=thresh)
        cv2.imshow("Preview", preview)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    rospy.logwarn("Shutting down")
    cv2.destroyAllWindows()


if __name__ == '__main__':
    main()
rgb_camera_robot_sensor.py
#!/usr/bin/env python
import sys
import rospy
import cv2
import numpy as np
from cv_bridge import CvBridge, CvBridgeError
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image


class RobotRGBSensors(object):

    def __init__(self, rgb_cam_topic, show_raw_image=False):
        self._show_raw_image = show_raw_image
        self.bridge_object = CvBridge()
        self.camera_topic = rgb_cam_topic
        self._check_cv_image_ready()
        self.image_sub = rospy.Subscriber(self.camera_topic, Image, self.camera_callback)

    def _check_cv_image_ready(self):
        self.cv_image = None
        while self.cv_image is None and not rospy.is_shutdown():
            try:
                raw_cv_image = rospy.wait_for_message(self.camera_topic, Image, timeout=1.0)
                self.cv_image = self.bridge_object.imgmsg_to_cv2(raw_cv_image, desired_encoding="bgr8")
                rospy.logdebug("Current "+self.camera_topic+" READY=>")
            except:
                rospy.logerr("Current "+self.camera_topic+" not ready yet, retrying for getting "+self.camera_topic)
        return self.cv_image

    def camera_callback(self, data):
        try:
            # We select bgr8 because it is the OpenCV encoding by default
            self.cv_image = self.bridge_object.imgmsg_to_cv2(data, desired_encoding="bgr8")
        except CvBridgeError as e:
            print(e)

        if self._show_raw_image:
            cv2.imshow("Image window", self.cv_image)
            cv2.waitKey(1)

    def get_image(self):
        return self.cv_image


def main():
    rospy.init_node('robot_rgb_sensor', anonymous=True)
    sensor_object = RobotRGBSensors("/camera/rgb/image_raw")
    try:
        rospy.spin()
    except KeyboardInterrupt:
        print("Shutting down")

    cv2.destroyAllWindows()


if __name__ == '__main__':
    main()

Now you can launch it and obtain the HSV range values for the yellow line:

rosrun my_following_line_package hsv_detector.py

 

You can see that the ranges of values are:

  • 23 < H < 34
  • 98 < S < 255
  • 0 < V < 255

As you can see, they will vary depending on how strict you want the detection to be.

  • lower_yellow = np.array([20,100,100])
  • upper_yellow = np.array([50,255,255])

It is normal to have a wide range in S and V, because these depend on the lighting conditions. H, however, usually stays more or less the same.

Step 3: Apply the mask
Generate a version of the cropped image that shows only two colours: black and white. White represents everything you consider yellow, and the rest is black. This is a binary image.

Why do you need this? It serves two purposes:

  • First, it avoids continuous detections: a pixel either is the colour or it isn't, with no states in between. This is vital for the centroid calculation that follows, because it works purely on a yes/no basis.
  • Second, it lets you generate the result image afterwards. You can strip everything from the image except the coloured line, focusing only on the part you are interested in.
# Threshold the HSV image to get only yellow colors
mask = cv2.inRange(hsv, lower_yellow, upper_yellow)
# Bitwise-AND mask and original image
res = cv2.bitwise_and(crop_img, crop_img, mask=mask)

This merges the cropped HSV image with the binary mask image. This step colours only what was detected, leaving the rest black.
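What cv2.inRange computes can be sketched in plain NumPy (a hypothetical 1 x 3 "HSV image" is used here for illustration): a pixel becomes 255 only if all three of its channels fall inside the bounds.

```python
import numpy as np

# One yellow-ish pixel (in range), one blue-ish pixel, one yellow-ish pixel
hsv = np.array([[[34, 255, 255], [110, 200, 200], [25, 150, 180]]], dtype=np.uint8)
lower_yellow = np.array([20, 100, 100])
upper_yellow = np.array([50, 255, 255])

# Equivalent of cv2.inRange(hsv, lower_yellow, upper_yellow):
# 255 where every channel lies inside [lower, upper], 0 elsewhere
in_bounds = (hsv >= lower_yellow) & (hsv <= upper_yellow)
mask = in_bounds.all(axis=-1).astype(np.uint8) * 255
# mask == [[255, 0, 255]]
```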

Step 4: Get the centroid, draw a circle at it, and display the images

A centroid essentially represents the point in space where the mass of an object is concentrated: its centre of mass. For the purposes of this course, centroid and centre of mass are the same thing. They are calculated using integrals.

This extrapolates to images. Instead of mass, however, we have colour. The place with the highest concentration of the colour you are looking for is where the centroid lies: the centre of mass of the blob seen in the image.

This is why you applied the mask to make the image binary. It lets you easily compute where the centre of mass is, because you now have a discrete function instead of a continuous one. This means you can integrate discretely, without needing a function describing how the amount of colour fluctuates over the whole region.

Centroids are essential in blob tracking because they give you the blob's precise point in space. You will use it to track the blob and, therefore, the line.
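That discrete centre-of-mass computation can be sketched in plain NumPy (these are the same quantities cv2.moments calls m00, m10, and m01):

```python
import numpy as np

def mask_centroid(mask):
    """Centroid of a binary mask: cx = m10/m00, cy = m01/m00,
    where m00 is the blob's pixel count and m10, m01 sum the x and y coordinates."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        raise ZeroDivisionError("empty mask")
    return xs.mean(), ys.mean()

mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 2] = 255          # a small vertical "line" centred at column 2
cx, cy = mask_centroid(mask)
# (cx, cy) == (2.0, 2.0)
```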

This is required to compute the centroid of the colour blob. You can use image moments for this purpose:

# Calculate centroid of the blob of binary image using ImageMoments
m = cv2.moments(mask, False)
try:
    cx, cy = m['m10']/m['m00'], m['m01']/m['m00']
except ZeroDivisionError:
    cy, cx = height/2, width/2

As you can see, you obtain the coordinates, in the cropped image, of the centroid where yellow was detected. If nothing is detected, the centroid is placed at the centre of the image.

Bear in mind that you must assign the cy and cx values correctly. Don't mix up height and width, or you will run into problems in the following exercises.

If you want a more detailed explanation of the OpenCV exercises and of all the information you can obtain using contour features, go to the following link: http://docs.opencv.org/trunk/dd/d49/tutorial_py_contour_features.html

If you are interested in all the mathematics behind this, visit https://en.wikipedia.org/wiki/Image_moment

# Draw the centroid in the resulting image
# cv2.circle(img, center, radius, color[, thickness[, lineType[, shift]]])
cv2.circle(res, (int(cx), int(cy)), 10, (0,0,255), -1)

cv2.imshow("Original", cv_image)
cv2.imshow("HSV", hsv)
cv2.imshow("MASK", mask)
cv2.imshow("RES", res)

cv2.waitKey(1)

OpenCV lets you draw many things on an image, not just geometric shapes. In this case, a circle is enough.

cv2.circle(res, (centre_circle_x, centre_circle_y), radius, (BGR colour), thickness)

We are using this function to draw a circle at the computed centroid.

Finally, display all the image variables with their titles:

2.4 Moving the TurtleBot Based on the Centroid Position

error_x = cx - width / 2
twist_object = Twist()
twist_object.linear.x = 0.2
twist_object.angular.z = -error_x / 100
rospy.loginfo("ANGULAR VALUE SENT===>"+str(twist_object.angular.z))
# Make it start turning
self.movekobuki_object.move_robot(twist_object)

This movement is based on proportional control. It means the TurtleBot will oscillate a lot and may make mistakes, but it is the simplest way to move the robot and get the job done.

It commands a constant linear motion, while the angular speed around Z depends on the difference along the X axis between the centroid and the centre of the image.
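A worked example of that proportional command, with assumed numbers (a 640-pixel-wide image and a centroid detected at x = 400):

```python
width = 640          # image width in pixels (assumed)
cx = 400             # detected centroid x-coordinate (assumed)

error_x = cx - width / 2      # positive error: blob is to the right of centre
angular_z = -error_x / 100    # negative z: turn right to re-centre the blob
# error_x == 80.0, angular_z == -0.8
```

A blob left of centre would give a negative error and therefore a positive (left) turn, so the robot always steers toward the centroid.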

To move the robot, you can use this module:

roscd my_following_line_package/scripts
# We create empty files
touch move_robot.py
touch follow_line_step_hsv.py
chmod +x *.py
move_robot.py
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist


class MoveKobuki(object):

    def __init__(self):
        self.cmd_vel_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
        self.last_cmdvel_command = Twist()
        self._cmdvel_pub_rate = rospy.Rate(10)
        self.shutdown_detected = False

    def move_robot(self, twist_object):
        self.cmd_vel_pub.publish(twist_object)

    def clean_class(self):
        # Stop Robot
        twist_object = Twist()
        twist_object.angular.z = 0.0
        self.move_robot(twist_object)
        self.shutdown_detected = True


def main():
    rospy.init_node('move_robot_node', anonymous=True)
    movekobuki_object = MoveKobuki()
    twist_object = Twist()
    # Make it start turning
    twist_object.angular.z = 0.5
    rate = rospy.Rate(5)

    ctrl_c = False

    def shutdownhook():
        # works better than rospy.is_shutdown()
        movekobuki_object.clean_class()
        rospy.loginfo("shutdown time!")
        ctrl_c = True

    rospy.on_shutdown(shutdownhook)

    while not ctrl_c:
        movekobuki_object.move_robot(twist_object)
        rate.sleep()


if __name__ == '__main__':
    main()

Below is an example of how to put all this code together:

follow_line_step_hsv.py
#!/usr/bin/env python
import rospy
import cv2
import numpy as np
from cv_bridge import CvBridge, CvBridgeError
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image
from move_robot import MoveKobuki


class LineFollower(object):

    def __init__(self):
        self.bridge_object = CvBridge()
        self.image_sub = rospy.Subscriber("/camera/rgb/image_raw", Image, self.camera_callback)
        self.movekobuki_object = MoveKobuki()

    def camera_callback(self, data):
        try:
            # We select bgr8 because it is the OpenCV encoding by default
            cv_image = self.bridge_object.imgmsg_to_cv2(data, desired_encoding="bgr8")
        except CvBridgeError as e:
            print(e)

        # We get the image dimensions and crop the parts of the image we don't need.
        # Bear in mind that the first slice of the image matrix selects the rows and
        # the second selects the columns.
        # Select the limits so that it gets the line not too close and not too far,
        # and the minimum portion possible, to make the process faster.
        height, width, channels = cv_image.shape
        descentre = 160
        rows_to_watch = 20
        crop_img = cv_image[int(height/2 + descentre):int(height/2 + descentre + rows_to_watch), 1:width]

        # Convert from BGR to HSV
        hsv = cv2.cvtColor(crop_img, cv2.COLOR_BGR2HSV)

        # Define the Yellow Colour in HSV
        #RGB
        #[[[222,255,0]]]
        #BGR
        #[[[0,255,222]]]
        """
        To know which colour to track in HSV, put it in BGR.
        Use ColorZilla to get the colour registered by the camera.
        >>> yellow = np.uint8([[[B,G,R]]])
        >>> hsv_yellow = cv2.cvtColor(yellow, cv2.COLOR_BGR2HSV)
        >>> print( hsv_yellow )
        [[[ 34 255 255]]]
        """
        lower_yellow = np.array([20,100,100])
        upper_yellow = np.array([50,255,255])

        # Threshold the HSV image to get only yellow colors
        mask = cv2.inRange(hsv, lower_yellow, upper_yellow)

        # Calculate centroid of the blob of binary image using ImageMoments
        m = cv2.moments(mask, False)
        try:
            cx, cy = m['m10']/m['m00'], m['m01']/m['m00']
        except ZeroDivisionError:
            cy, cx = height/2, width/2

        # Bitwise-AND mask and original image
        res = cv2.bitwise_and(crop_img, crop_img, mask=mask)

        # Draw the centroid in the resulting image
        # cv2.circle(img, center, radius, color[, thickness[, lineType[, shift]]])
        cv2.circle(res, (int(cx), int(cy)), 10, (0,0,255), -1)

        cv2.imshow("Original", cv_image)
        cv2.imshow("HSV", hsv)
        cv2.imshow("MASK", mask)
        cv2.imshow("RES", res)

        cv2.waitKey(1)

        error_x = cx - width / 2
        twist_object = Twist()
        twist_object.linear.x = 0.2
        twist_object.angular.z = -error_x / 100
        rospy.loginfo("ANGULAR VALUE SENT===>"+str(twist_object.angular.z))
        # Make it start turning
        self.movekobuki_object.move_robot(twist_object)

    def clean_up(self):
        self.movekobuki_object.clean_class()
        cv2.destroyAllWindows()


def main():
    rospy.init_node('line_following_node', anonymous=True)
    line_follower_object = LineFollower()
    rate = rospy.Rate(5)
    ctrl_c = False

    def shutdownhook():
        # works better than rospy.is_shutdown()
        line_follower_object.clean_up()
        rospy.loginfo("shutdown time!")
        ctrl_c = True

    rospy.on_shutdown(shutdownhook)

    while not ctrl_c:
        rate.sleep()


if __name__ == '__main__':
    main()
rosrun my_following_line_package follow_line_step_hsv.py

Note: if you see the error message **libdc1394 error: Failed to initialize libdc1394**, don't worry. It has no effect at all.

Try some improvements:

  • Lower the robot's speed to test whether that improves its performance. Change the linear and angular speeds.
  • Change the robot's behaviour. Consider creating a recovery behaviour for when the robot loses the line.

Exercise 2.4.1

Try changing the colour to track, by following three different stars.

  • The red star
  • The green star
  • The blue star
  • Create a script named follow_star_step_hsv.py. When executed, pass the script one of the four colours, and it will search only for that colour (red, yellow, green, or blue).

Below, see an example of the images on my screen while following the blue star:

Next, change the values of the upper and lower bounds using the information provided below and see the results:

  • LOOSE color detection: lower_yellow = np.array([0,50,50]), upper_yellow = np.array([255,255,255])
  • STRICT color detection: lower_yellow = np.array([33,254,254]), upper_yellow = np.array([36,255,255])

Below is an example of what you should see when changing the lower and upper bounds:

LOOSE color detection:


STRICT color detection:

With LOOSE detection, all the greens and yellows are detected. With STRICT detection, you can see that even part of the yellow line is cut off, because the camera sensor registers it slightly differently (not saturated enough).

End of Exercise 2.4.1

Solution to Exercise 2.4.1

See the script created for this purpose below:

follow_star_step_hsv.py
#!/usr/bin/env python
import sys
import rospy
import cv2
import numpy as np
from cv_bridge import CvBridge, CvBridgeError
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image
from move_robot import MoveKobuki


class StarFollower(object):

    def __init__(self, star_color):
        self._star_color = star_color
        # name : {"upper":[H,S,V], "lower":[H,S,V]}
        self.star_HSV_ranges_colour_db = {
            "green": {"upper": [64,255,255], "lower": [45,142,0]},
            "red": {"upper": [0,255,255], "lower": [0,185,0]},
            "blue": {"upper": [111,255,255], "lower": [104,134,0]},
            "yellow": {"upper": [34,255,255], "lower": [23,98,0]},
        }
        self.bridge_object = CvBridge()
        self.image_sub = rospy.Subscriber("/camera/rgb/image_raw", Image, self.camera_callback)
        self.movekobuki_object = MoveKobuki()

    def get_range_hsv_color(self, color_name):
        if color_name in self.star_HSV_ranges_colour_db:
            ranges_dict = self.star_HSV_ranges_colour_db[color_name]
            upper = ranges_dict["upper"]
            lower = ranges_dict["lower"]
            upper_color = np.array([upper[0], upper[1], upper[2]])
            lower_color = np.array([lower[0], lower[1], lower[2]])
            return upper_color, lower_color
        else:
            return None, None

    def update_star_color(self, new_color):
        self._star_color = new_color

    def camera_callback(self, data):
        try:
            # We select bgr8 because it is the OpenCV encoding by default
            cv_image = self.bridge_object.imgmsg_to_cv2(data, desired_encoding="bgr8")
        except CvBridgeError as e:
            print(e)

        height, width, channels = cv_image.shape
        # Noetic integer conversion
        height = int(height)
        width = int(width)
        channels = int(channels)

        descentre = 0
        rows_to_watch = 200
        aux1 = int(((height)/2) + descentre)
        aux2 = int((height)/2 + (descentre + rows_to_watch))
        crop_img = cv_image[aux1:aux2, 1:width]

        # Convert from BGR to HSV
        hsv = cv2.cvtColor(crop_img, cv2.COLOR_BGR2HSV)

        upper_hsv_color, lower_hsv_color = self.get_range_hsv_color(self._star_color)

        if upper_hsv_color is not None and lower_hsv_color is not None:
            # Threshold the HSV image to get only the selected colour
            mask = cv2.inRange(hsv, lower_hsv_color, upper_hsv_color)

            # Calculate centroid of the blob of binary image using ImageMoments
            m = cv2.moments(mask, False)
            try:
                cx, cy = m['m10']/m['m00'], m['m01']/m['m00']
            except ZeroDivisionError:
                cy, cx = height/2, width/2

            # Bitwise-AND mask and original image
            res = cv2.bitwise_and(crop_img, crop_img, mask=mask)

            # Draw the centroid in the resulting image
            # cv2.circle(img, center, radius, color[, thickness[, lineType[, shift]]])
            cv2.circle(res, (int(cx), int(cy)), 10, (0,0,255), -1)

            cv2.imshow("Original", cv_image)
            cv2.imshow("HSV", hsv)
            cv2.imshow("MASK", mask)
            cv2.imshow("RES", res)
            cv2.waitKey(1)

            error_x = cx - width / 2
            twist_object = Twist()
            twist_object.linear.x = 0.2
            twist_object.angular.z = -error_x / 100
            rospy.loginfo("ANGULAR VALUE SENT===>"+str(twist_object.angular.z))
            # Make it start turning
            self.movekobuki_object.move_robot(twist_object)
        else:
            print("Colour not in database=="+str(self._star_color))

    def clean_up(self):
        self.movekobuki_object.clean_class()
        cv2.destroyAllWindows()


def main():
    rospy.init_node('star_following_node', anonymous=True)

    arguments = sys.argv
    if len(arguments) > 1:
        color_name = arguments[1]
    else:
        print("Use: python follow_star_step_hsv color_name[red,green,blue or yellow]")
        color_name = "yellow"

    star_follower_object = StarFollower(star_color=color_name)
    rate = rospy.Rate(5)
    ctrl_c = False

    def shutdownhook():
        # works better than rospy.is_shutdown()
        star_follower_object.clean_up()
        rospy.loginfo("shutdown time!")
        ctrl_c = True

    rospy.on_shutdown(shutdownhook)

    while not ctrl_c:
        rate.sleep()


if __name__ == '__main__':
    main()

As you can see, the structure is the same, but now we have self.star_HSV_ranges_colour_db, where we store all the different HSV ranges retrieved with the script we wrote earlier.

Also, note that the processed image area is larger. This is to make the TurtleBot's life easier, because the stars are not so easy to see.

These are values that worked. Yours may differ slightly, depending on how strict you want the detection to be.

From this first experience, you will notice two problems:

  • When the robot follows a single line it works fine, but when several paths are available it goes a bit crazy until only one path remains in view. This is because you have no information for multiple-blob detection, and therefore no strategy for choosing one direction.
  • When a centroid is detected at the far edge of the image, the robot goes crazy, turning and oscillating. This is due to the proportional control used. It can be solved with a full PID controller.

We will now address both of these problems.
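For the oscillation problem, a minimal PID sketch could replace the raw -error_x / 100 term (the gains kp, ki, kd and the time step dt here are placeholders you would tune; this is not the course's final controller):

```python
class SimplePID(object):
    """Minimal PID: output = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt            # accumulated error
        derivative = (error - self.prev_error) / self.dt  # rate of change
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# With ki = kd = 0 this reduces to the proportional controller used so far:
pid = SimplePID(kp=1.0/100, ki=0.0, kd=0.0, dt=0.2)
angular_z = -pid.step(80)   # error_x = 80 pixels -> approximately -0.8
```

The derivative term is what damps the oscillation near the far edge of the image: it opposes fast changes in the error, so the turn command no longer overshoots as violently.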

2.5  Additional Step: Following Multiple Centroids

We will use the same code (follow_line_step_hsv.py), except that now it can track multiple blobs, letting you choose which path to follow.

contours, _ = cv2.findContours(mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_TC89_L1)
rospy.loginfo("Number of centroids==>"+str(len(contours)))
centres = []
for i in range(len(contours)):
    moments = cv2.moments(contours[i])
    try:
        centres.append((int(moments['m10']/moments['m00']), int(moments['m01']/moments['m00'])))
        cv2.circle(res, centres[-1], 10, (0, 255, 0), -1)
    except ZeroDivisionError:
        pass

This code extracts all the contours using the findContours() function and then computes the moments of each contour.

If m00 is zero, the centre is considered useless and is not drawn.

It then draws a green circle at the centre of each contour, producing the following result:

See the complete code below for reference:

roscd my_following_line_package/scripts
# We create an empty file
touch follow_line_step_multiple.py
chmod +x *.py

follow_line_step_multiple.py

#!/usr/bin/env python
import rospy
import cv2
import numpy as np
from cv_bridge import CvBridge, CvBridgeError
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image
from move_robot import MoveKobuki


class LineFollower(object):

    def __init__(self):
        self.bridge_object = CvBridge()
        self.image_sub = rospy.Subscriber("/camera/rgb/image_raw", Image, self.camera_callback)
        self.movekobuki_object = MoveKobuki()

    def camera_callback(self, data):
        try:
            # We select bgr8 because it is the OpenCV encoding by default
            cv_image = self.bridge_object.imgmsg_to_cv2(data, desired_encoding="bgr8")
        except CvBridgeError as e:
            print(e)

        # We get the image dimensions and crop the parts of the image we don't need.
        # Select the limits so that it gets the line not too close and not too far,
        # and the minimum portion possible, to make the process faster.
        height, width, channels = cv_image.shape
        # Noetic integer conversion
        height = int(height)
        width = int(width)
        channels = int(channels)

        descentre = 160
        rows_to_watch = 20
        aux1 = int(((height)/2) + descentre)
        aux2 = int((height)/2 + (descentre + rows_to_watch))
        crop_img = cv_image[aux1:aux2, 1:width]

        # Convert from BGR to HSV
        hsv = cv2.cvtColor(crop_img, cv2.COLOR_BGR2HSV)

        lower_yellow = np.array([20,100,100])
        upper_yellow = np.array([50,255,255])

        # Threshold the HSV image to get only yellow colors
        mask = cv2.inRange(hsv, lower_yellow, upper_yellow)

        # Bitwise-AND mask and original image
        res = cv2.bitwise_and(crop_img, crop_img, mask=mask)

        contours, _ = cv2.findContours(mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_TC89_L1)
        rospy.loginfo("Number of centroids==>"+str(len(contours)))
        centres = []
        for i in range(len(contours)):
            moments = cv2.moments(contours[i])
            try:
                centres.append((int(moments['m10']/moments['m00']), int(moments['m01']/moments['m00'])))
                cv2.circle(res, centres[-1], 10, (0, 255, 0), -1)
            except ZeroDivisionError:
                pass
        rospy.loginfo(str(centres))

        # Select the right centroid
        # [(542, 39), (136, 46)], (x, y)
        most_right_centroid_index = 0
        index = 0
        max_x_value = 0
        for candidate in centres:
            # Retrieve the cx value
            cx = candidate[0]
            # Get the Cx more to the right
            if cx >= max_x_value:
                max_x_value = cx
                most_right_centroid_index = index
            index += 1

        try:
            cx = centres[most_right_centroid_index][0]
            cy = centres[most_right_centroid_index][1]
            rospy.logwarn("Winner =="+str(cx)+","+str(cy))
        except:
            cy, cx = height/2, width/2

        # Draw the centroid in the resulting image
        # cv2.circle(img, center, radius, color[, thickness[, lineType[, shift]]])
        cv2.circle(res, (int(cx), int(cy)), 5, (0,0,255), -1)

        cv2.imshow("Original", cv_image)
        #cv2.imshow("HSV", hsv)
        #cv2.imshow("MASK", mask)
        cv2.imshow("RES", res)

        cv2.waitKey(1)

        error_x = cx - width / 2
        twist_object = Twist()
        twist_object.linear.x = 0.2
        twist_object.angular.z = -error_x / 100
        rospy.loginfo("ANGULAR VALUE SENT===>"+str(twist_object.angular.z))
        # Make it start turning
        self.movekobuki_object.move_robot(twist_object)

    def clean_up(self):
        self.movekobuki_object.clean_class()
        cv2.destroyAllWindows()


def main():
    rospy.init_node('line_following_node', anonymous=True)
    line_follower_object = LineFollower()
    rate = rospy.Rate(5)
    ctrl_c = False

    def shutdownhook():
        # works better than rospy.is_shutdown()
        line_follower_object.clean_up()
        rospy.loginfo("shutdown time!")
        ctrl_c = True

    rospy.on_shutdown(shutdownhook)

    while not ctrl_c:
        rate.sleep()


if __name__ == '__main__':
    main()

When you launch it, you can see that two blobs are detected, but only one is selected: the one on the right.

As you can see, it is the same code, except that it now works with multiple centroids. It selects the one with the higher X value, so it always moves to the right. You can also see that a red circle is drawn at the selected centroid.
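That right-most selection loop can also be written more compactly with Python's built-in max (an equivalent sketch, not the code used in the script):

```python
centres = [(542, 39), (136, 46)]       # (cx, cy) pairs from the contours

# The centroid with the highest cx is the right-most one
cx, cy = max(centres, key=lambda c: c[0])
# (cx, cy) == (542, 39)
```

Swapping max for min would give the left-most centroid, which is exactly the `go_right` switch used in the exercise solution below.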

Exercise 2.5.1

Create a new script named line_objective.py that, based on what is published on the topic "objective", does the following:

  • If yellow, it follows the loop path.
  • If red, it follows the loop path until it reaches the red star path, then stops.
  • If green, it follows the loop path until it reaches the green star path, then stops.
  • If blue, it follows the loop path until it reaches the blue star path, then stops.

Solution to Exercise 2.5.1

The solution code is as follows:

line_objective.py
#!/usr/bin/env python
# license removed for brevity
import time
import rospy
from std_msgs.msg import String
from move_robot import MoveKobuki
from geometry_msgs.msg import Twist
from follow_line_step_multiple_complete import LineFollower


class LineObjective(object):

    def __init__(self):
        # There are two modes, line and star
        self.search_mode = "line"
        self.lost_star = 0
        self.star_found = False

        self.objective_tp = "/objective"
        self._check_objective_ready()
        rospy.Subscriber(self.objective_tp, String, self.objective_clb)

        # Mover of Turtlebot
        self.movekobuki_object = MoveKobuki()

        # Line Followers
        self.init_line_followers()
        self.set_line_folowers_states()

        self.rate = rospy.Rate(5)
        self.ctrl_c = False
        rospy.on_shutdown(self.shutdownhook)

    def init_line_followers(self):
        line_yellow_right = LineFollower(star_color="yellow", go_right=True)
        line_yellow_left = LineFollower(star_color="yellow", go_right=False)
        line_red = LineFollower(star_color="red")
        line_green = LineFollower(star_color="green")
        line_blue = LineFollower(star_color="blue")

        self.line_folowers_dict = {
            "yellow_left": line_yellow_left,
            "yellow_right": line_yellow_right,
            "red": line_red,
            "green": line_green,
            "blue": line_blue
        }
        print("Init Done="+str(self.line_folowers_dict))

    def clean_line_folowers(self):
        for key, value in self.line_folowers_dict.items():
            value.clean_up()

    def set_line_folowers_states(self):
        self.search_mode = "line"
        for key, value in self.line_folowers_dict.items():
            # print("key="+str(key)+",mission_objective="+str(self.mission_objective))
            if key == self.mission_objective:
                print("Activating="+str(key))
                value.set_detection_state(True)
            else:
                # If the mission is not yellow
                if "yellow" not in self.mission_objective and key == "yellow_left":
                    print("Activating Yellow Left for Star Search="+str(key))
                    value.set_detection_state(True)
                    self.search_mode = "star_"+str(self.mission_objective)
                else:
                    # We deactivate the detection
                    value.set_detection_state(False)

    def get_cmd_vel(self):
        cmd_vel = Twist()
        for key, value in self.line_folowers_dict.items():
            if key == self.mission_objective:
                cmd_vel = value.get_current_twist()
                if "star" in self.search_mode:
                    # We see if the star LineFollower is detecting something
                    if cmd_vel.linear.x != 0.0:
                        rospy.logwarn("STAR FOUND USING ITS CMD_VEL VALUES="+str(cmd_vel.linear.x))
                        # Finish script
                        self.star_found = True
                    else:
                        # We then have to get the cmd_vel from the yellow_left line follower
                        if self.lost_star >= 1:
                            rospy.logwarn("STAR FOUND AND LOST...FINISHING")
                            time.sleep(3)
                            self.ctrl_c = True
                        rospy.logwarn("STAR_"+str(self.search_mode)+", WAS LOST, num="+str(self.lost_star))
                        cmd_vel = self.line_folowers_dict["yellow_left"].get_current_twist()
                        # Only if we found the star at least once do we start counting when we lose it
                        if self.star_found:
                            self.lost_star += 1
        return cmd_vel

    def objective_clb(self, msg):
        self.mission_objective = msg.data

    def _check_objective_ready(self):
        self.mission_objective_msg = None
        while self.mission_objective_msg is None and not rospy.is_shutdown():
            try:
                self.mission_objective_msg = rospy.wait_for_message(self.objective_tp, String, timeout=1.0)
                rospy.logdebug("Current "+self.objective_tp+" READY=>" + str(self.mission_objective_msg))
            except:
                rospy.logerr("Current "+self.objective_tp+" not ready yet, retrying.")
        self.mission_objective = self.mission_objective_msg.data

    def shutdownhook(self):
        # works better than rospy.is_shutdown()
        rospy.loginfo("Cleaning and Stopping Turtlebot")
        self.clean_line_folowers()
        self.movekobuki_object.clean_class()
        rospy.loginfo("shutdown time!")
        self.ctrl_c = True

    def loop(self):
        while not self.ctrl_c:
            rospy.loginfo("Mission=="+str(self.mission_objective))
            self.set_line_folowers_states()
            cmd_vel = self.get_cmd_vel()
            # rospy.loginfo("cmd_vel=="+str(cmd_vel))
            if cmd_vel.linear.x == 0.0:
                # Recovery mode
                rospy.logwarn("LOST, RECOVERY MODE")
                cmd_vel.linear.x = 0.2
                cmd_vel.angular.z = 0.1
            self.movekobuki_object.move_robot(cmd_vel)
            self.rate.sleep()


def main():
    rospy.init_node('line_objective', anonymous=True, log_level=rospy.DEBUG)
    line_obj = LineObjective()
    line_obj.loop()


if __name__ == '__main__':
    try:
        main()
    except rospy.ROSInterruptException:
        pass
follow_line_step_multiple_complete.py
#!/usr/bin/env python
import rospy
import cv2
import numpy as np
from cv_bridge import CvBridge, CvBridgeError
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image


class LineFollower(object):

    def __init__(self, star_color="yellow", go_right=True):
        self._star_color = star_color
        self._go_right = go_right
        self.detection_state_on = False
        # name : {"upper": [H, S, V], "lower": [H, S, V]}
        self.star_HSV_ranges_colour_db = {
            "green": {"upper": [64, 255, 255], "lower": [45, 142, 0]},
            "red": {"upper": [0, 255, 255], "lower": [0, 185, 0]},
            "blue": {"upper": [111, 255, 255], "lower": [104, 134, 0]},
            "yellow": {"upper": [34, 255, 255], "lower": [23, 98, 0]},
        }
        self.current_twist = Twist()
        self.bridge_object = CvBridge()
        self.image_sub = rospy.Subscriber("/camera/rgb/image_raw", Image, self.camera_callback)

    def get_current_twist(self):
        return self.current_twist

    def update_turning_tendency(self, go_right):
        self._go_right = go_right

    def get_range_hsv_color(self, color_name):
        # rospy.loginfo("Getting HSV for Color==" + str(color_name))
        if color_name in self.star_HSV_ranges_colour_db:
            ranges_dict = self.star_HSV_ranges_colour_db[color_name]
            upper = ranges_dict["upper"]
            lower = ranges_dict["lower"]
            upper_color = np.array([upper[0], upper[1], upper[2]])
            lower_color = np.array([lower[0], lower[1], lower[2]])
            return upper_color, lower_color
        else:
            return None, None

    def update_star_color(self, new_color):
        self._star_color = new_color

    def set_detection_state(self, is_on):
        self.detection_state_on = is_on

    def camera_callback(self, data):
        if not self.detection_state_on:
            self.current_twist = Twist()
            return

        try:
            # We select bgr8 because it is the default OpenCV encoding
            cv_image = self.bridge_object.imgmsg_to_cv2(data, desired_encoding="bgr8")
        except CvBridgeError as e:
            print(e)
            return

        # Get the image dimensions and crop away the parts we do not need.
        # The first slice index selects rows (start:end), the second columns.
        # Pick the limits so the camera sees the line neither too close nor
        # too far, and keep the cropped strip as small as possible for speed.
        height, width, channels = cv_image.shape
        # Noetic / Python 3 integer conversion
        height = int(height)
        width = int(width)
        channels = int(channels)
        descentre = 160
        rows_to_watch = 20
        aux1 = int(height / 2 + descentre)
        aux2 = int(height / 2 + descentre + rows_to_watch)
        crop_img = cv_image[aux1:aux2, 1:width]  # rows aux1:aux2, columns 1:width

        # Convert from BGR to HSV
        hsv = cv2.cvtColor(crop_img, cv2.COLOR_BGR2HSV)
        upper_hsv_color, lower_hsv_color = self.get_range_hsv_color(self._star_color)
        if upper_hsv_color is None or lower_hsv_color is None:
            rospy.logerr("Color To follow not supported, stopping Robot=" + str(self._star_color))
            self.current_twist = Twist()
            return

        # Threshold the HSV image to keep only the selected colour
        mask = cv2.inRange(hsv, lower_hsv_color, upper_hsv_color)
        # Bitwise-AND mask and original image
        res = cv2.bitwise_and(crop_img, crop_img, mask=mask)
        # OpenCV 4 (ROS Noetic) returns (contours, hierarchy)
        contours, _ = cv2.findContours(mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_TC89_L1)
        # rospy.loginfo("Number of centroids==>" + str(len(contours)))
        centres = []
        for i in range(len(contours)):
            moments = cv2.moments(contours[i])
            try:
                centres.append((int(moments['m10'] / moments['m00']),
                                int(moments['m01'] / moments['m00'])))
                cv2.circle(res, centres[-1], 10, (0, 255, 0), -1)
            except ZeroDivisionError:
                pass
        # rospy.loginfo(str(centres))

        # Select the winning centroid, e.g. centres == [(542, 39), (136, 46)] as (x, y)
        selected_centroid_index = 0
        index = 0
        max_x_value = 0
        min_x_value = width
        for candidate in centres:
            # Retrieve the cx value
            cx = candidate[0]
            if self._go_right:
                # Keep the cx furthest to the right
                if cx >= max_x_value:
                    max_x_value = cx
                    selected_centroid_index = index
            else:
                # Keep the cx furthest to the left
                if cx <= min_x_value:
                    min_x_value = cx
                    selected_centroid_index = index
            index += 1

        try:
            cx = centres[selected_centroid_index][0]
            cy = centres[selected_centroid_index][1]
            # rospy.logwarn("Winner ==" + str(cx) + "," + str(cy))
            # Draw the selected centroid in the resulting image
            # cv2.circle(img, center, radius, color[, thickness[, lineType[, shift]]])
            cv2.circle(res, (int(cx), int(cy)), 5, (0, 0, 255), -1)

            cv2.imshow("Original_" + str(self._star_color), cv_image)
            # cv2.imshow("HSV", hsv)
            # cv2.imshow("MASK", mask)
            cv2.imshow("RES_" + str(self._star_color), res)
            cv2.waitKey(1)

            error_x = cx - width / 2
            twist_object = Twist()
            twist_object.linear.x = 0.2
            twist_object.angular.z = -error_x / 100
            # rospy.loginfo("ANGULAR VALUE SENT===>" + str(twist_object.angular.z))
            self.current_twist = twist_object
        except IndexError:
            # No centroid found: show the images and stop the robot
            # cy, cx = height / 2, width / 2
            cv2.imshow("Original", cv_image)
            # cv2.imshow("HSV", hsv)
            # cv2.imshow("MASK", mask)
            cv2.imshow("RES", res)
            cv2.waitKey(1)
            rospy.logwarn("NO BLOB Found===>")
            self.current_twist = Twist()

    def clean_up(self):
        cv2.destroyAllWindows()


def main():
    rospy.init_node('line_following_node', anonymous=True)
    line_follower_object = LineFollower()
    rate = rospy.Rate(5)
    ctrl_c = False

    def shutdownhook():
        # works better than rospy.is_shutdown()
        nonlocal ctrl_c  # Python 3: rebind the flag of the enclosing scope
        line_follower_object.clean_up()
        rospy.loginfo("shutdown time!")
        ctrl_c = True

    rospy.on_shutdown(shutdownhook)
    while not ctrl_c:
        rate.sleep()


if __name__ == '__main__':
    main()
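In `camera_callback`, the line position is reduced to blob centroids computed from image moments: cx = m10/m00, cy = m01/m00. The same arithmetic can be checked without ROS or a camera. The following minimal NumPy sketch (the `mask_centroid` helper and the toy mask are illustrative, not part of the node above) reproduces what the moment ratios yield for a filled blob:

```python
import numpy as np


def mask_centroid(mask):
    """Compute the (cx, cy) centroid of a binary mask from raw image
    moments: cx = m10/m00, cy = m01/m00."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)                # zeroth moment: blob area in pixels
    if m00 == 0:
        return None              # no blob found, mirror the "NO BLOB" branch
    cx = int(xs.sum() / m00)     # m10 / m00
    cy = int(ys.sum() / m00)     # m01 / m00
    return cx, cy


# A 5x7 toy "mask" containing a 3x3 blob centred at column 4, row 2
mask = np.zeros((5, 7), dtype=np.uint8)
mask[1:4, 3:6] = 255
print(mask_centroid(mask))  # (4, 2)
```

Computing moments over the whole mask rather than per contour is a common shortcut when only one blob is expected; the node above instead runs `cv2.moments` per contour so it can pick between several stars of the same colour.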

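The steering logic at the end of the callback is a plain proportional (P) controller: the angular velocity is proportional to the horizontal offset of the chosen centroid from the image centre, with a fixed forward speed. A standalone sketch of that computation (the helper name and the `kp` parameter are made up for illustration; the node hard-codes the gain as 1/100):

```python
def steering_from_centroid(cx, image_width, kp=0.01, linear_speed=0.2):
    """Proportional steering: angular.z is proportional to the horizontal
    offset of the blob centroid from the image centre, sign-flipped so the
    robot turns towards the line."""
    error_x = cx - image_width / 2.0
    angular_z = -kp * error_x
    return linear_speed, angular_z


# Centroid dead centre: drive straight
print(steering_from_centroid(320, 640))  # angular.z is 0, no turn
# Centroid 100 px to the right: turn clockwise (negative z in ROS)
print(steering_from_centroid(420, 640))  # (0.2, -1.0)
```

Because the gain is constant, the robot oscillates around the line at higher speeds; the chapter later extends this into a basic PID by adding integral and derivative terms on `error_x`.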