My plan is to capture camera images on the Raspberry Pi, send them to a laptop, process them there with TensorFlow, and then send control commands back to the Pi to drive the motors and the steering servo.

Setting up a two-machine ROS connection is fairly simple. First, configure passwordless SSH login (no username or password prompt) between the Raspberry Pi and the laptop in both directions (search online for the specifics). Once each machine can log in to the other without a password, run sudo vi ~/.bashrc on the Pi and append export ROS_HOSTNAME=<the Pi's IP> and export ROS_MASTER_URI=http://<the laptop's IP>:11311, then run source ~/.bashrc. On the laptop, run sudo vi ~/.bashrc and append export ROS_HOSTNAME=<the laptop's IP> and export ROS_MASTER_URI=http://<the laptop's IP>:11311. (I use the laptop as the ROS master; if you want the Pi to be the master instead, just change the ROS_MASTER_URI address.) Next, install OpenCV on both the Pi and the laptop, and install picamera on the Pi. Start roscore on the laptop, then test two-machine image transfer by running, on the Pi, the test script that follows the configuration sketch below.
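As a concrete reference, the appended ~/.bashrc lines would look roughly like this. The addresses 192.168.1.101 (laptop) and 192.168.1.102 (Pi) are placeholders; substitute your own.

# On the Raspberry Pi, appended to ~/.bashrc (placeholder IPs)
export ROS_HOSTNAME=192.168.1.102
export ROS_MASTER_URI=http://192.168.1.101:11311

# On the laptop, appended to ~/.bashrc
export ROS_HOSTNAME=192.168.1.101
export ROS_MASTER_URI=http://192.168.1.101:11311

With both machines pointing at the same ROS_MASTER_URI, the following test script can be run on the Pi: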
# -*- encoding:utf-8 -*-
# Publish Raspberry Pi camera frames as ROS sensor_msgs/Image messages.
from picamera import PiCamera
import time
import io
import cv2
import numpy as np
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

def talker():
    # Initialize the camera and allow it to warm up.
    camera = PiCamera()
    camera.resolution = (320, 240)
    time.sleep(1)

    # Set up the ROS publisher and node.
    image_pub = rospy.Publisher("image_topic_2", Image, queue_size=10)
    bridge = CvBridge()
    rospy.init_node('image_talker', anonymous=True)
    rate = rospy.Rate(10)  # 10 Hz

    # Capture JPEG frames continuously from the camera into an in-memory stream.
    stream = io.BytesIO()
    for _ in camera.capture_continuous(stream, format='jpeg', use_video_port=True):
        if rospy.is_shutdown():
            break
        # Decode the JPEG bytes into an OpenCV BGR image (Mat).
        data = np.frombuffer(stream.getvalue(), dtype=np.uint8)
        image = cv2.imdecode(data, cv2.IMREAD_COLOR)
        if image is not None:
            (rows, cols, channels) = image.shape
            if cols > 60 and rows > 60:
                # Draw a small marker circle so the received stream is easy to verify.
                cv2.circle(image, (50, 50), 10, 255)
            image_pub.publish(bridge.cv2_to_imgmsg(image, "bgr8"))
        rate.sleep()
        # Reset the stream for the next capture.
        stream.seek(0)
        stream.truncate()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
Save this as test.py in /catkin_ws/src/beginner_tutorials/src and make it executable with chmod +x test.py. Run it with rosrun beginner_tutorials test.py. On the laptop, run rqt_image_view to view the image stream.
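Beyond rqt_image_view, the laptop will eventually need its own node that receives the frames and feeds them to TensorFlow, as described at the top. A minimal laptop-side subscriber sketch might look like the following; the topic name image_topic_2 matches the publisher above, while the node name image_listener and the processing placeholder are my own assumptions, not part of the original setup.

# -*- encoding:utf-8 -*-
# Laptop-side sketch: subscribe to the Pi's image topic and hand each frame
# to a processing step (placeholder for the TensorFlow part).
import cv2
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def image_callback(msg):
    # Convert the ROS Image message back into an OpenCV BGR image.
    frame = bridge.imgmsg_to_cv2(msg, "bgr8")
    # Placeholder: TensorFlow inference and the motor/servo decision would go here.
    cv2.imshow("pi_camera", frame)
    cv2.waitKey(1)

def listener():
    rospy.init_node('image_listener', anonymous=True)
    rospy.Subscriber("image_topic_2", Image, image_callback)
    rospy.spin()

if __name__ == '__main__':
    listener()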