ROS Series (2): Extracting Video Data from a rosbag

1. Environment Setup

The environment here builds on the configuration from the previous article:

ROS Series (1): [Environment Setup] Installing the rosbag Package (CSDN blog)

Continue by installing ffmpeg:

sudo apt install ffmpeg

Then install the following Python packages:

pip install sensor_msgs --extra-index-url https://rospypi.github.io/simple/
pip install geometry_msgs --extra-index-url https://rospypi.github.io/simple/
pip install opencv-python
pip install roslz4 --extra-index-url https://rospypi.github.io/simple/
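
After the installation, a quick import check helps confirm that the environment is usable. The snippet below is only a minimal sanity check, under the assumption that the rosbag package from the previous article is already installed; it does nothing beyond importing the packages and printing version information.

# check_env.py -- verify that the packages used below can be imported
import cv2
import numpy as np
import rosbag
from sensor_msgs.msg import CompressedImage, Image

print("OpenCV:", cv2.__version__)
print("numpy:", np.__version__)
print("rosbag and sensor_msgs imported successfully")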

Once the environment is ready, use the following script (rosbag2video.py):

#!/usr/bin/env python3

"""
rosbag2video.py
rosbag to video file conversion tool
by Abel Gabor 2019
baquatelle@gmail.com
requirements:
sudo apt install python3-roslib python3-sensor-msgs python3-opencv ffmpeg
based on the tool by Maximilian Laiacker 2016
post@mlaiacker.de
"""

import roslib
#roslib.load_manifest('rosbag')
import rospy
import rosbag
import sys, getopt
import os
from sensor_msgs.msg import CompressedImage
from sensor_msgs.msg import Image
import cv2

import numpy as np

import shlex, subprocess

MJPEG_VIDEO = 1
RAWIMAGE_VIDEO = 2
VIDEO_CONVERTER_TO_USE = "ffmpeg"  # or you may want to use "avconv"


def print_help():
    print('rosbag2video.py [--fps 25] [--rate 1] [-o outputfile] [-v] [-s] [-t topic] bagfile1 [bagfile2] ...')
    print()
    print('Converts image sequence(s) in ros bag file(s) to video file(s) with fixed frame rate using', VIDEO_CONVERTER_TO_USE)
    print(VIDEO_CONVERTER_TO_USE, 'needs to be installed!')
    print()
    print('--fps   Sets FPS value that is passed to', VIDEO_CONVERTER_TO_USE)
    print('        Default is 25.')
    print('-h      Displays this help.')
    print('--ofile (-o) sets output file name.')
    print('        If no output file name (-o) is given the filename \'<prefix><topic>.mp4\' is used and default output codec is h264.')
    print('        Multiple image topics are supported only when -o option is _not_ used.')
    print('        ', VIDEO_CONVERTER_TO_USE, ' will guess the format according to given extension.')
    print('        Compressed and raw image messages are supported with mono8 and bgr8/rgb8/bggr8/rggb8 formats.')
    print('--rate  (-r) You may slow down or speed up the video.')
    print('        Default is 1.0, that keeps the original speed.')
    print('-s      Shows each and every image extracted from the rosbag file (cv_bridge is needed).')
    print('--topic (-t) Only the images from topic "topic" are used for the video output.')
    print('-v      Verbose messages are displayed.')
    print('--prefix (-p) set an output file name prefix otherwise \'bagfile1\' is used (if -o is not set).')
    print('--start Optional start time in seconds.')
    print('--end   Optional end time in seconds.')


class RosVideoWriter():
    def __init__(self, fps=25.0, rate=1.0, topic="", output_filename="", display=False, verbose=False, start=rospy.Time(0), end=rospy.Time(sys.maxsize)):
        self.opt_topic = topic
        self.opt_out_file = output_filename
        self.opt_verbose = verbose
        self.opt_display_images = display
        self.opt_start = start
        self.opt_end = end
        self.rate = rate
        self.fps = fps
        self.opt_prefix = None
        self.t_first = {}
        self.t_file = {}
        self.t_video = {}
        self.p_avconv = {}

    def parseArgs(self, args):
        opts, opt_files = getopt.getopt(args, "hsvr:o:t:p:", ["fps=", "rate=", "ofile=", "topic=", "start=", "end=", "prefix="])
        for opt, arg in opts:
            if opt == '-h':
                print_help()
                sys.exit(0)
            elif opt == '-s':
                self.opt_display_images = True
            elif opt == '-v':
                self.opt_verbose = True
            elif opt in ("--fps"):
                self.fps = float(arg)
            elif opt in ("-r", "--rate"):
                self.rate = float(arg)
            elif opt in ("-o", "--ofile"):
                self.opt_out_file = arg
            elif opt in ("-t", "--topic"):
                self.opt_topic = arg
            elif opt in ("-p", "--prefix"):
                self.opt_prefix = arg
            elif opt in ("--start"):
                self.opt_start = rospy.Time(int(arg))
                if(self.opt_verbose):
                    print("starting at", self.opt_start.to_sec())
            elif opt in ("--end"):
                self.opt_end = rospy.Time(int(arg))
                if(self.opt_verbose):
                    print("ending at", self.opt_end.to_sec())
            else:
                print("opz:", opt, 'arg:', arg)

        if (self.fps <= 0):
            print("invalid fps", self.fps)
            self.fps = 1

        if (self.rate <= 0):
            print("invalid rate", self.rate)
            self.rate = 1

        if(self.opt_verbose):
            print("using ", self.fps, " FPS")
        return opt_files

    # filter messages using type or only the topic we want from the 'topic' argument
    def filter_image_msgs(self, topic, datatype, md5sum, msg_def, header):
        if(datatype == "sensor_msgs/CompressedImage"):
            if (self.opt_topic != "" and self.opt_topic == topic) or self.opt_topic == "":
                print("############# COMPRESSED IMAGE  ######################")
                print(topic, ' with datatype:', str(datatype))
                print()
                return True

        if(datatype == "theora_image_transport/Packet"):
            if (self.opt_topic != "" and self.opt_topic == topic) or self.opt_topic == "":
                print(topic, ' with datatype:', str(datatype))
                print('!!! theora is not supported, sorry !!!')
                return False

        if(datatype == "sensor_msgs/Image"):
            if (self.opt_topic != "" and self.opt_topic == topic) or self.opt_topic == "":
                print("############# UNCOMPRESSED IMAGE ######################")
                print(topic, ' with datatype:', str(datatype))
                print()
                return True

        return False

    def write_output_video(self, msg, topic, t, video_fmt, pix_fmt=""):
        # no data in this topic
        if len(msg.data) == 0:
            return

        # initiate data for this topic
        if not topic in self.t_first:
            self.t_first[topic] = t  # timestamp of first image for this topic
            self.t_video[topic] = 0
            self.t_file[topic] = 0

        # if multiple streams of images start at different times the resulting video files will not be in sync
        # current offset time we are in the bag file
        self.t_file[topic] = (t - self.t_first[topic]).to_sec()

        # fill video file up with images until we reach the current offset from the beginning of the bag file
        while self.t_video[topic] < self.t_file[topic] / self.rate:
            if not topic in self.p_avconv:
                # we have to start a new process for this topic
                if self.opt_verbose:
                    print("Initializing pipe for topic", topic, "at time", t.to_sec())
                if self.opt_out_file == "":
                    out_file = self.opt_prefix + str(topic).replace("/", "_") + ".mp4"
                else:
                    out_file = self.opt_out_file

                if self.opt_verbose:
                    print("Using output file ", out_file, " for topic ", topic, ".")

                if video_fmt == MJPEG_VIDEO:
                    cmd = [VIDEO_CONVERTER_TO_USE, '-v', '1', '-stats', '-r', str(self.fps), '-c', 'mjpeg', '-f', 'mjpeg', '-i', '-', '-an', out_file]
                    self.p_avconv[topic] = subprocess.Popen(cmd, stdin=subprocess.PIPE)
                    if self.opt_verbose:
                        print("Using command line:")
                        print(cmd)
                elif video_fmt == RAWIMAGE_VIDEO:
                    size = str(msg.width) + "x" + str(msg.height)
                    cmd = [VIDEO_CONVERTER_TO_USE, '-v', '1', '-stats', '-r', str(self.fps), '-f', 'rawvideo', '-s', size, '-pix_fmt', pix_fmt, '-i', '-', '-an', out_file]
                    self.p_avconv[topic] = subprocess.Popen(cmd, stdin=subprocess.PIPE)
                    if self.opt_verbose:
                        print("Using command line:")
                        print(cmd)
                else:
                    print("Script error, unknown value for argument video_fmt in function write_output_video.")
                    exit(1)

            # send data to ffmpeg process pipe
            self.p_avconv[topic].stdin.write(msg.data)
            # next frame time
            self.t_video[topic] += 1.0 / self.fps

    def addBag(self, filename):
        if self.opt_display_images:
            from cv_bridge import CvBridge, CvBridgeError
            bridge = CvBridge()
            cv_image = []

        if self.opt_verbose:
            print("Bagfile: {}".format(filename))

        if not self.opt_prefix:
            # create the output in the same folder and name as the bag file minus '.bag'
            self.opt_prefix = filename[:-4]

        # go through the bag file
        bag = rosbag.Bag(filename)
        if self.opt_verbose:
            print("Bag opened.")

        # loop over all topics
        for topic, msg, t in bag.read_messages(connection_filter=self.filter_image_msgs, start_time=self.opt_start, end_time=self.opt_end):
            try:
                if msg.format.find("jpeg") != -1:
                    if msg.format.find("8") != -1 and (msg.format.find("rgb") != -1 or msg.format.find("bgr") != -1 or msg.format.find("bgra") != -1):
                        if self.opt_display_images:
                            np_arr = np.frombuffer(msg.data, np.uint8)
                            cv_image = cv2.imdecode(np_arr, cv2.IMREAD_COLOR)
                        self.write_output_video(msg, topic, t, MJPEG_VIDEO)
                    elif msg.format.find("mono8") != -1:
                        if self.opt_display_images:
                            np_arr = np.frombuffer(msg.data, np.uint8)
                            cv_image = cv2.imdecode(np_arr, cv2.IMREAD_COLOR)
                        self.write_output_video(msg, topic, t, MJPEG_VIDEO)
                    elif msg.format.find("16UC1") != -1:
                        if self.opt_display_images:
                            np_arr = np.frombuffer(msg.data, np.uint16)
                            cv_image = cv2.imdecode(np_arr, cv2.IMREAD_COLOR)
                        self.write_output_video(msg, topic, t, MJPEG_VIDEO)
                    else:
                        print('unsupported jpeg format:', msg.format, '.', topic)

            # has no attribute 'format'
            except AttributeError:
                try:
                    pix_fmt = None
                    if msg.encoding.find("mono8") != -1 or msg.encoding.find("8UC1") != -1:
                        pix_fmt = "gray"
                        if self.opt_display_images:
                            cv_image = bridge.imgmsg_to_cv2(msg, "bgr8")
                    elif msg.encoding.find("bgra") != -1:
                        pix_fmt = "bgra"
                        if self.opt_display_images:
                            cv_image = bridge.imgmsg_to_cv2(msg, "bgr8")
                    elif msg.encoding.find("bgr8") != -1:
                        pix_fmt = "bgr24"
                        if self.opt_display_images:
                            cv_image = bridge.imgmsg_to_cv2(msg, "bgr8")
                    elif msg.encoding.find("bggr8") != -1:
                        pix_fmt = "bayer_bggr8"
                        if self.opt_display_images:
                            cv_image = bridge.imgmsg_to_cv2(msg, "bayer_bggr8")
                    elif msg.encoding.find("rggb8") != -1:
                        pix_fmt = "bayer_rggb8"
                        if self.opt_display_images:
                            cv_image = bridge.imgmsg_to_cv2(msg, "bayer_rggb8")
                    elif msg.encoding.find("rgb8") != -1:
                        pix_fmt = "rgb24"
                        if self.opt_display_images:
                            cv_image = bridge.imgmsg_to_cv2(msg, "bgr8")
                    elif msg.encoding.find("16UC1") != -1:
                        pix_fmt = "gray16le"
                    else:
                        print('unsupported encoding:', msg.encoding, topic)
                        #exit(1)

                    if pix_fmt:
                        self.write_output_video(msg, topic, t, RAWIMAGE_VIDEO, pix_fmt)

                except AttributeError:
                    # maybe theora packet
                    # theora not supported
                    if self.opt_verbose:
                        print("Could not handle this format. Maybe theora packet? theora is not supported.")
                    pass

            if self.opt_display_images:
                cv2.imshow(topic, cv_image)
                key = cv2.waitKey(1)
                if key == 1048603:
                    exit(1)

        if self.p_avconv == {}:
            print("No image topics found in bag:", filename)

        bag.close()


if __name__ == '__main__':
    #print()
    #print('rosbag2video, by Maximilian Laiacker 2020 and Abel Gabor 2019')
    #print()

    if len(sys.argv) < 2:
        print('Please specify ros bag file(s)!')
        print_help()
        sys.exit(1)
    else:
        videowriter = RosVideoWriter()
        try:
            opt_files = videowriter.parseArgs(sys.argv[1:])
        except getopt.GetoptError:
            print_help()
            sys.exit(2)

    # loop over all files
    for files in range(0, len(opt_files)):
        # first arg is the bag to look at
        bagfile = opt_files[files]
        videowriter.addBag(bagfile)
    print("finished")
Usage:

python3 rosbag2video.py XXX.bag

Options:

[--fps]: frame rate passed to ffmpeg; the default is 25.
[-h]: display the help message.
[--ofile (-o)]: set the output file name.
[--rate (-r)]: slow down or speed up the video; the default is 1.0, which keeps the original speed.
[-s]: display every image extracted from the rosbag file (requires cv_bridge).
[--topic (-t)]: only images from the given topic are used for the video output.
[-v]: display verbose messages.
[--prefix (-p)]: set an output file name prefix; otherwise the bag file name is used (when -o is not set).
[--start]: optional start time, in seconds.
[--end]: optional end time, in seconds.
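
For example (the topic name and file names below are placeholders for illustration):

python3 rosbag2video.py --fps 30 -t /camera/color/image_raw -o camera.mp4 record.bag
python3 rosbag2video.py -v --start 10 --end 60 record.bag

The first call writes only the chosen topic to camera.mp4 at 30 FPS; the second converts every supported image topic in record.bag between 10 s and 60 s, with verbose output.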

Execution result: one video file is written per image topic, named '<prefix><topic>.mp4' by default.
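
For reference, if you only need frames from a single compressed image topic, the same idea can be sketched directly with OpenCV's VideoWriter instead of piping into ffmpeg. This is a minimal illustration, not part of rosbag2video.py; the bag name input.bag, the topic /camera/image_raw/compressed, and the output name out.mp4 are placeholders, and unlike the script above it ignores message timestamps and writes frames at a fixed rate.

# extract_simple.py -- minimal sketch: decode CompressedImage messages from one topic
# and write them to a video file with OpenCV (placeholder names, fixed frame rate)
import cv2
import numpy as np
import rosbag

BAG_FILE = "input.bag"                    # placeholder bag file
TOPIC = "/camera/image_raw/compressed"    # placeholder compressed image topic
OUT_FILE = "out.mp4"                      # placeholder output file
FPS = 25.0

writer = None
with rosbag.Bag(BAG_FILE) as bag:
    for _, msg, _ in bag.read_messages(topics=[TOPIC]):
        # decode the JPEG/PNG payload of the CompressedImage message
        frame = cv2.imdecode(np.frombuffer(msg.data, np.uint8), cv2.IMREAD_COLOR)
        if frame is None:
            continue
        if writer is None:
            # create the writer once the first frame's size is known
            h, w = frame.shape[:2]
            writer = cv2.VideoWriter(OUT_FILE, cv2.VideoWriter_fourcc(*"mp4v"), FPS, (w, h))
        writer.write(frame)

if writer is not None:
    writer.release()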
