Table of Contents
Project Concept
Project Reflections
APIs Used
Language and Libraries
Project Directory
File Structure
Code Listings
main: 2.Speech_Recognition.py
1. Sound_Recording.py
3.tuling.py
4.Speech_Synthesis.py
Problem Summary
1. Playing audio with the playsound library does not release the file afterwards, which produces the following error:
PermissionError: [Errno 13] Permission denied: "1.MP3"
For a fix, see: https://blog.csdn.net/liang4000/article/details/96766845
2. Read each API's documentation carefully
3. How to exit the infinite loop: add an if check for whether the recognized speech contains "退出"
Project Video Demo
-
Project Concept
Record a short clip of audio and recognize it as text, send the text to the Turing robot to get a reply, then synthesize the reply into an audio file and play it.
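As a rough sketch (not the actual implementation), the flow looks like this. The recognize() stub below is hypothetical and stands in for the iFlytek WebSocket code shown in 2.Speech_Recognition.py; the other helpers come from the listings further down.

# Outline of the main loop; recognize() is a placeholder stub.
from playsound import playsound
from Sound_Recording import audio_record
from tuling import tuling
from Speech_Synthesis import getBaiduVoice

def recognize(wav_path):
    """Placeholder for the iFlytek streaming dictation call."""
    raise NotImplementedError

while True:
    audio_record("yinping.wav", 5)   # 1. record 5 seconds of speech
    text = recognize("yinping.wav")  # 2. speech -> text (iFlytek)
    if "退出" in text:               # saying the exit keyword ends the loop
        break
    reply = tuling(text)             # 3. text -> chatbot reply (Turing robot)
    getBaiduVoice(reply)             # 4. reply -> 1.mp3 (Baidu TTS)
    playsound("1.mp3")               # play the synthesized reply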
-
Project Reflections
iFlytek's recognition accuracy is decent, but the project's fault tolerance is low, and it basically just calls APIs without going any deeper into how speech recognition is actually implemented. Even so, it was quite rewarding: I learned how to solve the many different problems that came up while building it. As long as you dare to think and dare to try, you will gain something.
-
APIs Used
iFlytek speech recognition, Baidu speech synthesis, and the Turing robot (图灵机器人) chatbot.
-
Language and Libraries
Python + playsound + pyaudio + wave + os + Baidu API + iFlytek API + Turing robot API.
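Assuming a standard Python 3 setup, the third-party packages can be installed with pip roughly as follows (these are the usual PyPI names; the Baidu SDK is published as baidu-aip, while wave and os ship with Python):

pip install playsound pyaudio websocket-client baidu-aip requests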
-
Project Directory
-
File Structure
Chinese naming: speech recognition is the main module ----->> call order (1. audio recording, 2. speech recognition, 3. Turing robot, 4. speech synthesis)
English naming: Speech_Recognition.py ----->> (1. Sound_Recording.py, 2. Speech_Recognition.py, 3. tuling.py, 4. Speech_Synthesis.py)
-
Code Listings
main: 2.Speech_Recognition.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 2019/12/27 16:10
# @Author : Cxk
# @File : Speech_Recognition.py

# -*- coding:utf-8 -*-
#
# author: iflytek
#
# Environment this demo was tested on: Windows + Python 3.7
# Third-party libraries (and versions) installed when the demo ran successfully; install them one by one, or copy them into a txt file and install them all at once with pip:
# cffi==1.12.3
# gevent==1.4.0
# greenlet==0.4.15
# pycparser==2.19
# six==1.12.0
# websocket==0.2.1
# websocket-client==0.56.0
#
# Example call to the streaming speech dictation WebAPI. API docs (must read): https://doc.xfyun.cn/rest_api/语音听写(流式版).html
# Reference thread for the WebAPI dictation service (must read): http://bbs.xfyun.cn/forum.php?mod=viewthread&tid=38947&extra=
# Hot words for the streaming dictation WebAPI: log in to the open platform https://www.xfyun.cn/, then go to Console -> My Apps -> Speech Dictation (Streaming) -> Service Management -> Personalized Hot Words
# to configure hot words.
# Note: hot words only increase the recognition weight of those entries during recognition; they raise, but do not guarantee, the hit rate for those entries. Verify the actual effect with your own tests.
# Dialect trial for the streaming dictation WebAPI: log in to the open platform https://www.xfyun.cn/, then go to Console -> My Apps -> Speech Dictation (Streaming) -> Service Management -> Recognition Language List
# to add a language or dialect; once added, the parameter value for that dialect is displayed.
# Error code reference: https://www.xfyun.cn/document/error-code (must read when the returned code indicates an error)
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
import websocket
import datetime
import hashlib
import base64
import hmac
import json
from urllib.parse import urlencode
import time
import ssl
from wsgiref.handlers import format_date_time
from datetime import datetime
from time import mktime
import _thread as thread
from Speech_Synthesis import *
from tuling import *
from Sound_Recording import *
from playsound import playsound
import os

STATUS_FIRST_FRAME = 0  # identifies the first frame
STATUS_CONTINUE_FRAME = 1  # identifies intermediate frames
STATUS_LAST_FRAME = 2  # identifies the last frame


class Ws_Param(object):
    # initialization
    def __init__(self, APPID, APIKey, APISecret, AudioFile):
        self.APPID = APPID
        self.APIKey = APIKey
        self.APISecret = APISecret
        self.AudioFile = AudioFile

        # common parameters (common)
        self.CommonArgs = {"app_id": self.APPID}
        # business parameters (business); more options are documented on the official site
        self.BusinessArgs = {"domain": "iat", "language": "zh_cn", "accent": "mandarin", "vinfo": 1, "vad_eos": 10000}

    # build the request url
    def create_url(self):
        url = 'wss://ws-api.xfyun.cn/v2/iat'
        # generate an RFC1123-format timestamp
        now = datetime.now()
        date = format_date_time(mktime(now.timetuple()))

        # assemble the string to be signed
        signature_origin = "host: " + "ws-api.xfyun.cn" + "\n"
        signature_origin += "date: " + date + "\n"
        signature_origin += "GET " + "/v2/iat " + "HTTP/1.1"
        # sign it with hmac-sha256
        signature_sha = hmac.new(self.APISecret.encode('utf-8'), signature_origin.encode('utf-8'),
                                 digestmod=hashlib.sha256).digest()
        signature_sha = base64.b64encode(signature_sha).decode(encoding='utf-8')

        authorization_origin = "api_key=\"%s\", algorithm=\"%s\", headers=\"%s\", signature=\"%s\"" % (
            self.APIKey, "hmac-sha256", "host date request-line", signature_sha)
        authorization = base64.b64encode(authorization_origin.encode('utf-8')).decode(encoding='utf-8')
        # collect the authentication parameters into a dict
        v = {
            "authorization": authorization,
            "date": date,
            "host": "ws-api.xfyun.cn"
        }
        # append the authentication parameters to obtain the final url
        url = url + '?' + urlencode(v)
        # print("date: ", date)
        # print("v: ", v)
        # The url used to establish the connection can be printed here; when following this demo,
        # uncomment the prints above and check that the url generated with the same parameters matches yours.
        # print('websocket url :', url)
        return url


# handler for incoming websocket messages
def on_message(ws, message):
    global result
    try:
        code = json.loads(message)["code"]
        sid = json.loads(message)["sid"]
        if code != 0:
            errMsg = json.loads(message)["message"]
            print("sid:%s call error:%s code is:%s" % (sid, errMsg, code))
        else:
            data = json.loads(message)["data"]["result"]["ws"]
            for i in data:
                for w in i["cw"]:
                    result += w["w"]
            # print("sid:%s call success!,data is:%s" % (sid, json.dumps(data, ensure_ascii=False)))
    except Exception as e:
        print("receive msg,but parse exception:", e)
        return '识别出错!'


# handler for websocket errors
def on_error(ws, error):
    print("### error:", error)


# handler for websocket close
def on_close(ws):
    print("### closed ###")


# handler for when the websocket connection is established
def on_open(ws):
    def run(*args):
        frameSize = 8000  # size of each audio frame
        intervel = 0.04  # interval between sends (seconds)
        status = STATUS_FIRST_FRAME  # frame status: first frame, intermediate frame, or last frame

        with open(wsParam.AudioFile, "rb") as fp:
            while True:
                buf = fp.read(frameSize)
                # end of file
                if not buf:
                    status = STATUS_LAST_FRAME
                # first frame:
                # send the first audio frame together with the business parameters;
                # the appid is required and only needs to be sent with the first frame
                if status == STATUS_FIRST_FRAME:
                    d = {"common": wsParam.CommonArgs,
                         "business": wsParam.BusinessArgs,
                         "data": {"status": 0, "format": "audio/L16;rate=16000",
                                  "audio": str(base64.b64encode(buf), 'utf-8'),
                                  "encoding": "raw"}}
                    d = json.dumps(d)
                    ws.send(d)
                    status = STATUS_CONTINUE_FRAME
                # intermediate frames
                elif status == STATUS_CONTINUE_FRAME:
                    d = {"data": {"status": 1, "format": "audio/L16;rate=16000",
                                  "audio": str(base64.b64encode(buf), 'utf-8'),
                                  "encoding": "raw"}}
                    ws.send(json.dumps(d))
                # last frame
                elif status == STATUS_LAST_FRAME:
                    d = {"data": {"status": 2, "format": "audio/L16;rate=16000",
                                  "audio": str(base64.b64encode(buf), 'utf-8'),
                                  "encoding": "raw"}}
                    ws.send(json.dumps(d))
                    time.sleep(1)
                    break
                # simulate the audio sampling interval
                time.sleep(intervel)
        ws.close()

    thread.start_new_thread(run, ())


def play(file):
    playsound("%s" % file)


if __name__ == "__main__":
    while True:
        """
        Recording
        argument 1: audio file name
        argument 2: recording duration in seconds
        """
        audio_record("yinping.wav", 5)
        """
        iFlytek speech recognition
        APPID=ID, APIKey=KEY, APISecret=Secret, AudioFile=audio file
        global variable result: the concatenated recognition result
        """
        global result
        result = ''
        time1 = datetime.now()
        wsParam = Ws_Param(APPID='申请的讯飞ID', APIKey='申请的讯飞KEY',
                           APISecret='申请的讯飞Secret',
                           AudioFile=r'yinping.wav')
        websocket.enableTrace(False)
        wsUrl = wsParam.create_url()
        ws = websocket.WebSocketApp(wsUrl, on_message=on_message, on_error=on_error, on_close=on_close)
        ws.on_open = on_open
        ws.run_forever(sslopt={"cert_reqs": ssl.CERT_NONE})
        time2 = datetime.now()
        print("录音音频识别结果:" + result)
        if "退出" in result:
            """
            Say the keyword 退出 (exit) to break out of the loop
            """
            print("程序已退出!!")
            play("2.mp3")
            break
        else:
            """
            Turing robot reply: tuling(arg), arg = string returned by iFlytek recognition
            Baidu speech synthesis: getBaiduVoice(arg), arg = string returned by the Turing robot
            Result: synthesized audio file 1.mp3
            """
            strss = tuling(result)
            getBaiduVoice(strss)
            """
            Play the synthesized reply 1.mp3
            """
            play("1.mp3")
            print("-------------------")
            continue
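For reference, the nested loops in on_message walk the data.result.ws field of each dictation message. Its shape is roughly as in the example below (the values are invented for illustration); concatenating the "w" fields of this example would yield 今天天气.

# Illustrative shape of one dictation message (values invented).
example_message = {
    "code": 0,
    "sid": "iat000example",
    "data": {
        "result": {
            "ws": [
                {"cw": [{"w": "今天"}]},
                {"cw": [{"w": "天气"}]},
            ]
        },
        "status": 2,
    },
}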
1. Sound_Recording.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 2019/12/27 18:18
# @Author : Cxk
# @File : Sound_Recording.py

import pyaudio
import os
import wave
# Record audio with the PyAudio library
# out_file: output audio file name
# rec_time: recording duration (seconds)
def audio_record(out_file, rec_time):
    CHUNK = 1024
    FORMAT = pyaudio.paInt16  # 16-bit samples
    CHANNELS = 1  # mono
    RATE = 16000  # 16000 Hz sampling rate
    p = pyaudio.PyAudio()
    # create the audio stream
    stream = p.open(format=FORMAT,      # sample format
                    channels=CHANNELS,  # mono
                    rate=RATE,          # 16000 Hz sampling rate
                    input=True,
                    frames_per_buffer=CHUNK)
    print("开始录音...")
    frames = []  # recorded audio chunks
    # read the audio data
    for i in range(0, int(RATE / CHUNK * rec_time)):
        data = stream.read(CHUNK)
        frames.append(data)
    # recording finished
    stream.stop_stream()
    stream.close()
    p.terminate()
    print("录音完毕...")
    # save the audio file
    wf = wave.open(out_file, 'wb')
    wf.setnchannels(CHANNELS)
    wf.setsampwidth(p.get_sample_size(FORMAT))
    wf.setframerate(RATE)
    wf.writeframes(b''.join(frames))
    wf.close()

# audio_record("yinping.wav", 5)
3.tuling.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 2019/12/27 17:50
# @Author : Cxk
# @File : tuling.py

import requests
import json
def tuling(info):
    appkey = "申请的图灵机器人KEY"
    url = "http://www.tuling123.com/openapi/api?key=%s&info=%s" % (appkey, info)
    req = requests.get(url)
    content = req.text
    data = json.loads(content)
    answer = data['text']
    print("图灵机器人回复:" + answer)
    return answer
4.Speech_Synthesis.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 2019/12/27 19:38
# @Author : Cxk
# @File : Speech_Synthesis.py

from aip import AipSpeech
# import random


def getBaiduVoice(text):
    """ Your APP_ID, API_KEY and SECRET_KEY """
    APP_ID = '申请的百度ID'
    API_KEY = '申请的百度KEY'
    SECRET_KEY = '申请的百度Secret'
    client = AipSpeech(APP_ID, API_KEY, SECRET_KEY)
    result = client.synthesis(text=text, options={'vol': 5, 'per': 4})
    # synthesis returns audio bytes on success and a dict on error
    if not isinstance(result, dict):
        # i = random.randint(1, 10)
        with open('1.mp3', 'wb') as f:
            f.write(result)
        # return i
    else:
        print(result)
-
Problem Summary
1. Playing audio with the playsound library does not release the file afterwards, which produces the following error:
PermissionError: [Errno 13] Permission denied: "1.MP3"
For a fix, see: https://blog.csdn.net/liang4000/article/details/96766845 (a rough workaround sketch also follows at the end of this list).
2. Read each API's documentation carefully.
3. How to exit the infinite loop: add an if check for whether the recognized text contains the keyword "退出" (exit):
if "退出" in result:
    """
    Say the keyword 退出 to break out of the loop
    """
    print("程序已退出!!")
    play("2.mp3")
    break
-
Project Video Demo
Python "Artificial Unintelligence" Chatbot (Python人工智障聊天机器人)