Table of Contents
- Expression Recognition
- I. Principle
- II. Code Implementation
- 1. Camera capture and preprocessing
- 2. Measuring mouth changes
- 3. Drawing the lip contour
- 4. Displaying the result
- 5. Full code
- Summary
Expression Recognition
Goal: recognize whether a person is happy (smiling or laughing).
I. Principle
After running facial landmark detection on a face image, we obtain the position of every landmark point.
Take the lips as an example: when a person laughs, the mouth opens wider, so the distance between the upper-lip landmarks and the lower-lip landmarks increases.
We can therefore make a simple judgement about the person's mood from how much these distances change.
Lip landmarks: in dlib's 68-point model the mouth corresponds to points 48–67 (outer lip 48–59, inner lip 60–67), and points 0–16 run along the jaw line.
When a person smiles, the corners of the mouth rise, so the ratio of the mouth width to the width of the face (the jaw) becomes larger.
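Written out explicitly, with p_i denoting landmark i (the same 0-based indices used in the code below), the two ratios measured in this post are:

MAR = ( ||p50 − p58|| + ||p51 − p57|| + ||p52 − p56|| ) / ( 3 · ||p48 − p54|| )
MJR = ||p48 − p54|| / ||p3 − p13||

where ||·|| is the Euclidean distance, p48 and p54 are the mouth corners, and p3 and p13 lie on the jaw line. A larger MAR means the mouth is open wider (laughing); a larger MJR means the mouth is wide relative to the face (smiling).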
II. Code Implementation
1. Camera capture and preprocessing
- Build the HOG-based face detector ----> detect faces
- Load the model with dlib.shape_predictor (the landmark predictor) ----> get the 68 landmarks
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()          # ret is True when a frame was read successfully
    if not ret:
        print("Cannot read from the camera")
        break
    faces = detector(frame, 0)       # detect faces (0 = no upsampling)
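If a webcam is not available, the same detector/predictor pair can first be tested on a single still image. A minimal sketch, where "test.jpg" is a placeholder path:

import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("test.jpg")         # placeholder image path
faces = detector(img, 0)             # list of dlib rectangles
for face in faces:
    shape = predictor(img, face)     # 68 landmark points for this face
    print(face.left(), face.top(), face.right(), face.bottom(), len(shape.parts()))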
2. Measuring mouth changes
- Mouth aspect ratio (MAR):
euclidean_distances() from sklearn.metrics.pairwise computes the Euclidean distance between two points, i.e. the straight-line distance between them.
from sklearn.metrics.pairwise import euclidean_distances
def MAR(shape):
    """Mouth aspect ratio: mean of three vertical lip distances over the mouth width."""
    A = euclidean_distances(shape[50].reshape(1, 2), shape[58].reshape(1, 2))
    B = euclidean_distances(shape[51].reshape(1, 2), shape[57].reshape(1, 2))
    C = euclidean_distances(shape[52].reshape(1, 2), shape[56].reshape(1, 2))
    D = euclidean_distances(shape[48].reshape(1, 2), shape[54].reshape(1, 2))  # mouth width
    return ((A + B + C) / 3) / D
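euclidean_distances() expects 2-D inputs of shape (n_samples, n_features), which is why each landmark is reshaped to (1, 2); the return value is a 1x1 array rather than a plain float (a single-element array still compares cleanly against the thresholds used later). A quick sanity check with made-up points:

import numpy as np
from sklearn.metrics.pairwise import euclidean_distances

p1 = np.array([0, 0]).reshape(1, 2)
p2 = np.array([3, 4]).reshape(1, 2)
d = euclidean_distances(p1, p2)
print(d.shape, d.item())    # (1, 1) 5.0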
- Ratio of mouth width to jaw width:

def MJR(shape):
    """Ratio of the mouth width to the width of the jaw."""
    M = euclidean_distances(shape[48].reshape(1, 2), shape[54].reshape(1, 2))  # mouth width (corner to corner)
    J = euclidean_distances(shape[3].reshape(1, 2), shape[13].reshape(1, 2))   # jaw width
    return M / J
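To get a feel for the numbers, here is a synthetic check using the MJR function above with made-up coordinates (only the four landmarks MJR reads are filled in): a 60-pixel-wide mouth on a 120-pixel-wide jaw gives MJR = 0.5, above the 0.45 smile threshold used below.

import numpy as np

shape = np.zeros((68, 2))      # dummy landmark array; real values come from the predictor
shape[48] = [100, 200]         # left mouth corner
shape[54] = [160, 200]         # right mouth corner -> mouth width 60 px
shape[3]  = [ 70, 180]         # left jaw point
shape[13] = [190, 180]         # right jaw point    -> jaw width 120 px
print(MJR(shape))              # [[0.5]]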
- Per-face classification:

for face in faces:
    shape = predictor(frame, face)                             # 68 landmarks for this face
    shape = np.array([[p.x, p.y] for p in shape.parts()])      # convert to a (68, 2) NumPy array
    mar = MAR(shape)                                           # mouth aspect ratio
    mjr = MJR(shape)                                           # mouth width / jaw width
    result = "正常"                                            # "normal"
    print("mar", mar, "\tmjr", mjr)
    if mar > 0.5:
        result = "大笑"                                        # "laughing"
    elif mjr > 0.45:
        result = "微笑"                                        # "smiling"
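The thresholds 0.5 and 0.45 are empirical and may need adjusting for your face, camera and distance. One simple way to tune them (my own addition, not part of the original code) is to overlay the raw values on the frame inside the for loop and watch them change while smiling:

cv2.putText(frame, "MAR: %.2f  MJR: %.2f" % (mar.item(), mjr.item()),
            (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)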
3. Drawing the lip contour
Draw the lip contour and overlay the detected expression on the image:

def cv2ADDChineseText(img, text, position, textColor=(0, 255, 0), textSize=30):
    """Draw Chinese text on an image with PIL, since cv2.putText cannot render Chinese."""
    if isinstance(img, np.ndarray):
        img = Image.fromarray(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))   # OpenCV BGR -> PIL RGB
    draw = ImageDraw.Draw(img)
    fontStyle = ImageFont.truetype("simfang.ttf", textSize, encoding="utf-8")
    draw.text(position, text, textColor, font=fontStyle)
    return cv2.cvtColor(np.asarray(img), cv2.COLOR_RGB2BGR)           # PIL RGB -> OpenCV BGR

mouthHull = cv2.convexHull(shape[48:61])                       # convex hull of the mouth landmarks
frame = cv2ADDChineseText(frame, result, mouthHull[0, 0])      # label each detected face near its mouth
cv2.drawContours(frame, [mouthHull], -1, (0, 255, 0), 1)       # draw the mouth outline in green
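cv2ADDChineseText needs the simfang.ttf font file next to the script (any other Chinese TrueType font path also works); without it, ImageFont.truetype raises an error. If no Chinese font is available, a plain cv2.putText fallback with English labels (my own substitution, not in the original) does the job:

x, y = mouthHull[0, 0]          # convexHull returns an (N, 1, 2) array, so [0, 0] is one (x, y) point
label = {"正常": "normal", "大笑": "laughing", "微笑": "smiling"}[result]
cv2.putText(frame, label, (int(x), int(y)), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)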
4. Displaying the result

cv2.imshow("Frame", frame)
if cv2.waitKey(1) == 27:    # 27 = Esc key: press Esc to exit
    break
cv2.destroyAllWindows()
cap.release()
5. Full code
import numpy as np
import dlib
import cv2
from sklearn.metrics.pairwise import euclidean_distances
from PIL import Image, ImageDraw, ImageFont


def MAR(shape):
    """Mouth aspect ratio: mean of three vertical lip distances over the mouth width."""
    A = euclidean_distances(shape[50].reshape(1, 2), shape[58].reshape(1, 2))
    B = euclidean_distances(shape[51].reshape(1, 2), shape[57].reshape(1, 2))
    C = euclidean_distances(shape[52].reshape(1, 2), shape[56].reshape(1, 2))
    D = euclidean_distances(shape[48].reshape(1, 2), shape[54].reshape(1, 2))  # mouth width
    return ((A + B + C) / 3) / D


def MJR(shape):
    """Ratio of the mouth width to the width of the jaw."""
    M = euclidean_distances(shape[48].reshape(1, 2), shape[54].reshape(1, 2))  # mouth width
    J = euclidean_distances(shape[3].reshape(1, 2), shape[13].reshape(1, 2))   # jaw width
    return M / J


def cv2ADDChineseText(img, text, position, textColor=(0, 255, 0), textSize=30):
    """Draw Chinese text on an image with PIL, since cv2.putText cannot render Chinese."""
    if isinstance(img, np.ndarray):
        img = Image.fromarray(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))   # OpenCV BGR -> PIL RGB
    draw = ImageDraw.Draw(img)
    fontStyle = ImageFont.truetype("simfang.ttf", textSize, encoding="utf-8")
    draw.text(position, text, textColor, font=fontStyle)
    return cv2.cvtColor(np.asarray(img), cv2.COLOR_RGB2BGR)           # PIL RGB -> OpenCV BGR


detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()                                        # ret is True when a frame was read successfully
    if not ret:
        print("Cannot read from the camera")
        break
    faces = detector(frame, 0)                                     # detect faces
    for face in faces:
        shape = predictor(frame, face)                             # 68 landmarks for this face
        shape = np.array([[p.x, p.y] for p in shape.parts()])      # convert to a (68, 2) NumPy array
        mar = MAR(shape)                                           # mouth aspect ratio
        mjr = MJR(shape)                                           # mouth width / jaw width
        result = "正常"                                            # "normal"
        print("mar", mar, "\tmjr", mjr)
        if mar > 0.5:
            result = "大笑"                                        # "laughing"
        elif mjr > 0.45:
            result = "微笑"                                        # "smiling"
        mouthHull = cv2.convexHull(shape[48:61])                   # convex hull of the mouth landmarks
        frame = cv2ADDChineseText(frame, result, mouthHull[0, 0])  # label each detected face
        cv2.drawContours(frame, [mouthHull], -1, (0, 255, 0), 1)   # draw the mouth outline
    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) == 27:                                       # press Esc to exit
        break
cv2.destroyAllWindows()
cap.release()
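To run the full script, the following packages are assumed to be installed (dlib may additionally need CMake and a C++ compiler to build), together with the shape_predictor_68_face_landmarks.dat model from the dlib website placed next to the script:

pip install numpy opencv-python dlib scikit-learn pillow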
Summary
This post showed how to judge changes in facial expression by measuring how the distances between facial landmarks change.