Overview
This tutorial documents the complete process of installing Miniconda on the Arduino UNO Q, configuring a Python environment, installing the MediaPipe library, and writing a simple face detection program. It pays particular attention to ARM architecture compatibility issues and their solutions.
Hardware
Hardware: Arduino UNO Q
Step 1: Install Miniconda
1.1 Download the correct Miniconda build
The Arduino UNO Q uses an ARM (aarch64) processor, so the standard x86_64 installer will not work.
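The architecture can be confirmed from a terminal before downloading anything; a minimal check (the suggested filenames simply mirror Miniconda's naming convention) looks like this:

```shell
#!/bin/sh
# Print the CPU architecture and suggest the matching Miniconda installer.
ARCH=$(uname -m)
echo "Detected architecture: $ARCH"
case "$ARCH" in
  aarch64) echo "Download: Miniconda3-latest-Linux-aarch64.sh" ;;
  x86_64)  echo "Download: Miniconda3-latest-Linux-x86_64.sh" ;;
  *)       echo "Check the Miniconda download page for an installer matching $ARCH" ;;
esac
```

On the UNO Q this should print `aarch64`.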
After confirming the architecture is aarch64, download the ARM build of Miniconda:

```bash
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh
```
1.2 Installation
Run the installer script and follow the prompts:

```bash
bash Miniconda3-latest-Linux-aarch64.sh
```

Installation notes:
- Accept the license agreement
- Use the default installation path /home/arduino/miniconda3
- Answer "yes" when asked to initialize conda
1.3 Initialize conda
After the installation finishes, reload the shell configuration or restart the terminal:

```bash
source /home/arduino/miniconda3/bin/activate base
```

Verify the installation:

```bash
conda --version
```
Step 2: Configure the Python Environment
2.1 Create a Python 3.9 environment
To avoid conflicts with the system Python, create a dedicated conda environment:

```bash
conda create -n py39 python=3.9
conda activate py39
```
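Once the environment is active, a quick sanity check confirms which interpreter the shell now resolves to (on the UNO Q the path should point under /home/arduino/miniconda3/envs/py39; elsewhere it will differ):

```python
import sys

# Report the interpreter the active environment resolves to.
print("Python version:", sys.version.split()[0])
print("Interpreter path:", sys.executable)
```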
2.2 Install the required dependencies

```bash
pip install numpy opencv-python
```
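Since prebuilt aarch64 wheels are not guaranteed for every package/Python combination, it is worth verifying that both dependencies actually installed. The sketch below only probes for the modules rather than fully importing them, so it runs even when one is missing:

```python
import importlib.util

# Probe for each dependency without importing it fully.
for module in ("numpy", "cv2"):
    found = importlib.util.find_spec(module) is not None
    print(f"{module}: {'installed' if found else 'MISSING - reinstall with pip'}")
```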
Step 3: Install MediaPipe
3.1 Install MediaPipe
Recent MediaPipe releases ship official wheels for ARM (aarch64) Linux, so it can be installed directly with pip:

```bash
pip install mediapipe
```
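If pip cannot find an aarch64 wheel for the current Python version it will fail with a "no matching distribution" style error, so a post-install check is useful. This sketch degrades gracefully when the package is absent:

```python
import importlib.util

# Import mediapipe only if it is actually installed.
if importlib.util.find_spec("mediapipe") is not None:
    import mediapipe as mp
    print("MediaPipe version:", mp.__version__)
else:
    print("MediaPipe is not installed; pip may have failed to find an aarch64 wheel")
```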
Step 4: Write the Face Detection Program
4.1 Basic face detection program
Create a file named face_detection.py:
```python
import cv2
import mediapipe as mp


def check_detection_details(image_path):
    mp_face_detection = mp.solutions.face_detection
    mp_drawing = mp.solutions.drawing_utils

    image = cv2.imread(image_path)
    if image is None:
        print("Cannot read image")
        return

    image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    # Lower confidence threshold to increase detection sensitivity
    with mp_face_detection.FaceDetection(
            model_selection=0,                # short-range detection model
            min_detection_confidence=0.3) as face_detection:  # lowered threshold
        results = face_detection.process(image_rgb)

        print(f"Detection results: {results.detections}")

        if results.detections:
            print(f"Detected {len(results.detections)} faces")
            for i, detection in enumerate(results.detections):
                confidence = detection.score[0]
                print(f"Face {i+1}: Confidence = {confidence:.4f}")

                # Draw MediaPipe's default detection annotations
                mp_drawing.draw_detection(image, detection)

                # Manually draw a more obvious bounding box
                bbox = detection.location_data.relative_bounding_box
                h, w, _ = image.shape
                x = int(bbox.xmin * w)
                y = int(bbox.ymin * h)
                width = int(bbox.width * w)
                height = int(bbox.height * h)

                # Draw a thick red rectangle with a confidence label
                cv2.rectangle(image, (x, y), (x + width, y + height),
                              (0, 0, 255), 3)
                cv2.putText(image, f"Face {i+1}: {confidence:.2f}",
                            (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5,
                            (0, 0, 255), 2)
        else:
            print("No faces detected!")
            # For debugging, show the image dimensions
            print(f"Image dimensions: {image.shape}")

            # Save the RGB-converted image to check the input is being
            # processed correctly (cv2.imwrite expects BGR, so the saved
            # file will show swapped color channels)
            test_output = image_path.replace('.jpg', '_test_rgb.jpg')
            cv2.imwrite(test_output, image_rgb)
            print(f"RGB test image saved to: {test_output}")

    # Save the annotated result
    output_path = image_path.replace('.jpg', '_debug.jpg')
    cv2.imwrite(output_path, image)
    print(f"Debug result saved to: {output_path}")

    # Try to display (fails on headless systems without a GUI)
    try:
        cv2.imshow('Debug Detection', image)
        cv2.waitKey(0)
        cv2.destroyAllWindows()
    except cv2.error:
        print("Display not available")


# Run the debug function
check_detection_details("/home/arduino/xxg.jpg")
```
Step 5: Run and Debug
5.1 Run the program

```bash
python face_detection.py
```
Experimental Results
Original image

Detection result

Note: the sample images were found online; please contact me for removal if there is any infringement.
