2022 Digi-Key Innovation Design Competition - Smart Study Room Based on Raspberry Pi
Smart Study Room Based on Raspberry Pi
Author: EEWorld-eew_deadlock
1. Introduction
The idea for this work came from the problem of choosing among, and managing, multiple study rooms. The system is based on a Raspberry Pi and uses hardware such as a camera and sensors to collect data: OpenCV and TensorFlow process the images, Nginx with the RTMP module handles video streaming, and PyAudio and pyloudnorm capture and analyze sound. In use, students and administrators log in to a management website from a PC or mobile phone to see the real-time situation of each study room and judge whether it is suitable for studying. Students can be directed to a less crowded room based on occupancy, while administrators can supervise several study rooms at once based on the live data. The approach can be extended to the management of other public places, so it has promotion value.
2. System Block Diagram
The hardware includes a camera and a microphone; an infrared sensor is planned later to detect people's body temperature.
The software can be divided into two parts. The first part is the Python control end, which uses OpenCV and TensorFlow for crowd recognition, PyAudio to read microphone data for noise detection, and WebSocket to publish the results. The second part is Nginx with the RTMP module for video streaming, fronted by a Vue-based management page.
3. Functional description of each part
1. Hardware Connection
Here you can connect the USB camera and microphone to the Raspberry Pi.
2. Software Implementation
(1) Environment preparation:
Python:
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-aarch64.sh && chmod +x Miniforge3-Linux-aarch64.sh && ./Miniforge3-Linux-aarch64.sh
Opencv:
sudo apt-get install -y libhdf5-dev libhdf5-serial-dev python3-pyqt5 libatlas-base-dev
pip install opencv-python
Tensorflow:
pip install tensorflow
Nginx:
sudo apt install nginx -y
nginx-rtmp-module:
sudo apt-get install libnginx-mod-rtmp -y
pyaudio:
sudo apt-get install portaudio19-dev -y
sudo pip install pyaudio
pyloudnorm:
pip install pyloudnorm
flask:
pip install flask
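After installing everything, a quick sanity check (a convenience sketch, not part of the original build steps) confirms that each Python package can actually be imported on the Raspberry Pi:

```python
# Check that the Python dependencies installed above are importable.
import importlib.util

packages = ["cv2", "tensorflow", "pyaudio", "pyloudnorm", "flask"]
missing = [p for p in packages if importlib.util.find_spec(p) is None]
print("missing packages:", ", ".join(missing) if missing else "none")
```

If anything is listed as missing, rerun the corresponding install command before continuing.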
(2) Detailed explanation of each module.
Since there are many modules, we group them by function and explain each one in turn.
Module 1: Video processing function
First, use OpenCV's VideoCapture(0) to capture each frame of video and obtain the raw image data.
Then perform image conversion: the input is scaled, which both improves recognition accuracy and increases processing speed.
Next, perform face recognition and counting. The pretrained face-detection model is called to find faces, and cv2.rectangle marks each one on the frame.
Finally, push the frames into the stream. Python starts an ffmpeg process, the annotated frames from cv2 are written to its stdin, and ffmpeg pushes them to nginx's RTMP endpoint. RTMP is short for Real-Time Messaging Protocol, an application-layer protocol proposed by Adobe for real-time audio, video, and data communication between the Flash/AIR platform and streaming/interactive servers that support the protocol. nginx-rtmp-module is a submodule of nginx whose main function is to run a live-streaming server.
Module 2: Sound Processing Function
First, you need to use pyaudio to read the sound from the USB microphone.
Then, pyloudnorm is called to measure the sound loudness. Finally, websocket is used to provide data services for real-time visualization demonstration on the front end.
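pyloudnorm implements the ITU-R BS.1770 loudness measurement (LUFS). As a rough plain-Python illustration of the underlying idea only, the sketch below computes an RMS level in dBFS from raw 16-bit PCM; this is a simpler stand-in, not the LUFS value pyloudnorm reports:

```python
# Rough loudness sketch: RMS level of 16-bit mono PCM, expressed in dBFS.
import math
import struct

def rms_dbfs(pcm_bytes):
    """Return the RMS level of little-endian 16-bit PCM in dBFS."""
    samples = struct.unpack("<%dh" % (len(pcm_bytes) // 2), pcm_bytes)
    if not samples:
        return float("-inf")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms / 32768) if rms else float("-inf")

# A full-scale square wave should measure close to 0 dBFS.
tone = struct.pack("<4h", 32767, -32768, 32767, -32768)
print(round(rms_dbfs(tone), 1))
```

In the real module, pyloudnorm's Meter does this job with proper frequency weighting and gating.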
WebSocket is a protocol for full-duplex, two-way communication over a single TCP (or TLS) connection. It makes data exchange between client and server simpler and more efficient, and it allows the server to actively push data to the client. In the WebSocket API, the browser and the server only need to complete one handshake to create a persistent connection and transmit data in both directions.
WebSocket can receive messages while continuously sending data. Unlike REST, it does not wait for the server to fully answer one request before making the next; "full duplex" means receiving messages while sending them.
Module 3: Web Service Functions
Use nginx as the back-end server and Vue.js as the front-end framework to implement a management page, so that students or administrators can query the current state of the study room through it.
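Flask is installed in the environment preparation above, so the page's data service could be a small Flask endpoint that the Vue front end polls. The sketch below is an assumption for illustration (the route and field names are hypothetical, not taken from the project's actual code):

```python
# Hypothetical status endpoint for the management page (field names assumed).
from flask import Flask, jsonify

app = Flask(__name__)

# In the real system these values would come from the image and sound modules.
status = {"room": "A101", "people": 0, "loudness_lufs": -70.0}

@app.route("/api/status")
def get_status():
    # Return the current study-room status as JSON for the Vue front end.
    return jsonify(status)

# To serve for real: app.run(host="0.0.0.0", port=5000)
```

nginx can then proxy /api/ requests to this service while serving the Vue static files directly.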
4. Source Code
The main code is as follows:
Image Processing
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.models import load_model
import subprocess as sp
import numpy as np
import cv2
# ffmpeg reads raw BGR frames from stdin and pushes them to the local RTMP server
command = [
    "ffmpeg", "-y",
    "-f", "rawvideo", "-pix_fmt", "bgr24", "-s", "640x480", "-r", "25",
    "-i", "-",
    "-vcodec", "h264", "-pix_fmt", "yuv420p",
    "-f", "flv", "rtmp://127.0.0.1:1935/live/test",
]
pipe = sp.Popen(command, stdin=sp.PIPE)
prototxtPath = "deploy.prototxt"
weightsPath = "res10_model.caffemodel"
net = cv2.dnn.readNet(prototxtPath, weightsPath)
model = load_model("model")
cam = cv2.VideoCapture(0)
cam.set(3, 640)   # frame width
cam.set(4, 480)   # frame height
while True:
    _, image = cam.read()
    (h, w) = image.shape[:2]
    # Scale to 300x300 and subtract the mean values the detector was trained with
    blob = cv2.dnn.blobFromImage(image, 1.0, (300, 300), (104.0, 177.0, 123.0))
    net.setInput(blob)
    detections = net.forward()
    for i in range(0, detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > 0.5:
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            (startX, startY, endX, endY) = box.astype("int")
            (startX, startY) = (max(0, startX), max(0, startY))
            (endX, endY) = (min(w - 1, endX), min(h - 1, endY))
            face = image[startY:endY, startX:endX]
            face = cv2.cvtColor(face, cv2.COLOR_BGR2RGB)
            face = cv2.resize(face, (224, 224))
            face = img_to_array(face)
            face = preprocess_input(face)
            face = np.expand_dims(face, axis=0)
            (mask, withoutMask) = model.predict(face)[0]
            cv2.rectangle(image, (startX, startY), (endX, endY), (0, 255, 0), 2)
    # Write the annotated BGR frame to ffmpeg, which pushes it to the RTMP stream
    pipe.stdin.write(image.tobytes())
Sound Processing
import asyncio
import websockets
import pyaudio
import pyloudnorm as pyln
import numpy as np

audio = pyaudio.PyAudio()
meter = pyln.Meter(44100)  # ITU-R BS.1770 loudness meter at the capture rate

async def handler(websocket, path):
    name = await websocket.recv()
    stream = audio.open(format=pyaudio.paInt16, rate=44100, channels=1,
                        input_device_index=2, input=True, frames_per_buffer=4096)
    while True:
        # pyloudnorm needs at least ~0.4 s of audio, so read half a second per block
        data = stream.read(22050, exception_on_overflow=False)
        # Convert the raw 16-bit PCM bytes to float samples in [-1.0, 1.0]
        samples = np.frombuffer(data, dtype=np.int16).astype(np.float32) / 32768.0
        loudness = meter.integrated_loudness(samples)
        await websocket.send(str(loudness))

start_server = websockets.serve(handler, '0.0.0.0', 8765)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
Video streaming configuration
rtmp {
    server {
        listen 1935;
        chunk_size 4000;
        application live {
            live on;
            record off;
            hls on;
        }
    }
}
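The configuration above turns HLS on; for browser playback via HLS the module also needs a location to write its segments. A commonly added fragment (the path here is an assumption, adjust to taste):

```nginx
# Inside the same "application live { }" block:
hls_path /tmp/hls;    # where .ts segments and the .m3u8 playlist are written
hls_fragment 3s;      # target segment length
```

nginx must have write access to that directory, and a plain location block can then serve the playlist over HTTP.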
5. Project Summary
The Raspberry Pi is lightweight, low-power, and highly extensible, making it suitable for deploying all kinds of IoT devices and services. This project uses a Raspberry Pi to perform image and sound recognition, providing convenience for study-room users. Later, hardware modules such as an infrared thermometer can be added to measure body temperature and help with epidemic prevention and control.