Multi Modal Edge AI System for Anomaly Detection in Pharmaceutical Manufacturing

About

This project develops an eco-friendly, sustainable, cost-effective, and scalable solution for anomaly detection in pharmaceutical manufacturing. It is a proof of concept showing how pharmaceutical manufacturers can adopt edge computing to reduce power consumption, latency, and the associated carbon footprint of centralized processing.

System Overview

System Architecture: edge compute pipeline and sensor integration.

Anomaly Detection Demo: Edge AI inference on the pharmaceutical manufacturing line.

AI Inference: we fine-tuned a pre-trained YOLO classifier to detect cracks and check whether vials were capped properly (confusion matrix shows cracked-vs-normal performance).

Live Input at the Edge: live images from the camera and sensor setup are rendered along with the temperature.

How Scientists Can Inspect Material Waste

Argus, our User Interface

Real-Time, Edge-Based Image Capture and Streaming

Meet the Team

Our team brings together expertise in embedded systems, machine learning, and edge computing to deliver real-time anomaly detection for pharmaceutical manufacturing.

Noah Mathew

Team Lead | Systems Architect

I am a senior Computer Engineering student at UC Irvine and the creator of Argus, our Edge AI-based anomaly detection system. While interning at UST in Aliso Viejo, I gained hands-on experience with distributed and edge computing architectures. Observing the large-scale waste generated by undetected vial defects revealed an opportunity for a smarter, decentralized solution. I initiated this project to design a real-time, cost-effective inspection system that moves intelligence directly to the edge. As Team Lead, I oversee system architecture, embedded AI deployment, and cross-platform integration across our microcontrollers and hardware accelerators. This platform reflects my commitment to building scalable, sustainable edge systems that solve real industrial problems.

Paribesh Thapaliya

Computer Vision Engineer

My name is Paribesh Thapaliya, and I am a Computer Engineering student at UC Irvine and a contributing member of the Argus project. I laid the groundwork for my role by configuring the camera and building an automatic data-collection pipeline with OpenCV to detect and capture vials reliably. My main interest as an engineer is embedded systems, because I enjoy writing code that interacts directly with hardware.

Artin Tamaddon

Firmware Engineer

My name is Artin Tamaddon-Dallal, and I am a contributing member of the Argus project. As a Computer Engineering student at UC Irvine, my focus lies in low-level system development. I contributed to the project by writing the foundational server skeleton, assisting in the training and configuration of the sensor pipeline, and supporting development across the team. My main interests as an engineer are firmware engineering and C/C++ programming. I am committed to building readable, efficient solutions at the hardware level.

Trang Nguyen

Machine Learning Engineer

My name is Trang Nguyen, and I am a Computer Engineering student at UC Irvine and a contributing member of the Argus project. My focus lies in applied machine learning and computer vision. I contributed to the project by developing and training the YOLO-based model used for vial classification and crack detection. I organized the dataset, trained and evaluated the model, and prepared it for deployment. My main interests as an engineer are AI-driven systems and embedded machine learning applications. I am committed to building accurate and efficient models that perform reliably in real-world settings.

Results & Metrics

We fine-tuned a pre-trained YOLO classifier for our pharmaceutical vial inspection use case. The model detects both cracks and cap defects. Below are our validation and experimental test results.
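Fine-tuning a pre-trained YOLO classification checkpoint is typically a one-line job with the Ultralytics CLI; a hedged sketch (the dataset path, model size, epoch count, and image size below are illustrative placeholders, not our actual training configuration):

```shell
# Fine-tune a pre-trained YOLO classification checkpoint on a folder of
# labeled vial images (e.g. datasets/vials/train/cracked and
# datasets/vials/train/normal subdirectories). Values are placeholders.
yolo classify train model=yolov8n-cls.pt data=datasets/vials epochs=50 imgsz=224
```

The same pattern covers a cap-detection model trained on capped/miscapped folders.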

Model Validation Results

During validation, both models achieved perfect classification—every prediction matched the true label.

Cap Detection Model (normalized confusion matrix; rows: predicted, columns: true)

               capped   miscapped
    capped       1.00        0.00
    miscapped    0.00        1.00

Crack Detection Model (normalized confusion matrix; rows: predicted, columns: true)

               cracked   normal
    cracked       1.00     0.00
    normal        0.00     1.00
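The normalized matrices above follow a standard construction: count samples per (predicted, true) pair, then divide each column by that true label's sample count. A minimal sketch in plain Python (the labels and sample data below are illustrative, not our validation set):

```python
from collections import Counter

def normalized_confusion_matrix(y_true, y_pred, labels):
    """matrix[pred][true], each column normalized by its true-label count,
    matching the rows-predicted / columns-true convention of the tables above."""
    counts = {p: {t: 0 for t in labels} for p in labels}
    for t, p in zip(y_true, y_pred):
        counts[p][t] += 1
    per_true = Counter(y_true)  # column totals (samples per true label)
    return {p: {t: (counts[p][t] / per_true[t] if per_true[t] else 0.0)
                for t in labels} for p in labels}

# Illustrative perfect validation run (every prediction matches its label)
labels = ["cracked", "normal"]
y_true = ["cracked", "cracked", "normal", "normal", "normal"]
y_pred = ["cracked", "cracked", "normal", "normal", "normal"]
m = normalized_confusion_matrix(y_true, y_pred, labels)
print(m["cracked"]["cracked"], m["normal"]["cracked"])  # 1.0 0.0
```

A perfect run puts 1.00 on the diagonal and 0.00 everywhere else, exactly as in both validation matrices.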

Experimental Test Results

Performance on real-world test data—how the model performs when deployed on unseen vials.

Category   Total   Correct   Incorrect   Model Accuracy
Cracked      113       111           2            98.2%
Normal       121       111          10            91.7%

What these numbers mean:

Total — Number of vials in each category during testing.

Correct / Incorrect — How many the model classified correctly vs. misclassified.

Model Accuracy — Correct ÷ Total. Cracked vials: 111/113 = 98.2%. Normal vials: 111/121 = 91.7%.
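The accuracy column is just that division; a quick check in Python using the figures from the table:

```python
def accuracy(correct: int, total: int) -> float:
    """Per-category accuracy as defined above: Correct / Total."""
    return correct / total

# Figures from the experimental test table
results = {"Cracked": (111, 113), "Normal": (111, 121)}
for category, (correct, total) in results.items():
    print(f"{category}: {correct}/{total} = {accuracy(correct, total):.1%}")
# Cracked: 111/113 = 98.2%
# Normal: 111/121 = 91.7%
```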

The crack detector correctly identified 98.2% of cracked vials and 91.7% of normal vials on real test data, demonstrating strong performance for edge deployment.

Detected Anomalies

Our pipeline begins with computer-vision data collection, then uses a pre-trained YOLO classifier fine-tuned to detect cracks and verify proper vial capping. Below are the anomaly categories we track.

Good (Pass): no cracks, properly capped, ideal temperature.

Hazard (Warning): no cracks, temperature OK, but not properly capped.

Bad Temperature (Fail): temperature outside the acceptable range. Not salvageable.

Broken or Cracked (Fail): cracks or breakage. Not salvageable.
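The four outcomes above reduce to a small rule table; a minimal sketch of that triage logic in Python (the temperature window is an assumed placeholder, not the deployed threshold):

```python
def triage(cracked: bool, capped: bool, temp_c: float,
           acceptable=(2.0, 8.0)) -> str:
    """Map one vial's inspection results to the four categories above.

    `acceptable` is an illustrative temperature window; the real range
    depends on the product being manufactured.
    """
    if cracked:
        return "Fail: broken or cracked"   # not salvageable
    if not (acceptable[0] <= temp_c <= acceptable[1]):
        return "Fail: bad temperature"     # not salvageable
    if not capped:
        return "Warning: hazard (miscapped)"  # warning, not fail
    return "Pass: good"

print(triage(cracked=False, capped=True, temp_c=5.0))   # Pass: good
print(triage(cracked=False, capped=False, temp_c=5.0))  # Warning: hazard (miscapped)
print(triage(cracked=True, capped=True, temp_c=5.0))    # Fail: broken or cracked
```

Cracks take precedence over temperature, which takes precedence over capping, so a miscapped warning is only issued when the vial is otherwise intact and in range.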