SkySentinel
Automated Border Surveillance
The problem SkySentinel solves
In remote, power-constrained border regions, manual patrolling is inefficient and sometimes impractical. Surveillance of the India–Bangladesh border still depends heavily on manual patrolling and static watchtowers. These approaches face serious limitations:
- Gaps in coverage → terrain, vegetation, and riverine stretches create blind spots.
- High manpower demand → large sections of the border require constant deployment of security forces, which is resource-intensive.
- Delayed response → human patrols can only react once they physically see an intrusion, leading to late detection.
- Infrastructure constraints → watchtowers and static CCTVs cannot adapt to shifting threats or cover dynamic zones like river crossings.
- Cost & risk → maintaining continuous human patrols over thousands of kilometers is expensive and exposes personnel to danger.
We propose “SkySentinel”, an autonomous, multi-layer surveillance solution combining Vertical Take-Off and Landing Unmanned Aerial Vehicles (VTOL UAVs), a Cablecam, and stealth Passive Intrusion Detectors (PIDs) for real-time border monitoring and infiltration detection.
Challenges we ran into
During development, one of the key challenges was the integration of heterogeneous sensors—mmWave radar, PIR (Passive Infrared) sensors, and the camera module. Each sensor operates on different principles, sampling rates, and communication protocols, which made synchronization and reliable data fusion difficult.
- The mmWave radar generated continuous streams of range and velocity data that required filtering to remove noise and false positives.
- The PIR sensor, while low-power and fast, only provided binary motion detection and often triggered falsely due to environmental conditions like wind or temperature changes.
- The camera module demanded higher bandwidth, proper calibration, and coordination with the AI inference pipeline to avoid latency.
Bringing these modules together into a unified system initially led to conflicts in timing, mismatched outputs, and difficulty in aligning sensor triggers with camera frames.
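The timing conflicts above came down to aligning asynchronous streams. A minimal sketch of the buffering idea, assuming a per-sensor ring buffer of timestamped readings (the `SensorBuffer` class and sensor names here are illustrative, not the actual SkySentinel firmware):

```python
import time
from collections import deque

class SensorBuffer:
    """Ring buffer of (timestamp, reading) pairs for one sensor stream.

    Hypothetical helper for illustration: window size and stream names
    are assumptions, not taken from the real system.
    """
    def __init__(self, maxlen=128):
        self.buf = deque(maxlen=maxlen)

    def push(self, reading, ts=None):
        # Stamp each reading on arrival so streams share one clock.
        self.buf.append((ts if ts is not None else time.monotonic(), reading))

    def nearest(self, ts):
        """Return the (timestamp, reading) pair closest in time to ts."""
        if not self.buf:
            return None
        return min(self.buf, key=lambda pair: abs(pair[0] - ts))

# Align a radar detection with the camera frame captured closest to it.
camera = SensorBuffer()
camera.push("frame_0", ts=0.00)
camera.push("frame_1", ts=0.05)
radar_event_ts = 0.04
frame_ts, frame = camera.nearest(radar_event_ts)
# frame is "frame_1": the closest frame to the 0.04 s radar event
```

Nearest-timestamp matching like this tolerates the very different sampling rates of the three sensors, at the cost of a small, bounded alignment error.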
To overcome these challenges, we:
- Implemented a sensor fusion approach, where PIR acted as a low-power wake-up trigger, mmWave provided precise detection, and the camera confirmed with visual evidence.
- Used buffering and timestamp alignment to synchronize data across sensors.
- Optimized the software pipeline so that the computationally heavy YOLO inference ran only when necessary, reducing latency and improving efficiency.
- Iteratively tested in different environments to fine-tune thresholds and reduce false alarms.
Through this process, we successfully integrated all three modules into a reliable, real-time human detection system.