
    Suspicious activity detection in forest or other protected areas using image processing.

    Departments:
    ECE,
    EEE,
    IT,
    CS
    Project date: 3/22/2024

    A project focused on detecting and mitigating unauthorized activities in protected areas using image processing to identify suspicious behaviors such as running, face masking, and carrying foreign objects.

    Topics: AI, image processing
    Technologies used: Flutter, ESP32, Fusion360, Firebase, TensorFlow
    Project by: Viswajith M
    About this project

    Suspicious Activity Detection in Protected Areas Using Image Processing

    Abstract

    The project "Suspicious Activity Detection in Protected Areas Using Image Processing" aims to harness the power of advanced image processing techniques to monitor protected regions, specifically forests. Through continuous surveillance and analysis of real-time footage, the project aspires to detect unusual human behavior that may indicate unauthorized or harmful activities. This marks a significant step towards enhancing the security of biodiversity and preventing ecological degradation. The solution meticulously identifies key indicators of suspicious behavior such as running, face coverings, and the carrying of foreign objects to trigger timely alerts for relevant authorities.

    Introduction

    Protected areas, including forests and wildlife reserves, are essential for safeguarding biodiversity and maintaining ecological balance. However, these regions often face threats from unauthorized activities such as poaching, deforestation, and illegal gatherings. Traditional surveillance methods are often inadequate due to their reliance on human monitoring, which can be inconsistent and prone to error. To address these challenges, the project utilizes cutting-edge image processing technologies to create a robust monitoring solution that enables real-time detection of suspicious activities in protected areas.

    Image processing refers to the technique of using algorithms to enhance and analyze images. By employing machine learning tools and custom algorithms, the project seeks to provide a proactive approach to forest surveillance, contributing to better conservation efforts and security measures. By acting on alerts generated by the system, authorities can respond more effectively to threats, thereby protecting these vital environments for future generations.

    Objectives
    1. Develop a real-time monitoring system capable of identifying suspicious activities in protected areas through image processing techniques.
    2. Trigger timely alerts based on the detection of specific human behaviors that deviate from regular patterns.
    3. Enable efficient management of resources by reducing the need for constant human surveillance.
    4. Create a user-friendly mobile application that allows easy access to surveillance data and alert notifications.
    5. Foster collaboration between conservation groups and technological innovators to enhance forest protection strategies.
    Features of the Project
    • Flutter App Interface Development: Creation of a mobile application for user interaction, alerts, and data visualization.
    • Basic Image Labeling, Text, and Person Detection: The system uses foundational image processing functions to identify and label objects within the monitoring area.
    • Google ML Kit Image Processing: Integration of Google's machine learning kit for advanced image processing capabilities to enhance detection accuracy.
    • ESP32 Circuit Build: Construction of the electronic circuitry required for image capture and video processing.
    • Fusion360 for Designing Enclosure: Utilizing CAD software for designing the enclosure that houses the electronic components.
    • Image Capture Unit: A specialized unit responsible for capturing images and video footage from the monitored areas.
    • Power Supply Unit: Ensures a reliable power source for the entire monitoring system.
    • Connectivity Module: Facilitates data transfer between the camera and the monitoring application.
    • ESP32 CAM Wireless Image Transfer: Enabling seamless and wireless transmission of captured footage.
    • Integration of Hardware and Software: Combining all components into a cohesive system for effective operational functionality.
    • TensorFlow for Image Processing and Suspicious Activity Detection: Employing TensorFlow frameworks to design and train machine learning models focused on detecting unusual human behaviors.
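As a rough illustration of how the TensorFlow-based detection in the last feature could work, the following Python sketch scores a single captured frame with a trained classifier. It is a minimal example under stated assumptions, not the project's actual code: the model file name, the 224x224 input size, and the two-class labelling are placeholders.

```python
# Illustrative only: score one captured frame with a trained Keras classifier.
# The model file, 224x224 input size, and two-class output are assumptions.
import numpy as np
import tensorflow as tf
from PIL import Image

LABELS = ["normal", "suspicious"]                             # assumed class order
model = tf.keras.models.load_model("suspicious_activity.h5")  # placeholder file

# Load a captured frame and prepare it as a single-image batch.
image = Image.open("frame.jpg").convert("RGB").resize((224, 224))
batch = np.expand_dims(np.asarray(image, dtype=np.float32) / 255.0, axis=0)

probabilities = model.predict(batch, verbose=0)[0]
label = LABELS[int(np.argmax(probabilities))]
print(f"Prediction: {label} (p={probabilities.max():.2f})")

if label == "suspicious":
    print("ALERT: push a notification to the monitoring app")
```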
    Final Outputs

    The project aims to deliver several key outputs that encapsulate various aspects of its implementation:

    • Hardware
    • Mobile App
    • Complete codebase
    • 3D design and CAD file
    Components Used

    The following components and materials were utilized in the project:

    Component Name | Quantity | Price (₹)
    ESP32 CAM Camera Module | 1 | 499
    TP4056 1A Li-Ion Battery Charging Board | 1 | 19
    950 mAh 3.7V single-cell Rechargeable LiPo Battery | 1 | 350
    FT232RL FTDI Mini USB to TTL Serial Converter | 1 | 362
    FDM 3D Printing Service | 200 gm | 5
    6A 250V AC DPST ON-OFF Red Round Rocker Switch | 1 | 80
    Innovativeness and Social Relevance

    The use of image processing technology in environmental conservation reflects a novel approach to safeguarding natural resources. By employing machine learning and image analysis, this project creates a proactive solution to monitor and protect sensitive ecosystems from human intrusions. The project holds significant social relevance as it contributes to the global initiative of preserving biodiversity and mitigating negative anthropogenic effects on environments.

    Furthermore, the ability to detect suspicious activities in real-time may deter potential violators from entering protected areas, thus bolstering conservation efforts. This kind of technological innovation not only enhances wildlife security but also paves the way for various applications in environmental management, wildlife protection, and sustainable development.

    Conclusion

    The "Suspicious Activity Detection in Protected Areas Using Image Processing" project stands at the intersection of technology and environmental conservation. By implementing advanced image processing techniques, this initiative promises to revolutionize the monitoring of protected areas, enabling authorities to respond swiftly to potential threats.

    Through its comprehensive approach to detecting unusual behavior and providing alert notifications, the project embodies a significant step forward in safeguarding biodiversity while promoting ecological health. The holistic integration of hardware and software components ensures that the system operates efficiently and effectively. It is clear that projects like this are imperative in the ongoing fight to preserve our natural landscapes for future generations.

    Development Phases and Updates
    Phase 1: Initial Setup and Hardware Procurement

    Date: April 5, 2024
    Update: Hardware parts purchased

    The project began with the acquisition of essential components for the development of the suspicious activity detection system. The parts purchased included:

    • ESP32 CAM Module
    • TP4056 1A Li-Ion Battery Charging Board with 5V booster
    • 1000 mAh 3.7V Rechargeable LiPo Battery
    • Perfboard
    • Female header pins
    • USB to UART converter

    Hardware Parts Purchased

    The procurement of these parts was crucial for building a functional prototype capable of detecting suspicious activities.

    Phase 2: Design and Testing of Hardware Components

    Date: April 15, 2024
    Update: Hardware enclosure designed

    With all necessary components secured, the team moved forward with designing the hardware enclosure. Utilizing Fusion360 software, the team designed an enclosure meant to house the components securely. This design ensures that the unit is weather-proof and suitable for outdoor installation.

    Online view of the design: Fusion360 Enclosure Design

    Hardware Enclosure Designed

    Phase 3: Integration Testing of Circuits

    Date: April 15, 2024
    Update: ESP32 Circuit Testing and Integration

    Following the design phase, the ESP32 circuit underwent rigorous testing for functional performance across various scenarios. The integration of the ESP32 circuit is essential, as it forms the brain of the image processing system.
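One straightforward way to smoke-test such a circuit after flashing is to watch its serial output while it boots. The snippet below is a hedged example of that idea using pyserial; the serial port name, baud rate, and the "IP address" log line are assumptions, not values recorded from the project.

```python
# Hedged smoke test: read the flashed ESP32 CAM's boot log over serial and
# confirm it joins Wi-Fi. Port name and baud rate are assumptions.
import serial  # pip install pyserial

PORT = "/dev/ttyUSB0"   # depends on the USB-to-TTL adapter in use
BAUD = 115200           # typical ESP32 console baud rate

with serial.Serial(PORT, BAUD, timeout=10) as console:
    for _ in range(50):                           # scan up to 50 boot-log lines
        line = console.readline().decode(errors="ignore").strip()
        if line:
            print(line)
        if "IP address" in line:                  # many ESP32 sketches print the assigned IP
            print("Board booted and connected to Wi-Fi.")
            break
```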

    ESP32 Circuit Testing

    Phase 4: Software Development and Integration

    Date: April 9, 2024
    Update: Developed Home screen view with InApp, Hardware, FaceTrain, and text features

    In parallel with hardware development, the software development team focused on the core functionality of the app, including the Home screen view. This interface integrates capabilities related to in-app processing, hardware integration, and facial recognition technologies.

    Features Implemented:
    • Integration of Flutter App Interface for the Home screen.
    • Basic image labeling and text detection functionalities.
    • Hardware components such as ESP32 CAM, Li-Ion Battery Charging Board, and Rechargeable Battery were integrated.
    • FaceTrain technology employed for improved facial detection.
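ML Kit's face detection runs on-device inside the Flutter app, so it is not reproduced here. As a rough desktop-side analogue of the same idea, and purely for illustration, the sketch below detects faces in a saved frame with OpenCV's bundled Haar cascade.

```python
# Desktop-side illustration of face detection using OpenCV's bundled Haar
# cascade. The project itself performs this on-device with Google ML Kit.
import cv2

CASCADE_PATH = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(CASCADE_PATH)

frame = cv2.imread("frame.jpg")          # placeholder input image
if frame is None:
    raise SystemExit("frame.jpg not found")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

print(f"Detected {len(faces)} face(s)")
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("frame_annotated.jpg", frame)
```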

    Home Screen Development

    Phase 5: Wireless Image Transfer Implementation

    Date: April 16, 2024
    Update: Wireless Image Transfer Completed

    The project team successfully integrated the ESP32 CAM wireless image transfer functionality, which allows captured images to be transmitted wirelessly for further processing. This step significantly boosts the system's performance in real-time monitoring.
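As an illustration of what receiving frames on the monitoring side could look like, the sketch below downloads a still image over Wi-Fi and archives it with a timestamp. The /capture route matches the stock ESP32 CAM web-server example rather than this project's confirmed firmware, and the camera address is a placeholder.

```python
# Hedged sketch: pull one frame wirelessly from the camera and archive it.
# The /capture route and camera address are assumptions for illustration.
from datetime import datetime
from pathlib import Path

import requests

CAMERA_URL = "http://192.168.1.50/capture"   # placeholder camera address
OUT_DIR = Path("captures")
OUT_DIR.mkdir(exist_ok=True)

response = requests.get(CAMERA_URL, timeout=5)
response.raise_for_status()

filename = OUT_DIR / f"{datetime.now():%Y%m%d_%H%M%S}.jpg"
filename.write_bytes(response.content)
print(f"Saved {filename} ({len(response.content)} bytes)")
```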

    Wireless Image Transfer Completed

    Phase 6: TensorFlow Integration for Enhanced Detection

    Date: April 16, 2024
    Update: TensorFlow for Image Processing Added

    The integration of TensorFlow into the project was a critical enhancement, enabling improved image processing and abnormal behavior detection capabilities.

    Features Implemented:
    • Deployment of TensorFlow for processing captured images.
    • Enhanced image analysis and suspicious activity detection accuracy.
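The following sketch illustrates the kind of transfer-learning model that could be trained for such a task. It is a hedged example, not the project's actual model: the dataset directory layout, image size, and MobileNetV2 backbone are assumptions made for illustration.

```python
# Minimal sketch of training a binary "normal vs suspicious" image classifier
# with transfer learning. Directory layout, image size, and architecture are
# assumptions for illustration; the project's actual model may differ.
import tensorflow as tf

IMG_SIZE = (224, 224)

# Expects dataset/normal/*.jpg and dataset/suspicious/*.jpg (hypothetical layout).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False                      # keep pretrained features frozen

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),   # scale pixels to [0, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
model.save("suspicious_activity.h5")        # placeholder output file
```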

    TensorFlow Integration

    Phase 7: Testing of Live Web Server and Final Adjustments

    Date: April 15, 2024
    Update: Tested Basic Wireless Live Web Server Setup

    The team tested the basic setup of a wireless live web server, focusing on its functionality and reliability in transmitting real-time images captured by the ESP32 CAM module.
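A simple way to check such a setup is to sample the live stream for a fixed period and measure the frame rate. The sketch below does this with OpenCV; the MJPEG stream URL on port 81 follows the stock ESP32 CAM web-server example and is an assumption, not a detail taken from the project.

```python
# Hedged sketch: sample the camera's live stream for 10 seconds and report
# the effective frame rate. The stream URL is an assumption.
import time

import cv2

STREAM_URL = "http://192.168.1.50:81/stream"   # placeholder stream address

capture = cv2.VideoCapture(STREAM_URL)
frames, start = 0, time.time()

while time.time() - start < 10:                # sample the stream for 10 seconds
    ok, frame = capture.read()
    if ok:
        frames += 1

capture.release()
elapsed = time.time() - start
print(f"Received {frames} frames in {elapsed:.1f}s ({frames / elapsed:.1f} fps)")
```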

    Wireless Live Web Server Setup

    Conclusion

    The "Suspicious Activity Detection in Protected Areas Using Image Processing" project has made considerable advancements, covering hardware procurement, design, testing, and software development. Each phase has contributed towards building a comprehensive system capable of monitoring and detecting suspicious activities effectively.

    By replicating the steps outlined in this report, individuals working on similar projects can leverage the insights gained from this initiative, enhancing their capabilities in managing protected areas through innovative image processing solutions. The integration of advanced imaging and real-time processing technologies stands to greatly benefit biodiversity preservation and enable efficient responses to detected threats in protected environments.

    Future work will involve refining the algorithms for better detection accuracy, further optimizing the system for diverse environments, and conducting field tests to validate the setup under real-world conditions. Your engagement and collaboration in these endeavors are highly encouraged as we strive towards achieving our goals in conservation and protection.