Building an AI solution for manufacturing

Think-it Team
Engineering · 9 min read
How Think-it approaches AI to drive digital transformation in manufacturing.

Introduction: AI in Machine Maintenance

The benefits of AI in predictive maintenance are significant. According to McKinsey, it can reduce equipment downtime by 30-50% and increase equipment lifetime by 20-40%. It also enhances safety by preventing unexpected malfunctions and lowers maintenance costs by ensuring maintenance is performed only when necessary.

Scenario

A global leader in industrial machinery aims to leverage AI to reduce downtime, streamline troubleshooting, and empower its technicians to make fast, accurate diagnoses with minimal manual data searching. Here, we present Think-it's approach to an AI-driven solution: a comprehensive tool designed to integrate existing resources—such as service tickets, machine manuals, and CRM data—into a single platform. This solution combines predictive maintenance software with Generative AI for troubleshooting to ensure quicker problem resolution and improved machine reliability.

We’ll walk through how we approach the project from conception to implementation, highlighting each stage.

Benefits of AI in Manufacturing

Implementing AI in manufacturing offers numerous benefits, including improved operational efficiency, significant cost reduction, and enhanced predictive maintenance capabilities. These advantages position AI as a transformative force in the industry.

Step 1: Defining the Problem and Setting Objectives

Industrial equipment maintenance is complex, and delays in troubleshooting caused by fragmented data sources often lead to prolonged downtime. This calls for a centralized, scalable tool that can do the following:

  • Integrate diverse data sources like manuals, service tickets, and CRM logs to create a single point of reference.
  • Enable predictive diagnostics to anticipate maintenance needs before issues arise.
  • Provide technicians with an AI-driven copilot that can guide them through troubleshooting processes efficiently.

The Internal Copilot is our answer: a digital solution that addresses these pain points by consolidating technical resources, offering real-time support, and making the entire troubleshooting process significantly faster and more accurate.

Step 2: Solution Design and Architecture

To meet these objectives, we designed a solution centered around a large language model for diagnostics. The Internal Copilot would act as a centralized knowledge base for all diagnostic and repair information, accessible via an intuitive interface on both web and mobile platforms.

Internal Copilot Application Development View

Here’s a closer look at the components:

Data Pipeline for Technical Support

A data ingestion pipeline imports, processes, and stores information from a range of sources—PDF documents, CRM logs, and service tickets. This pipeline transforms the raw information into structured content, optimized for fast access during troubleshooting.
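As an illustration, here is a minimal ingestion sketch assuming the pypdf library; the record schema, file name, and doc_type labels are hypothetical placeholders, not the production pipeline.

```python
# Minimal ingestion sketch: extract text and basic metadata from a PDF
# manual and store it as a structured record.
from dataclasses import dataclass, field

from pypdf import PdfReader  # assumed PDF parsing library


@dataclass
class DocumentRecord:
    source_path: str
    doc_type: str  # e.g. "manual", "service_ticket" (illustrative labels)
    pages: list[str] = field(default_factory=list)


def ingest_pdf(path: str, doc_type: str) -> DocumentRecord:
    """Read a PDF and return its text split per page."""
    reader = PdfReader(path)
    pages = [page.extract_text() or "" for page in reader.pages]
    return DocumentRecord(source_path=path, doc_type=doc_type, pages=pages)


if __name__ == "__main__":
    record = ingest_pdf("machine_manual.pdf", doc_type="manual")  # hypothetical file
    print(f"Ingested {len(record.pages)} pages from {record.source_path}")
```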

AI-Driven Troubleshooting Solutions

At the heart of the Internal Copilot is its Generative AI component, trained to answer technicians' questions from the extracted data. When users input queries, the Internal Copilot references the knowledge base and responds with targeted, data-backed insights. It also generates additional diagnostic questions based on the initial query, enabling more refined and accurate issue resolution.
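A minimal sketch of this retrieval-augmented answering loop, assuming the OpenAI Python client; the keyword-overlap scorer is a deliberately simple stand-in for a real vector search, and the model choice is illustrative.

```python
# Retrieval-augmented answering: fetch the most relevant chunks from the
# knowledge base, then ask the model to answer using only that context.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def score(chunk: str, query: str) -> int:
    """Naive keyword overlap; a production system would use embeddings."""
    return len(set(chunk.lower().split()) & set(query.lower().split()))


def answer(query: str, knowledge_base: list[str]) -> str:
    top_chunks = sorted(knowledge_base, key=lambda c: score(c, query), reverse=True)[:3]
    context = "\n---\n".join(top_chunks)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # PoC-level model mentioned below; easily swapped
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content
```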

Predictive Maintenance Capabilities

The Internal Copilot includes a predictive analytics module that identifies patterns in the data and alerts users to potential issues before they arise. This proactive approach helps maintain equipment performance, prevent downtime, and minimize reactive maintenance efforts.
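The exact models depend on the machines and sensors involved; as a minimal sketch of the idea, here is a rolling z-score heuristic that flags readings drifting from their recent baseline (the column name, window, and threshold are illustrative assumptions).

```python
# Flag sensor readings that drift beyond three standard deviations
# of their recent rolling window.
import pandas as pd


def flag_anomalies(readings: pd.DataFrame,
                   window: int = 50,
                   threshold: float = 3.0) -> pd.DataFrame:
    rolling = readings["vibration"].rolling(window, min_periods=window)
    zscore = (readings["vibration"] - rolling.mean()) / rolling.std()
    return readings[zscore.abs() > threshold]  # rows worth alerting on
```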

Role-Based Access and Security

For security and compliance, the system is designed with role-based access control, ensuring that only authorized personnel can access certain types of machine data. This aligns with the standards expected of B2B software for industrial automation, safeguarding sensitive information while facilitating seamless use across teams.
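In its simplest form, such a check is a role-to-permission lookup performed before any data is served; the roles and resource types below are hypothetical.

```python
# Role-based access check before serving machine data.
ROLE_PERMISSIONS = {
    "technician": {"manuals", "service_tickets"},
    "supervisor": {"manuals", "service_tickets", "crm_logs"},
}


def can_access(role: str, resource_type: str) -> bool:
    return resource_type in ROLE_PERMISSIONS.get(role, set())


assert can_access("technician", "manuals")
assert not can_access("technician", "crm_logs")
```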

The next section dives deeper into the logical view of the Internal Copilot application's architecture and explains how the different components are connected.

Internal Copilot Application Logical View

In the next section, we define the data ingestion pipeline and how data is ingested from the sources, cleaned, and prepared. It covers extracting relevant information, transforming it where needed, storing it in different formats, formatting it into a structure the LLM can understand, and feeding it to the LLM module for processing and answer generation. This ensures the LLM works from high-quality information, improving the accuracy and reliability of its responses. Suitable technologies include custom Python scripts, data cleaning libraries such as Pandas and NumPy, and ETL tools.

Ingestion architecture: data import & extraction from documents and service tickets
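To make the cleaning and formatting steps concrete, here is a minimal sketch using Pandas: normalize the raw extracted text, then split it into fixed-size chunks the LLM can consume (the column name and chunk size are illustrative).

```python
# Clean raw extracted text, then chunk it for the LLM.
import pandas as pd


def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Drop empty and duplicate rows, collapse whitespace."""
    df = df.dropna(subset=["text"]).drop_duplicates(subset=["text"])
    df["text"] = df["text"].str.replace(r"\s+", " ", regex=True).str.strip()
    return df


def chunk(text: str, size: int = 1000) -> list[str]:
    """Split text into fixed-size chunks; real pipelines often split on sections."""
    return [text[i:i + size] for i in range(0, len(text), size)]
```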

In the next section, we present an architecture that explains how the LLM processing module generates answers to users' questions.

LLM processing module architecture

AI/LLM: Diagnostic Questions Generation architecture
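As a sketch of the diagnostic-question step, the module can prompt the model with the user's initial query and the retrieved context and ask for clarifying questions; the prompt wording and model name here are assumptions.

```python
# Generate follow-up diagnostic questions from the initial query and context.
from openai import OpenAI

client = OpenAI()


def diagnostic_questions(query: str, context: str, n: int = 3) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative choice
        messages=[
            {"role": "system",
             "content": f"You help technicians narrow down machine faults. "
                        f"Propose {n} short diagnostic questions, one per line."},
            {"role": "user", "content": f"Context:\n{context}\n\nReported issue: {query}"},
        ],
    )
    return response.choices[0].message.content
```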

Step 3: Implementing the Internal Copilot – Key Phases

The Internal Copilot is implemented in phases, each focusing on different aspects of system capability. Here's an overview of each phase:

Milestone 1: Proof of Concept

The Proof of Concept (PoC) phase aims to demonstrate the feasibility and potential of the AI copilot system for enhancing machine troubleshooting and support. Over a 6-week period, engineers develop a proof of concept that showcases the core functionalities of the system, including data ingestion, LLM-powered question answering, and user interaction. The timeline includes a built-in contingency buffer to accommodate unexpected challenges and ensure the successful completion of core objectives.

Main objectives

  1. Implement basic data ingestion and processing for PDF documents
  2. Integrate a simple LLM for question answering
  3. Develop a rudimentary web interface for user interaction

Success criteria

  1. Successfully ingest and process at least one type of PDF document (e.g., machine manuals) with 80% accuracy in text extraction
  2. Achieve a baseline accuracy of 70% for LLM responses to user queries, as evaluated by domain experts
  3. Demonstrate a functional web interface allowing users to ask questions and receive answers within 30 seconds on average

PoC development plan

  1. Data ingestion and processing (Weeks 1-2)
    • Set up a basic data pipeline for ingesting PDF documents
    • Implement text extraction from PDFs
    • Develop metadata extraction (e.g., document type, machine type)
    • Create a simple storage solution for extracted data
  2. LLM integration (Weeks 2-3)
    • Select and set up a suitable LLM API (e.g., OpenAI's GPT-3.5)
    • Implement basic prompt engineering for question answering
    • Develop a simple retrieval mechanism to fetch relevant information from processed documents
    • Create a module to handle LLM API requests and responses
  3. Web interface development (Weeks 3-4)
    • Design and implement a basic web frontend for user interactions
    • Create an interface for users to input questions (a minimal endpoint sketch follows this plan)
    • Develop a display mechanism for showing LLM-generated answers
    • Implement basic error handling and user feedback mechanisms
  4. Testing and documentation (Weeks 5-6)
    • Integrate all components (data ingestion, LLM, and web interface)
    • Develop and run unit tests for individual components
    • Prepare documentation for the PoC system
    • Demo and presentation showcasing the PoC functionality and results
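As referenced in the plan above, here is a minimal sketch of the PoC question endpoint, assuming FastAPI; the route, payload shape, and in-memory knowledge base are illustrative, and the keyword matcher stands in for the LLM call shown earlier.

```python
# Minimal PoC web endpoint: accept a question, return an answer.
# Run with: uvicorn app:app --reload  (assuming this file is app.py)
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# Illustrative in-memory knowledge base; the real system reads processed documents.
KNOWLEDGE_BASE = ["Error E42 usually indicates a clogged coolant filter."]


class Question(BaseModel):
    text: str


def answer(query: str, knowledge_base: list[str]) -> str:
    """Stand-in for the LLM call: return the best-matching chunk."""
    return max(knowledge_base, key=lambda c: len(set(c.split()) & set(query.split())))


@app.post("/ask")
def ask(question: Question) -> dict:
    if not question.text.strip():  # basic error handling for the PoC
        raise HTTPException(status_code=400, detail="Question must not be empty")
    return {"answer": answer(question.text, KNOWLEDGE_BASE)}
```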

Milestone 2: Prototype

The Prototype phase builds upon the Proof of Concept, expanding functionality and demonstrating the viability of the AI copilot system for machine troubleshooting and support. This phase is executed by the engineering team over a 2-month period. The goal is to create a working prototype that showcases the core features of the system and lays the groundwork for the full implementation.

Main objectives

  1. Enhance data ingestion and processing capabilities
  2. Improve LLM integration and question-answering accuracy, including diagnostic question generation
  3. Develop a more robust and user-friendly web interface
  4. Implement basic authorization and security features

Success criteria

  1. Successfully ingest and process multiple types of PDF documents (manuals, technical info, quality info) with 90% accuracy in text and metadata extraction
  2. Achieve an accuracy of 85% for LLM responses to user queries, as evaluated by domain experts
  3. Demonstrate the ability to generate relevant diagnostic questions with 80% accuracy, as evaluated by domain experts
  4. Implement a functional web interface with user authentication, allowing users to ask questions, receive answers, and engage with diagnostic questions within 15 seconds on average
  5. Implement basic role-based access control for different types of documents and data sources

Prototype development plan

  1. Enhanced data ingestion and processing (Weeks 1-3)
    • Expand PDF processing to handle multiple document types (manuals, technical info, quality info)
    • Implement metadata extraction for all document types
    • Develop a more robust storage solution for extracted data and metadata
    • Create a basic pipeline for processing service ticket data
  2. Advanced LLM integration (Weeks 2-4)
    • Upgrade to a more capable LLM (e.g., GPT-4 or similar)
    • Implement advanced prompt engineering techniques for improved question answering
    • Develop a semantic indexing system for diagnostics
    • Implement a mechanism for generating and presenting diagnostics based on user queries
  3. Web interface and UX (Weeks 3-6)
    • Develop user authentication and session management
    • Implement features for question history and conversation context
    • Create interfaces for presenting and interacting with diagnostics
    • Develop a basic dashboard for displaying document statistics and user activity
  4. Authorization and security implementation (Weeks 5-7)
    • Develop a role-based access control system
    • Implement basic data masking for personal information in service tickets (a masking sketch follows this plan)
    • Create a module for managing user roles and permissions
    • Integrate authorization checks with the data retrieval and LLM query processes
  5. Integration, testing, and documentation (Weeks 7-8)
    • Integrate all components into a cohesive system
    • Implement a thorough unit and integration testing framework
    • Develop comprehensive documentation for the prototype system
    • Prepare a demonstration and presentation of the prototype's capabilities
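As referenced in the plan above, here is a minimal sketch of the data-masking step: redact email addresses and phone numbers with regular expressions before ticket text reaches the LLM. The patterns are deliberately simple illustrations, not a complete PII strategy.

```python
# Redact basic personal information from service ticket text.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s()-]{7,}\d")


def mask_pii(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)


print(mask_pii("Contact jane.doe@example.com or +49 170 1234567."))
# -> Contact [EMAIL] or [PHONE].
```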

Milestone 3: MVP

The MVP milestone aims to transform the prototype into a production-ready tool that can be introduced to end-users. Over a 6-month period, a team of engineers will focus on enhancing stability, security, and privacy while expanding the system's capabilities to meet real-world user needs.

Main objectives

  1. Develop a production-ready AI copilot system with enhanced stability and performance
  2. Implement comprehensive security measures and privacy controls
  3. Expand data processing capabilities to handle all required document types and service tickets
  4. Create a user-friendly interface with advanced features for efficient troubleshooting
  5. Develop a fully functional mobile application for on-the-go access to the AI copilot system

Success criteria

  1. Achieve 95% uptime for the system over a 30-day period
  2. Successfully process and integrate all required document types and service tickets with 95% accuracy
  3. Implement basic security measures and comply with relevant data protection regulations
  4. Achieve an 85% user satisfaction rate based on feedback from a pilot group of 50 technicians
  5. Launch a mobile application with feature parity to the web interface

MVP development plan

  1. System architecture and infrastructure enhancement (Weeks 1-4)
    • Refine and optimize the overall system architecture
    • Set up production-grade cloud infrastructure with scalability and redundancy
    • Implement comprehensive logging and monitoring systems
    • Establish CI/CD pipelines for automated testing and deployment
  2. Data processing and integration (Weeks 3-8)
    • Develop advanced PDF processing capabilities for all document types
    • Implement service ticket integration with external systems (CRM, ERP)
    • Create data validation and cleansing pipelines
    • Optimize data storage and retrieval mechanisms
  3. LLM and AI enhancements (Weeks 7-14)
    • Fine-tune LLM for improved accuracy in the specific domain
    • Implement advanced context management for multi-turn conversations (see the sketch after this plan)
    • Develop more sophisticated diagnostic question generation algorithms
    • Create a feedback loop system for continuous LLM improvement
  4. Security and privacy implementation (Weeks 13-18)
    • Implement end-to-end encryption for data in transit and at rest
    • Develop advanced role-based access control system
    • Implement data masking for personal information in service tickets
    • Create data anonymization processes for sensitive information
  5. User interface and experience optimization (Weeks 17-22)
    • Develop an intuitive and responsive web interface
    • Implement advanced search and filtering capabilities
    • Create customizable dashboards for different user roles
    • Develop mobile-responsive design for field technicians
  6. Mobile application development (Weeks 1-22)
    • Design and develop the mobile application using Flutter
    • Implement core functionalities mirroring the web interface
    • Optimize for different screen sizes and device capabilities
    • Develop offline mode capabilities for field technicians
    • Integrate push notifications for real-time updates
  7. Testing, documentation, and deployment (Weeks 21-24)
    • Conduct comprehensive system testing (unit, integration, and end-to-end)
    • Develop user guides and technical documentation
    • Plan and execute a phased rollout strategy
    • Provide training materials and sessions for end-users
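As referenced in the LLM enhancements above, multi-turn context management can start as a bounded per-session message history that each LLM call sees; the window size and system prompt below are illustrative assumptions.

```python
# Keep a bounded message history per session for multi-turn conversations.
from collections import defaultdict, deque

HISTORY: dict[str, deque] = defaultdict(lambda: deque(maxlen=10))  # last 10 turns


def build_messages(session_id: str, user_message: str) -> list[dict]:
    """Assemble the message list for the next LLM call."""
    HISTORY[session_id].append({"role": "user", "content": user_message})
    return [{"role": "system", "content": "You are a machine-maintenance copilot."},
            *HISTORY[session_id]]


def record_reply(session_id: str, reply: str) -> None:
    HISTORY[session_id].append({"role": "assistant", "content": reply})
```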

Conclusion

This roadmap showcases how Think-it develops an AI-driven solution to revolutionize troubleshooting, maintenance, and overall efficiency in industrial machinery. By combining AI-driven troubleshooting, predictive maintenance, and an intelligent data pipeline, the tool becomes an essential asset for technicians and operators.

By combining a large language model for diagnostics with scalable cloud infrastructure, the Internal Copilot transforms technical support workflows and ensures machinery operates at peak performance. For companies aiming to embrace digital transformation in manufacturing, it is a powerful and adaptable solution designed to drive operational efficiency and technical excellence across industrial environments.
