FT Intelligence Security

This is an informational primer on FieldTwin Intelligence covering overall security, information management, data storage and handling, and compliance with the EU AI Act and ISO 42001.

FAQ

How does the FieldTwin Intelligence chatbot handle data security when accessing project information?

Answer

The chatbot accesses project, account, and document data relevant to the user's input on behalf of authorized users using a JWT token. All data is securely stored within the client tenant, ensuring that sensitive information remains under the client's control. When processing requests, only tokens are sent to Azure, safeguarding the content of the underlying data. This data is also stored in pre-processed vector stores using PGVector, organized by account and project IDs.
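
To illustrate the storage layout described above, the sketch below opens one PGVector collection per account/project pair. It is a minimal sketch, not the production implementation: the collection naming scheme, embedding model, connection string, and the open_project_store helper are illustrative assumptions.

```python
# Hypothetical sketch: one PGVector collection per account/project pair.
# Library calls follow the langchain-postgres and langchain-openai packages;
# the collection naming scheme and connection string are illustrative.
from langchain_openai import AzureOpenAIEmbeddings
from langchain_postgres import PGVector

def open_project_store(account_id: str, project_id: str) -> PGVector:
    """Open the vector store scoped to a single account and project."""
    collection = f"vs_{account_id}_{project_id}"  # segregation by account/project IDs
    embeddings = AzureOpenAIEmbeddings(model="text-embedding-3-small")  # placeholder model
    return PGVector(
        embeddings=embeddings,
        collection_name=collection,
        connection="postgresql+psycopg://user:pass@db:5432/fieldtwin",  # placeholder DSN
        use_jsonb=True,
    )
```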

What measures are in place to ensure the accuracy and contextual relevance of responses?

Answer

FieldTwin Intelligence uses Retrieval-Augmented Generation (RAG), which combines retrieval-based and generative models. This approach retrieves relevant information from a large dataset to generate accurate and contextually relevant responses. LangGraph is used to structure interactions between FieldTwin and the Large Language Model (LLM) in a graph-based model, aiming for efficient and accurate data handling.
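
A minimal sketch of such a graph-based retrieve-then-generate flow is shown below, assuming LangGraph's StateGraph API and reusing the hypothetical open_project_store helper from the previous sketch; node names, state fields, the deployment name, and the prompt wording are illustrative, not the production graph.

```python
# Minimal RAG sketch with LangGraph: retrieve scoped context, then generate.
# Node names, state fields, the deployment name, and the prompt are illustrative.
from typing import List, TypedDict

from langchain_openai import AzureChatOpenAI
from langgraph.graph import END, StateGraph

class RagState(TypedDict):
    question: str
    context: List[str]
    answer: str

llm = AzureChatOpenAI(azure_deployment="gpt-4o")            # placeholder deployment
vector_store = open_project_store("acct-id", "project-id")  # helper from the previous sketch

def retrieve(state: RagState) -> dict:
    docs = vector_store.similarity_search(state["question"], k=4)
    return {"context": [d.page_content for d in docs]}

def generate(state: RagState) -> dict:
    context = "\n".join(state["context"])
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {state['question']}"
    return {"answer": llm.invoke(prompt).content}

graph = StateGraph(RagState)
graph.add_node("retrieve", retrieve)
graph.add_node("generate", generate)
graph.set_entry_point("retrieve")
graph.add_edge("retrieve", "generate")
graph.add_edge("generate", END)
chatbot = graph.compile()

# Usage: chatbot.invoke({"question": "Which assets are defined in this project?"})
```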

Are there any known security limitations related to the chatbot's prompts or responses?

Answer

Yes, prompt security is identified as a limitation. Content moderation is currently provided by Azure's content filter at medium settings, with the option to adjust it on the LLM side or to implement a custom response parser.

How does the chatbot integrate with existing FieldTwin data and functionalities?

Answer

The chatbot integrates with FieldTwin to answer questions about User Guides, uploaded project documents, account metadata, and asset definitions. It can also read and update project object (asset and connection) metadata.

What kind of data can the chatbot currently manipulate within FieldTwin projects?

Answer

Currently, the chatbot supports a limited set of features for asset and connection objects, such as location, name, type, and metadata. It does not have access to any other integrations outside of the FieldTwin project.

General Security

Data Encryption

All communication with the Azure AI Service is encrypted using TLS 1.2+ to ensure data security during transmission. Additionally, the vector stores utilize Azure embedding model encryption, safeguarding stored data both in transit and at rest.
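
As a client-side illustration of the transport requirement, the sketch below configures an HTTP client that refuses anything below TLS 1.2 before calling an Azure endpoint; the URL is a placeholder for the tenant-specific resource.

```python
# Sketch: enforce a TLS 1.2+ floor on outbound calls to the Azure AI endpoint.
# The endpoint URL is a placeholder for the tenant-specific Azure resource.
import ssl

import httpx

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0/1.1 handshakes

client = httpx.Client(verify=ctx)
response = client.get("https://example-resource.openai.azure.com/")  # placeholder URL
print(response.status_code)
```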

Access Control and Authentication

All chatbot interactions with FieldTwin projects are performed on behalf of the user, ensuring that the user's existing FieldTwin roles and permissions, including both read and write access levels, are strictly enforced at all times. This guarantees that users can only access or modify data for which they are authorized, fully aligning chatbot actions with FieldTwin's established security and permission protocols.
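
The snippet below sketches this on-behalf-of pattern: the caller's JWT is forwarded with a FieldTwin API request so the platform's own role and permission checks decide the outcome. The base URL and endpoint path are hypothetical placeholders, not the documented FieldTwin API.

```python
# Hypothetical sketch of acting on behalf of the user: the caller's JWT is
# forwarded so FieldTwin's own role/permission checks decide the outcome.
# The base URL and endpoint path are placeholders, not the documented API.
import httpx

def update_asset_metadata(user_jwt: str, project_id: str, asset_id: str, metadata: dict) -> dict:
    """Update asset metadata as the user; a 403 means FieldTwin denied the write."""
    response = httpx.patch(
        f"https://api.example-fieldtwin.com/projects/{project_id}/assets/{asset_id}",
        headers={"Authorization": f"Bearer {user_jwt}"},
        json={"metadata": metadata},
    )
    response.raise_for_status()
    return response.json()
```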

Auditing and Logging

All prompts and completions generated during user interactions are systematically logged within the LangSmith system, where they are retained for a maximum of 400 days or 5,000 conversations, whichever limit is reached first, as set by the software license. This approach provides robust traceability and supports both auditing and compliance requirements by maintaining a detailed record of chatbot exchanges.
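
A minimal sketch of how such tracing is typically switched on for a LangChain/LangGraph application is shown below, using LangSmith's standard environment variables; the project name is a placeholder, and the retention and conversation limits are governed by the LangSmith plan rather than by code.

```python
# Sketch: enable LangSmith tracing for a LangChain/LangGraph app via env vars.
# The project name is a placeholder; retention and conversation limits are
# governed by the LangSmith plan, not by application code.
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = os.environ.get("LANGSMITH_API_KEY", "")
os.environ["LANGCHAIN_PROJECT"] = "fieldtwin-intelligence"  # placeholder project name

# Any subsequent LangChain/LangGraph call (e.g. chatbot.invoke(...)) is now traced,
# so prompts and completions are captured for auditing and debugging.
```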

Data Privacy and Compliance

Each conversation is securely stored along with the associated user email address. All models powering the chatbot are deployed within Azure's Sweden data centers, ensuring that data remains within specific geographic boundaries for compliance and protection. According to Microsoft's privacy statement regarding Azure OpenAI Service:

• Prompts (inputs) and completions (outputs) are NOT accessible to other customers.
• Data is NOT accessible to LLM Vendors.
• Information is NOT used to improve or train other models.
• Inputs and outputs are NOT used to enhance Azure AI Service foundation models.
• Data will NOT be used to develop Microsoft or third-party products or services without your explicit permission or instruction.

This architecture reinforces both data privacy and sovereignty, ensuring that your information is subject only to Microsoft’s policies and not shared outside the defined environment.

Vulnerability Management and Penetration Testing

Regular security audits or penetration tests are conducted on the chatbot's infrastructure and code, and identified vulnerabilities are quickly addressed.

Incident Response

As part of the ISO 42001:2023 framework, the AIMS Continual Improvement, Non-Conformity and Incident Response Policy governs our approach in the event of a security breach or data compromise related to the chatbot. This policy not only details our procedures for addressing incidents but also ensures that users are promptly notified should any security issues arise.

LLM Specific Security Concerns

FieldTwin Intelligence implements comprehensive content filtering for both user prompts and model responses. Filtering is applied at a medium level across categories such as violence, hate, sexual content, and self-harm, ensuring that interactions remain safe and compliant with organizational standards. The deployment also enables additional safety models, including those designed to detect jailbreak attempts and those attuned to identifying protected material in both text and code. In alignment with these safeguards, we do not train or fine-tune large language models on any user data. Vector stores that support retrieval-augmented generation are constructed exclusively from project and account data, approved user guides, and API documentation. All user-uploaded documents are subject to the same robust content filtering protocols as prompts and completions, further ensuring compliance with organizational standards and regulatory requirements.
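
One common handling pattern, sketched below under the assumption that a blocked prompt surfaces as an error with code "content_filter" from the Azure OpenAI service, is to catch that error and return a neutral message to the user; the deployment name is a placeholder.

```python
# Sketch: surface an Azure OpenAI content-filter rejection as a neutral message.
# The deployment name is a placeholder; credentials come from environment variables.
from openai import AzureOpenAI, BadRequestError

client = AzureOpenAI()  # endpoint, key, and API version read from the environment

def safe_chat(user_message: str) -> str:
    try:
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder deployment name
            messages=[{"role": "user", "content": user_message}],
        )
        return response.choices[0].message.content
    except BadRequestError as err:
        if getattr(err, "code", None) == "content_filter":
            return "This request was blocked by the content filter. Please rephrase it."
        raise
```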

Third-Party Dependencies

In addition to the security provided by the FT platform, we utilize hosted monitoring software, LangSmith, made available by LangChain on an “AS-IS” basis. This means there are no representations, warranties, performance assurances, or data security guarantees, nor is there a support obligation from the provider. As an alternative, we also retain the option to use monitoring solutions that are hosted entirely within the FT platform for greater control and alignment with internal security protocols.

Data Segregation

To further ensure strict data segregation between accounts and projects, a user JWT access token is included with every request and API call. This token is essential for securely selecting the user-specific vector stores corresponding to each project and account, ensuring that only authorized users can access their designated data sets.
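
The sketch below illustrates that selection step: the account and project identifiers are read from the verified token and used to open the matching store. Claim names, the signing key, and the open_project_store helper (from the earlier PGVector sketch) are illustrative assumptions.

```python
# Hypothetical sketch: verify the user's JWT and use its claims to select the
# account/project-scoped vector store. Claim names and signing key are illustrative.
import jwt  # PyJWT

def store_for_request(token: str, signing_key: str):
    claims = jwt.decode(token, signing_key, algorithms=["HS256"])  # raises if invalid/expired
    account_id = claims["account_id"]  # assumed claim name
    project_id = claims["project_id"]  # assumed claim name
    # Reuses the hypothetical open_project_store() helper from the earlier PGVector sketch.
    return open_project_store(account_id, project_id)
```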

EU AI Act Compliance

FieldTwin Intelligence, as currently implemented and used, does not present threats to the health or safety of individuals or property. Based on its functionalities and the absence of significant risks associated with its operation, the system would be classified under the EU AI Act as "Minimal" or "No Risk". This designation requires only limited adherence, namely to Article 50 (Transparency Obligations for Providers and Deployers of Certain AI Systems).

Risk Management System

FieldTwin Intelligence inputs and outputs are human-monitored for debugging purposes, providing an additional layer of oversight to ensure system accuracy and reliability.

Data Governance and Quality

Data used in response generation is constructed exclusively from project and account data, approved user guides, and API documentation. All user-uploaded documents are subject to the same rigorous content filtering protocols as those applied to prompts and completions, ensuring compliance with organizational standards and meeting regulatory requirements. Prompt security is actively managed using Azure Content Moderation, which helps identify and address potential risks in both input and output data.
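
As a hedged illustration of screening an uploaded document before it is embedded, the sketch below calls the Azure AI Content Safety text-analysis API (the capability referred to above as Azure Content Moderation; the exact service, thresholds, and field names used in production may differ). A severity threshold of 4 stands in for the "medium" setting.

```python
# Sketch: screen uploaded document text before it is embedded into a vector store.
# Endpoint, key, and the severity threshold (4, roughly "medium") are illustrative.
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

def passes_content_filter(text: str, max_severity: int = 4) -> bool:
    """Return True only if every harm category stays below the severity threshold."""
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    return all(item.severity < max_severity for item in result.categories_analysis)
```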

Technical Documentation

The development process is tracked and documented in JIRA, ensuring an organized record of design decisions and progress. Additionally, technical documentation is maintained directly within the codebase, providing up-to-date, accessible references for the system's design, development practices, and its implementation for querying and updating projects using a Large Language Model (LLM).

Record Keeping and Logging Capabilities

FieldTwin Intelligence uses LangSmith (by LangChain) to log all user inputs and outputs for 400 days. These logs, covering queries, responses, and user actions, are reviewed for debugging, compliance, and traceability. This approach ensures accountability and helps maintain system accuracy and reliability.  

Transparency

To promote transparency and user awareness, every user interaction with FieldTwin Intelligence is initiated by a clear notification that they are engaging with an AI-powered system. This introduction is presented at the outset of each session or conversation, ensuring users understand the nature of the technology, the automated nature of responses, and that the information provided is AI-generated.

Purpose and Capabilities

FieldTwin Intelligence generates responses using only project data, account information, approved user guides, API documentation, and uploaded documents. All documents undergo strict content filtering for compliance, and prompt security is managed through Azure Content Moderation.

Users are always notified that they’re interacting with an AI system at the start of each session, ensuring transparency.

• Answer questions about user guides and API documentation
• Answer questions about uploaded documents
• Answer questions about project object metadata and account definitions

At present, FieldTwin Intelligence offers a focused set of core features, with ongoing development to expand capabilities in the future. Certain advanced controls, like flow path generation or comprehensive prompt security parsing, are still in developmental stages or recognized as areas of limitation. Current prompt security relies on Azure Content Moderation and strict document filtering, though there is an acknowledged need for more advanced response parsing to mitigate risks associated with large language model outputs.  

Human Oversight

At the present stage of development, FieldTwin Intelligence does not incorporate human oversight or approval workflows when making changes to project data, including actions such as creating new assets or updating project object metadata. All modifications are executed directly based on user inputs within the system. However, a comprehensive workflow approval process is scheduled for future development, intended to introduce necessary review steps and oversight, particularly for critical actions.

Accuracy, Robustness, and Cybersecurity

Access control within FieldTwin Intelligence is governed by robust authentication and authorization mechanisms, primarily leveraging FieldTwin user roles and permissions encoded within JWT tokens. This ensures that access to data—including vector stores—remains tightly scoped to the individual user, specific projects, and relevant accounts. Each user’s interactions are meticulously traced, particularly when uploading documents, thereby providing a complete audit trail and accountability for content additions.

To further mitigate risks associated with inappropriate or irrelevant content, the FieldTwin platform employs Azure’s advanced filtering capabilities. This includes automated screening for violence, hate, sexual material, and self-harm, as well as built-in jailbreak prevention to counteract attempts at bypassing content restrictions. These layered controls collectively bolster data security, enhance system integrity, and support compliance with both internal policies and external regulatory requirements.

Conformity Assessment - Before Deployment

Prior to deployment, a formal pre-deployment check is required, which mandates explicit approval of customer success requests by both the client and management teams. This step ensures that client expectations are fully addressed and that organizational oversight is maintained. In addition, compliance with the EU AI Act must be verified at this stage, including the assessment of technical, ethical, and legal standards as outlined by regulatory authorities. Comprehensive documentation of these approvals and compliance checks forms part of the system's conformity assessment, establishing a framework of accountability and regulatory alignment before the solution goes live.

Post-Market Monitoring - Continuous Monitoring

Following deployment, FieldTwin Intelligence utilizes the LangSmith tool for comprehensive post-market monitoring. This platform enables tracking of key operational metrics, including user inputs, system outputs, and time to first token, providing granular insight into system performance and responsiveness in real time. In parallel, cost monitoring is managed directly through the Azure platform's billing dashboard, ensuring financial oversight and transparency.
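
As a sketch of how the time-to-first-token metric can be captured, the snippet below times a streamed Azure OpenAI completion; the deployment name is a placeholder, and in production the figure is taken from LangSmith traces rather than ad-hoc timing.

```python
# Sketch: measure time-to-first-token on a streamed Azure OpenAI completion.
# The deployment name is a placeholder; production metrics come from LangSmith traces.
import time

from openai import AzureOpenAI

client = AzureOpenAI()  # endpoint, key, and API version read from the environment

def time_to_first_token(prompt: str) -> float:
    """Return seconds elapsed until the first content token of the reply arrives."""
    start = time.perf_counter()
    stream = client.chat.completions.create(
        model="gpt-4o",  # placeholder deployment name
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            return time.perf_counter() - start
    return float("nan")  # no content tokens were streamed
```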

These measures create a robust foundation for ongoing performance evaluation and incident management, while maintaining vigilance over both operational efficiency and expenditure.