Skip to main content

Our AI Testimonial

Secure, Ethical, and Responsible AI — Our Commitment to Trust and Sustainability

Introduction

Security and innovation are in Mindlapse’s DNA. As cybersecurity experts, we spent years solving operational and strategic issues encountered by security professionals across industries.

We couldn’t help but notice that despite the different environments and contexts, GRC (governance risk and compliance) stakeholders came to us with the same issues and pain points they didn’t have technological solutions for.

Determined to solve these wide-scale and common issues in GRC, we spent months building an innovative AI-native platform: Mindlapse.

While artificial intelligence has brought unprecedented capabilities and possibilities to the cybersecurity community, we are mindful of the risks involved and are determined to implement responsible and trustworthy AI systems that reflect our values.

To this end, we centralized our tangible commitments into a Charter shared on this page for transparency and integrated into the contracts of our employees and partners.

What is trustworthy AI?

We believe that the development and deployment of AI should reflect ethical principles such as fairness, and proportional use while integrating adequate safeguards to ensure its robustness and trustworthiness.

For this reason, we have developed Mindlapse leveraging a security and privacy by design approach. We strive to embed principles of transparency, explainability and frugality in our systems, while being compliant with relevant regulations addressing data protection, risk management, and fundamental rights.

Our comprehensive and open approach led us to formulate the following seven commitments:

  • 1. Implement state of the art / universally recognized AI ethics and safety frameworks

At Mindlapse, we care that our business is built and operated in line with our values. We expanded our understanding and analysis of the stakes at hand when it comes to building AI systems ethically.

Inspired by works from renowned organizations and researchers such as the OECD, the UN, NIST but also companies such as Hugging Face, we continue to improve and advance an ethical approach in our practices.

Notably, we are mindful of the openness of the models and data we use, we optimize the use and energy throughout the lifecycle of our AI systems as well as the potential bias they can carry. We also adhere to the data scientist’s Hippocratic Oath published by the organization Data for Good.

  • 2. Take the necessary measures to ensure compliance with the EU AI Act and EU regulations

As a French and European startup, Mindlapse complies with all the regulations it is subjected to. After spending years assisting our clients and helping them meet regulators’ expectations by complying with GDPR, NIS and DORA or implementing international standards like the ISO 2700X, we are experts at standards implementation and are able to guarantee a high security level over time.

Our product is built with a compliance by design approach to guarantee upstream/downstream compliance for our end-users.

  • 3. Ensure our systems are safe and regularly tested with state-of-the-art evaluation methods

Security is not only a pillar of our business but also of our startup culture. We are committed to ensuring our models are robust and safe by state of the art standards.

Mindlapse’s AI models are fine-tuned with thoroughly “cleaned” data, and trained in-house on secure platforms. We monitor model safety by conducting specific risk analyses and keeping watch on existing research and leaderboards provided by specialized third parties.

We continuously assess the robustness of our models throughout the development phase as well as into the deployment phase with regular risk assessments and reviews.

  • 4. Minimize and monitor our energy consumption

At Mindlapse, we are concerned about the environmental impact of our AI systems and their use. For this reason, we have committed to monitoring the energy consumption of Mindlapse throughout the lifecycle of the underlying AI system, and we aim to reduce this impact to the best of our abilities.

To prevent unsustainable exploitation of our AI systems, we have also decided to run our operations exclusively on trusted and vetted hardware. We have also implemented sustainability as a criteria along with associated KPIs we measure when carrying out risk analysis and impact assessments.

  • 5. Guarantee transparency throughout the development and deployment of our solution

Transparency is a pillar of an ethical approach to building AI. It fosters trust between developers and their downstream users, as well as with clients and regulatory authorities.

We commit to upholding this principle in our practices by documenting the processes we carried out to build our technology as well as the way we govern it. We also commit to making this information available to our clients and to the regulator when necessary and to disclose when and how decisions are automated or outputs are AI-generated.

  • 6. Enforce privacy and data protection of users

We commit to uphold the privacy principle in the training of our model by using curated sets excluding unauthorized or personal data to the use of which subjects did not consent. We have also implemented landmark guidelines issued by the ANSSI (French national cybersecurity agency) and the CNIL, (French data protection authority).

Mindlapse contains essential data protection measures by design, and enables an integration with information systems that safeguards privacy and confidentiality in deployment.

  • 7. Monitor and improve the adoption of our AI systems throughout their lifecycle

Operating a ubiquitous and cutting-edge technology like AI requires robust, transparent, and standardized risk governance. At Mindlapse, we have implemented an end-to-end risk management framework that aligns with leading international standards such as the NIST AI Risk Management Framework and ISO/IEC 42001.

This approach includes mechanisms for continuous monitoring, incident reporting, evaluation and regular user feedback. We are committed to continuously improving and adapting our systems to remain aligned with the evolving state of the art in AI safety, risk governance, and regulatory compliance.