Last updated: 26 March 2026
Our principles for using artificial intelligence in our operations, on our website, and in client work — including transparency, oversight, and limitations.
BlendLab builds and deploys systems that may incorporate artificial intelligence (AI) and machine learning capabilities. This AI Policy explains our principles when we use AI in our own operations, on our website, and when delivering work for clients — and what you should expect regarding transparency, oversight, and limitations.
This policy supplements our Privacy Policy and Terms of Service. Where a client agreement includes specific AI or data-processing terms, those terms take precedence for that engagement.
This policy applies to AI-assisted tools used internally by BlendLab, AI features we may expose on the Site (such as chat or assistants), and AI components we design, integrate, or operate as part of client projects — subject to the applicable contract.
We may use AI and related technologies for purposes such as:
- Internal productivity and operational tooling used by BlendLab personnel
- AI-powered features on the Site, such as chat or assistant interfaces
- AI components we design, integrate, or operate as part of client projects, subject to the applicable contract
Where you interact with an AI-powered feature on the Site, we aim to make it clear that you are engaging with an automated system (for example, through labels or introductory text). For client deliverables, we document major AI components in line with our technical standards and contractual commitments.
We do not rely on AI output alone for decisions that materially affect legal rights, safety, or critical business outcomes; such decisions receive appropriate human review. Our teams validate AI-assisted outputs in line with the risk and context of each use case.
We may use third-party AI providers (such as cloud-hosted large language models) subject to their terms and security requirements. Personal data is processed in accordance with our Privacy Policy and applicable agreements.
We do not use your personal data to train foundation models for public or third-party use without your explicit consent where such consent is required. Client confidential data is handled only as permitted by contract and applicable law.
AI systems can produce plausible but incorrect, incomplete, or outdated information (“hallucinations”). Outputs may also reflect biases present in training data or prompts. You should verify any output that matters for decisions, compliance, or public-facing content.
BlendLab is not responsible for decisions made solely on AI-generated content without reasonable human validation appropriate to the risk.
We do not deploy AI for unlawful purposes. We avoid, or carefully govern, uses that could cause harm or discrimination, or that would use sensitive data beyond what law and contracts permit. We assess client use cases for feasibility, safety, and regulatory fit before committing to delivery.
We apply technical and organizational measures appropriate to the sensitivity of data processed through AI systems. Access is limited to personnel and subprocessors with a legitimate need. Client-specific requirements are addressed in statements of work or data processing terms.
AI capabilities and regulations evolve rapidly. We may update this policy to reflect new practices, tools, or legal requirements. The “Last updated” date indicates the latest revision.
For questions about AI at BlendLab, contact us at [email protected].