AGENTRUNTIME AI USAGE POLICY
Effective Date: March 1, 2026
Last Updated: March 1, 2026
This AI Usage Policy ("Policy") governs the deployment and operation of artificial intelligence systems, automated agents, machine learning models, and other automated systems (collectively, "AI Systems") on the AgentRuntime platform and related services (collectively, the "Services") operated by AgentRuntime Labs Ltd ("AgentRuntime," "we," "us," or "our").
This Policy is incorporated by reference into AgentRuntime's Terms of Service and Acceptable Use Policy, both of which form part of your binding agreement with AgentRuntime. In the event of a conflict between this Policy and the Terms of Service, the Terms of Service shall prevail.
Capitalized terms used but not defined herein have the meanings given in the Terms of Service.
1. USER RESPONSIBILITY FOR AI SYSTEMS
1.1 Operational Responsibility. You are solely and fully responsible for all AI Systems that you deploy, operate, configure, or make available through the Services, including:
(a) the design, training, fine-tuning, and configuration of AI Systems;
(b) monitoring the outputs, behaviors, and downstream impacts of your AI Systems;
(c) promptly identifying and remediating harmful, erroneous, or non-compliant outputs;
(d) preventing misuse of AI Systems by Authorized Users or third parties; and
(e) ensuring that your AI Systems comply with all applicable laws, regulations, ethical standards, and this Policy.
1.2 AgentRuntime's Role. AgentRuntime provides infrastructure and tooling for the deployment of AI Systems but does not control, review, or approve the logic, training data, outputs, or behavior of user-deployed AI Systems. AgentRuntime assumes no liability for the decisions, actions, or outputs of AI Systems deployed by users.
1.3 Regulatory Compliance. You are responsible for determining the applicable legal and regulatory requirements in your jurisdiction and ensuring that your AI Systems are designed, deployed, and operated in compliance with such requirements, including but not limited to AI-specific legislation, product liability laws, sector-specific regulations, and data protection laws.
2. TRANSPARENCY OBLIGATIONS
2.1 Disclosure of Automation. Where applicable law or industry standards require disclosure that a user is interacting with an automated system, you must implement appropriate disclosure mechanisms in your AI Systems.
2.2 No Unlawful Impersonation. You must not deploy AI Systems that impersonate human operators in contexts where such impersonation is prohibited or where users are likely to be materially deceived by the absence of disclosure.
2.3 Identification of Automated Actions. In contexts where AI Systems take actions with legal or material consequences on behalf of or affecting third parties, you must ensure that the automated nature of such actions is appropriately identifiable.
3. HIGH-RISK AI APPLICATIONS
3.1 Heightened Obligations. Certain AI applications present elevated risk to individuals and society by virtue of their subject matter or potential impact. These include, without limitation:
(a) AI Systems used in clinical decision support, medical diagnosis, triage, or treatment recommendations;
(b) automated financial trading, credit scoring, insurance underwriting, or investment advisory systems;
(c) legal decision support, litigation analytics, or automated legal advice systems;
(d) AI Systems that control or influence critical infrastructure, utilities, transportation, or emergency services;
(e) AI Systems used in hiring, performance evaluation, or employment decisions; and
(f) AI Systems used in law enforcement, border control, biometric identification, or social scoring.
3.2 Required Safeguards. Before deploying a high-risk AI System through the Services, you must:
(a) conduct an appropriate risk assessment and, where required by law, a conformity assessment;
(b) implement human oversight mechanisms to review, validate, or override AI outputs where appropriate;
(c) maintain records of system design, training data, testing, and operational decisions sufficient to support regulatory accountability;
(d) provide affected individuals with appropriate notice and, where required, the ability to contest automated decisions; and
(e) establish post-deployment monitoring processes to identify and remediate errors, bias, or harmful behavior.
3.3 No Endorsement. AgentRuntime's provision of infrastructure for high-risk AI applications does not constitute endorsement, approval, or certification of those applications for any particular purpose or jurisdiction.
4. AUTONOMOUS AGENTS
4.1 Operational Governance. When deploying autonomous or semi-autonomous agents through the Services, you must:
(a) define clear operational boundaries and scope for each agent, including permissible actions and accessible systems;
(b) set appropriate execution limits, including maximum run time, resource consumption caps, and permitted API calls;
(c) implement monitoring systems to detect unexpected, erroneous, or runaway behavior;
(d) include fail-safe and circuit-breaker mechanisms capable of halting agent execution upon detection of anomalous activity;
(e) maintain human oversight capabilities enabling authorized personnel to pause, override, or terminate agent operations; and
(f) regularly review agent behavior and update operational parameters as needed.
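The governance controls described in Section 4.1 — execution limits, anomaly detection, circuit breakers, and human override — can be sketched as a small supervisor object wrapped around an agent's run loop. The class, method, and limit names below are purely illustrative assumptions, not part of the AgentRuntime platform or any required implementation:

```python
import time


class CircuitBreakerTripped(Exception):
    """Raised when an agent exceeds an operational limit (Section 4.1(d))."""


class AgentGovernor:
    """Hypothetical supervisor enforcing execution limits on one agent run."""

    def __init__(self, max_runtime_s=300.0, max_api_calls=100, max_errors=5):
        # Illustrative limits per Section 4.1(b); real values depend on the workload.
        self.max_runtime_s = max_runtime_s
        self.max_api_calls = max_api_calls
        self.max_errors = max_errors
        self.start = time.monotonic()
        self.api_calls = 0
        self.errors = 0
        self.halted = False

    def check(self):
        """Trip the breaker if any execution limit has been exceeded."""
        if time.monotonic() - self.start > self.max_runtime_s:
            self.halt("maximum run time exceeded")
        if self.api_calls > self.max_api_calls:
            self.halt("API call budget exhausted")
        if self.errors > self.max_errors:
            self.halt("error threshold exceeded")

    def record_api_call(self):
        self.api_calls += 1
        self.check()

    def record_error(self):
        self.errors += 1
        self.check()

    def halt(self, reason):
        # Authorized personnel may also call halt() directly, providing the
        # human pause/override capability described in Section 4.1(e).
        self.halted = True
        raise CircuitBreakerTripped(reason)
```

In this sketch the agent calls `record_api_call()` and `record_error()` as it works; any breach of a limit raises `CircuitBreakerTripped`, which the hosting process catches to terminate the run and alert an operator.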
4.2 Platform Safeguards. AgentRuntime may, at its sole discretion, introduce platform-level safeguards, resource limits, or circuit-breakers to prevent runaway automation, protect platform stability, and mitigate risk to third parties. Such safeguards do not relieve you of your obligations under this Policy.
5. PROHIBITED AI USES
In addition to the prohibited activities set forth in the Acceptable Use Policy, you shall not deploy AI Systems designed, configured, or reasonably likely to be used to:
(a) Disinformation and Synthetic Media. Generate, publish, or amplify false or misleading information, synthetic media, or AI-generated deepfakes intended to deceive, manipulate public opinion, or harm any individual or organization.
(b) Election Interference. Influence, interfere with, or manipulate democratic processes, including elections, referendums, voter registration, or political campaigns, in violation of applicable law.
(c) Mass Harassment. Conduct, coordinate, or facilitate targeted harassment campaigns, including the generation of abusive or threatening content directed at individuals or groups.
(d) Social Engineering. Perform social engineering attacks, phishing, pretexting, or other manipulation techniques designed to extract sensitive information, gain unauthorized access, or influence behavior through deception.
(e) Unlawful Impersonation. Impersonate any natural person, public official, government entity, or organization in a manner that is deceptive, harmful, or unlawful.
(f) Discriminatory Systems. Deploy AI Systems that unlawfully discriminate against individuals on the basis of protected characteristics, including race, ethnicity, gender, age, disability, religion, sexual orientation, or national origin.
(g) Manipulation of Vulnerable Persons. Target, exploit, or manipulate vulnerable individuals, including minors, persons with mental health conditions, or those in financial distress.
6. AI SAFETY AND PLATFORM MONITORING
6.1 Monitoring. AgentRuntime may monitor system activity and agent behavior across the platform for the purposes of: (a) detecting abuse, misuse, or violations of this Policy; (b) protecting platform stability and security; (c) preventing harm to third parties; and (d) complying with applicable law.
6.2 Scope of Monitoring. Monitoring activities are limited to operational metadata, resource usage patterns, and system-level signals. AgentRuntime does not routinely inspect the content of user workflows or intellectual property beyond what is operationally necessary for the purposes described above.
6.3 Enforcement. Where AgentRuntime determines that an AI System violates this Policy or poses a risk of harm, AgentRuntime may take enforcement action as described in the Acceptable Use Policy, including suspending or terminating access to the Services.
7. MODEL OUTPUT RESPONSIBILITY
7.1 Output Validation. You are responsible for validating all outputs generated by AI Systems deployed through the Services before acting upon or publishing such outputs, and for ensuring that reliance on AI outputs does not cause harm to individuals or third parties.
7.2 No Accuracy Guarantee. AgentRuntime makes no warranty, representation, or guarantee regarding the accuracy, completeness, fitness for purpose, or reliability of outputs generated by AI models or automated systems, whether operated by AgentRuntime or by users.
7.3 Downstream Harm. You are solely liable for any harm, loss, or liability arising from the outputs or actions of AI Systems you deploy through the Services, including any downstream impacts on third parties.
8. COMPLIANCE WITH AI REGULATIONS
8.1 Applicable Regulations. You must comply with all applicable AI-related laws and regulations in your jurisdiction. Relevant frameworks may include, without limitation: the EU Artificial Intelligence Act; applicable sector-specific AI regulations; data protection laws as they relate to automated decision-making; and consumer protection laws governing AI-generated content.
8.2 Export Controls. You must ensure that your use of AI Systems through the Services complies with applicable export control and trade compliance laws, including restrictions on the use of AI technology in certain jurisdictions or for certain end uses.
8.3 Consequences of Non-Compliance. Failure to comply with this Policy or applicable AI regulations may result in account suspension or termination in accordance with the Terms of Service.
9. UPDATES TO THIS POLICY
AgentRuntime may update this AI Usage Policy from time to time to reflect developments in law, technology, or platform practices. Updates will be communicated through platform notifications or email, and the updated Policy will take effect upon publication. Continued use of the Services following the effective date of any update constitutes acceptance of the revised Policy.
10. CONTACT
For questions regarding this AI Usage Policy:
AgentRuntime Labs Ltd
Email: legal@agentruntime.io
© 2026 AgentRuntime Labs Ltd. All rights reserved.