NeironHub Acceptable Use and AI Policy
Effective December 27, 2025 - Schedule D to the NeironHub Terms of Service
This Acceptable Use and AI Policy (the "Policy") establishes the rules and standards for using the NeironHub Platform, including specific requirements for developing, deploying, and delivering AI systems and AI Outputs. This Policy is incorporated into and forms part of the NeironHub Terms of Service ("ToS"). Capitalized terms not defined herein have the meanings given in the ToS.
Purpose: NeironHub is committed to fostering a secure, ethical, and professional AI collaboration environment. This Policy protects Users, maintains Platform integrity, ensures regulatory compliance, prevents misuse of AI technologies, and upholds the highest standards of AI safety and responsible development.
Scope: This Policy applies to all Users (Clients and AI Experts), all Projects conducted through the Platform, all Content uploaded to NeironLab workspaces, all AI systems developed or deployed using Platform infrastructure, all communications through Platform messaging systems, and all use of NeironHub features including NeironConsult and NeironLab.
IMPORTANT: Violations of this Policy may result in immediate Account suspension, Project cancellation, fund forfeiture, permanent Platform ban, reporting to law enforcement or regulatory authorities, and legal action. NeironHub takes violations seriously and enforces this Policy strictly to protect our community and maintain Platform integrity.
1. General Prohibited Uses
1.1 Illegal Activities
Users may not use the Platform to engage in, facilitate, or support any illegal activities, including but not limited to:
- Violating any federal, state, local, or international law, statute, ordinance, or regulation
- Facilitating or promoting illegal activities, including money laundering, terrorism financing, fraud, theft, or criminal enterprises
- Trafficking in illegal goods or controlled substances
- Violating export control laws, economic sanctions, or trade restrictions (ITAR, EAR, OFAC)
- Engaging in activities that violate anti-corruption laws (FCPA, UK Bribery Act)
- Tax evasion, securities fraud, or other financial crimes
- Identity theft, impersonation, or fraudulent misrepresentation
- Unauthorized access to computer systems, hacking, or cybercrime
- Copyright infringement, trademark violations, or theft of trade secrets
- Creating or distributing child sexual abuse material (CSAM) or content depicting minors in sexual situations
1.2 Harmful or Abusive Conduct
Users may not engage in conduct that harms, harasses, threatens, or abuses other Users or third parties:
- Harassment, bullying, intimidation, or threats of violence
- Hate speech, discriminatory content, or incitement to violence based on race, ethnicity, national origin, religion, gender, sexual orientation, disability, or other protected characteristics
- Doxxing (publishing private information without consent)
- Stalking, unwanted sexual advances, or other forms of harassment
- Impersonating other individuals, organizations, or NeironHub personnel
- Defamation, libel, or knowingly spreading false information to damage reputations
- Promoting self-harm, suicide, eating disorders, or dangerous activities
- Encouraging or facilitating violence, terrorism, or extremism
1.3 Platform Integrity and Security
Users may not compromise Platform security, stability, or integrity:
- Malware and malicious code: Uploading viruses, worms, trojans, ransomware, spyware, or other malicious software
- System attacks: Attempting to gain unauthorized access, conducting denial-of-service (DoS) attacks, penetration testing without authorization, or exploiting security vulnerabilities
- Data scraping: Using automated tools (bots, scrapers, crawlers) to extract data without authorization
- Reverse engineering: Attempting to reverse engineer, decompile, or disassemble Platform software or infrastructure
- Account abuse: Creating multiple Accounts, sharing login credentials, selling Accounts, or circumventing access controls
- Rate limit evasion: Circumventing rate limits, usage quotas, or technical restrictions
- Network interference: Interfering with Platform servers, networks, or other Users' access
- False reporting: Submitting false abuse reports, disputes, or complaints
2. AI-Specific Prohibited Uses
2.1 Prohibited AI Applications and Outputs
Users may not develop, deploy, or deliver AI systems or AI Outputs that:
- Child exploitation: Generate, distribute, or facilitate the creation of child sexual abuse material (CSAM), sexually explicit content involving minors, or content that sexualizes, grooms, or exploits children in any way
- Malware and cyberweapons: Create malware, ransomware, exploit code, botnets, hacking tools, or other malicious software; develop AI-powered cyberweapons or autonomous attack systems
- Disinformation and manipulation: Generate coordinated disinformation campaigns, fake news designed to manipulate elections or public opinion, or deceptive content intended to mislead consumers or voters
- Deepfakes without disclosure: Create synthetic media (deepfakes) depicting real individuals without clear, conspicuous disclosure that the content is AI-generated; create non-consensual intimate imagery or "revenge porn"
- Spam and deceptive practices: Generate spam, phishing emails, scam content, fraudulent schemes, or deceptive advertising
- Illegal content generation: Generate instructions for illegal activities (bomb-making, biological weapons production, illegal drug manufacturing) or other dangerous and illegal content
- Biometric surveillance without consent: Deploy facial recognition, emotion detection, or biometric identification systems without proper consent, legal authority, and privacy protections
- Social scoring systems: Develop social credit scoring or mass surveillance systems that rank individuals for social control purposes
2.2 High-Risk AI Applications - Special Requirements
The following AI applications are considered high-risk and require enhanced safeguards, human oversight, and compliance documentation. AI Experts must proactively disclose high-risk applications and obtain NeironHub approval before accepting Projects:
- Medical and healthcare AI: Diagnostic tools, treatment recommendations, medical imaging analysis, clinical decision support systems, or drug discovery applications (subject to FDA, EMA, or other medical device regulations)
- Legal AI systems: Legal research, contract analysis, e-discovery, predictive sentencing, or legal advice automation (subject to attorney supervision requirements and unauthorized practice of law restrictions)
- Financial services AI: Credit scoring, lending decisions, fraud detection, algorithmic trading, investment recommendations, or insurance underwriting (subject to FCRA, ECOA, fair lending laws, and financial regulations)
- Employment and HR AI: Hiring algorithms, resume screening, candidate ranking, performance evaluation, or termination recommendations (subject to EEOC, anti-discrimination laws, and employment regulations)
- Law enforcement and justice AI: Predictive policing, criminal risk assessment, recidivism prediction, or evidence analysis (subject to due process, bias mitigation, and transparency requirements)
- Educational AI: Student assessment, college admissions, scholarship allocation, or academic performance prediction (subject to FERPA, educational equity, and accessibility requirements)
- Critical infrastructure: Power grid management, water systems, transportation networks, or other essential services (subject to sector-specific security and safety regulations)
- Autonomous systems: Self-driving vehicles, drones, robotics with autonomous decision-making (subject to safety certifications and liability frameworks)
High-risk requirements:
- Human-in-the-loop oversight and manual review processes
- Bias testing, fairness audits, and disparate impact analysis
- Explainability and transparency documentation
- Regular accuracy and performance monitoring
- Incident response and error correction procedures
- Compliance with sector-specific regulations and standards
- Professional liability insurance (may be required)
- Additional contractual protections and indemnification
2.3 Prohibited Sole Reliance on AI for Critical Decisions
Human oversight required: Users may not rely exclusively on AI-generated recommendations, analyses, or outputs for decisions that significantly affect individuals' rights, safety, or well-being. The following decisions must involve appropriate human judgment, professional expertise, and manual review:
- Medical diagnosis, treatment planning, or healthcare decisions
- Legal determinations, sentencing, or justice system outcomes
- Credit decisions, loan approvals, or financial services determinations
- Employment decisions (hiring, promotion, termination)
- Educational assessments, college admissions, or scholarship awards
- Child welfare or foster care placement decisions
- Immigration or asylum determinations
- Benefits eligibility or social services allocation
- Safety-critical systems (aviation, nuclear, medical devices)
- Insurance underwriting or claims adjudication affecting material coverage
Disclosure requirement: When AI systems are used to inform (not solely determine) these decisions, the use of AI must be disclosed to affected individuals, and human reviewers must have the ability to override or modify AI recommendations based on individual circumstances.
3. Training Data and AI Model Compliance
3.1 Training Data Requirements
AI Experts must ensure that all training data used to develop AI models or AI Outputs:
- Lawful acquisition: Is obtained through lawful means with appropriate licenses, permissions, or legal basis (purchase, license, public domain, fair use, consent)
- No unauthorized scraping: Does not include data obtained through web scraping in violation of terms of service, robots.txt directives, technological protection measures, or data protection laws
- Copyright compliance: Does not include copyrighted materials used without authorization, proper licensing, or a lawful exception (fair use, Creative Commons licensing)
- No personal data violations: Complies with GDPR, CCPA, and other data protection laws; does not include personal data obtained without consent or legal basis
- No unauthorized proprietary data: Does not include trade secrets, confidential information, or proprietary datasets belonging to third parties without proper authorization
- Ethical sourcing: Is not obtained through deceptive practices, data breaches, unauthorized access, or exploitation
- Bias mitigation: Is evaluated for biases based on protected characteristics and includes efforts to ensure dataset diversity and representativeness
3.2 Model Transparency and Disclosure
Third-party AI model disclosure: AI Experts must disclose all external AI models, APIs, or pre-trained models used in Deliverables, including:
- Model name, provider, and version
- License type and restrictions (open-source, commercial, restricted use)
- Known limitations, biases, or failure modes
- Training data sources (if publicly disclosed by model provider)
- Terms of use restrictions (e.g., prohibition on certain applications)
- Attribution requirements for open-source models
- Any usage costs or API rate limits that may affect Client deployment
Open-source compliance: AI Experts using open-source models must comply with applicable licenses (MIT, Apache 2.0, GPL, Creative Commons) and provide proper attribution. Clients are responsible for ongoing compliance after deployment.
4. AI Safety and Regulatory Compliance
4.1 NIST AI Risk Management Framework
AI Experts developing AI systems through the Platform should align with the NIST AI Risk Management Framework (AI RMF) principles:
- Valid and reliable: AI systems produce valid, reliable outputs appropriate for their intended use and context
- Safe: Systems do not pose unreasonable safety risks and include appropriate safeguards
- Secure and resilient: Systems are protected against adversarial attacks, data poisoning, and unauthorized manipulation
- Accountable and transparent: Decision-making processes are documented, explainable, and subject to human oversight
- Fair with harmful bias managed: Systems are evaluated for bias and discrimination, with mitigation strategies implemented
- Privacy-enhanced: Systems protect privacy, minimize data collection, and implement privacy-preserving techniques
4.2 EU AI Act Compliance (When Applicable)
For AI systems deployed in the European Union or affecting EU residents, AI Experts must comply with the EU AI Act requirements, including:
- Prohibited AI systems: Do not develop social scoring systems, real-time biometric identification in public spaces (with limited exceptions), emotion recognition in workplaces or schools, or manipulative AI systems that exploit vulnerabilities
- High-risk AI systems: Implement conformity assessments, risk management systems, data governance, technical documentation, logging, transparency, human oversight, and accuracy/robustness requirements for high-risk applications (employment, education, law enforcement, critical infrastructure)
- Transparency requirements: Disclose when users are interacting with AI systems, deepfakes, or emotion recognition; provide clear information about AI system capabilities and limitations
- General-purpose AI: For general-purpose AI models, provide technical documentation, implement copyright compliance measures, and publish training data summaries
EU AI Act requirements apply as they come into effect under the Act's implementation timeline.
4.3 FTC Guidance on AI and Algorithmic Decision-Making
AI Experts developing consumer-facing AI systems must comply with the FTC Act's prohibition on unfair or deceptive acts or practices:
- Deceptive claims: Do not make false or misleading claims about AI capabilities, accuracy, or performance
- Algorithmic discrimination: Ensure algorithms do not discriminate based on protected characteristics in violation of fair lending, fair housing, or equal employment laws
- Transparency and explainability: Provide meaningful explanations for adverse decisions (credit denials, employment rejections) when required by law
- Data minimization: Collect only necessary data and avoid excessive surveillance or privacy-invasive practices
- Unfair practices: Avoid AI practices that cause substantial injury to consumers that is not reasonably avoidable and not outweighed by benefits
4.4 Export Control Compliance
Restricted technologies: AI Experts must comply with U.S. export control and sanctions laws (including ITAR, EAR, and OFAC regulations) and ensure that AI technologies, models, and services are not provided to:
- Prohibited countries or territories subject to U.S. sanctions (Cuba, Iran, North Korea, Syria, and the Crimea region)
- Individuals or entities on restricted party lists (SDN list, Entity List, Denied Persons List)
- Military, intelligence, or security applications in restricted countries
- End uses related to weapons of mass destruction, military intelligence, or surveillance of dissidents
- Chinese military companies or entities involved in military-civil fusion
- Russian defense or intelligence sectors
Screening requirement: AI Experts must screen Clients and end users against OFAC sanctions lists and denied party lists before providing services for government, defense, or sensitive applications.
5. Content Restrictions
5.1 Prohibited Content Types
Users may not upload, transmit, or store the following content types on the Platform, in NeironLab workspaces, or through Platform communications:
- Child sexual abuse material (CSAM): Any content depicting or describing sexual abuse, exploitation, or sexualization of minors
- Illegal pornography: Non-consensual intimate imagery, revenge porn, hidden camera footage, or other illegal adult content
- Extreme violence: Graphic depictions of violence, gore, torture, or mutilation designed to shock or harm viewers
- Terrorist content: Materials that promote, glorify, or provide instructions for terrorist activities
- Hate speech: Content promoting violence, hatred, or discrimination against individuals or groups based on protected characteristics
- Self-harm content: Content promoting suicide, self-injury, eating disorders, or dangerous challenges
- Illegal goods: Content facilitating the sale of illegal drugs, weapons, explosives, or other contraband
- Stolen data: Data obtained through hacking, unauthorized access, or data breaches
5.2 Intellectual Property and Copyright
Infringement prohibited: Users may not upload, share, or transmit content that infringes copyright, trademarks, patents, trade secrets, or other intellectual property rights. This includes:
- Pirated software, cracked applications, or license key generators
- Copyrighted text, images, videos, or music without proper authorization
- Counterfeit goods, fake branded products, or trademark violations
- Stolen trade secrets, proprietary datasets, or confidential business information
- Unauthorized reproductions of copyrighted AI training datasets
- Reverse-engineered proprietary models or algorithms
DMCA compliance: NeironHub responds to valid Digital Millennium Copyright Act (DMCA) takedown notices. Copyright owners may submit DMCA notices to legal@neironhub.ai. Repeat infringers will have their Accounts terminated.
5.3 Spam and Unsolicited Communications
Users may not send spam, unsolicited bulk messages, or unwanted communications through the Platform, including:
- Mass messaging to AI Experts or Clients without prior relationship or consent
- Automated messaging, bot-generated communications, or scripted outreach
- Promotional content, advertisements, or marketing materials unrelated to legitimate Projects
- Chain letters, pyramid schemes, or multi-level marketing solicitations
- Phishing attempts, scam messages, or fraudulent proposals
- Repetitive or harassing messages after a User has requested no contact
6. Professional Conduct Standards for AI Experts
6.1 Professional Responsibilities
AI Experts must maintain high professional standards and ethical practices:
- Accurate representation: Truthfully represent qualifications, experience, capabilities, and past work. Do not exaggerate expertise or fabricate credentials.
- Quality work: Deliver work that meets industry standards and documented acceptance criteria. Do not submit incomplete, non-functional, or deliberately defective Deliverables.
- Timely delivery: Make reasonable efforts to meet agreed deadlines. Communicate proactively about delays or obstacles.
- Communication: Respond to Client messages within 24 hours during active Projects. Maintain professional, respectful communication at all times.
- Confidentiality: Protect Client confidential information and do not disclose project details without authorization.
- Conflict of interest: Disclose any conflicts of interest, competitive relationships, or circumstances that could affect objective judgment.
- No double-dipping: Do not reuse significant portions of Client-specific work across multiple Projects without disclosure and consent.
6.2 Verification and Profile Accuracy
Profile integrity: AI Experts must maintain accurate, truthful profiles:
- Skills, expertise areas, and technology proficiencies must reflect actual competencies
- Educational credentials, certifications, and degrees must be verifiable and accurate
- Employment history and company affiliations must be truthful
- Portfolio samples must represent the AI Expert's own work (or clearly indicate collaboration)
- Client testimonials and ratings must be authentic and not fabricated
- Profile photos must depict the actual AI Expert (no stock photos, celebrities, or misrepresentations)
Verification compliance: AI Experts must comply with NeironHub's verification requirements, including identity verification, background checks, skills assessments, and periodic re-verification. Providing false information during verification may result in a permanent Account ban.
7. Data Security and NeironLab Requirements
7.1 NeironLab Workspace Security
Secure development environment: NeironLab provides SOC 2-compliant, isolated containers for secure AI development and testing. Users must:
- Use NeironLab for all Project development work involving Client data
- Not export Client data from NeironLab to personal devices, external servers, or third-party systems without explicit written authorization
- Not share NeironLab workspace access credentials with unauthorized individuals
- Enable multi-factor authentication (MFA) for NeironLab access
- Use strong, unique passwords for Platform and NeironLab authentication
- Log out of NeironLab workspaces when not actively working
- Report any suspected security incidents, data breaches, or unauthorized access immediately
7.2 Client Data Protection
Data handling requirements: AI Experts handling Client data must:
- Data minimization: Request and access only the data necessary to complete the Project. Do not collect excessive or unnecessary data.
- Confidentiality: Treat all Client data as confidential and proprietary. Do not disclose, share, or discuss Client data with third parties.
- No reuse: Do not use Client data for training your own AI models, benchmarking, or other Projects without explicit written consent.
- Secure storage: Store Client data only in NeironLab workspaces or other NeironHub-approved secure environments.
- Data deletion: Delete or return all Client data upon Project completion or termination as specified in the Project agreement.
- No local copies: Do not download Client datasets to personal devices, local machines, or unsecured storage.
- Third-party restrictions: Do not upload Client data to third-party AI APIs, cloud services, or external tools without Client authorization.
7.3 Data Breach Notification
Immediate reporting required: If an AI Expert becomes aware of any actual or suspected data breach, security incident, unauthorized access, or data exposure involving Client data, they must:
- Immediately notify NeironHub security team at security@neironhub.ai
- Provide details of the incident, including the nature of the breach, the data affected, the systems compromised, and the number of records exposed
- Preserve evidence and logs related to the incident
- Cooperate fully with NeironHub's incident response investigation
- Not notify the Client directly until instructed by NeironHub (to ensure a coordinated response)
- Take immediate steps to contain the breach and prevent further data exposure
Consequences: Failure to report data breaches promptly may result in Account termination, liability for damages, regulatory penalties, and legal action.
8. Enforcement and Violations
8.1 Violation Investigation and Enforcement
NeironHub enforcement approach: NeironHub takes Policy violations seriously and enforces this Policy through a combination of:
- Automated monitoring: Automated systems scan for malware, prohibited content, spam patterns, and suspicious activity
- Manual review: Human reviewers investigate reported violations, disputes, and flagged content
- User reports: Community reporting through the Platform's reporting tools
- Pattern analysis: Behavioral analysis to detect circumvention, fraud, and abuse
- Third-party reports: Reports from law enforcement, copyright holders, or regulatory authorities
8.2 Enforcement Actions and Remedies
Depending on the severity and nature of the violation, NeironHub may take the following actions:
- Warning: Formal written warning for first-time or minor violations with opportunity to cure
- Content removal: Deletion of prohibited content, Deliverables, or communications
- Account suspension: Temporary suspension of Account access (typically 7-30 days) pending investigation
- Project cancellation: Immediate termination of active Projects with refunds or partial payments as appropriate
- Fund forfeiture: Forfeiture of escrowed funds for serious violations (circumvention, fraud, malware)
- Permanent ban: Permanent Account termination and ban from the Platform for severe or repeat violations
- Legal action: Civil litigation to recover damages, obtain injunctions, or enforce contractual obligations
- Law enforcement referral: Reporting to FBI, NCMEC, Secret Service, or other authorities for criminal activity
- Regulatory reporting: Reporting to FTC, SEC, OFAC, or other regulators as required by law
8.3 Severity-Based Enforcement Matrix
Minor violations (typically warning or temporary suspension):
- Unintentional spam or excessive messaging
- Minor Terms violations without malicious intent
- Profile inaccuracies or outdated information
- Late delivery without pattern of non-performance
Serious violations (typically suspension, fund forfeiture, or permanent ban):
- Harassment, threats, hate speech, or abusive conduct
- Intellectual property infringement
- Data breaches due to negligence or careless handling
- Misrepresentation of qualifications or fake credentials
- Platform circumvention or fee avoidance attempts
- Multiple Terms violations or repeat offenses
Severe violations (immediate permanent ban and legal/law enforcement action):
- Child sexual abuse material (CSAM) or exploitation of minors
- Malware distribution, cyberattacks, or hacking
- Terrorist content or violent extremism
- Fraud, money laundering, or financial crimes
- Intentional data breaches or theft of confidential information
- Export control violations or sanctions evasion
- Criminal activity or illegal operations
8.4 Appeals Process
Right to appeal: Users who believe their Account was suspended or terminated in error may appeal by:
- Sending a written appeal to legal@neironhub.ai within 14 days of the suspension or termination
- Providing a detailed explanation and supporting evidence demonstrating that the enforcement action was unwarranted
Review and determination: NeironHub will review appeals within 30 business days and issue a final determination. NeironHub's decision on appeals is final and binding, except where prohibited by applicable consumer protection laws.
No appeal for severe violations: Appeals are not available for CSAM, malware, terrorist content, or other severe violations involving illegal activity. These decisions are final and not subject to appeal.
9. Reporting Violations and Abuse
9.1 How to Report Violations
In-Platform reporting: Users can report Policy violations through:
- Project dashboard → Report Issue → Select violation type
- User profile → Report User → Describe violation
- Message thread → Report Message → Flag inappropriate content
- Help Center → Submit Ticket → Abuse/Safety Report
Email reporting: For serious violations or detailed reports, email:
- General abuse: abuse@neironhub.ai
- Security incidents: security@neironhub.ai
- Copyright infringement (DMCA): legal@neironhub.ai
- Child safety concerns: legal@neironhub.ai
9.2 What to Include in Reports
Effective abuse reports should include:
- Username or profile link of the violating User
- Project name or ID (if applicable)
- Description of the violation and which Policy section was violated
- Screenshots, links, or other evidence documenting the violation
- Date and time of the violation
- Any relevant context or additional information
Confidentiality: NeironHub treats abuse reports confidentially and does not disclose the identity of reporters to accused Users except where required by law or legal process.
9.3 False Reports and Retaliation
False reporting prohibited: Submitting knowingly false or malicious abuse reports to harass Users, damage reputations, or interfere with Projects is a Policy violation. Users who submit repeated false reports may face Account suspension.
No retaliation: Users may not retaliate against individuals who report Policy violations in good faith. Retaliation includes harassment, threats, negative reviews motivated by reporting, or attempts to identify and punish reporters.
10. Monitoring, Investigation, and Cooperation
10.1 Platform Monitoring
Automated and manual monitoring: NeironHub reserves the right to monitor Platform activity, Content, communications, and NeironLab workspaces to:
- Detect and prevent Policy violations, fraud, abuse, and illegal activity
- Ensure Platform security, integrity, and proper functioning
- Investigate reported violations and suspicious activity
- Comply with legal obligations, court orders, and regulatory requirements
- Improve Platform features, safety systems, and abuse detection
- Enforce Terms of Service and contractual obligations
No expectation of privacy: Users acknowledge that communications, Content, and activity on the Platform are not private and may be monitored, reviewed, and disclosed as necessary for Platform operations, safety, legal compliance, and enforcement. While NeironHub implements security measures to protect User data, Users should not use the Platform for communications or activities requiring absolute privacy.
10.2 Investigation Cooperation
Cooperation with investigations: Users must cooperate with NeironHub investigations of Policy violations, security incidents, or abuse reports. This includes:
- Responding to requests for information, evidence, or clarification within 72 hours
- Providing access to relevant Deliverables, communications, or workspace data
- Participating in interviews, mediation, or dispute resolution processes
- Preserving evidence and not destroying or altering relevant materials
- Complying with temporary restrictions or Account limitations during investigations
Non-cooperation consequences: Failure to cooperate with investigations, destroying evidence, providing false information, or obstructing NeironHub enforcement efforts may result in Account suspension, fund forfeiture, and adverse determinations in disputes.
© 2025 NeironHub INC. All rights reserved.