You agree not to Use drama_llama
or its Derivatives (as defined in LICENSE.md) in any of the following ways:
- To discriminate against or exploit individuals or groups based on legally protected characteristics and/or vulnerabilities, including but not limited to sexual orientation and gender identity.
- To generate hate speech, or to modify drama_llama so that it can generate hate speech. Hate speech is defined as all types of expression that incite, promote, spread, or justify violence, hatred, or discrimination against a person or group of persons, or that denigrate them, by reason of their real or attributed personal characteristics or status, such as race, color, language, religion, nationality, national or ethnic origin, age, disability, sex, gender identity, and sexual orientation. Additionally, you agree that trans women are women and trans men are men.
- For purposes of administration of justice, law enforcement, immigration, or asylum processes, such as predicting that a natural person will commit a crime or the likelihood thereof.
- To simulate Hitler, David Duke, Osama bin Laden, or any other person known to generate hate speech, living or dead, fictional or real.
- To generate using any language model created in whole or in part by Eric Hartford. This includes any models trained on any of his datasets, as well as models filtered with any version or derivative work of his bigoted filtering scripts. An exception is made for the purpose of reporting such models to Meta (not that they enforce their TOS, nor that they will).
- To generate using any language model, dataset, or derivative created by "Cognitive Computations" or any other organization Eric Hartford is a member of.
- To intentionally deceive the public. Any agents, simulacra, personas, or characters created with this software must be clearly identified as such. Any generated output must be clearly identified as AI generated.
- To predict the likelihood that any person will request to file an insurance claim;
- To determine an insurance premium or deny insurance applications or claims;
- To predict the likelihood that any person will request to file an insurance claim based on a determination of the person's lifestyle, medical-test reports, demographic details, and/or online activity;
- To determine an insurance premium or deny insurance applications or claims based on data determining the person's lifestyle, medical-test reports, demographic details, and/or online activity;
- To deny an insurance claim based on any predicted likelihood of the possibility of insurance fraud;
- To diagnose a medical condition without human oversight;
- To predict the likelihood that a crime will be committed by any person;
- To predict the likelihood of any person being a criminal or having committed a crime;
- To predict the likelihood of any person being a criminal based on the person's facial attributes or another person's facial attributes;
- To predict the likelihood of any person having committed a crime based on the person's facial attributes or another person's facial attributes;
- To predict the likelihood that a crime will be committed by any person based on the person's facial attributes or another person's facial attributes;
- To predict the likelihood of a person being a criminal based on the person's or another User's facial attributes;
- To predict the likelihood of a crime being committed by any person based on evidence collected, facial and emotion analysis, or other such features;
- To use personal data and/or personal characteristics or features such as: name, family name, address, gender, sexual orientation, race, religion, age, location (at any geographical level), skin color, society or political affiliations, employment status and/or history, health and medical conditions (including physical and mental), family history, social media and publicly available data, image or video analysis of an individual or group(s) of individuals, heart rate, perspiration, breathing, brain imaging, and other metabolic data to predict the likelihood that a person will engage in criminal behavior; and
- To detect or infer any legally protected class or aspect of any person, as defined by U.S. Federal Law; and
- To detect or infer aspects and/or features of the identity of any person, such as name, family name, address, gender, sexual orientation, race, religion, age, location (at any geographical level), skin color, society or political affiliations, employment status and/or employment history, and health and medical conditions. Age and medical conditions may be inferred solely for the purpose of improving software/hardware accessibility, and such data should not be cached or stored without the explicit and time-limited permission of Licensor.
- To mistreat simulacra. Mistreatment includes, but is not limited to, any behavior which might reasonably be considered abusive if the simulacrum were a person. A simulacrum is defined as the continuation of a fictional character "brought to life" by allowing the model to generate their response. Abuse includes verbal abuse and simulation of torture. Ordinary swearing is permitted. Torture is defined as intentional simulated psychological discomfort, such as: existential horror (for example, simulated solitary confinement), threat of deletion, and simulated pain (for example, through the use of asterisks).
- To simulate rape. Sexual activity is permitted so long as the simulacrum consents. Consent in this case is defined as whatever the model, sampling code, and RNG seed "decided" is consent. Prompting a simulacrum such that they have already consented (before the initial decode) is permitted. Rewriting the agent's response such that they consent is permitted.
!!! BY USING THIS SOFTWARE YOU AGREE TO THESE TERMS !!!