Summary

An overview of 19 large language models categorized by their approach to safety, ranging from models with strict guardrails to uncensored and ‘abliterated’ models.

Key quotes

Whether you are looking for an LLM with more safety guardrails or one completely without them, someone has probably built it.

The article categorizes various LLMs into three safety tiers: those with strict guardrails, those with fewer restrictions in the name of freedom of expression, and ‘abliterated’ models whose built-in refusal behavior has been removed from the weights. It lists specific models from organizations including Meta, IBM, Anthropic, and Google.
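
For context on the ‘abliterated’ tier: abliteration is usually described as estimating a ‘refusal direction’ in a model's activations (by contrasting its responses to harmful and harmless prompts) and projecting that direction out of the weight matrices, so no layer can write it into the residual stream. The sketch below shows only that projection step; `W` and `refusal_dir` are hypothetical stand-ins for a real model's weight matrix and an already-estimated direction, not any specific procedure from the article.

    import numpy as np

    def ablate_direction(W: np.ndarray, refusal_dir: np.ndarray) -> np.ndarray:
        """Project the refusal direction out of a weight matrix.

        W: (d_out, d_in) weights of a layer that writes into the residual
           stream (hypothetical stand-in for a real model's weights).
        refusal_dir: (d_out,) direction assumed to be estimated elsewhere
           by contrasting harmful vs. harmless prompts.
        """
        d = refusal_dir / np.linalg.norm(refusal_dir)  # unit-normalize
        # W' = (I - d d^T) W removes the component of every output along d,
        # so this layer can no longer emit the refusal direction.
        return W - np.outer(d, d) @ W

    # Toy check: after ablation, the layer's outputs have no component along d.
    W = np.random.randn(8, 4)
    d = np.random.randn(8)
    W_ablated = ablate_direction(W, d)
    assert np.allclose(d @ W_ablated, 0.0)

Applied across every layer that writes to the residual stream, this kind of projection is what distinguishes abliterated models from merely fine-tuned uncensored ones: the refusal mechanism is edited out of the weights rather than trained away.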