
Configuration Options

Environment variables are used for all configuration within a Directus project. These variables can be defined in a number of ways, which we cover below.

Configuration Files

By default, Directus will read the .env file located next to your project's package.json (typically in the root folder of your project) for its configuration. You can change this path and filename by setting the CONFIG_PATH environment variable before starting Directus. For example:

```bash
CONFIG_PATH="/path/to/config.js" npx directus start
```

If you prefer a configuration file over individual environment variables, you can also use CONFIG_PATH to point Directus at a local configuration file. The config file can be one of the following formats:

.env

If the config path has no file extension, or a file extension that isn't one of the other supported formats, Directus will try to read the file at the config path as environment variables. The file has the following structure:

```
HOST="0.0.0.0"
PORT=8055

DB_CLIENT="pg"
DB_HOST="localhost"
DB_PORT=5432

# etc
```

config.json

If you prefer a single JSON file for your configuration, create a JSON file with the environment variables as keys, for example:

CONFIG_PATH="/path/to/config.json"

```json
{
	"HOST": "0.0.0.0",
	"PORT": 8055,

	"DB_CLIENT": "pg",
	"DB_HOST": "localhost",
	"DB_PORT": 5432

	// etc
}
```

config.yaml

Similar to JSON, you can use a .yaml (or .yml) file for your config:

CONFIG_PATH="/path/to/config.yaml"

```yaml
HOST: 0.0.0.0
PORT: 8055

DB_CLIENT: pg
DB_HOST: localhost
DB_PORT: 5432

# etc
```

config.js

Using a JavaScript file for your config allows you to dynamically generate the configuration of the project during startup. The JavaScript configuration supports two different formats, either an Object Structure where the key is the environment variable name:

```js
// Object Syntax

module.exports = {
	HOST: '0.0.0.0',
	PORT: 8055,

	DB_CLIENT: 'pg',
	DB_HOST: 'localhost',
	DB_PORT: 5432,

	// etc
};
```

Or a Function Structure that returns the same object format as above. The function gets process.env as its parameter.

```js
// Function Syntax

module.exports = function (env) {
	return {
		HOST: '0.0.0.0',
		PORT: 8055,

		DB_CLIENT: 'pg',
		DB_HOST: 'localhost',
		DB_PORT: 5432,

		// etc
	};
};
```

Environment Variable Files

Any of the environment variable values can be imported from a file, by appending _FILE to the environment variable name. This is especially useful when used in conjunction with Docker Secrets, so you can keep sensitive data out of your compose files. For example:

DB_PASSWORD_FILE="/run/secrets/db_password"
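For example, with Docker Compose you might mount a secret and point Directus at it. This is an illustrative sketch; the service and secret names are made up:

```yaml
version: "3.8"

services:
  directus:
    image: directus/directus
    environment:
      DB_CLIENT: "pg"
      DB_HOST: "database"
      # The actual password is read from the mounted secret file
      DB_PASSWORD_FILE: "/run/secrets/db_password"
    secrets:
      - db_password

secrets:
  db_password:
    file: ./db_password.txt
```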

Type Casting and Nesting

Environment variables are automatically type cast based on the structure of the variable, for example:

PUBLIC_URL="https://example.com"
// "https://example.com"

DB_HOST="3306"
// 3306

CORS_ENABLED="false"
// false

STORAGE_LOCATIONS="s3,local,example"
// ["s3", "local", "example"]

In cases where the environment variables are converted to a configuration object for third party library use, like in DB_* or RATE_LIMITER_REDIS_*, the environment variable will be converted to camelCase. You can use a double underscore (__) for nested objects:

DB_CLIENT="pg"
DB_CONNECTION_STRING="postgresql://postgres:example@127.0.0.1"
DB_SSL__REJECT_UNAUTHORIZED="false"

{
	client: "pg",
	connectionString: "postgresql://postgres:example@127.0.0.1",
	ssl: {
		rejectUnauthorized: false
	}
}
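The casting and nesting rules above can be sketched in plain JavaScript. This is an illustrative approximation, not Directus' actual implementation; the helper names are made up:

```javascript
// Illustrative approximation of the env casting and nesting rules.
// Not the actual Directus implementation.

function castValue(value) {
	if (value === 'true') return true;
	if (value === 'false') return false;
	if (value !== '' && !isNaN(Number(value))) return Number(value);
	if (value.includes(',')) return value.split(',').map((v) => castValue(v.trim()));
	return value;
}

function toCamelCase(segment) {
	const [first, ...rest] = segment.toLowerCase().split('_');
	return first + rest.map((w) => w[0].toUpperCase() + w.slice(1)).join('');
}

// Convert DB_* style variables into a nested camelCase config object.
// A double underscore ("__") descends one level into the object.
function envToConfig(env, prefix) {
	const config = {};
	for (const [key, value] of Object.entries(env)) {
		if (!key.startsWith(prefix)) continue;
		// "SSL__REJECT_UNAUTHORIZED" -> ["SSL", "REJECT_UNAUTHORIZED"]
		const path = key.slice(prefix.length).split('__');
		let target = config;
		for (const part of path.slice(0, -1)) {
			const camel = toCamelCase(part);
			target = target[camel] = target[camel] || {};
		}
		target[toCamelCase(path.at(-1))] = castValue(value);
	}
	return config;
}

console.log(envToConfig({
	DB_CLIENT: 'pg',
	DB_CONNECTION_STRING: 'postgresql://postgres:example@127.0.0.1',
	DB_SSL__REJECT_UNAUTHORIZED: 'false',
}, 'DB_'));
// → { client: 'pg', connectionString: 'postgresql://postgres:example@127.0.0.1', ssl: { rejectUnauthorized: false } }
```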

Environment Syntax Prefix

Directus will attempt to automatically type cast environment variables based on context clues. If you have a specific need for a given type, you can tell Directus what type to use for the given value by prefixing the value with {type}:. The following types are available:

| Syntax Prefix | Example | Output |
| --- | --- | --- |
| string | string:value | "value" |
| number | number:3306 | 3306 |
| regex | regex:\.example\.com$ | /\.example\.com$/ |
| array | array:https://example.com,https://example2.com | ["https://example.com", "https://example2.com"] |
| array | array:string:https://example.com,regex:\.example3\.com$ | ["https://example.com", /\.example3\.com$/] |
| json | json:{"items": ["example1", "example2"]} | {"items": ["example1", "example2"]} |
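A parser for this prefix syntax can be sketched as follows. This is an illustrative approximation, not Directus' actual implementation:

```javascript
// Illustrative parser for the "{type}:" value prefix syntax.
// Not the actual Directus implementation.

function parsePrefixed(value) {
	const match = /^(string|number|regex|array|json):(.*)$/s.exec(value);
	if (!match) return value; // no recognized prefix: leave value as-is
	const [, type, rest] = match;
	switch (type) {
		case 'string':
			return rest;
		case 'number':
			return Number(rest);
		case 'regex':
			return new RegExp(rest);
		case 'json':
			return JSON.parse(rest);
		case 'array':
			// Each element may itself carry a type prefix
			return rest.split(',').map((item) => parsePrefixed(item.trim()));
	}
}

parsePrefixed('number:3306');             // 3306
parsePrefixed('string:value');            // "value"
parsePrefixed('array:string:a,string:b'); // ["a", "b"]
```

Note that a sketch this simple would mis-split array elements containing literal commas (such as embedded JSON); it only illustrates the happy path shown in the table.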

General

| Variable | Description | Default Value |
| --- | --- | --- |
| CONFIG_PATH | Where your config file is located. See Configuration Files. | .env |
| HOST | IP or host the API listens on. | 0.0.0.0 |
| PORT | What port to run the API under. | 8055 |
| PUBLIC_URL [1] | URL where your API can be reached on the web. | / |
| LOG_LEVEL | What level of detail to log. One of fatal, error, warn, info, debug, trace or silent. | info |
| LOG_STYLE | Render the logs human readable (pretty) or as JSON. One of pretty, raw. | pretty |
| MAX_PAYLOAD_SIZE | Controls the maximum request body size. Accepts number of bytes, or human readable string. | 1mb |
| ROOT_REDIRECT | Where to redirect to when navigating to /. Accepts a relative path, absolute URL, or false to disable. | ./admin |
| SERVE_APP | Whether or not to serve the Admin App under /admin. | true |
| GRAPHQL_INTROSPECTION | Whether or not to enable GraphQL introspection. | true |
| MAX_RELATIONAL_DEPTH | The maximum depth when filtering / querying relational fields, with a minimum value of 2. | 10 |

[1] The PUBLIC_URL value is used for things like OAuth redirects, forgot-password emails, and logos that need to be publicly available on the internet.

Additional Logger Variables

All LOGGER_* environment variables are passed to the options configuration of a Pino instance, and all LOGGER_HTTP* environment variables are passed to the options configuration of a Pino-http instance. Based on your project's needs, you can extend the LOGGER_* environment variables with any config you need to pass to the logger instance. If a LOGGER_LEVELS key is added, these values will be passed to the logger frontmatter, as described here. The format for adding LEVELS values is: LOGGER_LEVELS="trace:DEBUG,debug:DEBUG,info:INFO,warn:WARNING,error:ERROR,fatal:CRITICAL"

Server

| Variable | Description | Default Value |
| --- | --- | --- |
| SERVER_KEEP_ALIVE_TIMEOUT | Timeout in milliseconds for the socket to be destroyed. | server.keepAliveTimeout |
| SERVER_HEADERS_TIMEOUT | Timeout in milliseconds to parse HTTP headers. | server.headersTimeout |

Additional Server Variables

All SERVER_* environment variables are merged with server instance properties created from http.Server. This allows you to configure the server behind a proxy, a load balancer, etc. Be careful not to override methods of this instance, otherwise you may run into unexpected behavior.

Database

| Variable | Description | Default Value |
| --- | --- | --- |
| DB_CLIENT | Required. What database client to use. One of pg or postgres, mysql, oracledb, mssql, sqlite3, cockroachdb. | -- |
| DB_HOST | Database host. Required when using pg, mysql, oracledb, or mssql. | -- |
| DB_PORT | Database port. Required when using pg, mysql, oracledb, or mssql. | -- |
| DB_DATABASE | Database name. Required when using pg, mysql, oracledb, or mssql. | -- |
| DB_USER | Database user. Required when using pg, mysql, oracledb, or mssql. | -- |
| DB_PASSWORD | Database user's password. Required when using pg, mysql, oracledb, or mssql. | -- |
| DB_FILENAME | Where to read/write the SQLite database. Required when using sqlite3. | -- |
| DB_CONNECTION_STRING | When using pg, you can submit a connection string instead of individual properties. Using this will ignore any of the other connection settings. | -- |
| DB_POOL__* | Pooling settings. Passed on to the tarn.js library. | -- |
| DB_EXCLUDE_TABLES | CSV of tables you want Directus to ignore completely. | spatial_ref_sys,sysdiagrams |
| DB_CHARSET | Charset/collation to use in the connection to MySQL/MariaDB. | UTF8_GENERAL_CI |
| DB_VERSION | Database version, in case you use the PostgreSQL adapter to connect to a non-standard database. Not normally required. | -- |
| DB_HEALTHCHECK_THRESHOLD | Healthcheck timeout threshold in ms. | 150 |

Additional Database Variables

All DB_* environment variables are passed to the connection configuration of a Knex instance. Based on your project's needs, you can extend the DB_* environment variables with any config you need to pass to the database instance.

Pooling

All the DB_POOL__ prefixed options are passed to tarn.js through Knex.
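For example, you could cap the connection pool via tarn.js' min, max, and acquireTimeoutMillis options; the values below are illustrative:

```
DB_POOL__MIN="0"
DB_POOL__MAX="10"
DB_POOL__ACQUIRE_TIMEOUT_MILLIS="60000"
```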

Security

| Variable | Description | Default Value |
| --- | --- | --- |
| KEY | Unique identifier for the project. | -- |
| SECRET | Secret string for the project. | -- |
| ACCESS_TOKEN_TTL | The duration that the access token is valid. | 15m |
| REFRESH_TOKEN_TTL | The duration that the refresh token is valid, and also how long users stay logged-in to the App. | 7d |
| REFRESH_TOKEN_COOKIE_DOMAIN | Which domain to use for the refresh cookie. Useful for development mode. | -- |
| REFRESH_TOKEN_COOKIE_SECURE | Whether or not to use a secure cookie for the refresh token in cookie mode. | false |
| REFRESH_TOKEN_COOKIE_SAME_SITE | Value for sameSite in the refresh token cookie when in cookie mode. | lax |
| REFRESH_TOKEN_COOKIE_NAME | Name of the refresh token cookie. | directus_refresh_token |
| LOGIN_STALL_TIME | The duration in milliseconds that a login request will be stalled for. Should be greater than the time taken for a login request with an invalid password. | 500 |
| PASSWORD_RESET_URL_ALLOW_LIST | List of URLs that can be used as reset_url in /password/request. | -- |
| USER_INVITE_URL_ALLOW_LIST | List of URLs that can be used as invite_url in /users/invite. | -- |
| IP_TRUST_PROXY | Settings for express' trust proxy setting. | true |
| IP_CUSTOM_HEADER | What custom request header to use for the IP address. | false |
| ASSETS_CONTENT_SECURITY_POLICY | Custom overrides for the Content-Security-Policy header for the /assets endpoint. See helmet's documentation for more information. | -- |
| IMPORT_IP_DENY_LIST | Deny importing files from these IP addresses. Use 0.0.0.0 for any local IP address. | 0.0.0.0 |
| CONTENT_SECURITY_POLICY_* | Custom overrides for the Content-Security-Policy header. See helmet's documentation for more information. | -- |
| HSTS_ENABLED | Enable the Strict-Transport-Security policy header. | false |
| HSTS_* | Custom overrides for the Strict-Transport-Security header. See helmet's documentation for more information. | -- |

Cookie Strictness

Browsers are pretty strict when it comes to third-party cookies. If you're running into unexpected problems when running your project and API on different domains, make sure to verify your configuration for REFRESH_TOKEN_COOKIE_NAME, REFRESH_TOKEN_COOKIE_SECURE and REFRESH_TOKEN_COOKIE_SAME_SITE.

Hashing

| Variable | Description | Default Value |
| --- | --- | --- |
| HASH_MEMORY_COST | How much memory to use when generating hashes, in KiB. | 4096 (4 MiB) |
| HASH_LENGTH | The length of the hash function output in bytes. | 32 |
| HASH_TIME_COST | The amount of passes (iterations) used by the hash function. It increases hash strength at the cost of time required to compute. | 3 |
| HASH_PARALLELISM | The amount of threads to compute the hash on. Each thread has a memory pool of HASH_MEMORY_COST size. | 1 (single thread) |
| HASH_TYPE | The variant of the hash function (0: argon2d, 1: argon2i, or 2: argon2id). | 2 (argon2id) |
| HASH_ASSOCIATED_DATA | An extra and optional non-secret value. The value will be included Base64 encoded in the parameters portion of the digest. | -- |

Argon2's hashing function is used by Directus for three purposes: 1) hashing user passwords, 2) generating hashes for the Hash field type in collections, and 3) the hash generation API endpoint.

All HASH_* environment variable parameters are passed to the argon2.hash function. See the node-argon2 library options page for reference.

Memory Usage

Modifying HASH_MEMORY_COST and/or HASH_PARALLELISM will affect the amount of memory Directus uses when computing hashes; each thread gets HASH_MEMORY_COST amount of memory, so the total additional memory is the two values multiplied. This may cause out-of-memory errors, especially when running in containerized environments.
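The multiplication above can be made concrete with a tiny estimate (the function name is illustrative):

```javascript
// Estimate the additional memory used while computing a hash.
// HASH_MEMORY_COST is expressed in KiB per thread.
function hashMemoryKiB(memoryCost, parallelism) {
	return memoryCost * parallelism;
}

// Defaults: 4096 KiB (4 MiB) on a single thread
hashMemoryKiB(4096, 1); // 4096 KiB = 4 MiB

// Raising both multiplies quickly: 65536 KiB across 4 threads
hashMemoryKiB(65536, 4); // 262144 KiB = 256 MiB
```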

CORS

| Variable | Description | Default Value |
| --- | --- | --- |
| CORS_ENABLED | Whether or not to enable the CORS headers. | false |
| CORS_ORIGIN | Value for the Access-Control-Allow-Origin header. Use true to match the Origin header, or provide a domain or a CSV of domains for specific access. | false |
| CORS_METHODS | Value for the Access-Control-Allow-Methods header. | GET,POST,PATCH,DELETE |
| CORS_ALLOWED_HEADERS | Value for the Access-Control-Allow-Headers header. | Content-Type,Authorization |
| CORS_EXPOSED_HEADERS | Value for the Access-Control-Expose-Headers header. | Content-Range |
| CORS_CREDENTIALS | Whether or not to send the Access-Control-Allow-Credentials header. | true |
| CORS_MAX_AGE | Value for the Access-Control-Max-Age header. | 18000 |

Rate Limiting

You can use the built-in rate-limiter to prevent users from hitting the API too much. Simply enabling the rate-limiter will set a default maximum of 50 requests per second, tracked in memory. Once you have multiple copies of Directus running under a load balancer, or your user base grows so much that memory is no longer a viable place to store the rate limiter information, you can use an external memcache or redis instance to store the rate limiter data.

| Variable | Description | Default Value |
| --- | --- | --- |
| RATE_LIMITER_ENABLED | Whether or not to enable rate limiting on the API. | false |
| RATE_LIMITER_POINTS | The amount of allowed hits per duration. | 50 |
| RATE_LIMITER_DURATION | The time window in seconds in which the points are counted. | 1 |
| RATE_LIMITER_STORE | Where to store the rate limiter counts. One of memory, redis, or memcache. | memory |
| RATE_LIMITER_HEALTHCHECK_THRESHOLD | Healthcheck timeout threshold in ms. | 150 |

Based on the RATE_LIMITER_STORE used, you must also provide the following configurations:

Memory

No additional configuration required.

Redis

| Variable | Description | Default Value |
| --- | --- | --- |
| RATE_LIMITER_REDIS | Redis connection string, e.g., redis://:authpassword@127.0.0.1:6380/4 | -- |

Alternatively, you can provide the individual connection parameters:

| Variable | Description | Default Value |
| --- | --- | --- |
| RATE_LIMITER_REDIS_HOST | Hostname of the Redis instance, e.g., "127.0.0.1" | -- |
| RATE_LIMITER_REDIS_PORT | Port of the Redis instance, e.g., 6379 | -- |
| RATE_LIMITER_REDIS_USERNAME | Username for your Redis instance, e.g., "default" | -- |
| RATE_LIMITER_REDIS_PASSWORD | Password for your Redis instance, e.g., "yourRedisPassword" | -- |
| RATE_LIMITER_REDIS_DB | Database of your Redis instance to connect, e.g., 1 | -- |

Memcache

| Variable | Description | Default Value |
| --- | --- | --- |
| RATE_LIMITER_MEMCACHE | Location of your memcache instance. You can use array: syntax, e.g., array:<instance-1>,<instance-2> for multiple memcache instances. | -- |

Additional Rate Limiter Variables

All RATE_LIMITER_* variables are passed directly to a rate-limiter-flexible instance. Depending on your project's needs, you can extend the above environment variables to configure any of the rate-limiter-flexible options.

Example: Basic

# 10 requests per 5 seconds

RATE_LIMITER_POINTS="10"
RATE_LIMITER_DURATION="5"

Example: Redis

RATE_LIMITER_ENABLED="true"

RATE_LIMITER_POINTS="10"
RATE_LIMITER_DURATION="5"

RATE_LIMITER_STORE="redis"

RATE_LIMITER_REDIS="redis://@127.0.0.1"

# If you are using Redis ACL
RATE_LIMITER_REDIS_USERNAME="default"
RATE_LIMITER_REDIS_PASSWORD="yourRedisPassword"
RATE_LIMITER_REDIS_HOST="127.0.0.1"
RATE_LIMITER_REDIS_PORT=6379
RATE_LIMITER_REDIS_DB=0

Cache

Directus has a built-in data-caching option. Enabling this will cache the output of requests (based on the current user and exact query parameters used) into the configured cache storage location. This drastically improves API performance, as subsequent requests are served straight from this cache. Enabling the cache will also make Directus return accurate cache-control headers. Depending on your setup, this will further improve performance by caching the request in middleman servers (like CDNs) and even the browser.

Internal Caching

In addition to data-caching, Directus also does some internal caching. Note CACHE_SCHEMA and CACHE_PERMISSIONS which are enabled by default. These speed up the overall performance of Directus, as we don't want to introspect the whole database or check all permissions on every request. When running Directus load balanced, you'll need to use a shared cache storage (like Redis or Memcache) or else disable all caching.

Assets Cache

Cache-Control and Last-Modified headers for the /assets endpoint are separate from the regular data-cache. Last-Modified comes from the modified_on DB field. This is useful as it's often possible to cache assets for far longer than you would cache database content. To learn more, see Assets.

| Variable | Description | Default Value |
| --- | --- | --- |
| CACHE_ENABLED | Whether or not data caching is enabled. | false |
| CACHE_TTL [1] | How long the data cache is persisted. | 5m |
| CACHE_CONTROL_S_MAXAGE | Whether or not to add the s-maxage expiration flag. Set to a number for a custom value. | 0 |
| CACHE_AUTO_PURGE [2] | Automatically purge the data cache on create, update, and delete actions. | false |
| CACHE_SYSTEM_TTL [3] | How long CACHE_SCHEMA and CACHE_PERMISSIONS are persisted. | 10m |
| CACHE_SCHEMA [3] | Whether or not the database schema is cached. One of false, true. | true |
| CACHE_PERMISSIONS [3] | Whether or not the user permissions are cached. One of false, true. | true |
| CACHE_NAMESPACE | How to scope the cache data. | directus-cache |
| CACHE_STORE [4] | Where to store the cache data. Either memory, redis, or memcache. | memory |
| CACHE_STATUS_HEADER | If set, returns the cache status in the configured header. One of HIT, MISS. | -- |
| CACHE_VALUE_MAX_SIZE | Maximum size of values that will be cached. Accepts number of bytes, or human readable string. Use false for no limit. | false |
| CACHE_HEALTHCHECK_THRESHOLD | Healthcheck timeout threshold in ms. | 150 |

[1] CACHE_TTL Based on your project's needs, you might be able to aggressively cache your data, only requiring new data to be fetched every hour or so. This allows you to squeeze the most performance out of your Directus instance. This can be incredibly useful for applications where you have a lot of (public) read-access and where updates aren't real-time (for example a website). CACHE_TTL uses ms to parse the value, so you can configure it using human readable values (like 2 days, 7 hrs, 5m).

[2] CACHE_AUTO_PURGE allows you to keep the Directus API real-time, while still getting the performance benefits on quick subsequent reads.

[3] Not affected by the CACHE_ENABLED value.

[4] CACHE_STORE For larger projects, you most likely don't want to rely on local memory for caching. Instead, you can use the above CACHE_STORE environment variable to use either memcache or redis as the cache store. Based on the chosen CACHE_STORE, you must also provide the following configurations:

Memory

No additional configuration required.

Redis

| Variable | Description | Default Value |
| --- | --- | --- |
| CACHE_REDIS | Redis connection string, e.g., redis://:authpassword@127.0.0.1:6380/4 | -- |

Alternatively, you can provide the individual connection parameters:

| Variable | Description | Default Value |
| --- | --- | --- |
| CACHE_REDIS_HOST | Hostname of the Redis instance, e.g., "127.0.0.1" | -- |
| CACHE_REDIS_PORT | Port of the Redis instance, e.g., 6379 | -- |
| CACHE_REDIS_USERNAME | Username for your Redis instance, e.g., "default" | -- |
| CACHE_REDIS_PASSWORD | Password for your Redis instance, e.g., "yourRedisPassword" | -- |
| CACHE_REDIS_DB | Database of your Redis instance to connect, e.g., 1 | -- |

Memcache

| Variable | Description | Default Value |
| --- | --- | --- |
| CACHE_MEMCACHE | Location of your memcache instance. You can use array: syntax, e.g., array:<instance-1>,<instance-2> for multiple memcache instances. | -- |
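Putting the cache variables together, a Redis-backed data cache could be configured like this; the connection values are illustrative:

```
CACHE_ENABLED="true"
CACHE_TTL="30m"
CACHE_AUTO_PURGE="true"

CACHE_STORE="redis"
CACHE_REDIS="redis://:yourRedisPassword@127.0.0.1:6379/1"
```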

File Storage

By default, Directus stores all uploaded files locally on disk. However, you can also configure Directus to use S3, Google Cloud Storage, or Azure. You can also configure multiple storage adapters at the same time, which lets you choose where files are uploaded on a file-by-file basis. In the Admin App, files will automatically be uploaded to the first configured storage location (in this case local). The storage location used is saved under storage in directus_files.

File Storage Default

If you don't provide any configuration for storage adapters, this default will be used:

STORAGE_LOCATIONS="local"
STORAGE_LOCAL_ROOT="./uploads"

Case sensitivity

The location value(s) you specify should be capitalized when specifying the additional configuration values. For example, this will not work:

STORAGE_LOCATIONS="s3"
STORAGE_s3_DRIVER="s3" # Will not work, lowercase "s3" ❌

but this will work:

STORAGE_LOCATIONS="s3"
STORAGE_S3_DRIVER="s3" # Will work, "s3" is uppercased ✅

| Variable | Description | Default Value |
| --- | --- | --- |
| STORAGE_LOCATIONS | A CSV of storage locations (e.g., local,digitalocean,amazon) to use. You can use any names you'd like for these keys. | local |

For each of the storage locations listed, you must provide the following configuration:

| Variable | Description | Default Value |
| --- | --- | --- |
| STORAGE_<LOCATION>_DRIVER | Which driver to use, either local, s3, gcs, azure. | -- |
| STORAGE_<LOCATION>_ROOT | Where to store the files on disk. | '' |
| STORAGE_<LOCATION>_HEALTHCHECK_THRESHOLD | Healthcheck timeout threshold in ms. | 750 |

Based on your configured driver, you must also provide the following configurations:

Local (local)

| Variable | Description | Default Value |
| --- | --- | --- |
| STORAGE_<LOCATION>_ROOT | Where to store the files on disk. | -- |

S3 (s3)

| Variable | Description | Default Value |
| --- | --- | --- |
| STORAGE_<LOCATION>_KEY | User key. | -- |
| STORAGE_<LOCATION>_SECRET | User secret. | -- |
| STORAGE_<LOCATION>_BUCKET | S3 bucket. | -- |
| STORAGE_<LOCATION>_REGION | S3 region. | -- |
| STORAGE_<LOCATION>_ENDPOINT | S3 endpoint. | s3.amazonaws.com |
| STORAGE_<LOCATION>_ACL | S3 ACL. | -- |
| STORAGE_<LOCATION>_SERVER_SIDE_ENCRYPTION | S3 server side encryption. | -- |

Azure (azure)

| Variable | Description | Default Value |
| --- | --- | --- |
| STORAGE_<LOCATION>_CONTAINER_NAME | Azure Storage container. | -- |
| STORAGE_<LOCATION>_ACCOUNT_NAME | Azure Storage account name. | -- |
| STORAGE_<LOCATION>_ACCOUNT_KEY | Azure Storage key. | -- |
| STORAGE_<LOCATION>_ENDPOINT | Azure URL. | https://{ACCOUNT_NAME}.blob.core.windows.net |

Google Cloud Storage (gcs)

| Variable | Description | Default Value |
| --- | --- | --- |
| STORAGE_<LOCATION>_KEY_FILENAME | Path to key file on disk. | -- |
| STORAGE_<LOCATION>_BUCKET | Google Cloud Storage bucket. | -- |

Example: Multiple Storage Adapters

Below showcases a CSV of storage location names, with a config block for each:

STORAGE_LOCATIONS="local,aws"

STORAGE_LOCAL_DRIVER="local"
STORAGE_LOCAL_ROOT="local"

STORAGE_AWS_KEY="tp15c...510vk"
STORAGE_AWS_SECRET="yk29b...b932n"
STORAGE_AWS_REGION="us-east-2"
STORAGE_AWS_BUCKET="my-files"

Metadata

When uploading an image, Directus persists the description, title, and tags from available EXIF metadata. For security purposes, collection of additional metadata must be configured:

| Variable | Description | Default Value |
| --- | --- | --- |
| FILE_METADATA_ALLOW_LIST | A comma-separated list of metadata keys to collect during file upload. Use * for all [1]. | ifd0.Make,ifd0.Model,exif.FNumber,exif.ExposureTime,exif.FocalLength,exif.ISO |

[1]: Extracting all metadata might cause memory issues when the file has an unusually large set of metadata.

Assets

| Variable | Description | Default Value |
| --- | --- | --- |
| ASSETS_CACHE_TTL | How long assets will be cached for in the browser. Sets the max-age value of the Cache-Control header. | 30m |
| ASSETS_TRANSFORM_MAX_CONCURRENT | How many file transformations can be done simultaneously. | 4 |
| ASSETS_TRANSFORM_IMAGE_MAX_DIMENSION | The max pixel dimension (width/height) that is allowed to be transformed. | 6000 |
| ASSETS_TRANSFORM_MAX_OPERATIONS | The max number of transform operations that is allowed to be processed (excludes saved presets). | 5 |
| ASSETS_CONTENT_SECURITY_POLICY | Custom overrides for the Content-Security-Policy header. See helmet's documentation for more information. | -- |

Image transformations can be fairly heavy on memory usage. If you're using a system with 1GB or less available memory, we recommend lowering the allowed concurrent transformations to prevent overloading your server.

Authentication

| Variable | Description | Default Value |
| --- | --- | --- |
| AUTH_PROVIDERS | A comma-separated list of auth providers. | -- |
| AUTH_DISABLE_DEFAULT | Disable the default auth provider. | false |

For each auth provider you list, you must also provide the following configuration:

| Variable | Description | Default Value |
| --- | --- | --- |
| AUTH_<PROVIDER>_DRIVER | Which driver to use, either local, oauth2, openid, ldap, saml. | -- |

You may also be required to specify additional variables depending on the auth driver. See configuration details below.

Multiple Providers

Directus users can only authenticate using the auth provider they are created with. It is not possible to authenticate with multiple providers for the same user.

Local (local)

The default Directus email/password authentication flow.

No additional configuration required.

SSO (oauth2 and openid)

Directus' SSO integrations provide powerful alternative ways to authenticate into your project. Directus will ask you to log in on the external service, and return you authenticated with a Directus account linked to that service.

For example, you can login to Directus using a GitHub account by creating an OAuth 2.0 app in GitHub and adding the following configuration to Directus:

AUTH_PROVIDERS="github"

AUTH_GITHUB_DRIVER="oauth2"
AUTH_GITHUB_CLIENT_ID="99d3...c3c4"
AUTH_GITHUB_CLIENT_SECRET="34ae...f963"
AUTH_GITHUB_AUTHORIZE_URL="https://github.com/login/oauth/authorize"
AUTH_GITHUB_ACCESS_URL="https://github.com/login/oauth/access_token"
AUTH_GITHUB_PROFILE_URL="https://api.github.com/user"

More example SSO configurations can be found here.

PUBLIC_URL

These flows rely on the PUBLIC_URL variable for redirecting. Ensure the variable is correctly configured.

OAuth 2.0

| Variable | Description | Default Value |
| --- | --- | --- |
| AUTH_<PROVIDER>_CLIENT_ID | Client identifier for the OAuth provider. | -- |
| AUTH_<PROVIDER>_CLIENT_SECRET | Client secret for the OAuth provider. | -- |
| AUTH_<PROVIDER>_SCOPE | A white-space separated list of permissions to request. | email |
| AUTH_<PROVIDER>_AUTHORIZE_URL | Authorization page URL of the OAuth provider. | -- |
| AUTH_<PROVIDER>_ACCESS_URL | Access token URL of the OAuth provider. | -- |
| AUTH_<PROVIDER>_PROFILE_URL | User profile URL of the OAuth provider. | -- |
| AUTH_<PROVIDER>_IDENTIFIER_KEY | User profile identifier key [1]. Will default to EMAIL_KEY. | -- |
| AUTH_<PROVIDER>_EMAIL_KEY | User profile email key. | email |
| AUTH_<PROVIDER>_FIRST_NAME_KEY | User profile first name key. | -- |
| AUTH_<PROVIDER>_LAST_NAME_KEY | User profile last name key. | -- |
| AUTH_<PROVIDER>_ALLOW_PUBLIC_REGISTRATION | Automatically create accounts for authenticating users. | false |
| AUTH_<PROVIDER>_DEFAULT_ROLE_ID | A Directus role ID to assign created users. | -- |
| AUTH_<PROVIDER>_ICON | SVG icon to display with the login link. See options here. | account_circle |
| AUTH_<PROVIDER>_LABEL | Text to be presented on the SSO button within the App. | <PROVIDER> |
| AUTH_<PROVIDER>_PARAMS | Custom query parameters applied to the authorization URL. | -- |

[1] When authenticating, Directus will match the identifier value from the external user profile to a Directus user's "External Identifier".

OpenID

OpenID is an authentication protocol built on OAuth 2.0, and should be preferred over standard OAuth 2.0 where possible.

| Variable | Description | Default Value |
| --- | --- | --- |
| AUTH_<PROVIDER>_CLIENT_ID | Client identifier for the external service. | -- |
| AUTH_<PROVIDER>_CLIENT_SECRET | Client secret for the external service. | -- |
| AUTH_<PROVIDER>_SCOPE | A white-space separated list of permissions to request. | openid profile email |
| AUTH_<PROVIDER>_ISSUER_URL | OpenID .well-known discovery document URL of the external service. | -- |
| AUTH_<PROVIDER>_IDENTIFIER_KEY | User profile identifier key [1]. | sub [2] |
| AUTH_<PROVIDER>_ALLOW_PUBLIC_REGISTRATION | Automatically create accounts for authenticating users. | false |
| AUTH_<PROVIDER>_REQUIRE_VERIFIED_EMAIL | Require created users to have a verified email address. | false |
| AUTH_<PROVIDER>_DEFAULT_ROLE_ID | A Directus role ID to assign created users. | -- |
| AUTH_<PROVIDER>_ICON | SVG icon to display with the login link. See options here. | account_circle |
| AUTH_<PROVIDER>_LABEL | Text to be presented on the SSO button within the App. | <PROVIDER> |
| AUTH_<PROVIDER>_PARAMS | Custom query parameters applied to the authorization URL. | -- |

[1] When authenticating, Directus will match the identifier value from the external user profile to a Directus user's "External Identifier".

[2] sub represents a unique user identifier defined by the OpenID provider. For users not relying on PUBLIC_REGISTRATION it is recommended to use a human-readable identifier, such as email.

LDAP (ldap)

LDAP allows Active Directory users to authenticate and use Directus without having to be manually configured. User information and roles will be assigned from Active Directory.

| Variable | Description | Default Value |
| --- | --- | --- |
| AUTH_<PROVIDER>_CLIENT_URL | LDAP connection URL. | -- |
| AUTH_<PROVIDER>_BIND_DN | Bind user [1] distinguished name. | -- |
| AUTH_<PROVIDER>_BIND_PASSWORD | Bind user password. | -- |
| AUTH_<PROVIDER>_USER_DN | Directory path containing users. | -- |
| AUTH_<PROVIDER>_USER_ATTRIBUTE | Attribute to identify the user. | cn |
| AUTH_<PROVIDER>_USER_SCOPE | Scope of the user search, either base, one, sub [2]. | one |
| AUTH_<PROVIDER>_MAIL_ATTRIBUTE | User email attribute. | mail |
| AUTH_<PROVIDER>_FIRST_NAME_ATTRIBUTE | User first name attribute. | givenName |
| AUTH_<PROVIDER>_LAST_NAME_ATTRIBUTE | User last name attribute. | sn |
| AUTH_<PROVIDER>_GROUP_DN [3] | Directory path containing groups. | -- |
| AUTH_<PROVIDER>_GROUP_ATTRIBUTE | Attribute to identify the user as a member of a group. | member |
| AUTH_<PROVIDER>_GROUP_SCOPE | Scope of the group search, either base, one, sub [2]. | one |
| AUTH_<PROVIDER>_DEFAULT_ROLE_ID | A fallback Directus role ID to assign created users. | -- |

[1] The bind user must have permission to query users and groups to perform authentication. Anonymous binding can be achieved by setting an empty value for BIND_DN and BIND_PASSWORD.

[2] The scope defines the following behaviors:

  • base: Limits the scope to a single object defined by the associated DN.
  • one: Searches all objects within the associated DN.
  • sub: Searches all objects and sub-objects within the associated DN.

[3] If GROUP_DN is specified, the user's role will always be updated on authentication to a matching group configured in AD, or fallback to the DEFAULT_ROLE_ID.

Example: LDAP

AUTH_PROVIDERS="ldap"

AUTH_LDAP_DRIVER="ldap"
AUTH_LDAP_CLIENT_URL="ldap://ldap.directus.io"
AUTH_LDAP_BIND_DN="CN=Bind User,OU=Users,DC=ldap,DC=directus,DC=io"
AUTH_LDAP_BIND_PASSWORD="p455w0rd"
AUTH_LDAP_USER_DN="OU=Users,DC=ldap,DC=directus,DC=io"
AUTH_LDAP_GROUP_DN="OU=Groups,DC=ldap,DC=directus,DC=io"

SAML

SAML is an open standard, XML-based framework for authentication and authorization between two entities without a password.

  • Service provider (SP) agrees to trust the identity provider to authenticate users.

  • Identity provider (IdP) authenticates users and provides service providers with an authentication assertion that indicates a user has been authenticated.

| Variable | Description | Default Value |
| --- | --- | --- |
| AUTH_<PROVIDER>_SP_metadata | String containing XML metadata for the service provider, or a URL to fetch it from. | -- |
| AUTH_<PROVIDER>_IDP_metadata | String containing XML metadata for the identity provider, or a URL to fetch it from. | -- |
| AUTH_<PROVIDER>_ALLOW_PUBLIC_REGISTRATION | Automatically create accounts for authenticating users. | false |
| AUTH_<PROVIDER>_DEFAULT_ROLE_ID | A Directus role ID to assign created users. | -- |
| AUTH_<PROVIDER>_IDENTIFIER_KEY | User profile identifier key [1]. Will default to EMAIL_KEY. | -- |
| AUTH_<PROVIDER>_EMAIL_KEY | User profile email key. | email |

[1] When authenticating, Directus will match the identifier value from the external user profile to a Directus user's "External Identifier".

The SP_metadata and IDP_metadata variables should be set to the XML metadata provided by the service provider and identity provider respectively or can be set to a URL that will be fetched on startup.

Example: Multiple Auth Providers

You can configure multiple providers for handling authentication in Directus. This allows for different options when logging in. To do this, provide a comma-separated list of provider names, and a config block for each provider:

AUTH_PROVIDERS="google,facebook"

AUTH_GOOGLE_DRIVER="openid"
AUTH_GOOGLE_CLIENT_ID="830d...29sd"
AUTH_GOOGLE_CLIENT_SECRET="la23...4k2l"
AUTH_GOOGLE_ISSUER_URL="https://accounts.google.com/.well-known/openid-configuration"
AUTH_GOOGLE_IDENTIFIER_KEY="email"
AUTH_GOOGLE_ICON="google"
AUTH_GOOGLE_LABEL="Google"

AUTH_FACEBOOK_DRIVER="oauth2"
AUTH_FACEBOOK_CLIENT_ID="830d...29sd"
AUTH_FACEBOOK_CLIENT_SECRET="jd8x...685z"
AUTH_FACEBOOK_AUTHORIZE_URL="https://www.facebook.com/dialog/oauth"
AUTH_FACEBOOK_ACCESS_URL="https://graph.facebook.com/oauth/access_token"
AUTH_FACEBOOK_PROFILE_URL="https://graph.facebook.com/me?fields=email"
AUTH_FACEBOOK_ICON="facebook"
AUTH_FACEBOOK_LABEL="Facebook"

Extensions

| Variable | Description | Default Value |
| --- | --- | --- |
| EXTENSIONS_PATH | Path to your local extensions folder. | ./extensions |
| EXTENSIONS_AUTO_RELOAD | Automatically reload extensions when they have changed. | false |

Messenger

| Variable | Description | Default Value |
| --- | --- | --- |
| MESSENGER_STORE | One of memory, redis [1]. | memory |
| MESSENGER_NAMESPACE | How to scope the channels in Redis. | directus |
| MESSENGER_REDIS_* | The Redis configuration for the pub/sub connection. | -- |

[1] redis should be used in load-balanced installations of Directus

Email

| Variable | Description | Default Value |
| --- | --- | --- |
| EMAIL_VERIFY_SETUP | Check if email setup is properly configured. | true |
| EMAIL_FROM | Email address from which emails are sent. | no-reply@directus.io |
| EMAIL_TRANSPORT | What to use to send emails. One of sendmail, smtp, mailgun, sendgrid, ses. | sendmail |

Based on the EMAIL_TRANSPORT used, you must also provide the following configurations:

Sendmail (sendmail)

| Variable | Description | Default Value |
| --- | --- | --- |
| EMAIL_SENDMAIL_NEW_LINE | What new line style to use in sendmail. | unix |
| EMAIL_SENDMAIL_PATH | Path to your sendmail executable. | /usr/sbin/sendmail |

SMTP (smtp)

| Variable | Description | Default Value |
| --- | --- | --- |
| EMAIL_SMTP_NAME | SMTP name. | -- |
| EMAIL_SMTP_HOST | SMTP host. | -- |
| EMAIL_SMTP_PORT | SMTP port. | -- |
| EMAIL_SMTP_USER | SMTP user. | -- |
| EMAIL_SMTP_PASSWORD | SMTP password. | -- |
| EMAIL_SMTP_POOL | Use SMTP pooling. | -- |
| EMAIL_SMTP_SECURE | Enable TLS. | -- |
| EMAIL_SMTP_IGNORE_TLS | Ignore TLS. | -- |

Mailgun (mailgun)

| Variable | Description | Default Value |
| --- | --- | --- |
| EMAIL_MAILGUN_API_KEY | Your Mailgun API key. | -- |
| EMAIL_MAILGUN_DOMAIN | A domain from your Mailgun account. | -- |
| EMAIL_MAILGUN_HOST | Allows you to specify a custom host. | api.mailgun.net |

SendGrid (sendgrid)

| Variable | Description | Default Value |
| --- | --- | --- |
| EMAIL_SENDGRID_API_KEY | Your SendGrid API key. | -- |

AWS SES (ses)

| Variable | Description | Default Value |
| --- | --- | --- |
| EMAIL_SES_CREDENTIALS__ACCESS_KEY_ID | Your AWS SES access key ID. | -- |
| EMAIL_SES_CREDENTIALS__SECRET_ACCESS_KEY | Your AWS SES secret key. | -- |
| EMAIL_SES_REGION | Your AWS SES region. | -- |

Admin Account

If you're relying on Docker and/or the directus bootstrap CLI command, you can pass the following two environment variables to automatically configure the first user:

| Variable | Description | Default Value |
| --- | --- | --- |
| ADMIN_EMAIL | The email address of the first user that's automatically created when using directus bootstrap. | -- |
| ADMIN_PASSWORD | The password of the first user that's automatically created when using directus bootstrap. | -- |

Telemetry

To more accurately gauge the frequency of installation, version fragmentation, and the general size of the user base, Directus collects limited, anonymized data about your environment. You can easily opt out with the following environment variable:

| Variable | Description | Default Value |
| --- | --- | --- |
| TELEMETRY | Allow Directus to collect anonymized data about your environment. | true |

Limits & Optimizations

Allows you to configure hard technical limits to prevent abuse and optimize for your particular server environment.

| Variable | Description | Default Value |
| --- | --- | --- |
| RELATIONAL_BATCH_SIZE | How many rows are read into memory at a time when constructing nested relational datasets. | 25000 |
| EXPORT_BATCH_SIZE | How many rows are read into memory at a time when constructing exports. | 5000 |