Alliance Auth
Welcome to the official documentation for Alliance Auth!
Alliance Auth is a website that helps Eve Online organizations efficiently manage access to applications and external services.
Installation - Bare Metal
This chapter contains the main installation guides for Alliance Auth.
In addition to the main guide for installing Alliance Auth, you will also find guides for configuring web servers (Apache, NGINX) and the recommended WSGI server (Gunicorn).
Alliance Auth
This document describes how to install Alliance Auth from scratch.
Note
There are additional installation steps for activating services and apps that come with Alliance Auth. Please see the page for the respective service or app in the Features chapter for details.
Dependencies
Operating Systems
Alliance Auth can be installed on any in-support *nix operating system.
Our install documentation targets the following operating systems.
Ubuntu 20.04
Ubuntu 22.04
CentOS 7
CentOS Stream 8
CentOS Stream 9
To install on your favorite flavour of Linux, identify and install equivalent packages to the ones listed here.
OS Maintenance
It is recommended to ensure your OS is fully up-to-date before proceeding. This is also where we add any package repositories that are used later in this guide.
Ubuntu 20.04, 22.04:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
CentOS 7:
sudo yum install epel-release
sudo yum upgrade
CentOS Stream 8:
sudo dnf config-manager --set-enabled powertools
sudo dnf install epel-release epel-next-release
sudo yum upgrade
CentOS Stream 9:
sudo dnf config-manager --set-enabled crb
sudo dnf install epel-release epel-next-release
sudo yum upgrade
Python
Install Python 3.11 and related tools on your system.
Ubuntu 20.04, 22.04:
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update
sudo apt-get install python3.11 python3.11-dev python3.11-venv
CentOS 7, Stream 8, Stream 9:
We need to build Python from source.
cd ~
sudo yum install gcc openssl-devel bzip2-devel libffi-devel wget
wget https://www.python.org/ftp/python/3.11.7/Python-3.11.7.tgz
tar xvf Python-3.11.7.tgz
cd Python-3.11.7/
./configure --enable-optimizations --enable-shared
sudo make altinstall
Database
It’s recommended to use a database service instead of SQLite. Many options are available, but this guide will use MariaDB 10.11.
Ubuntu 20.04, 22.04:
Follow the instructions at https://mariadb.org/download/?t=repo-config&d=20.04+”focal”&v=10.11&r_m=osuosl to add the MariaDB repository to your host.
sudo apt-get install mariadb-server mariadb-client libmysqlclient-dev
CentOS 7:
Follow the instructions at https://mariadb.org/download/?t=repo-config&d=CentOS+7&v=10.11&r_m=osuosl to add the MariaDB repository to your host.
sudo yum install MariaDB-server MariaDB-client MariaDB-devel MariaDB-shared
CentOS Stream 8, Stream 9:
Follow the instructions at https://mariadb.org/download/?t=repo-config&d=CentOS+Stream&v=10.11&r_m=osuosl to add the MariaDB repository to your host.
sudo dnf install mariadb mariadb-server mariadb-devel
Important
If you don’t plan on running the database on the same server as auth, you still need to install the libmysqlclient-dev package (Ubuntu) or the mariadb-devel package (CentOS).
Redis and Other Tools
A few extra utilities are also required for installation of packages.
Ubuntu 20.04, 22.04:
sudo apt-get install unzip git redis-server curl libssl-dev libbz2-dev libffi-dev build-essential pkg-config
CentOS 7:
sudo yum install gcc gcc-c++ unzip git redis curl bzip2-devel openssl-devel libffi-devel wget pkg-config
sudo systemctl enable redis.service
sudo systemctl start redis.service
CentOS Stream 8, Stream 9:
sudo dnf install gcc gcc-c++ unzip git redis curl bzip2-devel openssl-devel libffi-devel wget
sudo systemctl enable redis.service
sudo systemctl start redis.service
Database Setup
Alliance Auth needs a MySQL user account and database. Open an SQL shell with:
sudo mysql -u root
Create them as follows, replacing PASSWORD with an actual secure password:
CREATE USER 'allianceserver'@'localhost' IDENTIFIED BY 'PASSWORD';
CREATE DATABASE alliance_auth CHARACTER SET utf8mb4;
GRANT ALL PRIVILEGES ON alliance_auth.* TO 'allianceserver'@'localhost';
Once your database is set up, you can leave the SQL shell with the exit command.
Add timezone tables to your mysql installation:
mysql_tzinfo_to_sql /usr/share/zoneinfo | sudo mysql -u root mysql
Note
You may see errors when you add the timezone tables. To make sure that they were correctly added, run the following commands and check for the time_zone tables:
mysql -u root -p
use mysql;
show tables;
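If the import worked, the output of show tables should include several standard MySQL/MariaDB timezone tables, for example:
time_zone
time_zone_leap_second
time_zone_name
time_zone_transition
time_zone_transition_type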
Close the SQL shell and secure your database server with this command:
mysql_secure_installation
Auth Install
User Account
For security and permissions, it’s highly recommended you create a separate user to install auth under. Do not log in as this account.
sudo adduser --disabled-login allianceserver
sudo passwd -l allianceserver
Prepare Directories
sudo mkdir -p /var/www/myauth/static
sudo chown -R allianceserver:allianceserver /var/www/myauth/static/
Warning
When installing and performing maintenance on Alliance Auth, virtual environments, and Python packages, keep in mind that sudo means “superuser do”. Commands run with sudo will not use your venv or your allianceserver user and will routinely break your permission structure.
Only use sudo for system management or, if you are unsure, only when explicitly instructed to do so.
sudo su allianceserver
Virtual Environment
Switch to the allianceserver user.
sudo su allianceserver
And switch to its home directory:
cd ~
Create a Python virtual environment and put it somewhere convenient (e.g., /home/allianceserver/venv/auth/).
Note
Your python3.x command/version may vary depending on your installed python version.
python3.11 -m venv /home/allianceserver/venv/auth/
Tip
A virtual environment provides support for creating a lightweight “copy” of Python with its own site directories. Each virtual environment has its own Python binary (allowing creation of environments with various Python versions) and can have its own independent set of installed Python packages in its site directories. You can read more about virtual environments in the Python docs: https://docs.python.org/3/library/venv.html
Activate the virtual environment (note the /bin/activate on the end of the path):
source /home/allianceserver/venv/auth/bin/activate
Hint
Each time you come to do maintenance on your Alliance Auth installation, you should activate your virtual environment first. When finished, deactivate it with the deactivate command.
Eve Online SSO
You need to have a dedicated EVE SSO app for Alliance Auth. Please go to the EVE Developers site to create one.
Your SSO app needs at least the publicData scope. Additional scopes depend on which Alliance Auth apps you will be using. For convenience, we recommend adding all available ESI scopes to your SSO app. Note that Alliance Auth will always ask users to approve specific scopes before they are used.
As the callback URL, define the URL of your Alliance Auth site plus the route /sso/callback. Example of a valid callback URL: https://auth.example.com/sso/callback
Alliance Auth Project
Warning
Before installing any Python packages, please double-check that you have activated the virtual environment. This is usually indicated by your command line in the terminal starting with (auth).
Install Python packages
Update & install basic tools before installing further Python packages:
pip install -U pip setuptools wheel
You can install Alliance Auth with the following command. This will install AA, AA’s Python dependencies, superlance for memory monitoring, and Gunicorn as a WSGI server:
pip install allianceauth superlance gunicorn
Create the Alliance Auth project
Now you need to create the Django project that will run Alliance Auth. Ensure you are in the allianceserver home directory by issuing:
cd /home/allianceserver
The following command bootstraps a Django project which will run your Alliance Auth instance. You can rename it from myauth
to anything you’d like. Note that this name is shown by default as the site name but that can be changed later.
allianceauth start myauth
Update settings
Your settings file needs configuring:
nano myauth/myauth/settings/local.py
Be sure to configure:
Your site URL as SITE_URL
The database user account set up earlier in Database Setup
ESI_SSO_CLIENT_ID and ESI_SSO_CLIENT_SECRET from the EVE Online Developers Portal from earlier in Eve Online SSO
ESI_USER_CONTACT_EMAIL set to an email address, to ensure that CCP has reliable contact information for you
Valid email server settings
A minimal sketch of these entries is shown below.
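For orientation, here is a minimal sketch of what these entries in local.py could look like. All values are placeholders taken from the examples in this guide (site URL, database name and user, SSO credentials), and the exact structure of your generated local.py may differ slightly; treat this as an illustration, not a drop-in config.

# Placeholder values only - replace with your own
SITE_URL = "https://auth.example.com"

DATABASES["default"] = {
    "ENGINE": "django.db.backends.mysql",
    "NAME": "alliance_auth",
    "USER": "allianceserver",
    "PASSWORD": "PASSWORD",  # the password chosen in Database Setup
    "HOST": "127.0.0.1",
    "PORT": "3306",
    "OPTIONS": {"charset": "utf8mb4"},
}

ESI_SSO_CLIENT_ID = "<client id from the Developers Portal>"
ESI_SSO_CLIENT_SECRET = "<client secret from the Developers Portal>"
ESI_USER_CONTACT_EMAIL = "admin@example.com"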
Install database & static files
Django needs to set up the database before it can start.
python /home/allianceserver/myauth/manage.py migrate
Now we need to round up all the static files required to render templates. Make a directory to serve them from and populate it.
python /home/allianceserver/myauth/manage.py collectstatic --noinput
Check to ensure your settings are valid.
python /home/allianceserver/myauth/manage.py check
Hint
If you are using root, ensure the allianceserver user has read/write permissions to this directory before proceeding:
chown -R allianceserver:allianceserver /home/allianceserver/myauth
Setup superuser
Before using your auth site, it is essential to create a superuser account. This account will have all permissions in Alliance Auth. It’s OK to use this as your personal auth account.
python /home/allianceserver/myauth/manage.py createsuperuser
Once your installation is complete, the superuser account is accessed by logging in via the admin site at https://example.com/admin
.
If you intend to use this account as your personal auth account, you need to add a main character. Navigate to the normal user dashboard (at https://example.com
) after logging in via the admin site and select Change Main
. Once a main character has been added, it is possible to use SSO to log in to this account.
Services
Alliance Auth needs some additional services to run, which we will set up and configure next.
Gunicorn
To run the Alliance Auth website, a WSGI server is required. Gunicorn is highly recommended for its ease of configuration. It can be manually run from within your myauth
base directory with gunicorn --bind 0.0.0.0 myauth.wsgi
or automatically run using Supervisor.
If you don’t see any errors, this means that Gunicorn is running fine. You can stop it with Ctrl+C
now.
The default configuration is good enough for most installations. Additional information is available in the gunicorn doc.
Supervisor
Supervisor is a process watchdog service: it makes sure other processes are started automatically and kept running. It can be used to automatically start the WSGI server and Celery workers for background tasks.
Note
You will need to exit the allianceserver user back to a user with sudo capabilities to install supervisor:
exit
Ubuntu 20.04, 22.04:
sudo apt-get install supervisor
CentOS 7, Stream 8, Stream 9:
sudo dnf install supervisor
sudo systemctl enable supervisord.service
sudo systemctl start supervisord.service
Once installed, it needs a configuration file to know which processes to watch. Your Alliance Auth project comes with a ready-to-use template which will ensure the Celery workers, Celery task scheduler and Gunicorn are all running.
Ubuntu 20.04, 22.04:
sudo ln -s /home/allianceserver/myauth/supervisor.conf /etc/supervisor/conf.d/myauth.conf
CentOS 7, Stream 8, Stream 9:
sudo ln -s /home/allianceserver/myauth/supervisor.conf /etc/supervisord.d/myauth.ini
Activate it with sudo supervisorctl reload.
You can check the status of the processes with sudo supervisorctl status. Logs from these processes are available in /home/allianceserver/myauth/log, named by process.
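For reference, a healthy status output might look roughly like the following; the exact process names come from the generated supervisor.conf, and the pids and uptimes will differ on your system:
myauth:beat                      RUNNING   pid 1201, uptime 0:22:45
myauth:gunicorn                  RUNNING   pid 1202, uptime 0:22:45
myauth:worker                    RUNNING   pid 1203, uptime 0:22:45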
Note
Any time the code or your settings change, you’ll need to restart Gunicorn and Celery:
sudo supervisorctl restart myauth:
Web server
Decide on whether you’re going to use NGINX or Apache and follow the respective guide below.
Note that Alliance Auth is designed to run with web servers on HTTPS. While running on HTTP is technically possible, it is not recommended for production use, and some functions (e.g., Email confirmation links) will not work properly.
Updating
Periodically new releases are issued with bug fixes and new features. Be sure to read the release notes which will highlight changes.
To update your installation, swap to your allianceserver user
sudo su allianceserver
Activate your virtual environment
source /home/allianceserver/venv/auth/bin/activate
and update with:
pip install -U allianceauth
Some releases come with changes to the base settings. Update your project’s settings with:
allianceauth update /home/allianceserver/myauth
Some releases come with new or changed models. Update your database to reflect this with:
python /home/allianceserver/myauth/manage.py migrate
Finally, some releases come with new or changed static files. Run the following command to update your static files’ folder:
python /home/allianceserver/myauth/manage.py collectstatic --noinput
Always restart AA, Celery and Gunicorn after updating:
supervisorctl restart myauth:
NGINX
Overview
Nginx (engine x) is an HTTP server known for its high performance, stability, simple configuration, and low resource consumption. Unlike traditional servers (e.g., Apache), Nginx doesn’t rely on threads to serve requests, instead using an asynchronous event-driven approach which permits predictable resource usage and performance under load.
If you’re trying to cram Alliance Auth into a very small VPS of say, 1 to 2GB or less, then Nginx will be considerably friendlier to your resources compared to Apache.
You can read more about NGINX on the NGINX wiki.
Coming from Apache
If you’re converting from Apache, here are some things to consider.
Nginx is lightweight for a reason. It doesn’t try to do everything internally and instead concentrates on just being a good HTTP server. This means that, unlike Apache, it won’t automatically run PHP scripts via mod_php and doesn’t have an internal WSGI server like mod_wsgi. That doesn’t mean that it can’t, just that it relies on external processes to run these instead. This might be good or bad depending on your outlook. It’s good because it allows you to segment your applications, restarting Alliance Auth won’t impact your PHP applications. On the other hand, it means more config and more management of services. For some people it will be worth it, for others losing the centralised nature of Apache may not be worth it.
| Apache | Nginx Replacement |
|---|---|
| mod_php | php5-fpm or php7-fpm (PHP FastCGI) |
| mod_wsgi | Gunicorn or other external WSGI server |
Your .htaccess files won’t work. Nginx has a separate way of managing access to folders via the server config. Everything you can do with htaccess files you can do with Nginx config. Read more on the Nginx wiki.
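As a small, hypothetical illustration: a folder that a .htaccess file would have protected with a deny rule is instead blocked directly inside the server block of your Nginx config, for example:
location /private/ {
    deny all;
}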
Setting up Nginx
Install Nginx via your preferred package manager or other method. If you need help, search, there are plenty of guides on installing Nginx out there.
Nginx needs to be able to read the folder containing your auth project’s static files:
chown -R nginx:nginx /var/www/myauth/static
Tip
Some distros may use www-data:www-data instead of nginx:nginx, causing static files (images, stylesheets, etc.) not to appear. You can confirm which user Nginx runs under by checking either its base config file /etc/nginx/nginx.conf for the “user” setting, or, once Nginx has started, ps aux | grep nginx.
Adjust your chown commands to the correct user if needed.
You will need to have Gunicorn or some other WSGI server set up for hosting Alliance Auth.
Install
Ubuntu 20.04, 22.04:
sudo apt-get install nginx
CentOS 7:
sudo yum install nginx
CentOS Stream 8, Stream 9:
sudo dnf install nginx
Create a config file in /etc/nginx/sites-available (/etc/nginx/conf.d on CentOS) and call it alliance-auth.conf or whatever your preferred name is.
Create a symbolic link to enable the site (not needed on CentOS):
ln -s /etc/nginx/sites-available/alliance-auth.conf /etc/nginx/sites-enabled/
Basic config
Copy this basic config into your config file. Make whatever changes you feel are necessary.
server {
listen 80;
listen [::]:80;
server_name example.com;
location /static {
alias /var/www/myauth/static;
autoindex off;
}
location /robots.txt {
alias /var/www/myauth/static/robots.txt;
}
location /favicon.ico {
alias /var/www/myauth/static/allianceauth/icons/favicon.ico;
}
# Gunicorn config goes below
location / {
include proxy_params;
proxy_pass http://127.0.0.1:8000;
}
}
Restart Nginx after making changes to the config files. On Ubuntu use service nginx restart and on CentOS systemctl restart nginx.service.
Adding TLS/SSL
With Let’s Encrypt offering free SSL certificates, there’s no good reason not to run HTTPS anymore. Certbot can automatically configure Nginx on some operating systems; if not, proceed with the manual steps below.
Your config will need a few additions once you’ve got your certificate.
listen 443 ssl http2; # Replace listen 80; with this
listen [::]:443 ssl http2; # Replace listen [::]:80; with this
ssl_certificate /path/to/your/cert.crt;
ssl_certificate_key /path/to/your/cert.key;
ssl on;
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS;
ssl_prefer_server_ciphers on;
If you want to redirect all your non-SSL visitors to your secure site, add the following below your main config’s server block:
server {
listen 80;
listen [::]:80;
server_name example.com;
# Redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
return 301 https://$host$request_uri;
}
If you have trouble with the ssl_ciphers listed here or some other part of the SSL config, try getting the values from Mozilla’s SSL Config Generator.
Apache
Overview
Alliance Auth gets served using a Web Server Gateway Interface (WSGI) script. This script passes web requests to Alliance Auth, which generates the content to be displayed and returns it. This means very little has to be configured in Apache to host Alliance Auth.
If you’re using a small VPS to host services with very limited memory, consider using NGINX.
Installation
Ubuntu 20.04, 22.04:
apt-get install apache2
CentOS 7:
yum install httpd
CentOS Stream 8, Stream 9:
dnf install httpd
CentOS 7, Stream 8, Stream 9:
systemctl enable httpd
systemctl start httpd
Configuration
Permissions
Apache needs to be able to read the folder containing your auth project’s static files.
Ubuntu 20.04, 22.04:
chown -R www-data:www-data /var/www/myauth/static
CentOS 7, Stream 8, Stream 9:
chown -R apache:apache /var/www/myauth/static
Further Configuration
Apache serves sites through defined virtual hosts. These are located in /etc/apache2/sites-available/ on Ubuntu and /etc/httpd/conf.d/httpd.conf on CentOS.
A virtual host for auth needs only to proxy requests to your WSGI server (Gunicorn, if you followed the installation guide) and serve static files. Examples can be found below. Create your config in its own file, e.g., myauth.conf.
To proxy and modify headers a few mods need to be enabled.
a2enmod proxy
a2enmod proxy_http
a2enmod headers
Ubuntu 20.04, 22.04:
Create a new config file for auth, e.g. /etc/apache2/sites-available/myauth.conf, and fill out the virtual host configuration. To enable your config, use a2ensite myauth.conf and then reload Apache with service apache2 reload.
CentOS 7, Stream 8, Stream 9:
Place your virtual host configuration in the appropriate section within /etc/httpd/conf.d/httpd.conf and restart the httpd service with systemctl restart httpd.
Warning
On Ubuntu, in some scenarios the Apache default page is still enabled. To disable it, use:
a2dissite 000-default.conf
Sample Config File
<VirtualHost *:80>
ServerName auth.example.com
ProxyPassMatch ^/static !
ProxyPassMatch ^/robots.txt !
ProxyPassMatch ^/favicon.ico !
ProxyPass / http://127.0.0.1:8000/
ProxyPassReverse / http://127.0.0.1:8000/
ProxyPreserveHost On
Alias "/static" "/var/www/myauth/static"
Alias "/robots.txt" "/var/www/myauth/static/robots.txt"
Alias "/favicon.ico" "/var/www/myauth/static/allianceauth/icons/favicon.ico"
<Directory "/var/www/myauth/static">
Require all granted
</Directory>
<Location "/robots.txt">
SetHandler None
Require all granted
</Location>
<Location "/favicon.ico">
SetHandler None
Require all granted
</Location>
</VirtualHost>
SSL
There’s no reason to run a site without SSL. Let’s Encrypt provides free, renewable SSL certificates, and the EFF’s Certbot offers an automated installer. Visit their websites for information.
After acquiring SSL, the config file needs to be adjusted. Add the following lines inside the <VirtualHost> block:
RequestHeader set X-FORWARDED-PROTOCOL https
RequestHeader set X-FORWARDED-SSL On
Known Issues
Apache2 vs. Django
For some versions of Apache2, you might have to tell the Django framework explicitly to use SSL, since the automatic detection doesn’t work. SSL in general will work, but URLs created internally by Django might still be prefixed with http:// instead of https://, so it can’t hurt to add these lines to myauth/myauth/settings/local.py:
# Setup support for proxy headers
USE_X_FORWARDED_HOST = True
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTOCOL", "https")
Gunicorn
Gunicorn is a Python WSGI HTTP Server for UNIX. The Gunicorn server is light on server resources, and fairly speedy.
If you find Apache’s mod_wsgi
to be a headache or want to use NGINX (or some other webserver), then Gunicorn could be for you. There are a number of other WSGI server options out there, and this documentation should be enough for you to piece together how to get them working with your environment.
Check out the full Gunicorn docs.
Note
The page contains additional steps on how to set up and configure Gunicorn that are not required for users who decide to stick with the default Gunicorn configuration as described in the main installation guide for AA.
Setting up Gunicorn
Note
If you’re using a virtual environment, activate it now:
sudo su allianceserver
source /home/allianceserver/venv/auth/bin/activate
Install Gunicorn using pip
pip install gunicorn
In your myauth base directory, try running gunicorn --bind 0.0.0.0:8000 myauth.wsgi. You should be able to browse to http://yourserver:8000 and see your Alliance Auth installation running. Images and styling will be missing, but don’t worry, your web server will provide them.
Once you have validated that it’s running, you can kill the process with Ctrl+C and continue.
Running Gunicorn with Supervisor
If you are following this guide, we already use Supervisor to keep all of Alliance Auth’s components running. You don’t have to, but we will be using it to start and run Gunicorn for consistency.
Sample Supervisor config
You’ll want to edit /etc/supervisor/conf.d/myauth.conf (or whatever you want to call the config file):
[program:gunicorn]
user = allianceserver
directory=/home/allianceserver/myauth/
command=/home/allianceserver/venv/auth/bin/gunicorn myauth.wsgi --workers=3 --timeout 120
stdout_logfile=/home/allianceserver/myauth/log/gunicorn.log
stderr_logfile=/home/allianceserver/myauth/log/gunicorn.log
autostart=true
autorestart=true
stopsignal=INT
[program:gunicorn] - Change gunicorn to whatever you wish to call your process in Supervisor.
user = allianceserver - Change to whatever user you wish Gunicorn to run as. You could even set this as allianceserver if you wished. I’ll leave the security question of that up to you.
directory=/home/allianceserver/myauth/ - Needs to be the path to your Alliance Auth project.
command=/home/allianceserver/venv/auth/bin/gunicorn myauth.wsgi --workers=3 --timeout 120 - Running Gunicorn and the options to launch with. This is where you have some decisions to make. We’ll continue below.
Gunicorn Arguments
See the Commonly Used Arguments or Full list of settings for more information.
Where to bind Gunicorn to
What address are you going to use to reference it? By default, without a bind parameter, Gunicorn will bind to 127.0.0.1:8000. This might be fine for your application. If it clashes with another application running on that port, you will need to change it. I would suggest using UNIX sockets if you can.
For UNIX sockets, add --bind=unix:/run/allianceauth.sock (or another path you wish to use). Remember that your web server will need to be able to access this socket file.
For a TCP address, add --bind=127.0.0.1:8001 (or the address/port you wish to use, but I would strongly advise against binding it to an external address).
Whatever you decide to use, remember it because we’ll need it when configuring your webserver.
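For example, if you go with the UNIX socket bind suggested above, the matching location block in the Nginx config from the NGINX chapter would point at the socket instead of a TCP port (the socket path here is the example one from above):
location / {
    include proxy_params;
    proxy_pass http://unix:/run/allianceauth.sock;
}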
Number of workers
By default, Gunicorn will spawn only one worker. The number you set this to will depend on your own server environment and how many visitors you have. Gunicorn suggests (2 x $num_cores) + 1 for the number of workers. So, for example, if you have 2 cores, you want 2 x 2 + 1 = 5 workers. See the Gunicorn documentation for the official discussion on this topic.
Change it by adding --workers=5 to the command.
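If you are unsure how many cores your server has, the suggested formula can be evaluated directly on most Linux systems, for example:
echo $(( $(nproc) * 2 + 1 ))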
Running with a virtual environment
Following this guide, you are running with a virtual environment. Therefore, you’ll need to add the path to the command=
config line.
e.g. command=/path/to/venv/bin/gunicorn myauth.wsgi
The example config is using the myauth venv from the main installation guide:
command=/home/allianceserver/venv/auth/bin/gunicorn myauth.wsgi
Starting via Supervisor
Once you have your configuration all sorted, you will need to reload your Supervisor config with service supervisor reload, and then you can start the Gunicorn server via supervisorctl start myauth:gunicorn (or whatever you renamed it to). You should see something like the following: myauth-gunicorn: started. If you get some other message, you’ll need to consult the Supervisor log files, usually found in /var/log/supervisor/.
Configuring your webserver
Any web server capable of proxy passing should be able to sit in front of Gunicorn. Consult their documentation armed with your --bind=
address, and you should be able to find how to do it relatively easily.
Restarting Gunicorn
In the past, when you made changes, you restarted the entire Apache server. This is no longer required. When you update or make configuration changes that ask you to restart Apache, instead you can just restart Gunicorn:
supervisorctl restart myauth:gunicorn
Upgrading Python 3
This guide describes how to upgrade an existing Alliance Auth (AA) installation to a newer Python 3 version.
This guide shares many similarities with the Alliance Auth install guide, but it is targeted towards existing installations needing to update.
Note
This guide will upgrade the software components only but not change any data or configuration.
Install a new Python version
To run AA with a newer Python 3 version than your system’s default, you need to install it first. Technically, it would be possible to upgrade your system’s default Python 3, but since many of your system’s tools have been tested to work with that specific version, we would not recommend it. Instead, we recommend installing an additional Python 3 version alongside your default version and using that for AA.
To install other Python versions than those included with your distribution, you need to add a new installation repository. Then you can install the specific Python 3 to your system.
Note
Ubuntu 22.04 ships with Python 3.10 already.
CentOS Stream 8/9:
Note
A Python 3.9 package is available for Stream 8 and 9. You may use it instead of building your own, but our documentation assumes Python 3.11, and you may need to substitute as necessary:
sudo dnf install python39 python39-devel
Ubuntu 20.04, 22.04:
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update
sudo apt-get install python3.11 python3.11-dev python3.11-venv
CentOS 7, Stream 8, Stream 9 (build Python from source):
cd ~
sudo yum install gcc openssl-devel bzip2-devel libffi-devel wget
wget https://www.python.org/ftp/python/3.11.7/Python-3.11.7.tgz
tar xvf Python-3.11.7.tgz
cd Python-3.11.7/
./configure --enable-optimizations --enable-shared
sudo make altinstall
Preparing your venv
Before updating your venv, it is important to make sure that your current installation is stable. Otherwise, your new venv might not be consistent with your data, which might create problems.
Start by navigating to your main project folder (the one that has manage.py in it). If you followed the default installation, the path is: /home/allianceserver/myauth
Note
If you installed Alliance Auth under the allianceserver user, as recommended, remember to switch users for easier permission management:
sudo su allianceserver
Activate your venv:
source /home/allianceserver/venv/auth/bin/activate
Upgrade AA
Make sure to upgrade AA to the newest version:
pip install -U allianceauth
Run migrations and collectstatic.
python manage.py migrate
python manage.py collectstatic
Restart your AA supervisor:
supervisorctl restart myauth:
Upgrade your apps
You also need to upgrade all additional apps to their newest version that you have installed. And you need to make sure that you can reinstall all your apps later, e.g., you know from which repo they came. We recommend making a list of all your apps, so you can go through them later when you rebuild your venv.
If you are unsure which apps you have installed from repos, check INSTALLED_APPS in your settings. Alternatively, run this command to get a list of all apps in your venv.
pip list
Repeat as needed for each of your apps:
pip install -U APP_NAME
Make sure to run migrations and collect static files for all upgraded apps.
python manage.py migrate
python manage.py collectstatic
Restart and final check
Do a final restart of your AA supervisors and make sure your installation is still running normally.
For a final check that there are no issues - e.g., any outstanding migrations - run this command:
python manage.py check
If you get the following result, you are good to go. Otherwise, make sure to fix any issues first before proceeding.
System check identified no issues (0 silenced).
Backup current venv
Make sure you are in your venv!
First, we create a list of all installed packages in your venv. You can use this list later as a reference to see what packages should be installed.
pip freeze > requirements.txt
At this point, we recommend creating a list of the additional packages that you need to manually reinstall later on top of AA:
Community AA apps (e.g. aa-structures)
Additional tools you are using (e.g., flower, django-extensions)
Hint
While requirements.txt
will contain a complete list of your packages, it will also contain many packages that are automatically installed as dependencies and don’t need to be manually reinstalled.
Note
Some guides on the Internet will suggest using the requirements.txt
file to recreate a venv. This is indeed possible, but only works if all packages can be installed from PyPI. Since most community apps are installed directly from repos, this guide will not follow that approach.
Leave the venv and shutdown all AA services:
deactivate
supervisorctl stop myauth:
Rename and keep your old venv, so we have a fallback in case of some unforeseeable issues:
mv /home/allianceserver/venv/auth /home/allianceserver/venv/auth_old
Create your new venv
Now let’s create our new venv with Python 3.11 and activate it:
python3.11 -m venv /home/allianceserver/venv/auth
source /home/allianceserver/venv/auth/bin/activate
Reinstall packages
Now we need to reinstall all packages into your new venv.
Install basic packages
pip install -U pip setuptools wheel
Installing AA & Gunicorn
pip install allianceauth
pip install gunicorn
Install all other packages
Last, but not least, you need to reinstall all other packages, e.g., for AA community apps or additional tools.
Use the list of packages you created earlier as a checklist. Alternatively, you can use the requirements.txt file we created earlier to see what you need. During the installation process, you can run pip list to see what you have already installed.
To check whether you are missing any apps, you can also run the check command:
python manage.py check
Note: In case you forget to install an app, you will get this error
ModuleNotFoundError: No module named 'xyz'
Note that you should not need to run any migrations unless you forgot to upgrade one of your existing apps, or you got the newer version of an app through a dependency. In that case, you run migrations normally.
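For example, if the missing module belongs to a community app such as aa-structures (used as an example earlier), reinstalling and migrating it would look roughly like this; substitute the actual package name and install source of your app:
pip install -U aa-structures
python manage.py migrate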
Restart
After you have completed installing all packages, start your AA supervisor again.
supervisorctl start myauth:
We recommend keeping your old venv copy for a couple of days, so you have a fallback just in case. After that, you should be fine to remove it.
Fallback
In case you run into any major issue, you can always switch back to your initial venv.
Before you start double-check that you still have your old venv for auth:
ls /home/allianceserver/venv
If the output shows these two folders, you should be safe to proceed:
auth
auth_old
Run these commands to remove your current venv and switch back to the old venv for auth:
supervisorctl stop myauth:
rm -rf /home/allianceserver/venv/auth
mv /home/allianceserver/venv/auth_old /home/allianceserver/venv/auth
supervisorctl start myauth:
Switch to non-root
If you followed the official installation guide for Alliance Auth (AA) pre AA 3.x you usually ended up with a “root installation”. A root installation means that you have installed AA with the root user and now need to log in as root every time to perform maintenance for AA, e.g., updating existing apps.
Since working as root is generally not recommended, this guide explains how you can easily migrate your existing “root installation” to a “non-root installation”.
How to switch to non-root
We will change the setup so that you can use your allianceserver
user to perform most maintenance operations. In addition, you also need a sudo user for invoking root privileges, e.g., when restarting the AA services.
The migration itself is rather straightforward. The main idea is to change ownership for all relevant directories and files to allianceserver
.
First, log in as your sudo user and run the following commands in order:
# Set the right owner
sudo chown -R allianceserver: /home/allianceserver
sudo chown -R allianceserver: /var/www/myauth
# Remove static files, they will be re-added later
sudo rm -rf /var/www/myauth/static/*
# Fix directory permissions
sudo chmod -R 755 /var/www/myauth
That’s it. Your AA installation is now configured to be maintained with the allianceserver
user.
How to do maintenance with a non-root user
Here is how you can maintain your AA installation in the future:
First, log in with your sudo user.
Then, switch to the allianceserver
user:
sudo su allianceserver
Go to your home folder and activate your venv:
cd ~
source venv/auth/bin/activate
Finally, switch to the main AA folder, from where you can run most commands directly:
cd myauth
Now it’s time to re-add the static files with the right permissions. To do so simply run:
python manage.py collectstatic
When you want to restart myauth, you need to switch back to your sudo user, because allianceserver
does not have sudo privileges:
exit
sudo supervisorctl restart myauth:
Alternatively, you can open another terminal with your sudo user for restarting myauth. That has the added advantage that you can keep working with both your allianceserver user and your sudo user (for restarts) at the same time.
Installation - Containerized
This document describes how to install Alliance Auth using various Containerization techniques.
If you would like to install on Bare Metal instead, see the Installation - Bare Metal chapter.
Note
There are additional installation steps for activating services and apps that come with Alliance Auth. Please see the page for the respective service or app in the Features chapter for details.
Docker
Prerequisites
You should have the following available on the system you are using to set this up:
Docker - https://docs.docker.com/get-docker/
git
curl
Hint
If at any point docker compose does not work but docker-compose does, you have an older version of Docker (and Compose); please update before continuing. Be mindful of the difference between these two commands in any suggestions you copy and paste from the internet.
Setup Guide
1. Run bash <(curl -s https://gitlab.com/allianceauth/allianceauth/-/raw/master/docker/scripts/download.sh). This will download all the files you need to install Alliance Auth and place them in a directory named aa-docker. Feel free to rename/move this folder.
2. Run ./scripts/prepare-env.sh to set up your environment.
3. (Optional) Change PROTOCOL to http:// in .env if not using SSL.
4. Run docker compose --env-file=.env up -d (NOTE: if this command hangs, follow the instructions here).
5. Run docker compose exec allianceauth_gunicorn bash to open up a terminal inside an auth container.
6. Run auth migrate.
7. Run auth collectstatic.
8. Run auth createsuperuser.
9. Visit http://yourdomain:81 to set up Nginx Proxy Manager (NOTE: if this doesn’t work, the machine likely has a firewall; you’ll want to open up ports 80, 443, and 81. Instructions for ufw).
10. Log in with user admin@example.com and password changeme, then update your password as requested.
11. Click on “Proxy Hosts”.
12. Click “Add Proxy Host”, with the following settings for auth. The example uses auth.localhost for the domain, but you’ll want to use whatever address you have auth configured on.
13. Click “Add Proxy Host”, with the following settings for grafana. The example uses grafana.localhost for the domain.
Congrats! You should now see auth running at http://auth.yourdomain and grafana at http://grafana.yourdomain!
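To verify that everything came up, you can list the running containers and tail the logs of the auth container; these are standard Docker Compose commands, and the service name matches the one used in the steps above:
docker compose ps
docker compose logs --tail=50 allianceauth_gunicorn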
SSL Guide
Unless you’re running auth locally in Docker for testing, you should be using SSL. Thankfully, setting up SSL in Nginx Proxy Manager takes about three clicks.
Edit your existing proxy host, and go to the SSL tab. Select “Request a new SSL Certificate” from the drop down.
Now, enable “Force SSL” and “HTTP/2 Support”. (NOTE: Do not enable HSTS unless you know what you’re doing. This will force your domains to only work with SSL enabled, and is cached extremely hard in browsers. )
(optional) select “Use a DNS Challenge”. This is not a required option, but it is recommended if you use a supported DNS provider. You’ll then be asked for an API key for the provider you choose. If you use Cloudflare, you’ll probably have issues getting SSL certs unless you use a DNS Challenge.
The email address here will be used to notify you if there are issues renewing your certificates.
Repeat for any other services, like grafana.
That’s it! You should now be able to access your auth install at https://auth.yourdomain
Adding extra packages
There are a handful of ways to add packages:
Running pip install in the containers
Modifying the container’s initial command to install packages
Building a custom Docker image (recommended, and less scary than it sounds!)
Using a custom docker image
Using a custom Docker image is the preferred approach, as it gives you the stability of packages only changing when you tell them to, and packages don’t have to be downloaded every time your container restarts.
1. Add each additional package that you want to install to a single line in conf/requirements.txt. It is recommended, but not required, that you include a version number as well; this will keep your packages from magically updating. You can look up packages on https://pypi.org and copy from the title at the top of the page to use the most recent version. It should look something like allianceauth-signal-pings==0.0.7. Every entry in this file should be on a separate line.
2. Modify docker-compose.yml as follows: comment out the image line under allianceauth and uncomment the build section, e.g.
x-allianceauth-base: &allianceauth-base
  # image: ${AA_DOCKER_TAG?err}
  build:
    context: .
    dockerfile: custom.dockerfile
    args:
      AA_DOCKER_TAG: ${AA_DOCKER_TAG?err}
  restart: always
  ...
3. Run docker compose --env-file=.env up -d; your custom container will be built, and auth will have your new packages. Make sure to follow each package’s instructions on config values that go in local.py.
4. Run docker compose exec allianceauth_gunicorn bash to open up a terminal inside your auth container.
5. Run allianceauth update myauth.
6. Run auth migrate.
7. Run auth collectstatic.
NOTE: It is recommended that you put any secret values (API keys, database credentials, etc.) in an environment variable instead of hardcoding them into local.py. This gives you the ability to track your config in git without committing passwords. To do this, just add the value to your .env file, and then reference it in local.py with os.environ.get("SECRET_NAME")
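As a hypothetical example (the variable name SERVICE_API_KEY is made up for illustration), the wiring looks like this:
# .env
SERVICE_API_KEY=changeme-example-value

# local.py
import os
SERVICE_API_KEY = os.environ.get("SERVICE_API_KEY")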
Updating Auth
Base Image
Whether you’re using a custom image or not, the version of auth is dictated by $AA_DOCKER_TAG in your .env
file.
To update to a new version of auth, update the version number at the end (or replace the whole value with the tag in the release notes).
1. Run docker compose pull
2. Run docker compose --env-file=.env up -d
3. Run docker compose exec allianceauth_gunicorn bash to open up a terminal inside your auth container
4. Run allianceauth update myauth
5. Run auth migrate
6. Run auth collectstatic
NOTE: If you specify a version of allianceauth in your requirements.txt in a custom image, it will override the version from the base image. Not recommended unless you know what you’re doing.
Custom Packages
1. Update the versions in your requirements.txt file
2. Run docker compose build
3. Run docker compose --env-file=.env up -d
Migrating your Docker Compose stack from AA V3.x to AA v4.x
Our Docker Compose stack has changed significantly and, depending on your level of familiarity with Docker, become drastically simpler.
We have removed the need to run Supervisor inside the container for the various tasks, split the stack into multiple containers each responsible for a single task, and modernized many elements.
aa-docker/conf/*
We are bundling a few often-customized files alongside our AA install for easier modification by users. You will need to download these into aa-docker/conf:
wget https://gitlab.com/allianceauth/allianceauth/-/raw/v4.x/docker/conf/celery.py
wget https://gitlab.com/allianceauth/allianceauth/-/raw/v4.x/docker/conf/urls.py
wget https://gitlab.com/allianceauth/allianceauth/-/raw/v4.x/docker/conf/memory_check.sh
wget https://gitlab.com/allianceauth/allianceauth/-/raw/v4.x/docker/conf/redis_healthcheck.sh
Docker Compose
At this point you should take a copy of your docker-compose and take note of any additional volumes or configurations you have, and why.
Take a complete backup of your local.py, docker-compose and SQL database.
docker compose down
Replace your conf/nginx.conf with the contents of https://gitlab.com/allianceauth/allianceauth/-/raw/v4.x/docker/conf/nginx.conf
Replace your docker-compose.yml with the contents of https://gitlab.com/allianceauth/allianceauth/-/raw/v4.x/docker/docker-compose.yml
V3.x installs likely used a dedicated database for Nginx Proxy Manager. You can either set up NPM again without a database, or uncomment the sections noted below to maintain this configuration:
proxy:
  ...
  # Uncomment this section to use a dedicated database for Nginx Proxy Manager
  environment:
    DB_MYSQL_HOST: "proxy-db"
    DB_MYSQL_PORT: 3306
    DB_MYSQL_USER: "npm"
    DB_MYSQL_PASSWORD: "${PROXY_MYSQL_PASS?err}"
    DB_MYSQL_NAME: "npm"
...
# Uncomment this section to use a dedicated database for Nginx Proxy Manager
proxy-db:
  image: 'jc21/mariadb-aria:latest'
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: "${PROXY_MYSQL_PASS_ROOT?err}"
    MYSQL_DATABASE: 'npm'
    MYSQL_USER: 'npm'
    MYSQL_PASSWORD: "${PROXY_MYSQL_PASS?err}"
  ports:
    - 3306
  volumes:
    - proxy-db:/var/lib/mysql
  logging:
    driver: "json-file"
    options:
      max-size: "1Mb"
      max-file: "5"
.env
You will need to add some entries to your .env file:
AA_DB_CHARSET=utf8mb4
GF_SECURITY_ADMIN_USERNAME=admin
GF_SECURITY_ADMIN_PASSWORD
The password field is intentionally not filled so that you create one yourself. You can either use the Grafana credentials you have been using, or create a suitably secure password now.
You will also need to update AA_DOCKER_TAG to the version of V4.x you want to install. Either follow the pattern or check https://gitlab.com/allianceauth/allianceauth/-/releases
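Put together, the updated entries in .env might look something like this; the password and tag values are placeholders that you need to fill in yourself (check the releases page for the exact tag):
AA_DB_CHARSET=utf8mb4
GF_SECURITY_ADMIN_USERNAME=admin
GF_SECURITY_ADMIN_PASSWORD=<choose a strong password>
AA_DOCKER_TAG=<v4.x tag from the releases page>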
(Optional) Build Custom Container
If you are using a custom Docker image with a requirements.txt, you will need to reinstate some customizations.
Modify docker-compose.yml as follows: comment out the image line under allianceauth and uncomment the build section, e.g.
x-allianceauth-base: &allianceauth-base
  # image: ${AA_DOCKER_TAG?err}
  build:
    context: .
    dockerfile: custom.dockerfile
    args:
      AA_DOCKER_TAG: ${AA_DOCKER_TAG?err}
  restart: always
  ...
Now build your custom image
docker compose pull
docker compose build
Bring docker back up, migrate, collect static
docker compose --env-file=.env up -d --remove-orphans
docker compose exec allianceauth_gunicorn bash
allianceauth update myauth
auth migrate
auth collectstatic --clear
Features
Learn about the features of Alliance Auth and how to install and use them.
Overview
Alliance Auth (AA) is a website that helps Eve Online organizations efficiently manage access to applications and external services.
It has the following key features:
Automatically grants or revokes users’ access to external services (e.g., Discord, Mumble) and web apps (e.g., SRP requests) based on the user’s current membership in in-game organizations and groups
Provides a central website where users can directly access web apps (e.g., SRP requests) and manage their access to external services and groups.
Includes a set of connectors (called “services”) for integrating access management with many popular external services like Discord, Mumble, Teamspeak 3, SMF and others
Includes a set of web apps which add many useful functions, e.g.: fleet schedule, timer board, SRP request management, fleet activity tracker
It can be easily extended with additional services and apps. Many are provided by the community.
Chinese, English, German and Spanish localization
Core Features
Managing access to applications and services is one of the core functions of Alliance Auth. The related key concepts and functionalities are described in this section.
Dashboard
The dashboard is the main page of the Alliance Auth website, and the first page every logged-in user will see.
The content of the dashboard is specific to the logged-in user. It has a sidebar, which displays the list of apps the user currently has access to based on their permissions. It also shows which characters the user has registered and which groups they belong to.
For admin users, the dashboard shows additional technical information about the AA instance.
Settings
Here is a list of available settings for the dashboard. They can be configured by adding them to your AA settings file (local.py).
Note that all settings are optional and the app will use the documented default settings if they are not used.
| Name | Description | Default |
|---|---|---|
| | Statistics will be calculated for task events not older than max hours. | |
| | Disables recording of task statistics. Used mainly in development. | |
States
States define the basic role of a user based on their affiliation with your organization. A user that has a character in your organization (e.g., alliance) will usually have the Member state, and a user that has no characters in your organization will usually have the Guest state.
States are assigned and updated automatically. So a user whose character just left your organization will automatically lose the Member state and get the Guest state instead.
The main purpose of states like Member is to have one place where you can assign all permissions that should apply to all users with that particular state. For example, if all your members should have access to the SRP app, you would add the permission that gives access to the SRP app to the Member state.
Creating a State
States are created through your installation’s admin site. Upon install, three states are created for you: Member, Blue, and Guest. New ones can be created like any other Django model by users with the appropriate permission (authentication | state | Can add state) or superusers.
A number of fields are available and are described below.
Name
This is the displayed name of a state. It should be self-explanatory.
Permissions
This lets you select permissions to grant to the entire state, much like a group. Any user with this state will be granted these permissions.
A common use case would be granting service access to a state.
Priority
This value determines the order in which states are applied to users. Higher numbers come first. So if a random user Bob could be a member of both the Member and Blue states, Bob will be assigned the Member state because it has the higher priority.
Public
Checking this box means this state is available to all users. There isn’t much use for this outside the Guest state.
Member Characters
This lets you select which characters the state is available to. Characters can be added by selecting the green plus icon.
Member Corporations
This lets you select which Corporations the state is available to. Corporations can be added by selecting the green plus icon.
Member Alliances
This lets you select which Alliances the state is available to. Alliances can be added by selecting the green plus icon.
Member Factions
This lets you select which factions the state is available to. Factions can be added by selecting the green plus icon, and are limited to those which can be enlisted in for faction warfare.
Determining a User’s State
States are mutually exclusive, meaning a user can only be in one at a time.
Membership is determined based on a user’s main character. States are tested in order of descending priority; the first one that allows membership for the main character is assigned to the user.
States are automatically assigned when a user registers to the site, their main character changes, they are activated or deactivated, or states are edited. Note that editing states triggers lots of state checks, so it can be a very slow process.
Assigned states are visible in the Users section of the Authentication admin site.
The Guest State
If no states are available to a user’s main character, or their account has been deactivated, they are assigned to a catch-all Guest state. This state cannot be deleted, nor can its name be changed.
The Guest state allows permissions to be granted to users who would otherwise not get any. For example, access to public services can be granted by giving the Guest state a service access permission.
Groups
Group Management is one of the core tasks of Alliance Auth. Many of Alliance Auth’s services allow for synchronizing of group membership, allowing you to grant permissions or roles in services to access certain aspects of them.
Creating groups
Administrators can create custom groups for users to join. Examples might be groups like Leadership, CEO, or Scouts.
When you create a Group, additional settings are available beyond the normal Django group model. The admin page looks like this:
Here you have several options:
Internal
Users cannot see, join or request to join this group. This is primarily used for Auth’s internally managed groups, though it can be useful if you want to prevent users from managing their membership of this group themselves. This option will override the Hidden, Open and Public options when enabled.
By default, every new group created will be an internal group.
Open
When a group is toggled open, users who request to join the group will be immediately added to the group.
If the group is not open, their request will have to be approved manually by someone with the group management role, or a group leader of that group.
Public
Group is accessible to any registered user, even when they do not have permission to join regular groups.
The key difference is that the group is completely unmanaged by Auth. Once a member joins they will not be removed unless they leave manually, you remove them manually, or their account is deliberately set inactive or deleted.
Most people won’t have a use for public groups, though it can be useful if you wish to allow public access to some services. You can grant service permissions to a public group to allow this behavior.
Restricted
When a group is restricted, only superuser admins can directly add or remove them to/from users. The purpose of this property is to prevent staff admins from assigning themselves to groups that are security sensitive. The “restricted” property can be combined with all the other properties.
Reserved group names
When using Alliance Auth to manage external services like Discord, Auth will automatically duplicate groups on those services. E.g., on Discord Auth will create roles of the same name as groups. However, there may be cases where you want to manage groups on external services by yourself or by another bot. For those cases, you can define a list of reserved group names. Auth will ensure that you cannot create groups with a reserved name. You will find this list on the admin site under groupmanagement.
Note
While this feature can help to avoid naming conflicts with groups on external services, the respective service component in Alliance Auth also needs to be built in such a way that it knows how to prevent these conflicts. Currently only the Discord and Teamspeak3 services have this ability.
Managing groups
To access group management, users need to be either a superuser, granted the auth | user | group_management (Access to add members to groups within the alliance) permission, or a group leader (discussed later).
Group Requests
When a user joins or leaves a group which is not marked as “Open”, their group request will have to be approved manually by a user with the group_management
permission or by a group leader of the group they are requesting.
Group Membership
The group membership tab gives an overview of all the non-internal groups.
Group Member Management
Clicking on the blue eye will take you to the group member management screen. Here you can see a list of people who are in the group, and remove members where necessary.
Group Audit Log
Whenever a user Joins, Leaves, or is Removed from a group, this is logged. To find the audit log for a given group, click the light-blue button to the right of the Group Member Management (blue eye) button.
These logs contain the Date and Time the action was taken (in EVE/UTC), the user which submitted the request being acted upon (requestor), the user’s main character, the type of request (join, leave or removed), the action taken (accept, reject or remove), and the user that took the action (actor).
Group Leaders
Group leaders have the same abilities as users with the group_management
permission, however, they will only be able to:
Approve requests for groups they are a leader of.
View the Group Membership and Group Members of groups they are leaders of.
This allows you finer control over who has access to manage which groups.
Auto Leave
By default, in AA both requests and leaves for non-open groups must be approved by a group manager. If you wish to allow users to leave groups without requiring approvals, add the following lines to your local.py
## Allows users to freely leave groups without requiring approval.
GROUPMANAGEMENT_AUTO_LEAVE = True
Note
Before you set GROUPMANAGEMENT_AUTO_LEAVE = True
, make sure there are no pending leave requests, as this option will hide the “Leave Requests” tab.
Settings
Here is a list of available settings for Group Management. They can be configured by adding them to your AA settings file (local.py
).
Note that all settings are optional and the app will use the documented default settings if they are not used.
Name | Description | Default
---|---|---
 | Send Auth notifications to all group leaders for join and leave requests. | 
GROUPMANAGEMENT_AUTO_LEAVE | Allows users to freely leave groups without requiring approval. | False
Permissions
To join a group other than a public group, the permission groupmanagement.request_groups
(Can request non-public groups
in the admin panel) must be active on their account, either via a group or directly applied to their User account.
When a user loses this permission, they will be removed from all groups except Public groups.
Note
By default, the groupmanagement.request_groups
permission is applied to the Member
group. In most instances this, and perhaps adding it to the Blue
group, should be all that is ever needed. It is unsupported and NOT advisable to apply this permission to a public group. See #697 for more information.
Group management should mostly be done using group leaders; a series of permissions is included below for thoroughness:
Permission | Admin Site | Auth Site
---|---|---
auth.group_management | None | Can Approve and Deny all Group Requests, Can view and manage all group memberships
groupmanagement.request_groups | None | Can Request Non-Public Groups
Analytics FAQ
Alliance Auth has an opt-out analytics module using Google Analytics Measurement Protocol.
How to Opt-Out
Before you proceed, please read through this page and/or raise any concerns on the Alliance Auth discord. This data helps us make AA better.
To opt out, modify our preloaded token using the Admin dashboard */admin/analytics/analyticstokens/1/change/
Each of the three features Daily Stats, Celery Events and Page Views can be enabled/disabled independently.
Alternatively, you can fully opt out of analytics with the following optional setting:
ANALYTICS_DISABLED = True
What
Alliance Auth has taken great care to anonymize the data sent. To identify unique installs, we generate a UUIDv4, a random mathematical construct which does not contain any identifying information (see UUID - UUID Objects).
Analytics comes preloaded with our Google Analytics token, and the three types of tasks can be opted out of independently. Analytics can also be loaded with your own GA token, and the analytics module will act on any/all tokens loaded.
Our Daily Stats contain the following:
A phone-in task to identify a server’s existence
A task to send the Number of User models
A task to send the Number of Token Models
A task to send the Number of Installed Apps
A task to send a List of Installed Apps
Each Task contains the UUID and Alliance Auth Version
Our Celery Events contain the following:
Unique Identifier (The UUID)
Celery Namespace of the task e.g., allianceauth.eveonline
Celery Task
Task Success or Exception
A context number for bulk tasks or sometimes a binary True/False
Our Page Views contain the following:
Unique Identifier (The UUID)
Page Path
Page Title
The locale of the user's browser
The User-Agent of the user’s browser
The Alliance Auth Version
Why
This data allows Alliance Auth development to gather accurate statistics on our installation base, as well as how those installations are used.
This allows us to better target our development time to commonly used modules and features and test them at the scales in use.
Where
This data is stored in a Team Google Analytics Dashboard. The Maintainers all have Management permissions here, and if you have contributed to the Alliance Auth project or third party applications, feel free to ask in the Alliance Auth discord for access.
Using Analytics in my App
Analytics Event
Notifications
Alliance Auth has a built-in notification system. The purpose of the notification system is to provide an easy and quick way to send messages to users of Auth. For example, some apps are using it to inform users about results after long-running tasks have been completed, and admins will automatically get notifications about system errors.
The number of unread notifications is shown to the user in the top menu. And the user can click on the notification count to open the Notifications app.
Settings
The Notifications app can be configured through settings.
NOTIFICATIONS_REFRESH_TIME: The unread count in the top menu is automatically refreshed to keep the user informed about new notifications. This setting allows setting the time between each refresh in seconds. You can also set it to 0 to turn off automatic refreshing. Default: 30
NOTIFICATIONS_MAX_PER_USER: Maximum number of notifications that are stored per user. Newer replace older notifications. Default: 50
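For example, to refresh the unread count every minute and store up to 100 notifications per user, you could add the following to your local.py (values are illustrative only):
# Notifications
NOTIFICATIONS_REFRESH_TIME = 60  # refresh the unread count every 60 seconds
NOTIFICATIONS_MAX_PER_USER = 100  # keep at most 100 notifications per user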
Admin Site
The admin site allows administrators to configure, manage and troubleshoot Alliance Auth and all its applications and services. E.g., you can create new groups and assign groups to users.
You can open the admin site by clicking on “Admin” in the drop-down menu for a user that has access.
Setup for small to medium size installations
For small to medium size alliances, it is often sufficient to have no more than two superuser admins (admins that also are superusers). Having two admins usually makes sense, so you can have one primary and one backup.
Warning
Superusers have read & write access to everything on your AA installation. Superusers also automatically have all permissions and therefore access to all features of your apps. Therefore, we recommend being very careful about whom you give superuser privileges to.
Setup for large installations
For large alliances and coalitions, you may want to have a couple of administrators to be able to distribute and handle the workload. However, having a larger number of superusers may be a security concern.
As an alternative to superuser admins, you can define staff admins. Staff admins can perform most of the daily admin work, but are not superusers and therefore can be restricted in what they can access.
To create a staff admin, you need to do two things:
Enable the is_staff property for the user
Give the user permissions for admin tasks
Note
Note that staff admins have the following limitations:
Cannot promote users to staff
Cannot promote users to superuser
Cannot add/remove permissions for users, groups and states
These limitations exist to prevent staff admins from promoting themselves to quasi superusers. Only superusers can perform these actions.
Staff property
Access to the admin site is restricted. Users need to have the is_staff
property to be able to open the site at all. The superuser created during the installation
process will automatically have access to the admin site.
Hint
Without any permissions, a “staff user” can open the admin site, but can neither view nor edit anything except for viewing the list of permissions.
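The usual way to grant this is to tick "Staff status" on the user's change page in the admin site. For illustration, the same flag can also be set from a Django shell; the username staff_admin below is a placeholder, and the manage.py path assumes the standard bare metal install with your venv active:
python /home/allianceserver/myauth/manage.py shell
>>> from django.contrib.auth import get_user_model
>>> user = get_user_model().objects.get(username='staff_admin')
>>> user.is_staff = True
>>> user.save()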
Permissions for common admin tasks
Here is a list of permissions a staff admin would need to perform some common admin tasks:
Edit users
auth | user | Can view user
auth | user | Can change user
authentication | user | Can view user
authentication | user | Can change user
authentication | user profile | Can change profile
Delete users
auth | user | Can view user
auth | user | Can delete user
authentication | user | Can delete user
authentication | user profile | Can delete user profile
Add & edit states
authentication | state | Can add state
authentication | state | Can change state
authentication | state | Can view state
Delete states
authentication | state | Can delete state
authentication | state | Can view state
Add & edit groups
auth | group | Can add group
auth | group | Can change group
auth | group | Can view group
authentication | group | Can add group
authentication | group | Can change group
authentication | group | Can view group
Delete groups
auth | group | Can delete group
authentication | group | Can delete group
Permissions for other apps
The permission a staff admin needs to perform tasks for other applications depends on how the applications are configured. The default is to have four permissions (add, change, delete, view) for each model of the applications. The view permission is usually required to see the model list on the admin site, and the other three permissions are required to perform the respective action on an object of that model. However, an app developer can choose to define permissions differently.
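For illustration only (this is plain Django, not the convention of any specific Alliance Auth app), an app developer might replace the default model permissions like this:
from django.db import models

class ExampleItem(models.Model):
    """Hypothetical model used only to illustrate custom permissions."""

    name = models.CharField(max_length=100)

    class Meta:
        default_permissions = ()  # drop the default add/change/delete/view permissions
        permissions = [
            ("basic_access", "Can access the example app"),
        ]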
Services
Alliance Auth supports managing access to many 3rd party services and apps. This section describes which services are supported and how to install and configure them. Please note that any service needs to be installed and configured before it can be used.
Supported Services
Discord
Overview
Discord is a web-based instant messaging client with voice. Kind of like TeamSpeak meets Slack meets Skype. It also has a standalone app for phones and desktop.
Discord is very popular amongst ad-hoc small groups and larger organizations seeking a modern technology. Groups larger than small-to-medium size should investigate alternative voice communications if they need more advanced features.
Setup
Prepare Your Settings File
Make the following changes in your auth project’s settings file (local.py
):
Add 'allianceauth.services.modules.discord', to INSTALLED_APPS
Append the following to the bottom of the settings file:
# Discord Configuration
# Be sure to set the callback URL to https://example.com/discord/callback/
# substituting your domain for example.com in Discord's developer portal
# (Be sure to add the trailing slash)
DISCORD_GUILD_ID = ''
DISCORD_CALLBACK_URL = f"{SITE_URL}/discord/callback/"
DISCORD_APP_ID = ''
DISCORD_APP_SECRET = ''
DISCORD_BOT_TOKEN = ''
DISCORD_SYNC_NAMES = False
CELERYBEAT_SCHEDULE['discord.update_all_usernames'] = {
'task': 'discord.update_all_usernames',
'schedule': crontab(minute='0', hour='*/12'),
}
Note
You will have to add most of the values for these settings, e.g., your Discord server ID (aka guild ID), later in the setup process.
Creating a Server
Navigate to the Discord site and register an account, or log in if you have one already.
On the left side of the screen, you’ll see a circle with a plus sign. This is the button to create a new server. Go ahead and do that, naming it something obvious.
Now retrieve the server ID following this procedure.
Update your auth project’s settings file, inputting the server ID as DISCORD_GUILD_ID
Note
If you already have a Discord server, skip the creation step, but be sure to retrieve the server ID
Registering an Application
Navigate to the Discord Developers site. Press the plus sign to create a new application.
Give it a name and description relating to your auth site. Add a redirect to https://example.com/discord/callback/
, substituting your domain. Press Create Application.
Update your auth project’s settings file, inputting this redirect address as DISCORD_CALLBACK_URL
On the application summary page, press “Create a Bot User”.
Update your auth project’s settings file with these pieces of information from the summary page:
From the General Information panel, DISCORD_APP_ID is the Client/Application ID
From the OAuth2 > General panel, DISCORD_APP_SECRET is the Client Secret
From the Bot panel, DISCORD_BOT_TOKEN is the Token
Preparing Auth
Before continuing, it is essential to run migrations and restart Gunicorn and Celery.
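On a bare metal install following this documentation, that would look something like the following (paths assume the standard allianceserver setup used elsewhere in this guide):
source /home/allianceserver/venv/auth/bin/activate
python /home/allianceserver/myauth/manage.py migrate
supervisorctl restart myauth: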
Adding a Bot to the Server
Once created, navigate to the “Services” page of your Alliance Auth install as the superuser account. At the top there is a big green button labeled “Link Discord Server”. Click it, then from the drop-down select the server you created, and then Authorize.
This adds a new user to your Discord server with a BOT
tag, and a new role with the same name as your Discord application. Don’t touch either of these. If for some reason the bot loses permissions or is removed from the server, click this button again.
To manage roles, this bot role must be at the top of the hierarchy. Edit your Discord server's roles, then click and drag the role with the same name as your application to the top of the list. This role must stay at the top of the list for the bot to work. Finally, the owner of the bot account must enable 2-Factor Authentication (this is required by Discord for kicking and modifying member roles). If you are unsure what 2FA is or how to set it up, refer to this support page. It is also recommended to force 2FA on your server (this forces any admins or moderators to have 2FA enabled to perform similar functions on Discord).
Note that the bot will never appear online as it does not participate in chat channels.
Linking Accounts
Instead of the usual account creation procedure, for Discord to work we need to link accounts to Alliance Auth. When attempting to enable the Discord service, users are redirected to the official Discord site to authenticate. They will need to create an account if they don’t have one prior to continuing. Upon authorization, users are redirected back to Alliance Auth with an OAuth code which is used to join the Discord server.
Syncing Nicknames
If you want users to have their Discord nickname changed to their in-game character name, set DISCORD_SYNC_NAMES
to True
.
Managing Roles
Once users link their accounts, you’ll notice Roles get populated on Discord. These are the equivalent to groups on every other service. The default permissions should be enough for members to use text and audio communications. Add more permissions to the roles as desired through the server management window.
By default, Alliance Auth is taking over full control of role assignments on Discord. This means that users in Discord can in general only have roles that correlate to groups on Auth. However, there are two exceptions to this rule.
Internal Discord roles
First, users will keep their so-called “Discord managed roles”. Those are internal roles created by Discord, e.g., for Nitro.
Excluding roles from being managed by Auth
Second, it is possible to exclude Discord roles from being managed by Auth at all. This can be useful if you have other bots on your Discord server that are using their own roles and which would otherwise conflict with Auth. This would also allow you to manage a role manually on Discord if you so chose.
To exclude roles from being managed by Auth, you only have to add them to the list of reserved group names in Group Management.
Note
Role names on Discord are case-sensitive, while reserved group names on Auth are not. Therefore, reserved group names will cover all roles regardless of their case. For example, if you have reserved the group name “alpha”, then the Discord roles “alpha” and “Alpha” will both be persisted.
See also
For more information see Reserved group names.
Tasks
The Discord service contains a number of tasks that can be run to manually perform updates to all users.
You can run any of these tasks from the command line. Please make sure that you are in your venv, and then you can run this command from the same folder that your manage.py is located:
celery -A myauth call discord.update_all_groups
Name | Description
---|---
update_all_groups | Updates groups of all users
update_all_nicknames | Update nicknames of all users (also needs setting)
update_all_usernames | Update locally stored Discord usernames of all users
update_all | Update groups, nicknames, usernames of all users
Note
Depending on how many users you have, running these tasks can take considerable time to finish. You can calculate roughly 1 sec per user for all tasks, except update_all, which needs roughly 3 secs per user.
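For example, following the same pattern as the command above, a full update of groups, nicknames and usernames for all users would be:
celery -A myauth call discord.update_all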
Settings
You can configure your Discord services with the following settings:
Name | Description | Default
---|---|---
DISCORD_APP_ID | OAuth client ID for the Discord Auth app | ''
DISCORD_APP_SECRET | OAuth client secret for the Discord Auth app | ''
DISCORD_BOT_TOKEN | Generated bot token for the Discord Auth app | ''
DISCORD_CALLBACK_URL | OAuth callback URL | ''
DISCORD_GUILD_ID | Discord ID of your Discord server | ''
DISCORD_GUILD_NAME_CACHE_MAX_AGE | How long the Discord server name is cached locally in seconds | 86400
DISCORD_ROLES_CACHE_MAX_AGE | How long roles retrieved from the Discord server are cached locally in seconds | 3600
DISCORD_SYNC_NAMES | When set to True the nicknames of Discord users will be set to the user's main character name | False
DISCORD_TASKS_RETRY_PAUSE | Pause in seconds until next retry for tasks after an error occurred | 60
DISCORD_TASKS_MAX_RETRIES | Max retries of tasks after an error occurred | 3
Permissions
To use this service, users will require some of the following.
Permission | Admin Site | Auth Site
---|---|---
discord.access_discord | None | Can Access the Discord Service
Troubleshooting
“Unknown Error” on Discord site when activating service
This indicates your callback URL doesn’t match. Ensure the DISCORD_CALLBACK_URL
setting exactly matches the URL entered on the Discord developers site. This includes http(s), trailing slash, etc.
“Add/Remove” Errors in Discord Service
If you are receiving errors in your Notifications after verifying that your settings are all correct, try the following:
Ensure that the bot role in Discord is at the top of the roles list. Each time you add it to your server, you will need to do this again.
Make sure that the bot is not trying to modify the Owner of the Discord server, as it will fail. A holding Discord account added with an invite link will mitigate this.
Make sure that the bot role on Discord has all needed permissions, Admin etc., remembering that these will need to be set every time you add the bot to the Discord server.
Discourse
Prepare Your Settings
In your auth project’s settings file, do the following:
Add 'allianceauth.services.modules.discourse', to your INSTALLED_APPS list
Append the following to your local.py settings file:
# Discourse Configuration
DISCOURSE_URL = ''
DISCOURSE_API_USERNAME = ''
DISCOURSE_API_KEY = ''
DISCOURSE_SSO_SECRET = ''
Install Docker
wget -qO- https://get.docker.io/ | sh
Install Discourse
Download Discourse
mkdir /var/discourse
git clone https://github.com/discourse/discourse_docker.git /var/discourse
Configure
cd /var/discourse
cp samples/standalone.yml containers/app.yml
nano containers/app.yml
Change the following:
DISCOURSE_DEVELOPER_EMAILS should be a list of admin account email addresses separated by commas.
DISCOURSE_HOSTNAME should be discourse.example.com or something similar.
Everything with SMTP depends on your mail settings. There are plenty of free email services online recommended by Discourse if you haven't set one up for auth already.
To install behind Apache/Nginx, look for this section:
...
## which TCP/IP ports should this container expose?
expose:
- "80:80" # fwd host port 80 to container port 80 (http)
...
Change it to this:
...
## which TCP/IP ports should this container expose?
expose:
- "7890:80" # fwd host port 7890 to container port 80 (http)
...
Any other free port will do if that one is taken. Remember this number.
Build and launch
nano /etc/default/docker
Uncomment this line:
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"
Restart Docker:
service docker restart
Now build:
./launcher bootstrap app
./launcher start app
Web Server Configuration
You will need to configure your web server to proxy requests to Discourse.
A minimal Apache config might look like:
<VirtualHost *:80>
ServerName discourse.example.com
ProxyPass / http://0.0.0.0:7890/
ProxyPassReverse / http://0.0.0.0:7890/
</VirtualHost>
A minimal Nginx config might look like:
server {
listen 80;
server_name discourse.example.com;
location / {
include proxy_params;
proxy_pass http://127.0.0.1:7890;
}
}
Configure API
Generate admin account
From the /var/discourse
directory,
./launcher enter app
rake admin:create
Follow prompts, being sure to answer y
when asked to allow admin privileges.
Create an API key
Navigate to discourse.example.com
and log on. Top right, press the 3 lines and select Admin
. Go to API tab and press Generate Master API Key
.
Add the following values to your auth project’s settings file:
DISCOURSE_URL: https://discourse.example.com (do not add a trailing slash!)
DISCOURSE_API_USERNAME: the username of the admin account you generated the API key with
DISCOURSE_API_KEY: the key you just generated
Configure SSO
Navigate to discourse.example.com
and log in. Back to the admin site, scroll down to find SSO settings and set the following:
enable_sso: True
sso_url: http://example.com/discourse/sso
sso_secret: some secure key
Now set DISCOURSE_SSO_SECRET
in your auth project’s settings file to the secure key you put in Discourse.
Finally, run migrations and restart Gunicorn and Celery.
Permissions
To use this service, users will require some of the following.
Permission | Admin Site | Auth Site
---|---|---
discourse.access_discourse | None | Can Access the Discourse Service
Mumble
Mumble is a free voice chat server. While not as flashy as TeamSpeak, it has all the functionality and is easier to customize. And it is better. I may be slightly biased.
Note
Note that this guide assumes that you have installed Auth with the official :doc:/installation/allianceauth
guide under /home/allianceserver
and that it is called myauth
. Accordingly, it assumes that you have a service user called allianceserver
that is used to run all Auth services under supervisor.
Warning
This guide is currently for Ubuntu only.
Bare Metal Installations
Installing Mumble Server
The mumble server package can be retrieved from a repository, which we need to add:
sudo apt-add-repository ppa:mumble/release
sudo apt-get update
Now three packages need to be installed:
sudo apt-get install python-software-properties mumble-server libqt5sql5-mysql
Installing Mumble Authenticator
Next, we need to download the latest authenticator release from the authenticator repository.
git clone https://gitlab.com/allianceauth/mumble-authenticator /home/allianceserver/mumble-authenticator
We will now install the authenticator into your Auth virtual environment. Please make sure to activate it first:
source /home/allianceserver/venv/auth/bin/activate
Install the python dependencies for the mumble authenticator. Note that this process can take 2 to 10 minutes to complete.
pip install -r requirements.txt
Configuring Mumble Server
The mumble server needs its own database. Open an SQL shell with mysql -u root -p
and execute the SQL commands to create it:
CREATE DATABASE alliance_mumble CHARACTER SET utf8mb4;
GRANT ALL PRIVILEGES ON alliance_mumble . * TO 'allianceserver'@'localhost';
Mumble ships with a configuration file that needs customization. By default, it’s located at /etc/mumble-server.ini
. Open it with your favorite text editor:
sudo nano /etc/mumble-server.ini
We need to enable the ICE authenticator. Edit the following:
icesecretwrite=MY_CLEVER_PASSWORD, obviously choosing a secure password
Ensure the line containing Ice="tcp -h 127.0.0.1 -p 6502" is uncommented
We also want to enable Mumble to use the previously created MySQL / MariaDB database. Edit the following:
Uncomment the database line, and change it to database=alliance_mumble
dbDriver=QMYSQL
dbUsername=allianceserver or whatever you called the Alliance Auth MySQL user
dbPassword= that user's password
dbPort=3306
dbPrefix=murmur_
To name your root channel, uncomment and set registerName= to whatever cool name you want
Save and close the file.
To get Mumble superuser account credentials, run the following:
sudo dpkg-reconfigure mumble-server
Set the password to something you’ll remember and write it down. This is your superuser password and later needed to manage ACLs.
Now restart the server to see the changes reflected.
sudo service mumble-server restart
That’s it! Your server is ready to be connected to at example.com:64738
Configuring Mumble Authenticator
The ICE authenticator lives in the mumble-authenticator repository, cd to the directory where you cloned it.
Make a copy of the default config:
cp authenticator.ini.example authenticator.ini
Edit authenticator.ini
and change these values:
[database]
user = your allianceserver MySQL user
password = your allianceserver MySQL user's password
[ice]
secret = the icesecretwrite password set earlier
Test your configuration by starting it:
python /home/allianceserver/mumble-authenticator/authenticator.py
And finally, ensure the allianceserver user has read/write permissions to the mumble authenticator files before proceeding:
sudo chown -R allianceserver:allianceserver /home/allianceserver/mumble-authenticator
The authenticator needs to be running 24/7 to validate users on Mumble. This can be achieved by adding a section to your auth project’s supervisor config file like the following example:
[program:authenticator]
command=/home/allianceserver/venv/auth/bin/python authenticator.py
directory=/home/allianceserver/mumble-authenticator
user=allianceserver
stdout_logfile=/home/allianceserver/myauth/log/authenticator.log
stderr_logfile=/home/allianceserver/myauth/log/authenticator.log
autostart=true
autorestart=true
startsecs=10
priority=996
In addition, we’d recommend adding the authenticator to Auth’s restart group in your supervisor conf. For that, you need to add it to the group line as shown in the following example:
[group:myauth]
programs=beat,worker,gunicorn,authenticator
priority=999
To enable the changes in your supervisor configuration, you need to restart the supervisor process itself. Before we do that, we shut down the current Auth supervisors gracefully:
sudo supervisorctl stop myauth:
sudo systemctl restart supervisor
Configuring Auth
In your auth project’s settings file (myauth/settings/local.py
), do the following:
Add 'allianceauth.services.modules.mumble', to your INSTALLED_APPS list
Set MUMBLE_URL to the public address of your mumble server. Do not include any leading http:// or mumble://.
Example config:
# Installed apps
INSTALLED_APPS += [
# ...
'allianceauth.services.modules.mumble'
# ...
]
# Mumble Configuration
MUMBLE_URL = "mumble.example.com"
Finally, run migrations and restart your supervisor to complete the setup:
python /home/allianceserver/myauth/manage.py migrate
supervisorctl restart myauth:
Permissions
To use this service, users will require some of the following.
Permission | Admin Site | Auth Site
---|---|---
mumble.access_mumble | None | Can Access the Mumble Service
ACL configuration
On a freshly installed mumble server only your superuser has the right to configure ACLs and create channels. The credentials for logging in with your superuser are:
user:
SuperUser
password: what you defined when configuring your mumble server
Optimizing a Mumble Server
The needs and available resources will vary between Alliance Auth installations. Consider yours when applying these settings.
Bandwidth
https://wiki.mumble.info/wiki/Murmur.ini#bandwidth This is likely the most important setting for scaling a Mumble installation. The default maximum bandwidth is 72000 bps per user. Reducing this value will cause your clients to automatically scale back their transmitted bandwidth, at the cost of some voice quality. A value that's still high may cause robotic voices or users with bad connections to drop entirely due to the network load.
Please tune this value to your individual needs; the scale below may provide a rough starting point.
72000
- Superior voice quality - Less than 50 users.
54000
- No noticeable reduction in quality - 50+ Users or many channels with active audio.
36000
- Mild reduction in quality - 100+ Users
30000
- Noticeable reduction in quality but not function - 250+ Users
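On a bare metal install this is set in /etc/mumble-server.ini; for example, for a medium-sized server (the value is illustrative only):
bandwidth=54000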
Forcing Opus
https://wiki.mumble.info/wiki/Murmur.ini#opusthreshold A Mumble server, by default, will fall back to the older CELT codec as soon as a single user connects with an old client. This will significantly reduce your audio quality and likely place a higher load on your server. We highly recommend setting this to zero, to force Opus to be used at all times. Be aware that any users with Mumble clients prior to 1.2.4 (from 2013…) will not hear any audio.
opusthreshold=0
AutoBan and Rate Limiting
https://wiki.mumble.info/wiki/Murmur.ini#autobanAttempts.2C_autobanTimeframe_and_autobanTime The AutoBan feature has some sensible settings by default. You may wish to tune these if your users keep locking themselves out by opening two clients by mistake, or if you are receiving unwanted attention.
https://wiki.mumble.info/wiki/Murmur.ini#messagelimit_and_messageburst This, too, is set to a sensible configuration by default. Take note on upgrading older installs, as this may actually be set too restrictively and will rate-limit your admins accidentally; take note of the configuration in https://github.com/mumble-voip/mumble/blob/master/scripts/murmur.ini#L156
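As an illustration, these keys also live in /etc/mumble-server.ini on a bare metal install; the values below are examples only, so check the linked wiki pages for the current defaults before changing anything:
autobanAttempts=10
autobanTimeframe=120
autobanTime=300
messagelimit=1
messageburst=5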
“Suggest” Options
There is no way to force your users to update their clients or use Push to Talk, but these options will throw an error into their Mumble Client.
https://wiki.mumble.info/wiki/Murmur.ini#Miscellany
We suggest using Mumble 1.4.0+ for your server and clients; you can tune this to the latest patch version.
suggestVersion=1.4.287
If Push to Talk is to your tastes, configure the suggestion as follows
suggestPushToTalk=true
General notes
Setting a server password
With the default configuration, your Mumble server is public, meaning that everyone who has the address can at least connect to it and might also be able to join all channels that don't have any permissions set (depending on the ACL configured for the root channel). If you want only registered members to be able to join your Mumble server, you have to set a server password. To do so, open your Mumble server configuration, which is by default located at /etc/mumble-server.ini
.
sudo nano /etc/mumble-server.ini
Now search for serverpassword=
and set your password here. If there is no such line, add it.
serverpassword=YourSuperSecretServerPassword
Save the file and restart your mumble server afterward.
sudo service mumble-server restart
From now on, only registered members can join your Mumble server. If you still want to allow guests to join, you have two options.
Allow the “Guest” state to activate the Mumble service in your Auth instance
Enabling Avatars in Overlay (V1.0.0+)
Ensure you have an up-to-date Mumble-Authenticator. This feature was added in V1.0.0
Edit authenticator.ini
and change (or add for older installations) this code block:
;If enabled, textures are automatically set as player's EvE avatar for use on overlay.
avatar_enable = True
;Get EvE avatar images from this location. {charid} will be filled in.
ccp_avatar_url = https://images.evetech.net/characters/{charid}/portrait?size=32
Mumble
An alternate install guide for Mumble using Docker, better suited to an Alliance Auth Docker install
Mumble is a free voice chat server. While not as flashy as TeamSpeak, it has all the functionality and is easier to customize. And is better. I may be slightly biased.
Configuring Auth
In your auth project’s settings file (aa-docker/conf/local.py
), do the following:
Add 'allianceauth.services.modules.mumble', to your INSTALLED_APPS list
Append the following to your auth project's settings file:
# Mumble Configuration
MUMBLE_URL = "mumble.example.com"
Add the following lines to your .env
file
# Mumble
MUMBLE_SUPERUSER_PASSWORD = superuser_password
MUMBLE_ICESECRETWRITE = icesecretwrite
MUMBLE_SERVERPASSWORD = serverpassword
Finally, restart your stack and run migrations
docker compose --env-file=.env up -d
docker compose exec allianceauth_gunicorn bash
auth migrate
Docker Installations
Installing Mumble and Authenticator
Inside your aa-docker
directory, clone the authenticator to a subdirectory as follows:
git clone https://gitlab.com/allianceauth/mumble-authenticator.git
Add the following to your docker-compose.yml
under the services:
section
  mumble-server:
    image: mumblevoip/mumble-server:latest
    restart: always
    environment:
      - MUMBLE_SUPERUSER_PASSWORD=${MUMBLE_SUPERUSER_PASSWORD}
      - MUMBLE_CONFIG_ice="tcp -h 127.0.0.1 -p 6502"
      - MUMBLE_CONFIG_icesecretwrite=${MUMBLE_ICESECRETWRITE}
      - MUMBLE_CONFIG_serverpassword=${MUMBLE_SERVERPASSWORD}
      - MUMBLE_CONFIG_opusthreshold=0
      - MUMBLE_CONFIG_suggestPushToTalk=true
      - MUMBLE_CONFIG_suggestVersion=1.4.0
    ports:
      - 64738:64738
      - 64738:64738/udp
    logging:
      driver: "json-file"
      options:
        max-size: "10Mb"
        max-file: "5"

  mumble-authenticator:
    build:
      context: .
      dockerfile: ./mumble-authenticator/Dockerfile
    restart: always
    volumes:
      - ./mumble-authenticator/authenticator.py:/authenticator.py
      - ./mumble-authenticator/authenticator.ini.docker:/authenticator.ini
    environment:
      - MUMBLE_SUPERUSER_PASSWORD=${MUMBLE_SUPERUSER_PASSWORD}
      - MUMBLE_CONFIG_ice="tcp -h 127.0.0.1 -p 6502"
      - MUMBLE_CONFIG_icesecretwrite=${MUMBLE_ICESECRETWRITE}
      - MUMBLE_CONFIG_serverpassword=${MUMBLE_SERVERPASSWORD}
    depends_on:
      - mumble-server
      - auth_mysql
    logging:
      driver: "json-file"
      options:
        max-size: "10Mb"
        max-file: "5"
Permissions
To use this service, users will require some of the following.
Permission | Admin Site | Auth Site
---|---|---
mumble.access_mumble | None | Can Access the Mumble Service
ACL configuration
On a freshly installed mumble server only your superuser has the right to configure ACLs and create channels. The credentials for logging in with your superuser are:
user:
SuperUser
password: what you defined when configuring your mumble server
Optimizing a Mumble Server
The needs and available resources will vary between Alliance Auth installations. Consider yours when applying these settings.
Bandwidth
https://wiki.mumble.info/wiki/Murmur.ini#bandwidth This is likely the most important setting for scaling a Mumble install. The default maximum bandwidth is 72000 bps per user. Reducing this value will cause your clients to automatically scale back their transmitted bandwidth, at the cost of some voice quality. A value that's still high may cause robotic voices or users with bad connections to drop entirely due to network load.
Please tune this value to your individual needs; the scale below may provide a rough starting point.
72000 - Superior voice quality - Less than 50 users.
54000 - No noticeable reduction in quality - 50+ Users or many channels with active audio.
36000 - Mild reduction in quality - 100+ Users
30000 - Noticeable reduction in quality but not function - 250+ Users
Forcing Opus
https://wiki.mumble.info/wiki/Murmur.ini#opusthreshold A Mumble server, by default, will fall back to the older CELT codec as soon as a single user connects with an old client. This will significantly reduce your audio quality and likely place a higher load on your server. We highly recommend setting this to zero, to force Opus to be used at all times. Be aware that any users with Mumble clients prior to 1.2.4 (from 2013…) will not hear any audio.
Our default config sets this as follows (on the mumble-server service, as in the compose file above):
  mumble-server:
    environment:
      - MUMBLE_CONFIG_opusthreshold=0
AutoBan and Rate Limiting
https://wiki.mumble.info/wiki/Murmur.ini#autobanAttempts.2C_autobanTimeframe_and_autobanTime The AutoBan feature has some sensible settings by default. You may wish to tune these if your users keep locking themselves out by opening two clients by mistake, or if you are receiving unwanted attention.
https://wiki.mumble.info/wiki/Murmur.ini#messagelimit_and_messageburst This, too, is set to a sensible configuration by default. Take note on upgrading older installs, as this may actually be set too restrictively and will rate-limit your admins accidentally; take note of the configuration in https://github.com/mumble-voip/mumble/blob/master/scripts/murmur.ini#L156
  mumble-server:
    environment:
      - MUMBLE_CONFIG_messagelimit=
      - MUMBLE_CONFIG_messageburst=
      - MUMBLE_CONFIG_autobanAttempts=10
      - MUMBLE_CONFIG_autobanTimeframe=120
      - MUMBLE_CONFIG_autobanTime=30
      - MUMBLE_CONFIG_autobanSuccessfulConnections=false
“Suggest” Options
There is no way to force your users to update their clients or use Push to Talk, but these options will throw an error into their Mumble Client.
https://wiki.mumble.info/wiki/Murmur.ini#Miscellany
We suggest using Mumble 1.4.0+ for your server and clients; you can tune this to the latest patch version. If Push to Talk is to your tastes, configure the suggestion as follows:
  mumble-server:
    environment:
      - MUMBLE_CONFIG_suggestVersion=1.4.287
      - MUMBLE_CONFIG_suggestPushToTalk=true
General notes
Server password
With the default Mumble configuration, your Mumble server is public, meaning that everyone who has the address can at least connect to it and might also be able to join all channels that don't have any permissions set (depending on the ACL configured for the root channel).
We have changed this behaviour by setting a server password by default. To change this password, modify MUMBLE_SERVERPASSWORD in .env.
Restart the container to apply the change.
docker compose restart mumble-server
It is not recommended to share/use this password; instead, use the Mumble Authenticator whenever possible.
Only registered members can join your Mumble server. If you still want to allow guests to join, you have two options.
Allow the “Guest” state to activate the Mumble service in your Auth instance
Enabling Avatars in Overlay (V1.0.0+)
Ensure you have an up-to-date Mumble-Authenticator; this feature was added in V1.0.0
Edit authenticator.ini
and change (or add for older installs) this code block:
;If enabled, textures are automatically set as player's EvE avatar for use on overlay.
avatar_enable = True
;Get EvE avatar images from this location. {charid} will be filled in.
ccp_avatar_url = https://images.evetech.net/characters/{charid}/portrait?size=32
Openfire
Openfire is a Jabber (XMPP) server.
Prepare Your Settings
Add 'allianceauth.services.modules.openfire', to your INSTALLED_APPS list
Append the following to your auth project's settings file:
# Jabber Configuration
JABBER_URL = ""
JABBER_PORT = 5223
JABBER_SERVER = ""
OPENFIRE_ADDRESS = ""
OPENFIRE_SECRET_KEY = ""
BROADCAST_USER = ""
BROADCAST_USER_PASSWORD = ""
BROADCAST_SERVICE_NAME = "broadcast"
OS Dependencies
Openfire requires a Java runtime environment (the packages below install OpenJDK 11).
sudo apt-get install openjdk-11-jre
sudo yum install java-11-openjdk java-11-openjdk-devel
sudo dnf install java-11-openjdk java-11-openjdk-devel
sudo dnf install java-11-openjdk java-11-openjdk-devel
Setup
Download Installer
Openfire is not available through repositories, so we need to get a package from the developer.
On your PC, navigate to the Ignite Realtime downloads section, and under Openfire select Linux, click on the Ubuntu: Debian package (second from bottom of the list, ends with .deb) or CentOS: RPM Package (no JRE bundled, as we have installed it on the host)
Retrieve the file location by copying the URL from the “click here” link. Depending on your browser, you may have a Copy Link or similar option in your right click menu.
In the console, ensure you’re in your user’s home directory:
cd ~
Download and install the package, replacing the URL with the latest you got from the Openfire download page earlier
wget https://www.igniterealtime.org/downloadServlet?filename=openfire/openfire_4.7.2_all.deb
dpkg -i openfire_4.7.2_all.deb
wget https://www.igniterealtime.org/downloadServlet?filename=openfire/openfire-4.7.2-1.noarch.rpm
yum install -y openfire-4.7.2-1.noarch.rpm
wget https://www.igniterealtime.org/downloadServlet?filename=openfire/openfire-4.7.2-1.noarch.rpm
yum install -y openfire-4.7.2-1.noarch.rpm
Create Database
Performance is best when working from an SQL database. If you installed MySQL or MariaDB alongside your auth project, go ahead and create a database for Openfire:
mysql -u root -p
create database alliance_jabber;
grant all privileges on alliance_jabber . * to 'allianceserver'@'localhost';
exit;
Web Configuration
The remainder of the setup occurs through Openfire’s web interface. Navigate to http://example.com:9090, or if you’re behind CloudFlare, go straight to your server’s IP:9090.
Select your language. I sure hope it’s English if you’re reading this guide.
Under Server Settings, set the Domain to example.com
replacing it with your actual domain. Don’t touch the rest.
Under Database Settings, select Standard Database Connection
On the next page, select MySQL
from the dropdown list and change the following:
[server] is replaced by 127.0.0.1
[database] is replaced by the name of the database to be used by Openfire
Enter the login details for your auth project's database user
If Openfire returns with a failed to connect error, re-check these settings. Note the lack of square brackets.
Under Profile Settings, leave Default
selected.
Create an administrator account. The actual name is irrelevant, just don’t lose this login information.
Finally, log in to the console with your admin account.
Edit your auth project’s settings file and enter the values you set:
JABBER_URL is the public address of your jabber server
JABBER_PORT is the port for clients to connect to (usually 5223)
JABBER_SERVER is the name of the jabber server. If you didn't alter it during the installation, it'll usually be your domain (eg example.com)
OPENFIRE_ADDRESS is the web address of Openfire's web interface. Use http:// with port 9090 or https:// with port 9091 if you configure SSL in Openfire
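Putting that together, a filled-in example might look like this, assuming your domain is example.com and you kept the default ports:
JABBER_URL = "example.com"
JABBER_PORT = 5223
JABBER_SERVER = "example.com"
OPENFIRE_ADDRESS = "http://example.com:9090/"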
REST API Setup
Navigate to the plugins
tab, and then Available Plugins
on the left navigation bar. You’ll need to fetch the list of available plugins by clicking the link.
Once loaded, press the green plus on the right for REST API
.
Navigate to the Server
tab, Server Settings
subtab. At the bottom of the left navigation bar select REST API
.
Select Enabled
, and Secret Key Auth
. Update your auth project’s settings with this secret key as OPENFIRE_SECRET_KEY
.
Broadcast Plugin Setup
Navigate to the Users/Groups
tab and select Create New User
from the left navigation bar.
Pick a username (e.g. broadcast
) and password for your ping user. Enter these in your auth project’s settings file as BROADCAST_USER
and BROADCAST_USER_PASSWORD
. Note that BROADCAST_USER
needs to be in the format user@example.com
matching your jabber server name. Press Create User
to save this user.
Broadcasting requires a plugin. Navigate to the plugins
tab, press the green plus for the Broadcast
plugin.
Navigate to the Server
tab, Server Manager
subtab, and select System Properties
. Enter the following:
Name: plugin.broadcast.disableGroupPermissions
Value: True
Do not encrypt this property value
Name: plugin.broadcast.allowedUsers
Value: broadcast@example.com, replacing the domain name with yours
Do not encrypt this property value
If you have troubles getting broadcasts to work, you can try setting the optional (you will need to add it) BROADCAST_IGNORE_INVALID_CERT
setting to True
. This will allow invalid certificates to be used when connecting to the Openfire server to send a broadcast.
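For example:
BROADCAST_IGNORE_INVALID_CERT = True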
Preparing Auth
Once all settings are entered, run migrations and restart Gunicorn and Celery.
Group Chat
Channels are available which function like a chat room. Access can be controlled either by password or ACL (not unlike mumble).
Navigate to the Group Chat
tab and select Create New Room
from the left navigation bar.
Room ID is a short, easy-to-type version of the room’s name users will connect to
Room Name is the full name for the room
Description is short text describing the room’s purpose
Set a password if you want password authentication
Every other setting is optional. Save changes.
Now select your new room. On the left navigation bar, select Permissions
.
ACL is achieved by assigning groups to each of the three tiers: Owners
, Admins
and Members
. Outcast
is the blacklist. You’ll usually only be assigning groups to the Member
category.
Permissions
To use this service, users will require some of the following.
Permission | Admin Site | Auth Site
---|---|---
openfire.access_openfire | None | Can Access the Openfire Service
Openfire
An alternate install guide for Openfire using Docker, better suited to an Alliance Auth Docker install
Openfire is a Jabber (XMPP) server.
Configuring Auth
In your auth project’s settings file (aa-docker/conf/local.py
), do the following:
Add 'allianceauth.services.modules.openfire', to your INSTALLED_APPS list
Append the following to your auth project's settings file:
# Jabber Configuration
JABBER_URL = SITE_URL
JABBER_PORT = os.environ.get('JABBER_PORT', 5223)
JABBER_SERVER = SITE_URL
OPENFIRE_ADDRESS = SITE_URL
OPENFIRE_SECRET_KEY = os.environ.get('OPENFIRE_SECRET_KEY', '')
BROADCAST_USER = ""
BROADCAST_USER_PASSWORD = os.environ.get('BROADCAST_USER_PASSWORD', '127.0.0.1')
BROADCAST_SERVICE_NAME = "broadcast"
Add the following lines to your .env
file
# Openfire
OPENFIRE_SECRET_KEY = superuser_password
BROADCAST_USER_PASSWORD = icesecretwrite
Finally, restart your stack and run migrations
docker compose --env-file=.env up -d
docker compose exec allianceauth_gunicorn bash
auth migrate
Docker Installation
Add the following to your docker-compose.yml
under the services:
section
  openfire:
    image: nasqueron/openfire:4.7.5
    ports:
      - "5222:5222/tcp"
      - "5223:5223/tcp"
      - "7777:7777/tcp"
    volumes:
      - openfire-data:/var/lib/openfire
    depends_on:
      - auth_mysql
    logging:
      driver: "json-file"
      options:
        max-size: "50Mb"
        max-file: "5"
Create Database
We already have a MariaDB container as part of the Alliance Auth stack; enter it and create a database for Openfire.
docker exec -it auth_mysql bash
mysql -u root -p$AA_DB_ROOT_PASSWORD
create database alliance_jabber;
grant all privileges on alliance_jabber . * to 'aauth'@'localhost';
exit;
exit
Configure Webserver
In Nginx Proxy Manager (http://yourdomain:81/), go to Proxy Hosts and click Add Proxy Host. You can refer to :doc:/installation-containerized/docker
Domain Name: jabber.yourdomain
Forward Hostname: openfire
Forward Port: 9090 for http, 9091 for https
Web Configuration
The remainder of the setup occurs through Openfire’s web interface. Navigate to http://jabber.yourdomain.com
Select your language; our guide will assume English.
Under Server Settings, set the Domain to jabber.yourdomain.com
replacing it with your actual domain. Don’t touch the rest.
Under Database Settings, select Standard Database Connection
On the next page, select MySQL
from the dropdown list and change the following:
[server]: auth_mysql
[database]: alliance_jabber
[user]: aauth
[password]: your database user's password
If Openfire returns with a failed to connect error, re-check these settings. Note the lack of square brackets.
Under Profile Settings, leave Default
selected.
Create an administrator account. The actual name is irrelevant, just don’t lose this login information.
Finally, log in to the console with your admin account.
Edit your auth project’s settings file (aa-docker/conf/local.py
) and enter the values you just set:
JABBER_URL is the public address of your jabber server
JABBER_PORT is the port for clients to connect to (usually 5223)
JABBER_SERVER is the name of the jabber server. If you didn't alter it during install, it'll usually be your domain (eg jabber.example.com)
OPENFIRE_ADDRESS is the web address of Openfire's web interface. Use http:// with port 9090 or https:// with port 9091 if you configure SSL in Openfire and Nginx Proxy Manager
REST API Setup
Navigate to the plugins
tab, and then Available Plugins
on the left navigation bar. You’ll need to fetch the list of available plugins by clicking the link.
Once loaded, press the green plus on the right for REST API
.
Navigate to the Server
tab, Server Settings
subtab. At the bottom of the left navigation bar select REST API
.
Select Enabled
, and Secret Key Auth
. Update your auth project’s settings with this secret key as OPENFIRE_SECRET_KEY
.
Broadcast Plugin Setup
Navigate to the Users/Groups
tab and select Create New User
from the left navigation bar.
Pick a username (e.g. broadcast
) and password for your ping user. Enter these in your auth project’s settings file as BROADCAST_USER
and BROADCAST_USER_PASSWORD
. Note that BROADCAST_USER
needs to be in the format user@example.com
matching your jabber server name. Press Create User
to save this user.
Broadcasting requires a plugin. Navigate to the plugins
tab, press the green plus for the Broadcast
plugin.
Navigate to the Server
tab, Server Manager
subtab, and select System Properties
. Enter the following:
Name: plugin.broadcast.disableGroupPermissions
Value: True
Do not encrypt this property value
Name: plugin.broadcast.allowedUsers
Value: broadcast@example.com, replacing the domain name with yours
Do not encrypt this property value
If you have troubles getting broadcasts to work, you can try setting the optional (you will need to add it) BROADCAST_IGNORE_INVALID_CERT
setting to True
. This will allow invalid certificates to be used when connecting to the Openfire server to send a broadcast.
Preparing Auth
Once all settings are entered, run migrations and restart Gunicorn and Celery.
Group Chat
Channels are available which function like a chat room. Access can be controlled either by password or ACL (not unlike mumble).
Navigate to the Group Chat
tab and select Create New Room
from the left navigation bar.
Room ID is a short, easy-to-type version of the room’s name users will connect to
Room Name is the full name for the room
Description is short text describing the room’s purpose
Set a password if you want password authentication
Every other setting is optional. Save changes.
Now select your new room. On the left navigation bar, select Permissions
.
ACL is achieved by assigning groups to each of the three tiers: Owners
, Admins
and Members
. Outcast
is the blacklist. You’ll usually only be assigning groups to the Member
category.
Permissions
To use this service, users will require some of the following.
Permission | Admin Site | Auth Site
---|---|---
openfire.access_openfire | None | Can Access the Openfire Service
phpBB3
Overview
phpBB is a free PHP-based forum.
Dependencies
phpBB3 requires PHP installed in your web server. Apache has mod_php
, NGINX requires php-fpm
. See the official guide for PHP package requirements.
Prepare Your Settings
In your auth project’s settings file, do the following:
Add 'allianceauth.services.modules.phpbb3', to your INSTALLED_APPS list
Append the following to the bottom of the settings file:
# PHPBB3 Configuration
PHPBB3_URL = ''
DATABASES['phpbb3'] = {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'alliance_forum',
'USER': 'allianceserver',
'PASSWORD': 'password',
'HOST': '127.0.0.1',
'PORT': '3306',
}
Setup
Prepare the Database
Create a database to install phpBB3 in.
mysql -u root -p
create database alliance_forum;
grant all privileges on alliance_forum . * to 'allianceserver'@'localhost';
exit;
Edit your auth project’s settings file and fill out the DATABASES['phpbb3']
part.
Download phpBB3
phpBB3 is available as a zip from their website. Navigate to the website’s downloads section using your PC browser and copy the URL for the latest version zip.
In the console, navigate to your user’s home directory: cd ~
Now download using wget, replacing the URL with the URL for the package you just retrieved
wget https://download.phpbb.com/pub/release/3.3/3.3.8/phpBB-3.3.8.zip
This needs to be unpackaged. Unzip it, replacing the file name with that of the file you just downloaded
unzip phpBB-3.3.8.zip
Now we need to move this to our web directory. Usually /var/www/forums
.
mv phpBB3 /var/www/forums
The web server needs read/write permissions to this folder
Apache: chown -R www-data:www-data /var/www/forums
Nginx: chown -R nginx:nginx /var/www/forums
Tip
Nginx: Some distributions use the www-data:www-data
user:group instead of nginx:nginx
. If you run into problems with permissions try it instead.
Configuring Web Server
You will need to configure your web server to serve PHPBB3 before proceeding with installation.
A minimal Apache config file might look like:
<VirtualHost *:80>
ServerName forums.example.com
DocumentRoot /var/www/forums
<Directory /var/www/forums>
Require all granted
DirectoryIndex index.php
</Directory>
</VirtualHost>
A minimal Nginx config file might look like:
server {
listen 80;
server_name forums.example.com;
root /var/www/forums;
index index.php;
access_log /var/logs/forums.access.log;
location ~ /(config\.php|common\.php|cache|files|images/avatars/upload|includes|store) {
deny all;
return 403;
}
location ~* \.(gif|jpe?g|png|css)$ {
expires 30d;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass unix:/tmp/php.socket;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
Enter your forum’s web address as the PHPBB3_URL
setting in your auth project’s settings file.
Web Install
Navigate to your forum web address where you will be presented with an installer.
Click on the Install
tab.
All the requirements should be met. Press Start Install
.
Under Database Settings, set the following:
Database Type is
MySQL
Database Server Hostname is
127.0.0.1
Database Server Port is left blank
Database Name is
alliance_forum
Database Username is your auth MySQL user, usually
allianceserver
Database Password is this user’s password
If you use a table prefix other than the standard phpbb_
you need to add an additional setting to your auth project’s settings file, PHPBB3_TABLE_PREFIX = ''
, and enter the prefix.
You should see Successful Connection
and proceed.
Enter administrator credentials on the next page.
Everything from here should be intuitive.
phpBB will then write its own config file.
Open the Forums
Before users can see the forums, we need to remove the installation directory
rm -rf /var/www/forums/install
Enabling Avatars
AllianceAuth sets user avatars to their character portrait when the account is created or password reset. We need to allow external URLs for avatars for them to behave properly. Navigate to the admin control panel for phpbb3, and under the General
tab, along the left navigation bar beneath Board Configuration
, select Avatar Settings
. Set Enable Remote Avatars
to Yes
and then Submit
.
You can allow members to overwrite the portrait with a custom image if desired. Navigate to Users and Groups
, Group Permissions
, select the appropriate group (usually Member
if you want everyone to have this ability), expand Advanced Permissions
, under the Profile
tab, set Can Change Avatars
to Yes
, and press Apply Permissions
.
Setting the default theme
Users generated via Alliance Auth do not have a default theme set. You will need to set this on the phpbb_users table in SQL
mysql -u root -p
use alliance_forum;
alter table phpbb_users change user_style user_style int not null default 1
If you would like to use a theme that is NOT prosilver or theme "1", you will need to deactivate prosilver; this will then fall back to the forum-wide default that you set.
Prepare Auth
Once settings have been configured, run migrations and restart Gunicorn and Celery.
Permissions
To use this service, users will require some of the following.
Permission | Admin Site | Auth Site
---|---|---
phpbb3.access_phpbb3 | None | Can Access the PHPBB3 Service
SMF
Overview
SMF is a free PHP-based forum.
Dependencies
SMF requires PHP installed in your web server. Apache has mod_php
, NGINX requires php-fpm
. More details can be found in the SMF requirements page.
Prepare Your Settings
In your auth project’s settings file, do the following:
Add 'allianceauth.services.modules.smf', to your INSTALLED_APPS list
Append the following to the bottom of the settings file:
# SMF Configuration
SMF_URL = ''
DATABASES['smf'] = {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'alliance_smf',
'USER': 'allianceserver-smf',
'PASSWORD': 'password',
'HOST': '127.0.0.1',
'PORT': '3306',
}
Setup
Download SMF
Using your browser, you can download the latest version of SMF to your desktop computer. All SMF downloads can be found at SMF Downloads. The latest recommended version will always be available at http://www.simplemachines.org/download/index.php/latest/install/. Retrieve the file location from the hyperlinked box icon for the zip full install. Depending on your browser, you may have a Copy Link or similar option in your right-click menu.
Download using wget, replacing the URL with the URL for the package you just retrieved
wget https://download.simplemachines.org/index.php?thanks;filename=smf_2-1-2_install.tar.gz
This needs to be unpacked. Extract it, replacing the file name with that of the file you just downloaded
tar -xzf smf_2-1-2_install.tar.gz
Now we need to move this to our web directory. Usually /var/www/forums
.
mv smf /var/www/forums
The web server needs read/write permissions to this folder
Apache: chown -R www-data:www-data /var/www/forums
Nginx: chown -R nginx:nginx /var/www/forums
Tip
Nginx: Some distributions use the www-data:www-data
user:group instead of nginx:nginx
. If you run into problems with permissions, try it instead.
Database Preparation
SMF needs a database. Create one:
mysql -u root -p
create database alliance_smf;
grant all privileges on alliance_smf . * to 'allianceserver'@'localhost';
exit;
Enter the database information into the DATABASES['smf']
section of your auth project’s settings file.
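For example, matching the database and grant created above (a sketch; if you prefer the dedicated allianceserver-smf user shown in the earlier settings snippet, grant it privileges instead and enter that name here):
DATABASES['smf'] = {
    'ENGINE': 'django.db.backends.mysql',
    'NAME': 'alliance_smf',
    'USER': 'allianceserver',
    'PASSWORD': 'your-mysql-password',  # the password for this MySQL user
    'HOST': '127.0.0.1',
    'PORT': '3306',
}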
Web Server Configuration
Your web server needs to be configured to serve SMF.
A minimal Apache config might look like:
<VirtualHost *:80>
ServerName forums.example.com
DocumentRoot /var/www/forums
<Directory "/var/www/forums">
DirectoryIndex index.php
</Directory>
</VirtualHost>
A minimal Nginx config might look like:
server {
listen 80;
server_name forums.example.com;
root /var/www/forums;
index index.php;
access_log /var/logs/forums.access.log;
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass unix:/tmp/php.socket;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
Enter the web address to your forums into the SMF_URL
setting in your auth project’s settings file.
Web Install
Navigate to your forum address where you will be presented with an installer.
Click on the Install
tab.
All the requirements should be met. Press Start Install
.
Under Database Settings, set the following:
Database Type is
MySQL
Database Server Hostname is
127.0.0.1
Database Server Port is left blank
Database Name is
alliance_smf
Database Username is your auth MySQL user, usually
allianceserver
Database Password is this user’s password
If you use a table prefix other than the standard smf_
you need to add an additional setting to your auth project’s settings file, SMF_TABLE_PREFIX = ''
, and enter the prefix.
Follow the directions in the installer.
Preparing Auth
Once settings are entered, apply migrations and restart Gunicorn and Celery.
Permissions
To use this service, users will require some of the following.
Permission | Admin Site | Auth Site
---|---|---
smf.access_smf | None | Can Access the SMF Service
TeamSpeak 3
Overview
TeamSpeak3 is the most popular VOIP program for gamers.
But have you considered using Mumble? Not only is it free, but it has features and performance far superior to Teamspeak3.
Setup
Sticking with TS3? Alright, I tried.
Prepare Your Settings
In your auth project’s settings file, do the following:
Add 'allianceauth.services.modules.teamspeak3', to your INSTALLED_APPS list
Append the following to the bottom of the settings file:
# Teamspeak3 Configuration
TEAMSPEAK3_SERVER_IP = '127.0.0.1'
TEAMSPEAK3_SERVER_PORT = 10011
TEAMSPEAK3_SERVERQUERY_USER = 'serveradmin'
TEAMSPEAK3_SERVERQUERY_PASSWORD = ''
TEAMSPEAK3_VIRTUAL_SERVER = 1
TEAMSPEAK3_PUBLIC_URL = ''
CELERYBEAT_SCHEDULE['run_ts3_group_update'] = {
'task': 'allianceauth.services.modules.teamspeak3.tasks.run_ts3_group_update',
'schedule': crontab(minute='*/30'),
}
Download Installer
To install, we need a copy of the server. You can find the latest version on the TeamSpeak website. Be sure to get a link to the Linux version.
Download the server, replacing the link with the link you got earlier.
cd ~
wget https://files.teamspeak-services.com/releases/server/3.13.7/teamspeak3-server_linux_amd64-3.13.7.tar.bz2
Now we need to extract the file.
tar -xf teamspeak3-server_linux_amd64-3.13.7.tar.bz2
Create User
TeamSpeak needs its own user.
adduser --disabled-login teamspeak
Install Binary
Now we move the server binary somewhere more accessible and change its ownership to the new user.
mv teamspeak3-server_linux_amd64 /usr/local/teamspeak
chown -R teamspeak:teamspeak /usr/local/teamspeak
Startup
Now we generate a startup script so TeamSpeak comes up with the server.
ln -s /usr/local/teamspeak/ts3server_startscript.sh /etc/init.d/teamspeak
update-rc.d teamspeak defaults
Finally, we start the server.
service teamspeak start
Update Settings
Set your Teamspeak Serveradmin password to a random string
./ts3server_minimal_runscript.sh inifile=ts3server.ini serveradmin_password=pleasegeneratearandomstring
If you plan on claiming the ServerAdmin token, do so with a different TeamSpeak client profile than the one used for your auth account, or you will lose your admin status.
Edit the settings you added to your auth project’s settings file earlier, entering the following:
TEAMSPEAK3_SERVERQUERY_USER is loginname from the above bash command (usually serveradmin)
TEAMSPEAK3_SERVERQUERY_PASSWORD is password following the equals in serveradmin_password=
TEAMSPEAK3_VIRTUAL_SERVER is the virtual server ID of the server to be managed - it will only ever not be 1 if your server is hosted by a professional company
TEAMSPEAK3_PUBLIC_URL is the public address of your TeamSpeak server. Do not include any leading http:// or teamspeak://
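For illustration, the connection settings added earlier might end up looking like this (all values here are placeholders; ts.example.com is not a real address):
TEAMSPEAK3_SERVER_IP = '127.0.0.1'
TEAMSPEAK3_SERVER_PORT = 10011
TEAMSPEAK3_SERVERQUERY_USER = 'serveradmin'
TEAMSPEAK3_SERVERQUERY_PASSWORD = 'pleasegeneratearandomstring'
TEAMSPEAK3_VIRTUAL_SERVER = 1
TEAMSPEAK3_PUBLIC_URL = 'ts.example.com'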
Once settings are entered, run migrations and restart Gunicorn and Celery.
Generate User Account
And now we can generate ourselves a user account. Navigate to the services in Alliance Auth for your user account and press the checkmark for TeamSpeak 3.
Click the URL provided to automatically connect to our server. It will prompt you to redeem the serveradmin token; enter the token from startup.
Groups
Now we need to make groups. AllianceAuth handles groups in teamspeak differently: instead of creating groups, it creates an association between groups in TeamSpeak and groups in AllianceAuth. Go ahead and make the groups you want to associate with auth groups, keeping in mind multiple TeamSpeak groups can be associated with a single auth group.
Navigate back to the AllianceAuth admin interface (example.com/admin) and under Teamspeak3
, select Auth / TS Groups
.
In the top-right corner, first click on Update TS3 Groups
to fetch the newly created server groups from TS3 (this may take a minute to complete). Then click on Add Auth / TS Group
to link Auth groups with TS3 server groups.
The dropdown box provides all auth groups. Select one and assign TeamSpeak groups from the panels below. If these panels are empty, wait a minute for the database update to run, or see the troubleshooting section below.
Troubleshooting
Insufficient client permissions (failed on Invalid permission: 0x26)
Using the advanced permissions editor, ensure the Guest
group has the permission Use Privilege Keys to gain permissions
(under Virtual Server
expand the Administration
section)
To enable advanced permissions, on your client go to the Tools
menu, Application
, and under the Misc
section, tick Advanced permission system
TS group models not populating on admin site
The method which populates these runs every 30 minutes. To populate manually, you can start the process from the admin site or from the Django shell.
Admin Site
Navigate to the AllianceAuth admin interface and under Teamspeak3
, select Auth / TS Groups
.
Then, in the top-right corner, click on Update TS3 Groups
to start the process of fetching the server groups from TS3 (this may take a minute to complete).
Django Shell
Start a django shell with:
python manage.py shell
And execute the update as follows:
from allianceauth.services.modules.teamspeak3.tasks import Teamspeak3Tasks
Teamspeak3Tasks.run_ts3_group_update()
Ensure that command does not return an error.
2564 access to default group is forbidden
This usually occurs because auth is trying to remove a user from the Guest
group (group ID 8). The guest group is only assigned to a user when they have no other groups, unless you have changed the default teamspeak server config.
Teamspeak servers v3.0.13 and up are especially susceptible to this. Ensure the Channel Admin Group is not set to Guest (8)
. Check by right-clicking on the server name, Edit virtual server
, and in the middle of the panel select the Misc
tab.
TypeError: string indices must be integers, not str
This error generally means teamspeak returned an error message that went unhandled. The full traceback is required for proper debugging, which the logs do not record. Please check the superuser notifications for this record and get in touch with a developer.
3331 flood ban
This most commonly happens when your teamspeak server is externally hosted. You need to add the auth server IP to the teamspeak serverquery whitelist. This varies by provider.
If you have SSH access to the server hosting it, you need to locate the teamspeak server folder and add the auth server IP on a new line in query_ip_allowlist.txt
(named query_ip_whitelist.txt
on older teamspeak versions).
520 invalid loginname or password
The serverquery account login specified in local.py is incorrect. Please verify TEAMSPEAK3_SERVERQUERY_USER
and TEAMSPEAK3_SERVERQUERY_PASSWORD
. The installation section describes where to get them.
2568 insufficient client permissions
This usually occurs if you’ve created a separate serverquery user to use with auth. It has not been assigned sufficient permissions to complete all the tasks required of it. The full list of required permissions is not known, so assign them liberally.
Permissions
To use and configure this service, users will require some of the following.
Permission | Admin Site | Auth Site
---|---|---
teamspeak.access_teamspeak | None | Can Access the TeamSpeak Service
teamspeak.add_authts | Can Add Model | None
teamspeak.change_authts | Can Change Model | None
teamspeak.delete_authts | Can Delete Model | None
teamspeak.view_authts | Can View Model | None
TeamSpeak 3
Overview
TeamSpeak3 is the most popular VOIP program for gamers.
But have you considered using Mumble? Not only is it free, but it has features and performance far superior to Teamspeak3.
Setup
Sticking with TS3? Alright, I tried.
Configuring Auth
In your auth project’s settings file (aa-docker/conf/local.py
), do the following:
Add 'allianceauth.services.modules.teamspeak3', to your INSTALLED_APPS list
Append the following to your auth project's settings file:
# Teamspeak3 Configuration
TEAMSPEAK3_SERVER_IP = os.environ.get('TEAMSPEAK3_SERVER_IP', '127.0.0.1')
TEAMSPEAK3_SERVER_PORT = os.environ.get('TEAMSPEAK3_SERVER_PORT', 10011)
TEAMSPEAK3_SERVERQUERY_USER = os.environ.get('TEAMSPEAK3_SERVERQUERY_USER', "serverquery")
TEAMSPEAK3_SERVERQUERY_PASSWORD = os.environ.get('TEAMSPEAK3_SERVERQUERY_PASSWORD', "")
TEAMSPEAK3_VIRTUAL_SERVER = os.environ.get('TEAMSPEAK3_VIRTUAL_SERVER', 1)
TEAMSPEAK3_PUBLIC_URL = SITE_URL
CELERYBEAT_SCHEDULE['run_ts3_group_update'] = {
'task': 'allianceauth.services.modules.teamspeak3.tasks.run_ts3_group_update',
'schedule': crontab(minute='*/30'),
}
Add the following lines to your .env
file
# Teamspeak
TEAMSPEAK3_SERVERQUERY_USER = "serverquery"
TEAMSPEAK3_SERVERQUERY_PASSWORD = ""
Docker Installation
Add the following to your docker-compose.yml
under the services:
section
teamspeak:
  image: teamspeak:3.13
  restart: always
  environment:
    TS3SERVER_LICENSE: accept
  ports:
    - 9987:9987/udp
    - 30033:30033
  volumes:
    - teamspeak-data:/var/ts3server/
  logging:
    driver: "json-file"
    options:
      max-size: "10Mb"
      max-file: "5"
Update Settings
In your auth project's settings file (aa-docker/conf/local.py), update the following:
TEAMSPEAK3_VIRTUAL_SERVER is the virtual server ID of the server to be managed - it will only ever not be 1 if your server is hosted by a professional company
TEAMSPEAK3_PUBLIC_URL is the public address of your TeamSpeak server. Do not include any leading http:// or teamspeak://
In your .env file, update the following, obtained from the logs of the TeamSpeak server initialization (docker compose logs teamspeak):
TEAMSPEAK3_SERVERQUERY_USER is loginname from the log output (usually serveradmin)
TEAMSPEAK3_SERVERQUERY_PASSWORD is password following the equals in serveradmin_password=
Once settings are entered, run migrations and restart your stack
docker compose --env-file=.env up -d
docker compose exec allianceauth_gunicorn bash
auth migrate
Generate User Account
And now we can generate ourselves a user account. Navigate to the services in Alliance Auth for your user account and press the checkmark for TeamSpeak 3.
Click the URL provided to automatically connect to our server. It will prompt you to redeem the serveradmin token; enter the token from startup.
Groups
Now we need to make groups. AllianceAuth handles groups in teamspeak differently: instead of creating groups, it creates an association between groups in TeamSpeak and groups in AllianceAuth. Go ahead and make the groups you want to associate with auth groups, keeping in mind multiple TeamSpeak groups can be associated with a single auth group.
Navigate back to the AllianceAuth admin interface (example.com/admin) and under Teamspeak3
, select Auth / TS Groups
.
In the top-right corner, first click on Update TS3 Groups
to fetch the newly created server groups from TS3 (this may take a minute to complete). Then click on Add Auth / TS Group
to link Auth groups with TS3 server groups.
The dropdown box provides all auth groups. Select one and assign TeamSpeak groups from the panels below. If these panels are empty, wait a minute for the database update to run, or see the troubleshooting section below.
Troubleshooting
Insufficient client permissions (failed on Invalid permission: 0x26)
Using the advanced permissions editor, ensure the Guest
group has the permission Use Privilege Keys to gain permissions
(under Virtual Server
expand the Administration
section)
To enable advanced permissions, on your client go to the Tools
menu, Application
, and under the Misc
section, tick Advanced permission system
TS group models not populating on admin site
The method which populates these runs every 30 minutes. To populate manually, you can start the process from the admin site or from the Django shell.
Admin Site
Navigate to the AllianceAuth admin interface and under Teamspeak3
, select Auth / TS Groups
.
Then, in the top-right corner, click on Update TS3 Groups
to start the process of fetching the server groups from TS3 (this may take a minute to complete).
Django Shell
Start a django shell with:
docker compose exec allianceauth_gunicorn bash
auth shell
And execute the update as follows:
from allianceauth.services.modules.teamspeak3.tasks import Teamspeak3Tasks
Teamspeak3Tasks.run_ts3_group_update()
Ensure that command does not return an error.
2564 access to default group is forbidden
This usually occurs because auth is trying to remove a user from the Guest
group (group ID 8). The guest group is only assigned to a user when they have no other groups, unless you have changed the default teamspeak server config.
Teamspeak servers v3.0.13 and up are especially susceptible to this. Ensure the Channel Admin Group is not set to Guest (8)
. Check by right-clicking on the server name, Edit virtual server
, and in the middle of the panel select the Misc
tab.
TypeError: string indices must be integers, not str
This error generally means teamspeak returned an error message that went unhandled. The full traceback is required for proper debugging, which the logs do not record. Please check the superuser notifications for this record and get in touch with a developer.
3331 flood ban
This most commonly happens when your teamspeak server is externally hosted. You need to add the auth server IP to the teamspeak serverquery whitelist. This varies by provider.
If you have SSH access to the server hosting it, you need to locate the teamspeak server folder and add the auth server IP on a new line in query_ip_allowlist.txt
(named query_ip_whitelist.txt
on older teamspeak versions).
520 invalid loginname or password
The serverquery account login specified in local.py is incorrect. Please verify TEAMSPEAK3_SERVERQUERY_USER
and TEAMSPEAK3_SERVERQUERY_PASSWORD
. The installation section describes where to get them.
2568 insufficient client permissions
This usually occurs if you’ve created a separate serverquery user to use with auth. It has not been assigned sufficient permissions to complete all the tasks required of it. The full list of required permissions is not known, so assign them liberally.
Permissions
To use and configure this service, users will require some of the following.
Permission | Admin Site | Auth Site
---|---|---
teamspeak.access_teamspeak | None | Can Access the TeamSpeak Service
teamspeak.add_authts | Can Add Model | None
teamspeak.change_authts | Can Change Model | None
teamspeak.delete_authts | Can Delete Model | None
teamspeak.view_authts | Can View Model | None
XenForo
Overview
XenForo is a popular, paid forum. This guide assumes that you already have XenForo installed with a valid license (please keep in mind that XenForo is neither free nor open-source, therefore you need to purchase a license first). If you come across any problems related to the installation of XenForo, please contact their support service.
Prepare Your Settings
In your auth project’s settings file, do the following:
Add 'allianceauth.services.modules.xenforo', to your INSTALLED_APPS list
Append the following to your local.py settings file:
# XenForo Configuration
XENFORO_ENDPOINT = 'example.com/api.php'
XENFORO_DEFAULT_GROUP = 0
XENFORO_APIKEY = 'yourapikey'
XenAPI
By default, XenForo does not support any kind of API; however, there is a third-party package called XenAPI which provides a simple REST interface by which we can access XenForo’s functions to create and edit users.
The installation of XenAPI is straightforward. The only thing you need to do is to download the api.php from the official repository and upload it to the root folder of your XenForo installation. The final result should look like this:
forumswebsite.com/api.php
Now that XenAPI is installed, the only thing left to do is to provide a key.
$restAPI = new RestAPI('REPLACE_THIS_WITH_AN_API_KEY');
Configuration
The settings you created earlier now need to be filled out.
XENFORO_ENDPOINT
is the address to the API you added. No leading http://
, but be sure to include the /api.php
at the end.
XENFORO_DEFAULT_GROUP
is the ID of the group in XenForo auth users will be added to. Unfortunately, XenAPI cannot create new groups, therefore, you have to create a group manually and then get its ID.
XENFORO_APIKEY
is the API key value you set earlier.
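Putting it together, a filled-out configuration might look like this (the endpoint, group ID, and key below are placeholders for illustration):
# XenForo Configuration
XENFORO_ENDPOINT = 'forumswebsite.com/api.php'
XENFORO_DEFAULT_GROUP = 5  # placeholder - the ID of the group you created in XenForo
XENFORO_APIKEY = 'REPLACE_THIS_WITH_AN_API_KEY'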
Once these are entered, run migrations and restart Gunicorn and Celery.
Permissions
To use this service, users will require some of the following.
Permission | Admin Site | Auth Site
---|---|---
xenforo.access_xenforo | None | Can Access the XenForo Service
Tools
Services Name Formats
This app allows you to customize how usernames for services are created.
Each service’s username or nickname, depending on which the service supports, can be customized through the use of the Name Formatter config provided the service supports custom formats. This config can be found in the admin panel under Services -> Name format config
Currently, the following services support custom name formats:
Service | Used with | Default Formatter
---|---|---
Discord | Nickname |
Discourse | Username |
IPS4 | Username |
Mumble | Username |
Openfire | Username |
phpBB3 | Username |
SMF | Username |
Teamspeak 3 | Nickname |
Xenforo | Username |
Note
It’s important to note here, before we get into what you can do with a name formatter, that before the generated name is passed to the service to create an account, it will be sanitized to remove any characters the service cannot support. This means that, despite what you configured, the service may display something different. It is up to you to test your formatter and understand how your format may be disrupted by a particular service’s sanitization function.
Available format data
The following fields are available for a user account and main character:
username - Alliance Auth username
character_id
character_name
corp_id
corp_name
corp_ticker
alliance_id
alliance_name
alliance_ticker
alliance_or_corp_name (defaults to Corporation name if there is no Alliance)
alliance_or_corp_ticker (defaults to Corporation ticker if there is no Alliance)
Building a formatter string
The name formatter uses the advanced string formatting specified by PEP-3101. Anything supported by this specification is supported in a name formatter.
More digestible documentation of string formatting in Python is available on the PyFormat website.
Some examples of strings you could use:
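For illustration only (these are constructed from the fields listed above, not taken from an official list of defaults; the service's sanitization may still alter the final result):
[{corp_ticker}] {character_name} -> [MYCRP] My Character
{character_name} -> My Character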
Important
For most services, name formats only take effect when a user creates an account. This means if you create or update a name formatter, it won’t retroactively alter the format of users’ names. There are some exceptions to this where the service updates nicknames on a periodic basis. Check the service’s documentation to see which of these apply.
Important
You must only create one formatter per service per state. E.g., don’t create two formatters for Mumble for the Member state. In this case, one of the formatters will be used, and it may not be the formatter you are expecting.
Service Permissions
In the past, access to services was dictated by a list of settings in settings.py
, granting access to each particular service for Members and/or Blues. This meant that granting access to a service was very broad and rigidly structured around these two states.
Permissions based access
Instead of granting access to services by the previous rigid structure, access to services is now granted by the built-in Django permissions system. This means that service access can be more granular, allowing only certain states, certain groups, for instance, Corp CEOs, or even individual user access to each enabled service.
Important
If you grant access to an individual user, they will have access to that service regardless of whether they are a member.
Each service has an access permission defined, named like Can access the <service name> service
.
To mimic the old behaviour of enabling services for all members, you would select the Member
group from the admin panel, add the required service permission to the group and save. Likewise for Blues, select the Blue
group and add the required permission.
A user can be granted the same permission from multiple sources. e.g., they may have it granted by several groups and directly granted on their account as well. Auth will not remove their account until all instances of the permission for that service have been revoked.
Removing access
Danger
Access removal is processed immediately after removing a permission from a user or group. If you remove access from a large group, such as Member, it will immediately remove all users from that service.
When you remove a service permission from a user, a signal is triggered which will activate an immediate permission check. For users, this will trigger an access check for all services. For groups, due to the potential extra load, only the services whose permissions have changed will be verified, and only the users in that group.
If a user no longer has permission to access the service when this permission check is triggered, that service will be immediately disabled for them.
Disabling user accounts
When you unset a user as active in the admin panel, all of that user’s service accounts will be immediately disabled or removed. This is due to the built-in behaviour of the Django permissions system, which will return False for all permissions if a user’s account is disabled, regardless of their actual permissions state.
Apps
Alliance Auth comes with a set of apps (also called plugin-apps) which provide basic functions useful to many organizations in Eve Online like a fleet schedule and a timerboard. This section describes which apps are available and how to install and use them. Please note that any app needs to be installed before it can be used.
Auto Groups
Auto Groups allows you to automatically place users of certain states into corp or alliance-based groups. These groups are created when the first user is added to them and removed when the configuration is deleted.
Installation
This is an optional app that needs to be installed.
To install this app add 'allianceauth.eveonline.autogroups',
to your INSTALLED_APPS
list and run migrations. All other settings are controlled via the admin panel under the Eve_Autogroups
section.
Configuring a group
When you create an autogroup config, you will be given the following options:
Warning
After creating a group, you won’t be able to change the Corp and Alliance group prefixes, name source, and the replace spaces settings. Make sure you configure these the way you want before creating the config. If you need to change these, you will have to create a new autogroup config.
States selects which states will be added to automatic Corp/Alliance groups
Corp/Alliance groups checkbox toggles Corp/Alliance autogroups on or off for this config.
Corp/Alliance group prefix sets the prefix for the group name, e.g., if your corp was called MyCorp and your prefix was Corp, your autogroup name would be created as Corp MyCorp. This field accepts leading/trailing spaces.
Corp/Alliance name source sets the source of the Corp/Alliance name used in creating the group name. Currently, the options are Full name and Ticker.
Replace spaces allows you to replace spaces in the autogroup name with the value in the replace spaces with field. This can be blank.
Permissions
Auto Groups are configured via models in the Admin Interface, a user will require the Staff
Flag in addition to the following permissions.
Permission | Admin Site | Auth Site
---|---|---
eve_autogroups.add_autogroupsconfig | Can create model | None
eve_autogroups.change_autogroupsconfig | Can edit model | None
eve_autogroups.delete_autogroupsconfig | Can delete model | None
More models exist that will be automatically created and maintained by this module; they do not require end-user/admin interaction: managedalliancegroup and managedcorpgroups.
Corporation Stats
This module is used to check the registration status of Corp members and to determine character relationships, i.e., mains and alts.
Installation
Corp Stats requires access to the esi-corporations.read_corporation_membership.v1
SSO scope. Update your application on the EVE Developers site to ensure it is available.
Add 'allianceauth.corputils',
to your INSTALLED_APPS
list in your auth project’s settings file. Run migrations to complete installation.
Creating a Corp Stats
Upon initial installation, nothing will be visible. For every Corp, a model will have to be created before data can be viewed.
If you are a superuser, the “add” button will be immediately visible to you. If not, your user account requires the add_corpstats
permission.
Corp Stats requires an EVE SSO token to access data from the EVE Swagger Interface. Upon pressing the Add button, you will be prompted to authenticate. Please select the character who is in the Corporation you want data for.
You will return to auth where you are asked to select a token with the green arrow button. If you want to use a different character, press the LOG IN with EVE Online
button.
If this works (and you have permission to view the Corp Stats you just created), you’ll be returned to a view of the Corp Stats. If it fails, an error message will be displayed.
Corp Stats View
Last Update
An update can be performed immediately by pressing the update button. Anyone who can view the Corp Stats can update it.
Character Lists
Three views are available:
main characters and their alts
registered characters and their main character
unregistered characters
Each view contains a sortable and searchable table. The number of listings shown can be increased with a dropdown selector. Pages can be changed using the controls on the bottom-right of the table. Each list is searchable at the top-right. Tables can be re-ordered by clicking on column headings.
Main List
This list contains all main characters registered in the selected Corporation and their alts. Each character has a link to zKillboard.
Member List
The list contains all characters in the Corporation. Red backgrounds mean they are not registered in auth. A link to zKillboard is present for all characters. If registered, the character will also have a main character, main Corporation, and main Alliance field.
Unregistered List
This list contains all characters not registered on auth. Each character has a link to zKillboard.
Search View
This view is essentially the same as the Corp Stats page, but not specific to a single Corporation. The search query is visible in the search box. Characters from all Corp Stats to which the user has view access will be displayed. APIs respect permissions.
Permissions
To use this feature, users will require some of the following:
Permission | Admin Site | Auth Site
---|---|---
corpstats.view_corp_corpstats | None | Can view corp stats of their corporation.
corpstats.view_alliance_corpstats | None | Can view corp stats of members of their alliance.
corpstats.view_state_corpstats | None | Can view corp stats of members of their auth state.
corpstats.add_corpstats | Can create model | Can add new corpstats using an SSO token.
Users who add a Corp Stats with their token will be granted permissions to view it regardless of the above permissions. View permissions are interpreted in the “OR” sense: a user can view their corporation’s Corp Stats without the view_corp_corpstats
permission if they have the view_alliance_corpstats
permission, same idea for their state. Note that these evaluate against the user’s main character.
Automatic Updating
By default, Corp Stats are only updated on demand. If you want to automatically refresh on a schedule, add an entry to your project’s settings file:
CELERYBEAT_SCHEDULE['update_all_corpstats'] = {
'task': 'allianceauth.corputils.tasks.update_all_corpstats',
'schedule': crontab(minute="0", hour="*/6"),
}
Adjust the crontab as desired.
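For instance, to refresh every hour instead of every six hours (an illustrative variation using the same celery crontab syntax):
CELERYBEAT_SCHEDULE['update_all_corpstats'] = {
    'task': 'allianceauth.corputils.tasks.update_all_corpstats',
    'schedule': crontab(minute="0", hour="*"),
}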
Troubleshooting
Failure to create Corp Stats
Unrecognized corporation. Please ensure it is a member of the alliance or a blue.
Corp Stats can only be created for Corporations who have a model in the database. These only exist for tenant corps, corps of tenant alliances, blue corps, and members of blue alliances.
Selected corp already has a statistics module.
Only one Corp Stats may exist at a time for a given Corporation.
Failed to gather corporation statistics with selected token.
During the initial population, the EVE Swagger Interface did not return any member data. This aborts the creation process. Please wait for the API to start working before attempting to create again.
Failure to update Corp Stats
Any of the following errors will result in a notification to the owning user and deletion of the Corp Stats model.
Your token has expired or is no longer valid. Please add a new one to create a new CorpStats.
This occurs when the SSO token is invalid, which can occur when deleted by the user, the character is transferred between accounts, or the API is having a bad day.
CorpStats for (corp name) cannot update with your ESI token as you have left corp.
The SSO token’s character is no longer in the Corporation that the Corp Stats are for, and therefore membership data cannot be retrieved.
HTTPForbidden
The SSO token lacks the required scopes to update membership data.
Fleet Activity Tracking
The Fleet Activity Tracking (FAT) app allows you to track fleet participation.
Installation
Fleet Activity Tracking requires access to the esi-location.read_location.v1
, esi-location.read_ship_type.v1
, and esi-universe.read_structures.v1
SSO scopes. Update your application on the EVE Developers site to ensure these are available.
Add 'allianceauth.fleetactivitytracking',
to your INSTALLED_APPS
list in your auth project’s settings file. Run migrations to complete installation.
Permissions
To administer this feature, users will require some of the following.
Users do not require any permissions to interact with created FAT Links.
Permission | Admin Site | Auth Site
---|---|---
auth.fleetactivitytracking | None | Create and Modify FATLinks
auth.fleetactivitytracking_statistics | None | Can view detailed statistics for corp models and other characters.
HR Applications
This app allows you to manage applications for multiple corporations in your alliance. Key features include:
Define application questionnaires for corporations
Users can apply to corporations by filling out questionnaires
Manage a review and approval process of applications
Installation
Add 'allianceauth.hrapplications',
to your INSTALLED_APPS
list in your auth project’s settings file. Run migrations to complete installation.
Management
Creating Forms
The most common task is creating ApplicationForm models for corps. Only when such models exist will a Corporation be listed as a choice for applicants. This occurs in the Django admin site, so only staff have access.
The first step is to create questions. This is achieved by creating ApplicationQuestion models, one for each question. Titles are not unique.
The next step is to create the actual ApplicationForm model. It requires an existing EveCorporationInfo model to which it will belong. It also requires the selection of questions. ApplicationForm models are unique per Corporation: only one may exist for any given Corporation concurrently.
You can adjust these questions at any time. This is the preferred method of modifying the form: deleting and recreating will cascade the deletion to all received applications from this form, which is usually not intended.
Once completed, the Corporation will be available to receive applications.
Reviewing Applications
Superusers can see all applications, while normal members with the required permission can view only those to their corp.
Selecting an application from the management screen will provide all the answers to the questions in the form at the time the user applied.
When a reviewer assigns themselves an application, they mark it as in progress. This notifies the applicant and permanently attaches the reviewer to the application.
Only the assigned reviewer can approve/reject/delete the application if they possess the appropriate permission.
Any reviewer who can see the application can view the applicant’s APIs if they possess the appropriate permission.
Permissions
To administer this feature, users will require some of the following.
Users do not require any permission to apply to a corporation and fill out the form.
Permission | Admin Site | Auth Site
---|---|---
auth.human_resources | None | Can view applications and mark in progress
hrapplications.approve_application | None | Can approve applications
hrapplications.delete_application | Can delete model | Can delete applications
hrapplications.reject_applications | None | Can reject applications
hrapplications.add_applicationcomment | Can create model | Can comment on applications
A user with auth.human_resources
can only see applications to their own corp.
Best practice is to bundle the auth.human_resources
permission alongside the hrapplications.approve_application
and hrapplications.reject_application
permissions, as in isolation these make little sense.
Models
ApplicationQuestion
This is the model representation of a question. It contains a title and a field for optional “helper” text. It is referenced by ApplicationForm models but acts independently. Modifying the question after it has been created will not void responses, so it’s not advisable to edit the title or the answers may not make sense to reviewers.
ApplicationForm
This is the template for an application. It points at a Corporation, with only one form allowed per Corporation. It also points at ApplicationQuestion models. When a user creates an application, they will be prompted with each question the form includes at the given time. Modifying questions in a form after it has been created will not be reflected in existing applications, so it’s perfectly fine to adjust them as you see fit. Changing corporations, however, is not advisable, as existing applications will point at the wrong Corporation after they’ve been submitted, confusing reviewers.
Application
This is the model representation of a completed application. It references an ApplicationForm from which it was spawned, which is where the Corporation specificity comes from. It points at a user, contains info regarding its reviewer, and has a status. Shortcut properties also provide the applicant’s main character, the applicant’s APIs, and a string representation of the reviewer (for cases when the reviewer doesn’t have a main character or the model gets deleted).
ApplicationResponse
This is an answer to a question. It points at the Application to which it belongs, to the ApplicationQuestion which it is answering, and contains the answer text. Modifying any of these fields is dangerous.
ApplicationComment
This is a reviewer’s comment on an application. Points at the application, points to the user, and contains the comment text. Modifying any of these fields is dangerous.
Troubleshooting
No corps accepting applications
Ensure there are ApplicationForm models in the admin site. Ensure the user does not already have an application to these Corporations. If a user wishes to re-apply, they must first delete their completed application.
Reviewer unable to complete application
Reviewers require permission for each of the three possible outcomes of an application: Approve, Reject, or Delete. Any user with the human resources permission can mark an application as in-progress, but if they lack these permissions, then the application will get stuck. Either grant the user the required permissions or change the assigned reviewer in the admin site. Best practice is to bundle the auth.human_resources
permission alongside the hrapplications.approve_application
and hrapplications.reject_application
permissions, as in isolation these serve little purpose.
Fleet Operations
Fleet Operations is an app for organizing and communicating fleet schedules.
Installation
Add 'allianceauth.optimer',
to your INSTALLED_APPS
list in your auth project’s settings file. Run migrations to complete installation.
Permissions
To use and administer this feature, users will require some of the following.
Permission | Admin Site | Auth Site
---|---|---
auth.optimer_view | None | Can view Fleet Operation Timers
auth.optimer_manage | None | Can Manage Fleet Operation timers
Permissions Auditing
Access to most of Alliance Auth’s features is controlled by Django’s permissions system. To help you secure your services, Alliance Auth provides a permission auditing tool.
This is an optional app that needs to be installed.
To install it add 'allianceauth.permissions_tool',
to your INSTALLED_APPS
list in your auth project’s settings file.
Usage
Access
To grant users access to the permission auditing tool, they will need to be granted the permissions_tool.audit_permissions
permission or be a superuser.
When a user has access to the tool, they will see the “Permissions Audit” menu item.
Permissions Overview
The first page gives you a general overview of permissions and how many users have access to each permission.
App, Model and Code Name contain the internal details of the permission while Name contains the name/description you’ll see in the admin panel.
Users is the number of users explicitly granted this permission on their account.
Groups is the number of groups with this permission assigned.
Groups Users is the total number of users in all of the groups with this permission assigned.
Clicking on the Code Name link will take you to the Permissions Audit Page
Permissions Audit Page
The permissions audit page will give you an overview of all the users who have access to this permission either directly or granted via group membership.
Please note that users may appear multiple times if this permission is granted via multiple sources.
Permissions
To use this feature, users will require some of the following.
Permission | Admin Site | Auth Site
---|---|---
permissions_tool.audit_permissions | None | Can view the Permissions Audit tool
Ship Replacement
Ship Replacement helps you to organize ship replacement programs (SRP) for your alliance.
Installation
Add 'allianceauth.srp',
to your INSTALLED_APPS
list in your auth project’s settings file. Run migrations to complete installation.
Permissions
To use and administer this feature, users will require some of the following.
Permission | Admin Site | Auth Site
---|---|---
auth.access_srp | None | Can create an SRP request from a fleet
auth.srp_management | None | Can Approve and Deny SRP requests, Can create an SRP Fleet
srp.add_srpfleetmain | Can Add Model | Can Create an SRP Fleet
Structure Timers
Structure Timers helps you keep track of both offensive and defensive structure timers in your space.
Installation
Add 'allianceauth.timerboard',
to your INSTALLED_APPS
list in your auth project’s settings file. Run migrations to complete installation.
Permissions
To use and administer this feature, users will require some of the following.
Permission | Admin Site | Auth Site
---|---|---
auth.timer_view | None | Can view Timerboard Timers
auth.timer_manage | None | Can Manage Timerboard timers
Community Contributions
Another key feature of Alliance Auth is that it can be easily extended. Our great community is providing a variety of plug-in apps and services, which you can choose from to add more functions to your AA installation.
Check out the Community Creations repo for more details.
Or if you have specific needs, you can always develop your own plugin-apps and services. Please see the Development chapter for details.
Maintenance
In the maintenance chapter, you will find details about where important log files are found, how you can customize your AA installation, and how to solve common issues.
App Maintenance
Adding Apps
Your auth project is just a regular Django project - you can add in other Django apps as desired. Most come with dedicated setup guides, but here is the general procedure:
add 'appname', to your INSTALLED_APPS setting in local.py
run python manage.py migrate
run python manage.py collectstatic --noinput
restart AA with supervisorctl restart myauth:
Removing Apps
The following instructions will explain how you can remove an app properly from your Alliance Auth installation.
Note
We recommend following these instructions to avoid dangling foreign keys or orphaned Python packages on your system, which might cause conflicts with other apps down the road.
Step 1 - Removing database tables
First, we want to remove the app related tables from the database.
Automatic table removal
Let’s first try the automatic approach by running the following command:
python manage.py migrate appname zero
If that works, you’ll get a confirmation message.
If that did not work and you got error messages, you will need to remove the tables manually. This is quite common, because many apps use sophisticated table setups which cannot be removed automatically by Django.
Manual table removal
First, tell Django that these migrations are no longer in effect (note the additional --fake
):
python manage.py migrate appname zero --fake
Then, open the mysql tool and connect to your Alliance Auth database:
sudo mysql -u root
use alliance_auth;
Next, disable foreign key check. This makes it much easier to drop tables in any order.
SET FOREIGN_KEY_CHECKS=0;
Then get a list of all tables. All tables belonging to the app in question will start with appname_
.
show tables;
Now, drop the tables from the app one by one like so:
drop table appname_model_1;
drop table appname_model_2;
...
And finally, but very importantly, re-enable foreign key checks again and then exit:
SET FOREIGN_KEY_CHECKS=1;
exit;
Step 2 - Remove the app from Alliance Auth
Once the tables have been removed, you can remove the app from Alliance Auth. This is done by removing the applabel from the INSTALLED_APPS
list in your local settings file.
Step 3 - Remove the Python package
Finally, we want to remove the app’s Python package. For that run the following command:
pip uninstall app-package-name
Congrats, you have now removed this app from your Alliance Auth installation.
Permission Cleanup
Mature Alliance Auth installations, or those with actively developed extensions, may find themselves with stale or duplicated Permission models.
This can make it confusing for admins to apply the right permissions, contribute to larger queries in backend management, or simply look unsightly.
python manage.py remove_stale_contenttypes --include-stale-apps
This inbuilt Django command will step through each contenttype and offer to delete it, displaying what exactly this will cascade to delete. Pay attention and ensure you understand exactly what is being removed before answering yes
.
This should only clean up uninstalled apps; deprecated permissions within apps should be cleaned up using data migrations by each responsible application.
Folder structure
When installing Alliance Auth, you are instructed to run the allianceauth start command, which generates a folder containing your auth project. This auth project is based on Alliance Auth and can be customized as you wish.
The myauth folder
The first folder created is the root directory of your auth project. This folder contains:
the manage.py management script used to interact with Django
a preconfigured supervisor.conf Supervisor config for running Celery (and optionally Gunicorn) automatically
a log folder which contains log files generated by Alliance Auth
The myauth subfolder
Within your auth project root folder is another folder of the same name (a quirk of Django project structures). This folder contains:
a Celery app definition in celery.py for registering tasks with the background workers
a web server gateway interface script wsgi.py for processing web requests
the root URL config urls.py which Django uses to direct requests to the appropriate view
There are also two subfolders for static
and templates
which allow adding new content and overriding default content shipped with Alliance Auth or Django.
And finally the settings folder.
Settings Files
Within the settings folder live two settings files: base.py and local.py
The base settings file contains everything needed to run Alliance Auth. It handles configuration of Django and Celery, defines logging, and many other Django-required settings. This file should not be edited. While updating Alliance Auth, you may be instructed to update the base settings file - this is achieved through the allianceauth update
command which overwrites the existing base settings file.
The local settings file is referred to as “your auth project’s settings file” and you are instructed to edit it during the installation process. You can add any additional settings required by other apps to this file. Upon creation the first line is from .base import *
meaning all settings defined in the base settings file are loaded. You can override any base setting by simply redefining it in your local settings file.
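A minimal sketch of what this can look like in practice (the DEBUG override below is purely illustrative; any setting from the base file can be overridden in the same way):
# myauth/settings/local.py
from .base import *

# Redefining a setting here overrides the value loaded from base.py
DEBUG = False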
Log Files
Your auth project comes with four log file definitions by default. These are created in the myauth/log/
folder at runtime.
allianceauth.log contains all INFO level and above logging messages from Alliance Auth. This is useful for tracking who is making changes to the site, what is happening to users, and debugging any errors that may occur.
worker.log contains logging messages from the Celery background task workers. This is useful for monitoring background processes such as group syncing to services.
beat.log contains logging messages from the background task scheduler. This is of limited use unless the scheduler isn't starting.
gunicorn.log contains logging messages from Gunicorn workers. This contains all web-sourced messages found in allianceauth.log as well as runtime errors from the workers themselves.
When asking for assistance with your auth project, be sure to first read the logs, and share any relevant entries.
Troubleshooting
Logging
In its default configuration, your auth project logs INFO and higher messages to myauth/log/allianceauth.log. If you're encountering issues, it's a good idea to view DEBUG messages, as these greatly assist the troubleshooting process. These are printed to the console when manually starting the webserver via python manage.py runserver.
To record DEBUG messages in the log file, alter a setting in your auth project’s settings file: LOGGING['handlers']['log_file']['level'] = 'DEBUG'
. After restarting gunicorn and celery, your log file will record all logging messages.
Common Problems
I’m getting error 500 when trying to connect to the website on a new installation
Great. Error 500 is the generic message given by your web server when anything breaks. The actual error message is hidden in one of your auth project’s log files. Read them to identify it.
Failed to configure log handler
Make sure the log directory is writeable by the allianceserver user: chown -R allianceserver:allianceserver /path/to/myauth/log/, then restart the auth supervisor processes.
Groups aren’t syncing to services
Make sure the background processes are running: supervisorctl status myauth:
. If myauth:worker
or myauth:beat
do not show RUNNING, read their log files to identify why.
Task queue is way too large
Stop celery workers with supervisorctl stop myauth:worker
then clear the queue:
redis-cli FLUSHALL
celery -A myauth worker --purge
Press Control+C once.
Now start the worker again with supervisorctl start myauth:worker
Proxy timeout when entering email address
This usually indicates an issue with your email settings. Ensure these are correct and your email server/service is properly configured.
No images are available to users accessing the website
This is likely due to a permission mismatch. Check the setup guide for your web server. Additionally ensure the user who owns /var/www/myauth/static
is the same user running your webserver, as this can be non-standard.
Unable to execute ‘gunicorn myauth.wsgi’ or ImportError: No module named ‘myauth.wsgi’
Gunicorn needs to have context for its running location; /home/allianceserver/myauth/gunicorn myauth.wsgi will not work. Instead, cd /home/allianceserver/myauth and then run gunicorn myauth.wsgi to boot Gunicorn. This is handled in the Supervisor config, but it may be encountered when running Gunicorn manually for testing.
Specified key was too long error
Migrations may abort with the following error message:
Specified key was too long; max key length is 767 bytes
This error will occur if you are trying to use MariaDB prior to 10.2.x, which is not compatible with Alliance Auth. To fix this issue, install a newer MariaDB version or another DBMS supported by Django 2.2.
Tuning
The official installation guide will install a stable version of Alliance Auth that will work fine for most cases. However, there are a lot of levers that can be used to optimize a system. For example, some installations may be short on RAM and want to reduce the total memory footprint, even though that may reduce system performance. Others are fine with further increasing the memory footprint to get better system performance.
Warning
Tuning usually has benefits and costs and should only be performed by experienced Linux administrators who understand the impact of tuning decisions on their system.
Gunicorn
Number of workers
The default installation will have 3 workers configured for Gunicorn. This will be fine on most systems, but if your system has more than one core, then you might want to increase the number of workers to get better response times. Note that more workers will also need more RAM, though.
The number you set this to will depend on your own server environment, how many visitors you have etc. Gunicorn suggests (2 x $num_cores) + 1
for the number of workers. So for example, if you have 2 cores, you want 2 x 2 + 1 = 5 workers. See here for the official discussion on this topic.
For example, to get 5 workers change the setting --workers=5
in your supervisor.conf
file and then reload the supervisor with the following command to activate the change (Ubuntu):
systemctl restart supervisor
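A sketch of what the relevant Gunicorn line in supervisor.conf could look like (the program name, path, and any additional flags depend on your installation):
[program:gunicorn]
command=/home/allianceserver/venv/auth/bin/gunicorn myauth.wsgi --workers=5
directory=/home/allianceserver/myauth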
Celery
Hint
Most tunings will require a change to your supervisor configuration in your supervisor.conf
file. Note that you need to restart the supervisor daemon in order for any changes to take effect. Before restarting the daemon, you may want to make sure your supervisors stop gracefully (Ubuntu):
supervisorctl stop myauth:
systemctl restart supervisor
Task Logging
By default, task logging is deactivated. Enabling task logging allows you to monitor what tasks are doing in addition to getting all warnings and error messages. To enable info logging for tasks, add the following to the command configuration of your worker in the supervisor.conf
file:
-l info
Full example:
command=/home/allianceserver/venv/auth/bin/celery -A myauth worker -l info
Protection against memory leaks
Celery workers often have memory leaks and will therefore grow in size over time. While the Alliance Auth team is working hard to ensure Auth is free of memory leaks, some may still be caused by bugs in different versions of libraries or community apps. It is therefore good practice to enable features that protect against potential memory leaks.
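One way to do this at the worker level (a sketch, assuming Celery's built-in --max-memory-per-child option, which takes a value in KiB; 262144 KiB is roughly 256 MB) is to extend the worker command in your supervisor.conf file:
command=/home/allianceserver/venv/auth/bin/celery -A myauth worker --max-memory-per-child 262144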
Hint
The 256 MB limit is just an example and should be adjusted to your system configuration. We would suggest to not go below 128MB though, since new workers start with around 80 MB already. Also take into consideration that this value is per worker and that you may have more than one worker running in your system.
Supervisor
It is also possible to configure your supervisor to monitor and automatically restart programs that exceed a memory threshold.
This is not a built-in feature and requires the 3rd party extension superlance, which includes a set of plugin utilities for supervisor. The one that watches memory consumption is memmon.
To install superlance into your venv, run:
pip install superlance
You can then add memmon
to your supervisor.conf
:
[eventlistener:memmon]
command=/home/allianceserver/venv/auth/bin/memmon -p worker=256MB
directory=/home/allianceserver/myauth
events=TICK_60
This setup will check the memory consumption of the program "worker" every 60 seconds and automatically restart it if it goes above 256 MB. Note that it will use the stop signal configured in supervisor, which is TERM
by default. TERM
will cause a “warm shutdown” of your worker, so all currently running tasks are completed before the restart.
Again, the 256 MB is just an example and should be adjusted to fit your system configuration.
Increasing task throughput
Celery tasks are designed to run concurrently, so one obvious way to increase task throughput is to run more tasks in parallel. The default celery worker configuration will allow either of these options to be configured out of the box.
Extra Worker Threads
The easiest way to increase throughput is to increase the numprocs parameter of the supervisor process. For example:
[program:worker]
...
numprocs=2
process_name=%(program_name)s_%(process_num)02d
...
This number will be multiplied by your concurrency setting. For example:
numprocs * concurrency = workers
Increasing this number will require a modification to the memmon settings, as each numprocs worker will get a unique name. For example, with numprocs=3:
[eventlistener:memmon]
...
command=... -p worker_00=256MB -p worker_01=256MB -p worker_02=256MB
...
Hint
You will want to experiment with different settings to find the optimal configuration. One way to generate some task load and verify your configuration is to run a model update with the following command:
celery -A myauth call allianceauth.eveonline.tasks.run_model_update
Concurrency
This can be achieved by setting the concurrency parameter of the celery worker to a higher number. For example:
--concurrency=10
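In a supervisor-managed install, this could look like the following (a sketch based on the worker command shown earlier; the path may differ on your system):
command=/home/allianceserver/venv/auth/bin/celery -A myauth worker --concurrency=10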
Hint
The optimal number will hugely depend on your individual system configuration, and you may want to experiment with different settings to find what works best. One way to generate some task load and verify your configuration is to run a model update with the following command:
celery -A myauth call allianceauth.eveonline.tasks.run_model_update
Hint
The optimal number of concurrent workers will be different for every system, and we recommend experimenting with different figures to find what works best for your system. Note that the example of 10 threads is conservative and should work even with smaller systems.
Redis
Compression
Cache compression can help tame the memory usage of specialised installation configurations and Community Apps that heavily utilize Redis, in exchange for increased CPU utilization.
Several compression algorithms are supported, each with different strengths. Our testing has shown that lzma works best for our use cases. You can read more at Django-Redis Compression Support.
Add the following lines to local.py
import lzma
CACHES = {
    "default": {
        # ...
        "OPTIONS": {
            "COMPRESSOR": "django_redis.compressors.lzma.LzmaCompressor",
        }
    }
}
Python
Version Update
Newer versions of Python often focus heavily on performance improvements, some more than others. Be aware, however, that regressions for stability or security reasons are always possible.
As a general rule, Python 3.9 and Python 3.11 both had a strong focus on performance improvements. Python 3.12 is looking promising but has yet to have widespread testing, adoption and deployment. A simple comparison is available at speed.python.org.
Djangobench is the source of synthetic benchmarks and a useful tool for running comparisons. Below are some examples to inform your investigations.
Keep in mind that while a 1.2x faster result is significant, it is only one step of the process; Celery, SQL, Redis, and many other factors will influence the end result, so this Python speed improvement will not translate 1:1 into real-world performance.
Django 4.0.10
Djangobench 0.10.0
Django 4.0.10
Python 3.8.18 vs Python 3.11.5
joel@METABOX:~/djangobench/django$ djangobench --vcs=none --control=. --experiment=. --control-python=/home/joel/djangobench/py38/bin/python --experiment-python=/home/joel/djangobench/py311/bin/python -r /home/joel/djangobench/results -t 500
Running all benchmarks
Recording data to '/home/joel/djangobench/results'
Control: Django 4.0.10 (in .)
Experiment: Django 4.0.10 (in .)
Running 'multi_value_dict' benchmark ...
Min: 0.000014 -> 0.000013: 1.0304x faster
Avg: 0.000183 -> 0.000133: 1.3698x faster
Significant (t=9.325958)
Stddev: 0.00010 -> 0.00007: 1.3791x smaller (N = 500)
Running 'query_values' benchmark ...
Min: 0.000079 -> 0.000070: 1.1308x faster
Avg: 0.000084 -> 0.000074: 1.1267x faster
Significant (t=19.174361)
Stddev: 0.00001 -> 0.00001: 1.0255x larger (N = 500)
Running 'query_delete' benchmark ...
Min: 0.000082 -> 0.000074: 1.1145x faster
Avg: 0.000086 -> 0.000078: 1.0987x faster
Significant (t=17.504085)
Stddev: 0.00001 -> 0.00001: 1.1888x smaller (N = 500)
Running 'query_select_related' benchmark ...
Min: 0.016771 -> 0.013520: 1.2405x faster
Avg: 0.017897 -> 0.014149: 1.2649x faster
Significant (t=40.942990)
Stddev: 0.00190 -> 0.00077: 2.4535x smaller (N = 500)
Running 'query_aggregate' benchmark ...
Min: 0.000092 -> 0.000083: 1.1105x faster
Avg: 0.000100 -> 0.000090: 1.1107x faster
Significant (t=9.967204)
Stddev: 0.00002 -> 0.00001: 1.5003x smaller (N = 500)
Running 'query_raw_deferred' benchmark ...
Min: 0.004157 -> 0.003563: 1.1666x faster
Avg: 0.004626 -> 0.003809: 1.2143x faster
Significant (t=12.325104)
Stddev: 0.00121 -> 0.00086: 1.4047x smaller (N = 500)
Running 'query_get_or_create' benchmark ...
Min: 0.000412 -> 0.000362: 1.1385x faster
Avg: 0.000458 -> 0.000407: 1.1259x faster
Significant (t=14.169322)
Stddev: 0.00006 -> 0.00005: 1.1306x smaller (N = 500)
Running 'query_values_list' benchmark ...
Min: 0.000080 -> 0.000071: 1.1231x faster
Avg: 0.000089 -> 0.000076: 1.1706x faster
Significant (t=18.298942)
Stddev: 0.00001 -> 0.00001: 1.9398x smaller (N = 500)
Running 'url_resolve_flat_i18n_off' benchmark ...
Min: 0.055764 -> 0.045370: 1.2291x faster
Avg: 0.057670 -> 0.047020: 1.2265x faster
Significant (t=111.187780)
Stddev: 0.00206 -> 0.00059: 3.4618x smaller (N = 500)
Running 'qs_filter_chaining' benchmark ...
Min: 0.000236 -> 0.000196: 1.2034x faster
Avg: 0.000248 -> 0.000206: 1.2041x faster
Significant (t=44.893544)
Stddev: 0.00002 -> 0.00001: 1.0833x smaller (N = 500)
Running 'template_render' benchmark ...
Min: 0.000933 -> 0.000712: 1.3110x faster
Avg: 0.001003 -> 0.000777: 1.2909x faster
Significant (t=8.379095)
Stddev: 0.00043 -> 0.00042: 1.0287x smaller (N = 500)
Running 'query_get' benchmark ...
Min: 0.000259 -> 0.000230: 1.1259x faster
Avg: 0.000282 -> 0.000238: 1.1829x faster
Significant (t=42.267305)
Stddev: 0.00002 -> 0.00001: 1.7842x smaller (N = 500)
Running 'query_none' benchmark ...
Min: 0.000053 -> 0.000045: 1.1830x faster
Avg: 0.000056 -> 0.000049: 1.1449x faster
Significant (t=16.426843)
Stddev: 0.00001 -> 0.00001: 1.1267x larger (N = 500)
Running 'query_complex_filter' benchmark ...
Min: 0.000039 -> 0.000034: 1.1527x faster
Avg: 0.000041 -> 0.000037: 1.1091x faster
Significant (t=13.582718)
Stddev: 0.00000 -> 0.00001: 1.5373x larger (N = 500)
Running 'query_filter' benchmark ...
Min: 0.000127 -> 0.000112: 1.1288x faster
Avg: 0.000133 -> 0.000119: 1.1228x faster
Significant (t=22.727829)
Stddev: 0.00001 -> 0.00001: 1.1771x smaller (N = 500)
Running 'template_render_simple' benchmark ...
Min: 0.000030 -> 0.000024: 1.2405x faster
Avg: 0.000035 -> 0.000029: 1.2042x faster
Not significant
Stddev: 0.00007 -> 0.00005: 1.3190x smaller (N = 500)
Running 'default_middleware' benchmark ...
Min: -0.000047 -> -0.000054: 0.8624x faster
Avg: 0.000017 -> 0.000017: 1.0032x faster
Not significant
Stddev: 0.00037 -> 0.00037: 1.0091x larger (N = 500)
Running 'query_annotate' benchmark ...
Min: 0.000186 -> 0.000162: 1.1505x faster
Avg: 0.000207 -> 0.000178: 1.1660x faster
Significant (t=16.516089)
Stddev: 0.00003 -> 0.00003: 1.1403x smaller (N = 500)
Running 'raw_sql' benchmark ...
Min: 0.000015 -> 0.000013: 1.1070x faster
Avg: 0.000017 -> 0.000014: 1.1676x faster
Significant (t=13.598519)
Stddev: 0.00000 -> 0.00000: 2.3503x smaller (N = 500)
Running 'url_resolve_flat' benchmark ...
Min: 0.056378 -> 0.044772: 1.2592x faster
Avg: 0.058268 -> 0.046656: 1.2489x faster
Significant (t=197.176590)
Stddev: 0.00121 -> 0.00051: 2.3665x smaller (N = 500)
Running 'l10n_render' benchmark ...
Min: 0.001097 -> 0.000727: 1.5092x faster
Avg: 0.001160 -> 0.000768: 1.5101x faster
Significant (t=36.971179)
Stddev: 0.00019 -> 0.00014: 1.2946x smaller (N = 500)
Running 'query_count' benchmark ...
Min: 0.000083 -> 0.000073: 1.1302x faster
Avg: 0.000091 -> 0.000079: 1.1640x faster
Significant (t=15.049336)
Stddev: 0.00002 -> 0.00001: 1.6661x smaller (N = 500)
Running 'model_delete' benchmark ...
Min: 0.000123 -> 0.000105: 1.1701x faster
Avg: 0.000135 -> 0.000119: 1.1396x faster
Significant (t=17.781816)
Stddev: 0.00001 -> 0.00002: 1.1990x larger (N = 500)
Running 'query_iterator' benchmark ...
Min: 0.000102 -> 0.000088: 1.1605x faster
Avg: 0.000108 -> 0.000093: 1.1598x faster
Significant (t=23.872009)
Stddev: 0.00001 -> 0.00001: 1.1366x smaller (N = 500)
Running 'template_compilation' benchmark ...
Min: 0.000155 -> 0.000129: 1.2015x faster
Avg: 0.000169 -> 0.000137: 1.2317x faster
Significant (t=6.119618)
Stddev: 0.00009 -> 0.00007: 1.4162x smaller (N = 500)
Running 'query_all_multifield' benchmark ...
Min: 0.014582 -> 0.012509: 1.1658x faster
Avg: 0.015715 -> 0.013337: 1.1783x faster
Significant (t=19.183517)
Stddev: 0.00207 -> 0.00184: 1.1241x smaller (N = 500)
Running 'query_prefetch_related' benchmark ...
Min: 0.014293 -> 0.012157: 1.1758x faster
Avg: 0.015467 -> 0.013276: 1.1650x faster
Significant (t=20.607411)
Stddev: 0.00176 -> 0.00160: 1.0952x smaller (N = 500)
Running 'query_all_converters' benchmark ...
Min: 0.000536 -> 0.000464: 1.1554x faster
Avg: 0.000563 -> 0.000486: 1.1595x faster
Significant (t=38.503433)
Stddev: 0.00004 -> 0.00002: 1.6468x smaller (N = 500)
Running 'query_distinct' benchmark ...
Min: 0.000106 -> 0.000092: 1.1583x faster
Avg: 0.000127 -> 0.000096: 1.3223x faster
Significant (t=27.798102)
Stddev: 0.00002 -> 0.00001: 3.7187x smaller (N = 500)
Running 'query_dates' benchmark ...
Min: 0.000249 -> 0.000209: 1.1953x faster
Avg: 0.000275 -> 0.000228: 1.2056x faster
Significant (t=30.785168)
Stddev: 0.00003 -> 0.00002: 1.0854x smaller (N = 500)
Running 'model_save_existing' benchmark ...
Min: 0.003526 -> 0.003094: 1.1397x faster
Avg: 0.003723 -> 0.003212: 1.1591x faster
Significant (t=47.274918)
Stddev: 0.00018 -> 0.00016: 1.1817x smaller (N = 500)
Running 'query_delete_related' benchmark ...
Min: 0.000120 -> 0.000103: 1.1655x faster
Avg: 0.000132 -> 0.000111: 1.1815x faster
Significant (t=6.428771)
Stddev: 0.00005 -> 0.00004: 1.2149x smaller (N = 500)
Running 'url_reverse' benchmark ...
Min: 0.000062 -> 0.000060: 1.0318x faster
Avg: 0.000072 -> 0.000068: 1.0622x faster
Not significant
Stddev: 0.00006 -> 0.00005: 1.0531x smaller (N = 500)
Running 'query_latest' benchmark ...
Min: 0.000136 -> 0.000118: 1.1454x faster
Avg: 0.000155 -> 0.000129: 1.2008x faster
Significant (t=8.372115)
Stddev: 0.00007 -> 0.00001: 5.1365x smaller (N = 500)
Running 'form_create' benchmark ...
Min: 0.000015 -> 0.000013: 1.2319x faster
Avg: 0.000019 -> 0.000015: 1.2739x faster
Significant (t=4.158080)
Stddev: 0.00002 -> 0.00001: 1.1449x smaller (N = 500)
Running 'query_update' benchmark ...
Min: 0.000047 -> 0.000041: 1.1323x faster
Avg: 0.000052 -> 0.000044: 1.1721x faster
Significant (t=18.470635)
Stddev: 0.00001 -> 0.00000: 1.6104x smaller (N = 500)
Running 'query_in_bulk' benchmark ...
Min: 0.000152 -> 0.000136: 1.1193x faster
Avg: 0.000173 -> 0.000147: 1.1735x faster
Significant (t=16.901845)
Stddev: 0.00003 -> 0.00001: 2.1199x smaller (N = 500)
Running 'url_resolve_nested' benchmark ...
Min: 0.000043 -> 0.000034: 1.2871x faster
Avg: 0.000075 -> 0.000047: 1.6049x faster
Not significant
Stddev: 0.00066 -> 0.00023: 2.8387x smaller (N = 500)
Running 'model_creation' benchmark ...
Min: 0.000077 -> 0.000066: 1.1579x faster
Avg: 0.000088 -> 0.000072: 1.2205x faster
Significant (t=10.514202)
Stddev: 0.00003 -> 0.00001: 3.1410x smaller (N = 500)
Running 'query_order_by' benchmark ...
Min: 0.000135 -> 0.000124: 1.0945x faster
Avg: 0.000145 -> 0.000133: 1.0902x faster
Significant (t=13.574502)
Stddev: 0.00001 -> 0.00001: 1.1586x smaller (N = 500)
Running 'startup' benchmark ...
Skipped: Django 1.9 and later has changed app loading. This benchmark needs fixing anyway.
Running 'form_clean' benchmark ...
Min: 0.000005 -> 0.000003: 1.4696x faster
Avg: 0.000006 -> 0.000004: 1.4931x faster
Significant (t=11.263253)
Stddev: 0.00000 -> 0.00000: 2.2571x smaller (N = 500)
Running 'locale_from_request' benchmark ...
Min: 0.000076 -> 0.000082: 1.0895x slower
Avg: 0.000083 -> 0.000090: 1.0877x slower
Not significant
Stddev: 0.00009 -> 0.00006: 1.6230x smaller (N = 500)
Running 'query_exists' benchmark ...
Min: 0.000243 -> 0.000214: 1.1399x faster
Avg: 0.000262 -> 0.000227: 1.1571x faster
Significant (t=27.797738)
Stddev: 0.00002 -> 0.00002: 1.2601x smaller (N = 500)
Running 'query_values_10000' benchmark ...
Min: 0.005755 -> 0.005269: 1.0923x faster
Avg: 0.006184 -> 0.005587: 1.1067x faster
Significant (t=10.895954)
Stddev: 0.00094 -> 0.00079: 1.1902x smaller (N = 500)
Running 'query_exclude' benchmark ...
Min: 0.000159 -> 0.000141: 1.1256x faster
Avg: 0.000177 -> 0.000151: 1.1741x faster
Significant (t=23.556200)
Stddev: 0.00002 -> 0.00001: 1.8250x smaller (N = 500)
Running 'query_raw' benchmark ...
Min: 0.005619 -> 0.004860: 1.1562x faster
Avg: 0.006181 -> 0.005041: 1.2263x faster
Significant (t=18.008590)
Stddev: 0.00121 -> 0.00074: 1.6376x smaller (N = 500)
Running 'url_resolve' benchmark ...
Min: 0.004666 -> 0.004233: 1.1023x faster
Avg: 0.004920 -> 0.004347: 1.1318x faster
Significant (t=24.865249)
Stddev: 0.00049 -> 0.00016: 3.1507x smaller (N = 500)
Running 'model_save_new' benchmark ...
Min: 0.003420 -> 0.003105: 1.1014x faster
Avg: 0.003610 -> 0.003217: 1.1221x faster
Significant (t=42.956103)
Stddev: 0.00017 -> 0.00011: 1.6304x smaller (N = 500)
Running 'query_all' benchmark ...
Min: 0.008101 -> 0.007077: 1.1447x faster
Avg: 0.009006 -> 0.007936: 1.1348x faster
Significant (t=9.981534)
Stddev: 0.00171 -> 0.00168: 1.0215x smaller (N = 500)
Django 4.2.6
Djangobench 0.10.0
Django 4.2.6
Python 3.8.18 vs Python 3.11.5
joel@METABOX:~/djangobench/django$ djangobench --vcs=none --control=. --experiment=. --control-python=/home/joel/djangobench/py38/bin/python --experiment-python=/home/joel/djangobench/py311/bin/python -r /home/joel/djangobench/results -t 500
Running all benchmarks
Recording data to '/home/joel/djangobench/results'
Control: Django 4.2.6 (in .)
Experiment: Django 4.2.6 (in .)
Running 'multi_value_dict' benchmark ...
Min: -0.000004 -> 0.000013: -3.0336x slower
Avg: 0.000182 -> 0.000133: 1.3680x faster
Significant (t=9.151616)
Stddev: 0.00010 -> 0.00007: 1.3826x smaller (N = 500)
Running 'query_values' benchmark ...
Min: 0.000082 -> 0.000072: 1.1485x faster
Avg: 0.000086 -> 0.000075: 1.1462x faster
Significant (t=30.114973)
Stddev: 0.00001 -> 0.00001: 1.0258x larger (N = 500)
Running 'query_delete' benchmark ...
Min: 0.000080 -> 0.000071: 1.1169x faster
Avg: 0.000086 -> 0.000077: 1.1088x faster
Significant (t=13.459411)
Stddev: 0.00001 -> 0.00001: 1.0008x smaller (N = 500)
Running 'query_select_related' benchmark ...
Min: 0.016889 -> 0.013513: 1.2498x faster
Avg: 0.018370 -> 0.013885: 1.3230x faster
Significant (t=48.921967)
Stddev: 0.00196 -> 0.00061: 3.2174x smaller (N = 500)
Running 'query_aggregate' benchmark ...
Min: 0.000167 -> 0.000153: 1.0904x faster
Avg: 0.000182 -> 0.000165: 1.1029x faster
Significant (t=12.685517)
Stddev: 0.00002 -> 0.00002: 1.3019x smaller (N = 500)
Running 'query_raw_deferred' benchmark ...
Min: 0.004160 -> 0.003674: 1.1323x faster
Avg: 0.004596 -> 0.003888: 1.1820x faster
Significant (t=11.504156)
Stddev: 0.00117 -> 0.00073: 1.5957x smaller (N = 500)
Running 'query_get_or_create' benchmark ...
Min: 0.000421 -> 0.000356: 1.1823x faster
Avg: 0.000470 -> 0.000392: 1.2011x faster
Significant (t=14.613017)
Stddev: 0.00008 -> 0.00009: 1.0954x larger (N = 500)
Running 'query_values_list' benchmark ...
Min: 0.000080 -> 0.000070: 1.1395x faster
Avg: 0.000085 -> 0.000075: 1.1202x faster
Significant (t=20.300988)
Stddev: 0.00001 -> 0.00001: 1.0537x smaller (N = 500)
Running 'url_resolve_flat_i18n_off' benchmark ...
Min: 0.056031 -> 0.045854: 1.2219x faster
Avg: 0.057048 -> 0.048370: 1.1794x faster
Significant (t=106.668460)
Stddev: 0.00117 -> 0.00139: 1.1819x larger (N = 500)
Running 'qs_filter_chaining' benchmark ...
Min: 0.000247 -> 0.000205: 1.2080x faster
Avg: 0.000267 -> 0.000219: 1.2211x faster
Significant (t=38.507950)
Stddev: 0.00002 -> 0.00002: 1.0252x larger (N = 500)
Running 'template_render' benchmark ...
Min: 0.000956 -> 0.000761: 1.2550x faster
Avg: 0.001061 -> 0.000862: 1.2302x faster
Significant (t=6.128572)
Stddev: 0.00052 -> 0.00051: 1.0109x smaller (N = 500)
Running 'query_get' benchmark ...
Min: 0.000268 -> 0.000235: 1.1388x faster
Avg: 0.000293 -> 0.000256: 1.1411x faster
Significant (t=24.002331)
Stddev: 0.00002 -> 0.00003: 1.2917x larger (N = 500)
Running 'query_none' benchmark ...
Min: 0.000055 -> 0.000050: 1.1079x faster
Avg: 0.000061 -> 0.000055: 1.1183x faster
Significant (t=3.149707)
Stddev: 0.00003 -> 0.00004: 1.3162x larger (N = 500)
Running 'query_complex_filter' benchmark ...
Min: 0.000040 -> 0.000034: 1.1777x faster
Avg: 0.000042 -> 0.000038: 1.1267x faster
Significant (t=15.246074)
Stddev: 0.00000 -> 0.00001: 1.5250x larger (N = 500)
Running 'query_filter' benchmark ...
Min: 0.000131 -> 0.000116: 1.1288x faster
Avg: 0.000139 -> 0.000127: 1.0907x faster
Significant (t=14.448319)
Stddev: 0.00001 -> 0.00001: 1.2281x larger (N = 500)
Running 'template_render_simple' benchmark ...
Min: 0.000031 -> 0.000024: 1.2650x faster
Avg: 0.000037 -> 0.000029: 1.2895x faster
Significant (t=2.094800)
Stddev: 0.00007 -> 0.00005: 1.3630x smaller (N = 500)
Running 'default_middleware' benchmark ...
Min: -0.000037 -> -0.000060: 0.6180x faster
Avg: 0.000001 -> 0.000001: 1.0056x slower
Not significant
Stddev: 0.00002 -> 0.00002: 1.0915x smaller (N = 500)
Running 'query_annotate' benchmark ...
Min: 0.000192 -> 0.000173: 1.1122x faster
Avg: 0.000206 -> 0.000185: 1.1134x faster
Significant (t=17.849733)
Stddev: 0.00002 -> 0.00002: 1.0456x smaller (N = 500)
Running 'raw_sql' benchmark ...
Min: 0.000013 -> 0.000012: 1.0839x faster
Avg: 0.000015 -> 0.000014: 1.0882x faster
Significant (t=4.252084)
Stddev: 0.00001 -> 0.00000: 1.5868x smaller (N = 500)
Running 'url_resolve_flat' benchmark ...
Min: 0.055540 -> 0.046018: 1.2069x faster
Avg: 0.058030 -> 0.048408: 1.1988x faster
Significant (t=98.852976)
Stddev: 0.00157 -> 0.00151: 1.0444x smaller (N = 500)
Running 'l10n_render' benchmark ...
Min: 0.001604 -> 0.001253: 1.2797x faster
Avg: 0.001684 -> 0.001304: 1.2918x faster
Significant (t=37.535402)
Stddev: 0.00017 -> 0.00015: 1.1476x smaller (N = 500)
Running 'query_count' benchmark ...
Min: 0.000176 -> 0.000165: 1.0631x faster
Avg: 0.000189 -> 0.000176: 1.0755x faster
Significant (t=12.229046)
Stddev: 0.00002 -> 0.00002: 1.0395x larger (N = 500)
Running 'model_delete' benchmark ...
Min: 0.000122 -> 0.000104: 1.1743x faster
Avg: 0.000152 -> 0.000115: 1.3227x faster
Significant (t=19.812953)
Stddev: 0.00004 -> 0.00001: 2.6554x smaller (N = 500)
Running 'query_iterator' benchmark ...
Min: 0.000108 -> 0.000094: 1.1518x faster
Avg: 0.000119 -> 0.000098: 1.2203x faster
Significant (t=21.984884)
Stddev: 0.00002 -> 0.00001: 2.7750x smaller (N = 500)
Running 'template_compilation' benchmark ...
Min: 0.000164 -> 0.000148: 1.1034x faster
Avg: 0.000184 -> 0.000162: 1.1386x faster
Significant (t=4.665298)
Stddev: 0.00008 -> 0.00007: 1.2952x smaller (N = 500)
Running 'query_all_multifield' benchmark ...
Min: 0.014802 -> 0.012188: 1.2144x faster
Avg: 0.016029 -> 0.013294: 1.2057x faster
Significant (t=21.516971)
Stddev: 0.00210 -> 0.00191: 1.0984x smaller (N = 500)
Running 'query_prefetch_related' benchmark ...
Min: 0.013401 -> 0.011583: 1.1569x faster
Avg: 0.014822 -> 0.013366: 1.1090x faster
Significant (t=11.422100)
Stddev: 0.00170 -> 0.00229: 1.3410x larger (N = 500)
Running 'query_all_converters' benchmark ...
Min: 0.000499 -> 0.000429: 1.1618x faster
Avg: 0.000531 -> 0.000455: 1.1670x faster
Significant (t=42.716720)
Stddev: 0.00002 -> 0.00003: 1.5394x larger (N = 500)
Running 'query_distinct' benchmark ...
Min: 0.000108 -> 0.000095: 1.1397x faster
Avg: 0.000116 -> 0.000100: 1.1646x faster
Significant (t=25.915629)
Stddev: 0.00001 -> 0.00001: 1.1204x larger (N = 500)
Running 'query_dates' benchmark ...
Min: 0.000333 -> 0.000290: 1.1490x faster
Avg: 0.000365 -> 0.000326: 1.1207x faster
Significant (t=18.213858)
Stddev: 0.00003 -> 0.00003: 1.0118x larger (N = 500)
Running 'model_save_existing' benchmark ...
Min: 0.003455 -> 0.003081: 1.1215x faster
Avg: 0.003764 -> 0.003326: 1.1316x faster
Significant (t=32.229651)
Stddev: 0.00023 -> 0.00020: 1.1398x smaller (N = 500)
Running 'query_delete_related' benchmark ...
Min: 0.000122 -> 0.000102: 1.1946x faster
Avg: 0.000131 -> 0.000113: 1.1564x faster
Significant (t=5.027485)
Stddev: 0.00005 -> 0.00006: 1.4129x larger (N = 500)
Running 'url_reverse' benchmark ...
Min: 0.000068 -> 0.000067: 1.0193x faster
Avg: 0.000075 -> 0.000074: 1.0157x faster
Not significant
Stddev: 0.00006 -> 0.00005: 1.1543x smaller (N = 500)
Running 'query_latest' benchmark ...
Min: 0.000147 -> 0.000138: 1.0631x faster
Avg: 0.000167 -> 0.000148: 1.1277x faster
Significant (t=11.353029)
Stddev: 0.00003 -> 0.00002: 1.6091x smaller (N = 500)
Running 'form_create' benchmark ...
Min: 0.000016 -> 0.000013: 1.2659x faster
Avg: 0.000020 -> 0.000015: 1.2770x faster
Significant (t=3.482649)
Stddev: 0.00002 -> 0.00002: 1.0947x larger (N = 500)
Running 'query_update' benchmark ...
Min: 0.000047 -> 0.000043: 1.0971x faster
Avg: 0.000050 -> 0.000046: 1.0691x faster
Significant (t=9.363513)
Stddev: 0.00001 -> 0.00000: 1.2636x smaller (N = 500)
Running 'query_in_bulk' benchmark ...
Min: 0.000157 -> 0.000143: 1.0970x faster
Avg: 0.000178 -> 0.000162: 1.0981x faster
Significant (t=9.031182)
Stddev: 0.00002 -> 0.00003: 1.5173x larger (N = 500)
Running 'url_resolve_nested' benchmark ...
Min: 0.000046 -> 0.000038: 1.2179x faster
Avg: 0.000075 -> 0.000052: 1.4505x faster
Not significant
Stddev: 0.00059 -> 0.00024: 2.4300x smaller (N = 500)
Running 'model_creation' benchmark ...
Min: 0.000071 -> 0.000065: 1.1058x faster
Avg: 0.000079 -> 0.000073: 1.0876x faster
Significant (t=2.786580)
Stddev: 0.00003 -> 0.00004: 1.1518x larger (N = 500)
Running 'query_order_by' benchmark ...
Min: 0.000146 -> 0.000128: 1.1407x faster
Avg: 0.000154 -> 0.000138: 1.1206x faster
Significant (t=14.021341)
Stddev: 0.00002 -> 0.00002: 1.2540x larger (N = 500)
Running 'startup' benchmark ...
Skipped: Django 1.9 and later has changed app loading. This benchmark needs fixing anyway.
Running 'form_clean' benchmark ...
Min: 0.000005 -> 0.000004: 1.4613x faster
Avg: 0.000006 -> 0.000004: 1.3654x faster
Significant (t=12.763128)
Stddev: 0.00000 -> 0.00000: 1.1666x larger (N = 500)
Running 'locale_from_request' benchmark ...
Min: 0.000097 -> 0.000108: 1.1090x slower
Avg: 0.000108 -> 0.000120: 1.1178x slower
Significant (t=-3.057677)
Stddev: 0.00007 -> 0.00006: 1.1186x smaller (N = 500)
Running 'query_exists' benchmark ...
Min: 0.000273 -> 0.000234: 1.1698x faster
Avg: 0.000290 -> 0.000248: 1.1686x faster
Significant (t=39.518859)
Stddev: 0.00002 -> 0.00002: 1.2025x smaller (N = 500)
Running 'query_values_10000' benchmark ...
Min: 0.005601 -> 0.005298: 1.0571x faster
Avg: 0.006023 -> 0.005691: 1.0583x faster
Significant (t=6.167352)
Stddev: 0.00082 -> 0.00088: 1.0752x larger (N = 500)
Running 'query_exclude' benchmark ...
Min: 0.000159 -> 0.000140: 1.1367x faster
Avg: 0.000165 -> 0.000149: 1.1020x faster
Significant (t=19.643154)
Stddev: 0.00001 -> 0.00001: 1.2636x larger (N = 500)
Running 'query_raw' benchmark ...
Min: 0.005764 -> 0.004630: 1.2450x faster
Avg: 0.006169 -> 0.004881: 1.2638x faster
Significant (t=20.996453)
Stddev: 0.00109 -> 0.00083: 1.3105x smaller (N = 500)
Running 'url_resolve' benchmark ...
Min: 0.004928 -> 0.004597: 1.0721x faster
Avg: 0.005217 -> 0.004716: 1.1063x faster
Significant (t=46.893945)
Stddev: 0.00022 -> 0.00010: 2.2192x smaller (N = 500)
Running 'model_save_new' benchmark ...
Min: 0.003404 -> 0.003012: 1.1301x faster
Avg: 0.003494 -> 0.003105: 1.1251x faster
Significant (t=45.888484)
Stddev: 0.00014 -> 0.00013: 1.0298x smaller (N = 500)
Running 'query_all' benchmark ...
Min: 0.007971 -> 0.007085: 1.1250x faster
Avg: 0.009091 -> 0.008147: 1.1159x faster
Significant (t=7.583074)
Stddev: 0.00183 -> 0.00210: 1.1518x larger (N = 500)
SQL
SQL tuning is usually the realm of experienced database admins, as it can be full of missteps leading to worse performance. It is extremely important that you take it slowly, make one change at a time with dedicated research, and test before and after.
Before you start down this path, it’s best to update MariaDB / MySQL. Performance Schemas, some default tuning, and other general performance improvements are only available in newer versions. You must also allow your server to run for at least 24 hours to gather accurate data.
MySQLTuner
MySQLTuner is a Perl script that will analyze inbuilt metrics and provide recommendations.
performance_schema
This should be ON for at least 24 hours before applying any recommendations; it can then be turned OFF to save memory while it is not needed.
-------- Performance schema ------------------------------------------------------------------------
[--] Performance_schema is activated.
[--] Memory used by Performance_schema: 105.7M
[--] Sys schema is installed.
# 105M could be significant depending on your hardware
SQL Variables
skip-name-resolve
By default, SQL performs a DNS lookup for every connection; if your database is connected via IP address, enabling this option skips the lookup and saves you a handful of CPU cycles per connection.
Note that you must then use 127.0.0.1 for localhost connections, and all entries in the GRANT tables (permissions) must use IP addresses.
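Server variables such as this one (and those discussed below) are usually set in the MariaDB server configuration and require a restart of the database service to take effect. A minimal sketch, assuming a Debian/Ubuntu-style configuration path; the values shown are purely illustrative:
# /etc/mysql/mariadb.conf.d/50-server.cnf (path varies by distribution)
[mysqld]
skip-name-resolve
table_definition_cache = 600
innodb_buffer_pool_size = 512M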
table_definition_cache
This usually needs to be expanded on installations with many extensions.
Most installs should cache all their tables, but if your hit rate is still quite high, you may have a lot of rarely used tables that you don’t need to waste memory caching.
[OK] Table cache hit rate: 63% (2K hits / 4K requests).
# Only 63% of our queries are using this cache
table_definition_cache (400) > 567 or -1 (autosizing if supported)
# Here We have 567 tables, but a default cache of only 400.
[OK] Table cache hit rate: 99% (372M hits / 372M requests)
[OK] table_definition_cache (600) is greater than number of tables (567)
# Much better
innodb_buffer_pool_size
This is, in short, the amount of memory assigned to store data for faster reads. If you are memory starved, you should not increase this variable regardless of the suggestions of this tool. Pushing SQL cache to pagefile will not result in faster queries.
If you are not memory starved, you can wind this up to the total amount of data you have, to store it all in memory. This would be a significant performance increase for larger installations on dedicated hardware with memory to spare.
[!!] InnoDB buffer pool / data size: 128.0M / 651.6M
# I have 651mb of _possible_ data to cache.
# We won't and shouldn't cache it all unless we have excess memory to spare.
...
[!!] InnoDB Read buffer efficiency: 0% (-2019 hits / 0 total)
# A low ratio here would suggest more Memory can be used effectively.
# A high ratio might mean we have most of our regularly used data cached already
innodb_log_buffer_size / innodb_log_file_size
innodb_log_file_size This is your write log, used to redo any commits in the event of a crash. MySQLTuner recommends this be 1/4 of your innodb_buffer_pool / read buffer. I would not lower this past the default size.
innodb_log_buffer_size This is the memory buffer for the “write” log. Larger transactions will benefit from a larger setting.
[!!] Ratio InnoDB log file size / InnoDB Buffer pool size (75%): 96.0M * 1 / 128.0M should be equal to 25%
# Here our write log file is 75% of our read buffer, but 96MB is the default so we probably won't shrink it further
...
[OK] InnoDB Write Log efficiency: 99.11% (23167614 hits / 23375465 total) # Our transactions are typically not large enough to exhaust the write log buffer,
[OK] InnoDB log waits: 0.00% (0 waits / 207851 writes)
innodb_file_per_table
This is not for performance, but for file system utilization and ease of use. While off, all tables are stored in a single monolithic file, as opposed to individual files. This setting is deprecated and set to ON in MariaDB 11.x.
[OK] InnoDB File per table is activated
join_buffer_size
It is always better to optimize a table with indexes. If you have valuable performance data and analysis, please reach out to the Alliance Auth or community developer responsible for the data that could benefit from indexes. MySQLTuner will likely recommend increasing this number for as long as there are any queries that could benefit, regardless of their resulting performance impact.
Also keep in mind this is per thread: with a potential 200 connections, 256 KB * 200 = 50 MB, so scaling this setting out too far can result in more memory use than expected.
[!!] Joins performed without indexes: 67646
[OK] No joins without indexes
# An ideal scenario. With well designed apps this is possible with a default join buffer.
tmp_table_size / max_heap_table_size
If a temporary table must be created due to a lack of other optimizations or large queries, it may only be stored in memory if it is under this size. Any larger and it is written to disk, reducing performance.
tmp_table_size and max_heap_table_size should be increased together.
[!!] Temporary tables created on disk: 32% (775 on disk / 2K total)
# 32% of my temp tables are performed on disk. If you increase this size, monitor if your performance improves.
# If it does not you may have data of a certain size that is impractical to cache, and you can reclaim this memory.
[OK] Temporary tables created on disk: 0% (5K on disk / 4M total)
# Here only a miniscule amount of my temp tables are done on disk. No action required
key_buffer_size
Index buffer for MyISAM tables. If you use little or no data in MyISAM tables, you may reclaim some memory here.
In this example, we still have some MyISAM tables; you may have none.
[--] General MyIsam metrics:
[--] +-- Total MyISAM Tables : 67
[--] +-- Total MyISAM indexes : 7.1M
[--] +-- KB Size :8.0M
[--] +-- KB Used Size :1.5M
[--] +-- KB used :18.3%
[--] +-- Read KB hit rate: 0% (0 cached / 0 reads)
[--] +-- Write KB hit rate: 0% (0 cached / 0 writes)
[!!] Key buffer used: 18.3% (1.5M used / 8.0M cache) # We have only filled 1.5M of the 8 assigned
[OK] Key buffer size / total MyISAM indexes: 8.0M/7.1M # This is the max theoretical buffer to cache all my indexes
Variables to adjust:
key_buffer_size (~ 1M)
# Tuner has seen that we barely use this buffer and it can be shrunk, if you care about its impact don't lower this below your total indexes.
aria_pagecache_buffer_size
Index and data buffer for Aria tables. If you use little or no data in Aria tables, you may reclaim some memory here.
-------- Aria Metrics ------------------------------------------------------------------------------
[--] Aria Storage Engine is enabled.
[OK] Aria pagecache size / total Aria indexes: 128.0M/328.0K # I use a fraction of my Aria buffer since I have no Aria tables.
[OK] Aria pagecache hit rate: 99.9% (112K cached / 75 reads) # Aria is used internally by MariaDB, so you still want an incredibly high ratio here.
Swappiness
Swappiness is not an SQL variable but part of your system kernel. Swappiness controls how much free memory a server “likes” to have at any given time, and how frequently it shifts data to swapfile to free up memory. Desktop operating systems will have this value set quite high, whereas servers are less aggressive with their swapfile.
Database workloads especially benefit from having their caches stay in memory, and values under 10 are typically recommended for a dedicated database server. 10 is a good compromise for a mixed-use server with adequate memory.
If your server is memory starved, leave swappiness aggressive to ensure memory is moved around as needed.
joel@METABOX:~/aa_dev$ free -m
total used free shared buff/cache available
Mem: 15998 1903 13372 2 722 13767
Swap: 4096 1404 2691
# Here we can see a lot of memory page (1404MB) sitting in swap while there is free memory (13372MB) available
[root@auth ~]# free -m
total used free shared buff/cache available
Mem: 738 611 59 1 68 35
Swap: 2047 1014 1033
# Here we can see a memory starved server highly utilizing swap already. I wouldn't mess with it too much. (vm.swappiness is 30)
[root@mysql ~]# free -m
total used free shared buff/cache available
Mem: 738 498 95 7 145 120
Swap: 2047 289 1758
# Here we can see a dedicated single use Database Server, Swappiness is 10 here because we have been careful not to starve it of memory and there is low potential to impact other applications
[--] Information about kernel tuning:
...
[--] vm.swappiness = 30
[xx] Swappiness is < 10.
...
vm.swappiness <= 10 (echo 10 > /proc/sys/vm/swappiness) or vm.swappiness=10 in /etc/sysctl.conf
Max Asynchronous IO
Unless you are still operating on spinning rust (Hard Disk Drives), or an IO-limited VPS, you can likely increase this value. Database workloads appreciate the additional scaling.
[--] Information about kernel tuning:
[--] fs.aio-max-nr = 65536
...
fs.aio-max-nr > 1M (echo 1048576 > /proc/sys/fs/aio-max-nr) or fs.aio-max-nr=1048576 in /etc/sysctl.conf
Support
If you encounter any AA related issues during installation or otherwise, please first check the following resources:
See the section on troubleshooting your AA instance, e.g. the list of common problems
Search the AA issue list (especially the closed ones)
No solution?
Customizing
It is possible to customize your Alliance Auth instance.
Warning
Keep in mind that you may need to update some of your customizations manually after new Auth releases (e.g., when replacing templates).
Site name
You can replace the default name shown on the website with your own, e.g., the name of your Alliance.
Just update SITE_NAME in your local.py settings file accordingly, e.g.:
SITE_NAME = 'Awesome Alliance'
Custom Static and Templates
Within your auth project exist two folders named static and templates. These are used by Django for rendering web pages. Static refers to content Django does not need to parse before displaying, such as CSS styling or images. When running via a WSGI worker such as Gunicorn, static files are copied to a location for the web server to read from. Templates are always read from the template folders, rendered with additional context from a view function, and then displayed to the user.
You can add extra static or templates by putting files in these folders. Note that changes to static files require running the python manage.py collectstatic command to copy them to the web server directory.
It is possible to overload static and templates shipped with Django or Alliance Auth by including a file with the exact path of the one you wish to overload. For instance, if you wish to add extra links to the menu bar by editing the template, you would copy allianceauth/templates/allianceauth/base-bs5.html (Bootstrap 5) or allianceauth/templates/allianceauth/base.html (legacy BS3) to myauth/templates/allianceauth/ and edit it there. Notice the paths are identical after the templates/ directory - this is critical for it to be recognized. Your custom template will be used instead of the one included with Alliance Auth when Django renders the web page. The same idea applies to static: put CSS or images at an identical path after the static/ directory and they will be copied to the web server directory instead of the ones included.
Custom URLs and Views
It is possible to add or override URLs with your auth project’s URL config file. Upon installing, it is of the form:
from django.urls import re_path
from django.urls import include
import allianceauth.urls
urlpatterns = [
    re_path(r'', include(allianceauth.urls)),
]
This means every request gets passed to the Alliance Auth URL config to be interpreted.
If you wanted to add a URL pointing to a custom view, it can be added anywhere in the list if not already used by Alliance Auth:
from django.urls import re_path
from django.urls import include, path
import allianceauth.urls
import myauth.views
urlpatterns = [
    re_path(r'', include(allianceauth.urls)),
    path('myview/', myauth.views.myview, name='myview'),
]
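For completeness, a minimal sketch of the referenced view in myauth/views.py; the view body is hypothetical and only serves to make the URL example above runnable:
# myauth/views.py (hypothetical view backing the 'myview/' pattern above)
from django.http import HttpResponse


def myview(request):
    return HttpResponse("Hello from a custom view")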
Additionally, you can override URLs used by Alliance Auth here:
from django.urls import re_path
from django.urls import include, path
import allianceauth.urls
import myauth.views
urlpatterns = [
    path('account/login/', myauth.views.login, name='auth_login_user'),
    re_path(r'', include(allianceauth.urls)),
]
Development
Alliance Auth is designed to be extended easily. Learn how to develop your own apps and services for AA or to develop for AA core in the development chapter.
Custom apps and services
This section describes how to extend Alliance Auth with custom apps, services and themes.
Integrating Services
One of the primary roles of Alliance Auth is integrating with external services to authenticate and manage users. This is achieved through the use of service modules.
The Service Module
Each service module is its own self-contained Django app. It will likely contain views, models, migrations and templates. Anything that is valid in a Django app is valid in a service module.
Normally service modules live in services.modules, though they may also be distributed as external packages and installed via pip if you wish. A module is installed by including it in the INSTALLED_APPS setting.
Service Module Structure
Typically, a service will contain 5 key components: urls, views, tasks, a manager, and a service hook.
The architecture looks something like this:
urls -------▶ Views
▲ |
| |
| ▼
ServiceHook ----▶ Tasks ----▶ Manager
▲
|
|
AllianceAuth
Where:
Module --▶ Dependency/Import
While this is the typical structure of the existing services modules, there is no enforcement of this structure, and you are, effectively, free to create whatever architecture may be necessary. A service module need not even communicate with an external service; for example, if a module needs similar triggers such as validate_user and delete_user, it may be convenient for it to masquerade as a service. Ideally, using the common structure improves maintainability for other developers.
The Hook
To integrate with Alliance Auth, service modules must provide a services_hook. This hook is a function that returns an instance of the services.hooks.ServiceHook class and is decorated with the @hooks.register decorator. For example:
@hooks.register('services_hook')
def register_service():
    return ExampleService()
This would register the ExampleService class, which would need to be a subclass of services.hooks.ServiceHook.
Important
The hook MUST be registered in yourservice.auth_hooks along with any other hooks you are registering for Alliance Auth.
A subclassed ServiceHook might look like this:
class ExampleService(ServicesHook):
    def __init__(self):
        ServicesHook.__init__(self)
        self.urlpatterns = urlpatterns
        self.service_url = 'https://exampleservice.example.com'

    """
    Overload base methods here to implement functionality
    """
The ServiceHook class
The base ServiceHook class defines function signatures that Alliance Auth will call under certain conditions to trigger some action in the service.
You will need to subclass services.hooks.ServiceHook in order to provide an implementation of the functions so that Alliance Auth can interact with the service correctly. All the functions are optional, so it’s up to you to define what you need.
Instance Variables:
Properties:
Functions:
Variables
self.name
Internal name of the module, should be unique amongst modules.
self.urlpatterns
You should usually define all of your service URLs internally in urls.py. Then you can import them and set self.urlpatterns to your defined urlpatterns.
from . import urls
...
class MyService(ServiceHook):
    def __init__(self):
        ...
        self.urlpatterns = urls.urlpatterns
All of your app’s defined urlpatterns will then be included in the URLconf when the core application starts.
self.service_ctrl_template
This is provided as a courtesy and defines the default template to be used with render_service_ctrl. You are free to redefine or not use this variable at all.
title
This is a property which provides a user-friendly display of your service’s name. It will usually do a reasonably good job unless your service name has punctuation or odd capitalization. If this is the case, you should override this method and return a string.
Functions
delete_user
def delete_user(self, user, notify_user=False):
Delete the user’s service account and optionally notify them that the service has been disabled. The user parameter should be a Django User object. If notify_user is set to True, a message should be sent to the user via the notifications module to alert them that their service account has been disabled.
The function should return a boolean: True if successfully disabled, False otherwise.
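An implementation might look like the following sketch. ExampleTasks is a hypothetical task layer for this service, and notify is assumed to be imported from Alliance Auth’s notifications module (the import path may differ between Auth versions):
def delete_user(self, user, notify_user=False):
    # ExampleTasks is a hypothetical task layer; replace with your service's own logic
    logger.debug('Deleting user %s %s account', user, self.name)
    if not ExampleTasks.delete_user(user):
        return False
    if notify_user:
        # notify() is assumed to come from the notifications module mentioned above
        notify(user, 'Example Service Account Disabled', level='danger')
    return True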
validate_user
def validate_user(self, user):
Validate the user’s service account, deleting it if they should no longer have access. The user parameter should be a Django User object.
An implementation will probably look like the following:
def validate_user(self, user):
    logger.debug('Validating user %s %s account' % (user, self.name))
    if ExampleTasks.has_account(user) and not self.service_active_for_user(user):
        self.delete_user(user, notify_user=True)
No return value is expected.
This function will be called periodically on all users to validate that the given user should have their current service accounts.
sync_nickname
def sync_nickname(self, user):
Very optional. As of writing, only one service defines this. The user parameter should be a Django User object. When called, the given user’s nickname for the service should be updated and synchronized with the service.
If this function is defined, an admin action will be registered on the Django Users view, allowing admins to manually trigger this action for one or many users. The hook will trigger this action user by user, so you won’t have to manage a list of users.
sync_nicknames_bulk
def sync_nicknames_bulk(self, users):
Updates the nickname for a list of users. The users parameter must be a list of Django User objects.
If this method is defined, the admin action for updating service related nicknames for users will call this bulk method instead of sync_nickname. This gives you more control over how mass updates are executed, e.g., ensuring updates do not run in parallel to avoid causing rate limit violations from an external API.
This is an optional method.
update_groups
def update_groups(self, user):
Update the user’s group membership. The user parameter should be a Django User object.
When this is called, the service should determine the groups the user is a member of and synchronize the group membership with the external service. If your service does not support groups, then you are not required to define this.
If this function is defined, an admin action will be registered on the Django Users view, allowing admins to manually trigger this action for one or many users. The hook will trigger this action user by user, so you won’t have to manage a list of users.
This action is usually called via a signal when a user’s group membership changes (joins or leaves a group).
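An implementation might look like the following sketch; ExampleTasks is a hypothetical Celery task layer, and the exact group mapping depends on your service:
def update_groups(self, user):
    # Sketch only: ExampleTasks is a hypothetical task layer for this service
    logger.debug('Processing %s groups for %s', self.name, user)
    if ExampleTasks.has_account(user):
        group_names = [group.name for group in user.groups.all()]
        ExampleTasks.update_groups.delay(user.pk, group_names)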
update_groups_bulk
def update_groups_bulk(self, users):
Updates the group memberships for a list of users. The users parameter must be a list of Django User objects.
If this method is defined, the admin action for updating service related groups for users will call this bulk method instead of update_groups. This gives you more control over how mass updates are executed, e.g., ensuring updates do not run in parallel to avoid causing rate limit violations from an external API.
This is an optional method.
update_all_groups
def update_all_groups(self):
The service should iterate through all of its recorded users and update their groups.
I’m really not sure when this is called; it may be a holdover from before signals started to be used. Regardless, it can be useful to server admins, who may call this from a Django shell to force a synchronization of all user groups for a specific service.
service_active_for_user
def service_active_for_user(self, user):
Is this service active for the given user? The user parameter should be a Django User object.
Usually you won’t need to override this, as it calls service_enabled_members or service_enabled_blues depending on the user’s state.
show_service_ctrl
def show_service_ctrl(self, user, state):
Should the service be shown for the given user with the given state? The user parameter should be a Django User object, and the state parameter should be a valid state from authentication.states.
Usually you won’t need to override this function.
For more information see the render_service_ctrl section.
render_service_ctrl
def render_services_ctrl(self, request):
Render the services control row. This will be called for all active services when a user visits the /services/ page and show_service_ctrl returns True for the given user.
It should return a string (usually from render_to_string) of a table row (<tr>) with 4 columns (<td>). Column #1 is the service name, column #2 is the user’s username for this service, column #3 is the service’s URL, and column #4 is the action buttons.
You may either define your own service template or use the default one provided. The default can be used like this example:
def render_services_ctrl(self, request):
    """
    Example for rendering the service control panel row
    You can override the default template and create a
    custom one if you wish.
    :param request:
    :return:
    """
    urls = self.Urls()
    urls.auth_activate = 'auth_example_activate'
    urls.auth_deactivate = 'auth_example_deactivate'
    urls.auth_reset_password = 'auth_example_reset_password'
    urls.auth_set_password = 'auth_example_set_password'
    return render_to_string(self.service_ctrl_template, {
        'service_name': self.title,
        'urls': urls,
        'service_url': self.service_url,
        'username': 'example username'
    }, request=request)
The Urls class defines the available URL names for the four actions available in the default template:
Activate (create a service account)
Deactivate (delete a service account)
Reset Password (random password)
Set Password (custom password)
If you don’t define one or all of these variables, the button for the undefined URLs will not be displayed.
Most services will survive with the default template. If, however, you require extra buttons for whatever reason, you are free to provide your own template as long as you stick within the 4 columns. Multiple rows should be OK, though it may be confusing to users.
The Service Manager
The service manager is what interacts with the external service. Ideally, it should be completely agnostic about its environment, meaning that it should avoid calls to Alliance Auth and Django in general (except in special circumstances where the service is managed locally, e.g., Mumble). Data should come in already arranged by the Tasks and be passed back for the Tasks to manage or distribute.
The reason for maintaining this separation is that managers may be reused from other sources, and there may not even be a need to write a custom manager. Likewise, by maintaining this neutral environment, others may reuse the managers that we write. It can also significantly ease the unit testing of services.
The Views
As mentioned at the start of this page, service modules are fully fledged Django apps. This means you’re free to do whatever you wish with your views.
Typically, most traditional username/password services define four views.
Create Account
Delete Account
Reset Password
Set Password
These views should interact with the service via the Tasks, though in some instances may bypass the Tasks and access the manager directly where necessary, for example, OAuth functionality.
The Tasks
The tasks component is the glue that holds all the other components of the service module together. It provides the function implementation to handle things like adding and deleting users, updating groups, and validating the existence of a user’s account. Whatever tasks auth_hooks and views need for interacting with the service will probably live here.
The Models
It’s very likely that you’ll need to store data about a user’s remote service account locally. As service modules are fully fledged Django apps, you are free to create as many models as necessary for persistent storage. You can create foreign keys to other models in Alliance Auth if necessary, though I strongly recommend you limit this to the User and Groups models from django.contrib.auth.models and query any other data manually.
If you create models, you should create the migrations that go along with them inside your module/app.
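A minimal sketch of such a model, assuming a simple one-to-one link between the Auth user and the remote service account; the field names are illustrative:
from django.contrib.auth.models import User
from django.db import models


class ExampleUser(models.Model):
    # Stores the remote service account that belongs to an Auth user
    user = models.OneToOneField(User, on_delete=models.CASCADE, related_name='example_account')
    remote_username = models.CharField(max_length=254)

    def __str__(self):
        return self.remote_username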
Examples
There is a bare-bones example service included in services.modules.example; you may like to use this as the base for your new service.
You should have a look through some of the other service modules before you get started to get an idea of the general structure. A lot of them aren’t perfect, so don’t feel like you have to rigidly follow the structure of the existing services if you think it’s suboptimal or doesn’t suit the external service you’re integrating.
Testing
You will need to add unit tests for all aspects of your service module before it is accepted. Be mindful that you don’t actually want to make external calls to the service, so you should mock the appropriate components to prevent this behavior.
- class ServicesHook[source]
Abstract base class for creating a compatible services hook. Decorate with @register(‘services_hook’) to have the services module registered for callbacks. Must be in auth_hook(.py) sub module
- delete_user(user, notify_user=False)[source]
Delete the user’s service account and optionally notify them that the service has been disabled. :param user: django.contrib.auth.models.User :param notify_user: Whether the service should send a notification to the user about the disabling of their service account. :return: True if the service account has been disabled, or False if it doesn’t exist.
- render_services_ctrl(request)[source]
Render the services control template row :param request: :return:
- show_service_ctrl(user)[source]
Whether the service control should be displayed to the given user who has the given service state. Usually this function won’t require overloading. :param user: django.contrib.auth.models.User :return: bool True if the service should be shown
- sync_nickname(user)[source]
Sync the user’s nickname :param user: django.contrib.auth.models.User :return: None
- property title
A nicely formatted title of the service, for client facing display. :return: str
URL Hooks
Base functionality
The URL hooks allow you to dynamically specify URL patterns from your plugin app or service. To achieve this, you should subclass or instantiate the services.hooks.UrlHook class and then register the URL patterns with the hook.
To register a UrlHook class, you would do the following:
@hooks.register('url_hook')
def register_urls():
    return UrlHook(app_name.urls, 'app_name', r'^app_name/')
urls
The urls module to include. See the Django docs for designing urlpatterns.
namespace
The URL namespace to apply. This is usually just the app name.
base_url
The URL prefix to match against in regex form, for example r'^app_name/'. This prefix will be applied in front of all URL patterns included. It is possible to use the same prefix as existing apps (or no prefix at all), but standard URL resolution ordering applies (hook URLs are the last ones registered).
Public views
In addition, it is possible to make views public. Normally, all views are automatically decorated with the main_character_required decorator. That decorator ensures a user needs to be logged in and have a main character before they can access that view. This feature protects against a community app sneaking in a public view without the administrator knowing about it.
An app can opt out of this feature by adding a list of views to be excluded when registering the URLs. See the excluded_views parameter for details.
Note
Note that for a public view to work, administrators also need to explicitly allow apps to have public views in their AA installation by adding the app label to the APPS_WITH_PUBLIC_VIEWS setting.
Examples
An app called plugin provides a single view:
def index(request):
    return render(request, 'plugin/index.html')
The app’s urls.py would look like so:
from django.urls import path
import plugin.views
urlpatterns = [
    path('index/', plugin.views.index, name='index'),
]
Subsequently, it would implement the UrlHook in a dedicated auth_hooks.py file like so:
from alliance_auth import hooks
from services.hooks import UrlHook
import plugin.urls
@hooks.register('url_hook')
def register_urls():
    return UrlHook(plugin.urls, 'plugin', r'^plugin/')
When this app is included in the project’s settings.INSTALLED_APPS, users would access the index view by navigating to https://example.com/plugin/index.
API
- class UrlHook(urls, namespace: str, base_url: str, excluded_views: Iterable[str] | None = None)[source]
A hook for registering the URLs of a Django app.
- Parameters:
urls (-) – The urls module to include
namespace (-) – The URL namespace to apply. This is usually just the app name.
base_url (-) – The URL prefix to match against in regex form. Example r'^app_name/'. This prefix will be applied in front of all URL patterns included. It is possible to use the same prefix as existing apps (or no prefix at all), but standard URL resolution ordering applies (hook URLs are the last ones registered).
excluded_views (-) – Optional list of views to be excluded from auto-decorating them with the default main_character_required decorator, e.g. to make them public. Views must be specified by their qualified name, e.g. ["example.views.my_public_view"]
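For example, building on the plugin app from the examples above, a hook with a public view could be registered like this sketch; the qualified view name is hypothetical, and the app label still needs to be added to the APPS_WITH_PUBLIC_VIEWS setting:
@hooks.register('url_hook')
def register_urls():
    # 'plugin.views.index' is just an example of a fully qualified view name
    return UrlHook(plugin.urls, 'plugin', r'^plugin/', excluded_views=['plugin.views.index'])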
Logging from Custom Apps
Alliance Auth provides a logger for use with custom apps to make everyone’s life a little easier.
Using the Extensions Logger
AllianceAuth provides a helper function to get the logger for the current module to reduce the amount of code you need to write.
from allianceauth.services.hooks import get_extension_logger
logger = get_extension_logger(__name__)
This works by creating a child logger of the extension logger which propagates all log entries to the parent (extensions) logger.
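Once obtained, the logger behaves like any standard Python logger, for example:
logger.info("Something interesting happened")
logger.error("Something went wrong: %s", "details")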
Changing the Logging Level
By default, the extension logger’s level is set to DEBUG.
To change this, uncomment (or add) the following line in local.py:
LOGGING['handlers']['extension_file']['level'] = 'INFO'
(Remember to restart your supervisor workers after changes to local.py.)
This will change the logger’s level to the level you define.
The options are as follows (each level also accepts entries of the levels listed below it):
DEBUG
INFO
WARNING
ERROR
CRITICAL
allianceauth.services.hooks.get_extension_logger
Takes the name of a plugin/extension and generates a child logger of the extensions logger to be used by the extension to log events to the extensions logger.
The logging level is determined by the level defined for the parent logger.
- param:
name: the name of the extension doing the logging
- return:
an extensions child logger
- __init__(*args, **kwargs)
Initialize self. See help(type(self)) for accurate signature.
Theme Hooks
The theme hook allows custom themes to be loaded dynamically by AA’s CSS/JS bundles, as selected by users.
To register a ThemeHook class you would do the following:
@hooks.register('theme_hook')
def register_darkly_hook():
    return ThemeHook()
The ThemeHook class specifies some required parameters/instance variables.
- class ThemeHook(name: str, description: str, css: List[dict], js: List[dict], css_template: str | None = None, js_template: str | None = None, html_tags: str | None = '', header_padding: str | None = '4em')[source]
Theme hook for injecting a Bootstrap 5 theme and associated JS into Alliance Auth. These can be local or CDN delivered.
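Putting these parameters together, a hedged sketch of a complete theme hook; the import paths may differ between Auth versions, and the CDN URLs are placeholders rather than real resources:
# Sketch only: import paths may vary between Auth versions; URLs are placeholders
from allianceauth import hooks
from allianceauth.theme.hooks import ThemeHook


class MyThemeHook(ThemeHook):
    def __init__(self):
        ThemeHook.__init__(
            self,
            "My Theme",
            "An example Bootstrap 5 theme",
            css=[{"url": "https://cdn.example.com/mytheme/bootstrap.min.css"}],
            js=[{"url": "https://cdn.example.com/mytheme/bootstrap.bundle.min.js"}],
        )


@hooks.register('theme_hook')
def register_my_theme_hook():
    return MyThemeHook()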
AA Framework
This section contains information about the Alliance Auth framework and how to use it.
The Alliance Auth framework is a collection of reusable Python code as well as CSS classes that are used throughout Alliance Auth. It is designed to be used by developers of community apps to make their lives easier.
The Alliance Auth framework is split into several submodules, each of which is documented in its own section.
Alliance Auth Helper-Functions API
The following helper functions are available in the allianceauth.framework.api module.
They are intended to be used in Alliance Auth itself as well as in community apps.
These functions are meant to make the life of community app developers a little easier, so they don’t have to reinvent the wheel all the time.
EveCharacter API
get_main_character_from_evecharacter
This is to get the main character object (EveCharacter) of an EveCharacter object.
Given we have an EveCharacter object called my_evecharacter and we want to get the main character:
# Alliance Auth
from allianceauth.framework.api.evecharacter import get_main_character_from_evecharacter
main_character = get_main_character_from_evecharacter(character=my_evecharacter)
Now, main_character is an EveCharacter object, or None if the EveCharacter has no main character.
get_user_from_evecharacter
This is to get the user object (User) of an EveCharacter object.
Given we have an EveCharacter object called my_evecharacter and we want to get the user:
# Alliance Auth
from allianceauth.framework.api.evecharacter import get_user_from_evecharacter
user = get_user_from_evecharacter(character=my_evecharacter)
Now, user is a User object, or the sentinel user (see get_sentinel_user) if the EveCharacter has no user.
User API
get_all_characters_from_user
This is to get all character objects (EveCharacter) of a user.
Given we have a User object called my_user and we want to get all characters:
# Alliance Auth
from allianceauth.framework.api.user import get_all_characters_from_user
characters = get_all_characters_from_user(user=my_user)
Now, characters is a list containing all EveCharacter objects of the user.
If the user is None, an empty list will be returned.
get_main_character_from_user
This is to get the main character object (EveCharacter) of a user.
Given we have a User object called my_user and we want to get the main character:
# Alliance Auth
from allianceauth.framework.api.user import get_main_character_from_user
main_character = get_main_character_from_user(user=my_user)
Now, main_character is an EveCharacter object, or None if the user has no main character or the user is None.
get_main_character_name_from_user
This is to get the name of the main character from a user.
Given we have a User object called my_user and we want to get the main character name:
# Alliance Auth
from allianceauth.framework.api.user import get_main_character_name_from_user
main_character = get_main_character_name_from_user(user=my_user)
Now, main_character is a string containing the user's main character name.
If the user has no main character, the username will be returned. If the user is None, the sentinel username (see get_sentinel_user) will be returned.
get_sentinel_user
This function is useful in models when using User
model-objects as foreign keys.
Django needs to know what should happen to those relations when the user is being
deleted. To keep the data, you can have Django map this to the sentinel user.
Import:
# Alliance Auth
from allianceauth.framework.api.user import get_sentinel_user
And later in your model:
creator = models.ForeignKey(
to=User,
on_delete=models.SET(get_sentinel_user),
)
CSS Framework
To establish a unified style language throughout Alliance Auth and Community Apps, Alliance Auth provides its own CSS framework with a couple of CSS classes.
Cursors
Our CSS framework provides different classes to manipulate the cursor, which are missing in Bootstrap.
| CSS Class | Effect |
|---|---|
| cursor-default | System default cursor |
| cursor-pointer | Pointer, as it looks for links and form buttons |
| cursor-wait | Wait animation |
| cursor-text | Text selection cursor |
| cursor-move | 4-arrow-shaped cursor |
| cursor-help | Cursor with a little question mark |
| cursor-not-allowed | Not Allowed sign |
| cursor-inherit | Inherited from its parent element |
| cursor-zoom-in | Zoom in symbol |
| cursor-zoom-out | Zoom out symbol |
Callout-Boxes
These are similar to the Bootstrap alert/notification boxes, but not as “loud”.
Callout-boxes need a base-class (.aa-callout) and a modifier-class (e.g.: .aa-callout-info for an info-box). Modifier classes are available for the usual Bootstrap alert levels "Success", "Info", "Warning" and "Danger".
HTML
<div class="aa-callout">
<p>
This is a callout-box.
</p>
</div>
Alert Level Modifier Classes
The callout boxes come in four alert levels: success, info, warning and danger.
Use the modifier classes .aa-callout-success, .aa-callout-info, .aa-callout-warning and .aa-callout-danger to change the left border color of the callout box.
<div class="aa-callout aa-callout-success">
<p>
This is a success callout-box.
</p>
</div>
<div class="aa-callout aa-callout-info">
<p>
This is an info callout-box.
</p>
</div>
<div class="aa-callout aa-callout-warning">
<p>
This is a warning callout-box.
</p>
</div>
<div class="aa-callout aa-callout-danger">
<p>
This is a danger callout-box.
</p>
</div>
Size Modifier Classes
The callout boxes come in three sizes: small, default and large.
Use the modifier classes .aa-callout-sm for small and .aa-callout-lg for large, where .aa-callout-sm will change the default padding from 1rem to 0.5rem and .aa-callout-lg will change it to 1.5rem.
These modifier classes can be combined with the alert level modifier classes.
<div class="aa-callout aa-callout-sm">
<p>
This is a small callout-box.
</p>
</div>
<div class="aa-callout">
<p>
This is a default callout-box.
</p>
</div>
<div class="aa-callout aa-callout-lg">
<p>
This is a large callout-box.
</p>
</div>
Templates
Bundles
Bundles are templates that load essential CSS and JavaScript and are used throughout Alliance Auth. These bundles can also be used in your own apps, so you don't have to load specific CSS or JavaScript yourself.
These bundles include DataTables CSS and JS, jQuery Datepicker CSS and JS, jQueryUI CSS and JS, and more.
A full list of bundles we provide can be found here: https://gitlab.com/allianceauth/allianceauth/-/tree/master/allianceauth/templates/bundles
To use a bundle, you can use the following code in your template (Example for jQueryUI):
{% block extra_css %}
{% include "bundles/jquery-ui-css.html" %}
{% endblock %}
{% block extra_javascript %}
{% include "bundles/jquery-ui-js.html" %}
{% endblock %}
Template Partials
To ensure a unified style language throughout Alliance Auth and Community Apps, we also provide a couple of template partials. This collection is bound to grow over time, so it's best to keep an eye on this page.
Page Header
On some pages you want to have a page header. To make this easier, we provide a template partial for this.
To use it, you can use the following code in your template:
{% block content %}
<div>
{% translate "My Page Header" as page_header %}
{% include "framework/header/page-header.html" with title=page_header %}
<p>My page content</p>
</div>
{% endblock %}
Developing AA Core
This section contains important information on how to develop Alliance Auth itself.
Alliance Auth documentation
The documentation for Alliance Auth uses Sphinx to build documentation. When a new commit to specific branches is made (master, primarily), the repository is automatically pulled, docs built and deployed on readthedocs.org.
Documentation was migrated from the GitHub wiki pages and into the repository to allow documentation changes to be included with pull requests. This means that documentation can be guaranteed to be updated when a pull request is accepted rather than hoping documentation is updated afterwards or relying on maintainers to do the work. It also allows for documentation to be maintained at different versions more easily.
Building Documentation
If you’re developing new documentation, it’s likely you’ll want or need to test build it before committing to your branch. To achieve this, you can use Sphinx to build the documentation locally as it appears on Read the Docs.
Activate your virtual environment (if you're using one) and install the documentation requirements found in docs/requirements.txt using pip, e.g. pip install -r docs/requirements.txt.
You can then build the docs by changing to the docs/ directory and running make html or make dirhtml, depending on how the Read the Docs project is configured. Either should work fine for testing. You can now find the output of the build in the /docs/_build/ directory.
Occasionally you may need to fully rebuild the documents by running make clean first, usually when you add or rearrange toctrees.
Documentation Format
CommonMark-plus Markdown is the current preferred format, via MyST-Parser. reStructuredText is supported if required, or you can execute snippets of MyST inside Markdown by using a code block:
```{eval-rst}
reStructuredText here
```
Markdown is used elsewhere on GitHub, so it provides the most portability of documentation from Issues and Pull Requests as well as providing an easier initial migration path from the GitHub wiki.
Code Style
Pre-Commit
Alliance Auth is a team effort with developers of various skill levels and background. To avoid significant drift or formatting changes between developers, we use pre-commit to apply a very minimal set of formatting checks to code contributed to the project.
Pre-commit is also very popular with our Community Apps; their configurations may be significantly more opinionated or looser, depending on the project.
To get started, pip install pre-commit, then pre-commit install to add the git hooks.
Before any code is committed, pre-commit will check it for uniformity and correct it if possible:
check python ast.....................................(no files to check)Skipped
check yaml...........................................(no files to check)Skipped
check json...........................................(no files to check)Skipped
check toml...........................................(no files to check)Skipped
check xml............................................(no files to check)Skipped
check for merge conflicts............................(no files to check)Skipped
check for added large files..........................(no files to check)Skipped
detect private key...................................(no files to check)Skipped
check for case conflicts.............................(no files to check)Skipped
debug statements (python)............................(no files to check)Skipped
fix python encoding pragma...........................(no files to check)Skipped
fix utf-8 byte order marker..........................(no files to check)Skipped
mixed line ending....................................(no files to check)Skipped
trim trailing whitespace.............................(no files to check)Skipped
check that executables have shebangs.................(no files to check)Skipped
fix end of files.....................................(no files to check)Skipped
Check .editorconfig rules............................(no files to check)Skipped
django-upgrade.......................................(no files to check)Skipped
pyupgrade............................................(no files to check)Skipped
Editorconfig
EditorConfig is supported by most IDEs to streamline the most common editor disparities. While checked by our pre-commit file, using this in your IDE (either automatically or via a plugin) will minimize the corrections that may need to be made.
Doc Strings
We prefer either PEP-287/reStructuredText or Google Docstrings.
These can be used to automatically generate our Sphinx documentation in either format.
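For illustration, a short sketch of a function documented in the PEP-287/reStructuredText style (the function itself is hypothetical):
from datetime import date


def character_age(birthday: date, today: date) -> int:
    """Return the age of a character in whole years.

    :param birthday: birthday of the character
    :param today: reference date for the age calculation
    :return: age in whole years
    """
    years = today.year - birthday.year
    if (today.month, today.day) < (birthday.month, birthday.day):
        years -= 1
    return years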
Best Practice
It is advisable to avoid wide formatting changes on code that is not being modified by an MR. Further to this, automated code formatting should be kept to a minimum when modifying sections of existing files.
If you are contributing whole modules or rewriting large sections of code, you may use any legible code formatting valid under Python.
Setup dev environment for AA
Here you find guides on how to setup your development environment for AA.
Development on Windows 10 with WSL and Visual Studio Code
This document describes step-by-step how to set up a complete development environment for Alliance Auth apps on Windows 10 with Windows Subsystem for Linux (WSL) and Visual Studio Code.
The main benefit of this setup is that it runs all services and code in the native Linux environment (WSL) and at the same time can be fully controlled from within a comfortable Windows IDE (Visual Studio Code) including code debugging.
In addition, all tools described in this guide are open source or free software.
Hint
This guide is meant for development purposes only and not for installing AA in a production environment. For production installation, please see chapter Installation.
Overview
The development environment consists of the following components:
Visual Studio Code with the Remote WSL and Python extension
WSL with Ubuntu (18.04 LTS or higher)
Python environment on WSL (3.8 or higher)
MySQL server on WSL
Redis on WSL
Alliance Auth on WSL
Celery on WSL
We will use the built-in Django development web server, so we don't need to set up a WSGI server or a web server.
Note
This setup works with both WSL 1 and WSL 2. However, due to the significantly better performance, we recommend WSL 2.
Requirement
The only requirement is a PC with Windows 10 and Internet connection to download the additional software components.
Installing Windows apps
Windows Subsystem for Linux
Install from here: Microsoft docs
Choose Ubuntu 18.04 LTS or higher
Visual Studio Code
Install from here: VSC Download
Open the app and install the following VSC extensions:
Remote WSL
Connect to WSL. This will automatically install the VSC server on the WSL side
Once connected to WSL, install the Python extension on the WSL side
Setting up WSL / Linux
Open a WSL bash and update all software packets:
sudo apt update && sudo apt upgrade -y
Install Tools
sudo apt-get install build-essential
sudo apt-get install gettext
Install Python
Next, we need to install Python and related development tools.
Note
Should your Ubuntu come with a newer version of Python, we recommend still setting up your dev environment with the oldest Python 3 version currently supported by AA (e.g., Python 3.8 at the time of writing) to ensure your apps are compatible with all current AA installations.
You can check out this page on how to install additional Python versions on Ubuntu: https://askubuntu.com/questions/682869/how-do-i-install-a-different-python-version-using-apt-get/1195153
If you install a different Python version from the default, you need to adjust some commands below to install appropriate versions of those packages. For example, using Python 3.8 you might need to run the following after using the setup steps for the repository mentioned in the AskUbuntu post above:
sudo apt-get install python3.8 python3.8-dev python3.8-venv python3-setuptools python3-pip python-pip
Use the following command to install Python 3 with all required libraries with the default version:
sudo apt-get install python3 python3-dev python3-venv python3-setuptools python3-pip python-pip
Install redis and other tools
sudo apt-get install unzip git redis-server curl libssl-dev libbz2-dev libffi-dev pkg-config
Start redis
sudo redis-server --daemonize yes
Installing the DBMS
Install MySQL and required libraries with the following command:
sudo apt-get install mysql-server mysql-client libmysqlclient-dev
Note
We chose to use MySQL instead of MariaDB, because the standard version of MariaDB that comes with this Ubuntu distribution will not work with AA.
We need to apply a permission fix to mysql, or you will get a warning with every startup:
sudo usermod -d /var/lib/mysql/ mysql
Start the mysql server
sudo service mysql start
Create a database and user for AA
sudo mysql -u root
CREATE DATABASE aa_dev CHARACTER SET utf8mb4;
CREATE USER 'admin'@'localhost' IDENTIFIED BY 'YOUR-PASSWORD';
GRANT ALL PRIVILEGES ON * . * TO 'admin'@'localhost';
FLUSH PRIVILEGES;
exit;
Add timezone info to mysql:
sudo mysql_tzinfo_to_sql /usr/share/zoneinfo | sudo mysql -u root mysql
Note
If your WSL does not have an init.d service, it will not automatically start your services such as MySQL and Redis when you boot your Windows machine, and you have to manually start them. For convenience, we recommend putting these commands in a bash script. Here is an example:
#!/bin/bash
# start services for AA dev
sudo service mysql start
sudo redis-server --daemonize yes
Setup dev folder on WSL
Set up your folders on WSL bash for your dev project. Our approach will set up one AA project with one venv and multiple apps running under the same AA project, but each in their own folder and git.
A good location for setting up this folder structure is your home folder or a subfolder of your home:
~/aa-dev
|- venv
|- myauth
|- my_app_1
|- my_app_2
|- ...
Following this approach, you can also set up additional AA projects, e.g. aa-dev-2, aa-dev-3 if needed.
Create the root folder aa-dev.
Hint
The folders venv and myauth will be created automatically in later steps. Please do not create them manually as this would lead to errors.
Setup virtual Python environment for aa-dev
Create the virtual environment. Run this in your aa-dev folder:
python3 -m venv venv
And activate your venv:
source venv/bin/activate
Install and update basic Python packages
pip install -U pip setuptools wheel
Installing Alliance Auth
Install and create AA instance
pip install allianceauth
Now we are ready to set up our AA instance. Make sure to run this command in your aa-dev folder:
allianceauth start myauth
Next, we will set up our VSC project for aa-dev by starting it directly from the WSL bash:
code .
First, you want to make sure to exclude the venv folder from VSC as follows:
Open settings and go to Files:Exclude
Add the pattern: **/venv
Create EVE Online SSO App
For the Eve Online related setup you need to create an SSO app on the developer site:
Create your Eve Online SSO App on the Eve Online developer site
Add all ESI scopes
Set callback URL to:
http://127.0.0.1:8000/sso/callback
Update Django settings
Open your local Django settings with VSC. The file is under myauth/myauth/settings/local.py
Hint
There are two Django settings files: base.py and local.py. The base settings file is controlled by the AA project and may change at any time. It is therefore recommended to only change the local settings file.
Enable debug mode:
DEBUG = True
Define URL and name of your site:
SITE_URL = "http://127.0.0.1:8000"
...
SITE_NAME = "AA Dev"
Update name, user and password of your DATABASE configuration.
DATABASES['default'] = {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'aa_dev',
'USER': 'admin',
'PASSWORD': 'YOUR-PASSWORD',
'HOST': '127.0.0.1',
'PORT': '3306',
'OPTIONS': {'charset': 'utf8mb4'},
"TEST": {"CHARSET": "utf8mb4"},
}
Add the credentials for your Eve Online SSO app as defined above:
ESI_SSO_CLIENT_ID = 'YOUR-ID'
ESI_SSO_CLIENT_SECRET = 'YOUR_SECRET'
Disable email registration:
REGISTRATION_VERIFY_EMAIL = False
Migrations and superuser
Before we can start AA, we need to run migrations:
cd myauth
python manage.py migrate
We also need to create a superuser for our AA installation:
python manage.py createsuperuser
Running Alliance Auth
AA instance
We are now ready to run our AA instance with the following command:
python manage.py runserver
Once running, you can access your auth site in the browser under http://localhost:8000 or the admin site under http://localhost:8000/admin.
Hint
You can start your AA server directly from a terminal window in VSC or with a VSC debug config (see chapter about debugging for details).
Note
Debug vs. Non-Debug mode
Usually it is best to run your dev AA instance in debug mode, so you get all the detailed error messages that help a lot for finding errors. But there might be cases where you want to test features that do not exist in debug mode (e.g. error pages) or just want to see how your app behaves in non-debug / production mode.
When you turn off debug mode, you will see a problem though: Your pages will not render correctly. The reason is that Django will stop serving your static files in production mode and expect you to serve them from a real web server. Luckily, there is an option that forces Django to continue serving your static files directly even when not in debug mode. Start your server with the following option: python manage.py runserver --insecure
Celery
In addition, you can start a celery worker instance for myauth. For development purposes, it makes sense to only start one instance and add some additional logging.
This can be done from the command line with the following command in the myauth folder (where manage.py is located):
celery -A myauth worker -l info -P solo
Same as AA itself, you can start Celery from any terminal session, from a terminal window within VSC or as a debug config in VSC (see chapter about debugging for details). For convenience, we recommend starting Celery as debug config.
Debugging setup
To be able to debug your code, you need to add a debugging configuration to VSC. At least one for AA and one for celery.
Breakpoints
By default, VSC will break on any uncaught exception. Since every error raised by your tests will cause an uncaught exception, we recommend deactivating this feature.
To deactivate, click on the debug icon to switch to the debug view. Then uncheck “Uncaught Exceptions” in the “Breakpoints” window.
AA debug config
In VSC, click on Debug / Add Configuration and choose “Django”. Should Django not appear as an option, make sure to first open a Django file (e.g., the local.py settings) to help VSC detect that you are using Django.
The result should look something like this:
{
"name": "Python: Django",
"type": "python",
"request": "launch",
"program": "${workspaceFolder}/myauth/manage.py",
"cwd": "${workspaceFolder}/myauth",
"args": [
"runserver",
"--noreload"
],
"django": true,
"justMyCode": true,
}
Debug celery
For celery, we need another debug config, so that we can run it in parallel to our AA instance.
Here is an example debug config for Celery:
{
"name": "Python: Celery",
"type": "python",
"request": "launch",
"module": "celery",
"cwd": "${workspaceFolder}/myauth",
"console": "integratedTerminal",
"args": [
"-A",
"myauth",
"worker",
"-l",
"info",
"-P",
"solo",
],
"django": true,
"justMyCode": true,
},
Debug config for unit tests
Finally, it makes sense to have a dedicated debug config for running unit tests. Here is an example config for running all tests of the app example.
{
"name": "Python: myauth unit tests",
"type": "python",
"request": "launch",
"program": "${workspaceFolder}/myauth/manage.py",
"cwd": "${workspaceFolder}/myauth",
"args": [
"test",
"--keepdb",
"--failfast",
"example",
],
"django": true,
"justMyCode": true
},
You can also specify to run just a part of your test suite down to a test method. Give the full path to the test you want to run, e.g. example.test.test_models.TestDemoModel.test_this_method
Debugging normal python scripts
Finally, you may also want to have a debug config to debug a non-Django Python script:
{
"name": "Python: Current File",
"type": "python",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal"
},
Additional tools
The following additional tools are very helpful when developing for AA with VS Code:
VS Code extensions
Django Template
This extension adds language colorization support and user snippets for the Django template language to VS Code: Django Template
Code Spell Checker
Typos in your user facing comments can be quite embarrassing. This spell checker helps you avoid them: Code Spell Checker
Git History
Very helpful to visualize the change history and compare different branches. Git History
markdownlint
Extension for Visual Studio Code - Markdown linting and style checking for Visual Studio Code: markdownlint
Live Server
Live Server allows you to start a mini webserver for any file quickly. This can, e.g., be useful for looking at changes to Sphinx docs: Live Server
Django apps
Django Extensions
django-extensions is a Swiss army knife for Django developers which adds a lot of useful features to your Django site. Here are a few highlights:
- shell_plus - An enhanced version of the Django shell. It will autoload all your models at startup, so you don't have to import anything and can use them right away.
- graph_models - Creates a dependency graph of Django models. Visualizing a model dependency structure can be useful for trying to understand how an existing Django app works, or e.g., how all the AA models work together.
- runserver_plus - The standard runserver stuff but with the debugger baked in. This is a must-have for any serious debugging.
Django Debug Toolbar
The Django Debug Toolbar is a configurable set of panels that display various debug information about the current request/response and when clicked, display more details about the panel’s content. This tool is invaluable to debug and fix performance issues with Django queries.
Windows applications
DBeaver
DBeaver is a free universal database tool and works with many different kinds of databases including MySQL. It can be installed on Windows 10 and will be able to help manage your MySQL databases running on WSL.
Install from here. DBeaver
Adding apps for development
The idea behind the particular folder structure of aa-dev is to have each and every app in its own folder and git repo. To integrate them with the AA instance, they need to be installed once using the -e option, which installs the package in editable mode, and then added to INSTALLED_APPS in your settings.
To demonstrate, let’s add the example plugin to our environment.
Open a WSL bash and navigate to the aa-dev folder. Make sure you have activated your virtual environment (source venv/bin/activate).
Run these commands:
git clone https://gitlab.com/ErikKalkoken/allianceauth-example-plugin.git
pip install -e allianceauth-example-plugin
Add 'example' to INSTALLED_APPS in your local.py settings.
Run migrations and restart your AA server, e.g.:
cd myauth
python manage.py migrate
Developing apps
In this section, you find topics useful for app developers.
API
To reduce redundancy and help speed up development, we encourage developers to utilize the following packages when developing apps for Alliance Auth.
Discord Client
AA contains a web client for interacting with the Discord API. This client can be used independently from an installed Discord service in AA.
Location: allianceauth.services.modules.discord.discord_client
client
Client for interacting with the Discord API.
- class DiscordApiStatusCode(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Status code returned from the Discord API.
- UNKNOWN_MEMBER = 10007
- class DiscordClient(access_token: str, redis: Redis = None, is_rate_limited: bool = True)[source]
This class provides a web client for interacting with the Discord API.
The client has rate limiting that supports concurrency. This means it is able to ensure the API rate limit is not violated, even when used concurrently, e.g. with multiple parallel celery tasks.
In addition, the client supports proper API backoff.
Synchronization of rate limit infos across multiple processes is implemented with Redis and thus requires Redis as Django cache backend.
The cache is shared across all clients and processes (also using Redis).
All durations are in milliseconds.
Most errors from the API will raise a requests.HTTPError.
- Parameters:
access_token – Discord access token used to authenticate all calls to the API
redis – Redis instance to be used.
is_rate_limited – Set to False to turn off rate limiting (use with care). If not specified will try to use the Redis instance from the default Django cache backend.
- Raises:
ValueError – No access token provided
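For illustration, a minimal sketch of creating a client directly and fetching basic guild information (the bot token and guild ID are hypothetical; within Auth you would normally obtain a client via create_bot_client() from the Discord service api instead):
from allianceauth.services.modules.discord.discord_client import DiscordClient

client = DiscordClient(access_token="MY-BOT-TOKEN")  # hypothetical bot token
guild = client.guild_infos(guild_id=123456789012345678)  # hypothetical guild ID
print(guild.name, len(guild.roles))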
- add_guild_member(guild_id: int, user_id: int, access_token: str, role_ids: list = None, nick: str = None) bool | None [source]
Adds a user to the guild.
- Returns:
True when a new user was added
None if the user already existed
False when something went wrong or raises exception
- add_guild_member_role(guild_id: int, user_id: int, role_id: int) bool | None [source]
Adds a role to a guild member
- Returns:
True when successful
None if member does not exist
False otherwise
- create_guild_role(guild_id: int, role_name: str, **kwargs) Role | None [source]
Create a new guild role with the given name.
See official documentation for additional optional parameters.
Note that Discord allows the creation of multiple roles with the same name, so to avoid duplicates it's important to check existing roles before creating new ones.
- Parameters:
guild_id – Discord ID of the guild
role_name – Name of new role to create
- Returns:
new role on success
- guild_infos(guild_id: int) Guild [source]
Fetch all basic infos about this guild.
- Parameters:
guild_id – Discord ID of the guild
- guild_member(guild_id: int, user_id: int) GuildMember | None [source]
Fetch info for a guild member.
- Parameters:
guild_id – Discord ID of the guild
user_id – Discord ID of the user
- Returns:
guild member or
None
if the user is not a member of the guild
- guild_member_roles(guild_id: int, user_id: int) RolesSet | None [source]
Fetch the current guild roles of a guild member.
- Parameters:
guild_id – Discord ID of the guild
user_id – Discord ID of the user
- Returns:
Member roles, or None if the user is not a member of the guild
- guild_name(guild_id: int, use_cache: bool = True) str [source]
Fetch the name of this guild (cached).
- Parameters:
guild_id – Discord ID of the guild
use_cache – When set to False will force an API call to get the server name
- Returns:
Name of the server or an empty string if something went wrong.
- guild_roles(guild_id: int, use_cache: bool = True) Set[Role] [source]
Fetch all roles for this guild.
- Parameters:
guild_id – Discord ID of the guild
use_cache – If is set to False it will always hit the API to retrieve fresh data and update the cache.
Returns:
- match_or_create_role_from_name(guild_id: int, role_name: str, guild_roles: RolesSet = None) Tuple[Role, bool] [source]
Fetch or create Discord role matching the given name.
Will try to match with existing role names. Non-existing roles will be created, and the created flag will be True.
- Parameters:
guild_id – ID of guild
role_name – strings defining name of a role
guild_roles – All known guild roles as RolesSet object. Helps to avoid redundant lookups of guild roles when this method is used multiple times.
- Returns:
Tuple of Role and created flag
- match_or_create_roles_from_names(guild_id: int, role_names: Iterable[str]) List[Tuple[Role, bool]] [source]
Fetch or create Discord roles matching the given names (cached).
Will try to match with existing role names. Non-existing roles will be created, and the created flag will be True.
- Parameters:
guild_id – ID of guild
role_names – list of name strings each defining a role
- Returns:
List of tuple of Role and created flag
- match_or_create_roles_from_names_2(guild_id: int, role_names: Iterable[str]) RolesSet [source]
Fetch or create Discord roles matching the given names.
Wrapper for
match_or_create_role_from_name()
- Returns:
Roles as RolesSet object.
- match_role_from_name(guild_id: int, role_name: str) Role | None [source]
Fetch Discord role matching the given name (cached).
- Parameters:
guild_id – Discord ID of the guild
role_name – Name of role
- Returns:
Matching role or None if no match is found
- modify_guild_member(guild_id: int, user_id: int, role_ids: List[int] = None, nick: str = None) bool | None [source]
Set properties of a guild member.
- Parameters:
guild_id – Discord ID of the guild
user_id – Discord ID of the user
role_ids – New list of role IDs (if provided)
nick – New nickname (if provided)
- Returns:
True when successful
None if user is not a member of this guild
False otherwise
- remove_guild_member(guild_id: int, user_id: int) bool | None [source]
Remove a member from a guild.
- Parameters:
guild_id – Discord ID of the guild
user_id – Discord ID of the user
- Returns:
True when successful
None if member does not exist
False otherwise
- remove_guild_member_role(guild_id: int, user_id: int, role_id: int) bool | None [source]
Remove a role from a guild member
- Parameters:
guild_id – Discord ID of the guild
user_id – Discord ID of the user
role_id – Discord ID of role to be removed
- Returns:
True when successful
None if member does not exist
False otherwise
models
Implementation of Discord objects used by this client.
Note that only those objects and properties which are needed by AA are implemented.
Names and types are mirrored from the API whenever possible. Discord’s snowflake type (used by Discord IDs) is implemented as int.
- class User(id: int, username: str, discriminator: str)[source]
A user on Discord.
- class Role(id: int, name: str, managed: bool = False)[source]
A role on Discord.
- class Guild(id: int, name: str, roles: FrozenSet[Role])[source]
A guild on Discord.
- class GuildMember(roles: FrozenSet[int], nick: str = None, user: User = None)[source]
A member of a guild on Discord.
- classmethod from_dict(data: dict) GuildMember [source]
Create object from dictionary as received from the API.
exceptions
Custom exceptions for the Discord Client package.
- exception DiscordApiBackoff(retry_after: int)[source]
Exception signaling we need to backoff from sending requests to the API for now.
- Parameters:
retry_after – time to retry after in milliseconds
- property retry_after_seconds
Time to retry after in seconds.
settings
Settings for the Discord client.
To overwrite a default, set the variable in your local Django settings, e.g.:
DISCORD_GUILD_NAME_CACHE_MAX_AGE = 7200
- DISCORD_API_BASE_URL = 'https://discord.com/api/'
Base URL for all API calls. Must end with /.
- DISCORD_API_TIMEOUT_CONNECT = 5
Low level connect timeout for requests to the Discord API in seconds.
- DISCORD_API_TIMEOUT_READ = 30
Low level read timeout for requests to the Discord API in seconds.
- DISCORD_DISABLE_ROLE_CREATION = False
Turns off creation of new roles. In case the rate limit for creating roles is exhausted, this setting allows the Discord service to continue to function and wait out the reset. Rate limit is about 250 per 48 hrs.
- DISCORD_GUILD_NAME_CACHE_MAX_AGE = 86400
How long the Discord guild names retrieved from the server are cached locally, in seconds.
- DISCORD_OAUTH_BASE_URL = 'https://discord.com/api/oauth2/authorize'
Base authorization URL for Discord Oauth.
- DISCORD_OAUTH_TOKEN_URL = 'https://discord.com/api/oauth2/token'
Base token URL for Discord OAuth.
- DISCORD_ROLES_CACHE_MAX_AGE = 3600
How long Discord roles retrieved from the server are cached locally, in seconds.
Discord Service
This page contains the technical documentation for the Discord service.
Location: allianceauth.services.modules.discord
api
Public interface for community apps who want to interact with the Discord server of the current Alliance Auth instance.
Example
Here is an example for using the api to fetch the current roles from the configured Discord server.
from allianceauth.services.modules.discord.api import create_bot_client, discord_guild_id
client = create_bot_client() # create a new Discord client
guild_id = discord_guild_id() # get the ID of the configured Discord server
roles = client.guild_roles(guild_id) # fetch the roles from our Discord server
See also
The docs for the client class can be found here: DiscordClient
- class DiscordUser(*args, **kwargs)[source]
The Discord user account of an Auth user.
- Parameters:
uid (BigIntegerField) – Uid. user’s ID on Discord
username (CharField) – Username. user’s username on Discord
discriminator (CharField) – Discriminator. user’s discriminator on Discord
activated (DateTimeField) – Activated. Date & time this service account was activated
Relationship fields:
- Parameters:
user (
OneToOneField
toUser
) – Primary key: User. Auth user owning this Discord account (related name:discord
)
- class Role(id: int, name: str, managed: bool = False)[source]
A role on Discord.
- create_bot_client(is_rate_limited: bool = True) DiscordClient [source]
Create new bot client for accessing the configured Discord server.
- Parameters:
is_rate_limited – Set to False to turn off rate limiting (use with care).
- Returns:
Discord client instance
settings
- DISCORD_APP_ID = 'appid'
App ID for the AA bot on Discord. Needs to be set.
- DISCORD_APP_SECRET = 'secret'
App secret for the AA bot on Discord. Needs to be set.
- DISCORD_BOT_TOKEN = 'bottoken'
Token used by the AA bot on Discord. Needs to be set.
- DISCORD_CALLBACK_URL = 'http://example.com/discord/callback'
Callback URL for OAuth with Discord. Needs to be set.
- DISCORD_GUILD_ID = '0118999'
ID of the Discord Server. Needs to be set.
- DISCORD_SYNC_NAMES = False
Automatically sync Discord user names to the user's main character name when created.
- DISCORD_TASKS_MAX_RETRIES = 3
Max retries of tasks after an error occurred.
- DISCORD_TASKS_RETRY_PAUSE = 60
Pause in seconds until next retry for tasks after the API returned an error.
django-esi
The django-esi package provides an interface for easy access to the ESI.
This is an external package. Please see here for its documentation.
evelinks
This package generates profile URLs for Eve entities on third-party websites like evewho and zKillboard.
Location: allianceauth.eveonline.evelinks
eveimageserver
- alliance_logo_url(alliance_id: int, size: int = 32) str [source]
image URL for the given alliance ID
- character_portrait_url(character_id: int, size: int = 32) str [source]
image URL for the given character ID
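For illustration, a minimal sketch of building image URLs with these helpers (the IDs are arbitrary examples):
from allianceauth.eveonline.evelinks import eveimageserver

logo_url = eveimageserver.alliance_logo_url(434243723, size=128)       # alliance logo URL
portrait_url = eveimageserver.character_portrait_url(2112625428, size=64)  # character portrait URL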
dotlan
evewho
zkillboard
eveonline
The eveonline package provides models for commonly used Eve Online entities like characters, corporations and alliances. All models have the ability to be loaded from ESI.
Location: allianceauth.eveonline
models
- class EveAllianceInfo(*args, **kwargs)[source]
An alliance in Eve Online.
- Parameters:
id (AutoField) – Primary key: ID
alliance_id (PositiveIntegerField) – Alliance id
alliance_name (CharField) – Alliance name
alliance_ticker (CharField) – Alliance ticker
executor_corp_id (PositiveIntegerField) – Executor corp id
Reverse relationships:
- Parameters:
state (Reverse
ManyToManyField
fromState
) – All states of this eve alliance info (related name ofmember_alliances
)evecorporationinfo (Reverse
ForeignKey
fromEveCorporationInfo
) – All eve corporation infos of this eve alliance info (related name ofalliance
)managedalliancegroup (Reverse
ForeignKey
fromManagedAllianceGroup
) – All managed alliance groups of this eve alliance info (related name ofalliance
)
- class EveCharacter(*args, **kwargs)[source]
A character in Eve Online.
- Parameters:
id (AutoField) – Primary key: ID
character_id (PositiveIntegerField) – Character id
character_name (CharField) – Character name
corporation_id (PositiveIntegerField) – Corporation id
corporation_name (CharField) – Corporation name
corporation_ticker (CharField) – Corporation ticker
alliance_id (PositiveIntegerField) – Alliance id
alliance_name (CharField) – Alliance name
alliance_ticker (CharField) – Alliance ticker
faction_id (PositiveIntegerField) – Faction id
faction_name (CharField) – Faction name
Reverse relationships:
- Parameters:
state (Reverse
ManyToManyField
fromState
) – All states of this eve character (related name ofmember_characters
)userprofile (Reverse
OneToOneField
fromUserProfile
) – The user profile of this eve character (related name ofmain_character
)character_ownership (Reverse
OneToOneField
fromCharacterOwnership
) – The character ownership of this eve character (related name ofcharacter
)ownership_records (Reverse
ForeignKey
fromOwnershipRecord
) – All ownership records of this eve character (related name ofcharacter
)application (Reverse
ForeignKey
fromApplication
) – All applications of this eve character (related name ofreviewer_character
)timer (Reverse
ForeignKey
fromTimer
) – All timers of this eve character (related name ofeve_character
)srpfleetmain (Reverse
ForeignKey
fromSrpFleetMain
) – All srp fleet mains of this eve character (related name offleet_commander
)srpuserrequest (Reverse
ForeignKey
fromSrpUserRequest
) – All srp user requests of this eve character (related name ofcharacter
)optimer (Reverse
ForeignKey
fromOpTimer
) – All op timers of this eve character (related name ofeve_character
)fat (Reverse
ForeignKey
fromFat
) – All fats of this eve character (related name ofcharacter
)
- property alliance: EveAllianceInfo | None
Pseudo foreign key from alliance_id to EveAllianceInfo :raises: EveAllianceInfo.DoesNotExist :return: EveAllianceInfo or None
- property corporation: EveCorporationInfo
Pseudo foreign key from corporation_id to EveCorporationInfo :raises: EveCorporationInfo.DoesNotExist :return: EveCorporationInfo
- property faction: EveFactionInfo | None
Pseudo foreign key from faction_id to EveFactionInfo :raises: EveFactionInfo.DoesNotExist :return: EveFactionInfo
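For illustration, a minimal sketch of resolving these pseudo foreign keys, assuming my_character is an existing EveCharacter instance:
from allianceauth.eveonline.models import EveAllianceInfo

corporation = my_character.corporation  # EveCorporationInfo of the character

try:
    # EveAllianceInfo of the character, or None if the character has no alliance
    alliance = my_character.alliance
except EveAllianceInfo.DoesNotExist:
    alliance = None  # alliance_id is set, but no matching EveAllianceInfo exists locally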
- class EveCorporationInfo(*args, **kwargs)[source]
A corporation in Eve Online.
- Parameters:
id (AutoField) – Primary key: ID
corporation_id (PositiveIntegerField) – Corporation id
corporation_name (CharField) – Corporation name
corporation_ticker (CharField) – Corporation ticker
member_count (IntegerField) – Member count
ceo_id (PositiveIntegerField) – Ceo id
Relationship fields:
- Parameters:
alliance (
ForeignKey
toEveAllianceInfo
) – Alliance (related name:evecorporationinfo
)
Reverse relationships:
- Parameters:
state (Reverse
ManyToManyField
fromState
) – All states of this eve corporation info (related name ofmember_corporations
)managedcorpgroup (Reverse
ForeignKey
fromManagedCorpGroup
) – All managed corp groups of this eve corporation info (related name ofcorp
)applicationform (Reverse
OneToOneField
fromApplicationForm
) – The application form of this eve corporation info (related name ofcorp
)timer (Reverse
ForeignKey
fromTimer
) – All timers of this eve corporation info (related name ofeve_corp
)corpstats (Reverse
OneToOneField
fromCorpStats
) – The corp stats of this eve corporation info (related name ofcorp
)
- class EveFactionInfo(*args, **kwargs)[source]
A faction in Eve Online.
- Parameters:
id (AutoField) – Primary key: ID
faction_id (PositiveIntegerField) – Faction id
faction_name (CharField) – Faction name
Reverse relationships:
- Parameters:
state (Reverse
ManyToManyField
fromState
) – All states of this eve faction info (related name ofmember_factions
)
notifications
The notifications package has an API for sending notifications.
Location: allianceauth.notifications
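For illustration, a minimal sketch of sending a notification to a user, assuming the package exposes a notify() helper with roughly this signature (some_user is assumed to be an existing User instance):
from allianceauth.notifications import notify

notify(
    user=some_user,  # assumed existing User instance
    title="Export finished",
    message="Your data export is ready for download.",
    level="info",  # assumed levels: "danger", "warning", "info", "success"
)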
models
- class Notification(*args, **kwargs)[source]
Notification to a user within Auth
- Parameters:
id (AutoField) – Primary key: ID
level (CharField) – Level
title (CharField) – Title
message (TextField) – Message
timestamp (DateTimeField) – Timestamp
viewed (BooleanField) – Viewed
Relationship fields:
- Parameters:
user (
ForeignKey
toUser
) – User (related name:notification
)
managers
tests
Here you find utility functions and classes, which can help speed up writing test cases for AA.
Location: allianceauth.tests.auth_utils
auth_utils
- class AuthUtils[source]
Utilities for making it easier to create tests for Alliance Auth
- classmethod add_main_character(user, name, character_id, corp_id=2345, corp_name='', corp_ticker='', alliance_id=None, alliance_name='', faction_id=None, faction_name='')[source]
- classmethod add_main_character_2(user, name, character_id, corp_id=2345, corp_name='', corp_ticker='', alliance_id=None, alliance_name='', disconnect_signals=False) EveCharacter [source]
new version that works in all cases
- classmethod add_permission_to_user_by_name(perm, user, disconnect_signals=True) User [source]
returns permission specified by qualified name
perm: Permission name as ‘app_label.codename’
user: user object
disconnect_signals: whether to run process without signals
- classmethod add_permissions_to_user(perms, user, disconnect_signals=True) User [source]
add list of permissions to user
perms: list of Permission objects
user: user object
disconnect_signals: whether to run process without signals
- classmethod add_permissions_to_user_by_name(perms: List[str], user: User, disconnect_signals: bool = True) User [source]
Add permissions given by name to a user
- Parameters:
perms – List of permission names as ‘app_label.codename’
user – user object
disconnect_signals – whether to run process without signals
- Returns:
Updated user object
- classmethod create_state(name, priority, member_characters=None, member_corporations=None, member_alliances=None, public=False, disconnect_signals=False)[source]
- classmethod create_user(username, disconnect_signals=False)[source]
create a new user
username: Name of the user
disconnect_signals: whether to run process without signals
- static get_permission_by_name(perm: str) Permission [source]
returns permission specified by qualified name
perm: Permission name as ‘app_label.codename’
Returns: Permission object or throws exception if not found
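For illustration, a minimal sketch of a test case using AuthUtils (the app and permission names are hypothetical):
from django.test import TestCase

from allianceauth.tests.auth_utils import AuthUtils


class TestMyFeature(TestCase):
    def setUp(self):
        # create a test user and give them a main character
        self.user = AuthUtils.create_user("Bruce Wayne")
        self.character = AuthUtils.add_main_character_2(
            self.user,
            name="Bruce Wayne",
            character_id=1001,
            corp_id=2001,
            corp_name="Wayne Technologies",
        )
        # grant a hypothetical permission by its qualified name
        self.user = AuthUtils.add_permission_to_user_by_name(
            "my_app.basic_access", self.user
        )

    def test_user_has_access(self):
        self.assertTrue(self.user.has_perm("my_app.basic_access"))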
utils
Utilities and helper functions.
Location: allianceauth.utils
cache
testing
Celery FAQ
Alliance Auth uses Celery for asynchronous task management. This page aims to give developers some guidance on how to use Celery when developing apps for Alliance Auth.
For the complete documentation of Celery, please refer to the official Celery documentation.
When should I use Celery in my app?
There are two main reasons for using Celery: long duration of a process, and recurrence of a process.
Duration
Alliance Auth is an online web application, and as such, the user expects fast and immediate responses to any of their clicks or actions, the same as with any other good website. Good response times are measured in ms, and a user will perceive everything that takes longer than 1 sec as an interruption of their flow of thought (see also Response Times: The 3 Important Limits).
As a rule of thumb, we therefore recommend using celery tasks for every process that can take longer than 1 sec to complete (also think about how long your process might take with large amounts of data).
Note
Another solution for dealing with long response time in particular when loading pages is to load parts of a page asynchronously, for example, with AJAX.
Recurrence
Another case for using celery tasks is when you need recurring execution of tasks. For example, you may want to update the list of characters in a corporation from ESI every hour.
These are called periodic tasks, and Alliance Auth uses celery beat to implement them.
What is a celery task?
For the most part, a celery task is a Python function configured to be executed asynchronously and controlled by Celery. Celery tasks can be automatically retried, executed periodically, executed in work flows and much more. See the celery docs for a more detailed description.
How should I use Celery in my app?
Please use the following approach to ensure your tasks are working properly with Alliance Auth:
- All tasks should be defined in a module of your app's package called tasks.py.
- Every task is a Python function which has the @shared_task decorator.
- Task functions and the tasks module should be kept slim, just like views, by mostly utilizing business logic defined in your models/managers.
- Tasks should always have logging, so their function and potential errors can be monitored properly.
Here is an example implementation of a task:
import logging

from celery import shared_task

logger = logging.getLogger(__name__)


@shared_task
def example():
    logger.info('example task started')
This task can then be started from any other Python module like so:
from .tasks import example
example.delay()
How should I use celery tasks in the UI?
There is a well-established pattern for integrating asynchronous processes in the UI, for example, when the user asks your app to perform a longer running action:
Notify the user immediately (with a Django message) that the process for completing the action has been started and that they will receive a report once completed.
Start the celery task
Once the celery task is completed, it should send a notification containing the result of the action to the user. It’s important to send that notification also in case of errors.
Can I use long-running tasks?
Long-running tasks are possible, but in general Celery works best with short running tasks. Therefore, we strongly recommend trying to break down long-running tasks into smaller tasks if possible.
If contextually possible, try to break down your long-running task in shorter tasks that can run in parallel.
However, many long-running tasks consist of several smaller processes that need to run one after the other. For example, you may have a loop where you perform the same action on hundreds of objects. In those cases, you can define each of the smaller processes as its own task and then link them together, so that they are run one after the other. That is called chaining in Celery and is the preferred approach for implementing long-running processes.
Example implementation for a celery chain:
import logging

from celery import shared_task, chain

logger = logging.getLogger(__name__)


@shared_task
def example():
    logger.info('example task')


@shared_task
def long_runner():
    logger.info('started long runner')
    my_tasks = list()
    for _ in range(10):
        task_signature = example.si()
        my_tasks.append(task_signature)

    chain(my_tasks).delay()
In this example, we first add 10 example tasks that need to run one after the other to a list. This can be done by creating a so-called signature for a task. Those signatures are a kind of wrapper for tasks and can be used in various ways to compose workflows for tasks.
The list of task signatures is then converted to a chain and started asynchronously.
Hint
In our example we use si(), which is a shortcut for "immutable signatures" and prevents us from having to deal with result sharing between tasks.
For more information on signatures and workflows, see the official documentation on Canvas: https://docs.celeryproject.org/en/latest/userguide/canvas.html
In this context, please note that Alliance Auth currently only supports chaining because all other variants require a so-called results backend, which Alliance Auth does not have.
How can I define periodic tasks for my app?
Periodic tasks are normal celery tasks that are added to the scheduler for periodic execution. The convention for defining periodic tasks for an app is to define them in the local settings. Users will therefore need to add those settings manually to their local settings during the installation process.
Example setting:
CELERYBEAT_SCHEDULE['structures_update_all_structures'] = {
    'task': 'structures.tasks.update_all_structures',
    'schedule': crontab(minute='*/30'),
}
- structures_update_all_structures is the name of the scheduling entry. You can choose any name, but the convention is the name of your app plus the name of the task.
- 'task': Name of your task (full path)
- 'schedule': Schedule definition (see Celery documentation on Periodic Tasks for details)
How can I use priorities for tasks?
In Alliance Auth we have defined task priorities from 0 to 9 as follows:
| Number | Priority | Description |
|---|---|---|
| 0 | Reserved | Reserved for Auth and may not be used by apps |
| 1, 2 | Highest | Needs to run right now |
| 3, 4 | High | Needs to run as soon as practical |
| 5 | Normal | Default priority for most tasks |
| 6, 7 | Low | Needs to run soonish, but is less urgent than most tasks |
| 8, 9 | Lowest | Not urgent, can be run whenever there is time |
Warning
Please make sure to use task priorities with care and especially do not use higher priorities without a good reason. All apps including Alliance Auth share the same task queues, so using higher task priorities excessively can potentially prevent more important tasks (of other apps) from completing on time.
You also want to make sure to use lower priorities if you have a large number of tasks or long-running tasks which are not super urgent (e.g., the regular update of all Eve characters from ESI runs with priority 7).
Hint
If no priority is specified, all tasks will be started with the default priority, which is 5.
To run a task with a different priority, you need to specify it when starting it.
Example for starting a task with priority 3:
example.apply_async(priority=3)
Hint
For defining a priority for tasks, you cannot use the convenient shortcut delay(), but instead need to start a task with apply_async(), which also requires you to pass parameters to your task function differently. Please check out the official docs for details: https://docs.celeryproject.org/en/stable/reference/celery.app.task.html#celery.app.task.Task.apply_async
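For illustration, a minimal sketch of starting a hypothetical task with arguments and a non-default priority via apply_async():
# equivalent of my_task.delay("some_arg", other=42), but started with priority 3
my_task.apply_async(args=["some_arg"], kwargs={"other": 42}, priority=3)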
What special features should I be aware of?
Every Alliance Auth installation will come with a couple of special celery related features “out-of-the-box” that you can make use of in your apps.
celery-once
Celery-once is a celery extension “that allows you to prevent multiple execution and queuing of celery tasks”. What that means is that you can ensure that only one instance of a celery task runs at any given time. This can be useful, for example, if you do not want multiple instances of your task to talk to the same external service at the same time.
We use a custom backend for celery_once in Alliance Auth, defined here. You can import it for use like so:
from allianceauth.services.tasks import QueueOnce
An example of Alliance Auth's use within the @shared_task decorator can be seen here in the discord module.
You can use it like so:
@shared_task(bind=True, name='your_modules.update_task', base=QueueOnce)
Please see the official documentation of celery-once for details.
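For illustration, a minimal sketch of a task that uses QueueOnce as its base class (the module and task names are hypothetical):
from celery import shared_task

from allianceauth.services.tasks import QueueOnce


@shared_task(bind=True, name="your_module.update_task", base=QueueOnce)
def update_task(self):
    # only one instance of this task will be queued or running at any given time
    ...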
task priorities
Alliance Auth is using task priorities to enable priority-based scheduling of task execution. Please see How can I use priorities for tasks? for details.
Core models
The following diagram shows the core models of AA and Django and their relationships:
Contributing
Alliance Auth is developed by the community, and we are always looking to welcome new contributors. If you are interested in contributing, here are some ideas where to start:
Publish a new community app or service
One great way to contribute is to develop and publish your own community app or service for Alliance Auth. By design, Auth only comes with some basic features and therefore heavily relies on the community to provide apps to extend Auth with additional features.
To publish your app, make sure it can be installed from a public repo or PyPI. Once it’s ready, you can inform everybody about your new app by posting it to our list of community apps.
If you are looking for ideas on what to make, you can check out Auth's issue list. Many of those issues are feature requests that will probably never make it into Auth core, but would be awesome to have as a community app or service. You could also ask the other devs on our Discord server for ideas or to help you get a feeling for which new features might be in higher demand than others.
Help to maintain an existing community app or service
There are quite a few great community apps that need help from additional maintainers. Often the initial author no longer has time to support their app or would just appreciate some support for working on new features or fixing bugs.
Sometimes original app owners may even be looking to completely hand over their apps to a new owner.
If you are interested in helping to maintain an existing community app or service, you can start working on open issues and create merge requests. Or just ask other devs on our Discord.
Help with improving Auth documentation
Auth has extensive documentation, but there are always things to improve and add. If you notice any errors or see something to improve or add, please feel free to issue a change for the documentation (via MRs, same as code changes).
Help with support questions on Discord
One of the main functions of the Auth Discord server is to help the community with any support question they may have when installing or running an Auth installation.
Note that you do not need to be part of any official group to become a supporter. Jump in and help with answering new questions from the community if you know how to help.
Help to improve Alliance Auth core
Alliance Auth has an issue list, which is usually the basis for all maintenance activities for Auth core. That means that bug fixes and new features are primarily delivered based on existing open issues.
We usually have a long list of open issues and very much welcome every help to fix existing bugs or work on new features for Auth.
Before starting to code on any topic, we'd suggest talking to the other devs on Discord to make sure your issue is not already being worked on. Also, some feature requests may be better implemented in a community app; that is another aspect best clarified by talking with the other devs.
If you like to contribute to Auth core, but are unsure where to start, we have a dedicated label for issues that are suitable for beginners: beginner-friendly.
Additional Resources
For more information on how to create community apps or how to set up a developer environment for Auth, please see our official developer documentation.
For getting in touch with other contributors, please feel free to join the Alliance Auth Discord server.