This project was created and deployed one time on a personal server as a learning exercise. This does NOT imply the project is deployment ready for other use cases.
This guide assumes you have experience with, and are comfortable deploying, express/nodejs servers from a command line shell.
Create a dedicated non-privileged user. Log in as the non-privileged user. Open a command line terminal in the user's home directory or another directory where the repository will be installed. Clone the collab-auth repository and install the npm package dependencies. The "ci" clean install option will remove previous packages before installing. The environment variable NODE_ENV=production is required. "export NODE_ENV=production" may be added to the .bashrc script for the non-privileged user.
git clone git@github.com:cotarr/collab-auth.git
cd collab-auth
npm ci
If you do not wish to install the entire repository with the documentation and git commit history, only the following files and folders are required for deployment. Sub-folder contents should be included. Folders for node_modules and logs will be created automatically.
bin/
data/
public/
server/
SQL-tools/
views/
package.json
package-lock.json
Program default configuration is stored in "server/config/index.js". Configuration parameters can be overridden using unix environment variables. The configuration file loads the npm dotenv package in order to support an optional ".env" file in the base folder of the repository. Boolean values should be set as the lower case environment variable string values "true" or "false". Configuration items discussed in the following sections may be set either in the configuration file or in environment variables.
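As an illustration of this pattern, here is a minimal sketch of how environment variables can override hard-coded defaults. This is not the actual contents of server/config/index.js; the option names shown mirror variables documented later in this guide.

```javascript
// Hypothetical sketch of the override pattern. The real configuration
// file loads the npm "dotenv" package first, so an optional .env file
// can populate process.env before these defaults are evaluated.
const config = {
  // Numeric option: environment variable string parsed to a number
  port: parseInt(process.env.SERVER_PORT || '3500', 10),
  // Boolean option: only the lower case string "true" enables it
  tls: process.env.SERVER_TLS === 'true'
};

module.exports = config;
```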
The default server port is TCP port 3500. On Debian 11 with node 18, the server will accept connections using IPv4 and IPv6 on all interfaces. The configured port number is printed to console.log at program start. Use the following environment variable assignment to configure a custom port number.
SERVER_PORT=3500
Firewall filter rules can be quite complex and are therefore beyond the scope of these instructions. However, at minimum, one port must be opened to allow incoming TCP connections to the authorization server port.
If you intend to include rate limiting rules at the firewall, the authorization server will need to accommodate access_token validation requests from resource servers, such as a REST API, that may occur as users navigate web pages.
Assuming you will be using a PostgreSQL database locally, the PostgreSQL ports must also be protected from external internet access (see below).
For security, it is recommended to run collab-auth as a non-privileged user. Non-privileged unix accounts do not have access to the reserved port 443 used for HTTPS connections with TLS. There are several ways for a non-privileged user to serve port 443. One option is to use a reverse proxy such as nginx. Running the application inside a docker container allows port redirection. A network load balancer may have port redirection capabilities. When collab-auth was deployed on a VPS cloud droplet with LetsEncrypt TLS certificates, a firewall using nftables rules was written to exchange port 443 with the server's non-privileged port as the connection passed through the firewall. When using redirected ports, there must be a temporary way to restore the original ports without redirection for the purpose of renewing the LetsEncrypt TLS certificates. This type of firewall rule is beyond the scope of this tutorial.
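As a rough illustration only (the table and chain names here are assumptions, and the rules actually used in the deployment described above were more involved), an nftables port redirection for this purpose might look like:

```shell
# Hypothetical nftables commands (run as root): redirect incoming TCP 443
# to the non-privileged listen port 3500 in the prerouting NAT hook.
nft add table ip nat
nft 'add chain ip nat prerouting { type nat hook prerouting priority -100; }'
nft add rule ip nat prerouting tcp dport 443 redirect to :3500
```

Remember that while rules like these are active, a LetsEncrypt renewal that expects to reach port 443 directly may require the redirection to be temporarily removed.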
To function properly on the internet, a fully qualified domain name must be used. It is required that the full domain name of the authorization server be different from any of the web servers or resource servers that use the authorization server. This avoids overlap of any cookies related to the authorization server domain name.
If you are familiar with running express/nodejs servers, then you are familiar with this process of registering a unique domain name and creating DNS A and AAAA records for the domain name using the IP address of the authorization server.
The authorization server will need to know its full URL starting with the https:// protocol. It will also need to know the full hostname of the server without the https:// protocol. If there is a non-standard port number, it should be appended to both as shown below. Do not append 443 when using the standard port. This refers to the internet visible port, which may be different from the server listen port if it was changed by the firewall or reverse proxy.
Example using standard 443 port with TLS.
SITE_AUTH_URL=https://auth.example.com
SITE_OWN_HOST=auth.example.com
Or, with non-standard ports:
SITE_AUTH_URL=https://auth.example.com:3500
SITE_OWN_HOST=auth.example.com:3500
This is optional. By default the vhost configuration is set to "*", which disables hostname checking. If a vhost domain name is configured, any http request not matching the intended hostname will return a 404 Not Found response. This is not intended to allow multiple vhosts; rather, it is intended to reduce attack surface by blocking requests addressed to the server's numeric IP address and permitting only requests to the server's domain name. The port number is not included.
SITE_VHOST=auth.example.com
When deployed for use on the internet, valid domain certificates should be used for encryption of https requests to the server and verification of the hostname. Use of TLS is critical to protecting passwords and access_tokens in transit on the internet. Generation of TLS certificates for a domain name is beyond the scope of this document.
When this was deployed as a learning exercise, the Lets Encrypt certbot was used to generate TLS certificates. In this case, there was an issue because the non-privileged user account did not have access to the /etc/letsencrypt system folder. This was solved by using a bash script to copy the Lets Encrypt TLS certificates to the non-privileged user folder and set file permissions. Obviously, there are other ways to address this. The full chain file was used to include the CA certificates.
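A minimal sketch of such a copy script is shown below. The folder paths, output filenames, and the copy_certs helper are assumptions for illustration, not the script actually used in the deployment.

```shell
#!/bin/bash
# Hypothetical helper: copy the Lets Encrypt private key and full chain
# into a folder readable by the non-privileged user, then restrict access.
copy_certs () {
  src="$1"    # e.g. /etc/letsencrypt/live/auth.example.com (assumed path)
  dest="$2"   # e.g. /home/someuser/tls (assumed path)
  mkdir -p "$dest"
  cp "$src/privkey.pem" "$dest/auth_privkey.pem"
  cp "$src/fullchain.pem" "$dest/auth_fullchain.pem"
  chmod 600 "$dest"/auth_privkey.pem "$dest"/auth_fullchain.pem
}
```

This would typically be run as root after each certificate renewal, followed by a chown of the copied files to the non-privileged user.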
TLS is enabled by setting "SERVER_TLS" to the string value "true" as shown below. By default, the minimum TLS version is set to TLS 1.2 in the bin/www file, which seemed reasonable considering this is an authorization server.
The express/node.js web server will read TLS certificates at startup using configuration settings. The following can be set using Unix environment variables or included in a .env file. A relative path can be used. Edit the filenames as needed.
SERVER_TLS_KEY=/home/someuser/tls/auth_privkey.pem
SERVER_TLS_CERT=/home/someuser/tls/auth_fullchain.pem
SERVER_TLS=true
Openssl is used as a command line tool to generate a pair of files containing the RSA keys. The RSA private key is used to add a digital signature to a new token using the "RS256" algorithm. The RSA public key is used to verify the signature when the token is decoded. Although the "certificate.pem" file is a standard self signed SSL/TLS web certificate, only the RSA public key within the certificate appears to be relevant. The certificate attributes for "Validity Not After:..." and "Subject: CN..." appear to be ignored. An expired certificate validity date does not appear to generate any errors when decoding the JWT token. Therefore, these files can be considered non-expiring credentials, and the certificate validity dates, which default to 30 days, are ignored.
In the case of this program, each JWT token carries its own unique expiration within the digitally signed token payload, so ignoring the certificate validity dates does not appear to be an issue.
The following commands can be used to generate the RSA keys.
cd data/token-certs/
openssl req -newkey rsa:2048 -nodes -keyout privatekey.pem -x509 -out certificate.pem -subj "/CN=collab-auth"
cd ../..
Logging to a disk file is enabled when the server is started with NODE_ENV=production; otherwise logging is sent to stdout. Log files are located in the "logs/" folder in the repository base folder. The log folder will be created automatically if it does not exist.
End user login events and administrative modifications of the client/user database are logged to the file "logs/auth.log". Log file rotation is not available for auth.log. HTTP access log events will be written to the file "logs/access.log".
HTTP access log filename rotation is available, but disabled by default. Optional log file rotation may be enabled by assigning the "SERVER_LOG_ROTATE_INTERVAL" environment variable or the "SERVER_LOG_ROTATE_SIZE" environment variable to the proper string value. The rotation interval string is specified using the time units s, m, h, or d (examples: '5m', '2h', '1d'). Minutes and hours act as a time divisor, but the day interval is a simple timer that is re-initialized each time the application is restarted. The rotation file size string is specified using the size units B, K, M, or G (examples: '100K', '1M'). Either interval, size, both, or neither may be used. This allows log file rotation similar to Linux system log files. Files will be renamed access.log.1, access.log.2, up to access.log.5, and older files are discarded when log rotation is enabled. For more information see the rotating-file-stream documentation. In the case where neither the interval nor the size property exists, or both are empty strings, no log rotation will occur and file size management must be done manually.
In some cases the access log may contain excessive entries due to continuous checking of access tokens. It is possible to filter the HTTP access log file to limit logging to only HTTP errors with status codes >= 400. The error filter may be enabled by setting the environment variable SERVER_LOG_FILTER to the string value "error".
SERVER_LOG_ROTATE_INTERVAL=7d
SERVER_LOG_ROTATE_SIZE=1M
SERVER_LOG_FILTER=error
The "auth.log" file is not rotated, filtered, or managed for size.
In the case where log files are managed outside the scope of the program, it will be necessary to restart the web server after renaming or deleting the access log file. The environment variable SERVER_PID_FILENAME can be used to specify the full path and filename of a PID file to be written by the server when it starts.
SERVER_PID_FILENAME=
By default, when operating in the development environment, the database is emulated using data storage in RAM variables. This is not suitable for deployment in a production environment because passwords and client secrets are stored in plain text, and all account edits will be lost each time the program is stopped.
For deployment to production, configuration options enable use of a PostgreSQL database for storage of access tokens, refresh tokens, user accounts and client accounts. When the database option is enabled, user passwords are hashed using bcrypt. Client secrets are encrypted using crypto-js/aes for storage in the database. Client secrets may be visible in plain text using the editor in the administration page if this is enabled in the configuration.
Requirements:
The pg client will use the following environment variables to connect to the database at startup. Use of a .env file is supported by dotenv.
PGUSER=xxxxxxx
PGPASSWORD=xxxxxxxx
PGHOSTADDR=127.0.0.1
PGPORT=5432
PGDATABASE=collabauth
PGSSLMODE=disable
Database storage is enabled with the following environment variable. If it is not defined or does not contain the string value "true", the program will revert back to the RAM memory storage option.
DATABASE_ENABLE_POSTGRES=true
Storage of program data requires 4 tables. These tables can be created using the database command line client "psql". The file "SQL-tools/create-oauth-tables.sql" includes the SQL query commands. These can be copy/pasted directly into the psql client to better observe the results and check for errors.
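Alternatively, the file can be executed non-interactively with psql's -f option. The database name and host below assume the PG* environment variable values shown earlier; substitute your own database user for xxxxxxx.

```shell
# Run the table-creation SQL file against the collabauth database
psql -U xxxxxxx -d collabauth -h 127.0.0.1 -f SQL-tools/create-oauth-tables.sql
```

The interactive copy/paste approach remains preferable for a first run, since errors are easier to spot one statement at a time.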
This is a list of required tables
| Table | Description |
|---|---|
| accesstokens | Issued access token meta-data |
| refreshtokens | Issued refresh token meta-data |
| authclients | Client id, name, client secret |
| authusers | User id, name, hashed password |
A javascript utility located at "SQL-tools/create-postgres-admin-user.js" can be used to create an initial admin user account. It is then possible to use the admin panel at "/panel/menu" to log in as the "admin" user and create any additional user or client accounts that may be needed. When run from the command line terminal, the utility will prompt for input: User number (default=1000), Username (default="admin"), Name (default="Admin Account"), and Password. Further instructions can be found in the Admin Editor section. The utility should be run from the base folder of the repository using:
npm run create-postgres-admin-user
The default development configuration uses the npm package "memorystore" to store session information from express-session. Memorystore is a memory safe implementation of RAM memory storage of HTTP sessions. Each time the server is restarted, previous session data is lost, so users must log in to the authorization server each time the collab-auth server is restarted. This will not impact the collab-frontend web server, or other web servers, as they have their own cookies.
Optionally, the npm package "connect-pg-simple" can be used to store session data in a PostgreSQL database. This would permit cookies to remain persistent across restarts of the collab-auth authorization server.
Some online discussions claim that using PostgreSQL for session storage may be slow compared to RAM memory cached alternatives like Redis or memorystore. However, with collab-auth, the cookie and session storage are only relevant during interactive browser calls, such as password entry or use of the admin panel. These occur infrequently, so PostgreSQL seems reasonable for session storage where persistence is needed. Since memorystore includes pruning of obsolete records, either the PostgreSQL or memorystore option should be acceptable for session storage in limited deployments, like a home network.
Session storage will use the same PostgreSQL database that was used previously to store user and client accounts. It is assumed the database has been created and the environment variables have been configured to allow the pg client to connect at program startup (see instructions above).
One table needs to be created for storage of session data. There is a SQL script located at "SQL-tools/create-session-table.sql" that contains SQL query commands to create the required database table. The contents can be copy/pasted into the psql terminal database client.
After the table has been created, PostgreSQL session storage is enabled using the following environment variable. If it is not defined or does not contain the string value "true", the program will revert back to memorystore.
SESSION_ENABLE_POSTGRES=true
The authorization server supports use of a security.txt file as described at https://securitytxt.org/. The specification defines a URL route at ".well-known/security.txt" that is intended to provide a method to notify the owner of the server in the event that a security issue is detected. This is disabled by default. To enable the security.txt notification, set the following environment variables. Remember to replace the contact with your own contact. The expiration time must be in ISO 8601 format, typically 1 year ahead. (Suggested bash command: "date --date='1 year' --iso-8601=seconds")
SITE_SECURITY_CONTACT=security@example.com
SITE_SECURITY_EXPIRES="2022-12-26T05:39:02-06:00"
Server response to ".well-known/security.txt"
# Website security contact
Contact: security@example.com
Expires: 2022-12-26T05:39:02-06:00
There is a common security suggestion that states: If you are not using something on a server, remove it. It decreases the attack surface should a vulnerability be found in the future.
The oauth2orize library supports the OAuth 2.0 grant types: code grant, client grant, password grant, implicit grant, and refresh token grant. In the event that some grant types are not required for your use case, each grant type may be disabled individually in the configuration. This will prevent the oauth2orize server from loading the associated callback, and requests for an unsupported grant type will be treated as an error. In addition, the input validation checker will block any request whose parameters contain an improper grant_type parameter. The boolean value should be an environment variable string "true". By default all grant types are enabled (disabled = false). This is optional; disable as needed. It is necessary to restart the nodejs server after making changes.
OAUTH2_DISABLE_TOKEN_GRANT=true
OAUTH2_DISABLE_CODE_GRANT=true
OAUTH2_DISABLE_CLIENT_GRANT=true
OAUTH2_DISABLE_PASSWORD_GRANT=true
OAUTH2_DISABLE_REFRESH_TOKEN_GRANT=true
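A hypothetical sketch of how such a flag could gate a grant type is shown below. This illustrates the pattern only; it is not the server's actual code, and the grantEnabled helper is an assumption for illustration.

```javascript
// Collect disabled grant types from the environment variables above;
// only the exact string "true" disables a grant.
const disabledGrants = new Set();
if (process.env.OAUTH2_DISABLE_CODE_GRANT === 'true') {
  disabledGrants.add('authorization_code');
}
if (process.env.OAUTH2_DISABLE_PASSWORD_GRANT === 'true') {
  disabledGrants.add('password');
}
// ...the remaining grant types follow the same pattern

// A token request using a disabled grant_type would be rejected as an error
function grantEnabled (grantType) {
  return !disabledGrants.has(grantType);
}
```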
Likewise, the authorization server includes an administration page that can be used to add, edit, or delete user account records and client account records. Access to the admin editor requires a user account to have scope "auth.admin" listed as a user role.
If the user database does not require frequent changes, the administration page found at "/panel/menu" may be disabled in the configuration. Requests to "/panel/menu" will return 404 Not Found. It is necessary to restart the nodejs server after making changes.
DATABASE_DISABLE_WEB_ADMIN_PANEL=true
When using the PostgreSQL database option, client secrets are encrypted before storage in the database using crypto-js/aes. During deployment on a personal server and home network, numerous Raspberry Pi IOT devices were configured to use access_tokens, so it was useful to have the capability to view existing client secrets in plain text. By default, the client edit form presents an empty input element for the client secret. Entry of a new client secret will replace the previous client secret. As a configuration option, visibility of the client secret may be enabled in the configuration and the server restarted.
OAUTH2_EDITOR_SHOW_CLIENT_SECRET=true
By default, browser cookies for the collab-auth authorization server are set to expire at a fixed time after user login. The permitted age of the session is defined by providing a configuration value for SESSION_EXPIRE_SEC.
Setting SESSION_SET_ROLLING_COOKIE=true will instruct the server to reset the cookie expiration with each browser request. In this case, the session will expire only after a period with no requests lasting longer than the time specified by SESSION_EXPIRE_SEC.
SESSION_SET_ROLLING_COOKIE=false
SESSION_EXPIRE_SEC=3600
SESSION_PRUNE_INTERVAL_SEC=3600
SESSION_SECRET="A Secret That Should Be Changed"
OAUTH2_CLIENT_SECRET_AES_KEY="A Secret That Should Be Changed"
OAUTH2_AUTH_CODE_EXPIRES_IN_SECONDS=10
OAUTH2_TOKEN_EXPIRES_IN_SECONDS=3600
OAUTH2_REFRESH_TOKEN_EXPIRES_IN_SECONDS=2592000
OAUTH2_CLIENT_TOKEN_EXPIRES_IN_SECONDS=86400
The following options are hard coded. Changes would require modification of the repository's javascript code.
Ideally the server should be run in the background using the non-privileged account. This can easily be done using a user crontab and a shell script. In its current form, collab-auth is not capable of running as a unix service using systemd. The following commands are examples of server startup in the foreground of the non-privileged user's command line terminal.
export NODE_ENV=production
npm start
Or equivalently:
export NODE_ENV=production
node bin/www
While in production mode (NODE_ENV=production) the log output can be redirected to the terminal by running the server in the foreground with the following command. (NODE_DEBUG_LOG variable not supported in .env file.)
export NODE_ENV=production
NODE_DEBUG_LOG=1 npm start
When the server has successfully started, the following startup information will be logged to logs/node.log.
Server timestamp: 2021-11-06T15:01:00.337Z
Using PostgreSQL for OAuth2 storage.
Auth activity Log: /home/someuser/node/collab-auth/logs/auth.log
Vhost: auth.example.com
logFolder /home/someuser/node/collab-auth/logs
HTTP Access Log: /home/someuser/node/collab-auth/logs/access.log
Using PostgreSQL connect-pg-simple for session storage
Serving static files from /home/someuser/node/collab-auth/public
NODE_ENV production
starting https (TLS encrypted)
listening: :::3500 IPv6
The authorization server includes some additional middleware.
For example, consider a test that challenges a user password form submission with missing password data. What would happen if the oauth2orize custom callback function for user database lookup and password hash comparison actually had a coding error that failed to reject submissions with missing password data? This coding error could be overlooked because the test expected an error, but the expected error may have been generated by input validation or the rate limiter and viewed as a successful test without actually challenging the password lookup.
One way to approach testing is to disable the different middlewares and test them independently. To manage this, the chained node/express middleware functions are written with one middleware per line, allowing each middleware to be disabled for testing by prefixing the line with javascript // comment characters. Below is an example of the oauth 2.0 decision dialog route showing the input validation and csrf protection middlewares. The core oauth2 function is server.decision(), shown here following the other middleware.
exports.decision = [
  checkSessionAuth(),
  inputValidation.dialogAuthDecision,
  csrfProtection,
  server.decision(),
  server.authorizationErrorHandler(),
  server.errorHandler()
];
These critical routes with concurrent middleware should only appear in the following 3 files.
As a side note, an argument can be made that the oauth2orize library is robustly coded and has a good track record; therefore the addition of csrf protection and input validation may not be necessary for the oauth routes, and may instead needlessly increase the attack surface and make things worse. However, they are likely needed for the admin panel account editor, as it is a simple form editor.