In addition to making its source code available publicly, Sentry offers and maintains a minimal setup that works out-of-the-box for simple use cases. This repository also serves as a blueprint for how various Sentry services connect for a complete setup, which is useful for folks willing to maintain larger installations. For the sake of simplicity, we have chosen to use Docker and Docker Compose for this, along with a bash-based install and upgrade script.
Our recommendation is to download the latest release of the onpremise repository and then run ./install.sh inside that directory. This script takes care of everything you need to get started, including a base-line configuration, and then tells you to run docker-compose up -d to start Sentry. Sentry binds to port 9000 by default, so you should be able to reach the login page at http://127.0.0.1:9000.
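As a sketch, the whole bootstrap can be expressed as a single shell function. This is illustrative only: the release tag and the GitHub archive URL pattern are assumptions, so check the releases page for the actual latest version before using anything like it.

```shell
# Sketch of the bootstrap flow as a function (not invoked here).
# The archive URL pattern and release tag are illustrative assumptions.
bootstrap_sentry() {
  local version="${1:?usage: bootstrap_sentry <release-tag>}"
  # Fetch and unpack a pinned release of the onpremise repository
  curl -L "https://github.com/getsentry/onpremise/archive/refs/tags/${version}.tar.gz" | tar xzf -
  cd "onpremise-${version}" || return 1
  # Generates the base-line configuration and builds the services
  ./install.sh
  # Starts everything in the background; Sentry binds to port 9000 by default
  docker-compose up -d
}
```

Pinning a specific release tag (rather than tracking the default branch) keeps your install reproducible and makes the one-version-at-a-time upgrade path described below easier to follow.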
You very likely will want to adjust the default configuration for Sentry. These facilities are available for that purpose:
sentry/config.yml—Contains most, if not all, configuration options to adjust. This file is generated from sentry/config.example.yml at the time of installation. The file itself documents the most common configuration options as code comments. Some popular settings in this file include:
system.url-prefix (we prompt you to set this at the welcome screen, right after the installation)
mail.* (though we do ship with a basic SMTP server)
integrations for GitHub, Slack, etc.
sentry/sentry.conf.py—Contains more advanced configuration. This file is generated from sentry/sentry.conf.example.py at the time of installation.
Environment variables—The available keys are defined in .env. Use some system-dependent means of setting environment variables if you need to override any of them.
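To make the two override styles concrete, here is a hypothetical, self-contained sketch. SENTRY_EVENT_RETENTION_DAYS is used as the example key on the assumption that it appears in the shipped .env file; the scratch directory stands in for your onpremise checkout so the snippet touches nothing real.

```shell
# Self-contained demo: a scratch directory stands in for the onpremise checkout,
# and SENTRY_EVENT_RETENTION_DAYS stands in for any key defined in .env.
demo=$(mktemp -d)
printf 'SENTRY_EVENT_RETENTION_DAYS=90\n' > "$demo/.env"   # stand-in for the shipped file

# Style 1: edit .env so the override persists across docker-compose runs
sed -i.bak 's/^SENTRY_EVENT_RETENTION_DAYS=.*/SENTRY_EVENT_RETENTION_DAYS=30/' "$demo/.env"

# Style 2 (system-dependent): set the variable in the environment for one
# invocation instead, e.g.:
#   SENTRY_EVENT_RETENTION_DAYS=30 docker-compose up -d
grep SENTRY_EVENT_RETENTION_DAYS "$demo/.env"
```

Environment variables set in your shell take precedence over .env values in Docker Compose, which is why the one-off style works without editing any files.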
Geolocation uses a custom configuration file to conform to the underlying technology.
You can find more about configuring Sentry at the configuration section of our developer documentation.
Here is further information on specific configuration topics related to self-hosting:
We strongly recommend using a dedicated load balancer in front of your Sentry setup, bound to a dedicated domain or subdomain. A dedicated load balancer that does SSL/TLS termination and also forwards the client IP address would give you the best Sentry experience, since the real client IP is close to impossible to obtain from inside the Docker Compose internal network otherwise.
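A minimal sketch of such a load balancer, assuming nginx is used for TLS termination (the server name, certificate paths, and upstream address are all placeholders; the document itself does not prescribe a particular proxy):

```nginx
# Minimal sketch: nginx terminating TLS and forwarding the client IP
# (names and paths are placeholders, not part of the Sentry setup).
server {
    listen 443 ssl;
    server_name sentry.example.com;

    ssl_certificate     /etc/ssl/certs/sentry.example.com.pem;
    ssl_certificate_key /etc/ssl/private/sentry.example.com.key;

    location / {
        # Sentry binds to port 9000 by default
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

The X-Forwarded-For header is what carries the real client IP through to Sentry, which it could not otherwise see from inside the Docker Compose network.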
Keep in mind that this setup uses single nodes for all services, including Kafka. For larger loads, you'd need a beefy machine with lots of RAM and disk storage. To scale up even further, you are very likely to use clusters with a more complex tool, such as Kubernetes. Due to self-hosted installations' very custom nature, we do not offer any recommendations or guidance around scaling up. We do what works for us for our thousands of customers over at sentry.io and would love to have you over when you feel your local install's maintenance becomes a burden instead of a joy.
Sentry cuts regular releases for self-hosting to keep it as close to sentry.io as possible. We encourage everyone to regularly update their Sentry installations to get the best and the most recent Sentry experience. You can read more about our versioning strategy and philosophy over at the releases page.
We recommend (and sometimes require) that you upgrade Sentry one version at a time. That means if you are running 20.6.0, first go through 20.7.0 instead of jumping directly to 20.8.0. Skipping versions will work most of the time, but occasionally we require you to stop at specific versions to ensure essential data migrations run along the way.
To upgrade, all you need to do is download or check out the version of the onpremise repository you want, replace your existing folder's contents with it, and then run ./install.sh. We may ship updated configuration, especially for new features, so always check the example config files under the sentry directory to see whether you need to update your existing configuration. We do our best to automate critical configuration updates, but you should always check your configs during upgrades.
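The one-version-at-a-time procedure above can be sketched as a small shell function. It is not invoked here: the release tags are illustrative, and it assumes your existing install is a git checkout of the onpremise repository rather than an unpacked archive.

```shell
# Sketch of one upgrade step (not invoked here; tags are illustrative).
# Assumes the existing install is a git checkout of the onpremise repository.
upgrade_sentry_step() {
  local target="${1:?usage: upgrade_sentry_step <release-tag>}"
  git fetch --tags
  git checkout "$target"
  # Compare sentry/config.example.yml and sentry/sentry.conf.example.py
  # against your existing config before proceeding.
  ./install.sh   # shuts services down and runs data migrations
}
# Going from 20.6.0 to 20.8.0, one version at a time:
#   upgrade_sentry_step 20.7.0
#   upgrade_sentry_step 20.8.0
```

Running the steps as discrete, repeatable invocations makes it harder to accidentally skip a required intermediate version.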
Before starting the upgrade, we shut down all the services and then run some data migrations, so expect some downtime. There is an experimental --minimize-downtime option to reduce the downtime during upgrades. Use this at your own risk and see the pull request it was implemented in for more information.