v2.0.0 #3132
Replies: 9 comments 13 replies
-
After the seemingly successful migration to v2.0.0: are these tables expected to exist after the migration? Or are they as temporary as their name suggests, and can they be dropped along with the v1 tables? My full list of tables is currently as follows:
-
I'm having difficulties enabling city geolocation. When I try to add it, I get a 502 error. Here are my docker-compose logs; can you point me in the right direction? https://gist.github.com/Lastofthefirst/08ba4ba464eccb00826936bfa774a874
-
I attempted upgrading from v1.5.1 to v2.0.0 and ran into some significant issues due to what appear to have been latent problems with the Clickhouse DB that I was unaware of. Since it took me several hours to get a backup working, I thought I'd post what I did to get my v1.5.1 back up and running and avoid losing 6 months of data. For starters, I followed the v2.0.0 upgrade instructions, but upon bringing Plausible back up I noticed tons of errors streaming by in docker.
The first was (truncated):
...followed by
I decided to restore my backup and ran into Clickhouse issues when restarting docker, which makes me think there were problems prior to the upgrade. Btw, I'd previously performed the steps to get city-level data via Maxmind. I searched for multiple Clickhouse errors, including "Suspiciously many (12 parts, 0.00 B in total) broken parts...". I made two changes that allowed me to get my backup working, which is what I think is worth sharing. This article (among others) was helpful.
<merge_tree>
    <max_suspicious_broken_parts>12</max_suspicious_broken_parts>
</merge_tree>
After these changes my v1.5.1 backup started working again. I haven't repeated the v2.0.0 upgrade but will give it another go.
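For anyone else hitting this: with a Dockerized ClickHouse, an override like the one above can be dropped into a `config.d` file inside the mounted config directory. The filename below is an illustrative assumption, not from the original post:

```xml
<!-- /etc/clickhouse-server/config.d/broken_parts.xml (filename is an assumption) -->
<clickhouse>
    <merge_tree>
        <!-- raise the threshold so the server starts despite broken parts -->
        <max_suspicious_broken_parts>12</max_suspicious_broken_parts>
    </merge_tree>
</clickhouse>
```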
-
UPDATE: I solved my error. See my reply below. I'm trying to use the new Bamboo Mailgun adapter. I've added my API key and Mailgun domain, but I don't receive mail anymore. Is there a way to check for error logs and also send test mails when troubleshooting? I can't see any obvious errors in my compose file, and running docker logs doesn't show any mail-related errors. My compose file:

```yaml
services:
  plausible_db:
    image: postgres:14-alpine
    networks:
      - plausible_network
    volumes:
      - db:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD_FILE=/run/secrets/postgres_password
    secrets:
      - source: plausible_postgres_password
        target: /run/secrets/postgres_password
    ports:
      - "5432:5432"
    deploy:
      mode: replicated
      replicas: 1
    restart: always
  plausible_events_db:
    image: clickhouse/clickhouse-server:22.6-alpine
    networks:
      - plausible_network
    volumes:
      - event_db:/var/lib/clickhouse
      - event_db_logs:/var/log/clickhouse-server
      - event_db_config:/etc/clickhouse-server
    deploy:
      mode: replicated
      replicas: 1
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
    restart: always
  app:
    image: plausible/analytics:v2.0.0
    command: sh -c "sleep 10 && /entrypoint.sh db createdb && /entrypoint.sh db migrate && /entrypoint.sh run"
    networks:
      - plausible_network
    depends_on:
      - plausible_db
      - plausible_events_db
    ports:
      - "8888:8000"
    environment:
      - CONFIG_DIR=/run/secrets
      - DISABLE_REGISTRATION=false
      - BASE_URL=/run/secrets/plausible_base_url
      - SECRET_KEY_BASE=/run/secrets/plausible_secret_key_base
      - GOOGLE_CLIENT_ID=/run/secrets/plausible_google_client_id
      - GOOGLE_CLIENT_SECRET=/run/secrets/plausible_google_client_secret
      - MAILGUN_API_KEY=/run/secrets/plausible_mailgun_api_key
      - MAILGUN_DOMAIN=/run/secrets/plausible_mailgun_domain
    secrets:
      - source: plausible_base_url
        target: /run/secrets/BASE_URL
      - source: plausible_secret_key_base
        target: /run/secrets/SECRET_KEY_BASE
      - source: plausible_google_client_id
        target: /run/secrets/GOOGLE_CLIENT_ID
      - source: plausible_google_client_secret
        target: /run/secrets/GOOGLE_CLIENT_SECRET
      - source: plausible_mailgun_api_key
        target: /run/secrets/MAILGUN_API_KEY
      - source: plausible_mailgun_domain
        target: /run/secrets/MAILGUN_DOMAIN
    deploy:
      mode: replicated
      replicas: 1
    restart: always

networks:
  plausible_network:
    external: true

volumes:
  db:
    name: plausible_db
  event_db:
    name: plausible_event_db
  event_db_logs:
    name: plausible_event_db_logs
  event_db_config:
    name: plausible_event_db_config

secrets:
  plausible_postgres_password:
    external: true
  plausible_base_url:
    external: true
  plausible_secret_key_base:
    external: true
  plausible_google_client_id:
    external: true
  plausible_google_client_secret:
    external: true
  plausible_mailgun_api_key:
    external: true
  plausible_mailgun_domain:
    external: true
```
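To answer the log question generally: Bamboo delivery errors surface in the app container's logs, so filtering those is a reasonable first step. A sketch, assuming a swarm stack deploy as in the compose file above (the `<stack>` name is a placeholder):

```sh
# tail the app service logs and filter for mail-related lines
docker service logs -f <stack>_app 2>&1 | grep -iE 'mailgun|bamboo|mail'
```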
-
The migration will take a bit of time here, and I confirmed that the first few months are OK. But we have nearly 3 years to migrate, and I see that the rpc supports a non-interactive mode here. How can I enable it? I tried --interactive=false and interactive=false without success.
-
A couple of extra caveats I ran into to be mindful of:
But frankly, props to the Plausible team. Other than these few bits to keep in mind, it was very smooth, and the replayability was really helpful, especially given the amount of data involved.
-
I upgraded from v1.5.1 to v2.0.0 on Fly.io. I ran into a couple of issues, and hopefully these notes will make it easier for others doing this upgrade on Fly: I needed to set
The data migration failed for me with pool errors, which were really nxdomain errors. This was because v2.0.0 doesn't pass transport options to the migration repo; see issue #3173. Fortunately this is already fixed in #3179, but that is not yet released as a docker image, afaict. Womp womp. I built my own docker image off the commit with the fix, and then temporarily deployed that to run the migration. Then I went back to the v2.0.0 release once that was done. The docker image I built and temporarily deployed to run the migration is:
The steps I used to build/publish the container, in case you rightfully don't trust rando docker images:
Hopefully we get a point release with the transport options fix for the data migration tool soon. 🤞
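The build/publish steps mentioned above weren't captured in this extract. A sketch of the general approach; the commit hash and image tag are placeholders, not the actual values the poster used:

```sh
# build Plausible from the commit that merged the #3179 fix
git clone https://github.com/plausible/analytics.git
cd analytics
git checkout <commit-with-fix>
docker build -t <your-registry>/plausible:v2.0.0-transport-fix .
docker push <your-registry>/plausible:v2.0.0-transport-fix
```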
-
Seamless upgrade as ever, though I needed to use 'docker-compose' rather than 'docker compose'. Some great new features, thanks team P!
-
v2.1.0 release candidate is now available!
-
The highlights of this release are:
Upgrading Plausible Analytics to v2.0
Warning
This guide assumes you are running v1.5.1.
If you are upgrading from an earlier version, you might encounter the error reported (and resolved) in #4779
Warning
Upgrading to v2.0 requires performing a data migration. Please read these notes until the end before deploying v2.0.0.
Ensure you are using a new ClickHouse version
The steps below have been tested with clickhouse/clickhouse-server:22.6-alpine; please make sure to upgrade ClickHouse to at least this version. Here's the excerpt from the v1.5 release discussion regarding the ClickHouse upgrade.
In your docker-compose.yml update the image used for plausible_events_db to a newer ClickHouse version:
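The original snippet didn't survive extraction; the change amounts to a one-line image bump, e.g.:

```yaml
# docker-compose.yml (fragment)
plausible_events_db:
  image: clickhouse/clickhouse-server:22.6-alpine
```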
Upgrading ClickHouse to 22.6
Restart the container
This will boot up the new version of ClickHouse.
Related PR: plausible/community-edition#45
Update image tag
In your docker-compose.yml update the image used for plausible to v2.0.0 and restart the container. This will boot up the new version of the app.
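As with the ClickHouse step, the referenced compose change is a one-line bump, e.g.:

```yaml
# docker-compose.yml (fragment)
plausible:
  image: plausible/analytics:v2.0.0
```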
If you open the dashboards now, you won't see any past metrics. This is expected, as v2.0 uses the new events_v2 and sessions_v2 tables to store analytics data. We need to perform a data migration to copy the data into the new tables.
Run data migration
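The command block itself didn't survive extraction. From the v2.0.0 release notes, the flow looks roughly like the following; treat the container name and the Plausible.DataMigration.NumericIDs module name as assumptions and verify against the release page:

```sh
# attach to the running app container (container name is an assumption)
docker exec -it plausible sh
# inside the container, start the interactive migration flow
/app/bin/plausible rpc Plausible.DataMigration.NumericIDs.run
```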
Connect to the running plausible container and start the migration flow. You can attempt this migration multiple times unless you drop the v1 tables.
Drop v1 tables (optional)
Once you verify the migration went well, the old tables can be dropped. It's easiest to use clickhouse-client for this.
See https://clickhouse.com/docs/en/operations/server-configuration-parameters/settings#max-table-size-to-drop for how to drop tables with more than 50GB of data.
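The statements were elided here; as a sketch, assuming the default plausible_events_db database and the v1 table names events and sessions:

```sql
-- inside clickhouse-client, only after verifying the migrated data
DROP TABLE plausible_events_db.events;
DROP TABLE plausible_events_db.sessions;
```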
Enable automatic MaxMind GeoLite2 updates (optional)
In your plausible-config.env set the MAXMIND_LICENSE_KEY environment variable to get an automatically updated GeoLite2 City geolocation database. The database edition is configurable with the MAXMIND_EDITION environment variable and defaults to GeoLite2-City.
Note that for the changes in plausible-config.env to propagate to the plausible container, the container needs to be recreated.
Also note that using the GeoLite2-City edition requires more RAM than using GeoLite2-Country.
Now you can remove any other volumes and services used to download, store, and update geolocation databases.
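A sketch of the configuration described above; the license key value is a placeholder:

```
# plausible-config.env
MAXMIND_LICENSE_KEY=<your-license-key>
# optional; defaults to GeoLite2-City (GeoLite2-Country needs less RAM)
MAXMIND_EDITION=GeoLite2-City
```

Then recreate the container, e.g. with `docker-compose up -d --force-recreate plausible` (the exact service name depends on your compose file).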
Changelog
The following changes have been made since v1.5:
Added
- with_imported=true in Stats API aggregate endpoint
- tagged-events classnames
- Bamboo.MailgunAdapter, Bamboo.MandrillAdapter, Bamboo.SendGridAdapter: Support alternative mailing services (Mailgun, Mandrill, Sendgrid) #2649
- PUT /api/v1/sites
- LOG_FAILED_LOGIN_ATTEMPTS environment variable to enable failed login attempts logs (add LOG_FAILED_LOGIN_ATTEMPTS #2936)
- MAILER_NAME environment variable support (add MAILER_NAME #2937)
- MAILGUN_BASE_URI support for Bamboo.MailgunAdapter (add MAILGUN_BASE_URI #2935)
Fixed
Changed
- bounce_rate
Removed
- IP_BLOCKLIST environment variable
- custom_dimension_filter feature flag (remove custom_dimension_filter feature and views_per_visit_metric flags #2996)
This discussion was created from the release v2.0.0.