Salesforce (docs)
This package models Salesforce data from Fivetran's connector. It uses data in the format described by this ERD.
The main focus of this package is to enable you to better understand the performance of your opportunities. You can easily see what is going on in your sales funnel and dig into how the members of your sales team are performing.
The primary outputs of this package are described below. Staging and intermediate models are used to create these output models. This package contains transformation models designed to work alongside the staging models in our Salesforce source package.
| model | description |
| --- | --- |
| salesforce__manager_performance | Each record represents a manager, enriched with data about their team's pipeline, bookings, losses, and win percentages. |
| salesforce__owner_performance | Each record represents an individual member of the sales team, enriched with data about their pipeline, bookings, losses, and win percentages. |
| salesforce__sales_snapshot | A single row snapshot that provides various metrics about your sales funnel. |
| salesforce__opportunity_enhanced | Each record represents an opportunity, enriched with related data about the account and opportunity owner. |
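Once the package has run, you can build your own analyses on top of these outputs in a downstream dbt model. Below is a minimal sketch; the `win_percentage` column is an assumption for illustration, so check each model's documentation for the exact columns it exposes.

```sql
-- models/my_top_owners.sql (hypothetical downstream model)
-- Selects high-performing sales team members from the package's
-- salesforce__owner_performance output model.
select *
from {{ ref('salesforce__owner_performance') }}
where win_percentage > 0.5
```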
Check dbt Hub for the latest installation instructions, or read the dbt docs for more information on installing packages.
Include the following in your `packages.yml` file:
```yml
packages:
  - package: fivetran/salesforce
    version: [">=0.5.0", "<0.6.0"]
```
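After adding the package, run `dbt deps` to install it.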
By default, this package looks for your Salesforce data in the `salesforce` schema of your target database. If this is not where your Salesforce data is, add the following configuration to your `dbt_project.yml` file:
```yml
# dbt_project.yml

...
config-version: 2
vars:
  salesforce_schema: your_schema_name
  salesforce_database: your_database_name
```
This package allows users to add additional columns to the opportunity enhanced table. Columns passed through must be present in the upstream source account or user tables. If you want to include a column from the user table, you must specify whether you want it to relate to the opportunity_manager or the opportunity_owner.
```yml
# dbt_project.yml

...
vars:
  opportunity_enhanced_pass_through_columns: [account_custom_field_1, account_custom_field_2, opportunity_manager.user_custom_column]
  account_pass_through_columns: [account_custom_field_1, account_custom_field_2]
  user_pass_through_columns: [user_custom_column]
```
If you have Salesforce History Mode enabled for your connector, the source tables will include all historical records. This package is designed to deal with non-historical data. As such, if you have History Mode enabled, you will want to set the desired `using_[table]_history_mode_active_records` variable(s) to `true` to filter for only active records. These variables are disabled by default; however, you may add the below variable configuration within your `dbt_project.yml` file to enable the feature.
```yml
# dbt_project.yml

...
vars:
  using_account_history_mode_active_records: true # false by default. Only use if you have history mode enabled.
  using_opportunity_history_mode_active_records: true # false by default. Only use if you have history mode enabled.
  using_user_role_history_mode_active_records: true # false by default. Only use if you have history mode enabled.
  using_user_history_mode_active_records: true # false by default. Only use if you have history mode enabled.
```
Your connector may not be syncing all the tables that this package references, for example because you have excluded them. If you are not using those tables, you can disable the corresponding functionality in the package by setting the relevant variable in your `dbt_project.yml`. By default, all variables are assumed to be `true`. You only have to add variables for the tables you want to disable, like so:

The `salesforce__user_role_enabled` variable below refers to the `user_role` table.
```yml
# dbt_project.yml

...
config-version: 2
vars:
  salesforce__user_role_enabled: false # Disable if you do not have the user_role table
```
The corresponding metrics from the disabled tables will not populate in downstream models.
Additional contributions to this package are very welcome! Please create issues or open PRs against `main`. Check out this post on the best workflow for contributing to a package.
This package has been tested on BigQuery, Snowflake, Redshift, Postgres, and Databricks.
dbt v0.20.0 introduced a new project-level dispatch configuration that enables an "override" setting for all dispatched macros. If you are using a Databricks destination with this package, you will need to add the below (or a variation of the below) dispatch configuration within your `dbt_project.yml`. This is required in order for the package to correctly search for macros within the `dbt-labs/spark_utils` and then the `dbt-labs/dbt_utils` packages, respectively.
```yml
# dbt_project.yml

dispatch:
  - macro_namespace: dbt_utils
    search_order: ['spark_utils', 'dbt_utils']
```
- Provide feedback on our existing dbt packages or what you'd like to see next
- Have questions, feedback, or need help? Book a time during our office hours here or email us at [email protected]
- Find all of Fivetran's pre-built dbt packages in our dbt hub
- Learn how to orchestrate dbt transformations with Fivetran here
- Learn more about Fivetran overall in our docs
- Check out Fivetran's blog
- Learn more about dbt in the dbt docs
- Check out Discourse for commonly asked questions and answers
- Join the chat on Slack for live discussions and support
- Find dbt events near you
- Check out the dbt blog for the latest news on dbt's development and best practices