This guide walks you through connecting Google BigQuery to Klairr. You’ll need a Google Cloud project with BigQuery data and admin access to create a service account.
Prerequisites
- A Google Cloud project with BigQuery enabled
- At least one dataset with tables you want to query
- Permission to create service accounts and manage IAM roles
Step 1: Create a service account
- Go to the Google Cloud Console
- Select your project
- Navigate to IAM & Admin > Service Accounts
- Click Create Service Account
- Enter a name (e.g., “klairr-reader”) and description
- Click Create and Continue
Step 2: Grant permissions
Assign the following roles to the service account:
| Role | Why it’s needed |
|---|---|
| BigQuery Data Viewer | Read access to tables and views |
| BigQuery Job User | Permission to run queries |
These are the minimum required roles. Do not grant broader access — Klairr only needs read permissions.
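If you later script or audit the IAM setup, the console names in the table correspond to the IAM role IDs `roles/bigquery.dataViewer` and `roles/bigquery.jobUser`. A minimal sketch (plain Python, no GCP calls) that checks a list of granted roles against that minimum:

```python
# IAM role IDs for the console names "BigQuery Data Viewer" and "BigQuery Job User".
REQUIRED_ROLES = {
    "roles/bigquery.dataViewer",  # read access to tables and views
    "roles/bigquery.jobUser",     # permission to run query jobs
}

def missing_roles(granted):
    """Return the required roles absent from the `granted` collection."""
    return REQUIRED_ROLES - set(granted)

# Example: an account that only has Data Viewer so far is still missing Job User.
missing_roles(["roles/bigquery.dataViewer"])
```

A correctly configured service account yields an empty set.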
Step 3: Create a JSON key
- Click on your new service account
- Go to the Keys tab
- Click Add Key > Create new key
- Select JSON format
- Download the key file
Keep this file secure. You’ll paste its contents into Klairr in the next step.
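Before pasting the key into Klairr, you can sanity-check that the file is a complete service-account key. The field names below are standard in Google service-account key files; the helper itself is a hypothetical sketch, not part of Klairr:

```python
import json

# Fields present in every Google service-account key file.
REQUIRED_FIELDS = {"type", "project_id", "private_key_id", "private_key", "client_email"}

def check_key(raw):
    """Return a list of problems found in the key JSON; empty means it looks OK."""
    try:
        key = json.loads(raw)
    except json.JSONDecodeError:
        return ["file is not valid JSON"]
    problems = ["missing field: " + f for f in sorted(REQUIRED_FIELDS - key.keys())]
    if key.get("type") != "service_account":
        problems.append("'type' is not 'service_account'")
    return problems
```

An empty return value only means the file has the expected shape; Klairr still verifies the credentials when you click Connect.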
Step 4: Add the data source in Klairr
- In Klairr, go to Data sources
- Click Add data source
- Select Google BigQuery
- Paste the entire JSON key file contents
- Select the project and dataset you want to query
- Click Connect
Step 5: Schema introspection
After connecting, Klairr automatically introspects your schema:
- Tables — all tables in the selected dataset
- Columns — column names, types, and descriptions (if set in BigQuery)
- Enum values — sample values from low-cardinality columns (used for intelligent filtering)
This typically takes a few seconds. Once complete, you can start asking questions.
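The enum-value step can be pictured as a cardinality check: if a column has only a few distinct values, those values are kept as filter candidates. The threshold below is an assumption for illustration only; Klairr's actual cutoff is not documented here:

```python
def enum_candidates(values, max_distinct=20):
    """Return sorted distinct non-null values if the column is low-cardinality,
    otherwise None (the column is not treated as an enum)."""
    distinct = {v for v in values if v is not None}
    if 0 < len(distinct) <= max_distinct:
        return sorted(distinct)
    return None

enum_candidates(["open", "closed", "open", None, "pending"])  # low cardinality: kept
enum_candidates(range(10_000))                                # high cardinality: skipped
```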
Cost estimation
Klairr shows the estimated BigQuery cost before executing each query:
- Bytes to be processed — shown before execution
- Estimated cost — calculated using BigQuery’s on-demand pricing ($5/TB)
- Dry-run mode — in SQL Live Edit, you can check cost before running a modified query
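The on-demand arithmetic is simple enough to check by hand: bytes scanned, divided by a terabyte, times the $5 rate. The sketch below follows the $5/TB figure above and treats 1 TB as 2^40 bytes, which is how BigQuery meters on-demand queries; it is illustrative, not Klairr's internal code:

```python
TB = 2 ** 40            # BigQuery meters on-demand bytes in binary terabytes
PRICE_PER_TB = 5.00     # USD, the on-demand rate quoted above

def estimated_cost(bytes_processed):
    """Estimated on-demand cost in USD for a query scanning `bytes_processed`."""
    return bytes_processed / TB * PRICE_PER_TB

estimated_cost(2 ** 40)       # a full terabyte costs $5.00
estimated_cost(10 * 2 ** 30)  # a 10 GB scan costs about five cents
```

Outside Klairr, you can obtain the bytes-processed figure from a BigQuery dry run, for example with `bq query --dry_run` or `QueryJobConfig(dry_run=True)` in Google's Python client.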
Spend limits
Admins can set spend limits to prevent runaway costs:
- Per-query limit — maximum bytes scanned per query
- Daily limit — maximum total bytes scanned per day per user
- Automatic LIMIT — Klairr injects a LIMIT clause to prevent full-table scans
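Conceptually, the three guards combine like this. The function names and defaults are hypothetical, sketched only to show the checks described above:

```python
import re

def over_limit(query_bytes, daily_bytes_used, per_query_limit, daily_limit):
    """Return the reason a query should be blocked, or None if it may run."""
    if query_bytes > per_query_limit:
        return "per-query limit exceeded"
    if daily_bytes_used + query_bytes > daily_limit:
        return "daily limit exceeded"
    return None

def ensure_limit(sql, default_limit=1000):
    """Append a LIMIT clause if the query does not already end with one."""
    if re.search(r"\bLIMIT\s+\d+\s*;?\s*$", sql, re.IGNORECASE):
        return sql
    return sql.rstrip().rstrip(";") + " LIMIT " + str(default_limit)
```

Note that an injected LIMIT caps rows returned, not bytes scanned, so the byte limits remain the primary cost control.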
Configure these in Settings > Data sources for each data source.
Troubleshooting
“Permission denied” error
- Verify the service account has BigQuery Data Viewer and BigQuery Job User roles
- Confirm the roles are assigned at the project level (not just the dataset level)
- Check that the correct project is selected in Klairr
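To confirm both roles from outside Klairr, a dry-run query with Google's `google-cloud-bigquery` Python client surfaces permission errors directly. The package and the calls it makes are real; the helper itself is a sketch and requires `pip install google-cloud-bigquery` plus your downloaded key:

```python
def verify_access(key_path, project):
    """Dry-run a trivial query using the service-account key.

    Raises a Forbidden error if the account cannot create query jobs
    (missing Job User), and returns bytes processed on success
    (0 for SELECT 1, since no table is scanned).
    """
    from google.cloud import bigquery  # third-party: google-cloud-bigquery
    client = bigquery.Client.from_service_account_json(key_path, project=project)
    job = client.query("SELECT 1", job_config=bigquery.QueryJobConfig(dry_run=True))
    return job.total_bytes_processed
```

A successful dry run confirms Job User; a follow-up query against one of your tables additionally exercises Data Viewer.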
“Dataset not found”
- Ensure the service account has access to the specific dataset
- Verify the dataset exists in the selected project
- Check for region restrictions on the dataset
Schema not loading
- The service account may lack permission to list tables — verify the Data Viewer role
- Try disconnecting and reconnecting the data source
- Large datasets (100+ tables) may take longer to introspect
Queries timing out
- Klairr injects a LIMIT clause by default, but complex queries on very large tables may still be slow
- Consider creating views or materialized views for commonly queried data
- Check if the dataset uses partitioning and ensure Klairr’s queries can leverage it
What’s next
- Ask Your First Question — start querying your data
- AI Memory — the organizational knowledge layer Klairr builds and you refine
- Supported Data Sources — explore other data sources