Understanding Microsoft Fabric: How to Structure and Manage Workspaces for Teams, Governance, and Scale

You can build the best pipelines and the most optimized Warehouse in Microsoft Fabric, but if your workspaces are poorly organized, everything falls apart. Teams step on each other’s work. Capacity gets throttled by the wrong workload. Dev and production items live side by side with no separation. And nobody can find anything in the OneLake Catalog.
In this article, we will cover everything you need to know about workspace management in Microsoft Fabric. From workspace creation and role assignment to capacity binding, domains, Spark compute settings, workspace identity, and how to structure workspaces for development, testing, and production environments.
If you are preparing for the DP-700: Fabric Data Engineer Associate exam, expect workspace management questions that test your understanding of roles, capacity assignment, and governance patterns. Microsoft expects you to choose the right workspace structure for a given organizational scenario.
This article is part of the Understanding Microsoft Fabric series, a practical guide designed to help you master every key component of Fabric and prepare for the DP-700 certification.
If you have been following the series, we have already covered:
Understanding Microsoft Fabric: Cost and Performance Optimization
Understanding Microsoft Fabric: Monitoring and Troubleshooting
Understanding Microsoft Fabric: Shortcuts and External Connections
Understanding Microsoft Fabric: AI Integration and Copilot Features
Take Your DP-700 Prep to the Next Level
Reading articles is great for understanding concepts. But passing the exam requires practice. I created a comprehensive practice test course specifically for DP-700: Implementing Data Engineering Solutions Using Microsoft Fabric. Inside you will find:
300+ exam-style questions
5 full practice tests
Case studies and scenario-based questions
Detailed explanations that show WHY each answer is correct
All 3 exam domains covered
Whether you are just starting or doing final prep, these tests will show you exactly where you stand.
Limited offer: Use code EARLY_BIRD_1 at checkout.
Get the practice exams on Udemy.
1. What is a Workspace in Microsoft Fabric?
A workspace is the primary organizational container in Microsoft Fabric. It is where you create, store, and collaborate on Fabric items: Lakehouses, Warehouses, Notebooks, Pipelines, Dataflows Gen2, Reports, Semantic Models, and all other artifacts.
Think of a workspace as a project folder with built-in access control, capacity assignment, and lifecycle management. Every Fabric item belongs to exactly one workspace, and the workspace determines who can access it, which capacity powers it, and how it moves through development stages.
The Fabric Hierarchy
Understanding where workspaces sit in the Fabric hierarchy is essential:
Tenant is the top-level boundary. It maps to your Microsoft Entra organization.
Capacities are compute and billing resources tied to a specific Azure region. Workspaces are assigned to capacities to get compute power.
Domains are logical groupings that organize workspaces by business unit or subject area (Sales, Finance, Operations). Domains enable delegated governance.
Workspaces sit inside capacities and optionally inside domains. They contain all Fabric items and define access control through roles.
Items are the actual artifacts (Lakehouses, Warehouses, Notebooks, Pipelines, Reports) stored within workspaces.
DP-700 Exam Tip
Understand the hierarchy: Tenant > Capacity > Domain > Workspace > Items. Questions often test whether you know the relationship between these levels, especially the difference between capacity (compute/billing) and domain (governance/organization).
2. Workspace Roles: Admin, Member, Contributor, and Viewer
Workspace roles are the primary mechanism for controlling who can do what inside a workspace. There are four roles, each building on the permissions of the previous one.
Role Capabilities
Viewer: Can view all content in the workspace but cannot modify anything. Can read data through the SQL analytics endpoint. Viewers are subject to RLS, CLS, and DDM. This is the role for business users and report consumers.
Contributor: Everything a Viewer can do, plus create, edit, and delete all items in the workspace. This is the standard role for data engineers and developers. Contributors get db_owner access to Lakehouse and Warehouse databases. They cannot manage workspace settings or add users to workspace roles directly, but they can share individual items if they have Reshare permissions.
Member: Everything a Contributor can do, plus share items with other users and add users to the Member, Contributor, or Viewer roles (but not Admin). Members can also publish Power BI apps.
Admin: Full control. Everything a Member can do, plus manage workspace settings, assign the workspace to a capacity, configure Git integration, manage all roles including Admin, and delete the workspace.
Key Rules
If a user has multiple roles (for example, through different security groups), the most permissive role wins.
Workspace roles are confined to a single workspace. They do not carry over to other workspaces, the capacity, or the tenant.
Admin, Member, and Contributor roles bypass all data-level security (RLS, CLS, DDM). Only Viewers are subject to data security policies.
Each workspace supports a maximum of 1,000 users or groups in workspace roles. A security group counts as a single entry, regardless of how many users it contains.
Each workspace can contain a maximum of 1,000 Fabric and Power BI items. Plan your workspace structure to avoid hitting this limit in large environments.
Each user can create a maximum of 1,000 items across all workspaces.
Best Practices
Use security groups instead of individual users for role assignment. This simplifies management and auditing.
Follow the principle of least privilege: give developers Contributor access and consumers Viewer access.
Reserve Admin for workspace owners and platform administrators.
Document your role assignment strategy for consistency across workspaces.
DP-700 Exam Tip
Only Viewers are subject to RLS, CLS, and DDM. Admin, Member, and Contributor roles bypass all data security. If a question asks why a user can see all rows despite having RLS configured, check whether they have a role higher than Viewer.
3. Assigning Workspaces to Capacities
Every workspace must be assigned to a capacity to use Fabric workloads beyond Power BI Pro. The capacity determines the compute resources, measured in capacity units (CUs), available to the workspace and the Azure region where data is stored.
How Assignment Works
Workspace Admins can assign their workspace to a Fabric capacity if they also have the Capacity Contributor role on the target capacity.
Fabric Admins and Global Admins can move any workspace to any capacity.
Capacity Admins for Fabric (F) capacities must also be Workspace Admins to assign a workspace. For Power BI Premium (P) capacities, Capacity Admins can claim any workspace without being a Workspace Admin.
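Assignment can also be automated through the Fabric REST API (the Workspaces - Assign To Capacity operation). The sketch below only builds the request rather than sending it; the workspace ID, capacity ID, and token are placeholders, and the caller needs Admin on the workspace plus contributor rights on the target capacity:

```python
# Sketch: binding a workspace to a capacity via the Fabric REST API.
# Builds the POST request; placeholder IDs and token must be supplied.
import json
from urllib import request

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def build_assign_request(workspace_id: str, capacity_id: str, token: str) -> request.Request:
    """Build (but do not send) the assignToCapacity POST request."""
    return request.Request(
        url=f"{FABRIC_API}/workspaces/{workspace_id}/assignToCapacity",
        data=json.dumps({"capacityId": capacity_id}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# req = build_assign_request("<workspace-guid>", "<capacity-guid>", "<aad-token>")
# request.urlopen(req)  # sends the assignment request
```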
Data Residency
Capacities are bound to specific Azure regions. When you assign a workspace to a capacity, the data in that workspace resides in the capacity’s region. Moving a workspace to a capacity in a different region requires data migration and has restrictions:
Power BI items can move across regions.
Most non-Power BI Fabric items (Lakehouses, Warehouses, Notebooks, etc.) cannot move across regions. They must be removed from the workspace before migration, and new items must be created after the workspace is reassigned.
Reassignment Restrictions
Moving workspaces between capacities can fail when Private Link networking is enabled, when Customer-Managed Key (CMK) configurations conflict, or when a capacity is paused or scaling during the operation. Always ensure both source and destination capacities are active and stable during migration.
DP-700 Exam Tip
If a question describes moving a workspace to a different Azure region, remember that non-Power BI Fabric items cannot be moved across regions. They must be deleted and recreated. This is a critical restriction for data residency scenarios.
4. Organizing Workspaces with Domains
Domains provide a governance layer above workspaces. They group workspaces by business unit, department, or subject area, making it easier to manage and discover data at scale.
What Domains Do
Organize workspaces logically. Assign workspaces to domains like “Sales,” “Finance,” “Operations,” or “Data Engineering.” Users can filter by domain in the OneLake Catalog to find relevant content.
Delegate governance. Some tenant-level settings can be delegated to the domain level, allowing each business unit to define its own rules and restrictions. For example, one domain might allow external sharing while another blocks it.
Support subdomains. Domains can have subdomains for more granular organization (for example, “Finance > Accounting” and “Finance > Treasury”).
Default domains. Admins can designate a domain as the default for specific users or security groups. New workspaces created by those users are automatically assigned to the default domain.
What Domains Do Not Do
Domains do NOT grant workspace roles. Assigning a workspace to a domain does not give users in that domain any access to the workspace. Workspace roles (Admin, Member, Contributor, Viewer) must be assigned separately. Domains are purely a governance and organization layer, not an access control mechanism.
Domain Roles
Domain Admin: Can manage domain settings, assign workspaces, manage domain contributors, and override workspace assignments (if the tenant setting allows).
Domain Contributor: Can assign workspaces they administer to the domain. Must have the Admin role on the workspace to assign it.
DP-700 Exam Tip
Domains do not grant workspace permissions. If a question asks whether assigning a workspace to a domain gives users in that domain access to the workspace, the answer is no. Domains organize workspaces; workspace roles control access.
5. Managing Spark Compute Settings at Scale
For data engineering workloads, Spark compute configuration is one of the most impactful workspace management decisions. Fabric provides three levels of Spark compute governance: capacity, workspace, and individual item.
Starter Pools
Starter pools are preconfigured Spark compute environments that provide fast session startup. They are available by default and provide a quick way to start notebooks and Spark jobs without configuring custom environments.
Capacity admins can disable Starter Pools across all workspaces attached to a capacity. When disabled, users must use custom pools explicitly created and managed by the admin.
Custom Spark Pools
Workspace admins can create custom Spark pools with specific executor counts, core configurations, and autoscaling settings. For centralized governance, capacity admins can create standardized pools and optionally disable workspace-level customization, preventing workspace admins from modifying pool settings or creating their own.
Job-Level Bursting
Fabric supports 3x bursting for Spark vCores, allowing a single job to temporarily use more compute than the base capacity provides. Capacity admins can disable job-level bursting to prevent a single job from monopolizing capacity resources.
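The arithmetic behind bursting follows from the documented mapping of one capacity unit (CU) to two Spark vCores. A quick sketch of the base and burst ceilings for an F SKU:

```python
# Sketch: base vs. burst Spark vCores for a Fabric F SKU.
# Assumes the documented 1 CU = 2 Spark vCores mapping and the 3x
# job-level bursting factor (which a capacity admin can disable).

BURST_FACTOR = 3

def spark_vcores(capacity_units: int, bursting_enabled: bool = True) -> tuple[int, int]:
    """Return (base vCores, maximum vCores with bursting) for an F SKU."""
    base = capacity_units * 2
    return base, base * (BURST_FACTOR if bursting_enabled else 1)

# F64: 128 base Spark vCores, up to 384 when bursting is enabled.
```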
Spark Environments
Environments are workspace-level configurations that define Python and R libraries, Spark properties, and runtime settings. They can be shared with other users with read or read-and-edit permissions. Only Workspace Admins can create environments.
DP-700 Exam Tip
If a question describes a capacity admin wanting to standardize Spark compute across all workspaces, the answer involves creating custom pools at the capacity level and disabling workspace-level customization.
6. Workspace Identity: Secure Service-Level Authentication
Workspace Identity is a managed identity assigned to a Fabric workspace. It provides credential-free, service-principal-like authentication for production data pipelines, notebooks, and shortcuts.
Why It Matters
Without Workspace Identity, pipelines and shortcuts rely on individual user credentials or stored secrets. This creates problems: if the user leaves the organization, pipelines break. If a secret expires, shortcuts stop working.
Workspace Identity eliminates this by providing a persistent, managed identity that:
Authenticates to ADLS Gen2 storage accounts without passwords or SAS tokens.
Enables Trusted Workspace Access for storage behind firewalls.
Works with OneLake shortcuts, Pipelines, Notebooks, and Dataflows Gen2.
Integrates with Azure RBAC (assign Storage Blob Data Reader to the workspace identity).
How to Enable
Go to Workspace Settings.
Navigate to the Identity tab.
Enable Workspace Identity.
Copy the Application ID and use it to grant Azure RBAC roles on your storage accounts.
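The RBAC grant in step 4 is typically done with the Azure CLI. The sketch below composes the `az role assignment create` invocation rather than running it; the application ID and storage account scope are placeholders you would replace with your own values:

```python
# Sketch: composing the Azure CLI command that grants the workspace
# identity Storage Blob Data Reader on an ADLS Gen2 account.
# Placeholder values only; subprocess.run(cmd) would execute it.

def role_assignment_cmd(app_id: str, scope: str,
                        role: str = "Storage Blob Data Reader") -> list[str]:
    """Compose the `az role assignment create` invocation for the identity."""
    return [
        "az", "role", "assignment", "create",
        "--assignee", app_id,
        "--role", role,
        "--scope", scope,
    ]

cmd = role_assignment_cmd(
    app_id="00000000-0000-0000-0000-000000000000",  # from Workspace Settings > Identity
    scope="/subscriptions/<sub>/resourceGroups/<rg>/providers"
          "/Microsoft.Storage/storageAccounts/<account>",
)
```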
DP-700 Exam Tip
If a question describes a production pipeline that fails because the user who created it left the organization, the solution is Workspace Identity. It provides persistent, credential-free authentication that does not depend on individual users.
7. Structuring Workspaces for Dev, Test, and Production
One of the most common mistakes in Fabric is mixing development and production items in the same workspace. This creates risk: an accidental change to a notebook can break a production pipeline.
Recommended Structure
Use separate workspaces for each stage of your development lifecycle:
Development workspace: Where data engineers build and test new items. Connected to a Git repository for version control. Assigned to a dev/test capacity (smaller, lower cost, can be paused outside business hours).
Test/Staging workspace: Where items are validated before promotion to production. Can use automated testing with Pipelines or Notebooks. Assigned to a test capacity or the production capacity with workspace-level surge protection.
Production workspace: Where items run on schedule and serve reports to business users. Locked down with minimal Admin and Contributor roles. Assigned to the production capacity with reserved pricing.
Deployment Pipelines
Fabric Deployment Pipelines automate the promotion of items between workspaces (Dev > Test > Prod). They handle item comparison, differential deployment, and parameter rules for environment-specific configurations (like connection strings or storage paths).
We will cover Deployment Pipelines in detail in the next article on Deployment and CI/CD.
Git Integration
Fabric supports Git integration with Azure DevOps and GitHub. When connected, workspace items are stored as source-controlled files, enabling branching, pull requests, code review, and rollback.
Key considerations:
Connect your development workspace to a feature or development branch.
Use pull requests to merge changes to the main branch.
Deployment pipelines deploy from the main branch to production.
Not all Fabric items support Git integration equally. Check the latest documentation for item-specific support.
DP-700 Exam Tip
If a question asks how to separate development from production in Fabric, the answer is separate workspaces with Deployment Pipelines for promotion and Git integration for version control.
8. Tenant Settings That Affect Workspace Management
Several tenant-level settings in the Fabric Admin Portal directly impact how workspaces behave.
Key Settings
Create workspaces: Controls who in the organization can create workspaces. By default, all users can create workspaces. Restrict this in large organizations to prevent workspace sprawl.
Block users from reassigning personal workspaces: Prevents My Workspace owners from moving their workspace to a different capacity, which could violate data residency requirements.
PBIR format for reports: Automatically converts reports to PBIR (Power BI enhanced metadata format) after editing. PBIR provides a source-control-friendly file structure for better Git integration.
Workspace workload management (March 2026): The new Manage Workloads tab in the Admin Portal provides centralized governance for custom workloads across the tenant, including tenant and workspace-level assignment controls.
Copilot delegation: Capacity admins can override tenant-level Copilot settings for their capacity, enabling or disabling AI features independently.
DP-700 Exam Tip
If a question describes workspace sprawl or unauthorized workspace creation, the answer is restricting the “Create workspaces” tenant setting to specific security groups.
Common Workspace Management Mistakes to Avoid in Microsoft Fabric
Mixing dev and prod items in one workspace. This is the most common and most dangerous mistake. A developer editing a notebook can accidentally break a production pipeline. Always use separate workspaces for each lifecycle stage.
Assigning all workspaces to one capacity. If all workspaces share one capacity, a heavy Spark job in one workspace can throttle Power BI reports in another. Use separate capacities or workspace-level surge protection for workload isolation.
Not using security groups for role assignment. Adding individual users to workspace roles creates management overhead. Use security groups, and update group membership instead of workspace roles when people join or leave.
Ignoring domains. Without domains, the OneLake Catalog becomes a flat list of hundreds of workspaces. Organize workspaces into domains early to enable filtering, delegated governance, and discoverability.
Using personal accounts for production shortcuts and pipelines. If the user leaves or their credentials expire, production breaks. Always use Workspace Identity for production authentication.
Not restricting workspace creation. In large organizations, unrestricted workspace creation leads to hundreds of unused or duplicate workspaces. Restrict creation to specific security groups and enforce a naming convention.
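A naming convention is easy to enforce with a simple check during workspace creation requests. The `ws-<domain>-<project>-<env>` pattern below is an illustrative convention, not a Fabric requirement:

```python
# Sketch: validating workspace names against an assumed convention of
# ws-<domain>-<project>-<env>, where env is dev, test, or prod.
import re

NAME_PATTERN = re.compile(r"^ws-[a-z0-9]+-[a-z0-9]+-(dev|test|prod)$")

def is_valid_workspace_name(name: str) -> bool:
    """True when the name follows ws-<domain>-<project>-<env>."""
    return NAME_PATTERN.fullmatch(name) is not None
```

For example, `ws-sales-orders-prod` passes, while an ad hoc name like `Sales Workspace` is rejected.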
DP-700 Exam Tip
The exam tests common anti-patterns. Watch for scenarios with mixed dev/prod, single capacity, personal credentials in production, and unrestricted workspace creation.
What is Next
In the next article, we will explore Deployment and CI/CD in Microsoft Fabric, covering Deployment Pipelines, Git integration with Azure DevOps and GitHub, branching strategies, deployment rules, and how to implement a complete release management workflow for your Fabric environment.
Make sure to bookmark this series so you do not miss any upcoming articles.

