
Fabric Update - February 2026

By Microsoft Fabric

Summary

Topics Covered

  • Relative References Simplify CI/CD
  • Semantic Link Unifies Lakehouse Management
  • Notebooks Bridge BI and Data Science
  • Semantic Link Automates Model Governance
  • Private Networks Secure Eventstream Ingestion

Full Transcript

(bright music) - If you've been waiting for Microsoft Fabric to level up, this month, it did not disappoint.

We've got major capabilities going generally available, powerful new previews that hint at where the platform is heading and some updates that are going to change how you design solutions in Fabric.

Before we get into all of the great information, I wanna remind you that the Microsoft Fabric Community Conference and the Microsoft SQL Community Conference are coming in March 2026 in Atlanta.

Scan the QR code on the screen to learn more and sign up.

We'd love to have you register, learn new things, and connect with the community.

You can use the discount code FABCOMM, F-A-B-C-O-M-M, to get $200 off registration. For all of this month's details, you'll see a QR code on the screen; scan it to jump straight to the monthly update blog.

I'm Adam Saxton.

This is the Microsoft Fabric Update for February 2026.

Let's begin.

This month's general availability updates are all about maturity and confidence.

We're seeing continued hardening across core workloads, data movement, storage experiences, semantic modeling, and governance improvements that make Fabric more enterprise-ready than ever.

Several features that many of you have been testing in preview are now production-supported, meaning you can roll them into real-world solutions.

If you're building lakehouse architectures, modern warehouses, or Power BI solutions inside Fabric, this month strengthens the foundation.

This isn't just incremental polish; it's about stability, scale, and performance.

On the public preview side, this is where things get really interesting.

We're seeing continued improvements around AI assisted experiences, deeper integration across workloads and enhancements that make complex architectures easier to build and manage.

A lot of these preview features point towards a more connected Fabric experience where engineering, warehousing, real time and BI don't just coexist.

They operate as one cohesive data system.

We've got demos lined up that walk you through a couple of these updates so that you can see exactly how this works in your own environment.

Okay, enough of all this talking, let's see some of the demos.

- [Miguel] When creating a solution that is CI/CD friendly in Dataflow Gen2, you can use dataflow parameters and you can even use Fabric variable libraries.

What if you could use something much easier and much quicker? You can now, with what we call relative references inside of the Fabric connectors.

To use them, all you have to do is go to Get Data and find the connector of your choice.

In this case, I'm gonna find the Lakehouse connector, go through the credentials process, and land in the Navigator window, where I can see all of the workspaces that I have access to.

The first node I see is called Current Workspace.

When you select this node, it will always create relative references. It lists all of the lakehouses within my current workspace, where this dataflow is located. I'm gonna select this table called Categories and just load it as a new query.

I have also previously loaded the same table using absolute references. Because the two queries load the exact same data, the difference between them is in how the M script is created.

When I see the M script for the relative references, I don't see any references to any workspace ID or any Lakehouse ID.

Everything is relative.

We're simply using the Lakehouse name, in this case.

The absolute version wouldn't work the way you expect if it were deployed through a deployment pipeline.

You can always change these navigation steps, or this specific query, to use a relative reference. All you have to do is go into the navigation steps, double-click them or click the gear icon, and that will display the navigation dialog. You can then repurpose this navigation to go into the Current Workspace node, which will create relative references.

So I'm gonna go ahead and simply select the Categories table, and all that changes is the M code of this query.
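To make the idea concrete, here's a toy resolver in Python (not actual M code, and the workspace names are made up) showing why a relative reference survives deployment while an absolute one stays pinned to the workspace it was authored in:

```python
# Toy illustration: an absolute reference pins a workspace ID at authoring
# time, while a relative reference resolves against whatever workspace the
# dataflow currently runs in.

def resolve_source(reference, current_workspace):
    """Return the workspace a query reads from, given its reference style."""
    if reference["style"] == "absolute":
        return reference["workspace_id"]  # pinned at authoring time
    return current_workspace              # the Current Workspace node

absolute = {"style": "absolute", "workspace_id": "dev-workspace"}
relative = {"style": "relative"}

# After deployment, the dataflow runs in the production workspace:
print(resolve_source(absolute, "prod-workspace"))  # dev-workspace (wrong data)
print(resolve_source(relative, "prod-workspace"))  # prod-workspace (follows the dataflow)
```

The same logic is why the relative M script contains no workspace or Lakehouse IDs: there is nothing to re-point after deployment.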

- [Ruixin] I'm a data engineer responsible for keeping our Lakehouse reliable and healthy.

I'm constantly asking myself, "Are my Spark settings right for the workload I'm running today?

Are my Lakehouse tables quietly growing in size and cost?

And are my SQL endpoints still aligned with the latest table schemas?"

Traditionally, answering these questions means jumping between Notebooks, admin portals, and manual scripts, each one showing only part of the system.

That's where Semantic Link comes in.

With Semantic Link, I can treat my workspace, Lakehouse and SQL endpoints as one connected system directly from my Notebook.

I start by checking the workspace configuration.

In seconds, I can see the active Spark runtime and workspace Spark settings, no guessing and no context switching.

This matters because the same workspace is used differently over time.

During the holidays, it runs production workloads focused on stability and cost control.

After that, it becomes a shared development workspace where flexibility matters more.

With Semantic Link, I update Spark settings safely in context.

Next, I look at the Lakehouse.

Using Semantic Link, I list my tables and immediately see file counts and storage size.

This helps me spot tables that are growing larger than expected.

In Delta Lake, updates, merges and backfills leave behind older data files.

Those files are kept to support reliability and time travel, but over time they accumulate, driving up storage costs and slowing queries.

That's why we run Vacuum.

Vacuum permanently removes inactive files that are no longer needed, but running it blindly can affect downstream users.

With Semantic Link in a Notebook, I validate table state, run Vacuum only where it's truly needed, and confirm that storage drops without changing query results.
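A minimal sketch of that "vacuum only where needed" decision, assuming a table listing with file counts and sizes like the one Semantic Link returns (the field names and thresholds here are hypothetical):

```python
# Hypothetical sketch: pick Delta tables worth vacuuming from a table listing.
# Field names (name, file_count, size_gb) and thresholds are assumptions.

def vacuum_candidates(tables, max_files=1000, max_size_gb=50.0):
    """Return names of tables whose file count or size exceeds a threshold."""
    return [
        t["name"]
        for t in tables
        if t["file_count"] > max_files or t["size_gb"] > max_size_gb
    ]

tables = [
    {"name": "sales", "file_count": 4200, "size_gb": 12.0},
    {"name": "dim_product", "file_count": 35, "size_gb": 0.2},
    {"name": "events", "file_count": 300, "size_gb": 80.0},
]
print(vacuum_candidates(tables))  # ['sales', 'events']
```

Running Vacuum only on the flagged tables avoids touching healthy tables and keeps the operation predictable for downstream users.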

Finally, I turn to SQL Endpoints.

Schema drift is a quiet problem.

Tables evolve, but endpoints don't always keep up.

With Semantic Link, I can list the SQL endpoints for my workspace right from the Notebook and update them in place, keeping everything aligned with the latest Lakehouse schema.
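The drift check itself is simple to reason about. Here's a sketch that compares a Lakehouse table schema against what the SQL endpoint currently exposes, with both represented as plain `{column: type}` dictionaries (the schemas shown are invented):

```python
# Sketch of a schema-drift check: compare the Lakehouse table schema with the
# schema the SQL endpoint currently exposes.

def schema_drift(lakehouse_schema, endpoint_schema):
    """Return columns missing from, or typed differently at, the endpoint."""
    drift = {}
    for col, dtype in lakehouse_schema.items():
        if endpoint_schema.get(col) != dtype:
            drift[col] = (dtype, endpoint_schema.get(col))
    return drift

lakehouse = {"id": "bigint", "amount": "decimal(18,2)", "region": "string"}
endpoint = {"id": "bigint", "amount": "decimal(10,2)"}  # stale schema

print(schema_drift(lakehouse, endpoint))
# {'amount': ('decimal(18,2)', 'decimal(10,2)'), 'region': ('string', None)}
```

Anything the check flags is a candidate for an in-place endpoint refresh rather than a manual rebuild.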

Here we're in a report showing sales revenue data broken down per product category.

What if you as a creator could enrich this data with predictive insights in a quick and easy way?

To do that, I'm going to start by opening a Python Notebook in Fabric.

A Notebook is a code authoring tool that allows you to create, share, visualize, and document code.

You can work against your data in OneLake and also easily schedule and automate your code.

Let's use a Notebook to help us explore the Power BI dataset feeding the report that we just saw, and this is where Semantic Link can help.

I'll begin by checking which datasets I can access.

Semantic Link offers built-in functions with clear explanations making it simple to get started even if you're new to Python.

With just one line of code, you can explore datasets, tables, columns, and even relationships.

There's no need to worry about authentication.

Semantic Link uses your existing token to make calls automatically.

Semantic Link allows us to access measures as well.

If you're familiar with DAX you can even query tables including measures using DAX directly in a notebook cell and return the results into a data frame.

So now you can explore your data in Power BI with a new set of tools.

We can visualize the data using Python visuals and you can also use Python APIs to evaluate measures and validate and test your data.
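One common pattern is building a DAX query that groups a measure by a column, the shape you'd then pass to a call such as `sempy.fabric.evaluate_dax` in a Fabric notebook. The table, column, and measure names below are made up, and only the string construction is shown:

```python
# Sketch: build a DAX SUMMARIZECOLUMNS query as a string. In a Fabric
# notebook you would pass the result to Semantic Link's DAX evaluation API
# to get a data frame back. Names here are hypothetical.

def summarize_query(table, group_column, measure):
    """Return a DAX query grouping `measure` by `table`[`group_column`]."""
    return (
        "EVALUATE\n"
        "SUMMARIZECOLUMNS(\n"
        f"    '{table}'[{group_column}],\n"
        f"    \"Value\", [{measure}]\n"
        ")"
    )

query = summarize_query("Product", "Category", "Total Sales")
print(query)
```

Because the result comes back as a data frame, it plugs straight into Python visuals or validation checks.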

Let's use the data we just queried and create a simple machine learning model for forecasting.

We'll build the forecast using the popular Python library Prophet, and once we're done, we're going to run the predictions and write the forecasted values back to OneLake with Power BI, Direct Lake, and Fabric.
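As a stand-in for the Prophet step (which needs the library installed and real history), here's the same workflow shape in pure Python with a seasonal-naive forecast: history in, future periods out, ready to be written back as a table. The sales numbers are invented:

```python
# Stand-in for the forecasting step: a seasonal-naive forecast (repeat the
# last full season) in pure Python, just to show the workflow shape.

def seasonal_naive_forecast(history, season_length, horizon):
    """Forecast `horizon` periods by repeating the last full season."""
    last_season = history[-season_length:]
    return [last_season[i % season_length] for i in range(horizon)]

monthly_sales = [100, 120, 90, 110, 130, 95, 105, 125, 92, 115, 135, 98]
forecast = seasonal_naive_forecast(monthly_sales, season_length=12, horizon=3)
print(forecast)  # [100, 120, 90]
```

In the actual demo, Prophet replaces this naive model, but the surrounding steps, querying history, predicting, and writing results to OneLake, are the same.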

Power BI can now seamlessly read our table in the lake with the sales forecast, and this means that we can iterate much faster to incorporate the forecasted values back in our report.

Here you can see the forecasted sales added to our charts per product category.

With Fabric Notebooks and Semantic Link, we can bridge BI and data science and unlock your data's potential beyond traditional BI.

I'm a Power BI engineer.

My job is to ensure our team's semantic models are up to date, healthy, and accessible across my organization, spanning workspaces and regions while staying within capacity limits.

I don't wanna spend hours on manual steps or risk broken reports.

Semantic Link is how I automate all of that.

I start my day inside a Fabric Workspace.

Semantic Link is already installed in the default Python runtime, so I import the package and connect to a semantic model by name.

No authentication scripts and no hunting for dataset IDs.

In real projects, not every table refreshes on the same schedule.

Fact tables update daily; dimension tables might refresh weekly.

Full refreshes are slow and expensive.

With Semantic Link, I can refresh only what's needed.

I can also inspect each refresh request, timestamps, durations and outcomes, turning what used to be a black box into a clear, repeatable, developer friendly process.
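The selection logic behind "refresh only what's needed" can be sketched in a few lines. In a Fabric notebook the actual refresh would go through Semantic Link's refresh API; here only the scheduling decision is modeled, and the table names and cadences are hypothetical:

```python
# Sketch: choose which tables are due for refresh today. Daily tables are
# always due; weekly tables only on Mondays. Names/cadences are assumptions.
import datetime

def tables_due(schedules, today):
    """Return tables due for refresh on `today`."""
    due = []
    for table, cadence in schedules.items():
        if cadence == "daily":
            due.append(table)
        elif cadence == "weekly" and today.weekday() == 0:  # Monday
            due.append(table)
    return due

schedules = {"FactSales": "daily", "DimProduct": "weekly", "DimDate": "weekly"}
print(tables_due(schedules, datetime.date(2026, 2, 3)))  # a Tuesday
print(tables_due(schedules, datetime.date(2026, 2, 2)))  # a Monday
```

Feeding only the due tables into a partial refresh avoids the cost of a slow, expensive full refresh.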

Before I make any changes, I check Model health.

One line of code pulls detailed model statistics, memory usage, table growth, row counts, column sizes, partition information.

I store these snapshots in Delta Lake to track trends and catch growing tables, unused fields or potential bottlenecks before they impact performance.
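The trend check over those stored snapshots is straightforward: compare two row-count snapshots and flag tables growing faster than a threshold. The numbers and the 50% cutoff below are illustrative assumptions:

```python
# Sketch: compare two stored row-count snapshots and flag tables that grew
# by more than `threshold` (50% here). Table names/counts are invented.

def growing_tables(previous, current, threshold=0.5):
    """Return tables whose row count grew by more than `threshold`."""
    flagged = []
    for table, rows_now in current.items():
        rows_before = previous.get(table)
        if rows_before and (rows_now - rows_before) / rows_before > threshold:
            flagged.append(table)
    return flagged

previous = {"FactSales": 1_000_000, "DimProduct": 5_000}
current = {"FactSales": 1_800_000, "DimProduct": 5_100}
print(growing_tables(previous, current))  # ['FactSales']
```

Flagged tables become the candidates to investigate before they impact memory usage or refresh times.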

Quality matters too.

With Semantic Link, I can run Best Practice Analyzer Rules right inside the Notebook, both Microsoft's rules and our own custom rules.

Issues are grouped by category with clear explanations.

It becomes a consistent automated quality gate before a model moves toward production.

Our team works across multiple languages.

Manual translation used to be painful, exporting spreadsheets, copying terms, and fixing mismatches.

Semantic link makes translation effortless.

I choose the model, choose the languages, and it generates complete translations for tables, columns, and measures.

When I switch a report to Chinese, the metadata updates instantly with no drift or missing terms. Finally, when it's time to promote a report, I can move it across workspaces without breaking anything.

A single command clones the report into the new workspace and another command rebinds it to the correct semantic model.

No manual clicking, no broken visuals, no orphaned datasets.

Workspace organization stays clean and reliable.

- [Xu] Fabric Eventstream is designed to ingest real-time data from many sources and route it to multiple destinations.

With this feature, you can securely stream data from private network environments such as cloud virtual networks or on-premises infrastructure into Eventstream.

To enable this feature, an Azure virtual network is configured as a secure intermediary.

This virtual network connects to the private network hosting the data source using appropriate connectivity options such as VPN or ExpressRoute for on-premises environments and private endpoints or vNet peering for Azure based sources.

The Eventstream connector is provisioned and injected into the vNet to securely connect the data source with Eventstream.

To abstract the Azure virtual network resource within Fabric, a new concept has been introduced: the streaming virtual network data gateway.

This enables the virtual network resource to be provided to the Eventstream connector service, allowing the connector to be injected into this virtual network.

To get started with this feature, first, register the connector resource provider in your subscription.

Navigate to the subscription that will be used to create your virtual network and select resource providers.

Search for messaging connectors and check whether it's registered.

If it is not registered, select it and click register.

Second, set up an Azure virtual network.

The Azure virtual network must be created in the same region as your Fabric capacity.

In this virtual network, a subnet must be created with specific requirements such as having at least 16 available IP addresses and being delegated to the messaging connector service.

For detailed requirements, please refer to our guide documentation.
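The "at least 16 available IP addresses" requirement translates into a minimum subnet size. Azure reserves five addresses in every subnet, so a `/N` prefix leaves `2**(32 - N) - 5` usable IPs; the arithmetic below finds the smallest subnet that still qualifies:

```python
# Quick subnet-sizing arithmetic: Azure reserves five addresses per subnet,
# so a /N prefix leaves 2**(32 - N) - 5 usable IPs.

def usable_ips(prefix_length):
    """Usable addresses in an Azure subnet with the given CIDR prefix."""
    return 2 ** (32 - prefix_length) - 5

def smallest_prefix(required_ips):
    """Largest (most specific) prefix length that still fits `required_ips`."""
    for prefix in range(30, 0, -1):
        if usable_ips(prefix) >= required_ips:
            return prefix
    raise ValueError("cannot satisfy requirement")

print(usable_ips(28))        # 11 -- a /28 is too small
print(smallest_prefix(16))   # 27 -- a /27 leaves 27 usable addresses
```

So a /27 or larger subnet satisfies the 16-address requirement; always confirm against the current documentation before provisioning.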

Third, connect your streaming source's private network to the Azure virtual network you've created.

For sources in third-party cloud virtual networks or on-premises environments, you need a VPN or Azure ExpressRoute connection. For Azure sources in a private network, you can use a private endpoint or vNet peering.

In this demo, we use Azure database for PostgreSQL and connect to it through a private endpoint.

Navigate to the network settings in your PostgreSQL DB resource, where you'll find the place to create a private endpoint.

Here is a private endpoint that was created earlier; it points to the Azure virtual network created in the previous step.

Open the Azure virtual network resource.

Go to the private endpoint settings, where you can see the private endpoint. With all required prerequisites in place, let's head to Fabric to add the data source to your Eventstream.

Go to the Add data tab in the Fabric Real-Time hub and select the PostgreSQL database CDC source.

Before configuring the connector settings, click Set up to create a streaming virtual network data gateway that includes the Azure virtual network resource information.

Click New and you will be prompted to provide the Azure virtual network resource details such as the subscription, resource group, virtual network name, and subnet name.

Give it a name and description, then save it.

Once saved, the new streaming virtual network data gateway appears in the list.

Switch back to the connector configuration wizard to complete the setup.

First, create the connection using the source details such as the server address, database name and connection name.

Don't forget to select the newly created streaming virtual network data gateway, and remember to skip the Test connection.

Complete the rest of the configuration as usual, such as specifying the table name.

Hit Connect to add this source to Eventstream.

This step may take longer than usual because it includes an additional step to inject the connector into the Azure virtual network you created earlier.

Once it's done, open the Eventstream and preview the data to verify that it has been retrieved from your source.

You can now see the data being ingested into Eventstream.

- Okay, those are pretty incredible, and if you have any questions about any of these features, drop 'em in the comments below.

We're always watching and we'd love to see what you're building.

For all the details, scan that QR code on the screen or hit the link in the description below to check out the monthly update blog.

If you're not already there, join us over at the Microsoft Fabric Community site.

It's the best place to stay connected and keep learning.

As always, thank you so much for watching.

Keep being awesome and we'll see you in the next video.
