Essential Strategies and Practices for Entity Framework Core

Admir Mujkic
10 min read · Mar 12, 2024


In this article, we will explore different approaches to using Entity Framework, some of the advantages they bring, and how, for example, using async/await can increase the scalability of your application. We will also see how logging queries can help you fine-tune the SQL output and get better performance.


We will see the advantages of creating resource files and how you can use them to store the initial data for your tables. We'll talk about deferred execution and how it can optimize when and how data is actually loaded.

We'll review a technique called .AsNoTracking() that makes data access quicker by treating data as read-only. Why not make the most of your database's capabilities to keep things efficient?

Lastly, we'll see how AutoMapper can simplify the transformation of data from one object type to another and make your code much cleaner and more maintainable, though we have to be careful when it comes to projections.

Verify Your Model

If you are generating your models by using the database-first approach, meaning you have an existing database from which you are going to create your models and classes, make sure your database has Indexes, Relationships, Identity Fields, and Foreign Keys defined, so that every element of the database is reflected in the model you intend to create.

Why is this important?

When you create your DbContext with the Scaffold-DbContext command, it uses your existing database to generate the context and entity classes, taking everything into account. If your relationships are not defined correctly, the navigation properties on your models will be empty when you access them through your DbContext instance.
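To see why this matters, here is a minimal sketch, assuming the scaffolded model contains hypothetical Customer and Order entities linked by a foreign key (these names are illustrative, not part of the article's model):

using Microsoft.EntityFrameworkCore;

// If the Customer-Order foreign key existed when scaffolding, Orders is populated;
// if the relationship is missing or wrong, the collection simply stays empty.
var customersWithOrders = await context.Customers
    .Include(customer => customer.Orders)
    .ToListAsync();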

Why Use Async/Await for Scalable Applications

For I/O-bound activities, such as database operations, implementing async/await will create a more scalable application. You may not notice much of a difference when you run your web application locally, but it will become crucial as hundreds of users access the website concurrently.

The primary reason for using async/await is to prevent blocking threads.

How does it work?

The .NET Framework uses a pool of threads to handle requests. When a request arrives at the server, a thread is taken from the pool and assigned to that request. The thread then processes the request synchronously and is blocked for the duration of that work; this is what we mean by a blocked thread.

Upon completion, the thread returns to the pool. With async/await, you don't hold on to a thread from the pool while waiting: between starting the awaited operation and its completion, no thread is required at all. This means the thread is free to serve other requests.

This diagram abstracts the process into key components

For proper performance, it is important to use async/await with the Entity Framework Core calls that support it.

Synchronous Version

In the synchronous version, the thread from the thread pool that handles the request is blocked while waiting for the database operation to complete. This means the thread cannot be used for anything else during this time, which can lead to scalability issues under heavy load, as you can see in the snippet below:

using (var context = new MyDbContext())
{
    // Synchronous database operation
    var orders = context.Orders.ToList();

    // The thread is blocked here until the database operation completes
    // Process orders...
}

Asynchronous Version

In the asynchronous version, we use async and await to free up the thread while waiting for the database operation to complete. This allows the thread to return to the pool and be used for other requests. When the database operation completes, a thread is taken from the pool to continue processing the result, as you can see in the snippet below:

using (var context = new MyDbContext())
{
    // Asynchronous database operation
    var orders = await context.Orders.ToListAsync();

    // The thread is not blocked here. It can return to the pool and be used for other requests.
    // Process orders...
}

It's important to note that asynchronous programming does not make the operation itself faster; instead, it allows better utilization of resources, leading to applications that can handle more load and remain responsive to user interactions.

Analyzing and Troubleshooting Queries with Logging

In my experience, most Database Administrators I've encountered aren't huge fans of letting programmers use Entity Framework. This is mainly because developers can send ad hoc queries to the database and have them executed almost immediately.

Much like the Windows kernel, Entity Framework is a black box: you can't see what's happening behind the scenes. Luckily, we can solve this with the LogTo method that was added to the DbContextOptionsBuilder class.

To enable straightforward logging with Entity Framework, call the .LogTo() method inside the OnConfiguring() method of your DbContext. Here's how you do it in the code snippet below:

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    if (!optionsBuilder.IsConfigured)
    {
        optionsBuilder.LogTo(Console.WriteLine);
    }
}

Understanding the .LogTo() Method

The .LogTo() method offers flexibility in how you manage log data. It accepts either an action or a function to determine the log's destination. In the provided example, we demonstrate straightforward logging to the console window.

Logging directly to the console offers these advantages:

  • It requires only the Console.WriteLine() method.
  • It eliminates the need for third-party packages.

Alternative Logging Options

While convenient, debug window logging might not always be ideal. There are other equally simple logging solutions available for Entity Framework Core.
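For example, here is a minimal sketch (assuming EF Core 5 or later) that sends only the database command messages, meaning the generated SQL, at Information level to the Visual Studio debug window instead of the console; the category filter and this LogTo overload are part of EF Core itself:

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    if (!optionsBuilder.IsConfigured)
    {
        // Log only database command events (the SQL being executed) at Information level,
        // writing them to the Visual Studio debug window instead of the console.
        // LogLevel comes from Microsoft.Extensions.Logging.
        optionsBuilder.LogTo(
            message => System.Diagnostics.Debug.WriteLine(message),
            new[] { DbLoggerCategory.Database.Command.Name },
            LogLevel.Information);
    }
}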

Leveraging Resource Files for Extensive Seed Data

If you only require a small amount of initial seed data, say fewer than roughly 20 records per table, I believe it is easiest to compose the records by hand in your DbContext configuration using .HasData().
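For a handful of records, a minimal sketch could look like the following, assuming a simple Car entity and an IEntityTypeConfiguration<Car> class (the names here are illustrative):

using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

public class CarsConfiguration : IEntityTypeConfiguration<Car>
{
    public void Configure(EntityTypeBuilder<Car> builder)
    {
        // A handful of hand-written records is perfectly manageable with .HasData().
        builder.HasData(
            new Car { Id = 1, Make = "Mercedes-Benz", Model = "C-Class", Year = 2020 },
            new Car { Id = 2, Make = "Mercedes-Benz", Model = "E-Class", Year = 2021 });
    }
}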

However, what if you have a table that needs many records seeded on the initial load?

One hidden gem in .NET is using resource files to store simple strings. They are generally used for localization/translation, but they can also be used to hold seed data.

Hundreds of records can populate your tables through resource files, for example data exported to JSON or CSV and then imported, which solves the tedious problem of writing them out by hand while preserving the complexity and variation of the original data.

Let's imagine that we have the following JSON file with Mercedes-Benz models.

[
  { "Id": 1, "Make": "Mercedes-Benz", "Model": "C-Class", "Year": 2020 },
  { "Id": 2, "Make": "Mercedes-Benz", "Model": "E-Class", "Year": 2021 },
  { "Id": 3, "Make": "Mercedes-Benz", "Model": "S-Class", "Year": 2022 },
  { "Id": 4, "Make": "Mercedes-Benz", "Model": "GLE", "Year": 2020 },
  { "Id": 5, "Make": "Mercedes-Benz", "Model": "GLC", "Year": 2021 },
  { "Id": 6, "Make": "Mercedes-Benz", "Model": "AMG GT", "Year": 2022 },
  { "Id": 7, "Make": "Mercedes-Benz", "Model": "CLS", "Year": 2020 }
]

Copy the JSON shown above.

Creating the Resource File SeedResource.resx

Make sure you are working in the right Visual Studio project, the one where you want to embed this resource.

In Solution Explorer, right-click on your project (or a folder inside it, for organization) and choose "Add" -> "New Item…".

In the dialog, pick Resources File and give it the name SeedResource.resx, then click Add. The SeedResource.resx file will open in the Visual Studio resource editor, which looks like a table with several columns.

Parameters

  • Name: MercedesBenzRecords. This is how you'll reference the data later in code.
  • Value: Insert your actual JSON data here, formatted exactly as you want it, since it will be treated as a string.
  • Comment: Optional, but helpful so that other developers recognize what the resource is for.

Understanding the Fields

  • Name: The unique identifier you will use to refer to and retrieve the JSON data from within your C# code.
  • Value: Where the JSON data itself is stored.
  • Comment: Comments help other developers (and your future self!) understand the purpose of the resource.

Access Modifier

  • Public: The resource can be accessed from anywhere in your application, and potentially even from other projects.
  • Internal: The resource is restricted to use within the current project.

Open your CarsConfiguration class, locate the .HasData() call in your DbContext configuration, and replace it with the following code:

// Requires: using System.Text.Json;
var records = JsonSerializer.Deserialize<Car[]>(SeedResource.MercedesBenzRecords);
if (records != null)
{
    builder.HasData(records);
}

If you need a lot of seed data, it’s easier to put that data in JSON files instead of typing it all into the code by hand.

Understanding Deferred Execution

Deferred execution in Entity Framework means that the execution of a LINQ query is postponed until you actually need the results. This can lead to significant performance improvements in data access.

Let's imagine and consider these two scenarios.

Scenario 1 — Less Efficient

var cars = this.Cars.ToList().Where(car => car.Id == 1);

In this case, the entire Cars table is loaded into memory and then filtered.

Scenario 2 — Optimized

var cars = await this.Cars.Where(car => car.Id == 1).ToListAsync(cancellationToken);

Here, the Where clause is translated directly into a SQL WHERE clause, fetching only the necessary cars from the database. So, deferred execution allows you to build up a query expression without immediately executing it.
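As a small sketch of composing a query before it runs (the filter values here are purely illustrative):

// Nothing has hit the database yet; this only builds an expression describing the query.
IQueryable<Car> query = context.Cars.Where(car => car.Year >= 2021);

// We can keep composing the query conditionally.
bool onlyFlagships = true;
if (onlyFlagships)
{
    query = query.Where(car => car.Model == "S-Class");
}

// The SQL is generated and executed only here, when the results are actually needed.
var cars = await query.ToListAsync(cancellationToken);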

Implementing a Read-Only Mode with .AsNoTracking()

Imagine your DbContext as a meticulous librarian managing a collection of books (your data entities). Every time a book is taken off the shelf (an entity is loaded from the database), the librarian makes a note, keeping track of any changes made. This is similar to how the Entity Framework ChangeTracker meticulously monitors modifications to your entities.

Now, if you simply want to browse a book's contents without making changes, the librarian's note-taking becomes extra work. This is where .AsNoTracking() comes in. By adding it to your LINQ query, you're essentially telling the librarian:

Hey, I’m just reading, no need to keep track of changes.

This saves the librarian, I mean Entity Framework, precious time and effort. For example, the following LINQ query will retrieve a Car object without updating the ChangeTracker:

public async Task<Car?> GetCarAsync(int id)
{
    return await _context.Cars
        .AsNoTracking()
        .FirstOrDefaultAsync(car => car.Id == id);
}

In the previous snippet, we place the .AsNoTracking() method right after the DbSet instance, letting Entity Framework Core know not to track anything. Let's look at the diagram below.

Read-Only Mode with .AsNoTracking()

.AsNoTrackingWithIdentityResolution()

As above, imagine you're in a library searching for a specific customer record. Using Find to retrieve a record is like being handed a magic book that directly links to that customer's story.

This book is special: any changes you make in it will rewrite the customer's story in the library archives once you inform the librarian, which in the EF Core world means calling SaveChanges. But if you're just looking to read without altering the story, this magic book is more than you need.

.AsNoTrackingWithIdentityResolution() is like asking the librarian for a photocopy of the customer's story instead. This photocopy is perfect for reading: it's lightweight and doesn't change the original story, no matter what notes you add to it. It's ideal when you just need to look at the information without modifying it.

Let’s try to simplify as much as possible.

  • Find gives you a magic book that can change the original story, useful but heavy if you're just reading.
  • .AsNoTrackingWithIdentityResolution() offers a photocopy of the story for those times when viewing information is all you require, making it a lighter, more efficient choice, as sketched in the snippet below.
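Here is a minimal sketch of what that looks like in a query, reusing the _context field from the earlier examples and assuming hypothetical Order and Customer entities (EF Core 5 or later):

public async Task<List<Order>> GetOrdersForReportAsync(CancellationToken cancellationToken)
{
    // Read-only query: nothing is registered in the ChangeTracker,
    // but duplicate Customer rows are resolved to a single shared instance.
    return await _context.Orders
        .AsNoTrackingWithIdentityResolution()
        .Include(order => order.Customer)
        .ToListAsync(cancellationToken);
}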

Steer Clear of Manual Property Mapping

Imagine DTOs as the compact cars of your C# project. Just like a compact car is designed to be efficient and straightforward, DTOs trim down your complex classes to just the essentials. For example, you might create a CustomerCar class that only includes key details like make, model, and year, skipping the extras. This approach keeps your data transfer fast and efficient, making web pages load quicker.
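For illustration, a minimal sketch of the compact DTO next to the fuller entity could look like this, using the CarDto name that the AutoMapper example below also uses (the extra properties on Car are assumptions added just to show what gets trimmed away):

// The full entity as it lives in the database.
public class Car
{
    public int Id { get; set; }
    public string Make { get; set; } = string.Empty;
    public string Model { get; set; } = string.Empty;
    public int Year { get; set; }
    public string Vin { get; set; } = string.Empty;   // detail we don't need to expose
    public decimal PurchasePrice { get; set; }        // detail we don't need to expose
}

// The compact DTO that travels to the client: just the essentials.
public class CarDto
{
    public string Make { get; set; } = string.Empty;
    public string Model { get; set; } = string.Empty;
    public int Year { get; set; }
}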

Manual Mapping

Manually transferring data from a detailed domain Car class to a simplified CarDto feels like moving boxes by hand from one car to another. It's repetitive and easy to slip up, and this is where the problem lies. Using AutoMapper, we can make the transfer effortless.

AutoMapper works like a vehicle transport trailer, automatically moving your boxes (data) from one car (class) to another. Just set it up once (see the C# example below), and it does the heavy lifting, eliminating the need to manually assign each property.

using AutoMapper;

public class CarProfile : Profile
{
    public CarProfile()
    {
        // Map from Car (detailed class) to CarDto (simplified class)
        CreateMap<Car, CarDto>()
            .ForMember(dest => dest.Make, opt => opt.MapFrom(src => src.Make))
            .ForMember(dest => dest.Model, opt => opt.MapFrom(src => src.Model))
            .ForMember(dest => dest.Year, opt => opt.MapFrom(src => src.Year));
    }
}
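Setting it up once usually also means registering the profile with dependency injection and then asking for an IMapper where you need it. A minimal sketch, assuming the AutoMapper.Extensions.Microsoft.DependencyInjection package and the minimal hosting model:

// In Program.cs: scan the assembly that contains CarProfile for mapping profiles.
builder.Services.AddAutoMapper(typeof(CarProfile));

// Later, in a service or controller where an IMapper has been injected as _mapper:
CarDto dto = _mapper.Map<CarDto>(car);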

Popularity and Caution

While AutoMapper is like the go-to tool for easy loading and unloading of data, it’s important to use it wisely. Improper use can be like taking a long route… It might end up causing more traffic (database queries) than necessary. Make sure your data mappings are efficient to keep your application running smoothly.

Since AutoMapper changes the original query, there is no guarantee that the result is optimized. My recommendation is to use a SQL Server monitoring tool and to log your queries.
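One way to keep the generated SQL lean, sketched here under the assumption that you also use AutoMapper's queryable extensions, is ProjectTo, which pushes the mapping into the query so only the DTO columns are selected:

using AutoMapper.QueryableExtensions;

public async Task<List<CarDto>> GetCarDtosAsync(CancellationToken cancellationToken)
{
    // The mapping is translated into the SQL SELECT, so EF Core fetches only the
    // columns CarDto needs instead of materializing full Car entities first.
    return await _context.Cars
        .ProjectTo<CarDto>(_mapper.ConfigurationProvider)
        .ToListAsync(cancellationToken);
}

Either way, checking the generated SQL with the logging shown earlier is a good sanity check.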

Final Word

In this article, we explored how to expertly wield Entity Framework, the library that provides seamless database access in .NET applications. Let us review what we've learned:

  • Asynchronous implementation unlocks scalability: Employ await within database tasks to retain responsiveness irrespective of user volumes.
  • Query optimization via logging: Inspect SQL using .LogTo() to identify and remedy inefficient queries.
  • Seed data management streamlined: Resource files expedite populating databases, bypassing clunky initial loads.
  • Deferred execution economizes: Postpone queries until the results are actually needed, streamlining database interactions.
  • Faster reads through .AsNoTracking(): Retrieving data skips change tracking where unnecessary, particularly for displays.
  • Simplified transformation via AutoMapper: Automatically convert between object types, cleaning up your code, yet do so cautiously, as projection may sometimes slow down performance.

When utilizing AutoMapper projections, you must proceed with caution to avoid potential performance issues. A basic step in this process is to consistently inspect your data model and make sure it is designed for the best possible results.

Building a strong application goes beyond merely mapping queries. It includes a thorough review of your database schema and the relationships within it.

This careful review confirms that your Entity Framework (EF) queries and navigation properties work without any issues, leading to a smoother, more efficient application.

It also pays to give careful thought to the complexity and length of each query, and to check the generated SQL as part of that review.

Good Luck :-)

Originally published at https://www.admirmujkic.com.
