When we developers write data access code, what should we really worry about if the application is to scale well and handle heavy load?
Given the simple problem below, how would you solve it in a scalable manner?
1. ProjectResource is a class encapsulating the resources assigned to a Project.
2. Each resource assigned to a Project is a User.
3. Each User in the Project also has a ReportingHead and a ProjectManager, who are also instances of User.
4. Finally, there is a Project class containing the project details.
Legend of classes used:
- User
- Project
- ProjectResource
Table Diagram
ProjectResource
- ResourceId
- ProjectId
- UserId
- ReportingHead
- ProjectManager
Class Diagram
ProjectResource
- ResourceId : String / Guid
- Project : Project
- User : User
- ReportingHead : User
- ProjectManager : User
Note:
All the user information is stored in the User table.
All the project information is stored in the Project table.
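For reference, here is a minimal C# sketch of these classes as described by the diagrams (.NET 2.0 style; the Name and ProjectName properties are only assumed placeholders, not part of the original model):

```csharp
using System;

// Minimal sketch of the entities above. ResourceId is shown as a Guid;
// Name / ProjectName are assumed placeholder properties.
public class User
{
    private Guid _userId;
    private string _name;
    public Guid UserId { get { return _userId; } set { _userId = value; } }
    public string Name { get { return _name; } set { _name = value; } }
}

public class Project
{
    private Guid _projectId;
    private string _projectName;
    public Guid ProjectId { get { return _projectId; } set { _projectId = value; } }
    public string ProjectName { get { return _projectName; } set { _projectName = value; } }
}

public class ProjectResource
{
    private Guid _resourceId;
    private Project _project;
    private User _user;
    private User _reportingHead;
    private User _projectManager;
    public Guid ResourceId { get { return _resourceId; } set { _resourceId = value; } }
    public Project Project { get { return _project; } set { _project = value; } }
    public User User { get { return _user; } set { _user = value; } }
    public User ReportingHead { get { return _reportingHead; } set { _reportingHead = value; } }
    public User ProjectManager { get { return _projectManager; } set { _projectManager = value; } }
}
```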
Here's the problem:
When the application requests the resources in a Project, the operations below are followed:
1. First, get the ProjectResource records for the Project.
2. Get the UserId and make a request (using the Users DAL) to get the User instance.
3. Get the ProjectId and make a request (using the Projects DAL) to get the Project information.
4. Finally, assign the User and Project to the instance of ProjectResource.
Clearly you can see that 3 DB calls are made here to populate a single ProjectResource, but the concerns and who manages which objects are clearly defined. This is the way I had planned to do it, since connection pooling is also available in SQL Server and ADO.NET (a rough sketch of this per-call approach is below).
There is also another way, where all the details are retrieved in a single hit using table inner joins and then populated.
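A comparable sketch of that single-hit variant, assuming one inner-join query (the SQL text, the column aliases, and the Name/ProjectName columns are my assumptions):

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient;

public static class ProjectResourceJoinDal
{
    // Hedged sketch: one query with INNER JOINs, populated from a single reader.
    public static List<ProjectResource> GetResources(Guid projectId, string connectionString)
    {
        string sql = @"
            SELECT pr.ResourceId,
                   p.ProjectId, p.ProjectName,
                   u.UserId, u.Name AS UserName,
                   rh.UserId AS ReportingHeadId, rh.Name AS ReportingHeadName,
                   pm.UserId AS ProjectManagerId, pm.Name AS ProjectManagerName
            FROM   ProjectResource pr
                   INNER JOIN Project p ON p.ProjectId = pr.ProjectId
                   INNER JOIN [User] u  ON u.UserId = pr.UserId
                   INNER JOIN [User] rh ON rh.UserId = pr.ReportingHead
                   INNER JOIN [User] pm ON pm.UserId = pr.ProjectManager
            WHERE  pr.ProjectId = @ProjectId";

        List<ProjectResource> results = new List<ProjectResource>();
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@ProjectId", projectId);
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    ProjectResource resource = new ProjectResource();
                    resource.ResourceId = (Guid)reader["ResourceId"];

                    Project project = new Project();
                    project.ProjectId = (Guid)reader["ProjectId"];
                    project.ProjectName = (string)reader["ProjectName"];
                    resource.Project = project;

                    User user = new User();
                    user.UserId = (Guid)reader["UserId"];
                    user.Name = (string)reader["UserName"];
                    resource.User = user;

                    // ReportingHead and ProjectManager map the same way
                    // from their aliased columns.

                    results.Add(resource);
                }
            }
        }
        return results;
    }
}
```

The trade-off here is that the Project and User columns come back duplicated on every ProjectResource row.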
Which way should I really take, and why?
Extras:
.NET 2.0, ASP.NET 2.0, C#, SQL Server 2005; the DB is on the same machine that hosts the application.
For best performance and scalability, you should minimize the number of round-trips to the DB. To prove that to yourself, just run some benchmarks; it becomes clear very quickly.
One approach to a single round-trip is to use joins. Another is to return multiple result sets. The latter can be helpful in eliminating possible duplicate data.
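For example, a single batch returning several result sets can be read with SqlDataReader.NextResult(); the SQL text, column names, and mapping placeholders below are only illustrative assumptions:

```csharp
using System;
using System.Data.SqlClient;

public static class ProjectResourceBatchDal
{
    // Hedged sketch: one round-trip returning several result sets, so the Project
    // and User rows are not duplicated per ProjectResource row.
    public static void LoadProjectResources(Guid projectId, string connectionString)
    {
        string batch = @"
            SELECT ResourceId, ProjectId, UserId, ReportingHead, ProjectManager
            FROM   ProjectResource WHERE ProjectId = @ProjectId;

            SELECT ProjectId, ProjectName FROM Project WHERE ProjectId = @ProjectId;

            SELECT DISTINCT u.UserId, u.Name
            FROM   [User] u
                   INNER JOIN ProjectResource pr
                       ON u.UserId IN (pr.UserId, pr.ReportingHead, pr.ProjectManager)
            WHERE  pr.ProjectId = @ProjectId;";

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(batch, conn))
        {
            cmd.Parameters.AddWithValue("@ProjectId", projectId);
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // Result set 1: the raw ProjectResource rows.
                while (reader.Read()) { /* buffer ResourceId plus the three user ids */ }

                // Result set 2: the Project, returned once instead of once per row.
                reader.NextResult();
                if (reader.Read()) { /* map the single Project instance */ }

                // Result set 3: each distinct User exactly once; wire them to the
                // buffered resources by UserId.
                reader.NextResult();
                while (reader.Read()) { /* map Users and assign them */ }
            }
        }
    }
}
```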