• Overview of the Data Model Scorecard®.
The Scorecard is a set of ten categories for validating a data model. We will explore best practices from the perspectives of both the modeller and reviewer, and you will be provided with a template to use on your current projects.
• Reviewing conceptual, logical, and physical data models.
The conceptual data model captures a business need within a well-defined scope; the logical data model captures an application-independent business solution; and the physical data model captures the technical solution by focusing on factors such as performance and security.
• Ensuring the model captures the requirements.
There is no one way to elicit requirements – rather, it requires knowing when to use particular elicitation techniques, such as interviewing and prototyping. We will focus on techniques to ensure the data model meets the business requirements. All of the techniques discussed are consistent with the Business Analysis Body of Knowledge (BABOK).
• Validating model scope.
We will focus on techniques for validating that the scope of the requirements matches the scope of the model. If the scope of the model is greater than the requirements, we have a situation known as “scope creep.” If the model scope is less than the requirements, we will be leaving information out of the resulting application.
• Following acceptable modelling principles.
If someone showed you a blueprint for a house, you would probably catch some obvious errors, such as a bedroom depicted in the middle of the kitchen! This category plays the same role for the data model – we are looking for obvious errors on the model. We will focus on techniques for building sound designs.
• Determining the optimal use of generic concepts.
Abstraction is a technique for redefining business terms into more generic concepts such as Party and Event. This module will explain abstraction and cover where it is most useful.
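The idea behind abstraction can be sketched in a few lines of code. This is a hypothetical illustration, not part of the seminar material: the specific business terms Customer and Employee are generalized into a single Party concept, with the original terms preserved as roles.

```python
from dataclasses import dataclass

@dataclass
class Party:
    """Generic concept covering people and organizations of interest."""
    party_id: int
    name: str

@dataclass
class PartyRole:
    """A role a Party plays; the business term survives as the role type."""
    party: Party
    role_type: str  # e.g. "Customer", "Employee"

# One person can play several roles without duplicating Party data.
alice = Party(party_id=1, name="Alice")
roles = [PartyRole(alice, "Customer"), PartyRole(alice, "Employee")]
role_types = {r.role_type for r in roles}
```

The trade-off, as the module explores, is that the generic structure is flexible but pushes business meaning out of the model and into the data.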
• Applying consistent naming standards.
Consistent naming standards will get your organization one step closer to a successful enterprise architecture. We will focus on techniques for applying naming standards.
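One practical way to apply a naming standard is to make it checkable. The sketch below assumes an example convention (singular UpperCamelCase entity names; lower_snake_case data element names ending in a class word); the actual standard would be your organization's own.

```python
import re

# Assumed convention for illustration only:
# entities are singular UpperCamelCase, data elements are
# lower_snake_case ending in a class word such as _name, _date, _code.
ENTITY_PATTERN = re.compile(r"^[A-Z][a-z]+(?:[A-Z][a-z]+)*$")
ELEMENT_PATTERN = re.compile(r"^[a-z]+(?:_[a-z]+)*_(name|date|amount|code|id)$")

def check_entity(name: str) -> bool:
    """Return True if an entity name follows the assumed convention."""
    return bool(ENTITY_PATTERN.match(name))

def check_element(name: str) -> bool:
    """Return True if a data element name follows the assumed convention."""
    return bool(ELEMENT_PATTERN.match(name))
```

A checker like this can run against the model's metadata export, turning the naming review from opinion into a repeatable test.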
• Arranging the model for maximum understanding.
A data model is a communication tool and if the model is difficult to read it can hamper communication. We will focus on techniques for arranging the entities, data elements, and relationships to maximize readability.
• Writing clear, correct, and consistent definitions.
Although definitions may not appear on the data model diagram itself, the definitions are integral to data model precision. We will focus on techniques for writing usable definitions.
• Fitting the model within an enterprise architecture.
A data modeller is not only responsible to the project for capturing the application requirements, but also responsible to the organization to ensure all terms and relationships are consistent within the larger framework of the enterprise data model. We will focus on techniques for ensuring the data model fits within a “big picture”.
• Comparing the metadata with the data.
A logical or physical model should not be considered complete until at least some analysis has been done on the data that will be loaded into the resulting data structures. We will focus on techniques for confirming that the data elements and their rules match reality. Does the data element Customer Last Name really contain the customer's last name, for example?
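The Customer Last Name question above can be turned into a simple profiling check. This is a minimal sketch under an assumed rule (last names are non-empty, alphabetic with common punctuation, at most 40 characters); the real rule would come from the model's metadata.

```python
import re

# Assumed metadata rule for Customer Last Name: starts with a letter,
# then letters, apostrophes, hyphens, or spaces, 40 characters maximum.
LAST_NAME_RULE = re.compile(r"^[A-Za-z][A-Za-z' -]{0,39}$")

def violates(value: str) -> bool:
    """Return True if a value breaks the assumed Customer Last Name rule."""
    return not LAST_NAME_RULE.match(value)

# Sample values pulled from the source data (illustrative only).
sample = ["Smith", "O'Brien", "", "123 Main St"]
mismatches = [v for v in sample if violates(v)]
```

Each mismatch is either bad data or a sign that the metadata (name, definition, or rules) does not describe what the element actually holds.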
• Modelling workshop.
To further reinforce the content of this seminar, we will build a series of data models and validate them with the Data Model Scorecard.