Reality Data Modeling

Reality data modeling in the context of enterprise line-of-business application development.

As components of technology advance, engineers find dealing with legacy versions more and more frustrating. The birth of cloud computing allowed (or forced) developers to look at the architecture of their work in terms of unlimited scalability. Having gone through that transition many years ago, I was frustrated with the data model pattern the development community had been using for so long. And while new persistent storage mediums and methodologies have been growing in popularity, a core problem remains.

Eventually, using the current modeling pattern, your database will need to be extended beyond its original intent. This makes for messy and unmanageable platforms. While big data platforms allow for real-time entity definitions, utilizing this type of data is cumbersome and not ideal for a common data silo. The core of the problem lies in the relationships between entities. Extending a single property on a table is not difficult. Hell, if you have a single application, or even a single API, it could be downright easy. Compare that to changing a one-to-one single property to a many-to-many relationship. Much more difficult at every level.

This is the problem I addressed with the Reality Modeling data design pattern. The basic idea is that you are only allowed to model reality, not business entities.

THERE IS NO ACCOUNT
In the business world they speak of accounts, clients, leads, and orders. An account might look like this:

“Standard” Table
Accounts
• AccountId
• FirstName
• LastName
• HomePhone
• CellPhone
• CompanyName
• Address1
• Address2
• SubRegion
• Region
• PostalCode

“Normalized” Tables
Accounts
• AccountId
• ContactId
• AddressId

Contacts
• ContactId
• FirstName
• LastName
• CompanyName

Address
• AddressId

God Properties
The issue in both cases is the direct relationships between the entities. Only a many-to-many relationship type will give us the flexibility we need for unlimited adaptation. Does that mean every relationship must be managed through a link table? In a perfect world, yes. An entity's own properties should be limited to God Properties. These are properties given to entities by nature (or God) and maintain a one-to-one relationship. Here are some examples of reality models.

Reality Tables
People
• PersonId
• FirstName
• LastName
• DisplayName

Organizations
• OrganizationId
• Name

Locations
• LocationId
• Address1
• Address2
• SubRegion
• Region
• PostalCode

Associations
The Reality Model uses an advanced concept of the link table called an Association. Associations define the relationship between two entities and will contain properties that result from the relationship.

Association Tables
Person.Organizations
• AssociationId
• PersonId
• OrganizationId
• IsDefaultOrganization
• AssociationType (Work, Affiliate, Member, etc.)

Person.Locations
• AssociationId
• PersonId
• LocationId
• IsDefaultLocation
• AssociationType (Home, Rental, Billing)
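
For illustration, here is how the reality entities and one association might look as plain C# classes. The class and property types are my own choices; the pattern itself does not prescribe them.

using System;

// Illustrative POCOs only; types and names are assumptions, not part of the pattern.
public class Person
{
    public Guid PersonId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string DisplayName { get; set; }
}

public class Organization
{
    public Guid OrganizationId { get; set; }
    public string Name { get; set; }
}

// The association owns the properties that only exist because of the
// relationship (default flag, relationship type), keeping the entities clean.
public class PersonOrganization
{
    public Guid AssociationId { get; set; }
    public Guid PersonId { get; set; }
    public Guid OrganizationId { get; set; }
    public bool IsDefaultOrganization { get; set; }
    public string AssociationType { get; set; } // Work, Affiliate, Member, etc.
}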

Conclusion
My first reaction when this concept went up on the whiteboard two years ago was: “Man, that is going to be hard to deal with on the back-end.” After having used it for a year and a half, I can’t imagine living without the flexibility of the design. When combined with an agile development method, it makes all the sense in the world. At least to me.



Link to PowerPoint for Hypermedia Talk

Hypermedia APIs and the History of the Internet

I gave a presentation to the Bellingham .NET group on Hypermedia. Here is a link to the PowerPoint content of the presentation:

DOWNLOAD

The REST _connection

Today I created a simple REST connection class to standardize all REST requests from a client running on a server. We are building our REST.SDKs using this client framework. In addition to the supporting Morpheus model classes, the PF.REST.Client contains a Connection Class as defined below.

PF.REST.Client.Connection

You must create an instance by supplying API Credentials.

public Connection(string apiIdentity, string apiKey)

All authorization calls are handled automatically. A typical workflow looks like this:
1. Request top level domain to obtain valid Links (no authorization)
2. Request tokens resource (via link from 1) using BASIC authorization, retrieving a bearer token
3. Request top level domain to obtain valid Links using BEARER authorization
4. All future requests are made using BEARER authorization until token expiration.

There is one public method: SendRequest. There are 6 overloads.

public string SendRequest(Request request)
public string SendRequest(Link link, params object[] formatValues)
public string SendRequest(Link link, string body, params object[] formatValues)

public T SendRequest<T>(Request request)
public T SendRequest<T>(Link link, params object[] formatValues)
public T SendRequest<T>(Link link, string body, params object[] formatValues)

The return values are simple: you either get strings from the Response.Content or a generic type of your choosing (assuming the content and content-type can be deserialized into the object).

The Request argument is of type PF.REST.Client.Request. This class is basically a Link class with a body.
The Link argument is of type PF.REST.Client.Morpheus.Link as defined below.
The Body is the string to be used in the request body.
And the formatValues allow you to pass in any type for link merging.

public class Link
{
    public string Href { get; set; }
    public string Rel { get; set; }
    public string Class { get; set; }
    public string Method { get; set; }
    public bool Templated { get; set; }
    public string ContentType { get; set; }
}
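
To tie the pieces together, a minimal usage sketch might look like the following. The credentials and rel name are placeholders, Member is a hypothetical model class, and Links is the same link-cache helper used in the examples further down.

// Illustrative usage only; credentials, rel names, and Member are placeholders.
var connection = new PF.REST.Client.Connection("my-api-identity", "my-api-key");

// Raw string content back from a templated link.
PF.REST.Client.Morpheus.Link membersLink = Links.GetLink("members");
string json = connection.SendRequest(membersLink, "network-a", "tenant-123");

// Or let the Connection deserialize the content into a type of your choosing.
List<Member> members = connection.SendRequest<List<Member>>(membersLink, "network-a", "tenant-123");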

Link Merging
A typical Href from Morpheus looks like this:

http://domain.com/applications/{network}/{tennantkey}/members?membername={membername}

The REST.Client will attempt to merge the formatValues object[] with the data-tags located in the link. Objects are explored using reflection for matching property and field names. If a formatValue is a simple type, such as a string or int, it will fill the remaining tokens in order.
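
Here is a rough sketch of what that merge might look like internally; this is my approximation, not the actual PF.REST.Client code.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

// Approximation of templated-link merging: object arguments are probed via
// reflection for matching property/field names, and simple values fill the
// remaining {tokens} in the order they appear.
public static class LinkMerger
{
    public static string Merge(string href, params object[] formatValues)
    {
        List<string> tokens = Regex.Matches(href, @"\{(\w+)\}")
            .Cast<Match>()
            .Select(m => m.Groups[1].Value)
            .ToList();

        var positional = new Queue<object>();
        var named = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

        foreach (object value in formatValues ?? new object[0])
        {
            if (value == null) continue;

            if (value is string || value.GetType().IsPrimitive)
            {
                positional.Enqueue(value);
                continue;
            }

            foreach (var property in value.GetType().GetProperties())
                named[property.Name] = Convert.ToString(property.GetValue(value, null));
            foreach (var field in value.GetType().GetFields())
                named[field.Name] = Convert.ToString(field.GetValue(value));
        }

        foreach (string token in tokens)
        {
            string replacement;
            if (!named.TryGetValue(token, out replacement))
                replacement = positional.Count > 0 ? Convert.ToString(positional.Dequeue()) : string.Empty;

            href = href.Replace("{" + token + "}", Uri.EscapeDataString(replacement));
        }

        return href;
    }
}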

Example #1: Simple

public string[] GetAllRoles()
{
    Link link = Links.GetLink("approles");
    List<string> roles = _connection.SendRequest<List<string>>(link, Config.Network);
    return roles.ToArray();
}

Example #2: Slick

public bool ChangePassword(string userName, string oldPassword, string newPassword)
{
    Link link = Links.GetLink("changepassword");

    string body = Newtonsoft.Json.JsonConvert.SerializeObject(new
    {
        UserName = userName,
        NewPassword = newPassword,
        OldPassword = oldPassword
    });

    string response = _connection.SendRequest(link, body, userName);
    dynamic responseObject = JObject.Parse(response);

    if (responseObject != null)
    {
        return responseObject.success;
    }
    else
    {
        return false;
    }
}

Errors:
If a server error occurs, our REST servers will deliver a proper status code and then serialize the Exception into the content of the response. The Connection class will deserialize the error and throw a new exception with the server's error attached as the InnerException.
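
On the calling side that means a failed request surfaces like any other exception; a minimal sketch, assuming the wrapping behavior described above:

// Sketch: the server's deserialized exception arrives as the InnerException.
try
{
    string response = _connection.SendRequest(link, body, userName);
}
catch (Exception ex)
{
    Console.WriteLine("Request failed: " + ex.Message);
    if (ex.InnerException != null)
        Console.WriteLine("Server reported: " + ex.InnerException.Message);
    throw;
}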

Hypermedia Types for Morpheus

Author: Joseph Kowalski
Last update: 5/21/2013
Status: Peer Review

Description: Morpheus-enabled projects use a combination of hypermedia APIs and adaptive user interfaces. The three main development concepts needed for a Morpheus-enabled client include: the Morpheus development pattern, Morpheus.js, and a hypermedia API which serves the media type listed below. Developers may embrace each of these concepts individually or as a whole. This document defines the media type used by a Morpheus client.

The media type defined here is designed to solve several issues encountered with emerging line of business (LOB) applications built on hypermedia APIs.

1. Reduce or eliminate the need for unique media types for every independent LOB project.
2. Deliver response data which allows clients to validate resources prior to future requests and allows for dynamic, client-side user validation.
3. Define a generic methodology which promotes adaptive UI design, delivering content based on three factors: the user, the application, and the client device.

Hypermedia Content Type
application/morpheus+json?format=meta|data|all(default)

var morpheus = {
    version:"1.0",
    links:[
        {href:"uri", rel:"self", class:"smi.account"},
        {href:"uri", rel:"next", class:"smi.account"}
    ],
    meta:[
        {class:"smi.account",
        readOnly:false,
        properties:[{
            name:"accountName",
            dtype:"string",
            required:true,
            readOnly:false,
            maxLength:100,
            minLength:3,
            scale:0,
            precision:0,
            minValue:0,
            maxValue:0,
            defaultValue:"",
            valuesLinkRel:""}
        ],
        links:[{rel:"update", href:"uri/{id}", method:"PATCH", templated:true}]}
    ],
    data:{
        ctype:"resource|collection"
    }
}

var resource = {
    ctype:"resource",
    class:"smi.account",
    id:1001,
    accountName:"Crichton, John",
    firstName:"John",
    lastName:"Crichton"
}

var collection = {
    ctype:"collection",
    class:"smi.account",
    count:1234,
    pageIndex:0,
    pageSize:12,
    items:[{id:1001,
        accountName:"Crichton, John",
        firstName:"John",
        lastName:"Crichton"}]
}

Asynchronous Web.API requests

When I noticed that Web.API uses Task<> in its pipeline, I assumed that we could do things like log a request asynchronously from the HttpRequestMessage. There are even examples of logging all over the net, but they're not asynchronous. Well, they are, but they're not. The Task<> in the Web.API pipeline is a mechanism to handle the HTTP request path and then catch the HTTP response path. It's a clever mechanism, but it takes place on a single thread. There are two different implementations you might find: one example uses a lambda, the second uses the await keyword, which basically tells the compiler to write the lambda for us.

Build our own
I still needed an asynchronous task handler, and not just for logging. Now that we have our model exposed through our REST API, we need to add some service resources. Many of these requests will process time-consuming operations (> 500 ms). Because we will be building any number and type of applications on this access layer, I want HTTP requests to be fast, regardless of the time it takes to complete the process. So our approach is to use the request to queue up a Task<> on a separate thread and return a response which includes a taskID. We will also need to build a v1/Tasks resource to handle task queries.

AsyncTaskService
I have created an AsyncTaskService class to handle the Task creation. It is a simple static method which receives the HttpRequestMessage and an Action<object> parameter. First we create a GUID.Comb from our Globals helper class to use as the taskID. Then we use the Task.Factory.StartNew() method, passing in the Action<object>, and move straight to the anonymous type declaration. I want to send the taskID back to the client in the response using JSON and JavaScript naming standards, which is why I named the taskID variable theTaskID. Finally we return the newly created HttpResponseMessage using status code 202 (Accepted: received but not yet processed) with the JSON content.

[Code screenshot: AsyncTaskService]
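
Since the original code lives only in the screenshot, here is my reconstruction of the idea; Guid.NewGuid() stands in for the Globals GUID.Comb helper, and the exact signatures are guesses.

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

// Reconstruction of the described service; not the original code.
public static class AsyncTaskService
{
    public static HttpResponseMessage CreateTask(HttpRequestMessage request, Action<object> work)
    {
        // A sequential ("comb") GUID used as the task identifier.
        // Guid.NewGuid() stands in for the Globals GUID.Comb helper.
        Guid theTaskID = Guid.NewGuid();

        // Queue the work on a separate thread so the response can return immediately.
        Task.Factory.StartNew(work, theTaskID);

        // Hand the task id back in JSON/JavaScript-friendly casing.
        return request.CreateResponse(HttpStatusCode.Accepted, new { theTaskID });
    }
}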

Using the AsyncTask Service
Now that we can create truly asynchronous tasks, let's take a look at using the service. Notice both comments in the code below: any code placed there will execute asynchronously to the main request/response pipeline. This example will simply log the request and response. We are not even using the return value here because the client does not need to know that this process is running.

[Code screenshot: queuing a fire-and-forget logging task]
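
With the screenshot missing, the fire-and-forget usage would look roughly like this; LogService is a hypothetical logger and the action shape is my guess.

// Fire-and-forget sketch: the task id is ignored because the client
// doesn't need to know the logging task exists. LogService is hypothetical.
public HttpResponseMessage Get()
{
    HttpRequestMessage request = Request;

    AsyncTaskService.CreateTask(request, state =>
    {
        // Anything in here runs asynchronously to the request/response pipeline.
        LogService.LogRequest(request);
    });

    // The normal response goes out immediately.
    return request.CreateResponse(HttpStatusCode.OK);
}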

We can get the HttpResponseMessage and return it to the client using the code below. Once the client receives the response, they can extract the taskID and query the task status using v1/tasks/{id}.

[Code screenshot: returning the taskID response to the client]
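
And returning the 202 response built by the service hands theTaskID straight to the client. Again a sketch; RunLongImport is a placeholder for the real work.

// Sketch: return the response from AsyncTaskService so the client
// receives theTaskID and can poll v1/tasks/{id}. RunLongImport is a placeholder.
public HttpResponseMessage Post()
{
    return AsyncTaskService.CreateTask(Request, state =>
    {
        RunLongImport(); // the time-consuming work (> 500 ms)
    });
}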

Mr. Smith Goes to Microsoft

We all live in our own bubble. Our perception of the universe is filtered by our individual experiences. As open-minded and knowledgeable as we try to be, we must reach outside our natural surroundings to gain additional perspective. Simply put, you don't know what you don't know. This is why I jumped at the opportunity to visit Microsoft for a day when Glenn Block extended the offer. And it took me a minute to realize that this was also Glenn's motivation for rolling out the red carpet.

Who's who
In less than 8 hours I met with 9 technical giants. Given the current direction of our company this could not have been a better combination of talent and knowledge. I have written about the lack of passion that most developers seem to have. Even though my time with them was short, I got the feeling that they all have a passion for what they do.

Glenn Block – Author, speaker, blogger, and engineer. Glenn was my tour guide during my visit and is a great resource for a handful of technologies.

Henrik Frystyk Nielsen – One of the principal authors of HTTP! Henrik is currently working with the Web.API team.

Conner Cunningham – Principal Software Architect for SQL Azure.

Madhan Arumugam – Lead Program Manager, SQL Server Engine

Corey Sanders – Principal Lead Program Manager at Microsoft on the IaaS team

Josh Twist – Program Manager at Microsoft on the Mobile Services team

Vittorio Bertocci – Principal Program Manager at Microsoft on the Access Control Services team

Levi Broderick – Developer on the ASP.NET team, focusing on security

Justin Beckwith – Program Manager at Microsoft Corporation on the Web Matrix team

Microsoft’s Redmond Campus
This was my first visit to Microsoft, and while I arrived in Redmond with time to spare, homing in on building 35 was like chasing Brigadoon. Apparently there are over 80 buildings on campus with nearly 10 million square feet of office space, housing a workforce of 50,000 people! The campus manages a fleet of shuttles which move people from building to building during the work day. And the commons area supports several dozen independently owned shops and restaurants.

Ok, so a Microsoft employee’s bubble is a little larger than most, but I can’t help but think that teams form bubbles of their own until they need to intersect. Having access to that many resources could further remove you from your customers, so I see why technical teams could use direct feedback from respected clients once in a while.

Building 35, where I started my day, was largely conference rooms. Room after room of conference tables, whiteboards, cork boards, and projectors. While it wasn't difficult to reserve a room, we were kicked out at the top of each hour to make room for the next meeting. Between musical conference rooms, shuttle rides, and the vast quantities of people, I can now say: I've seen a human version of a beehive.

Meeting #1: SQL Azure Issues
So the main reason for my visit was to talk to the SQL Azure team about the issues we are having with timeouts. Conner and Madhan were very open and helpful in getting me to understand one simple truth: timeouts are a fact of the platform. How often they occur and how long they last is improving all the time. Even several 30-second timeouts a day are still within the constraints of the SLA. What this really means is that apps built on SQL Azure should be built to deal with these issues. Not an easy feat to be sure, but I look at it like this: we have traded occasional, severely disruptive hardware failures for more common, less disruptive, short-term timeouts. At the moment all SQL databases have the same class of service. Conner suggested that SQL Azure will eventually have enterprise-level solutions. He was also sure to make the point: “spread it like peanut butter”, meaning we might want to split our databases into even smaller chunks.

The good news is that Virtual Machines (VMs) hosting SQL Server will reach General Availability (GA) at the same time VMs themselves are released. In addition, the physical storage for these databases is currently protected by the GA agreement. This was my primary fear when pondering SQL Server on an Azure VM vs. SQL Azure. Madhan was quick to point out that this is really a temporary solution and that, ultimately, we should be on SQL Azure.

Meeting #2: Azure IaaS
We had a quick phone meeting with Corey from the IaaS team. I only had a few questions, but I heard “Non-Disclosure Agreement” (NDA) several times in the conversation, so my lips are sealed. One good thing I can report is that I have seen dramatic improvement in Azure IaaS over the last six months. That being said, Corey told me they will be releasing even more significant changes “soon” and that the GA environment will have been operational a few months before they really call it GA. Sweet.

Meeting #3: Mobile Services
I had a great conversation with Josh from Mobile Services. He was fast, detailed and to the point. There are a few services that we might benefit from but their primary objective is to take away the friction of developing mobile applications. Out of the box the framework supports data storage, authentication, and push messaging just to name a few.

Two new mobile technologies I learned about were PhoneGap and Xamarin. Both are designed for “native application” development to get a mobile app offering to each marketplace quickly.

Meeting #4: REST
Between meetings Glenn and I talked about a handful of topics. The most exciting and educational for me was our discussion around REST. We spent over 2 hours talking about what REST really is, what it means for real world companies like mine, and what the future of web development looks like. This post is already long enough, so I’ll be addressing these topics in another blog entry.

Meeting #5: Web.API
This was a quick meeting with Henrik. For the most part, it was just praise for the simplicity and elegance of Web.API. Glenn, Henrik, and the Web.API team have delivered a strongly typed HTTP framework: HttpRequest, HttpResponse, and an extensible management pipeline. A simple and effective .NET API from one of the principal authors of HTTP itself. Simply genius.

Meeting #6: Security
Vittorio and Levi helped me to understand what Microsoft has in store for Azure security services. While not yet available, it promises to be a great base for single sign-on systems. Based on Active Directory, Azure ACS could be what we are looking for. Unfortunately their timeline does not fit with our own, so it looks like I'm going to have to build what we need. Levi suggested that we DO NOT use ASP.NET's forms authentication membership provider as the starting point. Vittorio, sounding much like Giotto from the movie Cars, tells me: “You don't know what you want. Tell me what you need, and I will tell you what you want.” A great conversation with two very smart security people.

Meeting #7: Web Matrix
Web Matrix is a free and easy to use lightweight web development tool. If you are familiar with Visual Studio, you may not see a benefit. If you’re a web developer who is not comfortable with VS and its complexity, Web Matrix may be the droid you’re looking for.

All in all it was an amazing day with some amazing people. I received confirmation on much of what I already knew (always helpful), picked up a handful of new insights, and extended my perception bubble. My world was a little larger on the way home. And of course, with my head spinning, I missed my I-5 exit and ended up taking a two-hour detour. No worries though, because I think I have a new vision for Morpheus.

Web.API Base Controller for REST: One Ring to Rule Them All

As our company grows, the demand for a full-fledged API has increased rapidly. As we move up the long tail, vendors, corporations, and users want their services to work with our clients' data. Sure, we have a few SOAP calls, but not a complete resource library. Additionally, after the fall of Silverlight, we need a more streamlined data access methodology. After reading about the ever-growing popularity of REST, we decided to go all in. I went from zero to REST in an afternoon with Apigee's great video outlining what REST standards are and/or what they should be.

NOTE: Apigee's development video is a great launching point, but some of their general methodology is based in Ruby and breaks the HTTP spec.

Our REST API (in beta) will be the data access point not only for vendors and clients, but for ourselves as well. Using jQuery, CSS3, breeze.js, and a handful of other new JavaScript technologies, we intend to rewrite our mobile application as a Rich Internet Application (can we still say that?). I think John Papa is calling them SPAs.

After a few false starts (don't use the MVC3 REST template) I discovered Web.API. MVC was designed to abstract away web protocols, which is exactly the opposite of what you want in a REST project. Web.API embraces HTTP and 're-stracts' the protocol, while providing developers with a very extensible, strongly typed library.

On day two, I developed a generic base controller to handle all resource requests. This way, my team only has to worry about their resource implementation logic. We have also added Basic authentication, LOB caching, logging, IP restriction rules, and used reflection to complete all self-validation and self-documentation for our developer site. We will be adding OAuth2 after we complete our Authentication Server project. This post will focus on the ControllerBase class and the GET request.

The Goods
The abstract base class utilizes generics and inherits from the ApiController class.

The routing system sends the request to the proper resource controller, which inherits from the base controller. A GET request is then delivered to the Get method of the base class.

We use the [Authorize] attribute to ensure all calls have been properly authenticated. We return an object so that we can control the response, especially in the event of an error. I have created a static method in our Response service class to package all errors into the requested format (defaulting to json) and return them to the client. Assuming all is well, the Get method packages the method parameters (and some additional framework data) into a GetCollectionEventArgs class. In this case, 'Get' refers to the HTTP verb; we also have PostEventArgs, PutEventArgs, and so on. From here, we call the abstract method GetCollection().

[Code screenshots: the ControllerBase class and its Get method]
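
The screenshots are missing from this copy, so here is my approximation of the shape described above. GetCollectionEventArgs, IDataService, and PagedCollection are the post's own names, but every signature below is a guess; PagedCollection is sketched in the next section, and the error packaging is simplified to an anonymous object in place of the ResponseService call.

using System;
using System.Web.Http;

// Approximation only; the original lives in the screenshots.
public class GetCollectionEventArgs
{
    public int PageIndex { get; set; }
    public int PageSize { get; set; }
    // ...plus additional framework data (caller identity, requested format, etc.)
}

public interface IDataService
{
    PagedCollection<T> GetCollection<T>(GetCollectionEventArgs args);
}

public abstract class ControllerBase<T> : ApiController
{
    // Loaded and controlled by the base class (resolution omitted in this sketch).
    protected IDataService DataService { get; set; }

    [Authorize]
    public object Get(int pageIndex = 0, int pageSize = 25)
    {
        try
        {
            var args = new GetCollectionEventArgs { PageIndex = pageIndex, PageSize = pageSize };
            return GetCollection(args);
        }
        catch (Exception ex)
        {
            // Stand-in for the ResponseService error packaging (json by default).
            return new { success = false, error = ex.Message };
        }
    }

    // Each resource controller supplies its own implementation.
    protected abstract PagedCollection<T> GetCollection(GetCollectionEventArgs args);
}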

PagedCollection

The GetCollection() method returns a PagedCollection class which is serialized by the ResponseService. NOTE: the json object we are returning IS NOT THE COLLECTION. It has a list-type structure for paging: we give the client the total count of possible items for the request along with the collection itself. This is great for extensibility; we can now add properties to the return as the development community sees reason, and not break the interface.

[Code screenshot: the PagedCollection class]
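
Again the screenshot is missing; based on the description, the wrapper probably looks something like this (property names are my guesses):

using System.Collections.Generic;

// Guessed shape: the response wraps the items instead of returning the raw
// list, so the total count travels with the page and new properties can be
// added later without breaking the interface.
public class PagedCollection<T>
{
    public int TotalCount { get; set; }   // total possible items for the request
    public int PageIndex { get; set; }
    public int PageSize { get; set; }
    public List<T> Items { get; set; }    // the actual collection
}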

The resource controller can now inherit from ControllerBase, passing the resource type it will be implementing. Here we implement the GetCollection method from the ControllerBase, receiving the event args and then calling our IDataService, which is also controlled and loaded in the ControllerBase.

[Code screenshot: a resource controller implementing GetCollection]
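
A concrete resource controller might then look like this; Account is an illustrative resource type, not one from the original post.

// Sketch of a resource controller built on the base class above.
public class Account
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class AccountsController : ControllerBase<Account>
{
    protected override PagedCollection<Account> GetCollection(GetCollectionEventArgs args)
    {
        // The IDataService instance is loaded and controlled by ControllerBase.
        return DataService.GetCollection<Account>(args);
    }
}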

The rest of the ControllerBase class implements the remaining verbs in much the same way as the GET method. The POST response returns a single json value, the identity of the newly inserted record; 'cause, you know, that's kinda important. You can have that one. Feel free to make it a standard.