My DateTime serialization practices

on September 4th, 2013 at 5:32am 0 responses

Show me your date

When sending DateTimes as strings across the wire, it is quite useful to use ISO 8601 formatting. It holds all the required information (including the timezone offset), it makes it easy to infer the Kind of the DateTime (UTC, Local or Unspecified), it is widely and commonly used across most (if not all) platforms, and, if omitting milliseconds, it is lexicographically ordered, which makes it useful for indexed storage as well (for example on the filesystem, or as keys in string-based key-value stores such as Azure Table Storage).

During the many times I have had to deal with serialization implementations, whether working on one of the many web frameworks I've been involved with or on serialization libraries, I keep having to remember what I did last time, so this post is to serve as a future reminder to self on how I want it done.

Serializing a DateTime to a string:

string Serialize(DateTime value) {
    const string ISO8601Format = "o"; // this is a terrific little gem!
    return value.ToString(ISO8601Format);
}

Did you notice the “o” format specifier? It is much better than typing "yyyy'-'MM'-'dd'T'HH':'mm':'ss.fffffffK", which is what I had been doing until recently.

For the lexicographically ordered version, we only go as far as seconds, and make sure to force the input DateTime to UTC (otherwise ordering is difficult to maintain…):

string SerializeOrdered(DateTime value) {
    const string OrderedUniversalFormat = "u";
    return value.ToUniversalTime().ToString(OrderedUniversalFormat);
}

 

What’s the deal with Kind and Timezone offset?

The ‘K’ specifier will render the following:

  1. If the input datetime is of UTC kind, it will render the letter Z
  2. If the input datetime is of Unspecified kind, it will render … nothing
  3. If the input datetime is of Local kind, it will render + (or -) the offset

The interesting bit here is that there is a difference between 2013-09-03T10:00:00-00:00 and 2013-09-03T10:00:00Z. They refer to the same point in time; however, the former refers to the Local time at an offset of 0 (e.g. London, UK at winter time – a lovely picture), while the latter refers to UTC. This knowledge allows us to infer the actual DateTime Kind when parsing the result. How do we do that, you may rightfully ask?
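
To make the three cases concrete, here is a minimal sketch (the exact Local offset depends on the machine; -07:00 below is just an example):

var utc = new DateTime(2013, 9, 3, 10, 0, 0, DateTimeKind.Utc);
var unspecified = new DateTime(2013, 9, 3, 10, 0, 0, DateTimeKind.Unspecified);
var local = new DateTime(2013, 9, 3, 10, 0, 0, DateTimeKind.Local);

Console.WriteLine(utc.ToString("o"));         // 2013-09-03T10:00:00.0000000Z
Console.WriteLine(unspecified.ToString("o")); // 2013-09-03T10:00:00.0000000
Console.WriteLine(local.ToString("o"));       // 2013-09-03T10:00:00.0000000-07:00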

Deserializing DateTimes from a string:

DateTime DeserializeDateTime(string value) {
    return DateTime.Parse(value, null, DateTimeStyles.RoundtripKind);
}

That’s it. The trick is in the DateTimeStyles.RoundtripKind bit. I keep forgetting that, and this (and the “o” specifier) is the reason for this post.

When deserializing ordered DateTimes, the former deserialization code would end up with a DateTime of Unspecified kind, so it is better to do this:

DateTime DeserializeOrderedDateTime(string value) {
    return DateTime.Parse(value, null, DateTimeStyles.AssumeUniversal).ToUniversalTime();
}

Two years and one baby later

on October 11th, 2012 at 5:30pm 0 responses

Two years ago, my Noam (who was 1y and a wee bit back then) got caught on camera doing this:

IMG_2370
(from http://kenegozi.com/blog/2010/10/07/baby-smash-on-big-screen)

 

And now, Alma (8 months) is doing that:

alma_smash

 

smaller screen, shorter hair, the rest is quite the same

Excel and Mobile Services take 2

on September 11th, 2012 at 7:51am 0 responses

I have updated the code from my recent post, Using Excel to edit Azure Mobile Service table data, to support Insert and Delete.

In order to delete a record, you need to have a cell from that record selected, then click the Delete icon on the Add-Ins ribbon menu. The only requirement is that the first cell of that row actually has a value.

In order to insert a record, you need to set the fields, keep the id empty, and then click the Add (plus) icon on the Add-Ins ribbon menu while a cell in that new record’s row is selected.

Using Excel to edit Azure Mobile Service table data

on September 7th, 2012 at 8:31am 4 responses

When building apps for Mobile Services, you often need to manipulate the data stored in your table from an admin point of view.

The management portal of Azure does let you browse your data, but not edit it.

A few days back, Amit showed on his blog a way to create a simple data manager as a Windows 8 Application, using the official SDK.

I, however, like the UI of Excel for data editing, so I wanted to create a simple editor that taps into Excel’s mechanisms and uses the unofficial SDK to communicate with the mobile service.

The results can be seen in the following recording (you’d want to watch it in HD):

 

How?

First, I created an Excel Add-In project in VS2012. Then I grabbed the latest SDK file from GitHub and added it to the project. Lastly, I changed the Add-In code to look like the gist (you’d need to set your app url and key), and ran the project.
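
For flavor, here is roughly what the worksheet-filling part of such an Add-In looks like. This is a sketch only – the names are illustrative, and the real code is in the gist:

using System.Collections.Generic;
using Excel = Microsoft.Office.Interop.Excel;

// dump a list of records (column name => value) onto a worksheet
static void WriteRecords(Excel.Worksheet sheet,
                         IList<IDictionary<string, object>> records,
                         string[] columns)
{
    for (var c = 0; c < columns.Length; c++)
        sheet.Cells[1, c + 1] = columns[c]; // header row
    for (var r = 0; r < records.Count; r++)
        for (var c = 0; c < columns.Length; c++)
            sheet.Cells[r + 2, c + 1] = records[r][columns[c]];
}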

 

Current limitations:

  1. I am a very poor VSTO developer. There are probably a million ways to do the Excel bits better. I’d appreciate constructive comments on the gist.
  2. The current implementation does not support row inserts and deletes.
    UPDATE [9/11/2012] Insert and Delete works now!
  3. Dates will lose millisecond precision.
  4. And it will not work with “Authenticated only” tables.
    I will be adding a support for the Admin Key that Amit showed on his blog to solve this

Using gists in your blog? Embed them into your feed

on September 4th, 2012 at 7:22am 1 response

I love using github gists for code snippets on my blog. It has many pros, especially how easy it becomes for people to comment and suggest improvements, via the github mechanisms we all love.

There are however two drawbacks that people commonly refer to with regards to using gists that way:

  1. Since the content of the code is no longer part of the post, it is not being indexed by search engines, and
  2. Since the content of the code is not part of the post, and since the embedding mechanism is JS based, people who consume the blog via the feed using aggregators that do not run JavaScript (most of them don’t) will not get the contents.

My answer to the first one is simple: I don’t really care. It’s not that I do not care about SEO, just that I do not need my posts indexed and tagged under a bunch of irrelevant reserved words and common code. If the snippet is about using an interesting component ABC, I will mention said ABC in the post content outside of the snippet; problem solved.

The latter is more interesting. I used to manually add a link to the gist page whenever embedding one, but that is not a very fun thing to do.

So, in order to overcome this, I wrote a small code snippet (yey) that, upon saving a post (or updating it), will look for gist embeds, grab the gist source from GitHub, stick it into the post as a “ContentForFeed”, and serve it with a link to the gist page, just for the fun of it.

And here is the code for it (it’s hacky c#, but easily translatable to other languages and/or to a cleaner form):
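
In case the gist embed does not show here either (how fitting), this condensed sketch captures the idea; the regex, the raw-source URL shape, and the helper name are illustrative rather than the exact code:

using System.Net;
using System.Text.RegularExpressions;

static class GistFeedEmbedder
{
    static readonly Regex GistEmbed = new Regex(
        @"<script src=""https://gist\.github\.com/(?<id>[\w/]+)\.js""></script>");

    // replace each gist embed with its raw source plus a link to the gist page
    public static string BuildContentForFeed(string postHtml)
    {
        return GistEmbed.Replace(postHtml, match =>
        {
            var id = match.Groups["id"].Value;
            string source;
            using (var client = new WebClient())
                source = client.DownloadString(
                    "https://gist.githubusercontent.com/" + id + "/raw");
            return "<pre>" + WebUtility.HtmlEncode(source) + "</pre>" +
                   "<a href=\"https://gist.github.com/" + id + "\">view gist</a>";
        });
    }
}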

Have fun reading snippets

Using Azure Mobile Services with Windows Phone

on August 29th, 2012 at 9:40am 7 responses

Windows 8 app building is great

With the new awesomeness that is Azure Mobile Services, building a cloud-connected application became much easier.

Now you are probably saying, “I wish it worked with other client platforms as well as Windows 8”.

Guess what?

HTTP

The service actually talks to the SDK via HTTP, and the Windows 8 SDK that is published alongside it is a (very rich, awesomely done) wrapper around that HTTP API. Given that, I jumped ahead and implemented a (very poor, awfully done) SDK for Windows Phone*.
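
To illustrate just how thin the wire protocol is, here is a bare-bones sketch of talking to a table over raw HTTP. The endpoint shape and the X-ZUMO-APPLICATION header reflect the preview service as it behaved at the time and may well change; the app url, key, and table name are placeholders:

using System;
using System.Net;

class RawTableAccess
{
    static void Main()
    {
        var appUrl = "https://your-app.azure-mobile.net"; // placeholder
        var appKey = "YOUR-APPLICATION-KEY";              // placeholder

        using (var client = new WebClient())
        {
            client.Headers["X-ZUMO-APPLICATION"] = appKey;

            // read all rows of a table (returns a JSON array)
            Console.WriteLine(client.DownloadString(appUrl + "/tables/TodoItem"));

            // insert a row (returns the stored JSON object, id included)
            client.Headers["Content-Type"] = "application/json";
            Console.WriteLine(client.UploadString(
                appUrl + "/tables/TodoItem", "POST", "{\"text\":\"hello\"}"));
        }
    }
}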

Disclaimer #1

What you see here in this post and other related ones is 99.999% guaranteed to fail for you. It is a hack job that I put together in a few late-night hours, and it is *not* endorsed by the Mobile Services team. It is likely that if and when we do come up with an official WP SDK, it would be looking different. Very different. Even the HTTP api that I’m using here is likely to change by the time the service gets out of Preview mode.

codez

You can peek at some of the usages of the API in the following gist:

In follow-up posts I will cover the API more, and I will also be adding XML comments to the SDK to make it easier to use.

How to get it?

Head over to https://github.com/kenegozi/azure-mobile-csharp-sdk.
You can either clone the repo, or just navigate to /src/MobileServiceClient.cs, click the ‘Raw’ button, and save the file into your project.
You’d also need the latest Newtonsoft Json.NET referenced (if you don’t have it already).
A NuGet-based delivery is in the works.

 

* I am also working on a similar SDK for Android. I’ll get to work on an iOS one as well, once I get around to installing Mountain Lion on my MBP.

Disclaimer #2

Although I do work on the Mobile Services feature in Azure, the opinions, code, and sub-par grammar I voice on this blog are completely my own, and do not reflect my employer’s opinions, code, or grammar. This is *not* where you will get any official Azure announcements; you’d need to check other places (I’d suggest http://WindowsAzure.com and Scott Guthrie’s blog as good starting points).

Silverlight with VS2012 - Microsoft.Silverlight.CSharp.targets was not found

on July 26th, 2012 at 6:47am 0 responses

I tried opening a solution with a Silverlight project in it, and the Silverlight project was not loading, complaining that the said targets file is missing. Now, this is a machine without VS2010, only VS2012. The usual fix for this is “install the Silverlight tools for VS2010”; alas, since VS2010 is not present, it tried to install Visual Web Developer 2010 first.

Instead, I downloaded and installed the “Silverlight 5 SDK” (from here, scroll down a bit), which apparently does not depend on VS, hence it installed correctly and the problem is gone.

Code Proxies and why You should care

on July 22nd, 2012 at 9:21am 0 responses

Head first

Instead of going on about what a proxy is, I’ll first describe a usage scenario or two to make the explanation more concrete.

non functional aspects aka AOP

Consider any “service class” you might have written; let’s assume it has a well defined public API – probably via an interface. Now let’s say that you want to start logging the amount of time each method of that public API takes. A common solution would look like this:
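
Something along these lines (a sketch; the service is a stand-in):

public interface IOrderService { string GetOrder(int id); }

public class OrderService : IOrderService
{
    public string GetOrder(int id)
    {
        var stopwatch = System.Diagnostics.Stopwatch.StartNew();
        try
        {
            return "order #" + id; // the actual work would happen here
        }
        finally
        {
            stopwatch.Stop();
            System.Console.WriteLine("GetOrder took {0}ms",
                stopwatch.ElapsedMilliseconds);
        }
    }

    // ...and the same try/finally noise, with the method name retyped,
    // repeated in every other public method
}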

This violates a few engineering principles (repeated code, magic strings, etc.), makes debugging annoying, clutters the code, and is overall not fun.

Meanwhile in Dynamicland

With the ability to replace a method at runtime, a developer in Ruby/JavaScript/et al. can easily patch these methods and add the cross-cutting concern at run-time.

Back to c#

The concept of AOP is not unfamiliar to c# developers. While some solutions use compile-time code weaving (a-la PostSharp) and other techniques, the more common one (used by most IoC containers, by NHibernate, and by other frameworks) is the DynamicProxy. Meaning that at runtime, user code asks a factory (or the IoC container) for an object of type X, and gets an object of type Y, where Y is a subtype of X that was dynamically generated at runtime to override X’s public methods and apply the aspect there. It is not unlike any other Wrapper / Decorator class, except that no one needs to manually write the wrappers; instead you write the aspect once and apply it to many types/methods.
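
As a concrete sketch of writing the aspect exactly once, here is what a timing interceptor looks like with Castle DynamicProxy (one well-known implementation of this technique; the service names reuse the sketch above):

using System.Diagnostics;
using Castle.DynamicProxy;

public class TimingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        var stopwatch = Stopwatch.StartNew();
        try
        {
            invocation.Proceed(); // invoke the actual method
        }
        finally
        {
            stopwatch.Stop();
            System.Console.WriteLine("{0} took {1}ms",
                invocation.Method.Name, stopwatch.ElapsedMilliseconds);
        }
    }
}

// usage - ask for an IOrderService, get a timed proxy around the real one:
// var proxy = new ProxyGenerator().CreateInterfaceProxyWithTarget<IOrderService>(
//     new OrderService(), new TimingInterceptor());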

Examples

NHibernate, to allow lazy loading of properties, uses a dynamic proxy when creating instances of objects that were read from the DB, decorating public virtual mapped getters with a “load the content when first accessed” concern. This is totally transparent to the user. The fact that NHibernate uses (at least by default) runtime dynamic proxies, and that (at least by default) it works with class-based POCOs for entities (and not interfaces), is why the docs tell you to use virtual properties if you want lazy loading.

And wouldn’t it be nice when writing GUI apps to have PropertyChanged events be wired automatically?

Proxying interfaces

Here is where it gets even more interesting, IMO.

The proxying technique can actually be applied to interfaces, not only to classes with virtual members. Meaning that you can generate code at runtime to implement a given contract without having any actual implementation of that interface in your user code at all!

A fine example of that approach is Castle’s DictionaryAdapterFactory (see http://docs.castleproject.org/Default.aspx?Page=DictionaryAdapter-Introduction&NS=Tools).
In essence, a dynamic proxy is created at runtime to implement a given interface’s properties, allowing typed read/write access to untyped <string,object> datastores (Session, Cache, ViewBag, you name it).
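
A minimal sketch of the idea (the interface and keys are made up for illustration):

using System.Collections;
using Castle.Components.DictionaryAdapter;

public interface IUserSettings
{
    string Theme { get; set; }
    int PageSize { get; set; }
}

class Demo
{
    static void Main()
    {
        var store = new Hashtable(); // stands in for Session, Cache, ViewBag...
        var settings = new DictionaryAdapterFactory().GetAdapter<IUserSettings>(store);

        settings.Theme = "dark"; // actually writes store["Theme"]
        settings.PageSize = 25;  // actually writes store["PageSize"]
    }
}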

Another example where I used this technique in the past: in an RPC client/server scenario, you need to keep a few things in sync: the server’s endpoints (HTTP in my scenario), the method signatures on both the server and the client, and more.
The system was using an interface (with a couple of attributes for metadata, e.g. URL parts) to declare the server’s API. The server holds implementations of the interface, and at runtime it reflects over the interface to build the endpoints (think MVC routes), while the client generates dynamic proxies from the interfaces that call out to the server in a transparent way. This way we avoided the need to constantly regenerate client proxies (lots of repetitive code and clutter in the codebase, a tax on source control and process, and difficult to manipulate and extend), as well as staying refactoring-friendly (because it is all code, and magic strings such as URL prefixes are defined in exactly one place).

Where is the code?

Sorry, running out of time here. I will post an example implementation for a dynamic proxy in c# in a follow-up post.

From MongoDB to Azure Storage

on July 17th, 2012 at 8:42am 0 responses

My blog has been running happily for some time on MongoDB storage. It used to be hosted on a VM at a really awesome company, where I had both the application and the DB sharing a 2GB VPS, and it worked pretty well.

At some point I moved the app to AppHarbor (which runs in AWS) and I moved the data to MongoLab (which is also on AWS). Both are really great services.

Before it ran on MongoDB, it used to run on an RDBMS (via the NHibernate OR/M), and I remember the exercise of translating the data access calls from RDBMS to a document store as fun. Sure, a blog is a very simplistic specimen, but even at that level you get to think about modeling approaches (would comments go in a separate collection or as subdocuments? how to deal with comment-count? and what about tag clouds? what about pending comments that are suspected to be spam?)

I am now going to repeat that exercise with Azure Storage.

The interesting data API requirements are:

  1. Add Comment to Post – atomically adds a comment to a post, and updates the post’s CommentsCount field.
  2. View-By-Tag (e.g. all posts tagged with ‘design’, order by publish-date DESC)
  3. view latest N posts (for atom feed, and for the homepage)
  4. view Monthly archive (e.g. all posts from July 2012)
  5. Get a single post by its permalink (for a post’s page)
  6. Tag Cloud – get posts count per tag
  7. Archive summary – how many posts were published on each month?
  8. Get total comments count (overall across all posts)
  9. Store images, while using a hash of the content to generate etags for controlling duplications.

Given the rich featureset of MongoDB, I was able to use secondary indexes, sub-documents, atomic document updates, and (for 4, 6 and 8) simple mapReduce calls. The only de-normalization was the CommentsCount field on Post, which is atomically updated every time a comment is added to or removed from the post, so the system stayed fully consistent all the time. The queries that required mapReduce (which can get pricey on larger data-sets, and annoying even on small ones) were actually amenable to aggressive caching, so no big pain there.
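
For example, requirement 1 boiled down to a single atomic document update. A sketch using the legacy C# driver’s builders (collection and field names are illustrative):

using MongoDB.Bson;
using MongoDB.Driver;
using MongoDB.Driver.Builders;

static class BlogData
{
    // a single-document update is atomic in MongoDB, so the new comment and
    // the de-normalized CommentsCount always move together
    public static void AddComment(MongoCollection<BsonDocument> posts,
                                  ObjectId postId, BsonDocument comment)
    {
        posts.Update(
            Query.EQ("_id", postId),
            Update.Push("Comments", comment).Inc("CommentsCount", 1));
    }
}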

I will be exploring (in upcoming posts; it’s 2am now and the baby just woke up) what it takes to get the same spec implemented using the Azure Storage options – mostly Table Storage and Blob Storage.*

 

* Yeah, there is SQL Azure, which is a full-fledged RDBMS that can easily support all of the requirements, but where’s the fun in that?

A little MongoDB stability anecdote

on July 14th, 2012 at 8:11am 0 responses

Once upon a time

So, a while back I set up a system for a customer. They are not a tech company, but rather a more traditional business built around “buy stuff for cheap and sell for more”.

The system (some aspects of it, and its history and evolution, are material for a few other blog posts) automates a lot of the pre-processing of incoming buy and sell requests, filtering a really noisy stream of incoming data into relevant pieces of information that are handed to the salespeople quickly, making the business far more productive and competitive than it would be without it.

Given the importance, the system needs to be pretty robust. Given the amount of moving parts, that is not a trivial task.

Show me the money

The backend storage for the system’s internal state (it also coordinates with several other data sources) was MongoDB.

The setup: a single mongod process, running version 1.8.something (the latest at the time) with journaling on, and with all write ops from the client requiring full ack and flush-to-disk (fsync) to complete. It is also running on a machine that already runs many other things, and is not a very beefy machine to begin with.

Oh yeah, and nobody is watching over it (not a tech company – did I mention that?).

What?

Single instance, you say? But sir, this is completely and utterly stupid!
Sharing the machine, you say? But it will eat up all the memory and kill everything!
No DB admin? No IT person who knows anything about it? It’s doomed!

The (not that) bad

In over a year, the system suffered only one breakdown, which is attributed solely to my stupidity – I installed the 32-bit version, and once the system needed to allocate a >2GB file, it broke down.

The good

The fix was very simple and super fast – I downloaded the 64-bit package, replaced the binaries, and restarted the service.
No data loss; the system picked up jobs from the queue and quickly restored full capacity.

State of affairs

The system has been running for well over a year now, completely unattended, and the only melt-down was avoidable, yet solved quickly and easily. MongoDB proved to be a robust piece of the puzzle. It also shows a rather small memory footprint (most queries and updates are on the newest data, and insertions usually go to the end of collections, so most of the files can stay paged out to disk).

So yeah it is not a “web-scale” system in terms of request/sec or data size, but it proved to be a fairly good solution for an internal system that is in charge of *tons* of money.

So what’s my point

Given the design I did for the system (another time, another post), I was not very afraid of possible problems with the data store, knowing that given a problem, once I solved it the system would quickly get back to work. Back then I needed a solution that was cheap (low resources, running on existing hardware and OS), flexible to develop with, and with a super easy install and upgrade story (xcopy deployment ftw). MongoDB was a perfect fit.

Bottom line - is MongoDB stable?

In my consulting years I’ve seen quite a few systems that were very fragile, even though they relied on “proven stable” systems such as top-of-the-line RDBMSs. Solid architecture and good design are *far* more crucial to a system’s stability than specific tech choices. The question you need to ask yourself when you need to build a complex system (be it on the amount-of-moving-parts front, dataset volume, system stress, data sensitivity, or a mix of the above) is not “Is tech X stable enough or good enough”, but rather “Do I (or my people) know enough about building complex systems to build a stable one”. If you lack the experience, bring in a person who can help.

Phantom feed entries with Google Reader–problem hopefully solved

on July 9th, 2012 at 6:11am 5 responses

I have had a few glitches recently with my blog’s feed. From time to time, the latest 20 items would re-appear as new, unread items in Google Reader.

It annoyed me, and annoyed a few of my readers – some contacted me personally, and eventually this happened:

feed-problems-facebook-post

I first suspected that the Updated or Created timestamp fields might be wrong, but looking both at the feed generated by the blog and at the feed as served by FeedBurner showed me that these fields did not magically change.

I did however find the problem.

My feed is in ATOM 1.0 format, and each entry has an <id> field.

The id I am putting there is the permalink of the post, and here comes the interesting part – I was taking the domain part of the permalink from the current request’s url. I was doing that because I was, how to put it, short-sighted.

Anyway, as soon as the blog engine moved from my own, fully controlled VM hosted somewhere to more dynamic environments (AppHarbor at first, now an Azure WebRole), behind request routers, load balancers and such, the request that actually reached the blog engine had its domain name changed, and apparently not in a 100% consistent way. The custom CNAME being used would change every now and then (every few weeks or more), and then Google Reader would pick up the changed <id>; even though the title, timestamps and content of the posts remained the same, the changed <id> made it believe it was a new post.

I now hardcoded the domain part, and all is (hopefully) well.

If not – you can always bash me on facebook :)

The new Web Sites feature in Windows Azure–A story in pictures

on June 8th, 2012 at 1:10am 0 responses

Hello portal:

websites-portal-no-websites

Create new site:

websites-portal-create-new

 

Hi there!

websites-portal-running

 

Can do some stuff, maybe later

websites-portal-actions

 

What’s inside? first-time wizard

websites-portal-first-time-wizard

 

Setup git (I’ll spare you the username/password)

websites-portal-git-repo-is-ready

 

The best ide ever – echo

websites-echo-awesome

 

git it

websites-gitit

 

browse it

websites-awesome

 

monitor it

websites-portal-monitor-it

 

scale it

websites-portal-scale-it

 

Have you noticed the “reserved” option? You could actually scale to a dedicated VM (or a few), using the exact same simple deployment model.

 

And of course it’s not only for text files. You could run PHP, node.js, as well as the more expected ASP.NET stack, on top of this.

 

want it

The Web Sites feature is still in preview mode. To start using preview features like Virtual Network and Web Sites, request access on the ‘Preview Features’ page under the ‘account’ tab after you log into your Windows Azure account. Don’t have an account? Sign up for a free trial here.

Awesome companies

on June 3rd, 2012 at 7:06am 1 response

During the last year I got to meet several companies. With some I just had a chat over coffee or during a rock concert, with some I ran a couple of consulting sessions, for some I did some coding work, and with some I discussed matters of process management and agile adoption. With some I interviewed for various fulltime roles, and from some I got very attractive offers.

I’d like to point out a few of the most awesome ones. If you ever get a chance to work with them or for them – you won’t be wrong to take it.

Asana (http://www.asana.com/)

Suffice it to say that interviewing with them was the single most difficult interview I have ever gone through. And I have been through some hairy interviews in my time. Just browse their team page, full of successful startup veterans, to understand their capacity for execution and their deep understanding of how a web company is to be built, on the business, tech and team-spirit fronts.

Bizzabo (http://www.bizzabo.com/)

I wish they had been around back when I ran IDCC ‘09. The team is super focused, and their product is great. Take a couple of minutes off this page, and go read http://blog.bizzabo.com/5-useful-tips-for-maximizing-your-exhibition.

Commerce Sciences (http://www.commercesciences.com/)

If having Ron Gross there was not enough, they recently added Oren Ellenbogen to their impressive cast. I had the immense pleasure of working with these guys for quite some time. You’ll be able to learn a ton just by being around them. If you’re not following their Twitter feeds and blogs, go do that right now. And if all that is still not enough, the founders have a long, successful history in e-commerce and global-scale web services. E-commerce analysis suddenly sounds super interesting!

Gogobot (http://www.gogobot.com/)

With an incredible ability to deliver top-quality features in virtually no time, focus on customers, tons of talent and a super fun team spirit, this gang is re-inventing social travel planning. If you’re travelling somewhere without using the service, you are missing out. If you are looking for a great team to work with in the Bay Area – give them a call.

Windward (http://www.windward.eu/)

This was a refreshing change from all the social-web-mobile-2.0 related companies. These guys are back to basics – solving actual problems for actual customers with actual money. Forget the long tail – we’re talking big-time clients. They are also dealing with some seriously complex data-crunching and non-trivial tech challenges. The management crew is extremely professional, experienced and friendly. I spent a truly remarkable month with them, and I’m sure anyone who works with them will feel the same.

Yotpo (https://b2b.yotpo.com/)

I think that Tomer and Omri have one of the best age-to-maturity ratios in the business. They also appear to be able to crack the social e-commerce formula into a compelling business model.

YouSites (http://yousites.net)

A really unique atmosphere. Working from an old villa in the relaxed Rehovot suburbs, with home-cooked food and pets running around. Their sunlit garden is one of the best meeting rooms I’ve been to. With a passionate and experienced team, they’ve got a nice thing going there. Keep an eye on them.

 

I might have forgotten a few others (sorry) – it has been a crazy year after all

 

Some of these places are hiring. If you are awesome (you probably are if you’re reading my blog) and want an introduction – ping me.

A year of changes

on June 1st, 2012 at 7:52am 2 responses

Just one year ago, I was working for Sears Israel, living in Raanana, and generally happy with my life.

Since then I have left my job; met, consulted for, and worked with a few awesome startups; and finally joined Microsoft and moved with my family to Redmond, WA.

And had a new baby.

So tons of things were going on; let’s see if I can capture some thoughts on them:

Central Israel vs. Seattle suburbs

Life here on the “east-side” is much more relaxed: the amazing scenery, the very low honks-per-minute-on-the-road ratio, switching from a tiny 60-year-old apartment to a 20-year-old house, and cool, drizzly weather vs. the hot and humid Middle East. We do miss our families very much, but we also have much richer social lives here, with many friends and plenty of play-dates and outdoor activities for the kids.

Startups vs. Corporate

I’ve been working with and for startups for many years now. The move to a ~100,000-strong company is a huge change. Half a year in, and I am still struggling to adapt to the big-company way of thinking. There is also a big sense of responsibility in knowing that my work will soon be affecting a serious number of customers globally, a thing that in many cases in startup world is not entirely true.

Startups are also often somewhere between financially promising and money-down-the-drain. Microsoft has been in business for many decades, and still manages to net tons of money every year, and every year it does so more than the last.

I also need to re-prove myself. When I was employed full-time in the past, I held top positions such as Architect and Dev Manager, and was offered a few VP R&D and CTO jobs. As a busy consultant, I was actually paid to come in and voice my opinions out loud. In corporate land I started much lower, and now need to work very hard to get my voice heard, especially when I am surrounded by a really talented and experienced bunch of people. I see it as a challenge and as an opportunity to grow and learn. Being a lion’s tail beats being a wolf’s head almost any day of the year. And it is full of lions around here.

One kid vs. two

Given W[n] = the work required for n kids, and F[n] = the fun gained from n kids, it is sad that

F[n+1] = F[n] × 2, while W[n+1] = W[n]²

Totally worth it though.

 

Settling down

It has been a heck of a year, with so many things to do that it kept me from engaging in the OSS and dev-community activities that I did in the past. I only gave two short tech presentations (on git and on NoSQL data stores), made very few OSS contributions, and wrote no blog posts for seven months!
Now that the whirlwind has slowed down, I find myself getting back to these things. I already have tons of things to write about, and a few session proposals to send out to conferences.

As far as this blog goes - the year of changes has just ended, and the year of new and exciting (at least for me) content begins. Stay tuned.

Determining SQL Server edition

on November 1st, 2011 at 2:21pm 2 responses

Thanks to http://support.microsoft.com/kb/321185 and to Ariel (@Q), who read it more carefully than I did, I learnt that there is a SERVERPROPERTY that you can query:

SELECT SERVERPROPERTY ('edition')

 

I expected to find Developer, but found Express instead.

MySQL error 105 - Phantom Table Menace

on October 10th, 2011 at 4:30pm 4 responses

MySQL is weird

The weirdest problem happened to a colleague today.

When creating the database schema during an integration tests run, he got “Cannot create table FOO error 105” from MySQL.

There *used* to be a table named FOO with a VARCHAR primary key. The schema then changed so that the primary key of FOO became a BIGINT. There is also a second table in the system (call it BAR) which has a foreign key into FOO’s primary key. A classic master/details scenario.

However, the table BAR was obsoleted from the schema.

The integration tests runner drops all tables and recreates them before running the test suite. It infers the schema from the persisted classes, using NHibernate’s mapper and the schema creation feature of NHibernate.

Sleeves up

We cracked open the mysql console and started to look around:

  1. When doing “SHOW TABLES”, the FOO table was not listed.
  2. CREATE TABLE FOO (`Id` BIGINT)  - fail with error 105.
  3. CREATE TABLE FOO (`Id` VARCHAR) – success !!
  4. huh?
  5. DROP TABLE FOO – success
  6. encouraging !
  7. CREATE TABLE FOO (`Id` BIGINT) - fail with error 105 – again
  8. huh ???
  9. DROP TABLE FOO – fail with “cannot delete … foreign key …”
  10. but SHOW TABLES still does not list FOO
  11. huh ?????
  12. DROP DATABASE dev; CREATE DATABASE dev;
  13. now everything works.

Back to work

Luckily this was not a production database, and even luckier – the said DB change (changing that PK from VARCHAR to BIGINT) would run in production on a separate DB instance that can be recreated on deploy.

 

And while we’re at it

Why can’t MySQL store non-indexed columns in an index?

How do you pass values from Controller to View with MVC3

on October 8th, 2011 at 1:28pm 0 responses

The scenario:

Given a blog application, with the following layout
image

with two possible usages – a post page:
image
and a homepage:
image

Let’s define the view model:

PostData:
  string Title
  string Body

PostView
  PostData Post; 

HomepageView
  PostData[] Posts

LayoutView
  Tuple<string, int>[] Archive
  Tuple<string, int>[] TagCloud
  string[] Similar

 

The views:

  1. _Layout.cshtml – obvious
  2. Post.cshtml – given a PostData instance will render Title and Body
  3. PostPage.cshtml – given a PostData, will call Post.cshtml and then render “add comment” form
  4. Homepage.cshtml – given PostData array, will iterate and call Post.cshtml for each post

How data moves around:

  • Controller is passing PostView (or HomepageView) *along with* LayoutView to the views
  • Post.cshtml should only see its parameters, not the layout’s (which are passed but are not interesting within the post template).
  • same goes for the other views
  • All views should be able to “see” a shared parameter named “IsCurrentUserAdmin”

Given that I want typed access to the view parameters in the view (for the sake of intellisense and refactorings), how would I model and pass the data around?

I’ve written two pseudo-code-grade options: the first is to use inheritance in the view model to achieve typed-ness, at the expense of flexibility (composition is difficult with a class hierarchy, and you need to be aware of, and grab, the viewModel instance in various places). The second is flexible (use the ViewData dictionary), but getting typed-ness is cumbersome and partial (strings scattered around, casting needed, etc.)

see https://gist.github.com/1272269 if the gist widget does not load in-place
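
And in case the gist does not load for you either, here is a condensed sketch of the two options (names are illustrative):

using System;

public class PostData
{
    public string Title { get; set; }
    public string Body { get; set; }
}

// option 1 - typed via inheritance: every page view model carries the layout data
public abstract class LayoutView
{
    public Tuple<string, int>[] Archive { get; set; }
    public Tuple<string, int>[] TagCloud { get; set; }
    public bool IsCurrentUserAdmin { get; set; }
}

public class PostView : LayoutView
{
    public PostData Post { get; set; }
}

// option 2 - flexible via the ViewData dictionary, but stringly-typed:
//   controller: ViewData["Post"] = postData;
//   view:       var post = (PostData)ViewData["Post"]; // casts and magic strings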

I do have a solution that works for me

Over the many years that I’ve been writing complex web apps using various ASP.NET frameworks, almost always with c#-based, statically-typed view engines, I have come up with a solution that works very nicely for me.

But I want to be aware of the MVC3 canonical / textbook way

So for all you MVC3 ninjas out there – please describe your way of doing it.

 

I will describe my approach in an upcoming post, and I’d appreciate any input on it.

Re-blog

on October 7th, 2011 at 10:06pm 1 response

My blog has moved to AppHarbor, and while doing that I also changed the engine from a completely custom thing (running on custom-y stuff like WebOnDiet and NTemplate) to a wee bit more conventional codebase based on MVC3 and Razor, with lots of NuGet packages replacing custom code that I wrote myself.

The packages file now contains AntiXSS, AttributeRouting, Castle.Core (for my good pal DictionaryAdapterFactory), elmah, MarkdownSharp, mongocsharpdriver, XmlRpcMvc and XmlRpcMvc.MetaWeblog (awesome!)

BTW, expect a post on using the DictionaryAdapterFactory to make handling Controller=>View data transport truly awesome.

What’s missing here? IoC !

Yeah, I did not bother with that for now. I have my tiny 15-LOC thing, and this blog does not need anything of that sort.

 

Some things might still break. Files I used to host for download will probably not work now. I will fix that soon, I hope, time permitting.

 

Note to self – reshuffle the tags here on the blog. I need to re-tag many entries. Maybe I’ll let site visitors suggest tags?

ASP.NET MVC3 Model Validation using Interface Attributes

on August 22nd, 2011 at 9:33pm 0 responses

After reading Brad Wilson’s post on that, I thought to myself:

Brad is 100% correct regarding the way the CLR treats interface attributes, but this does not mean that users should not be able to use validation attributes on model interfaces

So I sat down and extended the model validation to do just that (see https://gist.github.com/1163635 if the embed is broken here):
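
In case the embed is broken, here is an approximation of the technique – not the exact code from the gist (which hooks attribute filtering instead): subclass the default DataAnnotations provider and merge in validation attributes declared on matching interface properties.

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.Web.Mvc;

public class InterfaceAwareValidatorProvider : DataAnnotationsModelValidatorProvider
{
    protected override IEnumerable<ModelValidator> GetValidators(
        ModelMetadata metadata,
        ControllerContext context,
        IEnumerable<Attribute> attributes)
    {
        if (metadata.ContainerType != null && metadata.PropertyName != null)
        {
            // pull validation attributes off matching interface properties
            var fromInterfaces = metadata.ContainerType
                .GetInterfaces()
                .Select(i => i.GetProperty(metadata.PropertyName))
                .Where(p => p != null)
                .SelectMany(p => p.GetCustomAttributes(true).OfType<ValidationAttribute>());
            attributes = attributes.Concat(fromInterfaces);
        }
        return base.GetValidators(metadata, context, attributes);
    }
}

// registered at startup in place of the default DataAnnotations provider,
// e.g. from Application_Start in Global.asax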

 

Now, I know it is hacky – it should not go in a FilterAttributes() method. If I had access to the sources, I’d have added a virtual “GetValidationAttribute” method on DataAnnotationsModelMetadataProvider… (hint hint hint)

Count number of times a file has been download via HTTP logs - DOS edition

on August 19th, 2011 at 8:07pm 0 responses

Apparently you can also do this with DOS, but it is not nearly as easy as it is with Unix shells or PowerShell.

 

DOS corner:

Good old DOS has FINDSTR, which is quite similar to ‘grep’, but has nothing that resembles ‘wc -l’.
However, the FIND command can count occurrences, so

type log_file | find /c "url"

would work – for a single file.

In order to do that for all files, an on-disk concatenated version needs to be created and removed. Here’s a batch file to accomplish this:

@Echo off
rem counts occurrences of %1 (the url) across all IIS log files
pushd C:\inetpub\logs\LogFiles\W3SVC1
if exist all.log del all.log
rem concatenate every log into one temporary file
copy /a *.log all.log > nul
type all.log | find /c %1
del all.log > nul
popd

yuck? indeed

