Technology stack choice for startups

on March 28th, 2011 at 1:42pm

There have been some rants going around lately about how “you shouldn’t use .NET (and J2EE)” and how you “shouldn’t hire .NET people” (see here, here and here, e.g.).

So Ron went ahead and asked on the mailing list: “is specializing in .NET risky?”

My short answer is: specializing in only a single technology is somewhat risky, as the world keeps changing.

I would never hire a “.NET developer”, a “J2EE developer”, or a “Ruby developer”. I will hire a great developer, and I do not really care which language he has used the most lately, as long as he is great at it. Here at Delver we have had many success stories with great people coming in with almost no experience in .NET or Java (the major technologies we use here), but with PHP/Python/C++/whatever instead. Because they are great people, they quickly closed the gap and became productive in no time.

Because, at the end of the day, even if you use technology A and you hire someone who is an expert in technology A, that person would still need time to adjust to the specific paradigms, techniques, and spirit of the code used in your company. So why limit yourself (and to a limited pool of people - ones who care religiously about a language instead of caring about being great)?

Anyway, this is my take on the specific MySpace story:

I've seen the Scoble article on MySpace being killed by MS technologies, and I call bullshit.

It is classic “Not My Fault, Let's Blame Someone Else” syndrome.

Specifically, MySpace was driven by bad business decisions (like using a 3rd-party package for its core functionality - the ‘social engine’) and by bad management and leadership (it is owned by an old-media corporation, a firm believer in pre-web-2.0 concepts - see the new iPad newspaper that does not allow talkbacks or any other feedback mechanism!?).

It has *nothing* to do with specific technology. Not even with bad system architectures.

As for the “cannot find good XYZ people” claim - this is again a fallacy. Good (and especially superb) people, which is what you want in a startup (or anywhere, for that matter), will always be polyglot. They will adapt to any technology. Skill has nothing to do with a specific syntax or familiarity with certain libraries, because these change rapidly anyway.

As for “Hollywood is not a place for startups” - well, Y Combinator is probably only suited to the valley at the moment, but a startup that already has a seed investment and a reasonably crafted business plan does not *have* to be in the “meet all the dudes in the coffee shops” scene. Plus, LA is about a couple of hours' flight from the valley, or a 6-hour drive.

And people *will* come work for you if your product is right, your company is great, compensation is good, and the location is appealing - more or less in that order.

As for "it does not scale" - it is never about the technology stack. It is architectures and processes that do not scale.

My bottom line - choosing the stack has very little impact on a company's overall probability of success, because it has no impact on business decisions. What is important is the company's ability to adapt quickly to changes, and to deliver features and changes as fast as possible, so that it can get feedback from users and grow accordingly.

So what does matter is the skill set of the first technical person(s) in the company, whichever stack they are most rapid with. Later additions to the team will be made up of excellent people, who will adapt to the stack if the product and the company are compelling enough.

So worry about making your product fricking compelling, and your company a fricking great place to work at. Leave the religious debates to silly YouTube clips.

Fetching from many URLs concurrently - the right way

on March 15th, 2011 at 10:50pm

I’ve recently bumped into a post describing how to fetch a bunch of URLs. The post described using Parallel.ForEach as a means of parallelizing such code.

It goes on to describe how the improvement over the iterative, single-threaded version was bounded by the number of cores.
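The pattern described there looks roughly like this (a sketch, not the original post's code; the URLs and class name are illustrative):

```csharp
using System;
using System.Net;
using System.Threading.Tasks;

class ParallelFetch
{
    static void Main()
    {
        string[] urls = { "http://example.com/", "http://example.org/" };

        // Each iteration blocks a thread-pool thread until its download
        // completes, so effective concurrency is capped near the core count.
        Parallel.ForEach(urls, url =>
        {
            using (var client = new WebClient())
            {
                string content = client.DownloadString(url);
                Console.WriteLine("{0}: {1} chars", url, content.Length);
            }
        });
    }
}
```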

Multithreading is definitely a way to increase the performance of long-running, independent jobs. However, there is a major difference between jobs that are CPU bound (such as complex computations) and jobs that are IO bound (file-system access, web services, DB calls, whatnot). First, there is no reason to keep all those threads waiting for an IO call to complete, and second, the number of available threads is limited, so the improvement is bounded by the number of cores.


The way to achieve a bigger improvement for this scenario, and similar ones, is to use non-blocking calls, which rely on IO completion ports instead of threads.


I ran three versions of the code - serial, parallel, and async-IO based. The results are stunning. For 300 files, the serial version took ~3.5 minutes, the parallel one took ~30 seconds, and the async-IO based one took just under a second!
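The original listing isn't reproduced here, but the async-IO version was along these lines (a sketch using HttpWebRequest.BeginGetResponse, which completes on IO completion ports; the URLs are illustrative):

```csharp
using System;
using System.IO;
using System.Net;
using System.Threading;

class AsyncFetch
{
    static void Main()
    {
        // Allow many concurrent outgoing connections (the default is tiny).
        ServicePointManager.DefaultConnectionLimit = 1000;

        string[] urls = { "http://example.com/", "http://example.org/" };
        var done = new CountdownEvent(urls.Length);

        foreach (var url in urls)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            // BeginGetResponse returns immediately - no thread is blocked
            // while the request is in flight.
            request.BeginGetResponse(ar =>
            {
                var req = (HttpWebRequest)ar.AsyncState;
                try
                {
                    using (var response = req.EndGetResponse(ar))
                    using (var reader = new StreamReader(response.GetResponseStream()))
                    {
                        string content = reader.ReadToEnd();
                        Console.WriteLine("{0}: {1} chars", req.RequestUri, content.Length);
                    }
                }
                catch (WebException) { /* ignore failures in this sketch */ }
                finally
                {
                    done.Signal(); // count this URL as finished, success or not
                }
            }, request);
        }

        done.Wait(); // block the main thread until every callback has fired
    }
}
```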


Notice the ServicePointManager.DefaultConnectionLimit = 1000; part, which tells the application how many concurrent outgoing connections are allowed. A modern Windows host allows up to ~64K file descriptors (the things that, among other things, allow this behaviour), so 1000 is not a problem.


I also like CountdownEvent very much. It is easier than a ManualResetEvent combined with a counter and Interlocked.Decrement calls.
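For comparison, here is the same “wait for N jobs” pattern in both styles (a sketch; the work items are placeholders):

```csharp
using System;
using System.Threading;

class CountdownDemo
{
    static void Main()
    {
        const int jobs = 3;

        // Old pattern: ManualResetEvent plus a manually maintained counter.
        int remaining = jobs;
        var allDoneOld = new ManualResetEvent(false);
        for (int i = 0; i < jobs; i++)
        {
            ThreadPool.QueueUserWorkItem(_ =>
            {
                // ... do work ...
                if (Interlocked.Decrement(ref remaining) == 0)
                    allDoneOld.Set();
            });
        }
        allDoneOld.WaitOne();

        // CountdownEvent: the counter and the event in one object.
        using (var allDone = new CountdownEvent(jobs))
        {
            for (int i = 0; i < jobs; i++)
            {
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    // ... do work ...
                    allDone.Signal();
                });
            }
            allDone.Wait();
        }

        Console.WriteLine("all jobs finished");
    }
}
```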


